On 8/27/18 6:38 PM, Fox, Kevin M wrote:
I think in this context, kubelet without all of Kubernetes still has value in
that it provides the abstraction layer that podman/paunch is being suggested
to handle.

It does not need the things you mention (networking, sidecars, scale up/down,
etc.). You can use as little of it as you want.

For example, make one pod yaml per container with hostNetwork: true, and it
will run just as it would directly on the host. You can run just one container
per pod; no sidecars necessary. Without the apiserver, it can't do scale
up/down even if you wanted it to.
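A minimal sketch of such a per-container pod file (the name, image and paths
here are just placeholders):

  apiVersion: v1
  kind: Pod
  metadata:
    name: mariadb
  spec:
    hostNetwork: true            # run on the host network, like a plain container
    containers:
    - name: mariadb
      image: docker.io/library/mariadb:10.3
      volumeMounts:
      - name: data
        mountPath: /var/lib/mysql
    volumes:
    - name: data
      hostPath:
        path: /var/lib/mysql     # keep the data on the host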

It provides declarative, yaml-based management of containers, similar to
paunch, so you could skip that component entirely.

That would be a step in the right direction, IMO.


It also already provides CRI-O and Docker support via CRI.

It does provide a little bit of orchestration, in that you drive things with
declarative yaml. Drop a yaml file into /etc/kubernetes/manifests and kubelet
will create the container; delete the file and it removes the container;
change it and it will update the container. If something goes wrong with the
container, kubelet will try to get it back to the requested state
automatically, and it will recover the containers after a reboot without any
help.
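Running it standalone against that directory is roughly the following (a
sketch; the flags shown are only illustrative, pointing kubelet at the CRI-O
socket):

  # standalone kubelet, no apiserver, watching its manifest directory
  kubelet --pod-manifest-path=/etc/kubernetes/manifests \
          --container-runtime=remote \
          --container-runtime-endpoint=unix:///var/run/crio/crio.sock

  # drop a manifest in to create the container, remove it to delete it
  cp mariadb.yaml /etc/kubernetes/manifests/
  rm /etc/kubernetes/manifests/mariadb.yaml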

Thanks,
Kevin

________________________________________
From: Sergii Golovatiuk [sgolo...@redhat.com]
Sent: Monday, August 27, 2018 3:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

Hi,

On Mon, Aug 27, 2018 at 12:16 PM, Rabi Mishra <ramis...@redhat.com> wrote:
On Mon, Aug 27, 2018 at 3:25 PM, Sergii Golovatiuk <sgolo...@redhat.com> wrote:

Hi,

On Mon, Aug 27, 2018 at 5:32 AM, Rabi Mishra <ramis...@redhat.com> wrote:
On Mon, Aug 27, 2018 at 7:31 AM, Steve Baker <sba...@redhat.com> wrote:
Steve mentioned kubectl (the kubernetes CLI, which communicates with


Not sure what he meant. Maybe I'm missing something, but I've not heard of
'kubectl standalone', though he might have meant a standalone k8s cluster on
every node, as you suggest.


kube-api), not kubelet, which is only one component of kubernetes. All
kubernetes components may be compiled into one binary (hyperkube), which can
be used to minimize the footprint. Generated ansible for kubelet is not
enough, as kubelet doesn't have any orchestration logic.
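With hyperkube each component is just a subcommand of that single binary, so a
standalone kubelet would roughly be invoked as (the flag shown is only an
illustration):

  hyperkube kubelet --pod-manifest-path=/etc/kubernetes/manifests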


What orchestration logic do we have with TripleO atm? AFAIK we provide roles
data for service placement across nodes, right?
I see standalone kubelet as a first step towards scheduling openstack services
within a k8s cluster in the future (maybe).

It's a half measure. I don't see any advantages in that move. We should either
adopt kubernetes as a whole or not use its components at all, as the
maintenance cost will be expensive. Using kubelet would require us to resolve
networking, scale up/down, sidecars, and inter-service dependencies.



This was a while ago now, so this could be worth revisiting in the future.
We'll be making gradual changes, the first of which is using podman to manage
single containers. However, podman has native support for the pod format, so
I'm hoping we can switch to that once this transition is complete. Then
evaluating kubectl becomes much easier.
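As a rough sketch of what the pod format looks like on the podman side (the
names here are just placeholders):

  # group containers into a pod, then start containers inside it
  podman pod create --name keystone
  podman run -d --pod keystone <keystone-image>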

Question: rather than writing a middle layer to abstract both container
engines, couldn't you just use CRI? CRI is CRI-O's native language, and there
is already support for Docker as well.
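For illustration, driving CRI directly would look roughly like this with
crictl pointed at the cri-o socket (the config file names are placeholders):

  # create a pod sandbox, then create and start a container inside it
  crictl -r unix:///var/run/crio/crio.sock runp pod-config.json
  crictl -r unix:///var/run/crio/crio.sock create <pod-id> container-config.json pod-config.json
  crictl -r unix:///var/run/crio/crio.sock start <container-id>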


We're not writing a middle layer; we're leveraging one that is already there.

CRI-O is a socket interface and podman is a CLI interface; both sit on top of
the exact same Go libraries. At this point, switching to podman requires a
much lower development effort because we're just replacing docker CLI calls.
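In practice the calls map almost one to one, e.g. (the image name is just a
placeholder):

  # what we issue today via the docker CLI
  docker run --detach --net=host --name nova_api <nova-api-image>
  # the podman equivalent: same flags, different binary
  podman run --detach --net=host --name nova_api <nova-api-image>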

I see good value in evaluating kubelet standalone and leveraging its built-in
grpc interfaces with cri-o (rather than using podman) as a long-term strategy,
unless we just want to provide an alternative to the docker container runtime
with cri-o.

I see no value in using kubelet without kubernetes, IMHO.





Thanks,
Kevin
________________________________________
From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday, August 23, 2018 8:36 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

Dan, thanks for the details and answers. Appreciated.

Best,
-jay

On 08/23/2018 10:50 AM, Dan Prince wrote:

On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes <jaypi...@gmail.com> wrote:

On 08/15/2018 04:01 PM, Emilien Macchi wrote:

On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi <emil...@redhat.com> wrote:

       More seriously here: there is an ongoing effort to converge the tools
       around containerization within Red Hat, and we, TripleO, are interested
       to continue the containerization of our services (which was initially
       done with Docker & Docker-Distribution).
       We're looking at how these containers could be managed by k8s one day,
       but way before that we plan to swap out Docker and join CRI-O efforts,
       which seem to be using Podman + Buildah (among other things).

I guess my wording wasn't the best but Alex explained it way better here:

http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52

If I may have a chance to rephrase, I guess our current intention is to
continue our containerization and investigate how we can improve our
tooling to better orchestrate the containers.
We have a nice interface (openstack/paunch) that allows us to run
multiple container backends, and we're currently looking outside of
Docker to see how we could solve our current challenges with the new
tools.
We're looking at CRI-O because it happens to be a project with a great
community, focusing on some problems that we, TripleO, have been facing
since we containerized our services.

We're doing all of this in the open, so feel free to ask any question.

I appreciate your response, Emilien, thank you. Alex's responses to Jeremy on
the #openstack-tc channel were informative, thank you Alex.

For now, it *seems* to me that all of the chosen tooling is very Red
Hat-centric. That makes sense to me, considering Triple-O is a Red Hat
product.

Perhaps a slight clarification here is needed. "Director" is a Red Hat
product. TripleO is an upstream project that is now largely driven by Red Hat
and is today marked as single-vendor. We welcome others to contribute to the
project upstream just like anybody else.

And for those who don't know the history, the TripleO project was once
multi-vendor as well. So a lot of the abstractions we have in place could
easily be extended to support distro-specific implementation details. (That is
kind of how I view podman in the scope of this thread.)

I don't know how much of the current reinvention of container runtimes and
various tooling around containers is the result of politics. I don't know how
much is the result of certain companies wanting to "own" the container stack
from top to bottom. Or how much is a result of technical disagreements that
simply cannot (or will not) be resolved among contributors in the container
development ecosystem.

Or is it some combination of the above? I don't know.

What I *do* know is that the "NIH du jour" mentality currently playing itself
out in the container ecosystem -- reminding me very much of the Javascript
ecosystem -- makes it difficult for any potential *consumers* of container
libraries, runtimes or applications to be confident that any choice they make
towards one or the other will be the *right* choice or even a *possible*
choice next year -- or next week. Perhaps this is why things like
openstack/paunch exist -- to give you options if something doesn't pan out.

This is exactly why paunch exists.

Re the podman thing, I look at it as an implementation detail. The good news
is that, given it is almost a parity replacement for what we already use,
we'll still contribute to the OpenStack community in similar ways. Ultimately,
whether you run 'docker run' or 'podman run', you end up with the same thing
as far as the existing TripleO architecture goes.

Dan

You have a tough job. I wish you all the luck in the world in making
these decisions and hope politics and internal corporate management
decisions play as little a role in them as possible.

Best,
-jay




--
Regards,
Rabi Mishra




--
Best Regards,
Sergii Golovatiuk

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
