> 
> >
> > I have seen no arguments why to use containers other than to try and
> > make it "easier" for new ceph people.
> 
> I advise to read the whole thread again, especially Sage's comments,
> as there are other benefits. It would free up resources that can be
> dedicated to (arguably) more pressing issues.

I see or hear none of that in the cephadm presentations. Are you sure you are 
not using some future, never-to-exist feature to justify the current state?

> > Containers are not being used as they should be.
> 
> There is no "should be", there is no one answer to that, other than 42.

The expected, generic, ambitionless reply.

To me, a "should be" is that container images run on any container platform.
To me, a "should be" is that you ship your container with only the relevant and 
necessary binaries, sort of like haproxy (9MB), or Java/Python images where 
only the execution environment is included.
To me, a "should be" is that the task in the container is a single process and 
does not spawn anything else.
To me, a "should be" is that the task can be scaled to multiple instances by 
the OC. (A rough check of the first two points is sketched below.)
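
A minimal sketch of checking two of those points with podman; the images are 
real public ones, but the sizes are from memory, and nginx merely stands in 
for any single-purpose daemon:

    # small image = only the necessary binaries shipped
    $ podman pull docker.io/library/nginx:alpine    # tens of MB
    $ podman pull quay.io/ceph/ceph:v16             # around 1 GB
    $ podman images

    # one task = one process tree, nothing else spawned
    $ c=$(podman run -d docker.io/library/nginx:alpine)
    $ podman top $c    # expect one master process plus its workers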

> Containers existed before Docker, but Docker made them popular,
> for exactly the same reason why Ceph wants to use them: ship a known
> good version (CI tested) of the software with all dependencies

Yes, yes, this keeps being written, as if it has been a huge problem for the 
last decade. Please show me a Ceph development roadmap where this is a high 
priority. I would rather see resources go to Crimson development and testing 
before a release.

You know, your argument about shipping code is quite similar to that of a 
senior software engineer at Red Hat, who thought it was not a problem that the 
root fs of a node was accidentally mounted in a container, because the driver 
should have been run in a namespace. Isn't the short version of this "hide our 
crappy code"?

> , that can be run "as is" on any supported platform.
> 

This is incorrect: afaik I need a platform that has podman, no? I would like to 
run Ceph containers on Mesos without podman. Where is that man page? And to be 
honest, how long is Docker going to host these huge images for free? It is not 
as if there is a distributed network where you can get your container images.

To me this development just looks wrong. It adds complexity, while the beauty 
is in keeping things simple. Today I heard some agent is going to be developed 
and added to the nodes; I heard something about port conflicts and some issues 
with IP addressing support.
It looks like this OC is being developed from the ground up, or this podman is 
seriously lacking functionality. Does it publish automatically assigned ports 
via SRV records? Do tasks get DNS entries? How does it handle DHCP?
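
For comparison, a sketch of what an existing OC already provides. This assumes 
Mesos-DNS with its default "mesos" domain; the task name, port and target 
below are made up:

    # automatically assigned port published as an SRV record
    $ dig +short SRV _prometheus._tcp.marathon.mesos
    0 1 31926 prometheus-x7k2.marathon.slave.mesos.

    # the task also gets a plain A record
    $ dig +short prometheus.marathon.mesos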

In any case, my current OC does not have such issues, and I am quite happy with 
it. Why would any company want to administer a second OC just for Ceph? If Ceph 
goes containers, it should run in the OC the client chooses to use. Ceph just 
needs to make this work regardless of which OC the client has, not develop 
something custom.

From a security perspective there is also reason to think twice here. Do you 
really want to run Grafana, Prometheus and haproxy on the OC that is running 
your OSDs? The Ceph OC has to allow tasks CAP_SYS_ADMIN[1] to be able to run 
the drivers that mount LVMs. (I can't imagine a lot of people disable this 
like I have done and run these as external drivers.) So if there is any 
exploit in your Ceph OC, a single task could destroy a node (and maybe worse, 
get automatically restarted on a different node, so it can destroy that one as 
well).
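
To illustrate what granting that capability means, a minimal sketch with 
podman run as root (rootless podman and the exact error text vary by version):

    # without CAP_SYS_ADMIN, mounting inside the container is refused
    $ podman run --rm alpine mount -t tmpfs none /mnt
    mount: permission denied

    # with it, the task can mount whatever it likes (the LVM driver case)
    $ podman run --rm --cap-add SYS_ADMIN alpine mount -t tmpfs none /mnt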

So what is this going to look like when you have a Ceph cluster with 100 OSDs, 
3 monitors, 3 RGWs, Prometheus and Grafana?

From a performance perspective, others have already written that containerizing 
OSDs could very well affect latency. I have seen this myself with a DNS server 
on a macvtap interface, although I have to admit I do not have the resources to 
test this properly and give a judgement.
If people here are talking about tuning C-states and frequencies, I wonder how 
far the containerizing of OSDs even goes. Say the OSDs are not fully 
containerized because of performance issues; then you have a custom OC from 
Ceph that runs only 8 of those 108 processes containerized. I would think: 
maybe add this to the "To me a should be is" list?

This Ceph containerizing is almost a joke. I will bet you cannot even run 
Prometheus as a stateful task on a Ceph RBD image in this OC. Do you know why? 
Because the ceph-csi development team does not grasp the concept of CSI and 
coded a driver that can only be used with Kubernetes.
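
And yet CSI itself is just a gRPC spec, so in principle any OC can drive a 
plugin directly. A sketch using the generic csc client from 
github.com/rexray/gocsi, assuming the ceph-csi plugin listens on a local unix 
socket (the socket path here is hypothetical):

    # any CSI-speaking orchestrator starts with the identity service
    $ csc identity plugin-info --endpoint unix:///var/run/csi/csi.sock

    # controller and node calls (create-volume, publish, stage) follow the
    # same gRPC pattern; nothing in the spec is Kubernetes-specific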

Ceph and containers? Sure, if I may quote a red-haired film star: "Kill it in 
the utero" 😉

[1]
https://lwn.net/Articles/486306/
 

