> > Finer grained ability to allocate resources to services. (This process
> > gets 2g of ram and 1 cpu)
> 
> Do you really believe this is a benefit? How can it be a benefit to have
> crashing or slow OSDs? It sounds cool, but it doesn't work in most
> environments I have ever had my hands on.
> We often encounter clusters that fall apart or have a meltdown just
> because they run out of memory, and we use tricks like zram to help them
> out and recover their clusters. If I now apply limits per container/OSD
> in a finer grained way, it will just blow up even more.
> 

Indeed, it mostly makes sense in a shared environment, where you are 
forced to isolate processes or want your containers to scale or migrate 
automatically. But it is not as if osd.23 is ever going to move from 
hosta to hostb.
So you cannot use the benefits of limiting cpu and memory resources, of 
network isolation, of migrating tasks, or of scaling tasks. And this 
applies to >90% of your processes (the OSDs).
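
(For what it is worth, the "2g of ram and 1 cpu" kind of limit does not 
even require containers, it is a plain cgroup feature. A minimal sketch 
for a classic systemd-managed OSD, with made-up values and drop-in path; 
older systemd uses MemoryLimit= instead of MemoryMax=:

    # /etc/systemd/system/ceph-osd@.service.d/limits.conf (hypothetical)
    [Service]
    MemoryMax=2G      # hard memory cap for this OSD process
    CPUQuota=100%     # roughly one full core

So if per-process resource limits are the benefit being sold, you 
already have them without pulling in a container runtime.)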

In a dedicated Ceph cluster you only create extra complexity and a 
dependency on storage for your container images. I guess the container 
images are 300-400MB per OSD or so (assuming they are el7 based)?

I would say that Ceph should start supporting something like Alpine Linux 
packages, so you get an rgw, monitor or mds that is only ~20MB. 
Furthermore, something needs to change in the mon, mds and rgw design.
AFAIK each rgw still requires a unique client id, which makes such a task 
not very friendly to scale.
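
(To illustrate that last point: each radosgw instance starts under its 
own client name and needs its own config section and cephx key, along 
these lines; the names and ports here are made up:

    [client.rgw.gw1]
    rgw frontends = beast port=7480

    [client.rgw.gw2]
    rgw frontends = beast port=7481

So you cannot just hand an orchestrator one task definition and say "run 
ten of these"; every replica needs its own identity first.)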


