[ceph-users] Re: ceph on kubernetes

2022-10-06 Thread Clyso GmbH - Ceph Foundation Member

Hello Oğuz,

we have been supporting several rook/ceph clusters at the hyperscalers
for years, including Azure.


A few quick notes:

* be prepared to run into some issues with the default config of
the OSDs.


* in some Azure regions, the quality of the network is an issue.


* also, this year Azure introduced a new pricing model for inter-AZ
communication.


* the Azure VMs and their respective disk classes will surprise you a bit
in terms of backfilling, recovery, etc. (one way to override the related
defaults is sketched below).
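
To illustrate the first and last points: one way to override such defaults is
Rook's ceph.conf override ConfigMap. The snippet below is only a rough sketch;
it assumes the default rook-ceph namespace, and the values are placeholders
you would have to tune for your VMs and disk class (the same options can also
be changed at runtime with "ceph config set" from the toolbox pod).

apiVersion: v1
kind: ConfigMap
metadata:
  # name and namespace follow the Rook convention for the ceph.conf override
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [osd]
    # placeholder values, not recommendations
    osd_max_backfills = 1
    osd_recovery_max_active = 1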


Best regards, Joachim


___
Clyso GmbH - Ceph Foundation Member



[ceph-users] Re: ceph on kubernetes

2022-10-05 Thread Nico Schottelius

Hey Oğuz,

the typical recommendations for native Ceph still hold in k8s;
additionally, there are a few things you need to consider:

- Hyperconverged setup or dedicated nodes - what is your workload and
  budget
- Similar to native Ceph, think about where you want to place data; this
  influences the selectors inside rook that decide which devices / nodes
  to add
- Inside & outside consumption: rook is very good with in-cluster
  configurations, creating PVCs/PVs; however, you can also expose the
  cluster to consumers outside of k8s
- mgr: usually we run 1+2 (standby) on native clusters; with k8s/rook it
  might be good enough to use 1 mgr, as k8s can take care of
  restarting/redeploying
- traffic separation: if that is a concern, you might want to go with
  multus in addition to your standard CNI
- Rook does not assign `resource` specs to OSD pods by default; if you
  hyperconverge, you should be aware of that
- Always have the ceph-toolbox deployed - while you need it rarely, when
  you need it, you don't want to think about where to get the pod and
  how to access it (a partial CephCluster sketch touching a few of these
  points follows below)
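
For illustration, a partial CephCluster spec touching a few of these points
could look like the sketch below. It is not a complete manifest; the
namespace, the NetworkAttachmentDefinition names and the resource values are
assumptions to replace with your own.

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mgr:
    count: 1                  # k8s restarts/reschedules the single mgr pod
  network:
    provider: multus          # only if traffic separation is a concern
    selectors:
      public: public-net      # assumed NetworkAttachmentDefinitions,
      cluster: cluster-net    # created separately
  resources:
    osd:                      # rook sets no OSD requests/limits by default
      requests:
        cpu: "2"
        memory: 4Gi
      limits:
        memory: 8Gi

The ceph-toolbox itself ships as a separate example manifest in the rook
repository; keeping it applied alongside the cluster saves the scramble later.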

Otherwise, from our experience, rook/ceph is probably the easiest with
regard to updates, easier than native handling and, I suppose (*), easier
than cephadm as well.

Best regards,

Nico

(*) I can only judge from the mailing list comments; we cannot use cephadm,
as our hosts natively run Alpine Linux without systemd.

Oğuz Yarımtepe  writes:

> Hi,
>
> I am using Ceph on RKE2. The Rook operator is installed on an RKE2 cluster
> running on Azure VMs. I would like to learn whether there are best
> practices for Ceph on Kubernetes, like separating Ceph nodes or pools, or
> using some custom settings for the Kubernetes environment. It would be
> great if anyone shares tips.
>
> Regards.


--
Sustainable and modern Infrastructures by ungleich.ch


[ceph-users] Re: Ceph in kubernetes

2022-03-07 Thread ceph . novice

Hi Hans.
 
Any chance you could write up a blog, GitHub Gist, wiki, etc. to describe
WHAT exactly you run and HOW, with (config) examples?
I also wanted to run the same kind of setup at home, but haven't had the time
to even start thinking / reading about how to set up Ceph at home (OK, I had a
Ceph "installation" based on VirtualBox, Vagrant and Ansible a looong time ago
on my "stand-alone" workstation, but ...)
 
Kind regards
 notna
 
 



[ceph-users] Re: Ceph in kubernetes

2022-03-07 Thread Hans van den Bogert
Just to add to the warm fuzzy feeling, although just in a homelab: I've been
using rook for many years now, and it's awesome. I trust* it with the family
photos on a self-hosted Nextcloud. All on K8s/Ceph/RGW.



Hans


* I have backups though ;)

On 3/7/22 09:45, Bo Thorsen wrote:

Hi Nico and Janne,

Thank you very much for your quick answers. I did investigate rook 
already and had the feeling this might be the answer. But one of the 
things that is extremely hard is to find The Right Way (TM) to handle 
the setup I want. Getting the warm fuzzy feeling that I chose the proper 
way forward is difficult. You helped me do that, so thank you :)


Bo.

On 07-03-2022 at 09:39, Nico Schottelius wrote:


Good morning Bo,

as Janne pointed out, rook is indeed a very good solution for k8s-based
clusters.
We are even replacing our native Ceph clusters with rook, as it feels
smarter from a long-term perspective than settling on cephadm.

Rook also allows you to inject quite a few parameters, and while there is a
bit of a k8s learning curve in the beginning, it is sufficiently flexible to
allow any Ceph tuning that you'd have done without it.

Bo Thorsen  writes:


So here's my idea on how to solve this problem: Add a 1TB SSD in each
worker node and let ceph use these disks for storage.


This is probably best done using rook's automatic capture of disks, which it
supports by default; a rough sketch of the relevant storage settings follows
below.
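
A rough sketch of the corresponding storage section of the CephCluster spec,
assuming the added SSD shows up as /dev/sdb on every worker node as you
describe:

spec:
  storage:
    useAllNodes: true         # consider every node in the cluster
    useAllDevices: false
    deviceFilter: "^sdb$"     # only capture the added 1TB SSD on each node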

Best regards,

Nico


--
Sustainable and modern Infrastructures by ungleich.ch



[ceph-users] Re: Ceph in kubernetes

2022-03-07 Thread Anthony D'Atri

Convergence by any other name:

Proxmox, VSAN, etc.

Sure, why not? Performance might not be stellar, but it sounds like you 
probably don’t need it to be.
Look into dedicating a couple of cores to Ceph if you can, to avoid contention
with compute tasks; one way to do that at the kubelet level is sketched below.
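
One way to do that on the Kubernetes side (just a sketch, not Ceph-specific
advice) is the kubelet's static CPU manager policy, combined with Guaranteed
QoS on the Ceph pods (integer CPU request equal to the limit), which pins
those pods to exclusive cores. The reserved core list below is only an
example:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static      # enables exclusive core assignment
reservedSystemCPUs: "0,1"     # example: keep cores 0-1 for the OS and kubelet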

How many nodes do you have?

> 
> I'm running a small kubernetes cluster in the office which is mostly used for 
> gitlab runners. But there are some internal services we would like to run in 
> it, which would require persistent storage.
> 
> So here's my idea on how to solve this problem: Add a 1TB SSD in each worker 
> node and let ceph use these disks for storage.
> 
> The nodes are all simple: they just have a single disk and do nothing other
> than run kubernetes. So I can add an sdb to each and have a completely
> uniform setup, which I hope will make installation simpler.
> 
> Does this setup make sense? I'm not worried about the amount of disk space in 
> the ceph cluster, it's going to be way more than I need. So it's more a 
> question of whether someone who truly understands ceph thinks this is a good 
> or a bad idea?
> 
> I would appreciate any thoughts you have about this idea for a setup.
> 
> Thank you,
> 
> Bo Thorsen.


[ceph-users] Re: Ceph in kubernetes

2022-03-06 Thread Janne Johansson
I think this is what "Rook" aims for; if you haven't looked at it yet, I
suggest doing so now.




-- 
May the most significant bit of your life be positive.