[ceph-users] Re: HA proxy and S3

2024-03-27 Thread Gheorghiță Butnaru
yes, you can deploy an ingress service with cephadm [1].
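For illustration, a minimal ingress spec sketch (the service_id, host
names, and virtual IP below are placeholders, not taken from this thread):

  service_type: ingress
  service_id: rgw.default
  placement:
    hosts:
      - host1
      - host2
  spec:
    backend_service: rgw.default
    virtual_ip: 10.0.0.100/24
    frontend_port: 443
    monitor_port: 1967

You would apply it with: ceph orch apply -i ingress.yaml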

You can customize the haproxy config if you need something specific [2].
ceph config-key set mgr/cephadm/services/ingress/haproxy.cfg -i haproxy.cfg.j2
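
After setting the key, redeploy the service so cephadm regenerates the
config from the new template (the service name here assumes the sketch
above):

  ceph orch redeploy ingress.rgw.default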


[1]
https://docs.ceph.com/en/latest/cephadm/services/rgw/#high-availability-service-for-rgw
[2]
https://docs.ceph.com/en/quincy/cephadm/services/monitoring/#using-custom-configuration-files

On Wed, Mar 27, 2024 at 10:21 AM Albert Shih  wrote:

> Hi,
>
> If I'm correct, in an S3 installation it's good practice to have an HA
> proxy, and I also read somewhere that the cephadm tool can deploy the HA
> proxy.
>
> But is it good practice to use cephadm to deploy the HA proxy, or is it
> better to deploy it manually on another server (one that does only that)?
>
> Regards
>
> --
> Albert SHIH 🦫 🐸
> France
> Heure locale/Local time:
> Wed 27 Mar 2024 09:18:04 CET


[ceph-users] splitting a Volume Group with an odd number of PEs into 2 logical volumes

2021-02-23 Thread Gheorghiță Butnaru
Hello,

Recently I deployed a small Ceph cluster using cephadm.
This cluster has 3 OSD nodes, each with 8 Hitachi HDDs (9.1 TiB), 4
Micron_9300 NVMes (2.9 TiB), and 2 Intel Optane P4800X NVMes (375 GiB). I
want to use the spinning disks for the data block, the 2.9 TiB NVMes for
block.db, and the Intel Optane drives for block.wal.

I tried with a spec file and also via the Ceph dashboard, but I encountered
one problem.
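A sketch of the kind of spec I mean (the model filters are assumptions
based on the hardware above, not the exact file I used):

  service_type: osd
  service_id: osd_hdd_with_fast_devices
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
    db_devices:
      model: Micron_9300
    wal_devices:
      model: P4800X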
I would expect 1 LV on every data disk, 4 LVs on each WAL disk, and 2 LVs
on each DB disk. The problem arises on the DB disks, where only 1 LV gets
created.
After some debugging, I think the problem occurs when the VG gets divided
in two. The VG has 763089 total PEs, and the first LV was created using
381545 PEs (763089 / 2 = 381544.5, rounded up). Because of that, the
creation of the second LV fails: Volume group
"ceph-c7078851-d3c1-4745-96b6-f98a45d3da93" has insufficient free space
(381544 extents): 381545 required.

Is this expected behavior? Should I create the LVs myself?
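
If manual creation is the answer, here is a rough sketch of what I have in
mind (VG name taken from the error above; LV names are made up; letting the
second LV take the remaining space sidesteps the odd extent count):

  VG=ceph-c7078851-d3c1-4745-96b6-f98a45d3da93
  lvcreate -l 381544 -n db-lv-1 "$VG"    # floor(763089 / 2) extents
  lvcreate -l 100%FREE -n db-lv-2 "$VG"  # the remaining 381545 extents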

Gheorghita BUTNARU,
Gheorghe Asachi Technical University of Iasi

