[ceph-users] Re: Questions RE: Ceph/CentOS/IBM

2021-03-03 Thread Freddy Andersen
I would use croit

From: Drew Weaver 
Date: Wednesday, March 3, 2021 at 7:45 AM
To: 'ceph-users@ceph.io' 
Subject: [ceph-users] Questions RE: Ceph/CentOS/IBM
Howdy,

After IBM's acquisition of Red Hat, the landscape for CentOS quickly changed.

As I understand it, Ceph 14 is currently the last version that will run on 
CentOS/EL7, and CentOS 8 has been "killed off".

Given that, if you were going to build a Ceph cluster today, would you even 
bother doing it on a non-commercial distribution, or would you just use RHEL 
8 (or even their commercial Ceph product)?

Secondly, are we expecting IBM to "kill off" Ceph as well?

Thanks,
-Drew

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: multiple-domain for S3 on rgws with same ceph backend on one zone

2021-02-22 Thread Freddy Andersen
You need to enable users with tenants … 
https://docs.ceph.com/en/latest/radosgw/multitenancy/
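For example (the tenant, uid, and key values below are just placeholders), a 
tenant-scoped user is created with:

    radosgw-admin user create --tenant clienta --uid s3user \
        --display-name "Client A S3 user" \
        --access-key CLIENTAKEY --secret CLIENTASECRET

Each tenant then gets its own bucket namespace, so different customers can 
reuse bucket names without colliding.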

From: Simon Pierre DESROSIERS 
Date: Monday, February 22, 2021 at 7:27 AM
To: ceph-users@ceph.io 
Subject: [ceph-users] multiple-domain for S3 on rgws with same ceph backend on 
one zone
Hello,

We have a functional Ceph cluster with a pair of S3 RGWs in front that is
accessed via the A.B.C.D domain.

Now a new client asks for access to already existing buckets, but using the
domain E.C.D. This is not a scenario discussed in the docs. Apparently, from
looking at the code and from trying it, RGW does not support multiple domains
in the rgw_dns_name setting.


But from reading through parts of the code (I am no dev, and my C++ is 25
years rusty), I get the impression that we could just add a second pair of
RGW S3 servers that would serve the same buckets under a different domain.
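For illustration (instance names and the domains are placeholders), that idea
would amount to a second set of gateway instances in the same zone, differing
only in rgw_dns_name, along the lines of:

    [client.rgw.gw-old]
        rgw_dns_name = A.B.C.D
    [client.rgw.gw-new]
        # same pools and zone, so the same buckets, but answering for the new domain
        rgw_dns_name = E.C.D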

Am I wrong? And if this works, is it unintended behaviour that the Ceph team
might remove down the road?

Is there another solution that I might have missed? We do not have multi-zone
and there are no plans for it. And CNAME support (rgw_resolve_cname) seems to
only be useful for static sites (again, from my limited code reading).

Thank you

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Backups of monitor

2021-02-12 Thread Freddy Andersen
I would say everyone recommends at least 3 monitors, and since an odd number 
(1, 3, 5, or 7) is what you want, I always read that as 5 being the best 
number (if you have 5 servers in your cluster). The other reason is high 
availability: the MONs use Paxos for quorum, and since I like to keep 3 in 
the quorum, you need 5 to be able to do maintenance (a majority is 2 out of 
3, or 3 out of 5). So if you are doing maintenance on a MON host in a 5-MON 
cluster, you will still have 3 in the quorum even if another monitor fails.
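To make the quorum arithmetic concrete: a Paxos majority is floor(N/2) + 1, so

    MONs   quorum needed   failures/maintenance tolerated
      3          2                      1
      5          3                      2
      7          4                      3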

From: huxia...@horebdata.cn 
Date: Friday, February 12, 2021 at 8:42 AM
To: Freddy Andersen , Marc , 
Michal Strnad , ceph-users 
Subject: [ceph-users] Re: Backups of monitor
Why are 5 MONs required instead of 3?



huxia...@horebdata.cn

From: Freddy Andersen
Date: 2021-02-12 16:05
To: huxia...@horebdata.cn; Marc; Michal Strnad; ceph-users
Subject: Re: [ceph-users] Re: Backups of monitor
I would say production should have 5 MON servers

From: huxia...@horebdata.cn 
Date: Friday, February 12, 2021 at 7:59 AM
To: Marc , Michal Strnad , 
ceph-users 
Subject: [ceph-users] Re: Backups of monitor
Normally any production Ceph cluster will have at least 3 MONs; does it really 
need a backup of the MONs?

samuel



huxia...@horebdata.cn

From: Marc
Date: 2021-02-12 14:36
To: Michal Strnad; ceph-users@ceph.io
Subject: [ceph-users] Re: Backups of monitor
So why not create an extra monitor, start it only when you want to make a 
backup, wait until it is up to date, and then stop it and back it up?
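A minimal sketch of that procedure, assuming a systemd deployment (the mon ID
"extra" and the backup path are just examples):

    # stop the spare monitor so its store is quiescent
    systemctl stop ceph-mon@extra
    # archive its data directory (the store.db under it holds the cluster maps)
    tar czf /backup/mon-extra-$(date +%F).tar.gz /var/lib/ceph/mon/ceph-extra
    # start it again; it will resync from the quorum
    systemctl start ceph-mon@extra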


> -Original Message-
> From: Michal Strnad 
> Sent: 11 February 2021 21:15
> To: ceph-users@ceph.io
> Subject: [ceph-users] Backups of monitor
>
> Hi all,
>
> We are looking for a proper solution for backing up the monitors (all the
> maps they hold). On the internet we found advice to stop one of the
> monitors, back it up (dump), and start the daemon again. But this is not
> the right approach, due to the risk of losing quorum and the need for
> resynchronization after the monitor comes back online.
>
> Our goal is to have at least some (recent) metadata about the objects in
> the cluster as a last resort for when all monitors are in very bad
> shape and we cannot start any of them. Maybe there is another
> approach, but we are not aware of it.
>
> We are running the latest Nautilus and three monitors on every cluster.
>
> Note: we don't want to use more than three monitors.
>
>
> Thank you
> Cheers
> Michal
> --
> Michal Strnad
>

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: share haproxy config for radosgw

2021-02-07 Thread Freddy Andersen
Something like this works…

# HAProxy configuration

#--
# Global settings
#--
global
    log /dev/log local0
    log /dev/log local1 notice
    user haproxy
    group haproxy
    chroot /var/lib/haproxy
    daemon
    stats socket /var/lib/haproxy/stats mode 660 level admin
    maxconn 65536
    spread-checks 4
    tune.ssl.default-dh-param 2048
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
    ssl-default-server-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-server-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

#--
# common defaults that all the 'listen' and 'backend' sections will
# use- if not designated in their block
#--
defaults
    log global
    mode http
    retries 3
    balance roundrobin
    option  abortonclose
    option  redispatch
    option  dontlognull
    option  log-health-checks
    maxconn 20480
    timeout connect 5s
    timeout client  50s
    timeout server  50s
    timeout http-request 20s
    timeout http-keep-alive 30s
    timeout check   10s
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

#--
# frontend instances
#--
frontend ext-http-in
    bind 10.1.2.10:80 name s3
    bind 10.1.2.10:443 ssl crt certificate.pem name secure-s3
    maxconn 25000
    option forwardfor if-none
    option http-server-close
    option httplog
    # the host_s3 ACL was referenced but not defined; example definition, adjust the hostname
    acl host_s3 hdr(host) -i s3.example.com
    default_backend be_rgw-zone1
    use_backend be_rgw-zone1 if host_s3

#--
# backend instances
#--
backend be_rgw-zone1
    mode http
    option http-server-close
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server radosgw-vip1 10.1.2.1:80 check
    server radosgw-vip2 10.1.2.2:80 check
    server radosgw-vip3 10.1.2.3:80 check
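On the beast side (re the HTTPS/beast question quoted below): since this
HAProxy terminates TLS and forwards plain HTTP to port 80, a minimal matching
rgw frontend just listens on HTTP. A sketch, with a placeholder instance name:

    [client.rgw.radosgw-vip1]
        # TLS is terminated at HAProxy, so beast serves plain HTTP on port 80
        rgw_frontends = beast port=80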

From: Szabo, Istvan (Agoda) 
Date: Sunday, February 7, 2021 at 8:25 PM
To: Marc , ceph-users@ceph.io 
Subject: [ceph-users] Re: share haproxy config for radosgw
Let me join this thread: I'd also be interested in HTTPS and beast 
configuration at the HAProxy level. I've never managed to make that work.


-Original Message-
From: Marc 
Sent: Monday, February 8, 2021 5:19 AM
To: ceph-users@ceph.io
Subject: [ceph-users] share haproxy config for radosgw


I was wondering if someone could post a config for haproxy. Is there anything 
specific to configure, like binding clients to a specific backend server, 
client timeouts, security specific to RGW, etc.?
___
ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to 
ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: some ceph general questions about the design

2020-04-20 Thread Freddy Andersen
1. Do not use RAID for OSD disks... 1 OSD per disk (see the sketch below).
2-3. I would have 3 or more OSD nodes... more is better for when you have 
issues or need maintenance. We use VMs for MON nodes, with a MGR on each MON 
node. 5 MONs is recommended for a production cluster, but you can be OK with 
3 for a small cluster.
4. Again, we use VMs for RGW and scale these to traffic needs.
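For point 1, a minimal sketch (the device name is an example; assumes 
ceph-volume is available on the OSD host):

    # one OSD per raw, passed-through disk, no RAID underneath
    ceph-volume lvm create --data /dev/sdb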


Sent from my iPhone

> On Apr 20, 2020, at 1:08 PM, harald.freid...@gmail.com wrote:
> 
> Hello together,
> 
> We want to create a production Ceph storage system in our datacenter in May 
> this year, with OpenStack and UCS. I have tested a lot in my Ceph test 
> environment, and I have some general questions.
> 
> What's recommended?
> 
> 1. Should I use a RAID controller and create, for example, a RAID 5 with all 
> disks on each OSD server? Or should I pass through all disks to Ceph OSDs?
> 2. If I have a 2-node physical OSD cluster, do I need 3 physical MONs?
> 3. If I have a 3-node physical OSD cluster, do I need 5 physical MONs?
> 3. Where should I install the MGR? On the OSD or MON nodes?
> 4. Where should I install the RGW? On the OSD or MON nodes, or on 1 or 2 
> separate machines?
> 
> In my test lab I created 3 OSD VMs with MGR installed, 5 MON VMs, and 1 VM 
> as RGW -> is this correct?
> 
> Thanks in advance
> hfreidhof
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io