[ceph-users] Re: [Suspicious newsletter] Re: RGW memory consumption

2021-08-14 Thread Szabo, Istvan (Agoda)
I’d say if it is not round-robined, the same hosts will keep going to the same
endpoints and you can end up in an unbalanced situation.
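
For illustration, a minimal HAProxy-style sketch (hypothetical addresses and
ports, not taken from either of our setups) of the two balancing modes:

# Hypothetical backend for a set of RGW endpoints.
backend rgw
    balance roundrobin        # each new connection rotates across all gateways
    # balance source          # alternative: pins each client IP to one gateway,
    #                         # which can leave a single RGW far busier than the rest
    server rgw1 10.0.0.1:8010 check
    server rgw2 10.0.0.2:8010 check
    server rgw3 10.0.0.3:8010 check

A hardware load balancer usually exposes the same choice as per-request vs.
per-source-IP persistence.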

Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---

On 2021. Aug 14., at 16:57, mhnx  wrote:


I use a hardware load balancer. I think it is set to round-robin, but I'm not
sure. When I watch the requests, RGW usage seems equal.
What is your point in asking about the load balancer? If you think the leaking
RGW is simply receiving more traffic, that is not the case. I couldn't find any
difference; the RGWs and hosts are identical. I've restarted that RGW and the
memory usage is gone. I will be watching all RGWs to understand this better.
I give it a +1 vote, it's probably a memory leak.
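
In case it helps, a rough shell sketch (nothing specific to this cluster) for
keeping an eye on radosgw resident memory on each gateway host:

# Print radosgw RSS (in KiB) every 5 minutes; run on each RGW host.
while true; do
    date
    ps -C radosgw -o pid,rss,etime,cmd
    sleep 300
done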


On Sat, 14 Aug 2021 at 09:56, Szabo, Istvan (Agoda) <istvan.sz...@agoda.com> wrote:
Are you using a load balancer? Maybe you are using a source-based balancing method?

Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---

On 2021. Aug 13., at 16:14, mhnx <morphinwith...@gmail.com> wrote:


Hello Martin. I'm using 14.2.16. Our S3 usage is similar.
I have 10 RGWs. They're running on the OSD nodes.
Nine RGWs are between 3G and 5G, but one of my RGWs is using 85G. I have 256G of
RAM, and that's why I didn't notice it before. Thanks for the warning.

But the question is: why are nine RGWs low and one of them very high? Weird...
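
One way to see where that memory sits is the gateway's admin socket; this is
only a sketch, and the socket name below is a guess (check /var/run/ceph/ on
HOST9 for the real .asok):

# Generic admin-socket commands, run on the host with the 85G radosgw.
ceph daemon /var/run/ceph/ceph-client.radosgw.9.asok dump_mempools
ceph daemon /var/run/ceph/ceph-client.radosgw.9.asok perf dump > /tmp/rgw9-perf.json
ceph daemon /var/run/ceph/ceph-client.radosgw.9.asok config diff

Comparing dump_mempools between the 85G gateway and a 3-5G one should show
whether the growth is in tracked pools or in unaccounted (allocator) memory.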

My ceph.conf:

[radosgw.9]
rgw data = /var/lib/ceph/radosgw/ceph-radosgw.9
rgw zonegroup = xx
rgw zone = xxx
rgw zonegroup root pool = xx.root
rgw zone root pool = xx.root
host = HOST9
rgw dns name = s3..com
rgw frontends = beast port=8010
rgw user max buckets=999
log file = /var/log/ceph/radosgw.9.log
rgw run sync thread = false
rgw_dynamic_resharding = false



On Fri, 13 Aug 2021 at 14:53, Martin Traxl <martin.tr...@1und1.de> wrote:

We have been experiencing this behaviour ever since this cluster went into
production and started getting "some load". We started with this cluster in May
this year, running Ceph 14.2.15, and already had this same issue. It just took
a little longer until all RAM was consumed, as the load was a little lower than
it is now.

This is my config diff (I stripped some hostnames/IPs):


{
   "diff": {
   "admin_socket": {
   "default": "$run_dir/$cluster-$name.$pid.$cctid.asok",
   "final":
"/var/run/ceph/ceph-client.rgw.#.882549.94336165049544.asok"
   },
   "bluefs_buffered_io": {
   "default": true,
   "file": true,
   "final": true
   },
   "cluster_network": {
   "default": "",
   "file": "#/26",
   "final": "#/26"
   },
   "daemonize": {
   "default": true,
   "override": false,
   "final": false
   },
   "debug_rgw": {
   "default": "1/5",
   "final": "1/5"
   },
   "filestore_fd_cache_size": {
   "default": 128,
   "file": 2048,
   "final": 2048
   },
   "filestore_op_threads": {
   "default": 2,
   "file": 8,
   "final": 8
   },
   "filestore_queue_max_ops": {
   "default": 50,
   "file": 100,
   "final": 100
   },
   "fsid": {
   "default": "----",
   "file": "#",
   "override": "#",
   "final": "#"
   },
   "keyring": {
   "default": "$rgw_data/keyring",
   "final": "/var/lib/ceph/radosgw/ceph-rgw.#/keyring"
   },
   "mon_host": {
   "default": "",
   "file": "# # #",
   "final": "# # #"
   },

   "mon_osd_down_out_interval": {
   "default": 600,
   "file": 1800,
   "final": 1800
   },
   "mon_osd_down_out_subtree_limit": {
   "default": "rack",
   "file": "host",
   "final": "host"
   },
   "mon_osd_initial_require_min_compat_client": {
   "default": "jewel",
   "file": "jewel",
   "final": "jewel"
   },
   "mon_osd_min_down_reporters": {
   "default": 2,
   "file": 2,
   "final": 2
   },
   "mon_osd_reporter_subtree_level": {
   "default": "host",
   "file": "host",
   "final": "host"
   },
   "ms_client_mode": {
   "default": "crc secure",
   "file": "secure",
   "final": "s

[ceph-users] v15-2-14-octopus no docker images on docker hub ceph/ceph ?

2021-08-14 Thread Jadkins21
Hi Everyone,

I've just been trying to upgrade from v15.2.11 to the latest v15.2.14, and I'm
getting an error from Docker: it is unable to pull the image.

I've had a look at
https://hub.docker.com/r/ceph/ceph/tags?page=1&ordering=last_updated&name=v15.2.14
and there don't seem to be any images built for this release.

The release has been published at
https://docs.ceph.com/en/latest/releases/octopus/#v15-2-14-octopus

And looking at the tracker, it looks like some tasks are not yet marked as Done:
https://tracker.ceph.com/issues/51982

Am I just being too impatient? Or did I miss something about Docker support
being discontinued for cephadm? (I hope not, it's great.)
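
In case it is just timing, a small sketch of how to check for the tag and then
upgrade once an image is published (the Docker Hub API path is an assumption;
the ceph orch commands are standard cephadm):

# Returns 200 once the tag exists on Docker Hub, 404 until then.
curl -s -o /dev/null -w '%{http_code}\n' \
  https://hub.docker.com/v2/repositories/ceph/ceph/tags/v15.2.14

# Once an image is available, point cephadm at it explicitly and watch progress:
ceph orch upgrade start --image docker.io/ceph/ceph:v15.2.14
ceph orch upgrade status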

Thanks

Jamie
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Deployment of Monitors and Managers

2021-08-14 Thread Anthony D'Atri
Deploying mons and mgrs together is very common, especially if you have 
dedicated hosts (i.e. they aren’t on OSD nodes).
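
If the cluster is deployed with cephadm, a minimal placement sketch (hostnames
are placeholders) that co-locates the mgrs with two of the mon hosts could look
roughly like this:

# mon-mgr-spec.yaml -- hypothetical hostnames, adjust to your environment.
cat > mon-mgr-spec.yaml <<'EOF'
service_type: mon
placement:
  hosts:
    - ceph-node1
    - ceph-node2
    - ceph-node3
---
service_type: mgr
placement:
  hosts:
    - ceph-node1
    - ceph-node2
EOF
ceph orch apply -i mon-mgr-spec.yaml

Only one mgr is active at a time (the other is a standby), so sharing hosts
with the mons costs very little.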


> On Aug 14, 2021, at 1:06 AM, Michel Niyoyita  wrote:
> 
> Dear Ceph users,
> 
> I am going to deploy ceph in production, and I am going to deploy 3
> monitors on 3 different hosts to form a quorum. Is there any
> inconvenience if I deploy 2 managers on the same hosts where I deployed
> the monitors? Is it mandatory for them to be separate?
> 
> Kindly advise.
> 
> Best regards
> 
> Michel
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Recovery stuck and Multiple PG fails

2021-08-14 Thread Amudhan P
Suresh,

The problem is that some of my OSD services are not stable; they crash
continuously.

I have attached OSD log lines from the failure, which are already at debug
level.

Let me know if you need more details.
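
For reference, a generic sketch of where more crash detail can be pulled from
(osd.7 is just the example from the attached log; with a cephadm/podman
deployment, "cephadm logs" is usually easier than journalctl):

ceph crash ls                  # crashes recorded by the crash module
ceph crash info <crash-id>     # full backtrace for one of them
ceph osd find 7                # which host the OSD runs on
cephadm logs --name osd.7      # container/journal log for that daemon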

On Sat, Aug 14, 2021 at 8:10 PM Suresh Rama  wrote:

> Amudhan,
>
> Have you looked at the logs, and did you try enabling debug to see why the
> OSDs are marked down? There should be some reason, right? Just focus on the
> MON and take one node/OSD, enabling debug, to see what is happening.
> https://docs.ceph.com/en/latest/cephadm/operations/.
>
> Thanks,
> Suresh
>
> On Sat, Aug 14, 2021, 9:53 AM Amudhan P  wrote:
>
>> Hi,
>> I am stuck with a ceph cluster with multiple PG errors because multiple OSDs
>> stopped, and starting the OSDs manually again didn't help: the OSD service
>> stops again. There is no issue with the HDDs for sure, but for some reason
>> the OSDs stop.
>>
>> I am running ceph version 15.2.5 in podman containers.
>>
>> How do I recover from these PG failures?
>>
>> Can someone help me recover this, or tell me where to look further?
>>
>> pgs: 0.360% pgs not active
>>  124186/5082364 objects degraded (2.443%)
>>  29899/5082364 objects misplaced (0.588%)
>>  670 active+clean
>>  69  active+undersized+remapped
>>  26  active+undersized+degraded+remapped+backfill_wait
>>  16  active+undersized+remapped+backfill_wait
>>  15  active+undersized+degraded+remapped
>>  13  active+clean+remapped
>>  9   active+recovery_wait+degraded
>>  4   active+remapped+backfill_wait
>>  3   stale+down
>>  3   active+undersized+remapped+inconsistent
>>  2   active+recovery_wait+degraded+remapped
>>  1   active+recovering+degraded+remapped
>>  1   active+clean+remapped+inconsistent
>>  1   active+recovering+degraded
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
>
Aug 14 20:25:32 node1 bash[29321]: debug-16> 2021-08-14T14:55:32.139+ 
7f2097869700 10 monclient: handle_auth_request added challenge on 0x5564eccdb400
Aug 14 20:25:32 node1 bash[29321]: debug-15> 2021-08-14T14:55:32.139+ 
7f207afc0700  5 osd.7 pg_epoch: 7180 pg[2.cd( v 6838'194480 
(6838'187296,6838'194480] local-lis/les=7171/7172 n=5007 ec=226/226 
lis/c=7176/6927 les/c/f=7177/6928/0 sis=7180) [7,34]/[7,47] r=0 lpr=7180 
pi=[6927,7180)/1 crt=6838'194480 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] 
exit Started/Primary/Peering/GetInfo 0.486478 5 0.000268
Aug 14 20:25:32 node1 bash[29321]: debug-14> 2021-08-14T14:55:32.139+ 
7f207afc0700  5 osd.7 pg_epoch: 7180 pg[2.cd( v 6838'194480 
(6838'187296,6838'194480] local-lis/les=7171/7172 n=5007 ec=226/226 
lis/c=7176/6927 les/c/f=7177/6928/0 sis=7180) [7,34]/[7,47] r=0 lpr=7180 
pi=[6927,7180)/1 crt=6838'194480 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] 
enter Started/Primary/Peering/GetLog
Aug 14 20:25:32 node1 bash[29321]: debug-13> 2021-08-14T14:55:32.139+ 
7f2083fd2700  3 osd.7 7180 handle_osd_map epochs [7180,7180], i have 7180, src 
has [5697,7180]
Aug 14 20:25:32 node1 bash[29321]: debug-12> 2021-08-14T14:55:32.143+ 
7f207afc0700  5 osd.7 pg_epoch: 7180 pg[2.cd( v 6838'194480 
(6838'187296,6838'194480] local-lis/les=7176/7177 n=5007 ec=226/226 
lis/c=7176/6927 les/c/f=7177/6928/0 sis=7180) [7,34]/[7,47] backfill=[34] r=0 
lpr=7180 pi=[6927,7180)/1 crt=6838'194480 lcod 0'0 mlcod 0'0 remapped+peering 
mbc={}] exit Started/Primary/Peering/GetLog 0.004066 2 0.000112
Aug 14 20:25:32 node1 bash[29321]: debug-11> 2021-08-14T14:55:32.143+ 
7f207afc0700  5 osd.7 pg_epoch: 7180 pg[2.cd( v 6838'194480 
(6838'187296,6838'194480] local-lis/les=7176/7177 n=5007 ec=226/226 
lis/c=7176/6927 les/c/f=7177/6928/0 sis=7180) [7,34]/[7,47] backfill=[34] r=0 
lpr=7180 pi=[6927,7180)/1 crt=6838'194480 lcod 0'0 mlcod 0'0 remapped+peering 
mbc={}] enter Started/Primary/Peering/GetMissing
Aug 14 20:25:32 node1 bash[29321]: debug-10> 2021-08-14T14:55:32.143+ 
7f207afc0700  5 osd.7 pg_epoch: 7180 pg[2.cd( v 6838'194480 
(6838'187296,6838'194480] local-lis/les=7176/7177 n=5007 ec=226/226 
lis/c=7176/6927 les/c/f=7177/6928/0 sis=7180) [7,34]/[7,47] backfill=[34] r=0 
lpr=7180 pi=[6927,7180)/1 crt=6838'194480 lcod 0'0 mlcod 0'0 remapped+peering 
mbc={}] exit Started/Primary/Peering/GetMissing 0.57 0 0.00
Aug 14 20:25:32 node1 bash[29321]: debug -9> 2021-08-14T14:55:32.143+ 
7f207afc0700  5 osd.7 pg_epoch: 7180 pg[2.cd( v 6838'194480 
(6838'187296,6838'194480] local-lis/les=7176/7177 n=5007 ec=226/226 
lis/c=7176/6927 les/c/f=7177/6928/0 sis=7180) [7,34]/[7,47] backfill=[34] r=0 
lpr=7180 pi=[6927,7180)/1 crt=6838'194480 lcod 0'0 mlcod 0'0 remapped+peering 
mbc={}] enter Started/Primary/Peering/WaitUpThru
Aug 14 20:25:32 node1 bash[29321]: debug -8> 2021-08-14T14:

[ceph-users] Re: [Suspicious newsletter] Re: RGW memory consumption

2021-08-14 Thread mhnx
I use a hardware load balancer. I think it is set to round-robin, but I'm not
sure. When I watch the requests, RGW usage seems equal.
What is your point in asking about the load balancer? If you think the leaking
RGW is simply receiving more traffic, that is not the case. I couldn't find any
difference; the RGWs and hosts are identical. I've restarted that RGW and the
memory usage is gone. I will be watching all RGWs to understand this better.
I give it a +1 vote, it's probably a memory leak.


On Sat, 14 Aug 2021 at 09:56, Szabo, Istvan (Agoda) <istvan.sz...@agoda.com> wrote:

> Are you using a load balancer? Maybe you are using a source-based balancing method?
>
> Istvan Szabo
> Senior Infrastructure Engineer
> ---
> Agoda Services Co., Ltd.
> e: istvan.sz...@agoda.com
> ---
>
> On 2021. Aug 13., at 16:14, mhnx  wrote:
>
>
> Hello Martin. I'm using 14.2.16. Our S3 usage is similar.
> I have 10 RGWs. They're running on the OSD nodes.
> Nine RGWs are between 3G and 5G, but one of my RGWs is using 85G. I have 256G
> of RAM, and that's why I didn't notice it before. Thanks for the warning.
>
> But the question is: why are nine RGWs low and one of them very high? Weird...
>
> My ceph.conf:
>
> [radosgw.9]
> rgw data = /var/lib/ceph/radosgw/ceph-radosgw.9
> rgw zonegroup = xx
> rgw zone = xxx
> rgw zonegroup root pool = xx.root
> rgw zone root pool = xx.root
> host = HOST9
> rgw dns name = s3..com
> rgw frontends = beast port=8010
> rgw user max buckets=999
> log file = /var/log/ceph/radosgw.9.log
> rgw run sync thread = false
> rgw_dynamic_resharding = false
>
>
>
> On Fri, 13 Aug 2021 at 14:53, Martin Traxl wrote:
>
> We have been experiencing this behaviour ever since this cluster went into
> production and started getting "some load". We started with this cluster in
> May this year, running Ceph 14.2.15, and already had this same issue. It just
> took a little longer until all RAM was consumed, as the load was a little
> lower than it is now.
>
> This is my config diff (I stripped some hostnames/IPs):
>
> {
>    "diff": {
>    "admin_socket": {
>    "default": "$run_dir/$cluster-$name.$pid.$cctid.asok",
>    "final":
> "/var/run/ceph/ceph-client.rgw.#.882549.94336165049544.asok"
>    },
>    "bluefs_buffered_io": {
>    "default": true,
>    "file": true,
>    "final": true
>    },
>    "cluster_network": {
>    "default": "",
>    "file": "#/26",
>    "final": "#/26"
>    },
>    "daemonize": {
>    "default": true,
>    "override": false,
>    "final": false
>    },
>    "debug_rgw": {
>    "default": "1/5",
>    "final": "1/5"
>    },
>    "filestore_fd_cache_size": {
>    "default": 128,
>    "file": 2048,
>    "final": 2048
>    },
>    "filestore_op_threads": {
>    "default": 2,
>    "file": 8,
>    "final": 8
>    },
>    "filestore_queue_max_ops": {
>    "default": 50,
>    "file": 100,
>    "final": 100
>    },
>    "fsid": {
>    "default": "----",
>    "file": "#",
>    "override": "#",
>    "final": "#"
>    },
>    "keyring": {
>    "default": "$rgw_data/keyring",
>    "final": "/var/lib/ceph/radosgw/ceph-rgw.#/keyring"
>    },
>    "mon_host": {
>    "default": "",
>    "file": "# # #",
>    "final": "# # #"
>    },
>
>    "mon_osd_down_out_interval": {
>    "default": 600,
>    "file": 1800,
>    "final": 1800
>    },
>    "mon_osd_down_out_subtree_limit": {
>    "default": "rack",
>    "file": "host",
>    "final": "host"
>    },
>    "mon_osd_initial_require_min_compat_client": {
>    "default": "jewel",
>    "file": "jewel",
>    "final": "jewel"
>    },
>    "mon_osd_min_down_reporters": {
>    "default": 2,
>    "file": 2,
>    "final": 2
>    },
>    "mon_osd_reporter_subtree_level": {
>    "default": "host",
>    "file": "host",
>    "final": "host"
>    },
>    "ms_client_mode": {
>    "default": "crc secure",
>    "file": "secure",
>    "final": "secure"
>    },
>    "ms_cluster_mode": {
>    "default": "crc secure",
>    "file": "secure",
>    "final": "secure"
>    },
>    "ms_mon_client_mode": {

[ceph-users] Re: Recovery stuck and Multiple PG fails

2021-08-14 Thread Suresh Rama
Amudhan,

Have you looked at the logs, and did you try enabling debug to see why the
OSDs are marked down? There should be some reason, right? Just focus on the
MON and take one node/OSD, enabling debug, to see what is happening.
https://docs.ceph.com/en/latest/cephadm/operations/.
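
A minimal sketch of what that could look like on one OSD and the mons (these
are the standard debug settings; revert them once you have captured a failure):

# Raise logging on a single OSD and on the mons, reproduce the crash, then revert.
ceph config set osd.7 debug_osd 20
ceph config set osd.7 debug_ms 1
ceph config set mon debug_mon 10
# ...after the OSD drops out again and the logs are collected:
ceph config rm osd.7 debug_osd
ceph config rm osd.7 debug_ms
ceph config rm mon debug_mon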

Thanks,
Suresh

On Sat, Aug 14, 2021, 9:53 AM Amudhan P  wrote:

> Hi,
> I am stuck with a ceph cluster with multiple PG errors because multiple OSDs
> stopped, and starting the OSDs manually again didn't help: the OSD service
> stops again. There is no issue with the HDDs for sure, but for some reason
> the OSDs stop.
>
> I am running ceph version 15.2.5 in podman containers.
>
> How do I recover from these PG failures?
>
> Can someone help me recover this, or tell me where to look further?
>
> pgs: 0.360% pgs not active
>  124186/5082364 objects degraded (2.443%)
>  29899/5082364 objects misplaced (0.588%)
>  670 active+clean
>  69  active+undersized+remapped
>  26  active+undersized+degraded+remapped+backfill_wait
>  16  active+undersized+remapped+backfill_wait
>  15  active+undersized+degraded+remapped
>  13  active+clean+remapped
>  9   active+recovery_wait+degraded
>  4   active+remapped+backfill_wait
>  3   stale+down
>  3   active+undersized+remapped+inconsistent
>  2   active+recovery_wait+degraded+remapped
>  1   active+recovering+degraded+remapped
>  1   active+clean+remapped+inconsistent
>  1   active+recovering+degraded
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Recovery stuck and Multiple PG fails

2021-08-14 Thread Amudhan P
Hi,
I am stuck with a ceph cluster with multiple PG errors because multiple OSDs
stopped, and starting the OSDs manually again didn't help: the OSD service
stops again. There is no issue with the HDDs for sure, but for some reason the
OSDs stop.

I am running ceph version 15.2.5 in podman containers.

How do I recover from these PG failures?

Can someone help me recover this, or tell me where to look further?

pgs: 0.360% pgs not active
 124186/5082364 objects degraded (2.443%)
 29899/5082364 objects misplaced (0.588%)
 670 active+clean
 69  active+undersized+remapped
 26  active+undersized+degraded+remapped+backfill_wait
 16  active+undersized+remapped+backfill_wait
 15  active+undersized+degraded+remapped
 13  active+clean+remapped
 9   active+recovery_wait+degraded
 4   active+remapped+backfill_wait
 3   stale+down
 3   active+undersized+remapped+inconsistent
 2   active+recovery_wait+degraded+remapped
 1   active+recovering+degraded+remapped
 1   active+clean+remapped+inconsistent
 1   active+recovering+degraded
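
For what it is worth, a generic sketch for digging into the stuck and
inconsistent PGs (the PG id is a placeholder; take the real ones from
"ceph health detail"):

ceph health detail                                       # lists the affected PG ids
ceph pg <pgid> query                                     # peering/backfill state, which OSDs it is waiting for
rados list-inconsistent-obj <pgid> --format=json-pretty  # for the PGs flagged inconsistent
ceph pg repair <pgid>                                    # only once the inconsistency is understood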
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: [Suspicious newsletter] Re: RGW memory consumption

2021-08-14 Thread Szabo, Istvan (Agoda)
Are you using a load balancer? Maybe you are using a source-based balancing method?

Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---

On 2021. Aug 13., at 16:14, mhnx  wrote:


Hello Martin. I'm using 14.2.16. Our S3 usage is similar.
I have 10 RGWs. They're running on the OSD nodes.
Nine RGWs are between 3G and 5G, but one of my RGWs is using 85G. I have 256G of
RAM, and that's why I didn't notice it before. Thanks for the warning.

But the question is: why are nine RGWs low and one of them very high? Weird...

My ceph.conf:

[radosgw.9]
rgw data = /var/lib/ceph/radosgw/ceph-radosgw.9
rgw zonegroup = xx
rgw zone = xxx
rgw zonegroup root pool = xx.root
rgw zone root pool = xx.root
host = HOST9
rgw dns name = s3..com
rgw frontends = beast port=8010
rgw user max buckets=999
log file = /var/log/ceph/radosgw.9.log
rgw run sync thread = false
rgw_dynamic_resharding = false



On Fri, 13 Aug 2021 at 14:53, Martin Traxl wrote:

We have been experiencing this behaviour ever since this cluster went into
production and started getting "some load". We started with this cluster in May
this year, running Ceph 14.2.15, and already had this same issue. It just took
a little longer until all RAM was consumed, as the load was a little lower than
it is now.

This is my config diff (I stripped some hostnames/IPs):


{
   "diff": {
   "admin_socket": {
   "default": "$run_dir/$cluster-$name.$pid.$cctid.asok",
   "final":
"/var/run/ceph/ceph-client.rgw.#.882549.94336165049544.asok"
   },
   "bluefs_buffered_io": {
   "default": true,
   "file": true,
   "final": true
   },
   "cluster_network": {
   "default": "",
   "file": "#/26",
   "final": "#/26"
   },
   "daemonize": {
   "default": true,
   "override": false,
   "final": false
   },
   "debug_rgw": {
   "default": "1/5",
   "final": "1/5"
   },
   "filestore_fd_cache_size": {
   "default": 128,
   "file": 2048,
   "final": 2048
   },
   "filestore_op_threads": {
   "default": 2,
   "file": 8,
   "final": 8
   },
   "filestore_queue_max_ops": {
   "default": 50,
   "file": 100,
   "final": 100
   },
   "fsid": {
   "default": "----",
   "file": "#",
   "override": "#",
   "final": "#"
   },
   "keyring": {
   "default": "$rgw_data/keyring",
   "final": "/var/lib/ceph/radosgw/ceph-rgw.#/keyring"
   },
   "mon_host": {
   "default": "",
   "file": "# # #",
   "final": "# # #"
   },

   "mon_osd_down_out_interval": {
   "default": 600,
   "file": 1800,
   "final": 1800
   },
   "mon_osd_down_out_subtree_limit": {
   "default": "rack",
   "file": "host",
   "final": "host"
   },
   "mon_osd_initial_require_min_compat_client": {
   "default": "jewel",
   "file": "jewel",
   "final": "jewel"
   },
   "mon_osd_min_down_reporters": {
   "default": 2,
   "file": 2,
   "final": 2
   },
   "mon_osd_reporter_subtree_level": {
   "default": "host",
   "file": "host",
   "final": "host"
   },
   "ms_client_mode": {
   "default": "crc secure",
   "file": "secure",
   "final": "secure"
   },
   "ms_cluster_mode": {
   "default": "crc secure",
   "file": "secure",
   "final": "secure"
   },
   "ms_mon_client_mode": {
   "default": "secure crc",
   "file": "secure",
   "final": "secure"
   },
   "ms_mon_cluster_mode": {
   "default": "secure crc",
   "file": "secure",
   "final": "secure"
   },
   "ms_mon_service_mode": {
   "default": "secure crc",
   "file": "secure",
   "final": "secure"
   },

   "ms_service_mode": {
   "default": "crc secure",
   "file": "secure",
   "final": "secure"
   },
   "objecter_inflight_ops": {
   "default": 24576,
   "final": 24576
   },
   "osd_backfill_scan_max": {
   "default": 512,
   "file": 16,
   "final": 16
   },
   "osd_backfill_scan_min": {
   "default": 64,
   "file": 8,
   "final": 8
   },
   "osd_deep_scrub_stride": {
   "default": "524288",
   "file": "1048576",
   "final": "1048576"
   },
   "osd_fast_sh

[ceph-users] Deployment of Monitors and Managers

2021-08-14 Thread Michel Niyoyita
Dear Ceph users,

I am going to deploy ceph in production, and I am going to deploy 3
monitors on 3 different hosts to form a quorum. Is there any
inconvenience if I deploy 2 managers on the same hosts where I deployed
the monitors? Is it mandatory for them to be separate?

Kindly advise.

Best regards

Michel
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io