[ceph-users] Re: The always welcomed large omap

2021-05-31 Thread Szabo, Istvan (Agoda)
So the bucket has been deleted on the master zone, and the deletion has
propagated to the other zones as well. On the master zone the large omap
warning disappeared after a deep scrub, but on the secondary zone it's still
there.
There were 3 warnings at the beginning; after I scrubbed the affected OSDs
(not just the PGs) I have 6 large omap objects.
The bucket which I assumed caused the issue is not there anymore.

This is the log:

2021-06-01T08:48:11.560449+0700 osd.39 (osd.39) 1 : cluster [DBG] 20.6 deep-scrub starts
2021-06-01T08:48:13.218814+0700 osd.39 (osd.39) 2 : cluster [WRN] Large omap object found. Object: 20:62d24dc9:::.dir.9213182a-14ba-48ad-bde9-289a1c0c0de8.29868038.2.5:head PG: 20.93b24b46 (20.6) Key count: 244664 Size (bytes): 69659918
2021-06-01T08:48:15.245975+0700 osd.39 (osd.39) 3 : cluster [DBG] 20.6 deep-scrub ok
2021-06-01T08:48:16.623097+0700 osd.39 (osd.39) 4 : cluster [DBG] 20.e deep-scrub starts
2021-06-01T08:48:20.201926+0700 osd.39 (osd.39) 5 : cluster [WRN] Large omap object found. Object: 20:77cbb881:::.dir.9213182a-14ba-48ad-bde9-289a1c0c0de8.29868038.2.9:head PG: 20.811dd3ee (20.e) Key count: 244363 Size (bytes): 69582418
2021-06-01T08:48:20.275906+0700 osd.39 (osd.39) 6 : cluster [DBG] 20.e deep-scrub ok
2021-06-01T08:48:21.560212+0700 osd.39 (osd.39) 7 : cluster [DBG] 20.15 deep-scrub starts
2021-06-01T08:48:22.456133+0700 osd.39 (osd.39) 8 : cluster [WRN] Large omap object found. Object: 20:a8a0840e:::.dir.9213182a-14ba-48ad-bde9-289a1c0c0de8.29868038.2.3:head PG: 20.70210515 (20.15) Key count: 244169 Size (bytes): 69508179
2021-06-01T08:48:25.202051+0700 osd.39 (osd.39) 9 : cluster [DBG] 20.15 deep-scrub ok
2021-06-01T08:48:26.019422+0700 osd.37 (osd.37) 4 : cluster [DBG] 20.f deep-scrub starts
2021-06-01T08:48:28.370919+0700 osd.37 (osd.37) 5 : cluster [WRN] Large omap object found. Object: 20:f59fa8d7:::.dir.9213182a-14ba-48ad-bde9-289a1c0c0de8.29868038.2.6:head PG: 20.eb15f9af (20.f) Key count: 244243 Size (bytes): 69539068
2021-06-01T08:48:29.010877+0700 osd.37 (osd.37) 6 : cluster [DBG] 20.f deep-scrub ok
2021-06-01T08:48:29.573160+0700 osd.39 (osd.39) 10 : cluster [DBG] 20.18 deep-scrub starts
2021-06-01T08:48:32.682416+0700 osd.39 (osd.39) 11 : cluster [WRN] Large omap object found. Object: 20:1e2579ff:::.dir.9213182a-14ba-48ad-bde9-289a1c0c0de8.29868038.2.4:head PG: 20.ff9ea478 (20.18) Key count: 243858 Size (bytes): 69426645
2021-06-01T08:48:33.383843+0700 osd.39 (osd.39) 12 : cluster [DBG] 20.18 deep-scrub ok
2021-06-01T08:48:35.021940+0700 osd.37 (osd.37) 7 : cluster [DBG] 20.14 deep-scrub starts
2021-06-01T08:48:36.139892+0700 osd.37 (osd.37) 8 : cluster [WRN] Large omap object found. Object: 20:291b1669:::.dir.9213182a-14ba-48ad-bde9-289a1c0c0de8.29868038.2.1:head PG: 20.9668d894 (20.14) Key count: 244533 Size (bytes): 69611691
2021-06-01T08:48:38.843235+0700 osd.37 (osd.37) 9 : cluster [DBG] 20.14 deep-scrub ok
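
For what it's worth, the marker embedded in those object names
(.dir.<marker>.<shard>) can be checked against the remaining bucket
instances, and the leftover keys counted, with something like this (a rough
sketch; "default.rgw.buckets.index" stands in for the actual index pool
name):

$ radosgw-admin metadata list bucket.instance | grep 29868038.2
$ rados -p default.rgw.buckets.index listomapkeys \
    .dir.9213182a-14ba-48ad-bde9-289a1c0c0de8.29868038.2.5 | wc -l

If the marker no longer maps to any bucket instance, those shard objects are
presumably stale leftovers on the secondary zone; radosgw-admin reshard
stale-instances list may show them, though the docs warn against using
stale-instances rm on multisite.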

Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---

-Original Message-
From: Szabo, Istvan (Agoda)  
Sent: Monday, May 31, 2021 10:53 PM
To: Matt Vandermeulen 
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: [Suspicious newsletter] Re: The always welcomed large 
omap

Yeah, I found a bucket that is currently being deleted; I will preshard it.

Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---

-Original Message-
From: Matt Vandermeulen 
Sent: Monday, May 31, 2021 9:43 PM
To: Szabo, Istvan (Agoda) 
Cc: ceph-users@ceph.io
Subject: [Suspicious newsletter] [ceph-users] Re: The always welcomed large omap

All the index data will be in OMAP, which you can see a listing of with

`ceph osd df tree`

Do you have large buckets (many, many objects in a single bucket) with few 
shards?  You may have to reshard one (or some) of your buckets.
It'll take some reading if you're using multisite, in order to coordinate it 
(though I'm unfamiliar with how it works with multisite in Octopus).
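
If it does come down to resharding, the manual command is along these lines
(a sketch only; <bucket> and the shard count are placeholders, and on an
Octopus multisite setup check the docs first, since dynamic resharding isn't
supported there and a manual reshard has to be coordinated between zones):

# radosgw-admin bucket reshard --bucket=<bucket> --num-shards=<n>
# radosgw-admin reshard status --bucket=<bucket>

The usual rule of thumb is to stay around 100k objects per shard.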

On 2021-05-31 02:25, Szabo, Istvan (Agoda) wrote:
> Hi,
>
> Any way to clean up large-omap in the index pool?
> PG deep_scrub didn't help.
> I know how to clean in the log pool, but no idea in the index pool :/ 
> It's an octopus deployment 15.2.10.
>
> Thank you
>

[ceph-users] Re: Fwd: Re: Ceph osd will not start.

2021-05-31 Thread Marco Pizzolo
David,

What I can confirm is that if this fix is already in 16.2.4 and 15.2.13,
then there's another issue resulting in the same situation, as it continues
to happen with the latest available images.
We are going to try to install a 15.2.x release and subsequently upgrade
using a fixed image.  We did not find a good way to bootstrap directly with
a custom image, but maybe we missed something; the cephadm bootstrap command
didn't seem to support an image path.
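
One thing we have not tried yet: cephadm appears to accept a global --image
flag ahead of the subcommand, so something like the line below might work
(unverified on our side; the image tag is just an example):

# cephadm --image docker.io/ceph/ceph:v15.2.13 bootstrap --mon-ip <ip>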

Thanks for your help thus far.  I'll update later today or tomorrow when we
get the chance to go the upgrade route.

Seems tragic that when an all-stopping, immediately reproducible issue such
as this occurs, adopters are allowed to flounder for so long.  Ceph has had
a tremendously positive impact for us since we began using it
in luminous/mimic, but situations such as this are hard to look past.  It's
really unfortunate as our existing production clusters have been rock solid
thus far, but this does shake one's confidence, and I would wager that I'm
not alone.

Marco

On Mon, May 31, 2021 at 3:57 PM David Orman  wrote:

> Does the image we built fix the problem for you? That's how we worked
> around it. Unfortunately, it even bites you with fewer OSDs if you have
> DB/WAL on other devices; we have 24 rotational drives/OSDs, but split
> DB/WAL onto multiple NVMEs. We're hoping the remoto fix (since it's
> merged upstream and pushed) will land in the next point release of
> 16.x (and it sounds like 15.x), since this is a blocking issue without
> using patched containers. I guess testing isn't done against clusters
> with these kinds of configurations, as we can replicate it on any of
> our dev/test clusters with this type of drive configuration. We
> weren't able to upgrade any clusters/deploy new hosts on any clusters,
> so it caused quite an issue until we figured out the problem and
> resolved it.
>
> If you want to build your own images, this is the simple Dockerfile we
> used to get beyond this issue:
>
> $ cat Dockerfile
> FROM docker.io/ceph/ceph:v16.2.3
> COPY process.py /lib/python3.6/site-packages/remoto/process.py
>
> The process.py is the patched version we submitted here:
>
> https://github.com/alfredodeza/remoto/pull/63/commits/6f98078a1479de1f246f971f311146a3c1605494
> (merged upstream).
>
> Hope this helps,
> David
>
> On Mon, May 31, 2021 at 11:43 AM Marco Pizzolo 
> wrote:
> >
> > Unfortunately Ceph 16.2.4 is still not working for us.  We continue to
> have issues where the 26th OSD is not fully created and started.  We've
> confirmed that we do get the flock as described in:
> >
> > https://tracker.ceph.com/issues/50526
> >
> > -
> >
> > I have verified in our labs a way to easily reproduce the problem:
> >
> > 0. Please stop the cephadm orchestrator:
> >
> > In your bootstrap node:
> >
> > # cephadm shell
> > # ceph mgr module disable cephadm
> >
> > 1. In one of the hosts where you want to create osds and you have a big
> amount of devices:
> >
> > See if you have a "cephadm" filelock:
> > for example:
> >
> > # lslocks | grep cephadm
> > python3 1098782  FLOCK   0B WRITE 0 0   0
> /run/cephadm/9fa2b396-adb5-11eb-a2d3-bc97e17cf960.lock
> >
> > If that is the case, just kill the process to start with a "clean"
> situation.
> >
> > 2. Go to the folder: /var/lib/ceph/
> >
> > you will find there a file called "cephadm.xx".
> >
> > execute:
> >
> > # python3 cephadm.xx ceph-volume inventory
> >
> > 3. If the problem is present in your cephadm file, the command will
> block and you will see a cephadm filelock again.
> >
> > 4. If the modification was not present, change your
> cephadm.xx file to include the modification I did (it just
> removes the verbosity parameter in the call_throws call):
> >
> >
> https://github.com/ceph/ceph/blob/2f4dc3147712f1991242ef0d059690b5fa3d8463/src/cephadm/cephadm#L4576
> >
> > Go to step 1 to clean the filelock and try again... with the
> modification in place it must work.
> >
> > -
> >
> > For us, it takes a few seconds but then the manual execution does come
> back, and there are no file locks; however, we remain unable to add any
> further OSDs.
> >
> > Furthermore, this is happening during the creation of a new Pacific
> cluster, post bootstrap, adding one OSD daemon at a time and
> allowing each OSD to be created, set in, and brought up.
> >
> > How is everyone else managing to get past this, or are we the only ones
> (aside from David) using >25 OSDs per host?
> >
> > Our luck has been the same with 15.2.13 and 16.2.4, and using both
> Docker and Podman on Ubuntu 20.04.2
> >
> > Thanks,
> > Marco
> >
> >
> >
> > On Sun, May 30, 2021 at 7:33 AM Peter Childs  wrote:
> >>
> >> I've actually managed to get a little further with my problem.
> >>
> >> As I've said before these servers are slightly distorted in config.
> >>
> >> 63 drives and only 48 GB of memory.
> >>
> >> Once I create about 

[ceph-users] Re: Fwd: Re: Ceph osd will not start.

2021-05-31 Thread David Orman
Does the image we built fix the problem for you? That's how we worked
around it. Unfortunately, it even bites you with fewer OSDs if you have
DB/WAL on other devices; we have 24 rotational drives/OSDs, but split
DB/WAL onto multiple NVMEs. We're hoping the remoto fix (since it's
merged upstream and pushed) will land in the next point release of
16.x (and it sounds like 15.x), since this is a blocking issue without
using patched containers. I guess testing isn't done against clusters
with these kinds of configurations, as we can replicate it on any of
our dev/test clusters with this type of drive configuration. We
weren't able to upgrade any clusters/deploy new hosts on any clusters,
so it caused quite an issue until we figured out the problem and
resolved it.

If you want to build your own images, this is the simple Dockerfile we
used to get beyond this issue:

$ cat Dockerfile
FROM docker.io/ceph/ceph:v16.2.3
COPY process.py /lib/python3.6/site-packages/remoto/process.py

The process.py is the patched version we submitted here:
https://github.com/alfredodeza/remoto/pull/63/commits/6f98078a1479de1f246f971f311146a3c1605494
(merged upstream).
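
In case it's useful, rolling it out looked roughly like this (the registry
name is a placeholder, adjust to your environment):

$ docker build -t <registry>/ceph:v16.2.3-remotofix .
$ docker push <registry>/ceph:v16.2.3-remotofix
# ceph orch upgrade start --image <registry>/ceph:v16.2.3-remotofix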

Hope this helps,
David

On Mon, May 31, 2021 at 11:43 AM Marco Pizzolo  wrote:
>
> Unfortunately Ceph 16.2.4 is still not working for us.  We continue to have 
> issues where the 26th OSD is not fully created and started.  We've confirmed 
> that we do get the flock as described in:
>
> https://tracker.ceph.com/issues/50526
>
> -
>
> I have verified in our labs a way to easily reproduce the problem:
>
> 0. Please stop the cephadm orchestrator:
>
> In your bootstrap node:
>
> # cephadm shell
> # ceph mgr module disable cephadm
>
> 1. In one of the hosts where you want to create osds and you have a big 
> amount of devices:
>
> See if you have a "cephadm" filelock:
> for example:
>
> # lslocks | grep cephadm
> python3 1098782  FLOCK   0B WRITE 0 0   0 
> /run/cephadm/9fa2b396-adb5-11eb-a2d3-bc97e17cf960.lock
>
> If that is the case, just kill the process to start with a "clean" situation.
>
> 2. Go to the folder: /var/lib/ceph/
>
> you will find there a file called "cephadm.xx".
>
> execute:
>
> # python3 cephadm.xx ceph-volume inventory
>
> 3. If the problem is present in your cephadm file, the command will block
> and you will see a cephadm filelock again.
>
> 4. If the modification was not present, change your
> cephadm.xx file to include the modification I did (it just removes
> the verbosity parameter in the call_throws call):
>
> https://github.com/ceph/ceph/blob/2f4dc3147712f1991242ef0d059690b5fa3d8463/src/cephadm/cephadm#L4576
>
> Go to step 1 to clean the filelock and try again... with the modification in
> place it must work.
>
> -
>
> For us, it takes a few seconds but then the manual execution does come back,
> and there are no file locks; however, we remain unable to add any further
> OSDs.
>
> Furthermore, this is happening during the creation of a new Pacific
> cluster, post bootstrap, adding one OSD daemon at a time and
> allowing each OSD to be created, set in, and brought up.
>
> How is everyone else managing to get past this, or are we the only ones 
> (aside from David) using >25 OSDs per host?
>
> Our luck has been the same with 15.2.13 and 16.2.4, and using both Docker and 
> Podman on Ubuntu 20.04.2
>
> Thanks,
> Marco
>
>
>
> On Sun, May 30, 2021 at 7:33 AM Peter Childs  wrote:
>>
>> I've actually managed to get a little further with my problem.
>>
>> As I've said before these servers are slightly distorted in config.
>>
>> 63 drives and only 48 GB of memory.
>>
>> Once I create about 15-20 osds it continues to format the disks but won't 
>> actually create the containers or start any service.
>>
>> Worse than that, on reboot the disks disappear (they don't just stop
>> working; they are not detected by Linux at all), which makes me think I'm
>> hitting some kernel limit.
>>
>> At this point I'm going to cut my losses, give up, and use the smaller,
>> slightly more powerful 30x drive systems I have (with 256 GB of memory),
>> maybe transplanting the larger disks if I need more capacity.
>>
>> Peter
>>
>> On Sat, 29 May 2021, 23:19 Marco Pizzolo,  wrote:
>>>
>>> Thanks David
>>> We will investigate the bugs as per your suggestion, and then will look to 
>>> test with the custom image.
>>>
>>> Appreciate it.
>>>
>>> On Sat, May 29, 2021, 4:11 PM David Orman  wrote:

 You may be running into the same issue we ran into (make sure to read
 the first issue, there's a few mingled in there), for which we
 submitted a patch:

 https://tracker.ceph.com/issues/50526
 https://github.com/alfredodeza/remoto/issues/62

 If you're brave (YMMV, test first non-prod), we pushed an image with
 the issue we encountered fixed as per above here:
 https://hub.docker.com/repository/docker/ormandj/ceph/tags?page=1 . We
 'upgraded' to this when we encountered the mgr 

[ceph-users] Re: [Suspicious newsletter] Re: The always welcomed large omap

2021-05-31 Thread Szabo, Istvan (Agoda)
Yeah, I found a bucket that is currently being deleted; I will preshard it.

Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---

-Original Message-
From: Matt Vandermeulen 
Sent: Monday, May 31, 2021 9:43 PM
To: Szabo, Istvan (Agoda) 
Cc: ceph-users@ceph.io
Subject: [Suspicious newsletter] [ceph-users] Re: The always welcomed large omap

All the index data will be in OMAP, which you can see a listing of with

`ceph osd df tree`

Do you have large buckets (many, many objects in a single bucket) with few 
shards?  You may have to reshard one (or some) of your buckets.
It'll take some reading if you're using multisite, in order to coordinate it 
(though I'm unfamiliar with how it works with multisite in Octopus).

On 2021-05-31 02:25, Szabo, Istvan (Agoda) wrote:
> Hi,
>
> Any way to clean up large-omap in the index pool?
> PG deep_scrub didn't help.
> I know how to clean in the log pool, but no idea in the index pool :/
> It's an octopus deployment 15.2.10.
>
> Thank you
>


[ceph-users] Re: Fwd: Re: Ceph osd will not start.

2021-05-31 Thread Marco Pizzolo
Unfortunately Ceph 16.2.4 is still not working for us.  We continue to have
issues where the 26th OSD is not fully created and started.  We've
confirmed that we do get the flock as described in:

https://tracker.ceph.com/issues/50526

-

I have verified in our labs a way to easily reproduce the problem:

0. Please stop the cephadm orchestrator:

In your bootstrap node:

# cephadm shell
# ceph mgr module disable cephadm

1. In one of the hosts where you want to create osds and you have a big
amount of devices:

See if you have a "cephadm" filelock, for example:

# lslocks | grep cephadm
python3 1098782  FLOCK   0B WRITE 0 0   0
/run/cephadm/9fa2b396-adb5-11eb-a2d3-bc97e17cf960.lock

If that is the case, just kill the process to start with a "clean"
situation.

2. Go to the folder: /var/lib/ceph/

You will find there a file called "cephadm.xx".

Execute:

# python3 cephadm.xx ceph-volume inventory

3. If the problem is present in your cephadm file, the command will block
and you will see a cephadm filelock again.

4. If the modification was not present, change your cephadm.xx
file to include the modification I did (it just removes the verbosity
parameter in the call_throws call):

https://github.com/ceph/ceph/blob/2f4dc3147712f1991242ef0d059690b5fa3d8463/src/cephadm/cephadm#L4576

Go to step 1 to clean the filelock and try again... with the modification
in place it must work.

-

For us, it takes a few seconds but then the manual execution does come
back, and there are no file locks; however, we remain unable to add any
further OSDs.

Furthermore, this is happening during the creation of a new Pacific
cluster, post bootstrap, adding one OSD daemon at a time and
allowing each OSD to be created, set in, and brought up.

How is everyone else managing to get past this, or are we the only ones
(aside from David) using >25 OSDs per host?

Our luck has been the same with 15.2.13 and 16.2.4, and using both Docker
and Podman on Ubuntu 20.04.2

Thanks,
Marco



On Sun, May 30, 2021 at 7:33 AM Peter Childs  wrote:

> I've actually managed to get a little further with my problem.
>
> As I've said before these servers are slightly distorted in config.
>
> 63 drives and only 48 GB of memory.
>
> Once I create about 15-20 osds it continues to format the disks but won't
> actually create the containers or start any service.
>
> Worse than that, on reboot the disks disappear (they don't just stop
> working; they are not detected by Linux at all), which makes me think I'm
> hitting some kernel limit.
>
> At this point I'm going to cut my losses, give up, and use the smaller,
> slightly more powerful 30x drive systems I have (with 256 GB of memory),
> maybe transplanting the larger disks if I need more capacity.
>
> Peter
>
> On Sat, 29 May 2021, 23:19 Marco Pizzolo,  wrote:
>
>> Thanks David
>> We will investigate the bugs as per your suggestion, and then will look
>> to test with the custom image.
>>
>> Appreciate it.
>>
>> On Sat, May 29, 2021, 4:11 PM David Orman  wrote:
>>
>>> You may be running into the same issue we ran into (make sure to read
>>> the first issue, there's a few mingled in there), for which we
>>> submitted a patch:
>>>
>>> https://tracker.ceph.com/issues/50526
>>> https://github.com/alfredodeza/remoto/issues/62
>>>
>>> If you're brave (YMMV, test first non-prod), we pushed an image with
>>> the issue we encountered fixed as per above here:
>>> https://hub.docker.com/repository/docker/ormandj/ceph/tags?page=1 . We
>>> 'upgraded' to this when we encountered the mgr hanging on us after
>>> updating ceph to v16 and experiencing this issue using: "ceph orch
>>> upgrade start --image docker.io/ormandj/ceph:v16.2.3-mgrfix". I've not
>>> tried to bootstrap a new cluster with a custom image, and I don't know
>>> when 16.2.4 will be released with this change (hopefully) integrated
>>> as remoto accepted the patch upstream.
>>>
>>> I'm not sure if this is your exact issue, see the bug reports and see
>>> if you see the lock/the behavior matches, if so - then it may help you
>>> out. The only change in that image is that patch to remoto being
>>> overlaid on the default 16.2.3 image.
>>>
>>> On Fri, May 28, 2021 at 1:15 PM Marco Pizzolo 
>>> wrote:
>>> >
>>> > Peter,
>>> >
>>> > We're seeing the same issues as you are.  We have 2 new hosts Intel(R)
>>> > Xeon(R) Gold 6248R CPU @ 3.00GHz w/ 48 cores, 384GB RAM, and 60x 10TB
>>> SED
>>> > drives and we have tried both 15.2.13 and 16.2.4
>>> >
>>> > Cephadm does NOT properly deploy and activate OSDs on Ubuntu 20.04.2
>>> with
>>> > Docker.
>>> >
>>> > Seems to be a bug in Cephadm and a product regression, as we have 4
>>> near
>>> > identical nodes on Centos running Nautilus (240 x 10TB SED drives) and
>>> had
>>> > no problems.
>>> >
>>> > FWIW we had no luck yet with 

[ceph-users] Re: The always welcomed large omap

2021-05-31 Thread Matt Vandermeulen

All the index data will be in OMAP, which you can see a listing of with

`ceph osd df tree`

Do you have large buckets (many, many objects in a single bucket) with 
few shards?  You may have to reshard one (or some) of your buckets.  
It'll take some reading if you're using multisite, in order to 
coordinate it (though I'm unfamiliar with how it works with multisite in 
Octopus).


On 2021-05-31 02:25, Szabo, Istvan (Agoda) wrote:

Hi,

Any way to clean up large-omap in the index pool?
PG deep_scrub didn't help.
I know how to clean in the log pool, but no idea in the index pool :/
It's an octopus deployment 15.2.10.

Thank you




[ceph-users] Re: [Suspicious newsletter] Bucket creation on RGW Multisite env.

2021-05-31 Thread Szabo, Istvan (Agoda)
Yeah, this would be interesting for me as well.

Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---

-Original Message-
From: mhnx  
Sent: Monday, May 31, 2021 4:33 PM
To: Szabo, Istvan (Agoda) 
Cc: Ceph Users 
Subject: Re: [Suspicious newsletter] [ceph-users] Bucket creation on RGW 
Multisite env.

Yes you're right. I have a Global sync rule in the zonegroup:
"sync_from_all": "true",
"sync_from": [],
"redirect_zone": ""

If I need to stop/start the sync after creation I use the command:
radosgw-admin bucket sync enable/disable --bucket=$newbucket

I developed this, but clients can create buckets, and I have no control over
a bucket if it's created outside of my program.

Because of that I'm looking for something like:
1- Do not sync buckets unless enabled by me.
2- Sync all the other "sync started" buckets.

On Mon, 31 May 2021 at 12:11, Szabo, Istvan (Agoda)  wrote:
>
> The bucket is created, but if no sync rule is set, the data will not be synced across.
>
> Istvan Szabo
> Senior Infrastructure Engineer
> ---
> Agoda Services Co., Ltd.
> e: istvan.sz...@agoda.com
> ---
>
> -Original Message-
> From: mhnx 
> Sent: Monday, May 31, 2021 4:07 PM
> To: Ceph Users 
> Subject: [Suspicious newsletter] [ceph-users] Bucket creation on RGW 
> Multisite env.
>
> Hello.
>
> I have a multisite RGW environment.
> When I create a new bucket, the bucket is directly created on master and 
> secondary.
> If I don't want to sync a bucket, I need to stop sync after creation.
> Is there any global option like "Do not sync automatically, only start it if
> I want to"?


[ceph-users] Re: [Suspicious newsletter] Bucket creation on RGW Multisite env.

2021-05-31 Thread Szabo, Istvan (Agoda)
The bucket is created, but if no sync rule is set, the data will not be synced across.

Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---

-Original Message-
From: mhnx 
Sent: Monday, May 31, 2021 4:07 PM
To: Ceph Users 
Subject: [Suspicious newsletter] [ceph-users] Bucket creation on RGW Multisite 
env.

Hello.

I have a multisite RGW environment.
When I create a new bucket, the bucket is directly created on master and 
secondary.
If I don't want to sync a bucket, I need to stop sync after creation.
Is there any global option like "Do not sync automatically, only start it if I want to"?


[ceph-users] Re: [Suspicious newsletter] Bucket creation on RGW Multisite env.

2021-05-31 Thread Soumya Koduri

On 5/31/21 3:02 PM, mhnx wrote:

Yes you're right. I have a Global sync rule in the zonegroup:
 "sync_from_all": "true",
 "sync_from": [],
 "redirect_zone": ""

If I need to stop/start the sync after creation I use the command:
radosgw-admin bucket sync enable/disable --bucket=$newbucket

I developed this, but clients can create buckets, and I have no control over
a bucket if it's created outside of my program.

Because of that I'm looking for something like:
1- Do not sync buckets unless enabled by me.
2- Sync all the other "sync started" buckets.



This can be achieved using the multisite sync policy [1]. Create a group
policy which allows sync (but does not enable it) across all the zones &
buckets, and then create another policy at the bucket level to enable
sync for only those particular buckets. Refer to Example 3 on the same page
[2].
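
Condensed from that page, the commands look roughly like this (a sketch
only; "buck" is a placeholder bucket name):

# radosgw-admin sync group create --group-id=group1 --status=allowed
# radosgw-admin sync group flow create --group-id=group1 \
      --flow-id=flow-mirror --flow-type=symmetrical --zones=*
# radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 \
      --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'
# radosgw-admin period update --commit

Then, for each bucket that should actually sync:

# radosgw-admin sync group create --bucket=buck --group-id=buck-default \
      --status=enabled
# radosgw-admin sync group pipe create --bucket=buck --group-id=buck-default \
      --pipe-id=pipe1 --source-zones='*' --dest-zones='*'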



Thanks,

Soumya


[1] https://docs.ceph.com/en/latest/radosgw/multisite-sync-policy/

[2] 
https://docs.ceph.com/en/latest/radosgw/multisite-sync-policy/#example-3-mirror-a-specific-bucket

On Mon, 31 May 2021 at 12:11, Szabo, Istvan (Agoda)  wrote:

The bucket is created, but if no sync rule is set, the data will not be synced across.

Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---

-Original Message-
From: mhnx 
Sent: Monday, May 31, 2021 4:07 PM
To: Ceph Users 
Subject: [Suspicious newsletter] [ceph-users] Bucket creation on RGW Multisite 
env.

Hello.

I have a multisite RGW environment.
When I create a new bucket, the bucket is directly created on master and 
secondary.
If I don't want to sync a bucket, I need to stop sync after creation.
Is there any global option like "Do not sync automatically, only start it if I want to"?


[ceph-users] The always welcomed large omap

2021-05-31 Thread Szabo, Istvan (Agoda)
Hi,

Any way to clean up large-omap in the index pool?
PG deep_scrub didn't help.
I know how to clean in the log pool, but no idea in the index pool :/
It's an octopus deployment 15.2.10.

Thank you




[ceph-users] Re: [Suspicious newsletter] Bucket creation on RGW Multisite env.

2021-05-31 Thread mhnx
Yes you're right. I have a Global sync rule in the zonegroup:
"sync_from_all": "true",
"sync_from": [],
"redirect_zone": ""

If I need to stop/start the sync after creation I use the command:
radosgw-admin bucket sync enable/disable --bucket=$newbucket

I developed this, but clients can create buckets, and I have no control over
a bucket if it's created outside of my program.

Because of that I'm looking for something like:
1- Do not sync buckets unless enabled by me.
2- Sync all the other "sync started" buckets.

On Mon, 31 May 2021 at 12:11, Szabo, Istvan (Agoda)  wrote:
>
> The bucket is created, but if no sync rule is set, the data will not be synced across.
>
> Istvan Szabo
> Senior Infrastructure Engineer
> ---
> Agoda Services Co., Ltd.
> e: istvan.sz...@agoda.com
> ---
>
> -Original Message-
> From: mhnx 
> Sent: Monday, May 31, 2021 4:07 PM
> To: Ceph Users 
> Subject: [Suspicious newsletter] [ceph-users] Bucket creation on RGW 
> Multisite env.
>
> Hello.
>
> I have a multisite RGW environment.
> When I create a new bucket, the bucket is directly created on master and 
> secondary.
> If I don't want to sync a bucket, I need to stop sync after creation.
> Is there any global option like "Do not sync automatically, only start it if
> I want to"?


[ceph-users] Bucket creation on RGW Multisite env.

2021-05-31 Thread mhnx
Hello.

I have a multisite RGW environment.
When I create a new bucket, the bucket is directly created on master
and secondary.
If I don't want to sync a bucket, I need to stop sync after creation.
Is there any global option like "Do not sync automatically, only start it if I want to"?


[ceph-users] Re: Nautilus CentOS-7 rpm dependencies

2021-05-31 Thread Wolfgang Lendl

Hi,

CentOS 7 is only partially supported for Octopus:

"Note that the dashboard, prometheus, and restful manager modules will 
not work on the CentOS 7 build due to Python 3 module dependencies that 
are missing in CentOS 7."


https://docs.ceph.com/en/latest/releases/octopus/


cheers
wolfgang

On 31.05.2021 10:29, Andreas Haupt wrote:

Dear all,

ceph-mgr-dashboard-15.2.13-0.el7.noarch contains three rpm dependencies
that cannot be resolved here (not part of CentOS & EPEL 7):

python3-cherrypy
python3-routes
python3-jwt

Does anybody know where they are expected to come from?

Thanks,
Andreas



--
Wolfgang Lendl
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 / Ebene 00
A-1090 Wien
Tel: +43 1 40160-21231


[ceph-users] Re: Nautilus CentOS-7 rpm dependencies

2021-05-31 Thread Fabrice Bacchella
I had a similar problem with Pacific when using the build from CentOS; I
switched to the rpms directly from Ceph and it went fine.
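
For reference, pointing yum at the upstream repos means a repo file roughly
like this (adapted from the Ceph "get packages" docs; swap the release name
as needed):

[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-octopus/el7/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-octopus/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc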

> On 31 May 2021 at 10:29, Andreas Haupt  wrote:
> 
> Dear all,
> 
> ceph-mgr-dashboard-15.2.13-0.el7.noarch contains three rpm dependencies
> that cannot be resolved here (not part of CentOS & EPEL 7):
> 
> python3-cherrypy
> python3-routes
> python3-jwt
> 
> Does anybody know where they are expected to come from?
> 
> Thanks,
> Andreas
> -- 
> | Andreas Haupt    | E-Mail: andreas.ha...@desy.de
> | DESY Zeuthen     | WWW: http://www-zeuthen.desy.de/~ahaupt
> | Platanenallee 6  | Phone: +49/33762/7-7359
> | D-15738 Zeuthen  | Fax: +49/33762/7-7216
> 


[ceph-users] Nautilus CentOS-7 rpm dependencies

2021-05-31 Thread Andreas Haupt
Dear all,

ceph-mgr-dashboard-15.2.13-0.el7.noarch contains three rpm dependencies
that cannot be resolved here (not part of CentOS & EPEL 7):

python3-cherrypy
python3-routes
python3-jwt

Does anybody know where they are expected to come from?

Thanks,
Andreas
-- 
| Andreas Haupt    | E-Mail: andreas.ha...@desy.de
| DESY Zeuthen     | WWW: http://www-zeuthen.desy.de/~ahaupt
| Platanenallee 6  | Phone: +49/33762/7-7359
| D-15738 Zeuthen  | Fax: +49/33762/7-7216


