Found it:
for bucket in $(radosgw-admin metadata list bucket.instance | jq -r '.[]'); do
    if radosgw-admin metadata get --metadata-key=bucket.instance:$bucket \
        | grep --silent website_conf; then
        echo "$bucket"
    fi
done
On Thu, Sep 16, 2021 at 09:49, Boris Behrens wrote:
Hello again,
as my tests with some fresh clusters answered most of my config questions, I
now wanted to start with our production cluster. The basic setup looks
good, but the sync does not work:
[root@3cecef5afb05 ~]# radosgw-admin sync status
realm
Hi people,
is there a way to find buckets that use the s3website feature?
Cheers
Boris
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Ok, I think I found the basic problem.
I used to talk to the endpoint that is also the domain for the s3websites.
After switching the domains around everything worked fine. :partyemote:
I have written down how I think things work together (posted here:
https://pastebin.com/6Gj9Q5hJ),
. 2021 um 11:47 Uhr schrieb Boris Behrens :
> Dear ceph community,
>
> I am still stuck with the multi zonegroup configuration. I did these steps:
> 1. Create realm (company), zonegroup(eu), zone(eu-central-1), sync user on
> the site fra1
> 2. Pulled the realm and the period i
>
> -Original Message-
> From: Boris Behrens
> Sent: Monday, September 13, 2021 4:48 PM
> To: ceph-users@ceph.io
> Subject: [Suspicious newsletter] [ceph-users] Problem with multi zonegroup
> configuration
>
Dear ceph community,
I am still stuck with the multi zonegroup configuration. I did these steps:
1. Create realm (company), zonegroup(eu), zone(eu-central-1), sync user on
the site fra1
2. Pulled the realm and the period in fra2
3. Created the zonegroup (eu-central-2), zone (eu-central-2), modified
m, multiple dc BUT no sync?
>
> Istvan Szabo
> Senior Infrastructure Engineer
> ---
> Agoda Services Co., Ltd.
> e: istvan.sz...@agoda.com
> ---
>
> -Original Message-
>
an empty response (because there are
no buckets to list).
I get this against both radosgw locations.
I have an nginx between the internet and radosgw that will just
proxy-pass every request and set the Host and X-Forwarded-For headers.
On Fri, Jul 30, 2021 at 16:46, Boris Behrens
create and attach an empty block device, and they will certainly not
check if the partitions are aligned correctly.
Cheers
Boris
On Fri, Aug 13, 2021 at 08:44, Janne Johansson <icepic...@gmail.com> wrote:
> On Thu, Aug 12, 2021 at 17:04, Boris Behrens wrote:
> > Hi ev
Hi everybody,
we just stumbled over a problem where the rbd image does not shrink when
files are removed.
This only happens when the rbd image is partitioned.
* We tested it with centos8/ubuntu20.04 with ext4 and a gpt partition table
(/boot and /)
* the kvm device is virtio-scsi-pci with krbd
, where I sync the actual zone data,
but have a global namespace where all buckets and users are unique.
Cheers
Boris
--
Die Selbsthilfegruppe "UTF-8-Probleme" trifft sich diesmal abweichend im
groüen Saal.
-4e1b-af2e-ac4454f24c9d (eu)
zone ff7a8b0c-07e6-463a-861b-78f0adeba8ad (eu-central-1)
metadata sync no sync (zone is master)
2021-07-27 11:24:24.645 7fe30fc07840 0 data sync zone:07cdb1c7 ERROR:
failed to fetch datalog info
data sync source: 07cdb1c7-8c8e-4a23-ab1e-fcfb88982f38 (eu-
.
We are currently running 14.2.21 across the board.
Cheers
Boris
Lopez
:
>
> Thanks for further clarification Dan.
>
> Boris, if you have a test/QA environment on the same code as production, you
> can confirm if the problem is as above. Do NOT do this in production - if the
> problem exists it might result in losing production data.
>
> 1
Good morning everybody,
we've dug further into it but still don't know how this could happen.
What we ruled out for now:
* Orphan objects cleanup process.
** There is only one bucket with missing data (I checked all other
buckets yesterday)
** The "keep these files" list is generated by
ex shard much larger than others - ceph-users -
> lists.ceph.io"
> https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/MO7IHRGJ7TGPKT3GXCKMFLR674G3YGUX/
>
> On Mon, 19 Jul 2021, 18:00 Boris Behrens, wrote:
>>
>> Hi Dan,
>> how do I find out if a buc
Hi Dan,
how do I find out if a bucket got versioning enabled?
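One client-side way to check (a sketch; endpoint and bucket name are placeholders) is the standard S3 versioning call, which RGW implements:

```shell
# Returns {"Status": "Enabled"} or {"Status": "Suspended"}; an empty
# response means versioning was never turned on for the bucket.
aws --endpoint-url https://s3.example.com s3api get-bucket-versioning --bucket mybucket
```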
On Mon, Jul 19, 2021 at 17:00, Dan van der Ster wrote:
>
> Hi Boris,
>
> Does the bucket have object versioning enabled?
> We saw something like this once a while ago: `s3cmd ls` showed an
> entry for an o
m to have a filename
(_shadow_.Sxj4BEhZS6PZg1HhsvSeqJM4Y0wRCto_4)
It doesn't seem to be a careless "rados -p POOL rm OBJECT" because
then it should be still in the "radosgw-admin bucket radoslist
--bucket BUCKET" output. (just tested that on a testbucket).
On Fri, Jul 16, 2021 at
-78f0adeba8ad.83821626.6927__shadow_.yscyiu0DpWRh_Agsnii3635ZNnrO16x_5
What are those files? o0
On Sat, Jul 17, 2021 at 22:54, Boris Behrens wrote:
>
> Hi k,
>
> all systems run 14.2.21
>
> Cheers
> Boris
>
> On Sat, Jul 17, 2021 at 22:12, Konstantin Shalyg
Hi k,
all systems run 14.2.21
Cheers
Boris
On Sat, Jul 17, 2021 at 22:12, Konstantin Shalygin wrote:
>
> Boris, what is your Ceph version?
>
>
> k
>
> On 17 Jul 2021, at 11:04, Boris Behrens wrote:
>
> I really need help with this issue.
>
>
Is it possible to not complete a file upload so the actual file is not
there, but it is listed in the bucket index?
I really need help with this issue.
On Fri, Jul 16, 2021 at 19:35, Boris Behrens wrote:
>
> exactly.
> rados rm wouldn't remove it from the "radosgw-admin bu
On Fri, Jul 16, 2021 at 19:35, Boris Behrens wrote:
>
> exactly.
> rados rm wouldn't remove it from the "radosgw-admin bucket radoslist"
> list, correct?
>
> our usage statistics are not really usable because it fluctuates in a
> 200tb range.
>
&
build the "bi" from the pool level (rados ls), so I'm not sure the
> bucket index is "that" important, knowing that you can rebuild it
> from the pool. (?)
>
>
>
>
> On 7/16/21 1:47 PM, Boris Behrens wrote:
> >
>
Hi,
is there a difference between those two?
I always thought that radosgw-admin radoslist only shows the objects
that are somehow associated with a bucket. But if the bucket index is
broken, would this be reflected in the output?
sage stats that can confirm that the data has been
> deleted and/or are still there. (at the pool level maybe?)
> Hoping for you that it's just a data/index/shard mismatch...
>
>
> On 7/16/21 12:44 PM, Boris Behrens wrote:
> >
> > Hi Jean-Sebastien,
> >
Hi Jean-Sebastien,
I have the exact opposite. Files can be listed (they are in the bucket
index), but are not available anymore.
On Fri, Jul 16, 2021 at 18:41, Jean-Sebastien Landry wrote:
>
> Hi Boris, I don't have any answer for you, but I have situation similar
> to yours.
Is there a way to remove a file from a bucket without removing it from
the bucketindex?
On Fri, Jul 16, 2021 at 17:36, Boris Behrens wrote:
>
> Hi everybody,
> a customer mentioned that he got problems in accessing his rgw data.
> I checked the bucket index and the file should
might be somewhere else in
ceph?" how can this happen?
We do occasional orphan object cleanups, but this does not take the
bucket index into account.
It is a large bucket with 2.1m files in it and with 34 shards.
Cheers and happy weekend
Boris
On Thu, May 27, 2021 at 07:47, Janne Johansson wrote:
>
> On Wed, May 26, 2021 at 16:33, Boris Behrens wrote:
> >
> > Hi Janne,
> > do you know if there can be data duplication which leads to orphan objects?
> >
> > I am currently huntin stra
Johansson :
>
> I guess normal round robin should work out fine too, regardless of if
> there are few clients making several separate connections or many
> clients making a few.
>
> On Wed, May 26, 2021 at 12:32, Boris Behrens wrote:
> >
> > Hello everyone,
> >
> >
Hello everyone,
is there any best practice on the balance mode when I have a HAproxy
in front of my rgw_frontend?
currently we use "balance leastconn".
Cheers
Boris
The more files I delete, the more space is used.
How can this be?
On Tue, May 25, 2021 at 14:41, Boris Behrens wrote:
>
> On Tue, May 25, 2021 at 09:23, Boris Behrens wrote:
> >
> > Hi,
> > I am still searching for a reason why these two values diff
On Tue, May 25, 2021 at 09:23, Boris Behrens wrote:
>
> Hi,
> I am still searching for a reason why these two values differ so much.
>
> I am currently deleting a giant amount of orphan objects (43mio, most
> of them under 64kb), but the difference gets larger
On Tue, May 25, 2021 at 09:39, Konstantin Shalygin wrote:
>
> Hi,
>
> On 25 May 2021, at 10:23, Boris Behrens wrote:
>
> I am still searching for a reason why these two values differ so much.
>
> I am currently deleting a giant amount of orphan objects (43mio, m
Hi,
I am still searching for a reason why these two values differ so much.
I am currently deleting a giant amount of orphan objects (43mio, most
of them under 64kb), but the difference gets larger instead of smaller.
This was the state two days ago:
>
> [root@s3db1 ~]# radosgw-admin bucket stats
can I
delete them fast?
Cheers
Boris
Reading through the bugtracker: https://tracker.ceph.com/issues/50293
Thanks for your patience.
On Thu, May 20, 2021 at 15:10, Boris Behrens wrote:
> I try to bump it once more, because it makes finding orphan objects nearly
> impossible.
>
> On Tue, May 11, 2021 at 13:03
I try to bump it once more, because it makes finding orphan objects nearly
impossible.
On Tue, May 11, 2021 at 13:03, Boris Behrens wrote:
> Hi together,
>
> I still search for orphan objects and came across a strange bug:
> There is a huge multipart upload happening
for your support Igor <3
On Tue, May 18, 2021 at 09:54, Boris Behrens wrote:
> One more question:
> How do I get rid of the bluestore spillover message?
> osd.68 spilled over 64 KiB metadata from 'db' device (13 GiB used of
> 50 GiB) to slow device
>
> I tried an
One more question:
How do I get rid of the bluestore spillover message?
osd.68 spilled over 64 KiB metadata from 'db' device (13 GiB used of
50 GiB) to slow device
I tried an offline compaction, which did not help.
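For reference, the offline compaction attempt described here can be run like this (a sketch; the OSD must be stopped first, and the id matches the one from the thread):

```shell
systemctl stop ceph-osd@68
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 compact
systemctl start ceph-osd@68
```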
On Mon, May 17, 2021 at 15:56, Boris Behrens wrote:
> I h
>
> Thanks,
>
> Igor
>
> On 5/17/2021 3:47 PM, Boris Behrens wrote:
> > The FSCK looks good:
> >
> > [root@s3db10 export-bluefs2]# ceph-bluestore-tool --path
> > /var/lib/ceph/osd/ceph-68 fsck
> > fsck success
> >
> > On Mon, May 17, 2021
See my last mail :)
On Mon, May 17, 2021 at 14:52, Igor Fedotov wrote:
> Would you try fsck without standalone DB?
>
> On 5/17/2021 3:39 PM, Boris Behrens wrote:
> > Here is the new output. I kept both for now.
> >
> > [root@s3db10 export-bluefs2]# ls *
> &g
The FSCK looks good:
[root@s3db10 export-bluefs2]# ceph-bluestore-tool --path
/var/lib/ceph/osd/ceph-68 fsck
fsck success
On Mon, May 17, 2021 at 14:39, Boris Behrens wrote:
> Here is the new output. I kept both for now.
>
> [root@s3db10 export-bluefs2]# ls *
> db:
> 018
.sst 020006.sst
020041.sst 020064.sst 020096.sst 020114.sst
db.slow:
db.wal:
020085.log 020088.log
[root@s3db10 export-bluefs2]# du -hs
12G .
[root@s3db10 export-bluefs2]# cat db/CURRENT
MANIFEST-020084
On Mon, May 17, 2021 at 14:28, Igor Fedotov wrote:
> On 5/17/2021 2:53 PM, Bo
lueFS directory
> structure - multiple .sst files, CURRENT and IDENTITY files etc?
>
> If so then please check and share the content of /db/CURRENT
> file.
>
>
> Thanks,
>
> Igor
>
> On 5/17/2021 1:32 PM, Boris Behrens wrote:
> > Hi Igor,
> > I posted it on paste
Hi Igor,
I posted it on pastebin: https://pastebin.com/Ze9EuCMD
Cheers
Boris
On Mon, May 17, 2021 at 12:22, Igor Fedotov wrote:
> Hi Boris,
>
> could you please share full OSD startup log and file listing for
> '/var/lib/ceph/osd/ceph-68'?
>
>
> Thanks,
>
> Ig
Hi,
sorry for replying to this old thread:
I tried to add a block.db to an OSD but now the OSD can not start with the
error:
Mai 17 09:50:38 s3db10.fra2.gridscale.it ceph-osd[26038]: -7> 2021-05-17
09:50:38.362 7fc48ec94a80 -1 rocksdb: Corruption: CURRENT file does not end
with newline
Mai 17
It actually WAS the number of watchers... narf..
This is so embarrassing.. Thanks a lot for all your input.
On Tue, May 11, 2021 at 13:54, Boris Behrens wrote:
> I tried to debug it with --debug-ms=1.
> Maybe someone could help me to wrap my head around it?
> https://pastebin.com
I tried to debug it with --debug-ms=1.
Maybe someone could help me to wrap my head around it?
https://pastebin.com/LD9qrm3x
On Tue, May 11, 2021 at 11:17, Boris Behrens wrote:
> Good call. I just restarted the whole cluster, but the problem still
> persists.
> I do
Hi together,
I still search for orphan objects and came across a strange bug:
There is a huge multipart upload happening (around 4TB), and listing the
rados objects in the bucket loops over the multipart upload.
>>
>> Kind regards,
>> Thomas
>>
>>
>> On May 11, 2021 at 08:47:06 CEST, Boris Behrens wrote:
>> >Hi Amit,
>> >
>> >I just pinged the mons from every system and they are all available.
>> >
>> >Am Mo., 10. M
he problem was gone.
> I have no good way to debug the problem since it never occured again after
> we restarted the OSDs.
>
> Kind regards,
> Thomas
>
>
> On May 11, 2021 at 08:47:06 CEST, Boris Behrens wrote:
> >Hi Amit,
> >
> >I just pinged the mons from ev
all nodes are successfully ping.
>
>
> -AmitG
>
>
> On Tue, 11 May 2021 at 12:12 AM, Boris Behrens wrote:
>
>> Hi guys,
>>
>> does someone got any idea?
>>
>> On Wed, May 5, 2021 at 16:16, Boris Behrens wrote:
>>
>> > Hi,
>&g
Hi guys,
does someone got any idea?
On Wed, May 5, 2021 at 16:16, Boris Behrens wrote:
> Hi,
> for a couple of days now we have been experiencing strange slowness on some
> radosgw-admin operations.
> What is the best way to debug this?
>
> For example creating a user takes over
ral-1-s3db3)
* We also added dedicated rgw daemons for garbage collection, because the
current ones were not able to keep up.
* So basically ceph status went from "rgw: 1 daemon active (eu-central-1)"
to "rgw: 14 daemons active (eu-central-1-s3db1, eu-central-1-s3
Hi,
I have a lot of multipart uploads that look like they never finished. Some
of them date back to 2019.
Is there a way to clean them up when they didn't finish in 28 days?
I know I can implement a LC policy per bucket, but how do I implement it
cluster wide?
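As far as I know there is no cluster-wide setting for this in RGW; the usual route is a per-bucket S3 lifecycle rule that aborts stale multipart uploads, scripted over all buckets. A sketch of such a policy (the 28-day window mirrors the question above):

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>abort-stale-multipart</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <AbortIncompleteMultipartUpload>
      <DaysAfterInitiation>28</DaysAfterInitiation>
    </AbortIncompleteMultipartUpload>
  </Rule>
</LifecycleConfiguration>
```

It can be applied per bucket with e.g. `s3cmd setlifecycle rule.xml s3://BUCKET`.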
Cheers
Boris
Boris Behrens wrote:
> Hi Anthony,
>
> yes we are using replication, the lost space is calculated before it's
> replicated.
> RAW STORAGE:
> CLASS SIZEAVAIL USEDRAW USED %RAW USED
> hdd 1.1 PiB 191 TiB 968 TiB
is mixed, but most of the data is in huge files.
We store our platforms RBD snapshots in it.
Cheers
Boris
On Tue, Apr 27, 2021 at 06:49, Anthony D'Atri <anthony.da...@gmail.com> wrote:
> Are you using Replication? EC? How many copies / which profile?
> On which Ceph
Hi,
we still have the problem that our rgw eats more diskspace than it should.
Summing up the "size_kb_actual" of all buckets shows only half of the used
diskspace.
There are 312TiB stored according to "ceph df" but we only need around 158TB.
I've already written to this ML about the problem, but
On Fri, Apr 23, 2021 at 12:16, Ilya Dryomov wrote:
> On Fri, Apr 23, 2021 at 12:03 PM Boris Behrens wrote:
> >
> >
> >
> > On Fri, Apr 23, 2021 at 11:52, Ilya Dryomov <idryo...@gmail.com> wrote:
> >>
> >>
> >> This
On Fri, Apr 23, 2021 at 11:52, Ilya Dryomov wrote:
>
> This snippet confirms my suspicion. Unfortunately without a verbose
> log from that VM from three days ago (i.e. when it got into this state)
> it's hard to tell what exactly went wrong.
>
> The problem is that the VM doesn't consider
ge": {
"search_stage": "comparing",
"shard": 0,
"marker": ""
}
}
}
},
On Fri, Apr 16, 2021 at 10:57, Boris Behrens wrote:
> Could this also be failed multipart
On Thu, Apr 22, 2021 at 17:27, Ilya Dryomov wrote:
> On Thu, Apr 22, 2021 at 5:08 PM Boris Behrens wrote:
> >
> >
> >
> > On Thu, Apr 22, 2021 at 16:43, Ilya Dryomov <idryo...@gmail.com> wrote:
> >>
> >> On Thu, Apr 22,
On Thu, Apr 22, 2021 at 16:43, Ilya Dryomov wrote:
> On Thu, Apr 22, 2021 at 4:20 PM Boris Behrens wrote:
> >
> > Hi,
> >
> > I have a customer VM that is running fine, but I can not make snapshots
> > anymore.
> > rbd snap create rbd/IMAGE@test-bb
Hi,
I have a customer VM that is running fine, but I can not make snapshots
anymore.
rbd snap create rbd/IMAGE@test-bb-1
just hangs forever.
When I checked the status with
rbd status rbd/IMAGE
it shows one watcher, the cpu node where the VM is running.
What can I do to investigate further,
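One way to dig further (a sketch; pool and image names are placeholders): list the watchers on the image's header object directly, since a stale or extra watcher can block snapshot creation:

```shell
# Find the image id from block_name_prefix (looks like rbd_data.<id>).
rbd info rbd/IMAGE | grep block_name_prefix
# Then list watchers on the matching header object:
rados -p rbd listwatchers rbd_header.<id>
```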
Hi Istvan,
both of them require bucket access, correct?
Is there a way to add the LC policy globally?
Cheers
Boris
On Mon, Apr 19, 2021 at 11:58, Szabo, Istvan (Agoda) <istvan.sz...@agoda.com> wrote:
> Hi,
>
> You have 2 ways:
>
> First is using s3vrowser app a
Hi,
is there a way to remove multipart uploads that are older than X days?
It doesn't need to be built into ceph or fully automated. Just
something I don't need to build on my own.
I currently try to debug a problem where ceph reports a lot more used space
than it actually requires (
Could this also be failed multipart uploads?
On Thu, Apr 15, 2021 at 18:23, Boris Behrens wrote:
> Cheers,
>
> [root@s3db1 ~]# ceph daemon osd.23 perf dump | grep numpg
> "numpg": 187,
> "numpg_primary": 64,
> "nump
Cheers,
[root@s3db1 ~]# ceph daemon osd.23 perf dump | grep numpg
"numpg": 187,
"numpg_primary": 64,
"numpg_replica": 121,
"numpg_stray": 2,
"numpg_removing": 0,
On Thu, Apr 15, 2021 at 18:18,
ues ,bluestore_min_alloc_size_hdd &
> bluestore_min_alloc_size_ssd. If you are using hdd disks then
> bluestore_min_alloc_size_hdd are applicable.
>
> On Thu, Apr 15, 2021 at 8:06 PM Boris Behrens wrote:
>
>> So, I need to live with it? A value of zero leads to use the
l are actually bucket object size but on OSD level the
> bluestore_min_alloc_size default 64KB and SSD are 16KB
>
>
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/administration_guide/osd-bluestore
>
> -AmitG
>
> On Thu, Apr 15, 2021 at 7:29 PM Boris
Hi,
maybe it is just a problem in my understanding, but it looks like our s3
requires twice the space it should use.
I ran "radosgw-admin bucket stats", and added all "size_kb_actual" values
up and divided to TB (/1024/1024/1024).
The resulting space is 135.1636733 TB. When I triple it because
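The summing step can be scripted rather than done by hand; a sketch with jq, where the sample JSON stands in for real `radosgw-admin bucket stats` output and the `rgw.main` usage category is an assumption (adjust to your buckets):

```shell
# Stand-in for: radosgw-admin bucket stats > /tmp/bucket-stats.json
cat > /tmp/bucket-stats.json <<'EOF'
[
  {"bucket": "a", "usage": {"rgw.main": {"size_kb_actual": 1048576}}},
  {"bucket": "b", "usage": {"rgw.main": {"size_kb_actual": 2097152}}}
]
EOF
# Sum size_kb_actual over all buckets and convert KiB -> GiB (prints 3 here).
jq '[.[].usage["rgw.main"].size_kb_actual // 0] | add / 1024 / 1024' /tmp/bucket-stats.json
```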
er the 90% default limit.
>
> -- dan
>
> On Tue, Mar 30, 2021 at 3:18 PM Boris Behrens wrote:
> >
> > The output from ceph osd pool ls detail tell me nothing, except that the
> pgp_num is not where it should be. Can you help me to read the output? How
> do I estimate
itting should take. This will help:
>
> ceph status
> ceph osd pool ls detail
>
> -- dan
>
> On Tue, Mar 30, 2021 at 3:00 PM Boris Behrens wrote:
> >
> > I would think due to splitting, because the balancer doesn't refuse
> its work, because too many misplace
> On 3/30/21 12:55 PM, Boris Behrens wrote:
> > I just moved one PG away from the OSD, but the disk space will not get
> freed.
>
> How did you move? I would suggest you use upmap:
>
> ceph osd pg-upmap-items
> Invalid command: missing required parameter pgid()
> osd pg
any other trick.
>
> -- dan
>
> On Tue, Mar 30, 2021 at 2:07 PM Boris Behrens wrote:
> >
> > One week later the ceph is still balancing.
> > What worries me like hell is the %USE on a lot of those OSDs. Does ceph
> > resolve this on its own? We are currently down to
iB 6.7 TiB 6.7 TiB 322 MiB 16 GiB 548
GiB 92.64 1.18 121 up osd.66
46 hdd 7.27739 1.0 7.3 TiB 6.8 TiB 6.7 TiB 316 MiB 16 GiB 536
GiB 92.81 1.18 119 up osd.46
Am Di., 23. März 2021 um 19:59 Uhr schrieb Boris Behrens :
> Good point. Thanks for the hi
I just moved one PG away from the OSD, but the disk space will not get freed.
Do I need to do something to clean obsolete objects from the OSD?
On Tue, Mar 30, 2021 at 11:47, Boris Behrens wrote:
> Hi,
> I have a couple OSDs that currently get a lot of data, and are running
> t
Hi,
I have a couple OSDs that currently get a lot of data, and are running
towards 95% fillrate.
I would like to forcefully remap some PGs (they are around 100GB) to more
empty OSDs and drop them from the full OSDs. I know this would lead to
degraded objects, but I am not sure how long the
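The pg-upmap-items route mentioned later in the thread fits this exact case; a sketch (PG and OSD ids are made up, pick real ones from `ceph pg ls-by-osd` on the full OSD). An upmap exception only makes the PG misplaced, not degraded:

```shell
# Map the replica of PG 11.2f that sits on osd.46 over to osd.12.
ceph osd pg-upmap-items 11.2f 46 12
# Drop the exception again once the cluster has room:
ceph osd rm-pg-upmap-items 11.2f
```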
out.
On Wed, Mar 24, 2021 at 16:31, Janne Johansson <icepic...@gmail.com> wrote:
> On Wed, Mar 24, 2021 at 14:55, Boris Behrens wrote:
> >
> > Oh cool. Thanks :)
> >
> > How do I find the correct weight after it is added?
> > For the current process I j
Oh cool. Thanks :)
How do I find the correct weight after it is added?
For the current process I just check the other OSDs but this might be a
question that someone will raise.
I could imagine that I need to adjust the ceph-gentle-reweight's target
weight to the correct one.
On Wed, Mar 24
Hi people,
I currently try to add ~30 OSDs to our cluster and wanted to use the
gentle-reweight script for that.
I use ceph-volume lvm prepare --data /dev/sdX to create the OSD and want to
start it without weighting it in.
systemctl start ceph-osd@OSD starts the OSD with full weight.
Is this
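One option (a sketch; the option name is real but verify it on your release) is to let new OSDs join at crush weight 0, then raise the weight by hand:

```shell
# In ceph.conf on the OSD hosts, before preparing the disks:
#   [osd]
#   osd_crush_initial_weight = 0
systemctl start ceph-osd@123          # osd id is a placeholder
ceph osd crush reweight osd.123 1.0   # step toward the disk's final weight
```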
umulating on the mons and osds --
> this itself will start to use a lot of space, and active+clean is the
> only way to trim the old maps.
>
> -- dan
>
> On Tue, Mar 23, 2021 at 7:05 PM Boris Behrens wrote:
> >
> > So,
> > doing nothing and wait for the ceph to recove
that's the
> case and we can see about changing osd_max_backfills, some weights or
> maybe using the upmap-remapped tool.
>
> -- Dan
>
> On Tue, Mar 23, 2021 at 6:07 PM Boris Behrens wrote:
> >
> > Ok, I should have listened to you :)
> >
> > In the last wee
+backfilling
32 active+remapped+backfill_toofull
io:
client: 27 MiB/s rd, 69 MiB/s wr, 497 op/s rd, 153 op/s wr
recovery: 1.5 GiB/s, 922 objects/s
On Tue, Mar 16, 2021 at 09:34, Boris Behrens wrote:
> Hi Dan,
>
> my EC profile look very "default&q
er.
>
> 2. You can also use another script from that repo to see the PGs per
> OSD normalized to crush weight:
> ceph-scripts/tools/ceph-pg-histogram --normalize --pool=15
>
>This might explain what is going wrong.
>
> Cheers, Dan
>
>
> On Mon, Mar 15, 20
;
d...@vanderster.com>:
> OK thanks. Indeed "prepared 0/10 changes" means it thinks things are
> balanced.
> Could you again share the full ceph osd df tree?
>
> On Mon, Mar 15, 2021 at 2:54 PM Boris Behrens wrote:
> >
> > Hi Dan,
> >
> > I've set the autoscal
er
> /var/log/ceph/ceph-mgr.*.log
>
> -- Dan
>
> On Mon, Mar 15, 2021 at 1:47 PM Boris Behrens wrote:
> >
> > Hi,
> > this unfortunally did not solve my problem. I still have some OSDs that
> fill up to 85%
> >
> > According to the logging, the autoscale
u might need to fail to a new mgr... I'm not sure if the current
> active will read that new config.
>
> .. dan
>
>
> On Sat, Mar 13, 2021, 4:36 PM Boris Behrens wrote:
>
>> Hi,
>>
>> ok thanks. I just changed the value and reweighted everything back t
Hi,
do you know why the OSDs are not starting?
When I had the problem that a start does not work, I tried the 'ceph-volume
lvm activate --all' on the host, which brought the OSDs back up.
But I can't tell you if it is safe to remove the OSD.
Cheers
Boris
On Sun, Mar 14, 2021 at 02:38
the 4TB disks and we needed a lot of storage fast, because of a DC
move. If you have any recommendations I would be happy to hear them.
Cheers
Boris
On Sat, Mar 13, 2021 at 16:20, Dan van der Ster <d...@vanderster.com> wrote:
> Thanks.
>
> Decreasing the max deviation to 2 o
recommend debug_mgr 4/5 so you can see some basic upmap balancer
> logging.
>
> .. Dan
>
>
>
>
>
>
> On Sat, Mar 13, 2021, 3:49 PM Boris Behrens wrote:
>
>> Hello people,
>>
>> I am still struggling with the balancer
>> (https://www.mail-ar
Hello people,
I am still struggling with the balancer
(https://www.mail-archive.com/ceph-users@ceph.io/msg09124.html)
Now I've read some more and might think that I do not have enough PGs.
Currently I have 84 OSDs and 1024 PGs for the main pool (3008 total). I
have the autoscaler enabled, but I
ill only go to 50% (or 4 TB) - so in
> effect wasting 4TB of the 8 TB disk
>
> our cluster & our pool
> All our disks no matter what are 8 TB in size.
>
>
>
>
>
> >>> Boris Behrens 3/11/2021 5:53 AM >>>
> Hi,
> I know this topic seem
Hi,
I know this topic seems to be handled a lot (as far as I can see), but I
reached the end of my google-fu.
* We have OSDs that are near full, but there are also OSDs that are only
loaded with 50%.
* We have 4,8,16 TB rotating disks in the cluster.
* The disks that get packed are 4TB disks and
Hi,
I am in the process of resharding large buckets and to find them I ran
radosgw-admin bucket limit check | grep '"fill_status": "OVER' -B5
and I see that there are two buckets with negative num_objects
"bucket": "ncprod",
"tenant": "",
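Negative counters usually point at inconsistent bucket index stats; a consistency check is one place to start (a sketch; run the plain check first, and note that --check-objects can take a long time on large buckets):

```shell
radosgw-admin bucket check --bucket=ncprod
# If the stats are off, recalculate them:
radosgw-admin bucket check --bucket=ncprod --fix --check-objects
```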
After doing
radosgw-admin period update --commit
it looks like it is gone now.
Sorry for spamming the ML, but I am not denvercoder9 :)
On Wed, Mar 10, 2021 at 08:29, Boris Behrens wrote:
> Ok,
> i changed the value to
> "metadata_heap": "",
> but it
Ok,
i changed the value to
"metadata_heap": "",
but it is still used.
Any ideas how to stop this?
On Wed, Mar 10, 2021 at 08:14, Boris Behrens wrote:
> Found it.
> [root@s3db1 ~]# radosgw-admin zone get --rgw-zone=eu-central-1
> {
> "id&qu
b84a-459b-bce2-bccac338b3ef"
}
On Wed, Mar 10, 2021 at 07:37, Boris Behrens wrote:
> Good morning ceph people,
>
> I have a pool that got a whitespace as name. And I want to know what
> creates the pool.
> I already renamed it, but something recreates the pool.
>
&g