right on target per
> target setting in ceph.conf. Disks and network appear to be 20%
> utilized.
>
> I'm not a normal Ceph user. I don't care about client access at all. The
> mclock assumptions are wrong for me. I want my data to be replicated
> correctly as fast as poss
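For reference, a hedged sketch of biasing the scheduler toward recovery when mClock is in use (Quincy or later); the profile is a real option, but whether it suits this cluster is an assumption:
```
# Weight recovery/backfill above client I/O in the mClock scheduler
ceph config set osd osd_mclock_profile high_recovery_ops
# Revert when replication has caught up
ceph config set osd osd_mclock_profile balanced
```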
I don't have access to Slack, but thank you for all your work! Fingers crossed
for a quick release.
Kind regards,
Sake
> On 23-05-2024 16:20 CEST, Yuri Weinstein wrote:
>
>
> We are still working on the last-minute fixes, see this for details
> https://ceph-storage.sla
really need some
fixes of this release.
Kind regards,
Sake
___
ceph crash archive-all
On 22 March 2024 22:26:50 CET, Albert Shih wrote:
>Hi,
>
>Very basic question: 2 days ago I rebooted the whole cluster. Everything works
>fine. But I'm guessing during the shutdown 4 OSDs were marked as crashed
>
>[WRN] RECENT_CRASH: 4 daemons have recently cr
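For context, the usual way to inspect and clear that warning (the ID is a placeholder):
```
ceph crash ls-new              # list crashes that have not been acknowledged yet
ceph crash info <crash-id>     # show details for one entry from the list
ceph crash archive-all         # acknowledge all of them; clears RECENT_CRASH
```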
I'm trying to add a new storage host into a Ceph cluster (quincy 17.2.6). The
machine has boot drives, one free SSD and 10 HDDs. The plan is to have each HDD
be an OSD with its DB on an equal-size LV on the SSD. This machine is newer but
otherwise similar to other machines already in the cluster
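A hedged sketch of a cephadm OSD service spec for that layout; the service name and hostname are placeholders, and the rotational filters assume the SSD and HDDs report themselves correctly:
```
cat > osd-spec.yaml <<'EOF'
service_type: osd
service_id: hdd-osd-ssd-db        # placeholder name
placement:
  hosts:
    - newhost                     # placeholder hostname
spec:
  data_devices:
    rotational: 1                 # the ten HDDs
  db_devices:
    rotational: 0                 # the free SSD; cephadm carves one DB LV per OSD from it
EOF
ceph orch apply -i osd-spec.yaml --dry-run   # preview before applying for real
```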
and sent for
moderation. It never got approved or rejected. So I moved on; I can't be using
something that does NOTHING, with no way to proceed past that point.
From: Roberto Maggi @ Debian
Sent: April 24, 2024 01:39
To: Ceph Users
Subject: [ceph-users] ceph
s it will have enough RAM to complete the replay?
> >
> > On 4/22/24 11:37 AM, Sake Ceph wrote:
> >> Just a question: is it possible to block or disable all clients? Just
> >> to prevent load on the system.
> >>
> >> Kind regards,
>
Just a question: is it possible to block or disable all clients? Just to
prevent load on the system.
Kind regards,
Sake
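A hedged sketch of options for this, assuming CephFS; refuse_client_session only exists in newer releases (Reef, possibly backports), and the names/IDs are placeholders:
```
ceph fs set <fs_name> refuse_client_session true          # reject new client sessions (newer releases)
ceph tell mds.<fs_name>:0 client ls                       # list sessions that are already connected
ceph tell mds.<fs_name>:0 client evict id=<session-id>    # evict a specific session
ceph fs set <fs_name> refuse_client_session false         # allow clients again afterwards
```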
> On 22-04-2024 20:33 CEST, Erich Weiler wrote:
>
>
> I also see this from 'ceph health detail':
>
> # ceph health detail
> HEALTH_WARN 1 file
EST, duluxoz wrote:
>
>
> Hi All,
>
> *Something* is chewing up a lot of space on our `/var` partition to the
> point where we're getting warnings about the Ceph monitor running out of
> space (i.e. > 70% full).
>
> I've been looking, but I can't find anything signif
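Not from the original thread, but a quick way to narrow down what is eating the space (the mon store path assumes a cephadm layout; adjust as needed):
```
du -xh --max-depth=2 /var | sort -h | tail -20    # biggest directories on the /var filesystem
journalctl --disk-usage                           # the systemd journal is a frequent offender
du -sh /var/lib/ceph/*/mon.*/store.db             # size of the monitor's RocksDB store
```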
Also helpful is the output of:
ceph pg {poolnum}.{pg-id} query
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
https://www.clyso.com/
On 16.03.24 at 13:52, Eugen Block wrote:
Yeah, the whole story would help
e seeing time spent waiting on fdatasync in
> bstore_kv_sync if the drives you are using don't have power loss
> protection and can't perform flushes quickly. Some consumer grade
> drives are actually slower at this than HDDs.
>
>
> Mark
>
>
> On 2/22/24 11:04, Work Ceph w
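A hedged sketch of the usual flush-latency test for such drives; /dev/sdX is a placeholder and the run destroys data on it, so only use a spare device:
```
# Single-threaded 4k sync writes approximate what bstore_kv_sync asks of the drive
fio --name=flush-test --filename=/dev/sdX --direct=1 --sync=1 \
    --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based
```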
Hello guys,
We are running Ceph Octopus on Ubuntu 18.04, and we are noticing spikes of
IO utilization for the bstore_kv_sync thread during processes such as adding a
new pool and increasing/reducing the number of PGs in a pool.
It is funny though that the IO utilization (reported with iotop) is 99.99
I would say drop it for the Squid release. Or, if you keep it in Squid but are
going to disable it in a minor release later, please make a note in the release
notes that the option is being removed.
Just my 2 cents :)
Best regards,
Sake
___
t from dashboard because of security
> reasons. (But so far we are planning to keep it as it is at least for the
> older releases)
>
> Regards,
> Nizam
>
>
> On Thu, Jan 25, 2024, 19:41 Sake Ceph wrote:
> > After upgrading to 17.2.7 our load balancers can't check the sta
.
___
I'm following the guide @ https://docs.ceph.com/en/latest/rbd/rados-rbd-cmds/
but I'm not following why an `mgr` permission would be required to have a
functioning RBD client?
Thanks.
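For comparison, the caps that guide suggests look roughly like this (client name and pool are placeholders); as far as I know plain image I/O works without the mgr cap, but some rbd CLI operations are routed through the mgr module:
```
ceph auth get-or-create client.rbd-user \
    mon 'profile rbd' \
    osd 'profile rbd pool=rbdpool' \
    mgr 'profile rbd pool=rbdpool'
```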
___
damaged. One of those got repaired, but the following file keeps
giving errors and can't be removed.
What can I do now? Below some information.
# ceph tell mds.atlassian-prod:0 damage ls
[
{
"damage_type": "backtrace",
"id": 224901
CET, Sake Ceph wrote:
>
>
> Hi!
>
> As I'm reading through the documentation about subtree pinning, I was
> wondering if the following is possible.
>
> We've got the following directory structure.
> /
> /app1
> /app2
> /app3
> /app4
>
> Ca
to rank 3?
I would like to load-balance the subfolders of /app1 across 2 (or 3) MDS servers.
Best regards,
Sake
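A hedged sketch of how that pinning can be expressed, assuming the filesystem is mounted at /mnt/cephfs (the mount point is a placeholder):
```
# Pin /app1 explicitly to rank 1
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/app1
# Or let the MDS spread /app1's immediate subdirectories across the active ranks
setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/app1
```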
___
That wasn't really clear in the docs :(
> On 21-12-2023 17:26 CET, Patrick Donnelly wrote:
>
>
> On Thu, Dec 21, 2023 at 3:05 AM Sake Ceph wrote:
> >
> > Hi David
> >
> > Reducing max_mds didn't work. So I executed a fs reset:
> > ceph fs set
Hi David
Reducing max_mds didn't work. So I executed a fs reset:
ceph fs set atlassian-prod allow_standby_replay false
ceph fs set atlassian-prod cluster_down true
ceph mds fail atlassian-prod.pwsoel13142.egsdfl
ceph mds fail atlassian-prod.pwsoel13143.qlvypn
ceph fs reset atlassian-prod
ceph fs
Starting a new thread, forgot subject in the previous.
So our FS is down. Got the following error; what can I do?
# ceph health detail
HEALTH_ERR 1 filesystem is degraded; 1 mds daemon damaged
[WRN] FS_DEGRADED: 1 filesystem is degraded
fs atlassian/prod is degraded
[ERR] MDS_DAMAGE: 1 mds
Don't forget that with stretch mode, OSDs only communicate with MONs in the same DC,
and the tiebreaker only communicates with the other MONs (to prevent split-brain
scenarios).
A little late response, but I wanted you to know this :)
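For reference, a hedged sketch of how those locations and the tiebreaker are declared when stretch mode is enabled (mon names, datacenters, and the CRUSH rule name are placeholders):
```
ceph mon set_location mon-a datacenter=dc1
ceph mon set_location mon-b datacenter=dc2
ceph mon set_location mon-tie datacenter=dc3          # the tiebreaker site
ceph mon enable_stretch_mode mon-tie stretch_rule datacenter
```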
___
sword doesn't seem to be applied
> > (don't know why yet). But since it's an "initial" password you can
> > choose something simple like "admin", and during the first login you
> > are asked to change it anyway. And then you can choose your more
>
onsider the failed
> datacenter anymore, deploy an additional mon somewhere and maybe
> reduce the size/min_size. Am I missing something?
>
> Thanks,
> Eugen
>
> [1] https://docs.ceph.com/en/reef/rados/operations/stretch-mode/#id2
>
> ____
is not right (shouldn't
happen). But applying the Grafana spec on the other mgr, I get the following
error in the log files:
services/grafana/ceph-dashboard.yml.j2 Traceback (most recent call last): File
"/usr/share/ceph/mgr/cephadm/template.py",
line 40, in rende
Using podman version 4.4.1 on RHEL 8.8, Ceph 17.2.7
I used 'podman system prune -a -f' and 'podman volume prune -f' to clean up
files, but this leaves a lot of files behind in
/var/lib/containers/storage/overlay and an empty folder
/var/lib/ceph//custom_config_files/grafana..
Found those files
Too bad, that doesn't work :(
> On 09-11-2023 09:07 CET, Sake Ceph wrote:
>
>
> Hi,
>
> Well, to get Promtail working with Loki, you need to set up a password in
> Grafana.
> But Promtail wasn't working with the 17.2.6 release; the URL was set to
> container
, the default dashboards are
great! So a wipe isn't a problem, it's what I want.
Best regards,
Sake
> On 09-11-2023 08:19 CET, Eugen Block wrote:
>
>
> Hi,
> you mean you forgot your password? You can remove the service with
> 'ceph orch rm grafana', then re-apply your grafa
a credentials error
on an environment where I tried to use Grafana with Loki in the past (with 17.2.6
of Ceph/cephadm). I changed the password in the past within Grafana, but how
can I overwrite this now? Or is there a way to clean up all Grafana files?
Best regards,
Sake
Hi,
another short note regarding the documentation: the paths are designed
for a package installation.
The paths for a container installation look a bit different, e.g.:
/var/lib/ceph//osd.y/
Joachim
___
ceph ambassador DACH
ceph consultant since 2012
Clyso
to NVMe, PCIe 5.0 and newer technologies
with high IOPs and low latencies.
2.) Everything that requires high data security, strong consistency and
failure domains higher than host, we do with Ceph.
Joachim
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH
I'm fairly new to the community so I figured I'd ask about this here before
creating an issue - I'm not sure how supported this config is.
I am running rook v1.12.6 and ceph 18.2.0. I've enabled the dashboard in the
CRD and it has been working for a while. However, the charts are empty.
I do
@Eugen
We have seen the same problems 8 years ago. I can only recommend never
to use cache tiering in production.
At Cephalocon this was part of my talk and as far as I remember cache
tiering will also disappear from ceph soon.
Cache tiering has been deprecated in the Reef release as it has
Hi,
we have often seen strange behavior and also interesting pg targets from
pg_autoscaler in the last years.
That's why we disable it globally.
The commands:
ceph osd reweight-by-utilization
ceph osd test-reweight-by-utilization
are from the time before the upmap balancer was introduced
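For reference, a hedged sketch of switching to the upmap balancer instead (assuming all clients are Luminous or newer):
```
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on
ceph balancer status
```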
Hi, an idea is to see what
ceph osd test-reweight-by-utilization
shows.
If it looks useful you can run the above command without "test".
HTH
Mehmet
On 22 September 2023 11:22:39 CEST, b...@sanger.ac.uk wrote:
>Hi Folks,
>
>We are currently running with one nearfull OSD an
eased the number of disks from 3 to 9, that is 3
>per node. The addition of storage capacity was successful, resulting in 6 new
>OSDs in the cluster.
>
>But, after this operation, we noticed that Rebuilding Data Resiliency is
>stuck at 5% and not moving forward. At the same
Another possibility is ceph mon discovery via DNS:
https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/#looking-up-monitors-through-dns
Regards, Joachim
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph
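A hedged sketch of what that looks like, assuming the default service name "ceph-mon" and placeholder hostnames; the SRV records go in your DNS zone and clients use mon_dns_srv_name (or its default) to find them:
```
# SRV records, one per monitor (BIND-style, shown as comments):
#   _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 3300 mon1.example.com.
#   _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 3300 mon2.example.com.
#   _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 3300 mon3.example.com.
# Verify resolution from a client:
dig +short SRV _ceph-mon._tcp.example.com
```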
is the following:
1. Set norebalance
2. One by one, do this for each OSD
* Purge the OSD from the dashboard
* cephadm ceph-volume lvm zap
* cephadm may automatically find and add the OSD, otherwise I'll add it
manually
3. use pgremapper<https://github.com/digitaloc
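A rough shell sketch of steps 1-2 above, not the poster's exact procedure; the OSD id, host, and device are placeholders:
```
ceph osd set norebalance
# repeat per OSD being replaced
ceph osd purge 12 --yes-i-really-mean-it
ceph orch device zap <host> /dev/sdc --force      # orchestrator wrapper around ceph-volume lvm zap
# cephadm usually re-creates the OSD from the service spec; otherwise:
# ceph orch daemon add osd <host>:/dev/sdc
ceph osd unset norebalance
```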
ations/add-or-rm-osds/#replacing-an-osd
Many thanks!
___
on?
> Snapshot deletion is an asynchronous operation, so they are added to
> the queue and deleted at some point. Does the status/range change?
> Which exact Octopus version are you running? I have two test clusters
> (latest Octopus) with rbd mirroring and when I set that up I expecte
Hello guys,
We are facing/seeing an unexpected mark in one of our pools. Do you guys
know what "removed_snaps_queue" means? We see some notation such as
"d5~3" after this tag. What does it mean? We tried to look into the docs,
but could not find anything meaningful.
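As far as I can tell, removed_snaps_queue lists snapshot IDs that are queued for purging, written as intervals of the form first~length in hex (so "d5~3" would mean three snap IDs starting at 0xd5); treat that reading as a hedge, not an authoritative answer. It shows up in:
```
ceph osd pool ls detail     # per-pool details, including removed_snaps_queue [d5~3]
```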
Thank you Patrick for responding and fixing the issue! Good to know the issue is
known and being worked on :-)
> On 21-07-2023 15:59 CEST, Patrick Donnelly wrote:
>
>
> Hello Sake,
>
> On Fri, Jul 21, 2023 at 3:43 AM Sake Ceph wrote:
> >
> > At 01:27 this morn
--
# ceph fs status
atlassian-opl - 8 clients
=
RANK  STATE           MDS                        ACTIVITY     DNS   INOS  DIRS  CAPS
 0    active          atlassian-opl.mds5.zsxfep  Reqs: 0 /s   7830  7803   635  3706
0-s   standby-replay  atlassian
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
https://www.clyso.com/
On 19.07.23 at 09:00, Eugen Block wrote:
Hi,
during cluster upgrades from L to N or later one had to rebuild OSDs
which were originally deployed by ceph-disk
Does someone know a workaround to set the correct URL for the time being?
Best regards,
Sake
___
you can also test directly with ceph bench whether the WAL is on the
flash device:
https://www.clyso.com/blog/verify-ceph-osd-db-and-wal-setup/
Joachim
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
https
It depends on the project;
e.g., the features of both projects are not identical.
Joachim
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
https://www.clyso.com/
On 06.07.23 at 07:16, Nico Schottelius wrote
system. If they were written to the Ceph system in a Rados object we would
not have that limitation, or if they were written in a shared folder path
that is mounted in all iSCSI GW. We will validate this setup to see if they
would work just fine for Multipath with iSCSI reservation.
On Fri, Jun 23
Thank you to everyone who tried to help here. We discovered the issue, and it
had nothing to do with Ceph or the iSCSI GW.
The issue was being caused by a switch that was acting as the "router" for
the network of the iSCSI GW. All end clients (applications) were separated
into diffe
___
Thanks for the help so far guys!
Has anybody used (made it work) the default ceph-iscsi implementation with
VMware and/or Windows CSV storage system with a single target/portal in
iSCSI?
On Wed, Jun 21, 2023 at 6:02 AM Maged Mokhtar wrote:
>
> On 20/06/2023 01:16, Work Ceph wrote:
>
t uses for ceph-iscsi, though I'd try to use the native RBD client
> instead if possible.
>
> Veeam appears by default to store really tiny blocks, so there's a lot of
> protocol overhead. I understand that Veeam can be configured to use "large
> blocks" that can make a dist
Hello guys,
We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD
for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows
clients.
We started noticing some unexpected performance issues with iSCSI. I mean,
an SSD pool is reaching 100MB of write speed
I see, thanks for the feedback guys!
It is interesting that Ceph Manager does not allow us to export iSCSI
blocks without selecting 2 or more iSCSI portals. Therefore, we will always
use at least two, and as a consequence that feature is not going to be
supported. Can I export an RBD image via
Hello guys,
We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD
for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows
clients.
Recently, we had the need to add some VMWare clusters as clients for the
iSCSI GW and also Windows systems with the use
in the target system RBD image as well?
Thanks in advance!
___
Regards, Joachim
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
https://www.clyso.com/
On 17.05.23 at 14:24, Rok Jaklič wrote:
thx.
I tried with:
ceph config set mon rgw_delete_multi_obj_max_num 1
ceph config set
Adam & Mark topics: bluestore and bluestore v2
https://youtu.be/FVUoGw6kY5k
https://youtu.be/7D5Bgd5TuYw
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
https://www.clyso.com/
On 15.05.23 at 16:47,
Don't know if it helps, but we have also experienced something similar
with osd images. We changed the image tag from version to sha and it did
not happen again.
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
and upgrade paths/strategy.
From version to version, we also test for up to 6 months before putting
them into production.
However, our goal is always to use Ceph versions that still get
backports and on the other hand, only use the features we really need.
Our developers also always aim
Awesome! Thanks.
What is the default then for RBD images? Is it the default to delete them
and not to use the trash? Or, do we need a configuration to make Ceph use
the trash?
We are using Ceph Octopus.
On Wed, May 10, 2023 at 6:33 PM Reto Gysi wrote:
> Hi
>
> For me with ceph versi
Hello guys,
We have a question regarding snapshot management: when a protected snapshot is
created, should it be deleted when its RBD image is removed from the system?
If not, how can we list orphaned snapshots in a pool?
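Not an authoritative answer, but a sketch of walking all images in a pool and listing their snapshots to spot leftovers (the pool name is a placeholder; --all, where available, also shows snapshots already moved to the trash namespace):
```
for img in $(rbd ls rbdpool); do
  echo "== ${img} =="
  rbd snap ls --all "rbdpool/${img}"
done
```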
___
"bucket does not exist" or "permission denied".
Had received similar error messages with another client program. The default
region did not match the region of the cluster.
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Pre
Hello Thomas,
I would strongly recommend that you read the messages on the mailing list
regarding Ceph versions 16.2.11, 16.2.12 and 16.2.13.
Joachim
___
ceph ambassador DACH
ceph consultant since 2012
Clyso GmbH - Premier Ceph Foundation Member
https
Hi André,
at Cephalocon 2023 last week in Amsterdam there were two
presentations by Adam and Mark that might help you.
Joachim
___
Clyso GmbH - Ceph Foundation Member
On 21.04.23 at 10:53, André Gemünd wrote:
Dear Ceph-users,
in the meantime I found
images and only
work with the already existing ones?
___
te, the clients can be restarted using the new
> > target image name. Attempting to restart the clients using the
> > source image name will result in failure.
>
> So I don't think you can live-migrate without interruption, at least
> not at the moment.
>
> Regards,
Hello guys,
We have been reading the docs, and trying to reproduce that process in our
Ceph cluster. However, we always receive the following message:
```
librbd::Migration: prepare: image has watchers - not migrating
rbd: preparing migration failed: (16) Device or resource busy
```
We
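A hedged first step for the "image has watchers" error: find out who still has the image open before preparing the migration (pool/image names are placeholders):
```
rbd status rbdpool/myimage     # lists the active watchers holding the image open
```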
>Here goes:
>
># ceph -s
> cluster:
>id: e1327a10-8b8c-11ed-88b9-3cecef0e3946
>health: HEALTH_OK
>
> services:
>mon: 5 daemons, quorum
> bcgonen-a,bcgonen-b,bcgonen-c,bcgonen-r0h0,bcgonen-r0h1 (age 16h)
>mgr: bcgonen-b.furndm(active, since 8d
The disks are 14.9, but the exact size of them does not matter much in this
context. We figured out the issue. The raw used space accounts for the
RocksDB and WAL space. Therefore, as we dedicated an NVMe device in each
host for them, Ceph is showing that space as used space already. It is
funny
nd RocksDB
to an NVMe device; however, Ceph still seems to think that we use our data
plane disks to store those elements. We have about 375TB (5 * 5 * 15) in
HDD disks, and Ceph seems to be discounting from the usable space the
volume (space) dedicated to WAL and RocksDB, which are applied into
diffe
To add more information, in case that helps:
```
# ceph -s
cluster:
id:
health: HEALTH_OK
task status:
data:
pools: 6 pools, 161 pgs
objects: 223 objects, 7.0 KiB
usage: 9.3 TiB used, 364 TiB / 373 TiB avail
pgs: 161 active+clean
# ceph df
Hello guys!
We noticed an unexpected situation. In a recently deployed Ceph cluster we
are seeing a raw usage that is a bit odd. We have the following setup:
We have a new cluster with 5 nodes with the following setup:
- 128 GB of RAM
- 2 CPUs Intel(R) Xeon Silver 4210R
- 1
Hello Jan,
I had the same on two clusters going from Nautilus to Pacific.
On both it did help to fire
ceph tell osd.* compact
If this had not helped, I would go for a recreation of the OSDs...
HTH
Mehmet
On 31 March 2023 10:56:42 CEST, j.kr...@profihost.ag wrote:
>Hi,
>
>we have a ver
Need to know some more about your cluster...
ceph -s
ceph osd df tree
Replica or EC?
...
Perhaps this can give us some insight
Mehmet
On 31 March 2023 18:08:38 CEST, Johan Hattne wrote:
>Dear all;
>
>Up until a few hours ago, I had a seemingly normally-behaving cluster (Quincy,
Hi Fabien,
we have also used it several times for 2 DC setups.
However, we always try to use as few chunks as possible, as it is very
inefficient when storing small files (min alloc size) and it can also
lead to quite some problems with backfill and recovery in large ceph
clusters.
Joachim
https://docs.ceph.com/en/latest/rados/configuration/osd-config-ref/#confval-osd_op_queue
___
Clyso GmbH - Ceph Foundation Member
On 21.03.23 at 12:51, Gauvain Pocentek wrote:
(adding back the list)
On Tue, Mar 21, 2023 at 11:25 AM Joachim Kraftmayer
wrote
for testing you can try: https://github.com/aquarist-labs/s3gw
___
Clyso GmbH - Ceph Foundation Member
On 28.02.23 at 16:31, Marc wrote:
Anyone know of an S3-compatible interface that I can just run, and that reads/writes
files from a local file system and not from
Which version of cephadm are you using?
___
Clyso GmbH - Ceph Foundation Member
On 10.03.23 at 11:17, xadhoo...@gmail.com wrote:
looking at ceph orch upgrade check
I find out
},
"ce
.
Nevertheless, you should make sure to have backups of the most important data.
I love ceph but also have a huge fear that something might not work and about
1.5PB of data (raw) might disappear...
Hth
Mehmet
On 20 February 2023 10:21:18 CET, Nicola Mori wrote:
>Dear Ceph users,
>
>m
Hi All,
I'm getting this error while setting up a ceph cluster. I'm relatively new to
ceph, so there is no telling what kind of mistakes I've been making. I'm using
cephadm, ceph v16 and I apparently have a stray daemon. But it also doesn't
seem to exist and I can't get ceph to forget about
Which version?
There is a ceph-volume lvm activate --all which could help... see also
https://docs.ceph.com/en/latest/ceph-volume/lvm/activate/
Hth
Mehmet
On 29 December 2022 08:43:00 CET, Ml Ml wrote:
>Hello,
>
>after reinstalling one node (ceph06) from Backup the OSDs on that
the crush tree,
>even if both refer to the same OSDs, they have different ids: -1 vs -2). We
>cleared that by setting the same root for both crush rules and then PG
>autoscaler kicked in and started doing its thing.
>
>The "ceph osd df" output shows the OMAP jumping si
Hi Jelle,
did you try:
ceph osd force-create-pg
https://docs.ceph.com/en/quincy/rados/troubleshooting/troubleshooting-pg/#pool-size-1
Regards, Joachim
___
Clyso GmbH - Ceph Foundation Member
On 22.11.22 at 11:33, Jelle de Jong wrote:
Hello everybody
Hi Nico,
I have not used this till today but perhaps it will be helpful for you?
https://docs.ceph.com/en/latest/rados/troubleshooting/memory-profiling/
There is also something like this
ceph tell {daemon-type}.{daemon-id} heap release
But did you already just restart the OSD in question?
Could you please share the output of
ceph osd df tree
There could be a hint...
HTH
On 14 October 2022 18:45:40 CEST, Matthew Darwin wrote:
>Hi,
>
>I am hoping someone can help explain this strange message. I took 1 physical
>server offline which contains 11 OSDs. "ceph -
Hello Oğuz,
we have been supporting several rook/ceph clusters in the hyperscalers
for years, including Azure.
A few quick notes:
* you should be prepared to run into some issues with the default config of
the OSDs.
* in Azure, there is an issue with the quality of the network in some
at 6:20 AM Dominique Ramaekers <
>dominique.ramaek...@cometal.be> wrote:
>
>>
>> Ceph.conf isn't available on that node/container.
>>
>> What happens if you try to start a cephadm shell on that node?
>>
>>
>> > -----Original message-----
Hi,
Is Ceph still backfilling? What is the actual output of ceph -s?
If not backfilling, it is strange that you only have 84 pgs on osd.11 but 93.59
percent in use...
Are you able to find a PG on osd.11 which is too big?
Perhaps pg query will help to find it. Otherwise you should lower the weight
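A hedged sketch of how to look for that oversized PG (the OSD id and PG id are placeholders):
```
ceph pg ls-by-osd osd.11      # every PG mapped to osd.11, with object counts and sizes
ceph pg <pgid> query          # detailed state of a suspicious PG
```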
Do you have mon_compact_on_start set to true, and have you tried a mon restart?
Just a guess
Hth
Mehmet
On 27 June 2022 16:46:26 CEST, Wyll Ingersoll wrote:
>
>Running Ceph Pacific 16.2.7
>
>We have a very large cluster with 3 monitors. One of the monitor DBs is > 2x
>the s
Hi Hans.
Any chance you could write up a blog, GitHub Gist, or wiki, to
describe WHAT exactly you run and HOW... with (config) examples?!?
I wanted to also run the same kind of setup @ home, but hadn't the time to even
start thinking / reading about how to set up Ceph at home (OK, I had a Ceph
-12-21T01:01:02.209+0100 7fd368cebf00 -1 rocksdb: Corruption: Bad
> table magic number: expected 9863518390377041911, found 0 in db/002182.sst
> 2021-12-21T01:01:02.213+0100 7fd368cebf00 -1
> bluestore(/var/lib/ceph/osd/ceph-7) _open_db erroring opening db:
> 2021-1
some benchmarks on RBD and found that the ceph built-in
> benchmark commands were both way too optimistic and highly unreliable.
> Successive executions of the same command (say, rbd bench for a 30 minute
> interval) would give results with a factor of 2-3 between averages. I moved to use
>
old mailing lists ;-)
The IOPS graph is the most significant: the IO wait states of the system dropped
by nearly 50%, which is the reason for our drop in IOPS overall, I think. Just no
clue why… (the update was on the 10th of Nov). I guess I want these wait states back
:-o
https://kai.freshx.de/img/ceph
Dear List,
until we upgraded our cluster 3 weeks ago we had a cute, high-performing, small
productive Ceph cluster running Nautilus 14.2.22 on Proxmox 6.4 (kernel 5.4-143
at the time). Then we started the upgrade to Octopus 15.2.15. Since we did an
online upgrade, we stopped the autoconvert
Hi,
"Input/output error"
This is an indication of a hardware error.
So you should check the disk and create a new OSD...
Hth
Mehmet
On 26 November 2021 11:02:55 CET, "huxia...@horebdata.cn" wrote:
>Dear Cephers,
>
>I just had one Ceph osd node (Luminous 12.
to 1.
>>
>>> And If the SSD is broken down, it will cause all OSDs which share it down?
>> yes.
>>
>>> Wait for your replies.
-octopus )
time s3cmd modify s3://test/test --add-header=x-amz-meta-foo3:Bar
```
We followed the developer instructions to spin-up the cluster and
bisected the following commit.
https://github.com/ceph/ceph/commit/99f7c4aa1286edfea6961b92bb44bb8fe22bd599
I'm not that involved to easily identify