ceph-users
Messages by Thread
[ceph-users] Recheck for "large omap objects" warning
Jan Kasprzak
[ceph-users] tentacle 20.1.1 RC1 QE validation status
Yuri Weinstein
[ceph-users] CephFS shares with multiple access rights: Use subvolumes or multiple filesystems?
Sophonet
[ceph-users] Re: CephFS shares with multiple access rights: Use subvolumes or multiple filesystems?
Eugen Block
[ceph-users] Experience with mixed-arch clusters?
Alexander Schreiber
[ceph-users] Re: Experience with mixed-arch clusters?
Redouane Kachach
[ceph-users] Re: Experience with mixed-arch clusters?
Alexander Schreiber
[ceph-users] SSL cert expired for download.ceph.com
Boris
[ceph-users] radosgw multisite unable to initialize site config
Kevin Hrpcek
[ceph-users] Re: radosgw multisite unable to initialize site config
wissem mimouna
[ceph-users] Re: radosgw multisite unable to initialize site config
Kevin Hrpcek
[ceph-users] Re: radosgw multisite unable to initialize site config
Redouane Kachach
[ceph-users] Re: radosgw multisite unable to initialize site config
Kevin Hrpcek
[ceph-users] Re: radosgw multisite unable to initialize site config
Redouane Kachach
[ceph-users] Re: radosgw multisite unable to initialize site config
Eugen Block
[ceph-users] Re: radosgw multisite unable to initialize site config
Kevin Hrpcek
[ceph-users] Re: radosgw multisite unable to initialize site config
Redouane Kachach
[ceph-users] Scrub errors after OSD_TOO_MANY_REPAIRS: how to recover?
Michel Jouvin
[ceph-users] Re: Scrub errors after OSD_TOO_MANY_REPAIRS: how to recover?
Janne Johansson
[ceph-users] Re: Scrub errors after OSD_TOO_MANY_REPAIRS: how to recover?
Enrico Bocchi
[ceph-users] Re: Scrub errors after OSD_TOO_MANY_REPAIRS: how to recover?
Michel Jouvin
[ceph-users] SQUID ganesha failures
Patrick Begou
[ceph-users] Mirroring images per snapshot to a different pool on remote site
Emmanuel Lacour
[ceph-users] Why does recovering objects take much longer than the outage that caused them?
Niklas Hambüchen
[ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?
Boris
[ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?
Niklas Hambüchen
[ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?
Janne Johansson
[ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?
Niklas Hambüchen
[ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?
Anthony D'Atri
[ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?
Niklas Hambüchen
[ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?
Stefan Kooman
[ceph-users] Re: Why does recovering objects take much longer than the outage that caused them?
Burkhard Linke
[ceph-users] Where/how to set PG auto repair/heal?
lejeczek
[ceph-users] Re: Where/how to set PG auto repair/heal?
Eugen Block
[ceph-users] September Ceph Science User Group Virtual Meeting
Enrico Bocchi
[ceph-users] Async dirops maturity
William David Edwards
[ceph-users] capabilities needed for subvolume management?
Robert Sander
[ceph-users] CephFS maturity on Squid?
William David Edwards
[ceph-users] Re: CephFS maturity on Squid?
Eugen Block
[ceph-users] Re: CephFS maturity on Squid?
Tim Holloway
[ceph-users] Re: CephFS maturity on Squid?
Nigel Williams
[ceph-users] Re: CephFS maturity on Squid?
Tim Holloway
[ceph-users] Reef CEPHADM_CHECK_PUBLIC_MEMBERSHIP false positive alerts on some hosts (IPv6 only cluster)
Boris
[ceph-users] Re: Reef CEPHADM_CHECK_PUBLIC_MEMBERSHIP false positive alerts on some hosts (IPv6 only cluster)
Stefan Kooman
[ceph-users] Re: Reef CEPHADM_CHECK_PUBLIC_MEMBERSHIP false positive alerts on some hosts (IPv6 only cluster)
Boris
[ceph-users] Re: Reef CEPHADM_CHECK_PUBLIC_MEMBERSHIP false positive alerts on some hosts (IPv6 only cluster)
Stefan Kooman
[ceph-users] advice needed for sizing of wal+db device for hdd
Adrian Sevcenco
[ceph-users] Re: advice needed for sizing of wal+db device for hdd
Jan Kasprzak
[ceph-users] Re: advice needed for sizing of wal+db device for hdd
Anthony D'Atri
[ceph-users] Re: advice needed for sizing of wal+db device for hdd
Jan Kasprzak
[ceph-users] Re: advice needed for sizing of wal+db device for hdd
Anthony D'Atri
[ceph-users] Re: advice needed for sizing of wal+db device for hdd
Jan Kasprzak
[ceph-users] Re: advice needed for sizing of wal+db device for hdd
Adrian Sevcenco
[ceph-users] Re: advice needed for sizing of wal+db device for hdd
Janne Johansson
[ceph-users] Re: advice needed for sizing of wal+db device for hdd
Adrian Sevcenco
[ceph-users] cephfs volume mount error using both ceph-csi and mount.ceph (using -o mon_addr)
John Jasen
[ceph-users] Re: cephfs volume mount error using both ceph-csi and mount.ceph (using -o mon_addr)
John Jasen
[ceph-users] [ceph-user] ceph-ansible || Ceph upgrade from octopus to quincy
Danish Khan
[ceph-users] Re: Ceph orchestrator not refreshing device list
Bob Gibson
[ceph-users] How important is the "default" data pool being replicated for CephFS
Mikael Öhman
[ceph-users] Re: How important is the "default" data pool being replicated for CephFS
Anthony D'Atri
[ceph-users] Re: How important is the "default" data pool being replicated for CephFS
Eugen Block
[ceph-users] Re: How important is the "default" data pool being replicated for CephFS
Frédéric Nass
[ceph-users] v20.1.0 Tentacle RC0 released
Yuri Weinstein
[ceph-users] Re: v20.1.0 Tentacle RC0 released
Robert Sander
[ceph-users] Re: v20.1.0 Tentacle RC0 released
Robert Sander
[ceph-users] Re: v20.1.0 Tentacle RC0 released
Robert Sander
[ceph-users] Re: v20.1.0 Tentacle RC0 released
Ilya Dryomov
[ceph-users] Re: v20.1.0 Tentacle RC0 released
Ilya Dryomov
[ceph-users] Re: v20.1.0 Tentacle RC0 released
Eugen Block
[ceph-users] Re: v20.1.0 Tentacle RC0 released
Eugen Block
[ceph-users] Re: v20.1.0 Tentacle RC0 released
John Mulligan
[ceph-users] Re: v20.1.0 Tentacle RC0 released
Eugen Block
[ceph-users] Re: v20.1.0 Tentacle RC0 released
Eugen Block
[ceph-users] Is there a faster way to merge PGs?
Justin Mammarella
[ceph-users] Re: Is there a faster way to merge PGs?
Eugen Block
[ceph-users] Re: Is there a faster way to merge PGs?
Eugen Block
[ceph-users] Re: [EXT] Re: Is there a faster way to merge PGs?
Justin Mammarella
[ceph-users] Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior
Olivier Delcourt
[ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior
Eugen Block
[ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior
Joachim Kraftmayer
[ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior
Anthony D'Atri
[ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior
Olivier Delcourt
[ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior
Eugen Block
[ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior
Olivier Delcourt
[ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior
Olivier Delcourt
[ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior
Michel Jouvin
[ceph-users] Re: Upgrade CEPH 14.x -> 16.X + switch from Filestore to Bluestore = strange behavior
Olivier Delcourt
[ceph-users] Failing upgrade 18.2.7 to 19.2.3 - failing to activate via raw takes 5 minutes (before proceeding to lvm)
Mikael Öhman
[ceph-users] Re: Failing upgrade 18.2.7 to 19.2.3 - failing to activate via raw takes 5 minutes (before proceeding to lvm)
Eugen Block
[ceph-users] Re: Failing upgrade 18.2.7 to 19.2.3 - failing to activate via raw takes 5 minutes (before proceeding to lvm)
Michel Jouvin
[ceph-users] Re: Failing upgrade 18.2.7 to 19.2.3 - failing to activate via raw takes 5 minutes (before proceeding to lvm)
Mikael Öhman
[ceph-users] Re: Failing upgrade 18.2.7 to 19.2.3 - failing to activate via raw takes 5 minutes (before proceeding to lvm)
Mikael Öhman
[ceph-users] OSD socket closed
Samuel Moya Tinoco
[ceph-users] Re: OSD socket closed
Kelson White
[ceph-users] Re: OSD socket closed
Ilya Dryomov
[ceph-users] Re: OSD socket closed
Administrator
[ceph-users] Reducing the OSD Heartbeat Grace & Interval
Alexander Hussein-Kershaw
[ceph-users] test
wissem mimouna
[ceph-users] peer... is using msgr V1 protocol ?
lejeczek
[ceph-users] Re: peer... is using msgr V1 protocol ?
Janne Johansson
[ceph-users] Re: peer... is using msgr V1 protocol ?
lejeczek
[ceph-users] [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations
Alex from North
[ceph-users] Re: [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations
Anthony D'Atri
[ceph-users] Re: [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations
Alex from North
[ceph-users] Re: [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations
Anthony D'Atri
[ceph-users] Re: [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations
Alex from North
[ceph-users] Re: [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations
Anthony D'Atri
[ceph-users] Re: [18.2.4 Reef] Using RAID1 as Metadata Device in Ceph – Risks and Recommendations
Alexander Patrakov
[ceph-users] ceph restful key permissions
BASSAGET Cédric
[ceph-users] Re: ceph restful key permissions
Pedro Gonzalez
[ceph-users] CSC Meeting Notes September 8th 2025
Adam King
[ceph-users] Moving to Zoom for today's CDS session
Anthony Middleton
[ceph-users] After upgrading Ceph cluster from Octopus (v15.2.13) to Quincy (v17.2.8), read latency on HDD OSDs increased from 15ms to around 100ms during peak I/O periods.
Best Regards
[ceph-users] Re: After upgrading Ceph cluster from Octopus (v15.2.13) to Quincy (v17.2.8), read latency on HDD OSDs increased from 15ms to around 100ms during peak I/O periods.
Anthony D'Atri
[ceph-users] Re: After upgrading Ceph cluster from Octopus (v15.2.13) to Quincy (v17.2.8), read latency on HDD OSDs increased from 15ms to around 100ms during peak I/O periods.
Best Regards
[ceph-users] Question about shard placement in erasure code pools
Soeren Malchow
[ceph-users] Re: Question about shard placement in erasure code pools
Eugen Block
[ceph-users] Deploy haproxy.nfs
Thomas
[ceph-users] RadosGW read/list-all user across all tenants, users, buckets
Jacques Hoffmann
[ceph-users] Re: RadosGW read/list-all user across all tenants, users, buckets
wissem mimouna
[ceph-users] nfs-ganesha, subvolumes and subtree checking?
Davíð Steinn Geirsson
[ceph-users] Re: nfs-ganesha, subvolumes and subtree checking?
Eugen Block
[ceph-users] Re: nfs-ganesha, subvolumes and subtree checking?
Davíð Steinn Geirsson
[ceph-users] Re: nfs-ganesha, subvolumes and subtree checking?
Eugen Block
[ceph-users] Append to EC pool object (docs vs. reality)
Jan Kasprzak
[ceph-users] Re: Append to EC pool object (docs vs. reality)
Anthony D'Atri
[ceph-users] Ceph OSD with migrated db device fails after upgrade to 19.2.3 from 18.2.7
Soenke Schippmann
[ceph-users] Re: Ceph OSD with migrated db device fails after upgrade to 19.2.3 from 18.2.7
Eugen Block
[ceph-users] Re: Ceph OSD with migrated db device fails after upgrade to 19.2.3 from 18.2.7
Igor Fedotov
[ceph-users] Re: Ceph OSD with migrated db device fails after upgrade to 19.2.3 from 18.2.7
Soenke Schippmann
[ceph-users] Re: Ceph OSD with migrated db device fails after upgrade to 19.2.3 from 18.2.7
Anthony D'Atri
[ceph-users] Re: Ceph OSD with migrated db device fails after upgrade to 19.2.3 from 18.2.7
Soenke Schippmann
[ceph-users] Re: Ceph OSD with migrated db device fails after upgrade to 19.2.3 from 18.2.7
Anthony D'Atri
[ceph-users] Ceph Scrubs
Alex
[ceph-users] squid 19.2.2 - drive in use but ceph is seeing it as "available"
Steven Vacaroaia
[ceph-users] Re: squid 19.2.2 - drive in use but ceph is seeing it as "available"
GLE, Vivien
[ceph-users] Re: squid 19.2.2 - drive in use but ceph is seeing it as "available"
Steven Vacaroaia
[ceph-users] Is there a preferred EC plugin?
Boris
[ceph-users] Re: Is there a preferred EC plugin?
Stefan Kooman
[ceph-users] Re: Is there a preferred EC plugin?
Boris
[ceph-users] Re: Is there a preferred EC plugin?
Ryan Sleeth
[ceph-users] Grafana: Ceph-Cluster Advanced "in" OSDs not counted correctly
Eugen Block
[ceph-users] Re: Grafana: Ceph-Cluster Advanced "in" OSDs not counted correctly
Eugen Block
[ceph-users] squid 19.2.2 cephadm - adding more placement hosts using osd spec yml file
Steven Vacaroaia
[ceph-users] Re: squid 19.2.2 cephadm - adding more placement hosts using osd spec yml file
Eugen Block
[ceph-users] Re: squid 19.2.2 cephadm - adding more placement hosts using osd spec yml file
Steven Vacaroaia
[ceph-users] Re: squid 19.2.2 cephadm - adding more placement hosts using osd spec yml file
Steven Vacaroaia
[ceph-users] How to remove failed OSD & reuse it?
lejeczek
[ceph-users] Re: How to remove failed OSD & reuse it?
Eugen Block
[ceph-users] Re: How to remove failed OSD & reuse it?
lejeczek
[ceph-users] Re: How to remove failed OSD & reuse it?
Eugen Block
[ceph-users] Re: How to remove failed OSD & reuse it?
lejeczek
[ceph-users] Re: How to remove failed OSD & reuse it?
Eugen Block
[ceph-users] Re: How to remove failed OSD & reuse it?
lejeczek
[ceph-users] Re: How to remove failed OSD & reuse it?
lejeczek
[ceph-users] [CEPH-MDS] directory access blocked while looping subtree empty export
????
[ceph-users] Librados async operations in C
Jan Kasprzak
[ceph-users] Re: snap-schedule not running
Sophonet
[ceph-users] Re: snap-schedule not running
Eugen Block
[ceph-users] Re: snap-schedule not running
Sophonet
[ceph-users] Re: snap-schedule not running
Eugen Block
[ceph-users] Re: snap-schedule not running
Sophonet
[ceph-users] Re: snap-schedule not running
Eugen Block
[ceph-users] squid 19.2.2 - added disk does not reflect in available capacity on pools
Steven Vacaroaia
[ceph-users] Re: squid 19.2.2 - added disk does not reflect in available capacity on pools
Anthony D'Atri
[ceph-users] Re: squid 19.2.2 - added disk does not reflect in available capacity on pools
Steven Vacaroaia
[ceph-users] Re: squid 19.2.2 - added disk does not reflect in available capacity on pools
Anthony D'Atri
[ceph-users] Re: squid 19.2.2 - added disk does not reflect in available capacity on pools
Steven Vacaroaia
[ceph-users] Re: squid 19.2.2 - added disk does not reflect in available capacity on pools
Anthony D'Atri
[ceph-users] Re: squid 19.2.2 - added disk does not reflect in available capacity on pools
Steven Vacaroaia
[ceph-users] snap-schedule not running
Sophonet
[ceph-users] Re: ceph-users Digest, Vol 134, Issue 88
renjianxinlover
[ceph-users] Debian Packages for Trixie
Andrew
[ceph-users] Re: Debian Packages for Trixie
Jens Galsgaard
[ceph-users] Re: Debian Packages for Trixie
Daniel Baumann
[ceph-users] Re: Debian Packages for Trixie
Konold, Martin
[ceph-users] Re: Debian Packages for Trixie
Andrew
[ceph-users] OSD crc errors: Faulty SSD?
Roland Giesler
[ceph-users] Re: OSD crc errors: Faulty SSD?
Anthony D'Atri
[ceph-users] Re: OSD crc errors: Faulty SSD?
Igor Fedotov
[ceph-users] Ceph Steering Committee Meeting Minutes for 2025-08-25
Patrick Donnelly
[ceph-users] cannot reach telemetry.ceph.com -> HEALTH_WARN after cluster upgrade to 19.2.3
Björn Lässig
[ceph-users] Re: cannot reach telemetry.ceph.com -> HEALTH_WARN after cluster upgrade to 19.2.3
Yaarit Hatuka
[ceph-users] Re: cannot reach telemetry.ceph.com -> HEALTH_WARN after cluster upgrade to 19.2.3
Björn Lässig
[ceph-users] MGR does not generate prometheus config for ceph-exporter
Adam Prycki
[ceph-users] Re: MGR does not generate prometheus config for ceph-exporter
Eugen Block
[ceph-users] NVME-oF only with quay.ceph.io/ceph-ci/ceph:squid-nvmeof ?
Robert Sander
[ceph-users] For posterity: cct->_conf->osd_fast_shutdown_timeout OSD errors / Run Full Recovery from ONodes (might take a while) during Reef 18.2.1 to 18.2.7 upgrade
Anthony D'Atri
[ceph-users] Large file downloads (~2TB) from CephFS drop at ~100 GB
Mevludin Blazevic
[ceph-users] Re: Large file downloads (~2TB) from CephFS drop at ~100 GB
Janne Johansson
[ceph-users] Re: Large file downloads (~2TB) from CephFS drop at ~100 GB
Anthony D'Atri
[ceph-users] Re: Large file downloads (~2TB) from CephFS drop at ~100 GB
Mevludin Blazevic
[ceph-users] Re: Large file downloads (~2TB) from CephFS drop at ~100 GB
Janne Johansson
[ceph-users] Re: Large file downloads (~2TB) from CephFS drop at ~100 GB
Mevludin Blazevic
[ceph-users] Re: Large file downloads (~2TB) from CephFS drop at ~100 GB
Burkhard Linke
[ceph-users] Ceph Roundup: Umbrella Summit, Windows Survey, Cephalocon & More!
Anthony Middleton
[ceph-users] Problems after update Debian bullseye to trixie and quincy to reef
Christian Peters
[ceph-users] Re: Problems after update Debian bullseye to trixie and quincy to reef
Anthony D'Atri
[ceph-users] new osd's with custom device-class sata not being used by pool?
jbuburuzlist