[ceph-users] RadosGW multipart fragments not being cleaned up by lifecycle policy on Quincy

2023-03-01 Thread Sean Houghton
The latest version of Quincy seems to be having problems cleaning up multipart fragments from canceled uploads. The bucket is empty: % s3cmd -c .s3cfg ls s3://warp-benchmark % However, it has 11 TB of data and 700k objects. # radosgw-admin bucket stats --bucket=warp-benchmark {
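A lifecycle rule that aborts incomplete multipart uploads is the standard way to reclaim this space. A minimal sketch, assuming the bucket from the report and that RGW lifecycle processing is running; note the report is precisely that Quincy did not appear to honour such a rule, so this shows the intended configuration rather than a confirmed fix:

% cat > abort-mpu.xml <<'EOF'
<LifecycleConfiguration>
  <Rule>
    <ID>abort-incomplete-mpu</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <AbortIncompleteMultipartUpload>
      <DaysAfterInitiation>1</DaysAfterInitiation>
    </AbortIncompleteMultipartUpload>
  </Rule>
</LifecycleConfiguration>
EOF
% s3cmd -c .s3cfg setlifecycle abort-mpu.xml s3://warp-benchmark
# radosgw-admin lc list      # the bucket should now appear with a lifecycle status
# radosgw-admin lc process   # trigger a lifecycle pass instead of waiting for the scheduled run

As a manual fallback, s3cmd multipart s3://warp-benchmark lists the dangling uploads and s3cmd abortmp can remove them one at a time.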

[ceph-users] Demystify EC CLAY and LRC helper chunks?

2022-12-12 Thread Sean Matheny
eed to communicate with the other, assuming the matching CRUSH hierarchy is in place). Anyone have any good resources on this beyond the documentation, or at a minimum can explain or confirm the slightly spooky nature of the "helper chunks" mentioned above? With thanks, Sean Mathen
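For reference, the locality in LRC comes from the extra local ("helper") parity chunks declared in the profile, while CLAY's d parameter controls how many surviving chunks are contacted during repair. A minimal sketch of the two profiles, assuming a host failure domain and illustrative k/m values:

# ceph osd erasure-code-profile set lrc_profile \
      plugin=lrc k=4 m=2 l=3 crush-failure-domain=host
# ceph osd erasure-code-profile set clay_profile \
      plugin=clay k=4 m=2 d=5 crush-failure-domain=host
# ceph osd erasure-code-profile get clay_profile

With the LRC settings above, every group of l=3 chunks gets an additional local parity chunk, so a single-chunk repair reads only within that group; with CLAY, d=5 means a repair contacts five helper chunks but reads only a fraction of each.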

[ceph-users] Re: Odd 10-minute delay before recovery IO begins

2022-12-05 Thread Sean Matheny
Hi all, Thanks for the great responses. Confirming that this was the issue (feature). No idea why this was set differently for us in Nautilus. This should make the recovery benchmarking a bit faster now. :) Cheers, Sean > On 6/12/2022, at 3:09 PM, Wesley Dillingham wrote: > > I

[ceph-users] Odd 10-minute delay before recovery IO begins

2022-12-05 Thread Sean Matheny
get osd osd_recovery_sleep_hdd 0.100000 [ceph: root@ /]# ceph config get osd osd_recovery_sleep_ssd 0.000000 [ceph: root@ /]# ceph config get osd osd_recovery_sleep_hybrid 0.025000 Thanks in advance. Ngā mihi, Sean Matheny HPC Cloud Platform DevOps Lead New Zealand eScience Infrastructure
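Those figures are the shipped defaults (0.1 s for HDD, 0 for SSD, 0.025 s for hybrid), so the recovery sleeps themselves are not unusual. One common source of a roughly ten-minute pause is the default mon_osd_down_out_interval of 600 s before a down OSD is marked out; a sketch of checking it, and of temporarily removing the HDD sleep for recovery benchmarking:

[ceph: root@ /]# ceph config get mon mon_osd_down_out_interval
[ceph: root@ /]# ceph config set osd osd_recovery_sleep_hdd 0
[ceph: root@ /]# ceph config rm osd osd_recovery_sleep_hdd     # revert to the default afterwards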

[ceph-users] Re: Any concerns using EC with CLAY in Quincy (or Pacific)?

2022-11-16 Thread Sean Matheny
erasure (either in normal write and read, or in recovery scenarios)? Ngā mihi, Sean Matheny HPC Cloud Platform DevOps Lead New Zealand eScience Infrastructure (NeSI) e: sean.math...@nesi.org.nz > On 12/11/2022, at 9:43 AM, Jeremy Austin wrote: > > I'm running 16.2.9 and have been u

[ceph-users] Cephadm - db and osd partitions on same disk

2022-11-07 Thread Sean Matheny
a storage node, rather than two. Can cephadm use partitions instead of whole disks to accomplish this, or is this unsupported? Thanks in advance, Sean
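For context, a cephadm OSD service spec can at least name explicit devices for data and DB; a sketch with hypothetical host and device names, applied with a dry run first (whether pre-created partitions, as opposed to whole devices or LVs, are accepted has varied between releases, so verify on one node):

# cat > osd_spec.yaml <<'EOF'
service_type: osd
service_id: mixed_db_and_data
placement:
  hosts:
    - storage-node-01          # hypothetical host name
spec:
  data_devices:
    paths:
      - /dev/nvme0n1           # hypothetical device paths
      - /dev/nvme1n1
  db_devices:
    paths:
      - /dev/nvme2n1
EOF
# ceph orch apply -i osd_spec.yaml --dry-run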

[ceph-users] Any concerns using EC with CLAY in Quincy (or Pacific)?

2022-10-20 Thread Sean Matheny
of any bad experiences, or any reason not to use over jerasure? Any reason to use cauchy-good instead of reed-solomon for the use case above? Ngā mihi, Sean Matheny HPC Cloud Platform DevOps Lead New Zealand eScience Infrastructure (NeSI) e: sean.math...@nesi.org.nz
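For comparison while benchmarking, the two jerasure techniques can be defined side by side as separate profiles (a sketch; k/m are illustrative and should match your failure-domain count):

# ceph osd erasure-code-profile set jerasure_rs \
      plugin=jerasure k=8 m=3 technique=reed_sol_van crush-failure-domain=host
# ceph osd erasure-code-profile set jerasure_cauchy \
      plugin=jerasure k=8 m=3 technique=cauchy_good crush-failure-domain=host
# ceph osd erasure-code-profile get jerasure_rs

reed_sol_van is the jerasure default and generally the safe choice; cauchy_good uses a tuned Cauchy matrix instead of the Vandermonde matrix, which can be faster for some k/m combinations but provides the same durability.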

[ceph-users] Re: Issues after a shutdown

2022-07-25 Thread Sean Redmond
f data. > ^C > --- 192.168.30.14 ping statistics --- > 3 packets transmitted, 0 received, 100% packet loss, time 2062ms > > That's very weird... but this gives me something to figure out. Hmmm. > Thank you. > > On Mon, Jul 25, 2022 at 3:01 PM Sean Redmond > wrote: > >> Looks go

[ceph-users] Re: Cephfs + inotify

2021-10-08 Thread Sean
watching process. If you use an RBD image, that should work, however. In that case the kernel sees the RBD image as a raw block device, and is in full control of the mounted filesystem. ~ Sean On Oct 6, 2021 at 11:55:25 AM, nORKy wrote: > Hi, > > inotify does not work with cephfs.
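A quick way to see the distinction: events are reported for a locally mounted RBD image because the watcher's own kernel owns that filesystem, whereas writes from other CephFS clients never pass through it. A sketch, assuming a hypothetical image rbd/watchtest and the inotify-tools package:

# rbd create rbd/watchtest --size 10G
# rbd map rbd/watchtest                    # returns a device such as /dev/rbd0
# mkfs.ext4 /dev/rbd0
# mkdir -p /mnt/watchtest && mount /dev/rbd0 /mnt/watchtest
# inotifywait -m -r /mnt/watchtest         # local file events now show up as on any local filesystem

The trade-off is that an RBD image with a local filesystem can only be safely mounted on one client at a time, so it is not a drop-in replacement for a shared CephFS mount.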

[ceph-users] Re: rocksdb corruption with 16.2.6

2021-09-20 Thread Sean
In my case it happened after upgrading from v16.2.4 to v16.2.5 a couple months ago. ~ Sean On Sep 20, 2021 at 9:02:45 AM, David Orman wrote: > Same question here, for clarity, was this on upgrading to 16.2.6 from > 16.2.5? Or upgrading > from some other release? > > On Mon, Se

[ceph-users] Re: rocksdb corruption with 16.2.6

2021-09-20 Thread Sean
10.91409 1.0 11 TiB 3.3 TiB 3.3 TiB 20 KiB 9.4 GiB 7.6 TiB 30.03 1.03 35 up osd.16 ~ Sean On Sep 20, 2021 at 8:27:39 AM, Paul Mezzanini wrote: > I got the exact same error on one of my OSDs when upgrading to 16. I > used it as an exercise on trying to fix a c
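For anyone hitting the same corruption, the usual first step is an offline fsck/repair of the affected OSD with ceph-bluestore-tool while the daemon is stopped; a sketch assuming a traditional (non-containerised) OSD directory for osd.16 (under cephadm the same commands are run from cephadm shell --name osd.16):

# systemctl stop ceph-osd@16.service
# ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-16
# ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-16

If repair cannot bring RocksDB back, redeploying the OSD and letting it backfill from the rest of the cluster is the fallback.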

[ceph-users] Re: Error: UPGRADE_FAILED_PULL: Upgrade: failed to pull target image

2021-09-18 Thread Sean
/ceph/ceph:v16.2.6 ~ Sean On Sep 18, 2021 at 6:02:06 AM, Cem Zafer wrote: > Here is the detail error. > Thanks. > > root@ceph100:~# ceph health detail > HEALTH_WARN Upgrade: failed to pull target image > [WRN] UPGRADE_FAILED_PULL: Upgrade: failed to pull target image >
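When cephadm raises UPGRADE_FAILED_PULL, the usual suspects are a wrong image reference or hosts that cannot reach the registry; a sketch of retrying against the 16.2.6 image (quay.io assumed here):

# ceph orch upgrade status
# ceph orch upgrade stop
# cephadm --image quay.io/ceph/ceph:v16.2.6 pull      # test the pull directly on a host
# ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.6
# ceph health detail                                  # the warning clears once the pull succeeds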

[ceph-users] Re: anyone using cephfs or rgw for 'streaming' videos?

2021-09-18 Thread Sean
just haven’t had a reason to. ~ Sean On Sep 18, 2021 at 8:11:49 AM, Marc wrote: > > Currently I am bit testing with gstreamer, and thought about archiving > streams in multiple formats, like hls and mkv. I thought it would be nice > to use rgw to store files via an s3fs mount, an

[ceph-users] Stretch Cluster with rgw and cephfs?

2021-08-19 Thread Sean Matheny
words of wisdom. :) Sean Matheny New Zealand eScience Infrastructure (NeSI)

[ceph-users] Re: Questions on Ceph on ARM

2020-07-06 Thread Sean Johnson
It should be fine. I use arm64 systems as clients, and would expect them to be fine for servers. The biggest problem would be performance. ~ Sean On Jul 6, 2020, 5:04 AM -0500, norman , wrote: > Hi all, > > I'm using Ceph on X86 and ARM, is it safe to make x86 and arm64 in the same

[ceph-users] Re: [Octopus] OSD won’t work with Docker

2020-07-03 Thread Sean Johnson
container ran as expected. ~ Sean On Jul 3, 2020, 9:02 AM -0500, Sean Johnson , wrote: > I have a situation where OSDs won’t work as Docker containers with Octopus on > an Ubuntu 20.04 host. > > The cephadm adopt --style legacy --name osd.8 command works as expected, and > sets up th

[ceph-users] [Octopus] OSD won’t work with Docker

2020-07-03 Thread Sean Johnson
I have a situation where OSDs won’t work as Docker containers with Octopus on an Ubuntu 20.04 host. The cephadm adopt --style legacy --name osd.8 command works as expected, and sets up the /var/lib/ceph/ directory as expected: root@balin:~# ll
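For reference, the adoption step and the checks that normally follow look like this (a sketch; the fsid in the unit name is a placeholder):

# cephadm adopt --style legacy --name osd.8
# cephadm ls | grep -A5 '"osd.8"'                  # adopted daemons report style "cephadm:v1"
# systemctl status ceph-<fsid>@osd.8.service
# journalctl -u ceph-<fsid>@osd.8.service -n 50    # container logs if the unit keeps failing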

[ceph-users] Re: how to restart daemons on 15.2 on Debian 10

2020-05-18 Thread Sean Johnson
Use the same pattern …. systemctl restart ceph-{fsid}@osd.{id}.service ~Sean > On May 18, 2020, at 7:16 AM, Ml Ml wrote: > > Thanks, > > The following seems to work for me on Debian 10 and 15.2.1: > > systemctl restart ceph-5436dd5d-83d4-4dc8-a93b-60ab5db145df@mon.ce
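If the fsid isn't at hand, it can be read from the cluster and substituted into the unit name (a sketch):

# fsid=$(ceph fsid)
# systemctl restart "ceph-${fsid}@osd.8.service"
# systemctl list-units "ceph-${fsid}@*"            # every daemon unit for this cluster on the host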

[ceph-users] Re: how to restart daemons on 15.2 on Debian 10

2020-05-17 Thread Sean Johnson
I have OSDs on the brain … that line should have read: systemctl restart ceph-{fsid}@mon.{host}.service > On May 17, 2020, at 10:08 AM, Sean Johnson wrote: > > In case that doesn’t work, there’s also a systemd service that contains the > fsid of the cluster. > > So, in

[ceph-users] Re: how to restart daemons on 15.2 on Debian 10

2020-05-17 Thread Sean Johnson
~Sean > On May 15, 2020, at 9:31 AM, Simon Sutter wrote: > > Hello Michael, > > > I had the same problems. It's very unfamiliar, if you never worked with the > cephadm tool. > > The Way I'm doing it is to go into the cephadm container: > # cephadm shell > &

[ceph-users] ceph octopus OSDs won't start with docker

2020-05-07 Thread Sean Johnson
king since the OSDs are online and functioning, I’d really like to have them under the `ceph orch` management like the rest of the systems. ~Sean

[ceph-users] Re: [FORGED] Lost all Monitors in Nautilus Upgrade, best way forward?

2020-02-19 Thread Sean Matheny
$host < On 19/02/2020, at 11:42 PM, Wido den Hollander wrote: > > > > On 2/19/20 10:11 AM, Paul Emmerich wrote: >> On Wed, Feb 19, 2020 at 10:03 AM Wido den Hollander wrote: >>> >>> >>> >>> On 2/19/20 8:49 AM, Sean Matheny wrote: >>

[ceph-users] Re: [FORGED] Lost all Monitors in Nautilus Upgrade, best way forward?

2020-02-18 Thread Sean Matheny
Thanks, If the OSDs have a newer epoch of the OSDMap than the MON it won't work. How can I verify this? (i.e. the epoch of the monitor vs the epoch of the OSDs) Cheers, Sean On 19/02/2020, at 7:25 PM, Wido den Hollander wrote: On 2/19/20 5:45 AM, Sean Matheny
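To answer the epoch question concretely: the OSD side can be read from each daemon's admin socket, and the mon side needs a quorum (or an offline dump of the mon store); a sketch, assuming osd.0 is running locally:

# ceph daemon osd.0 status        # "oldest_map" / "newest_map" are the osdmap epochs that OSD holds
# ceph osd dump | head -1         # with a mon quorum: prints "epoch N", the monitors' current osdmap epoch

If the OSDs' newest_map is ahead of the monitors' epoch, the monitors' view is behind, which is exactly the situation the warning above describes.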

[ceph-users] Re: [FORGED] Lost all Monitors in Nautilus Upgrade, best way forward?

2020-02-18 Thread Sean Matheny
. Cheers, Sean root@ntr-mon01:/var/log/ceph# ceph -s cluster: id: ababdd7f-1040-431b-962c-c45bea5424aa health: HEALTH_WARN pauserd,pausewr,noout,norecover,noscrub,nodeep-scrub flag(s) set 157 osds down 1 host (15 osds) down Reduced data

[ceph-users] Lost all Monitors in Nautilus Upgrade, best way forward?

2020-02-18 Thread Sean Matheny
Hi folks, Our entire cluster is down at the moment. We started upgrading from 12.2.13 to 14.2.7 with the monitors. The first monitor we upgraded crashed. We reverted to luminous on this one and tried another, and it was fine. We upgraded the rest, and they all worked. Then we upgraded the