The latest version of Quincy seems to be having problems cleaning up multipart
fragments from canceled uploads.
The bucket is empty:
% s3cmd -c .s3cfg ls s3://warp-benchmark
%
However, it still holds 11 TB of data and 700k objects:
# radosgw-admin bucket stats --bucket=warp-benchmark
{
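For anyone else hitting this, a sketch of what I'd check next (assuming
s3cmd's multipart subcommands apply here; I'm not certain bucket check
--fix actually reclaims these parts on Quincy):

% s3cmd -c .s3cfg multipart s3://warp-benchmark
% s3cmd -c .s3cfg abortmp s3://warp-benchmark/OBJECT UPLOAD_ID
# radosgw-admin bucket check --bucket=warp-benchmark --check-objects --fix

(OBJECT and UPLOAD_ID come from the multipart listing.)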
need to communicate with the other, assuming the matching CRUSH
hierarchy is in place).
Does anyone have any good resources on this beyond the documentation, or can
anyone at a minimum explain or confirm the slightly spooky nature of the
"helper chunks" mentioned above?
With thanks,
Sean Matheny
Hi all,
Thanks for the great responses. Confirming that this was the issue (feature).
No idea why this was set differently for us in Nautilus.
This should make the recovery benchmarking a bit faster now. :)
Cheers,
Sean
> On 6/12/2022, at 3:09 PM, Wesley Dillingham wrote:
>
> I
[ceph: root@ /]# ceph config get osd osd_recovery_sleep_hdd
0.10
[ceph: root@ /]# ceph config get osd osd_recovery_sleep_ssd
0.00
[ceph: root@ /]# ceph config get osd osd_recovery_sleep_hybrid
0.025000
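If those sleeps turn out to be the throttle, a sketch of relaxing the HDD
one for the benchmark window and confirming it took (0 trades client I/O
for recovery speed, so only for testing):

[ceph: root@ /]# ceph config set osd osd_recovery_sleep_hdd 0.0
[ceph: root@ /]# ceph config get osd osd_recovery_sleep_hdd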
Thanks in advance.
Ngā mihi,
Sean Matheny
HPC Cloud Platform DevOps Lead
New Zealand eScience Infrastructure
erasure (either in normal write
and read, or in recovery scenarios)?
Ngā mihi,
Sean Matheny
HPC Cloud Platform DevOps Lead
New Zealand eScience Infrastructure (NeSI)
e: sean.math...@nesi.org.nz
> On 12/11/2022, at 9:43 AM, Jeremy Austin wrote:
>
> I'm running 16.2.9 and have been u
a storage
node, rather than two.
Can cephadm use partitions instead of whole disks to accomplish this, or is
this unsupported?
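For context, a sketch of the command form I have in mind (host and partition
are placeholders); whether cephadm accepts a partition path here is exactly
the open question:

# ceph orch daemon add osd node1:/dev/sdb2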
Thanks in advance,
Sean
of any bad experiences,
or any reason not to use over jerasure? Any reason to use cauchy-good instead
of reed-solomon for the use case above?
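For reference, a sketch of the shape of profile I mean (the profile and pool
names and the k/m values here are placeholders, not our actual layout):

# ceph osd erasure-code-profile set ec42 plugin=jerasure technique=reed_sol_van k=4 m=2 crush-failure-domain=host
# ceph osd pool create ecpool 128 128 erasure ec42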
Ngā mihi,
Sean Matheny
HPC Cloud Platform DevOps Lead
New Zealand eScience Infrastructure (NeSI)
e: sean.math...@nesi.org.nz
f data.
> ^C
> --- 192.168.30.14 ping statistics ---
> 3 packets transmitted, 0 received, 100% packet loss, time 2062ms
>
> That's very weird... but this gives me something to figure out. Hmmm.
> Thank you.
>
> On Mon, Jul 25, 2022 at 3:01 PM Sean Redmond
> wrote:
>
>> Looks go
watching process.
If you use an RBD image, that should work, however. In that case the kernel
sees the RBD image as a raw block device, and is in full control of the
mounted filesystem.
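For example, a minimal sketch along those lines (pool/image names are
placeholders; assumes the image already exists and inotify-tools is
installed):

# rbd map rbd/watched            (kernel exposes it as e.g. /dev/rbd0)
# mkfs.ext4 /dev/rbd0
# mount /dev/rbd0 /mnt/watched
# inotifywait -m /mnt/watched

Events fire there because the local kernel owns the filesystem, unlike
CephFS where changes can originate on other clients.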
~ Sean
On Oct 6, 2021 at 11:55:25 AM, nORKy wrote:
> Hi,
>
> inotify does not work with CephFS.
In my case it happened after upgrading from v16.2.4 to v16.2.5 a couple
months ago.
~ Sean
On Sep 20, 2021 at 9:02:45 AM, David Orman wrote:
> Same question here, for clarity, was this on upgrading to 16.2.6 from
> 16.2.5? Or upgrading
> from some other release?
>
> On Mon, Se
10.91409 1.0 11 TiB 3.3 TiB 3.3 TiB 20 KiB 9.4 GiB 7.6 TiB 30.03 1.03 35 up osd.16
~ Sean
On Sep 20, 2021 at 8:27:39 AM, Paul Mezzanini wrote:
> I got the exact same error on one of my OSDs when upgrading to 16. I
> used it as an exercise in trying to fix a c
/ceph/ceph:v16.2.6
~ Sean
On Sep 18, 2021 at 6:02:06 AM, Cem Zafer wrote:
> Here is the detail error.
> Thanks.
>
> root@ceph100:~# ceph health detail
> HEALTH_WARN Upgrade: failed to pull target image
> [WRN] UPGRADE_FAILED_PULL: Upgrade: failed to pull target image
>
just haven’t had a reason to.
~ Sean
On Sep 18, 2021 at 8:11:49 AM, Marc wrote:
>
> Currently I am doing a bit of testing with gstreamer, and thought about
> archiving streams in multiple formats, like HLS and MKV. I thought it would
> be nice to use rgw to store files via an s3fs mount, an
words of wisdom. :)
Sean Matheny
New Zealand eScience Infrastructure (NeSI)
It should be fine. I use arm64 systems as clients, and would expect them to be
fine for servers. The biggest problem would be performance.
~ Sean
On Jul 6, 2020, 5:04 AM -0500, norman , wrote:
> Hi all,
>
> I'm using Ceph on x86 and ARM, is it safe to make x86 and arm64 in the same
container ran
as expected.
~ Sean
On Jul 3, 2020, 9:02 AM -0500, Sean Johnson , wrote:
> I have a situation where OSDs won’t work as Docker containers with Octopus on
> an Ubuntu 20.04 host.
>
> The cephadm adopt --style legacy --name osd.8 command works as expected, and
> sets up th
I have a situation where OSDs won’t work as Docker containers with Octopus on an
Ubuntu 20.04 host.
The cephadm adopt --style legacy --name osd.8 command works as expected, and sets
up the /var/lib/ceph/ directory as expected:
root@balin:~# ll
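For reference, this is how I'm checking whether the daemon shows up afterward
(a sketch; assumes the cephadm orchestrator backend is active):

root@balin:~# cephadm ls
root@balin:~# ceph orch ps --daemon-type osd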
Use the same pattern …
systemctl restart ceph-{fsid}@osd.{id}.service
~Sean
> On May 18, 2020, at 7:16 AM, Ml Ml wrote:
>
> Thanks,
>
> The following seems to work for me on Debian 10 and 15.2.1:
>
> systemctl restart ceph-5436dd5d-83d4-4dc8-a93b-60ab5db145df@mon.ce
I have OSDs on the brain … that line should have read:
systemctl restart ceph-{fsid}@mon.{host}.service
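A sketch of filling in the placeholders, assuming the mon id matches the
short hostname (true by default with cephadm, but worth checking):

# ceph fsid
# systemctl restart "ceph-$(ceph fsid)@mon.$(hostname -s).service"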
> On May 17, 2020, at 10:08 AM, Sean Johnson wrote:
>
> In case that doesn’t work, there’s also a systemd service that contains the
> fsid of the cluster.
>
> So, in
~Sean
> On May 15, 2020, at 9:31 AM, Simon Sutter wrote:
>
> Hello Michael,
>
>
> I had the same problems. It's very unfamiliar if you've never worked with the
> cephadm tool.
>
> The Way I'm doing it is to go into the cephadm container:
> # cephadm shell
>
>
king since the OSDs are online and functioning,
I’d really like to have them under `ceph orch` management like the rest of
the systems.
~Sean
$host
On 19/02/2020, at 11:42 PM, Wido den Hollander wrote:
>
>
>
> On 2/19/20 10:11 AM, Paul Emmerich wrote:
>> On Wed, Feb 19, 2020 at 10:03 AM Wido den Hollander wrote:
>>>
>>>
>>>
>>> On 2/19/20 8:49 AM, Sean Matheny wrote:
>>
Thanks,
If the OSDs have a newer epoch of the OSDMap than the MON, it won't work.
How can I verify this? (i.e. the epoch of the monitor vs the epoch of the OSDs)
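My best guess at how to compare them, assuming the admin sockets are still
reachable (`ceph osd dump` needs mon quorum, so with the mons down the
OSD-side socket seems the more useful half):

# ceph daemon osd.0 status       (reports the oldest/newest osdmap epochs the OSD holds)
# ceph osd dump | head -1        (prints "epoch N" as the monitors see it)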
Cheers,
Sean
On 19/02/2020, at 7:25 PM, Wido den Hollander <w...@42on.com> wrote:
On 2/19/20 5:45 AM, Sean Matheny
.
Cheers,
Sean
root@ntr-mon01:/var/log/ceph# ceph -s
  cluster:
    id:     ababdd7f-1040-431b-962c-c45bea5424aa
    health: HEALTH_WARN
            pauserd,pausewr,noout,norecover,noscrub,nodeep-scrub flag(s) set
            157 osds down
            1 host (15 osds) down
            Reduced data
Hi folks,
Our entire cluster is down at the moment.
We started upgrading from 12.2.13 to 14.2.7 with the monitors. The first
monitor we upgraded crashed. We reverted to luminous on this one and tried
another, and it was fine. We upgraded the rest, and they all worked.
Then we upgraded the