On Fri, Nov 18, 2022 at 2:32 PM Frank Schilder wrote:
>
> Hi Patrick,
>
> we plan to upgrade next year; we can't do it any faster. However, distributed
> ephemeral pinning was introduced with Octopus. It was one of the major new
> features and is explained in detail in the Octopus documentation.
>
> A
We have a cluster running Octopus (15.2.17) that I need to get updated, and I am
getting cephadm failures when upgrading the managers; I have tried both
Pacific and Quincy with the same results. The cluster was deployed with cephadm
on centos stream 8 using podman and due to network isolation of t
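For reference, a minimal sketch of the cephadm upgrade flow being attempted
here, assuming a working "ceph orch" setup; the target version below is only
an example, not the poster's actual target:

    # Kick off a cephadm-managed upgrade (target version is an example)
    ceph orch upgrade start --ceph-version 16.2.10
    # Follow progress and watch the cephadm log channel for failures
    ceph orch upgrade status
    ceph -W cephadm
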
Hi Patrick,
we plan to upgrade next year; we can't do it any faster. However, distributed
ephemeral pinning was introduced with Octopus. It was one of the major new
features and is explained in detail in the Octopus documentation.
Are you saying that it is actually not implemented?
If so, how much of
On Fri, Nov 18, 2022 at 2:11 PM Frank Schilder wrote:
>
> Hi Patrick,
>
> thanks for your super fast answer.
>
> > I assume you mean "distributed ephemeral pinning"?
>
> Yes. Just to remove any potential for a misunderstanding from my side, I
> enabled it with (copy-paste from the command history
Hi Patrick,
thanks for your super fast answer.
> I assume you mean "distributed ephemeral pinning"?
Yes. Just to remove any potential for a misunderstanding from my side, I
enabled it with (copy-paste from the command history, /mnt/admin/cephfs/ is the
mount point of "/" with all possible clie
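A sketch of the usual way to enable it; the exact copy-paste is cut off above,
and the path below just combines the stated mount point with one of the pinned
directories mentioned later in the thread:

    # Enable distributed ephemeral pinning on a directory (path is an example)
    setfattr -n ceph.dir.pin.distributed -v 1 /mnt/admin/cephfs/hpc/home
    # Read the vxattr back to confirm it is set
    getfattr -n ceph.dir.pin.distributed /mnt/admin/cephfs/hpc/home
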
Hi Sean,
My use of EC is specifically for slow, bulk storage. I did test jerasure
some years ago, but I don't think I kept my results. I'm having issues
today with arxiv.org, which had papers… I wanted to reduce disk usage
primarily and network IO secondarily. In my case, I preferred the reduced
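A sketch of how such an EC layout for slow bulk storage is typically defined;
the profile name, k/m values, and PG counts below are placeholders, not the
poster's actual settings:

    # Define an erasure-code profile using the jerasure plugin (values are examples)
    ceph osd erasure-code-profile set bulk-ec k=4 m=2 plugin=jerasure
    # Create a pool backed by that profile (PG counts are placeholders)
    ceph osd pool create bulk-data 32 32 erasure bulk-ec
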
On Fri, Nov 18, 2022 at 12:51 PM Frank Schilder wrote:
>
> Hi Patrick,
>
> thanks! I did the following but don't know how to interpret the result. The
> three directories we have ephemeral pinning set on are:
>
> /shares
> /hpc/home
> /hpc/groups
I assume you mean "distributed ephemeral pinning"?
Hi Patrick,
thanks! I did the following but don't know how to interpret the result. The
three directories we have ephemeral pinning set on are:
/shares
/hpc/home
/hpc/groups
If I understand the documentation correctly, everything under /hpc/home/user
should be on the same MDS. Trying it out I get
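One way to verify where a subtree ended up (a sketch, run on an MDS host;
the daemon name is a placeholder) is to confirm the vxattr and then dump the
subtree map via the admin socket:

    # Confirm the policy is set on the parent directory
    getfattr -n ceph.dir.pin.distributed /mnt/admin/cephfs/hpc/home
    # On an MDS host, dump the subtree map and look for the directory
    # (mds.<name> is a placeholder for the local daemon's name)
    ceph daemon mds.<name> get subtrees | grep -A8 'hpc/home'
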
Have you tried setting ms_bind_msgr1 to false?
On Fri, Nov 18, 2022 at 2:35 PM Oleksiy Stashok
wrote:
> Hey guys,
>
> Is there a way to disable the legacy msgr v1 protocol for all ceph
> services?
>
> Thank you.
> Oleksiy
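A sketch of what that suggestion looks like in practice, assuming a cluster
managed through the central config database; daemons still need a restart for
the setting to take effect:

    # Stop binding the legacy msgr v1 protocol cluster-wide
    ceph config set global ms_bind_msgr1 false
    # Verify the setting (mon shown as an example daemon type)
    ceph config get mon ms_bind_msgr1
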
Hey guys,
Is there a way to disable the legacy msgr v1 protocol for all ceph services?
Thank you.
Oleksiy
On Thu, Nov 17, 2022 at 4:45 AM Frank Schilder wrote:
>
> Hi Patrick,
>
> thanks for your explanation. Is there a way to check which directory is
> exported? For example, is the inode contained in the messages somewhere? A
> readdir would usually happen on log-in and the number of slow exports s
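If an inode number can be pulled out of those messages, a sketch of resolving
it to a path; this assumes the "dump inode" MDS command is available on your
release, and the rank and inode value below are placeholders:

    # Resolve an inode number to its dentry/path (inode value is a placeholder)
    ceph tell mds.0 dump inode 0x10000000000
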
Dear List
I'm searching for a way to automate the snapshot creation/cleanup of RBD
volumes. Ideally, there would be something like the "Snapshot Scheduler for
cephfs"[1] but I understand
this is not as "easy" with RBD devices, since Ceph has no knowledge of the
filesystem sitting on top of them.
So what I basi
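A minimal cron-style sketch of the create-and-prune cycle being described; the
pool/image names, the "auto-" prefix, the retention count of 7, and the use of
jq are all assumptions for illustration, not an established tool:

    # Create a timestamped snapshot (pool/image are placeholders)
    rbd snap create rbd/myimage@auto-$(date +%Y%m%d-%H%M)
    # Prune everything beyond the newest 7 "auto-" snapshots (requires jq)
    rbd snap ls rbd/myimage --format json \
      | jq -r '.[].name' | grep '^auto-' | sort | head -n -7 \
      | xargs -r -I{} rbd snap rm rbd/myimage@{}
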
Hi Dan and Igor,
looks very much like BFQ is indeed the culprit. I rolled back everything to
none (high-performance SAS SSDs) and mq-deadline (low- to medium-performance
SATA SSDs) and started a full-speed data movement from the slow to the fast
disks. The cluster operates as well as in the past no
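For anyone wanting to do the same, a sketch of checking and switching the
scheduler at runtime; sdX is a placeholder, and a udev rule would be needed to
make the change persistent across reboots:

    # Show the available schedulers; the active one is in brackets
    cat /sys/block/sdX/queue/scheduler
    # Switch to none (or mq-deadline) at runtime
    echo none | sudo tee /sys/block/sdX/queue/scheduler
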
Hi Frank,
bfq was definitely broken, deadlocking I/O on a few CentOS Stream 8
kernels between EL 8.5 and 8.6; we also hit that in production and
switched over to `none`.
I don't recall exactly when the upstream kernel was also broken but
apparently this was the fix:
https://marc.info/?l=linux-b
I still find it strange that a power outage can break a cluster; we've
had multiple outages this year and the cluster recovered successfully
every time. Although I should add that it's not containerized yet;
it's still running on Nautilus.
Anyway, did you verify that all directories are there
Hi,
I wonder if it's because you're trying to start it with the admin keyring
instead of the rgw client keyring. Have you tried that?
Quoting Marcus Müller:
Hi all,
I'm trying to install a new rgw node. After executing this command:
/usr/bin/radosgw -f --cluster ceph --name client.rgw.s3-00
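A sketch of creating and pointing the gateway at a dedicated client keyring
instead of the admin keyring; the daemon name and keyring path below are
placeholders following common conventions, not taken from the poster's setup:

    # Create a keyring for the rgw daemon with typical caps (name/path are placeholders)
    ceph auth get-or-create client.rgw.<name> mon 'allow rw' osd 'allow rwx' \
      -o /var/lib/ceph/radosgw/ceph-rgw.<name>/keyring
    # Start the gateway against that keyring
    /usr/bin/radosgw -f --cluster ceph --name client.rgw.<name> \
      --keyring /var/lib/ceph/radosgw/ceph-rgw.<name>/keyring
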