Orchestration is hard, especially with every permutation. The devs have
implemented what they feel is the right solution for their own needs from the
sound of it. The orchestration was made modular to support non-containerized
deployment. It just takes someone to step up and implement the permutations.
Hello Folks,
We are running the Ceph Octopus 15.2.13 release and would like to use the disk
prediction module. The issues we have faced so far are:
1. The Ceph documentation does not mention that
`ceph-mgr-diskprediction-local.noarch` needs to be installed.
2. Even after installing the needed package and restarting the mgr, it does not
appear
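For reference, a minimal sequence that should get the local prediction module going on EL-based systems might look like this (a sketch assuming the package name quoted above; verify against your repos):

```shell
# Install the plugin package the docs omit, then restart the mgr daemons
yum install -y ceph-mgr-diskprediction-local
systemctl restart ceph-mgr.target

# Enable the local prediction module and verify it is listed
ceph mgr module enable diskprediction_local
ceph mgr module ls | grep -i diskprediction
```

If the module still does not show up after the restart, check `ceph mgr module ls` for it under "disabled_modules" and the mgr log for import errors.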
Christian;
Do the second site's RGW instance(s) have access to the first site's OSDs? Is
the reverse true?
It's been a while since I set up the multi-site sync between our clusters, but
I seem to remember that, while metadata is exchanged RGW1<-->RGW2, data is
exchanged OSD1<-->RGW2.
Anyone
> Orchestration is hard, especially with every permutation. The devs have
> implemented what they feel is the right solution for their own needs
> from the sound of it. The orchestration was made modular to support non
> containerized deployment. It just takes someone to step up and implement
>
Hey ceph-users,
I set up a multisite sync between two freshly installed Octopus clusters.
In the first cluster I created a bucket with some data just to test the
replication of actual data later.
I then followed the instructions on
https://docs.ceph.com/en/octopus/radosgw/multisite/#migrating-a-s
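For comparison, the secondary-zone side of that procedure usually boils down to something like the following (URLs, zone names, and keys are placeholders, not values from this thread):

```shell
# On the second cluster: pull the realm and period from the first zone
radosgw-admin realm pull --url=http://rgw1.example.com:8080 \
    --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY
radosgw-admin period pull --url=http://rgw1.example.com:8080 \
    --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY

# Create the secondary zone and commit the new period
radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=zone2 \
    --endpoints=http://rgw2.example.com:8080 \
    --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY
radosgw-admin period update --commit

# Restart the local RGW and watch sync progress
systemctl restart ceph-radosgw.target
radosgw-admin sync status
```

`radosgw-admin sync status` is the quickest way to see whether metadata and data sync are both caught up.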
Hey Sage,
Sage Weil writes:
> Thank you for bringing this up. This is in fact a key reason why the
> orchestration abstraction works the way it does--to allow other
> runtime environments to be supported (FreeBSD!
> sysvinit/Devuan/whatever for systemd haters!)
I would like you to stop labeling
Hey Sage,
thanks for the reply.
Sage Weil writes:
> Rook is based on kubernetes, and cephadm on podman or docker. These
> are well-defined runtimes. Yes, some have bugs, but our experience so
> far has been a big improvement over the complexity of managing package
> dependencies across even
Btw: dd bs=1M count=2048 if=/dev/rbd6 of=/dev/null => gives me 50MB/sec.
So reading the block device seems to work?!
On Fri, Jun 25, 2021 at 12:39 PM Ml Ml wrote:
>
> I started the mount 15 mins ago:
> mount -nv /dev/rbd6 /mnt/backup-cluster5
>
> ps:
> root 1143 0.2 0.0 8904 3088 pts/
I started the mount 15 mins ago:
mount -nv /dev/rbd6 /mnt/backup-cluster5
ps:
root 1143 0.2 0.0 8904 3088 pts/0 D+ 12:17 0:03 \_ mount -nv /dev/rbd6 /mnt/backup-cluster5
There is no timeout or ANY message in dmesg until now.
strace -p 1143: seems to do nothing.
iotop --pi
> The security issue (50 containers -> 50 versions of openssl to patch)
> also still stands — the earlier question on this list (when to expect
> patched containers for a CVE affecting a library)
I assume they use the default el7/el8 as a base layer, so when that is updated,
you will get the updates
> rgw, grafana, prom, haproxy, etc are all optional components. The
Is this Prometheus instance stateful? Where is its data stored?
> Early on the team building the container images opted for a single
> image that includes all of the daemons for simplicity. We could build
> stripped down images for eac
Am 18.06.21 um 20:42 schrieb Sage Weil:
Following up with some general comments on the main container
downsides and on the upsides that led us down this path in the first
place.
[...]
Thanks, Sage, for the nice and concise summary of the Cephadm benefits, and the
reasoning on why that path was chosen.
Dear Marc
> Adding to this. I can remember that I was surprised that a mv on cephfs
> between directories linked to different pools
This is documented behaviour and should not be surprising: placement is
assigned at file creation time, so placement changes only affect newly
created files.
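A quick way to see this behaviour in practice (pool and path names below are made up for illustration):

```shell
# Assign a data pool to a directory; only files created afterwards use it
ceph fs add_data_pool cephfs cephfs_archive_pool
setfattr -n ceph.dir.layout.pool -v cephfs_archive_pool /mnt/cephfs/archive

# A mv into that directory does NOT rewrite existing objects:
mv /mnt/cephfs/scratch/big.bin /mnt/cephfs/archive/   # data stays in old pool

# To actually migrate the data, create a new file (copy) and swap it in:
cp /mnt/cephfs/archive/big.bin /mnt/cephfs/archive/big.bin.migrated
mv /mnt/cephfs/archive/big.bin.migrated /mnt/cephfs/archive/big.bin
```

The copy creates new objects, which land in the new pool; the original mv only updates metadata.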
The rbd client is not on one of the OSD nodes.
I now added a "backup-proxmox/cluster5a" to it and it works perfectly.
Just that one rbd image sucks. The last thing I remember was resizing the
image from 6TB to 8TB, and I then ran xfs_growfs on it.
Does that ring a bell?
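For the record, that resize sequence presumably looked something like this (pool/image names guessed from the thread), which is normally a safe operation:

```shell
# Grow the image, then grow the mounted XFS filesystem on top of it
rbd resize backup-proxmox/cluster5 --size 8T
xfs_growfs /mnt/backup-cluster5

# If raw reads on the device work but the mount hangs, a read-only fs
# check (with the filesystem unmounted) can rule out on-disk damage:
xfs_repair -n /dev/rbd6
```

`xfs_repair -n` only reports problems without touching the device, so it is safe to run as a first diagnostic.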
On Wed, Jun 23, 2021 at
> but our experience so
> far has been a big improvement over the complexity of managing package
> dependencies across even just a handful of distros
Do you have some charts or docs that show this complexity problem? I have
trouble understanding it.
This is very likely due to that my un
>
> This thread would not be so long if docker/containers solved the
> problems, but it did not. It solved some, but introduced new ones. So we
> cannot really say it's better now.
The only thing I can deduce from this thread is the necessity to create a
solution for e.g. 'dentists' to install
You can use clay codes (1).
These read less data during reconstruction.
1- https://docs.ceph.com/en/latest/rados/operations/erasure-code-clay/
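As a rough sketch of why clay helps here: the d/(d-k+1) repair-read formula below is taken from the clay documentation linked above, and the 16+3 profile matches the cluster in this thread.

```python
# Back-of-envelope comparison of single-OSD repair reads: classic
# Reed-Solomon vs clay codes, per Ceph's clay erasure-code docs.

def rs_repair_chunks(k):
    """Reed-Solomon reads k full chunks to rebuild one lost chunk."""
    return float(k)

def clay_repair_chunks(k, m, d=None):
    """Clay contacts d helper nodes, each sending 1/(d-k+1) of a chunk."""
    if d is None:
        d = k + m - 1  # the default (and maximum) d in Ceph's clay plugin
    return d / (d - k + 1)

print(rs_repair_chunks(16))       # 16.0 chunks read per repair with RS 16+3
print(clay_repair_chunks(16, 3))  # 6.0 chunks with clay (d=18), ~2.7x less
```

So for a 16+3 pool, clay would cut the repair read amplification from 16 chunks to 6 per lost chunk, which directly shortens the degraded window.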
On Fri, Jun 25, 2021 at 2:50 PM Andrej Filipcic wrote:
>
>
> Hi,
>
> on a large cluster with ~1600 OSDs, 60 servers and using 16+3 erasure
> coded pools, the
Hi,
on a large cluster with ~1600 OSDs, 60 servers and using 16+3 erasure
coded pools, the recovery after OSD failure (HDD) is quite slow. Typical
values are at 4GB/s with 125 ops/s and 32MB object sizes, which then
takes 6-8 hours, during that time the pgs are degraded. I tried to speed
it
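Not a fix for the EC read amplification itself, but the usual knobs for trading client I/O against recovery speed can be set at runtime (values below are examples, not recommendations):

```shell
# Allow more concurrent backfills/recoveries per OSD
ceph config set osd osd_max_backfills 4
ceph config set osd osd_recovery_max_active 8

# Reduce the artificial pause between recovery ops on HDDs
ceph config set osd osd_recovery_sleep_hdd 0.05

# Watch recovery throughput and verify the running values took effect
ceph -s
ceph config show osd.0 | grep -E 'backfills|recovery'
```

Raising these too far will visibly degrade client latency, so change one knob at a time while watching `ceph -s`.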
This thread would not be so long if docker/containers had solved the problems,
but they did not. They solved some, but introduced new ones. So we cannot
really say it's better now.
Again, I think the focus should be more on a working Ceph with clean
documentation, while leaving software management and packages to admins
Hello Cephers,
it is a mystery. My cluster is out of its error state. How, I don't really
know. I initiated deep scrubbing for the affected pgs yesterday. Maybe that
fixed it.
Cheers,
Vadim
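For anyone hitting a similar stuck state, the manual scrub that may have cleared this can be issued per PG (the pg id below is a placeholder, not one from this cluster):

```shell
# List unhealthy PGs, then deep-scrub one of them
ceph health detail
ceph pg deep-scrub 2.1f

# Check when the deep scrub last completed for that PG
ceph pg 2.1f query | grep -i deep_scrub
```

If the scrub fixes the inconsistency, the PG should drop out of `ceph health detail` shortly after it completes.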
On 6/24/21 1:15 PM, Vadim Bulst wrote:
Dear List,
since my update yesterday from 14.2.18 to 14.2.20 i g
On Fri, Jun 25, 2021 at 11:25 AM Ml Ml wrote:
>
> The rbd Client is not on one of the OSD Nodes.
>
> I now added a "backup-proxmox/cluster5a" to it and it works perfectly.
> Just that one rbd image sucks. The last thing i remember was to resize
> the Image from 6TB to 8TB and i then did a xfs_grow
Hi,
We have a containerised ceph cluster in version 16.2.4 (15 hosts, 180 osds)
deployed with ceph-ansible.
Our hosts run CentOS 7 (kernel 3.10) with the ceph-daemon Docker image based on
CentOS 8.
I cannot find in the documentation which native distribution is recommended;
should it be the same
Thanks for the clarification
> according to what i tested, this is not the case. deletion of a topic only
> prevents the creation of new notifications with that topic.
> it does not affect the deletion of notifications with that topic, nor the
> actual sending of these notifications.
>
> note that we
Adding to this: I remember being surprised that, on a mv on cephfs between
directories linked to different pools, only some meta(?) data was moved/changed
and the data itself stayed in the old pool.
I am not sure if this is still the same in newer Ceph versions, but I'd rather
see data being