If you use block_db_size and limit in your yaml file, e.g.
block_db_size: 64G (or whatever you choose)
limit: 6
this should not consume the entire disk, but only as much as you
configured. Can you check whether that works for you?
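As a rough sketch, an OSD spec using those two options might look like this (the service_id, host_pattern and device filters are placeholders for your environment):

service_type: osd
service_id: osd_hdd_with_nvme_db
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
  limit: 6
block_db_size: 64G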
Zitat von "Schweiss, Chip" :
I'm trying to set up a new ceph clus
Hi,
What limits are there on the "reasonable size" of an rbd?
E.g. when I try to create a 1 PB rbd with default 4 MiB objects on my
octopus cluster:
$ rbd create --size 1P --data-pool rbd.ec rbd.meta/fs
2021-01-20T18:19:35.799+1100 7f47a99253c0 -1 librbd::image::CreateRequest:
validate_layou
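For scale: a 1 PiB image at the default 4 MiB object size works out to 2^50 / 2^22 = 2^28, i.e. roughly 268 million backing RADOS objects.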
Hi Dietmar,
thanks for that info. I repeated a benchmark test we were using when trying to
find out what the problem was. It's un-taring an anaconda2 archive, which
produces a high mixed load on a file system. I remember that it used to take
ca. 4 minutes on a freshly mounted client. After reduci
Can you describe your Ceph deployment?
On Wed, Jan 20, 2021 at 11:24 AM Adam Boyhan wrote:
> I have been doing some testing with RBD-Mirror Snapshots to a remote Ceph
> cluster.
>
> Does anyone know if the images on the remote cluster can be utilized in
> any way? Would love the ability to clone
Awesome information. I knew I had to be missing something.
All of my clients will be far newer than mimic so I don't think that will be an
issue.
Added the following to my ceph.conf on both clusters.
rbd_default_clone_format = 2
root@Bunkcephmon2:~# rbd clone CephTestPool1/vm-100-disk-0@Tes
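If you'd rather not edit ceph.conf on every client, the same option can be set in the centralized config store (assuming Mimic or newer monitors); a minimal sketch:

ceph config set client rbd_default_clone_format 2
ceph config dump | grep clone_format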
On Wed, Jan 20, 2021 at 3:10 PM Adam Boyhan wrote:
>
> That's what I thought as well, especially based on this.
>
>
>
> Note
>
> You may clone a snapshot from one pool to an image in another pool. For
> example, you may maintain read-only images and snapshots as templates in one
> pool, and writeable clones in another pool.
That's what I thought as well, especially based on this.
Note
You may clone a snapshot from one pool to an image in another pool. For
example, you may maintain read-only images and snapshots as templates in one
pool, and writeable clones in another pool.
root@Bunkcephmon2:~# rbd clone CephT
But you should be able to clone the mirrored snapshot on the remote
cluster even though it’s not protected, IIRC.
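With clone v2 there is no need to protect the snapshot at all, so on the remote cluster something along these lines should work (the snapshot and clone names are only placeholders, and the override is unnecessary if rbd_default_clone_format = 2 is already in the config):

rbd clone --rbd-default-clone-format 2 CephTestPool1/vm-100-disk-0@snap1 CephTestPool1/vm-100-clone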
Quoting Adam Boyhan:
Two separate 4-node clusters with 10 OSDs in each node. Micron 9300
NVMe drives are the OSD drives. Heavily based on the Micron/Supermicro
white papers.
Two separate 4-node clusters with 10 OSDs in each node. Micron 9300 NVMe drives are
the OSD drives. Heavily based on the Micron/Supermicro white papers.
When I attempt to protect the snapshot on a remote image, it fails with a
read-only error.
root@Bunkcephmon2:~# rbd snap protect CephTestPool1/vm-100-d
Have you tried just using them?
(RO, if you do RW things might go crazy, would be nice to try though).
You might be able to create a clone too, and I guess worst case just cp/deep
cp.
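One low-risk way to poke at a remote image without writing to it would be a read-only map followed by read-only inspection; the image name below is just an example:

rbd device map --read-only CephTestPool1/vm-100-disk-0
blkid /dev/rbd0   # inspect without writing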
I'm interested in your findings btw. I'd be grateful if you share them :)
Thanks!
On 01/20 14:23, Adam Boyhan
I have been doing some testing with RBD-Mirror Snapshots to a remote Ceph
cluster.
Does anyone know if the images on the remote cluster can be utilized in any way?
Would love the ability to clone them; even read-only access would be nice.
I'm trying to set up a new ceph cluster with cephadm on a SUSE SES trial
that has Ceph 15.2.8
Each OSD node has 18 rotational SAS disks, 4 NVMe 2 TB SSDs for DB, and 2
NVMe 200 GB Optane SSDs for WAL.
These servers will eventually have 24 rotational SAS disks that they will
inherit from existing s
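Assuming a cephadm-managed deployment (SES 7 drives cephadm under the hood), the discovered devices and the effect of an OSD spec can be previewed with something like the following; the spec file name is a placeholder and --dry-run support depends on the cephadm version:

ceph orch device ls
ceph orch apply osd -i osd_spec.yml --dry-run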
Hi Dietmar,
thanks for that. I reduced the value and, indeed, the number of caps clients
were holding started going down.
A question about the particular value of 64K. Did you run several tests and
find this one to be optimal, or was it just a lucky guess?
Thanks and best regards,
===
Hi,
I'm looking at the SUSE documentation regarding their option to have RBD on Windows.
I want to try it on a Windows Server 2019 VM, but I got this error:
PS C:\Users\$admin$> rbd create image01 --size 4096 --pool windowstest -m
10.118.199.248,10.118.199.249,10.118.199.250 --id windowstest --keyring
C:/P
Hi Frank,
yes, I ran several tests going down from the default 1M, 256k, 128k, 96k
to 64k, which seemed to be optimal in our case.
~Best
Dietmar
On 1/20/21 1:01 PM, Frank Schilder wrote:
Hi Dietmar,
thanks for that. I reduced the value and, indeed, the number of caps clients
were holding started going down.
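The exact setting is not named in these excerpts; assuming it is mds_max_caps_per_client (its default of 1M matches the values discussed), dropping it to 64k would be:

ceph config set mds mds_max_caps_per_client 65536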