Hi Strahil Nikolov,
Thank you for the suggestion, but it does not help...
[root@beclovkvma01 ~]# sudo gluster volume remove-brick datastore1 replica 1
beclovkvma02.bec..net:/data/brick2/brick2
beclovkvma02.bec..net:/data/brick3/brick3
beclovkvma02.bec..net:/data/brick4/brick4
beclovkvma02
Using -o preallocation=metadata (a qemu-img option) brings it down to 7m40s
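For context, preallocation is chosen at image-creation (or conversion) time. A minimal sketch, with hypothetical paths and a hypothetical 50G size; preallocation=metadata writes the qcow2 metadata up front so later guest writes do not pay the allocation cost:

```shell
# Create a qcow2 image with metadata preallocation (paths/size hypothetical):
qemu-img create -f qcow2 -o preallocation=metadata /rhev/data/test.qcow2 50G

# Convert an existing disk into a preallocated copy (source path hypothetical):
qemu-img convert -f qcow2 -O qcow2 -o preallocation=metadata \
    /rhev/data/src.qcow2 /rhev/data/dst.qcow2
```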
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/c
I have been trying to figure out why cloning a VM and creating a template from
oVirt is so slow. I am using oVirt 4.3.10 over NFS. My NFS server is running
NFS 4 on RAID10 with SSD disks, over a 10G network with a 9000 MTU.
Theoretically I should be writing a 50GB file in around 1m30s.
a direct copy f
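As a sanity check on that expectation, the wire-speed lower bound is pure arithmetic (real copies add NFS, RAID, and qcow2 allocation overhead on top):

```shell
# 50 GB * 8 bits/byte / 10 Gbit/s = 40 s at pure wire speed,
# so ~1m30s is a plausible target once protocol and storage
# overhead are added.
awk 'BEGIN {
    size_gb = 50; link_gbit = 10
    printf "%.0f seconds at wire speed\n", size_gb * 8 / link_gbit
}'
```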
You have to specify the volume type. When you remove 1 brick from a replica 3
volume, you are actually converting it to replica 2.
As you have 2 data bricks + 1 arbiter, just remove the arbiter brick and
the missing node's brick:
gluster volume remove-brick VOL replica 1 node2:/brick node3:/b
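Spelled out as a sequence (volume name, node names, and brick paths below are illustrative, not the poster's actual layout), the conversion looks like:

```shell
# Check the current layout first; the arbiter brick is labeled in the output.
gluster volume info VOL

# Drop the arbiter brick and the dead node's data brick in one step,
# telling gluster the target replica count explicitly:
gluster volume remove-brick VOL replica 1 \
    node2:/data/brick/brick node3:/data/brick/brick force

# Confirm the volume no longer reports a replica count of 3:
gluster volume info VOL
```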
I have seen vdsmd leak memory for years (I've been running oVirt since
version 3.5), but never been able to nail it down. I've upgraded a
cluster to oVirt 4.4.9 (reloading the hosts with CentOS 8-stream), and I
still see it happen. One host in the cluster, which has been up 8 days,
has vdsmd with
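One low-effort way to confirm a slow leak is to sample the daemon's resident set over time; a sketch, where the log path and one-hour interval are arbitrary choices:

```shell
# Append a timestamped RSS sample (in kB) for vdsmd once an hour.
# ps -C matches by command name; a steadily growing number over
# days of uptime points at a leak rather than normal working set.
while true; do
    printf '%s %s\n' "$(date -Is)" "$(ps -o rss= -C vdsmd)" \
        >> /var/log/vdsmd-rss.log
    sleep 3600
done
```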
The volume is configured as a Distributed Replicate volume with 7 bricks. When I
try from the GUI, I get the error below:
Error while executing action Remove Gluster Volume Bricks: Volume remove brick
force failed: rc=-1 out=() err=['Remove arbiter brick(s) only when converting
from arbiter to replica 2
When I try to remove all of node 2's bricks, I get the error below:
volume remove-brick commit force: failed: Bricks not from same subvol for
replica
When I try to remove just one brick from node 2, I get the error below:
volume remove-brick commit force: failed: Remove brick incorrect brick count of
1 for r
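The "incorrect brick count" error follows from how distributed-replicate volumes shrink: bricks can only be removed a whole replica set at a time (replica-count bricks from the same subvolume), never one brick at a time. A hedged sketch, with illustrative names and assuming a replica 3 subvolume:

```shell
# Remove one entire replica set; "start" migrates its data off first.
gluster volume remove-brick VOL \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 start

# Watch the migration, then commit once it reports completed:
gluster volume remove-brick VOL \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 status
gluster volume remove-brick VOL \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 commit
```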