If you enabled sharding, do not disable it again. My previous comment was just a statement, not a recommendation.
Based on the output, /home2/brick2 is part of the root filesystem.
Obviously, gluster thinks that the brick is full and you have very few options:
- extend any brick or add more bricks to the volume
- shut down all VMs on that volume, reduce the minimum reserved space in gluster (gluster volume set data1 cluster.min-free-disk 0%) and then storage-migrate the disks to a bigger datastore
- shut down all VMs on that volume (datastore), reduce the minimum reserved space in gluster (gluster volume set data1 cluster.min-free-disk 0%) and then delete unnecessary data (like snapshots you don't need any more).
The first option is the most reliable. Once gluster has some free space to operate, you will be able to delete (consolidate) VM snapshots or to delete VMs that are less important (for example sandboxes, test VMs, etc.). Note that paused VMs will resume automatically and will keep filling up Gluster, which could lead to the same situation again, so just power them off (ugly, but most probably necessary).
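
A minimal sketch of the first option, assuming the brick sits on an LVM logical volume with free extents left in its volume group and an XFS filesystem (the +200G figure is only an example; check your own layout first):

# check whether the volume group backing the brick has free extents
vgs
lvs
# grow the logical volume behind /home/brick1 and the XFS filesystem on it
lvextend -L +200G /dev/cl/home
xfs_growfs /home
# if you lowered cluster.min-free-disk to free things up, raise it back afterwards
gluster volume set data1 cluster.min-free-disk 10%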

Best Regards,
Strahil Nikolov
 
On Mon, Jun 14, 2021 at 20:21, supo...@logicworks.pt <supo...@logicworks.pt> wrote:

# df -h /home/brick1
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-home  1,8T  1,8T   18G 100% /home
 
# df -h /home2/brick2      
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root   50G   28G   23G  56% /
 

If sharding is enabled, the restore pauses the VM with an unknown storage error.

Thanks
José

De: "Strahil Nikolov" <hunter86...@yahoo.com>
Para: supo...@logicworks.pt
Cc: "José Ferradeira via Users" <users@ovirt.org>, "Alex McWhirter" 
<a...@triadic.us>
Enviadas: Segunda-feira, 14 De Junho de 2021 17:14:15
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues

And what is the status of the bricks:
df -h /home/brick1 /home2/brick2
When sharding is not enabled, the qcow2 disks cannot be spread between the 
bricks.
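
For reference, a rough sketch of how the shard settings are checked and what enabling them would look like. This is only safe on a volume that does not yet hold VM images; toggling sharding on (or off) for a volume with existing data can corrupt the disks, and the 64MB block size below is just the common default, not a value taken from this setup:

# check the current sharding settings
gluster volume get data1 features.shard
gluster volume get data1 features.shard-block-size
# enabling sharding on a fresh, empty volume would look like this
gluster volume set data1 features.shard on
gluster volume set data1 features.shard-block-size 64MB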

Best Regards,
Strahil Nikolov
 
    # gluster volume info data1   
  
Volume Name: data1 
Type: Distribute 
Volume ID: d7eb2c38-2707-4774-9873-a7303d024669 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 2 
Transport-type: tcp 
Bricks: 
Brick1: gs.domain.pt:/home/brick1 
Brick2: gs.domain.pt:/home2/brick2 
Options Reconfigured: 
nfs.disable: on 
transport.address-family: inet 
storage.fips-mode-rchecksum: on 
storage.owner-uid: 36 
storage.owner-gid: 36 
cluster.min-free-disk: 10% 
performance.quick-read: off 
performance.read-ahead: off 
performance.io-cache: off 
performance.low-prio-threads: 32 
network.remote-dio: enable 
cluster.eager-lock: enable 
cluster.quorum-type: auto 
cluster.server-quorum-type: server 
cluster.data-self-heal-algorithm: full 
cluster.locking-scheme: granular 
cluster.shd-wait-qlength: 10000 
features.shard: off 
user.cifs: off 
cluster.choose-local: off 
client.event-threads: 4 
server.event-threads: 4 
performance.client-io-threads: on
 

# gluster volume status data1 
Status of volume: data1 
Gluster process                             TCP Port  RDMA Port  Online  Pid 
------------------------------------------------------------------------------ 
Brick gs.domain.pt:/home/brick1    49153     0          Y       1824862 
Brick gs.domain.pt:/home2/brick2   49154     0          Y       1824880 
  
Task Status of Volume data1 
------------------------------------------------------------------------------ 
There are no active volume tasks
 
# gluster volume heal data1 info summary 
This command is supported for only volumes of replicate/disperse type. Volume 
data1 is not of type replicate/disperse 
Volume heal failed.
 

# df -h /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/ 
Filesystem           Size  Used Avail Use% Mounted on
gs.domain.pt:/data1  1,9T  1,8T   22G  99% 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1
 
thanks
José

De: "Strahil Nikolov" <hunter86...@yahoo.com>
Para: supo...@logicworks.pt
Cc: "José Ferradeira via Users" <users@ovirt.org>, "Alex McWhirter" 
<a...@triadic.us>
Enviadas: Segunda-feira, 14 De Junho de 2021 14:54:41
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues

Can you provide the output of:
gluster volume info VOLUME
gluster volume status VOLUME
gluster volume heal VOLUME info summary
df -h /rhev/data-center/mnt/glusterSD/<server>:_<volume>

In pure replica volumes, the bricks should be of the same size. If not, the smallest one defines the size of the volume. If the VM has thin qcow2 disks, they will grow slowly until they reach their maximum size or until the volume runs out of space.
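
As a quick illustration (the placeholders below stand for the storage-domain, image and volume UUIDs, which differ per setup), qemu-img reports both the virtual size and the space a thin disk actually occupies, so you can see how far it has grown:

# compare "virtual size" (the ceiling the disk can grow to) with "disk size" (current allocation)
qemu-img info /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/<sd_uuid>/images/<image_uuid>/<volume_uuid>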
Best Regards,
Strahil Nikolov


Well, I have one brick without space, 1.8TB. In fact I don't know why, because I only have one VM on that storage domain with less than 1TB.
When I try to start the VM I get this error:

VM webmail.domain.pt-3 is down with error. Exit message: Unable to set XATTR 
trusted.libvirt.security.selinux on 
/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/d680d289-bcaa-46f2-b464-4d06d37ec1d3/images/5167f58d-68c9-475f-8b88-f278b7d4ef65/9b34eff0-c9a4-48e1-8ea7-87ad66a8736c:
 No space left on device.
I'm stuck here.
Thanks

José
De: "Strahil Nikolov" <hunter86...@yahoo.com>
Para: supo...@logicworks.pt, "José Ferradeira via Users" <users@ovirt.org>
Cc: "Alex McWhirter" <a...@triadic.us>, "José Ferradeira via Users" 
<users@ovirt.org>
Enviadas: Segunda-feira, 14 De Junho de 2021 7:21:09
Assunto: Re: [ovirt-users] Re: oVirt + Gluster issues

So, how is it going? Do you have space?

Best Regards,
Strahil Nikolov

On Thu, Jun 10, 2021 at 18:19, Strahil Nikolov <hunter86...@yahoo.com> wrote:

You need to use thick VM disks on Gluster, which has been the default behavior for a long time. Also, check the free space on all bricks. Most probably you are out of space on one of the bricks (the term for a server + mountpoint combination).
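
A quick sketch of checking free space per brick from the gluster side, using the volume name from this thread:

# "Total Disk Space" and "Disk Space Free" are reported separately for each brick,
# which makes it easy to spot the brick that has filled up
gluster volume status data1 detail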
Best Regards,
Strahil Nikolov

On Wed, Jun 9, 2021 at 12:41, José Ferradeira via Users <users@ovirt.org> wrote:
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DSNWUBL5RFCO3N3ZYBCMAFVI4WN2DRIE/
