Hi Sahina,

I just deleted the volume and created a new one. The engine still keeps
showing the errors from the old volume in the Tasks pane.

I ran this command:

PGPASSWORD=<password> /usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -u engine -d engine

But it didn't clear the messages, nor unlock the gluster volume in the
engine's Volumes window.

Any idea?

Thanks

José
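P.S. In the same dbutils directory there is also
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh, which looks like the
tool meant for stuck locks. I have not tried it, and I am not sure it covers
gluster volume entities or that these are the right flags for this version
(unlock_entity.sh --help should say), but something like this should at
least list what the engine thinks is locked:

PGPASSWORD=<password> /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -u engine -d engine -t all -q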
________________________________
From: "Sahina Bose" <sab...@redhat.com>
To: supo...@logicworks.pt
Cc: "users" <users@ovirt.org>
Sent: Tuesday, January 8, 2019 2:17:02 PM
Subject: Re: [ovirt-users] Re: Disk full

On Tue, Jan 8, 2019 at 6:37 PM <supo...@logicworks.pt> wrote:
>
> Hi Sahina,
>
> I still have the disk full; the idea is to delete the gluster volume and
> create a new one.
> In the engine, when I try to put the gluster volume into maintenance, it
> stays in the locked state and does not go to maintenance. Even when I try
> to destroy it, it is not allowed because an operation is in progress.
> I did a gluster volume stop, but I don't know if I can do a gluster
> volume delete.

You can delete the volume, if you do not need the data. The other
option is to delete the disks from the gluster volume mount point.
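For example, something along these lines should work (a sketch only: the
volume name and brick path are taken from your earlier "gluster volume info"
output, so please double-check everything before running destructive
commands). Note that "gluster volume delete" removes only the volume
definition, not the data on the brick, so the brick directory has to be
cleaned up separately to reclaim the space:

# gluster volume stop gv0
# gluster volume delete gv0
# rm -rf /home/brick1/*

Alternatively, to keep the volume and remove only the leftover disk image,
look under the storage domain directory on the mount point, typically
something like <mountpoint>/<storage-domain-id>/images/<image-id>/, and
remove the image directory that the engine no longer references.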
>
> Any help?
>
> Thanks
>
> José Ferradeira
>
> ________________________________
> From: supo...@logicworks.pt
> To: "Sahina Bose" <sab...@redhat.com>
> Cc: "users" <users@ovirt.org>
> Sent: Thursday, December 20, 2018 12:25:08 PM
> Subject: [ovirt-users] Re: Disk full
>
> We moved the VM disk to the second gluster volume. In ovirt-engine I
> cannot see the old disk, only the disk attached to the VM on the second
> gluster volume.
> We keep getting the errors about the disk being full.
> Using the CLI I can see the image on the first gluster volume. So
> ovirt-engine was able to move the disk to the second volume but did not
> delete it from the first volume.
>
> # gluster volume info gv0
>
> Volume Name: gv0
> Type: Distribute
> Volume ID: 4aaffd24-553b-4a85-8c9b-386b02b30b6f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: gfs1.growtrade.pt:/home/brick1
> Options Reconfigured:
> features.shard-block-size: 512MB
> network.ping-timeout: 30
> storage.owner-gid: 36
> storage.owner-uid: 36
> user.cifs: off
> features.shard: off
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: enable
> performance.low-prio-threads: 32
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
>
>
> Thanks
>
> ________________________________
> From: "Sahina Bose" <sab...@redhat.com>
> To: supo...@logicworks.pt, "Krutika Dhananjay" <kdhan...@redhat.com>
> Cc: "users" <users@ovirt.org>
> Sent: Thursday, December 20, 2018 11:53:39 AM
> Subject: Re: [ovirt-users] Disk full
>
> Is it possible for you to delete the old disks from the storage domain
> (you can use the ovirt-engine UI)? Do you continue to see the space used
> despite doing that?
> I see that you are on a much older version of gluster. Have you
> considered updating to 3.12?
>
> Please also provide the output of "gluster volume info <volumename>"
>
> On Thu, Dec 20, 2018 at 3:56 PM <supo...@logicworks.pt> wrote:
> >
> > Yes, I can see the image on the volume.
> >
> > Gluster version:
> > glusterfs-client-xlators-3.8.12-1.el7.x86_64
> > glusterfs-cli-3.8.12-1.el7.x86_64
> > glusterfs-api-3.8.12-1.el7.x86_64
> > glusterfs-fuse-3.8.12-1.el7.x86_64
> > glusterfs-server-3.8.12-1.el7.x86_64
> > glusterfs-libs-3.8.12-1.el7.x86_64
> > glusterfs-3.8.12-1.el7.x86_64
> >
> >
> > Thanks
> >
> > José
> >
> > ________________________________
> > From: "Sahina Bose" <sab...@redhat.com>
> > To: supo...@logicworks.pt
> > Cc: "users" <users@ovirt.org>
> > Sent: Wednesday, December 19, 2018 4:13:16 PM
> > Subject: Re: [ovirt-users] Disk full
> >
> > Do you see the image on the gluster volume mount? Can you provide the
> > gluster volume options and the version of gluster?
> >
> > On Wed, 19 Dec 2018 at 4:04 PM, <supo...@logicworks.pt> wrote:
> >>
> >> Hi,
> >>
> >> I have an all-in-one installation with 2 gluster volumes.
> >> The disk of one VM filled up the brick, which is a partition. That
> >> partition has 0% free disk space.
> >> I moved the disk of that VM to the other gluster volume, and the VM is
> >> working with the disk on the other gluster volume.
> >> When I moved the disk, it was not deleted from the brick, and the
> >> engine keeps complaining that there is no more disk space on that
> >> volume.
> >> What can I do?
> >> Is there a way to prevent this in the future?
> >>
> >> Many thanks
> >>
> >> José
> >>
> >>
> >>
> >> --
> >> ________________________________
> >> Jose Ferradeira
> >> http://www.logicworks.pt
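P.S. On my older question about preventing this in the future: from the
gluster docs it looks like the cluster.min-free-disk option (10% by default)
tells the distribute layer to stop placing new files on bricks that are
nearly full, e.g.:

gluster volume set gv0 cluster.min-free-disk 15%

I have not tested it, and with a single-brick volume there is nowhere else
to place new files anyway, so monitoring brick usage is probably the only
real protection here.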
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q6IFOC22P6GDI4AULH2LK6XTCQU24BLP/