Re: [ovirt-users] Re: Any way to terminate stuck export task

2021-07-05 Thread Strahil Nikolov
That NFS export looks like it is not properly configured -> nobody:nobody is not
supposed to be seen.
Change the ownership on the NFS side to 36:36. Also, you can define
(all_squash,anonuid=36,anongid=36) as export options.
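For example, a minimal sketch on the NFS server side (the export path and
client subnet below are placeholders, adjust them to your setup):

    chown -R 36:36 /nas/EXPORT-DOMAIN

    # /etc/exports
    /nas/EXPORT-DOMAIN  172.16.1.0/24(rw,sync,all_squash,anonuid=36,anongid=36)

    exportfs -ra    # re-read /etc/exports and apply the new options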
 

Best Regards,
Strahil Nikolov
 
  On Mon, Jul 5, 2021 at 12:52, Gianluca Cecchi wrote:


Re: [ovirt-users] Re: Any way to terminate stuck export task

2021-07-04 Thread Strahil Nikolov
Isn't it better to strace it before killing qemu-img?
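For example, attaching to the running process (the PID below is a placeholder):

    strace -f -tt -T -o qemu-img.strace -p <qemu-img-pid>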
Best Regards,
Strahil Nikolov
 
 
  On Sun, Jul 4, 2021 at 0:15, Nir Soffer wrote:

On Sat, Jul 3, 2021 at 3:46 PM Gianluca Cecchi wrote:
>
> Hello,
> in oVirt 4.3.10 an export job to an export domain takes too long, probably
> due to the NFS server being slow.
> How can I stop the task in a clean way?
> I see the exported file always remains at 4.5 GB in size.
> Running vmstat on the host with the qemu-img process shows no throughput,
> but blocked processes:
>
> procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
>  r  b   swpd      free   buff    cache   si   so    bi    bo   in   cs us sy id wa st
>  1  2      0 170208752 474412 16985752    0    0   719    72 2948 5677  0  0 96  4  0
>  0  2      0 170207184 474412 16985780    0    0  3580    99 5043 6790  0  0 96  4  0
>  0  2      0 170208800 474412 16985804    0    0  1379    41 2332 5527  0  0 96  4  0
>
> and the generated file refreshes its timestamp but not the size
>
> # ll -a  
> /rhev/data-center/mnt/172.16.1.137:_nas_EXPORT-DOMAIN/20433d5d-9d82-4079-9252-0e746ce54106/images/125ad0f8-2672-468f-86a0-115a7be287f0/
> total 4675651
> drwxr-xr-x.  2 vdsm kvm       1024 Jul  3 14:10 .
> drwxr-xr-x. 12 vdsm kvm       1024 Jul  3 14:10 ..
> -rw-rw.  1 vdsm kvm 4787863552 Jul  3 14:33 bb94ae66-e574-432b-bf68-7497bb3ca9e6
> -rw-r--r--.  1 vdsm kvm        268 Jul  3 14:10 bb94ae66-e574-432b-bf68-7497bb3ca9e6.meta
>
> # du -sh  
> /rhev/data-center/mnt/172.16.1.137:_nas_EXPORT-DOMAIN/20433d5d-9d82-4079-9252-0e746ce54106/images/125ad0f8-2672-468f-86a0-115a7be287f0/
> 4.5G    
> /rhev/data-center/mnt/172.16.1.137:_nas_EXPORT-DOMAIN/20433d5d-9d82-4079-9252-0e746ce54106/images/125ad0f8-2672-468f-86a0-115a7be287f0/
>
> The VM has two disks, 35 GB and 300 GB, not full but quite occupied.
>
> Can I simply kill the qemu-img processes on the chosen hypervisor (I suppose 
> the SPM one)?

Killing the qemu-img process is the only way to stop qemu-img. The system
is designed to clean up properly after qemu-img terminates.
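For example, on the SPM host, something along these lines (a sketch;
double-check that the PID belongs to the stuck export before killing):

    ps -ef | grep '[q]emu-img convert'    # find the PID of the stuck copy
    kill <pid>                            # SIGTERM; the system cleans up after it exits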

If this capability is important to you, you can file an RFE to allow aborting
jobs from the engine UI/API. This is already implemented internally, but the
capability is not exposed yet.

It would be useful to understand why qemu-img convert does not make progress.
If you can reproduce this by running qemu-img from the shell, it can be useful
to run it via strace and ask about it on the qemu-block mailing list.

Example strace usage:

    strace -o convert.log -f -tt -T qemu-img convert ...

Also, the output of nfsstat during the copy can help.
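For example (run on the host doing the copy):

    nfsstat -m    # NFS mount options actually in use
    nfsstat -c    # client-side NFS/RPC operation counters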

Nir