[ovirt-users] Re: ovirt-imageio-proxy upload speed slow

2019-09-15 Thread Mikael Öhman
> What do you mean by "small block sizes"?

Inside a VM, or directly on the mounted GlusterFS volume:
dd if=/dev/zero of=tmpfile bs=1M count=100 oflag=direct
Of course a terrible way to write data, but other things, like compiling
software inside one of the VMs, were also terribly slow, 5-10x slower than on
the bare hardware, and consisting of almost nothing but idling.
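As a rough sketch, the same kind of dd test at a few block sizes makes the
gap obvious (the mount path below is only a placeholder; point it at your own
GlusterFS mount):

  for bs in 4K 64K 1M; do
    dd if=/dev/zero of=/path/to/gluster/mount/tmpfile bs=$bs count=200 \
       oflag=direct 2>&1 | tail -n 1
  done
  rm -f /path/to/gluster/mount/tmpfile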

Uploading disk images never got above 30 MB/s
(and I did try all the options I could find: using upload_disk.py on one of the
hosts, even through a unix socket or with the -d option, tweaking the buffer
size; none of which made any difference).
Adding an NFS volume and uploading to it, I reach over 200 MB/s.

I tried tuning a few parameters on GlusterFS but saw no improvement until I
got to network.remote-dio, which made everything listed above really fast.
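For reference, the change was just this, applied to the storage-domain volume
(the volume name is a placeholder):

  gluster volume set myvolume network.remote-dio on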

> Note that network.remote-dio is not the recommended configuration
> for ovirt, in particular if on hyperconverge setup when it can be harmful
> by delaying sanlock I/O.
> 
> https://github.com/oVirt/ovirt-site/blob/4a9b28aac48870343c5ea4d1e83a63c1...
> (Patch in discussion)

Oh, I had seen this page, thanks. Is "remote-dio=enabled" harmful as in things
breaking, or just worse performance?
I was a bit reluctant to turn it on, but after seeing that it was part of the
virt group I thought it must be safe.
Perhaps some of the other options, like "performance.strict-o-direct on", would
solve my performance issues in a nicer way (I will test that out first thing on
Monday).
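What I plan to test looks something like this (again, the volume name is just
a placeholder):

  gluster volume set myvolume performance.strict-o-direct on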

Thanks (I'm not much of a filesystem guy, thanks for putting up with my 
ignorance)


[ovirt-users] Re: ovirt-imageio-proxy upload speed slow

2019-09-13 Thread Mikael Öhman
Perhaps this will help future sysadmins learn from my mistake.
I also saw very poor upload speeds (~30 MB/s) no matter what I tried, and went
through the whole route with unix sockets and whatnot.

But in the end it just turned out that GlusterFS itself was the bottleneck:
abysmal performance for small block sizes.

I found the list of performance tweaks that Red Hat suggests. In particular,
it was the "network.remote-dio=on" setting that made all the difference.
Almost 10x faster.

https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/configuring_red_hat_enterprise_virtualization_with_red_hat_gluster_storage/chap-hosting_virtual_machine_images_on_red_hat_storage_volumes
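To see what a volume currently has set for the relevant options, something
like this works (volume name is a placeholder):

  gluster volume get myvolume all | grep -E 'remote-dio|strict-o-direct'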


[ovirt-users] Disk stuck initializing

2019-05-09 Thread Mikael Öhman
I created an extra disk and attached it to a VM, which I used for several
months. When it was time to remove it (as I intended to replace it) I managed
to detach it, but before I could remove it, it got stuck with status
"initializing".
I wish to delete this disk, as I'm done using it, but all options are grayed
out in the UI.

The disk is not attached to anything.
The underlying storage domain, which runs GlusterFS across the 3 oVirt
hypervisors, is showing no issues. Several other disks on it are running with
no issues.

The only thing I could find in the engine log was:

2019-05-09 17:03:39,693+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(ServerService Thread Pool -- 72) [d3615da6-823c-4421-ad00-6d8060b8b291]
Lock Acquired to object 'EngineLock:{exclusiveLocks='',
sharedLocks='[0498e532-a1de-4281-953d-7dec31271437=DISK]'}'

What can I do to clean this up and free the used space?

Best regards, Mikael


[ovirt-users] Re: Debugging "non_operational" host during self-hosted deploy

2018-12-04 Thread Mikael Öhman
My coworker spotted that Open vSwitch had picked up the external
address/hostname of the host (despite us choosing internal hostnames/IPs
everywhere). We restarted from a clean slate, removing all networks except the
one we wished to deploy on, and now the deploy works fine.
Though, I wish the logs had been a bit more verbose about an error like this.
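(For anyone hitting the same thing: what Open vSwitch believes its hostname
to be can be checked with something like the command below; the exact
external_ids key name is from memory, so verify it against your ovs-vsctl.)

  ovs-vsctl get Open_vSwitch . external_ids:hostname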

Best regards, Mikael


[ovirt-users] Debugging "non_operational" host during self-hosted deploy

2018-12-04 Thread Mikael Öhman
The "bootstrap_local_vm.yml" playbook fails at the end, during the task "Wait 
for the host to be up"
Looking through the ovirt-hosted-engine-setup-bootstrap_local_vm log, I found 
the reason is supposed to be:

"status": "non_operational",
"status_detail": "network_unreachable",

But that's it. I can't find anything wrong with any networks, neither on the
host nor in the partly prepared HE VM.
Is there some verbose information I can dump to find out why it thinks the 
network is unreachable?
I can't find any logs indicating any issues until this step.


[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2018-11-09 Thread Mikael Öhman
This is, from what I can see, the last update on Skylake-Server support in
oVirt. Am I correct in understanding that this was never backported to 4.2?
I'm at 4.2.7 and would like to use Skylake-Server, but it still seems to be
unavailable.

As you mention backporting, I assume it is/will be in 4.3?

And as the 4.3 release isn't coming anytime soon, is it recommended to apply
Tobias's "hack", or should I attempt to use some type of CPU passthrough for
now (though I don't see a trivial way to enable that either)?
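As a side note, to see which CPU model names libvirt itself knows about on a
host (independently of what the engine exposes), something like this should
work:

  virsh cpu-models x86_64 | grep -i skylake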

Best regards, Mikael