Hi,
it took some time to answer because of some other things going on, but now I have
had the time to look into it.
On 21.08.2018 at 17:02, Michal Skrivanek wrote:
[...]
Hi Bernhard,
With the latest version of ovirt-imageio and v2v we are
performing quite nicely, and without specifying
the differe
Moving a disk from one gluster domain to another fails, whether the VM is
running or down.
It strikes me that it says: File
"/usr/lib64/python2.7/site-packages/libvirt.py", line 718, in blockCopy
if ret == -1: raise libvirtError('virDomainBlockCopy() failed', dom=self)
I'm sending the rele
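For what it's worth, the failing libvirt call can be exercised directly with virsh to get libvirt's own error message; the domain name, disk target and destination path below are placeholders, not values from this setup:

  # rough reproduction of the same libvirt blockCopy call (all names are placeholders)
  virsh blockcopy myvm vda /rhev/data-center/mnt/glusterSD/host:_dest/copy.img --wait --verbose
  # inspect any block job still running on that disk
  virsh blockjob myvm vda --info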
Hi, it should be possible, as oVirt is able to support NFS 4.1. I have a
Synology NAS which also supports this version of the protocol, but I never
found the time to set this up and test it until now. Regards
On 30-Aug-2018 12:16:32 +0200, xrs...@xrs444.net wrote:
Hello all, I've
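If it helps anyone trying the same, a quick manual check that the export really negotiates NFS 4.1 before adding it as a storage domain might look like this (the server name and export path are placeholders for a Synology share):

  # mount the export by hand and confirm the negotiated protocol version
  mount -t nfs -o vers=4.1 nas.example.com:/volume1/ovirt /mnt/nfstest
  nfsstat -m    # the mount should report vers=4.1
  umount /mnt/nfstest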
Hi All,
I would like to understand how to find the point of failure when starting from
the Event shown in the GUI. With that event I get a correlation ID; how
would I trace all the subsequent tasks, actions or events that are connected to
that correlation ID?
Is it something like Correla
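In case it is useful, one rough way to follow a correlation ID is simply to grep for it in the engine log and in the vdsm log of the host that ran the operation (the ID below is a placeholder, the paths are the defaults):

  # on the engine host
  grep "1a2b3c4d" /var/log/ovirt-engine/engine.log*
  # on the hypervisor that executed the command
  grep "1a2b3c4d" /var/log/vdsm/vdsm.log*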
Hi,
thanks for your answer. I tried letting it restart automatically, but it makes
no difference.
vdsm-client Volume getInfo shows the correct values,
manually looking at the metadata file on the storage domain shows the correct
values,
and ssh-ing into the machine and running lsblk shows the correct value.
Only
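For reference, the checks mentioned above were roughly the following (all UUIDs are placeholders taken from the disk's properties in the GUI):

  # query the volume metadata directly on the host
  vdsm-client Volume getInfo storagepoolID=<pool-uuid> storagedomainID=<domain-uuid> \
      imageID=<image-uuid> volumeID=<volume-uuid>
  # inside the guest, confirm the kernel sees the new size
  lsblk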
On Thu, 13 Sep 2018 11:08:28 +0200
Robert O'Kane wrote:
> Hello,
>
> I have a similar issue with ovirt-provider-ovn.
>
> But in my config I see:
>
> ovirt-sso-client-secret=to_be_set
>
> Where do I find / how do I generate this token?
>
Usually engine-setup will generate an appropriate aut
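As far as I understand, engine-setup writes the generated secret into the provider's configuration on the engine host, so it can be checked, and regenerated by re-running setup, roughly like this; the path is the package default and may differ in your layout:

  # look for the secret engine-setup generated for the provider
  grep -r "ovirt-sso-client-secret" /etc/ovirt-provider-ovn/
  # re-running setup on the engine host should (re)register the SSO client
  engine-setup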
Hi,
but isn't performance.strict-o-direct one of the options enabled by
gdeploy during installation because it's supposed to give some sort of
benefit?
Paolo
On 14/09/2018 at 11:34, Leo David wrote:
> performance.strict-o-direct: on
> This was the bloody option that created the bottleneck!
performance.strict-o-direct: on
This was the bloody option that created the bottleneck! It was ON.
So now I get an average of 17k random writes, which is not bad at all.
Below are the volume options that worked for me:
performance.strict-write-ordering: off
performance.strict-o-direct: off
server.
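For anyone wanting to try the same, checking and flipping these options from the command line is straightforward (the volume name "data" is a placeholder):

  # show the current value, then switch the options off
  gluster volume get data performance.strict-o-direct
  gluster volume set data performance.strict-o-direct off
  gluster volume set data performance.strict-write-ordering off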
Hi Everyone,
So I have decided to take out all of the gluster volume custom options
and add them back one by one, activating/deactivating the storage domain and
rebooting one VM after each added option :(
The default options that give bad IOPS (~1-2k) performance are:
performance.stat-prefetc
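A rough sketch of that reset-and-reapply cycle, assuming a volume called "data":

  # drop a single custom option back to its default, or all of them at once
  gluster volume reset data performance.stat-prefetch
  gluster volume reset data
  # list what is currently in effect before re-adding options one by one
  gluster volume get data all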
I've managed to upgrade them now by removing logical volumes. Usually it's just
/dev/onn/home, but on one I had to keep reinstalling to see where it failed, so I had to:
lvremove /dev/onn/ovirt-node-ng-4.2.6.1-0.20180913.0+1
lvremove /dev/onn/var_crash
lvremove /dev/onn/var_log
lvremove /dev/onn/var_log_
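Before removing anything it is worth listing what actually exists; a minimal sketch, assuming the default onn volume group of oVirt Node:

  # list logical volumes and free space in the node's volume group
  lvs onn
  vgs onn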
Hi, there was a memory leak in the gluster client that is fixed in
release 3.12.13
(https://github.com/gluster/glusterdocs/blob/master/docs/release-notes/3.12.13.md).
What version of gluster are you using?
Paolo
On 11/09/2018 at 16:51, Endre Karlson wrote:
> Hi, we are seeing some issues wher
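Checking which client version the hypervisors actually run is quick, for example:

  # on each hypervisor
  rpm -qa | grep glusterfs
  glusterfs --version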
On Thu, Sep 13, 2018 at 5:19 PM Pötter, Ulrich <
ulrich.poet...@hhi.fraunhofer.de> wrote:
>
>
> This worked. The VM now has a larger disk, and the metadata on the storage
> domain shows the new value (vdsm-client Volume getInfo ... too).
> Unfortunately the virtual size of the disk shown in the oVirt
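One way to see what the engine itself has stored for that disk, independent of the GUI, is the REST API; the engine FQDN, credentials and disk UUID below are placeholders:

  # ask the engine for its view of the disk's provisioned size
  curl -k -u admin@internal:password \
      "https://engine.example.com/ovirt-engine/api/disks/<disk-uuid>" | grep provisioned_size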