On Thu, 2018-07-26 at 17:13 +0200, Simone Tiraboschi wrote:
>
>
> On Thu, Jul 26, 2018 at 5:09 PM Karli Sjöberg
> wrote:
> >
> > On Jul 26, 2018 15:48, Karli Sjöberg wrote:
> > > On Thu, 2018-07-26 at 14:14 +0200, Karli Sjöberg wrote:
> > > > On Thu, 2018-07-26 at 14:01 +0200, Simone
Hi Simone:
Yes, it's in a nested environment. L0 is VMware ESXi 5.5.
regards,
Bong SF
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
Hi Guys,
Any update on this, please?
Thanks,
Hari
On Fri, Jul 27, 2018 at 5:52 AM, Greg Sheremeta wrote:
> + @Tomas Jelinek do you know this one?
>
> On Wed, Jul 18, 2018 at 1:57 AM Hari Prasanth Loganathan msystechnologies.com> wrote:
>
>> Hi Karli,
>>
>> Sorry for the confusion. Initially
It sounds like there is room for improvement in the UI. Can you share a
screenshot? I'm wondering if some field level help would make sense.
I'm not familiar with the docs for HCI, but we are in the process of
refreshing the docs on ovirt.org to version 4.2.
Best wishes (and thanks for your
-- Forwarded message -
From: Jayme
Date: Sun, Jul 29, 2018, 10:09 PM
Subject: Re: [ovirt-users] Re: up and running with ovirt 4.2 and gluster
To: Mike
On this same subject one thing I'm currently hung up on re: HCI setup is
the next step in the cockpit config for glusterfs. I
> Why would they be setup by default via the cockpit if they are no longer
> needed?
>
> On Sat, Jul 28, 2018, 1:13 PM femi adegoke, wrote:
I agree, that step alone is very confusing
- vmstore is/was "export" and is no longer needed
- iso domains are no longer needed
- The only
I created a pool with 3 VMs on v4.2. Now I want to delete just one VM of that
pool. Is that possible?
Thanks
José
--
Jose Ferradeira
http://www.logicworks.pt
The vdsm log is endless. These are the last lines; hope it helps:
OSError: [Errno 13] Permission denied
2018-07-29 16:56:17,069+0200 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call
StorageDomain.getStats failed (error 350) in 0.19 seconds (__init__:573)
2018-07-29 16:56:17,097+0200 INFO
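A frequent cause of that OSError: [Errno 13] is the NFS export not being owned by vdsm:kvm (uid 36, gid 36), which vdsm runs as. A minimal check, a sketch only — DOMAIN_PATH is an assumption; point it at the mount under /rhev/data-center/mnt/ on the host:

```shell
# Check storage-domain ownership; vdsm runs as vdsm:kvm (uid 36, gid 36)
# and gets EACCES (Errno 13) if the export is owned by anyone else.
# DOMAIN_PATH is an assumption -- substitute your actual NFS mount point.
DOMAIN_PATH="${DOMAIN_PATH:-.}"
owner="$(stat -c '%u:%g' "$DOMAIN_PATH")"
if [ "$owner" = "36:36" ]; then
    echo "ownership OK (vdsm:kvm)"
else
    echo "unexpected owner $owner; on the NFS server run: chown -R 36:36 <export>"
fi
```

If the ownership is wrong, fix it on the NFS server side, not through the client mount.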
Wesley,
Try disabling compression on the ZFS filesystem.
On 07/28/2018 11:01 AM, Wesley Stewart wrote:
Windows reports about 500-600 MB/s over a 4 GB file.
However I believe I found the issue. My NFS backend is ZFS which is apparently
notorious for horrible sync writes.
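For the sync-write angle, a hedged tuning sketch (the dataset name tank/ovirt is an assumption; substitute your own). NFS clients issue synchronous writes, which ZFS honours on every commit; without a fast log device this is very slow:

```shell
# Inspect the properties relevant to NFS sync-write performance.
if command -v zfs >/dev/null 2>&1; then
    zfs get sync,compression tank/ovirt
    # Safer fix: add a fast SLOG device to absorb the sync writes:
    #   zpool add tank log /dev/nvme0n1
    # Quick but risky (up to ~5 s of acknowledged writes can be lost on
    # power failure):
    #   zfs set sync=disabled tank/ovirt
else
    echo "zfs tools not installed on this host"
fi
```

Disabling sync trades durability for throughput, so it is only reasonable on a test box or with VMs you can afford to lose.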
Every time I try to change assigned logical networks (add IP addresses,
assign a logical network, etc.) the bond0 interface winds up going
down. I have to do an ifup bond0 from the console, and the engine
reconnects. After a lot of digging, I think this is the offending
command, but I don't know
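Until the offending command is pinned down, the bond can be brought back up and inspected from the console; a sketch, with names assumed from the mail (bond0) and vdsm-client as shipped on oVirt 4.2 hosts:

```shell
# Bring the bond back and inspect its state after a failed network change.
ip link set bond0 up              # same effect as `ifup bond0`
cat /proc/net/bonding/bond0       # mode, slave state, link-failure counts
vdsm-client Host getCapabilities  # what vdsm currently reports for the bond
```

Comparing /proc/net/bonding/bond0 before and after the network change usually shows whether a slave was dropped or the whole bond was torn down and recreated.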
Hello.
I'm trying to implement the following backup scenario for my VMs:
storage domain is a volume on netapp appliance connected to oVirt cluster
via NFS3. Netapp produces its zero copy snapshots of this volume.
I want to be able to restore VMs from these snapshots, e.g.
Netapp can export snapshot
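One hedged way to get at the snapshotted data (server, volume, and snapshot names below are all assumptions): NetApp exposes read-only snapshots under the volume's .snapshot directory, so the snapshotted domain can be mounted and copied from, or cloned and attached to oVirt as a separate domain to import from:

```shell
# Mount a NetApp snapshot of the domain volume read-only and check that
# the storage-domain contents are visible (paths are assumptions).
mount -t nfs -o ro netapp:/vol/ovirt_domain/.snapshot/hourly.0 /mnt/restore
ls /mnt/restore    # should show the storage-domain UUID directory
```

Mounting the .snapshot path directly is read-only by design; to actually attach it to oVirt as an import domain you would first clone it to a writable volume.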