[ovirt-users] Re: Please Help, Ovirt Node Hosted Engine Deployment Problems 4.3.2

2019-04-30 Thread Yedidyah Bar David
On Tue, Apr 30, 2019 at 4:09 PM Todd Barton
 wrote:
>
> Thanks a bunch for the reply Didi and Simone.  I will admit this last setup 
> was a bit of a wild attempt to see if I could get it working somehow, so maybe 
> it wasn't the best example to submit...and yeah, should have been /24 
> subnets.  Initially I tried the single nic setup, but the outcome seemed to 
> be the same scenario.
>
> Honestly I've run through this setup so many times in the last week it's all a 
> blur.  I started messing with multiple nics in my latest attempts to see if 
> this was something specific I should do in a cockpit setup, as one of the 
> articles I read suggested multiple interfaces to separate traffic.
>
> My "production" 4.0 environment (currently a failed upgrade with a down host 
> that I can't seem to get back online) is 3 host gluster on 4 bonded 1Gbps 
> links.  With the exception of the upgrade issue/failure, it has been 
> rock-solid with good performance and I've only restarted hosts on upgrades in 
> 4+ years.  There are a few networking changes i would like to make in a 
> rebuild, but I wanted to test various options before implementing.  Getting a 
> single nic environment was the initial goal to get started.
>
> I'm doing this testing in a virtualized setup with pfsense as the 
> firewall/router and I can set up hosts/nics however I want.  I will start over 
> again with a more straightforward setup and get more data on the failure.  
> Considering I can set up the environment how I want, what would be your 
> recommended config for a single nic (or single bond) setup using cockpit?  
> Static IPs with host-file resolution, DHCP with MAC-specific IPs, etc.

Much of this decision is a matter of personal preference, acquaintance
with the relevant technologies and the tooling you have around them,
local needs/policies/mandates, existing infrastructure,
etc.

If you search the net, e.g. for "ovirt best practices" or "RHV best
practices", you can find various articles etc. that can provide some
good guidelines/ideas.

I suggest reading around a bit, then spending some good time on planning,
then carefully and systematically implementing your design, verifying
each step right after doing it. When you run into problems, tell us
:-). Ideally, IMO, you should not give up on your design due to such
problems and resort to workarounds, inferior (in your eyes) solutions, etc.,
unless you manage to find existing open bugs that describe your
problem and you decide you can't wait until they are solved. Instead,
try to fix problems, perhaps with the list members' help.

I realize that spending a week on what is, in your perception, a simple,
straightforward task does not leave you in the best mood for such a
methodical next attempt. Perhaps first take a break and do something
else :-), then start from a clean and fresh hardware/software
environment and mind.

Good luck and best regards,

>
> Thank you,
>
> Todd Barton
>
>
>
>
>  On Tue, 30 Apr 2019 05:20:04 -0400 Simone Tiraboschi 
>  wrote 
>
>
>
> On Tue, Apr 30, 2019 at 9:50 AM Yedidyah Bar David  wrote:
>
> On Tue, Apr 30, 2019 at 5:09 AM Todd Barton
>  wrote:
> >
> > I'm having to rebuild an environment that started back in the early 3.x 
> > days.  A lot has changed and I'm attempting to use the Ovirt Node based 
> > setup to build a new environment, but I can't get through the hosted engine 
> > deployment process via the cockpit (I've done command line as well).  I've 
> > tried static DHCP address and static IPs as well as confirmed I have 
> > resolvable host-names.  This is a test environment so I can work through 
> > any issues in deployment.
> >
> > When the cockpit is displaying the waiting for host to come up task, the 
> > cockpit gets disconnected.  It appears to happen when the bridge network 
> > is setup.  At that point, the deployment is messed up and I can't return to 
> > the cockpit.  I've tried this with one or two nic/interfaces and tried 
> > every permutation of static and dynamic ip addresses.  I've spent a week 
> > trying different setups and I've got to be doing something stupid.
> >
> > Attached is a screen capture of the resulting IP info after my latest try 
> > failing.  I used two nics, one for the gluster and bridge network and the 
> > other for the ovirt cockpit access.  I can't access cockpit on either ip 
> > address after the failure.
> >
> > I've attempted this setup as both a single host hyper-converged setup and a 
> > three host hyper-converged environment...same issue in both.
> >
> > Can someone please help me or give me some thoughts on what is wrong?
>
> There are two parts here: 1. Fix it so that you can continue (and so
> that if it happens to you on production, you know what to do) 2. Fix
> the code so that it does not happen again. They are not necessarily
> identical (or even very similar).
>
> At the point in time of taking the screen capture:
>
> 1. Did the ovirtmgmt bridge get the IP address of the intended nic? Which one?
>
> 2. Did you check routing? Default gateway, or perhaps you had/have
> specific other routes?

[ovirt-users] Re: ovirt 4.3.3 Disk for New VM is always Preallocated

2019-04-30 Thread Steffen Luitz
Hi,

I strongly suspect this is the same problem I reported a few days ago as 
“New disk creation very slow after upgrade to 4.3.3”, in particular this:

In the UI the default is "preallocated", but changing it to thin provision does 
not make any difference; regardless of this setting, a fallocate process gets 
started on the SPM host and takes forever to create the pre-allocated image.

The observed slowness is just a symptom of always creating a pre-allocated 
image.

From a cursory look at the history, the _default_ in an HC environment is now 
supposed to be set to “preallocated”.

https://github.com/oVirt/ovirt-engine/commit/8b0969cfca241821e6cb5e923a2d4a554d37f21f

I wonder if this change made it unconditional … 
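
For anyone who wants to cross-check what the engine actually stored for a 
disk, independently of the UI, here is a minimal sketch using the Python SDK 
(python-ovirt-engine-sdk4); the engine URL, credentials and disk name are 
placeholders:

# Minimal sketch: query a disk's allocation settings via the oVirt Python SDK.
# URL, credentials and disk name are placeholders -- adjust to your setup.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,  # or pass ca_file='/path/to/ca.pem' instead
)
disks_service = connection.system_service().disks_service()
for disk in disks_service.list(search='name=mydisk'):
    # sparse=True with format=cow is thin provisioned;
    # sparse=False with format=raw is preallocated.
    print(disk.name, disk.format, disk.sparse)
connection.close()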

 Best regards — Steffen


> On Apr 29, 2019, at 1:53 PM, Strahil Nikolov  wrote:
> 
> Hi All,
> 
> I have stumbled upon a potential bug in the UI. Can someone test it, in order 
> to reproduce it?
> 
> How to reproduce:
> 1. Create a VM
> 2. Within the new VM wizard - Create a disk and select Thin Provision.
> 3. Create the VM and wait for the disk to be completed.
> 4. Check the disk within UI - Allocation Policy is set to "Preallocation" 
> 
> Best Regards,
> Strahil Nikolov
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FUB67DZTZHNRB65QUV2FFCOA257564QM/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XWUFQCFUT22MQFD6WETAJ5QPEAYKTKTK/


[ovirt-users] Re: Update storage lease via python sdk

2019-04-30 Thread klaasdemter
Hi,
with a little help from Red Hat my current solution looks like this:
https://github.com/Klaas-/ovirt-engine-sdk/blob/663cc06516f9ace45ba046a3b2ba14a6724cfb8a/sdk/examples/change_vm_lease_storage_domain.py#L51-L83

It uses a correlation_id to find the running job via jobs_service and then 
watches it until it's finished.
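
In case it helps others before the PR lands, the shape of it is roughly this 
(a minimal sketch along the lines of the linked example; engine URL, 
credentials and UUIDs are placeholders):

# Sketch: tag the lease update with a correlation id, then poll the engine's
# jobs for that id until the job leaves the STARTED state.
import time
import uuid

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)
correlation_id = str(uuid.uuid4())
vm_service = connection.system_service().vms_service().vm_service('vm-uuid')

# Move the VM lease to another storage domain, tagging the request so the
# resulting job can be found again.
vm_service.update(
    types.Vm(
        lease=types.StorageDomainLease(
            storage_domain=types.StorageDomain(id='target-sd-uuid'),
        ),
    ),
    query={'correlation_id': correlation_id},
)

# Watch the job until it is no longer running.
jobs_service = connection.system_service().jobs_service()
while True:
    jobs = jobs_service.list(search='correlation_id=%s' % correlation_id)
    if jobs and all(job.status != types.JobStatus.STARTED for job in jobs):
        break
    time.sleep(2)
connection.close()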

I'll try to put a PR in through gerrit within the next few days.

Greetings
Klaas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z6JIN6DVVSQORMF6KJSE2LZD432P32AQ/


[ovirt-users] Re: 4.3.x upgrade and issues with OVN

2019-04-30 Thread Weber, Charles (NIH/NIA/IRP) [E]
A complete shutdown of the 4.3.3 cluster, changing the cluster from 4.2 to 4.3 
compatibility, and upgrading all hosts to the current 4.3.3 with patches has 
seemingly fixed everything.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UYYSDXCZISZ3FJMQFVWK2XWIPPJX5GP4/


[ovirt-users] Re: HostedEngine Deployment fails activating NFS Storage Domain hosted_storage via GUI Step 4

2019-04-30 Thread Simone Tiraboschi
On Tue, Apr 30, 2019 at 2:55 PM Ralf Schenk  wrote:

> Hello,
>
> that is definitely not my problem. I did a completely new deployment (after
> rebooting the host)
>
> Before deploying on my storage:
> root@storage-rx:/srv/nfs/ovirt/hosted_storage# ls -al
> total 17
> drwxrwxr-x 2 vdsm vdsm 2 Apr 30 13:53 .
> drwxr-xr-x 8 root root 8 Apr  2 18:02 ..
>
> While deploying in late stage:
> root@storage-rx:/srv/nfs/ovirt/hosted_storage# ls -al
> total 18
> drwxrwxr-x 3 vdsm vdsm 4 Apr 30 14:51 .
> drwxr-xr-x 8 root root 8 Apr  2 18:02 ..
> drwxr-xr-x 4 vdsm vdsm 4 Apr 30 14:51 d26e4a31-8d73-449d-bebc-f2ce7a979e5d
> -rwxr-xr-x 1 vdsm vdsm 0 Apr 30 14:51 __DIRECT_IO_TEST__
>
> Immediately the error occurs in GUI:
> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
> "[]". HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
> reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
> is 400."}
>

Can you please share engine.log and vdsm.log?


>
>
> Am 30.04.2019 um 13:48 schrieb Simone Tiraboschi:
>
>
>
> On Tue, Apr 30, 2019 at 1:35 PM Ralf Schenk  wrote:
>
>> Hello,
>>
>> I'm deploying HostedEngine to NFS storage. HostedEngineLocal is set up
>> and running already. But Step 4 (moving to the hosted_storage domain on NFS)
>> fails. The host is Node-NG 4.3.3.1 based.
>>
>> The intended NFS domain gets mounted on the host, but activation (I think
>> via the Engine API) fails:
>>
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
>> "[]". HTTP response code is 400.
>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
>> "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP
>> response code is 400."}
>>
>> mount in host shows:
>>
>> storage.rxmgmt.databay.de:/ovirt/hosted_storage on
>> /rhev/data-center/mnt/storage.rxmgmt.databay.de:_ovirt_hosted__storage
>> type nfs4
>> (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.16.252.231,local_lock=none,addr=172.16.252.3)
>>
>> I also sshd into the locally running engine via 192.168.122.XX and the VM
>> can mount the storage domain, too:
>>
>> [root@engine01 ~]# mount storage.rxmgmt.databay.de:/ovirt/hosted_storage
>> /mnt/ -o vers=4.1
>> [root@engine01 ~]# mount | grep nfs
>> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
>> storage.rxmgmt.databay.de:/ovirt/hosted_storage on /mnt type nfs4
>> (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.71,local_lock=none,addr=172.16.252.3)
>> [root@engine01 ~]# ls -al /mnt/
>> total 18
>> drwxrwxr-x.  3 vdsm kvm4 Apr 30 12:59 .
>> dr-xr-xr-x. 17 root root 224 Apr 16 14:31 ..
>> drwxr-xr-x.  4 vdsm kvm4 Apr 30 12:40
>> 4dc42146-b3fb-47ec-bf06-8d9bf7cdf893
>> -rwxr-xr-x.  1 vdsm kvm0 Apr 30 12:55 __DIRECT_IO_TEST__
>>
>> Anything I can do?
>>
>
> 99% that folder was dirty (it already contained something) when you
> started the deployment.
> I can only suggest cleaning that folder and starting from scratch.
>
>
>> Log-Extract of ovirt-hosted-engine-setup-ansible-create_storage_domain
>> included.
>>
>>
>>
>> --
>>
>>
>> *Ralf Schenk*
>> fon +49 (0) 24 05 / 40 83 70
>> fax +49 (0) 24 05 / 40 83 759
>> mail *r...@databay.de* 
>>
>> *Databay AG*
>> Jens-Otto-Krag-Straße 11
>> D-52146 Würselen
>> *www.databay.de* 
>>
>> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
>> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
>> Philipp Hermanns
>> Aufsichtsratsvorsitzender: Wilhelm Dohmen
>> --
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/37WY4NUSJYMA7PMZWYSU5KCMFKVBNTHS/
>>
> --
>
>
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail *r...@databay.de* 
>
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* 
>
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
> Philipp Hermanns
> Aufsichtsratsvorsitzender: Wilhelm Dohmen
> --
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 

[ovirt-users] Re: ovirt 4.3.3 Disk for New VM is always Preallocated

2019-04-30 Thread Strahil Nikolov
I have raised a bug (1704782 – ovirt 4.3.3 doesn't allow creation of VM with 
"Thin Provision"-ed disk (always preallocated)), despite not being sure if I 
have selected the right category.

Best Regards,
Strahil Nikolov

On Tuesday, April 30, 2019, 9:31:46 GMT-4, Strahil Nikolov 
 wrote: 

Hi Oliver,

can you check your version of the UI?

It seems that both VMs I had created are fully "Preallocated" instead of being 
"Thin Provisioned".

Can someone tell me in which section of Bugzilla I should open the bug?

Here is some output:

[root@ovirt1 images]# qemu-img info 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain\:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/b87f1fe7-127a-4574-b835-85202f76368a/41fcb56c-7ee0-4575-9366-72ae051444f9
image: 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/b87f1fe7-127a-4574-b835-85202f76368a/41fcb56c-7ee0-4575-9366-72ae051444f9
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 20G
[root@ovirt1 images]# qemu-img info 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain\:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/9e1065ed-fbc3-455b-a611-f650d56dadc9/aed4306e-7c45-4cf5-82ee-7bed3c9631ce
image: 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/9e1065ed-fbc3-455b-a611-f650d56dadc9/aed4306e-7c45-4cf5-82ee-7bed3c9631ce
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 20G
[root@ovirt1 images]# ssh engine "rpm -qa | grep ovirt | sort "
root@engine's password:
ovirt-ansible-cluster-upgrade-1.1.13-1.el7.noarch
ovirt-ansible-disaster-recovery-1.1.4-1.el7.noarch
ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.17-1.el7.noarch
ovirt-ansible-image-template-1.1.9-1.el7.noarch
ovirt-ansible-infra-1.1.12-1.el7.noarch
ovirt-ansible-manageiq-1.1.13-1.el7.noarch
ovirt-ansible-repositories-1.1.5-1.el7.noarch
ovirt-ansible-roles-1.1.6-1.el7.noarch
ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch
ovirt-ansible-vm-infra-1.1.14-1.el7.noarch
ovirt-cockpit-sso-0.1.1-1.el7.noarch
ovirt-engine-4.3.3.6-1.el7.noarch
ovirt-engine-api-explorer-0.0.4-1.el7.noarch
ovirt-engine-backend-4.3.3.6-1.el7.noarch
ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch
ovirt-engine-dbscripts-4.3.3.6-1.el7.noarch
ovirt-engine-dwh-4.3.0-1.el7.noarch
ovirt-engine-dwh-setup-4.3.0-1.el7.noarch
ovirt-engine-extension-aaa-jdbc-1.1.10-1.el7.noarch
ovirt-engine-extension-aaa-ldap-1.3.9-1.el7.noarch
ovirt-engine-extension-aaa-ldap-setup-1.3.9-1.el7.noarch
ovirt-engine-extensions-api-impl-4.3.3.6-1.el7.noarch
ovirt-engine-metrics-1.3.0.2-1.el7.noarch
ovirt-engine-restapi-4.3.3.6-1.el7.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-engine-setup-4.3.3.6-1.el7.noarch
ovirt-engine-setup-base-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-cinderlib-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.3.3.6-1.el7.noarch
ovirt-engine-tools-4.3.3.6-1.el7.noarch
ovirt-engine-tools-backup-4.3.3.6-1.el7.noarch
ovirt-engine-ui-extensions-1.0.4-1.el7.noarch
ovirt-engine-vmconsole-proxy-helper-4.3.3.6-1.el7.noarch
ovirt-engine-webadmin-portal-4.3.3.6-1.el7.noarch
ovirt-engine-websocket-proxy-4.3.3.6-1.el7.noarch
ovirt-engine-wildfly-15.0.1-1.el7.x86_64
ovirt-engine-wildfly-overlay-15.0.1-1.el7.noarch
ovirt-guest-agent-common-1.0.16-1.el7.noarch
ovirt-guest-tools-iso-4.3-2.el7.noarch
ovirt-host-deploy-common-1.8.0-1.el7.noarch
ovirt-host-deploy-java-1.8.0-1.el7.noarch
ovirt-imageio-common-1.5.1-0.el7.x86_64
ovirt-imageio-proxy-1.5.1-0.el7.noarch
ovirt-imageio-proxy-setup-1.5.1-0.el7.noarch
ovirt-iso-uploader-4.3.1-1.el7.noarch
ovirt-js-dependencies-1.2.0-3.1.el7.centos.noarch
ovirt-provider-ovn-1.2.20-1.el7.noarch
ovirt-release43-4.3.3.1-1.el7.noarch
ovirt-vmconsole-1.0.7-2.el7.noarch
ovirt-vmconsole-proxy-1.0.7-2.el7.noarch
ovirt-web-ui-1.5.2-1.el7.noarch
python2-ovirt-engine-lib-4.3.3.6-1.el7.noarch
python2-ovirt-host-deploy-1.8.0-1.el7.noarch
python2-ovirt-setup-lib-1.2.0-1.el7.noarch
python-ovirt-engine-sdk4-4.3.1-2.el7.x86_64

Best Regards,
Strahil Nikolov




On Monday, April 29, 2019, 20:45:57 GMT-4, Oliver Riesener 
 wrote: 





Hi Strahil,

sorry, I can’t reproduce it on an NFS SD.

- UI and disk usage look OK: Thin Provision for disks created as Thin Provision. 
Sparse file with (0 blocks)

Second:

UI and disk usage also look OK for Preallocated. Preallocated file with 
(2097152 blocks)

Regards

Oliver


[root@ovn-elem images]# stat 
620b4bc0-3e46-4abc-b995-41f34ea84280/23bf0bea-c1ca-43fd-b9c3-bf35d9cfcd0c
  File: '620b4bc0-3e46-4abc-b995-41f34ea84280/23bf0bea-c1ca-43fd-b9c3-bf35d9cfcd0c'
  Size: 5368709120 Blocks: 0          IO Block: 4096   regular file
Device: fd12h/64786d Inode: 12884902045  Links: 1
Access: 

[ovirt-users] Re: ovirt 4.3.3 Disk for New VM is always Preallocated

2019-04-30 Thread Strahil Nikolov
 Hi Oliver,

can you check your version of the UI?

It seems that both VMs I had created are fully "Preallocated" instead of being 
"Thin Provisioned".

Can someone tell me in which section of Bugzilla I should open the bug?

Here is some output:
[root@ovirt1 images]# qemu-img info 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain\:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/b87f1fe7-127a-4574-b835-85202f76368a/41fcb56c-7ee0-4575-9366-72ae051444f9
image: 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/b87f1fe7-127a-4574-b835-85202f76368a/41fcb56c-7ee0-4575-9366-72ae051444f9
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 20G
[root@ovirt1 images]# qemu-img info 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain\:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/9e1065ed-fbc3-455b-a611-f650d56dadc9/aed4306e-7c45-4cf5-82ee-7bed3c9631ce
image: 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/9e1065ed-fbc3-455b-a611-f650d56dadc9/aed4306e-7c45-4cf5-82ee-7bed3c9631ce
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 20G
[root@ovirt1 images]# ssh engine "rpm -qa | grep ovirt | sort "
root@engine's password:
ovirt-ansible-cluster-upgrade-1.1.13-1.el7.noarch
ovirt-ansible-disaster-recovery-1.1.4-1.el7.noarch
ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.17-1.el7.noarch
ovirt-ansible-image-template-1.1.9-1.el7.noarch
ovirt-ansible-infra-1.1.12-1.el7.noarch
ovirt-ansible-manageiq-1.1.13-1.el7.noarch
ovirt-ansible-repositories-1.1.5-1.el7.noarch
ovirt-ansible-roles-1.1.6-1.el7.noarch
ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch
ovirt-ansible-vm-infra-1.1.14-1.el7.noarch
ovirt-cockpit-sso-0.1.1-1.el7.noarch
ovirt-engine-4.3.3.6-1.el7.noarch
ovirt-engine-api-explorer-0.0.4-1.el7.noarch
ovirt-engine-backend-4.3.3.6-1.el7.noarch
ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch
ovirt-engine-dbscripts-4.3.3.6-1.el7.noarch
ovirt-engine-dwh-4.3.0-1.el7.noarch
ovirt-engine-dwh-setup-4.3.0-1.el7.noarch
ovirt-engine-extension-aaa-jdbc-1.1.10-1.el7.noarch
ovirt-engine-extension-aaa-ldap-1.3.9-1.el7.noarch
ovirt-engine-extension-aaa-ldap-setup-1.3.9-1.el7.noarch
ovirt-engine-extensions-api-impl-4.3.3.6-1.el7.noarch
ovirt-engine-metrics-1.3.0.2-1.el7.noarch
ovirt-engine-restapi-4.3.3.6-1.el7.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-engine-setup-4.3.3.6-1.el7.noarch
ovirt-engine-setup-base-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-cinderlib-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.3.3.6-1.el7.noarch
ovirt-engine-tools-4.3.3.6-1.el7.noarch
ovirt-engine-tools-backup-4.3.3.6-1.el7.noarch
ovirt-engine-ui-extensions-1.0.4-1.el7.noarch
ovirt-engine-vmconsole-proxy-helper-4.3.3.6-1.el7.noarch
ovirt-engine-webadmin-portal-4.3.3.6-1.el7.noarch
ovirt-engine-websocket-proxy-4.3.3.6-1.el7.noarch
ovirt-engine-wildfly-15.0.1-1.el7.x86_64
ovirt-engine-wildfly-overlay-15.0.1-1.el7.noarch
ovirt-guest-agent-common-1.0.16-1.el7.noarch
ovirt-guest-tools-iso-4.3-2.el7.noarch
ovirt-host-deploy-common-1.8.0-1.el7.noarch
ovirt-host-deploy-java-1.8.0-1.el7.noarch
ovirt-imageio-common-1.5.1-0.el7.x86_64
ovirt-imageio-proxy-1.5.1-0.el7.noarch
ovirt-imageio-proxy-setup-1.5.1-0.el7.noarch
ovirt-iso-uploader-4.3.1-1.el7.noarch
ovirt-js-dependencies-1.2.0-3.1.el7.centos.noarch
ovirt-provider-ovn-1.2.20-1.el7.noarch
ovirt-release43-4.3.3.1-1.el7.noarch
ovirt-vmconsole-1.0.7-2.el7.noarch
ovirt-vmconsole-proxy-1.0.7-2.el7.noarch
ovirt-web-ui-1.5.2-1.el7.noarch
python2-ovirt-engine-lib-4.3.3.6-1.el7.noarch
python2-ovirt-host-deploy-1.8.0-1.el7.noarch
python2-ovirt-setup-lib-1.2.0-1.el7.noarch
python-ovirt-engine-sdk4-4.3.1-2.el7.x86_64

Best Regards,
Strahil Nikolov

On Monday, April 29, 2019, 20:45:57 GMT-4, Oliver Riesener 
 wrote: 

 Hi Strahil,

sorry, I can’t reproduce it on an NFS SD.

- UI and disk usage look OK: Thin Provision for disks created as Thin Provision. 
Sparse file with (0 blocks)

Second:

UI and disk usage also look OK for Preallocated. Preallocated file with 
(2097152 blocks)

Regards
Oliver

[root@ovn-elem images]# stat 
620b4bc0-3e46-4abc-b995-41f34ea84280/23bf0bea-c1ca-43fd-b9c3-bf35d9cfcd0c
  File: '620b4bc0-3e46-4abc-b995-41f34ea84280/23bf0bea-c1ca-43fd-b9c3-bf35d9cfcd0c'
  Size: 5368709120 Blocks: 0          IO Block: 4096   regular file
Device: fd12h/64786d Inode: 12884902045  Links: 1
Access: (0660/-rw-rw----)  Uid: (   36/    vdsm)   Gid: (   36/     kvm)
Context: system_u:object_r:unlabeled_t:s0
Access: 2019-04-30 02:23:36.170064398 +0200
Modify: 2019-04-30 02:20:48.082782687 +0200
Change: 2019-04-30 02:20:48.083782558 +0200
 Birth: -
[root@ovn-elem images]# stat 

[ovirt-users] Re: Please Help, Ovirt Node Hosted Engine Deployment Problems 4.3.2

2019-04-30 Thread Todd Barton
Thanks a bunch for the reply Didi and Simone.  I will admit this last setup was 
a bit of a wild attempt to see if I could get it working somehow, so maybe it 
wasn't the best example to submit...and yeah, should have been /24 subnets.  
Initially I tried the single nic setup, but the outcome seemed to be the same 
scenario.



Honestly I've run through this setup so many times in the last week it's all a 
blur.  I started messing with multiple nics in my latest attempts to see if this 
was something specific I should do in a cockpit setup, as one of the articles I 
read suggested multiple interfaces to separate traffic.



My "production" 4.0 environment  (currently a failed upgrade with a down host 
that I can't seem to get back online) is 3 host gluster on 4 bonded 1Gbps 
links.  With the exception of the upgrade issue/failure, it has been rock-solid 
with good performance and I've only restarted hosts on upgrades in 4+ years.  
There are a few networking changes i would like to make in a rebuild, but I 
wanted to test various options before implementing.  Getting a single nic 
environment was the initial goal to get started.



I'm doing this testing in a virtualized setup with pfsense as the 
firewall/router and I can set up hosts/nics however I want.  I will start over 
again with a more straightforward setup and get more data on the failure.  
Considering I can set up the environment how I want, what would be your 
recommended config for a single nic (or single bond) setup using cockpit?  
Static IPs with host-file resolution, DHCP with MAC-specific IPs, etc.



Thank you,


Todd Barton










 On Tue, 30 Apr 2019 05:20:04 -0400 Simone Tiraboschi  
wrote 







On Tue, Apr 30, 2019 at 9:50 AM Yedidyah Bar David  
wrote:

On Tue, Apr 30, 2019 at 5:09 AM Todd Barton

  wrote:

 >

 > I'm having to rebuild an environment that started back in the early 3.x 
 > days.  A lot has changed and I'm attempting to use the Ovirt Node based 
 > setup to build a new environment, but I can't get through the hosted engine 
 > deployment process via the cockpit (I've done command line as well).  I've 
 > tried static DHCP address and static IPs as well as confirmed I have 
 > resolvable host-names.  This is a test environment so I can work through any 
 > issues in deployment.

 >

 > When the cockpit is displaying the waiting for host to come up task, the 
 > cockpit gets disconnected.  It appears to happen when the bridge network 
 > is setup.  At that point, the deployment is messed up and I can't return to 
 > the cockpit.  I've tried this with one or two nic/interfaces and tried every 
 > permutation of static and dynamic ip addresses.  I've spent a week trying 
 > different setups and I've got to be doing something stupid.

 >

 > Attached is a screen capture of the resulting IP info after my latest try 
 > failing.  I used two nics, one for the gluster and bridge network and the 
 > other for the ovirt cockpit access.  I can't access cockpit on either ip 
 > address after the failure.

 >

 > I've attempted this setup as both a single host hyper-converged setup and a 
 > three host hyper-converged environment...same issue in both.

 >

 > Can someone please help me or give me some thoughts on what is wrong?

 

 There are two parts here: 1. Fix it so that you can continue (and so

 that if it happens to you on production, you know what to do) 2. Fix

 the code so that it does not happen again. They are not necessarily

 identical (or even very similar).

 

 At the point in time of taking the screen capture:

 

 1. Did the ovirtmgmt bridge get the IP address of the intended nic? Which one?

 

 2. Did you check routing? Default gateway, or perhaps you had/have

 specific other routes?

 

 3. What nics are in the bridge? Can you check/share output of 'brctl show'?

 

 4. Probably not related, just noting: You have there (currently on

 eth0 and on ovirtmgmt, perhaps you tried other combinations):

 10.1.2.61/16 and 10.1.1.61/16 . It seems like you wanted two different

 subnets, but are actually using a single one. Perhaps you intended to

 use 10.1.2.61/24 and 10.1.1.61/24.


 

Good catch: the issue comes exactly from here!

Please see:

https://bugzilla.redhat.com/1694626



The issue happens when the user has two interfaces configured on the same IP 
subnet, the default gateway is configured to be reached from one of the two 
interfaces and the user chooses to create the management bridge on the other 
one.

When the engine, adding the host, creates the management bridge it also tries 
to configure the default gateway on the bridge, and for some reason this 
disrupts the external connectivity on the host, so the user is going to lose it.



If you intend to use one interface for gluster and the other for the management 
network, I'd strongly suggest using two distinct subnets, having the default 
gateway on the subnet you are going to use for the management network.

[ovirt-users] Re: Please Help, Ovirt Node Hosted Engine Deployment Problems 4.3.2

2019-04-30 Thread Strahil

> Every write has to be written on the host itself and on the two remote ones, 
> so you are going to have 1000 Mbps / 2 (external replicas) / 8 (bits/byte) = a 
> max of 62.5 MB/s sustained throughput shared between all the VMs, and this 
> ignoring all the overheads.
> In practice it will be much less, ending in a barely usable environment.


I am currently in this type of setup (replica 2 arbiter 1 on 1 Gbit/s for both 
storage and oVirt connectivity) and the maximum write (sequential) speed I can 
reach is approx 89 MB/s.
I guess with replica 3 the maximum will be as Simone stated.
My maximum read speed is approx 500 MB/s despite the local NVMe being capable of 
1.3 GB/s, and I guess FUSE is not using the local brick even though I have set 
it up with cluster.choose-local (or whatever it was called).

If possible, get 10 Gbit/s connectivity or multiple Gbit interfaces (as I'm 
planning to do in the near future).

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GGV5VUKY372HU5O4HCC6CIDQSTDIEXZP/


[ovirt-users] Re: HostedEngine Deployment fails activating NFS Storage Domain hosted_storage via GUI Step 4

2019-04-30 Thread Ralf Schenk
Hello,

that is definitely not my problem. I did a completely new deployment (after
rebooting the host).

Before deploying on my storage:
root@storage-rx:/srv/nfs/ovirt/hosted_storage# ls -al
total 17
drwxrwxr-x 2 vdsm vdsm 2 Apr 30 13:53 .
drwxr-xr-x 8 root root 8 Apr  2 18:02 ..

While deploying in late stage:
root@storage-rx:/srv/nfs/ovirt/hosted_storage# ls -al
total 18
drwxrwxr-x 3 vdsm vdsm 4 Apr 30 14:51 .
drwxr-xr-x 8 root root 8 Apr  2 18:02 ..
drwxr-xr-x 4 vdsm vdsm 4 Apr 30 14:51 d26e4a31-8d73-449d-bebc-f2ce7a979e5d
-rwxr-xr-x 1 vdsm vdsm 0 Apr 30 14:51 __DIRECT_IO_TEST__

Immediately the error occurs in GUI:
[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
"[]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
"Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP
response code is 400."}


Am 30.04.2019 um 13:48 schrieb Simone Tiraboschi:
>
>
> On Tue, Apr 30, 2019 at 1:35 PM Ralf Schenk  > wrote:
>
> Hello,
>
> I'm deploying HostedEngine to NFS storage. HostedEngineLocal is set
> up and running already. But Step 4 (moving to the hosted_storage
> domain on NFS) fails. The host is Node-NG 4.3.3.1 based.
>
> The intended NFS domain gets mounted on the host, but activation (I
> think via the Engine API) fails:
>
> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail
> is "[]". HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "Fault reason is \"Operation Failed\". Fault detail is \"[]\".
> HTTP response code is 400."}
>
> mount in host shows:
>
> storage.rxmgmt.databay.de:/ovirt/hosted_storage on
> /rhev/data-center/mnt/storage.rxmgmt.databay.de:_ovirt_hosted__storage
> type nfs4
> 
> (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.16.252.231,local_lock=none,addr=172.16.252.3)
>
> I also sshd into the locally running engine via 192.168.122.XX and
> the VM can mount the storage domain, too:
>
> [root@engine01 ~]# mount
> storage.rxmgmt.databay.de:/ovirt/hosted_storage /mnt/ -o vers=4.1
> [root@engine01 ~]# mount | grep nfs
> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
> storage.rxmgmt.databay.de:/ovirt/hosted_storage on /mnt type nfs4
> 
> (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.71,local_lock=none,addr=172.16.252.3)
> [root@engine01 ~]# ls -al /mnt/
> total 18
> drwxrwxr-x.  3 vdsm kvm    4 Apr 30 12:59 .
> dr-xr-xr-x. 17 root root 224 Apr 16 14:31 ..
> drwxr-xr-x.  4 vdsm kvm    4 Apr 30 12:40
> 4dc42146-b3fb-47ec-bf06-8d9bf7cdf893
> -rwxr-xr-x.  1 vdsm kvm    0 Apr 30 12:55 __DIRECT_IO_TEST__
>
> Anything I can do?
>
>
> 99% that folder was dirty (it already contained something) when you
> started the deployment.
> I can only suggest cleaning that folder and starting from scratch.
>  
>
> Log-Extract of
> ovirt-hosted-engine-setup-ansible-create_storage_domain included.
>
>
>
> -- 
>
>
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail *r...@databay.de* 
>       
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* 
>
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari,
> Dipl.-Kfm. Philipp Hermanns
> Aufsichtsratsvorsitzender: Wilhelm Dohmen
>
> 
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org
> 
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/37WY4NUSJYMA7PMZWYSU5KCMFKVBNTHS/
>
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* 
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen



[ovirt-users] Re: HA VM not work when VM OS cannot run normally

2019-04-30 Thread du_hon...@yeah.net

Thanks Alex


Regards
Hongyu Du
 
From: Alex K
Date: 2019-04-28 15:46
To: du_hongyu
CC: users; devel
Subject: Re: Re: [ovirt-users] HA VM not work when VM OS cannot run normally


On Sun, Apr 28, 2019, 05:36 du_hon...@yeah.net  wrote:
Hi, when I run an HA VM and its OS cannot run normally (e.g. after a kernel 
crash), the qemu command does not report an error, so the engine thinks the VM 
is running and HA for the VM does not kick in.
In this case you need a watchdog configuration.
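
For reference, a watchdog device can also be attached through the Python SDK; 
here is a minimal sketch (engine URL, credentials and VM id are placeholders, 
and the guest OS still needs a watchdog daemon that pets the device):

# Sketch: attach an i6300esb watchdog that resets the VM when the guest
# stops petting it. Connection details and the VM id are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)
vm_service = connection.system_service().vms_service().vm_service('vm-uuid')
vm_service.watchdogs_service().add(
    types.Watchdog(
        model=types.WatchdogModel.I6300ESB,
        action=types.WatchdogAction.RESET,
    )
)
connection.close()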



Regards
Hongyu Du
 
From: Alex K
Date: 2019-04-27 02:03
To: du_hongyu
CC: users; devel
Subject: Re: [ovirt-users] HA VM not work when VM OS cannot run normally


On Fri, Apr 26, 2019, 12:02 du_hon...@yeah.net  wrote:
HI, I create a HA VM in ovirt follaw this article 
https://ovirt.org/develop/ha-vms.html#highly-available-vms-and-io-errors
when my vm‘s OS can not run,  but ovirt consider this vm is up, so is this a 
bug for HA VM? can you give me some advise?
Can you explain what "cannot run" means? If you need availability based on 
internal VM components, you might need to set up a watchdog. 



Regards
Hongyu Du
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6YQ6B2ZQ7LCF7NGFA4O3PSUL4A2RJXRG/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6QXDDWX5A5YFGXNSSW3KL4QPHQMHXGN6/


[ovirt-users] Re: HostedEngine Deployment fails activating NFS Storage Domain hosted_storage via GUI Step 4

2019-04-30 Thread Simone Tiraboschi
On Tue, Apr 30, 2019 at 1:35 PM Ralf Schenk  wrote:

> Hello,
>
> I'm deploying HostedEngine to NFS storage. HostedEngineLocal is set up
> and running already. But Step 4 (moving to the hosted_storage domain on NFS)
> fails. The host is Node-NG 4.3.3.1 based.
>
> The intended NFS domain gets mounted on the host, but activation (I think
> via the Engine API) fails:
>
> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
> "[]". HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
> reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
> is 400."}
>
> mount in host shows:
>
> storage.rxmgmt.databay.de:/ovirt/hosted_storage on
> /rhev/data-center/mnt/storage.rxmgmt.databay.de:_ovirt_hosted__storage
> type nfs4
> (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.16.252.231,local_lock=none,addr=172.16.252.3)
>
> I also sshd into the locally running engine via 192.168.122.XX and the VM
> can mount the storage domain, too:
>
> [root@engine01 ~]# mount storage.rxmgmt.databay.de:/ovirt/hosted_storage
> /mnt/ -o vers=4.1
> [root@engine01 ~]# mount | grep nfs
> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
> storage.rxmgmt.databay.de:/ovirt/hosted_storage on /mnt type nfs4
> (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.71,local_lock=none,addr=172.16.252.3)
> [root@engine01 ~]# ls -al /mnt/
> total 18
> drwxrwxr-x.  3 vdsm kvm4 Apr 30 12:59 .
> dr-xr-xr-x. 17 root root 224 Apr 16 14:31 ..
> drwxr-xr-x.  4 vdsm kvm4 Apr 30 12:40
> 4dc42146-b3fb-47ec-bf06-8d9bf7cdf893
> -rwxr-xr-x.  1 vdsm kvm0 Apr 30 12:55 __DIRECT_IO_TEST__
>
> Anything I can do?
>

99% that folder was dirty (it already contained something) when you started
the deployment.
I can only suggest cleaning that folder and starting from scratch.
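
As a pre-flight check before re-running the deployment, you can also verify on
the storage server that the export really is empty; a trivial sketch, with the
path being a placeholder for your NFS export:

# Trivial pre-flight check: the hosted_storage export must contain nothing,
# e.g. no leftover storage-domain UUID directory or __DIRECT_IO_TEST__ file.
import os

export = '/srv/nfs/ovirt/hosted_storage'  # placeholder: your NFS export path
leftovers = os.listdir(export)
if leftovers:
    raise SystemExit('export is dirty, clean it first: %s' % leftovers)
print('export is empty, safe to deploy')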


> Log-Extract of ovirt-hosted-engine-setup-ansible-create_storage_domain
> included.
>
>
>
> --
>
>
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail *r...@databay.de* 
>
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* 
>
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
> Philipp Hermanns
> Aufsichtsratsvorsitzender: Wilhelm Dohmen
> --
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/37WY4NUSJYMA7PMZWYSU5KCMFKVBNTHS/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MWMY3W3WBRAWSYT5UGFH2JQ4EEE64THT/


[ovirt-users] Re: Ovirt nodeNG RDMA support?

2019-04-30 Thread Sandro Bonazzola
On Fri, Apr 26, 2019 at 01:56  wrote:

> When I was able to load CentOS as a host OS, I was able to use RDMA, but
> it seems like the 4.3.x branch of nodeNG is missing RDMA support?  I enabled
> rdma and started the service, but gluster refuses to recognize that RDMA is
> available and always reports the RDMA port as 0, and when I try to make a new
> drive with tcp,rdma transport options, it always fails.
>

Sahina, any hint?


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/E73V472TTQVOLIOSRCZSC5DV47VEKIDU/
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2F5ATWVRSBV24X3QPQX6QWM6JDVIQFCO/


[ovirt-users] Re: Ovirt Nested in VMWare (Working approach)

2019-04-30 Thread andres
Hi RabidCicada,


I am trying to use nested virtualization in production to expand a 5-host 
cluster.
I want to use a single nested host on which I will not deploy the hosted engine 
VM.

This expansion will last just for a few months and our more critical VMs will 
never run on the nested host, so I am not worried if I need to make some 
adjustments in the node's code or adjust settings of some specific VMs.

Do you know if I would need to follow the same steps you did, or should I only 
need to worry about vdsm on the node?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E7BATBE5EBUDI74RTATVOOGNKH7Q2ZK2/


[ovirt-users] Re: Please Help, Ovirt Node Hosted Engine Deployment Problems 4.3.2

2019-04-30 Thread Simone Tiraboschi
On Tue, Apr 30, 2019 at 9:50 AM Yedidyah Bar David  wrote:

> On Tue, Apr 30, 2019 at 5:09 AM Todd Barton
>  wrote:
> >
> > I'm having to rebuild an environment that started back in the early 3.x
> days.  A lot has changed and I'm attempting to use the Ovirt Node based
> setup to build a new environment, but I can't get through the hosted engine
> deployment process via the cockpit (I've done command line as well).  I've
> tried static DHCP address and static IPs as well as confirmed I have
> resolvable host-names.  This is a test environment so I can work through
> any issues in deployment.
> >
> > When the cockpit is displaying the waiting for host to come up task, the
> cockpit gets disconnected.  It appears to happen when the bridge network
> is setup.  At that point, the deployment is messed up and I can't return to
> the cockpit.  I've tried this with one or two nic/interfaces and tried
> every permutation of static and dynamic ip addresses.  I've spent a week
> trying different setups and I've got to be doing something stupid.
> >
> > Attached is a screen capture of the resulting IP info after my latest
> try failing.  I used two nics, one for the gluster and bridge network and
> the other for the ovirt cockpit access.  I can't access cockpit on either
> ip address after the failure.
> >
> > I've attempted this setup as both a single host hyper-converged setup
> and a three host hyper-converged environment...same issue in both.
> >
> > Can someone please help me or give me some thoughts on what is wrong?
>
> There are two parts here: 1. Fix it so that you can continue (and so
> that if it happens to you on production, you know what to do) 2. Fix
> the code so that it does not happen again. They are not necessarily
> identical (or even very similar).
>
> At the point in time of taking the screen capture:
>
> 1. Did the ovirtmgmt bridge get the IP address of the intended nic? Which
> one?
>
> 2. Did you check routing? Default gateway, or perhaps you had/have
> specific other routes?
>
> 3. What nics are in the bridge? Can you check/share output of 'brctl show'?
>
> 4. Probably not related, just noting: You have there (currently on
> eth0 and on ovirtmgmt, perhaps you tried other combinations):
> 10.1.2.61/16 and 10.1.1.61/16 . It seems like you wanted two different
> subnets, but are actually using a single one. Perhaps you intended to
> use 10.1.2.61/24 and 10.1.1.61/24.
>

Good catch: the issue comes exactly from here!
Please see:
https://bugzilla.redhat.com/1694626

The issue happens when the user has two interfaces configured on the same
IP subnet, the default gateway is configured to be reached from one of the
two interfaces and the user chooses to create the management bridge on the
other one.
When the engine, adding the host, creates the management bridge it also
tries to configure the default gateway on the bridge, and for some reason
this disrupts the external connectivity on the host, so the user is
going to lose it.

If you intend to use one interface for gluster and the other for the
management network, I'd strongly suggest using two distinct subnets, having
the default gateway on the subnet you are going to use for the management
network.

If you want to use two interfaces for reliability reasons, I'd strongly
suggest creating a bond of the two instead.

Please also notice that deploying a three host hyper-converged environment
over a single 1 gbps interface will be really penalizing in terms of
storage performances.
Every write has to be written on the host itself and on the two remote ones,
so you are going to have 1000 Mbps / 2 (external replicas) / 8 (bits/byte)
= a max of 62.5 MB/s sustained throughput shared between all the VMs, and
this ignoring all the overheads.
In practice it will be much less, ending in a barely usable environment.
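
To make the arithmetic explicit, here is the same estimate as a tiny helper
(a rough upper bound only; as noted, real throughput will be lower):

# Back-of-the-envelope gluster write throughput for a replica-3 volume:
# line rate divided by the number of external replicas, then bits -> bytes.
def max_sustained_write_mb_s(link_mbps, external_replicas=2):
    return link_mbps / external_replicas / 8.0

print(max_sustained_write_mb_s(1000))   # 1 GbE  -> 62.5 MB/s
print(max_sustained_write_mb_s(10000))  # 10 GbE -> 625.0 MB/s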

I'd strongly suggest moving to a 10 Gbps environment if possible, or
bonding a few 1 Gbps nics for gluster.


5. Can you ping from/to these two addresses from/to some other machine
> on the network? Your laptop? The storage?
>
> 6. If possible, please check/share relevant logs, including (from the
> host) /var/log/vdsm/* and /var/log/ovirt-hosted-engine-setup/*.
>
> Thanks and best regards,
> --
> Didi
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JIYWEUXPA25BK3K23MPBISRGZN76AWV3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: Please Help, Ovirt Node Hosted Engine Deployment Problems 4.3.2

2019-04-30 Thread Yedidyah Bar David
On Tue, Apr 30, 2019 at 5:09 AM Todd Barton
 wrote:
>
> I'm having to rebuild an environment that started back in the early 3.x 
> days.  A lot has changed and I'm attempting to use the Ovirt Node based setup 
> to build a new environment, but I can't get through the hosted engine 
> deployment process via the cockpit (I've done command line as well).  I've 
> tried static DHCP address and static IPs as well as confirmed I have 
> resolvable host-names.  This is a test environment so I can work through any 
> issues in deployment.
>
> When the cockpit is displaying the waiting for host to come up task, the 
> cockpit gets disconnected.  It appears to happen when the bridge network is 
> setup.  At that point, the deployment is messed up and I can't return to the 
> cockpit.  I've tried this with one or two nic/interfaces and tried every 
> permutation of static and dynamic ip addresses.  I've spent a week trying 
> different setups and I've got to be doing something stupid.
>
> Attached is a screen capture of the resulting IP info after my latest try 
> failing.  I used two nics, one for the gluster and bridge network and the 
> other for the ovirt cockpit access.  I can't access cockpit on either ip 
> address after the failure.
>
> I've attempted this setup as both a single host hyper-converged setup and a 
> three host hyper-converged environment...same issue in both.
>
> Can someone please help me or give me some thoughts on what is wrong?

There are two parts here: 1. Fix it so that you can continue (and so
that if it happens to you on production, you know what to do) 2. Fix
the code so that it does not happen again. They are not necessarily
identical (or even very similar).

At the point in time of taking the screen capture:

1. Did the ovirtmgmt bridge get the IP address of the intended nic? Which one?

2. Did you check routing? Default gateway, or perhaps you had/have
specific other routes?

3. What nics are in the bridge? Can you check/share output of 'brctl show'?

4. Probably not related, just noting: You have there (currently on
eth0 and on ovirtmgmt, perhaps you tried other combinations):
10.1.2.61/16 and 10.1.1.61/16 . It seems like you wanted two different
subnets, but are actually using a single one. Perhaps you intended to
use 10.1.2.61/24 and 10.1.1.61/24.

5. Can you ping from/to these two addresses from/to some other machine
on the network? Your laptop? The storage?

6. If possible, please check/share relevant logs, including (from the
host) /var/log/vdsm/* and /var/log/ovirt-hosted-engine-setup/*.

Thanks and best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JIYWEUXPA25BK3K23MPBISRGZN76AWV3/


[ovirt-users] [QA][Feedback Needed] RDO Stein integration testing

2019-04-30 Thread Sandro Bonazzola
Hi,
RDO Stein has been released a few days ago:
https://blogs.rdoproject.org/2019/04/rdo-stein-released/
I opened test bug for the providers oVirt is consuming from OpenStack:
Neutron, Glance and Cinder.
Does anyone have capacity to test this (not on production systems!) and give
feedback?

Bug 1704350  - Test
neutron integration with RDO Stein
Bug 1704349  - Test
glance integration with RDO Stein
Bug 1704352  - Test
Cinder integration with RDO Stein

Thanks,

-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HVEZN5UBPXWC3ELLTFVZYYVAMFCQ6IIV/