Re: [ovirt-users] Hyperconverged oVirt installation gluster problems

2017-06-16 Thread knarra

Hi,

grafton_sanity_check.sh checks whether the disk has any labels or 
partitions present on it. Since your disk already carries a partition 
table and you are using the same disk to create the gluster brick as 
well, the check fails. Commenting out this script in the gdeploy conf 
file and running the deployment again should resolve your issue.
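
For reference, in the generated gdeploy conf file the check usually 
shows up as a script section similar to the sketch below (the section 
name and exact layout may differ in your file); commenting those lines 
out with '#' makes gdeploy skip it:

#[script1]
#action=execute
#ignore_script_errors=no
#file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h host01,host02,host03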


Thanks
kasturi.

On 06/16/2017 06:56 PM, jesper andersson wrote:

Hi.

I'm trying to set up a 3 node ovirt cluster with gluster as this guide 
describes:

https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
I've installed oVirt node 4.1.2 in one partition and left a partition 
to hold the gluster volumes on all three nodes. The problem is that I 
can't get through gdeploy for gluster install. I only get the error:

Error: Unsupported disk type!



PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
changed: [host03] => 
(item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d 
sdb -h host01,host02,host03)
changed: [host02] => 
(item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d 
sdb -h host01,host02,host03)
changed: [host01] => 
(item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d 
sdb -h host01,host02,host03)


TASK [debug] 
***

ok: [host01] => {
"changed": false,
"msg": "All items completed"
}
ok: [host02] => {
"changed": false,
"msg": "All items completed"
}
ok: [host03] => {
"changed": false,
"msg": "All items completed"
}

PLAY RECAP 
*

host01 : ok=2   changed=1   unreachable=0   failed=0
host02 : ok=2   changed=1   unreachable=0   failed=0
host03 : ok=2   changed=1   unreachable=0   failed=0


PLAY [gluster_servers] 
*


TASK [Enable or disable services] 
**

ok: [host01] => (item=chronyd)
ok: [host03] => (item=chronyd)
ok: [host02] => (item=chronyd)

PLAY RECAP 
*

host01 : ok=1   changed=0   unreachable=0   failed=0
host02 : ok=1   changed=0   unreachable=0   failed=0
host03 : ok=1   changed=0   unreachable=0   failed=0


PLAY [gluster_servers] 
*


TASK [start/stop/restart/reload services] 
**

changed: [host03] => (item=chronyd)
changed: [host01] => (item=chronyd)
changed: [host02] => (item=chronyd)

PLAY RECAP 
*

host01 : ok=1   changed=1   unreachable=0   failed=0
host02 : ok=1   changed=1   unreachable=0   failed=0
host03 : ok=1   changed=1   unreachable=0   failed=0


Error: Unsupported disk type!





[root@host01 scripts]# fdisk -l

Disk /dev/sdb: 898.3 GB, 898319253504 bytes, 1754529792 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0629cdcf

   Device Boot  Start End  Blocks   Id System

Disk /dev/sda: 299.4 GB, 299439751168 bytes, 584843264 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7c39

   Device Boot  Start End  Blocks   Id System
/dev/sda1   *2048 2099199 1048576   83 Linux
/dev/sda2 2099200   584843263   291372032   8e Linux LVM

Disk /dev/mapper/onn_host01-swap: 16.9 GB, 16911433728 bytes, 33030144 
sectors

Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/onn_host01-pool00_tmeta: 1073 MB, 1073741824 bytes, 
2097152 sectors

Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/onn_host01-pool00_tdata: 264.3 GB, 264266317824 
bytes, 516145152 sectors

Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/onn_host01-pool00-tpool: 264.3 GB, 264266317824 
bytes, 516145152 sectors

Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes


Disk /dev/mapper/onn_host01-ovirt--node--ng--4.1.2--0.20170613.0+1: 
248.2 GB, 248160190464 bytes, 484687872 sectors

Units = sectors of 1 * 512 = 512 

[ovirt-users] OVirt 4.1.2 - trim/discard on HDD/XFS/NFS contraproductive

2017-06-16 Thread Markus Stockhausen
Hi,

we just set up a new 4.1.2 OVirt cluster. It is a quite normal
HDD/XFS/NFS stack that worked quite well with 4.0 in the past.
Inside the VMs we use XFS too.

To our surprise we observe abysmally high IO during mkfs.xfs
and fstrim inside the VM. A simple example:

Step 1: Create 100G Thin disk
Result 1: Disk occupies ~10M on storage

Step 2: Format disk inside VM with mkfs.xfs
Result 2: Disk occupies 100G on storage

Changing the discard flag on the disk does not have any effect.
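
For reference, the apparent vs. actually allocated size of the image
file on the NFS export can be compared with something like the
following (the path is only illustrative):

# ls -lsh <nfs-mount>/<domain-uuid>/images/<image-uuid>/<volume-uuid>

The first column shows the blocks actually allocated on storage, while
the regular size column shows the apparent (virtual) size of the thin disk.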

Am I missing something?

Best regards.

Markus

This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.

e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.

Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln

executive board:
Kadir Akin
Dr. Michael Höhnerbach

President of the supervisory board:
Hans Kristian Langva

Registry office: district court Cologne
Register number: HRB 52 497


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine VM and services not working

2017-06-16 Thread adent
If I reinstall and then rerun the hosted-engine setup, how do I get the VMs, in 
their current running state, back into and recognised by the new hosted 
engine?

Kind regards

Andrew

> On 17 Jun 2017, at 6:54 AM, Yaniv Kaul  wrote:
> 
> 
> 
>> On Fri, Jun 16, 2017 at 9:11 AM, Andrew Dent  wrote:
>> Hi
>> 
>> Well I've got myself into a fine mess. 
>> 
>> host01 was setup with hosted-engine v4.1. This was successful. 
>> Imported 3 VMs from a v3.6 OVirt AIO instance. (This OVirt 3.6 is still 
>> running with more VMs on it)
>> Tried to add host02 to the new Ovirt 4.1 setup. This partially succeeded but 
>> I couldn't add any storage domains to it. Cannot remember why. 
>> In Ovirt engine UI I removed host02. 
>> I reinstalled host02 with Centos7, tried to add it and Ovirt UI told me it 
>> was already there (but it wasn't listed in the UI). 
>> Renamed the reinstalled host02 to host03, changed the ipaddress, reconfig 
>> the DNS server and added host03 into the Ovirt Engine UI. 
>> All good, and I was able to import more VMs to it. 
>> I was also able to shutdown a VM on host01 assign it to host03 and start the 
>> VM. Cool, everything working. 
>> The above was all last couple of weeks. 
>> 
>> This week I performed some yum updates on the Engine VM. No reboot. 
>> Today noticed that the Ovirt services in the Engine VM were in a endless 
>> restart loop. They would be up for a 5 minutes and then die. 
>> Looking into /var/log/ovirt-engine/engine.log and I could only see errors 
>> relating to host02. Ovirt was trying to find it and failing. Then falling 
>> over. 
>> I ran "hosted-engine --clean-metadata" thinking it would cleanup and remove 
>> bad references to hosts, but now realise that was a really bad idea as it 
>> didn't do what I'd hoped. 
>> At this point the sequence below worked, I could login to Ovirt UI but after 
>> 5 minutes the services would be off
>> service ovirt-engine restart
>> service ovirt-websocket-proxy restart
>> service httpd restart
>> 
>> I saw some reference to having to remove hosts from the database by hand in 
>> situations where under the hood of Ovirt a decommission host was still 
>> listed, but wasn't showing in the GUI. 
>> So I removed reference to host02 (vds_id and host_id) in the following 
>> tables in this order. 
>> vds_dynamic
>> vds_statistics
>> vds_static
>> host_device
>> 
>> Now when I try to start ovirt-websocket it will not start
>> service ovirt-websocket start
>> Redirecting to /bin/systemctl start  ovirt-websocket.service
>> Failed to start ovirt-websocket.service: Unit not found.
>> 
>> I'm now thinking that I need to do the following in the engine VM
>> # engine-cleanup
>> # yum remove ovirt-engine
>> # yum install ovirt-engine
>> # engine-setup 
>> But to run engine-cleanup I need to put the engine-vm into maintenance mode 
>> and because of the --clean-metadata that I ran earlier on host01 I cannot do 
>> that. 
>> 
>> What is the best course of action from here?
> 
> To be honest, with all the steps taken above, I'd install everything 
> (including OS) from scratch...
> There's a bit too much mess to try to clean up properly here.
> Y.
>  
>> 
>> Cheers
>> 
>> 
>> 
>> Andrew
>> 
>> 
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Recognizing Subinterfaces on oVirt Host

2017-06-16 Thread Adam Mills
Hey Team!

We are trying to nest some of our existing technology into the oVirt host
so as not to have to reinvent tooling, etc. Our proposal is to have a
sub-interface on the 10G nic and place the VMs in that network. The network
will be advertised to the Top of Rack switch via BGP.

My current issue is that the oVirt web interface does not recognize the
existence of an em1:1 interface or network. Given the above parameters, is
there another way to accomplish what we are trying to do?

Thanks in advance!

Please refer to MS Paint style Visio for a visual

[image: Inline image 1]
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine VM and services not working

2017-06-16 Thread Yaniv Kaul
On Fri, Jun 16, 2017 at 9:11 AM, Andrew Dent  wrote:

> Hi
>
> Well I've got myself into a fine mess.
>
> host01 was setup with hosted-engine v4.1. This was successful.
> Imported 3 VMs from a v3.6 OVirt AIO instance. (This OVirt 3.6 is still
> running with more VMs on it)
> Tried to add host02 to the new Ovirt 4.1 setup. This partially succeeded
> but I couldn't add any storage domains to it. Cannot remember why.
> In Ovirt engine UI I removed host02.
> I reinstalled host02 with Centos7, tried to add it and Ovirt UI told me it
> was already there (but it wasn't listed in the UI).
> Renamed the reinstalled host02 to host03, changed the ipaddress, reconfig
> the DNS server and added host03 into the Ovirt Engine UI.
> All good, and I was able to import more VMs to it.
> I was also able to shutdown a VM on host01 assign it to host03 and start
> the VM. Cool, everything working.
> The above was all last couple of weeks.
>
> This week I performed some yum updates on the Engine VM. No reboot.
> Today noticed that the Ovirt services in the Engine VM were in a endless
> restart loop. They would be up for a 5 minutes and then die.
> Looking into /var/log/ovirt-engine/engine.log and I could only see errors
> relating to host02. Ovirt was trying to find it and failing. Then falling
> over.
> I ran "hosted-engine --clean-metadata" thinking it would cleanup and
> remove bad references to hosts, but now realise that was a really bad idea
> as it didn't do what I'd hoped.
> At this point the sequence below worked, I could login to Ovirt UI but
> after 5 minutes the services would be off
> service ovirt-engine restart
> service ovirt-websocket-proxy restart
> service httpd restart
>
> I saw some reference to having to remove hosts from the database by hand
> in situations where under the hood of Ovirt a decommission host was still
> listed, but wasn't showing in the GUI.
> So I removed reference to host02 (vds_id and host_id) in the following
> tables in this order.
> vds_dynamic
> vds_statistics
> vds_static
> host_device
>
> Now when I try to start ovirt-websocket it will not start
> service ovirt-websocket start
> Redirecting to /bin/systemctl start  ovirt-websocket.service
> Failed to start ovirt-websocket.service: Unit not found.
>
> I'm now thinking that I need to do the following in the engine VM
>
> # engine-cleanup
> # yum remove ovirt-engine
> # yum install ovirt-engine
> # engine-setup
>
> But to run engine-cleanup I need to put the engine-vm into maintenance
> mode and because of the --clean-metadata that I ran earlier on host01 I
> cannot do that.
>
> What is the best course of action from here?
>

To be honest, with all the steps taken above, I'd install everything
(including OS) from scratch...
There's a bit too much mess to try to clean up properly here.
Y.


>
> Cheers
>
>
> Andrew
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Remove host from hosted engine configuration

2017-06-16 Thread Mike Farnam
Thanks. I did this previously and got many errors and it didn't work. Since 
then I have tried several things, one of which was reinitializing the lockspace, 
as the errors seemed to indicate that might be the problem. 
Now I was able to run the clean-metadata command successfully. 
Will that command fail if there are SAN lockspace issues?


> On Jun 15, 2017, at 10:56 PM, knarra  wrote:
> 
>> On 06/16/2017 08:17 AM, Mike Farnam wrote:
>> I had 3 hosts running in a hosted engine setup,  oVirt Engine Version: 
>> 4.1.2.2-1.el7.centos, using FC storage.  One of my hosts went unresponsive 
>> in the GUI, and attempts to bring it back were fruitless.  I eventually 
>> decided to just remove it and have gotten it removed from the GUI, but it 
>> still shows in “hosted-engine —vm-status” command on the other 2 hosts.  The 
>> 2 good nodes show it as the following:
>> 
>> --== Host 3 status ==--
>> 
>> conf_on_shared_storage : True
>> Status up-to-date  : False
>> Hostname   : host3.my.lab
>> Host ID: 3
>> Engine status  : unknown stale-data
>> Score  : 0
>> stopped: False
>> Local maintenance  : True
>> crc32  : bce9a8c5
>> local_conf_timestamp   : 2605898
>> Host timestamp : 2605882
>> Extra metadata (valid at timestamp):
>> metadata_parse_version=1
>> metadata_feature_version=1
>> timestamp=2605882 (Thu Jun 15 15:18:13 2017)
>> host-id=3
>> score=0
>> vm_conf_refresh_time=2605898 (Thu Jun 15 15:18:29 2017)
>> conf_on_shared_storage=True
>> maintenance=True
>> state=LocalMaintenance
>> stopped=False
>> 
> you can use the command 'hosted-engine --clean-metadata --host-id= 
> --force-clean' so that this node does not show up in  hosted-engine 
> --vm-status.
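> For example, for the host shown above as Host 3 the command would be
> (the host id is illustrative; use the id reported by hosted-engine
> --vm-status):
>
> # hosted-engine --clean-metadata --host-id=3 --force-clean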
>> 
>> 
>> How can I either remove this host altogether from the configuration, or 
>> repair it so that it is back in a good state?  The host is up, but due to my 
>> removal attempts earlier, reports “unknown stale data” for all 3 hosts in 
>> the config.
>> 
>> Thanks
>> 
>> 
>> 
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Access VM Console on a Smart Phone with User Permission

2017-06-16 Thread Jerome Roque
Good day oVirt Users,

I need a little help. I have a KVM host and use oVirt for the management of
VMs. What I want is for my clients to log on to their accounts and access
their virtual machines using their smart phones. I tried to install moVirt
and yes, I can connect to the console of my machine, but it is only accessible
with the admin console. I tried to use the web console; it downloaded console.vv
but I can't open it. Is there any chance to make this possible?

Thank you,
Jerome
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hyperconverged oVirt installation gluster problems

2017-06-16 Thread jesper andersson
Hi.

I'm trying to set up a 3 node ovirt cluster with gluster as this guide
describes:
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
I've installed oVirt node 4.1.2 in one partition and left a partition to
hold the gluster volumes on all three nodes. The problem is that I can't
get through gdeploy for gluster install. I only get the error:
Error: Unsupported disk type!



PLAY [gluster_servers]
*

TASK [Run a shell script]
**
changed: [host03] =>
(item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
host01,host02,host03)
changed: [host02] =>
(item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
host01,host02,host03)
changed: [host01] =>
(item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
host01,host02,host03)

TASK [debug]
***
ok: [host01] => {
"changed": false,
"msg": "All items completed"
}
ok: [host02] => {
"changed": false,
"msg": "All items completed"
}
ok: [host03] => {
"changed": false,
"msg": "All items completed"
}

PLAY RECAP
*
host01 : ok=2   changed=1   unreachable=0   failed=0
host02 : ok=2   changed=1   unreachable=0   failed=0
host03 : ok=2   changed=1   unreachable=0   failed=0


PLAY [gluster_servers]
*

TASK [Enable or disable services]
**
ok: [host01] => (item=chronyd)
ok: [host03] => (item=chronyd)
ok: [host02] => (item=chronyd)

PLAY RECAP
*
host01 : ok=1   changed=0   unreachable=0   failed=0
host02 : ok=1   changed=0   unreachable=0   failed=0
host03 : ok=1   changed=0   unreachable=0   failed=0


PLAY [gluster_servers]
*

TASK [start/stop/restart/reload services]
**
changed: [host03] => (item=chronyd)
changed: [host01] => (item=chronyd)
changed: [host02] => (item=chronyd)

PLAY RECAP
*
host01 : ok=1   changed=1   unreachable=0   failed=0
host02 : ok=1   changed=1   unreachable=0   failed=0
host03 : ok=1   changed=1   unreachable=0   failed=0


Error: Unsupported disk type!





[root@host01 scripts]# fdisk -l

Disk /dev/sdb: 898.3 GB, 898319253504 bytes, 1754529792 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0629cdcf

   Device Boot  Start End  Blocks   Id  System

Disk /dev/sda: 299.4 GB, 299439751168 bytes, 584843264 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7c39

   Device Boot  Start End  Blocks   Id  System
/dev/sda1   *2048 2099199 1048576   83  Linux
/dev/sda2 2099200   584843263   291372032   8e  Linux LVM

Disk /dev/mapper/onn_host01-swap: 16.9 GB, 16911433728 bytes, 33030144
sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/onn_host01-pool00_tmeta: 1073 MB, 1073741824 bytes,
2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/onn_host01-pool00_tdata: 264.3 GB, 264266317824 bytes,
516145152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/onn_host01-pool00-tpool: 264.3 GB, 264266317824 bytes,
516145152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes


Disk /dev/mapper/onn_host01-ovirt--node--ng--4.1.2--0.20170613.0+1: 248.2
GB, 248160190464 bytes, 484687872 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes


Disk /dev/mapper/onn_host01-pool00: 264.3 GB, 264266317824 bytes, 516145152
sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes


Disk /dev/mapper/onn_host01-var: 16.1 GB, 16106127360 

Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-06-16 Thread Yaniv Kaul
On Fri, Jun 16, 2017 at 5:20 PM, Gianluca Cecchi 
wrote:

> On Thu, Apr 27, 2017 at 11:25 AM, Evgenia Tokar  wrote:
>
>> Hi,
>>
>> It looks like the graphical console fields are not editable for hosted
>> engine vm.
>> We are trying to figure out how to solve this issue, it is not
>> recommended to change db values manually.
>>
>> Thanks,
>> Jenny
>>
>>
>> On Thu, Apr 27, 2017 at 10:49 AM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> On Thu, Apr 27, 2017 at 9:46 AM, Gianluca Cecchi <
>>> gianluca.cec...@gmail.com> wrote:
>>>


 BTW: if I try to set the video type to Cirrus from web admin gui (and
 automatically the Graphics Protocol becomes "VNC"), I get this when I press
 the OK button:

 Error while executing action:

 HostedEngine:

- There was an attempt to change Hosted Engine VM values that are
locked.

 The same if I choose "VGA"
 Gianluca

>>>
>>>
>>> I verified that I already have in place this parameter:
>>>
>>> [root@ractorshe ~]# engine-config -g AllowEditingHostedEngine
>>> AllowEditingHostedEngine: true version: general
>>> [root@ractorshe ~]#
>>>
>>>
>>
> Hello is there a solution for this problem?
> I'm now in 4.1.2 but still not able to access the engine console
>

I thought https://bugzilla.redhat.com/show_bug.cgi?id=1441570 was supposed
to handle it...
Can you share more information in the bug?
Y.


>
> [root@ractor ~]# hosted-engine --add-console-password --password=pippo
> no graphics devices configured
> [root@ractor ~]#
>
> In web admin
>
> Graphics protocol: None  (while in edit vm screen it appears as "SPICE"
> and still I can't modify it)
> Video Type: QXL
>
> Any chance for upcoming 4.1.3? Can I test it it there is new changes
> related to this problem.
>
> the qemu-kvm command line for hosted engine is now this one:
>
> qemu  8761 1  0 May30 ?01:33:29 /usr/libexec/qemu-kvm
> -name guest=c71,debug-threads=on -S -object secret,id=masterKey0,format=
> raw,file=/var/lib/libvirt/qemu/domain-3-c71/master-key.aes -machine
> pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m
> size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp
> 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa
> node,nodeid=0,cpus=0,mem=1024 -uuid 202e6f2e-f8a1-4e81-a079-c775e86a58d5
> -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-3.1611.el7.
> centos,serial=4C4C4544-0054-5910-8056-C4C04F30354A,uuid=
> 202e6f2e-f8a1-4e81-a079-c775e86a58d5 -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-3-c71/monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2017-05-30T13:18:37,driftfix=slew -global 
> kvm-pit.lost_tick_policy=discard
> -no-hpet -no-shutdown -boot strict=on -device 
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
> -drive if=none,id=drive-ide0-1-0,readonly=on -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
> file=/rhev/data-center/0001-0001-0001-0001-00ec/556abaa8-0fcc-
> 4042-963b-f27db5e03837/images/7d5dd44f-f5d1-4984-9e76-
> 2b2f5e42a915/6d873dbd-c59d-4d6c-958f-a4a389b94be5,format=
> raw,if=none,id=drive-virtio-disk0,serial=7d5dd44f-f5d1-
> 4984-9e76-2b2f5e42a915,cache=none,werror=stop,rerror=stop,aio=threads
> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1 -netdev
> tap,fd=33,id=hostnet0,vhost=on,vhostfd=35 -device virtio-net-pci,netdev=
> hostnet0,id=net0,mac=00:1a:4a:16:01:51,bus=pci.0,addr=0x3 -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 202e6f2e-f8a1-4e81-a079-c775e86a58d5.com.redhat.rhevm.vdsm,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 202e6f2e-f8a1-4e81-a079-c775e86a58d5.org.qemu.guest_agent.0,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev
> spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-
> serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice tls-port=5901,addr=10.4.168.81,x509-dir=/etc/pki/vdsm/
> libvirt-spice,tls-channel=default,tls-channel=main,tls-
> channel=display,tls-channel=inputs,tls-channel=cursor,tls-
> channel=playback,tls-channel=record,tls-channel=smartcard,
> tls-channel=usbredir,seamless-migration=on -device
> qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,
> vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 -msg timestamp=on
>
>
> Thanks in advance,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> 

Re: [ovirt-users] Version of engine vs version of host

2017-06-16 Thread Yaniv Kaul
On Fri, Jun 16, 2017 at 6:27 PM, Gianluca Cecchi 
wrote:

> Hello,
> between problems solved in upcoming 4.1.3 release I see this:
>
> Lost Connection After Host Deploy when 4.1.3 Host Added to 4.1.2 Engine
> tracked by
> https://bugzilla.redhat.com/show_bug.cgi?id=1459484
>

I *think* the specific bug was discovered (and fixed) while developing
4.1.3.


>
>
> As a matter of principle I would prefer to force that an engine version
> must be greater or equal than all the hosts it is intended to manage.
> I don't find safe to allow this and probably unnecessary maintenance
> work... what do you think?
>
> For example if you go here:
> http://www.vmware.com/resources/compatibility/sim/
> interop_matrix.php#interop&1=&2=
>
> you can see that:
> - a vCenter Server 5.0U3 cannot manage an ESXi 5.1 host
> - a vCenter Server 5.1U3 cannot manage an ESXi 6.0 host
> - a vCenter Server 6.0U3 cannot manage an ESXi 6.5 host
>

We are more flexible ;-)

While I think it's a matter of taste, I think there are merits to upgrading
the hosts first. For example, assuming you have many hosts, to me it makes
sense to upgrade just one, see that things work well. Then, upgrade
another, perform live migration, etc, see that it's smooth, before
upgrading the manager, which is sometimes a bigger task (rollback is more
challenging, for example, and it has downtime requirements, whereas single
host maintenance does not require the same level of downtime, etc.).
In addition, there are host-based features (VDSM hooks) which do not
mandate a manager upgrade.


> In my opinion an administrator of the virtual infrastructure doesn't
> expect to be able to manage newer versions' hosts with older engines... and
> probably he/she doesn't feel this feature as a value added.
>

I'm on your side on this, as I believe the manager should always be the
most up-to-date, but I know others have different opinions and we'd like to
keep it that way.
Y.


> Just my thoughts.
> Cheers,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Version of engine vs version of host

2017-06-16 Thread Gianluca Cecchi
On Fri, Jun 16, 2017 at 5:27 PM, Gianluca Cecchi 
wrote:

> Hello,
> between problems solved in upcoming 4.1.3 release I see this:
>
> Lost Connection After Host Deploy when 4.1.3 Host Added to 4.1.2 Engine
> tracked by
> https://bugzilla.redhat.com/show_bug.cgi?id=1459484
>
> As a matter of principle I would prefer to force that an engine version
> must be greater or equal than all the hosts it is intended to manage.
> I don't find safe to allow this and probably unnecessary maintenance
> work... what do you think?
>
> For example if you go here:
> http://www.vmware.com/resources/compatibility/sim/
> interop_matrix.php#interop&1=&2=
>
> you can see that:
> - a vCenter Server 5.0U3 cannot manage an ESXi 5.1 host
> - a vCenter Server 5.1U3 cannot manage an ESXi 6.0 host
> - a vCenter Server 6.0U3 cannot manage an ESXi 6.5 host
>
> In my opinion an administrator of the virtual infrastructure doesn't
> expect to be able to manage newer versions' hosts with older engines... and
> probably he/she doesn't feel this feature as a value added.
>
> Just my thoughts.
> Cheers,
> Gianluca
>


Looking more closely, VMware does support newer minor versions within a
release... e.g. vCenter Server 5.1U1 is able to manage 5.1U3 ESXi
hosts, which is the oVirt case here (4.1.2 engine vs 4.1.3 hosts)...
Sorry
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Version of engine vs version of host

2017-06-16 Thread Gianluca Cecchi
Hello,
between problems solved in upcoming 4.1.3 release I see this:

Lost Connection After Host Deploy when 4.1.3 Host Added to 4.1.2 Engine
tracked by
https://bugzilla.redhat.com/show_bug.cgi?id=1459484

As a matter of principle I would prefer to enforce that the engine version
must be greater than or equal to that of all the hosts it is intended to manage.
I don't find it safe to allow otherwise, and it probably means unnecessary
maintenance work... what do you think?

For example if you go here:
http://www.vmware.com/resources/compatibility/sim/interop_matrix.php#interop&1=&2=

you can see that:
- a vCenter Server 5.0U3 cannot manage an ESXi 5.1 host
- a vCenter Server 5.1U3 cannot manage an ESXi 6.0 host
- a vCenter Server 6.0U3 cannot manage an ESXi 6.5 host

In my opinion an administrator of the virtual infrastructure doesn't expect
to be able to manage newer-version hosts with older engines... and
probably he/she doesn't see this feature as added value.

Just my thoughts.
Cheers,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-06-16 Thread Gianluca Cecchi
On Thu, Apr 27, 2017 at 11:25 AM, Evgenia Tokar  wrote:

> Hi,
>
> It looks like the graphical console fields are not editable for hosted
> engine vm.
> We are trying to figure out how to solve this issue, it is not recommended
> to change db values manually.
>
> Thanks,
> Jenny
>
>
> On Thu, Apr 27, 2017 at 10:49 AM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Thu, Apr 27, 2017 at 9:46 AM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>>
>>>
>>> BTW: if I try to set the video type to Cirrus from web admin gui (and
>>> automatically the Graphics Protocol becomes "VNC"), I get this when I press
>>> the OK button:
>>>
>>> Error while executing action:
>>>
>>> HostedEngine:
>>>
>>>- There was an attempt to change Hosted Engine VM values that are
>>>locked.
>>>
>>> The same if I choose "VGA"
>>> Gianluca
>>>
>>
>>
>> I verified that I already have in place this parameter:
>>
>> [root@ractorshe ~]# engine-config -g AllowEditingHostedEngine
>> AllowEditingHostedEngine: true version: general
>> [root@ractorshe ~]#
>>
>>
>
Hello, is there a solution for this problem?
I'm now on 4.1.2 but still not able to access the engine console.

[root@ractor ~]# hosted-engine --add-console-password --password=pippo
no graphics devices configured
[root@ractor ~]#

In web admin

Graphics protocol: None  (while in edit vm screen it appears as "SPICE" and
still I can't modify it)
Video Type: QXL

Any chance for the upcoming 4.1.3? Can I test it if there are new changes
related to this problem?

the qemu-kvm command line for hosted engine is now this one:

qemu  8761 1  0 May30 ?01:33:29 /usr/libexec/qemu-kvm -name
guest=c71,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-c71/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m
size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp
1,maxcpus=16,sockets=16,cores=1,threads=1 -numa
node,nodeid=0,cpus=0,mem=1024 -uuid 202e6f2e-f8a1-4e81-a079-c775e86a58d5
-smbios type=1,manufacturer=oVirt,product=oVirt
Node,version=7-3.1611.el7.centos,serial=4C4C4544-0054-5910-8056-C4C04F30354A,uuid=202e6f2e-f8a1-4e81-a079-c775e86a58d5
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-3-c71/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2017-05-30T13:18:37,driftfix=slew -global
kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/0001-0001-0001-0001-00ec/556abaa8-0fcc-4042-963b-f27db5e03837/images/7d5dd44f-f5d1-4984-9e76-2b2f5e42a915/6d873dbd-c59d-4d6c-958f-a4a389b94be5,format=raw,if=none,id=drive-virtio-disk0,serial=7d5dd44f-f5d1-4984-9e76-2b2f5e42a915,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=35 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:51,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/202e6f2e-f8a1-4e81-a079-c775e86a58d5.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/202e6f2e-f8a1-4e81-a079-c775e86a58d5.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice
tls-port=5901,addr=10.4.168.81,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-device
qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2
-msg timestamp=on


Thanks in advance,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine

2017-06-16 Thread Sahina Bose
I don't notice anything wrong on the gluster end.

Maybe Simone can help take a look at HE behaviour?

On Fri, Jun 16, 2017 at 6:14 PM, Joel Diaz  wrote:

> Good morning,
>
> Info requested below.
>
> [root@ovirt-hyp-02 ~]# hosted-engine --vm-start
>
> Exception in thread Client localhost:54321 (most likely raised during
> interpreter shutdown):VM exists and its status is Up
>
>
>
> [root@ovirt-hyp-02 ~]# ping engine
>
> PING engine.example.lan (192.168.170.149) 56(84) bytes of data.
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=1 Destination
> Host Unreachable
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=2 Destination
> Host Unreachable
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=3 Destination
> Host Unreachable
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=4 Destination
> Host Unreachable
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=5 Destination
> Host Unreachable
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=6 Destination
> Host Unreachable
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=7 Destination
> Host Unreachable
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=8 Destination
> Host Unreachable
>
>
>
>
>
> [root@ovirt-hyp-02 ~]# gluster volume status engine
>
> Status of volume: engine
>
> Gluster process TCP Port  RDMA Port  Online
> Pid
>
> 
> --
>
> Brick 192.168.170.141:/gluster_bricks/engin
>
> e/engine49159 0  Y
> 1799
>
> Brick 192.168.170.143:/gluster_bricks/engin
>
> e/engine49159 0  Y
> 2900
>
> Self-heal Daemon on localhost   N/A   N/AY
> 2914
>
> Self-heal Daemon on ovirt-hyp-01.example.lan   N/A   N/A
> Y   1854
>
>
>
> Task Status of Volume engine
>
> 
> --
>
> There are no active volume tasks
>
>
>
> [root@ovirt-hyp-02 ~]# gluster volume heal engine info
>
> Brick 192.168.170.141:/gluster_bricks/engine/engine
>
> Status: Connected
>
> Number of entries: 0
>
>
>
> Brick 192.168.170.143:/gluster_bricks/engine/engine
>
> Status: Connected
>
> Number of entries: 0
>
>
>
> Brick 192.168.170.147:/gluster_bricks/engine/engine
>
> Status: Connected
>
> Number of entries: 0
>
>
>
> [root@ovirt-hyp-02 ~]# cat /var/log/glusterfs/rhev-data-c
> enter-mnt-glusterSD-ovirt-hyp-01.example.lan\:engine.log
>
> [2017-06-15 13:37:02.009436] I [glusterfsd-mgmt.c:1600:mgmt_getspec_cbk]
> 0-glusterfs: No change in volfile, continuing
>
>
>
>
>
> Each of the three host sends out the following notifications about every
> 15 minutes.
>
> Hosted engine host: ovirt-hyp-01.example.lan changed state:
> EngineDown-EngineStart.
>
> Hosted engine host: ovirt-hyp-01.example.lan changed state:
> EngineStart-EngineStarting.
>
> Hosted engine host: ovirt-hyp-01.example.lan changed state:
> EngineStarting-EngineForceStop.
>
> Hosted engine host: ovirt-hyp-01.example.lan changed state:
> EngineForceStop-EngineDown.
>
> Please let me know if you need any additional information.
>
> Thank you,
>
> Joel
>
>
>
> On Jun 16, 2017 2:52 AM, "Sahina Bose"  wrote:
>
>> From the agent.log,
>> MainThread::INFO::2017-06-15 11:16:50,583::states::473::ovi
>> rt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine
>> vm is running on host ovirt-hyp-02.reis.com (id 2)
>>
>> It looks like the HE VM was started successfully? Is it possible that the
>> ovirt-engine service could not be started on the HE VM. Could you try to
>> start the HE vm using below and then logging into the VM console.
>> #hosted-engine --vm-start
>>
>> Also, please check
>> # gluster volume status engine
>> # gluster volume heal engine info
>>
>> Please also check if there are errors in gluster mount logs - at
>> /var/log/glusterfs/rhev-data-center-mnt...log
>>
>>
>> On Thu, Jun 15, 2017 at 8:53 PM, Joel Diaz  wrote:
>>
>>> Sorry. I forgot to attached the requested logs in the previous email.
>>>
>>> Thanks,
>>>
>>> On Jun 15, 2017 9:38 AM, "Joel Diaz"  wrote:
>>>
>>> Good morning,
>>>
>>> Requested info below. Along with some additional info.
>>>
>>> You'll notice the data volume is not mounted.
>>>
>>> Any help in getting HE back running would be greatly appreciated.
>>>
>>> Thank you,
>>>
>>> Joel
>>>
>>> [root@ovirt-hyp-01 ~]# hosted-engine --vm-status
>>>
>>>
>>>
>>>
>>>
>>> --== Host 1 status ==--
>>>
>>>
>>>
>>> conf_on_shared_storage : True
>>>
>>> Status up-to-date  : False
>>>
>>> Hostname   : ovirt-hyp-01.example.lan
>>>
>>> Host ID: 1
>>>
>>> Engine status  : unknown stale-data
>>>
>>> Score  : 3400
>>>
>>> 

Re: [ovirt-users] Hosted engine

2017-06-16 Thread Joel Diaz
Good morning,

Info requested below.

[root@ovirt-hyp-02 ~]# hosted-engine --vm-start

Exception in thread Client localhost:54321 (most likely raised during
interpreter shutdown):VM exists and its status is Up



[root@ovirt-hyp-02 ~]# ping engine

PING engine.example.lan (192.168.170.149) 56(84) bytes of data.

From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=1 Destination Host Unreachable

From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=2 Destination Host Unreachable

From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=3 Destination Host Unreachable

From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=4 Destination Host Unreachable

From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=5 Destination Host Unreachable

From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=6 Destination Host Unreachable

From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=7 Destination Host Unreachable

From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=8 Destination Host Unreachable





[root@ovirt-hyp-02 ~]# gluster volume status engine

Status of volume: engine

Gluster process TCP Port  RDMA Port  Online  Pid


--

Brick 192.168.170.141:/gluster_bricks/engin

e/engine49159 0  Y
1799

Brick 192.168.170.143:/gluster_bricks/engin

e/engine49159 0  Y
2900

Self-heal Daemon on localhost   N/A   N/AY
2914

Self-heal Daemon on ovirt-hyp-01.example.lan   N/A   N/AY
1854



Task Status of Volume engine


--

There are no active volume tasks



[root@ovirt-hyp-02 ~]# gluster volume heal engine info

Brick 192.168.170.141:/gluster_bricks/engine/engine

Status: Connected

Number of entries: 0



Brick 192.168.170.143:/gluster_bricks/engine/engine

Status: Connected

Number of entries: 0



Brick 192.168.170.147:/gluster_bricks/engine/engine

Status: Connected

Number of entries: 0



[root@ovirt-hyp-02 ~]# cat /var/log/glusterfs/rhev-data-
center-mnt-glusterSD-ovirt-hyp-01.example.lan\:engine.log

[2017-06-15 13:37:02.009436] I [glusterfsd-mgmt.c:1600:mgmt_getspec_cbk]
0-glusterfs: No change in volfile, continuing





Each of the three host sends out the following notifications about every 15
minutes.

Hosted engine host: ovirt-hyp-01.example.lan changed state:
EngineDown-EngineStart.

Hosted engine host: ovirt-hyp-01.example.lan changed state:
EngineStart-EngineStarting.

Hosted engine host: ovirt-hyp-01.example.lan changed state: EngineStarting-
EngineForceStop.

Hosted engine host: ovirt-hyp-01.example.lan changed state:
EngineForceStop-EngineDown.

Please let me know if you need any additional information.

Thank you,

Joel



On Jun 16, 2017 2:52 AM, "Sahina Bose"  wrote:

> From the agent.log,
> MainThread::INFO::2017-06-15 11:16:50,583::states::473::
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine
> vm is running on host ovirt-hyp-02.reis.com (id 2)
>
> It looks like the HE VM was started successfully? Is it possible that the
> ovirt-engine service could not be started on the HE VM. Could you try to
> start the HE vm using below and then logging into the VM console.
> #hosted-engine --vm-start
>
> Also, please check
> # gluster volume status engine
> # gluster volume heal engine info
>
> Please also check if there are errors in gluster mount logs - at
> /var/log/glusterfs/rhev-data-center-mnt...log
>
>
> On Thu, Jun 15, 2017 at 8:53 PM, Joel Diaz  wrote:
>
>> Sorry. I forgot to attached the requested logs in the previous email.
>>
>> Thanks,
>>
>> On Jun 15, 2017 9:38 AM, "Joel Diaz"  wrote:
>>
>> Good morning,
>>
>> Requested info below. Along with some additional info.
>>
>> You'll notice the data volume is not mounted.
>>
>> Any help in getting HE back running would be greatly appreciated.
>>
>> Thank you,
>>
>> Joel
>>
>> [root@ovirt-hyp-01 ~]# hosted-engine --vm-status
>>
>>
>>
>>
>>
>> --== Host 1 status ==--
>>
>>
>>
>> conf_on_shared_storage : True
>>
>> Status up-to-date  : False
>>
>> Hostname   : ovirt-hyp-01.example.lan
>>
>> Host ID: 1
>>
>> Engine status  : unknown stale-data
>>
>> Score  : 3400
>>
>> stopped: False
>>
>> Local maintenance  : False
>>
>> crc32  : 5558a7d3
>>
>> local_conf_timestamp   : 20356
>>
>> Host timestamp : 20341
>>
>> Extra metadata (valid at timestamp):
>>
>> metadata_parse_version=1
>>
>> metadata_feature_version=1
>>
>> timestamp=20341 (Fri Jun  9 14:38:57 2017)
>>
>> 

[ovirt-users] [ANN] oVirt 4.1.3 Second Release Candidate is now available

2017-06-16 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the Second
Release Candidate of oVirt 4.1.3 for testing, as of June 16th, 2017

This is pre-release software. Please take a look at our community page[1]
to know how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.

This update is the second release candidate of the third in a series of
stabilization updates to the 4.1 series.
4.1.3 brings more than 40 enhancements and more than 200 bugfixes,
including more than 120 high or urgent
severity fixes, on top of the oVirt 4.1 series.

This release is available now for:
* Fedora 24 (tech preview)
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* oVirt Node 4.1
* Fedora 24 (tech preview)

See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Live is already available[4]
- oVirt Node is already available[4]
We are addressing compose issues for above components which are missing
ansible 2.3 and latest fluentd builds from CentOS SIGs.

Additional Resources:
* Read more about the oVirt 4.1.3 release highlights:
http://www.ovirt.org/release/4.1.3/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.1.3/
[4] resources.ovirt.org/pub/ovirt-4.1-pre/iso/

-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt sdk and pipelining

2017-06-16 Thread Fabrice Bacchella

> On 16 June 2017 at 10:13, Juan Hernández  wrote:
> 
> On 06/16/2017 09:52 AM, Fabrice Bacchella wrote:
>> I just read the blog entry about performance increate in for the python sdk 
>> (https://www.ovirt.org/blog/2017/05/higher-performance-for-python-sdk/).
>> 
>> I'm quite sceptical about pipelining.

> In our tests pipe-lining dramatically increases the performance in large
> scale environments with high latency. In our tests with 4000 virtual
> machines 1 disks and 150ms of latency retrieving the complete
> inventory is reduced from approx 1 hour to approx 2 minutes.
> 

Benchmarks are the ultimate judge. So if it works in many different use cases, 
that's nice and interesting.


> Note that the usage of the HTTP protocol in this scenario is very
> different from the typical usage when a browser retrieves a web page.

Indeed, all the literature is about interactive usage. A very different use 
case.

> 
>> It also talks about multiple connection, but don't use pycurl.CurlShare(). I 
>> thing this might be very helpfull, as it allows to share cookies, see 
>> https://curl.haxx.se/libcurl/c/CURLOPT_SHARE.html. 
>> 
> 
> The SDK uses the curl "multi" mechanism, which automatically shares the
> DNS cache.

This: https://curl.haxx.se/libcurl/c/CURLOPT_DNS_USE_GLOBAL_CACHE.html ?

WARNING: this option is considered obsolete. Stop using it. Switch over to 
using the share interface instead! See CURLOPT_SHARE and curl_share_init.


> In addition version 4 of the SDK does not use cookies. So
> this shouldn't be relevant.

From some of my own code:
self._share.setopt(pycurl.SH_SHARE, pycurl.LOCK_DATA_COOKIE)  # share cookies across handles
self._share.setopt(pycurl.SH_SHARE, pycurl.LOCK_DATA_DNS)  # share the DNS cache
self._share.setopt(pycurl.SH_SHARE, pycurl.LOCK_DATA_SSL_SESSION)  # reuse SSL session IDs

And users' Apache settings can use cookies for custom usages.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt sdk and pipelining

2017-06-16 Thread nicolas

On 2017-06-16 08:52, Fabrice Bacchella wrote:

I just read the blog entry about performance increate in for the
python sdk
(https://www.ovirt.org/blog/2017/05/higher-performance-for-python-sdk/).

I'm quite sceptical about pipelining.



I disagree. Even without reading the post you mention, we had already 
noticed that since this version everything works much faster than 
with prior versions. We have a lot of stuff implemented with the Python SDK, 
but on one of them the effect is quite noticeable: a script checks VMs' 
permissions and takes decisions based on them. Without pipelining this 
script took about 5 minutes to execute; with pipelining it takes no 
more than 15 seconds on a ~1000-VM infrastructure.


Regards,

Nicolás


A few explanation about that can be found at:
https://devcentral.f5.com/articles/http-pipelining-a-security-risk-without-real-performance-benefits
https://stackoverflow.com/questions/14810890/what-are-the-disadvantages-of-using-http-pipelining

It also talks about multiple connection, but don't use
pycurl.CurlShare(). I thing this might be very helpfull, as it allows
to share cookies, see
https://curl.haxx.se/libcurl/c/CURLOPT_SHARE.html.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt sdk and pipelining

2017-06-16 Thread Juan Hernández
On 06/16/2017 09:52 AM, Fabrice Bacchella wrote:
> I just read the blog entry about performance increate in for the python sdk 
> (https://www.ovirt.org/blog/2017/05/higher-performance-for-python-sdk/).
> 
> I'm quite sceptical about pipelining.
> 
> A few explanation about that can be found at:
> https://devcentral.f5.com/articles/http-pipelining-a-security-risk-without-real-performance-benefits
> https://stackoverflow.com/questions/14810890/what-are-the-disadvantages-of-using-http-pipelining
>

Did you test it? Can you share the results?

In our tests pipe-lining dramatically increases the performance in large
scale environments with high latency. In our tests with 4000 virtual
machines 1 disks and 150ms of latency retrieving the complete
inventory is reduced from approx 1 hour to approx 2 minutes.

Note that the usage of the HTTP protocol in this scenario is very
different from the typical usage when a browser retrieves a web page.
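
As a rough sketch of how this is enabled from a script (assuming the
'connections' and 'pipeline' keyword arguments of ovirtsdk4.Connection
described in the blog post; the values below are only illustrative):

import ovirtsdk4 as sdk

# Open a connection that uses several parallel HTTP connections and
# pipelines multiple requests on each of them.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
    connections=10,  # parallel HTTP connections (illustrative value)
    pipeline=20,     # requests pipelined per connection (illustrative value)
)

# Retrieving the inventory then reuses those connections transparently.
vms_service = connection.system_service().vms_service()
for vm in vms_service.list():
    print(vm.name)

connection.close()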

> It also talks about multiple connection, but don't use pycurl.CurlShare(). I 
> thing this might be very helpfull, as it allows to share cookies, see 
> https://curl.haxx.se/libcurl/c/CURLOPT_SHARE.html. 
>

The SDK uses the curl "multi" mechanism, which automatically shares the
DNS cache. In addition version 4 of the SDK does not use cookies. So
this shouldn't be relevant.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt sdk and pipelining

2017-06-16 Thread Fabrice Bacchella
I just read the blog entry about the performance increase for the python sdk 
(https://www.ovirt.org/blog/2017/05/higher-performance-for-python-sdk/).

I'm quite sceptical about pipelining.

A few explanation about that can be found at:
https://devcentral.f5.com/articles/http-pipelining-a-security-risk-without-real-performance-benefits
https://stackoverflow.com/questions/14810890/what-are-the-disadvantages-of-using-http-pipelining

It also talks about multiple connections, but doesn't use pycurl.CurlShare(). I 
think this might be very helpful, as it allows sharing cookies, see 
https://curl.haxx.se/libcurl/c/CURLOPT_SHARE.html. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Deploy Ovirt VM's By Ansible Playbook Issue

2017-06-16 Thread Martin Perina
Hi,

It seems like some storage issue, could you please share your engine logs?

Regards

Martin Perina

On Thursday, June 15, 2017, khalid mahmood  wrote:
> Dear Users
> Procedure :
> 1- create clean volume replica 2 distributed with glusterfs .
> 2- create clean ovirt-engine machine .
> 3- create clean vm from scratch then create template from this vm.
> 4- then create two vm from this template (vm1) & (vm2).
> 5- then delete the two vm .
> 6- create new two vm with the same name (vm1) & (vm2) from the template .
> 7- till now the two vm stable and work correctly .
> 8- repeat no (7) three time all vm's is working correctly .
> issue :
> i have ansible playbook to deploy vm's to our ovirt , my playbook use the
above template to deploy the vm's .
> my issue is after ansible script deploy the vm's , all vm's disk crash
and the template disk is crash also and the script make change into the
template checksum hash .
> you can look at ansible parameters :
> - hosts: localhost
>   connection: local
>   gather_facts: false
>   tasks:
>     - name: entering
>       ovirt_auth:
>         url: https://ovirt-engine.elcld.net:443/ovirt-engine/api
>         username: admin@internal
>         password: pass
>         insecure: yes
>     - name: creating
>       ovirt_vms:
>         auth: "{{ ovirt_auth }}"
>         name: myvm05
>         template: mahdi
>         #state: present
>         cluster: Cluster02
>         memory: 4GiB
>         cpu_cores: 2
>         comment: Dev
>         #type: server
>         cloud_init:
>           host_name: vm01
>           user_name: root
>           root_password: pass
>           nic_on_boot: true
>           nic_boot_protocol: static
>           nic_name: eth0
>           dns_servers: 109.224.19.5
>           dns_search: elcld.net
>           nic_ip_address: 10.10.20.2
>           nic_netmask: 255.255.255.0
>           nic_gateway: 10.10.20.1
>     - name: Revoke
>       ovirt_auth:
>         state: absent
>         ovirt_auth: "{{ ovirt_auth }}"
> can you assist me with this issue by checking if that any missing in my
ansible .
> best regards
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine

2017-06-16 Thread Sahina Bose
>From the agent.log,
MainThread::INFO::2017-06-15
11:16:50,583::states::473::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm is running on host ovirt-hyp-02.reis.com (id 2)

It looks like the HE VM was started successfully? Is it possible that the
ovirt-engine service could not be started on the HE VM. Could you try to
start the HE vm using below and then logging into the VM console.
#hosted-engine --vm-start

Also, please check
# gluster volume status engine
# gluster volume heal engine info

Please also check if there are errors in gluster mount logs - at
/var/log/glusterfs/rhev-data-center-mnt...log


On Thu, Jun 15, 2017 at 8:53 PM, Joel Diaz  wrote:

> Sorry. I forgot to attached the requested logs in the previous email.
>
> Thanks,
>
> On Jun 15, 2017 9:38 AM, "Joel Diaz"  wrote:
>
> Good morning,
>
> Requested info below. Along with some additional info.
>
> You'll notice the data volume is not mounted.
>
> Any help in getting HE back running would be greatly appreciated.
>
> Thank you,
>
> Joel
>
> [root@ovirt-hyp-01 ~]# hosted-engine --vm-status
>
>
>
>
>
> --== Host 1 status ==--
>
>
>
> conf_on_shared_storage : True
>
> Status up-to-date  : False
>
> Hostname   : ovirt-hyp-01.example.lan
>
> Host ID: 1
>
> Engine status  : unknown stale-data
>
> Score  : 3400
>
> stopped: False
>
> Local maintenance  : False
>
> crc32  : 5558a7d3
>
> local_conf_timestamp   : 20356
>
> Host timestamp : 20341
>
> Extra metadata (valid at timestamp):
>
> metadata_parse_version=1
>
> metadata_feature_version=1
>
> timestamp=20341 (Fri Jun  9 14:38:57 2017)
>
> host-id=1
>
> score=3400
>
> vm_conf_refresh_time=20356 (Fri Jun  9 14:39:11 2017)
>
> conf_on_shared_storage=True
>
> maintenance=False
>
> state=EngineDown
>
> stopped=False
>
>
>
>
>
> --== Host 2 status ==--
>
>
>
> conf_on_shared_storage : True
>
> Status up-to-date  : False
>
> Hostname   : ovirt-hyp-02.example.lan
>
> Host ID: 2
>
> Engine status  : unknown stale-data
>
> Score  : 3400
>
> stopped: False
>
> Local maintenance  : False
>
> crc32  : 936d4cf3
>
> local_conf_timestamp   : 20351
>
> Host timestamp : 20337
>
> Extra metadata (valid at timestamp):
>
> metadata_parse_version=1
>
> metadata_feature_version=1
>
> timestamp=20337 (Fri Jun  9 14:39:03 2017)
>
> host-id=2
>
> score=3400
>
> vm_conf_refresh_time=20351 (Fri Jun  9 14:39:17 2017)
>
> conf_on_shared_storage=True
>
> maintenance=False
>
> state=EngineDown
>
> stopped=False
>
>
>
>
>
> --== Host 3 status ==--
>
>
>
> conf_on_shared_storage : True
>
> Status up-to-date  : False
>
> Hostname   : ovirt-hyp-03.example.lan
>
> Host ID: 3
>
> Engine status  : unknown stale-data
>
> Score  : 3400
>
> stopped: False
>
> Local maintenance  : False
>
> crc32  : f646334e
>
> local_conf_timestamp   : 20391
>
> Host timestamp : 20377
>
> Extra metadata (valid at timestamp):
>
> metadata_parse_version=1
>
> metadata_feature_version=1
>
> timestamp=20377 (Fri Jun  9 14:39:37 2017)
>
> host-id=3
>
> score=3400
>
> vm_conf_refresh_time=20391 (Fri Jun  9 14:39:51 2017)
>
> conf_on_shared_storage=True
>
> maintenance=False
>
> state=EngineStop
>
> stopped=False
>
> timeout=Thu Jan  1 00:43:08 1970
>
>
>
>
>
> [root@ovirt-hyp-01 ~]# gluster peer status
>
> Number of Peers: 2
>
>
>
> Hostname: 192.168.170.143
>
> Uuid: b2b30d05-cf91-4567-92fd-022575e082f5
>
> State: Peer in Cluster (Connected)
>
> Other names:
>
> 10.0.0.2
>
>
>
> Hostname: 192.168.170.147
>
> Uuid: 4e50acc4-f3cb-422d-b499-fb5796a53529
>
> State: Peer in Cluster (Connected)
>
> Other names:
>
> 10.0.0.3
>
>
>
> [root@ovirt-hyp-01 ~]# gluster volume info all
>
>
>
> Volume Name: data
>
> Type: Replicate
>
> Volume ID: 1d6bb110-9be4-4630-ae91-36ec1cf6cc02
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 1 x (2 + 1) = 3
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: 192.168.170.141:/gluster_bricks/data/data
>
> Brick2: 192.168.170.143:/gluster_bricks/data/data
>
> Brick3: 192.168.170.147:/gluster_bricks/data/data (arbiter)
>
> 

[ovirt-users] hosted-engine VM and services not working

2017-06-16 Thread Andrew Dent

Hi

Well I've got myself into a fine mess.

host01 was setup with hosted-engine v4.1. This was successful.
Imported 3 VMs from a v3.6 OVirt AIO instance. (This OVirt 3.6 is still 
running with more VMs on it)
Tried to add host02 to the new Ovirt 4.1 setup. This partially succeeded 
but I couldn't add any storage domains to it. Cannot remember why.

In Ovirt engine UI I removed host02.
I reinstalled host02 with Centos7, tried to add it and Ovirt UI told me 
it was already there (but it wasn't listed in the UI).
Renamed the reinstalled host02 to host03, changed the ipaddress, 
reconfig the DNS server and added host03 into the Ovirt Engine UI.

All good, and I was able to import more VMs to it.
I was also able to shutdown a VM on host01 assign it to host03 and start 
the VM. Cool, everything working.

The above was all last couple of weeks.

This week I performed some yum updates on the Engine VM. No reboot.
Today I noticed that the Ovirt services in the Engine VM were in an endless 
restart loop. They would be up for 5 minutes and then die.
Looking into /var/log/ovirt-engine/engine.log and I could only see 
errors relating to host02. Ovirt was trying to find it and failing. Then 
falling over.
I ran "hosted-engine --clean-metadata" thinking it would cleanup and 
remove bad references to hosts, but now realise that was a really bad 
idea as it didn't do what I'd hoped.
At this point the sequence below worked, I could login to Ovirt UI but 
after 5 minutes the services would be off

service ovirt-engine restart
service ovirt-websocket-proxy restart
service httpd restart

I saw some reference to having to remove hosts from the database by hand 
in situations where, under the hood of Ovirt, a decommissioned host was 
still listed but wasn't showing in the GUI.
So I removed reference to host02 (vds_id and host_id) in the following 
tables in this order.

vds_dynamic
vds_statistics
vds_static
host_device

Now when I try to start ovirt-websocket it will not start
service ovirt-websocket start
Redirecting to /bin/systemctl start  ovirt-websocket.service
Failed to start ovirt-websocket.service: Unit not found.

I'm now thinking that I need to do the following in the engine VM
# engine-cleanup
# yum remove ovirt-engine
# yum install ovirt-engine
# engine-setup
But to run engine-cleanup I need to put the engine-vm into maintenance 
mode and because of the --clean-metadata that I ran earlier on host01 I 
cannot do that.


What is the best course of action from here?

Cheers



Andrew
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users