Re: [ovirt-users] ovirt4.0.1 and new node, migration fails

2016-08-29 Thread Sandro Bonazzola
On Sat, Aug 27, 2016 at 12:28 AM, Bill James  wrote:

> Can anyone offer suggestions on how to figure out why when I create a new
> VM and go to "Host" and try to select "Specific Host", although 2 nodes are
> listed it won't let me pick one of them?
>
> No updates on the bug reported.
> I tried ovirt-engine-4.0.2.7-1.el7.centos.noarch on both ovirt engine and
> the new node, no change.
>

Adding Phillip and Doron (Bug 1363900 - Specific Host(s) radio button
doesn't work). I see the bug has been targeted to 4.0.7, but maybe some of
your questions can be answered earlier.



>
> The 2 nodes are not identical if that makes a difference. Both are HP
> DL360 G8's but one has 128GB RAM and the other has 32GB. Does that matter?
>
> If after creating VM I go to "Run-once", then I can select the second node
> and VM starts up fine.
> Or I can migrate the VM once it's started.
> Why doesn't the initial "New VM" window allow me to select a host?
>
>
>
>
> On 8/15/16 8:51 AM, Bill James wrote:
>
> sure did. No word yet on that.
> *bug 1363900*
>
>
>
> On 08/14/2016 03:43 AM, Yaniv Dary wrote:
>
> Did you open a bug like Michal asked?
>
> Yaniv Dary
> Technical Product Manager
> Red Hat Israel Ltd.
> 34 Jerusalem Road
> Building A, 4th floor
> Ra'anana, Israel 4350109
>
> Tel : +972 (9) 7692306
> 8272306
> Email: yd...@redhat.com
> IRC : ydary
>
>
> On Fri, Aug 12, 2016 at 2:23 AM, Bill James  wrote:
>
>> just fyi, I tried updating to ovirt-engine-4.0.2.6-1.el7.centos.noarch
>> and still the same story: it won't let me select a specific host to deploy a
>> VM to, and doesn't tell me why not.
>>
>> However I can migrate the VM from one node to the other one just fine.
>>
>> Any ideas why?
>>
>> I provided logs earlier.
>>
>>
>>
>>
>> On 08/03/2016 02:32 PM, Bill James wrote:
>>
>>> opened bug 1363900.
>>> It also includes recent logs from all 3 servers.
>>>
>>> Also tried updating vdsm to 4.18.10-1 and restarted ovirt-engine.
>>> No change in results.
>>>
>>> I can migrate VMs to new node but not initially assign VMs to that node.
>>>
>>>
>>> On 07/29/2016 09:31 AM, Bill James wrote:
>>>
 ok, I will raise a bug. Yes, it is very frustrating just having a button
 not work without explanation.
 I don't know if this is related, but the new host that I am having troubles
 with is running 4.0.1.1-1;
 the other host is running 4.0.0.6-1.

 I was planning on migrating VMs and then upgrading the older host.
 Also, the cluster is still in 3.6 mode, waiting for the upgrade of the older
 node.
 All storage domains are on the older node; it's the NFS server.


 hmm, just retried migration so I could get vdsm logs from both hosts.
 1 VM worked, the other 3 failed.
 And I still can't assign VMs to the new node.

 VM that worked: f4cd4891-977d-44c2-8554-750ce86da7c9

 Not sure what's special about it.
 It and 45f4f24b-2dfa-401b-8557-314055a4662c are clones from the same
 template.


 Attaching logs from 2 nodes and engine now.



 On 7/29/16 4:00 AM, Michal Skrivanek wrote:

> On 28 Jul 2016, at 18:37, Bill James  wrote:
>>
>> I'm trying to test out ovirt4.0.1.
>> I added a new hardware node to the cluster. Its status is Up.
>> But for some reason when I create a new VM and try to select
>> "Specific Host" the radio button doesn't let me select it so I can't 
>> assign
>> a VM to new node.
>>
> please raise that as a bug if there is no explanation why
>
> When I try to migrate a VM to the new node it fails with:
>>
>> MigrationError: Domain not found: no domain with matching uuid
>> '45f4f24b-2dfa-401b-8557-314055a4662c'
>>
>>
>> What did I miss?
>>
> for migration issues you always need to include vdsm logs from both
> source and destination host. Hard to say otherwise. Anything special about
> your deployment?
>
> ovirt-engine-4.0.1.1-1.el7.centos.noarch
>> vdsm-4.18.6-1.el7.centos.x86_64
>>
>> storage is NFS.
>>
>> Attached logs.
>>
>> ___
>>
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>

>>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
> --
> 
> Check it out Tomorrow night:
> http://thebilljamesgroup.com/Events.html
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM Live Migration issues

2016-08-29 Thread Yaniv Kaul
Did you happen to save the logs? They might reveal the issue.
TIA,
Y.

On Sat, Aug 27, 2016 at 1:25 PM, Anantha Raghava <
rag...@exzatechconsulting.com> wrote:

> Hi,
>
> The migration is working fine with Cluster Switch set to "Legacy" Mode.
>
> But I have run into a different problem. Yesterday, for some reason, I had
> to restart the hosts, but the hosts never came online. The hosts were
> constantly in a Non Responsive state. I manually fenced them, and the Vds
> command timed out.
>
> I found that the NIC was down for some reason. I brought it up and again
> fenced the hosts. Again the Vds command timed out. I manually attempted to
> restart the VDSM service, but the host reported that there was no such
> service. Starting the vdsmd service resulted in failure.
>
> Finally, I had to forcefully remove the hosts, reinstall the base OS
> (CentOS 7) on them, and re-add them to oVirt with the storage domain intact.
> All hosts and storage domains went online and all VMs were intact.
>
> Post reinstallation, I set the Cluster Switch mode to "Legacy" and tried
> migration and it worked fine.
>
> I fail to understand what went wrong while rebooting the hosts.
>
> --
>
> Thanks & Regards,
>
>
> Anantha Raghava eXza Technology Consulting & Services
> Do not print this e-mail unless required. Save Paper & trees.
> On Friday 26 August 2016 10:12 PM, Anantha Raghava wrote:
>
> Hi,
>
> Then, I will switch the cluster switch type back to Legacy and then try to
> migrate. Will post the results here.
>
> Any other suggestions?
>
> --
>
> Thanks & Regards,
>
>
> Anantha Raghava eXza Technology Consulting & Services
>
>
> Do not print this e-mail unless required. Save Paper & trees.
> On Friday 26 August 2016 09:58 PM, Yaniv Dary wrote:
>
> OVS is experimental and there is an open item on making migration work:
> https://bugzilla.redhat.com/show_bug.cgi?id=1362495
>
> Yaniv Dary
> Technical Product Manager
> Red Hat Israel Ltd.
> 34 Jerusalem Road
> Building A, 4th floor
> Ra'anana, Israel 4350109
>
> Tel : +972 (9) 7692306
> 8272306
> Email: yd...@redhat.com
> IRC : ydary
>
>
> On Fri, Aug 26, 2016 at 11:25 AM, Anantha Raghava <
> rag...@exzatechconsulting.com> wrote:
>
>> Hi,
>>
>> Not at all.
>>
>> Just a newly created virtual machine. No user is accessing the VM yet. Only
>> the Ubuntu Trusty Tahr (14.04) OS with the oVirt Guest Agent is installed.
>> The VM and both hosts are in the same network as well.
>>
>> The applied migration policy is "Minimum Downtime". The bandwidth limit is
>> set to Auto. The cluster switch is set to "OVS"; I can send the screenshot
>> tomorrow.
>>
>> Host hardware configuration: Intel Xeon CPUs, 2 sockets x 16 cores each.
>> Installed memory is 256 GB on each host. I have 4 x 10Gbps NICs, 2
>> given for VM traffic and 2 given for iSCSI traffic. oVirt management is on
>> a separate 1Gbps NIC.
>>
>> Yet, migration fails.
>>
>> --
>>
>> Thanks & Regards,
>>
>>
>> Anantha Raghava eXza Technology Consulting & Services
>>
>> Do not print this e-mail unless required. Save Paper & trees.
>> On Friday 26 August 2016 09:31 PM, Yaniv Dary wrote:
>>
>> Is the VM very busy?
>> Did you apply the new cluster migration policies?
>>
>> Yaniv Dary
>> Technical Product Manager
>> Red Hat Israel Ltd.
>> 34 Jerusalem Road
>> Building A, 4th floor
>> Ra'anana, Israel 4350109
>>
>> Tel : +972 (9) 7692306
>> 8272306
>> Email: yd...@redhat.com
>> IRC : ydary
>>
>>
>> On Fri, Aug 26, 2016 at 1:54 AM, Anantha Raghava <
>> rag...@exzatechconsulting.com> wrote:
>>
>>> Hi,
>>>
>>> In our setup we have configured two hosts, both of the same CPU type and
>>> the same amount of memory; the master storage domain is created on iSCSI
>>> storage and is live.
>>>
>>> I created a single VM with Ubuntu Trusty as OS. It installed properly
>>> and when I attempted to migrate the running VM, the migration failed.
>>>
>>> Engine log, Host 1 log and Host 2 logs are attached for your reference.
>>> Since the logs run into several MBs, I have compressed them and
>>> attached them here.
>>>
>>> Can someone help us to solve this issue?
>>>
>>> --
>>>
>>> Thanks & Regards,
>>> Anantha Raghava
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Choose default auth backend

2016-08-29 Thread James Michels
Greetings,

We have several auth backends (LDAP, AD...), and we'd like to set the
default to one of them. 'internal' seems to be the default now.

Is there a way to override the default backend on the login page?

This is ovirt 4.0.2

Thanks

James
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Choose default auth backend

2016-08-29 Thread Ondra Machacek

On 08/29/2016 10:09 AM, James Michels wrote:

Greetings,

We have several auth backends (LDAP, AD...), and we'd like to set the
default to one of them. 'internal' seems to be the default now.

Is there a way to override the default backend on the login page?


Yes, we have had such a feature since 4.0.
You can see the details in the doc text of the following bug:

 https://bugzilla.redhat.com/show_bug.cgi?id=1296274

Doc text:
With this update, a new configuration variable,
ovirt.engine.aaa.authn.default.profile, has been added to the authn
configuration file. Setting this variable ensures that the profile
drop-down menu on the login page defaults to the selected profile. To
configure this feature, add ovirt.engine.aaa.authn.default.profile to
the authn configuration file for a selected profile and set the value to
true, then restart the ovirt-engine service. If the
ovirt.engine.aaa.authn.default.profile variable is not defined in the
authn configuration file, the drop-down menu defaults to internal.
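For illustration, a minimal sketch of what that doc text describes. The file
name below is an example (not from this thread); authn files for
ovirt-engine-extension-aaa-ldap profiles normally live under
/etc/ovirt-engine/extensions.d/:

    # /etc/ovirt-engine/extensions.d/example-ldap-authn.properties
    ovirt.engine.aaa.authn.default.profile = true

    # then restart the engine:
    systemctl restart ovirt-engine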




This is ovirt 4.0.2

Thanks

James


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Choose default auth backend

2016-08-29 Thread James Michels
Thanks for this!

Regards

James

2016-08-29 9:26 GMT+01:00 Ondra Machacek :

> On 08/29/2016 10:09 AM, James Michels wrote:
>
>> Greetings,
>>
>> We have several auth backends (LDAP, AD...), and we'd like to set the
>> default to one of them. 'internal' seems to be the default now.
>>
>> Is there a way to override the default backend on the login page?
>>
>
> Yes, we have had such a feature since 4.0.
> You can see the details in the doc text of the following bug:
>
>  https://bugzilla.redhat.com/show_bug.cgi?id=1296274
>
> Doc text:
> With this update, a new configuration variable,
> ovirt.engine.aaa.authn.default.profile, has been added to the authn
> configuration file. Setting this variable ensures that the profile
> drop-down menu on the login page defaults to the selected profile. To
> configure this feature, add ovirt.engine.aaa.authn.default.profile to the
> authn configuration file for a selected profile and set the value to true,
> then restart the ovirt-engine service. If the
> ovirt.engine.aaa.authn.default.profile
> variable is not defined in the authn configuration file, the drop-down menu
> defaults to internal.
>
>
>> This is ovirt 4.0.2
>>
>> Thanks
>>
>> James
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Stable Next Generation Node Image for 4.0.2?

2016-08-29 Thread Fabian Deutsch
On Wed, Aug 24, 2016 at 2:09 PM, Thomas Klute  wrote:
> Dear oVirt community,
>
> what is the correct way to set up a next generation node of the latest
> stable version (4.0.2)?

Hey Thomas,

let me get back to this question at the end …

> Take the image from
> http://resources.ovirt.org/pub/ovirt-4.0/iso/ovirt-node-ng-installer/ ?
> Seems to be 4.0.0 and then update?

This is the URL you want - it contains the builds based on the latest
stable release.

Though I see that it wasn't updated lately.
Sandro, can we publish a new ISO for 4.0.3?

In the meantime you can take the ISO from
http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.0_build-artifacts-el7-x86_64/
which is building the stable ISOs.

> Or take the image from
> http://resources.ovirt.org/pub/ovirt-4.0-snapshot/iso/
> Seems to be 4.0.2 but nightly and thus unstable?

Yep, these are nightly builds, and probably unstable.

- fabian

> Thanks for the clarification,
> best regards,
>  Thomas
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Stable Next Generation Node Image for 4.0.2?

2016-08-29 Thread Sandro Bonazzola
On Mon, Aug 29, 2016 at 10:51 AM, Fabian Deutsch 
wrote:

> On Wed, Aug 24, 2016 at 2:09 PM, Thomas Klute  wrote:
> > Dear oVirt community,
> >
> > what is the correct way to set up a next generation node of the latest
> > stable version (4.0.2)?
>
> Hey Thomas,
>
> let me get back to this question at the end …
>
> > Take the image from
> > http://resources.ovirt.org/pub/ovirt-4.0/iso/ovirt-node-ng-installer/ ?
> > Seems to be 4.0.0 and then update?
>
> This is the URL you want - it contains the builds based on the latest
> stable release.
>
> Though I see that it wasn't updated lately.
>

The 4.0.2 ISO wasn't ready and missed the release. 4.0.3 is currently under
testing, to be released this week if no blockers are found. It's
available here:
http://resources.ovirt.org/pub/ovirt-4.0-pre/iso/ovirt-node-ng-installer/



> Sandro, can we publish a new ISO for 4.0.3?
>
> In the meantime you can take the ISO from
> http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.0_build-
> artifacts-el7-x86_64/
> which is building the stable ISOs.
>
> > Or take the image from
> > http://resources.ovirt.org/pub/ovirt-4.0-snapshot/iso/
> > Seems to be 4.0.2 but nightly and thus unstable?
>
> Yep, these are nightly builds, and probably unstable.
>
> - fabian
>
> > Thanks for the clarification,
> > best regards,
> >  Thomas
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Problem starting VMs

2016-08-29 Thread knarra

Hi,

I am unable to launch VMs on one of my hosts. The problem is the VM is
stuck at "waiting for launch" and never comes up. I see the following
messages in /var/log/messages. Can someone help me resolve the issue?



Aug 29 12:16:20 rhsqa-grafton3 systemd-machined: New machine qemu-20-appwinvm19.
Aug 29 12:16:20 rhsqa-grafton3 systemd: Started Virtual Machine qemu-20-appwinvm19.
Aug 29 12:16:20 rhsqa-grafton3 systemd: Starting Virtual Machine qemu-20-appwinvm19.
Aug 29 12:16:20 rhsqa-grafton3 kvm: 11 guests now active
Aug 29 12:16:21 rhsqa-grafton3 kernel: device vnet11 entered promiscuous mode
Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered forwarding state
Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered forwarding state
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet11 -j libvirt-J-vnet11' failed: Illegal target name 'libvirt-J-vnet11'.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet11 -j libvirt-P-vnet11' failed: Illegal target name 'libvirt-P-vnet11'.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-mac' failed: Chain 'J-vnet11-mac' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-mac' failed: Chain 'J-vnet11-mac' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-arp-mac' failed: Chain 'J-vnet11-arp-mac' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-arp-mac' failed: Chain 'J-vnet11-arp-mac' doesn't exist.

Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-29 Thread Nir Soffer
On Wed, Aug 24, 2016 at 6:15 PM, InterNetX - Juergen Gotteswinter
 wrote:
>
> iSCSI & oVirt is an awful combination, no matter if multipathed or
> bonded. It's always a gamble how long it will work, and when it fails, why
> it failed.
>
> It's supersensitive to latency,

Can you elaborate on this?

> and superfast with setting a host to
> inactive because the engine thinks something is wrong with it.

Typically it takes at least 5 minutes with abnormal monitoring
conditions before the engine will make a host non-operational;
is this really superfast?

> in most
> cases there was no real reason for it.

I think the main issue was the mixing of storage monitoring and lvm
refreshes, unneeded serialization of lvm commands, and bad
locking on the engine side. The engine side was fixed in 3.6, and
the vdsm side in 4.0.
See https://bugzilla.redhat.com/1081962

In RHEL/CentOS 7.2, a lot of multipath-related issues were fixed,
and the oVirt multipath configuration was fixed to prevent unwanted
I/O queuing with some devices, which could lead to long delays and
failures in many flows. However, I think our configuration is too
extreme, and you may like to use the configuration
in this patch:
https://gerrit.ovirt.org/61281
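For orientation only: this kind of tuning lives in the multipath.conf that
vdsm manages on the hosts. The fragment below is an illustrative assumption,
not the content of the patch - check the gerrit link for the actual proposed
settings:

    # /etc/multipath.conf (managed by vdsm); values here are assumptions
    defaults {
        # retry a few times on path loss instead of failing or queueing forever
        no_path_retry    4
    }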

I guess trying 4.0 may be too bleeding edge for you, but
hopefully you will find that your iscsi setup is much more reliable
now.

Please file bugs if you still have issues with 4.0.

Nir

> we had this in several different hardware combinations: self-built
> filers on FreeBSD/Illumos & ZFS, an Equallogic SAN, a Nexenta filer.
>
> Been there, done that, won't do it again.
>
> Am 24.08.2016 um 16:04 schrieb Uwe Laverenz:
> > Hi Elad,
> >
> > thank you very much for clearing things up.
> >
> > Initiator/iface 'a' tries to connect target 'b' and vice versa. As 'a'
> > and 'b' are in completely separate networks this can never work as long
> > as there is no routing between the networks.
> >
> > So it seems the iSCSI-bonding feature is not useful for my setup. I
> > still wonder how and where this feature is supposed to be used?
> >
> > thank you,
> > Uwe
> >
> > Am 24.08.2016 um 15:35 schrieb Elad Ben Aharon:
> >> Thanks.
> >>
> >> You're getting an iSCSI connection timeout [1], [2]. It means the host
> >> cannot connect to the targets from iface: enp9s0f1 or iface: enp9s0f0.
> >>
> >> This causes the host to lose its connection to the storage, and also
> >> the connection to the engine becomes inactive. Therefore, the host
> >> changes its status to Non-responsive [3] and since it's the SPM, the
> >> whole DC, with all its storage domains become inactive.
> >>
> >>
> >> vdsm.log:
> >> [1]
> >> Traceback (most recent call last):
> >>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in
> >> connectStorageServer
> >> conObj.connect()
> >>   File "/usr/share/vdsm/storage/storageServer.py", line 508, in connect
> >> iscsi.addIscsiNode(self._iface, self._target, self._cred)
> >>   File "/usr/share/vdsm/storage/iscsi.py", line 204, in addIscsiNode
> >> iscsiadm.node_login(iface.name, portalStr,
> >> target.iqn)
> >>   File "/usr/share/vdsm/storage/iscsiadm.py", line 336, in node_login
> >> raise IscsiNodeError(rc, out, err)
> >> IscsiNodeError: (8, ['Logging in to [iface: enp9s0f0, target:
> >> iqn.2005-10.org.freenas.ctl:tgtb, portal: 10.0.132.121,3260]
> >> (multiple)'], ['iscsiadm: Could not login to [iface: enp9s0f0, targ
> >> et: iqn.2005-10.org.freenas.ctl:tgtb, portal: 10.0.132.121,3260].',
> >> 'iscsiadm: initiator reported error (8 - connection timed out)',
> >> 'iscsiadm: Could not log into all portals'])
> >>
> >>
> >>
> >> vdsm.log:
> >> [2]
> >> Traceback (most recent call last):
> >>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in
> >> connectStorageServer
> >> conObj.connect()
> >>   File "/usr/share/vdsm/storage/storageServer.py", line 508, in connect
> >> iscsi.addIscsiNode(self._iface, self._target, self._cred)
> >>   File "/usr/share/vdsm/storage/iscsi.py", line 204, in addIscsiNode
> >> iscsiadm.node_login(iface.name, portalStr,
> >> target.iqn)
> >>   File "/usr/share/vdsm/storage/iscsiadm.py", line 336, in node_login
> >> raise IscsiNodeError(rc, out, err)
> >> IscsiNodeError: (8, ['Logging in to [iface: enp9s0f1, target:
> >> iqn.2005-10.org.freenas.ctl:tgta, portal: 10.0.131.121,3260]
> >> (multiple)'], ['iscsiadm: Could not login to [iface: enp9s0f1, target:
> >> iqn.2005-10.org.freenas.ctl:tgta, portal: 10.0.131.121,3260].',
> >> 'iscsiadm: initiator reported error (8 - connection timed out)',
> >> 'iscsiadm: Could not log into all portals'])
> >>
> >>
> >> engine.log:
> >> [3]
> >>
> >>
> >> 2016-08-24 14:10:23,222 WARN
> >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >> (default task-25) [15d1637f] Correlation ID: 15d1637f, Call Stack: null,
> >> Custom Event ID:
> >>  -1, Message: iSCSI bond 'iBond' was successfully created in Data Center
> >> 'Default' but some of the hosts encountered conne

Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-29 Thread Nir Soffer
On Thu, Aug 25, 2016 at 2:37 PM, InterNetX - Juergen Gotteswinter
 wrote:
> currently, iscsi multipathed with solaris based filer as backend. but
> this is already in progress of getting migrated to a different, less
> fragile, plattform. ovirt is nice, but too bleeding edge and way to much
> acting like a girly

"acting like a girly" is not  appropriate  for this list.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-29 Thread InterNetX - Juergen Gotteswinter
Am 29.08.2016 um 12:25 schrieb Nir Soffer:
> On Thu, Aug 25, 2016 at 2:37 PM, InterNetX - Juergen Gotteswinter
>  wrote:
>> currently, iSCSI multipathed with a Solaris-based filer as backend. But
>> this is already in the process of getting migrated to a different, less
>> fragile, platform. oVirt is nice, but too bleeding edge and way too much
>> acting like a girly
> 
> "acting like a girly" is not  appropriate  for this list.
> 
> Nir
> 

I am sorry; this was never meant to discriminate against any human. If it
did, I promise that it was not meant to.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Upgrade from CentOS 6.8 (oVirt 3.5) to CentOS 7 (oVirt 4.0)

2016-08-29 Thread Fedele Stabile
I would like to upgrade CentOS 6.8 to 7 on my cluster of 5 oVirt nodes
(I'm actually running oVirt 3.5 and glusterfs 3.7).
Has anyone experienced trouble with this upgrade?

Thank you, Fedele

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] More Hosted Engine 3.6 Questions

2016-08-29 Thread C Williams
Hello,

I have found that I can import thin-provisioned VMs from VMware ESXi into
an oVirt (3.6 Hosted-Engine) iSCSI storage domain as thin-provisioned VMs
by doing the following:

1. Import the thin-provisioned VMware VM into an NFS storage domain.
2. Export the thin-provisioned VM from the NFS storage domain into the
oVirt export domain.
3. Import the thin-provisioned VM into an iSCSI storage domain from the
oVirt export domain.

Is there a simpler way to do this using the 3.6 import capability? Or do
we need to follow the above procedure to get VMs into oVirt from VMware as
thin-provisioned VMs?

Thank You For Your Help !
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Uncaught exception issue

2016-08-29 Thread James Michels
Hi,

I'm trying to de-obfuscate an uncaught exception error and I've reached
this page for instructions:
http://www.ovirt.org/uncategorized/engine-debug-obfuscated-ui/

However, the images are broken and it's kind of difficult to follow the
instructions without seeing them.

Could you guys please fix it?

Thanks

James
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] More Hosted Engine 3.6 Questions

2016-08-29 Thread Nir Soffer
On Mon, Aug 29, 2016 at 2:21 PM, C Williams  wrote:
> Hello,
>
> I have found that I can import thin-provisioned VMs from VMware ESXi into an
> oVirt (3.6 Hosted-Engine) iSCSI storage domain as thin-provisioned VMs by
> doing the following:
>
> 1. Import the thin-provisioned VMware VM into an NFS storage domain.

Why not import the thin-provisioned VMware VM directly into a block storage
domain?

What do you get in this case?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Uncaught exception issue

2016-08-29 Thread Fred Rolland
Check the old wiki until the page is fixed:

http://old.ovirt.org/OVirt_Engine_Debug_Obfuscated_UI

On Mon, Aug 29, 2016 at 2:21 PM, James Michels <
karma.sometimes.hu...@gmail.com> wrote:

> Hi,
>
> I'm trying to de-obfuscate an uncaught exception error and I've reached
> this page for instructions: http://www.ovirt.org/
> uncategorized/engine-debug-obfuscated-ui/
>
> However, the images are broken and it's kind of difficult to follow the
> instructions without seeing them.
>
> Could you guys please fix it?
>
> Thanks
>
> James
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade from CentOS 6.8 (oVirt 3.5) to CentOS 7 (oVirt 4.0)

2016-08-29 Thread Nir Soffer
On Mon, Aug 29, 2016 at 2:15 PM, Fedele Stabile
 wrote:
> I would upgrade my CentOS 6.8 to 7 on my cluster of 5 ovirt nodes
> (actually I'm running oVirt 3.5 and glusterfs 3.7).
> Anyone has experienced troubles for this upgrade?

Udev in 6.8 seems to be broken, breaking block storage.
See https://access.redhat.com/solutions/2576511

I would avoid this upgrade, or test it on one hypervisor and report
an oVirt bug.

Upgrading to 3.6 or 4.0 and CentOS 7.2 is recommended.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade from CentOS 6.8 (oVirt 3.5) to CentOS 7 (oVirt 4.0)

2016-08-29 Thread Fedele Stabile
ok,
I understand you suggest upgrading CentOS 6.8 to 7.2 first.
Correct?

Il giorno lun, 29/08/2016 alle 14.38 +0300, Nir Soffer ha scritto:
> On Mon, Aug 29, 2016 at 2:15 PM, Fedele Stabile
>  wrote:
> > 
> > I would upgrade my CentOS 6.8 to 7 on my cluster of 5 ovirt nodes
> > (actually I'm running oVirt 3.5 and glusterfs 3.7).
> > Anyone has experienced troubles for this upgrade?
> 
> Udev in 6.8 seems to be broken, breaking block storage.
> See https://access.redhat.com/solutions/2576511
> 
> I would avoid this upgrade, or test it on one hypervisor and report
> ovirt bug.
> 
> Upgrading to 3.6 or 4.0 and centos 7.2 is recommended.
> 
> Nir
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Uncaught exception issue

2016-08-29 Thread Alexander Wels
On Monday, August 29, 2016 2:30:41 PM EDT Fred Rolland wrote:
> Check the old wiki until the page is fixed:
> 
> http://old.ovirt.org/OVirt_Engine_Debug_Obfuscated_UI
> 
> On Mon, Aug 29, 2016 at 2:21 PM, James Michels <
> 
> karma.sometimes.hu...@gmail.com> wrote:
> > Hi,
> > 
> > I'm trying to de-obfuscate an uncaught exception error and I've reached
> > this page for instructions: http://www.ovirt.org/
> > uncategorized/engine-debug-obfuscated-ui/
> > 
> > However, the images are broken and it's kind of difficult to follow the
> > instructions without seeing them.
> > 
> > Could you guys please fix it?
> > 
> > Thanks
> > 
> > James
> > 

Since 3.6.5 or 3.6.6, the deobfuscation happens automatically if you installed
the mapping files described in the wiki page. You can look at ui.log and it
should be deobfuscated. It is important that the rpm version of your engine
matches the rpm version of the mapping files.
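A minimal sketch of the check, assuming the mapping-file rpm from the wiki
page is installed and matches the engine version (the log path is the
standard engine one):

    rpm -q ovirt-engine                    # note the exact engine rpm version
    tail -f /var/log/ovirt-engine/ui.log   # stack traces should appear deobfuscated here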

> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade from CentOS 6.8 (oVirt 3.5) to CentOS 7 (oVirt 4.0)

2016-08-29 Thread Fedele Stabile
My ultimate goal would be to have a cluster with the latest versions of
CentOS and oVirt, so my question is: what are the best steps?
Upgrade CentOS first, or upgrade oVirt first?
The ovirt-engine is hosted.

Fedele

Il giorno lun, 29/08/2016 alle 13.44 +0200, Fedele Stabile ha scritto:
> ok,
> I understand you suggest upgrading CentOS 6.8 to 7.2 first.
> Correct?
> 
> Il giorno lun, 29/08/2016 alle 14.38 +0300, Nir Soffer ha scritto:
> > 
> > On Mon, Aug 29, 2016 at 2:15 PM, Fedele Stabile
> >  wrote:
> > > 
> > > 
> > > I would upgrade my CentOS 6.8 to 7 on my cluster of 5 ovirt nodes
> > > (actually I'm running oVirt 3.5 and glusterfs 3.7).
> > > Anyone has experienced troubles for this upgrade?
> > 
> > Udev in 6.8 seems to be broken, breaking block storage.
> > See https://access.redhat.com/solutions/2576511
> > 
> > I would avoid this upgrade, or test it on one hypervisor and report
> > ovirt bug.
> > 
> > Upgrading to 3.6 or 4.0 and centos 7.2 is recommended.
> > 
> > Nir
> > 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade from CentOS 6.8 (oVirt 3.5) to CentOS 7 (oVirt 4.0)

2016-08-29 Thread Nir Soffer
On Mon, Aug 29, 2016 at 3:06 PM, Fedele Stabile
 wrote:
> My ultimate goal would be to have a cluster with the latest versions of
> CentOS and oVirt, so my question is: what are the best steps?
> Upgrade CentOS first, or upgrade oVirt first?

You cannot upgrade oVirt on CentOS 6, since 3.5 is the last version
supporting it.

Best would be to upgrade the hypervisors to CentOS 7, which will
pull in the latest oVirt anyway.

> The ovirt-engine is hosted

Upgrading the hosted-engine VM is a separate and more delicate matter;
it should be possible using hosted-engine setup.

Adding Simone to add more info on that.

>
> Fedele
>
> Il giorno lun, 29/08/2016 alle 13.44 +0200, Fedele Stabile ha scritto:
>> ok,
>> I understand you suggest to upgrade centos 6.8 to 7.2 first
>> Correct?

I misread your question - you seem to already run CentOS 6.8.

I guess you are not using block storage (iSCSI/FC), are you?

Nir

>>
>> Il giorno lun, 29/08/2016 alle 14.38 +0300, Nir Soffer ha scritto:
>> >
>> > On Mon, Aug 29, 2016 at 2:15 PM, Fedele Stabile
>> >  wrote:
>> > >
>> > >
>> > > I would upgrade my CentOS 6.8 to 7 on my cluster of 5 ovirt nodes
>> > > (actually I'm running oVirt 3.5 and glusterfs 3.7).
>> > > Anyone has experienced troubles for this upgrade?
>> >
>> > Udev in 6.8 seems to be broken, breaking block storage.
>> > See https://access.redhat.com/solutions/2576511
>> >
>> > I would avoid this upgrade, or test it on one hypervisor and report
>> > ovirt bug.
>> >
>> > Upgrading to 3.6 or 4.0 and centos 7.2 is recommended.
>> >
>> > Nir
>> >
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade from CentOS 6.8 (oVirt 3.5) to CentOS 7 (oVirt 4.0)

2016-08-29 Thread Simone Tiraboschi
On Mon, Aug 29, 2016 at 2:11 PM, Nir Soffer  wrote:
> On Mon, Aug 29, 2016 at 3:06 PM, Fedele Stabile
>  wrote:
>> My ultimate goal would be to have a cluster with the latest versions of
>> CentOS and oVirt, so my question is: what are the best steps?
>> Upgrade CentOS first, or upgrade oVirt first?
>
> You cannot upgrade oVirt on CentOS 6, since 3.5 is the last version
> supporting it.
>
> Best would be to upgrade the hypervisors to CentOS 7, which will
> pull in the latest oVirt anyway.
>
>> The ovirt-engine is hosted
>
> Upgrading the hosted-engine VM is a separate and more delicate matter;
> it should be possible using hosted-engine setup.
>
> Adding Simone to add more info on that.

On 4.0 we have a new --upgrade-appliance feature on hosted-engine-setup:
 
https://www.ovirt.org/develop/release-management/features/hosted-engine-migration-to-4-0/

The tool will ask you to take a backup of your engine DB with
engine-backup; the tool will then re-deploy a 4.0/el7 appliance,
automatically injecting the backup of your 3.6 engine into it.
The supported path is 3.6 -> 4.0, so you also have to do 3.5 -> 3.6 before that.
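A rough sketch of that flow (the flag name comes from the feature page
above; exact prompts and backup options may differ in your version):

    # on a hosted-engine host, start the upgrade flow:
    hosted-engine --upgrade-appliance

    # when the tool asks for an engine DB backup, take it on the 3.6 engine VM, e.g.:
    engine-backup --mode=backup --file=engine36.backup --log=backup.log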



>> Fedele
>>
>> Il giorno lun, 29/08/2016 alle 13.44 +0200, Fedele Stabile ha scritto:
>>> ok,
>>> I understand you suggest upgrading CentOS 6.8 to 7.2 first.
>>> Correct?
>
> I misread your question - you seem to already run CentOS 6.8.
>
> I guess you are not using block storage (iSCSI/FC), are you?
>
> Nir
>
>>>
>>> Il giorno lun, 29/08/2016 alle 14.38 +0300, Nir Soffer ha scritto:
>>> >
>>> > On Mon, Aug 29, 2016 at 2:15 PM, Fedele Stabile
>>> >  wrote:
>>> > >
>>> > >
>>> > > I would upgrade my CentOS 6.8 to 7 on my cluster of 5 ovirt nodes
>>> > > (actually I'm running oVirt 3.5 and glusterfs 3.7).
>>> > > Anyone has experienced troubles for this upgrade?
>>> >
>>> > Udev in 6.8 seems to be broken, breaking block storage.
>>> > See https://access.redhat.com/solutions/2576511
>>> >
>>> > I would avoid this upgrade, or test it on one hypervisor and report
>>> > ovirt bug.
>>> >
>>> > Upgrading to 3.6 or 4.0 and centos 7.2 is recommended.
>>> >
>>> > Nir
>>> >
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] How to import a qcow2 disk into ovirt

2016-08-29 Thread lifuqiong
Hi,

 How to import a qcow2 disk file into oVirt? I have searched the Internet
for a long time, but found no solution that works.

 

Thank you

 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to import a qcow2 disk into ovirt

2016-08-29 Thread Maor Lipchuk
Hi lifuqiong,

There are several ways to import disks into oVirt.

Does the disk contain any snapshots?
If not, the disk file can be copied to the storage domain and you can
register it using the Register button (see
https://bugzilla.redhat.com/show_bug.cgi?id=1138139).

You can also take a look at the image-uploader, see
http://www.ovirt.org/develop/release-management/features/storage/image-upload/

What is the use case you are trying to achieve? What is the origin of the
disk (was it an oVirt disk?) and, as asked before, does the disk include any
snapshots?
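For what it's worth, a quick way to answer the snapshot question (the path
is a placeholder):

    qemu-img info /path/to/disk.qcow2
    # any internal snapshots are listed under "Snapshot list:" in the output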

Regards,
Maor


On Mon, Aug 29, 2016 at 3:40 PM, lifuqiong  wrote:

> Hi,
>
> How to import a qcow2 disk file into oVirt? I have searched the Internet
> for a long time, but found no solution that works.
>
>
>
> Thank you
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem starting VMs

2016-08-29 Thread Petr Horacek
Hello,

could you please attach /var/log/vdsm/vdsm.log and
/var/log/vdsm/supervdsm.log here?

Regards,
Petr

2016-08-29 11:51 GMT+02:00 knarra :
> Hi,
>
> I am unable to launch VMs on one of my hosts. The problem is the VM is stuck
> at "waiting for launch" and never comes up. I see the following messages in
> /var/log/messages. Can someone help me resolve the issue?
>
>
> Aug 29 12:16:20 rhsqa-grafton3 systemd-machined: New machine
> qemu-20-appwinvm19.
> Aug 29 12:16:20 rhsqa-grafton3 systemd: Started Virtual Machine
> qemu-20-appwinvm19.
> Aug 29 12:16:20 rhsqa-grafton3 systemd: Starting Virtual Machine
> qemu-20-appwinvm19.
> Aug 29 12:16:20 rhsqa-grafton3 kvm: 11 guests now active
> Aug 29 12:16:21 rhsqa-grafton3 kernel: device vnet11 entered promiscuous
> mode
> Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered
> forwarding state
> Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered
> forwarding state
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet11 -j libvirt-J-vnet11' failed: Illegal target name 'libvirt-J-vnet11'.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet11 -j libvirt-P-vnet11' failed: Illegal target name 'libvirt-P-vnet11'.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet11' failed: Chain 'libvirt-J-vnet11' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet11' failed: Chain 'libvirt-P-vnet11' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-mac' failed: Chain 'J-vnet11-mac' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-mac' failed: Chain 'J-vnet11-mac' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-arp-mac' failed: Chain 'J-vnet11-arp-mac' doesn't exist.
> Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR: COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-arp-mac' failed: Chain 'J-vnet11-arp-mac' doesn't exist.
>
> Thanks
>
> kasturi
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.6 : Hosted engine High Availability

2016-08-29 Thread Alexis HAUSER
Thank you for your explanations, this is very clear now :)



Actually I was confused because "this host" is used in several different
contexts, if I am right:

1 - For the engine (which is not a host, but a guest):
"Enter the name which will be used to identify this host inside the
Administrator Portal [hosted_engine_2]"

2 - For the host:
It asks the same thing for the FQDN, but this time not for the engine,
for the real "host".

Please confirm this for me, so I will know if I have to open a bug for it.



Now my error is the following:

"[ ERROR ] Failed to execute stage 'Closing up': Specified cluster does not
exist: Default"

I think it assumes I didn't change the name of the default cluster after
deploying the first host. I will try to work around this by renaming the
datacenter.
I will check if a bug is open in Bugzilla about this and if not I'll open
one.





- Mail original -
De: "Simone Tiraboschi" 
À: "Alexis HAUSER" 
Cc: "users" 
Envoyé: Jeudi 25 Août 2016 16:56:17
Objet: Re: [ovirt-users] 3.6 : Hosted engine High Availability

On Thu, Aug 25, 2016 at 4:26 PM, Alexis HAUSER
 wrote:
>
>> Can you please share your hosted-engine-setup logs?
>
> Yes of course, here they are :)

OK, the issue is here:
2016-08-25 12:49:04 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:156 query OVESETUP_NETWORK_FQDN_HOST_HOSTNAME
2016-08-25 12:49:04 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Please provide the
address of this host.
2016-08-25 12:49:04 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Note: The engine VM
and all the other hosts should be able to correctly resolve it.
2016-08-25 12:49:04 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:SEND Host address:
[localhost.localdomain]:
2016-08-25 12:49:37 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:219 DIALOG:RECEIVEvm-rhemgr.mydomain.com
2016-08-25 12:49:37 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.network.bridge
hostname.test_hostname:411 test_hostname exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_setup_lib/hostname.py",
line 407, in test_hostname
not_local_text,
  File "/usr/lib/python2.7/site-packages/ovirt_setup_lib/hostname.py",
line 252, in _validateFQDNresolvability
fqdn=fqdn,
RuntimeError: vm-rhemgr.mydomain.com did not resolve into an IP address
2016-08-25 12:49:37 ERROR
otopi.plugins.ovirt_hosted_engine_setup.network.bridge
dialog.queryEnvKey:115 Host name is not valid: vm-rhemgr.mydomain.com
did not resolve into an IP address



'Please provide the address of THIS host.' means that you have to
enter/validate the address of the host you are going to add (the host
where you are running hosted-engine --deploy command).

Let's try to recap:
the fqdn of your engine VM is 'vm-rhemgr.mydomain.com',
the fqdn of your host is currently 'localhost.localdomain' but it's
not acceptable (try to run 'ssh localhost.localdomain' on the engine
VM and see where you are getting...)

So you have just to configure a valid fqdn on your additional host
(something like 'my2ndhost.mydomain.com') and confirm it when asked by
that question.

Normally we suggest relying on a properly configured DNS; you can just
get by entering values in '/etc/hosts', but it's up to you to properly
maintain it:
- the engine VM should be able to resolve the address of all the hosts
to contact them: this is not true in your env, with
'localhost.localdomain' your engine VM will not reach your host...
- each host should be able to resolve the address of all the other
hosts and also the address of the engine VM: this is not true in your
env as I read 'RuntimeError: vm-rhemgr.mydomain.com did not resolve
into an IP address'
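For illustration, the kind of '/etc/hosts' entries meant here, using the
names from this thread (the addresses are placeholders):

    192.0.2.10   vm-rhemgr.mydomain.com    # engine VM
    192.0.2.21   my2ndhost.mydomain.com    # the additional host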
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] How to manually restore backup

2016-08-29 Thread Hanson

Hi Guys,

Just wondering: what is the proper way to restore a backup to a hosted engine?

I've tried doing the deploy, then cleanup, and backup --mode=restore,
but then engine-setup needs internet access (which it doesn't have).


Is there a way to restore the backup over the current data of a 
currently deployed new host?


i.e. something like a --mode=restore --option=force?

or is there another way to restore?


Thanks,

Hanson

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.6 : Hosted engine High Availability

2016-08-29 Thread Simone Tiraboschi
On Mon, Aug 29, 2016 at 4:08 PM, Alexis HAUSER <
alexis.hau...@telecom-bretagne.eu> wrote:

> Thank you for your explanations, this is very clear now :)
>
>
>
> Actually I was confused because "this host" is used in several different
> contexts, if I am right :
>
> 1 - For the engine (which is not a host, but a guest) :
> "Enter the name which will be used to identify this host inside the
> Administrator Portal [hosted_engine_2]"
>
> 2 - For the Host
> It asks the same things for the FQDN, but not for the engine this time,
> for the real "host"
>
> Please confirm me this, so I will know if I have to open a bug for this.
>
>
>
No, in both cases it's referring to the host you are going to add to
your engine (the host where you are running hosted-engine --deploy): the
first one is a label to easily identify your host, the second one is the
address to reach it.


>
> Now my error is the following :
>
> "[ ERROR ] Failed to execute stage 'Closing up': Specified cluster does
> not exist: Default"
>
> I think it assume I didn't change the name of the default cluster after
> deploying the first host. I will try to workaround with this by renaming
> the datacenter
> I will check if a bug if open on the bugzilla about this and if not I'll
> open one.
>
>
>
>
>
> - Mail original -
> De: "Simone Tiraboschi" 
> À: "Alexis HAUSER" 
> Cc: "users" 
> Envoyé: Jeudi 25 Août 2016 16:56:17
> Objet: Re: [ovirt-users] 3.6 : Hosted engine High Availability
>
> On Thu, Aug 25, 2016 at 4:26 PM, Alexis HAUSER
>  wrote:
> >
> >> Can you please share your hosted-engine-setup logs?
> >
> > Yes of course, here they are :)
>
> OK, the issue is here:
> 2016-08-25 12:49:04 DEBUG otopi.plugins.otopi.dialog.human
> human.queryString:156 query OVESETUP_NETWORK_FQDN_HOST_HOSTNAME
> 2016-08-25 12:49:04 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND Please provide the
> address of this host.
> 2016-08-25 12:49:04 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND Note: The engine VM
> and all the other hosts should be able to correctly resolve it.
> 2016-08-25 12:49:04 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND Host address:
> [localhost.localdomain]:
> 2016-08-25 12:49:37 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:RECEIVEvm-rhemgr.mydomain.com
> 2016-08-25 12:49:37 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.network.bridge
> hostname.test_hostname:411 test_hostname exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/ovirt_setup_lib/hostname.py",
> line 407, in test_hostname
> not_local_text,
>   File "/usr/lib/python2.7/site-packages/ovirt_setup_lib/hostname.py",
> line 252, in _validateFQDNresolvability
> fqdn=fqdn,
> RuntimeError: vm-rhemgr.mydomain.com did not resolve into an IP address
> 2016-08-25 12:49:37 ERROR
> otopi.plugins.ovirt_hosted_engine_setup.network.bridge
> dialog.queryEnvKey:115 Host name is not valid: vm-rhemgr.mydomain.com
> did not resolve into an IP address
>
>
>
> 'Please provide the address of THIS host.' means that you have to
> enter/validate the address of the host you are going to add (the host
> where you are running hosted-engine --deploy command).
>
> Let's try to recap:
> the fqdn of your engine VM is 'vm-rhemgr.mydomain.com',
> the fqdn of your host is currently 'localhost.localdomain' but it's
> not acceptable (try to run 'ssh localhost.localdomain' on the engine
> VM and see where you are getting...)
>
> So you have just to configure a valid fqdn on your additional host
> (something like 'my2ndhost.mydomain.com') and confirm it when asked by
> that question.
>
> Normally we suggest relying on a properly configured DNS; you can just
> get by entering values in '/etc/hosts', but it's up to you to properly
> maintain it:
> - the engine VM should be able to resolve the address of all the hosts
> to contact them: this is not true in your env, with
> 'localhost.localdomain' your engine VM will not reach your host...
> - each host should be able to resolve the address of all the other
> hosts and also the address of the engine VM: this is not true in your
> env as I read 'RuntimeError: vm-rhemgr.mydomain.com did not resolve
> into an IP address'
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to manually restore backup

2016-08-29 Thread Yedidyah Bar David
On Mon, Aug 29, 2016 at 5:08 PM, Hanson  wrote:
> Hi Guys,
>
> Just wondering: what is the proper way to restore a backup to a hosted engine?
>
> I've tried doing the deploy, then cleanup, and backup --mode=restore, but
> then engine-setup needs internet access (which it doesn't have).

You can try 'engine-setup --offline'.

>
> Is there a way to restore the backup over the current data of a currently
> deployed new host?

No, but you can try running 'engine-cleanup'.

Also, engine-backup only checks for the databases, not files. So you can try:

service postgresql stop
rm -rf /var/lib/pgsql/data
then restore again.
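Putting those steps together as one rough sequence (the engine-backup and
engine-setup options are the ones mentioned in this thread; stopping the
engine first is an extra assumption - verify with engine-backup --help):

    service ovirt-engine stop      # assumption: stop the engine before touching the DB
    service postgresql stop
    rm -rf /var/lib/pgsql/data
    engine-backup --mode=restore --file=backup.tar.gz --log=restore.log
    engine-setup --offline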

Best,

>
> i.e. something like a --mode=restore --option=force?
>
> or is there another way to restore?
>
>
> Thanks,
>
> Hanson
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [ANN] oVirt 4.0.3 Final Release Candidate is now available

2016-08-29 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of oVirt 4.0.3
for testing, as of August 29th, 2016.

This release is available now for:
* Fedora 23 (tech preview)
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 23 (tech preview)
* oVirt Next Generation Node 4.0

Please take a look at our community page[1]
to know how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].

This update is the third in a series of stabilization updates to the 4.0
series.
4.0.3 brings 5 enhancements and 45 bugfixes, including 17 high
or urgent severity fixes, on top of the oVirt 4.0 series.
See the release notes [3] for installation / upgrade instructions and a
list of new features and bugs fixed.

Notes:
* A new oVirt Live ISO is available. [4]
* A new oVirt Next Generation Node will be available soon [4].
* A new oVirt Engine Appliance is available for Red Hat Enterprise Linux
and CentOS Linux (or similar)
* Mirrors[5] might need up to one day to synchronize.

Additional Resources:
* Read more about the oVirt 4.0.3 release highlights:
http://www.ovirt.org/release/4.0.3/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.0.3/
[4] http://resources.ovirt.org/pub/ovirt-4.0/iso/
[5] http://www.ovirt.org/Repository_mirrors#Current_mirrors

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.6 : Hosted engine High Availability

2016-08-29 Thread Alexis HAUSER
>No, in both cases it's referring to the host you are going to add to
>your engine (the host where you are running hosted-engine --deploy): the
>first one is a label to easily identify your host, the second one is the
>address to reach it.


Thanks, then it means only the default label is wrong, right? It should be
[host_2] (referring to the host itself) instead of [hosted_engine_2]
(referring to the engine itself), no?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.6 : Hosted engine High Availability

2016-08-29 Thread Simone Tiraboschi
On Mon, Aug 29, 2016 at 4:55 PM, Alexis HAUSER <
alexis.hau...@telecom-bretagne.eu> wrote:

> >No, in both cases it's referring to the host you are going to add to
> >your engine (the host where you are running hosted-engine --deploy): the
> >first one is a label to easily identify your host, the second one is the
> >address to reach it.
>
>
> Thanks, then it means only the default label is wrong, right? It should
> be [host_2] (referring to the host itself) instead of [hosted_engine_2]
> (referring to the engine itself), no?
>

We are proposing hosted_engine_1 for the first host involved in
hosted-engine and hosted_engine_n for additional hosts.
A lot of users got confused by this, so we are going to remove it and simply
use the host address as the default label; the user will always be able
to rename it from the engine.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Import of exported VMware VM fails

2016-08-29 Thread Cam Mac
Hi,

I've used ovftool to create an .ova of a VMware guest (in this case W2012),
and then converted it with virt-v2v, outputting to the oVirt export
domain (virt-v2v -i ova /space/w2012-test.ova -o ovirt -os
ovirt-engine:/mnt/export-vm -of qcow2). This appears to work, though it
reports a warning:

[   0.0] Opening the source -i ova /space/w2012-test.ova
[   5.3] Creating an overlay to protect the source from being modified
[   5.9] Initializing the target -o rhev -os ovirt-engine:/mnt/export-vm
[   6.1] Opening the overlay
[  11.5] Inspecting the overlay
[  12.6] Checking for sufficient free disk space in the guest
[  12.6] Estimating space required on target for each disk
[  12.6] Converting Windows Server 2012 Standard Evaluation to run on KVM
virt-v2v: warning: Neither rhev-apt.exe nor vmdp.exe can be found.  Unable
to install one of them.
virt-v2v: warning: there is no QXL driver for this version of Windows (6.2
x86_64).  virt-v2v looks for this driver in /usr/share/virtio-win

The guest will be configured to use a basic VGA display driver.
virt-v2v: This guest has virtio drivers installed.
[  13.5] Mapping filesystem data to avoid copying unused and blank areas
[  16.6] Closing the overlay
[  16.6] Checking if the guest needs BIOS or UEFI to boot
[  16.6] Assigning disks to buses
[  16.6] Copying disk 1/1 to
/space/scratch/v2v.F5dkB2/ff68b458-3fe9-4ecf-95b7-cfdcc42dd291/images/b8a180eb-e075-44bf-a6f0-263e2792f5d7/a49bed7d-9ac3-48f1-8b00-2abb1e4c183f
(qcow2)
(100.00/100%)
[ 200.3] Creating output metadata
[ 200.3] Finishing off
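As an aside on the two warnings above, a sketch of providing the files
virt-v2v looks for (assuming a virtio-win package is available in a
configured repo; rhev-apt.exe comes from the oVirt/RHEV guest tools, not
from virtio-win):

    # populates /usr/share/virtio-win, where virt-v2v looks for Windows drivers
    yum install virtio-win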

I can then see the exported VM available for import in the oVirt GUI. When
I import it, however, it just says it fails to import, and looking at
the engine log I can't see specifically what is wrong:

2016-08-29 15:43:06,620 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
(default task-372) [4cdcbb5c] FINISH, DoesImageExistVDSCommand, return:
true, log id: 188b0975
2016-08-29 15:43:06,715 WARN
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand] (default task-372)
[] VM '980e3d5d-934a-4426-850d-23ba5df77b01' doesn't have active snapshot
in export domain
2016-08-29 15:43:06,788 INFO
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
(org.ovirt.thread.pool-8-thread-17) [4cdcbb5c] Running command:
ImportVmCommand internal: false. Entities affected :  ID:
9d1fe12c-5c5f-49ad-a875-a735acba2530 Type: StorageAction group
IMPORT_EXPORT_VM with role type ADMIN,  ID:
ff68b458-3fe9-4ecf-95b7-cfdcc42dd291 Type: StorageAction group
IMPORT_EXPORT_VM with role type ADMIN
2016-08-29 15:43:06,797 INFO
[org.ovirt.engine.core.utils.transaction.TransactionSupport]
(org.ovirt.thread.pool-8-thread-17) [4cdcbb5c] transaction rolled back
2016-08-29 15:43:06,798 ERROR
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
(org.ovirt.thread.pool-8-thread-17) [4cdcbb5c] Command
'org.ovirt.engine.core.bll.exportimport.ImportVmCommand' failed: null
2016-08-29 15:43:06,798 ERROR
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
(org.ovirt.thread.pool-8-thread-17) [4cdcbb5c] Exception:
java.lang.reflect.UndeclaredThrowableException


Any ideas?

Thanks,

Cam
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] extra chatty /var/log/vdsm.log

2016-08-29 Thread Pat Riehecky

Every few seconds I'm getting the warnings shown below [1].

Any ideas how to make VDSM happy so it stops complaining?

vdsm-4.17.26-0.el7

Pat

[1]
Thread-4794::WARNING::2016-08-29 
10:01:03,340::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/rhev/data-center/mnt/blockSD/81f19871-4d91-4698-a97d-36452bfae281/images 
already exists
Thread-4794::WARNING::2016-08-29 
10:01:03,340::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/rhev/data-center/mnt/blockSD/81f19871-4d91-4698-a97d-36452bfae281/dom_md 
already exists
Thread-4797::WARNING::2016-08-29 
10:01:03,652::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/c231f06b-0d4e-4d91-be55-2de903351fd3 
already exists
Thread-4799::WARNING::2016-08-29 
10:01:03,928::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/36a09f1d-0923-4b3b-87aa-4788ca64064e 
already exists
Thread-4801::WARNING::2016-08-29 
10:01:04,074::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/d2696090-e25c-4195-8c06-dafd80cf0720 
already exists
Thread-4803::WARNING::2016-08-29 
10:01:04,221::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/22a9849b-c551-4783-ad0c-530464df47f3 
already exists
Thread-4805::WARNING::2016-08-29 
10:01:04,369::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/2c8c56f2-1711-4867-8ed0-3c502bb635ff 
already exists
Thread-4807::WARNING::2016-08-29 
10:01:04,515::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/f9e59ec5-6903-4b2d-8164-4fce3d901bdd 
already exists
Thread-4825::WARNING::2016-08-29 
10:01:05,511::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/2c8c56f2-1711-4867-8ed0-3c502bb635ff 
already exists
Thread-4826::WARNING::2016-08-29 
10:01:05,587::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/22a9849b-c551-4783-ad0c-530464df47f3 
already exists
Thread-4846::WARNING::2016-08-29 
10:01:11,675::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/rhev/data-center/mnt/blockSD/81f19871-4d91-4698-a97d-36452bfae281/images 
already exists
Thread-4846::WARNING::2016-08-29 
10:01:11,676::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/rhev/data-center/mnt/blockSD/81f19871-4d91-4698-a97d-36452bfae281/dom_md 
already exists
Thread-4849::WARNING::2016-08-29 
10:01:11,979::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/c231f06b-0d4e-4d91-be55-2de903351fd3 
already exists
Thread-4851::WARNING::2016-08-29 
10:01:12,263::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/36a09f1d-0923-4b3b-87aa-4788ca64064e 
already exists
Thread-4853::WARNING::2016-08-29 
10:01:12,410::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/d2696090-e25c-4195-8c06-dafd80cf0720 
already exists
Thread-4855::WARNING::2016-08-29 
10:01:12,558::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/22a9849b-c551-4783-ad0c-530464df47f3 
already exists
Thread-4857::WARNING::2016-08-29 
10:01:12,705::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/2c8c56f2-1711-4867-8ed0-3c502bb635ff 
already exists
Thread-4859::WARNING::2016-08-29 
10:01:12,852::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/f9e59ec5-6903-4b2d-8164-4fce3d901bdd 
already exists
Thread-4877::WARNING::2016-08-29 
10:01:13,810::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/2c8c56f2-1711-4867-8ed0-3c502bb635ff 
already exists
Thread-4878::WARNING::2016-08-29 
10:01:13,886::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/22a9849b-c551-4783-ad0c-530464df47f3 
already exists
Thread-4900::WARNING::2016-08-29 
10:01:19,891::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/rhev/data-center/mnt/blockSD/81f19871-4d91-4698-a97d-36452bfae281/images 
already exists
Thread-4900::WARNING::2016-08-29 
10:01:19,891::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/rhev/data-center/mnt/blockSD/81f19871-4d91-4698-a97d-36452bfae281/dom_md 
already exists
Thread-4903::WARNING::2016-08-29 
10:01:20,195::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/c231f06b-0d4e-4d91-be55-2de903351fd3 
already exists
Thread-4905::WARNING::2016-08-29 
10:01:20,473::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/var/run/vdsm/storage/81f19871-4

Re: [ovirt-users] extra chatty /var/log/vdsm.log

2016-08-29 Thread Nir Soffer
On Mon, Aug 29, 2016 at 6:03 PM, Pat Riehecky  wrote:

> Every few seconds I'm getting[1]
>
> Any ideas how to make VDSM happy so it stops complaining?
>
> vdsm-4.17.26-0.el7
>
> Pat
>
> [1]
> Thread-4794::WARNING::2016-08-29 
> 10:01:03,340::fileUtils::152::Storage.fileUtils::(createdir)
> Dir /rhev/data-center/mnt/blockSD/81f19871-4d91-4698-a97d-36452bfae281/images
> already exists
> Thread-4794::WARNING::2016-08-29 
> 10:01:03,340::fileUtils::152::Storage.fileUtils::(createdir)
> Dir /rhev/data-center/mnt/blockSD/81f19871-4d91-4698-a97d-36452bfae281/dom_md
> already exists
> Thread-4797::WARNING::2016-08-29 
> 10:01:03,652::fileUtils::152::Storage.fileUtils::(createdir)
> Dir /var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/
> c231f06b-0d4e-4d91-be55-2de903351fd3 already exists
> Thread-4799::WARNING::2016-08-29 
> 10:01:03,928::fileUtils::152::Storage.fileUtils::(createdir)
> Dir /var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/
> 36a09f1d-0923-4b3b-87aa-4788ca64064e already exists
> Thread-4801::WARNING::2016-08-29 
> 10:01:04,074::fileUtils::152::Storage.fileUtils::(createdir)
> Dir /var/run/vdsm/storage/81f19871-4d91-4698-a97d-36452bfae281/
> d2696090-e25c-4195-8c06-dafd80cf0720 already exists
>

This was fixed in 4.18.0:
https://bugzilla.redhat.com/1129587
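
For reference, a minimal sketch of the upgrade implied here, assuming the
4.18.x packages are available from your configured oVirt repositories:

    # upgrade vdsm on the affected host, then restart it to pick up the
    # new code (the package scripts may already do the restart for you)
    yum update vdsm
    systemctl restart vdsmd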

These warnings should show up only when you start a VM - are you starting
and stopping VMs every few seconds?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine on gluster problem

2016-08-29 Thread Simone Tiraboschi
On Fri, Aug 26, 2016 at 8:54 AM, Sandro Bonazzola 
wrote:

>
>
> On Tue, Aug 23, 2016 at 8:44 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>>
>> On Fri, Apr 15, 2016 at 8:00 AM, Luiz Claudio Prazeres Goncalves <
>> luiz...@gmail.com> wrote:
>>
>>> I'm not planning to move to oVirt 4 until it gets stable, so it would be
>>> great to backport this to 3.6 or, ideally, develop it in the next release
>>> of the 3.6 branch. Considering the urgency (it's a single point of
>>> failure) versus the complexity, it wouldn't be hard to make the proposed
>>> fix.
>>>
>>>
>> Bumping an old email, sorry. Looks like
>> https://bugzilla.redhat.com/show_bug.cgi?id=1298693 was finished against
>> 3.6.7 according to that RFE.
>>
>> So does that mean that if I add the appropriate lines to my
>> /etc/ovirt-hosted-engine/hosted-engine.conf, then the next time I restart
>> the engine and agent/brokers to mount that storage point it will use the
>> backupvol-server features?
>>
>> If so are appropriate settings outlined in docs somewhere?
>>
>> Running ovirt 3.6.7 and gluster 3.8.2 on centos 7 nodes.
>>
>
> Adding Simone
>


First step: you have to edit
/etc/ovirt-hosted-engine/hosted-engine.conf on all your hosted-engine
hosts to ensure that the storage field always points to the same entry
point (host01 for instance).
Then on each host you can add something like:
mnt_options=backupvolfile-server=host02.yourdomain.com:host03.yourdomain.com,fetch-attempts=2,log-level=WARNING,log-file=/var/log/engine_domain.log

Then check the representation of your storage connection in the
storage_server_connections table of the engine DB and make sure that the
connection refers to the entry point you used in hosted-engine.conf on all
your hosts; lastly, you have to set the value of mount_options here as well.
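
For reference, a minimal sketch of the DB-side check, assuming a local
default 'engine' database and the 3.6-era schema (column names may differ
on other versions):

    # run on the engine machine; 'connection' should match the entry point
    # used in hosted-engine.conf, and mount_options the backup string above
    su - postgres -c "psql engine -c 'SELECT id, connection, mount_options FROM storage_server_connections;'"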

Please also tune the value of network.ping-timeout for your glusterFS volume
to avoid this:
 https://bugzilla.redhat.com/show_bug.cgi?id=1319657#c17
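
For example, a hedged sketch (volume name and value are illustrative; see
the BZ comment above for the recommended setting):

    gluster volume set engine network.ping-timeout 10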

You can find other information here:
https://www.ovirt.org/develop/release-management/features/engine/self-hosted-engine-gluster-support/



>
>
>>
>>
>>> I'm using a production environment today on top of gluster replica 3 and
>>> this is the only SPOF I have.
>>>
>>> Thanks
>>> Luiz
>>>
>>> Em sex, 15 de abr de 2016 03:05, Sandro Bonazzola 
>>> escreveu:
>>>
 On Thu, Apr 14, 2016 at 7:35 PM, Nir Soffer  wrote:

> On Wed, Apr 13, 2016 at 4:34 PM, Luiz Claudio Prazeres Goncalves
>  wrote:
> > Nir, here is the problem:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1298693
> >
> > When you do a hosted-engine --deploy and pick "glusterfs" you don't have
> > a way to define the mount options, and therefore the use of
> > "backupvol-server"; however, when you create a storage domain from the UI
> > you can, as in the attached screen shot.
> >
> >
> > In the hosted-engine --deploy, I would expect a flow which includes not
> > only the "gluster" entry point, but also the gluster mount options, which
> > are missing today. This option would be optional, but would remove the
> > single point of failure described in Bug 1298693.
> >
> > for example:
> >
> > Existing entry point on the "hosted-engine --deploy" flow
> > gluster1.xyz.com:/engine
>
> I agree, this feature must be supported.
>

 It will, and it's currently targeted to 4.0.



>
> > Missing option on the "hosted-engine --deploy" flow:
> > backupvolfile-server=gluster2.xyz.com,fetch-attempts=3,log-level=WARNING,log-file=/var/log/glusterfs/gluster_engine_domain.log
> >
> > Sandro, it seems to me a simple solution which can be easily fixed.
> >
> > What do you think?
> >
> > Regards
> > -Luiz
> >
> >
> >
> > 2016-04-13 4:15 GMT-03:00 Sandro Bonazzola :
> >>
> >>
> >>
> >> On Tue, Apr 12, 2016 at 6:47 PM, Nir Soffer 
> wrote:
> >>>
> >>> On Tue, Apr 12, 2016 at 3:05 PM, Luiz Claudio Prazeres Goncalves
> >>>  wrote:
> >>> > Hi Sandro, I've been using gluster with 3 external hosts for a while
> >>> > and things are working pretty well; however, this single point of
> >>> > failure looks like a simple feature to implement, but critical to
> >>> > anyone who wants to use gluster in production. This is not
> >>> > hyperconvergence, which has other issues/implications. So, why not
> >>> > have this feature out on the 3.6 branch? It looks like just letting
> >>> > vdsm use the 'backupvol-server' option when mounting the engine
> >>> > domain and making the property tests.
> >>>
> >>> Can you explain what is the problem, and what is the suggested
> >>> solution?
> >>>
> >>> Engine and vdsm already support the backupvol-server option - you can
> >>> define this option in the storage domain options wh

Re: [ovirt-users] extra chatty /var/log/vdsm.log

2016-08-29 Thread Pat Riehecky



On 08/29/2016 10:46 AM, Nir Soffer wrote:

> are you starting
> and stopping vms every few seconds?

Not to my knowledge. I'll upgrade to 4.18.

Thanks!

Pat
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine on gluster problem

2016-08-29 Thread David Gossage
On Mon, Aug 29, 2016 at 10:47 AM, Simone Tiraboschi 
wrote:

>
>
> On Fri, Aug 26, 2016 at 8:54 AM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Tue, Aug 23, 2016 at 8:44 PM, David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>>
>>> On Fri, Apr 15, 2016 at 8:00 AM, Luiz Claudio Prazeres Goncalves <
>>> luiz...@gmail.com> wrote:
>>>
 I'm not planning to move to oVirt 4 until it gets stable, so it would be
 great to backport this to 3.6 or, ideally, develop it in the next release
 of the 3.6 branch. Considering the urgency (it's a single point of failure)
 versus the complexity, it wouldn't be hard to make the proposed fix.


>>> Bumping an old email, sorry. Looks like
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1298693 was finished against
>>> 3.6.7 according to that RFE.
>>>
>>> So does that mean that if I add the appropriate lines to my
>>> /etc/ovirt-hosted-engine/hosted-engine.conf, then the next time I restart
>>> the engine and agent/brokers to mount that storage point it will use the
>>> backupvol-server features?
>>>
>>> If so are appropriate settings outlined in docs somewhere?
>>>
>>> Running ovirt 3.6.7 and gluster 3.8.2 on centos 7 nodes.
>>>
>>
>> Adding Simone
>>
>
>
> First step: you have to edit
> /etc/ovirt-hosted-engine/hosted-engine.conf on all your hosted-engine
> hosts to ensure that the storage field always points to the same entry
> point (host01 for instance).
> Then on each host you can add something like:
> mnt_options=backupvolfile-server=host02.yourdomain.com:host03.yourdomain.com,fetch-attempts=2,log-level=WARNING,log-file=/var/log/engine_domain.log
>
> Then check the representation of your storage connection in the
> storage_server_connections table of the engine DB and make sure that the
> connection refers to the entry point you used in hosted-engine.conf on all
> your hosts; lastly, you have to set the value of mount_options here as well.
>
> Please also tune the value of network.ping-timeout for your glusterFS
> volume to avoid this:
>  https://bugzilla.redhat.com/show_bug.cgi?id=1319657#c17
>
> You can find other information here:
> https://www.ovirt.org/develop/release-management/features/
> engine/self-hosted-engine-gluster-support/
>
>
Thanks, I'll review all that information.

>
>
>>
>>
>>>
>>>
 I'm using a production environment today on top of gluster replica 3 and
 this is the only SPOF I have.

 Thanks
 Luiz

 Em sex, 15 de abr de 2016 03:05, Sandro Bonazzola 
 escreveu:

> On Thu, Apr 14, 2016 at 7:35 PM, Nir Soffer 
> wrote:
>
>> On Wed, Apr 13, 2016 at 4:34 PM, Luiz Claudio Prazeres Goncalves
>>  wrote:
>> > Nir, here is the problem:
>> > https://bugzilla.redhat.com/show_bug.cgi?id=1298693
>> >
>> > When you do a hosted-engine --deploy and pick "glusterfs" you don't
>> > have a way to define the mount options, and therefore the use of
>> > "backupvol-server"; however, when you create a storage domain from the
>> > UI you can, as in the attached screen shot.
>> >
>> >
>> > In the hosted-engine --deploy, I would expect a flow which includes not
>> > only the "gluster" entry point, but also the gluster mount options,
>> > which are missing today. This option would be optional, but would
>> > remove the single point of failure described in Bug 1298693.
>> >
>> > for example:
>> >
>> > Existing entry point on the "hosted-engine --deploy" flow
>> > gluster1.xyz.com:/engine
>>
>> I agree, this feature must be supported.
>>
>
> It will, and it's currently targeted to 4.0.
>
>
>
>>
>> > Missing option on the "hosted-engine --deploy" flow:
>> > backupvolfile-server=gluster2.xyz.com,fetch-attempts=3,log-level=WARNING,log-file=/var/log/glusterfs/gluster_engine_domain.log
>> >
>> > Sandro, it seems to me a simple solution which can be easily fixed.
>> >
>> > What do you think?
>> >
>> > Regards
>> > -Luiz
>> >
>> >
>> >
>> > 2016-04-13 4:15 GMT-03:00 Sandro Bonazzola :
>> >>
>> >>
>> >>
>> >> On Tue, Apr 12, 2016 at 6:47 PM, Nir Soffer 
>> wrote:
>> >>>
>> >>> On Tue, Apr 12, 2016 at 3:05 PM, Luiz Claudio Prazeres Goncalves
>> >>>  wrote:
>> >>> > Hi Sandro, I've been using gluster with 3 external hosts for a
>> >>> > while and things are working pretty well; however, this single
>> >>> > point of failure looks like a simple feature to implement, but
>> >>> > critical to anyone who wants to use gluster in production. This is
>> >>> > not hyperconvergence, which has other issues/implications. So, why
>> >>> > not have this feature out on the 3.6 branch? It looks like just
>> >>> > letting vdsm use the 'backupvol-server' option when mounting the
>> >>> > engine do

Re: [ovirt-users] how to debug no audio in guest?

2016-08-29 Thread Jakub Niedermertl
Hi,

It is not expected behavior; I have filed a bug for it
(https://bugzilla.redhat.com/show_bug.cgi?id=1371243). Please feel free to add
any details.

Jakub

- Original Message -
> From: "Gianluca Cecchi" 
> To: "Jakub Niedermertl" 
> Cc: "users" , "Michal Skrivanek" 
> 
> Sent: Friday, August 26, 2016 1:58:03 AM
> Subject: Re: [ovirt-users] how to debug no audio in guest?
> 
> On Fri, Aug 26, 2016 at 1:37 AM, Gianluca Cecchi 
> wrote:
> 
> > On Thu, Aug 25, 2016 at 5:17 PM, Jakub Niedermertl 
> > wrote:
> >
> >> Hi Gianluca,
> >>
> >> QEMU_AUDIO_DRV=none is most probably the problem. Libvirt is supposed to
> >> set "QEMU_AUDIO_DRV" to "spice" for VMs with graphics. Please make sure
> >> that the "Optimize for" attribute is set to "Desktop" (in Edit VM dialog)
> >> and try to shutdown and start the VM again. If the QEMU process will still
> >> have QEMU_AUDIO_DRV=none set, you can try to set the variable in
> >> /etc/sysconfig/libvirtd by adding line "QEMU_AUDIO_DRV=spice" and restart
> >> the libvirtd service.
> >>
> >> Jakub
> >>
> >
> >
> [snip]
> 
> 
> >
> > Apparently all is the same in qemu-kvm command lines, comparing c6 with c7
> > one, but I don't understand who drives instead the QEMU_AUDIO_DRV=XXX in
> > log file that is different between the two guests (none vs spice) and btw
> > also in f24 it is spice:
> >
> 
> 
> [snip]
> 
> Understood!
> Actually, the CentOS 6 guest's graphics protocol was configured as "SPICE +
> VNC" and this was the reason for "QEMU_AUDIO_DRV=none".
> Changing it to "SPICE", I now have "QEMU_AUDIO_DRV=spice" in the guest
> logfile and the test speakers also work inside the guest.
> 
> Is this expected? I thought that the "+" would have given an aggregate of
> functionalities, not a limitation... because initially I wanted to test both
> spice and vnc access...
> If this is the case, a note or tool-tip could probably be useful for the
> end user, because it is not so obvious to correlate video with audio...
> 
> Gianluca
> 
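
For reference, a minimal sketch of the libvirtd workaround Jakub describes
in the quoted message above (file and service names as quoted; verify them
against your distribution before use):

    # force the spice audio driver for qemu processes spawned by libvirtd
    echo 'QEMU_AUDIO_DRV=spice' >> /etc/sysconfig/libvirtd
    systemctl restart libvirtd
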
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] unable to update to 4.0

2016-08-29 Thread Jeb Baxley
Hi All,

I have oVirt 3.6.7 installed and running. However, when attempting to
update to 4.0, I receive this error when running engine-setup:

Failed to execute stage 'Setup validation': Trying to upgrade from
unsupported versions: 3.5

I find this pretty strange as I haven't run 3.5 in a long time and can
confirm that I'm running 3.6.7.





Jeb Baxley
about.me/jeb.baxley

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] vdsm memory consumption (ovirt 4.0)

2016-08-29 Thread Federico Alberto Sayd
I have issues with my oVirt setup related to memory consumption. After
upgrading to 4.0 I noticed considerable growth in vdsm memory consumption,
and I suspect a memory leak.

When I boot the system and activate the host, memory consumption
is about 600 MB. After 5 days running, with the host in maintenance mode,
it is about 1.4 GB.

I need to put my hosts in maintenance and reboot them to free memory.

Can anyone help me debug this problem?

OS Version:
RHEL - 7 - 2.1511.el7.centos.2.10
Kernel Version:
3.10.0 - 327.22.2.el7.x86_64
KVM Version:
2.3.0 - 31.el7.16.1
LIBVIRT Version:
libvirt-1.2.17-13.el7_2.5
VDSM Version:
vdsm-4.18.11-1.el7.centos

Thank you
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt-engine on Fedora 24

2016-08-29 Thread Yves Dorfsman

Can you run oVirt on Fedora 24?

I followed the instructions to install both oVirt 3.6 and 4.0, both with dnf
and yum-deprecated, but when I try to install I get:

No package ovirt-engine available.



-- 
http://yves.zioup.com
gpg: 4096R/32B0F416

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM Live Migration issues

2016-08-29 Thread Anantha Raghava

Hi,

Unfortunately not. The engine log may still be there, if it has not been
rotated already, but the VDSM logs are not there at all.


I have two more servers of the same configuration. Let me try to simulate
the same situation.


--

Thanks & Regards,


Anantha Raghava

eXza Technology Consulting & Services


Do not print this e-mail unless required. Save Paper & trees.
On Monday 29 August 2016 12:54 PM, Yaniv Kaul wrote:

Did you happen to save the logs? They might reveal the issue.
TIA,
Y.

On Sat, Aug 27, 2016 at 1:25 PM, Anantha Raghava
<rag...@exzatechconsulting.com> wrote:


Hi,

The migration is working fine with Cluster Switch set to "Legacy"
Mode.

But I have run into a different problem. Yesterday, for some
reason, I had to restart the hosts, but the hosts never came
online; they were constantly in a Non Responsive state. I manually
fenced them, and the Vds command timed out.

I found that a NIC was down for some reason. I brought it up and
fenced the hosts again. Again the Vds command timed out. I
manually attempted to restart the VDSM service, but the host reported
there is no such service; starting the VDSMD service resulted in failure.

Finally, I had to forcefully remove the hosts, reinstall the base OS
(CentOS 7) on them, and re-add them to oVirt with the storage
domain intact. All hosts and storage domains went online and all VMs
were intact.

Post reinstallation, I set the Cluster Switch mode to "Legacy" and
tried migration and it worked fine.

I fail to understand what went wrong while rebooting the hosts.

-- 


Thanks & Regards,


Anantha Raghava

eXza Technology Consulting & Services


Do not print this e-mail unless required. Save Paper & trees.
On Friday 26 August 2016 10:12 PM, Anantha Raghava wrote:


Hi,

Then, I will switch the cluster switch type back to Legacy and
then try to migrate. Will post the results here.

Any other suggestions?

-- 


Thanks & Regards,


Anantha Raghava

eXza Technology Consulting & Services



Do not print this e-mail unless required. Save Paper & trees.

On Friday 26 August 2016 09:58 PM, Yaniv Dary wrote:

OVS is experimental and there is a open item on making migration
work:
https://bugzilla.redhat.com/show_bug.cgi?id=1362495


Yaniv Dary Technical Product Manager Red Hat Israel Ltd. 34
Jerusalem Road Building A, 4th floor Ra'anana, Israel 4350109
Tel : +972 (9) 7692306 8272306
Email: yd...@redhat.com IRC : ydary

On Fri, Aug 26, 2016 at 11:25 AM, Anantha Raghava
<rag...@exzatechconsulting.com> wrote:

Hi,

Not at all.

Just a newly created virtual machine. No user is accessing the
VM yet; only the Ubuntu Trusty Tahr (14.04) OS with the oVirt
Guest Agent is installed. The VM and both hosts are in the same
network as well.

The applied migration policy is "Minimum Downtime". The bandwidth
limit is set to Auto and the cluster switch is set to "OVS". I can
send the screen shot tomorrow.

Host hardware configuration: Intel Xeon CPUs, 2 sockets with 16
cores each. Installed memory is 256 GB on each
host. I have 4 x 10Gbps NICs, 2 given to VM traffic and 2
given to iSCSI traffic. oVirt management is on a separate
1 x 1Gbps NIC.

Yet, migration fails.

-- 


Thanks & Regards,


Anantha Raghava

eXza Technology Consulting & Services

Do not print this e-mail unless required. Save Paper & trees.
On Friday 26 August 2016 09:31 PM, Yaniv Dary wrote:

Is the VM very busy?
Did you apply the new cluster migration policies?

Yaniv Dary Technical Product Manager Red Hat Israel Ltd. 34
Jerusalem Road Building A, 4th floor Ra'anana, Israel 4350109
Tel : +972 (9) 7692306 8272306
Email: yd...@redhat.com IRC : ydary

On Fri, Aug 26, 2016 at 1:54 AM, Anantha Raghava
<rag...@exzatechconsulting.com> wrote:

Hi,

In our setup we have configured two hosts, both of same
CPU type, same amount of memory and Master storage
domain is created on iSCSI storage and is live.

I created a single VM with Ubuntu Trusty as OS. It
installed properly and when I attempted to migrate the
running VM, the migration failed.

Engine log, Host 1 log and Host 2 logs are attached for
your reference. Since logs are running into several
MBs, I have compressed them and attached here.

Can someone help us to solve this issue?


-- 


Thanks & Regards,

Anantha Raghava



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Re: [ovirt-users] unable to update to 4.0

2016-08-29 Thread Yedidyah Bar David
On Mon, Aug 29, 2016 at 9:12 PM, Jeb Baxley  wrote:

> Hi All,
>
> I have oVirt 3.6.7 installed and running. However, when attempting to
> update to 4.0, I receive this error when running engine-setup:
>
> Failed to execute stage 'Setup validation': Trying to upgrade from
> unsupported versions: 3.5
>
> I find this pretty strange as I haven't run 3.5 in a long time and can
> confirm that I'm running 3.6.7.
>

This message means that you have one or more clusters/DCs with 3.5
compatibility version. You can check the setup log for more details. Please
upgrade them to 3.6 and try again.
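
For reference, a minimal sketch of how to check this, assuming a local
default 'engine' database and the 3.6-era schema, where clusters live in
the vds_groups table (names may differ on other versions):

    # run on the engine machine
    su - postgres -c "psql engine -c 'SELECT name, compatibility_version FROM vds_groups;'"
    su - postgres -c "psql engine -c 'SELECT name, compatibility_version FROM storage_pool;'"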

Best,


>
>
>
>
>
> Jeb Baxley
> about.me/jeb.baxley
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem starting VMs

2016-08-29 Thread knarra

On 08/29/2016 07:13 PM, Petr Horacek wrote:

Hello,

could you please attach /var/log/vdsm/vdsm.log and
/var/log/vdsm/supervdsm.log here?

Regards,
Petr

Hi Petr,

   I am not able to send the vdsm and supervdsm logs through mail. I have
shared them with you using Dropbox; I hope you got the email.


Thanks
kasturi


2016-08-29 11:51 GMT+02:00 knarra :

Hi,

 I am unable to launch VMs on one of my hosts. The problem is that the VM is
stuck at "waiting for launch" and never comes up. I see the following messages
in /var/log/messages. Can someone help me resolve the issue?


Aug 29 12:16:20 rhsqa-grafton3 systemd-machined: New machine
qemu-20-appwinvm19.
Aug 29 12:16:20 rhsqa-grafton3 systemd: Started Virtual Machine
qemu-20-appwinvm19.
Aug 29 12:16:20 rhsqa-grafton3 systemd: Starting Virtual Machine
qemu-20-appwinvm19.
Aug 29 12:16:20 rhsqa-grafton3 kvm: 11 guests now active
Aug 29 12:16:21 rhsqa-grafton3 kernel: device vnet11 entered promiscuous
mode
Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered
forwarding state
Aug 29 12:16:21 rhsqa-grafton3 kernel: ovirtmgmt: port 13(vnet11) entered
forwarding state
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR:
COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet11
-j libvirt-J-vnet11' failed:
  Illegal target name 'libvirt-J-vnet11'.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR:
COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet11
-j libvirt-P-vnet11' failed
: Illegal target name 'libvirt-P-vnet11'.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR:
COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet11'
failed: Chain 'libvirt-J-vnet11
' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR:
COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet11'
failed: Chain 'libvirt-P-vnet11
' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR:
COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet11'
failed: Chain 'libvirt-J-vnet11
' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR:
COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet11'
failed: Chain 'libvirt-J-vnet11
' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR:
COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet11'
failed: Chain 'libvirt-P-vnet11
' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR:
COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet11'
failed: Chain 'libvirt-P-vnet11
' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR:
COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-mac' failed:
Chain 'J-vnet11-mac' doesn'
t exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR:
COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-mac' failed:
Chain 'J-vnet11-mac' doesn'
t exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR:
COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -F J-vnet11-arp-mac'
failed: Chain 'J-vnet11-arp-mac
' doesn't exist.
Aug 29 12:16:21 rhsqa-grafton3 firewalld: 2016-08-29 12:16:21 ERROR:
COMMAND_FAILED: '/sbin/ebtables --concurrent -t nat -X J-vnet11-arp-mac'
failed: Chain 'J-vnet11-arp-mac
' doesn't exist.

Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-engine on Fedora 24

2016-08-29 Thread Sandro Bonazzola
On Tue, Aug 30, 2016 at 2:14 AM, Yves Dorfsman  wrote:

>
> Can you run ovirt on Fedora 24?
>
> I followed the instructions to install both oVirt 3.6 and 4.0, both with
> dnf and yum-deprecated, but when I try to install I get:
>
> No package ovirt-engine available.
>

Hi, oVirt 3.6 is supported only on Fedora 22.
oVirt 4.0 supports Fedora 23 as a tech preview.
oVirt master (which will become 4.1) supports Fedora 24 as a tech preview.

Your help testing what will become oVirt 4.1 on Fedora 24 will be really
appreciated.
Instructions on how to install master nightlies are here:
http://www.ovirt.org/develop/dev-process/install-nightly-snapshot/




>
>
>
> --
> http://yves.zioup.com
> gpg: 4096R/32B0F416
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Vds time out occured

2016-08-29 Thread knarra

Hi,

I have installed the latest bits of oVirt and I see that the Events
tab in the UI is flooded with the following error messages for all the
hosts in the cluster. Can someone help me understand why these occur?


VDSM  command failed: Message timeout which can be caused by 
communication issues


VDSM command failed: Vds timeout occured

Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Import of exported VMware VM fails

2016-08-29 Thread Shahar Havivi
Check the permissions:
the whole directory hierarchy, the images, and the OVF need to be reachable
by the vdsm user; it's best to change the owner to vdsm:kvm (i.e. 36:36).
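
For example, a minimal sketch using the export domain path from the
virt-v2v command quoted below (adjust the path to your setup):

    # give the whole export-domain tree to vdsm:kvm (uid/gid 36)
    chown -R 36:36 /mnt/export-vm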

 Shahar.


On 29.08.16 16:05, Cam Mac wrote:
> Hi,
> 
> I've used ovftool to create an .ova of a VMware guest (in this case W2012),
> and then converted it with virt-v2v, outputting to the oVirt export
> domain (virt-v2v -i ova /space/w2012-test.ova -o ovirt -os
> ovirt-engine:/mnt/export-vm -of qcow2). This appears to work, though it
> reports warnings:
> 
> [   0.0] Opening the source -i ova /space/w2012-test.ova
> [   5.3] Creating an overlay to protect the source from being modified
> [   5.9] Initializing the target -o rhev -os ovirt-engine:/mnt/export-vm
> [   6.1] Opening the overlay
> [  11.5] Inspecting the overlay
> [  12.6] Checking for sufficient free disk space in the guest
> [  12.6] Estimating space required on target for each disk
> [  12.6] Converting Windows Server 2012 Standard Evaluation to run on KVM
> virt-v2v: warning: Neither rhev-apt.exe nor vmdp.exe can be found.  Unable
> to install one of them.
> virt-v2v: warning: there is no QXL driver for this version of Windows (6.2
> x86_64).  virt-v2v looks for this driver in /usr/share/virtio-win
> 
> The guest will be configured to use a basic VGA display driver.
> virt-v2v: This guest has virtio drivers installed.
> [  13.5] Mapping filesystem data to avoid copying unused and blank areas
> [  16.6] Closing the overlay
> [  16.6] Checking if the guest needs BIOS or UEFI to boot
> [  16.6] Assigning disks to buses
> [  16.6] Copying disk 1/1 to
> /space/scratch/v2v.F5dkB2/ff68b458-3fe9-4ecf-95b7-cfdcc42dd291/images/b8a180eb-e075-44bf-a6f0-263e2792f5d7/a49bed7d-9ac3-48f1-8b00-2abb1e4c183f
> (qcow2)
> (100.00/100%)
> [ 200.3] Creating output metadata
> [ 200.3] Finishing off
> 
> I can then see the exported VM available for import in the oVirt GUI. When
> I import it, however, it just fails, and looking at the engine log I can't
> see what is wrong specifically:
> 
> 2016-08-29 15:43:06,620 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
> (default task-372) [4cdcbb5c] FINISH, DoesImageExistVDSCommand, return:
> true, log id: 188b0975
> 2016-08-29 15:43:06,715 WARN
> [org.ovirt.engine.core.bll.exportimport.ImportVmCommand] (default task-372)
> [] VM '980e3d5d-934a-4426-850d-23ba5df77b01' doesn't have active snapshot
> in export domain
> 2016-08-29 15:43:06,788 INFO
> [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
> (org.ovirt.thread.pool-8-thread-17) [4cdcbb5c] Running command:
> ImportVmCommand internal: false. Entities affected :  ID:
> 9d1fe12c-5c5f-49ad-a875-a735acba2530 Type: StorageAction group
> IMPORT_EXPORT_VM with role type ADMIN,  ID:
> ff68b458-3fe9-4ecf-95b7-cfdcc42dd291 Type: StorageAction group
> IMPORT_EXPORT_VM with role type ADMIN
> 2016-08-29 15:43:06,797 INFO
> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> (org.ovirt.thread.pool-8-thread-17) [4cdcbb5c] transaction rolled back
> 2016-08-29 15:43:06,798 ERROR
> [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
> (org.ovirt.thread.pool-8-thread-17) [4cdcbb5c] Command
> 'org.ovirt.engine.core.bll.exportimport.ImportVmCommand' failed: null
> 2016-08-29 15:43:06,798 ERROR
> [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
> (org.ovirt.thread.pool-8-thread-17) [4cdcbb5c] Exception:
> java.lang.reflect.UndeclaredThrowableException
> 
> 
> Any ideas?
> 
> Thanks,
> 
> Cam

> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to import a qcow2 disk into ovirt

2016-08-29 Thread Amit Aviram
Hi,
Actually, if you have the latest oVirt 4, you can do that straight from the
UI: just go to the Disks tab and select "Upload" -> "Start".

Before doing that, make sure your browser trusts oVirt's CA by
downloading its CA certificate from:
https://{your_engine's_URI}/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA
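
For example, a hedged sketch with a hypothetical engine hostname; keep the
URL quoted so the shell does not interpret the '&':

    curl -o ovirt-ca.pem 'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'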


Let me know if that works for you.

On Mon, Aug 29, 2016 at 3:40 PM, lifuqiong  wrote:

> Hi,
>
>  How to import a qcow2 disk file into oVirt? I searched the Internet
> for a long time, but found no solution that works.
>
>
>
> Thank you
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Vds time out occured

2016-08-29 Thread knarra

On 08/30/2016 11:47 AM, knarra wrote:

Hi,

I have installed the latest bits of oVirt and I see that the Events
tab in the UI is flooded with the following error messages for all the
hosts in the cluster. Can someone help me understand why these occur?


VDSM  command failed: Message timeout which can be caused by 
communication issues


VDSM command failed: Vds timeout occured

Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


I am monitoring some VMs and I see there are warnings related to these
saying "Vm not responding", though my VMs are up and running fine.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm memory consumption (ovirt 4.0)

2016-08-29 Thread Nir Soffer
On Tue, Aug 30, 2016 at 1:30 AM, Federico Alberto Sayd 
wrote:

> I have issues with my ovirt setup related to memory consumption. After
> upgrading to 4.0 I noted a considerable grow in vdsm memory consumption.
> I suspect that the grow is related to a memory leak.
>

We need more details; see below...


>
> When I boot up the system and activate the host the memory consumption
> is about 600MB. After 5 days running and host in maintenance mode the
> memory consumption is about 1,4 GB.
>
> I need to put my hosts in maintenance and reboot to free memory.
>

You can restart vdsm (systemctl restart vdsmd) instead; running VMs
are not affected by this.


>
> Can anyone help me to debug this problem?
>

We had a memory leak in vdsm-4.18.5, fixed in vdsm-4.18.11. Since you
are running 4.18.11, there may be another leak.

Please enable health monitoring by creating
/etc/vdsm/vdsm.conf.d/50-health.conf

[devel]
health_monitor_enable = true

And restart vdsm.
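
For example, a minimal sketch of these steps on the host (the config body
is exactly the one above):

    # write the health-monitor drop-in config and restart vdsm
    printf '[devel]\nhealth_monitor_enable = true\n' > /etc/vdsm/vdsm.conf.d/50-health.conf
    systemctl restart vdsmd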

Please run with this setting for a couple of hours, maybe one day,
and then share the vdsm logs from that timeframe.

You may disable health monitoring by setting

[devel]
health_monitor_enable = false

Or by renaming or deleting the configuration file, for example renaming it to:

/etc/vdsm/vdsm.conf.d/50-health.conf.disabled

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users