Re: [ovirt-users] latest CentOS libvirt updates safe?

2016-06-25 Thread Yedidyah Bar David
On Sun, Jun 26, 2016 at 4:14 AM, Brett I. Holcomb  wrote:
>
>
> On 06/25/2016 08:10 PM, Nir Soffer wrote:
>>
>> On Sat, Jun 25, 2016 at 8:51 PM, Brett I. Holcomb 
>> wrote:
>>>
>>>
>>> On 06/25/2016 10:57 AM, Robert Story wrote:
>>>
>>> I have oVirt 3.5.x on CentOS 7 hosts. These hosts have updates which
>>> include libvirt:
>>>
>>>   libvirt-client   x86_64  1.2.17-13.el7_2.5  updates  4.3 M
>>>   libvirt-daemon   x86_64  1.2.17-13.el7_2.5  updates  585 k
>>>   libvirt-daemon-config-nwfilter   x86_64  1.2.17-13.el7_2.5  updates  122 k
>>>   libvirt-daemon-driver-interface  x86_64  1.2.17-13.el7_2.5  updates  162 k
>>>   libvirt-daemon-driver-network    x86_64  1.2.17-13.el7_2.5  updates  302 k
>>>   libvirt-daemon-driver-nodedev    x86_64  1.2.17-13.el7_2.5  updates  161 k
>>>   libvirt-daemon-driver-nwfilter   x86_64  1.2.17-13.el7_2.5  updates  185 k
>>>   libvirt-daemon-driver-qemu   x86_64  1.2.17-13.el7_2.5  updates  571 k
>>>   libvirt-daemon-driver-secret x86_64  1.2.17-13.el7_2.5  updates  155 k
>>>   libvirt-daemon-driver-storage    x86_64  1.2.17-13.el7_2.5  updates  328 k
>>>   libvirt-daemon-kvm   x86_64  1.2.17-13.el7_2.5  updates  118 k
>>>   libvirt-lock-sanlock
>>> Is it safe to let yum update these packages while the host has running
>>> VMs?
>>> in maintenance mode? or not at all?
>>>
>>>
>>> Robert
>>>
>>>
>>>
>>>
>>>
>>> I saw a response where we are supposed to go to maintenance mode and then
>>> VMs will migrate, but I've got nowhere to migrate to as I'm on a host with
>>> hosted Engine and no other host to migrate to.  So do I shut down all VMs
>>> and then go to maintenance mode, then update and reboot my host?
>>
>> In this case you cannot put the host into maintenance, since hosted
>> engine is running on this host.
>>
>> Adding Simone to add more details on hosted engine upgrades.
>>
>> Nir
>
> Thanks.  That will be a big help.

I didn't test this, but I think it will work:

1. Shut down all other VMs (as you already did)
2. Move to global maintenance: hosted-engine --set-maintenance --mode=global
3. Cleanly shut down the engine VM
4. Stop the HA daemons: service ovirt-ha-agent stop ; service ovirt-ha-broker stop
5. yum update what you need/want
6. Reboot (perhaps not always needed)
7. Check that the HA daemons started (see the check below)
8. Exit global maintenance: hosted-engine --set-maintenance --mode=none
9. Start the engine VM (or wait for HA to start it)
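For step 7, a quick way to verify (a sketch, assuming an EL7 host with
systemd; adjust to your init system):

  # check that both HA daemons came back up after the reboot
  systemctl status ovirt-ha-agent ovirt-ha-broker
  # and watch the overall state until the engine VM is reported up
  hosted-engine --vm-status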
-- 
Didi


Re: [ovirt-users] oVirt 4.0 Host install fail cause of dnf api

2016-06-25 Thread Yedidyah Bar David
On Sat, Jun 25, 2016 at 9:22 PM, Pilař Vojtěch  wrote:
>
> Hi all,
>
> I am trying to install a host from the oVirt Engine Web Admin, but with no
> success so far. Everything dies on the sigCheckPkg bug from dnf
> (https://bugzilla.redhat.com/show_bug.cgi?id=1344270).

Indeed.

The above bug can only be fixed once dnf publishes an official API for
checking signatures. For now we just dropped the check; see this bug:

https://bugzilla.redhat.com/show_bug.cgi?id=1343382

which is targeted to 4.0.1.

>
> 2016-06-25 19:36:22 INFO otopi.plugins.otopi.packagers.dnfpackager
> dnfpackager.info:79 DNF Downloaded libselinux-python-2.4-4.fc23.x86_64.rpm
> The 'sigCheckPkg' function is not a part of DNF API and will be removed in
> the upcoming DNF release. Please use only officially supported API
> functions. DNF API documentation is available at
> https://dnf.readthedocs.org/en/latest/api.html.
> 2016-06-25 19:36:22 ERROR otopi.plugins.otopi.packagers.dnfpackager
> dnfpackager.error:84 DNF 'NoneType' object is not iterable
> 2016-06-25 19:36:22 DEBUG otopi.plugins.otopi.packagers.dnfpackager
> dnfpackager.verbose:75 DNF Closing transaction with rollback
>
> Is there any workaround? I see no way to deploy hosts.

For now the only workaround I am aware of is to use a fixed
build, which you can get from jenkins [1] or from the nightly snapshots [2].

[1] http://jenkins.ovirt.org/job/otopi_4.0_build-artifacts-el7-x86_64/
[2] 
https://www.ovirt.org/develop/dev-process/install-nightly-snapshot/#from-40-branches
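Roughly, on the engine machine, something like this (a sketch only: the
artifact path below is illustrative, not the real URL, so browse the job
page under [1] for the current noarch RPM):

  # fetch the fixed otopi build from the jenkins job and install it
  curl -O http://jenkins.ovirt.org/job/otopi_4.0_build-artifacts-el7-x86_64/<path-to-artifact>/otopi-<version>.noarch.rpm
  dnf install ./otopi-<version>.noarch.rpm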

Best,
-- 
Didi


[ovirt-users] oVirt 3.6 to 4.0 Upgrade

2016-06-25 Thread Brett I. Holcomb
In order not to hijack another thread, I wanted to separate this from the
"latest CentOS libvirt updates safe?" thread, since this is different
from what the OP on that thread wants to do.


I've been digging through the 4.0 documentation since I'd like to
upgrade in the not too distant future, so I'm doing research and I'm confused.


I'm running 3.6.6 in a hosted Engine environment.  My engine runs in a
VM on the host.  The host is a physical box and the VMs are stored on a
NAS accessed via an iSCSI LUN.  I have no other hosts running, so I
cannot migrate.


According to this link, https://www.ovirt.org/release/4.0.0/, I'm told
to simply run yum update ovirt-engine-setup and then engine-setup, but I
assume that's for a non-hosted-engine environment.  I then go to the
Hosted_Engine_Howto#Upgrade_Hosted_Engine guide and am told to
do this.


It says to set hosted-engine maintenance mode to global, plus other steps
that are written assuming I have some place to migrate my VMs to.  I do not
have another host.
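(For reference, the non-hosted-engine path from the release page boils down
to, on the engine machine:

  yum update ovirt-engine-setup
  engine-setup

but as said, I doubt that applies as-is to a hosted engine.)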


So how do I properly move to oVirt 4.0 from 3.6.6 on a single physical 
host running a hosted Engine VM?


Thanks.





[ovirt-users] direct mounting of vm disks?

2016-06-25 Thread Robert Story
Hello,

I'm using oVirt 3.5.x w/nfs for vm file storage. I'm trying to restore a vm
from backup, which entails:

 - scp backup.tar to vm
 - untar backup on vm

This means all the data makes three trips over the network, each of which
causes a load spike on my NFS server. That NFS load, of course, affects all
other VMs.

What I'd like to be able to do is:

 - scp backup.tar to nfs server
 - stop vm
 - mount vm disks on nfs server
 - untar backup on nfs server (using ionice to minimize load impact)
 - unmount vm disks
 - start vm

I remember that I used to use kpartx to mount regular KVM disks, so I'm
hoping that it can also be done here. Has anyone else tried to make this work?
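From memory, the rough shape would be something like this (a sketch only,
not tested against oVirt's image layout; paths and device names are
illustrative, and it assumes a raw volume - a qcow2 image would need
qemu-nbd instead of plain kpartx):

  # on the NFS server, with the VM stopped
  kpartx -av /exports/data/<sd_uuid>/images/<img_uuid>/<vol_uuid>  # maps partitions as /dev/mapper/loopXpN
  mount /dev/mapper/loop0p1 /mnt/vmroot
  ionice -c3 tar xf /srv/backup.tar -C /mnt/vmroot
  umount /mnt/vmroot
  kpartx -dv /exports/data/<sd_uuid>/images/<img_uuid>/<vol_uuid>  # remove the mappings again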


Robert

-- 
Senior Software Engineer @ Parsons




[ovirt-users] Accessing VM over WAN

2016-06-25 Thread RK RK
Hi,

I am planning to deploy oVirt 4.0.0 in a production environment where
initially it will host 150 VDIs and will expand by 20 to 25 in subsequent
years with Windows 7, 8.1 and 10.

Users will be accessing these VDIs from their Android tablets or tiny thin
client devices running only a small-footprint OS with an HTML5-capable
browser.

I wish to give users access to the VDIs over the WAN (from home or while
travelling). Is it possible to access the VDIs via spice-html5 over the WAN?

Are any special configurations needed to enable it? And how can I make
spice-html5 the default protocol for accessing the VDIs?
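One knob that may be relevant here (hedged: please verify the option exists
in your version with engine-config --list before relying on it):

  # on the engine machine: default SPICE consoles to the HTML5 client,
  # then restart the engine for the change to take effect
  engine-config -s ClientModeSpiceDefault=Html5
  systemctl restart ovirt-engine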

-- 

With Regards,
RK,
+91 9840483044


Re: [ovirt-users] oVirt/RHEV and HP Blades and HP iSCSI SAN

2016-06-25 Thread Colin Coe
Hi Dan

As this is production, critical infrastructure, large downtime is not
possible.  We have a hardware refresh coming up in about 12 months so I'll
have to wait until then.

I recall asking this of GSS quite some time ago and not really getting too
helpful an answer.

We use a combination of Cisco C4500-X (core/distribution) and C2960-X
(access) switches.  The SAN units connect into the C4500-X switches (32 x
10Gbps ports).

Thanks

On Sun, Jun 26, 2016 at 9:47 AM, Dan Yasny  wrote:

>
>
> On Fri, Jun 24, 2016 at 11:05 PM, Colin Coe  wrote:
>
>> Hi Dan
>>
>> I should have mentioned that we need to use the same subnet for both
>> iSCSI interfaces which is why I ended up bonding (mode 1) these.
>>
>
> This is not best practice. Perhaps you should have asked these questions
> when planning? Right now, I'd start planning for a large downtime window in
> order to redo things right.
>
>
>> Looking at
>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/sect-Preparing_and_Adding_Block_Storage.html#Configuring_iSCSI_Multipathing,
>> it doesn't say anything about tying the iSCSI Bond back to the host.  In
>> our DEV environment I removed the bond the iSCSI interfaces were using and
>> created the iSCSI Bond as per this link.  What do I do now?  Recreate the
>> bond and give it an IP?  I don't see where to put an IP for iSCSI against
>> the hosts?
>>
>
> I don't have a setup in front of me to provide instructions, but you did
> mention you're using RHEV, why not just call support, they can just remote
> in and help you, or send some screenshots...
>
>
>>
>> Lastly, we're not using jumbo frames, as we're a critical infrastructure
>> organisation and I fear possible side effects.
>>
>
> You have an iSCSI dedicated network, I don't see the problem setting up a
> dedicated network the correct way, unless your switches have a single MTU
> setting for all ports, like the cisco 2960's. There's a lot of performance
> to gain there, depending on the kind of IO your VMs are generating.
>
>
>>
>> Thanks
>>
>> On Sat, Jun 25, 2016 at 10:30 AM, Dan Yasny  wrote:
>>
>>> Two things off the top of my head after skimming the given details:
>>> 1. iSCSI will work better without the bond. It already uses multipath,
>>> so all you need is to separate the portal IPs/subnets and provide separate
>>> IPs/subnets to the iSCSI dedicated NICs, as is the recommended way here:
>>> https://access.redhat.com/solutions/131153 and also be sure to follow
>>> this:
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/sect-Preparing_and_Adding_Block_Storage.html#Configuring_iSCSI_Multipathing
>>> 2. You haven't mentioned anything about jumbo frames, are you using
>>> those? If not, it is a very good idea to start.
>>>
>>> And 3: since this is RHEV, you might get much more help from the
>>> official support than from this list.
>>>
>>> Hope this helps
>>> Dan
>>>
>>> On Fri, Jun 24, 2016 at 9:12 PM, Colin Coe  wrote:
>>>
 Hi all

 We run four RHEV datacenters, two PROD, one DEV and one TEST/Training.
 They are all  working OK but I'd like a definitive answer on how I should
 be configuring the networking side as I'm pretty sure we're getting
 sub-optimal networking performance.

 All datacenters are housed in HP C7000 Blade enclosures.  The PROD
 datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a cluster
 of two 4730s. These are configured RAID5 internally with NRAID1. The DEV
 and TEST datacenters are using P4500 iSCSI SANs and each datacenter has a
 cluster of three P4500s configured with RAID10 internally and NRAID5.

 The HP C7000 each have two Flex10/10D interconnect modules configured
 in a redundant ring so that we can upgrade the interconnects without
 dropping network connectivity to the infrastructure. We use fat RHEL-H 7.2
 hypervisors (HP BL460) and these are all configured with six network
 interfaces:
 - eno1 and eno2 are bond0 which is the rhevm interface
 - eno3 and eno4 are bond1 and all the VM VLANs are trunked over this
 bond using 802.1q
 - eno5 and eno6 are bond2 and dedicated to iSCSI traffic

 Is this the "correct" way to do this?  If not, what should I be doing
 instead?

 Thanks

 CC



>>>
>>
>


Re: [ovirt-users] oVirt/RHEV and HP Blades and HP iSCSI SAN

2016-06-25 Thread Dan Yasny
On Fri, Jun 24, 2016 at 11:05 PM, Colin Coe  wrote:

> Hi Dan
>
> I should have mentioned that we need to use the same subnet for both iSCSI
> interfaces which is why I ended up bonding (mode 1) these.
>

This is not best practice. Perhaps you should have asked these questions
when planning? Right now, I'd start planning for a large downtime window in
order to redo things right.
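For illustration only (not tested here, and interface names/portal IPs are
made up), the no-bond layout on each host usually looks something like:

  # give each iSCSI NIC its own iface definition and IP (ideally separate subnets)
  iscsiadm -m iface -I iscsi-eno5 --op=new
  iscsiadm -m iface -I iscsi-eno5 --op=update -n iface.net_ifacename -v eno5
  iscsiadm -m iface -I iscsi-eno6 --op=new
  iscsiadm -m iface -I iscsi-eno6 --op=update -n iface.net_ifacename -v eno6
  # discover and log in through both interfaces so multipath sees two paths
  iscsiadm -m discovery -t sendtargets -p 10.0.1.10:3260 -I iscsi-eno5 -I iscsi-eno6
  iscsiadm -m node -L all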


> Looking at
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/sect-Preparing_and_Adding_Block_Storage.html#Configuring_iSCSI_Multipathing,
> it doesn't say anything about tying the iSCSI Bond back to the host.  In
> our DEV environment I removed the bond the iSCSI interfaces were using and
> created the iSCSI Bond as per this link.  What do I do now?  Recreate the
> bond and give it an IP?  I don't see where to put an IP for iSCSI against
> the hosts?
>

I don't have a setup in front of me to provide instructions, but you did
mention you're using RHEV, why not just call support, they can just remote
in and help you, or send some screenshots...


>
> Lastly, we're not using jumbo frames, as we're a critical infrastructure
> organisation and I fear possible side effects.
>

You have an iSCSI dedicated network, I don't see the problem setting up a
dedicated network the correct way, unless your switches have a single MTU
setting for all ports, like the cisco 2960's. There's a lot of performance
to gain there, depending on the kind of IO your VMs are generating.
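If you do go down that road, the host-side change is small (EL7 ifcfg
sketch; names illustrative, and the switch ports and SAN must be set to the
same MTU end to end):

  # /etc/sysconfig/network-scripts/ifcfg-eno5 (and likewise eno6): add
  #   MTU=9000
  # then bounce the interface and verify that large frames pass unfragmented
  ip link show eno5 | grep mtu
  ping -M do -s 8972 10.0.1.10   # 9000 minus 28 bytes of IP/ICMP headers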


>
> Thanks
>
> On Sat, Jun 25, 2016 at 10:30 AM, Dan Yasny  wrote:
>
>> Two things off the top of my head after skimming the given details:
>> 1. iSCSI will work better without the bond. It already uses multipath, so
>> all you need is to separate the portal IPs/subnets and provide separate
>> IPs/subnets to the iSCSI dedicated NICs, as is the recommended way here:
>> https://access.redhat.com/solutions/131153 and also be sure to follow
>> this:
>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/sect-Preparing_and_Adding_Block_Storage.html#Configuring_iSCSI_Multipathing
>> 2. You haven't mentioned anything about jumbo frames, are you using
>> those? If not, it is a very good idea to start.
>>
>> And 3: since this is RHEV, you might get much more help from the official
>> support than from this list.
>>
>> Hope this helps
>> Dan
>>
>> On Fri, Jun 24, 2016 at 9:12 PM, Colin Coe  wrote:
>>
>>> Hi all
>>>
>>> We run four RHEV datacenters, two PROD, one DEV and one TEST/Training.
>>> They are all  working OK but I'd like a definitive answer on how I should
>>> be configuring the networking side as I'm pretty sure we're getting
>>> sub-optimal networking performance.
>>>
>>> All datacenters are housed in HP C7000 Blade enclosures.  The PROD
>>> datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a cluster
>>> of two 4730s. These are configured RAID5 internally with NRAID1. The DEV
>>> and TEST datacenters are using P4500 iSCSI SANs and each datacenter has a
>>> cluster of three P4500s configured with RAID10 internally and NRAID5.
>>>
>>> The HP C7000 each have two Flex10/10D interconnect modules configured in
>>> a redundant ring so that we can upgrade the interconnects without dropping
>>> network connectivity to the infrastructure. We use fat RHEL-H 7.2
>>> hypervisors (HP BL460) and these are all configured with six network
>>> interfaces:
>>> - eno1 and eno2 are bond0 which is the rhevm interface
>>> - eno3 and eno4 are bond1 and all the VM VLANs are trunked over this
>>> bond using 802.1q
>>> - eno5 and eno6 are bond2 and dedicated to iSCSI traffic
>>>
>>> Is this the "correct" way to do this?  If not, what should I be doing
>>> instead?
>>>
>>> Thanks
>>>
>>> CC
>>>
>>>
>>>
>>
>


Re: [ovirt-users] latest CentOS libvirt updates safe?

2016-06-25 Thread Brett I. Holcomb



On 06/25/2016 08:10 PM, Nir Soffer wrote:

On Sat, Jun 25, 2016 at 8:51 PM, Brett I. Holcomb  wrote:


On 06/25/2016 10:57 AM, Robert Story wrote:

I have oVirt 3.5.x on CentOS 7 hosts. These hosts have updates which
include libvirt:

  libvirt-client   x86_64  1.2.17-13.el7_2.5  updates  4.3 M
  libvirt-daemon   x86_64  1.2.17-13.el7_2.5  updates  585 k
  libvirt-daemon-config-nwfilter   x86_64  1.2.17-13.el7_2.5  updates  122 k
  libvirt-daemon-driver-interface  x86_64  1.2.17-13.el7_2.5  updates  162 k
  libvirt-daemon-driver-network    x86_64  1.2.17-13.el7_2.5  updates  302 k
  libvirt-daemon-driver-nodedev    x86_64  1.2.17-13.el7_2.5  updates  161 k
  libvirt-daemon-driver-nwfilter   x86_64  1.2.17-13.el7_2.5  updates  185 k
  libvirt-daemon-driver-qemu   x86_64  1.2.17-13.el7_2.5  updates  571 k
  libvirt-daemon-driver-secret x86_64  1.2.17-13.el7_2.5  updates  155 k
  libvirt-daemon-driver-storage    x86_64  1.2.17-13.el7_2.5  updates  328 k
  libvirt-daemon-kvm   x86_64  1.2.17-13.el7_2.5  updates  118 k
  libvirt-lock-sanlock

Is it safe to let yum update these packages while the host has running VMs?
in maintenance mode? or not at all?


Robert





I saw a response where we are supposed to go to maintenance mode and then
VMs will migrate, but I've got nowhere to migrate to as I'm on a host with
hosted Engine and no other host to migrate to.  So do I shut down all VMs and
then go to maintenance mode, then update and reboot my host?

In this case you cannot put the host into maintenance, since hosted
engine is running on this host.

Adding Simone to add more details on hosted engine upgrades.

Nir

Thanks.  That will be a big help.



Re: [ovirt-users] latest CentOS libvirt updates safe?

2016-06-25 Thread Nir Soffer
On Sat, Jun 25, 2016 at 8:51 PM, Brett I. Holcomb  wrote:
>
>
> On 06/25/2016 10:57 AM, Robert Story wrote:
>
> I have oVirt 3.5.x on CentOS 7 hosts. These hosts have updates which
> include libvirt:
>
>  libvirt-client   x86_64  1.2.17-13.el7_2.5  updates  4.3 M
>  libvirt-daemon   x86_64  1.2.17-13.el7_2.5  updates  585 k
>  libvirt-daemon-config-nwfilter   x86_64  1.2.17-13.el7_2.5  updates  122 k
>  libvirt-daemon-driver-interface  x86_64  1.2.17-13.el7_2.5  updates  162 k
>  libvirt-daemon-driver-network    x86_64  1.2.17-13.el7_2.5  updates  302 k
>  libvirt-daemon-driver-nodedev    x86_64  1.2.17-13.el7_2.5  updates  161 k
>  libvirt-daemon-driver-nwfilter   x86_64  1.2.17-13.el7_2.5  updates  185 k
>  libvirt-daemon-driver-qemu   x86_64  1.2.17-13.el7_2.5  updates  571 k
>  libvirt-daemon-driver-secret x86_64  1.2.17-13.el7_2.5  updates  155 k
>  libvirt-daemon-driver-storage    x86_64  1.2.17-13.el7_2.5  updates  328 k
>  libvirt-daemon-kvm   x86_64  1.2.17-13.el7_2.5  updates  118 k
>  libvirt-lock-sanlock
>
> Is it safe to let yum update these packages while the host has running VMs?
> in maintenance mode? or not at all?
>
>
> Robert
>
>
>
>
>
> I saw a response where we are supposed to go to maintenance mode and then
> VMs will migrate, but I've got nowhere to migrate to as I'm on a host with
> hosted Engine and no other host to migrate to.  So do I shut down all VMs and
> then go to maintenance mode, then update and reboot my host?

In this case you cannot put the host into maintenance, since hosted
engine is running on this host.

Adding Simone to add more details on hosted engine upgrades.

Nir


Re: [ovirt-users] oVirt and Ceph

2016-06-25 Thread Fernando Frediani

This solution looks intresting.

If I understand it correctly you first build your CEPH pool. Then you 
export RBD to iSCSI Target which exports it to oVirt which then will 
create LVMs on the top of it ?
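Just to check my understanding, I'd picture the gateway part roughly like
this (a hand-wavy sketch with invented names, not a recipe; the SUSE doc
linked below drives this through lrbd instead):

  # on the iSCSI gateway box: map an RBD image and export it through LIO
  rbd create vmpool/ovirt-lun0 --size 1024000      # size in MB
  rbd map vmpool/ovirt-lun0                        # shows up as /dev/rbd0
  targetcli /backstores/block create name=ovirt-lun0 dev=/dev/rbd0
  targetcli /iscsi create iqn.2016-06.com.example:ovirt
  targetcli /iscsi/iqn.2016-06.com.example:ovirt/tpg1/luns create /backstores/block/ovirt-lun0

oVirt would then just see an ordinary iSCSI storage domain on top of that.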


Could you share more details about your experience? Looks like a way to
get Ceph + oVirt without Cinder.


Thanks

Fernando

On 25/06/2016 17:47, Nicolás wrote:

Hi,

We're using Ceph along with an iSCSI gateway, so our storage domain is
actually an iSCSI backend. So far, we have had zero issues with circa
50 high-IO-rated VMs. Perhaps [1] might shed some light on how to set
it up.


Regards.

[1]: 
https://www.suse.com/documentation/ses-2/book_storage_admin/data/cha_ceph_iscsi.html
On 24/6/2016 9:28 p.m., Charles Gomes wrote:


Hello

I’ve been reading lots of material about implementing oVirt with
Ceph, however all talk about using Cinder.

Is there a way to get oVirt with Ceph without having to implement
entire Openstack ?

I’m already currently using Foreman to deploy Ceph and KVM nodes,
trying to minimize the amount of moving parts. I heard something
about oVirt providing a managed Cinder appliance, have any seen this ?







Re: [ovirt-users] oVirt and Ceph

2016-06-25 Thread Nir Soffer
On Sat, Jun 25, 2016 at 11:47 PM, Nicolás  wrote:
> Hi,
>
> We're using Ceph along with an iSCSI gateway, so our storage domain is
> actually an iSCSI backend. So far, we have had zero issues with circa 50 high
> IO rated VMs. Perhaps [1] might shed some light on how to set it up.

Can you share more details on this setup and how you integrate with ovirt?

For example, are you using ceph luns in regular iscsi storage domain, or
attaching luns directly to vms?

Did you try our dedicated cinder/ceph support and compare it with the ceph
iscsi gateway?

Nir


Re: [ovirt-users] oVirt and Ceph

2016-06-25 Thread Nicolás
Hi,

We're using Ceph along with an iSCSI gateway, so our storage domain is actually an iSCSI backend. So far, we have had zero issues with circa 50 high-IO-rated VMs. Perhaps [1] might shed some light on how to set it up.

Regards.

[1]: https://www.suse.com/documentation/ses-2/book_storage_admin/data/cha_ceph_iscsi.html

On 24/6/2016 9:28 p.m., Charles Gomes wrote:



Hello
 
I’ve been reading lots of material about implementing oVirt with Ceph, however all talk about using Cinder.

Is there a way to get oVirt with Ceph without having to implement entire Openstack ?

I’m already currently using Foreman to deploy Ceph and KVM nodes, trying to minimize the amount of moving parts. I heard something about oVirt providing a managed Cinder appliance, have any seen this ?
 





[ovirt-users] Network redundancy with Manual balancing per VLAN

2016-06-25 Thread Fernando Frediani

Hello,

In VMware it is possible to bond two network interfaces, and for each
Portgroup (equivalent to a VLAN) you can tell which of the underlying
physical interfaces the traffic should primarily flow through and which
stays as secondary (bond mode=1 equivalent). So for certain VLANs
(Management, Live Migration, etc.) it is possible to force traffic through
one physical NIC of the bond, while other VLANs (virtual machine traffic)
go out via the other NIC, with failover to each other should a cable or
switch fail.


This is especially good for making better use of the few NICs available
while still having redundancy.


In oVirt it is also possible to have bonds, but would it still be
possible to do the same and favor traffic on a per-VLAN basis? I guess
it is something related to the Linux bonding module, but perhaps someone
has done this already.
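For what it's worth, the closest thing I know of in the Linux bonding driver
is the "primary" option in active-backup mode, which is per bond, not per
VLAN (EL7 ifcfg sketch, names illustrative):

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  BONDING_OPTS="mode=active-backup primary=eno1 miimon=100"

i.e. all VLANs on bond0 would prefer eno1; steering individual VLANs to
different NICs would need a second bond carrying those VLANs.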


Thanks

Fernando



Re: [ovirt-users] oVirt and Ceph

2016-06-25 Thread Maor Lipchuk
Hi Charles,

Currently, oVirt communicates with Ceph only through Cinder.
If you want to avoid using Cinder, perhaps you can try CephFS and
mount it as a POSIX storage domain instead.
Regarding the Cinder appliance, it is not yet implemented, though we are
currently investigating this option.
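(To sketch what that means: a POSIX storage domain backed by CephFS would
effectively do the equivalent of the mount below, so in the New Domain
dialog you'd fill in the path, VFS type "ceph" and the options accordingly;
hostnames and paths here are illustrative.)

  # what the host would effectively run for a CephFS-backed POSIX domain
  mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret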

Regards,
Maor

On Fri, Jun 24, 2016 at 11:23 PM, Charles Gomes 
wrote:

> Hello
>
>
>
> I’ve been reading lots of material about implementing oVirt with Ceph,
> however all talk about using Cinder.
>
> Is there a way to get oVirt with Ceph without having to implement entire
> Openstack ?
>
> I’m already currently using Foreman to deploy Ceph and KVM nodes, trying
> to minimize the amount of moving parts. I heard something about oVirt
> providing a managed Cinder appliance, have any seen this ?
>
>
>
>
>


[ovirt-users] oVirt 4.0 Host install fail cause of dnf api

2016-06-25 Thread Pilař Vojtěch

Hi all,

I am trying to install a host from the oVirt Engine Web Admin, but with no success so
far. Everything dies on the sigCheckPkg bug from dnf
(https://bugzilla.redhat.com/show_bug.cgi?id=1344270).

2016-06-25 19:36:22 INFO otopi.plugins.otopi.packagers.dnfpackager 
dnfpackager.info:79 DNF Downloaded libselinux-python-2.4-4.fc23.x86_64.rpm
The 'sigCheckPkg' function is not a part of DNF API and will be removed in the 
upcoming DNF release. Please use only officially supported API functions. DNF 
API documentation is available at 
https://dnf.readthedocs.org/en/latest/api.html.
2016-06-25 19:36:22 ERROR otopi.plugins.otopi.packagers.dnfpackager 
dnfpackager.error:84 DNF 'NoneType' object is not iterable
2016-06-25 19:36:22 DEBUG otopi.plugins.otopi.packagers.dnfpackager 
dnfpackager.verbose:75 DNF Closing transaction with rollback

Is there any workaround? I see no way to deploy hosts.

(Fedora 23, oVirt 4.0)

Thank you for help


Re: [ovirt-users] Ovirt 4.0 Engine Setup Failure

2016-06-25 Thread Melissa Mesler
Okay so I deleted the certificate and added it again. The login page
works for now. I'll report back any more issues.
 
 
On Sat, Jun 25, 2016, at 01:45 PM, Melissa Mesler wrote:
> Pardon my ignorance on the sos report but how do I run one of those?
>
>
> On Sat, Jun 25, 2016, at 01:52 AM, Sandro Bonazzola wrote:
>>
>>
>> On 25/Jun/2016 01:38, "Melissa Mesler"  wrote:
>> >
>> > Okay I finally got past the install.. Everything went fine. I then
>> > bring it up in the web browser, add the certs, confirm security
>> > exception and then nothing. There is nothing in the ovirt-
>> > engine/engine.log to help guide me.
>> >
>>
>>
>> Can you please share a sos report?
>>
>>
>> >
>> > On Fri, Jun 24, 2016, at 01:59 PM, Martin Perina wrote:
>> >>
>> >> Could you please share installation log from /var/log/ovirt-
>> >> engine/setup ?
>> >> Thanks
>> >> Martin Perina
>> >>
>> >> On Fri, Jun 24, 2016 at 6:31 PM, Melissa Mesler
>> >>  wrote:
>> >>>
>> >>> Also, this is in the engine-setup logs:
>> >>> 2016-06-24 11:04:08 ERROR
>> >>> otopi.plugins.ovirt_engine_common.base.core.misc
>> >>> misc._terminate:148
>> >>> Execution of setup failed
>> >>>
>> >>> On Fri, Jun 24, 2016, at 11:08 AM, Melissa Mesler wrote:
>> >>> > I am doing a clean install of Ovirt 4.0. Upon executing engine-
>> >>> > setup
>> >>> > with all default values, it fails. This is the error during
>> >>> > setup:
>> >>> >
>> >>> > [ ERROR ] Failed to execute stage 'Misc configuration': Command
>> >>> > '/usr/bin/ovirt-aaa-jdbc-tool' failed to execute
>> >>> >
>> >>> >
>> >>> > Any ideas?
>> >
>> >
>> >
>> >
>>
>
 


Re: [ovirt-users] Ovirt 4.0 Engine Setup Failure

2016-06-25 Thread Melissa Mesler
Pardon my ignorance on the sos report but how do I run one of those?
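(For anyone finding this later: on RHEL/CentOS it is typically

  yum install sos
  sosreport

run as root on the engine machine; oVirt also ships an ovirt-log-collector
tool that gathers engine and host logs in one go.)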
 
 
On Sat, Jun 25, 2016, at 01:52 AM, Sandro Bonazzola wrote:
>
> On 25/Jun/2016 01:38, "Melissa Mesler"  wrote:
>  >
>  > Okay I finally got past the install.. Everything went fine. I then
>  > bring it up in the web browser, add the certs, confirm security
>  > exception and then nothing. There is nothing in the ovirt-
>  > engine/engine.log to help guide me.
>  >
> Can you please share a sos report?
> >
>  > On Fri, Jun 24, 2016, at 01:59 PM, Martin Perina wrote:
>  >>
>  >> Could you please share installation log from /var/log/ovirt-
>  >> engine/setup ?
>  >> Thanks
>  >> Martin Perina
>  >>
>  >> On Fri, Jun 24, 2016 at 6:31 PM, Melissa Mesler
>  >>  wrote:
>  >>>
>  >>> Also, this is in the engine-setup logs:
>  >>> 2016-06-24 11:04:08 ERROR
>  >>> otopi.plugins.ovirt_engine_common.base.core.misc
>  >>> misc._terminate:148
>  >>> Execution of setup failed
>  >>>
>  >>> On Fri, Jun 24, 2016, at 11:08 AM, Melissa Mesler wrote:
>  >>> > I am doing a clean install of Ovirt 4.0. Upon executing engine-
>  >>> > setup
>  >>> > with all default values, it fails. This is the error during
>  >>> > setup:
>  >>> >
>  >>> > [ ERROR ] Failed to execute stage 'Misc configuration': Command
>  >>> > '/usr/bin/ovirt-aaa-jdbc-tool' failed to execute
>  >>> >
>  >>> >
>  >>> > Any ideas?
>  >
>  >
>  >
>  >
 


Re: [ovirt-users] Error mounting hosted engine Volume (Glusterfs) via VDSM

2016-06-25 Thread Ralf Schenk
Hello,

I changed this (I don't really know what I'm doing, but I'm pretty sure it
doesn't hurt here ;-) )
in /usr/share/vdsm/storage/storageServer.py, class
GlusterFSConnection(MountConnection):

[root@microcloud28 storage]# diff -u storageServer.py.orig storageServer.py
--- storageServer.py.orig   2016-06-25 20:20:32.372965968 +0200
+++ storageServer.py2016-06-25 20:20:44.490640046 +0200
@@ -308,7 +308,7 @@

 def __init__(self,
  spec,
- vfsType=None,
+ vfsType="glusterfs",
  options="",
  mountClass=mount.Mount):
 super(GlusterFSConnection, self).__init__(spec,

and this helped to add "-t glusterfs" to the mount command, according to
vdsm.log:

jsonrpc.Executor/4::DEBUG::2016-06-25
20:22:16,804::fileUtils::143::Storage.fileUtils::(createdir) Cre
ating directory:
/rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine
mode: None
jsonrpc.Executor/4::DEBUG::2016-06-25
20:22:16,804::storageServer::364::Storage.StorageServer.MountCon
nection::(_get_backup_servers_option) Using bricks:
['microcloud21.rxmgmt.databay.de', 'microcloud24.r
xmgmt.databay.de', 'microcloud27.rxmgmt.databay.de']
jsonrpc.Executor/4::WARNING::2016-06-25
20:22:16,804::storageServer::370::Storage.StorageServer.MountC
onnection::(_get_backup_servers_option) gluster server
u'glusterfs.rxmgmt.databay.de' is not in bricks
 ['microcloud21.rxmgmt.databay.de', 'microcloud24.rxmgmt.databay.de',
'microcloud27.rxmgmt.databay.de'
], possibly mounting duplicate servers
jsonrpc.Executor/4::DEBUG::2016-06-25
20:22:16,804::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bi
n/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/bin/systemd-run --scope
--slice=vdsm-glusterfs /usr/bin
/mount -t glusterfs -o
backup-volfile-servers=microcloud21.rxmgmt.databay.de:microcloud24.rxmgmt.datab
ay.de:microcloud27.rxmgmt.databay.de glusterfs.rxmgmt.databay.de:/engine
/rhev/data-center/mnt/gluster
SD/glusterfs.rxmgmt.databay.de:_engine (cwd None)


On 25.06.2016 at 20:00, Ralf Schenk wrote:
>
> Hello,
>
> I think options for mounting the hosted-engine Volume changed in
> latest vdsm to support mounting from backup-volfile-servers.
>
> [root@microcloud28 ~]# rpm -qi vdsm
> Name: vdsm
> Version : 4.17.28
> Release : 1.el7
> Architecture: noarch
> Install Date: Fri 10 Jun 2016 11:17:37 AM CEST
> Group   : Applications/System
> Size: 3828639
> License : GPLv2+
> Signature   : RSA/SHA1, Fri 03 Jun 2016 12:53:20 AM CEST, Key ID
> 7aebbe8261e8806c
> Source RPM  : vdsm-4.17.28-1.el7.src.rpm
>
> Now my hosts have problems mounting the volume. During hosted-engine setup
> I configured the (replica 3) volume as
> "glusterfs.rxmgmt.databay.de:/engine", which is a round-robin DNS name for
> my gluster hosts and _not_ the DNS name of any gluster brick.
>
> Now VDSM logs:
>
> jsonrpc.Executor/3::DEBUG::2016-06-25
> 19:40:02,520::fileUtils::143::Storage.fileUtils::(createdir) Cre
> ating directory:
> /rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine
> mode: None
> jsonrpc.Executor/3::DEBUG::2016-06-25
> 19:40:02,520::storageServer::364::Storage.StorageServer.MountCon
> nection::(_get_backup_servers_option) Using bricks:
> ['microcloud21.rxmgmt.databay.de', 'microcloud24.r
> xmgmt.databay.de', 'microcloud27.rxmgmt.databay.de']
> jsonrpc.Executor/3::WARNING::2016-06-25
> 19:40:02,520::storageServer::370::Storage.StorageServer.MountC
> onnection::(_get_backup_servers_option) gluster server
> u'glusterfs.rxmgmt.databay.de' is not in bricks
>  ['microcloud21.rxmgmt.databay.de', 'microcloud24.rxmgmt.databay.de',
> 'microcloud27.rxmgmt.databay.de'
> ], possibly mounting duplicate servers
> jsonrpc.Executor/3::DEBUG::2016-06-25
> 19:40:02,520::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bi
> n/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/bin/systemd-run --scope
> --slice=vdsm-glusterfs /usr/bin
> /mount -o
> backup-volfile-servers=microcloud21.rxmgmt.databay.de:microcloud24.rxmgmt.databay.de:microcl
> oud27.rxmgmt.databay.de glusterfs.rxmgmt.databay.de:/engine
> /rhev/data-center/mnt/glusterSD/glusterfs.
> rxmgmt.databay.de:_engine (cwd None)
> jsonrpc.Executor/3::ERROR::2016-06-25
> 19:40:02,540::hsm::2473::Storage.HSM::(connectStorageServer) Cou
> ld not connect to storageServer
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2470, in
> connectStorageServer
> conObj.connect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 237, in connect
> six.reraise(t, v, tb)
>   File "/usr/share/vdsm/storage/storageServer.py", line 229, in connect
> self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>   File "/usr/share/vdsm/storage/mount.py", line 225, in mount
> return self._runcmd(cmd, timeout)
>   File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
> raise MountError(rc, ";".join((out, err)))
> MountError: (32, ';Running scope as unit run-13461.scope.\nmount.nfs: an
> incorrect mount option was specified\n')

[ovirt-users] Error mounting hosted engine Volume (Glusterfs) via VDSM

2016-06-25 Thread Ralf Schenk
Hello,

I think options for mounting the hosted-engine Volume changed in latest
vdsm to support mounting from backup-volfile-servers.

[root@microcloud28 ~]# rpm -qi vdsm
Name: vdsm
Version : 4.17.28
Release : 1.el7
Architecture: noarch
Install Date: Fri 10 Jun 2016 11:17:37 AM CEST
Group   : Applications/System
Size: 3828639
License : GPLv2+
Signature   : RSA/SHA1, Fri 03 Jun 2016 12:53:20 AM CEST, Key ID
7aebbe8261e8806c
Source RPM  : vdsm-4.17.28-1.el7.src.rpm

Now my hosts have problems mounting the volume. During hosted-engine setup I
configured the (replica 3) volume as
"glusterfs.rxmgmt.databay.de:/engine", which is a round-robin DNS name for my
gluster hosts and _not_ the DNS name of any gluster brick.

Now VDSM logs:

jsonrpc.Executor/3::DEBUG::2016-06-25
19:40:02,520::fileUtils::143::Storage.fileUtils::(createdir) Cre
ating directory:
/rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine
mode: None
jsonrpc.Executor/3::DEBUG::2016-06-25
19:40:02,520::storageServer::364::Storage.StorageServer.MountCon
nection::(_get_backup_servers_option) Using bricks:
['microcloud21.rxmgmt.databay.de', 'microcloud24.r
xmgmt.databay.de', 'microcloud27.rxmgmt.databay.de']
jsonrpc.Executor/3::WARNING::2016-06-25
19:40:02,520::storageServer::370::Storage.StorageServer.MountC
onnection::(_get_backup_servers_option) gluster server
u'glusterfs.rxmgmt.databay.de' is not in bricks
 ['microcloud21.rxmgmt.databay.de', 'microcloud24.rxmgmt.databay.de',
'microcloud27.rxmgmt.databay.de'
], possibly mounting duplicate servers
jsonrpc.Executor/3::DEBUG::2016-06-25
19:40:02,520::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bi
n/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/bin/systemd-run --scope
--slice=vdsm-glusterfs /usr/bin
/mount -o
backup-volfile-servers=microcloud21.rxmgmt.databay.de:microcloud24.rxmgmt.databay.de:microcl
oud27.rxmgmt.databay.de glusterfs.rxmgmt.databay.de:/engine
/rhev/data-center/mnt/glusterSD/glusterfs.
rxmgmt.databay.de:_engine (cwd None)
jsonrpc.Executor/3::ERROR::2016-06-25
19:40:02,540::hsm::2473::Storage.HSM::(connectStorageServer) Cou
ld not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2470, in connectStorageServer
conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 237, in connect
six.reraise(t, v, tb)
  File "/usr/share/vdsm/storage/storageServer.py", line 229, in connect
self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
  File "/usr/share/vdsm/storage/mount.py", line 225, in mount
return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
raise MountError(rc, ";".join((out, err)))
MountError: (32, ';Running scope as unit run-13461.scope.\nmount.nfs: an
incorrect mount option was specified\n')

So the mount is attempted as NFS, which doesn't support the option "-o
backup-volfile-servers=...".

As a consequence the host is disabled in the engine. The only way to get it
up is mounting the volume manually to
/rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine
and activating it manually in the management GUI.
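Concretely, the manual mount is the same command vdsm should have run, with
the filesystem type given explicitly (taken from the log above):

  mount -t glusterfs \
      -o backup-volfile-servers=microcloud21.rxmgmt.databay.de:microcloud24.rxmgmt.databay.de:microcloud27.rxmgmt.databay.de \
      glusterfs.rxmgmt.databay.de:/engine \
      /rhev/data-center/mnt/glusterSD/glusterfs.rxmgmt.databay.de:_engine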

Should (and can) I change the hosted_storage entry point globally to e.g.
"microcloud21.rxmgmt.databay.de", or wouldn't it be better for VDSM to
globally use "-t glusterfs" when mounting the gluster volume, regardless
of which DNS name for the gluster service is used?

Ovirt is:

ovirt-release36.noarch 1:3.6.6-1

Bye
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* 

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Registered office/District Court Aachen • HRB: 8437 • VAT ID: DE 210844202
Management Board: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Chairman of the Supervisory Board: Klaus Scholzen (RA)




Re: [ovirt-users] latest CentOS libvirt updates safe?

2016-06-25 Thread Brett I. Holcomb



On 06/25/2016 10:57 AM, Robert Story wrote:

I have oVirt 3.5.x on CentOS 7 hosts. These hosts have updates which
include libvirt:

  libvirt-client   x86_64  1.2.17-13.el7_2.5  updates  4.3 M
  libvirt-daemon   x86_64  1.2.17-13.el7_2.5  updates  585 k
  libvirt-daemon-config-nwfilter   x86_64  1.2.17-13.el7_2.5  updates  122 k
  libvirt-daemon-driver-interface  x86_64  1.2.17-13.el7_2.5  updates  162 k
  libvirt-daemon-driver-networkx86_64  1.2.17-13.el7_2.5  updates  302 k
  libvirt-daemon-driver-nodedevx86_64  1.2.17-13.el7_2.5  updates  161 k
  libvirt-daemon-driver-nwfilter   x86_64  1.2.17-13.el7_2.5  updates  185 k
  libvirt-daemon-driver-qemu   x86_64  1.2.17-13.el7_2.5  updates  571 k
  libvirt-daemon-driver-secret x86_64  1.2.17-13.el7_2.5  updates  155 k
  libvirt-daemon-driver-storagex86_64  1.2.17-13.el7_2.5  updates  328 k
  libvirt-daemon-kvm   x86_64  1.2.17-13.el7_2.5  updates  118 k
  libvirt-lock-sanlock

Is it safe to let yum update these packages while the host has running VMs?
in maintenance mode? or not at all?


Robert





I saw a response where we are supposed to go to maintenance mode and
then VMs will migrate, but I've got nowhere to migrate to as I'm on a
host with hosted Engine and no other host to migrate to.  So do I
shut down all VMs and then go to maintenance mode, then update and
reboot my host?




Re: [ovirt-users] latest CentOS libvirt updates safe?

2016-06-25 Thread Nir Soffer
On Sat, Jun 25, 2016 at 5:57 PM, Robert Story  wrote:
> I have oVirt 3.5.x on CentOS 7 hosts. These hosts have updates which
> include libvirt:
>
>  libvirt-client   x86_64  1.2.17-13.el7_2.5  updates  4.3 M
>  libvirt-daemon   x86_64  1.2.17-13.el7_2.5  updates  585 k
>  libvirt-daemon-config-nwfilter   x86_64  1.2.17-13.el7_2.5  updates  122 k
>  libvirt-daemon-driver-interface  x86_64  1.2.17-13.el7_2.5  updates  162 k
>  libvirt-daemon-driver-network    x86_64  1.2.17-13.el7_2.5  updates  302 k
>  libvirt-daemon-driver-nodedev    x86_64  1.2.17-13.el7_2.5  updates  161 k
>  libvirt-daemon-driver-nwfilter   x86_64  1.2.17-13.el7_2.5  updates  185 k
>  libvirt-daemon-driver-qemu   x86_64  1.2.17-13.el7_2.5  updates  571 k
>  libvirt-daemon-driver-secret x86_64  1.2.17-13.el7_2.5  updates  155 k
>  libvirt-daemon-driver-storage    x86_64  1.2.17-13.el7_2.5  updates  328 k
>  libvirt-daemon-kvm   x86_64  1.2.17-13.el7_2.5  updates  118 k
>  libvirt-lock-sanlock
>
> Is it safe to let yum update these packages while the host has running VMs?
> in maintenance mode? or not at all?

The safest way is to put the host into maintenance before updating.
All VMs will be migrated to other hosts before the host is deactivated.

Some packages like sanlock cannot be updated while the host is connected
to storage; trying to update them may kill sanlock, which will cause the
watchdog to reboot your host, killing your VMs.
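(If you want to script the maintenance step, it can also be driven through
the REST API; a sketch against the v3 API of that era, with placeholders
for your engine URL and host id:)

  # move the host to maintenance before updating, activate it afterwards
  curl -k -u admin@internal:password -H 'Content-Type: application/xml' \
      -X POST -d '<action/>' https://engine.example.com/api/hosts/<host-id>/deactivate
  curl -k -u admin@internal:password -H 'Content-Type: application/xml' \
      -X POST -d '<action/>' https://engine.example.com/api/hosts/<host-id>/activate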

Nir


[ovirt-users] latest CentOS libvirt updates safe?

2016-06-25 Thread Robert Story
I have oVirt 3.5.x on CentOS 7 hosts. These hosts have updates which
include libvirt:

 libvirt-client   x86_64  1.2.17-13.el7_2.5  updates  4.3 M
 libvirt-daemon   x86_64  1.2.17-13.el7_2.5  updates  585 k
 libvirt-daemon-config-nwfilter   x86_64  1.2.17-13.el7_2.5  updates  122 k
 libvirt-daemon-driver-interface  x86_64  1.2.17-13.el7_2.5  updates  162 k
 libvirt-daemon-driver-networkx86_64  1.2.17-13.el7_2.5  updates  302 k
 libvirt-daemon-driver-nodedevx86_64  1.2.17-13.el7_2.5  updates  161 k
 libvirt-daemon-driver-nwfilter   x86_64  1.2.17-13.el7_2.5  updates  185 k
 libvirt-daemon-driver-qemu   x86_64  1.2.17-13.el7_2.5  updates  571 k
 libvirt-daemon-driver-secret x86_64  1.2.17-13.el7_2.5  updates  155 k
 libvirt-daemon-driver-storagex86_64  1.2.17-13.el7_2.5  updates  328 k
 libvirt-daemon-kvm   x86_64  1.2.17-13.el7_2.5  updates  118 k
 libvirt-lock-sanlock

Is it safe to let yum update these packages while the host has running VMs?
in maintenance mode? or not at all?


Robert

-- 
Senior Software Engineer @ Parsons




[ovirt-users] Create a Virtual Disk failed

2016-06-25 Thread Dewey Du
I use glusterfs as storage. When I tried to create a virtual disk for a VM,
I got the following error in the UI logs.

ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
(default task-48) [] Permutation name: B003B5EDCB6FC4308644D5D001F65B4C
ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
(default task-48) [] Uncaught exception: :
com.google.gwt.core.client.JavaScriptException: (TypeError)
 __gwt$exception: : a is undefined
at Unknown._kj(Unknown Source)
at Unknown.JUq(Unknown Source)
at Unknown.Fbr(Unknown Source)
at Unknown.Hno(Unknown Source)
at Unknown.t0n(Unknown Source)
at Unknown.w0n(Unknown Source)
at Unknown.q3n(Unknown Source)
at Unknown.t3n(Unknown Source)
at Unknown.S2n(Unknown Source)
at Unknown.V2n(Unknown Source)
at Unknown.bMe(Unknown Source)
at Unknown.V7(Unknown Source)
at Unknown.k8(Unknown Source)
at Unknown.czf/c.onreadystatechange<(Unknown Source)
at Unknown.Ux(Unknown Source)
at Unknown.Yx(Unknown Source)
at Unknown.Xx/<(Unknown Source)
at Unknown.anonymous(Unknown Source)


Re: [ovirt-users] Unofficial projects

2016-06-25 Thread James Michels
We have lots of non-technical users, yet we want them to use some machines,
mostly "automatic" VM pools. So instead of making them open the User Portal,
we're working on a very simple desktop app that connects to oVirt's API with
their credentials, gets a list of all VMs they have permissions on, and
allows them only two operations: manage power (power off, power on) and
connect to the machine by getting a SPICE ticket and handing it to
virt-viewer. The main advantage is that this app starts when their session
opens, so it should be more comfortable for them.
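Under the hood it is just a handful of REST calls; roughly, against the v3
API, with placeholder credentials and ids:

  # list the VMs the user can see, start one, and fetch a SPICE ticket for virt-viewer
  curl -k -u user@domain:password https://engine.example.com/api/vms
  curl -k -u user@domain:password -H 'Content-Type: application/xml' \
      -X POST -d '<action/>' https://engine.example.com/api/vms/<vm-id>/start
  curl -k -u user@domain:password -H 'Content-Type: application/xml' \
      -X POST -d '<action/>' https://engine.example.com/api/vms/<vm-id>/ticket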

Once finished we will publish the source code, so if you consider this
useful I can let you know when it's available.

Regards

2016-06-25 8:37 GMT+01:00 Michal Skrivanek :

>
> > On 25 Jun 2016, at 08:56, James Michels 
> wrote:
> >
> > Greetings,
> >
> > Is there some official oVirt section where you list unofficial
> oVirt-related projects? If so, what is the process for a new unofficial
> project to get listed there?
>
> What do you have in mind? Anything cool?
>
> >
> > If not, I think it might be a good idea to provide minimal support by
> listing them.
>
> If you mean to include something as part of oVirt then the following pages
> should explain it:
> https://www.ovirt.org/develop/projects/adding-a-new-project/
> https://www.ovirt.org/develop/projects/incubating-an-subproject/
>
> Thanks,
> michal
>
> >
> > Thank you.
> >
> > James
>
>


Re: [ovirt-users] Setup new enviroment

2016-06-25 Thread Andy Michielsen
Hello,

This is the setup I was thinking about doing. But Fernando's suggestion also
makes sense.
Can I maybe install the host on 3 servers for high availability and run local
storage for the VMs?

But I only have 1 node available at the moment. As soon as I install the first
one I can migrate the VMs already running on the other hosts to this new oVirt
host.

How can I install 1 node with a hosted engine on glusterfs or ??? and use local
storage for my VMs? Later, as the other hosts become available, I will add them
accordingly.

What would be a good network setup, as I only have 2 NICs per host? Or should I
invest in additional NICs?

Kind regards

Sent from my iPad

> On 21 Jun 2016 at 08:10, Marcin Michta  wrote:
> 
> Hi Andy,
> 
> You can manage them from one web interface. Just deploy one hosted-engine VM
> on one of them. Add the other servers to the default cluster and then switch
> them to use local storage (Configure Local Storage). After that each node
> will move to a separate cluster, and you can manage all of them from one web
> interface.
> I hope it will be helpful
> 
> 
>> On 21.06.2016 07:32, Andy Michielsen wrote:
>> Hello all,
>> 
>> I was just wondering what your opinions would be on setting up a new oVirt
>> environment.
>>
>> I have 4 old servers, 3 with 64 Gigs of RAM and 2 TB of disk space and
>> one with 32 Gigs of RAM and 1.2 Gb. Each has 2 hex-core CPUs. Each server
>> has at least 2 NICs.
>>
>> I would like to use each server in a separate cluster with its own local
>> storage, as this would all only be used as a test environment, but still
>> manage them from one interface, deploy new VMs from templates, etc.
>>
>> Can I still use the all-in-one installation for engine and node? Or can I
>> install the engine on a separate host, physical or virtual?
>>
>> What would be a good way to use the network?
>>
>> Any advice would be greatly appreciated. Thanks in advance.
> 
> -- 
> Marcin Michta


Re: [ovirt-users] Unofficial projects

2016-06-25 Thread Michal Skrivanek

> On 25 Jun 2016, at 08:56, James Michels  
> wrote:
> 
> Greetings,
> 
> Is there some official oVirt section where you list unofficial oVirt-related 
> projects? If so, what is the process for a new unofficial project to get 
> listed there?

What do you have in mind? Anything cool?

> 
> If not, I think it might be a good idea to provide minimal support by
> listing them.

If you mean to include something as part of oVirt then the following pages
should explain it:
https://www.ovirt.org/develop/projects/adding-a-new-project/
https://www.ovirt.org/develop/projects/incubating-an-subproject/

Thanks,
michal

> 
> Thank you.
> 
> James



Re: [ovirt-users] InClusterUpgrade Scheduling Policy

2016-06-25 Thread Michal Skrivanek

> On 24 Jun 2016, at 18:34, Scott  wrote:
> 
> Actually, I figured out a workaround. I changed the HostedEngine VM's
> vds_group_id in the database to the vds_group_id of my temporary cluster
> (found from the vds_groups table). This worked and I could put my main
> cluster in upgrade mode. Now to continue the process...
> 
> 
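(For the record, the database tweak Scott describes would look roughly like
this; the table name is my assumption from the 3.5-era schema, so verify it
on your version and take an engine DB backup first:)

  # hypothetical sketch: vm_static is assumed, vds_group_id is from the post
  sudo -u postgres psql engine -c \
      "UPDATE vm_static SET vds_group_id = '<temp-cluster-uuid>' WHERE vm_name = 'HostedEngine';"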

Note you don't really need an upgrade mode/policy if you already create a
temporary cluster.
Putting aside the HE problem, if you need your other VMs running then you can
also just cross-migrate them to the new el7 cluster (in 3.5 mode). If you have
many hosts you can just move them one by one as you upgrade/reinstall them, and
keep migrating VMs from the old cluster to the new depending on your capacity.
At the end you can just remove the old empty cluster and rename the new one
back to the original name. :)
Shutting down unneeded VMs will save you time.

Note that once you have a 3.5 cluster with all el7 hosts, in order to upgrade
the cluster level to 3.6 you will anyway need to shut down the VMs, for
their configuration changes to take effect.

Thanks,
michal
> Thanks,
> Scott
> 
> 
> On Fri, Jun 24, 2016, 9:29 AM Scott  > wrote:
> Hi Roman,
> 
> I made it through step 6 however it does look like the problem you mentioned 
> has occurred.  My engine VM is running on my host in the temporary cluster.  
> The stats under Hosts show this.  But in the Virtual Machines tab this VM 
> still thinks its on my main cluster and I can't change that setting.  Did you 
> have a suggestion on how to work around this?  Thankfully only one of my RHEV 
> instances has this upgrade path.
> 
> Thanks for your help,
> Scott
> 
> On Fri, Jun 24, 2016 at 2:15 AM Roman Mohr  > wrote:
> On Thu, Jun 23, 2016 at 10:26 PM, Scott  > wrote:
> > Hi Roman,
> >
> > Thanks for the detailed steps.  I follow the idea you have outlined and I
> > think its easier than what I thought of (moving my self hosted engine back
> > to physical hardware, upgrading and moving it back to self hosted).  I will
> > give it a spin in my build RHEV cluster tomorrow and let you know how I get
> > on.
> >
> 
> Thanks.
> 
> The bug is here: https://bugzilla.redhat.com/show_bug.cgi?id=1349745
> 
> I thought about the solution and I see one possible problem with this
> approach. It might be that the engine still thinks that the VM is on
> the old cluster.
> Let me know if this happens, we can work around that too.
> 
> Roman
> 
> > Thanks again,
> > Scott
> >
> > On Thu, Jun 23, 2016 at 2:41 PM Roman Mohr  > > wrote:
> >>
> >> Hi Scott,
> >>
> >> On Thu, Jun 23, 2016 at 8:54 PM, Scott  >> > wrote:
> >> > Hello list,
> >> >
> >> > I'm trying to upgrade a self-hosted engine RHEV environment running
> >> > 3.5/el6
> >> > to 3.6/el7.  I'm following the process outlined in these two documents:
> >> >
> >> >
> >> > https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Self-Hosted_Engine_Guide/Upgrading_the_Self-Hosted_Engine_from_6_to_7.html
> >> > https://access.redhat.com/solutions/2300331
> >> >
> >> > The problem I'm having is I don't seem to be able to apply the
> >> > "InClusterUpgrade" policy (procedure 5.5, step 4).  I get the following
> >> > error:
> >> >
> >> > Can not start cluster upgrade mode, see below for details:
> >> > VM HostedEngine with id 5ca9cb38-82e5-4eea-8ff6-e2bc33598211 is
> >> > configured
> >> > to be not migratable.
> >> >
> >> That is correct, only the he-agents on each host decide where the
> >> hosted engine VM can start
> >>
> >> > But the HostedEngine VM is not one I can edit due to being mid-upgrade.
> >> > And
> >> > even if I could, the setting its complaining about can't be managed by
> >> > the
> >> > engine (I tried in another RHEV instance).
> >> >
> >> Also true, it is very limited what you can currently do with the
> >> hosted engine VM.
> >>
> >>
> >> > Is this a bug?  What am I missing to be able to move on?  As it seems
> >> > now,
> >> > the InClusterUpgrade scheduling policy is useless and can't actually be
> >> > used.
> >>
> >> That is indeed something the InClusterUpgrade does not take into
> >> consideration. I will file a bug report.
> >>
> >>  But what you can do is the following:
> >>
> >> You can create a temporary cluster, move one host and the hosted
> >> engine VM there, upgrade all hosts and then start the hosted-engine VM
> >> in the original cluster again.
> >>
> >> The detailed steps are:
> >>
> >> 1) Enter the global maintenance mode
> >> 2) Create a temporary cluster
> >> 3) Put one of the hosted engine hosts which does not cur