Re: [ovirt-users] How to automate the ovirt host deployment?

2016-06-03 Thread Barak Korren
>
> You should be able to see the hosts from the oVirt interface.
> I was not able to add an auto-discovered host to oVirt; it always throws
> an exception: Failed to add Host (User: admin@internal). Probably
> it is a bug.

Did you add the oVirt provision plugin to Foreman? You probably need it for
this to work.
If you did then please submit a bug to oVirt bugzilla.
>
Thanks,
Barak
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sanlock add Lockspace Errors

2016-06-03 Thread InterNetX - Juergen Gotteswinter
Hello David,

thanks for your explanation of those messages. Is there any possibility
to get rid of this? I already figured out that it might be a corruption
of the ids file, but I didn't find anything about re-creating it or other
solutions to fix this.

IMHO this occurred after an outage where several hosts and the iSCSI
SAN were fenced and/or rebooted.
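
In case it helps the discussion, here is a minimal sketch of how the ids file
could in principle be re-initialised with sanlock's own tooling. It assumes
the storage domain UUID and the path to dom_md/ids are known (both are
placeholders below), that no host currently holds the lockspace, and that you
accept the risk; nothing in this thread confirms it as a supported fix:

  # on every host: stop whatever joins the lockspace (vdsm, ha services)
  sanlock client status        # confirm the lockspace is no longer listed
  # on one host only, re-initialise the lockspace records in the ids volume:
  sanlock direct init -s <SD_UUID>:0:/path/to/<SD_UUID>/dom_md/ids:0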

Thanks,

Juergen


On 6/2/2016 at 6:03 PM, David Teigland wrote:
> On Thu, Jun 02, 2016 at 06:47:37PM +0300, Nir Soffer wrote:
>>> This is a mess that's been caused by improper use of storage, and various
>>> sanity checks in sanlock have all reported errors for "impossible"
>>> conditions indicating that something catastrophic has been done to the
>>> storage it's using.  Some fundamental rules are not being followed.
>>
>> Thanks David.
>>
>> Do you need more output from sanlock to understand this issue?
> 
> I can think of nothing more to learn from sanlock.  I'd suggest tighter,
> higher level checking or control of storage.  Low level sanity checks
> detecting lease corruption are not a convenient place to work from.
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Windows Guest Tools & Live Migration issues.

2016-06-03 Thread Michal Skrivanek

> On 03 Jun 2016, at 07:14, Anantha Raghava  
> wrote:
> 
> Hi,
> 
> We have just installed oVirt 3.6 on 3 hosts with storage volumes on a Fibre 
> Channel storage box. Everything is working fine except the following.
> 
> 1. We have created 15 virtual machines, all with Windows Server 2012 R2 as the OS. 
> VM properties do not report the operating system, nor do they show the IP and 
> FQDN in the Admin Portal. There is always an exclamation mark that reports 
> the OS being different from the template and timezone issues. We have 
> changed the timezone to Indian Standard Time in both VM and host; the same result 
> continues. We have installed the Windows Guest Tools; the same result continues. 
> A screen shot is below.

you don't seem to be running the guest agent. Make sure the service is started and 
working; then you'll see IPs and more detailed info about each guest, and the 
exclamation marks should go away

> 
> 
> 
> 2. When we manually tried to migrate the VMs from one host to another one, 
> the migration gets initiated, but will eventually fail.
> 
> Any specific setting missing here or is it a bug.

are they big or busy VMs? What does it fail on? There should be a meaningful 
message even if it's just a timeout

> 
> Note: 
> 
> All hosts are installed with a CentOS 7.2 minimal installation; the oVirt node is 
> installed and activated.
> We do not have a DNS in our environment. We have to do with IPs. 

as long as the engine works with its FQDN it's OK

> We are yet to apply the 3.6.6 patch on Engine and Nodes.

that may help with some of the issues above, so please try to do that 

Thanks,
michal

> We are running a stand alone engine, not a Hosted Engine.
> -- 
> Thanks & Regards,
> 
> Anantha Raghava
> eXza Technology Consulting & Services
> 
> ___
> Users mailing list
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users 
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ipxe and nslookup

2016-06-03 Thread Michal Skrivanek

> On 01 Jun 2016, at 15:33, Cam Mac  wrote:
> 
> Hi,
> 
> I am using PXE boot with oVirt, which I believe uses iPXE as its PXE 
> implementation. In our kickstart menu, we need to do a hostname lookup to 
> choose the appropriate local server, as there are different boot servers in 
> different global locations. For this, we are currently relying upon the 
> 'nslookup' command, which is available in the Xen and VMware PXE command sets, 
> but the oVirt/KVM one does not have this command enabled. I've tried a 
> workaround using iPXE commands, e.g.:
> 
>  
>  set wds 
>  set net0/next-server ${wds}
> 
> However, this does not work. Is there a possibility to get the nslookup 
> function enabled in ipxe (apparently it is a compile time option). I could 
> probably recompile the pxe rom, and substitute it on my install, but then 
> when I update it will get clobbered.

we're updating to a new ipxe [1] soon, so you can give it a try to see if it
works (I don't see the option added, though) or if there is some other way to
do that.
Otherwise I guess you would have to recompile, but if you bump up the version
yourself and put it into your own repo it should always take precedence.
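
As a rough illustration of the recompile route (a sketch only; the config file
location and option name reflect my reading of the iPXE sources and may differ
in the build oVirt ships):

  git clone https://github.com/ipxe/ipxe.git && cd ipxe/src
  # uncomment the nslookup build option in config/general.h:
  #   #define NSLOOKUP_CMD          /* DNS resolving command */
  make bin/undionly.kpxe            # or whichever ROM/image your setup boots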

Thanks,
michal

> 
> Or is there another way of getting this functionality?
> 
> Thanks,
> 
> Cam
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

[1] https://cbs.centos.org/koji/buildinfo?buildID=10754
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to automate the ovirt host deployment?

2016-06-03 Thread Arman Khalatyan
Yes, for sure, as you can see in the Foreman installer options: 28. [✓] Configure
foreman_compute_ovirt
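
For reference, a non-interactive way to switch that on (a sketch; the flag
name is inferred from the plugin name above, so check
foreman-installer --help on your version):

  foreman-installer --enable-foreman-compute-ovirt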



***

 Dr. Arman Khalatyan  eScience -SuperComputing
 Leibniz-Institut für Astrophysik Potsdam (AIP)
 An der Sternwarte 16, 14482 Potsdam, Germany

***

On Fri, Jun 3, 2016 at 9:33 AM, Barak Korren  wrote:

> >
> > You should be able to see the hosts from the oVirt interface.
> > I was not able to add an auto-discovered host to oVirt; it always throws
> an exception: Failed to add Host (User: admin@internal).
> Probably it is a bug.
>
> Did you add the oVirt provision plugin to Foreman? You probably need it
> for this to work.
> If you did then please submit a bug to oVirt bugzilla.
> >
> Thanks,
> Barak
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Moving Hosted Engine NFS storage domain

2016-06-03 Thread Simone Tiraboschi
On Thu, Jun 2, 2016 at 5:33 PM, Beard Lionel (BOSTON-STORAGE)
 wrote:
> Hi,
>
>
>
> I have tried these steps :
>
> -  Stop Hosted VM
>
> -  # vdsClient -s localhost forcedDetachStorageDomain
> 
>
> -  Domain is now detached
>
> -  # hosted-engine --clean-metadata
>
> -  # hosted-engine --vm-start
>
>
>
> But, hosted domain path is still the old one.
>
> If I run :
>
> # vdsClient -s localhost getStorageDomainsList 
>
> The path is correct !!
>
>
>
> So I don’t know where the wrong path is stored.

If the engine imported the hosted-engine storage domain in the past,
that storage domain is in the engine DB with the wrong path.
If you bring down everything and reboot your hosts, ovirt-ha-agent
will mount the hosted-engine storage domain with the new path from
hosted-engine.conf.
At this point ovirt-ha-agent can start the engine VM. When the engine
comes up it will try to mount all the storage domains in the
datacenter, as it does for regular hosts. This means that it will also
try to remount the hosted-engine storage domain (because the domain uuid
is the same!) from the old path, since it's still configured like that
in the engine DB.

> I think the only way is to reinstall Hosted VM from scratch.

You can try to manually force a new path in the DB.
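
A rough sketch of what that could look like on the engine host (the table and
column names match the UPDATE that appears later in this thread; the export
path and connection id are placeholders, and backing up the engine DB first is
strongly advised):

  su - postgres -c 'psql engine'
  engine=# SELECT id, connection FROM storage_server_connections;
  engine=# UPDATE storage_server_connections
           SET connection = '<new NFS export path>'
           WHERE id = '<connection id>';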

> @ Staniforth Paul, your procedure is not working :(
>
>
>
> Regards,
>
> Lionel BEARD
>
>
>
> From: Beard Lionel (BOSTON-STORAGE)
> Sent: Wednesday, 1 June 2016 22:26
> To: 'Roy Golan' 
> Cc: Roman Mohr ; users 
> Subject: RE: [ovirt-users] Moving Hosted Engine NFS storage domain
>
>
>
> Hi,
>
>
>
> Path is neither shared nor mounted anymore on the previous NFS server, but
> the storage domain is still up and cannot be removed…
>
>
>
> Is there a possibility to remove it from command line ?
>
>
>
> Regards,
>
> Lionel BEARD
>
>
>
> From: Roy Golan [mailto:rgo...@redhat.com]
> Sent: Wednesday, 1 June 2016 20:57
> To: Beard Lionel (BOSTON-STORAGE) 
> Cc: Roman Mohr ; users 
>
>
> Subject: Re: [ovirt-users] Moving Hosted Engine NFS storage domain
>
>
>
>
> On Jun 1, 2016 7:19 PM, "Beard Lionel (BOSTON-STORAGE)" 
> wrote:
>>
>> Hi,
>>
>> I am not able to do that, "Remove" button is greyed.
>> And it is not possible to place it into maintenance mode because hosted VM
>> is running on it...
>>
>> Any clue?
>>
>
> You must create a situation where vdsm would fail to monitor that domain,
> i.e. stop sharing that path or block it, and then the status will allow you to
> force remove
>
>> Thanks.
>>
>> Regards,
>> Lionel BEARD
>>
>> > -Original Message-
>> > From: Roman Mohr [mailto:rm...@redhat.com]
>> > Sent: Wednesday, 1 June 2016 14:43
>> > To: Beard Lionel (BOSTON-STORAGE) 
>> > Cc: Staniforth, Paul ; users@ovirt.org
>> > Subject: Re: [ovirt-users] Moving Hosted Engine NFS storage domain
>> >
>> > On Wed, Jun 1, 2016 at 2:40 PM, Beard Lionel (BOSTON-STORAGE)
>> >  wrote:
>> > > Hi,
>> > >
>> > >
>> > >
>> > > I have followed these steps :
>> > >
>> > >
>> > >
>> > > -  Stop supervdsmd + vdsmd + ovirt-ha-agent + ovirt-ha-broker
>> > >
>> > > -  Modify config file
>> > >
>> > > -  Copy files (cp better handles sparse files than rsync)
>> > >
>> > > -  Umount old hosted-engine path
>> > >
>> > > -  Restart services
>> > >
>> > > -  Hosted VM doesn’t start => hosted-engine --clean-metadata. I
>> > > get
>> > > an error at the end, but now I am able to start Hosted VM :
>> > >
>> > > o
>> > ERROR:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Metad
>> > ata
>> > > for current host missing.
>> > >
>> > >
>> > >
>> > > I can connect to oVirt interface, everything seems to be working fine,
>> > > but the Hosted storage domain has an incorrect path, it is still
>> > > pointing to old one… I think this information is not correctly
>> > > reported by web interface, because this path doesn’t exist anymore,
>> > > and
>> > hosted VM is working !
>> > >
>> > > Does anyone knows how to fix that ?
>> >
>> > You have to do a "force remove" in the UI (without clicking the destroy
>> > checkbox) of that storage. Then it should be reimported automatically.
>> >
>> > >
>> > >
>> > >
>> > > Regards,
>> > >
>> > > Lionel BEARD
>> > >
>> > >
>> > >
>> > > De : Beard Lionel (BOSTON-STORAGE)
>> > > Envoyé : mercredi 1 juin 2016 10:37
>> > > À : 'Staniforth, Paul' ;
>> > > users@ovirt.org Objet : RE: Moving Hosted Engine NFS storage domain
>> > >
>> > >
>> > >
>> > > Hi,
>> > >
>> > >
>> > >
>> > > I’m trying to move Hosted storage from one NFS server to another.
>> > >
>> > > As this is not a production environment, so I gave a try with no
>> > > success, with a plan similar to yours.
>> > >
>> > >
>> > >
>> > > But I don’t like to stay on a failure, so I will give a second chance
>> > > by following your plan :)
>> > >
>> > >
>> > >
>> > > Regards,
>> > >
>> > > Lionel BEARD
>> > >
>> > >
>> > >
>> > > De : users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] De la
>> > > part de Staniforth, Paul E

Re: [ovirt-users] Sanlock add Lockspace Errors

2016-06-03 Thread InterNetX - Juergen Gotteswinter
What if we move all VMs off the LUN which causes this error, drop the LUN
and recreate it? Will we "migrate" the error with the VMs to a different
LUN, or could this be a fix?
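
Before dropping anything, it may be worth looking at what sanlock actually
sees on that LUN; a read-only sketch (the LV path is an assumption based on
oVirt's usual block-domain layout, with <SD_UUID> as a placeholder):

  # dump the delta-lease records stored in the domain's ids volume
  sanlock direct dump /dev/<SD_UUID>/ids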

On 6/3/2016 at 10:08 AM, InterNetX - Juergen Gotteswinter wrote:
> Hello David,
> 
> thanks for your explanation of those messages. Is there any possibility
> to get rid of this? I already figured out that it might be a corruption
> of the ids file, but I didn't find anything about re-creating it or other
> solutions to fix this.
> 
> IMHO this occurred after an outage where several hosts and the iSCSI
> SAN were fenced and/or rebooted.
> 
> Thanks,
> 
> Juergen
> 
> 
> On 6/2/2016 at 6:03 PM, David Teigland wrote:
>> On Thu, Jun 02, 2016 at 06:47:37PM +0300, Nir Soffer wrote:
 This is a mess that's been caused by improper use of storage, and various
 sanity checks in sanlock have all reported errors for "impossible"
 conditions indicating that something catastrophic has been done to the
 storage it's using.  Some fundamental rules are not being followed.
>>>
>>> Thanks David.
>>>
>>> Do you need more output from sanlock to understand this issue?
>>
>> I can think of nothing more to learn from sanlock.  I'd suggest tighter,
>> higher level checking or control of storage.  Low level sanity checks
>> detecting lease corruption are not a convenient place to work from.
>>
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] qemu cgroup_controllers

2016-06-03 Thread Дмитрий Глушенок
Hello!

Is it possible to tell libvirt to add specific devices to the qemu cgroup? By 
somehow enumerating the devices in the XML using a hook, for example.
I'm passing scsi-generic disks (/dev/sgX) to a VM using the qemucmdline hook, and it 
doesn't work until I remove "devices" from cgroup_controllers in qemu.conf.

--
Dmitry Glushenok
Jet Infosystems
http://www.jet.msk.su
+7-495-411-7601 (ext. 1237)

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 4.0.0 First Release candidate is now available for testing

2016-06-03 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the First
Release Candidate of oVirt 4.0.0 for testing, as of June 3rd, 2016

This is pre-release software. Please take a look at our community page[1]
to know how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.

This release is available now for:
* Fedora 23
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 23
* oVirt Next Generation Node 4.0

See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
* A new oVirt Live ISO is already available [4].
* A new oVirt Next Generation Node is already available [4]
* A new oVirt Engine Appliance will be available soon
* A new oVirt Guest Tools ISO is already available [4]
* Mirrors[5] might need up to one day to synchronize.

Additional Resources:
* Read more about the oVirt 4.0.0 release candidate highlights:
http://www.ovirt.org/release/4.0.0/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.0.0/
[4] http://resources.ovirt.org/pub/ovirt-4.0-pre/iso/
[5] http://www.ovirt.org/Repository_mirrors#Current_mirrors


--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] qemu cgroup_controllers

2016-06-03 Thread Martin Polednik

On 03/06/16 11:48 +0300, Дмитрий Глушенок wrote:

Hello!

Is it possible to tell libvirt to add specific devices to qemu cgroup? By 
somehow enumerating the devices in XML using a hook for example.
I'm passing scsi-generic disks (/dev/sgX) to VM using qemucmdline hook and it doesn't 
work until I remove "devices" from cgroup_controllers in qemu.conf.


One way to achieve this is creating a hook to generate the scsi device
XML instead of modifying qemu cmdline directly. Libvirt assumes
ownership of all devices created in the XML and therefore adds them to
the machine cgroup.

Example of the XML taken from [1]:

   
   
   
   
   
   
   
   


There is a slight issue with this approach, outlined in [2].

If you want to keep the qemu approach, I think creating a custom
partition and moving devices there would be the cleanest approach. In
this case, [3] could help but I'm not entirely sure if that would
solve the issue.

[1] https://libvirt.org/formatdomain.html
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1325485
[3] https://libvirt.org/cgroups.html

--
Dmitry Glushenok
Jet Infosystems
http://www.jet.msk.su
+7-495-411-7601 (ext. 1237)

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Upgrade version problem

2016-06-03 Thread nicolas

Hi,

We're trying to upgrade the oVirt engine from ver. 3.6.5.3 to 3.6.6.2. When 
calling 'yum update', the following error occurs:


Error: Package: ovirt-engine-tools-3.6.5.3-1.el7.centos.noarch 
(@ovirt-3.6)

   Requires: ovirt-engine-tools-backup = 3.6.5.3-1.el7.centos
   Removing: 
ovirt-engine-tools-backup-3.6.5.3-1.el7.centos.noarch (@ovirt-3.6)

   ovirt-engine-tools-backup = 3.6.5.3-1.el7.centos
   Updated By: 
ovirt-engine-tools-backup-3.6.6.2-1.el7.centos.noarch (ovirt-3.6)

   ovirt-engine-tools-backup = 3.6.6.2-1.el7.centos

It seems the ovirt-engine-tools package is not marked for update although there 
is a new version, 3.6.6.2, in the repos (for example in [1]). I ran 'yum 
clean all' prior to the update to make sure this is not a cache issue, 
and it did not solve the problem.


When trying to update ovirt-engine-tools separately it says ver. 3.6.5.3 
is the latest version:


  Package ovirt-engine-tools-3.6.5.3-1.el7.centos.noarch already 
installed and latest version


Could this be a repo metadata problem?
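
One quick way to see what the repo metadata actually offers on the engine
host (just a check, not a fix):

  yum clean metadata
  yum --showduplicates list ovirt-engine-tools ovirt-engine-tools-backup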

Thanks.

Nicolás

 [1]: 
http://ftp.nluug.nl/os/Linux/virtual/ovirt/ovirt-3.6/rpm/el7/noarch/

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade version problem

2016-06-03 Thread Ollie Armstrong
On 3 June 2016 at 11:48,   wrote:
> Seems ovirt-engine-tools package is not marked for update although there is
> a new version in repos (for example in [1]), 3.6.6.2. I run 'yum clean all'
> previously to update to make sure this is not a cache issue and did not
> solve the problem.

I don't know if you have a solution, but I hit this on the last
upgrade. I just updated ovirt-engine-setup and ran engine-setup which
pulled in all the required packages properly.  This is the recommended
procedure from the upgrade notes.
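
For the archive, that procedure boils down to roughly this (as per the
upgrade notes):

  yum update ovirt-engine-setup\*
  engine-setup
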
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade version problem

2016-06-03 Thread Dobó László
I got this also; as a quick workaround you can use rpm to remove the old 
version.

rpm -e --nodeps ovirt-engine-tools-backup-3.6.5.3-1.el7.centos

regards
enax

On 06/03/2016 12:48 PM, nico...@devels.es wrote:

Hi,

We're trying to upgrade oVirt engine from ver. 3.6.5.3 to 3.6.6.2. 
When calling 'yum update', following error occurs:


Error: Package: ovirt-engine-tools-3.6.5.3-1.el7.centos.noarch 
(@ovirt-3.6)

   Requires: ovirt-engine-tools-backup = 3.6.5.3-1.el7.centos
   Removing: 
ovirt-engine-tools-backup-3.6.5.3-1.el7.centos.noarch (@ovirt-3.6)

   ovirt-engine-tools-backup = 3.6.5.3-1.el7.centos
   Updated By: 
ovirt-engine-tools-backup-3.6.6.2-1.el7.centos.noarch (ovirt-3.6)

   ovirt-engine-tools-backup = 3.6.6.2-1.el7.centos

Seems ovirt-engine-tools package is not marked for update although 
there is a new version in repos (for example in [1]), 3.6.6.2. I run 
'yum clean all' previously to update to make sure this is not a cache 
issue and did not solve the problem.


When trying to update ovirt-engine-tools separately it says ver. 
3.6.5.3 is the latest version:


  Package ovirt-engine-tools-3.6.5.3-1.el7.centos.noarch already 
installed and latest version


Could this be a repo metadata problem?

Thanks.

Nicolás

 [1]: 
http://ftp.nluug.nl/os/Linux/virtual/ovirt/ovirt-3.6/rpm/el7/noarch/

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Windows Guest Tools & Live Migration issues.

2016-06-03 Thread Anantha Raghava

Hello Michal,

Thanks for quick feed back.

Windows Guest Agents - Where do I download them from? I am not getting the 
proper links. Or are they a part of the virt drivers that we use during 
installation?


Live migration failure - These are pretty busy VMs, if not big. The event 
does not show any specific message, just that the migration failed. Is the 
reason logged somewhere? Please share the log location so that I can 
check & share the log for a meaningful message and proper screen shots.


--

Thanks & Regards,


Anantha Raghava


On 03 Jun 2016, at 07:14, Anantha Raghava 
> wrote:


Hi,

We have just installed oVirt 3.6 on 3 hosts with storage volumes on a 
Fibre Channel storage box. Everything is working fine except the 
following.


1. We have created 15 virtual machines, all with Windows Server 
2012 R2 as the OS. VM properties do not report the operating system, nor do 
they show the IP and FQDN in the Admin Portal. There is always an 
exclamation mark that reports the OS being different from the 
template and timezone issues. We have changed the timezone to Indian 
Standard Time in both VM and host; the same result continues. We have 
installed the Windows Guest Tools; the same result continues. A screen shot is 
below.


you don't seem to be running the guest agent. Make sure the service is 
started and working; then you'll see IPs and more detailed info about 
each guest, and the exclamation marks should go away






2. When we manually tried to migrate the VMs from one host to another 
one, the migration gets initiated, but will eventually fail.


Any specific setting missing here or is it a bug.


are they big or busy VMs? What does it fail on? There should be a 
meaningful message even if it's just a timeout




Note:

All hosts are installed with a CentOS 7.2 minimal installation; the oVirt 
node is installed and activated.

We do not have a DNS in our environment. We have to do with IPs.


as long as the engine works with its FQDN it's OK


We are yet to apply the 3.6.6 patch on Engine and Nodes.


that may help with some of the issues above, so please try to do that

Thanks,
michal


We are running a stand alone engine, not a Hosted Engine.
--

Thanks & Regards,


Anantha Raghava

eXza Technology Consulting & Services


___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade version problem

2016-06-03 Thread nicolas

On 2016-06-03 11:53, Ollie Armstrong wrote:

On 3 June 2016 at 11:48,   wrote:
Seems ovirt-engine-tools package is not marked for update although 
there is
a new version in repos (for example in [1]), 3.6.6.2. I run 'yum clean 
all'
previously to update to make sure this is not a cache issue and did 
not

solve the problem.


I don't know if you have a solution, but I hit this on the last
upgrade. I just updated ovirt-engine-setup and ran engine-setup which
pulled in all the required packages properly.  This is the recommended
procedure from the upgrade notes.


That would work as a quick fix, but anyhow this should be fixed as most 
systems run complete system upgrades periodically.


Regards,

Nicolás
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] qemu cgroup_controllers

2016-06-03 Thread Дмитрий Глушенок
Thank you Martin!

Actually I tried the workaround hook provided in [2], but then VDSM (oVirt 
3.6.6) tries to interpret the hostdev in the XML as a PCI device, which leads to:

::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 703, in _startUnderlyingVm
self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1949, in _run
self._domDependentInit()
  File "/usr/share/vdsm/virt/vm.py", line 1797, in _domDependentInit
self._getUnderlyingVmDevicesInfo()
  File "/usr/share/vdsm/virt/vm.py", line 1738, in _getUnderlyingVmDevicesInfo
self._getUnderlyingHostDeviceInfo()
  File "/usr/share/vdsm/virt/vm.py", line 4277, in _getUnderlyingHostDeviceInfo
**self._getUnderlyingDeviceAddress(source))
TypeError: pci_address_to_name() got an unexpected keyword argument 'target'

XML part was:














As for creating a custom partition: by default machine.slice has "a *:* rwm" in 
devices.list, but for every new VM libvirt removes the *:* mask and fills the list 
with the actually needed devices (as I understand the process). For example:

c 136:* rw
c 1:3 rw
c 1:7 rw
c 1:5 rw
c 1:8 rw
c 1:9 rw
c 5:2 rw
c 10:232 rw
c 253:0 rw
c 10:228 rw
c 10:196 rw

What I'm looking for is a way to tell libvirt about my additional devices 
without breaking oVirt.
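
Not an answer to the libvirt side of it, but as a stop-gap the devices cgroup
of a running VM can also be opened up by hand; a sketch only (the slice path
and device numbers below are placeholders, and the change is lost when the VM
restarts):

  # find the major:minor of the sg node and allow it in the VM's cgroup
  ls -l /dev/sg2                          # scsi-generic nodes use major 21
  echo 'c 21:2 rwm' > /sys/fs/cgroup/devices/machine.slice/<machine-qemu...scope>/devices.allow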

--
Dmitry Glushenok
Jet Infosystems
http://www.jet.msk.su
+7-495-411-7601 (ext. 1237)

> On 3 June 2016, at 12:24, Martin Polednik wrote:
> 
> On 03/06/16 11:48 +0300, Дмитрий Глушенок wrote:
>> Hello!
>> 
>> Is it possible to tell libvirt to add specific devices to qemu cgroup? By 
>> somehow enumerating the devices in XML using a hook for example.
>> I'm passing scsi-generic disks (/dev/sgX) to VM using qemucmdline hook and 
>> it doesn't work until I remove "devices" from cgroup_controllers in 
>> qemu.conf.
> 
> One way to achieve this is creating a hook to generate the scsi device
> XML instead of modifying qemu cmdline directly. Libvirt assumes
> ownership of all devices created in the XML and therefore adds them to
> the machine cgroup.
> 
> Example of the XML taken from [1]:
> 
>   
>   
>   
>   
>   
>   
>   
>   
> 
> 
> There is slight issue with this approach outlined in [2].
> 
> If you want to keep the qemu approach, I think creating a custom
> partition and moving devices there would be the cleanest approach. In
> this case, [3] could help but I'm not entirely sure if that would
> solve the issue.
> 
> [1] https://libvirt.org/formatdomain.html
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1325485
> [3] https://libvirt.org/cgroups.html
>> --
>> Dmitry Glushenok
>> Jet Infosystems
>> http://www.jet.msk.su
>> +7-495-411-7601 (ext. 1237)
>> 
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt Community Newsletter, May 2016

2016-06-03 Thread Mikey Ariel
May was a big month for the oVirt community, as the 3.6 branch
progressed, the 4.0 branch got one step closer to final, and the moVirt
client for Android saw a new release candidate enter the world.

June will see oVirt showcased at the Red Hat Summit event in San
Francisco, and more work getting done on oVirt 4.0! Until then, here's
what's happened in May, 2016:

-
Software Releases
-

oVirt 3.6.6 Final Release is now available
http://lists.ovirt.org/pipermail/announce/2016-May/000253.html

oVirt Live 3.6.7 Second Release Candidate is now available for testing
http://lists.ovirt.org/pipermail/announce/2016-June/000255.html

oVirt 4.0.0 First Beta Release is now   available for testing
http://lists.ovirt.org/pipermail/announce/2016-May/000252.html

moVirt 1.4 RC1 (Android client for oVirt)
http://lists.ovirt.org/pipermail/users/2016-May/040010.html


In the Community


Community Will Make Big Splash at Red Hat Summit
http://red.ht/1Vi7YSU

CINLUG: Virtualization Management The oVirt Way
http://www.meetup.com/CINLUG/events/230746101/

ROSS`2016: путь в облака лежит через Open Source [Russian, oVirt
highlighted]
http://www.pcweek.ru/foss/article/detail.php?ID=185361

ovcmd: A CLI tool to manage an oVirt server [Python]
http://lists.ovirt.org/pipermail/users/2016-May/039655.html

China Cloud Computing Conference [Chinese, oVirt highlighted]
http://biz.ifeng.com/news/detail_2016_05/19/4800169_0.shtml


Deep Dives and Technical Discussions


Single Root I/O Virtualization (SR-IOV) Primer
http://red.ht/25bvUKB

ovirt-vmconsole serial [Howto]
http://lists.ovirt.org/pipermail/users/2016-May/039590.html

Virtual Machine Migration Best Practices
http://red.ht/1sqWP6V

Viewing the Horizon from the Cockpit
http://red.ht/1NrCckF

Deploying RHEV 3.6 Pt1
http://captainkvm.com/2016/05/deploying-rhev-pt1/

RHEV Deploy RHEVM [Video]
https://youtu.be/hMOKrmvYRC0

Deploying RHEV 3.6 Pt2 (Hypervisors)
http://captainkvm.com/2016/05/deploying-rhev-3-6-pt2/

Red Hat Enterprise Virtualization 3.6 - Adding RHEL Hypervisors [Video]
https://youtu.be/1RwcLrA3mcg

Deploying RHEV 3.6 Pt3 (Storage)
http://captainkvm.com/2016/05/deploying-rhev-3-6-pt3-storage/

Red Hat Enterprise Virtualization 3.6 - Add NFS Storage [Video]
https://youtu.be/Q17SVcf9v6M

Deploying RHEV 3.6 Pt4 (Provision VMs)
http://captainkvm.com/2016/05/deploying-rhev-3-6-pt4-provision-vms/

RHEV Deploy VM [Video]
https://youtu.be/1ahhn-8BJr0

Deploying RHEV 3.6 Pt5 (Networks)
http://captainkvm.com/2016/05/deploying-rhev-3-6-pt5-networks/

Red Hat Enterprise Virtualization 3.6 Provision Networks [Video]
https://youtu.be/xW6qANOTVtw

Deploying RHEV 3.6 Pt6 (Importing vSphere VMs)
http://captainkvm.com/2016/05/deploying-rhev-pt6-importing-vsphere-vms/

Red Hat Enterprise Virtualization 3.6 Import vSphere VMs [Video]
https://youtu.be/1YDzbn0V6XQ

Migrace virtuálních serverů z KVM do RHEV [Czech]
http://www.root.cz/clanky/migrace-virtualnich-serveru-z-kvm-do-rhev/

-- 
Mikey Ariel
Community Lead, oVirt
www.ovirt.org

"To be is to do" (Socrates)
"To do is to be" (Jean-Paul Sartre)
"Do be do be do" (Frank Sinatra)

Mobile: +420-702-131-141
IRC: mariel / thatdocslady
Twitter: @ThatDocsLady





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Windows Guest Tools & Live Migration issues.

2016-06-03 Thread Michal Skrivanek

> On 03 Jun 2016, at 12:57, Anantha Raghava  
> wrote:
> 
> Hello Michal,
> 
> Thanks for quick feed back.
> 
> Windows Guests Agents - Where do I download them from? Not getting the proper 
> links. Or is to a part of virt-drivers that we use during installation.

Hi Anantha,
it’s part of ovirt-guest-tools. See 
http://lists.ovirt.org/pipermail/users/2016-May/039779.html 


> 
> Live migration failure - These are pretty busy VMs if not big. The event does 
> not show any specific message but just as migration failed. Is the reason 
> logged somewhere? Please share the log location so that I can check & share 
> the log for meaningful message and proper screen shots.

There should be more details in the events at the bottom of the webadmin page 
(just make it a bit bigger; it may not be the last message).
If they are busy VMs then you may have issues migrating them. There is some 
simple tuning available and it may help. You can increase the maximum downtime 
in the VM properties dialog (500ms by default; you can go up to whatever is 
feasible for your VM: seconds, tens of seconds).
Bandwidth is limited to work on a 1Gb network; if you have 10Gb available you can 
also tune /etc/vdsm.conf on each host and increase the maximum bandwidth 10 
times
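
A sketch of the host-side knobs mentioned above (the option names are
assumptions from vdsm's config defaults, and the file usually lives at
/etc/vdsm/vdsm.conf; verify against your vdsm version before changing
anything, then restart vdsmd):

  # /etc/vdsm/vdsm.conf
  [vars]
  migration_max_bandwidth = 320     # MiB/s; raise this on a 10Gb network
  migration_downtime = 500          # ms; the per-VM dialog overrides this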

Thanks,
michal

> 
> --
> Thanks & Regards,
> 
> Anantha Raghava
> 
>>> On 03 Jun 2016, at 07:14, Anantha Raghava < 
>>> rag...@exzatechconsulting.com 
>>> > wrote:
>>> 
>>> Hi,
>>> 
>>> We have just installed oVirt 3.6 on 3 hosts with storage volumes on Fibre 
>>> Channel storage box. Every thing is working fine except the following.
>>> 
>>> 1. We have created 15 Virtual Machines all VM with Windows Server 2012 R2 
>>> OS. VM properties does not report the Operating System nor it shows the IP 
>>> and FQDN in the Admin Portal. There is always an exclamation mark that 
>>> reports about OS being different from the template and timezone issues. We 
>>> have changed the timezone to Indian Standard Time in both VM and Host, same 
>>> result continues. We have installed Windows Gues Tools, same result 
>>> continues. Screen shot is below.
>> 
>> you doesn’t seem to run the guest agent. Make sure the service is started 
>> and works, then you’ll see IPs and more detailed info about each guest, and 
>> exclamation marks should go away
>> 
>>> 
>>> 
>>> 
>>> 2. When we manually tried to migrate the VMs from one host to another one, 
>>> the migration gets initiated, but will eventually fail.
>>> 
>>> Any specific setting missing here or is it a bug.
>> 
>> are they big or busy VMs? What does it fail on? There should be a menaingful 
>> message even if it’s just a timeout
>> 
>>> 
>>> Note: 
>>> 
>>> All hosts are installed with CentOS 7.2 minimal installation oVirt node is 
>>> installed and activated.
>>> We do not have a DNS in our environment. We have to do with IPs. 
>> 
>> as long as engine works with FQDN it’s ok
>> 
>>> We are yet to apply the 3.6.6 patch on Engine and Nodes.
>> 
>> that may help with some of the issues above, so please try to do that 
>> 
>> Thanks,
>> michal
>> 
>>> We are running a stand alone engine, not a Hosted Engine.
>>> -- 
>>> Thanks & Regards,
>>> 
>>> 
>>> Anantha Raghava
>>> eXza Technology Consulting & Services
>>> 
>>> ___
>>> Users mailing list
>>> Users@ovirt.org 
>>> http://lists.ovirt.org/mailman/listinfo/users 
>>> 
>> 
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Moving Hosted Engine NFS storage domain

2016-06-03 Thread Beard Lionel (BOSTON-STORAGE)
Hi,

Ok, after updating the DB, it is now fine:
# update storage_server_connections set connection='<NEW EXPORT>' where 
id='<ID>';

But (because there is always a 'but'), I still have an issue when I reboot the host 
where the Hosted VM runs; I have to do a:
# hosted-engine --clean-metadata
in order to have the Hosted VM start.
Otherwise, the error I have is:
sanlock[1268]: 2016-06-03 11:45:15+ 5973 [1273]: r3 cmd_acquire 2,8,3274 
invalid lockspace found -1 failed 0 name ab7ce50d-238a-4f4f-a36e-8d06e276ae4b
libvirtd[2586]: Failed to acquire lock: No space left on device

Regards,
Lionel BEARD
05.61.39.39.19

> -Original Message-
> From: Simone Tiraboschi [mailto:stira...@redhat.com]
> Sent: Friday, 3 June 2016 10:26
> To: Beard Lionel (BOSTON-STORAGE) 
> Cc: users 
> Subject: Re: [ovirt-users] Moving Hosted Engine NFS storage domain
>
> On Thu, Jun 2, 2016 at 5:33 PM, Beard Lionel (BOSTON-STORAGE)
>  wrote:
> > Hi,
> >
> >
> >
> > I have tried these steps :
> >
> > -  Stop Hosted VM
> >
> > -  # vdsClient -s localhost forcedDetachStorageDomain
> > 
> >
> > -  Domain is now detached
> >
> > -  # hosted-storage –clean-metadata
> >
> > -  # hosted-storage –vm-start
> >
> >
> >
> > But, hosted domain path is still the old one.
> >
> > If I run :
> >
> > # vdsClient -s localhost getStorageDomainsList 
> >
> > The path is correct !!
> >
> >
> >
> > So I don’t know where the wrong path is stored.
>
> If the engine imported the hosted-engine storage domain in the past, that
> storage domain is in the engine DB with the wrong path.
> If you bring down everything and reboot your hosts, ovirt-ha-agent will
> mount the hosted-engine-storage domain with the new path under hosted-
> engine.conf.
> At this point ovirt-ha-agent can start the engine VM. When the engine will
> come up it will try to mount all the storage domain in the datacenter as for
> regular hosts. This will mean that it will try to remount also the 
> hosted-engine
> storage domain (cause the domain uuid is the same!) from the old path since
> it's still configured like that in the engine DB.
>
> > I think the only way is to reinstall Hosted VM from scratch.
>
> You can try to manually force a new path in the DB.
>
> > @ Staniforth Paul, your procedure is not working :(
> >
> >
> >
> > Regards,
> >
> > Lionel BEARD
> >
> >
> >
> > De : Beard Lionel (BOSTON-STORAGE)
> > Envoyé : mercredi 1 juin 2016 22:26
> > À : 'Roy Golan' 
> > Cc : Roman Mohr ; users  Objet :
> > RE: [ovirt-users] Moving Hosted Engine NFS storage domain
> >
> >
> >
> > Hi,
> >
> >
> >
> > Path is neither shared not mounted anymore on previous NFS server, but
> > storage domain is still up and cannot be removed…
> >
> >
> >
> > Is there a possibility to remove it from command line ?
> >
> >
> >
> > Regards,
> >
> > Lionel BEARD
> >
> >
> >
> > De : Roy Golan [mailto:rgo...@redhat.com] Envoyé : mercredi 1 juin
> > 2016 20:57 À : Beard Lionel (BOSTON-STORAGE)  Cc :
> > Roman Mohr ; users 
> >
> >
> > Objet : Re: [ovirt-users] Moving Hosted Engine NFS storage domain
> >
> >
> >
> >
> > On Jun 1, 2016 7:19 PM, "Beard Lionel (BOSTON-STORAGE)"
> > 
> > wrote:
> >>
> >> Hi,
> >>
> >> I am not able to do that, "Remove" button is greyed.
> >> And it is not possible to place it into maintenance mode because
> >> hosted VM is running on it...
> >>
> >> Any clue?
> >>
> >
> > You must create a situation where vdsm would fail to monitor that domain.
> > I.e stop sharing that path or block it and then the status will allow
> > you to force remove
> >
> >> Thanks.
> >>
> >> Regards,
> >> Lionel BEARD
> >>
> >> > -Message d'origine-
> >> > De : Roman Mohr [mailto:rm...@redhat.com] Envoyé : mercredi 1 juin
> >> > 2016 14:43 À : Beard Lionel (BOSTON-STORAGE)  Cc :
> >> > Staniforth, Paul ; users@ovirt.org
> >> > Objet : Re: [ovirt-users] Moving Hosted Engine NFS storage domain
> >> >
> >> > On Wed, Jun 1, 2016 at 2:40 PM, Beard Lionel (BOSTON-STORAGE)
> >> >  wrote:
> >> > > Hi,
> >> > >
> >> > >
> >> > >
> >> > > I have followed these steps :
> >> > >
> >> > >
> >> > >
> >> > > -  Stop supervdsmd + vdsmd + ovirt-ha-agent + ovirt-ha-broker
> >> > >
> >> > > -  Modify config file
> >> > >
> >> > > -  Copy files (cp better handles sparse files than rsync)
> >> > >
> >> > > -  Umount old hosted-engine path
> >> > >
> >> > > -  Restart services
> >> > >
> >> > > -  Hosted VM doesn’t start => hosted-engine –clean-metadata. I
> >> > > get
> >> > > an error at the end, but now I am able to start Hosted VM :
> >> > >
> >> > > o
> >> >
> ERROR:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Metad
> >> > ata
> >> > > for current host missing.
> >> > >
> >> > >
> >> > >
> >> > > I can connect to oVirt interface, everything seems to be working
> >> > > fine, but the Hosted storage domain has an incorrect path, it is
> >> > > still pointing to old one… I think this information is not
> >> > > correctly reported by web interface

Re: [ovirt-users] Moving Hosted Engine NFS storage domain

2016-06-03 Thread Simone Tiraboschi
On Fri, Jun 3, 2016 at 1:46 PM, Beard Lionel (BOSTON-STORAGE)
 wrote:
> Hi,
>
> Ok, after updating the DB, it is now fine:
> > # update storage_server_connections set connection='<NEW EXPORT>' where 
> > id='<ID>';

Nice to hear

> But (because there is always a 'but'), I still have an issue when I reboot 
> host where Hosted VM runs, I have to do a:
> # hosted-engine --clean-metadata
> In order to have Hosted VM starts.
> Else, the error I have is:
> sanlock[1268]: 2016-06-03 11:45:15+ 5973 [1273]: r3 cmd_acquire 2,8,3274 
> invalid lockspace found -1 failed 0 name ab7ce50d-238a-4f4f-a36e-8d06e276ae4b
> libvirtd[2586]: Failed to acquire lock: No space left on device

I suspect that this is a side effect of this bug:
https://bugzilla.redhat.com/1322849

Under certain circumstances we could have a mismatch between the host
id as seen by ha-agent and the spm_id used by the engine.

Can you please execute this query:
  SELECT vds_spm_id_map.vds_spm_id, vds.vds_name
  FROM vds_spm_id_map, vds
  WHERE vds_spm_id_map.vds_id = vds.vds_id;
to check the spm_id of your hosts in the DB, comparing its output with
the output of
  grep host_id /etc/ovirt-hosted-engine/hosted-engine.conf
run on each involved host?

If you find any mismatching id you have to change
/etc/ovirt-hosted-engine/hosted-engine.conf to reflect the engine
spm_id and reboot the host.
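
A sketch of that last step, assuming the query above says a host should have
spm_id 2 while hosted-engine.conf disagrees (the id value is a placeholder):

  grep host_id /etc/ovirt-hosted-engine/hosted-engine.conf
  sed -i 's/^host_id=.*/host_id=2/' /etc/ovirt-hosted-engine/hosted-engine.conf
  reboot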

> Regards,
> Lionel BEARD
> 05.61.39.39.19
>
>> -Original Message-
>> From: Simone Tiraboschi [mailto:stira...@redhat.com]
>> Sent: Friday, 3 June 2016 10:26
>> To: Beard Lionel (BOSTON-STORAGE) 
>> Cc: users 
>> Subject: Re: [ovirt-users] Moving Hosted Engine NFS storage domain
>>
>> On Thu, Jun 2, 2016 at 5:33 PM, Beard Lionel (BOSTON-STORAGE)
>>  wrote:
>> > Hi,
>> >
>> >
>> >
>> > I have tried these steps :
>> >
>> > -  Stop Hosted VM
>> >
>> > -  # vdsClient -s localhost forcedDetachStorageDomain
>> > 
>> >
>> > -  Domain is now detached
>> >
>> > -  # hosted-storage –clean-metadata
>> >
>> > -  # hosted-storage –vm-start
>> >
>> >
>> >
>> > But, hosted domain path is still the old one.
>> >
>> > If I run :
>> >
>> > # vdsClient -s localhost getStorageDomainsList 
>> >
>> > The path is correct !!
>> >
>> >
>> >
>> > So I don’t know where the wrong path is stored.
>>
>> If the engine imported the hosted-engine storage domain in the past, that
>> storage domain is in the engine DB with the wrong path.
>> If you bring down everything and reboot your hosts, ovirt-ha-agent will
>> mount the hosted-engine-storage domain with the new path under hosted-
>> engine.conf.
>> At this point ovirt-ha-agent can start the engine VM. When the engine will
>> come up it will try to mount all the storage domain in the datacenter as for
>> regular hosts. This will mean that it will try to remount also the 
>> hosted-engine
>> storage domain (cause the domain uuid is the same!) from the old path since
>> it's still configured like that in the engine DB.
>>
>> > I think the only way is to reinstall Hosted VM from scratch.
>>
>> You can try to manually force a new path in the DB.
>>
>> > @ Staniforth Paul, your procedure is not working :(
>> >
>> >
>> >
>> > Regards,
>> >
>> > Lionel BEARD
>> >
>> >
>> >
>> > De : Beard Lionel (BOSTON-STORAGE)
>> > Envoyé : mercredi 1 juin 2016 22:26
>> > À : 'Roy Golan' 
>> > Cc : Roman Mohr ; users  Objet :
>> > RE: [ovirt-users] Moving Hosted Engine NFS storage domain
>> >
>> >
>> >
>> > Hi,
>> >
>> >
>> >
>> > Path is neither shared not mounted anymore on previous NFS server, but
>> > storage domain is still up and cannot be removed…
>> >
>> >
>> >
>> > Is there a possibility to remove it from command line ?
>> >
>> >
>> >
>> > Regards,
>> >
>> > Lionel BEARD
>> >
>> >
>> >
>> > De : Roy Golan [mailto:rgo...@redhat.com] Envoyé : mercredi 1 juin
>> > 2016 20:57 À : Beard Lionel (BOSTON-STORAGE)  Cc :
>> > Roman Mohr ; users 
>> >
>> >
>> > Objet : Re: [ovirt-users] Moving Hosted Engine NFS storage domain
>> >
>> >
>> >
>> >
>> > On Jun 1, 2016 7:19 PM, "Beard Lionel (BOSTON-STORAGE)"
>> > 
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> I am not able to do that, "Remove" button is greyed.
>> >> And it is not possible to place it into maintenance mode because
>> >> hosted VM is running on it...
>> >>
>> >> Any clue?
>> >>
>> >
>> > You must create a situation where vdsm would fail to monitor that domain.
>> > I.e stop sharing that path or block it and then the status will allow
>> > you to force remove
>> >
>> >> Thanks.
>> >>
>> >> Regards,
>> >> Lionel BEARD
>> >>
>> >> > -Message d'origine-
>> >> > De : Roman Mohr [mailto:rm...@redhat.com] Envoyé : mercredi 1 juin
>> >> > 2016 14:43 À : Beard Lionel (BOSTON-STORAGE)  Cc :
>> >> > Staniforth, Paul ; users@ovirt.org
>> >> > Objet : Re: [ovirt-users] Moving Hosted Engine NFS storage domain
>> >> >
>> >> > On Wed, Jun 1, 2016 at 2:40 PM, Beard Lionel (BOSTON-STORAGE)
>> >> >  wrote:
>> >> > > Hi,
>> >> > >
>> >> > >
>> >> > >
>> >> > > I have followed these steps :
>> >> > >
>

Re: [ovirt-users] Moving Hosted Engine NFS storage domain

2016-06-03 Thread Beard Lionel (BOSTON-STORAGE)
Hi,

Thanks for your answer, but I have just decided to reinstall everything from 
scratch, because I don't want to spend too much time on this testing 
environment.

Regards,
Lionel BEARD


> -Original Message-
> From: Simone Tiraboschi [mailto:stira...@redhat.com]
> Sent: Friday, 3 June 2016 14:57
> To: Beard Lionel (BOSTON-STORAGE) 
> Cc: users 
> Subject: Re: [ovirt-users] Moving Hosted Engine NFS storage domain
>
> On Fri, Jun 3, 2016 at 1:46 PM, Beard Lionel (BOSTON-STORAGE)
>  wrote:
> > Hi,
> >
> > Ok, after updating the DB, it is now fine:
> > # update storage_server_connections set connection='<NEW EXPORT>'
> > where id='<ID>';
>
> Nice to hear
>
> > But (because there is always a 'but'), I still have an issue when I reboot 
> > host
> where Hosted VM runs, I have to do a:
> > # hosted-engine --clean-metadata
> > In order to have Hosted VM starts.
> > Else, the error I have is:
> > sanlock[1268]: 2016-06-03 11:45:15+ 5973 [1273]: r3 cmd_acquire
> > 2,8,3274 invalid lockspace found -1 failed 0 name
> > ab7ce50d-238a-4f4f-a36e-8d06e276ae4b
> > libvirtd[2586]: Failed to acquire lock: No space left on device
>
> I suspect that this is a side effect of this bug:
> https://bugzilla.redhat.com/1322849
>
> Under certain circumstances we could have  a mismatch between the host id
> as saw by ha-agent and the spm_id used by the engine.
>
> Can you please execute this query
>  SELECT vds_spm_id_map.vds_spm_id, vds.vds_name FROM
> vds_spm_id_map, vds WHERE vds_spm_id_map.vds_id = vds.vds_id; to
> check the spm_id of your host in the DB comparing its output with the output
> of  grep host_id /etc/ovirt-hosted-engine/hosted-engine.conf
> run on each involved host?
>
> If you find any mismatching id you have to change /etc/ovirt-hosted-
> engine/hosted-engine.conf to reflect the engine spm_id and reboot the
> host.
>
> > Regards,
> > Lionel BEARD
> > 05.61.39.39.19
> >
> >> -Original Message-
> >> From: Simone Tiraboschi [mailto:stira...@redhat.com] Sent: Friday,
> >> 3 June 2016 10:26 To: Beard Lionel (BOSTON-STORAGE) 
> >> Cc: users  Subject: Re: [ovirt-users] Moving Hosted
> >> Engine NFS storage domain
> >>
> >> On Thu, Jun 2, 2016 at 5:33 PM, Beard Lionel (BOSTON-STORAGE)
> >>  wrote:
> >> > Hi,
> >> >
> >> >
> >> >
> >> > I have tried these steps :
> >> >
> >> > -  Stop Hosted VM
> >> >
> >> > -  # vdsClient -s localhost forcedDetachStorageDomain
> >> > 
> >> >
> >> > -  Domain is now detached
> >> >
> >> > -  # hosted-storage –clean-metadata
> >> >
> >> > -  # hosted-storage –vm-start
> >> >
> >> >
> >> >
> >> > But, hosted domain path is still the old one.
> >> >
> >> > If I run :
> >> >
> >> > # vdsClient -s localhost getStorageDomainsList  >> > domain>
> >> >
> >> > The path is correct !!
> >> >
> >> >
> >> >
> >> > So I don’t know where the wrong path is stored.
> >>
> >> If the engine imported the hosted-engine storage domain in the past,
> >> that storage domain is in the engine DB with the wrong path.
> >> If you bring down everything and reboot your hosts, ovirt-ha-agent
> >> will mount the hosted-engine-storage domain with the new path under
> >> hosted- engine.conf.
> >> At this point ovirt-ha-agent can start the engine VM. When the engine
> >> will come up it will try to mount all the storage domain in the
> >> datacenter as for regular hosts. This will mean that it will try to
> >> remount also the hosted-engine storage domain (cause the domain uuid
> >> is the same!) from the old path since it's still configured like that in 
> >> the
> engine DB.
> >>
> >> > I think the only way is to reinstall Hosted VM from scratch.
> >>
> >> You can try to manually force a new path in the DB.
> >>
> >> > @ Staniforth Paul, your procedure is not working :(
> >> >
> >> >
> >> >
> >> > Regards,
> >> >
> >> > Lionel BEARD
> >> >
> >> >
> >> >
> >> > De : Beard Lionel (BOSTON-STORAGE)
> >> > Envoyé : mercredi 1 juin 2016 22:26 À : 'Roy Golan'
> >> >  Cc : Roman Mohr ; users
> >> >  Objet :
> >> > RE: [ovirt-users] Moving Hosted Engine NFS storage domain
> >> >
> >> >
> >> >
> >> > Hi,
> >> >
> >> >
> >> >
> >> > Path is neither shared not mounted anymore on previous NFS server,
> >> > but storage domain is still up and cannot be removed…
> >> >
> >> >
> >> >
> >> > Is there a possibility to remove it from command line ?
> >> >
> >> >
> >> >
> >> > Regards,
> >> >
> >> > Lionel BEARD
> >> >
> >> >
> >> >
> >> > De : Roy Golan [mailto:rgo...@redhat.com] Envoyé : mercredi 1 juin
> >> > 2016 20:57 À : Beard Lionel (BOSTON-STORAGE)  Cc :
> >> > Roman Mohr ; users 
> >> >
> >> >
> >> > Objet : Re: [ovirt-users] Moving Hosted Engine NFS storage domain
> >> >
> >> >
> >> >
> >> >
> >> > On Jun 1, 2016 7:19 PM, "Beard Lionel (BOSTON-STORAGE)"
> >> > 
> >> > wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> I am not able to do that, "Remove" button is greyed.
> >> >> And it is not possible to place it into maintenance mode because
> >> >> hosted VM is running on it...

Re: [ovirt-users] Upgrade version problem

2016-06-03 Thread Sandro Bonazzola
On 03 Jun 2016 12:53, "Ollie Armstrong"  wrote:
>
> On 3 June 2016 at 11:48,   wrote:
> > Seems ovirt-engine-tools package is not marked for update although
there is
> > a new version in repos (for example in [1]), 3.6.6.2. I run 'yum clean
all'
> > previously to update to make sure this is not a cache issue and did not
> > solve the problem.
>
> I don't know if you have a solution, but I hit this on the last
> upgrade. I just updated ovirt-engine-setup and ran engine-setup which
> pulled in all the required packages properly.  This is the recommended
> procedure from the upgrade notes.

Karma points +1
Virtual medal: I read the docs
Thanks!

> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] free-IPA Multi-Master Authentication Problem

2016-06-03 Thread Kilian Ries
Hi,


I have two FreeIPA directories set up in multi-master replication. Both are 
running on CentOS 7.2 with the latest software installed. Replication between both 
IPAs is set up correctly and I am able to authenticate against each of the two 
manually.


However, if I shut down IPA1 and try to authenticate from oVirt 3.5.6.2 against 
IPA2, I can't log in. Login only works if IPA1 is running (keep in mind that 
manual authentication against IPA2 works).


Nothing is logged in the dirsrv error log; however, I can see the 
authentication in the access log of IPA2:



###


filter="(&(|(objectClass=krbprincipalaux)(objectClass=krbprincipal)(objectClass=ipakrbprincipal))(|(ipaKrbPrincipalAlias=krbtgt/intern.customer-virt...@intern.customer-virt.eu)(krbPrincipalName=krbtgt/intern.customer-virt...@intern.customer-virt.eu)))"
 attrs="krbPrincipalName krbCanonicalName ipaKrbPrincipalAlias krbUPEnabled 
krbPrincipalKey krbTicketPolicyReference krbPrincipalExpiration 
krbPasswordExpiration krbPwdPolicyReference krbPrincipalType krbPwdHistory 
krbLastPwdChange krbPrincipalAliases krbLastSuccessfulAuth krbLastFailedAuth 
krbLoginFailedCount krbExtraData krbLastAdminUnlock krbObjectReferences 
krbTicketFlags krbMaxTicketLife krbMaxRenewableAge nsAccountLock 
passwordHistory ipaKrbAuthzData ipaUserAuthType ipatokenRadiusConfigLink 
objectClass"

[03/Jun/2016:17:18:39 +0200] conn=5 op=758 RESULT err=0 tag=101 nentries=1 
etime=0

[03/Jun/2016:17:18:39 +0200] conn=5 op=759 SRCH 
base="cn=global_policy,cn=INTERN.CUSTOMER-VIRT.EU,cn=kerberos,dc=intern,dc=customer-virt,dc=eu"
 scope=0 filter="(objectClass=*)" attrs="krbMaxPwdLife krbMinPwdLife 
krbPwdMinDiffChars krbPwdMinLength krbPwdHistoryLength krbPwdMaxFailure 
krbPwdFailureCountInterval krbPwdLockoutDuration"

[03/Jun/2016:17:18:39 +0200] conn=5 op=759 RESULT err=0 tag=101 nentries=1 
etime=0

[03/Jun/2016:17:18:39 +0200] conn=5 op=760 SRCH 
base="uid=kries,cn=users,cn=accounts,dc=intern,dc=customer-virt,dc=eu" scope=0 
filter="(objectClass=*)" attrs="objectClass uid cn fqdn gidNumber 
krbPrincipalName krbCanonicalName krbTicketPolicyReference 
krbPrincipalExpiration krbPasswordExpiration krbPwdPolicyReference 
krbPrincipalType krbLastPwdChange krbPrincipalAliases krbLastSuccessfulAuth 
krbLastFailedAuth krbLoginFailedCount krbLastAdminUnlock krbTicketFlags 
ipaNTSecurityIdentifier ipaNTLogonScript ipaNTProfilePath ipaNTHomeDirectory 
ipaNTHomeDirectoryDrive"

[03/Jun/2016:17:18:39 +0200] conn=5 op=760 RESULT err=0 tag=101 nentries=1 
etime=0

[03/Jun/2016:17:18:39 +0200] conn=5 op=761 MOD 
dn="uid=kries,cn=users,cn=accounts,dc=intern,dc=customer-virt,dc=eu"

[03/Jun/2016:17:18:39 +0200] conn=5 op=761 RESULT err=0 tag=103 nentries=0 
etime=0 csn=5751a1820001000d

[03/Jun/2016:17:18:39 +0200] conn=95 fd=109 slot=109 connection from 
192.168.210.45 to 192.168.210.181

[03/Jun/2016:17:18:39 +0200] conn=6 op=937 SRCH 
base="dc=intern,dc=customer-virt,dc=eu" scope=2 
filter="(&(|(objectClass=krbprincipalaux)(objectClass=krbprincipal)(objectClass=ipakrbprincipal))(|(ipaKrbPrincipalAlias=krbtgt/intern.customer-virt...@intern.customer-virt.eu)(krbPrincipalName=krbtgt/intern.customer-virt...@intern.customer-virt.eu)))"
 attrs="krbPrincipalName krbCanonicalName ipaKrbPrincipalAlias krbUPEnabled 
krbPrincipalKey krbTicketPolicyReference krbPrincipalExpiration 
krbPasswordExpiration krbPwdPolicyReference krbPrincipalType krbPwdHistory 
krbLastPwdChange krbPrincipalAliases krbLastSuccessfulAuth krbLastFailedAuth 
krbLoginFailedCount krbExtraData krbLastAdminUnlock krbObjectReferences 
krbTicketFlags krbMaxTicketLife krbMaxRenewableAge nsAccountLock 
passwordHistory ipaKrbAuthzData ipaUserAuthType ipatokenRadiusConfigLink 
objectClass"

[03/Jun/2016:17:18:39 +0200] conn=6 op=937 RESULT err=0 tag=101 nentries=1 
etime=0

[03/Jun/2016:17:18:39 +0200] conn=6 op=938 SRCH 
base="dc=intern,dc=customer-virt,dc=eu" scope=2 
filter="(&(|(objectClass=krbprincipalaux)(objectClass=krbprincipal)(objectClass=ipakrbprincipal))(|(ipaKrbPrincipalAlias=ldap/auth02.intern.customer-virt...@intern.customer-virt.eu)(krbPrincipalName=ldap/auth02.intern.customer-virt...@intern.customer-virt.eu)))"
 attrs="krbPrincipalName krbCanonicalName ipaKrbPrincipalAlias krbUPEnabled 
krbPrincipalKey krbTicketPolicyReference krbPrincipalExpiration 
krbPasswordExpiration krbPwdPolicyReference krbPrincipalType krbPwdHistory 
krbLastPwdChange krbPrincipalAliases krbLastSuccessfulAuth krbLastFailedAuth 
krbLoginFailedCount krbExtraData krbLastAdminUnlock krbObjectReferences 
krbTicketFlags krbMaxTicketLife krbMaxRenewableAge nsAccountLock 
passwordHistory ipaKrbAuthzData ipaUserAuthType ipatokenRadiusConfigLink 
objectClass"

[03/Jun/2016:17:18:39 +0200] conn=6 op=938 RESULT err=0 tag=101 nentries=1 
etime=0

[03/Jun/2016:17:18:39 +0200] conn=6 op=939 SRCH 
base="cn=INTERN.CUSTOMER-VIRT.EU,cn=kerberos,dc=intern,dc=customer-virt,dc=eu" 
scope=0 filter="(objectClass=krbticketpolicyaux)" attrs="krbMaxTicketLife

Re: [ovirt-users] Questions on oVirt

2016-06-03 Thread Brett I. Holcomb
On Fri, 2016-06-03 at 03:49 -0300, Charles Tassell wrote:
> Hi Brett,
> 
>    I'm not an expert on oVirt, but from my experience I would say
> you 
> probably want to run the engine as a VM rather than on the bare
> metal.  
> It has a lot of moving parts (PostgresSQL, jBoss, etc...) and they
> all 
> fit well inside the VM.  You can run it right on the bare-metal if
> you 
> want though, as that was the preferred means for versions prior to
> 3.6  
> Also, you don't need to allocate the recommended 16GB of RAM to it
> if 
> you are only running 5-10 VMs.  You can probably get by with a 2-4GB
> VM 
> which makes it more palatable.
> 
>    The thing to realize with oVirt is that the Engine is not the 
> Hypervisor.  The engine is just a management tool.  If it crashes,
> all 
> the VMs continue to run fine without it, so you can just start it
> back 
> up and it will just resume managing everything fine.  If you only
> have 
> one physical host you don't need to really worry too much about 
> redundancy.  I don't think you can assign a host to two engines at
> the 
> same time, but I might be wrong about that.
> 
>    If you want to migrate between a hosted engine and bare metal (or 
> vice versa) you can use the engine-backup command to backup and then 
> restore (same command, different arguments)  the
> configuration.  I've 
> never done it, but it should work fine.
> 
>   For a system shutdown, I would shutdown all of the VMs (do the
> hosted 
> engine last) and then just shutdown the box.  I'm not sure if 
> maintenance mode is actually required or not, so I'd defer to
> someone 
> with more experience.  I know I have done it this way and it doesn't 
> seem to have caused any problems.
> 
>    For upgrades, I'd say shut down all of the VMs (including the hosted
> engine), then apply your updates, reboot as necessary, and then start
> the VMs back up.  Once everything is up, ssh into the hosted engine,
> update it (yum update), reboot as necessary, and you are good to go.
> If you have a multi-host system that's a bit different.  In that case
> put a host into maintenance mode; migrate all the VMs to other hosts;
> update it and reboot it; set it as active; migrate the VMs back and
> move on to the next host, doing the same thing.  The reason you want to
> shut down all the VMs is that upgrades to the KVM/qemu packages may
> crash running VMs.  I've seen this happen on Ubuntu, so I assume it's
> the same on RedHat/CentOS.
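
A rough sketch of that per-host cycle, assuming VM migrations are driven
from the Admin Portal rather than the command line:

# 1. Admin Portal: put the host into Maintenance (running VMs migrate off it)
# 2. On the host itself:
yum update
reboot    # only if kernel/qemu-kvm/vdsm updates require it
# 3. Admin Portal: Activate the host, migrate VMs back, repeat on the next host
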
> 
>    As for the 4.0 branch, I'd give it a month or two of being out before
> you use it for a production system.  I started with oVirt just as 3.6
> came out and ran into some bugs that made it quite complicated.  On the
> positive side, I learned a lot about how it works from getting advice on
> how to deal with those issues. :)
> 
> On 2016-06-02 10:23 PM, users-requ...@ovirt.org wrote:
> > 
> > Message: 4
> > Date: Thu, 02 Jun 2016 21:23:49 -0400
> > From: "Brett I. Holcomb" 
> > To: users 
> > Subject: [ovirt-users] Questions on oVirt
> > Message-ID: <1464917029.26446.133.ca...@l1049h.com>
> > Content-Type: text/plain; charset="utf-8"
> > 
> > After using oVirt for about three months I have some questions that
> > really haven't been answered in any of the documentation, posts, or
> > found in searching.  Or maybe more correctly I've found some answers
> > but am trying to put the pieces I've found together.
> > 
> > My setup is one physical host that used to run VMware ESXi6 and it
> > handled running the VMs on an iSCSI LUN on a Synology 3615xs unit.  I
> > have one physical Windows workstation and all the servers, DNS, DHCP,
> > file, etc. are VMs.  The VMs are on an iSCSI LUN on the Synology.
> > 
> > * Hosted-engine deployment - Run Engine as a VM.  This has the
> > advantage of using one machine for host and running the Engine as a VM
> > but what are the cons of it?
> > 
> > * Can I run the Engine on the host that will run the VMs without
> > running it on a VM?  That is I install the OS on my physical box,
> > install Engine, then setup datastores (iSCSI LUN), networking etc.
> > 
> > * How do I run more than one Engine?  With just one there is no
> > redundancy so can I run another Engine that accesses the same
> > Datacenter, etc. as the first?  Or does each Engine have to have its own
> > Datacenter and the backup is achieved by migrating between the Engines'
> > Datacenters as needed.
> > 
> > * Given I have a hosted Engine setup how can I "undo" it and get to
> > running just the Engine on the host.  Do I have to undo everything or
> > can I just install another instance of the Engine on the host but not
> > in a VM, move the VMs to it and then remove the Engine VM.
> > 
> > * System shutdown - If I shutdown the host what is the proper
> > procedure?  Go to global maintenance mode and then shutdown the host
> > or do I have to do some other steps to make sure VMs don't get
> > corrupted.  On

Re: [ovirt-users] Questions on oVirt

2016-06-03 Thread Brett I. Holcomb
On Fri, 2016-06-03 at 08:41 +0200, jvandewege wrote:
> On 3-6-2016 3:23, Brett I. Holcomb wrote:
> > 
> > After using oVirt for about three months I have some questions that
> > really haven't been answered in any of the documentation, posts, or
> > found in searching.  Or maybe more correctly I've found some
> > answers
> > but am trying to put the pieces I've found together.
> > 
> > My setup is one physical host that used to run VMware ESXi6 and it
> > handled running the VMs on an iSCSI LUN on a Synology 3615xs
> > unit.  I
> > have one physical Windows workstation and all the servers, DNS,
> > DHCP,
> > file, etc. are VMs.  The VMs are on an iSCSI LUN on the Synology.
> > 
> > * Hosted-engine deployment - Run Engine as a VM.  This has the
> > advantage of using one machine for host and running the Engine as a
> > VM
> > but what are the cons of it?
> Not many, but I can think of one: if there is a problem with the storage
> where the engine VM is running then it can be a challenge to get things
> working again. You can guard against that by not using your host as your
> main testing workstation :-)
Unfortunately, I only have one physical host available.  The storage is
NFS on the host which has a RAID10 array setup.  Not optimum but it's
what I've got.
> > 
> > 
> > * Can I run the Engine on the host that will run the VMs without
> > running it on a VM?  That is I install the OS on my physical box,
> > install Engine, then setup datastores (iSCSI LUN), networking etc.
> > 

> 
> That used to be possible up to 3.5 and is called all-in-one setup.
> 

I remember reading about that, and it's deprecated if I remember correctly.
> > 
> > 
> > * How do I run more than one Engine?  With just one there is no
> > redundancy so can I run another Engine that accesses the same
> > Datacenter, etc. as the first?  Or does each Engine have to have its
> > own Datacenter and the backup is achieved by migrating between the
> > Engines' Datacenters as needed.
> > 

> 
> There is just one Engine and normally you would have more hosts and it
> would migrate around those hosts using the shared storage if you need to
> do maintenance on those hosts.
> 
> 
> > 
> > 
> > * Given I have a hosted Engine setup how can I "undo" it and  get to
> > running just the Engine on the host.  Do I have to undo everything or
> > can I just install another instance of the Engine on the host but not
> > in a VM, move the VMs to it and then remove the Engine VM.
> > 
> > 

> 
> Get a second physical box, install an OS, install Engine on it and
> restore the db backup on it, and this should work. AIO setup isn't
> possible from 3.6 onwards.
> 
> 
> > 
> > * System shutdown - If I shutdown the host what is the proper
> > procedure?  Go to global maintenance mode and then shutdown the host
> > or do I have to do some other steps to make sure VMs don't get
> > corrupted.  On ESXi we'd put a host into maintenance mode after
> > shutting down or moving the VMs so I assume it's the same here.
> > Shut down VMs since there is nowhere to move the VMs, go into global
> > maintenance mode. Shut down.  On startup the Engine will come up, then
> > I start my VMs.
> > 

> 
> -  Shutdown any VMs that are running
> - stop ovirt-ha-agent and ovirt-ha-broker (they keep the Engine up)
> - stop the Engine
> - stop vdsmd
> - stop sanlock
> - unmount the shared storage
> - shutdown host.
> The Engine will come up once you power up the host. If you use
> hosted-engine --set-maintenance --mode=global then ha-agent/ha-broker
> won't start the Engine for you. You have to use hosted-engine
> --set-maintenance --mode=none first. You could add that to your system
> startup if you prefer this sequence.
> The above is my own recipe and works for me, YMMV. (Got it scripted and
> can post it, but it does more or less what I wrote.)
> 
> 

Thanks.  That helps.  If you get a chance I'd like the script.
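
As an aside, the maintenance toggle Joop describes is driven from the host
like this (a minimal sketch; the exact --vm-status output differs between
versions):

hosted-engine --set-maintenance --mode=global   # HA agents leave the engine VM alone
hosted-engine --vm-status                       # check engine VM and maintenance state
hosted-engine --set-maintenance --mode=none     # back to normal HA behaviour
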
> > 
> > 
> > * Upgrading engine and host - Do I have to go to maintenance mode then
> > run yum to install the new versions on the host and engine and then
> > run engine-setup or do I need to go into maintenance mode?  I assume
> > the 4.0 production install will be much more involved but hopefully
> > keeping updated will make it a little less painful.
> > 
> > 

> 
> It's on the wiki somewhere but I think the order is:
> - enable global maintenance on the host
> - upgrade the engine by running engine-setup and it will tell you
> whether it needs a yum upgrade of engine-setup first or it will do the
> oVirt engine upgrade straight away
> - upgrade the rest of the engine packages and restart
> - while the engine is down, upgrade the host and restart if needed
> - disable global maintenance on the host and if all is well the Engine
> will be restarted.
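
For reference, that sequence translates roughly into the following commands
on a single-host hosted-engine setup (a sketch only, from memory; the engine
VM name engine01 and the package glob are assumptions, so check the upgrade
notes for your release):

# On the host: keep the HA agents from restarting the engine mid-upgrade
hosted-engine --set-maintenance --mode=global

# On the engine VM (assumed reachable as engine01):
ssh engine01
yum update 'ovirt-engine-setup*'   # pull in the new setup packages first
engine-setup                       # performs the actual engine upgrade
yum update                         # remaining packages, reboot if needed
exit

# Back on the host, while the engine VM is down:
yum update
# reboot the host if kernel/qemu/vdsm updates require it

# Let the HA agents manage (and restart) the engine VM again
hosted-engine --set-maintenance --mode=none
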
> 
> While hosted-engine seems complicated I haven't had any major issues
> with it but neither have my other oVirt deployments, standalone Engine
> or AIO.
> 
> Hope this answers some of your questions,
> 
> Joop
> 
> 
> 

Thanks.  This clarifies a lot.  I had things down pa

Re: [ovirt-users] Sanlock add Lockspace Errors

2016-06-03 Thread Nir Soffer
On Fri, Jun 3, 2016 at 11:27 AM, InterNetX - Juergen Gotteswinter
 wrote:
> What if we move all VMs off the LUN which causes this error, drop the LUN
> and recreate it. Will we "migrate" the error with the VM to a different
> LUN, or could this be a fix?

This will fix the ids file, but since we don't know why this corruption
happened, it may happen again.

Please open a bug with the log I requested so we can investigate this issue.

To fix the ids file you don't have to recreate the lun, just
initialize the ids lv.

1. Put the domain to maintenance (via engine)

No host should access it while you reconstruct the ids file

2. Activate the ids lv

You may need to connect to this iscsi target first, unless you have other
vgs connected on the same target.

lvchange -ay sd_uuid/ids

3. Initialize the lockspace

sanlock direct init -s <sd_uuid>:0:/dev/<sd_uuid>/ids:0

4. Deactivate the ids lv

lvchange -an sd_uuid/ids

5. Activate the domain (via engine)

The domain should become active after a while.
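
Putting the steps together, the whole procedure looks roughly like this as a
shell session (a sketch only; SD_UUID stands for your storage domain UUID,
which is also the VG name and the sanlock lockspace name, and the example
value below is purely hypothetical):

# 1. Put the domain into maintenance via the engine first; no host may
#    touch it while the ids file is being rebuilt.

SD_UUID=12345678-1234-1234-1234-123456789abc   # hypothetical placeholder

# 2. Activate the ids lv (connect to the iscsi target first if needed)
lvchange -ay "$SD_UUID/ids"

# 3. Re-initialize the lockspace
sanlock direct init -s "$SD_UUID:0:/dev/$SD_UUID/ids:0"

# 4. Deactivate the ids lv
lvchange -an "$SD_UUID/ids"

# 5. Activate the domain again via the engine; it should become active
#    after a while.
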

Nir

>
> Am 6/3/2016 um 10:08 AM schrieb InterNetX - Juergen Gotteswinter:
>> Hello David,
>>
>> thanks for your explanation of those messages, is there any possibility
>> to get rid of this? i already figured out that it might be an corruption
>> of the ids file, but i didnt find anything about re-creating or other
>> solutions to fix this.
>>
>> Imho this occoured after an outage where several hosts, and the iscsi
>> SAN has been fenced and/or rebooted.
>>
>> Thanks,
>>
>> Juergen
>>
>>
>> Am 6/2/2016 um 6:03 PM schrieb David Teigland:
>>> On Thu, Jun 02, 2016 at 06:47:37PM +0300, Nir Soffer wrote:
> This is a mess that's been caused by improper use of storage, and various
> sanity checks in sanlock have all reported errors for "impossible"
> conditions indicating that something catastrophic has been done to the
> storage it's using.  Some fundamental rules are not being followed.

 Thanks David.

 Do you need more output from sanlock to understand this issue?
>>>
>>> I can think of nothing more to learn from sanlock.  I'd suggest tighter,
>>> higher level checking or control of storage.  Low level sanity checks
>>> detecting lease corruption are not a convenient place to work from.
>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Questions on oVirt

2016-06-03 Thread Joop
On 3-6-2016 18:10, Brett I. Holcomb wrote:
> On Fri, 2016-06-03 at 08:41 +0200, jvandewege wrote:
>> On 3-6-2016 3:23, Brett I. Holcomb wrote:
>> Thanks.  That helps.  If you get a chance I'd like the script.
>
Here it is, heavily tailored to my setup and my 'servers', which seem to
have their own problems. I never had problems shutting down servers
which run oVirt, but it looks like consumer computers are more
sensitive to some manipulation that oVirt does. Without this script I
can't shut down my Shuttle XH61V; it will hang on wdmd.
The umount /rhev/.../ovirt01 lines are the NFS mounts that my host
exports, and the umount /rhev/../pakhuis lines are the NFS mounts to a
NAS. So I don't have everything on a single box, but that doesn't
matter. Got the same kind of setup on my laptop, but then without the
external NFS on my NAS.
The host is running Fedora 22; the engine is running CentOS 7.2.

Joop

# Stop the HA services first (they would otherwise keep the engine VM up)
service ovirt-ha-agent stop
service ovirt-ha-broker stop
# Shut down the engine VM cleanly
ssh engine01 init 0
# Stop vdsm and sanlock so the storage can be released
service vdsmd stop
systemctl stop sanlock.service
# Unmount the data/export/iso/hosted-engine storage domains
umount /rhev/data-center/mnt/ovirt01.puzzle-it.nu:_nfs_data
umount /rhev/data-center/mnt/ovirt01.puzzle-it.nu:_nfs_export
umount /rhev/data-center/mnt/ovirt01.puzzle-it.nu:_nfs_iso
umount /rhev/data-center/mnt/pakhuis:_volume1_nfs_export
umount /rhev/data-center/mnt/pakhuis:_volume1_nfs_iso
umount /rhev/data-center/mnt/pakhuis:_volume1_nfs_data
umount /rhev/data-center/mnt/pakhuis:_volume1_nfs_he
# Finally stop the local NFS server
systemctl stop nfs-server.service


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Questions on oVirt

2016-06-03 Thread Brett I. Holcomb
Thanks.  I have my VMs on my Synology unit with the export directory on
the host server itself.
On Fri, 2016-06-03 at 20:49 +0200, Joop wrote:
> On 3-6-2016 18:10, Brett I. Holcomb wrote:
> > 
> > On Fri, 2016-06-03 at 08:41 +0200, jvandewege wrote:
> > > 
> > > On 3-6-2016 3:23, Brett I. Holcomb wrote:
> > > Thanks.  That helps.  If you get a chance I'd like the script.
> Here it is, heavily tailored to my setup and my 'servers', which seem
> to have their own problems. I never had problems shutting down servers
> which run oVirt, but it looks like consumer computers are more
> sensitive to some manipulation that oVirt does. Without this script I
> can't shut down my Shuttle XH61V; it will hang on wdmd.
> The umount /rhev/.../ovirt01 lines are the NFS mounts that my host
> exports, and the umount /rhev/../pakhuis lines are the NFS mounts to a
> NAS. So I don't have everything on a single box, but that doesn't
> matter. Got the same kind of setup on my laptop, but then without the
> external NFS on my NAS.
> The host is running Fedora 22; the engine is running CentOS 7.2.
> 
> Joop
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users