Re: [ovirt-users] Upgrade path 3.5 -> 3.6

2016-02-15 Thread Yedidyah Bar David
On Tue, Feb 16, 2016 at 9:30 AM, Johan Kooijman  wrote:
> Yes. I pasted the information on AIO; that was wrong on my end. I have an
> engine running on dedicated hardware and about 20 nodes in this cluster. I
> would like to upgrade without downtime :) I know how to achieve this on the
> node end, but since I have to go from C6 to C7, I wonder what the procedure
> would be for the engine.

Please explain exactly what you are trying to do.

Note that the engine is still supported on el6.

(New) hosts are not.

All-in-one runs both together, and is thus not supported on el6 either.

IIRC we do not have a tested procedure to upgrade the engine from C6 to C7
yet; see also:

https://bugzilla.redhat.com/show_bug.cgi?id=1234257
https://bugzilla.redhat.com/show_bug.cgi?id=1285743
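
For reference, the approach discussed around those bugs is roughly an
engine-backup on the el6 machine followed by a restore on a fresh el7
install. A rough, untested sketch (file names hypothetical):

  # on the old el6 engine
  engine-backup --mode=backup --file=engine-c6.tar --log=backup.log
  # on a freshly installed el7 machine with the same engine version and
  # FQDN, after installing the ovirt-engine packages
  engine-backup --mode=restore --file=engine-c6.tar --log=restore.log \
      --provision-db
  engine-setup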

Best,

>
> On Mon, Feb 15, 2016 at 9:21 PM, Alexander Wels  wrote:
>>
>> On Monday, February 15, 2016 08:21:40 PM Johan Kooijman wrote:
>> > Hi Alexander,
>> >
>> > Thanks for the input! My 3.5 is running on C6 however:
>> >
>> > Upgrade of All-in-One on EL6 is not supported in 3.6. VDSM and the
>> > packages
>> > requiring it are not built anymore for EL6
>> >
>>
>> Well that was a piece of information you forgot to mention in your initial
>> email. So now I am not entirely sure what you are trying to do. Are you
>> trying
>> to save your existing VMs when you reinstall your machine?
>>
>>
>> > On Mon, Feb 15, 2016 at 3:37 PM, Alexander Wels 
>> > wrote:
>> > > On Monday, February 15, 2016 02:40:47 PM Johan Kooijman wrote:
>> > > > Hi,
>> > > >
>> > > > Can anybody recommend a best-practice upgrade path from
>> > > > oVirt 3.5 on C6 to 3.6 on C7.2?
>> > >
>> > > The answer sort of depends on what you want. Do you want no downtime
>> > > on your VMs, or is downtime acceptable? Also, are you running hosted
>> > > engine or not?
>> > >
>> > > This is the basic plan, which can be adjusted based on what your
>> > > needs are:
>> > >
>> > > 1. Update the engine from 3.5 to 3.6 (might be trickier if hosted
>> > > engine; not sure, I haven't played with hosted engine).
>> > > 2. Create a new 3.6 cluster.
>> > > 3. Put 1 host in maintenance (which will migrate the VMs to the
>> > > other hosts).
>> > > 4. Remove the host from the DC.
>> > > 5. Install C7.2 on the host.
>> > > 6. Add that host to the new 3.6 cluster.
>> > > 7. Optional: you can cross-cluster live migrate some VMs from 6 to 7
>> > > (just not the other way around, so once a VM is moved it's stuck in
>> > > the new cluster).
>> > > 8. Go to step 3 until all hosts are moved.
>> > > 9. Your 3.5 cluster should now be empty, and can be removed.
>> > > 10. Upgrade your DC to 3.6 (you can't upgrade if any lower clusters
>> > > exist).
>> > >
>> > > If you can have downtime, then just shut down the VMs running on the
>> > > host in step 3 before putting it in maintenance. Once the host is
>> > > moved to the new cluster you can start the VMs.
>> > >
>> > > Alexander
>> > > ___
>> > > Users mailing list
>> > > Users@ovirt.org
>> > > http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
> --
> Met vriendelijke groeten / With kind regards,
> Johan Kooijman
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade path 3.5 -> 3.6

2016-02-15 Thread Johan Kooijman
Yes. I pasted the information on AIO; that was wrong on my end. I have an
engine running on dedicated hardware and about 20 nodes in this cluster. I
would like to upgrade without downtime :) I know how to achieve this on the
node end, but since I have to go from C6 to C7, I wonder what the procedure
would be for the engine.

On Mon, Feb 15, 2016 at 9:21 PM, Alexander Wels  wrote:

> On Monday, February 15, 2016 08:21:40 PM Johan Kooijman wrote:
> > Hi Alexander,
> >
> > Thanks for the input! My 3.5 is running on C6 however:
> >
> > Upgrade of All-in-One on EL6 is not supported in 3.6. VDSM and the
> packages
> > requiring it are not built anymore for EL6
> >
>
> Well that was a piece of information you forgot to mention in your initial
> email. So now I am not entirely sure what you are trying to do. Are you
> trying
> to save your existing VMs when you reinstall your machine?
>
>
> > On Mon, Feb 15, 2016 at 3:37 PM, Alexander Wels 
> wrote:
> > > On Monday, February 15, 2016 02:40:47 PM Johan Kooijman wrote:
> > > > Hi,
> > > >
> > > > Can anybody recommend a best-practice upgrade path from
> > > > oVirt 3.5 on C6 to 3.6 on C7.2?
> > >
> > > The answer sort of depends on what you want. Do you want no downtime on
> > > your VMs, or is downtime acceptable? Also, are you running hosted
> > > engine or not?
> > >
> > > This is the basic plan, which can be adjusted based on what your needs
> > > are:
> > >
> > > 1. Update the engine from 3.5 to 3.6 (might be trickier if hosted
> > > engine; not sure, I haven't played with hosted engine).
> > > 2. Create a new 3.6 cluster.
> > > 3. Put 1 host in maintenance (which will migrate the VMs to the other
> > > hosts).
> > > 4. Remove the host from the DC.
> > > 5. Install C7.2 on the host.
> > > 6. Add that host to the new 3.6 cluster.
> > > 7. Optional: you can cross-cluster live migrate some VMs from 6 to 7
> > > (just not the other way around, so once a VM is moved it's stuck in the
> > > new cluster).
> > > 8. Go to step 3 until all hosts are moved.
> > > 9. Your 3.5 cluster should now be empty, and can be removed.
> > > 10. Upgrade your DC to 3.6 (you can't upgrade if any lower clusters
> > > exist).
> > >
> > > If you can have downtime, then just shut down the VMs running on the
> > > host in step 3 before putting it in maintenance. Once the host is moved
> > > to the new cluster you can start the VMs.
> > >
> > > Alexander
> > > ___
> > > Users mailing list
> > > Users@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Met vriendelijke groeten / With kind regards,
Johan Kooijman
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade path 3.5 -> 3.6

2016-02-15 Thread Alexander Wels
On Monday, February 15, 2016 08:21:40 PM Johan Kooijman wrote:
> Hi Alexander,
> 
> Thanks for the input! My 3.5 is running on C6 however:
> 
> Upgrade of All-in-One on EL6 is not supported in 3.6. VDSM and the packages
> requiring it are not built anymore for EL6
> 

Well that was a piece of information you forgot to mention in your initial 
email. So now I am not entirely sure what you are trying to do. Are you trying 
to save your existing VMs when you reinstall your machine?


> On Mon, Feb 15, 2016 at 3:37 PM, Alexander Wels  wrote:
> > On Monday, February 15, 2016 02:40:47 PM Johan Kooijman wrote:
> > > Hi,
> > > 
> > > Can anybody recommend a best-practice upgrade path from
> > > oVirt 3.5 on C6 to 3.6 on C7.2?
> > 
> > The answer sort of depends on what you want. Do you want no downtime on
> > your VMs, or is downtime acceptable? Also, are you running hosted engine
> > or not?
> > 
> > This is the basic plan, which can be adjusted based on what your needs are:
> > 
> > 1. Update the engine from 3.5 to 3.6 (might be trickier if hosted engine;
> > not sure, I haven't played with hosted engine).
> > 2. Create a new 3.6 cluster.
> > 3. Put 1 host in maintenance (which will migrate the VMs to the other
> > hosts).
> > 4. Remove the host from the DC.
> > 5. Install C7.2 on the host.
> > 6. Add that host to the new 3.6 cluster.
> > 7. Optional: you can cross-cluster live migrate some VMs from 6 to 7
> > (just not the other way around, so once a VM is moved it's stuck in the
> > new cluster).
> > 8. Go to step 3 until all hosts are moved.
> > 9. Your 3.5 cluster should now be empty, and can be removed.
> > 10. Upgrade your DC to 3.6 (you can't upgrade if any lower clusters exist).
> > 
> > If you can have downtime, then just shut down the VMs running on the host
> > in step 3 before putting it in maintenance. Once the host is moved to the
> > new cluster you can start the VMs.
> > 
> > Alexander
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade path 3.5 -> 3.6

2016-02-15 Thread Johan Kooijman
Hi Alexander,

Thanks for the input! My 3.5 is running on C6 however:

Upgrade of All-in-One on EL6 is not supported in 3.6. VDSM and the packages
requiring it are not built anymore for EL6



On Mon, Feb 15, 2016 at 3:37 PM, Alexander Wels  wrote:

> On Monday, February 15, 2016 02:40:47 PM Johan Kooijman wrote:
> > Hi,
> >
> > Can anybody recommend a best-practice upgrade path from
> > oVirt 3.5 on C6 to 3.6 on C7.2?
>
> The answer sort of depends on what you want. Do you want no downtime on
> your VMs, or is downtime acceptable? Also, are you running hosted engine
> or not?
>
> This is the basic plan, which can be adjusted based on what your needs are:
>
> 1. Update the engine from 3.5 to 3.6 (might be trickier if hosted engine;
> not sure, I haven't played with hosted engine).
> 2. Create a new 3.6 cluster.
> 3. Put 1 host in maintenance (which will migrate the VMs to the other
> hosts).
> 4. Remove the host from the DC.
> 5. Install C7.2 on the host.
> 6. Add that host to the new 3.6 cluster.
> 7. Optional: you can cross-cluster live migrate some VMs from 6 to 7 (just
> not the other way around, so once a VM is moved it's stuck in the new
> cluster).
> 8. Go to step 3 until all hosts are moved.
> 9. Your 3.5 cluster should now be empty, and can be removed.
> 10. Upgrade your DC to 3.6 (you can't upgrade if any lower clusters exist).
>
> If you can have downtime, then just shut down the VMs running on the host
> in step 3 before putting it in maintenance. Once the host is moved to the
> new cluster you can start the VMs.
>
> Alexander
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Met vriendelijke groeten / With kind regards,
Johan Kooijman
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] "bridge port" error when setting up hosted-engine on 2nd oVirt 3.6.2 node

2016-02-15 Thread Simone Tiraboschi
On Mon, Feb 15, 2016 at 3:50 PM, Yedidyah Bar David  wrote:

> On Mon, Feb 15, 2016 at 4:22 PM, Mike DePaulo 
> wrote:
> > Hi,
> >
> > I am getting the following error when I attempt to set up my 2nd ovirt
> > node as a hosted-engine host:
> >
> > RuntimeError: The selected device None is not a supported bridge port
> >
> > The 1st node is death-star (192.168.1.50)
> > The 2nd node is starkiller-base (192.168.1.52)
> >
> > This is not a production environment; this is my apartment.
> >
> > I am able to access the engine's webGUI.
> >
> > Both death-star and starkiller-base start out with only eno1 as their
> only NIC.
> > death-star, by the end of the hosted-engine-setup, had ovirtmgmt also
> > (with the same MAC address.)
> > starkiller-base does not have ovirtmgmt yet.
> > I am not using VLANs.
> >
> > node version: ovirt-node-iso-3.6-0.999.201602121021.el7.centos.iso
> > engine appliance version:
> oVirt-Engine-Appliance-CentOS-x86_64-7-20160126.ova
>
> Seems like a bug in [1] - it seems we no longer configure the bridge
> on the additional host. But I haven't tried that myself yet; I might be
> missing something. Adding Simone.
>
> [1]
> https://gerrit.ovirt.org/#/q/Ifcff652ef28e6b912514df2dd2daac2b07eca61e,n,z
> --
> Didi
>

Yes, it was a real bug: https://gerrit.ovirt.org/#/c/53428/ should address
it.
It should be available in tomorrow's build.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] lvm_dev_whitelist vdsm.conf option

2016-02-15 Thread jojo

On 2016-02-15 16:09, Adam Litke wrote:

On 11/02/16 17:22 +0100, Johannes Tiefenbacher wrote:

Hi,
I am wondering what the lvm_dev_whitelist vdsm.conf option is for?

I had some issues with vdsm switching off logical volumes that it
had better keep its hands off.


Could this option help to handle this?


Yes, this option should be exactly what you are looking for.  You
specify a comma-separated list of glob expressions and vdsm will
instruct lvm to ignore all devices which are not matched by your list.

See:
https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lvm_filters.html
for more background information on the LVM filters.



Perfect, I will try this out.
Thanks a lot for your answer.
All the best,
Jojo
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] virtio-serial0 duplicate id

2016-02-15 Thread jojo

On 2016-02-14 12:16, Arik Hadas wrote:


- Original Message -


- Original Message -

On 11 Feb 2016, at 17:02, Johannes Tiefenbacher  wrote:

Hi,
finally I am posting something to this list :) I have been reading it for
quite some time now and I have been an oVirt user since 3.0.

Hi,
welcome:)



I updated an engine installation from 3.2 to 3.6 (stepwise of course, and
yes, I know that's pretty outdated ;-). Then I updated the associated
CentOS 6 hosts' vdsm as well, from 3.10.x to 3.16.30. I also set my cluster
comp level to 3.5 (a 3.6 comp level is only possible with EL7 hosts, if I
understood correctly).

After my first failover test a VM could not be restarted, although the
host where it was running could correctly be fenced.
where it was running could correctly be fenced.

The reason, according to the engine's log, was this:

VM  is down with error. Exit message: internal error process
exited
while connecting to monitor: qemu-kvm: -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4:
Duplicate ID 'virtio-serial0' for device


I then recognized that I am not able to run this VM on any host. I
checked the virtual hardware in the engine database and could confirm
that ALL my VMs had this problem: 2 devices with alias='virtio-serial0'

it may very well be a bug, but it would be quite difficult to say unless it
is reproducible. It may be broken from earlier releases.
Arik/Shmuel, maybe it rings a bell?

In 3.6 we changed virtio-serial to be a managed device.
The script named 03_06_0310_change_virtio_serial_to_managed_device.sql
changes unmanaged virtio-serial devices (that were all unmanaged before) to
be managed.
A potential flow that will cause this duplication I can think of is:
1. Have a running VM in a pre-3.6 engine - it has unmanaged virtio-serial
2. Upgrade to 3.6 while the VM is running - the unmanaged virtio-serial
becomes managed
3. Do something that will change the hash of the devices
=> the engine will add an additional unmanaged virtio-serial device

Why didn't it happen before? Because the handling of unmanaged devices was:
1. Upon change in the VM devices (their hash), ask for all the devices
(full-list)
2. Remove all previous unmanaged devices
3. Add every device that does not exist in the database
When we add an unmanaged device we generate a new ID (!) - therefore we had
to remove all the previous unmanaged devices before adding the new ones.
If the previous unmanaged virtio-serial became managed, it is not removed and
we will end up having two virtio-serial devices.

@Johannes - is it true that the VM was running before the engine got updated
to 3.6 and hasn't been powered off since then?

yes that's true


I managed to simulate this.
We probably need to prevent the addition of unmanaged virtio-serial in the
3.6 engine, but IMO we should also use the ID reported by VDSM instead of
generating a new one, to eliminate similar issues in the future.
@Eli, Omer - can you recall why we can't use the ID we get from VDSM for the
unmanaged devices?
(we can continue this discussion in devel-list or in bugzilla..)


e.g.:


engine=# SELECT * FROM vm_device WHERE vm_device.device = 'virtio-serial'
AND vm_id = 'cbfa359f-d0b8-484b-8ec0-cf9b8e4bb3ec' ORDER BY vm_id;
-[ RECORD 1
]-+-
device_id | 2821d03c-ce88-4613-9095-e88eadcd3792
vm_id | cbfa359f-d0b8-484b-8ec0-cf9b8e4bb3ec
type  | controller
device| virtio-serial
address   |
boot_order| 0
spec_params   | { }
is_managed| t
is_plugged| f
is_readonly   | f
_create_date  | 2016-01-14 08:30:43.797161+01
_update_date  | 2016-02-10 10:04:56.228724+01
alias | virtio-serial0
custom_properties | { }
snapshot_id   |
logical_name  |
is_using_scsi_reservation | f
-[ RECORD 2
]-+-
device_id | 29e0805f-d836-451a-9ec3-9031baa995e6
vm_id | cbfa359f-d0b8-484b-8ec0-cf9b8e4bb3ec
type  | controller
device| virtio-serial
address   | {bus=0x00, domain=0x, type=pci,
slot=0x04,
function=0x0}
boot_order| 0
spec_params   | { }
is_managed| f
is_plugged| t
is_readonly   | f
_create_date  | 2016-02-11 13:47:02.69992+01
_update_date  |
alias | virtio-serial0
custom_properties |
snapshot_id   |
logical_name  |
is_using_scsi_reservation | f



My solution was this:

DELETE FROM vm_device WHERE vm_id='cbfa359f-d0b8-484b-8ec0-cf9b8e4bb3ec'
AND vm_device.device = 'virtio-serial' AND address = '';

(just renaming one of the aliases to 'virtio-serial1' did not help)
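
To check whether other VMs carry the same duplication, a hypothetical query
using only the vm_device columns shown above:

  -- list VMs with more than one virtio-serial controller
  SELECT vm_id, COUNT(*)
  FROM vm_device
  WHERE device = 'virtio-serial'
  GROUP BY vm_id
  HAVING COUNT(*) > 1;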

I believe it is not 

Re: [ovirt-users] change pool icon

2016-02-15 Thread Alexander Wels
On Monday, February 15, 2016 06:18:13 PM alireza sadeh seighalan wrote:
> hi again
> 
> i know that, but i want to change them after the vms are created. after a
> vm is created (from a specific pool), changing the template's icon is useless.
> 

If that doesn't happen, then it is most likely a bug.

> On Mon, Feb 15, 2016 at 6:14 PM, Alexander Wels  wrote:
> > On Saturday, February 13, 2016 11:25:24 AM alireza sadeh seighalan wrote:
> > > hi everyone
> > > 
> > > how can i change a pool icon after creating it? i have around 40 pool
> > > vms, but now i want to change their icons. thanks in advance
> > 
> > Change the icon in the template that the pool is based on?
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Move from Gluster to NFS

2016-02-15 Thread Yaniv Dary
Adding Sandro and Didi; they might be able to give a more detailed flow for this.
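
In the meantime, for step 3 of the plan quoted below, a hedged sketch; the
service names come from the ovirt-hosted-engine-ha package, so verify them
on your install:

  # on the hosted-engine host, after moving the other VMs away
  hosted-engine --set-maintenance --mode=global
  hosted-engine --vm-shutdown
  systemctl stop ovirt-ha-agent ovirt-ha-broker
  systemctl disable ovirt-ha-agent ovirt-ha-broker

Step 4 would then be a fresh "hosted-engine --deploy" pointed at the NFS
storage, followed by an engine-backup restore inside the new engine VM.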

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Sun, Feb 14, 2016 at 7:12 PM, Christophe TREFOIS <
christophe.tref...@uni.lu> wrote:

> Is there a reason why this is not possible?
>
>
>
> Can I set up a second host in the engine cluster and move the engine with
> “storage” to that host?
>
>
>
> So, what you would recommend is:
>
>
>
> 1.   Move all VMs from the engine host to another host
>
> 2.   Set up NFS on the empty HE host
>
> 3.   Shut down the HE, and disable the HA agent and broker
>
> 4.   Re-deploy the engine and restore the HE from backup
>
> 5.   Enjoy?
>
>
>
> Thank you for any help on this,
>
> I really don’t want to end up with a broken environment :)
>
>
>
> Kind regards,
>
>
>
> --
>
> Christophe
>
>
>
> *From:* Yaniv Dary [mailto:yd...@redhat.com]
> *Sent:* Sunday, February 14, 2016 16:32
> *To:* Christophe TREFOIS 
> *Cc:* users 
> *Subject:* Re: [ovirt-users] Move from Gluster to NFS
>
>
>
> You will not be able to move it between storage domains; that is why I
> suggested the backup and restore path.
>
>
> Yaniv Dary
>
> Technical Product Manager
>
> Red Hat Israel Ltd.
>
> 34 Jerusalem Road
>
> Building A, 4th floor
>
> Ra'anana, Israel 4350109
>
>
>
> Tel : +972 (9) 7692306
>
> 8272306
>
> Email: yd...@redhat.com
>
> IRC : ydary
>
>
>
> On Sun, Feb 14, 2016 at 5:28 PM, Christophe TREFOIS <
> christophe.tref...@uni.lu> wrote:
>
> Hi Yaniv,
>
>
>
> Would you recommend doing a clean install, or can I simply move the HE from
> the gluster mount point to NFS and tell the HA agent to boot from there?
>
>
>
> What do you think?
>
>
>
> Thank you,
>
>
>
> --
>
> Christophe
>
> Sent from my iPhone
>
>
> On 14 Feb 2016, at 15:30, Yaniv Dary  wrote:
>
> We will probably need to back up and restore the HE VM after doing a clean
> install on NFS.
>
>
> Yaniv Dary
>
> Technical Product Manager
>
> Red Hat Israel Ltd.
>
> 34 Jerusalem Road
>
> Building A, 4th floor
>
> Ra'anana, Israel 4350109
>
>
>
> Tel : +972 (9) 7692306
>
> 8272306
>
> Email: yd...@redhat.com
>
> IRC : ydary
>
>
>
> On Sat, Feb 6, 2016 at 11:30 PM, Christophe TREFOIS <
> christophe.tref...@uni.lu> wrote:
>
> Dear all,
>
>
>
> I currently have a self-hosted setup with gluster on 1 node.
>
> I do have other data centers with 3 other hosts and local (sharable) NFS
> storage. Furthermore, I have 1 NFS export domain.
>
>
>
> We would like to move from Gluster to NFS only on the first host.
>
>
>
> Does anybody have any experience with this?
>
>
>
> Thank you,
>
>
>
> —
>
> Christophe
>
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] lvm_dev_whitelist vdsm.conf option

2016-02-15 Thread Adam Litke

On 11/02/16 17:22 +0100, Johannes Tiefenbacher wrote:

Hi,
I am wondering what the lvm_dev_whitelist vdsm.conf option is for?

I had some issues with vdsm switching off logical volumes that it
had better keep its hands off.


Could this option help to handle this?


Yes, this option should be exactly what you are looking for.  You
specify a comma-separated list of glob expressions and vdsm will
instruct lvm to ignore all devices which are not matched by your list.

See:
https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lvm_filters.html
for more background information on the LVM filters.
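
As an illustration, a sketch of what this could look like in vdsm.conf; the
[irs] section name and the device globs here are assumptions, so check the
defaults shipped with your vdsm version:

  # /etc/vdsm/vdsm.conf (illustrative sketch)
  [irs]
  # vdsm builds an lvm filter from this list; devices not matching one
  # of these globs are ignored by vdsm's lvm commands
  lvm_dev_whitelist = /dev/sda3,/dev/mapper/36001405*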

--
Adam Litke
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] change pool icon

2016-02-15 Thread Alexander Wels
On Saturday, February 13, 2016 11:25:24 AM alireza sadeh seighalan wrote:
> hi everyone
> 
> how can i change a pool icon after creating it? i have around 40 pool vms,
> but now i want to change their icons. thanks in advance

Change the icon in the template that the pool is based on?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] "bridge port" error when setting up hosted-engine on 2nd oVirt 3.6.2 node

2016-02-15 Thread Yedidyah Bar David
On Mon, Feb 15, 2016 at 4:22 PM, Mike DePaulo  wrote:
> Hi,
>
> I am getting the following error when I attempt to set up my 2nd ovirt
> node as a hosted-engine host:
>
> RuntimeError: The selected device None is not a supported bridge port
>
> The 1st node is death-star (192.168.1.50)
> The 2nd node is starkiller-base (192.168.1.52)
>
> This is not a production environment; this is my apartment.
>
> I am able to access the engine's webGUI.
>
> Both death-star and starkiller-base start out with only eno1 as their only 
> NIC.
> death-star, by the end of the hosted-engine-setup, had ovirtmgmt also
> (with the same MAC address).
> starkiller-base does not have ovirtmgmt yet.
> I am not using VLANs.
>
> node version: ovirt-node-iso-3.6-0.999.201602121021.el7.centos.iso
> engine appliance version: oVirt-Engine-Appliance-CentOS-x86_64-7-20160126.ova

Seems like a bug in [1] - it seems we no longer configure the bridge
on the additional host. But I haven't tried that myself yet; I might be
missing something. Adding Simone.

[1] https://gerrit.ovirt.org/#/q/Ifcff652ef28e6b912514df2dd2daac2b07eca61e,n,z
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade path 3.5 -> 3.6

2016-02-15 Thread Alexander Wels
On Monday, February 15, 2016 02:40:47 PM Johan Kooijman wrote:
> Hi,
> 
> Can anybody recommend a best-practice upgrade path from
> oVirt 3.5 on C6 to 3.6 on C7.2?

The answer sort of depends on what you want. Do you want no downtime on your
VMs, or is downtime acceptable? Also, are you running hosted engine or not?

This is the basic plan, which can be adjusted based on what your needs are:

1. Update the engine from 3.5 to 3.6 (might be trickier if hosted engine; not
sure, I haven't played with hosted engine). A sketch follows after this list.
2. Create a new 3.6 cluster.
3. Put 1 host in maintenance (which will migrate the VMs to the other hosts).
4. Remove the host from the DC.
5. Install C7.2 on the host.
6. Add that host to the new 3.6 cluster.
7. Optional: you can cross-cluster live migrate some VMs from 6 to 7 (just not
the other way around, so once a VM is moved it's stuck in the new cluster).
8. Go to step 3 until all hosts are moved.
9. Your 3.5 cluster should now be empty, and can be removed.
10. Upgrade your DC to 3.6 (you can't upgrade if any lower clusters exist).
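
For step 1 on a dedicated (non-hosted-engine) engine, a minimal sketch of
the usual release-rpm flow; the repo URL below is the standard oVirt 3.6
one, but verify it against the release notes for your exact version:

  # enable the 3.6 repositories, then upgrade the setup packages and
  # re-run setup
  yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
  yum update ovirt-engine-setup
  engine-setup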

If you can have downtime, then just shut down the VMs running on the host in 
step 3 before putting it in maintenance. Once the host is moved to the new 
cluster you can start the VMs.

Alexander
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Upgrade path 3.5 -> 3.6

2016-02-15 Thread Johan Kooijman
Hi,

Can anybody recommend a best-practice upgrade path from
oVirt 3.5 on C6 to 3.6 on C7.2?

-- 
Met vriendelijke groeten / With kind regards,
Johan Kooijman
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] migration failed no available host found ....

2016-02-15 Thread Michal Skrivanek

> On 15 Feb 2016, at 12:00, Jean-Pierre Ribeauville  
> wrote:
> 
> Hi,
>  
> You hit the target !!!
>  
> I enabled overcommitting on the destination, and then I was able to migrate
> towards it.
>  
> Now I have to clarify my guests' memory requirements.
>  
> Thx for your help.
>  
> Regards,
>  
> J.P.
>  
> _
> From: Jean-Pierre Ribeauville
> Sent: Monday, February 15, 2016 11:03
> To: 'ILanit Stein'
> Cc: users@ovirt.org 
> Subject: RE: [ovirt-users] migration failed no available host found 
>  
>  
> Hi,
>  
> Within the oVirt GUI, I got this:
>  
> Max free Memory for scheduling new VMs : 0 Mb
>  
> It seems to be the root cause of my issue .
>  
> vmstat -s run on the destination host returns:
>  
> [root@ldc01omv01 vdsm]# vmstat -s
>  49182684 K total memory
>   4921536 K used memory
>   5999188 K active memory
>   1131436 K inactive memory
>  39891992 K free memory
>  2344 K buffer memory
>   4366812 K swap cache
>  24707068 K total swap
> 0 K used swap
>  24707068 K free swap
>   3090822 non-nice user cpu ticks
>  8068 nice user cpu ticks
>   2637035 system cpu ticks
> 804915819 idle cpu ticks
>298074 IO-wait cpu ticks
> 6 IRQ cpu ticks
>  5229 softirq cpu ticks
> 0 stolen cpu ticks
>  58678411 pages paged in
>  78586581 pages paged out
> 0 pages swapped in
> 0 pages swapped out
> 541412845 interrupts
>1224374736 CPU context switches
>1455276687 boot time
>476762 forks
> [root@ldc01omv01 vdsm]#
>  
>  
> Is it vdsm that returns this info to ovirt ?
>  
> I tried a migration this morning at 10:04.
>  
> I attached the destination vdsm log.
>  
> << File: vdsm.log >> 
>  
> Is it worth increasing the log level on the destination?
>  
>  
> Thx for help.
>  
> Regards,
>  
> J.P.
>  
> -Original Message-
> From: ILanit Stein [mailto:ist...@redhat.com ] 
> Sent: Sunday, February 14, 2016 10:05
> To: Jean-Pierre Ribeauville
> Cc: users@ovirt.org 
> Subject: Re: [ovirt-users] migration failed no available host found 
>  
> Hi Jean-Pierre,
>  
> It seems from the log you've sent that the destination host, ldc01omv01, is
> filtered out because of lack of memory.
> Is there enough memory on the destination to run this VM?
>  
> Would you please send the source/destination hosts /var/log/vdsm/vdsm.log, 
> /var/log/libvirt/qemu/VM_RHEL7-2, and /var/log/vdsm/libvirt.log, to provide 
> more details.
>  
> Thanks,
> Ilanit.
>  
> - Original Message -
> From: "Jean-Pierre Ribeauville"  >
> To: users@ovirt.org 
> Sent: Friday, February 12, 2016 4:59:20 PM
> Subject: [ovirt-users] migration failed no available host found 
>  
>  
>  
> Hi, 
>  
>  
>  
> When trying to migrate a Guest between two nodes of a cluster (from node1 to 
> ldc01omv01) , I got this error ( in ovirt/engine.log file) : 
>  
>  
>  
> 2016-02-12 15:05:31,485 INFO 
> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
> (ajp-/127.0.0.1:8702-4) [ff31b86] Candidate host ldc01omv01 
> (09bb3024-170f-48a1-a78a-951a2c61c680) was filtered out by 
> VAR__FILTERTYPE__INTERNAL filter Memory 
>  
> 2016-02-12 15:05:31,495 DEBUG 
> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
> (org.ovirt.thread.pool-7-thread-34) About to run task 
> java.util.concurrent.FutureTask from : java.lang.Exception 
>  
> at 
> org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalThreadExecutor.beforeExecute(ThreadPoolUtil.java:52)
>  [utils.jar:] 
>  
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [rt.jar:1.7.0_85] 
>  
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [rt.jar:1.7.0_85] 
>  
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_85] 
>  
>  
>  
> 2016-02-12 15:05:31,502 INFO 
> [org.ovirt.engine.core.bll.MigrateVmToServerCommand] 
> (org.ovirt.thread.pool-7-thread-34) [ff31b86] Running command: 
> MigrateVmToServerCommand internal: false. Entities affected : ID: 
> b77e6171-cbdf-44cb-b851-1b776a3fb616 Type: VMAction group MIGRATE_VM with 
> role type USER 
>  
> 2016-02-12 15:05:31,505 INFO 
> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
> (org.ovirt.thread.pool-7-thread-34) [ff31b86] Candidate host ldc01omv01 
> (09bb3024-170f-48a1-a78a-951a2c61c680) was filtered out by 
> VAR__FILTERTYPE__INTERNAL filter Memory (correlation id: ff31b86) 
>  
> 2016-02-12 15:05:31,509 WARN 
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (org.ovirt.thread.pool-7-thread-34) [ff31b86] Correlation ID: ff31b86, Job 
> ID: 7b917604-f487-43a3-9cd2-4e7f95e545cc, Call Stack: null, Custom Event ID: 
> -1, Message: Migration 

Re: [ovirt-users] ovirt-guest-agent DBUS exception

2016-02-15 Thread Vinzenz Feenstra
Adding it back to the list
> On Feb 15, 2016, at 10:04 AM, Vinzenz Feenstra  wrote:
> 
>> 
>> On Feb 12, 2016, at 7:53 PM, Jean-Pierre Ribeauville 
>> > wrote:
>> 
>> Hi,
>>  
>> When trying to run this command: /usr/bin/python 
>> /usr/share/ovirt-guest-agent/ovirt-guest-agent.py
>>  
>> I got this error:
>>  
>> Exception in thread CredServer:
>> Traceback (most recent call last):
>>   File "/usr/lib64/python2.7/threading.py", line 811, in __bootstrap_inner
>> self.run()
>>   File "/usr/share/ovirt-guest-agent/CredServer.py", line 253, in run
>> self._dbus = CredDBusObject()
>>   File "/usr/share/ovirt-guest-agent/CredServer.py", line 127, in __init__
>> self._name = dbus.service.BusName('org.ovirt.vdsm.Credentials', bus)
>>   File "/usr/lib64/python2.7/site-packages/dbus/service.py", line 131, in 
>> __new__
>> retval = bus.request_name(name, name_flags)
>>   File "/usr/lib64/python2.7/site-packages/dbus/bus.py", line 303, in 
>> request_name
>> 'su', (name, flags))
>>   File "/usr/lib64/python2.7/site-packages/dbus/connection.py", line 651, in 
>> call_blocking
>> message, timeout)
>> DBusException: org.freedesktop.DBus.Error.AccessDenied: Connection ":1.128" 
>> is not allowed to own the service "org.ovirt.vdsm.Credentials" due to 
>> security policies in the configuration file
>>  
>> Any hint to go further in the investigation?
> 
> 
> You need to ensure that the org.ovirt.vdsm.Credentials.conf is installed in 
> /etc/dbus-1/system.d
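> 
> A minimal sketch of such a policy file; the user name is an assumption, so
> match it to the account ovirt-guest-agent actually runs as:
> 
> <!-- /etc/dbus-1/system.d/org.ovirt.vdsm.Credentials.conf (sketch) -->
> <!DOCTYPE busconfig PUBLIC
>  "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
>  "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
> <busconfig>
>   <policy user="ovirtagent">
>     <!-- let the agent own the Credentials service name -->
>     <allow own="org.ovirt.vdsm.Credentials"/>
>   </policy>
> </busconfig>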
> 
>>  
>> Thx
> 
> 
>> Regards,
>>  
>> J.P. Ribeauville
>>  
>> P: +33.(0).1.47.17.20.49
>> .
>> Puteaux 3 Etage 5  Bureau 4
>>  
>> jpribeauvi...@axway.com 
>> http://www.axway.com 
>>  
>> Think of the environment before printing.
>>  
>>  
>> ___
>> Users mailing list
>> Users@ovirt.org 
>> http://lists.ovirt.org/mailman/listinfo/users 
>> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] migration failed no available host found ....

2016-02-15 Thread Jean-Pierre Ribeauville
Hi,

You hit the target !!!

I enabled overcommitting on the destination, and then I was able to migrate
towards it.

Now I have to clarify my guests' memory requirements.

Thx for your help.

Regards,

J.P.

_
From: Jean-Pierre Ribeauville
Sent: Monday, February 15, 2016 11:03
To: 'ILanit Stein'
Cc: users@ovirt.org
Subject: RE: [ovirt-users] migration failed no available host found 


Hi,

Within the oVirt GUI, I got this:

Max free Memory for scheduling new VMs : 0 Mb

It seems to be the root cause of my issue.

vmstat -s run on the destination host returns:

[root@ldc01omv01 vdsm]# vmstat -s
 49182684 K total memory
  4921536 K used memory
  5999188 K active memory
  1131436 K inactive memory
 39891992 K free memory
 2344 K buffer memory
  4366812 K swap cache
 24707068 K total swap
0 K used swap
 24707068 K free swap
  3090822 non-nice user cpu ticks
 8068 nice user cpu ticks
  2637035 system cpu ticks
804915819 idle cpu ticks
   298074 IO-wait cpu ticks
6 IRQ cpu ticks
 5229 softirq cpu ticks
0 stolen cpu ticks
 58678411 pages paged in
 78586581 pages paged out
0 pages swapped in
0 pages swapped out
541412845 interrupts
   1224374736 CPU context switches
   1455276687 boot time
   476762 forks
[root@ldc01omv01 vdsm]#


Is it vdsm that returns this info to oVirt?
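
(Vdsm's own view can be checked directly on the host; a hedged sketch,
assuming the vdsClient CLI shipped with vdsm 4.x, and stat key names that
may differ per version:)

vdsClient -s 0 getVdsStats | egrep 'memAvailable|memCommitted|memFree'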

I tried a migration this morning at 10:04.

I attached the destination vdsm log.

 << File: vdsm.log >>

Is it worth increasing the log level on the destination?


Thx for help.

Regards,

J.P.

-Original Message-
From: ILanit Stein [mailto:ist...@redhat.com]
Sent: Sunday, February 14, 2016 10:05
To: Jean-Pierre Ribeauville
Cc: users@ovirt.org
Subject: Re: [ovirt-users] migration failed no available host found 

Hi Jean-Pierre,

It seems from the log you've sent that the destination host, ldc01omv01, is
filtered out because of lack of memory.
Is there enough memory on the destination to run this VM?

Would you please send the source/destination hosts /var/log/vdsm/vdsm.log, 
/var/log/libvirt/qemu/VM_RHEL7-2, and /var/log/vdsm/libvirt.log, to provide 
more details.
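
(To see which scheduler filter rejected the host, you can also grep the
engine log on the engine machine; log path per a standard install:)

grep 'was filtered out' /var/log/ovirt-engine/engine.log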

Thanks,
Ilanit.

- Original Message -
From: "Jean-Pierre Ribeauville" 
>
To: users@ovirt.org
Sent: Friday, February 12, 2016 4:59:20 PM
Subject: [ovirt-users] migration failed no available host found 



Hi,



When trying to migrate a guest between two nodes of a cluster (from node1 to
ldc01omv01), I got this error (in the ovirt engine.log file):



2016-02-12 15:05:31,485 INFO 
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
(ajp-/127.0.0.1:8702-4) [ff31b86] Candidate host ldc01omv01 
(09bb3024-170f-48a1-a78a-951a2c61c680) was filtered out by 
VAR__FILTERTYPE__INTERNAL filter Memory

2016-02-12 15:05:31,495 DEBUG 
[org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
(org.ovirt.thread.pool-7-thread-34) About to run task 
java.util.concurrent.FutureTask from : java.lang.Exception

at 
org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalThreadExecutor.beforeExecute(ThreadPoolUtil.java:52)
 [utils.jar:]

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[rt.jar:1.7.0_85]

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[rt.jar:1.7.0_85]

at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_85]



2016-02-12 15:05:31,502 INFO 
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] 
(org.ovirt.thread.pool-7-thread-34) [ff31b86] Running command: 
MigrateVmToServerCommand internal: false. Entities affected : ID: 
b77e6171-cbdf-44cb-b851-1b776a3fb616 Type: VMAction group MIGRATE_VM with role 
type USER

2016-02-12 15:05:31,505 INFO 
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
(org.ovirt.thread.pool-7-thread-34) [ff31b86] Candidate host ldc01omv01 
(09bb3024-170f-48a1-a78a-951a2c61c680) was filtered out by 
VAR__FILTERTYPE__INTERNAL filter Memory (correlation id: ff31b86)

2016-02-12 15:05:31,509 WARN 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-7-thread-34) [ff31b86] Correlation ID: ff31b86, Job ID: 
7b917604-f487-43a3-9cd2-4e7f95e545cc, Call Stack: null, Custom Event ID: -1, 
Message: Migration failed, No available host found (VM: VM_RHEL7-2, Source: 
node1).





In the oVirt GUI, nothing strange.



How may I go further to investigate this issue?





Thx for help.



Regards,






J.P. Ribeauville




P: +33.(0).1.47.17.20.49

.

Puteaux 3 Etage 5 Bureau 4



jpribeauvi...@axway.com
http://www.axway.com






Think of the environment before printing.





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users