Re: [ovirt-users] Importing an OVA appliance

2017-02-11 Thread Shahar Havivi
Importing an OVA into oVirt currently works only for VMware OVAs;
use VMs -> Import -> Import OVA.
Make sure that the OVA file's permissions are vdsm:kvm (36:36).
As for exporting to OVA, it is still in progress.
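
For the permissions part, something like this should do (the path below is
just an example):

  $ ls -ln /path/to/appliance.ova        # the numeric owner:group should show 36 36
  $ sudo chown 36:36 /path/to/appliance.ova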

On Fri, Feb 10, 2017 at 5:00 AM, Anantha Raghava <
rag...@exzatechconsulting.com> wrote:

> Hi,
>
> Has anyone tried and documented the process of importing an appliance
> (OVA or qcow2) into oVirt? I also need a procedure to export a VM
> from oVirt into an appliance.
> --
>
> Thanks & Regards,
>
>
> Anantha Raghava eXzaTech Consulting And Services Pvt. Ltd.
> Do not print this e-mail unless required. Save Paper & trees.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host has available updates but nothing in yum

2017-02-11 Thread Yedidyah Bar David
On Sun, Feb 12, 2017 at 2:22 AM, Gianluca Cecchi
 wrote:
> Hello all,
> I have 2 CentOS nodes on oVirt 4.1 and since yesterday I see this event
> in the dashboard pane
>
> Host ovmsrv05 has available updates:
> collectd,collectd-disk,collectd-netlink,collectd-virt,collectd-write_http,fluentd,rubygem-fluent-plugin-rewrite-tag-filter,rubygem-fluent-plugin-secure-forward.
>
> Actually, collectd is not installed on my system
>
> $ sudo rpm -qa | grep collect
> libcollection-0.6.2-27.el7.x86_64
> $
>
> And " yum update: brings no new packeage... what to check?

Were these hosts installed as 4.1, or upgraded from 4.0? If upgraded, how?

collectd is a new requirement in 4.1. For your specific case, see also:

https://bugzilla.redhat.com/1405810
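
If you want to double-check where those packages would come from, something
like this should work (the repo ids below are only a guess, match them
against what "yum repolist" shows on your host):

  $ yum repolist enabled | grep -i -E 'ovirt|opstools'
  $ yum list available collectd collectd-virt fluentd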

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] "epel-release preventing host update."

2017-02-11 Thread Yedidyah Bar David
On Fri, Feb 10, 2017 at 12:56 PM, Arman Khalatyan  wrote:
> After upgrading to oVirt 4.1, EPEL has a conflict in the collectd part:
> if you disable EPEL on the hosts it is OK.
>
> The hosts are not able to check for or install updates because of this:
>
> 2017-02-10 11:51:17,056+01 INFO
> [org.ovirt.engine.core.bll.scheduling.policyunits.EvenGuestDistributionBalancePolicyUnit]
> (DefaultQuartzScheduler6) [a13d0a89-a1d4-495c-9ed4-5e30aae11ae8] There is no
> host with more than 10 running guests, no balancing is needed
> 2017-02-10 11:51:17,069+01 WARN
> [org.ovirt.engine.core.bll.scheduling.policyunits.CpuAndMemoryBalancingPolicyUnit]
> (DefaultQuartzScheduler6) [a13d0a89-a1d4-495c-9ed4-5e30aae11ae8] All
> candidate hosts have been filtered, can't balance the cluster 'clei' based
> on the CPU usage, will try memory based approach
> 2017-02-10 11:51:17,099+01 INFO
> [org.ovirt.engine.core.bll.scheduling.policyunits.PowerSavingBalancePolicyUnit]
> (DefaultQuartzScheduler6) [a13d0a89-a1d4-495c-9ed4-5e30aae11ae8] Automatic
> power management is disabled for cluster 'clei'.
> 2017-02-10 11:51:27,278+01 ERROR
> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy) [a9265eb]
> Yum: [u'collectd-write_http-5.7.0-2.el7.x86_64 requires collectd(x86-64) =
> 5.7.0-2.el7', u'collectd-disk-5.7.0-2.el7.x86_64 requires collectd(x86-64) =
> 5.7.0-2.el7']
> 2017-02-10 11:51:27,278+01 INFO
> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy) [a9265eb]
> Yum: Performing yum transaction rollback
> 2017-02-10 11:51:27,281+01 ERROR
> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy) [a9265eb]
> Failed to execute stage 'Package installation':
> [u'collectd-write_http-5.7.0-2.el7.x86_64 requires collectd(x86-64) =
> 5.7.0-2.el7', u'collectd-disk-5.7.0-2.el7.x86_64 requires collectd(x86-64) =
> 5.7.0-2.el7']
> 2017-02-10 11:51:27,281+01 INFO
> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy) [a9265eb]
> Yum Performing yum transaction rollback
> 2017-02-10 11:51:27,378+01 INFO
> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy) [a9265eb]
> Stage: Pre-termination
> 2017-02-10 11:51:27,400+01 INFO
> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy) [a9265eb]
> Retrieving installation logs to:
> '/var/log/ovirt-engine/host-deploy/ovirt-host-mgmt-20170210115127-clei36.cls-a9265eb.log'
> 2017-02-10 11:51:27,533+01 INFO
> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy) [a9265eb]
> Stage: Termination
> 2017-02-10 11:51:27,708+01 ERROR
> [org.ovirt.engine.core.uutils.ssh.SSHDialog] (pool-5-thread-3) [a9265eb] SSH
> error running command r...@clei36.cls:'umask 0077;
> MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XX)"; trap
> "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" >
> /dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x &&
> "${MYTMP}"/ovirt-host-mgmt DIALOG/dialect=str:machine
> DIALOG/customization=bool:True': Command returned failure code 1 during SSH
> session 'r...@clei36.cls'
> 2017-02-10 11:51:27,708+01 ERROR
> [org.ovirt.engine.core.uutils.ssh.SSHDialog] (pool-5-thread-3) [a9265eb]
> Exception: java.io.IOException: Command returned failure code 1 during SSH
> session 'r...@clei36.cls'

This is a known issue, mentioned in:

https://www.ovirt.org/release/4.1.0/

(search for EPEL).
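
The workaround there amounts to keeping the conflicting collectd packages
out of EPEL on the hosts. As a sketch (check the release notes for the
exact recommendation), an exclude line under the existing [epel] section
does that:

  # in /etc/yum.repos.d/epel.repo, under the [epel] section:
  exclude=collectd*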

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network issues

2017-02-11 Thread Edward Haas
It's unclear what the current state of your network configuration is;
perhaps you can share
your /etc/sysconfig/network-scripts content and the output of "ip addr".

Please also include the following information:
- What are the networks you expect to have on the host?
- The persisted VDSM config currently on the host:
/var/lib/vdsm/persistence/netconf
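
If it helps, something along these lines should collect all of that in one
archive you can attach (file names are just examples):

  $ ip addr > /tmp/ip-addr.txt
  $ sudo tar czf /tmp/net-config.tar.gz /etc/sysconfig/network-scripts \
        /var/lib/vdsm/persistence/netconf /tmp/ip-addr.txt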

Thanks,
Edy.


On Fri, Feb 10, 2017 at 7:31 PM, Bryan Sockel  wrote:

> Need some help.  I was doing some work on one of my oVirt hosts: I removed
> it from the cluster and re-added it to bring it back into my
> working cluster.  I got a message that my networks were out of sync.  I
> accidentally synced up my working network, and now neither of my hosts will
> launch my hosted engine appliance.  While the machines aren't currently
> mission critical, I do need to get these machines back up and running.
>
>
> The network configuration is supposed to be 6 NICs forming a bond, with
> my VLANs hanging off that bond, including my ovirtmgmt network.
>
>
>
>
> Feb 10 11:17:22 vm-host-altn-2.altn.int kernel: tg3 :81:00.1 p1p2:
> Flow control is off for TX and off for RX
> Feb 10 11:17:22 vm-host-altn-2.altn.int kernel: tg3 :81:00.1 p1p2:
> EEE is disabled
> Feb 10 11:17:22 vm-host-altn-2.altn.int kernel: bond0: link status
> definitely up for interface p1p2, 1000 Mbps full duplex
> Feb 10 11:17:26 vm-host-altn-2.altn.int network[1522]: Bringing up
> interface bond0.102:  [  OK  ]
> Feb 10 11:17:26 vm-host-altn-2.altn.int network[1522]: Bringing up
> interface bond0.20:  can't add bond0.20 to bridge DMZ: Operation not
> supported
> Feb 10 11:17:26 vm-host-altn-2.altn.int network[1522]: [  OK  ]
> Feb 10 11:17:26 vm-host-altn-2.altn.int network[1522]: Bringing up
> interface bond0.30:  can't add bond0.30 to bridge Devel: Operation not
> supported
> Feb 10 11:17:26 vm-host-altn-2.altn.int network[1522]: [  OK  ]
> Feb 10 11:17:26 vm-host-altn-2.altn.int network[1522]: Bringing up
> interface bond0.40:  can't add bond0.40 to bridge Lab: Operation not
> supported
> Feb 10 11:17:26 vm-host-altn-2.altn.int network[1522]: [  OK  ]
> Feb 10 11:17:26 vm-host-altn-2.altn.int network[1522]: Bringing up
> interface bond0.50:  can't add bond0.50 to bridge Workstation: Operation
> not supported
> Feb 10 11:17:26 vm-host-altn-2.altn.int network[1522]: [  OK  ]
> Feb 10 11:17:26 vm-host-altn-2.altn.int network[1522]: Bringing up
> interface Workstation:  device Workstation already exists; can't create
> bridge with the sa
> Feb 10 11:17:26 vm-host-altn-2.altn.int network[1522]: [FAILED]
> Feb 10 11:17:26 vm-host-altn-2.altn.int systemd[1]: network.service:
> control process exited, code=exited status=1
> Feb 10 11:17:26 vm-host-altn-2.altn.int systemd[1]: Failed to start LSB:
> Bring up/down networking.
> -- Subject: Unit network.service has failed
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit network.service has failed.
>
>
> ..
>
>
> Thanks
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Host has available updates but nothing in yum

2017-02-11 Thread Gianluca Cecchi
Hello all,
I have 2 CentOS nodes on oVirt 4.1 and since yesterday I see this event
in the dashboard pane

Host ovmsrv05 has available updates:
collectd,collectd-disk,collectd-netlink,collectd-virt,collectd-write_http,fluentd,rubygem-fluent-plugin-rewrite-tag-filter,rubygem-fluent-plugin-secure-forward.

Actually, collectd is not installed on my system

$ sudo rpm -qa | grep collect
libcollection-0.6.2-27.el7.x86_64
$

And " yum update: brings no new packeage... what to check?

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Requirements to use ovirt-image-repository

2017-02-11 Thread Gianluca Cecchi
Hello,
is it sufficient to open the glance port (9292) of glance.ovirt.org to the
outside, or is there anything else to do?
Testing from an environment without outside restrictions, it seems that
only the engine connects, correct?

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster storage question

2017-02-11 Thread Bartosiak-Jentys, Chris
Thanks for the links, I will add them to my reading list. I absolutely 
would read the docs before deploying oVirt in production, and I definitely 
would not use this storage configuration there; this is purely to keep from 
wasting electricity.


Chris.

On 2017-02-11 19:18, Doug Ingham wrote:

On 11 February 2017 at 15:39, Bartosiak-Jentys, Chris 
 wrote:



Thank you for your reply Doug,

I didn't use localhost as I was preparing to follow instructions (blog 
post: 
http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-part-two/) 
 for setting up CTDB and had already created hostnames for the 
floating IP when I decided to ditch that and go with the hosts file 
hack. I already had the volumes mounted on those hostnames but you are 
absolutely right, simply using localhost would be the best option.


oVirt 3.5? 2014? That's old. Both oVirt & Gluster have moved on a 
lot since then. I would strongly recommend studying Gluster's 
documentation before implementing it in production. It's not 
complicated, but you have to have a good understanding of what you're 
doing & why if you want to protect the integrity of your data & avoid 
waking up one day to find everything in meltdown.


https://gluster.readthedocs.io/en/latest/

Red Hat's portal is also very good & full of detailed tips for tuning 
your setup, however their "stable" versions (which they have to 
support) are of course much older than the project's own latest stable, 
so keep this in mind when considering their advice.


https://access.redhat.com/documentation/en/red-hat-storage/

Likewise with their oVirt documentation, although their supported oVirt 
versions are much closer to the current stable release. It also 
features a lot of very good advice for configuring & tuning an oVirt 
(RHEV) & GlusterFS (RHGS) hyperconverged setup.


https://access.redhat.com/documentation/en/red-hat-virtualization/

For any other Gluster specific questions, you can usually get good & 
timely responses on their mailing list & IRC channel.


Thank you for your suggested outline of how to power up/down the 
cluster, I hadn't considered the fact that turning on two out of date 
nodes would clobber data on the new node. This is something I will need 
to be very careful to avoid. The setup is mostly for lab work so not 
really mission critical but I do run a few VM's (freeIPA, GitLab and 
pfSense) that I'd like to keep up 24/7. I make regular backups (outside 
of ovirt) of those just in case.


Thanks, I will do some reading on how gluster handles quorum and heal 
operations but your procedure sounds like a sensible way to operate 
this cluster.


Regards,

Chris.

On 2017-02-11 18:08, Doug Ingham wrote:

On 11 February 2017 at 13:32, Bartosiak-Jentys, Chris 
 wrote:

Hello list,

Just wanted to get your opinion on my ovirt home lab setup. While this 
is not a production setup I would like it to run relatively reliably so 
please tell me if the following storage configuration is likely to 
result in corruption or just bat s**t insane.


I have a 3 node hosted engine setup, VM data store and engine data 
store are both replica 3 gluster volumes (one brick on each host).
I do not want to run all 3 hosts 24/7 due to electricity costs, I only 
power up the larger hosts (2 Dell R710's) when I need additional 
resources for VM's.


I read about using CTDB and floating/virtual IP's to allow the storage 
mount point to transition between available hosts but after some 
thought decided to go about this another, simpler, way:


I created a common hostname for the storage mount points: gfs-data and 
gfs-engine


On each host I edited /etc/hosts file to have these hostnames resolve 
to each hosts IP i.e. on host1 gfs-data & gfs-engine --> host1 IP

on host2 gfs-data & gfs-engine --> host2 IP
etc.

In ovirt engine each storage domain is mounted as gfs-data:/data and 
gfs-engine:/engine
My thinking is that this way no matter which host is up and acting as 
SPM it will be able to mount the storage as its only dependent on that 
host being up.


I changed gluster options for server-quorum-ratio so that the volumes 
remain up even if quorum is not met, I know this is risky but its just 
a lab setup after all.


So, any thoughts on the /etc/hosts method to ensure the storage mount 
point is always available? Is data corruption more or less inevitable 
with this setup? Am I insane ;) ?


Why not just use localhost? And no need for CTDB with a floating IP, 
oVirt uses libgfapi for Gluster which deals with that all natively.


As for the quorum issue, I would most definitely *not* run with quorum 
disabled when you're running more than one node. As you say you 
specifically plan for when the other 2 nodes of the replica 3 set will 
be active or not, I'd do something along the lines of the following...


Going from 3 nodes to 1 node:
- Put nodes 2 & 3 in maintenance to offload their virtual load;
- Once the 2 nodes are free of load, disable quorum on the Gluster 
volumes;

- Power

[ovirt-users] Issue with Moving Disks

2017-02-11 Thread Bryan Sockel
Hi,

I attempted to move a few disks from a GlusterFS storage domain to an iSCSI one. I 
got a number of errors during the process, and it looks like a few of the 
disk moves are hung.  They are hung at various percentages. In the events, 
it says moving Disk X from A to B; validation completes, but copying the volume 
returns an error.

Is there a way to kill the move process so I can have access back to the 
VMs?



Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster storage question

2017-02-11 Thread Doug Ingham
On 11 February 2017 at 15:39, Bartosiak-Jentys, Chris <
chris.bartosiak-jen...@certico.co.uk> wrote:

> Thank you for your reply Doug,
>
> I didn't use localhost as I was preparing to follow instructions (blog
> post: http://community.redhat.com/blog/2014/11/up-and-
> running-with-ovirt-3-5-part-two/)  for setting up CTDB and had already
> created hostnames for the floating IP when I decided to ditch that and go
> with the hosts file hack. I already had the volumes mounted on those
> hostnames but you are absolutely right, simply using localhost would be the
> best option.
>
oVirt 3.5? 2014? That's old. Both oVirt & Gluster have moved on a lot
since then. I would strongly recommend studying Gluster's documentation
before implementing it in production. It's not complicated, but you have to
have a good understanding of what you're doing & why if you want to protect
the integrity of your data & avoid waking up one day to find everything in
meltdown.

https://gluster.readthedocs.io/en/latest/

Red Hat's portal is also very good & full of detailed tips for tuning your
setup, however their "stable" versions (which they have to support) are of
course much older than the project's own latest stable, so keep this in
mind when considering their advice.

https://access.redhat.com/documentation/en/red-hat-storage/

Likewise with their oVirt documentation, although their supported oVirt
versions are much closer to the current stable release. It also features a
lot of very good advice for configuring & tuning an oVirt (RHEV) &
GlusterFS (RHGS) hyperconverged setup.

https://access.redhat.com/documentation/en/red-hat-virtualization/

For any other Gluster specific questions, you can usually get good & timely
responses on their mailing list & IRC channel.

> Thank you for your suggested outline of how to power up/down the cluster, I
> hadn't considered the fact that turning on two out of date nodes would
> clobber data on the new node. This is something I will need to be very
> careful to avoid. The setup is mostly for lab work so not really mission
> critical but I do run a few VM's (freeIPA, GitLab and pfSense) that I'd
> like to keep up 24/7. I make regular backups (outside of ovirt) of those
> just in case.
>
> Thanks, I will do some reading on how gluster handles quorum and heal
> operations but your procedure sounds like a sensible way to operate this
> cluster.
>
> Regards,
>
> Chris.
>
>
> On 2017-02-11 18:08, Doug Ingham wrote:
>
>
>
> On 11 February 2017 at 13:32, Bartosiak-Jentys, Chris <
> chris.bartosiak-jen...@certico.co.uk> wrote:
>
>> Hello list,
>>
>> Just wanted to get your opinion on my ovirt home lab setup. While this is
>> not a production setup I would like it to run relatively reliably so please
>> tell me if the following storage configuration is likely to result in
>> corruption or just bat s**t insane.
>>
>> I have a 3 node hosted engine setup, VM data store and engine data store
>> are both replica 3 gluster volumes (one brick on each host).
>> I do not want to run all 3 hosts 24/7 due to electricity costs, I only
>> power up the larger hosts (2 Dell R710's) when I need additional resources
>> for VM's.
>>
>> I read about using CTDB and floating/virtual IP's to allow the storage
>> mount point to transition between available hosts but after some thought
>> decided to go about this another, simpler, way:
>>
>> I created a common hostname for the storage mount points: gfs-data and
>> gfs-engine
>>
>> On each host I edited /etc/hosts file to have these hostnames resolve to
>> each hosts IP i.e. on host1 gfs-data & gfs-engine --> host1 IP
>> on host2 gfs-data & gfs-engine --> host2 IP
>> etc.
>>
>> In ovirt engine each storage domain is mounted as gfs-data:/data and
>> gfs-engine:/engine
>> My thinking is that this way no matter which host is up and acting as SPM
>> it will be able to mount the storage as its only dependent on that host
>> being up.
>>
>> I changed gluster options for server-quorum-ratio so that the volumes
>> remain up even if quorum is not met, I know this is risky but its just a
>> lab setup after all.
>>
>> So, any thoughts on the /etc/hosts method to ensure the storage mount
>> point is always available? Is data corruption more or less inevitable with
>> this setup? Am I insane ;) ?
>
>
> Why not just use localhost? And no need for CTDB with a floating IP, oVirt
> uses libgfapi for Gluster which deals with that all natively.
>
> As for the quorum issue, I would most definitely *not* run with quorum
> disabled when you're running more than one node. As you say you
> specifically plan for when the other 2 nodes of the replica 3 set will be
> active or not, I'd do something along the lines of the following...
>
> Going from 3 nodes to 1 node:
>  - Put nodes 2 & 3 in maintenance to offload their virtual load;
>  - Once the 2 nodes are free of load, disable quorum on the Gluster
> volumes;
>  - Power down the 2 nodes.
>
> Going from 1 node to 3 nodes:
>  - Power on *only* 1 of the 

Re: [ovirt-users] Gluster storage question

2017-02-11 Thread Bartosiak-Jentys, Chris
Thank you for your reply Doug, 

I didn't use localhost as I was preparing to follow instructions (blog
post:
http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-part-two/)
 for setting up CTDB and had already created hostnames for the floating
IP when I decided to ditch that and go with the hosts file hack. I
already had the volumes mounted on those hostnames but you are
absolutely right, simply using localhost would be the best option. 

Thank you for your suggested outline of how to power up/down the
cluster, I hadn't considered the fact that turning on two out of date
nodes would clobber data on the new node. This is something I will need
to be very careful to avoid. The setup is mostly for lab work so not
really mission critical but I do run a few VM's (freeIPA, GitLab and
pfSense) that I'd like to keep up 24/7. I make regular backups (outside
of ovirt) of those just in case. 

Thanks, I will do some reading on how gluster handles quorum and heal
operations but your procedure sounds like a sensible way to operate this
cluster. 

Regards, 

Chris. 

On 2017-02-11 18:08, Doug Ingham wrote:

> On 11 February 2017 at 13:32, Bartosiak-Jentys, Chris 
>  wrote:
> 
>> Hello list,
>> 
>> Just wanted to get your opinion on my ovirt home lab setup. While this is 
>> not a production setup I would like it to run relatively reliably so please 
>> tell me if the following storage configuration is likely to result in 
>> corruption or just bat s**t insane.
>> 
>> I have a 3 node hosted engine setup, VM data store and engine data store are 
>> both replica 3 gluster volumes (one brick on each host).
>> I do not want to run all 3 hosts 24/7 due to electricity costs, I only power 
>> up the larger hosts (2 Dell R710's) when I need additional resources for 
>> VM's.
>> 
>> I read about using CTDB and floating/virtual IP's to allow the storage mount 
>> point to transition between available hosts but after some thought decided 
>> to go about this another, simpler, way:
>> 
>> I created a common hostname for the storage mount points: gfs-data and 
>> gfs-engine
>> 
>> On each host I edited /etc/hosts file to have these hostnames resolve to 
>> each hosts IP i.e. on host1 gfs-data & gfs-engine --> host1 IP
>> on host2 gfs-data & gfs-engine --> host2 IP
>> etc.
>> 
>> In ovirt engine each storage domain is mounted as gfs-data:/data and 
>> gfs-engine:/engine
>> My thinking is that this way no matter which host is up and acting as SPM it 
>> will be able to mount the storage as its only dependent on that host being 
>> up.
>> 
>> I changed gluster options for server-quorum-ratio so that the volumes remain 
>> up even if quorum is not met, I know this is risky but its just a lab setup 
>> after all.
>> 
>> So, any thoughts on the /etc/hosts method to ensure the storage mount point 
>> is always available? Is data corruption more or less inevitable with this 
>> setup? Am I insane ;) ?
> 
> Why not just use localhost? And no need for CTDB with a floating IP, oVirt 
> uses libgfapi for Gluster which deals with that all natively. 
> 
> As for the quorum issue, I would most definitely *not* run with quorum 
> disabled when you're running more than one node. As you say you specifically 
> plan for when the other 2 nodes of the replica 3 set will be active or not, 
> I'd do something along the lines of the following...
> 
> Going from 3 nodes to 1 node: 
> - Put nodes 2 & 3 in maintenance to offload their virtual load; 
> - Once the 2 nodes are free of load, disable quorum on the Gluster volumes; 
> - Power down the 2 nodes.
> 
> Going from 1 node to 3 nodes: 
> - Power on *only* 1 of the pair of nodes (if you power on both & self-heal is 
> enabled, Gluster will "heal" the files on the main node with the older files 
> on the 2 nodes which were powered down); 
> - Allow Gluster some time to detect that the files are in split-brain; 
> - Tell Gluster to heal the files in split-brain based on modification time; 
> - Once the 2 nodes are in sync, re-enable quorum & power on the last node, 
> which will be resynchronised automatically; 
> - Take the 2 hosts out of maintenance mode. 
> 
> If you want to power on the 2nd two nodes at the same time, make absolutely 
> sure self-heal is disabled first! If you don't, Gluster will see the 2nd two 
> nodes as in quorum & heal the data on your 1st node with the out-of-date 
> data. 
> 
> -- 
> Doug

-- 

Chris Bartosiak-Jentys
Certico
Tel: 0 444 884
Mob: 077 0246 8132 
e-mail: ch...@certico.co.uk 
www.certico.co.uk

-

Confidentiality Notice: the information contained in this email and any
attachments may be legally privileged and confidential.
If you are not an intended recipient, you are hereby notified that any
dissemination, distribution, or copying of this e-mail is strictly
prohibited.
If you have received this e-mail in error, please notify the sender and
permanently delete the e-mail and any attachments immediately.
You should not re

Re: [ovirt-users] Gluster storage question

2017-02-11 Thread Doug Ingham
On 11 February 2017 at 13:32, Bartosiak-Jentys, Chris <
chris.bartosiak-jen...@certico.co.uk> wrote:

> Hello list,
>
> Just wanted to get your opinion on my ovirt home lab setup. While this is
> not a production setup I would like it to run relatively reliably so please
> tell me if the following storage configuration is likely to result in
> corruption or just bat s**t insane.
>
> I have a 3 node hosted engine setup, VM data store and engine data store
> are both replica 3 gluster volumes (one brick on each host).
> I do not want to run all 3 hosts 24/7 due to electricity costs, I only
> power up the larger hosts (2 Dell R710's) when I need additional resources
> for VM's.
>
> I read about using CTDB and floating/virtual IP's to allow the storage
> mount point to transition between available hosts but after some thought
> decided to go about this another, simpler, way:
>
> I created a common hostname for the storage mount points: gfs-data and
> gfs-engine
>
> On each host I edited /etc/hosts file to have these hostnames resolve to
> each hosts IP i.e. on host1 gfs-data & gfs-engine --> host1 IP
> on host2 gfs-data & gfs-engine --> host2 IP
> etc.
>
> In ovirt engine each storage domain is mounted as gfs-data:/data and
> gfs-engine:/engine
> My thinking is that this way no matter which host is up and acting as SPM
> it will be able to mount the storage as its only dependent on that host
> being up.
>
> I changed gluster options for server-quorum-ratio so that the volumes
> remain up even if quorum is not met, I know this is risky but its just a
> lab setup after all.
>
> So, any thoughts on the /etc/hosts method to ensure the storage mount
> point is always available? Is data corruption more or less inevitable with
> this setup? Am I insane ;) ?
>

Why not just use localhost? And no need for CTDB with a floating IP, oVirt
uses libgfapi for Gluster which deals with that all natively.

As for the quorum issue, I would most definitely *not* run with quorum
disabled when you're running more than one node. As you say you
specifically plan for when the other 2 nodes of the replica 3 set will be
active or not, I'd do something along the lines of the following...

Going from 3 nodes to 1 node:
 - Put nodes 2 & 3 in maintenance to offload their virtual load;
 - Once the 2 nodes are free of load, disable quorum on the Gluster volumes;
 - Power down the 2 nodes.

Going from 1 node to 3 nodes:
 - Power on *only* 1 of the pair of nodes (if you power on both & self-heal
is enabled, Gluster will "heal" the files on the main node with the older
files on the 2 nodes which were powered down);
 - Allow Gluster some time to detect that the files are in split-brain;
 - Tell Gluster to heal the files in split-brain based on modification time;
 - Once the 2 nodes are in sync, re-enable quorum & power on the last node,
which will be resynchronised automatically;
 - Take the 2 hosts out of maintenance mode.

If you want to power on the 2nd two nodes at the same time, make absolutely
sure self-heal is disabled first! If you don't, Gluster will see the 2nd
two nodes as in quorum & heal the data on your 1st node with the
out-of-date data.
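
To sketch the Gluster commands involved (the volume name "data" is only an
example; please double-check against the Gluster docs before relying on
this):

  # before powering the pair down: relax server quorum and stop self-heal
  gluster volume set data cluster.server-quorum-type none
  gluster volume set data cluster.self-heal-daemon off

  # after powering a second node back on: heal by modification time
  gluster volume heal data info split-brain
  gluster volume heal data split-brain latest-mtime <path-within-volume>

  # once everything is back in sync: restore the defaults
  gluster volume set data cluster.self-heal-daemon on
  gluster volume set data cluster.server-quorum-type server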


-- 
Doug
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading oVirt-Node-NG from 4.0.3 to 4.0.6

2017-02-11 Thread Thomas Kendall
Hey Yuval,

If there is a way to download 4.0.6 and use it to upgrade the 4.0.3 node, I
have not found that documentation yet. Is that how I'm supposed to do it? Do
you have any links I can reference?

Thanks,
Thomas


On Feb 9, 2017 3:51 AM, "Yuval Turgeman"  wrote:

Hi, so 4.0.6 was downloaded but it is not upgrading the node ?

On Fri, Feb 3, 2017 at 11:58 PM, Thomas Kendall  wrote:

> We recently migrated from 3.6 to 4.0, but I'm a little confused about how
> to keep the nodes up to date. I see the auto-updates come through for my
> 4.0.3 nodes, but they don't seem to upgrade them to the newer 4.0.x
> releases.
>
> Is there a way to do this upgrade?  I have two nodes that were installed
> with 4.0.3, and I would like to bring them up to the same version as
> everything else.
>
> For reference, the 4.0.3 nodes were built off the 4.0-2016083011 iso, and
> the 4.0.6 nodes were built off the 4.0-2017011712 iso.
>
> Thanks,
> Thomas
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine errors after 4.1 upgrade.

2017-02-11 Thread Todd Punderson
Hi,
I was able to resolve the issue by reinstalling the hosts via
"Install/Reinstall" in the engine, with the hosted engine set to deploy. I did
that on all three hosts and it seems to be working fine now.
Thanks

On Wed, Feb 8, 2017 at 2:59 AM Yedidyah Bar David  wrote:

> On Wed, Feb 8, 2017 at 2:31 AM, Todd Punderson 
> wrote:
> > Seeing issues with my hosted engine, it seems it's unable to extract
> vm.conf
> > from storage. My ovirt-hosted-engine-ha/agent.log is full of this
> repeating
> > over and over. This is happening on all 3 of my hosts. My storage is
> > glusterfs on the hosts themselves.
> >
> > Hopefully this is enough info to get started.
> >
> > Thanks!
> >
> > MainThread::INFO::2017-02-07
> >
> 19:27:33,063::hosted_engine::612::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> > Initializing VDSM
> > MainThread::INFO::2017-02-07
> >
> 19:27:35,455::hosted_engine::639::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> > Connecting the storage
> > MainThread::INFO::2017-02-07
> >
> 19:27:35,456::storage_server::219::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> > Connecting storage server
> > MainThread::INFO::2017-02-07
> >
> 19:27:40,169::storage_server::226::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> > Connecting storage server
> > MainThread::INFO::2017-02-07
> >
> 19:27:40,202::storage_server::233::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> > Refreshing the storage domain
> > MainThread::INFO::2017-02-07
> >
> 19:27:40,418::hosted_engine::666::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> > Preparing images
> > MainThread::INFO::2017-02-07
> >
> 19:27:40,419::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
> > Preparing images
> > MainThread::INFO::2017-02-07
> >
> 19:27:43,370::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> > Reloading vm.conf from the shared storage domain
> > MainThread::INFO::2017-02-07
> >
> 19:27:43,371::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
> > Trying to get a fresher copy of vm configuration from the OVF_STORE
> > MainThread::INFO::2017-02-07
> >
> 19:27:45,968::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
> > Found OVF_STORE: imgUUID:3e14c1b5-5ade-4827-aad4-66c59824acd2,
> > volUUID:3cbeeb3b-f755-4d42-a654-8dab34213792
> > MainThread::INFO::2017-02-07
> >
> 19:27:46,257::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
> > Found OVF_STORE: imgUUID:9b49968b-5a62-4ab2-a2c5-b94bc0b2d3ce,
> > volUUID:8f4d69c5-73a7-4e8c-a58f-909b55efec7d
> > MainThread::INFO::2017-02-07
> >
> 19:27:46,355::ovf_store::112::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> > Extracting Engine VM OVF from the OVF_STORE
> > MainThread::INFO::2017-02-07
> >
> 19:27:46,366::ovf_store::119::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> > OVF_STORE volume path:
> > /rhev/data-center/mnt/glusterSD/ovirt01-gluster.doonga.org:
> _engine/536cd721-4396-4029-b1ea-8ce84738137e/images/9b49968b-5a62-4ab2-a2c5-b94bc0b2d3ce/8f4d69c5-73a7-4e8c-a58f-909b55efec7d
> > MainThread::ERROR::2017-02-07
> >
> 19:27:46,389::ovf_store::124::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> > Unable to extract HEVM OVF
> > MainThread::ERROR::2017-02-07
> >
> 19:27:46,390::config::235::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
> > Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf
>
> Can you please attach the output of:
>
> sudo -u vdsm dd
> if=/rhev/data-center/mnt/glusterSD/ovirt01-gluster.doonga.org:
> _engine/536cd721-4396-4029-b1ea-8ce84738137e/images/9b49968b-5a62-4ab2-a2c5-b94bc0b2d3ce/8f4d69c5-73a7-4e8c-a58f-909b55efec7d
> | tar -tvf -
>
> Thanks.
>
> Did everything work well in 4.0? How did you upgrade?
>
> Best,
> --
> Didi
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Gluster storage question

2017-02-11 Thread Bartosiak-Jentys, Chris

Hello list,

Just wanted to get your opinion on my ovirt home lab setup. While this 
is not a production setup I would like it to run relatively reliably so 
please tell me if the following storage configuration is likely to 
result in corruption or just bat s**t insane.


I have a 3 node hosted engine setup, VM data store and engine data store 
are both replica 3 gluster volumes (one brick on each host).
I do not want to run all 3 hosts 24/7 due to electricity costs, I only 
power up the larger hosts (2 Dell R710's) when I need additional 
resources for VM's.


I read about using CTDB and floating/virtual IP's to allow the storage 
mount point to transition between available hosts but after some thought 
decided to go about this another, simpler, way:


I created a common hostname for the storage mount points: gfs-data and 
gfs-engine


On each host I edited /etc/hosts file to have these hostnames resolve to 
each hosts IP i.e. on host1 gfs-data & gfs-engine --> host1 IP

on host2 gfs-data & gfs-engine --> host2 IP
etc.
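
For illustration, the entries look something like this (addresses made up):

  on host1:   192.168.1.11   gfs-data gfs-engine
  on host2:   192.168.1.12   gfs-data gfs-engine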

In ovirt engine each storage domain is mounted as gfs-data:/data and 
gfs-engine:/engine
My thinking is that this way, no matter which host is up and acting as 
SPM, it will be able to mount the storage, as it's only dependent on that 
host being up.


I changed the gluster server-quorum-ratio option so that the volumes 
remain up even if quorum is not met. I know this is risky, but it's just a 
lab setup after all.


So, any thoughts on the /etc/hosts method to ensure the storage mount 
point is always available? Is data corruption more or less inevitable 
with this setup? Am I insane ;) ?


Thanks,
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] add vm numa node message

2017-02-11 Thread Gianluca Cecchi
Hello,
yesterday I saw this message 3 times

Add VM NUMA node successfully.

and it seems to me that it was at the same time that I edited some VMs and,
in
Console --> Advanced parameters,
set "disable strict user checking".
Are they indeed related? What is the relation with NUMA?

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Trying oVirt 4.1

2017-02-11 Thread Douglas Schilling Landgraf
On Sat, Feb 11, 2017 at 02:29:15PM +0100, Andy Michielsen wrote:
> Hello all,
> 
> I was trying to give the latest oVirt a go and install it on my KVM system
> before committing it on any hardware.
> 
> The installation went fine. When logging in to the node https://:9090
> I wanted to install a hosted engine but I get these messages
> 
> Failed to execute stage 'Environment setup': [Errno 2] No such file or
> directory: '/etc/iscsi/initiatorname.iscsi'
> Hosted Engine deployment failed
> 
> But I must admit I have not set up iSCSI, nor do I want to. Maybe local, NFS
> or GlusterFS.
> 
> What is the way to do this properly ?

It should work out of the box; is it EL7 or Fedora? Could you please provide
logs, like vdsm.log and the host-deploy logs?

My suggestion for now is to create this file on the hypervisor you are
trying to deploy: 'touch /etc/iscsi/initiatorname.iscsi' and continue the 
deployment.
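
If an empty file turns out not to be enough, my guess is that generating a
proper initiator name would also work (iscsi-iname ships with
iscsi-initiator-utils):

  # echo "InitiatorName=$(/usr/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi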

--
Cheers
Douglas
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Trying oVirt 4.1

2017-02-11 Thread Andy Michielsen
Hello all,

I was trying to give the latest oVirt a go and install it on my KVM system
before committing it on any hardware.

The installation went fine. When logging in to the node https://:9090
I wanted to install a hosted engine but I get these messages

Failed to execute stage 'Environment setup': [Errno 2] No such file or
directory: '/etc/iscsi/initiatorname.iscsi'
Hosted Engine deployment failed

But I must admit I have not set up iSCSI, nor do I want to. Maybe local, NFS
or GlusterFS.

What is the way to do this properly ?

Kind regards.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] changing IP addresses

2017-02-11 Thread Ben De Luca
Hi,
I have a client who wants to change the IPs of an oVirt cluster
running hosted engine. Is there a guide for doing this, or would it be simpler
to rebuild the cluster and migrate the machines?

-Ben
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Optimizations for VoIP VM

2017-02-11 Thread Yaniv Kaul
On Feb 11, 2017 7:58 AM, "Jim Kusznir"  wrote:

Sorry for the delayed response, I finally found where gmail hid this
response... :(

So the application is FusionPBX, a FreeSwitch-based VoIP system, running on
a very unloaded (1% CPU load, 2-4 VMs running) system.  I've been
experiencing intermittent call breakup, which external support
immediately blamed on the virtualization solution, claiming that "You can't
virtualize VoIP systems without causing voice breakup and other call
quality issues".  Previously, I had attempted to run FreePBX
(asterisk-based) on a Hyper-V system, and I did find that to be the case;
moving over to very weak, but dedicated hardware, fixed the problem
immediately.

Since I sent this message, I did extensive testing with my system, and it
appears that the breakup is in fact network related.  I've been able to do
phone to phone calls on the local network for extended durations without
issue, and even have phone to phone calls on external networks without
issue.  However, calls going to my VoIP provider do break up, so it appears
to be the network route to my provider.

So, oVirt does not appear to be to blame (which I didn't think so, but was
hoping for some "expert information" to support this...It appears that I
got that and more with my tests).


Great to hear. I do believe that setting affinity and possibly taking into
account NUMA makes sense. Perhaps using SR-IOV is needed for low latency.
There is interesting work upstream in qemu to improve throughput and reduce
latency at the expense of more CPU usage.
Lastly, real time (mainly the kernel and qemu-kvm) is also technology that
might be needed for some workloads. See [1].
Y.

[1]  https://mpolednik.github.io/2016/09/19/real-time-host-in-ovirt/



Thank you again for your work on such a great product!

--Jim

On Wed, Jan 4, 2017 at 10:08 AM, Chris Adams  wrote:

> Once upon a time, Yaniv Dary  said:
> > Can you please describe the application network requirements?
> > Does it relay on low latency? Pass-through or SR-IOV could help with
> > reducing that.
>
> For VoIP, latency can be an issue, but the amount of latency from adding
> VM networking overhead isn't a big deal (because other network latency
> will have a larger impact).  10ms isn't really a problem for VoIP for
> example.
>
> The bigger network concern for VoIP is jitter; for that, the only
> solution is to not over-provision hardware CPUs or total network
> bandwidth.
>
> --
> Chris Adams 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users