[ovirt-users] Re: Ovirt cluster unstable; gluster to blame (again)

2018-07-06 Thread Jim Kusznir
So, I'm still at a loss...it sounds like it's either insufficient RAM/swap or
insufficient network, but it seems to be neither now.  At this point, it
appears that gluster is just "broken" and killing my systems for no
discernible reason.  Here are the details, all from the same system (currently
running 3 VMs):

[root@ovirt3 ~]# w
 22:26:53 up 36 days,  4:34,  1 user,  load average: 42.78, 55.98, 53.31
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    192.168.8.90     22:26    2.00s  0.12s  0.11s w

bwm-ng reports the highest data usage was about 6MB/s during this test (and
that was combined; I have two different gig networks.  One gluster network
(primary VM storage) runs on one, the other network handles everything
else).
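For context, bwm-ng was presumably watching per-interface byte rates; something
roughly like the following invocation (flags illustrative, from the stock tool):

  bwm-ng -u bytes -t 1000      # per-interface rates in bytes/s, refreshed every second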

[root@ovirt3 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:          31996       13236         232          18       18526       18195
Swap:         16383        1475       14908

top - 22:32:56 up 36 days,  4:41,  1 user,  load average: 17.99, 39.69, 47.66
Tasks: 407 total,   1 running, 405 sleeping,   1 stopped,   0 zombie
%Cpu(s):  8.6 us,  2.1 sy,  0.0 ni, 87.6 id,  1.6 wa,  0.0 hi,  0.1 si,  0.0 st
KiB Mem : 32764284 total,   228296 free, 13541952 used, 18994036 buff/cache
KiB Swap: 16777212 total, 15246200 free,  1531012 used. 18643960 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND

30036 qemu  20   0 6872324   5.2g  13532 S 144.6 16.5 216:14.55
/usr/libexec/qemu-kvm -name guest=BillingWin,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/v+
28501 qemu  20   0 5034968   3.6g  12880 S  16.2 11.7  73:44.99
/usr/libexec/qemu-kvm -name guest=FusionPBX,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/va+
 2694 root  20   0 2169224  12164   3108 S   5.0  0.0   3290:42
/usr/sbin/glusterfsd -s ovirt3.nwfiber.com --volfile-id
data.ovirt3.nwfiber.com.gluster-brick2-data -p /var/run/+
14293 root  15  -5  944700  13356   4436 S   4.0  0.0  16:32.15
/usr/sbin/glusterfs --volfile-server=192.168.8.11
--volfile-server=192.168.8.12 --volfile-server=192.168.8.13 --+
25100 vdsm   0 -20 6747440 107868  12836 S   2.3  0.3  21:35.20
/usr/bin/python2 /usr/share/vdsm/vdsmd

28971 qemu  20   0 2842592   1.5g  13548 S   1.7  4.7 241:46.49
/usr/libexec/qemu-kvm -name guest=unifi.palousetech.com,debug-threads=on -S
-object secret,id=masterKey0,format=+
12095 root  20   0  162276   2836   1868 R   1.3  0.0   0:00.25 top


 2708 root  20   0 1906040  12404   3080 S   1.0  0.0   1083:33
/usr/sbin/glusterfsd -s ovirt3.nwfiber.com --volfile-id
engine.ovirt3.nwfiber.com.gluster-brick1-engine -p /var/+
28623 qemu  20   0 4749536   1.7g  12896 S   0.7  5.5   4:30.64
/usr/libexec/qemu-kvm -name guest=billing.nwfiber.com,debug-threads=on -S
-object secret,id=masterKey0,format=ra+
   10 root  20   0   0  0  0 S   0.3  0.0 215:54.72
[rcu_sched]

 1030 sanlock   rt   0  773804  27908   2744 S   0.3  0.1  35:55.61
/usr/sbin/sanlock daemon

 1890 zabbix    20   0   83904   1696   1612 S   0.3  0.0  24:30.63
/usr/sbin/zabbix_agentd: collector [idle 1 sec]

 2722 root  20   0 1298004   6148   2580 S   0.3  0.0  38:10.82
/usr/sbin/glusterfsd -s ovirt3.nwfiber.com --volfile-id
iso.ovirt3.nwfiber.com.gluster-brick4-iso -p /var/run/gl+
 6340 root  20   0   0  0  0 S   0.3  0.0   0:04.30
[kworker/7:0]

10652 root  20   0   0  0  0 S   0.3  0.0   0:00.23
[kworker/u64:2]

14724 root  20   0 1076344  17400   3200 S   0.3  0.1  10:04.13
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -+
22011 root  20   0   0  0  0 S   0.3  0.0   0:05.04
[kworker/10:1]


Not sure why the system load dropped other than I was trying to take a
picture of it :)

In any case, it appears that at this time I have plenty of swap, RAM, and
network capacity, and yet things are still running very sluggishly; I'm still
getting e-mails from servers complaining about loss of communication with
one thing or another, and I still get e-mails from the engine about bad engine
status, then recovery, etc.

I've shut down 2/3 of my VMs, too, just trying to keep the critical ones
operating.

At this point, I don't believe the problem is the memory leak itself, but it
does seem to be triggered by it: all my problems started when I got low-RAM
warnings from one of my 3 nodes and began recovery efforts from there.

I really do like the idea / concept behind glusterfs, but I have to figure
out why it has performed so poorly from day one, and why it's caused 95% of my
outages (including several large ones lately).  If I can get it stable,
reliable, and well performing, then I'd love to keep it.  If I can't, then
perhaps NFS is the way to go?  I don't like the single-point-of-failure aspect
of it, but my other NAS boxes I run for clients (central storage for 

[ovirt-users] Re: Ovirt cluster unstable; gluster to blame (again)

2018-07-06 Thread Johan Bernhardsson
Load like that is mostly I/O bound: either the machine is swapping or the network
is too slow. Check I/O wait in top.


And as for the problem where the OOM killer kills off gluster: does that mean
you don't monitor RAM usage on the servers? Either gluster is eating all your
RAM, swap becomes really I/O intensive, and the process then gets killed off,
or you have the wrong swap settings in sysctl.conf (there are tons of broken
guides that recommend setting swappiness to 0, but that disables swap on newer
kernels. The proper swappiness for swapping only when necessary is 1, or a
sufficiently low number like 10; the default is 60).
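A rough sketch of how to check both things on a host (commands are standard
EL7 tools; the sysctl.d file name is just illustrative):

  # I/O wait and per-device utilization (iostat comes from the sysstat package)
  top -b -n1 | head -5
  iostat -x 1 5

  # current swappiness, and a conservative persistent setting
  cat /proc/sys/vm/swappiness
  sysctl -w vm.swappiness=10
  echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf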



Moving to NFS will not improve things. You will get some memory back since
gluster won't be running, and that is good, but you will have a single node
that can fail with all your storage, it would still be on 1 gigabit only,
and your three-node cluster would easily saturate that link.


On July 7, 2018 04:13:13 Jim Kusznir  wrote:
So far it does not appear to be helping much. I'm still getting VM's 
locking up and all kinds of notices from overt engine about non-responsive 
hosts.  I'm still seeing load averages in the 20-30 range.


Jim

On Fri, Jul 6, 2018, 3:13 PM Jim Kusznir  wrote:
Thank you for the advice and help

I do plan on going 10Gbps networking; haven't quite jumped off that cliff 
yet, though.


I did put my data-hdd (main VM storage volume) onto a dedicated 1Gbps 
network, and I've watched throughput on that and never seen more than 
60GB/s achieved (as reported by bwm-ng).  I have a separate 1Gbps network 
for communication and ovirt migration, but I wanted to break that up 
further (separate out VM traffice from migration/mgmt traffic).  My three 
SSD-backed gluster volumes run the main network too, as I haven't been able 
to get them to move to the new network (which I was trying to use as all 
gluster).  I tried bonding, but that seamed to reduce performance rather 
than improve it.


--Jim

On Fri, Jul 6, 2018 at 2:52 PM, Jamie Lawrence  
wrote:


Hi Jim,

I don't have any targeted suggestions, because there isn't much to latch on 
to. I can say Gluster replica three  (no arbiters) on dedicated servers 
serving a couple Ovirt VM clusters here have not had these sorts of issues.


I suspect your long heal times (and the resultant long periods of high 
load) are at least partly related to 1G networking. That is just a matter 
of IO - heals of VMs involve moving a lot of bits. My cluster uses 10G 
bonded NICs on the gluster and ovirt boxes for storage traffic and separate 
bonded 1G for ovirtmgmt and communication with other machines/people, and 
we're occasionally hitting the bandwidth ceiling on the storage network. 
I'm starting to think about 40/100G, different ways of splitting up 
intensive systems, and considering iSCSI for specific volumes, although I 
really don't want to go there.


I don't run FreeNAS[1], but I do run FreeBSD as storage servers for their 
excellent ZFS implementation, mostly for backups. ZFS will make your `heal` 
problem go away, but not your bandwidth problems, which become worse 
(because of fewer NICS pushing traffic). 10G hardware is not exactly in the 
impulse-buy territory, but if you can, I'd recommend doing some testing 
using it. I think at least some of your problems are related.


If that's not possible, my next stops would be optimizing everything I 
could about sharding, healing and optimizing for serving the shard size to 
squeeze as much performance out of 1G as I could, but that will only go so far.


-j

[1] FreeNAS is just a storage-tuned FreeBSD with a GUI.



On Jul 6, 2018, at 1:19 PM, Jim Kusznir  wrote:

hi all:

Once again my production ovirt cluster is collapsing in on itself.  My 
servers are intermittently unavailable or degrading, customers are noticing 
and calling in.  This seems to be yet another gluster failure that I 
haven't been able to pin down.


I posted about this a while ago, but didn't get anywhere (no replies that I 
found).  The problem started out as a glusterfsd process consuming large 
amounts of ram (up to the point where ram and swap were exhausted and the 
kernel OOM killer killed off the glusterfsd process).  For reasons not 
clear to me at this time, that resulted in any VMs running on that host and 
that gluster volume to be paused with I/O error (the glusterfs process is 
usually unharmed; why it didn't continue I/O with other servers is 
confusing to me).


I have 3 servers and a total of 4 gluster volumes (engine, iso, data, and 
data-hdd).  The first 3 are replica 2+arb; the 4th (data-hdd) is replica 3. 
 The first 3 are backed by an LVM partition (some thin provisioned) on an 
SSD; the 4th is on a seagate hybrid disk (hdd + some internal flash for 
acceleration).  data-hdd is the only thing on the disk.  Servers are Dell 
R610 with the PERC/6i raid card, with the disks individually passed through 
to the OS (no raid enabled).


The above RAM usage issue came from the data-hdd volume.  Yesterday, I 
cought one of the 

[ovirt-users] Re: Ovirt cluster unstable; gluster to blame (again)

2018-07-06 Thread Jim Kusznir
So far it does not appear to be helping much. I'm still getting VMs
locking up and all kinds of notices from the oVirt engine about non-responsive
hosts.  I'm still seeing load averages in the 20-30 range.

Jim

On Fri, Jul 6, 2018, 3:13 PM Jim Kusznir  wrote:

> Thank you for the advice and help
>
> I do plan on going 10Gbps networking; haven't quite jumped off that cliff
> yet, though.
>
> I did put my data-hdd (main VM storage volume) onto a dedicated 1Gbps
> network, and I've watched throughput on that and never seen more than
> 60GB/s achieved (as reported by bwm-ng).  I have a separate 1Gbps network
> for communication and ovirt migration, but I wanted to break that up
> further (separate out VM traffice from migration/mgmt traffic).  My three
> SSD-backed gluster volumes run the main network too, as I haven't been able
> to get them to move to the new network (which I was trying to use as all
> gluster).  I tried bonding, but that seamed to reduce performance rather
> than improve it.
>
> --Jim
>
> On Fri, Jul 6, 2018 at 2:52 PM, Jamie Lawrence 
> wrote:
>
>> Hi Jim,
>>
>> I don't have any targeted suggestions, because there isn't much to latch
>> on to. I can say Gluster replica three  (no arbiters) on dedicated servers
>> serving a couple Ovirt VM clusters here have not had these sorts of issues.
>>
>> I suspect your long heal times (and the resultant long periods of high
>> load) are at least partly related to 1G networking. That is just a matter
>> of IO - heals of VMs involve moving a lot of bits. My cluster uses 10G
>> bonded NICs on the gluster and ovirt boxes for storage traffic and separate
>> bonded 1G for ovirtmgmt and communication with other machines/people, and
>> we're occasionally hitting the bandwidth ceiling on the storage network.
>> I'm starting to think about 40/100G, different ways of splitting up
>> intensive systems, and considering iSCSI for specific volumes, although I
>> really don't want to go there.
>>
>> I don't run FreeNAS[1], but I do run FreeBSD as storage servers for their
>> excellent ZFS implementation, mostly for backups. ZFS will make your `heal`
>> problem go away, but not your bandwidth problems, which become worse
>> (because of fewer NICS pushing traffic). 10G hardware is not exactly in the
>> impulse-buy territory, but if you can, I'd recommend doing some testing
>> using it. I think at least some of your problems are related.
>>
>> If that's not possible, my next stops would be optimizing everything I
>> could about sharding, healing and optimizing for serving the shard size to
>> squeeze as much performance out of 1G as I could, but that will only go so
>> far.
>>
>> -j
>>
>> [1] FreeNAS is just a storage-tuned FreeBSD with a GUI.
>>
>> > On Jul 6, 2018, at 1:19 PM, Jim Kusznir  wrote:
>> >
>> > hi all:
>> >
>> > Once again my production ovirt cluster is collapsing in on itself.  My
>> servers are intermittently unavailable or degrading, customers are noticing
>> and calling in.  This seems to be yet another gluster failure that I
>> haven't been able to pin down.
>> >
>> > I posted about this a while ago, but didn't get anywhere (no replies
>> that I found).  The problem started out as a glusterfsd process consuming
>> large amounts of ram (up to the point where ram and swap were exhausted and
>> the kernel OOM killer killed off the glusterfsd process).  For reasons not
>> clear to me at this time, that resulted in any VMs running on that host and
>> that gluster volume to be paused with I/O error (the glusterfs process is
>> usually unharmed; why it didn't continue I/O with other servers is
>> confusing to me).
>> >
>> > I have 3 servers and a total of 4 gluster volumes (engine, iso, data,
>> and data-hdd).  The first 3 are replica 2+arb; the 4th (data-hdd) is
>> replica 3.  The first 3 are backed by an LVM partition (some thin
>> provisioned) on an SSD; the 4th is on a seagate hybrid disk (hdd + some
>> internal flash for acceleration).  data-hdd is the only thing on the disk.
>> Servers are Dell R610 with the PERC/6i raid card, with the disks
>> individually passed through to the OS (no raid enabled).
>> >
>> > The above RAM usage issue came from the data-hdd volume.  Yesterday, I
>> cought one of the glusterfsd high ram usage before the OOM-Killer had to
>> run.  I was able to migrate the VMs off the machine and for good measure,
>> reboot the entire machine (after taking this opportunity to run the
>> software updates that ovirt said were pending).  Upon booting back up, the
>> necessary volume healing began.  However, this time, the healing caused all
>> three servers to go to very, very high load averages (I saw just under 200
>> on one server; typically they've been 40-70) with top reporting IO Wait at
>> 7-20%.  Network for this volume is a dedicated gig network.  According to
>> bwm-ng, initially the network bandwidth would hit 50MB/s (yes, bytes), but
>> tailed off to mostly in the kB/s for a while.  All machines' load averages
>> were still 40+ 

[ovirt-users] Re: OVN ACLs

2018-07-06 Thread Greg Sheremeta
Hi Niyazi!

cc'ing some people who may be able to assist.

Best wishes,
Greg

On Thu, Jul 5, 2018 at 3:20 PM Niyazi Elvan  wrote:

> Hi All,
>
> I have started testing oVirt 4.2 and focused on OVN recently. I was
> wondering whether there is a plan to manage L2->L7 ACLs through oVirt web
> ui.
> If not, how could it be possible to manage ACLs except command line tools
> ? Using opendaylight ??
>
> All the best !
> Niyazi
>
> --
> Niyazi Elvan


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com    IRC: gshereme



[ovirt-users] Re: import OVA failed

2018-07-06 Thread Greg Sheremeta
Hi,

The log you attached doesn't seem to contain any helpful information.
Please attach timely errors you see in vdsm.log and
/var/log/vdsm/import/import-[id].log

cc'ing Arik who may be able to help.

Best wishes,
Greg

On Thu, Jun 28, 2018 at 9:27 PM du_hon...@yeah.net 
wrote:

> Hi
> my ovirt-engine version is 4.2.1, I import OVA by web UI, face some error,
> can you give me some advise?
>
> --
>
> Regards
>
> Hongyu Du


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com    IRC: gshereme



[ovirt-users] Re: Ovirt cluster unstable; gluster to blame (again)

2018-07-06 Thread Jim Kusznir
Thank you for the advice and help.

I do plan on going 10Gbps networking; haven't quite jumped off that cliff
yet, though.

I did put my data-hdd (main VM storage volume) onto a dedicated 1Gbps
network, and I've watched throughput on that and never seen more than
60GB/s achieved (as reported by bwm-ng).  I have a separate 1Gbps network
for communication and ovirt migration, but I wanted to break that up
further (separate out VM traffic from migration/mgmt traffic).  My three
SSD-backed gluster volumes run on the main network too, as I haven't been able
to get them to move to the new network (which I was trying to use as an
all-gluster network).  I tried bonding, but that seemed to reduce performance
rather than improve it.

--Jim

On Fri, Jul 6, 2018 at 2:52 PM, Jamie Lawrence 
wrote:

> Hi Jim,
>
> I don't have any targeted suggestions, because there isn't much to latch
> on to. I can say Gluster replica three  (no arbiters) on dedicated servers
> serving a couple Ovirt VM clusters here have not had these sorts of issues.
>
> I suspect your long heal times (and the resultant long periods of high
> load) are at least partly related to 1G networking. That is just a matter
> of IO - heals of VMs involve moving a lot of bits. My cluster uses 10G
> bonded NICs on the gluster and ovirt boxes for storage traffic and separate
> bonded 1G for ovirtmgmt and communication with other machines/people, and
> we're occasionally hitting the bandwidth ceiling on the storage network.
> I'm starting to think about 40/100G, different ways of splitting up
> intensive systems, and considering iSCSI for specific volumes, although I
> really don't want to go there.
>
> I don't run FreeNAS[1], but I do run FreeBSD as storage servers for their
> excellent ZFS implementation, mostly for backups. ZFS will make your `heal`
> problem go away, but not your bandwidth problems, which become worse
> (because of fewer NICS pushing traffic). 10G hardware is not exactly in the
> impulse-buy territory, but if you can, I'd recommend doing some testing
> using it. I think at least some of your problems are related.
>
> If that's not possible, my next stops would be optimizing everything I
> could about sharding, healing and optimizing for serving the shard size to
> squeeze as much performance out of 1G as I could, but that will only go so
> far.
>
> -j
>
> [1] FreeNAS is just a storage-tuned FreeBSD with a GUI.
>
> > On Jul 6, 2018, at 1:19 PM, Jim Kusznir  wrote:
> >
> > hi all:
> >
> > Once again my production ovirt cluster is collapsing in on itself.  My
> servers are intermittently unavailable or degrading, customers are noticing
> and calling in.  This seems to be yet another gluster failure that I
> haven't been able to pin down.
> >
> > I posted about this a while ago, but didn't get anywhere (no replies
> that I found).  The problem started out as a glusterfsd process consuming
> large amounts of ram (up to the point where ram and swap were exhausted and
> the kernel OOM killer killed off the glusterfsd process).  For reasons not
> clear to me at this time, that resulted in any VMs running on that host and
> that gluster volume to be paused with I/O error (the glusterfs process is
> usually unharmed; why it didn't continue I/O with other servers is
> confusing to me).
> >
> > I have 3 servers and a total of 4 gluster volumes (engine, iso, data,
> and data-hdd).  The first 3 are replica 2+arb; the 4th (data-hdd) is
> replica 3.  The first 3 are backed by an LVM partition (some thin
> provisioned) on an SSD; the 4th is on a seagate hybrid disk (hdd + some
> internal flash for acceleration).  data-hdd is the only thing on the disk.
> Servers are Dell R610 with the PERC/6i raid card, with the disks
> individually passed through to the OS (no raid enabled).
> >
> > The above RAM usage issue came from the data-hdd volume.  Yesterday, I
> cought one of the glusterfsd high ram usage before the OOM-Killer had to
> run.  I was able to migrate the VMs off the machine and for good measure,
> reboot the entire machine (after taking this opportunity to run the
> software updates that ovirt said were pending).  Upon booting back up, the
> necessary volume healing began.  However, this time, the healing caused all
> three servers to go to very, very high load averages (I saw just under 200
> on one server; typically they've been 40-70) with top reporting IO Wait at
> 7-20%.  Network for this volume is a dedicated gig network.  According to
> bwm-ng, initially the network bandwidth would hit 50MB/s (yes, bytes), but
> tailed off to mostly in the kB/s for a while.  All machines' load averages
> were still 40+ and gluster volume heal data-hdd info reported 5 items
> needing healing.  Server's were intermittently experiencing IO issues, even
> on the 3 gluster volumes that appeared largely unaffected.  Even the OS
> activities on the hosts itself (logging in, running commands) would often
> be very delayed.  The ovirt engine was seemingly randomly throwing engine
> 

[ovirt-users] Re: Ovirt cluster unstable; gluster to blame (again)

2018-07-06 Thread Jamie Lawrence
Hi Jim,

I don't have any targeted suggestions, because there isn't much to latch on to.
I can say that Gluster replica three (no arbiters) on dedicated servers serving a
couple of oVirt VM clusters here has not had these sorts of issues.

I suspect your long heal times (and the resultant long periods of high load) 
are at least partly related to 1G networking. That is just a matter of IO - 
heals of VMs involve moving a lot of bits. My cluster uses 10G bonded NICs on 
the gluster and ovirt boxes for storage traffic and separate bonded 1G for 
ovirtmgmt and communication with other machines/people, and we're occasionally 
hitting the bandwidth ceiling on the storage network. I'm starting to think 
about 40/100G, different ways of splitting up intensive systems, and 
considering iSCSI for specific volumes, although I really don't want to go 
there.

I don't run FreeNAS[1], but I do run FreeBSD as storage servers for their 
excellent ZFS implementation, mostly for backups. ZFS will make your `heal` 
problem go away, but not your bandwidth problems, which become worse (because 
of fewer NICS pushing traffic). 10G hardware is not exactly in the impulse-buy 
territory, but if you can, I'd recommend doing some testing using it. I think 
at least some of your problems are related.

If that's not possible, my next stops would be optimizing everything I could
about sharding and healing, and tuning the shard size, to squeeze as much
performance out of 1G as I could, but that will only go so far.

-j

[1] FreeNAS is just a storage-tuned FreeBSD with a GUI.
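For the sharding and self-heal tuning mentioned above, a hedged sketch of the kind
of knobs involved, using the volume name from this thread (option names as in
Gluster 3.x; check 'gluster volume get <vol> all' on your version, and note that
1 / 1024 are the conservative upstream defaults rather than a tuned recommendation):

  # current shard size and self-heal settings
  gluster volume get data-hdd features.shard-block-size
  gluster volume get data-hdd cluster.shd-max-threads

  # keep self-heal throttled so it doesn't starve client I/O on a 1G link
  gluster volume set data-hdd cluster.shd-max-threads 1
  gluster volume set data-hdd cluster.shd-wait-qlength 1024

  # watch what is still pending
  gluster volume heal data-hdd info
  gluster volume heal data-hdd statistics heal-count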

> On Jul 6, 2018, at 1:19 PM, Jim Kusznir  wrote:
> 
> hi all:
> 
> Once again my production ovirt cluster is collapsing in on itself.  My 
> servers are intermittently unavailable or degrading, customers are noticing 
> and calling in.  This seems to be yet another gluster failure that I haven't 
> been able to pin down.
> 
> I posted about this a while ago, but didn't get anywhere (no replies that I 
> found).  The problem started out as a glusterfsd process consuming large 
> amounts of ram (up to the point where ram and swap were exhausted and the 
> kernel OOM killer killed off the glusterfsd process).  For reasons not clear 
> to me at this time, that resulted in any VMs running on that host and that 
> gluster volume to be paused with I/O error (the glusterfs process is usually 
> unharmed; why it didn't continue I/O with other servers is confusing to me).
> 
> I have 3 servers and a total of 4 gluster volumes (engine, iso, data, and 
> data-hdd).  The first 3 are replica 2+arb; the 4th (data-hdd) is replica 3.  
> The first 3 are backed by an LVM partition (some thin provisioned) on an SSD; 
> the 4th is on a seagate hybrid disk (hdd + some internal flash for 
> acceleration).  data-hdd is the only thing on the disk.  Servers are Dell 
> R610 with the PERC/6i raid card, with the disks individually passed through 
> to the OS (no raid enabled).
> 
> The above RAM usage issue came from the data-hdd volume.  Yesterday, I cought 
> one of the glusterfsd high ram usage before the OOM-Killer had to run.  I was 
> able to migrate the VMs off the machine and for good measure, reboot the 
> entire machine (after taking this opportunity to run the software updates 
> that ovirt said were pending).  Upon booting back up, the necessary volume 
> healing began.  However, this time, the healing caused all three servers to 
> go to very, very high load averages (I saw just under 200 on one server; 
> typically they've been 40-70) with top reporting IO Wait at 7-20%.  Network 
> for this volume is a dedicated gig network.  According to bwm-ng, initially 
> the network bandwidth would hit 50MB/s (yes, bytes), but tailed off to mostly 
> in the kB/s for a while.  All machines' load averages were still 40+ and 
> gluster volume heal data-hdd info reported 5 items needing healing.  Server's 
> were intermittently experiencing IO issues, even on the 3 gluster volumes 
> that appeared largely unaffected.  Even the OS activities on the hosts itself 
> (logging in, running commands) would often be very delayed.  The ovirt engine 
> was seemingly randomly throwing engine down / engine up / engine failed 
> notifications.  Responsiveness on ANY VM was horrific most of the time, with 
> random VMs being inaccessible.
> 
> I let the gluster heal run overnight.  By morning, there were still 5 items 
> needing healing, all three servers were still experiencing high load, and 
> servers were still largely unstable.
> 
> I've noticed that all of my ovirt outages (and I've had a lot, way more than 
> is acceptable for a production cluster) have come from gluster.  I still have 
> 3 VMs who's hard disk images have become corrupted by my last gluster crash 
> that I haven't had time to repair / rebuild yet (I believe this crash was 
> caused by the OOM issue previously mentioned, but I didn't know it at the 
> time).
> 
> Is gluster really ready for production yet?  It seems so 

[ovirt-users] Re: Ovirt cluster unstable; gluster to blame (again)

2018-07-06 Thread Darrell Budic
Jim-

In addition to my comments on the gluster-users list (go conservative on your
cluster-shd settings for all volumes), I have one oVirt-specific suggestion that
can help you in the situation you're in, at least if you're seeing the same client
side memory use issue I am on gluster 3.12.9+. Since it's client side, you can
(temporarily) recover the RAM by putting a node into maintenance (without
stopping gluster, and ignoring pending heals if needed), then re-activating it.
It will unmount the gluster volumes, restarting the gluster client (glusterfs)
processes that are hogging the RAM. Then do it to the next node, and the next.
This keeps you from having to reboot a node and making your heal situation worse.
You may have to repeat it occasionally, but it will keep you going, and you can
stagger it between nodes and/or just redistribute VMs afterward.

  -Darrell
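For anyone wanting to see the client-side usage Darrell describes before cycling a
node, a rough sketch (the ps invocation is standard procps; the REST calls named in
the comment are the usual host deactivate/activate actions, but verify them against
your engine version):

  # resident memory of gluster processes, largest first
  ps -eo pid,rss,comm,args --sort=-rss | grep '[g]luster' | head -5

  # then put the host into maintenance from the Admin Portal (or, if scripting it,
  # POST to /ovirt-engine/api/hosts/<id>/deactivate and later .../activate),
  # and re-activate it once the gluster volumes have been remounted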
> From: Jim Kusznir 
> Subject: [ovirt-users] Ovirt cluster unstable; gluster to blame (again)
> Date: July 6, 2018 at 3:19:34 PM CDT
> To: users
> 
> hi all:
> 
> Once again my production ovirt cluster is collapsing in on itself.  My 
> servers are intermittently unavailable or degrading, customers are noticing 
> and calling in.  This seems to be yet another gluster failure that I haven't 
> been able to pin down.
> 
> I posted about this a while ago, but didn't get anywhere (no replies that I 
> found).  The problem started out as a glusterfsd process consuming large 
> amounts of ram (up to the point where ram and swap were exhausted and the 
> kernel OOM killer killed off the glusterfsd process).  For reasons not clear 
> to me at this time, that resulted in any VMs running on that host and that 
> gluster volume to be paused with I/O error (the glusterfs process is usually 
> unharmed; why it didn't continue I/O with other servers is confusing to me).
> 
> I have 3 servers and a total of 4 gluster volumes (engine, iso, data, and 
> data-hdd).  The first 3 are replica 2+arb; the 4th (data-hdd) is replica 3.  
> The first 3 are backed by an LVM partition (some thin provisioned) on an SSD; 
> the 4th is on a seagate hybrid disk (hdd + some internal flash for 
> acceleration).  data-hdd is the only thing on the disk.  Servers are Dell 
> R610 with the PERC/6i raid card, with the disks individually passed through 
> to the OS (no raid enabled).
> 
> The above RAM usage issue came from the data-hdd volume.  Yesterday, I cought 
> one of the glusterfsd high ram usage before the OOM-Killer had to run.  I was 
> able to migrate the VMs off the machine and for good measure, reboot the 
> entire machine (after taking this opportunity to run the software updates 
> that ovirt said were pending).  Upon booting back up, the necessary volume 
> healing began.  However, this time, the healing caused all three servers to 
> go to very, very high load averages (I saw just under 200 on one server; 
> typically they've been 40-70) with top reporting IO Wait at 7-20%.  Network 
> for this volume is a dedicated gig network.  According to bwm-ng, initially 
> the network bandwidth would hit 50MB/s (yes, bytes), but tailed off to mostly 
> in the kB/s for a while.  All machines' load averages were still 40+ and 
> gluster volume heal data-hdd info reported 5 items needing healing.  Server's 
> were intermittently experiencing IO issues, even on the 3 gluster volumes 
> that appeared largely unaffected.  Even the OS activities on the hosts itself 
> (logging in, running commands) would often be very delayed.  The ovirt engine 
> was seemingly randomly throwing engine down / engine up / engine failed 
> notifications.  Responsiveness on ANY VM was horrific most of the time, with 
> random VMs being inaccessible.
> 
> I let the gluster heal run overnight.  By morning, there were still 5 items 
> needing healing, all three servers were still experiencing high load, and 
> servers were still largely unstable.
> 
> I've noticed that all of my ovirt outages (and I've had a lot, way more than 
> is acceptable for a production cluster) have come from gluster.  I still have 
> 3 VMs who's hard disk images have become corrupted by my last gluster crash 
> that I haven't had time to repair / rebuild yet (I believe this crash was 
> caused by the OOM issue previously mentioned, but I didn't know it at the 
> time).
> 
> Is gluster really ready for production yet?  It seems so unstable to me  
> I'm looking at replacing gluster with a dedicated NFS server likely FreeNAS.  
> Any suggestions?  What is the "right" way to do production storage on this (3 
> node cluster)?  Can I get this gluster volume stable enough to get my VMs to 
> run reliably again until I can deploy another storage solution?
> 
> --Jim

[ovirt-users] Re: (v4.2.5-1.el7) Snapshots UI - html null

2018-07-06 Thread Greg Sheremeta
Hi Brett,

This is a bug. It could be
https://bugzilla.redhat.com/show_bug.cgi?id=1533214
If you think so, please add any details you think would help. If you think
it's something else, please open a new bug.

Best wishes,
Greg

On Tue, Jul 3, 2018 at 9:33 AM Maton, Brett 
wrote:

> Actually the extra nic is assigned to network 'Empty' in the edit VM form,
> and is throwing the html null error in the snapshots form/view
>
> On 3 July 2018 at 14:26, Maton, Brett  wrote:
>
>> I think the issue is being caused by a missing network.
>>
>> One of the upgrades of my test oVirt cluster went sideways and I endedup
>> reinstalling from fresh and importing the storage domains from the preivous
>> cluster.
>> I haven't created all of the networks that were in the previous ovirt
>> install as they're not really needed at the moment.
>>
>> The vm's that are throwing the html null error when trying to view
>> snapshots have a secondary nic that isn't assigned to any network.
>>
>> Regards,
>> Brett
>>
>>
>> On 2 July 2018 at 08:04, Maton, Brett  wrote:
>>
>>> Hi,
>>>
>>>   I'm trying to restore a VM snapshot theough the UI but keep running
>>> into this error:
>>>
>>> Uncaught exception occurred. Please try reloading the page. Details:
>>> Exception caught: html is null
>>> Please have your administrator check the UI logs
>>>
>>> ui log attached.
>>>
>>> CentOS 7
>>> oVirt 4.2.5-1.el7
>>>
>>> Regards,
>>> Brett
>>>
>>
>>


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com    IRC: gshereme



[ovirt-users] Re: Ovirt/ RHV Disaster Recovery Designing

2018-07-06 Thread Greg Sheremeta
Hi Tanzeeb,

Unfortunately googling for 'ovirt disaster recovery' doesn't achieve a
great result. Search for things developed primarily by Maor [cc'd] recently.

On Fri, Jul 6, 2018 at 12:59 PM  wrote:

> Hi
> I've looking for some ideas about designing disaster recovery planning
> with ovirt 4.2. Since 4.2 we have options to have intergration with
> disaster recovery site also. I went through some videos at internet and
> want to share if those are correct or not. Thus you can help me regarding
> what other things I should be having ideas during planning and designing
> more about this.
>
> 1. We've to keep both sites on and already created datacenter, cluster and
> one site with storage domain attached and other site with no storage domain.
> 2. We've to keep latency of 10ms at maximum between both sites.
> 3. Have to configure virtual machines with affinity group.
> 4. The VM's which are configured as high-available vm, will be migrated
> first.
> 5. There's an Ansible script while have to run to fail over and fail back.
>
> Now, please could you help me out regarding this.
> 1. Where can I find this ansible script? Is this
> https://github.com/oVirt/ovirt-ansible-disaster-recovery.git?
>

yes. Note the youtube links on that.
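The repository itself can be pulled down directly; treat the sketch below as a
pointer only and follow its README for the actual failover/failback entry points
(file layout is not guaranteed here):

  git clone https://github.com/oVirt/ovirt-ansible-disaster-recovery.git
  cd ovirt-ansible-disaster-recovery && ls
  # the README there covers the failover / failback playbooks and their var files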

@Maor for the rest of your questions.


> 2. Can this script run at rhv also ? or just for ovirt? Or what other
> changes I have to make for rhv to have this functionalities?
> 3. How ovirt/rhv works with the OVN compared to NSX? Is there's any
> challenges ?
> 4. Can you share me some high and low level diagram related to rhv
> disaster planning?
> 5. How the storage migrates on backend ? During the script ?
> 6. Where do I run this script? During fail over at primary site and during
> fail back at secondary site?
> 7. Basically since only the storage migrates and all other components are
> on already so basically backend it's only dealing with storage migration
> according to defined policies ?
>
> Please could you help me out with some designing and ideas what're the
> best architecture during planning on disaster recovery at RHV.
>

If you are a Red Hat customer, a Red Hat Consulting engagement could also
help.


> Thank you.
>


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com    IRC: gshereme



[ovirt-users] Re: Ovirt cluster unstable; gluster to blame (again)

2018-07-06 Thread Greg Sheremeta
Hi Jim,

On Fri, Jul 6, 2018 at 4:22 PM Jim Kusznir  wrote:

> hi all:
>
> Once again my production ovirt cluster is collapsing in on itself.  My
> servers are intermittently unavailable or degrading, customers are noticing
> and calling in.  This seems to be yet another gluster failure that I
> haven't been able to pin down.
>
> I posted about this a while ago, but didn't get anywhere (no replies that
> I found).
>

cc'ing some people that might be able to assist.


>   The problem started out as a glusterfsd process consuming large amounts
> of ram (up to the point where ram and swap were exhausted and the kernel
> OOM killer killed off the glusterfsd process).  For reasons not clear to me
> at this time, that resulted in any VMs running on that host and that
> gluster volume to be paused with I/O error (the glusterfs process is
> usually unharmed; why it didn't continue I/O with other servers is
> confusing to me).
>
> I have 3 servers and a total of 4 gluster volumes (engine, iso, data, and
> data-hdd).  The first 3 are replica 2+arb; the 4th (data-hdd) is replica
> 3.  The first 3 are backed by an LVM partition (some thin provisioned) on
> an SSD; the 4th is on a seagate hybrid disk (hdd + some internal flash for
> acceleration).  data-hdd is the only thing on the disk.  Servers are Dell
> R610 with the PERC/6i raid card, with the disks individually passed through
> to the OS (no raid enabled).
>
> The above RAM usage issue came from the data-hdd volume.  Yesterday, I
> cought one of the glusterfsd high ram usage before the OOM-Killer had to
> run.  I was able to migrate the VMs off the machine and for good measure,
> reboot the entire machine (after taking this opportunity to run the
> software updates that ovirt said were pending).  Upon booting back up, the
> necessary volume healing began.  However, this time, the healing caused all
> three servers to go to very, very high load averages (I saw just under 200
> on one server; typically they've been 40-70) with top reporting IO Wait at
> 7-20%.  Network for this volume is a dedicated gig network.  According to
> bwm-ng, initially the network bandwidth would hit 50MB/s (yes, bytes), but
> tailed off to mostly in the kB/s for a while.  All machines' load averages
> were still 40+ and gluster volume heal data-hdd info reported 5 items
> needing healing.  Server's were intermittently experiencing IO issues, even
> on the 3 gluster volumes that appeared largely unaffected.  Even the OS
> activities on the hosts itself (logging in, running commands) would often
> be very delayed.  The ovirt engine was seemingly randomly throwing engine
> down / engine up / engine failed notifications.  Responsiveness on ANY VM
> was horrific most of the time, with random VMs being inaccessible.
>
> I let the gluster heal run overnight.  By morning, there were still 5
> items needing healing, all three servers were still experiencing high load,
> and servers were still largely unstable.
>
> I've noticed that all of my ovirt outages (and I've had a lot, way more
> than is acceptable for a production cluster) have come from gluster.  I
> still have 3 VMs who's hard disk images have become corrupted by my last
> gluster crash that I haven't had time to repair / rebuild yet (I believe
> this crash was caused by the OOM issue previously mentioned, but I didn't
> know it at the time).
>
> Is gluster really ready for production yet?  It seems so unstable to
> me  I'm looking at replacing gluster with a dedicated NFS server likely
> FreeNAS.  Any suggestions?  What is the "right" way to do production
> storage on this (3 node cluster)?  Can I get this gluster volume stable
> enough to get my VMs to run reliably again until I can deploy another
> storage solution?
>
> --Jim


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com    IRC: gshereme



[ovirt-users] Ovirt cluster unstable; gluster to blame (again)

2018-07-06 Thread Jim Kusznir
Hi all:

Once again my production ovirt cluster is collapsing in on itself.  My
servers are intermittently unavailable or degrading, customers are noticing
and calling in.  This seems to be yet another gluster failure that I
haven't been able to pin down.

I posted about this a while ago, but didn't get anywhere (no replies that I
found).  The problem started out as a glusterfsd process consuming large
amounts of RAM (up to the point where RAM and swap were exhausted and the
kernel OOM killer killed off the glusterfsd process).  For reasons not
clear to me at this time, that resulted in any VMs running on that host and
that gluster volume being paused with I/O errors (the glusterfs process is
usually unharmed; why it didn't continue I/O with the other servers is
confusing to me).

I have 3 servers and a total of 4 gluster volumes (engine, iso, data, and
data-hdd).  The first 3 are replica 2+arb; the 4th (data-hdd) is replica
3.  The first 3 are backed by an LVM partition (some thin provisioned) on
an SSD; the 4th is on a Seagate hybrid disk (HDD plus some internal flash for
acceleration).  data-hdd is the only thing on that disk.  The servers are Dell
R610s with the PERC 6/i RAID card, with the disks individually passed through
to the OS (no RAID enabled).

The above RAM usage issue came from the data-hdd volume.  Yesterday, I
caught one of the glusterfsd high-RAM-usage episodes before the OOM killer had
to run.  I was able to migrate the VMs off the machine and, for good measure,
reboot the entire machine (after taking this opportunity to run the
software updates that ovirt said were pending).  Upon booting back up, the
necessary volume healing began.  However, this time, the healing caused all
three servers to go to very, very high load averages (I saw just under 200
on one server; typically they've been 40-70) with top reporting IO wait at
7-20%.  Network for this volume is a dedicated gig network.  According to
bwm-ng, initially the network bandwidth would hit 50MB/s (yes, bytes), but
tailed off to mostly in the kB/s range for a while.  All machines' load averages
were still 40+ and gluster volume heal data-hdd info reported 5 items
needing healing.  Servers were intermittently experiencing IO issues, even
on the 3 gluster volumes that appeared largely unaffected.  Even OS
activities on the hosts themselves (logging in, running commands) would often
be very delayed.  The ovirt engine was seemingly randomly throwing engine
down / engine up / engine failed notifications.  Responsiveness on ANY VM
was horrific most of the time, with random VMs being inaccessible.
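The gluster CLI calls relevant to watching that state, for anyone checking the
same things on their own setup (volume name taken from this thread):

  gluster volume heal data-hdd info
  gluster volume heal data-hdd statistics heal-count
  gluster volume status data-hdd clients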

I let the gluster heal run overnight.  By morning, there were still 5 items
needing healing, all three servers were still experiencing high load, and
servers were still largely unstable.

I've noticed that all of my ovirt outages (and I've had a lot, way more
than is acceptable for a production cluster) have come from gluster.  I
still have 3 VMs whose hard disk images have become corrupted by my last
gluster crash that I haven't had time to repair / rebuild yet (I believe
that crash was caused by the OOM issue previously mentioned, but I didn't
know it at the time).

Is gluster really ready for production yet?  It seems so unstable to
me...  I'm looking at replacing gluster with a dedicated NFS server, likely
FreeNAS.  Any suggestions?  What is the "right" way to do production
storage on this 3-node cluster?  Can I get this gluster volume stable
enough to get my VMs to run reliably again until I can deploy another
storage solution?

--Jim


[ovirt-users] Re: OVIRT 4.2: MountError on Posix Compliant FS

2018-07-06 Thread Nir Soffer
On Fri, Jul 6, 2018 at 2:48 PM Sandro Bonazzola  wrote:

> 2018-07-06 11:26 GMT+02:00 :
>
>> Hi all,
>>
>> I've deployed the ovirt Version 4.2.4.5-1.el7 on a small cluster and I'm
>> trying to use as datastorage domain a configured Spectrum Scale (GPFS)
>> distributed filesystem for test it.
>>
>
> Hi, welcome to oVirt community!
>
>
>
>
>>
>> I've completed the configuration of the storage, and the filesystem
>> defined are correctly mounted on the client hosts as we can see:
>>
>> gpfs_kvm    gpfs  233T  288M  233T   1% /gpfs/kvm
>> gpfs_fast   gpfs  8.8T  5.2G  8.8T   1% /gpfs/fast
>>
>> The content output of the mount comand is:
>>
>> gpfs_kvm on /gpfs/kvm type gpfs (rw,relatime)
>> gpfs_fast on /gpfs/fast type gpfs (rw,relatime)
>>
>> I can write and read to the mounted filesystem correctly, but when I try
>> to add the local mounted filesystem I encountered some errors related to
>> the mounting process of the storage.
>>
>> The parameters passed to add new storage domain are:
>>
>> Data Center: Default (V4)
>> Name: gpfs_kvm
>> Description: VM data on GPFS
>> Domain Function: Data
>> Storage Type: Posix Compliant FS
>> Host to Use: kvm1c01
>> Path: /gpfs/kvm
>> VFS Type: gpfs
>> Mount Options: rw, relatime
>>
>> The ovirt error that I obtained is:
>> Error while executing action Add Storage Connection: Problem while
>> trying to mount target
>>
>> On the /var/log/vdsm/vdsm.log I can see:
>> 2018-07-05 09:07:43,400+0200 INFO  (jsonrpc/0) [vdsm.api] START
>> connectStorageServer(domType=6,
>> spUUID=u'----', conList=[{u'mnt_options':
>> u'rw,relatime', u'id': u'----',
>> u'connection': u'/gpfs/kvm', u'iqn': u'', u'user': u'', u'tpgt': u'1',
>> u'vfs_type': u'gpfs', u'password': '', u'port': u''}],
>> options=None) from=:::10.2.1.254,43908,
>> flow_id=498668a0-a240-469f-ac33-f8c7bdeb481f,
>> task_id=857460ed-5f0e-4bc6-ba54-e8f2c72e9ac2 (api:46)
>> 2018-07-05 09:07:43,403+0200 INFO  (jsonrpc/0)
>> [storage.StorageServer.MountConnection] Creating directory
>> u'/rhev/data-center/mnt/_gpfs_kvm' (storageServer:167)
>> 2018-07-05 09:07:43,404+0200 INFO  (jsonrpc/0) [storage.fileUtils]
>> Creating directory: /rhev/data-center/mnt/_gpfs_kvm mode: None
>> (fileUtils:197)
>> 2018-07-05 09:07:43,404+0200 INFO  (jsonrpc/0) [storage.Mount]
>> mounting /gpfs/kvm at /rhev/data-center/mnt/_gpfs_kvm (mount:204)
>>
>
You can find the vdsm mount command in supervdsmd.log.
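For example (log path as shipped by vdsm on EL7; the filter is only illustrative):

  grep -i mount /var/log/vdsm/supervdsmd.log | tail -20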


> MountError: (32, ';mount: wrong fs type, bad option, bad superblock on
>> /gpfs/kvm,\n   missing codepage or helper program, or other error\n\n
>>  In some cases useful info is found in syslog - try\n   dmesg |
>> tail or so.\n')
>>
>
Did you look in /var/log/messages or dmesg, as the error message suggests?


>
>> I know, that from previous versions on the GPFS implementation, they
>> removed the device on /dev, due to incompatibilities with systemd. I don't
>> know if this change affect the ovirt mounting process.
>>
>
We support POSIX-compliant file systems. If GPFS is POSIX compliant, it
should work, but I don't think we test it.
Elad, do we test GPFS?

You should start by mounting the file system manually. When you have a
working mount command, you may need to add some of its mount options to the
storage domain's additional mount options.

Nir
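A minimal sketch of that manual test, mirroring what vdsm will eventually run.
The open question is the mount source: older GPFS releases exposed a /dev/<fsname>
node while newer ones may want the bare device name, so treat both forms as things
to try rather than known-good:

  mkdir -p /mnt/gpfs-test
  mount -t gpfs gpfs_kvm /mnt/gpfs-test -o rw,relatime
  # or: mount -t gpfs /dev/gpfs_kvm /mnt/gpfs-test -o rw,relatime
  touch /mnt/gpfs-test/probe && rm -f /mnt/gpfs-test/probe
  umount /mnt/gpfs-test

Whichever source string works there is what belongs in the storage domain's Path
field, with the working options in the additional mount options.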


>
>> Can you help me to add this filesystem to the ovirt environment?
>>
>
> Adding Nir and Tal for all your questions.
> In the meanwhile, can you please provide a sos report from the host? Did
> dmesg provide any useful information?
>
>
>
>> The parameters that I used previously are ok? Or i need to do some
>> modification?
>> Its possible that the process fails because i dont have a device related
>> to the gpfs filesystems on /dev?
>> Can we apply some kind of workaround to mount manually the filesystem to
>> the ovirt environment? Ex. create the dir /rhev/data-center/mnt/_gpfs_kvm
>> manually and then mount the /gpfs/kvm over this?
>> It's posible to modify the code to bypass some comprobations or something?
>>
>> Reading the available documentation over Internet I find that ovirt was
>> compatible with this fs (gpfs) implementation, because it's POSIX
>> Compliant, this is a main reason for test it in our cluster.
>>
>> It remains compatible on the actual versions? Or maybe there are changes
>> that brokes this integration?
>>
>> Many thanks for all in advance!
>>
>> Kind regards!
>
>
>
> --
>
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE 

[ovirt-users] Ovirt/ RHV Disaster Recovery Designing

2018-07-06 Thread samee095
Hi,
I've been looking for some ideas about designing disaster recovery with oVirt 4.2.
Since 4.2 we have the option of integrating with a disaster recovery site. I went
through some videos on the internet and want to check whether my understanding is
correct, and get help on what else I should be considering while planning and
designing this.

1. We have to keep both sites up, with the datacenter and cluster already created,
one site with the storage domain attached and the other site with no storage domain.
2. We have to keep latency between the two sites at 10 ms maximum.
3. We have to configure virtual machines with affinity groups.
4. The VMs which are configured as highly available will be migrated first.
5. There's an Ansible script which we have to run to fail over and fail back.

Now, could you please help me with the following?
1. Where can I find this Ansible script? Is it
https://github.com/oVirt/ovirt-ansible-disaster-recovery.git?
2. Can this script run on RHV as well, or just on oVirt? What other changes do I
have to make for RHV to get this functionality?
3. How does oVirt/RHV work with OVN compared to NSX? Are there any challenges?
4. Can you share some high- and low-level diagrams related to RHV disaster
recovery planning?
5. How does the storage migrate on the backend? During the script?
6. Where do I run this script? During failover at the primary site and during
failback at the secondary site?
7. Basically, since only the storage migrates and all other components are already
up, on the backend it's only dealing with storage migration according to the
defined policies?

Please help me out with some design ideas on what the best architecture is when
planning disaster recovery on RHV.
Thank you.
 


[ovirt-users] Re: ovirt, postfix and sendmail

2018-07-06 Thread Fabrice Bacchella


> On 6 Jul 2018, at 15:46, Sandro Bonazzola  wrote:
> 
> 

> I pushed a change for 4.3 requiring server(smtp) instead of postfix. On EL7 
> server(smtp) resolves by default to postfix so nothing really change except 
> you can now install another MTA and remove postfix.
> On Fedora, server(smtp) resolves to exim but I added a Suggest clause pulling 
> in postfix if nothing provides server(smtp).
> 

Nice ! Thank you.




[ovirt-users] Ovirt Networking Setup Questions

2018-07-06 Thread Dave Mintz
Hello,

I am trying to understand, by reading all available docs, how to correctly set 
up networking.

What I have:

6 nodes, 1 engine - all have 3 nics.

What I want:
To separate traffic into three zones - ovirtmgmt, prod (for VM traffic and the
admin web interface), and data (for DB access).

So, I have three subnets - 10.10.10.0, 10.10.20.0, and 10.10.30.0 - with
matching VLANs 10, 20, 30.

I am trying to understand how bonding and bridging work.

Should I bond all three NICs together?  The instructions here:
https://www.ovirt.org/documentation/how-to/networking/bonding-vlan-bridge/
seem to do that and then create two bridges, br0 and ovirtmgmt, which they then
assign IP addresses to.

I can do that, but shouldn't I be configuring all of the networking at the
logical network level?  So, in that case, should I just create one bond/bridge
and then do all of the networking at the logical level?

Sorry if this is newbie stuff.

Thanks

Dave


[ovirt-users] Re: ovirt, postfix and sendmail

2018-07-06 Thread Sandro Bonazzola
2018-07-04 12:12 GMT+02:00 Fabrice Bacchella :

>
>
> > On 4 Jul 2018, at 11:03, Yedidyah Bar David  wrote:
> >
> > On Wed, Jul 4, 2018 at 11:04 AM, Fabrice Bacchella
> >  wrote:
> >> ovirt in version 4.2 choose to incorporate postfix as a mandatory MTA:
> >
> > This was added in 4.0, AFAIU:
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=1301966
> >
> > IMHO the bug is somewhat incorrect. HA sends its email using smtplib,
> > which IIUC does not require a local /usr/sbin/sendmail . Indeed, the
> > default is to send through 'localhost:25', and for this to work you
> > need some MTA listening there. But admins might find it perfectly
> > reasonable to not have any sendmail locally, although this is the unix
> > tradition, and configure everything to send through a remote MTA.
> > hosted-engine --deploy already asks about this, so should be easy to
> > do there. Other common stuff, such as crond, also allow doing this. So
> > ideally, if the admin accepts the default 'localhost:25', the script
> > should try to connect there (perhaps also if user provides custom
> > values?), and if it fails, or if the other side does not look like an
> > MTA (e.g. does not accept a HELO or EHLO, not sure what's the best
> > way), prompt, and if 'localhost', suggest to install some MTA. But
> > email is a hard problem, not sure how complex we need to make the
> > setup script...
> >
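
(As an aside, the kind of "is something MTA-like answering on localhost:25" check
described above can be sketched in shell, assuming nc is available; this is only an
illustration of the idea, not what the setup script actually does:)

  if printf 'EHLO localhost\r\nQUIT\r\n' | nc -w 5 localhost 25 | grep -q '^250'; then
      echo "an SMTP server answered on localhost:25"
  else
      echo "no MTA seems to be listening on localhost:25"
  fi
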
> >>
> >> yum erase postfix
> >> ...
> >> Removing:
> >>  postfix                    x86_64   2:2.10.1-6.el7   @base       12 M
> >> Removing for dependencies:
> >>  cockpit-ovirt-dashboard    noarch   0.11.28-1.el7    @ovirt-4.2  15 M
> >>  ovirt-host                 x86_64   4.2.3-1.el7      @ovirt-4.2  11 k
> >>  ovirt-hosted-engine-setup  noarch   2.2.22.1-1.el7   @ovirt-4.2  2.2 M
> >>
> >> Is there a way to change that ? It's not about postfix being inferior or
> >> superior to other solutions. It's that it didn't ask any thing, didn't
> check
> >> if one was already installed. It's just installed.
> >>
> >> For example:
> >> rpm -q --provides postfix
> >> MTA
> >> config(postfix) = 2:2.10.1-6.el7
> >> postfix = 2:2.10.1-6.el7
> >> postfix(x86-64) = 2:2.10.1-6.el7
> >> server(smtp)
> >> smtpd
> >> smtpdaemon
> >>
> >> rpm -q --provides sendmail
> >> MTA
> >> config(sendmail) = 8.14.7-5.el7
> >> sendmail = 8.14.7-5.el7
> >> sendmail(x86-64) = 8.14.7-5.el7
> >> server(smtp)
> >> smtpdaemon
> >>
> >> There is a lot of other dependencies to declare other than postfix, MTA
> >> would have been better.
> >
> > I agree, and suggest to open an RFE on ovirt-host (and elsewhere?
> > didn't check) to change the Requires:.
> >
> > Seems like the thing we want to require is 'server(smtp)':
> >
> > https://fedoraproject.org/wiki/Features/ServerProvides
> >
> > Best regards,
>
> Done:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1598085



I pushed a change for 4.3 requiring server(smtp) instead of postfix. On EL7
server(smtp) resolves by default to postfix, so nothing really changes except
that you can now install another MTA and remove postfix.
On Fedora, server(smtp) resolves to exim, but I added a Suggests clause
pulling in postfix if nothing provides server(smtp).
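
(In spec-file terms, the change described above amounts to something like the
following; this is a simplified sketch, not the exact ovirt-host packaging:)

  # pull in any package that provides the smtp server virtual capability
  Requires: server(smtp)
  # weak dependency so a default install still ends up with postfix when
  # nothing else provides server(smtp); honored where rpm weak deps are
  # supported, e.g. on Fedora
  Suggests: postfix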




>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/7ZFPH67MIQNWALAE4T2WXX3UVM6UAWSE/
>



-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BSSMGFHPSCNJ45RAUQO6SNVVCTWFLCM5/


[ovirt-users] Re: Snapshots don't seem to work with Windows VMs

2018-07-06 Thread Sandro Bonazzola
2018-06-29 22:12 GMT+02:00 :

> I've been testing with Windows 10 1607 LTSB and created several snapshots.
> I can preview each one, but they all boot with the same configuration and
> software installed and files as the "active VM" - so I cannot see at all
> how to roll back changes. Is this a known issue? I believe I'm following
> the documentation properly, but I can't see that the snapshots are doing
> anything.
>
> I'm testing on Scientific Linux 7.4 using oVirt 4.2.2.6-1.el7.centos.
>

Hi,
I would recommend upgrading to Scientific Linux 7.5 and oVirt 4.2.4.
I'm not aware of similar issues. Michal, can you follow up?



> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/SMBXAKQQCS322ZI5P47XWQO73N4WHY7I/
>



-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FS3SK2N5W47Q2BDNWNW7LUPNU7OLXI5H/


[ovirt-users] Re: [ovirt-devel] oVirt LLDP Labeler

2018-07-06 Thread Sandro Bonazzola
2018-07-06 15:05 GMT+02:00 Greg Sheremeta :

> On Fri, Jul 6, 2018 at 8:28 AM Sandro Bonazzola 
> wrote:
>
>>
>>
>> 2018-07-02 10:51 GMT+02:00 Ales Musil :
>>
>>> Hello,
>>>
>>> I would like to announce that oVirt LLDP Labeler is officially
>>> available.
>>>
>>
>> Is this a request for adding the project in oVirt incubation[1]?
>>
>
> Do we still do this? I bet we have a lot of developers that don't know
> about it, because I forgot about it and I've been here for 5 years.
>

Well, we have it as a written procedure established by the oVirt Board.
We may gently communicate to the oVirt Board that we don't intend to follow that
procedure anymore, and if they don't object we'll drop it in 2 weeks...



>
>
>> I don't see it listed in oVirt projects.
>> I also don't see any official release of it within the project:
>> https://github.com/almusil/ovirt-lldp-labeler/releases
>> Is this documented anywhere in ovirt.org? I couldn't find anything about
>> it.
>>
>> [1] https://www.ovirt.org/develop/projects/incubating-an-subproject/
>>
>>
>>
>>
>>
>>>
>>> The Labeler is a service that runs alongside the engine and is capable of
>>> labeling host network interfaces according to their reported VLANs via
>>> LLDP. The attached labels are named "lldp_vlan_${VLAN}", where ${VLAN} is
>>> the ID of the corresponding VLAN. This can make an administrator's work
>>> easier, because any network with the same label will be automatically
>>> attached to the corresponding host interface.
>>>
>>> The Labeler is currently tested only with Juniper switches, which are
>>> capable of reporting all of the VLANs that are present on the interface.
>>>
>>> We would like to extend the Labeler with an auto-bonding feature. Those
>>> interfaces that are detected on the same switch would be automatically
>>> bonded.
>>>
>>> The Labeler source is available here: https://github.com/
>>> almusil/ovirt-lldp-labeler
>>> And the build: https://copr.fedorainfracloud.org/coprs/
>>> amusil/ovirt-lldp-labeler/
>>>
>>>
>>> If you have any suggestions or problems please don't hesitate to report
>>> them on the GitHub page.
>>>
>>> --
>>>
>>> ALES MUSIL
>>> Associate software engineer - rhv network
>>>
>>> Red Hat EMEA 
>>>
>>>
>>> amu...@redhat.com   IM: amusil
>>> 
>>>
>>> ___
>>> Devel mailing list -- de...@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
>>> guidelines/
>>> List Archives: https://lists.ovirt.org/archives/list/de...@ovirt.org/
>>> message/PIBBRGEVHNXF3VAOLFXHTNCWZ3ZUCNZA/
>>>
>>>
>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
>> guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
>> message/VPGAOX455ROEV2D35ACVAFPA5GC6LD4D/
>>
>
>
> --
>
> GREG SHEREMETA
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat NA
>
> 
>
> gsher...@redhat.comIRC: gshereme
> 
>



-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/63MFJG6OJVMP32K4HZTEIU7E2TDABA3O/


[ovirt-users] Re: [ovirt-devel] oVirt LLDP Labeler

2018-07-06 Thread Greg Sheremeta
On Fri, Jul 6, 2018 at 8:28 AM Sandro Bonazzola  wrote:

>
>
> 2018-07-02 10:51 GMT+02:00 Ales Musil :
>
>> Hello,
>>
>> I would like to announce that oVirt LLDP Labeler is officially available.
>>
>
> Is this a request for adding the project in oVirt incubation[1]?
>

Do we still do this? I bet we have a lot of developers that don't know
about it, because I forgot about it and I've been here for 5 years.


> I don't see it listed in oVirt projects.
> I also don't see any official release of it within the project:
> https://github.com/almusil/ovirt-lldp-labeler/releases
> Is this documented anywhere in ovirt.org? I couldn't find anything about
> it.
>
> [1] https://www.ovirt.org/develop/projects/incubating-an-subproject/
>
>
>
>
>
>>
>> The Labeler is a service that runs alongside the engine and is capable of
>> labeling host network interfaces according to their reported VLANs via
>> LLDP. The attached labels are named "lldp_vlan_${VLAN}", where ${VLAN} is
>> the ID of the corresponding VLAN. This can make an administrator's work
>> easier, because any network with the same label will be automatically
>> attached to the corresponding host interface.
>>
>> The Labeler is currently tested only with Juniper switches, which are
>> capable of reporting all of the VLANs that are present on the interface.
>>
>> We would like to extend the Labeler with an auto-bonding feature. Those
>> interfaces that are detected on the same switch would be automatically
>> bonded.
>>
>> The Labeler source is available here:
>> https://github.com/almusil/ovirt-lldp-labeler
>> And the build:
>> https://copr.fedorainfracloud.org/coprs/amusil/ovirt-lldp-labeler/
>>
>>
>> If you have any suggestions or problems please don't hesitate to report
>> them on the GitHub page.
>>
>> --
>>
>> ALES MUSIL
>> Associate software engineer - rhv network
>>
>> Red Hat EMEA 
>>
>>
>> amu...@redhat.com   IM: amusil
>> 
>>
>> ___
>> Devel mailing list -- de...@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/de...@ovirt.org/message/PIBBRGEVHNXF3VAOLFXHTNCWZ3ZUCNZA/
>>
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VPGAOX455ROEV2D35ACVAFPA5GC6LD4D/
>


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.comIRC: gshereme

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I6S5KGWPKI4FL5WUHDGGLNRNAZ3WOTGK/


[ovirt-users] Re: GlusterFS 4.1

2018-07-06 Thread Sandro Bonazzola
2018-07-04 12:23 GMT+02:00 Chris Boot :

> All,
>
> Now that GlusterFS 4.1 LTS has been released, and is the "default"
> version of GlusterFS in CentOS (you get this from
> "centos-release-gluster" now), what's the status with regards to oVirt?
>
> How badly is oVirt 4.2.4 likely to break if one were to upgrade the
> gluster* packages to 4.1?
>
>
We are discussing this on the de...@ovirt.org mailing list.
Gluster 3.12 will be supported for an additional 6 months, so we have time
to test 4.1 properly and work with it.
If there's any specific new feature in Gluster 4.1 you need when it is used
with oVirt, I would suggest opening a bug asking for Gluster 4.1 support,
indicating the specific feature you're interested in.




> Thanks,
> Chris
>
> --
> Chris Boot
> bo...@boo.tc
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/ECTRCG7EOZBTXQJNB4RKN6JYVHVOWV4S/
>



-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7LYEWHBLW5NVUWHEAXEBAW4UNTSGG5DL/


[ovirt-users] Re: Engine Setup Error

2018-07-06 Thread Sandro Bonazzola
2018-07-03 14:28 GMT+02:00 Sakhi Hadebe :

> Hi,
>
> We are deploying the hosted engine on oVirt-Node-4.2.3.1 using the command
> "hosted-engine --deploy".
>

Hi, any reason for using the command line instead of the Cockpit web UI?



>
> After providing the answers it runs the ansible script and hits an error when
> creating the glusterfs storage domain. A screenshot of the error is attached.
>
> Please help.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/XN5ML4VTDL6BDAAFFBGFXI5KEEZDMGNK/
>
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IKN3YXHKORMOY7QHJSAML2FSZQE2A2L5/


[ovirt-users] Re: Performing a quickstart task in the new console

2018-07-06 Thread Sandro Bonazzola
2018-07-06 3:23 GMT+02:00 Dave Thacker :

> Hello list,
> I'm using the quick start documentation to build a lab setup.
>  Immediately after the task "Configure Networks", there is a task "Attach
> oVirt Node or Enterprise Linux Host"  I haven't found a spot in the new GUI
> that matches the navigation tree in the manual.   How would I navigate to
> that task?
>
>

Hi, welcome to the oVirt community!
Sadly, the documentation on the oVirt web site is a bit outdated. This one:
https://ovirt.org/documentation/install-guide/chap-Adding_a_Hypervisor/
should be aligned with the current procedure for adding hosts.




> Thanks.
>
> --
> Dave Thacker
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/YOXC6KTQVO6RMTYTUICAOKMBJ7WNGWWG/
>
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7YTILSJFPNMZEOBSUDNTTIK3JQ5Y75RW/


[ovirt-users] Re: Virtualization project ....we would like to use oVirt

2018-07-06 Thread Gianluca Cecchi
2018-07-06 11:36 GMT+02:00 Marco Tosato :

> Good morning everyone!
>
>
>
> My company is starting a new project that requires a medium-sized
> virtualization cluster (6 nodes). We would like to evaluate oVirt and
> GlusterFS as a solution, and we are looking for a partner (a company or a
> professional) who already has experience with this software and can
> provide us with the following services:
>
>
>
>- Installation and putting the system into production
>- Resolution of any problems that may arise in the future
>- Training and deeper knowledge of oVirt/GlusterFS
>
>
>
> Thanks to everyone!
>
>
>

Hello Marco,
depending on your project time frames, I could be interested in
collaborating with your company.
I will write to you off-list with details.
In the meantime, thanks for sharing the opportunity.

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NN3JHHTAADO533WV4HEL3FPJRYYBMNRV/


[ovirt-users] Re: ovirt-node: freshly installed node: network interfaces not visible

2018-07-06 Thread Sandro Bonazzola
2018-05-28 15:50 GMT+02:00 :

> Hello,
>
> I'm deploying a test cluster to get used to the ovirt product.
> My ovirt ( 4.2.2) engine is deployed in a vmware virtual machine.
>
> I'm busy deploying 5 hosts using the ovirt-node iso.
>

Welcome to the oVirt community!
Since you're deploying 5 hosts with oVirt Node, I would suggest using the
Self-Hosted oVirt Engine solution instead of keeping the oVirt Engine in a
VMware VM.
You can find (a bit outdated) documentation here:
https://ovirt.org/documentation/self-hosted/Self-Hosted_Engine_Guide/





>
> I have an issue with the additional physical network interfaces (used for VM
> traffic and iSCSI).
>
> In the oVirt manager I can't see the physical network interfaces. It's thus
> not possible to assign logical networks to physical interfaces, and my hosts
> stay in "non operational" status.
>
> How should I configure the additional interfaces in the ovirt-node
> installer to have them recognized?
>
> I somehow managed to configure one host, and I can see a comment line in
> /etc/sysconfig/network-scripts/ifcfg-... saying vdsm has acquired the
> interface.
> There is no such line in the ifcfg-* files on the non-operational hosts.
>
> Any idea ?
> Etienne
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/XZO4THOPXNKQQDBJB7NBYRY5JLLNMQY5/
>



-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KRPSK5R42ZLQBD43PPNHE2G67NJR4M2T/


[ovirt-users] Re: [ovirt-devel] oVirt LLDP Labeler

2018-07-06 Thread Sandro Bonazzola
2018-07-02 10:51 GMT+02:00 Ales Musil :

> Hello,
>
> I would like to announce that oVirt LLDP Labeler is officially available.
>

Is this a request for adding the project in oVirt incubation[1]? I don't
see it listed in oVirt projects.
I also don't see any official release of it within the project:
https://github.com/almusil/ovirt-lldp-labeler/releases
Is this documented anywhere in ovirt.org? I couldn't find anything about it.

[1] https://www.ovirt.org/develop/projects/incubating-an-subproject/





>
> The Labeler is a service that runs alongside the engine and is capable of
> labeling host network interfaces according to their reported VLANs via
> LLDP. The attached labels are named "lldp_vlan_${VLAN}", where ${VLAN} is
> the ID of the corresponding VLAN. This can make an administrator's work
> easier, because any network with the same label will be automatically
> attached to the corresponding host interface.
>
> The Labeler is currently tested only with Juniper switches, which are
> capable of reporting all of the VLANs that are present on the interface.
>
> We would like to extend the Labeler with an auto-bonding feature. Those
> interfaces that are detected on the same switch would be automatically
> bonded.
>
> The Labeler source is available here: https://github.com/
> almusil/ovirt-lldp-labeler
> And the build: https://copr.fedorainfracloud.org/coprs/
> amusil/ovirt-lldp-labeler/
>
>
> If you have any suggestions or problems please don't hesitate to report
> them on the GitHub page.
>
> --
>
> ALES MUSIL
> Associate software engineer - rhv network
>
> Red Hat EMEA 
>
>
> amu...@redhat.com   IM: amusil
> 
>
> ___
> Devel mailing list -- de...@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/de...@ovirt.org/
> message/PIBBRGEVHNXF3VAOLFXHTNCWZ3ZUCNZA/
>
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VPGAOX455ROEV2D35ACVAFPA5GC6LD4D/


[ovirt-users] Re: oVirt - Scaling out from one to many

2018-07-06 Thread Sandro Bonazzola
2018-07-05 13:16 GMT+02:00 Leo David :

> Hello everyone,
> I have two things that I really need to understand regarding storage
> scaling using gluster.
>
> Basically, I am trying to figure out a way to go from a single instance
> to a multiple-node cluster.
>
> 1. Single node SelfHosted Engine HyperConverged setup - scale the instance up
> to 3 nodes:
> - the gluster volumes are created as distributed type.
> - is there a procedure to migrate this single-host scenario to multiple
> nodes, considering that the 3-node setup is using replica 3 gluster
> volumes?
>

Sahina, a quick check on the oVirt website didn't help me find the
documentation for Leo.
Can you please assist?



>
> 2. 3 Nodes SelfHosted Engine - add more storage nodes
> - how should I increase the size of the already-present replica 3 volumes?
> - should the volumes be distribute-replicated in an environment larger
> than 3 nodes?
>

Couldn't find documentation on the ovirt.org site, but this should apply to
your case:
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure/1.1/html/maintaining_red_hat_hyperconverged_infrastructure/scaling#scaling_rhhi_by_adding_additional_volumes_on_new_nodes
Sahina, can you help prepare some documentation for ovirt.org as well?
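
(For reference, the gluster side of both steps boils down to add-brick operations;
a rough sketch with placeholder volume name, host names and brick paths, to be
adapted to the procedure in the document above:)

  # 1) grow a single-brick (distribute) volume into a replica 3 volume
  gluster peer probe host2
  gluster peer probe host3
  gluster volume add-brick data replica 3 \
      host2:/gluster_bricks/data/brick host3:/gluster_bricks/data/brick

  # 2) later, expand replica 3 into distributed-replicate by adding bricks
  #    in multiples of the replica count, then rebalance
  gluster volume add-brick data replica 3 \
      host4:/gluster_bricks/data/brick host5:/gluster_bricks/data/brick \
      host6:/gluster_bricks/data/brick
  gluster volume rebalance data start

The new hosts still need to be added to the oVirt cluster from the engine side, of
course.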


>
> 3. Is there a limit on the maximum number of compute nodes per cluster?
>

I'm not aware of any limits, but according to the above document I wouldn't go
over 9 nodes in this configuration.





>
> Thank you very much !
>
> Leo
>
> --
> Best regards, Leo David
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/VIU4T447JRNYZ2HBF366JLWYTE3W42CZ/
>
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MIAWN2AV5PSNDKG4DP6C4CYWC3RUFF42/


[ovirt-users] Re: OVIRT 4.2: MountError on Posix Compliant FS

2018-07-06 Thread Sandro Bonazzola
2018-07-06 11:26 GMT+02:00 :

> Hi all,
>
> I've deployed oVirt version 4.2.4.5-1.el7 on a small cluster and I'm
> trying to use a configured Spectrum Scale (GPFS) distributed filesystem
> as a data storage domain in order to test it.
>

Hi, welcome to the oVirt community!




>
> I've completed the configuration of the storage, and the filesystems
> defined are correctly mounted on the client hosts, as we can see:
>
> gpfs_kvmgpfs  233T  288M  233T   1% /gpfs/kvm
> gpfs_fast   gpfs  8.8T  5.2G  8.8T   1% /gpfs/fast
>
> The output of the mount command is:
>
> gpfs_kvm on /gpfs/kvm type gpfs (rw,relatime)
> gpfs_fast on /gpfs/fast type gpfs (rw,relatime)
>
> I can write to and read from the mounted filesystem correctly, but when I try
> to add the locally mounted filesystem I encounter some errors related to
> the mounting process of the storage.
>
> The parameters passed to add new storage domain are:
>
> Data Center: Default (V4)
> Name: gpfs_kvm
> Description: VM data on GPFS
> Domain Function: Data
> Storage Type: Posix Compliant FS
> Host to Use: kvm1c01
> Path: /gpfs/kvm
> VFS Type: gpfs
> Mount Options: rw, relatime
>
> The ovirt error that I obtained is:
> Error while executing action Add Storage Connection: Problem while
> trying to mount target
>
> On the /var/log/vdsm/vdsm.log I can see:
> 2018-07-05 09:07:43,400+0200 INFO  (jsonrpc/0) [vdsm.api] START
> connectStorageServer(domType=6, 
> spUUID=u'----',
> conList=[{u'mnt_options': u'rw,relatime', u'id': 
> u'----',
> u'connection': u'/gpfs/kvm', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'gpfs', u'password': '', u'port': u''}],
> options=None) from=:::10.2.1.254,43908, 
> flow_id=498668a0-a240-469f-ac33-f8c7bdeb481f,
> task_id=857460ed-5f0e-4bc6-ba54-e8f2c72e9ac2 (api:46)
> 2018-07-05 09:07:43,403+0200 INFO  (jsonrpc/0) 
> [storage.StorageServer.MountConnection]
> Creating directory u'/rhev/data-center/mnt/_gpfs_kvm' (storageServer:167)
> 2018-07-05 09:07:43,404+0200 INFO  (jsonrpc/0) [storage.fileUtils]
> Creating directory: /rhev/data-center/mnt/_gpfs_kvm mode: None
> (fileUtils:197)
> 2018-07-05 09:07:43,404+0200 INFO  (jsonrpc/0) [storage.Mount]
> mounting /gpfs/kvm at /rhev/data-center/mnt/_gpfs_kvm (mount:204)
> MountError: (32, ';mount: wrong fs type, bad option, bad superblock on
> /gpfs/kvm,\n   missing codepage or helper program, or other error\n\n
>  In some cases useful info is found in syslog - try\n   dmesg |
> tail or so.\n')
>
> I know that in recent versions of the GPFS implementation they removed the
> device under /dev due to incompatibilities with systemd. I don't know if this
> change affects the oVirt mounting process.
>
> Can you help me to add this filesystem to the ovirt environment?
>

Adding Nir and Tal for all your questions.
In the meantime, can you please provide an sos report from the host? Did
dmesg provide any useful information?
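
(It may also help to reproduce by hand the mount that vdsm attempts, using the
paths from the log above; if the manual mount fails with the same "wrong fs type"
error, the problem is in the gpfs VFS type/mount helper rather than in oVirt itself:)

  mkdir -p /rhev/data-center/mnt/_gpfs_kvm
  mount -t gpfs -o rw,relatime /gpfs/kvm /rhev/data-center/mnt/_gpfs_kvm
  dmesg | tail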



> Are the parameters that I used above OK, or do I need to make some
> modifications?
> Is it possible that the process fails because I don't have a device related
> to the gpfs filesystems under /dev?
> Can we apply some kind of workaround to mount the filesystem manually for
> the oVirt environment? E.g. create the directory /rhev/data-center/mnt/_gpfs_kvm
> manually and then mount /gpfs/kvm over it?
> Is it possible to modify the code to bypass some checks or something?
>
> Reading the available documentation on the Internet, I found that oVirt was
> compatible with this filesystem (GPFS) implementation because it is POSIX
> compliant; this is the main reason for testing it in our cluster.
>
> Does it remain compatible in current versions? Or maybe there are changes
> that broke this integration?
>
> Many thanks for all in advance!
>
> Kind regards!
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/LLXLSI4ZQFU32PV7AXLYD5LJBS23HBKO/
>



-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BF4TLSEILTHCJ4ABOLUFN24PSLVOACXT/


[ovirt-users] Re: Virtualization project ....we would like to use oVirt

2018-07-06 Thread Sandro Bonazzola
Good morning Marco,
welcome to the oVirt community! I'm pretty sure there are consultants here who
can help you with your project.

If you're looking for enterprise-level support for 6 nodes, you may also
consider contacting the Red Hat Milan office for a solution based on the Red Hat
Hyperconverged Infrastructure product, which is based on oVirt and
GlusterFS. See some use cases here:
https://www.redhat.com/it/technologies/storage/use-cases/virtualization

On 6 July 2018 at 11:36, Marco Tosato  wrote:

> Good morning everyone!
>
>
>
> My company is starting a new project that requires a medium-sized
> virtualization cluster (6 nodes). We would like to evaluate oVirt and
> GlusterFS as a solution, and we are looking for a partner (a company or a
> professional) who already has experience with this software and can
> provide us with the following services:
>
>
>
>- Installation and putting the system into production
>- Resolution of any problems that may arise in the future
>- Training and deeper knowledge of oVirt/GlusterFS
>
>
>
> Thanks to everyone!
>
>
>
>
>
> *Marco Tosato*
> Ricerca e Sviluppo
> --
>
> [image: Esapro] 
>
>
>
> *Esapro Srl*
> Sede Legale
> Largo G. Donegani 2
> 20121 Milano (MI)
> REA CCIAA MI 2515336
> P.I./C.F. 03836010409
> Cap. Sociale Euro 111.111,00 i.v.
>
>
> Sede Amministrativa
> Via Cappello 12/a
> 35010 San Pietro in Gu (PD)
> T: +39 049 9490075
> F: +39 049 5960992
> www.esapro.it - i...@esapro.it
>
> [image: CSQ IQNet - UNI EN ISO 9001:2015]
>
> Ai sensi del D. Lgs. 196/2003 e successive modificazioni ed integrazioni
> si precisa che le informazioni contenute in questo messaggio sono riservate
> ed a uso esclusivo del destinatario. Sono vietati la riproduzione e l’uso
> di questa e-mail in mancanza di autorizzazione del mittente. Se avete
> ricevuto questa e-mail per errore, vi invitiamo ad eliminarla senza
> copiarla e a non inoltrarla a terzi, dandocene gentilmente comunicazione. 
> Grazie
> per la collaborazione / This message, for the D. Lgs n. 196/2003 (Privacy
> Code), may contain confidential and/or privileged information. If you are
> not the addressee or authorized to receive this for the addressee, you must
> not use, copy, disclose or take any action based on this message or any
> information herein. If you have received this message in error, please
> advise the sender immediately by reply e-mail and delete this message.
> Thank you for your cooperation.
>
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/ZD2OMPJVMOCPOLFHHWPDZRODEIKUYLEA/
>
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5L43OVYKNTTRTH7IVIJPGGXAENQJ5MA3/


[ovirt-users] Virtualization project ....we would like to use oVirt

2018-07-06 Thread Marco Tosato
Good morning everyone!

My company is starting a new project that requires a medium-sized virtualization 
cluster (6 nodes). We would like to evaluate oVirt and GlusterFS as a solution, 
and we are looking for a partner (a company or a professional) who already has 
experience with this software and can provide us with the following services:


  *   Installation and putting the system into production
  *   Resolution of any problems that may arise in the future
  *   Training and deeper knowledge of oVirt/GlusterFS

Thanks to everyone!


Marco Tosato
Ricerca e Sviluppo


[Esapro]



Esapro Srl
Sede Legale
Largo G. Donegani 2
20121 Milano (MI)
REA CCIAA MI 2515336
P.I./C.F. 03836010409
Cap. Sociale Euro 111.111,00 i.v.



Sede Amministrativa
Via Cappello 12/a
35010 San Pietro in Gu (PD)
T: +39 049 9490075
F: +39 049 5960992
www.esapro.it - i...@esapro.it


[CSQ IQNet - UNI EN ISO 9001:2015]



Ai sensi del D. Lgs. 196/2003 e successive modificazioni ed integrazioni si 
precisa che le informazioni contenute in questo messaggio sono riservate ed a 
uso esclusivo del destinatario. Sono vietati la riproduzione e l'uso di questa 
e-mail in mancanza di autorizzazione del mittente. Se avete ricevuto questa 
e-mail per errore, vi invitiamo ad eliminarla senza copiarla e a non inoltrarla 
a terzi, dandocene gentilmente comunicazione. Grazie per la collaborazione / 
This message, for the D. Lgs n. 196/2003 (Privacy Code), may contain 
confidential and/or privileged information. If you are not the addressee or 
authorized to receive this for the addressee, you must not use, copy, disclose 
or take any action based on this message or any information herein. If you have 
received this message in error, please advise the sender immediately by reply 
e-mail and delete this message. Thank you for your cooperation.



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZD2OMPJVMOCPOLFHHWPDZRODEIKUYLEA/


[ovirt-users] OVIRT 4.2: MountError on Posix Compliant FS

2018-07-06 Thread jtorres
Hi all,

I've deployed oVirt version 4.2.4.5-1.el7 on a small cluster and I'm trying 
to use a configured Spectrum Scale (GPFS) distributed filesystem as a data 
storage domain in order to test it.

I've completed the configuration of the storage, and the filesystems defined are 
correctly mounted on the client hosts, as we can see:

gpfs_kvmgpfs  233T  288M  233T   1% /gpfs/kvm
gpfs_fast   gpfs  8.8T  5.2G  8.8T   1% /gpfs/fast

The output of the mount command is:

gpfs_kvm on /gpfs/kvm type gpfs (rw,relatime)
gpfs_fast on /gpfs/fast type gpfs (rw,relatime)

I can write to and read from the mounted filesystem correctly, but when I try to 
add the locally mounted filesystem I encounter some errors related to the mounting 
process of the storage.

The parameters passed to add new storage domain are:

Data Center: Default (V4)
Name: gpfs_kvm
Description: VM data on GPFS
Domain Function: Data
Storage Type: Posix Compliant FS
Host to Use: kvm1c01
Path: /gpfs/kvm
VFS Type: gpfs
Mount Options: rw, relatime

The ovirt error that I obtained is: 
Error while executing action Add Storage Connection: Problem while trying 
to mount target

On the /var/log/vdsm/vdsm.log I can see:
2018-07-05 09:07:43,400+0200 INFO  (jsonrpc/0) [vdsm.api] START 
connectStorageServer(domType=6, spUUID=u'----', 
conList=[{u'mnt_options': u'rw,relatime', u'id': 
u'----', u'connection': u'/gpfs/kvm', u'iqn': 
u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'gpfs', u'password': 
'', u'port': u''}], options=None) from=:::10.2.1.254,43908, 
flow_id=498668a0-a240-469f-ac33-f8c7bdeb481f, 
task_id=857460ed-5f0e-4bc6-ba54-e8f2c72e9ac2 (api:46)
2018-07-05 09:07:43,403+0200 INFO  (jsonrpc/0) 
[storage.StorageServer.MountConnection] Creating directory 
u'/rhev/data-center/mnt/_gpfs_kvm' (storageServer:167)
2018-07-05 09:07:43,404+0200 INFO  (jsonrpc/0) [storage.fileUtils] Creating 
directory: /rhev/data-center/mnt/_gpfs_kvm mode: None (fileUtils:197)
2018-07-05 09:07:43,404+0200 INFO  (jsonrpc/0) [storage.Mount] mounting 
/gpfs/kvm at /rhev/data-center/mnt/_gpfs_kvm (mount:204)
MountError: (32, ';mount: wrong fs type, bad option, bad superblock on 
/gpfs/kvm,\n   missing codepage or helper program, or other error\n\n   
In some cases useful info is found in syslog - try\n   dmesg | tail or 
so.\n')

I know that in recent versions of the GPFS implementation they removed the device 
under /dev due to incompatibilities with systemd. I don't know if this change 
affects the oVirt mounting process.

Can you help me add this filesystem to the oVirt environment? 
Are the parameters that I used above OK, or do I need to make some modifications?
Is it possible that the process fails because I don't have a device related to the 
gpfs filesystems under /dev?
Can we apply some kind of workaround to mount the filesystem manually for the 
oVirt environment? E.g. create the directory /rhev/data-center/mnt/_gpfs_kvm manually 
and then mount /gpfs/kvm over it?
Is it possible to modify the code to bypass some checks or something?

Reading the available documentation on the Internet, I found that oVirt was 
compatible with this filesystem (GPFS) implementation because it is POSIX compliant; 
this is the main reason for testing it in our cluster.

Does it remain compatible in current versions? Or maybe there are changes that 
broke this integration?

Many thanks for all in advance!

Kind regards!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LLXLSI4ZQFU32PV7AXLYD5LJBS23HBKO/


[ovirt-users] Re: Cannot import a qcow2 image

2018-07-06 Thread etienne . charlier
From a user point of view ...

Letsencrypt or another certificate authority ... it should not matter...

Just having one set of files (cert/key/ca-chain) with a clear name, referenced 
from all the config files, would be the easiest...

Once you get the certs from your provider, you just overwrite the files with 
your own, restart the services and "that's it" ;-)

Let's Encrypt renewal does not have to be handled on the oVirt host (on a bastion 
host where LE is configured, a simple script can be run to update the certs 
and restart the services...)
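
(For what it's worth, the "simple script" mentioned above could be as small as the
following, assuming the engine's documented Apache certificate locations and a
certificate already renewed on the bastion host; the hostname is a placeholder, and
depending on your setup other proxy services may need their certs updated too:)

  #!/bin/bash
  # push a renewed Let's Encrypt cert from the bastion host to the engine
  ENGINE=engine.example.com   # placeholder engine FQDN
  scp fullchain.pem root@$ENGINE:/etc/pki/ovirt-engine/apache-ca.pem
  scp cert.pem      root@$ENGINE:/etc/pki/ovirt-engine/certs/apache.cer
  scp privkey.pem   root@$ENGINE:/etc/pki/ovirt-engine/keys/apache.key.nopass
  # restart the web front end so Apache picks up the new files
  ssh root@$ENGINE 'systemctl restart httpd'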

My 0.02€
Etienne
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QJIAZ25JQYO76OI5T3CAS2E4CKLS2LMU/