Re: [ovirt-users] 4.2 downgrade

2017-09-30 Thread Ryan Mahoney
I installed from the master repo; should I have used the 42pre repo?
Should I add that repo and update to that version, or should I keep updating
from the master repo every once in a while?

On Sat, Sep 30, 2017 at 1:37 PM, Yaniv Kaul  wrote:

>
>
> On Sep 30, 2017 8:09 AM, "Ryan Mahoney" 
> wrote:
>
> Accidentally upgraded a 4.0 environment to 4.2 (didn't realize the
> "master" repo was a development repo). What are my chances of rolling back
> to 4.0 (or 4.1 for that matter), and what's the best way, if it's possible?
>
>
> There is no rolling back an oVirt installation.
> That being said, I believe the Alpha quality is good. It is not feature
> complete and we of course have more polishing to do, but it's very usable
> and we will continue to ship updates to it. Let us know promptly what
> issues you encounter.
> Y.
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI VLAN host connections - bond or multipath & IPv6

2017-09-30 Thread Ben Bradley

On 28/09/17 22:27, Ben Bradley wrote:

On 28/09/17 08:32, Yaniv Kaul wrote:



On Wed, Sep 27, 2017 at 10:59 PM, Ben Bradley wrote:


Hi All

I'm looking to add a new host to my oVirt lab installation.
I'm going to share out some LVs from a separate box over iSCSI and
will hook the new host up to that.
I have 2 NICs on the storage host and 2 NICs on the new oVirt host
to dedicate to the iSCSI traffic.
I also have 2 separate switches, so I'm looking for redundancy here:
both the iSCSI host and the oVirt host are plugged into both switches.

If this were non-iSCSI traffic and oVirt weren't involved, I would create
bonded interfaces in active-backup mode and layer the VLANs on top
of that.

But for iSCSI traffic without oVirt involved I wouldn't bother with
a bond; I'd just use multipath.
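
(For concreteness, a rough sketch of the bond I mean in the non-iSCSI case;
interface names are invented, and nmcli is just one way to do it:)

    # active-backup bond with the VLANs layered on top
    nmcli con add type bond ifname bond0 mode active-backup
    nmcli con add type bond-slave ifname ens1f0 master bond0
    nmcli con add type bond-slave ifname ens1f1 master bond0
    nmcli con add type vlan ifname bond0.100 dev bond0 id 100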

From scanning the oVirt docs it looks like there is an option to
have oVirt configure iSCSI multipathing.


Look for iSCSI bonding - that's the feature you are looking for.


Thanks for the replies.

By iSCSI bonding, do you mean the oVirt feature "iSCSI multipathing" as 
mentioned here 
https://www.ovirt.org/documentation/admin-guide/chap-Storage/ ?


Separate links seem to be the consensus then. These are links dedicated to
iSCSI traffic, not shared; the ovirtmgmt bridge lives on top of an
active-backup bond on other NICs.


Thanks, Ben


And an extra question about oVirt's iSCSI multipathing: should each path be
a separate VLAN+subnet?
I assume separate VLANs would be needed to run separate physical fabrics,
if desired.
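
To make that concrete, here is roughly what I have in mind on the initiator
side; a sketch only, with invented interface names, VLANs, subnets and
portal addresses:

    # one iSCSI iface per NIC/fabric, each in its own VLAN+subnet
    iscsiadm -m iface -I iface0 -o new
    iscsiadm -m iface -I iface0 -o update -n iface.net_ifacename -v ens1f0
    iscsiadm -m iface -I iface1 -o new
    iscsiadm -m iface -I iface1 -o update -n iface.net_ifacename -v ens1f1
    # e.g. ens1f0 = 10.10.10.0/24 on VLAN 100, ens1f1 = 10.10.20.0/24 on VLAN 200
    iscsiadm -m discovery -t st -p 10.10.10.1 -I iface0
    iscsiadm -m discovery -t st -p 10.10.20.1 -I iface1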


Thanks, Ben




So what's the best/most-supported option for oVirt?
Manually create active-backup bonds so oVirt just sees a single
storage link between host and storage?
Or leave them as separate interfaces on each side and use oVirt's
multipath/bonding?

Also, I quite like the idea of using IPv6 for the iSCSI VLAN, purely
down to the fact that I could use link-local addressing and not have to
worry about setting up static IPv4 addresses or DHCP. Is IPv6 iSCSI
supported by oVirt?


No, we do not support it. There has been some work in the area [1], but I'm
not sure it is complete.

Y.

[1] 
https://gerrit.ovirt.org/#/q/status:merged+project:vdsm+branch:master+topic:ipv6-iscsi-target-support 




Thanks, Ben
___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




[ovirt-users] oVirt 4.2 hosted-engine command damaged

2017-09-30 Thread Julián Tete
I updated my lab environment from oVirt 4.1.x to oVirt 4.2 Alpha.

The hosted-engine command has been corrupted.

An example:

hosted-engine --vm-status

Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py",
line 213, in <module>
    if not status_checker.print_status():
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py",
line 110, in print_status
    all_host_stats = self._get_all_host_stats()
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py",
line 75, in _get_all_host_stats
    all_host_stats = ha_cli.get_all_host_stats()
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 154, in get_all_host_stats
    return self.get_all_stats(self.StatModes.HOST)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 99, in get_all_stats
    stats = broker.get_stats_from_storage(service)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 147, in get_stats_from_storage
    for host_id, data in six.iteritems(result):
  File "/usr/lib/python2.7/site-packages/six.py", line 599, in iteritems
    return d.iteritems(**kw)
AttributeError: 'NoneType' object has no attribute 'iteritems'

hosted-engine --set-maintenance --mode=none

Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py",
line 88, in <module>
    if not maintenance.set_mode(sys.argv[1]):
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py",
line 76, in set_mode
    value=m_global,
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 240, in set_maintenance_mode
    str(value))
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 187, in set_global_md_flag
    all_stats = broker.get_stats_from_storage(service)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 147, in get_stats_from_storage
    for host_id, data in six.iteritems(result):
  File "/usr/lib/python2.7/site-packages/six.py", line 599, in iteritems
    return d.iteritems(**kw)
AttributeError: 'NoneType' object has no attribute 'iteritems'

hosted-engine --vm-start

VM exists and its status is Up
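
For reference, the first workaround I plan to try, judging only from the
traceback (the broker seems to return no stats from storage); this is a
guess on my part, not a confirmed fix:

    # restart the hosted-engine HA daemons and recheck
    systemctl restart ovirt-ha-broker ovirt-ha-agent
    systemctl status ovirt-ha-broker ovirt-ha-agent
    hosted-engine --vm-status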

Hardware

Manufacturer: HP
Family: ProLiant
Product Name: ProLiant BL460c Gen8
CPU Model Name: Intel(R) Xeon(R) CPU E5-2667 v2 @ 3.30GHz
CPU Type: Intel SandyBridge Family
CPU Sockets: 2
CPU Cores per Socket: 8
CPU Threads per Core: 2 (SMT Enabled)

Software:

OS Version: RHEL - 7 - 4.1708.el7.centos
OS Description: CentOS Linux 7 (Core)
Kernel Version: 4.12.0 - 1.el7.elrepo.x86_64
KVM Version: 2.9.0 - 16.el7_4.5.1
LIBVIRT Version: libvirt-3.2.0-14.el7_4.3
VDSM Version: vdsm-4.20.3-95.git0813890.el7.centos
SPICE Version: 0.12.8 - 2.el7.1
GlusterFS Version: glusterfs-3.12.1-2.el7
CEPH Version: librbd1-0.94.5-2.el7
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Help with Power Management network

2017-09-30 Thread ~Stack~
On 09/30/2017 06:51 AM, Dan Yasny wrote:
> The power management command is sent by the engine via a proxy host.
> That means you need at least one more host to act as proxy. The engine
> itself doesn't need to access the BMC network directly, just like the
> engine needs no access to the storage network to perform storage
> manipulations.
> 
> I think in some recent versions fencing by the engine was introduced,
> but I don't have a setup in front of me to verify.

Ah, good to know. Thank you for clarifying!
~Stack~




signature.asc
Description: OpenPGP digital signature
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Engine crash, storage won't activate, hosts won't shutdown, template locked, gpu passthrough failed

2017-09-30 Thread Yaniv Kaul
On Sep 30, 2017 7:50 PM, "M R"  wrote:

Hello!

I have been using oVirt for the last four weeks, testing and trying to get
things working.

I have collected here the problems I have found. This might be a bit long,
but help with any of these, or maybe all of them, from several people would
be wonderful.


It's a bit difficult and inefficient to list all issues in a single post,
unless you feel they are related?
Also, it'd be challenging to understand them without logs.

Lastly, it's usually a good habit, when something doesn't work, to solve it
rather than continue. I do suspect your issues are somehow related.
Y.


My version is oVirt Node 4.1.5 and 4.1.6, downloaded from the website as the
latest stable release at the time. Also tested with CentOS Minimal + the
oVirt repo; in that case, issue 3 is solved, but the other problems persist.


1. Power off host
First day after installing oVirt Node, it was able to reboot and shut down
cleanly. No problems at all. After a few days of using oVirt, I have noticed
that hosts are unable to shut down. I have tested this in several different
ways and come to the following conclusion: if the engine has not been
started after boot, all hosts are able to shut down cleanly. But if the
engine is started even once, none of the hosts are able to shut down
anymore. The only way to get power off is to unplug or press the power
button for a longer time as a hard reset. I have failed to find a way to
have the engine running and then shut down a host. This affects all hosts
in the cluster.

2. GlusterFS failed
Every time I have booted the hosts, GlusterFS has failed. For some reason,
it turns to an inactive state even though I have run systemctl enable
glusterd. Before that command it was just inactive; after it, it says
"failed (inactive)". There is still a way to get GlusterFS working: I have
to run systemctl start glusterd manually and everything starts working. Why
do I have to give manual commands to start GlusterFS? I have used this on
CentOS before and never had this problem. Is the Node installer that much
different from core CentOS?

3. EPEL
As I said, I have used CentOS before, and I would like to be able to
install some packages from the repo. But even if I install epel-release, it
won't find packages such as nano or htop. I have read about how to add EPEL
to oVirt Node here: https://www.ovirt.org/release/4.1.1/#epel
I have even tried manually editing the repo list, but it still fails to
find normal EPEL packages. I have set up the additional exclude=collectd*
as guided in the link above. This doesn't make any difference. That said, I
am able to manually install packages downloaded on another CentOS machine
and transferred with scp to the oVirt node. Still, this needs a lot of
manual input and is just a workaround for the bug.

4. Engine startup
When I try to start the engine while GlusterFS is up, it says "vm doesn't
exist, starting up". Still, it won't start up automatically. I have to give
the command hosted-engine --vm-start several times, waiting about 5 minutes
between attempts. This usually takes about 30 minutes, and then, completely
at random, after one of the attempts the engine shoots up and is up within
a minute. This has happened every time I boot up, and the number of
attempts needed keeps changing: at best it's been the 3rd try, at worst the
7th. Calculating from there, it can take from 15 to 35 minutes to get the
engine up. Nevertheless, it does eventually come up every time. If there is
a way to get it up on the first try, or even better automatically, that
would be great.

5. Activate storage
Once the engine is up, there has been a problem with storage. When I go to
the storage tab, it shows all domains red. Even if I wait for 15-20
minutes, the storage won't turn green by itself. I have to go and press the
Activate button on the main data storage. Then it brings the main storage
up in 2-3 minutes. Sometimes it fails once, but it definitely gets the main
data storage up on the second try. And then, magically, at the same time
all the other storage domains instantly go green. The main storage is
GlusterFS and I have 3 NFS storage domains as well. This is only a problem
when starting up; once the domains are green they stay green. Still
annoying that it cannot get this done by itself.

6. Template locked
I tried to create a template from an existing VM, and it resulted in the
original VM going into a locked state and the template being locked. I have
read that some other people had a similar problem and they were advised to
restart the engine to see if that solves it. For me it has now been a week
and several restarts of the engine and hosts, but there is still one VM
locked and the template locked as well. This is not a big problem, but
still a problem. Everything is greyed out and I cannot delete the stuck VM
or template.

7. Unable to use GPU
I have been trying to do GPU passthrough with my VM. First, there was a
problem with the qemu cmd line, but once 

[ovirt-users] deprecating export domain?

2017-09-30 Thread Charles Kozler
Hello,

I recently read on this list, from a Red Hat member, that the export domain
is either being deprecated or being considered for deprecation.

To that end, can you share details? Can you share any notes/postings/BZs
that document this? I would imagine something like this would be discussed
with a larger audience.

This seems like a somewhat significant change to make, and I am curious
about where this is scheduled. Currently, a lot of my backups rely
explicitly on an export domain for online snapshots, so I'd like to plan
accordingly.

Thanks!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4.2 downgrade

2017-09-30 Thread Yaniv Kaul
On Sep 30, 2017 8:09 AM, "Ryan Mahoney" 
wrote:

Accidentally upgraded a 4.0 environment to 4.2 (didn't realize the "master"
repo was a development repo). What are my chances of rolling back to 4.0
(or 4.1 for that matter), and what's the best way, if it's possible?


There is no rolling back an oVirt installation.
That being said, I believe the Alpha quality is good. It is not feature
complete and we of course have more polishing to do, but it's very usable
and we will continue to ship updates to it. Let us know promptly what
issues you encounter.
Y.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Passing through a display port to a GUEST vm

2017-09-30 Thread Alexander Witte
Hi,

Our server has 2 display ports on an integrated graphics card. One port
displays the host OS (CentOS 7 with KVM installed), and we would like the
second display port to display one of the GUEST VMs (a Windows 10 server).
I was just curious if anyone had set this kind of thing up before, or if
this is even possible given there is no external video card. This is all in
an oVirt environment.

If passthrough of the display port is not possible, I was thinking maybe of
using a USB-to-HDMI adapter and passing through the USB port to the guest VM?

Here’s the server we’re using:

https://www.menmicro.com/products/box-pcs/bl70w/

If anyone has done this or has any thoughts it would be helpful!

Thanks,

Alex Witte
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Engine crash, storage won't activate, hosts won't shutdown, template locked, gpu passthrough failed

2017-09-30 Thread M R
Hello!

I have been using oVirt for the last four weeks, testing and trying to get
things working.

I have collected here the problems I have found. This might be a bit long,
but help with any of these, or maybe all of them, from several people would
be wonderful.

My version is oVirt Node 4.1.5 and 4.1.6, downloaded from the website as the
latest stable release at the time. Also tested with CentOS Minimal + the
oVirt repo; in that case, issue 3 is solved, but the other problems persist.


1. Power off host
First day after installing oVirt Node, it was able to reboot and shut down
cleanly. No problems at all. After a few days of using oVirt, I have noticed
that hosts are unable to shut down. I have tested this in several different
ways and come to the following conclusion: if the engine has not been
started after boot, all hosts are able to shut down cleanly. But if the
engine is started even once, none of the hosts are able to shut down
anymore. The only way to get power off is to unplug or press the power
button for a longer time as a hard reset. I have failed to find a way to
have the engine running and then shut down a host. This affects all hosts
in the cluster.
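
For what it's worth, what I plan to check next before resorting to the
power button. This is my own guess (that sanlock leases held for the
hosted-engine storage block a clean shutdown), not a confirmed diagnosis:

    # see whether sanlock still holds leases on this host
    sanlock client status
    # stop the HA daemons first, then try a normal shutdown
    systemctl stop ovirt-ha-agent ovirt-ha-broker
    shutdown -h now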

2. GlusterFS failed
Every time I have booted the hosts, GlusterFS has failed. For some reason,
it turns to an inactive state even though I have run systemctl enable
glusterd. Before that command it was just inactive; after it, it says
"failed (inactive)". There is still a way to get GlusterFS working: I have
to run systemctl start glusterd manually and everything starts working. Why
do I have to give manual commands to start GlusterFS? I have used this on
CentOS before and never had this problem. Is the Node installer that much
different from core CentOS? The exact sequence I run every boot is below.
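
    systemctl enable glusterd   # reports enabled, but state is "failed (inactive)" after boot
    systemctl start glusterd    # this is what actually brings it up
    systemctl status glusterd   # active (running) from here on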

3. EPEL
As I said, I have used CentOS before, and I would like to be able to
install some packages from the repo. But even if I install epel-release, it
won't find packages such as nano or htop. I have read about how to add EPEL
to oVirt Node here: https://www.ovirt.org/release/4.1.1/#epel
I have even tried manually editing the repo list, but it still fails to
find normal EPEL packages. I have set up the additional exclude=collectd*
as guided in the link above. This doesn't make any difference. That said, I
am able to manually install packages downloaded on another CentOS machine
and transferred with scp to the oVirt node. Still, this needs a lot of
manual input and is just a workaround for the bug.
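
For reference, the steps I tried, following the link above (the repo file
path is my assumption):

    yum install -y epel-release
    # per https://www.ovirt.org/release/4.1.1/#epel, keep collectd out of EPEL
    echo "exclude=collectd*" >> /etc/yum.repos.d/epel.repo
    yum install -y nano htop    # still fails to find the packages on Node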

4. Engine startup
When I try to start the engine while GlusterFS is up, it says "vm doesn't
exist, starting up". Still, it won't start up automatically. I have to give
the command hosted-engine --vm-start several times, waiting about 5 minutes
between attempts. This usually takes about 30 minutes, and then, completely
at random, after one of the attempts the engine shoots up and is up within
a minute. This has happened every time I boot up, and the number of
attempts needed keeps changing: at best it's been the 3rd try, at worst the
7th. Calculating from there, it can take from 15 to 35 minutes to get the
engine up. Nevertheless, it does eventually come up every time. If there is
a way to get it up on the first try, or even better automatically, that
would be great.

5. Activate storage
Once the engine is up, there has been a problem with storage. When I go to
the storage tab, it shows all domains red. Even if I wait for 15-20
minutes, the storage won't turn green by itself. I have to go and press the
Activate button on the main data storage. Then it brings the main storage
up in 2-3 minutes. Sometimes it fails once, but it definitely gets the main
data storage up on the second try. And then, magically, at the same time
all the other storage domains instantly go green. The main storage is
GlusterFS and I have 3 NFS storage domains as well. This is only a problem
when starting up; once the domains are green they stay green. Still
annoying that it cannot get this done by itself.

6. Template locked
I tried to create a template from an existing VM, and it resulted in the
original VM going into a locked state and the template being locked. I have
read that some other people had a similar problem and they were advised to
restart the engine to see if that solves it. For me it has now been a week
and several restarts of the engine and hosts, but there is still one VM
locked and the template locked as well. This is not a big problem, but
still a problem. Everything is greyed out and I cannot delete the stuck VM
or template.
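
One thing I have not tried yet: I have seen unlock_entity.sh mentioned for
stuck locks. A sketch below, assuming it applies to this case (run on the
engine machine, after backing up the database):

    # list locked entities, then unlock the stuck template and VM by ID
    /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -q
    /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t template <template-id>
    /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t vm <vm-id>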

7. Unable to use GPU
I have been trying to do GPU passthrough with my VM. First, there was a
problem with the qemu cmd line, but once I figured out a way to get the
commands in, it may be working(?). The log looks fine, but it still doesn't
give the functionality I'm looking for. As I mentioned in the other email, I
have found this: https://www.mail-archive.com/users@ovirt.org/msg40422.html
It gives the right syntax in the log, but still won't fix error 43 with the
Nvidia drivers. If anybody got this working or has ideas 

Re: [ovirt-users] Qemu prevents vm from starting up properly

2017-09-30 Thread M R
Hello!

I may have found a way to do this.
I found this older email archive where a similar problem was described:
https://www.mail-archive.com/users@ovirt.org/msg40422.html

With this, the -cpu arguments show up correctly in the log.
But it still won't fix the Nvidia error 43, which is an annoying "bug"
implemented by Nvidia.
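
For the record, the custom property value I am testing now. My guess, not an
authoritative syntax, is that a CPU model is needed and that the stray space
after the comma in my earlier attempt was part of the problem:

    ["-cpu", "host,kvm=off,hv_vendor_id=sometext"]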

I have several GTX graphics cards collecting dust and would like to use
them, but have failed to do so...

best regards
Mikko

On Thu, Sep 28, 2017 at 8:46 AM, Yedidyah Bar David  wrote:

> On Wed, Sep 27, 2017 at 8:32 PM, M R  wrote:
> > Hello!
> >
> > Thank you very much! I had misunderstood how it was supposed to be
> > written in qemu_cmdline. There was a typo in the syntax and the error
> > log revealed it. It is working IF I use ["-spice", "tls-ciphers=DES-CBC3-SHA"].
> > So I believe that the installation is done correctly.
> >
> > Though, my problem still exists.
> > This is what I have been trying to use for qemu_cmdline:
> > ["-cpu", "kvm=off, hv_vendor_id=sometext"]
> > It does not work and most likely is incorrectly written.
>
> You should first come up with something that works when you try it
> manually, then try adapting that to the hook's syntax.
>
> >
> > I understood that qemu commands are often exported into xml files and the
> > command I'm trying to write is the following:
> >
> > [libvirt XML snippet stripped by the mailing-list archive]
>
> I guess you refer above to libvirt xml. This isn't strictly
> related to qemu, although in practice most usage of libvirt
> is with qemu.
>
> >
> > How do I write this in custom properties for qemu_cmdline?
>
> If you have a working libvirt vm, with the options you need,
> simply check how it translated your xml to qemu's command line.
> You can see this either in its logs, or using ps.
>
> Best,
>
> >
> >
> > best regards
> >
> > Mikko
> >
> >
> >
> > On 27 Sep 2017 3:27 pm, "Yedidyah Bar David"  wrote:
> >>
> >> On Wed, Sep 27, 2017 at 1:14 PM, M R  wrote:
> >> > Hello!
> >> >
> >> > I did check the logs from the hosts, but didn't notice anything that
> >> > would help me. I can copy-paste the logs later.
> >> >
> >> > I was not trying to get qemu to crash the VM.
> >> > I'm trying to add new functionality with qemu.
> >> >
> >> > I wasn't sure if my syntax was correct, so I copy-pasted the example
> >> > command for SPICE from that website. And it still behaves similarly.
> >> >
> >> > My conclusion is that qemu cmdline is set up wrong or it's not
> >> > working at all. But I don't know how to check that.
> >>
> >> Please check/share /var/log/libvirt/qemu/* and /var/log/vdsm/* . Thanks.
> >>
> >> >
> >> > On 27 Sep 2017 12:32, "Yedidyah Bar David"  wrote:
> >> >>
> >> >> On Wed, Sep 27, 2017 at 11:32 AM, M R  wrote:
> >> >> > Hello!
> >> >> >
> >> >> > I have followed instructions in
> >> >> > https://www.ovirt.org/develop/developer-guide/vdsm/hook/
> qemucmdline/
> >> >> >
> >> >> > After adding any command for qemu cmdline, the VM will try to
> >> >> > start, but will immediately shut down.
> >> >> >
> >> >> > Is this a bug?
> >> >>
> >> >> If you intended, with the params you passed, to make qemu fail, for
> >> >> whatever reason (debugging qemu?), then it's not a bug :-) Otherwise,
> >> >> it is, but we can't know where.
> >> >>
> >> >> > Or is the information in the link insufficient?
> >> >> > If it would be possible to confirm this, and if there's a way to
> >> >> > fix it, I would really like to have a step-by-step guide on how
> >> >> > to get this working.
> >> >>
> >> >> Did you check relevant logs? vdsm/libvirt?
> >> >>
> >> >> Best,
> >> >> --
> >> >> Didi
> >>
> >>
> >>
> >> --
> >> Didi
>
>
>
> --
> Didi
>



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Help with Power Management network

2017-09-30 Thread Dan Yasny
The power management command is sent by the engine via a proxy host. That
means you need at least one more host to act as proxy. The engine itself
doesn't need to access the BMC network directly, just like the engine needs
no access to the storage network to perform storage manipulations.

I think in some recent versions fencing by the engine was introduced, but I
don't have a setup in front of me to verify.
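
(As an aside, a command-line check of the BMC from any box on that network
would look roughly like the following; the fence agent and options are
assumed, so adjust to your hardware:)

    fence_ipmilan -a 192.168.100.10 -l admin -p secret -o status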

On Sep 29, 2017 11:13 PM, "~Stack~"  wrote:

> On 09/29/2017 05:31 PM, Dan Yasny wrote:
> > You need more than one host for power management
> >
>
> Seriously?? I didn't see anything like that in the docs... Maybe I just
> missed it.
>
> Also, why wouldn't it still validate? It should still be able to talk to
> the interface over the BMC/IPMI network. The fact that I can run the
> equivalent test on the command line makes it seem like it should at
> least be able to check via the test. Obviously, it would be silly for it
> to try to manage itself, but it could at least verify that the
> configuration is valid, right?
>
> I have more hosts that I'm going to add; I was just hoping to do those
> via the Foreman integration instead of manually building them. Since I'm
> not quite ready for that, I will just build a second host on Monday and
> report back.
>
> Thanks for the feedback. I would never have guessed that as a possibility.
> :-)
>
> ~Stack~
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Metrics store doc page link broken?

2017-09-30 Thread Fred Rolland
The file has been renamed:
https://github.com/ViaQ/Main/blob/master/README-install.md

On Fri, Sep 29, 2017 at 7:56 PM, Gianluca Cecchi 
wrote:

> Hello,
> I was just taking a look at the page in the subject here:
>
> https://www.ovirt.org/develop/release-management/features/
> metrics/metrics-store-installation/
>
> But it seems that the main link to
> "Metrics store setup on top of OpenShift"
> is broken...
> Any other pointer where to begin to see the expected workflow for it?
> Thanks
> Gianluca
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users