Re: [ovirt-users] Engine reports

2018-04-06 Thread Rich Megginson

Great!

On 04/06/2018 04:43 PM, Peter Hudec wrote:

The Ansible playbook finished OK. The next step comes after the weekend.

thanks
Peter

On 06/04/2018 23:34, Rich Megginson wrote:

On 04/06/2018 03:08 PM, Peter Hudec wrote:

Hi,

https://tools.apps.hudecof.net/paste/view/1d493d52

The differences are:
- I used the latest Ansible 2.5 from PyPI, since the RPM package does
not meet the requirements
- I used the Git repo for openshift-ansible.

I could start over with the RPM-based openshift-ansible.

I recommend that you start over with the RPM-based openshift-ansible,
which is what the instructions say and what I have tested.

Alternatively, if you want to go the more experimental route, you could
enable the epel-testing yum repo; ansible-2.5 is in epel-testing, which
I've been told by the openshift-ansible guys should work.


 Peter

On 06/04/2018 20:23, Rich Megginson wrote:

I'm assuming you are running on a CentOS7 machine, recently updated to
the latest base packages. Please confirm.

Please provide the output of the following commands:

sudo yum repolist

sudo rpm -q ansible

sudo yum repoquery -i ansible

sudo rpm -q openshift-ansible

sudo yum repoquery -i openshift-ansible

sudo rpm -q origin

sudo yum repoquery -i origin

sudo cat
/usr/share/ansible/openshift-ansible/roles/lib_utils/callback_plugins/aa_version_requirement.py



sudo ls -alrtF /etc/origin/logging

Please confirm that you followed the steps listed in the link below,
including the use of the ansible inventory, vars.yaml.template, and repo
files:
https://github.com/richm/Main/blob/7e663351dc371b7895564072c7656ebfda45068d/README-install.md



On 04/06/2018 03:19 AM, Peter Hudec wrote:

The first issue is the Ansible version; it can be solved with a virtualenv:

FATAL: Current Ansible version (2.4.2.0) is not supported. Supported
versions: 2.4.3.0 or newer

yum install -y python-virtualenv
virtualenv ansible
. ./ansible/bin/activate
pip install -U pip setuptools
pip install ansible
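The FATAL above is openshift-ansible's version gate refusing to run. For illustration only (this is not the plugin's actual code), the same dotted-version comparison can be sketched in shell:

```shell
# Compare dotted version strings using sort -V; succeeds (exit 0)
# when CURRENT >= MINIMUM. This mirrors the check that rejects 2.4.2.0.
version_ok() {  # usage: version_ok CURRENT MINIMUM
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

version_ok 2.4.2.0 2.4.3.0 && echo supported || echo unsupported  # -> unsupported
version_ok 2.5.0 2.4.3.0 && echo supported || echo unsupported    # -> supported
```

Running ansible --version inside the activated virtualenv should now report 2.5.x and pass such a check.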


The second issue is a little bit strange to me:

2018-04-06 10:00:30,264 p=68226 u=root |  Failure summary:


     1. Hosts:    localhost
        Play:     OpenShift Aggregated Logging
        Task:     pulling down signing items from host
        Message:  All items completed

I found this in the logs:

2018-04-06 10:00:28,356 p=68226 u=root |  failed: [localhost]
(item=ca.crl.srl) => {
   "changed": false,
   "invocation": {
       "module_args": {
           "src": "/etc/origin/logging/ca.crl.srl"
       }
   },
   "item": "ca.crl.srl",
   "msg": "file not found: /etc/origin/logging/ca.crl.srl"
}


Is this good or bad?

  Peter
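For what it's worth, a .srl file is just the serial-number file that openssl writes next to a CA certificate the first time it signs something with -CAcreateserial; if the playbook's CA never performed such a signing, the file legitimately never existed, so the "file not found" above is likely harmless. A small self-contained illustration (throwaway names, run in an empty directory):

```shell
# Create a throwaway CA; note that no .srl file exists yet.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj /CN=demo-ca \
    -keyout ca.key -out ca.crt 2>/dev/null
openssl req -new -newkey rsa:2048 -nodes -subj /CN=demo-leaf \
    -keyout leaf.key -out leaf.csr 2>/dev/null
test -f ca.srl && echo "srl exists" || echo "no srl yet"   # -> no srl yet

# The serial file only comes into existence at signing time.
openssl x509 -req -days 1 -in leaf.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out leaf.crt 2>/dev/null
test -f ca.srl && echo "srl exists" || echo "no srl yet"   # -> srl exists
```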

On 05/04/2018 16:11, Rich Megginson wrote:

Is it possible that you could start over from scratch, using the
latest
instructions/files at
https://github.com/ViaQ/Main/pull/37/files?

On 04/05/2018 07:19 AM, Peter Hudec wrote:

The version is from

/usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py:get_openshift_version




[PROD] r...@dipostat01.cnc.sk: /usr/share/ansible/openshift-ansible #
/usr/bin/openshift version
openshift v3.10.0-alpha.0+f0186dd-401
kubernetes v1.9.1+a0ce1bc657
etcd 3.2.16

this binary is from the origin-3.7.2-1.el7.git.0.f0186dd.x86_64
package

[PROD] r...@dipostat01.cnc.sk: ~ # rpm -qf /usr/bin/openshift
origin-3.7.2-1.el7.git.0.f0186dd.x86_64

Hmm, why do the versions not match?

   Peter

On 04/04/2018 17:41, Shirly Radco wrote:

Hi,


I have updated the installation instructions for installing OpenShift
3.9, based on Rich's work, for the oVirt use case.

Please check where you are getting the 3.10 RPM from and disable that
repo.
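A hedged sketch of how one might find and disable whichever enabled repo is supplying the 3.10 packages (the repo id below is a placeholder, not necessarily the real one on your system):

```shell
# See which enabled repo each available origin build comes from
# (look at the Repo/Repository field), then disable the 3.10 one.
yum --showduplicates list origin
yum repoquery -i origin

# Disable the offending repo (requires yum-utils; repo id is a placeholder).
yum-config-manager --disable centos-openshift-origin310
```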

This is the PR that includes the metrics store installation of
OpenShift 3.9

https://github.com/sradco/ovirt-site/blob/74f1e772c8ca75d4b9e57a3c02cce49c5030f7f7/source/develop/release-management/features/metrics/setting-up-viaq-logging.html.md




It should be merged soon, but you can use it to install.


Please make sure to add the ansible-inventory-origin-39-aio file as
described below.
These are required parameters for the ansible playbook.


Best regards,

--

SHIRLY RADCO

BI SENIOR SOFTWARE ENGINEER

Red Hat Israel 


TRIED. TESTED. TRUSTED. 


On Wed, Apr 4, 2018 at 5:41 PM, Rich Megginson <rmegg...@redhat.com> wrote:

    I'm sorry.  I misunderstood the request.

    We are in the process of updating the instructions for installing
    viaq logging based on upstream origin 3.9 -
    https://github.com/ViaQ/Main/pull/37

    In the meantime, you can follow along on that PR, and we will have
    instructions very soon.


    On 04/04/2018 08:26 AM, Rich Megginson wrote:

    On 04/04/2018 08:22 AM, Shirly Radco wrote:



    --

    SHIRLY RADCO

    BI SeNIOR SOFTWARE ENGINE

Re: [ovirt-users] Engine reports

2018-04-06 Thread Rich Megginson
I'm assuming you are running on a CentOS7 machine, recently updated to 
the latest base packages. Please confirm.


   On Wed, Apr 4, 2018 at 5:07 PM, Peter Hudec <phu...@cnc.sk> wrote:

       almost the same issue, the version for the openshift release
       changed to 3.9

      Failure summary:


         1. Hosts:    localhost
            Play:     Determine openshift_version to configure on first
                      master
            Task:     For an RPM install, abort when the release
                      requested does not match the available version.
            Message:  You requested openshift_release 3.9, which is not
                      matched by the latest O

Re: [ovirt-users] External Provider https (unknown error)

2018-04-06 Thread Dominik Holler
On Wed, 4 Apr 2018 13:29:56 +0200
Stefan Wendler  wrote:

> Hi,
> 
> I am currently trying to attach Glance (OpenStack Image) and Cinder
> (OpenStack Volume) as external provider and am facing a problem when
> trying to use https in the Provider-URL on an ovirt 3.6 and 4.1
> cluster.
> 
> The Provider-URL I am using is in the form:
> https://<host>:9292 (or port 8776 for Cinder; <host> is either a FQDN
> or an IP address)
> 
> Whenever I press the "Test" button in the "Add Provider" dialog I get
> the message "Test Failed (unknown error)." There is no entry in any
> logfile whatsoever (at least not in any logs that are associated with
> oVirt). I would expect an SSL cert dialog here. I can telnet to the
> destination ports from the engine and the nodes, so Glance and Cinder
> are reachable.
> 
> I have also read that this might happen if there is a corrupted
> /var/lib/ovirt-engine/external_truststore.
> But this file does not even exist, and when I create it by hand, it is
> not touched.
> 
> How can I get this to work or even get an error message that gives me
> a hint where to look?
> 

If there is something logged, it would be in engine.log.
Can you please re-check if there is something related logged in
engine.log?

Are you using authentication?

Do you use HTTPS for Glance/Cinder and authentication?
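Two quick checks from the engine host can narrow this down; the hostname below is a placeholder for your Glance/Cinder endpoint. If the first command prints no certificate, the port is not actually serving TLS:

```shell
# Does the Glance port actually speak TLS, and what cert does it present?
echo | openssl s_client -connect glance.example.com:9292 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates

# Does the API answer over HTTPS at all? -k skips verification, which
# helps distinguish "untrusted cert" from "not HTTPS at all".
curl -k https://glance.example.com:9292/
```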

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hardware critique

2018-04-06 Thread Tony Brian Albers
Also keep in mind that extensive logging in the VMs can seriously
impact your filesystem performance, so using a central syslog server is
a really good idea.

/tony


On 2018-04-06 17:03, Karli Sjöberg wrote:
> 
> 
> On 6 Apr 2018 15:46, Jayme wrote:
> 
> Yaniv,
> 
> I appreciate your input, thanks!
> 
> I understand that everyone's use case is different, but I was hoping
> to hear from some users that are using oVirt hyper-converged setup
> and get some input on the performance.  When I research GlusterFS I
> hear a lot about how it can be slow especially when dealing with
> small files.  I'm starting to wonder if a straight up NFS server
> with a few SSDs would be less hassle and perhaps offer better VM
> performance than glusterFS can currently.
> 
> I want to get the best oVirt performance I can get (on somewhat of a
> budget) with a fairly small amount of required disk space (under
> 2TB).  I'm not sure if hyper-converged setup w/GlusterFS is the
> answer or not.  I'd like to avoid spending 15k only to find out that
> it's too slow. 
> 
> 
> What is "too slow" for you, what are your expectations?
> 
> Much depending on that, you will find that different goals require 
> different tools. Like hammering on a screw and all that :)
> 
> /K
> 
> 

-- 
Tony Albers
Systems administrator, IT-development
Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark.
Tel: +45 2566 2383 / +45 8946 2316
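One minimal way to act on Tony's suggestion, sketched here assuming rsyslog on the guests (the collector address is a placeholder):

```shell
# On each VM: forward everything to the central collector over TCP ("@@");
# a single "@" would use UDP instead.
cat > /etc/rsyslog.d/90-central.conf <<'EOF'
*.* @@logs.example.com:514
EOF
systemctl restart rsyslog
```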


Re: [ovirt-users] Hardware critique

2018-04-06 Thread Karli Sjöberg

On 6 Apr 2018 15:46, Jayme wrote:

> Yaniv,
>
> I appreciate your input, thanks!
>
> I understand that everyone's use case is different, but I was hoping
> to hear from some users that are using oVirt hyper-converged setup
> and get some input on the performance.  When I research GlusterFS I
> hear a lot about how it can be slow especially when dealing with
> small files.  I'm starting to wonder if a straight up NFS server
> with a few SSDs would be less hassle and perhaps offer better VM
> performance than GlusterFS can currently.
>
> I want to get the best oVirt performance I can get (on somewhat of a
> budget) with a fairly small amount of required disk space (under
> 2TB).  I'm not sure if hyper-converged setup w/GlusterFS is the
> answer or not.  I'd like to avoid spending 15k only to find out that
> it's too slow.

What is "too slow" for you, what are your expectations?

Much depending on that, you will find that different goals require
different tools. Like hammering on a screw and all that :)

/K

> On Fri, Apr 6, 2018 at 6:05 AM, Yaniv Kaul wrote:
>
>> On Thu, Apr 5, 2018, 11:39 PM Vincent Royer wrote:
>>
>>> Jayme,
>>>
>>> I'm doing a very similar build, the only difference really is I am
>>> using SSDs instead of HDDs.  I have similar questions as you
>>> regarding expected performance.  Have you considered JBOD + NFS?
>>> Putting a Gluster replica 3 on top of RAID 10 arrays sounds very
>>> safe, but my gosh the capacity takes a massive hit.  Am I correct
>>> in saying you will only get 4TB total usable capacity out of 24TB
>>> worth of disks?  The cost per TB in that sort of scenario is
>>> immense.
>>>
>>> My plan is two 2TB SSDs per server in JBOD with a caching RAID
>>> card, with replica 3.  I would end up with the same 4TB total
>>> capacity using 12TB of SSDs.
>>
>> I'm not sure I see the value in a RAID card if you don't use RAID,
>> and I'm not sure you really need caching on the card.
>> Y.
>>
>>> I think replica 3 is safe enough that you could forgo the RAID 10.
>>> But I'm talking from zero experience...  Would love others to
>>> chime in with their opinions on both these setups.
>>>
>>> Vincent Royer
>>> 778-825-1057
>>> SUSTAINABLE MOBILE ENERGY SOLUTIONS
>>>
>>> On Thu, Apr 5, 2018 at 12:22 PM, Jayme wrote:
>>>
>>>> Thanks for your feedback.  Any other opinions on this proposed
>>>> setup?  I'm very torn over using GlusterFS and what the expected
>>>> performance may be, there seems to be little information out
>>>> there.  Would love to hear any feedback specifically from oVirt
>>>> users on hyperconverged configurations.
>>>>
>>>> On Thu, Apr 5, 2018 at 2:56 AM, Alex K wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> You should be ok with the setup.
>>>>> I am running around 20 VMs (Linux and Windows, small and medium
>>>>> size) with half of your specs. With a 10G network, replica 3 is
>>>>> ok.
>>>>>
>>>>> Alex
>>>>>
>>>>> On Wed, Apr 4, 2018, 16:13 Jayme wrote:
>>>>>
>>>>>> I'm spec'ing hardware for a 3-node oVirt build (on somewhat of
>>>>>> a budget).  I plan to do 20-30 Linux VMs, most of them very
>>>>>> lightweight + a couple of heavier hitting web and DB servers
>>>>>> with frequent rsync backups.  Some have a lot of small files
>>>>>> from large github repos etc.
>>>>>>
>>>>>> 3X of the following:
>>>>>>
>>>>>> Dell PowerEdge R720
>>>>>> 2x 2.9 GHz 8 Core E5-2690 (SR0L0)
>>>>>> 256GB RAM
>>>>>> PERC H710
>>>>>> 2x10GB NIC
>>>>>>
>>>>>> Boot/OS will likely be two cheaper small SATA/SSD in RAID 1.
>>>>>> Gluster bricks comprised of 4x2TB WD Gold 7200RPM SATA HDDs in
>>>>>> RAID 10 per server.  Using a replica 3 setup (and I'm thinking
>>>>>> right now with no arbiter for extra redundancy, although I'm
>>>>>> not sure what the performance hit may be as a result).  Will
>>>>>> this allow for two host failures or just one?
>>>>>>
>>>>>> I've been really struggling with storage choices, it seems
>>>>>> very difficult to predict the performance of GlusterFS due to
>>>>>> the variance in hardware (everyone is using something
>>>>>> different).  I'm not sure if the performance will be adequate
>>>>>> enough for my needs.
>>>>>>
>>>>>> I will be using an already existing Netgear XS716T 10GB switch
>>>>>> for the Gluster storage network.
>>>>>>
>>>>>> In addition I plan to build another simple GlusterFS storage
>>>>>> server that I can use to georeplicate the gluster volume to
>>>>>> for DR purposes, and use existing hardware to build an
>>>>>> independent standby oVirt host that is able to start up a few
>>>>>> high priority VMs from the georeplicated GlusterFS volume if
>>>>>> for some reason the primary oVirt cluster/GlusterFS volume
>>>>>> ever failed.
>>>>>>
>>>>>> I would love to hear any advice or critiques on this plan.
>>>>>> Thanks!
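Vincent's capacity arithmetic above can be checked explicitly; assuming RAID 10 halves raw capacity and replica 3 keeps three full copies, the proposed 24 TB of disks does land at 4 TB usable:

```shell
# 3 servers x 4 x 2TB disks = 24 TB raw.
raw_per_host=$((4 * 2))                # 8 TB of disks per server
raid10_per_host=$((raw_per_host / 2))  # RAID 10 mirrors -> 4 TB brick per server
pooled=$((raid10_per_host * 3))        # 12 TB of bricks across 3 servers
usable=$((pooled / 3))                 # replica 3 stores every byte 3 times
echo "${usable} TB usable from $((raw_per_host * 3)) TB raw"
```

The same math explains Vincent's all-SSD variant: 2x 2TB per server in JBOD, times 3 servers, divided by 3 for the replicas, is again 4 TB usable from 12 TB of SSDs.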

Re: [ovirt-users] Hardware critique

2018-04-06 Thread FERNANDO FREDIANI
You will likely get more performance from an NFS server compared to
Gluster, especially if on your NFS server you have something like ZFS +
SSD for L2ARC or ext4 + bcache, but you get no redundancy. If your NFS
server dies, everything stops working, which is not the case with
distributed storage.

Fernando

2018-04-06 10:45 GMT-03:00 Jayme :

> Yaniv,
>
> I appreciate your input, thanks!
>
> I understand that everyone's use case is different, but I was hoping to
> hear from some users that are using oVirt hyper-converged setup and get
> some input on the performance.  When I research GlusterFS I hear a lot
> about how it can be slow especially when dealing with small files.  I'm
> starting to wonder if a straight up NFS server with a few SSDs would be
> less hassle and perhaps offer better VM performance than glusterFS can
> currently.
>
> I want to get the best oVirt performance I can get (on somewhat of a
> budget) with a fairly small amount of required disk space (under 2TB).  I'm
> not sure if hyper-converged setup w/GlusterFS is the answer or not.  I'd
> like to avoid spending 15k only to find out that it's too slow.
>
> On Fri, Apr 6, 2018 at 6:05 AM, Yaniv Kaul  wrote:
>
>>
>>
>> On Thu, Apr 5, 2018, 11:39 PM Vincent Royer 
>> wrote:
>>
>>> Jayme,
>>>
>>> I'm doing a very similar build, the only difference really is I am using
>>> SSDs instead of HDDs.   I have similar questions as you regarding expected
>>> performance. Have you considered JBOD + NFS?   Putting a Gluster Replica 3
>>> on top of RAID 10 arrays sounds very safe, but my gosh the capacity takes a
>>> massive hit.  Am I correct in saying you will only get 4TB total usable
>>> capacity out of 24TB worth of disks?  The cost per TB in that sort of
>>> scenario is immense.
>>>
>>> My plan is two 2TB SSDs per server in JBOD with a caching raid card,
>>> with replica 3.  I would end up with the same 4TB total capacity using 12TB
>>> of SSDs.
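
The capacity numbers being debated here are easy to sanity-check. A minimal sketch, assuming one brick per host and a replica count equal to the number of hosts (so every byte is stored on every host):

```python
def gluster_capacity_tb(hosts, disks_per_host, disk_tb, raid10=False, replica=3):
    """Return (raw_tb, usable_tb) for a replicated Gluster volume.

    Assumes one brick per host and replica == hosts, so the usable
    capacity equals a single host's brick size.
    """
    raw = hosts * disks_per_host * disk_tb
    brick = disks_per_host * disk_tb / (2 if raid10 else 1)  # RAID 10 halves it
    usable = brick * hosts / replica
    return raw, usable

# Jayme's plan: 3 hosts x 4x2TB HDD in RAID 10, replica 3
print(gluster_capacity_tb(3, 4, 2, raid10=True))   # (24, 4.0) -> 4TB from 24TB
# Vincent's plan: 3 hosts x 2x2TB SSD JBOD, replica 3
print(gluster_capacity_tb(3, 2, 2, raid10=False))  # (12, 4.0) -> 4TB from 12TB
```

Both plans land at the same 4TB usable, which is exactly the "massive hit" being discussed.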
>>>
>>
>> I'm not sure I see the value in RAID card if you don't use RAID and I'm
>> not sure you really need caching on the card.
>> Y.
>>
>>
>>> I think Replica 3 is safe enough that you could forgo the RAID 10. But
>>> I'm talking from zero experience...  Would love others to chime in with
>>> their opinions on both these setups.
>>>
>>> *Vincent Royer*
>>> *778-825-1057*
>>>
>>>
>>> 
>>> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>>>
>>>
>>>
>>>
>>> On Thu, Apr 5, 2018 at 12:22 PM, Jayme  wrote:
>>>
 Thanks for your feedback.  Any other opinions on this proposed setup?
 I'm very torn over using GlusterFS and what the expected performance may
 be, there seems to be little information out there.  Would love to hear any
 feedback specifically from ovirt users on hyperconverged configurations.

 On Thu, Apr 5, 2018 at 2:56 AM, Alex K  wrote:

> Hi,
>
> You should be ok with the setup.
> I am running around 20 vms (linux and windows, small and medium size)
> with half of your specs. With a 10G network replica 3 is ok.
>
> Alex
>
> On Wed, Apr 4, 2018, 16:13 Jayme  wrote:
>
>> I'm spec'ing hardware for a 3-node oVirt build (on somewhat of a
>> budget).  I plan to do 20-30 Linux VMs most of them very light weight + a
>> couple of heavier hitting web and DB servers with frequent rsync backups.
>> Some have a lot of small files from large github repos etc.
>>
>> 3X of the following:
>>
>> Dell PowerEdge R720
>> 2x 2.9 GHz 8 Core E5-2690 (SR0L0)
>> 256GB RAM
>> PERC H710
>> 2x10GB Nic
>>
>> Boot/OS will likely be two cheaper small sata/ssd in raid 1.
>>
>> Gluster bricks comprised of 4x2TB WD Gold 7200RPM SATA HDDs in RAID
>> 10 per server.  Using a replica 3 setup (and I'm thinking right now with no
>> arbiter for extra redundancy, although I'm not sure what the performance
>> hit may be as a result).  Will this allow for two host failures or just one?
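
On the "two host failures or just one?" question: assuming the usual majority quorum rule for a replica 3 volume without arbiter (an assumption — actual behaviour depends on Gluster's quorum settings, e.g. cluster.quorum-type), only a single host failure keeps the volume writable. A toy check:

```python
def has_write_quorum(replicas: int, alive: int) -> bool:
    # Majority rule: writes continue only while more than half of the
    # replica bricks are still reachable.
    return 2 * alive > replicas

print(has_write_quorum(replicas=3, alive=2))  # True  -> one host down is tolerated
print(has_write_quorum(replicas=3, alive=1))  # False -> two hosts down loses quorum
```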
>>
>> I've been really struggling with storage choices, it seems very
>> difficult to predict the performance of glusterFS due to the variance in
>> hardware (everyone is using something different).  I'm not sure if the
>> performance will be adequate enough for my needs.
>>
>> I will be using an already existing Netgear XS716T 10GB switch for the
>> Gluster storage network.
>>
>> In addition I plan to build another simple glusterFS storage server
>> that I can use to georeplicate the gluster volume to for DR purposes and
>> use existing hardware to build an independent standby oVirt host that is
>> able to start up a few high priority VMs from the georeplicated glusterFS
>> volume if for some reason the primary oVirt cluster/glusterFS volume ever
>> failed.
>>
>> I would love to hear any advice or critiques on this plan.
>>
>> Thanks!

Re: [ovirt-users] ISO uploading from GUI/REST with user permissions

2018-04-06 Thread Lloyd Kamara
Dear Michal, you wrote:


> it does sound like a bug to me. Can you open one with those details?
> https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine


Duly done as Bug 1564509.
https://bugzilla.redhat.com/show_bug.cgi?id=1564509


Best wishes,
  Lloyd Kamara


Re: [ovirt-users] Hardware critique

2018-04-06 Thread Jayme
Yaniv,

I appreciate your input, thanks!

I understand that everyone's use case is different, but I was hoping to
hear from some users that are using oVirt hyper-converged setup and get
some input on the performance.  When I research GlusterFS I hear a lot
about how it can be slow especially when dealing with small files.  I'm
starting to wonder if a straight up NFS server with a few SSDs would be
less hassle and perhaps offer better VM performance than glusterFS can
currently.

I want to get the best oVirt performance I can get (on somewhat of a
budget) with a fairly small amount of required disk space (under 2TB).  I'm
not sure if hyper-converged setup w/GlusterFS is the answer or not.  I'd
like to avoid spending 15k only to find out that it's too slow.

On Fri, Apr 6, 2018 at 6:05 AM, Yaniv Kaul  wrote:

>
>
> On Thu, Apr 5, 2018, 11:39 PM Vincent Royer  wrote:
>
>> Jayme,
>>
>> I'm doing a very similar build, the only difference really is I am using
>> SSDs instead of HDDs.   I have similar questions as you regarding expected
>> performance. Have you considered JBOD + NFS?   Putting a Gluster Replica 3
>> on top of RAID 10 arrays sounds very safe, but my gosh the capacity takes a
>> massive hit.  Am I correct in saying you will only get 4TB total usable
>> capacity out of 24TB worth of disks?  The cost per TB in that sort of
>> scenario is immense.
>>
>> My plan is two 2TB SSDs per server in JBOD with a caching raid card, with
>> replica 3.  I would end up with the same 4TB total capacity using 12TB of
>> SSDs.
>>
>
> I'm not sure I see the value in RAID card if you don't use RAID and I'm
> not sure you really need caching on the card.
> Y.
>
>
>> I think Replica 3 is safe enough that you could forgo the RAID 10. But
>> I'm talking from zero experience...  Would love others to chime in with
>> their opinions on both these setups.
>>
>> *Vincent Royer*
>> *778-825-1057*
>>
>>
>> 
>> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>>
>>
>>
>>
>> On Thu, Apr 5, 2018 at 12:22 PM, Jayme  wrote:
>>
>>> Thanks for your feedback.  Any other opinions on this proposed setup?
>>> I'm very torn over using GlusterFS and what the expected performance may
>>> be, there seems to be little information out there.  Would love to hear any
>>> feedback specifically from ovirt users on hyperconverged configurations.
>>>
>>> On Thu, Apr 5, 2018 at 2:56 AM, Alex K  wrote:
>>>
 Hi,

 You should be ok with the setup.
 I am running around 20 vms (linux and windows, small and medium size)
 with half of your specs. With a 10G network replica 3 is ok.

 Alex

 On Wed, Apr 4, 2018, 16:13 Jayme  wrote:

> I'm spec'ing hardware for a 3-node oVirt build (on somewhat of a
> budget).  I plan to do 20-30 Linux VMs most of them very light weight + a
> couple of heavier hitting web and DB servers with frequent rsync backups.
> Some have a lot of small files from large github repos etc.
>
> 3X of the following:
>
> Dell PowerEdge R720
> 2x 2.9 GHz 8 Core E5-2690 (SR0L0)
> 256GB RAM
> PERC H710
> 2x10GB Nic
>
> Boot/OS will likely be two cheaper small sata/ssd in raid 1.
>
> Gluster bricks comprised of 4x2TB WD Gold 7200RPM SATA HDDs in RAID 10
> per server.  Using a replica 3 setup (and I'm thinking right now with no
> arbiter for extra redundancy, although I'm not sure what the performance
> hit may be as a result).  Will this allow for two host failures or just one?
>
> I've been really struggling with storage choices, it seems very
> difficult to predict the performance of glusterFS due to the variance in
> hardware (everyone is using something different).  I'm not sure if the
> performance will be adequate enough for my needs.
>
> I will be using an already existing Netgear XS716T 10GB switch for the
> Gluster storage network.
>
> In addition I plan to build another simple glusterFS storage server
> that I can use to georeplicate the gluster volume to for DR purposes and
> use existing hardware to build an independent standby oVirt host that is
> able to start up a few high priority VMs from the georeplicated glusterFS
> volume if for some reason the primary oVirt cluster/glusterFS volume ever
> failed.
>
> I would love to hear any advice or critiques on this plan.
>
> Thanks!
>

>>>
>>>
>>>
>>
>

[ovirt-users] how do you backup VM

2018-04-06 Thread Peter Hudec
Hi,

one general question. See the $SUBJ.

I have found https://github.com/openbacchus/bacchus, which is a good
starting point but is still missing some features. I was thinking of
contributing there, but first I want to know about other solutions.

regards
Peter
-- 
*Peter Hudec*
Infraštruktúrny architekt
phu...@cnc.sk 

*CNC, a.s.*
Borská 6, 841 04 Bratislava
Recepcia: +421 2  35 000 100

Mobil:+421 905 997 203
*www.cnc.sk* 



Re: [ovirt-users] Decrease downtime for HA

2018-04-06 Thread Daniel Menzel

Hi Michal,

(sorry for misspelling your name in my first mail).

The settings for the VMs are the following (oVirt 4.2):

1. HA checkbox enabled of course
2. "Target Storage Domain for VM Lease" -> left empty
3. "Resume Behavior" -> AUTO_RESUME
4. Priority for Migration -> High
5. "Watchdog Model" -> No-Watchdog

For testing we did not kill any VM but the host itself. So basically we 
simulated an instantaneous crash by manually turning the machine off via 
the IPMI interface (not via the operating system!) while pinging the 
guest(s). What happens then?


1. 2-3 seconds after we press the host's shutdown button we lose
   ping contact to the VM(s).
2. After another 20s oVirt changes the host's status to "connecting",
   the VM's status is set to a question mark.
3. After ~1:30 the host is flagged to "non responsive"
4. After ~2:10 the host's reboot is initiated by oVirt, 5-10s later the
   guest is back online.

So, there seems to be one mistake I made in the first mail: the downtime 
is "only" 2.5 min. But I still think this time can be decreased, as for 
some services it is still quite a long time.


Best
Daniel


On 06.04.2018 12:49, Michal Skrivanek wrote:



On 6 Apr 2018, at 12:45, Daniel Menzel  wrote:

Hi Michael,
thanks for your mail. Sorry, I forgot to write that. Yes, we have power 
management and fencing enabled on all hosts. We also tested this and found out 
that it works perfectly. So this cannot be the reason I guess.

Hi Daniel,
ok, then it’s worth looking into details. Can you describe in more detail what 
happens? What exact settings you’re using for such VM? Are you killing the HE 
VM or other VMs or both? Would be good to narrow it down a bit and then review 
the exact flow

Thanks,
michal


Daniel



On 06.04.2018 11:11, Michal Skrivanek wrote:

On 4 Apr 2018, at 15:36, Daniel Menzel  wrote:

Hello,

we're successfully using a setup with 4 Nodes and a replicated Gluster for 
storage. The engine is self hosted. What we're dealing with at the moment is 
the high availability: If a node fails (for example simulated by a forced power 
loss) the engine comes back up online within ~2min. But guests (having the HA 
option enabled) come back online only after a very long grace time of ~5min. As 
we have a reliable network (40 GbE) and reliable servers I think that the 
default grace times are way too high for us - is there any possibility to 
change those values?

And do you have Power Management(iLO, iDRAC,etc) configured for your hosts? 
Otherwise we have to resort to relatively long timeouts to make sure the host 
is really dead
Thanks,
michal

Thanks in advance!
Daniel







Re: [ovirt-users] Decrease downtime for HA

2018-04-06 Thread Michal Skrivanek


> On 6 Apr 2018, at 12:45, Daniel Menzel  
> wrote:
> 
> Hi Michael,
> thanks for your mail. Sorry, I forgot to write that. Yes, we have power 
> management and fencing enabled on all hosts. We also tested this and found 
> out that it works perfectly. So this cannot be the reason I guess.

Hi Daniel,
ok, then it’s worth looking into details. Can you describe in more detail what 
happens? What exact settings you’re using for such VM? Are you killing the HE 
VM or other VMs or both? Would be good to narrow it down a bit and then review 
the exact flow

Thanks,
michal

> 
> Daniel
> 
> 
> 
> On 06.04.2018 11:11, Michal Skrivanek wrote:
>>> On 4 Apr 2018, at 15:36, Daniel Menzel  
>>> wrote:
>>> 
>>> Hello,
>>> 
>>> we're successfully using a setup with 4 Nodes and a replicated Gluster for 
>>> storage. The engine is self hosted. What we're dealing with at the moment 
>>> is the high availability: If a node fails (for example simulated by a 
>>> forced power loss) the engine comes back up online within ~2min. But 
>>> guests (having the HA option enabled) come back online only after a very 
>>> long grace time of ~5min. As we have a reliable network (40 GbE) and 
>>> reliable servers I think that the default grace times are way too high for 
>>> us - is there any possibility to change those values?
>> And do you have Power Management(iLO, iDRAC,etc) configured for your hosts? 
>> Otherwise we have to resort to relatively long timeouts to make sure the 
>> host is really dead
>> Thanks,
>> michal
>>> 
>>> Thanks in advance!
>>> Daniel
>>> 
>>> 
>>> 



Re: [ovirt-users] Decrease downtime for HA

2018-04-06 Thread Daniel Menzel

Hi Michael,
thanks for your mail. Sorry, I forgot to write that. Yes, we have power 
management and fencing enabled on all hosts. We also tested this and 
found out that it works perfectly. So this cannot be the reason I guess.


Daniel



On 06.04.2018 11:11, Michal Skrivanek wrote:




On 4 Apr 2018, at 15:36, Daniel Menzel  wrote:

Hello,

we're successfully using a setup with 4 Nodes and a replicated Gluster for 
storage. The engine is self hosted. What we're dealing with at the moment is 
the high availability: If a node fails (for example simulated by a forced power 
loss) the engine comes back up online within ~2min. But guests (having the HA 
option enabled) come back online only after a very long grace time of ~5min. As 
we have a reliable network (40 GbE) and reliable servers I think that the 
default grace times are way too high for us - is there any possibility to 
change those values?


And do you have Power Management(iLO, iDRAC,etc) configured for your hosts? 
Otherwise we have to resort to relatively long timeouts to make sure the host 
is really dead

Thanks,
michal


Thanks in advance!
Daniel








Re: [ovirt-users] Is compatibility level change required for upgrading from 4.0 to 4.2?

2018-04-06 Thread Luca 'remix_tj' Lorenzetto
On Fri, Apr 6, 2018 at 11:05 AM, Michal Skrivanek
 wrote:
>
> you just need to keep the supported versions in mind. Version 4.2 supports
> Cluster levels 3.6, 4.0, 4.1 (same as version 4.1) - so any of them is ok
>

Perfect, thank you for the confirmation.

To change the compatibility level of the DC, I've seen that all clusters
have to be upgraded first.
Does the upgrade of the DC require restarting all the VMs to complete
the cluster upgrade, or is restarting the VMs an activity independent
from this?

Luca



-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 


Re: [ovirt-users] Engine reports

2018-04-06 Thread Peter Hudec
The first issue is the Ansible version, which could be solved by a virtualenv:

FATAL: Current Ansible version (2.4.2.0) is not supported. Supported
versions: 2.4.3.0 or newer

yum install -y python-virtualenv
virtualenv ansible
. ./ansible/bin/activate
pip install -U pip setuptools
pip install ansible
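
For reference, the FATAL above comes from a simple minimum-version gate (the real check lives in openshift-ansible's aa_version_requirement.py callback plugin; this is a simplified reconstruction, not its actual code):

```python
def parse_version(v: str):
    # "2.4.2.0" -> (2, 4, 2, 0); tuples compare component-wise
    return tuple(int(part) for part in v.split("."))

def ansible_version_ok(current: str, minimum: str = "2.4.3.0") -> bool:
    # The playbook aborts when the running version sorts below the minimum
    return parse_version(current) >= parse_version(minimum)

print(ansible_version_ok("2.4.2.0"))  # False -> triggers the FATAL message
print(ansible_version_ok("2.5.0"))    # True once pip has upgraded ansible
```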


The second issue is a little bit strange to me.

2018-04-06 10:00:30,264 p=68226 u=root |  Failure summary:


  1. Hosts:localhost
 Play: OpenShift Aggregated Logging
 Task: pulling down signing items from host
     Message:  All items completed

I have found in logs

2018-04-06 10:00:28,356 p=68226 u=root |  failed: [localhost]
(item=ca.crl.srl) => {
"changed": false,
"invocation": {
"module_args": {
"src": "/etc/origin/logging/ca.crl.srl"
}
},
"item": "ca.crl.srl",
"msg": "file not found: /etc/origin/logging/ca.crl.srl"


Is this good or bad?

Peter

On 05/04/2018 16:11, Rich Megginson wrote:
> Is it possible that you could start over from scratch, using the latest
> instructions/files at
> https://github.com/ViaQ/Main/pull/37/files?
> 
> On 04/05/2018 07:19 AM, Peter Hudec wrote:
>> The version is from
>>
>> /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py:get_openshift_version
>>
>>
>> [PROD] r...@dipostat01.cnc.sk: /usr/share/ansible/openshift-ansible #
>> /usr/bin/openshift version
>> openshift v3.10.0-alpha.0+f0186dd-401
>> kubernetes v1.9.1+a0ce1bc657
>> etcd 3.2.16
>>
>> this binary is from the origin-3.7.2-1.el7.git.0.f0186dd.x86_64 package
>>
>> [PROD] r...@dipostat01.cnc.sk: ~ # rpm -qf /usr/bin/openshift
>> origin-3.7.2-1.el7.git.0.f0186dd.x86_64
>>
>> Hmm, why don't the versions match?
>>
>> Peter
>>
>> On 04/04/2018 17:41, Shirly Radco wrote:
>>> Hi,
>>>
>>>
>>> I have updated the installation instructions for installing OpenShift
>>> 3.9, based on Rich's work , for the oVirt use case.
>>>
>>> Please check where you got the 3.10 rpm from, and disable that repo.
>>>
>>> This is the PR that includes the metrics store installation of
>>> OpenShift 3.9
>>>
>>> https://github.com/sradco/ovirt-site/blob/74f1e772c8ca75d4b9e57a3c02cce49c5030f7f7/source/develop/release-management/features/metrics/setting-up-viaq-logging.html.md
>>>
>>>
>>> It should be merged soon, but you can use it to install.
>>>
>>>
>>> Please make sure to add the ansible-inventory-origin-39-aio file as
>>> described below.
>>> These are required parameters for the ansible playbook.
>>>
>>>
>>> Best regards,
>>>
>>> -- 
>>>
>>> SHIRLY RADCO
>>>
>>> BI SeNIOR SOFTWARE ENGINEER
>>>
>>> Red Hat Israel 
>>>
>>>    
>>> TRIED. TESTED. TRUSTED. 
>>>
>>>
>>> On Wed, Apr 4, 2018 at 5:41 PM, Rich Megginson  wrote:
>>>
>>>  I'm sorry.  I misunderstood the request.
>>>
>>>  We are in the process of updating the instructions for installing
>>>  viaq logging based on upstream origin 3.9 -
>>>  https://github.com/ViaQ/Main/pull/37
>>>  
>>>
>>>  In the meantime, you can follow along on that PR, and we will have
>>>  instructions very soon.
>>>
>>>
>>>  On 04/04/2018 08:26 AM, Rich Megginson wrote:
>>>
>>>  On 04/04/2018 08:22 AM, Shirly Radco wrote:
>>>
>>>
>>>
>>>  --
>>>
>>>  SHIRLY RADCO
>>>
>>>  BI SeNIOR SOFTWARE ENGINEER
>>>
>>>  Red Hat Israel 
>>>
>>>  
>>>  TRIED. TESTED. TRUSTED. 
>>>
>>>
>>>  On Wed, Apr 4, 2018 at 5:07 PM, Peter Hudec  wrote:
>>>
>>>      almost the same issue; the version for the openshift release
>>>  changed to 3.9
>>>
>>>      Failure summary:
>>>
>>>        1. Hosts:    localhost
>>>           Play:     Determine openshift_version to configure on first master
>>>           Task:     For an RPM install, abort when the release requested does not match the available version.
>>>           Message:  You requested openshift_release 3.9, which is not matched by the latest OpenShift RPM we detected as origin-3.10.0 on host localhost.
>>>                     We will only install the latest RPMs, so please ensure you are getting the release you expect. You may need to adjust your Ansible inventory, modify the repositories
>>> 

Re: [ovirt-users] ISO uploading from GUI/REST with user permissions

2018-04-06 Thread Michal Skrivanek


> On 3 Apr 2018, at 15:23, Lloyd Kamara  wrote:
> 
> Dear Sir/Madam,
> 
> The ability to upload ISOs through the web interface and boot
> VMs from them is a welcome addition in oVirt release 4.2.2.
> I am grateful to the people behind the implementation of this.
> 
> Consider a scenario in which you wish to allow *end-users*
> to upload ISOs to one or more Data Domains.  The users can
> then use the uploaded ISOs to boot their VMs.
> 
> Is it possible to grant a user permission to upload ISOs through
> the web interface?  I tried to do this under oVirt release 4.2.2
> by doing the following:
> 
> - adding the 'SuperUser' role to a target user for a specific
> Data Domain, which enables the user to log onto the Administration Portal.
> 
> - adding the 'DiskCreator' role to the same target user for the
> same Data Domain, which, I would hope, would allow the user to
> both create disks and upload ISOs within that Data Domain.
> 
> Disk creation in the Data Domain for the target user works as expected;
> ISO upload does not.  A dialog appears with the message: 'Operation
> Canceled  Error while executing action: User is not authorized to
> perform this action.'
> 
> Here is the message that appears in /var/log/ovirt-engine/engine.log
> when an attempt at uploading an ISO is made by the target user:
> 
> 
> INFO
> [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand]
> (default task-40) [5b3fef06-49c8-4c34-81a3-a20fa691709a] No permission
> found for user 'a9fde4c3-97a3-4494-84f8-08041a16710c' or one of the
> groups he is member of, when running action 'TransferImageStatus',
> Required permissions are: Action type: 'USER' Action group:
> 'CREATE_DISK' Object type: 'System'  Object ID:
> 'aaa0----123456789aaa'.
> 
> 
> If one assigns the DiskCreator role System permission for the target
> user then that user can upload ISOs without problem.  Unfortunately,
> the user can upload ISOs - and create disks - in *all* data domains.
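
The log explains the behaviour: TransferImageStatus checks the CREATE_DISK action group against the System root object, so a DiskCreator grant scoped to one storage domain never matches. A toy model of that lookup (hypothetical IDs, not oVirt's actual authorization code):

```python
SYSTEM_ID = "system-root"      # stands in for the System object in the log
DOMAIN_ID = "iso-data-domain"  # hypothetical storage-domain id

def authorized(grants, action_group, checked_object):
    # A grant only matches if it carries the action group on the exact
    # object the command validates against.
    return (action_group, checked_object) in grants

# DiskCreator scoped to one data domain: upload denied, because the
# check is made against System, not against the domain.
scoped = {("CREATE_DISK", DOMAIN_ID)}
print(authorized(scoped, "CREATE_DISK", SYSTEM_ID))       # False

# DiskCreator granted on System: upload works, but now for all domains.
system_wide = {("CREATE_DISK", SYSTEM_ID)}
print(authorized(system_wide, "CREATE_DISK", SYSTEM_ID))  # True
```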
> 
> To re-iterate, is it possible to grant an end-user permission to
> upload ISOs to specific data domains through the web interface without
> granting an all-encompassing System permission?

it does sound like a bug to me. Can you open one with those details?
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine 


Thanks,
michal
> 
> 
> Best wishes,
>  Lloyd Kamara
> 
> 
> References:
> [The first two are included insofar as they concern ISO upload via web]
> https://bugzilla.redhat.com/show_bug.cgi?id=1530730
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1536826
> 
> [This one is included because I wonder if the testing requests
> includes the ability for users to upload ISOs via the web GUI, not
> just attach existing ISOs in data domains to VMs]
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1058798
> 
> 



Re: [ovirt-users] Decrease downtime for HA

2018-04-06 Thread Michal Skrivanek


> On 4 Apr 2018, at 15:36, Daniel Menzel  
> wrote:
> 
> Hello,
> 
> we're successfully using a setup with 4 Nodes and a replicated Gluster for 
> storage. The engine is self hosted. What we're dealing with at the moment is 
> the high availability: If a node fails (for example simulated by a forced 
> power loss) the engine comes back up online within ~2min. But guests (having 
> the HA option enabled) come back online only after a very long grace time of 
> ~5min. As we have a reliable network (40 GbE) and reliable servers I think 
> that the default grace times are way too high for us - is there any 
> possibility to change those values?

And do you have Power Management(iLO, iDRAC,etc) configured for your hosts? 
Otherwise we have to resort to relatively long timeouts to make sure the host 
is really dead

Thanks,
michal
> 
> Thanks in advance!
> Daniel
> 
> 
> 



Re: [ovirt-users] Hardware critique

2018-04-06 Thread Yaniv Kaul
On Fri, Apr 6, 2018, 12:39 AM Jayme  wrote:

> Vincent,
>
> I've been back and forth on SSDs vs HDDs and can't really get a clear
> answer.  You are correct though, it would only equal 4TB usable in the end
> which is pretty crazy but that amount of 7200 RPM HDDs equals about the
> same cost as 3 2TB ssds would.  I actually posted a question to this list
> not long ago asking how GlusterFS might perform with a small amount of
> disks such as one 2TB SSD per host and some glusterFS users commented
> stating that network would be the bottleneck long before the disks and a
> small number of SSDs could bottleneck at the RPC layer.  Also, I believe at
> this time GlusterFS is not exactly developed to take full advantage of SSDs
> (but I believe there has been strides being made in that regard, I could be
> wrong here).
>

Coming real soon now are some very cool features that will make decisions
somewhat harder: dedup+compression (from VDO) and lvmcache setup.


> As for replica 3 being overkill that may be true as well but from what
> I've read on Ovirt and GlusterFS list archives people typically feel safer
> with replica 3 and run in to less disaster scenarios and can provide easier
> recovery.  I'm not sold on Replica 3 either, Rep 3 Arbiter 1 may be more
> than fine but I wanted to err on the side of caution as this setup may host
> production servers sometime in the future.
>
> I really wish I could get some straight answers on best configuration for
> Ovirt + GlusterFS but thus far it has been a big question mark.  I don't
> know if Raid is better than JBOD and I don't know if a smaller number of
> SSDs would perform any better/worse than larger number of spinning disks in
> raid 10.
>

RAID is better than JBOD in terms of availability and most likely
performance.
It's also more expensive and requires initial setup.
SSD will perform better than HDD.
It's also more expensive than HDD.

No one but you can provide what works best for your requirements.
Y.


> On Thu, Apr 5, 2018 at 5:38 PM, Vincent Royer 
> wrote:
>
>> Jayme,
>>
>> I'm doing a very similar build, the only difference really is I am using
>> SSDs instead of HDDs.   I have similar questions as you regarding expected
>> performance. Have you considered JBOD + NFS?   Putting a Gluster Replica 3
>> on top of RAID 10 arrays sounds very safe, but my gosh the capacity takes a
>> massive hit.  Am I correct in saying you will only get 4TB total usable
>> capacity out of 24TB worth of disks?  The cost per TB in that sort of
>> scenario is immense.
>>
>> My plan is two 2TB SSDs per server in JBOD with a caching raid card, with
>> replica 3.  I would end up with the same 4TB total capacity using 12TB of
>> SSDs.
>>
>> I think Replica 3 is safe enough that you could forgo the RAID 10. But
>> I'm talking from zero experience...  Would love others to chime in with
>> their opinions on both these setups.
>>
>> *Vincent Royer*
>> *778-825-1057*
>>
>>
>> 
>> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>>
>>
>>
>>
>> On Thu, Apr 5, 2018 at 12:22 PM, Jayme  wrote:
>>
>>> Thanks for your feedback.  Any other opinions on this proposed setup?
>>> I'm very torn over using GlusterFS and what the expected performance may
>>> be, there seems to be little information out there.  Would love to hear any
>>> feedback specifically from ovirt users on hyperconverged configurations.
>>>
>>> On Thu, Apr 5, 2018 at 2:56 AM, Alex K  wrote:
>>>
 Hi,

 You should be ok with the setup.
 I am running around 20 vms (linux and windows, small and medium size)
 with half of your specs. With a 10G network replica 3 is ok.

 Alex

 On Wed, Apr 4, 2018, 16:13 Jayme  wrote:

> I'm spec'ing hardware for a 3-node oVirt build (on somewhat of a
> budget).  I plan to do 20-30 Linux VMs most of them very light weight + a
> couple of heavier hitting web and DB servers with frequent rsync backups.
> Some have a lot of small files from large github repos etc.
>
> 3X of the following:
>
> Dell PowerEdge R720
> 2x 2.9 GHz 8 Core E5-2690 (SR0L0)
> 256GB RAM
> PERC H710
> 2x10GB Nic
>
> Boot/OS will likely be two cheaper small sata/ssd in raid 1.
>
> Gluster bricks comprised of 4x2TB WD Gold 7200RPM SATA HDDs in RAID 10
> per server.  Using a replica 3 setup (and I'm thinking right now with no
> arbiter for extra redundancy, although I'm not sure what the performance
> hit may be as a result).  Will this allow for two host failures or just one?
>
> I've been really struggling with storage choices, it seems very
> difficult to predict the performance of glusterFS due to the variance in
> hardware (everyone is using something different).  I'm not sure if the
> performance will be adequate enough for my needs.
>
> I will be using an already existing Netgear XS716T 10GB switch for the
> Gluster storage network.
>
> In addition I pl

Re: [ovirt-users] Is compatibility level change required for upgrading from 4.0 to 4.2?

2018-04-06 Thread Michal Skrivanek


> On 5 Apr 2018, at 18:58, Yaniv Kaul  wrote:
> 
> 
> 
> On Thu, Apr 5, 2018, 5:31 PM Luca 'remix_tj' Lorenzetto  wrote:
> Hello,
> 
> we're planning an upgrade of an old 4.0 setup to 4.2, going through 4.1.
> 
> What we found out is that when upgrading from major to major, cluster
> and datacenter compatibility upgrade has to be done at the end of the
> upgrade.
> This means that we also require to restart our VMs for adapting the
> compatibility level.
> 
> Do we require to upgrade the compatibility level to 4.1 before
> starting the upgrade to 4.2?
> 
> No. 

you just need to keep the supported versions in mind. Version 4.2 supports 
Cluster levels 3.6, 4.0, 4.1 (same as version 4.1) - so any of them is ok 

> Y. 
> 
> 
> We have more than 400 vms and only few of them can be restarted at our
> convenience.

> 
> 
> --
> "E' assurdo impiegare gli uomini di intelligenza eccellente per fare
> calcoli che potrebbero essere affidati a chiunque se si usassero delle
> macchine"
> Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
> 
> "Internet è la più grande biblioteca del mondo.
> Ma il problema è che i libri sono tutti sparsi sul pavimento"
> John Allen Paulos, Matematico (1945-vivente)
> 
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net
> 



Re: [ovirt-users] Hardware critique

2018-04-06 Thread Yaniv Kaul
On Thu, Apr 5, 2018, 11:39 PM Vincent Royer  wrote:

> Jayme,
>
> I'm doing a very similar build, the only difference really is I am using
> SSDs instead of HDDs.   I have similar questions as you regarding expected
> performance. Have you considered JBOD + NFS?   Putting a Gluster Replica 3
> on top of RAID 10 arrays sounds very safe, but my gosh the capacity takes a
> massive hit.  Am I correct in saying you will only get 4TB total usable
> capacity out of 24TB worth of disks?  The cost per TB in that sort of
> scenario is immense.
>
> My plan is two 2TB SSDs per server in JBOD with a caching raid card, with
> replica 3.  I would end up with the same 4TB total capacity using 12TB of
> SSDs.
>

I'm not sure I see the value of a RAID card if you don't use RAID, and I'm not
sure you really need caching on the card.
Y.
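
The capacity figures quoted above (4TB usable out of 24TB of disk) fall out of multiplying the two layers of redundancy. A minimal sketch of that arithmetic, assuming no filesystem or arbiter overhead:

```python
def usable_tb(servers, disks_per_server, disk_tb, raid10, replica):
    """Rough usable capacity for a Gluster replica volume.

    RAID 10 halves each server's local capacity; a replica-N volume
    then stores every byte on N servers. Filesystem overhead ignored.
    """
    per_server = disks_per_server * disk_tb
    if raid10:
        per_server /= 2
    raw = servers * disks_per_server * disk_tb
    return raw, per_server * servers / replica

# Jayme's plan: 3 servers, 4x 2TB HDD in RAID 10, replica 3
print(usable_tb(3, 4, 2, raid10=True, replica=3))   # (24, 4.0)
# Vincent's plan: 3 servers, 2x 2TB SSD JBOD, replica 3
print(usable_tb(3, 2, 2, raid10=False, replica=3))  # (12, 4.0)
```

Both plans land on the same 4TB usable; the JBOD variant just gets there with half the raw disk, trading the RAID layer's local redundancy for cost.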


> I think Replica 3 is safe enough that you could forgo the RAID 10. But I'm
> talking from zero experience...  Would love others to chime in with their
> opinions on both these setups.
>
> *Vincent Royer*
> *778-825-1057*
>
>
> 
> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>
>
>
>
> On Thu, Apr 5, 2018 at 12:22 PM, Jayme  wrote:
>
>> Thanks for your feedback.  Any other opinions on this proposed setup?
>> I'm very torn over using GlusterFS and what the expected performance may
>> be, there seems to be little information out there.  Would love to hear any
>> feedback specifically from ovirt users on hyperconverged configurations.
>>
>> On Thu, Apr 5, 2018 at 2:56 AM, Alex K  wrote:
>>
>>> Hi,
>>>
>>> You should be ok with the setup.
>>> I am running around 20 VMs (Linux and Windows, small and medium size)
>>> with half of your specs. With a 10G network, replica 3 is OK.
>>>
>>> Alex
>>>
>>> On Wed, Apr 4, 2018, 16:13 Jayme  wrote:
>>>
 I'm spec'ing hardware for a 3-node oVirt build (on somewhat of a
 budget).  I plan to do 20-30 Linux VMs, most of them very lightweight, plus a
 couple of heavier-hitting web and DB servers with frequent rsync backups.
 Some have a lot of small files from large GitHub repos etc.

 3X of the following:

 Dell PowerEdge R720
 2x 2.9 GHz 8 Core E5-2690 (SR0L0)
 256GB RAM
 PERC H710
 2x10GB Nic

 Boot/OS will likely be two cheaper small sata/ssd in raid 1.

 Gluster bricks comprised of 4x 2TB WD Gold 7200RPM SATA HDDs in RAID 10
 per server, using a replica 3 setup (and I'm thinking right now with no
 arbiter, for extra redundancy, although I'm not sure what the performance
 hit may be as a result).  Will this allow for two host failures or just one?

 I've been really struggling with storage choices, it seems very
 difficult to predict the performance of glusterFS due to the variance in
 hardware (everyone is using something different).  I'm not sure if the
 performance will be adequate enough for my needs.

 I will be using an already existing Netgear XS716T 10GbE switch for the
 Gluster storage network.

 In addition I plan to build another simple glusterFS storage server
 that I can use to georeplicate the gluster volume to for DR purposes and
 use existing hardware to build an independent standby oVirt host that is
 able to start up a few high priority VMs from the georeplicated glusterFS
 volume if for some reason the primary oVirt cluster/glusterFS volume ever
 failed.

 I would love to hear any advice or critiques on this plan.

 Thanks!
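
The geo-replication DR plan described above could be wired up roughly as follows. This is a non-runnable ops sketch: the host names (node1..node3, drhost), brick paths and the volume names "vmstore"/"vmstore-dr" are made up for illustration, and passwordless root SSH between the sites is assumed to be in place.

```shell
# Replica 3 volume across the three oVirt nodes:
gluster volume create vmstore replica 3 \
    node1:/gluster/brick1/vmstore \
    node2:/gluster/brick1/vmstore \
    node3:/gluster/brick1/vmstore
gluster volume start vmstore

# One-way geo-replication to the standby DR box:
gluster system:: execute gsec_create
gluster volume geo-replication vmstore drhost::vmstore-dr create push-pem
gluster volume geo-replication vmstore drhost::vmstore-dr start
gluster volume geo-replication vmstore drhost::vmstore-dr status
```

Geo-replication is asynchronous, so the standby oVirt host would start its high-priority VMs from a slightly stale copy; that is usually acceptable for a DR tier.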

>>>
>>
>>
>>
>


Re: [ovirt-users] Hardware critique

2018-04-06 Thread Yaniv Kaul
On Thu, Apr 5, 2018, 11:51 PM FERNANDO FREDIANI 
wrote:

> I always found replica 3 a complete overkill. I don't know why people decided
> it was necessary. It just looks good and costs a lot with little benefit.
>

It's not very easy to resolve split-brain with only 2 replicas.
You can use 2 + arbiter.
Y.
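
The split-brain argument comes down to majority quorum. A toy sketch of the rule (not Gluster's exact implementation, which has tunable quorum options):

```python
def writes_allowed(bricks_up, replica_count):
    """Toy majority-quorum rule: a client may write only while a strict
    majority of the replica's bricks are reachable, so there is always
    a tie-breaking copy to consult after a failure."""
    return bricks_up > replica_count // 2

# replica 2: losing either brick leaves no majority, so writes must
# stop -- and allowing them anyway is exactly what causes split-brain.
print(writes_allowed(1, 2))  # False
# replica 2 + arbiter counts as 3 bricks for quorum purposes: any 2 of
# the 3 (data, data, arbiter) form a majority, at near-replica-2 cost.
print(writes_allowed(2, 3))  # True
```

The arbiter brick stores only metadata, which is why 2 + arbiter gets replica-3 quorum behavior without a third full copy of the data.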


> Normally, when using magnetic disks, 2 copies are fine for most scenarios.
> When using SSDs, depending on the disk configuration of each node, it is
> possible to have a RAID 5/6-like setup.
> Fernando
>


Re: [ovirt-users] Greetings oVirt Users

2018-04-06 Thread Joop
On 6-4-2018 02:58, Clint Boggio wrote:
> Environment Rundown:
>
> OVirt 4.2
> 6 CentOS 7.4 Compute Nodes Intel Xeon
> 1 CentOS 7.4 Dedicated Engine Node Intel Xeon 
> 1 Datacenter 
> 1 Storage Domain
> 1 Cluster
> 10Gig-E iSCSI Storage 
> 10Gig-E NFS Export Domain
> 20 VM’s of various OS’s and uses 
>
> The current cluster is using the Nehalem architecture.
>
> I’ve got to deploy two new VMs that the current system will not allow me to 
> configure with the Nehalem-based cluster, so I’ve got to bump up the 
> architecture of the cluster to accommodate them.
>
> Before i shut down all the current VMs to upgrade the cluster, I have some 
> questions about the effect this is going to have on the environment.
>
> 1. Will all of the current VM’s use the legacy processor architecture or will 
> I have to change them ?
They can keep the legacy architecture.

The cluster level is determined by the host with the oldest CPU architecture.
But since the CPU type is a cluster property, you can add 1 or 2 new
servers in a new cluster which you could use to run the new VMs.
So if all your servers are Nehalem, then you'll need at least one new server.

>
> 2. Can I elevate the cluster processor functionality higher than the 
> underlying hardware architecture  ?
No.
>
> 3. In regards to the new cluster processor, will all of the processor 
> architectures below the one I choose be an option for the existing and future 
> VMs ?
Yes.
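
The three answers above can be summed up in a small sketch. The list of Intel CPU types below follows the usual oldest-to-newest ordering oVirt presents, but treat it as an illustrative assumption rather than the exact set for any given version:

```python
# Intel CPU types in oldest-to-newest order (illustrative assumption).
INTEL_TYPES = ["Conroe", "Penryn", "Nehalem", "Westmere",
               "SandyBridge", "IvyBridge", "Haswell", "Broadwell"]

def highest_cluster_type(host_types):
    """Q2: the cluster CPU type is capped by the oldest host CPU."""
    return min(host_types, key=INTEL_TYPES.index)

def vm_runs_on(vm_type, cluster_type):
    """Q1/Q3: a VM may use the cluster's CPU type or any older one."""
    return INTEL_TYPES.index(vm_type) <= INTEL_TYPES.index(cluster_type)

# One Nehalem host pins the whole cluster to Nehalem:
print(highest_cluster_type(["Haswell", "Nehalem", "Broadwell"]))  # Nehalem
# After raising the cluster, the legacy VMs still run as-is:
print(vm_runs_on("Nehalem", "Haswell"))  # True
```

This is also why the suggestion above works: putting the newer servers in a second cluster lets that cluster's CPU type rise independently of the Nehalem hosts.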

>
> I apologize for the long post and I hope that I haven’t left out any vital 
> information.
>

Joop
