[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2019-03-24 Thread Paul Martin
Roger that - I've got the latest CentOS with all updates, and I see two
different EPYC options, but neither allows the compatibility checks to pass
on the Threadripper/Ryzen CPU.

On Thu, Mar 21, 2019 at 11:15 AM Sandro Bonazzola wrote:

>
>
> On Wed, 20 Mar 2019 at 11:19, Paul Martin wrote:
>
>> I've got 4.3.2 and still don't see Threadripper/Ryzen in the cluster
>> CPU list. Yes, it's probably identical to EPYC, but the hypervisor needs
>> to know this.
>>
>>
>> How do we add this?
>>
>
>
> Just a note that 4.3.2 is based on CentOS 7.6, not 7.5. Ryan, can you
> follow up on the CPU support?
>
>
>
>> --
>> Paul Martin | Senior Solutions Architect
>> PureWeb Inc.
>>
>>
>

-- 

Paul Martin | Senior Solutions Architect



P: 403.767.1560

W: pureweb.com

Suite 208, 1210 – 20th Ave SE, Calgary, AB T2G 1M8, CANADA
PureWeb | formerly Calgary Scientific – Please note change in my email
address

-- 

If you believe you have received this electronic transmission in error, 
please notify the original sender of this email and destroy all copies of 
this communication.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FK3VYQ6WFN4B22XNBQAEUJZZ72DSLFMK/


[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2019-03-21 Thread Sandro Bonazzola
On Wed, 20 Mar 2019 at 11:19, Paul Martin wrote:

> I've got 4.3.2 and still don't see Threadripper/Ryzen in the cluster
> CPU list. Yes, it's probably identical to EPYC, but the hypervisor needs
> to know this.
>
>
> How do we add this?
>


Just a note that 4.3.2 is based on CentOS 7.6, not 7.5. Ryan, can you
follow up on the CPU support?



> --
> Paul Martin | Senior Solutions Architect
> PureWeb Inc.
>
>


[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2019-03-20 Thread Paul Martin
I've got 4.3.2 and still don't see Threadripper/Ryzen in the cluster
CPU list. Yes, it's probably identical to EPYC, but the hypervisor needs
to know this.



How do we add this?

--
Paul Martin | Senior Solutions Architect
PureWeb Inc.




[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2018-11-30 Thread Alex McWhirter

On 2018-11-30 09:33, Darin Schmidt wrote:

I was curious: I have an AMD Threadripper (2970WX). Do you know where
oVirt greps, or how it otherwise gets the info needed to determine the CPU
type? I assume it possibly reads lscpu and just matches against that? I'd
like to be able to test this on a Threadripper.


IIRC, this is coming in 4.3


[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2018-11-30 Thread Darin Schmidt
I was curious: I have an AMD Threadripper (2970WX). Do you know where
oVirt greps, or how it otherwise gets the info needed to determine the CPU
type? I assume it possibly reads lscpu and just matches against that? I'd
like to be able to test this on a Threadripper.
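For what it's worth, my understanding (hedged, not read from the oVirt source) is that the engine does not run lscpu: VDSM reports the host's CPU model and flags via libvirt (the same flags you see in /proc/cpuinfo), and the engine matches them against the flag lists in its ServerCPUList config entries, e.g. "svm,nx" for AMD EPYC. A minimal sketch of that matching step; the `has_flags` helper is illustrative, not an oVirt function:

```shell
# has_flags "FLAGS" f1 f2 ...  ->  report missing flags; exit 0 if all present
# (illustrative helper mirroring how an entry like
#  "7:AMD EPYC:svm,nx,model_EPYC:EPYC:x86_64" is checked against host flags)
has_flags() {
  line=" $1 "
  shift
  rc=0
  for f in "$@"; do
    case "$line" in
      *" $f "*) ;;                     # flag present
      *) echo "missing: $f"; rc=1 ;;   # flag absent
    esac
  done
  return $rc
}

# On a real host, feed it the kernel's flag list:
#   has_flags "$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)" svm nx \
#     && echo "EPYC-level flags present"
```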


[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2018-11-09 Thread Mikael Öhman
From what I can see, this is the last update on Skylake-Server support in
oVirt: am I correct to understand that it was never backported to 4.2?
I'm on 4.2.7 and would like to use Skylake-Server, but it seems to still be
unavailable.

As you mention backporting, I assume it is/will be in 4.3?

And since the 4.3 release isn't coming anytime soon, is it recommended to
apply Tobias's "hack", or should I attempt some kind of CPU passthrough for
now (though I don't see a trivial way to enable that either)?
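One quick sanity check before hacking anything is to read back what the engine already knows. On the engine host, `engine-config -g ServerCPUList` (optionally with `--cver=4.2`) prints the configured CPU list; the `has_model` helper below is an illustrative sketch for searching it, not an oVirt tool:

```shell
# has_model "SERVERCPULIST_VALUE" MODEL  ->  exit 0 if MODEL has an entry
# (entries look like
#  "29:Intel Skylake Server Family:vmx,nx,model_Skylake-Server:Skylake-Server:x86_64")
has_model() {
  printf '%s\n' "$1" | tr ';' '\n' | grep -q "model_$2:"
}

# On the engine host (assumption: engine-config is on the PATH):
#   has_model "$(engine-config -g ServerCPUList --cver=4.2)" Skylake-Server \
#     && echo "Skylake-Server available" || echo "not in the 4.2 list"
```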

Best regards, Mikael


[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2018-09-13 Thread pousaduarte
Hello,

About hardware support: are there any expected drawbacks to using the
latest stable kernel from elrepo.org? If so, is there a better/supported
way to run a 4.x series kernel on oVirt Node while we wait for the next
major version of RHEL/CentOS?

Our benchmarks show a performance uplift with newer kernels, and we would
like to take advantage of that without harming stability, if at all
possible.
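As a side note, a version-aware compare makes it easy to script "am I already on a 4.x kernel?" checks across hosts; `ver_ge` is an illustrative helper built on GNU `sort -V`:

```shell
# ver_ge A B  ->  exit 0 if version string A >= version string B
# (relies on GNU coreutils "sort -V" for version-aware ordering)
ver_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

# e.g. on a node:
#   ver_ge "$(uname -r)" 4.0 && echo "4.x or newer" || echo "3.x era kernel"
```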


[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2018-05-29 Thread Michal Skrivanek


> On 25 May 2018, at 18:25, Sandro Bonazzola wrote:
> 
> 
> 
> 2018-05-25 18:17 GMT+02:00 Tobias Scheinert:
> Hi,
> 
> for any of you who wants AMD EPYC support right now (it's a hack, not
> properly tested!):
> 
> Michal, can we backport https://bugzilla.redhat.com/show_bug.cgi?id=1517286
> to 4.2, so people can avoid the hack?

Not without breaking compatibility with existing 4.2 deployments. It
requires the EL 7.5 qemu, which was not around at the time of the 4.2 GA.
It is used in master right now, and will be in oVirt 4.3 at the 4.3 cluster
level.

> 
> Tobias, just to be sure you noticed it: a respin of oVirt Node 4.2.3 based
> on CentOS 7.5 has been released this morning.
>  
> 
> [quoted hack instructions trimmed; see Tobias Scheinert's original
> message below for the full steps]

[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2018-05-25 Thread Tobias Scheinert

Hi,

On 25.05.2018 at 18:25, Sandro Bonazzola wrote:
Tobias, just to be sure you noticed it: a respin of oVirt Node 4.2.3
based on CentOS 7.5 has been released this morning.


yes, that's what I'm using for my EPYC nodes ;-) Thanks for that :-)


Greeting Tobias





[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2018-05-25 Thread Sandro Bonazzola
2018-05-25 18:17 GMT+02:00 Tobias Scheinert:

> Hi,
>
> for any of you who wants AMD EPYC support right now (it's a hack, not
> properly tested!):
>

Michal, can we backport https://bugzilla.redhat.com/show_bug.cgi?id=1517286
to 4.2, so people can avoid the hack?

Tobias, just to be sure you noticed it: a respin of oVirt Node 4.2.3 based
on CentOS 7.5 has been released this morning.


>
> [quoted hack instructions trimmed; see Tobias Scheinert's original
> message below for the full steps]

[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2018-05-25 Thread Tobias Scheinert

Hi,

for any of you who wants AMD EPYC support right now (it's a hack, not
properly tested!):


1) Make sure your node and your engine are running under RedHat/CentOS 7.5


[root@ovirt ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)


2) Get your PostgreSQL credentials


 less /etc/ovirt-engine/engine.conf.d/10-setup-database.conf


3) Make the PostgreSQL libs available, so that we can start the
PostgreSQL client later. (Run ldconfig afterwards so the linker picks up
the new path.)


echo '/opt/rh/rh-postgresql95/root/lib64/' >/etc/ld.so.conf.d/postgresql_opt.conf
ldconfig


4) Become PostgreSQL user.


su - postgres


5) Connect to the PostgreSQL database.


/opt/rh/rh-postgresql95/root/bin/psql -h localhost -p 5432 -U engine -W


6) Run the database update:


select fn_db_update_config_value('ServerCPUList', '3:Intel Conroe 
Family:vmx,nx,model_Conroe:Conroe:x86_64; 4:Intel Penryn 
Family:vmx,nx,model_Penryn:Penryn:x86_64; 5:Intel Nehalem 
Family:vmx,nx,model_Nehalem:Nehalem:x86_64; 6:Intel Nehalem IBRS 
Family:vmx,nx,spec_ctrl,model_Nehalem:Nehalem,+spec-ctrl:x86_64; 7:Intel 
Nehalem IBRS SSBD 
Family:vmx,nx,spec_ctrl,ssbd,model_Nehalem:Nehalem,+spec-ctrl,+ssbd:x86_64; 
8:Intel Westmere Family:aes,vmx,nx,model_Westmere:Westmere:x86_64; 9:Intel 
Westmere IBRS 
Family:aes,vmx,nx,spec_ctrl,model_Westmere:Westmere,+spec-ctrl:x86_64; 10:Intel 
Westmere IBRS SSBD 
Family:aes,vmx,nx,spec_ctrl,ssbd,model_Westmere:Westmere,+pcid,+spec-ctrl,+ssbd:x86_64;
 11:Intel SandyBridge Family:vmx,nx,model_SandyBridge:SandyBridge:x86_64; 
12:Intel SandyBridge IBRS 
Family:vmx,nx,spec_ctrl,model_SandyBridge:SandyBridge,+spec-ctrl:x86_64; 
13:Intel SandyBridge IBRS SSBD 
Family:vmx,nx,spec_ctrl,ssbd,model_SandyBridge:SandyBridge,+pcid,+spec-ctrl,+ssbd:x86_64;
 14:Intel Haswell-noTSX Family:vmx,nx,model_Haswell-noTSX:Haswell-noTSX:x86_64; 
15:Intel Haswell-noTSX IBRS 
Family:vmx,nx,spec_ctrl,model_Haswell-noTSX:Haswell-noTSX,+spec-ctrl:x86_64; 
16:Intel Haswell-noTSX IBRS SSBD 
Family:vmx,nx,spec_ctrl,ssbd,model_Haswell-noTSX:Haswell-noTSX,+spec-ctrl,+ssbd:x86_64;
 17:Intel Haswell Family:vmx,nx,model_Haswell:Haswell:x86_64; 18:Intel Haswell 
IBRS Family:vmx,nx,spec_ctrl,model_Haswell:Haswell,+spec-ctrl:x86_64; 19:Intel 
Haswell IBRS SSBD 
Family:vmx,nx,spec_ctrl,ssbd,model_Haswell:Haswell,+spec-ctrl,+ssbd:x86_64; 
20:Intel Broadwell-noTSX 
Family:vmx,nx,model_Broadwell-noTSX:Broadwell-noTSX:x86_64; 21:Intel 
Broadwell-noTSX IBRS 
Family:vmx,nx,spec_ctrl,model_Broadwell-noTSX:Broadwell-noTSX,+spec-ctrl:x86_64;
 22:Intel Broadwell-noTSX IBRS SSBD 
Family:vmx,nx,spec_ctrl,ssbd,model_Broadwell-noTSX:Broadwell-noTSX,+spec-ctrl,+ssbd:x86_64;
 23:Intel Broadwell Family:vmx,nx,model_Broadwell:Broadwell:x86_64; 24:Intel 
Broadwell IBRS 
Family:vmx,nx,spec_ctrl,model_Broadwell:Broadwell,+spec-ctrl:x86_64; 25:Intel 
Broadwell IBRS SSBD 
Family:vmx,nx,spec_ctrl,ssbd,model_Broadwell:Broadwell,+spec-ctrl,+ssbd:x86_64; 
26:Intel Skylake Client 
Family:vmx,nx,model_Skylake-Client:Skylake-Client:x86_64; 27:Intel Skylake 
Client IBRS 
Family:vmx,nx,spec_ctrl,model_Skylake-Client:Skylake-Client,+spec-ctrl:x86_64; 
28:Intel Skylake Client IBRS SSBD 
Family:vmx,nx,spec_ctrl,ssbd,model_Skylake-Client:Skylake-Client,+spec-ctrl,+ssbd:x86_64;
 29:Intel Skylake Server 
Family:vmx,nx,model_Skylake-Server:Skylake-Server:x86_64; 30:Intel Skylake 
Server IBRS 
Family:vmx,nx,spec_ctrl,model_Skylake-Server:Skylake-Server,+spec-ctrl:x86_64; 
31:Intel Skylake Server IBRS SSBD 
Family:vmx,nx,spec_ctrl,ssbd,model_Skylake-Server:Skylake-Server,+spec-ctrl,+ssbd:x86_64;
 2:AMD Opteron G1:svm,nx,model_Opteron_G1:Opteron_G1:x86_64; 3:AMD Opteron 
G2:svm,nx,model_Opteron_G2:Opteron_G2:x86_64; 4:AMD Opteron 
G3:svm,nx,model_Opteron_G3:Opteron_G3:x86_64; 5:AMD Opteron 
G4:svm,nx,model_Opteron_G4:Opteron_G4:x86_64; 6:AMD Opteron 
G5:svm,nx,model_Opteron_G5:Opteron_G5:x86_64; 7:AMD 
EPYC:svm,nx,model_EPYC:EPYC:x86_64; 8:AMD EPYC 
IBPB:svm,nx,ibpb,model_EPYC:EPYC,+ibpb:x86_64; 3:IBM 
POWER8:powernv,model_POWER8:POWER8:ppc64; 4:IBM 
POWER9:powernv,model_POWER9:POWER9:ppc64; 2:IBM z114, 
z196:sie,model_z196-base:z196-base:s390x; 3:IBM zBC12, 
zEC12:sie,model_zEC12-base:zEC12-base:s390x; 4:IBM z13s, 
z13:sie,model_z13-base:z13-base:s390x; 5:IBM 
z14:sie,model_z14-base:z14-base:s390x;', '4.2');


7) Exit PostgreSQL and the PostgreSQL user, restart the ovirt-engine


systemctl restart ovirt-engine


After that we are able to use the AMD EPYC processor model.


[root@virtual-machine ~]# cat /proc/cpuinfo
processor   : 0
vendor_id   : AuthenticAMD
cpu family  : 23
model   : 1
model name  : AMD EPYC Processor
stepping: 2
microcode   : 0x165
cpu MHz : 2399.998
cache size  : 512 KB
physical id : 0
siblings: 8
core id : 0
cpu cores   : 8
apicid  : 0
initial apicid  : 0
fpu : yes
fpu_exception   : yes
cpuid level : 13
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush 
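To script the same check (i.e. confirm from inside a guest that the EPYC model took effect), extracting the model name line is enough; `cpu_model` is an illustrative helper:

```shell
# cpu_model "CPUINFO_TEXT"  ->  print the first "model name" value
cpu_model() {
  printf '%s\n' "$1" | sed -n 's/^model name[[:space:]]*: //p' | sed -n '1p'
}

# Inside a guest after the engine restart:
#   cpu_model "$(cat /proc/cpuinfo)"   # should report an EPYC model
```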

[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2018-05-21 Thread Tobias Scheinert

Hi,

On 21.05.2018 at 10:52, Sandro Bonazzola wrote:

About oVirt Node 4.2.3 @ CentOS 7.5 we are planning a respin this week.


thank you for the quick response :-)


Greeting Tobias





[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2018-05-21 Thread Sandro Bonazzola
2018-05-20 17:26 GMT+02:00 Tobias Scheinert:

> Hi,
>
> I am currently building a new virtualization cluster with oVirt, using AMD
> EPYC processors (AMD EPYC 7351P). At the moment I'm running oVirt Node
> Version 4.2.3 @ CentOS 7.4.1708.
>
> We have the situation that the processor type is recognized as "AMD
> Opteron G3". With this instruction set the VMs are not able to do AES in
> hardware, which results in poor performance in our case.
>
> I found some information that tells me that this problem should be solved
> with CentOS 7.5
>
> --> 
>
> My actual questions:
>
> - Are there any further information about the AMD EPYC support?
> - Any information about an update of the oVirt node to CentOS 7.5?
>

About oVirt Node 4.2.3 @ CentOS 7.5 we are planning a respin this week.




>
>
> Greeting Tobias
>



-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA 

sbona...@redhat.com

