Re: [PATCH 08/11] x86emul: handle AVX512-FP16 conversion to/from (packed) int16 insns

2022-08-11 Thread Jan Beulich
On 10.08.2022 21:01, Andrew Cooper wrote:
> On 15/06/2022 11:30, Jan Beulich wrote:
>> These are easiest in that they have same-size source and destination
>> vectors, yet they're different from other conversion insns in that they
>> use opcodes which have different meaning in the 0F encoding space
>> ({,V}H{ADD,SUB}P{S,D}), hence requiring a little bit of overriding.

Re: [PATCH 08/11] x86emul: handle AVX512-FP16 conversion to/from (packed) int16 insns

2022-08-10 Thread Andrew Cooper
On 15/06/2022 11:30, Jan Beulich wrote:
> These are easiest in that they have same-size source and destination
> vectors, yet they're different from other conversion insns in that they
> use opcodes which have different meaning in the 0F encoding space
> ({,V}H{ADD,SUB}P{S,D}), hence requiring a little bit of overriding.
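
[ For readers not familiar with the insns under discussion: these are the
packed FP16 <-> int16 conversions. A minimal user-space illustration, not
part of the patch, using the Intel intrinsics, assuming AVX512-FP16 capable
hardware and a compiler option such as -mavx512fp16, might look like: ]

/* Illustration only (not part of the patch): exercises the packed
 * FP16 <-> int16 conversions the emulator needs to handle, via the
 * corresponding Intel intrinsics.  Assumes AVX512-FP16 hardware and a
 * compiler invoked with e.g. -mavx512fp16. */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m512i w     = _mm512_set1_epi16(-42);
    __m512h ph    = _mm512_cvtepi16_ph(w);    /* VCVTW2PH   */
    __m512i back  = _mm512_cvtph_epi16(ph);   /* VCVTPH2W   */
    __m512i trunc = _mm512_cvttph_epi16(ph);  /* VCVTTPH2W  */

    printf("%d %d\n", (short)_mm512_cvtsi512_si32(back),
           (short)_mm512_cvtsi512_si32(trunc));
    return 0;
}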

[PATCH 08/11] x86emul: handle AVX512-FP16 conversion to/from (packed) int16 insns

2022-06-15 Thread Jan Beulich
These are easiest in that they have same-size source and destination
vectors, yet they're different from other conversion insns in that they
use opcodes which have different meaning in the 0F encoding space
({,V}H{ADD,SUB}P{S,D}), hence requiring a little bit of overriding.

Signed-off-by: Jan Beulich
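
[ To make the opcode clash a little more concrete, here is a hypothetical
sketch, not the actual x86_emulate() code; the identifiers below are
invented for illustration. The FP16 <-> int16 conversions sit at opcode
bytes 0x7c/0x7d in EVEX map 5, while the same byte values in the 0F map
encode the horizontal add/subtract insns, so a decoder driven by a shared
0F-based table needs a map-5-specific override along these lines: ]

/* Hypothetical sketch only: names and table layout are invented and do
 * not match Xen's x86_emulate() internals. */
#include <stdint.h>
#include <stdio.h>

enum insn_kind { HADDSUB_PS_PD, CVT_PH_INT16 };

/* Classify opcode bytes 0x7c/0x7d depending on the encoding map in use. */
static enum insn_kind classify_7c_7d(uint8_t opc, unsigned int map)
{
    /* In the 0F map these bytes are {,V}H{ADD,SUB}P{S,D}. */
    enum insn_kind kind = HADDSUB_PS_PD;

    /*
     * In EVEX map 5 (AVX512-FP16) the very same byte values encode
     * VCVTTPH2{,U}W (0x7c) and VCVTPH2{,U}W / VCVT{,U}W2PH (0x7d),
     * i.e. conversions with same-size source and destination vectors,
     * hence the need to override what the 0F-based table says.
     */
    if ( map == 5 && (opc == 0x7c || opc == 0x7d) )
        kind = CVT_PH_INT16;

    return kind;
}

int main(void)
{
    printf("0f 7c -> %d, map5 7c -> %d\n",
           classify_7c_7d(0x7c, 1), classify_7c_7d(0x7c, 5));
    return 0;
}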