Re: [Qemu-devel] [Qemu-ppc] [RFC PATCH 0/6] target/ppc: convert VMX instructions to use TCG vector operations

2018-12-11 Thread Richard Henderson
On 12/11/18 1:35 PM, Mark Cave-Ayland wrote:
> Looking at your profiles above, the primary hotspot appears to be
> helper_lookup_tb_ptr(). However, as someone quite new to the TCG parts of
> QEMU, I couldn't tell you whether or not this is to be expected.
> 
> Perhaps a question for Richard: what could we consider to be a "normal"
> backend when looking at profiles, in terms of recommended features to
> implement, and to get an idea of what a typical profile should look like?
> 
> It would be interesting to compare with a similar workload profile on e.g.
> ARM to see if there is anything obvious that stands out for PPC.

For Alpha, which has relatively sane tlb management, the top entry in the
profile is helper_lookup_tb_ptr at about 8%.  Otherwise, somewhere in the top 2
or 3 functions will be helper_le_ldq_mmu at about 2%.

That's probably a best-case scenario.


r~



Re: [Qemu-devel] [Qemu-ppc] [RFC PATCH 0/6] target/ppc: convert VMX instructions to use TCG vector operations

2018-12-11 Thread Mark Cave-Ayland
On 11/12/2018 01:20, David Gibson wrote:

> On Mon, Dec 10, 2018 at 09:54:51PM +0100, BALATON Zoltan wrote:
>> On Mon, 10 Dec 2018, David Gibson wrote:
>>> On Mon, Dec 10, 2018 at 01:33:53AM +0100, BALATON Zoltan wrote:
>>>> On Fri, 7 Dec 2018, Mark Cave-Ayland wrote:
>>>>> This patchset is an attempt at trying to improve the VMX (Altivec)
>>>>> instruction performance by making use of the new TCG vector operations
>>>>> where possible.
>>>>
>>>> This is very welcome, thanks for doing this.
>>>>
>>>>> In order to use TCG vector operations, the registers must be accessible
>>>>> from cpu_env whilst currently they are accessed via arrays of static TCG
>>>>> globals. Patches 1-3 are therefore mechanical patches which introduce
>>>>> access helpers for FPR, AVR and VSR registers using the supplied
>>>>> TCGv_i64 parameter.
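As a rough illustration of what such an access helper could look like (a sketch only: the helper names, the avr[] field layout and the high/low half selection are assumptions here, not necessarily the exact code in the patches):

/* Assumes the usual QEMU TCG translation headers and the cpu_env TCGv_ptr. */
static inline void get_avr64(TCGv_i64 dst, int regno, bool high)
{
    /* Load one 64-bit half of an Altivec register straight out of
     * CPUPPCState instead of going through a static TCG global. */
    tcg_gen_ld_i64(dst, cpu_env,
                   offsetof(CPUPPCState, avr[regno].u64[high ? 0 : 1]));
}

static inline void set_avr64(TCGv_i64 src, int regno, bool high)
{
    /* Store the supplied TCGv_i64 back into the same CPUPPCState slot. */
    tcg_gen_st_i64(src, cpu_env,
                   offsetof(CPUPPCState, avr[regno].u64[high ? 0 : 1]));
}

A translator would then allocate a temporary with tcg_temp_new_i64(), fill it with get_avr64(), operate on it, and write it back with set_avr64(), rather than referring to a per-register global.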

>>>> Have you tried some benchmarks or tests to measure the impact of these
>>>> changes? I've tried the (very unscientific) benchmarks I've written about
>>>> before here:
>>>>
>>>> http://lists.nongnu.org/archive/html/qemu-ppc/2018-07/msg00261.html
>>>>
>>>> (which seem to use AltiVec/VMX instructions but not sure which) on mac99
>>>> with MorphOS and I could not see any performance increase. I haven't run
>>>> enough tests, but results with or without this series on master were
>>>> mostly the same within a few percent, and I sometimes even saw lower
>>>> performance with these patches than without. I haven't tried to find out
>>>> why (no time for that now) so I can't really draw any conclusions from
>>>> this. I'm also not sure whether I've actually tested what you've changed,
>>>> whether these use instructions that your patches don't optimise yet, or
>>>> whether the changes I've seen were just normal variation between runs;
>>>> but I wonder if the increased number of temporaries could result in lower
>>>> performance in some cases?
>>>
>>> What was your host machine?  IIUC this change will only improve
>>> performance if the host tcg backend is able to implement TCG vector
>>> ops in terms of vector ops on the host.
>>
>> Tried it on an i5 650 which has: sse sse2 ssse3 sse4_1 sse4_2. I assume
>> x86_64 should be supported, but I'm not sure what the CPU requirements are.
>>
>>> In addition, this series only converts a subset of the integer and
>>> logical vector instructions.  If your testcase is mostly floating
>>> point (vectored or otherwise), it will still be softfloat and so not
>>> see any speedup.
>>
>> Yes, I don't really know what these tests use, but I think the "lame" test
>> is mostly floating point; I also tried "lame_vmx", which should at least
>> use some vector ops, and the "mplayer -benchmark" test is more VMX
>> dependent based on my previous profiling and testing with hardfloat, but
>> I'm not sure. (When testing these with hardfloat I found that lame was
>> benefiting from hardfloat but mplayer wasn't, and more VMX related
>> functions showed up with mplayer, so I assumed it's more VMX bound.)
> 
> I should clarify here.  When I say "floating point" above, I'm not
> meaning things using the regular FPU instead of the vector unit.  I'm
> saying *anything* involving floating point calculations whether
> they're done in the FPU or the vector unit.
> 
> The patches here don't convert all VMX instructions to use vector TCG
> ops - they only convert a few, and those few are about using the
> vector unit for integer (and logical) operations.  VMX instructions
> involving floating point calculations are unaffected and will still
> use soft-float.

Right. As I mentioned in an earlier email, this is hopefully laying the
groundwork for future evolution of the TCG vector operations, making use of
the existing functions first.

Certainly I'd be interested in looking at the hardfloat patches after this,
since FP performance is something that can offer an enormous benefit to MacOS
emulation.

>> I've tried to do some profiling again to find out what's used, but I can't
>> get good results with the tools I have (oprofile stopped working since I
>> updated my machine, Linux perf gives results that are hard for me to
>> interpret, and I haven't tried whether gprof would work now; it didn't
>> before), but I've seen some vector related helpers in the profile so at
>> least some vector ops are used. The "helper_vperm" came up highest, at
>> about 11th (not sure where it is called from); other vector helpers were
>> lower.
>>
>> I don't remember the details now, but previously when testing hardfloat I
>> wrote this: "I've looked at vperm which came out top in one of the
>> profiles I've taken and on little endian hosts it has the loop backwards and
>> also accesses vector elements from end to front which I wonder may be enough
>> for the compiler to not be able to optimise it? But I haven't checked
>> assembly. The altivec dependent mplayer video decoding test did not change
>> much with hardfloat, it took 98% compared to master so likely altivec is
>> dominating here." (Although this was with the PPC specific vector helpers
>> before the VMX patch, so not sure if this is still relevant.)

Re: [Qemu-devel] [Qemu-ppc] [RFC PATCH 0/6] target/ppc: convert VMX instructions to use TCG vector operations

2018-12-10 Thread BALATON Zoltan

On Tue, 11 Dec 2018, David Gibson wrote:

On Mon, Dec 10, 2018 at 09:54:51PM +0100, BALATON Zoltan wrote:

Yes, I don't really know what these tests use, but I think the "lame" test is
mostly floating point; I also tried "lame_vmx", which should at least use
some vector ops, and the "mplayer -benchmark" test is more VMX dependent based
on my previous profiling and testing with hardfloat, but I'm not sure. (When
testing these with hardfloat I found that lame was benefiting from hardfloat
but mplayer wasn't, and more VMX related functions showed up with mplayer, so
I assumed it's more VMX bound.)


I should clarify here.  When I say "floating point" above, I'm not
meaning things using the regular FPU instead of the vector unit.  I'm
saying *anything* involving floating point calculations whether
they're done in the FPU or the vector unit.


OK, that clarifies it. I admit I was only testing these but didn't have time
to look at what exactly changed.



The patches here don't convert all VMX instructions to use vector TCG
ops - they only convert a few, and those few are about using the
vector unit for integer (and logical) operations.  VMX instructions
involving floating point calculations are unaffected and will still
use soft-float.


What I've said above about the lame test being more FPU intensive and mplayer
more VMX intensive probably still holds, as I've now retried on a Haswell i5
and got a 1-2% difference with lame_vmx and ~6% with mplayer. That's very
little improvement, but if only some VMX instructions are expected to be
faster then this may make sense.


These tests are not the best; maybe there are better ways to measure this,
but I don't know of any.



Maybe the PPC softmmu should be reviewed and optimised by someone who knows
it...


I'm not sure there is anyone who knows it at this point.  I probably
know it as well as anybody, and the ppc32 code scares me.  It's a
crufty mess and it would be nice to clean up, but that requires
someone with enough time and interest.


At least this seems to be a big bottleneck in PPC emulation and one that's
not being worked on (others, like hardfloat and VMX, are not finished and
still have a lot left to do, but already show some results, whereas no one is
looking at softmmu). I was just trying to direct some attention to the fact
that softmmu may also need some optimisation, and I hope someone will notice
this. I have some interest but not much time these days, and if it scares
you, what should I say? I don't even understand most of it, so it would take
a lot of time just to work out how it works and what would need to be done.
So I hope someone with more time or knowledge shows up and maybe at least
provides some hints on what may need to be done.


Regards,
BALATON Zoltan



Re: [Qemu-devel] [Qemu-ppc] [RFC PATCH 0/6] target/ppc: convert VMX instructions to use TCG vector operations

2018-12-10 Thread David Gibson
On Mon, Dec 10, 2018 at 09:54:51PM +0100, BALATON Zoltan wrote:
> On Mon, 10 Dec 2018, David Gibson wrote:
> > On Mon, Dec 10, 2018 at 01:33:53AM +0100, BALATON Zoltan wrote:
> > > On Fri, 7 Dec 2018, Mark Cave-Ayland wrote:
> > > > This patchset is an attempt at trying to improve the VMX (Altivec)
> > > > instruction performance by making use of the new TCG vector operations
> > > > where possible.
> > > 
> > > This is very welcome, thanks for doing this.
> > > 
> > > > In order to use TCG vector operations, the registers must be accessible
> > > > from cpu_env whilst currently they are accessed via arrays of static
> > > > TCG globals. Patches 1-3 are therefore mechanical patches which
> > > > introduce access helpers for FPR, AVR and VSR registers using the
> > > > supplied TCGv_i64 parameter.
> > > 
> > > Have you tried some benchmarks or tests to measure the impact of these
> > > changes? I've tried the (very unscientific) benchmarks I've written about
> > > before here:
> > > 
> > > http://lists.nongnu.org/archive/html/qemu-ppc/2018-07/msg00261.html
> > > 
> > > (which seem to use AltiVec/VMX instructions but not sure which) on mac99
> > > with MorphOS and I could not see any performance increase. I haven't run
> > > enough tests, but results with or without this series on master were
> > > mostly the same within a few percent, and I sometimes even saw lower
> > > performance with these patches than without. I haven't tried to find out
> > > why (no time for that now) so I can't really draw any conclusions from
> > > this. I'm also not sure whether I've actually tested what you've changed,
> > > whether these use instructions that your patches don't optimise yet, or
> > > whether the changes I've seen were just normal variation between runs;
> > > but I wonder if the increased number of temporaries could result in
> > > lower performance in some cases?
> > 
> > What was your host machine?  IIUC this change will only improve
> > performance if the host tcg backend is able to implement TCG vector
> > ops in terms of vector ops on the host.
> 
> Tried it on an i5 650 which has: sse sse2 ssse3 sse4_1 sse4_2. I assume
> x86_64 should be supported, but I'm not sure what the CPU requirements are.
> 
> > In addition, this series only converts a subset of the integer and
> > logical vector instructions.  If your testcase is mostly floating
> > point (vectored or otherwise), it will still be softfloat and so not
> > see any speedup.
> 
> Yes, I don't really know what these tests use, but I think the "lame" test
> is mostly floating point; I also tried "lame_vmx", which should at least use
> some vector ops, and the "mplayer -benchmark" test is more VMX dependent
> based on my previous profiling and testing with hardfloat, but I'm not sure.
> (When testing these with hardfloat I found that lame was benefiting from
> hardfloat but mplayer wasn't, and more VMX related functions showed up with
> mplayer, so I assumed it's more VMX bound.)

I should clarify here.  When I say "floating point" above, I'm not
meaning things using the regular FPU instead of the vector unit.  I'm
saying *anything* involving floating point calculations whether
they're done in the FPU or the vector unit.

The patches here don't convert all VMX instructions to use vector TCG
ops - they only convert a few, and those few are about using the
vector unit for integer (and logical) operations.  VMX instructions
involving floating point calculations are unaffected and will still
use soft-float.

> I've tried to do some profiling again to find out what's used, but I can't
> get good results with the tools I have (oprofile stopped working since I
> updated my machine, Linux perf gives results that are hard for me to
> interpret, and I haven't tried whether gprof would work now; it didn't
> before), but I've seen some vector related helpers in the profile so at
> least some vector ops are used. The "helper_vperm" came up highest, at about
> 11th (not sure where it is called from); other vector helpers were lower.
> 
> I don't remember the details now, but previously when testing hardfloat I
> wrote this: "I've looked at vperm which came out top in one of the
> profiles I've taken and on little endian hosts it has the loop backwards and
> also accesses vector elements from end to front which I wonder may be enough
> for the compiler to not be able to optimise it? But I haven't checked
> assembly. The altivec dependent mplayer video decoding test did not change
> much with hardfloat, it took 98% compared to master so likely altivec is
> dominating here." (Although this was with the PPC specific vector helpers
> before the VMX patch, so not sure if this is still relevant.)
> 
> The top 10 in the profile were still related to low level memory access and
> MMU management stuff, as I've found before:
> 
> http://lists.nongnu.org/archive/html/qemu-devel/2018-07/msg03609.html
> http://lists.nongnu.org/archive/html/qemu-devel/2018-07/msg03704.html
> 
> I think implementing i2c for mac99 may help this, and some other
> optimisations may also be possible, but I don't know enough about these to
> try that.

Re: [Qemu-devel] [Qemu-ppc] [RFC PATCH 0/6] target/ppc: convert VMX instructions to use TCG vector operations

2018-12-10 Thread BALATON Zoltan

On Mon, 10 Dec 2018, Richard Henderson wrote:

On 12/10/18 2:54 PM, BALATON Zoltan wrote:

Tried it on an i5 650 which has: sse sse2 ssse3 sse4_1 sse4_2. I assume x86_64
should be supported, but I'm not sure what the CPU requirements are.


Not quite.  I only support avx1 and later.

I thought about supporting sse4 and later (that's the minimum with all of the
instructions that do what we need), but there is only one cpu generation with
sse4 and without avx1, and avx1 is already 8 years old.


OK, that explains why I haven't seen any improvements. My CPU just predates
AVX and happens to be in the generation you mention. But I agree this is
probably not worth the effort. Maybe I should test on something newer
instead.


Thank you,
BALATON Zoltan



Re: [Qemu-devel] [Qemu-ppc] [RFC PATCH 0/6] target/ppc: convert VMX instructions to use TCG vector operations

2018-12-10 Thread Richard Henderson
On 12/10/18 2:54 PM, BALATON Zoltan wrote:
>> What was your host machine?  IIUC this change will only improve
>> performance if the host tcg backend is able to implement TCG vector
>> ops in terms of vector ops on the host.
> 
> Tried it on an i5 650 which has: sse sse2 ssse3 sse4_1 sse4_2. I assume
> x86_64 should be supported, but I'm not sure what the CPU requirements are.

Not quite.  I only support avx1 and later.

I thought about supporting sse4 and later (that's the minimum with all of the
instructions that do what we need), but there is only one cpu generation with
sse4 and without avx1, and avx1 is already 8 years old.
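For reference, a minimal sketch of the kind of host-feature gate being described; the function and variable names are illustrative assumptions, not the actual tcg/i386 backend code:

#include <cpuid.h>
#include <stdbool.h>

static bool have_avx1;   /* whether the backend may emit host SIMD code */

static void detect_host_vector_support(void)
{
    unsigned a, b, c, d;

    /* CPUID leaf 1, ECX bit 28 advertises AVX.  A complete check would
     * also verify OS support for the YMM state via XGETBV. */
    if (__get_cpuid(1, &a, &b, &c, &d)) {
        have_avx1 = (c & bit_AVX) != 0;
    }
}

On a host like the i5 650 (SSE4.2 but no AVX) such a gate stays false, so the TCG vector ops fall back to non-vector code, which would explain the lack of improvement observed above.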


r~



Re: [Qemu-devel] [Qemu-ppc] [RFC PATCH 0/6] target/ppc: convert VMX instructions to use TCG vector operations

2018-12-10 Thread BALATON Zoltan

On Mon, 10 Dec 2018, David Gibson wrote:

On Mon, Dec 10, 2018 at 01:33:53AM +0100, BALATON Zoltan wrote:

On Fri, 7 Dec 2018, Mark Cave-Ayland wrote:

This patchset is an attempt at trying to improve the VMX (Altivec) instruction
performance by making use of the new TCG vector operations where possible.


This is very welcome, thanks for doing this.


In order to use TCG vector operations, the registers must be accessible from
cpu_env whilst currently they are accessed via arrays of static TCG globals.
Patches 1-3 are therefore mechanical patches which introduce access helpers
for FPR, AVR and VSR registers using the supplied TCGv_i64 parameter.


Have you tried some benchmarks or tests to measure the impact of these
changes? I've tried the (very unscientific) benchmarks I've written about
before here:

http://lists.nongnu.org/archive/html/qemu-ppc/2018-07/msg00261.html

(which seem to use AltiVec/VMX instructions but not sure which) on mac99
with MorphOS and I could not see any performance increase. I haven't run
enough tests, but results with or without this series on master were mostly
the same within a few percent, and I sometimes even saw lower performance
with these patches than without. I haven't tried to find out why (no time
for that now) so I can't really draw any conclusions from this. I'm also not
sure whether I've actually tested what you've changed, whether these use
instructions that your patches don't optimise yet, or whether the changes
I've seen were just normal variation between runs; but I wonder if the
increased number of temporaries could result in lower performance in some
cases?


What was your host machine.  IIUC this change will only improve
performance if the host tcg backend is able to implement TCG vector
ops in terms of vector ops on the host.


Tried it on an i5 650 which has: sse sse2 ssse3 sse4_1 sse4_2. I assume
x86_64 should be supported, but I'm not sure what the CPU requirements are.



In addition, this series only converts a subset of the integer and
logical vector instructions.  If your testcase is mostly floating
point (vectored or otherwise), it will still be softfloat and so not
see any speedup.


Yes, I don't really know what these tests use, but I think the "lame" test is
mostly floating point; I also tried "lame_vmx", which should at least use
some vector ops, and the "mplayer -benchmark" test is more VMX dependent
based on my previous profiling and testing with hardfloat, but I'm not sure.
(When testing these with hardfloat I found that lame was benefiting from
hardfloat but mplayer wasn't, and more VMX related functions showed up with
mplayer, so I assumed it's more VMX bound.)


I've tried to do some profiling again to find out what's used, but I can't
get good results with the tools I have (oprofile stopped working since I
updated my machine, Linux perf gives results that are hard for me to
interpret, and I haven't tried whether gprof would work now; it didn't
before), but I've seen some vector related helpers in the profile so at least
some vector ops are used. The "helper_vperm" came up highest, at about 11th
(not sure where it is called from); other vector helpers were lower.


I don't remember the details now, but previously when testing hardfloat I
wrote this: "I've looked at vperm which came out top in one of the
profiles I've taken and on little endian hosts it has the loop backwards
and also accesses vector elements from end to front which I wonder may be
enough for the compiler to not be able to optimise it? But I haven't
checked assembly. The altivec dependent mplayer video decoding test did
not change much with hardfloat, it took 98% compared to master so likely
altivec is dominating here." (Although this was with the PPC specific
vector helpers before the VMX patch, so not sure if this is still relevant.)
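The pattern being described looks roughly like the following simplified sketch (not the actual QEMU helper_vperm() source; the union type and element selection are assumptions for illustration): on a little-endian host the loop runs in reverse and the source vectors are indexed from the far end, which can make it harder for the compiler to auto-vectorise.

#include <stdint.h>

typedef union {
    uint8_t u8[16];
} avr_sketch_t;

static void vperm_sketch(avr_sketch_t *r, const avr_sketch_t *a,
                         const avr_sketch_t *b, const avr_sketch_t *c)
{
    avr_sketch_t result;

    /* Element loop runs "backwards" on a little-endian host ... */
    for (int i = 15; i >= 0; i--) {
        int s = c->u8[i] & 0x1f;
        /* ... and elements of a/b are picked counting from the other end. */
        result.u8[i] = (s < 16) ? a->u8[15 - s] : b->u8[31 - s];
    }
    *r = result;
}

Whether this is really what stops the compiler from optimising it would need checking against the generated assembly, as noted above.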


The top 10 in the profile were still related to low level memory access and
MMU management stuff, as I've found before:


http://lists.nongnu.org/archive/html/qemu-devel/2018-07/msg03609.html
http://lists.nongnu.org/archive/html/qemu-devel/2018-07/msg03704.html

I think implementing i2c for mac99 may help this, and some other
optimisations may also be possible, but I don't know enough about these to
try that.


It also looks like with --enable-debug something is always flushing the TLB
and blowing away the TB caches, so these will be at the top of the profile
and likely dominate the runtime, so I can't really use a profile to measure
the impact of the VMX patch. Without --enable-debug I can't get call graphs,
so I can't get a useful profile. I think I've looked at this before as well,
but I can't remember now which check enabled by --enable-debug is responsible
for the constant TB cache flushes and whether that could be avoided. I just
don't use --enable-debug unless I need to debug something.


Maybe the PPC softmmu should be reviewed and optimised by someone who 
knows it...


Regards,
BALATON Zoltan



Re: [Qemu-devel] [Qemu-ppc] [RFC PATCH 0/6] target/ppc: convert VMX instructions to use TCG vector operations

2018-12-09 Thread David Gibson
On Mon, Dec 10, 2018 at 01:33:53AM +0100, BALATON Zoltan wrote:
> On Fri, 7 Dec 2018, Mark Cave-Ayland wrote:
> > This patchset is an attempt at trying to improve the VMX (Altivec)
> > instruction performance by making use of the new TCG vector operations
> > where possible.
> 
> This is very welcome, thanks for doing this.
> 
> > In order to use TCG vector operations, the registers must be accessible
> > from cpu_env whilst currently they are accessed via arrays of static TCG
> > globals. Patches 1-3 are therefore mechanical patches which introduce
> > access helpers for FPR, AVR and VSR registers using the supplied
> > TCGv_i64 parameter.
> 
> Have you tried some benchmarks or tests to measure the impact of these
> changes? I've tried the (very unscientific) benchmarks I've written about
> before here:
> 
> http://lists.nongnu.org/archive/html/qemu-ppc/2018-07/msg00261.html
> 
> (which seem to use AltiVec/VMX instructions but not sure which) on mac99
> with MorphOS and I could not see any performance increase. I haven't run
> enough tests, but results with or without this series on master were mostly
> the same within a few percent, and I sometimes even saw lower performance
> with these patches than without. I haven't tried to find out why (no time
> for that now) so I can't really draw any conclusions from this. I'm also not
> sure whether I've actually tested what you've changed, whether these use
> instructions that your patches don't optimise yet, or whether the changes
> I've seen were just normal variation between runs; but I wonder if the
> increased number of temporaries could result in lower performance in some
> cases?

What was your host machine?  IIUC this change will only improve
performance if the host tcg backend is able to implement TCG vector
ops in terms of vector ops on the host.

In addition, this series only converts a subset of the integer and
logical vector instructions.  If your testcase is mostly floating
point (vectored or otherwise), it will still be softfloat and so not
see any speedup.
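As a concrete illustration of that point, a converted integer VMX instruction is expressed at translate time through the generic gvec interface, and the host TCG backend then decides whether this becomes real SIMD code or a scalar/helper fallback. A rough sketch (the avr_full_offset() helper is an assumed name for a function returning the register's offset within CPUPPCState, not necessarily what the series uses):

static void gen_vaddubm_sketch(DisasContext *ctx)
{
    /* Byte-wise vector add over one 128-bit Altivec register. */
    tcg_gen_gvec_add(MO_8,                             /* element size */
                     avr_full_offset(rD(ctx->opcode)), /* destination */
                     avr_full_offset(rA(ctx->opcode)), /* source a */
                     avr_full_offset(rB(ctx->opcode)), /* source b */
                     16, 16);                          /* opr size / max size */
}

tcg_gen_gvec_add() emits host vector instructions when the backend advertises vector support and otherwise falls back to 64-bit integer code or an out-of-line helper, which is why the observed benefit depends so heavily on the host CPU and backend.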

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson




Re: [Qemu-devel] [Qemu-ppc] [RFC PATCH 0/6] target/ppc: convert VMX instructions to use TCG vector operations

2018-12-09 Thread BALATON Zoltan

On Fri, 7 Dec 2018, Mark Cave-Ayland wrote:

This patchset is an attempt at trying to improve the VMX (Altivec) instruction
performance by making use of the new TCG vector operations where possible.


This is very welcome, thanks for doing this.


In order to use TCG vector operations, the registers must be accessible from
cpu_env whilst currently they are accessed via arrays of static TCG globals.
Patches 1-3 are therefore mechanical patches which introduce access helpers
for FPR, AVR and VSR registers using the supplied TCGv_i64 parameter.


Have you tried some benchmarks or tests to measure the impact of these
changes? I've tried the (very unscientific) benchmarks I've written about
before here:


http://lists.nongnu.org/archive/html/qemu-ppc/2018-07/msg00261.html

(which seem to use AltiVec/VMX instructions but not sure which) on mac99
with MorphOS and I could not see any performance increase. I haven't run
enough tests, but results with or without this series on master were mostly
the same within a few percent, and I sometimes even saw lower performance
with these patches than without. I haven't tried to find out why (no time
for that now) so I can't really draw any conclusions from this. I'm also not
sure whether I've actually tested what you've changed, whether these use
instructions that your patches don't optimise yet, or whether the changes
I've seen were just normal variation between runs; but I wonder if the
increased number of temporaries could result in lower performance in some
cases?


Regards,
BALATON Zoltan