Branch: refs/heads/staging
  Home:   https://github.com/qemu/qemu
  Commit: ba8924113ca459d02ee67656a713f7ff494b968e
      
https://github.com/qemu/qemu/commit/ba8924113ca459d02ee67656a713f7ff494b968e
  Author: Shaobo Song <[email protected]>
  Date:   2022-07-12 (Tue, 12 Jul 2022)

  Changed paths:
    M tcg/region.c

  Log Message:
  -----------
  tcg: Fix returned type in alloc_code_gen_buffer_splitwx_memfd()

This fixes a bug on POSIX-compliant hosts. Since the buffer named 'tcg-jit'
is allocated with read-write access protections, the function must combine
these access flags in an int and return that, whereas it inexplicably
returned a bool. The truncated value may cause an unnecessary protection
change in tcg_region_init().

Cc: [email protected]
Fixes: 7be9ebcf924c ("tcg: Return the map protection from alloc_code_gen_buffer")
Signed-off-by: Shaobo Song <[email protected]>
Reviewed-by: Alex BennĂ©e <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Richard Henderson <[email protected]>


  Commit: b0f650f0477ae775e0915e3d60ab5110ad5e9157
      
https://github.com/qemu/qemu/commit/b0f650f0477ae775e0915e3d60ab5110ad5e9157
  Author: Ilya Leoshkevich <[email protected]>
  Date:   2022-07-12 (Tue, 12 Jul 2022)

  Changed paths:
    M accel/tcg/cputlb.c

  Log Message:
  -----------
  accel/tcg: Fix unaligned stores to s390x low-address-protected lowcore

If low-address-protection is active, unaligned stores to non-protected
parts of lowcore lead to protection exceptions. The reason is that in
such cases the tlb_fill() call in store_helper_unaligned() covers the
[0, addr + size) range, which contains the protected portion of
lowcore. This range is too large.

The most straightforward fix would be to make sure we stay within the
original [addr, addr + size) range. However, if an unaligned access
affects a single page, we don't need to call tlb_fill() in
store_helper_unaligned() at all, since it would be identical to
the previous tlb_fill() call in store_helper(), and therefore a no-op.
If an unaligned access covers multiple pages, this situation does not
occur.

Therefore simply skip TLB handling in store_helper_unaligned() if we
are dealing with a single page.

Fixes: 2bcf018340cb ("s390x/tcg: low-address protection support")
Signed-off-by: Ilya Leoshkevich <[email protected]>
Message-Id: <[email protected]>
Reviewed-by: Richard Henderson <[email protected]>
Signed-off-by: Richard Henderson <[email protected]>


  Commit: 08c8a31214e8ca29e05b9f6c3ee942b28ec58457
      
https://github.com/qemu/qemu/commit/08c8a31214e8ca29e05b9f6c3ee942b28ec58457
  Author: Richard Henderson <[email protected]>
  Date:   2022-07-12 (Tue, 12 Jul 2022)

  Changed paths:
    M accel/tcg/cputlb.c
    M tcg/region.c

  Log Message:
  -----------
  Merge tag 'pull-tcg-20220712' of https://gitlab.com/rth7680/qemu into staging

Fix for duplicate tlb check on unaligned access.
Fix for w^x code gen buffer mapping.

# -----BEGIN PGP SIGNATURE-----
#
# iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmLNEksdHHJpY2hhcmQu
# aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV8KPwf9EybXFrlI1u9A2nOK
# 8puFCKdN7eGjYo2dkRd/CyqugmsaS3IuL9cooWi7/A6pOtyuIWdlyI/r+PAZat3p
# GfvZvx9GejWpbUv6GYX2extZAev1EbhaaM6ZOg/EZGOWTjiINZMztuIWhbjftRUj
# 6E8FLkj/5PWQzYvi6TbMMAMqg5QsYERZIZ4SfDfjE2a8s8rloYDBdvVEaG35NOa/
# pv93clb7OrnE5VyJLHyfs8VwpbtJKsQy/Twwh1+828X/fetwJWT5AKfPZTIHLELL
# tVuABJA25wSfPPmjtXTzDjq5x5/UWKc16Zvk1tbcxuknLegxUH0Agy+qJRI3x5FA
# M3ZHOg==
# =b4EN
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 12 Jul 2022 11:48:51 AM +0530
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "[email protected]"
# gpg: Good signature from "Richard Henderson <[email protected]>" [ultimate]

* tag 'pull-tcg-20220712' of https://gitlab.com/rth7680/qemu:
  accel/tcg: Fix unaligned stores to s390x low-address-protected lowcore
  tcg: Fix returned type in alloc_code_gen_buffer_splitwx_memfd()

Signed-off-by: Richard Henderson <[email protected]>


Compare: https://github.com/qemu/qemu/compare/93c4ea344ea1...08c8a31214e8
