What is the difference between virtio-net-pci, virtio-net-pci-non-transitional and virtio-net-pci-transitional
I am seeing multiple virtio models for -nic:

% qemu-system-x86_64 -nic model=help | grep virtio
virtio-net-pci
virtio-net-pci-non-transitional
virtio-net-pci-transitional

I want to know the difference between them. When to use what?

Thanks and Best Regards,
Ahmmad Ismail
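For context, any of the listed model names can also be instantiated explicitly with -device; a minimal user-mode-networking sketch (the disk image name is a placeholder):

```shell
# Attach a NIC using one of the listed virtio variants; swapping the
# -device name between virtio-net-pci, virtio-net-pci-transitional and
# virtio-net-pci-non-transitional is the only change needed.
qemu-system-x86_64 \
    -netdev user,id=n0 \
    -device virtio-net-pci-non-transitional,netdev=n0 \
    disk.img
```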
Re: Question About Qemu > Hackintosh
Matt Stacy writes:

> I was wondering will Qemu Linux ever be able to emulate Apple Silicon,
> modified Arm?

Only if someone writes the code for it. The core aarch64 emulation is
pretty solid, but there are custom Apple instructions and system
registers which are undocumented. If you want to emulate an actual M1
system, that is a lot of work because there are numerous peripherals
that would need emulating.

> The reason why I ask is because Open Core is now dead to me and many
> other users because Apple has decided to switch to their own CPUs and
> GPUs. I have an Intel based system still and I still want to be able
> to update my Mac, but can’t eventually, because it won’t be supported
> anymore using the old way, Open Core. If you guys have any ideas for
> making virtualization happen for this, it would be great. I mean, I
> love my Hackintosh, but I’m in fear that it’ll outdate eventually and
> not be any use to me. Please email me back, when you get the chance.

-- Alex Bennée
Re: Modification to single Threaded Multi-Core emulation in TCG
Arnabjyoti Kalita writes:

> Hello all,
>
> I have a requirement to use the single-threaded implementation of
> multi-core emulation in TCG. This schedules the multiple cores in a
> round-robin fashion from what I understand, at the end of a set timer
> interval.

--accel tcg,thread=single will force the round-robin scheduling you need.

> I want to make a modification to this approach. I would like to
> schedule the vCPUs in a round-robin fashion but at the end of every
> Translation Block (TB)'s execution. So, instead of having a timer to
> switch the vCPU, I would like to forcefully switch the vCPU when a
> translation block has been executed.

As a gross hack you can run with -d nochain and then ensure cpu_tb_exec
does the equivalent of rr_kick_next_cpu after you execute the block.
However, before you go down this route I do have to ask why? What is it
you are trying to achieve with this running mode?

> What would be the best way to implement this approach? Do I need to
> raise an interrupt at the end of the execution of every TB? Where in
> code should I start making changes? I do not want the functionality
> of the original TCG execution driver to change.
>
> Best Regards,
> Arnabjyoti Kalita

-- Alex Bennée
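Both suggestions above map onto existing QEMU command-line options; a minimal invocation might look like this (the machine, CPU and kernel arguments are placeholders for whatever you normally run):

```shell
# Single-threaded round-robin TCG; -d nochain stops translation blocks
# from jumping directly to each other, so control returns to the outer
# execution loop after every block.
qemu-system-aarch64 -M virt -cpu cortex-a57 \
    -accel tcg,thread=single \
    -d nochain \
    -kernel Image -nographic
```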
Re: qemu-user aarch64 and pointer authentication
On Tue, 11 Jan 2022 at 17:06, zadig wrote:

> > Because qemu-user is specifically emulating a Linux kernel.
> > We don't want to provide a million tweakable command line options,
> > it gets unmaintainable very quickly. We just want to provide the
> > process with the environment that the Linux kernel gives it.
> Yes, I agree.

I should note that I'm increasingly sure our usermode emulation code
isn't setting up TCR_EL1 correctly. Fixing this might make more bits
available for authentication. I'll put this on my todo list to
investigate.

> > That's system emulation, which is unrelated to usermode emulation
> > provided by qemu-aarch64. (If you use system emulation, then the
> > guest kernel that you run under QEMU gets to choose what page
> > size and so on it configures.)
> I know this is about system emulation, but I do not know how
> the Linux kernel works, for example if it sources the ID_AA64MMFR0_EL1
> register or not (which - I think - hints the granule size).

ID_AA64MMFR0_EL1 tells guest OS code which page sizes the emulated CPU
supports. QEMU's emulated CPUs always support all 3 page sizes (4K, 16K,
64K); real hardware often only supports a subset of those. The guest OS
itself then decides what page size it wants to use and programs
TCR_EL1.{TG0,TG1} appropriately (assuming that it was built with support
for at least one page size that the hardware supports.)

> Anyways, I will try using system emulation and a custom kernel, to see
> if I can extend the signature size in pointers.

There are definitely config options which can affect it (for instance
the ARM64_VA_BITS_52 config option notes that it will reduce the size of
the pointer auth signature to 3 bits!)

-- PMM
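The ID_AA64MMFR0_EL1 decoding described above can be sketched in a few lines (a minimal illustration, not QEMU code; the field offsets and encodings are as given in the Arm Architecture Reference Manual, and note that TGran16 uses an inverted encoding compared with TGran4/TGran64):

```python
# Decode the stage-1 granule-support fields of ID_AA64MMFR0_EL1.
def supported_granules(id_aa64mmfr0: int) -> dict:
    tgran4 = (id_aa64mmfr0 >> 28) & 0xF   # 0b0000 = 4K supported, 0b1111 = not
    tgran64 = (id_aa64mmfr0 >> 24) & 0xF  # 0b0000 = 64K supported, 0b1111 = not
    tgran16 = (id_aa64mmfr0 >> 20) & 0xF  # inverted: 0b0000 = 16K NOT supported
    return {
        "4K": tgran4 != 0xF,
        "64K": tgran64 != 0xF,
        "16K": tgran16 != 0x0,
    }

# A CPU reporting all three granules (as QEMU's emulated CPUs do) could
# expose TGran4 == 0, TGran64 == 0 and TGran16 == 1:
print(supported_granules(0x1 << 20))  # {'4K': True, '64K': True, '16K': True}
```

The guest kernel reads these fields at boot and refuses page sizes the CPU does not advertise, which is why a kernel built only for 16K pages will not boot on hardware whose TGran16 field is zero.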
Re: qemu-user aarch64 and pointer authentication
> Because qemu-user is specifically emulating a Linux kernel.
> We don't want to provide a million tweakable command line options,
> it gets unmaintainable very quickly. We just want to provide the
> process with the environment that the Linux kernel gives it.

Yes, I agree.

> That's system emulation, which is unrelated to usermode emulation
> provided by qemu-aarch64. (If you use system emulation, then the
> guest kernel that you run under QEMU gets to choose what page
> size and so on it configures.)

I know this is about system emulation, but I do not know how the Linux
kernel works, for example if it sources the ID_AA64MMFR0_EL1 register
or not (which - I think - hints the granule size).

Anyways, I will try using system emulation and a custom kernel, to see
if I can extend the signature size in pointers.

Thank you for helping,
- zadig

PS: just noticed that I used "BTI" like the instruction instead of
"TBI" .. - facepalming myself.
Re: qemu-user aarch64 and pointer authentication
On Tue, 11 Jan 2022 at 16:33, zadig wrote:
>
> Thanks for your celerity.
>
> > The architecture specifies that the number of bits used for the
> > signature depends on various properties of the CPU and of
> > the configuration that the host OS has put it into.
> Yes, this is why I checked for the TCR value, because basically it only
> depends on its value and if BTI is enabled.
>
> > This sounds like a bug -- can you provide a repro case ?
> > Also, if you could confirm that this still happens on a
> > QEMU built from current git that would be helpful.
> >
> > (We do have some test cases for pauth -- see tests/tcg/aarch64/pauth*.c --
> > but it looks like they only test against the aut* instructions, not
> > against retab.)
> I checked using commit 64c01c7da449bcafc614b27ecf1325bb08031c84 and the
> RETAB was honored.
>
> > You can't change TCR from usermode, because it's a privileged
> > register. What you get is what QEMU sets it as, which in theory
> > should be the value that a real Linux kernel would set it to
> > for the kind of CPU that is being emulated. Looking at the code
> > I'm not sure if we're setting TCR the same way the kernel does:
> > to confirm that we'd need to look at the kernel source code and
> > cross-check what values it uses.
> I do not have a clue about how the Linux kernel set the TCR, but I do
> not understand why we cannot change it
> (for example with a command-line option), since we just emulate code ?

Because qemu-user is specifically emulating a Linux kernel. We don't
want to provide a million tweakable command line options, it gets
unmaintainable very quickly. We just want to provide the process with
the environment that the Linux kernel gives it.

> I guess it would be possible if I implement a new machine which derives
> from TYPE_VIRT_MACHINE
> and I set custom page granule size and page size.

That's system emulation, which is unrelated to usermode emulation
provided by qemu-aarch64. (If you use system emulation, then the guest
kernel that you run under QEMU gets to choose what page size and so on
it configures.)

-- PMM
Re: qemu-user aarch64 and pointer authentication
Thanks for your celerity.

> The architecture specifies that the number of bits used for the
> signature depends on various properties of the CPU and of
> the configuration that the host OS has put it into.

Yes, this is why I checked for the TCR value, because basically it only
depends on its value and if BTI is enabled.

> This sounds like a bug -- can you provide a repro case ?
> Also, if you could confirm that this still happens on a
> QEMU built from current git that would be helpful.
>
> (We do have some test cases for pauth -- see tests/tcg/aarch64/pauth*.c --
> but it looks like they only test against the aut* instructions, not
> against retab.)

I checked using commit 64c01c7da449bcafc614b27ecf1325bb08031c84 and the
RETAB was honored.

> You can't change TCR from usermode, because it's a privileged
> register. What you get is what QEMU sets it as, which in theory
> should be the value that a real Linux kernel would set it to
> for the kind of CPU that is being emulated. Looking at the code
> I'm not sure if we're setting TCR the same way the kernel does:
> to confirm that we'd need to look at the kernel source code and
> cross-check what values it uses.

I do not have a clue about how the Linux kernel set the TCR, but I do
not understand why we cannot change it (for example with a command-line
option), since we just emulate code ?

I guess it would be possible if I implement a new machine which derives
from TYPE_VIRT_MACHINE and I set custom page granule size and page size.

- zadig
Re: qemu-user aarch64 and pointer authentication
On Tue, 11 Jan 2022 at 15:28, zadig wrote:
>
> Hello,
>
> I am running some dummy aarch64 ELF I have built using clang with
> -mbranch-protection=pac-ret+leaf+b-key.
>
> qemu successfully emulates the code, however the pointer authentication
> signature seems weird to me: only one byte is used for the signature.

The architecture specifies that the number of bits used for the
signature depends on various properties of the CPU and of
the configuration that the host OS has put it into.

> Here is an example:
>
> FE 07 C1 DA    PACIB X30, SP
>
> Before the LR gets signed, its value is 0xFEFDD99C.
> After being signed by PACIB, its value is 0x0061FEFDD99C.
>
> If I disable BTI, the signature takes 2 bytes, which is "better".
> However on real aarch64 system (like Apple M1 chips), the signature uses
> the remaining bytes.

That probably indicates that that specific system happens to configure
the CPU differently. (For instance, I think the M1 has a different
implemented address space size.) 7 bits of signature would be expected
for a CPU with TBI (top-byte-ignore) enabled and a 48-bit
virtual-address size.

> In both cases (with or without BTI), the signature is not honored: if I
> manually strip the signature or change it using gdb, the RETAB
> instruction does not change the LR for generating a fault, which should
> be the right behavior.

This sounds like a bug -- can you provide a repro case ?
Also, if you could confirm that this still happens on a
QEMU built from current git that would be helpful.

(We do have some test cases for pauth -- see tests/tcg/aarch64/pauth*.c --
but it looks like they only test against the aut* instructions, not
against retab.)

> I have explored the qemu source code, and I guess the following code is
> responsible for adding the signature to the pointer:
>
> target/arm/pauth_helper.c:
> --
> static uint64_t pauth_addpac(CPUARMState *env, uint64_t ptr,
>                              uint64_t modifier, ARMPACKey *key, bool data)
> {
>     ...
>     top_bit = 64 - 8 * param.tbi;
>     bot_bit = 64 - param.tsz;
>     ext_ptr = deposit64(ptr, bot_bit, top_bit - bot_bit, ext);
> --
>
> We notice how BTI reduces the size of the signature, and how tsz is
> reducing it too.
>
> So, my question is how can we manipulate TCR from qemu-user, in order to
> change tsz, so we can store the signature on more bytes ?

You can't change TCR from usermode, because it's a privileged
register. What you get is what QEMU sets it as, which in theory
should be the value that a real Linux kernel would set it to
for the kind of CPU that is being emulated. Looking at the code
I'm not sure if we're setting TCR the same way the kernel does:
to confirm that we'd need to look at the kernel source code and
cross-check what values it uses.

thanks
-- PMM
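The quoted pauth_addpac arithmetic can be worked through numerically (a sketch mirroring the two quoted lines, not a copy of QEMU's helper; param.tbi is 1 when top-byte-ignore is enabled and param.tsz is 64 minus the virtual-address width):

```python
# Reproduce the PAC-field placement computed by pauth_addpac().
def pac_field(tbi: int, tsz: int):
    top_bit = 64 - 8 * tbi   # first bit above the PAC field (TBI reserves the top byte)
    bot_bit = 64 - tsz       # lowest bit of the PAC field (virtual-address width)
    return bot_bit, top_bit - bot_bit  # (start bit, field width in bits)

# With TBI enabled and a 48-bit VA (tsz = 16) the PAC occupies bits
# 48..55; one of those bits carries the pointer's sign/extension bit,
# leaving the 7 usable signature bits mentioned above.
pos, width = pac_field(tbi=1, tsz=16)
print(pos, width)  # 48 8
```

The same arithmetic shows why a larger tsz (smaller address space) or disabled TBI widens the field, which is exactly the effect the original poster observed.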
qemu-user aarch64 and pointer authentication
Hello,

I am running some dummy aarch64 ELF I have built using clang with
-mbranch-protection=pac-ret+leaf+b-key.

qemu successfully emulates the code, however the pointer authentication
signature seems weird to me: only one byte is used for the signature.

Here is an example:

FE 07 C1 DA    PACIB X30, SP

Before the LR gets signed, its value is 0xFEFDD99C.
After being signed by PACIB, its value is 0x0061FEFDD99C.

If I disable BTI, the signature takes 2 bytes, which is "better".
However on a real aarch64 system (like Apple M1 chips), the signature
uses the remaining bytes.

In both cases (with or without BTI), the signature is not honored: if I
manually strip the signature or change it using gdb, the RETAB
instruction does not change the LR for generating a fault, which should
be the right behavior.

I have explored the qemu source code, and I guess the following code is
responsible for adding the signature to the pointer:

target/arm/pauth_helper.c:
--
static uint64_t pauth_addpac(CPUARMState *env, uint64_t ptr,
                             uint64_t modifier, ARMPACKey *key, bool data)
{
    ...
    top_bit = 64 - 8 * param.tbi;
    bot_bit = 64 - param.tsz;
    ext_ptr = deposit64(ptr, bot_bit, top_bit - bot_bit, ext);
--

We notice how BTI reduces the size of the signature, and how tsz
reduces it too.

So, my question is how we can manipulate TCR from qemu-user, in order to
change tsz, so we can store the signature in more bytes ?

I am running qemu from the git branch stable-6.0, using 16K pages. My
host is an x86_64 host, running archlinux. Here is how I launch aarch64
qemu-user:

~/git/qemu/build/qemu-aarch64 -L /usr/aarch64-linux-gnu -p 16K -g 1234 build/test_pac_no_bti

Regards,
-zadig