Re: How to enable UEFI?

2024-07-08 Thread Dongli Zhang
These are my notes from a while back on how to create a UEFI-based VM with QEMU.

https://raw.githubusercontent.com/finallyjustice/sample/master/kvm/uefi_ol7.txt

To play with arm64, the blog below mentions that AAVMF is required.

https://blogs.oracle.com/linux/post/oracle-linux-9-with-qemu-on-an-m1-mac
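
For x86, the usual way is to point QEMU at the OVMF (EDK2) firmware images via
pflash. A minimal sketch, assuming the Debian/Ubuntu paths under
/usr/share/OVMF/ (locations differ per distro, and the VARS file should be a
writable per-VM copy):

host# cp /usr/share/OVMF/OVMF_VARS.fd my_vars.fd
host# qemu-system-x86_64 -enable-kvm -m 4096 \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=my_vars.fd \
  -drive file=guest.qcow2,if=virtio

Newer QEMU also accepts the shorthand '-bios /usr/share/OVMF/OVMF.fd', at the
cost of losing persistent UEFI variables.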

Dongli Zhang

On 7/8/24 9:07 AM, Frantisek Rysanek wrote:
> Hello Steve,
> 
> I have a faint recollection that, when I compiled QEMU from source, 
> there was a choice of the BIOS / UEFI ROM image to load, as a runtime 
> option... somehow I was supposed to provide a path to the images, if 
> I wanted to change the default, which is SeaBIOS (legacy BIOS 
> services).
> 
> This might be a good pointer to start from:
> https://medium.com/@tunacici7/qemu-eli5-part-6-uefi-bios-ovmf-7919facf7e31
> 
> At least that's what Google has divulged upon my first query. 
> There were other hits, see for yourself if you want:
> https://www.google.com/search?q=QEMU+load+UEFI+BIOS+image
> 
> Good luck :-)
> 
> Frank
> 
>> Hi all,
>>
>> I didn't specify a motherboard, chipset or CPU for my qemu VM guest.
>> The result worked only with the old BIOS boot system, and not with
>> UEFI. Although I personally prefer the old BIOS system, for the
>> particular demonstration I'm giving I need UEFI. How do I specify UEFI
>> in my Qemu commands?
>>
>> Thanks,
>>
>> SteveT
>>
>> Steve Litt 
>> http://444domains.com
>>
> 
> 
> 



Re: Macvtap devices?

2022-09-04 Thread Dongli Zhang
I suggest you figure out the differences between the device types by reading
some online docs, e.g., the one below.

https://developers.redhat.com/blog/2018/10/22/introduction-to-linux-interfaces-for-virtual-networking

Here is how I set up a software-only environment to play with a macvtap device.

https://github.com/finallyjustice/sample/blob/master/kvm/macvtap.txt
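
The gist, as a minimal sketch (assuming a host NIC named eth0; macvtap exposes
a character device /dev/tapN, where N is the macvtap interface's ifindex):

host# ip link add link eth0 name macvtap0 type macvtap mode bridge
host# ip link set macvtap0 up
host# qemu-system-x86_64 ... \
  -device virtio-net-pci,netdev=net0,mac=$(cat /sys/class/net/macvtap0/address) \
  -netdev tap,id=net0,fd=3 3<>/dev/tap$(cat /sys/class/net/macvtap0/ifindex)

In short: a macvtap device is a tap-like endpoint stacked directly on top of a
physical NIC (macvlan + tap), so no software bridge is needed, and QEMU uses it
by being handed the already-opened fd as above.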

Dongli Zhang

On 9/2/22 4:17 PM, X Tec wrote:
> I have used the -netdev tap,[...] options before, though almost exclusively 
> for bridged networking.
> 
> I understand this setup uses tap (or tun/tap?) virtual devices for the 
> virtual machines.
> 
> But, what are "macvtap" devices really?
> I saw the term while reading some libvirt docs for comparison purposes; they 
> particularly seem to favor these devices...
> But even after internet searching I was not able to understand them.
> 
> So, if someone could help,
> What are they, and how do they differ from the "normal" tap devices commonly 
> used with QEMU?
> Are macvtap devices supported in QEMU? How can one use them?
> 
> Thanks beforehand.
> 



Re: possible to resize (extend) a raw image online with qemu-img ?

2021-10-19 Thread Dongli Zhang
I never use qemu-img, but FYI here are my notes on how to resize a raw image.

To resize a disk image in 'raw' format:

1. To create image file in 'raw' format.

host# dd if=/dev/zero of=test.raw bs=1M count=64 oflag=direct

2. To boot VM with QEMU.

host# qemu-system-x86_64 -m 4000M -enable-kvm -smp 4 -vnc :5 \
  -net nic -net user,hostfwd=tcp::5025-:22 \
  -device virtio-blk-pci,drive=drive0,id=virtblk0,num-queues=16 \
  -drive file=ol7.qcow2,if=none,id=drive0 \
  -device virtio-blk-pci,drive=drive1,id=virtblk1,num-queues=16 \
  -drive file=test.raw,if=none,id=drive1 \
  -monitor stdio -cpu host
QEMU 4.2.0 monitor - type 'help' for more information
(qemu)

3. To view disk size within VM.

guest# cat /sys/block/vdb/size
131072

4 (optional). To extend the host image file from 64M to 128M. This step is
optional, as 'block_resize' in the QEMU monitor (step 5) can grow the file on
its own.

host# dd if=/dev/zero of=test.raw bs=1M count=64 oflag=append conv=notrunc

5. To extend the host image file to 128M (if not already done in step 4) and
propagate the change to the guest via QEMU.

(qemu) block_resize drive1 128M

6. Now the new disk size is available within VM.

guest# cat /sys/block/vdb/size
262144

[  124.446907] virtio_blk virtio1: [vdb] new size: 262144 512-byte logical
blocks (134 MB/128 MiB)
[  124.448132] vdb: detected capacity change from 67108864 to 134217728
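
To answer the original question: 'qemu-img resize' can grow a raw image too,
but it is meant for images that are not attached to a running guest; for a
live VM, 'block_resize' in the monitor (or via QMP) is the way to propagate
the new size. A sketch of the offline variant:

host# qemu-img resize -f raw test.raw 128M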

Dongli Zhang

On 10/19/21 10:08 AM, Lentes, Bernd wrote:
> Hi ML,
> 
> is it possible to extend the disk (raw format) of a running guest with 
> qemu-img ?
> 
> Thanks.
> 
> 
> Bernd
> 



Re: Interesting qemu/virt-manager bug about the "rotational" attribute on virtio-blk disks

2020-07-16 Thread Dongli Zhang
According to the commit below, virtio-blk used to be marked non-rotational, but
the flag (QUEUE_FLAG_NONROT/QUEUE_FLAG_VIRT) was later removed.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=f8b12e513b953aebf30f8ff7d2de9be7e024dbbe
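
For reference, the attribute the bug report is about can be checked inside the
guest (vda assumes a virtio-blk disk):

guest# cat /sys/block/vda/queue/rotational
1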

Dongli Zhang

On 7/16/20 1:06 AM, Richard W.M. Jones wrote:
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1857515
> 
> A virtio-blk disk which is backed by a raw file on an SSD,
> inside the guest shows rotational = 1.
> 
> I assumed that qemu must have a "rotational" property for disks and
> this would be communicated by virtio to the guest, but qemu and virtio
> don't seem to have this.  Pretty surprising!  Is it called something
> other than "rotational"?
> 
> Rich.
> 



Re: use multiple namespaces in emulated NVMe controller

2020-05-04 Thread Dongli Zhang
The series below was an attempt to add this feature to QEMU:

https://patchew.org/QEMU/20200415055140.466900-1-...@irrelevant.dk/
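
The series was eventually merged; in later QEMU releases, multiple namespaces
are expressed with the separate 'nvme-ns' device, roughly like this (a sketch;
check the docs of your QEMU version for the exact options):

  -device nvme,id=nvme0,serial=1234
  -drive file=ns1.img,if=none,id=ns1
  -device nvme-ns,drive=ns1,bus=nvme0
  -drive file=ns2.img,if=none,id=ns2
  -device nvme-ns,drive=ns2,bus=nvme0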

Dongli Zhang

On 5/4/20 6:14 AM, Thanos Makatos wrote:
> I'm using an emulated NVMe controller in QEMU (4.1.0) and want to see whether 
> it's possible to use multiple namespaces in the same controller, each backed 
> by a separate file. However this doesn't seem to be possible? This is how I 
> use one controller and one namespace:
> 
>   -drive file=nvme.img,if=none,id=D22
>   -device nvme,drive=D22,serial=1234
> 
> 



Re: qemu-x86: kernel panic when host is loaded

2020-04-02 Thread Dongli Zhang



On 4/2/20 2:57 AM, Thomas Gleixner wrote:
> Corentin,
> 
> Corentin Labbe  writes:
>> In our kernelci lab, each qemu worker runs a healthcheck job every day and 
>> after each job failure, so it is heavily used.
>> The healthcheck job is a Linux boot with a stable release.
>>
>> Since we upgraded our workers to buster, the qemu x86_64 healthcheck randomly 
>> panics with:
>> <0>[0.009000] Kernel panic - not syncing: IO-APIC + timer doesn't work!  
>> Boot with apic=debug and send a report.  Then try booting with the 'noapic' 
>> option.
>>
>> After some testing I found the source of this kernel panic: the host is
>> loaded and qemu runs "slower". Simply renicing all qemu processes removed
>> this behaviour.
>>
>> So now what can I do?
>> Apart from renicing the qemu processes, is there anything else that could
>> be done?
> 
> As the qemu timer/ioapic routing is actually sane, you might try to add
> "no_timer_check" to the kernel command line.
> 

Isn't the timer check already permanently skipped for KVM guests (i.e.,
no_timer_check is implied) by the commit below?

commit a90ede7b17d1 ("KVM: x86: paravirt skip pit-through-ioapic boot check")

In addition, Hyper-V and VMware guests also bypass that check:

commit ca3ba2a2f4a4 ("x86, hyperv: Bypass the timer_irq_works() check").

commit 854dd54245f7 ("x86/vmware: Skip timer_irq_works() check on VMware")
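
For completeness, passing the option explicitly via direct kernel boot would
look roughly like this (kernel image path and root device are assumptions):

host# qemu-system-x86_64 -enable-kvm -kernel bzImage \
  -append "root=/dev/vda1 console=ttyS0 no_timer_check"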

Dongli Zhang



Re: [Qemu-discuss] virtio-scsi really slow init with ArchLinux kernel

2018-07-11 Thread Dongli Zhang
While there is no delay with a stable 4.17.5 kernel built from source, there is
a delay of less than 8 seconds with the ubuntu ppa kernel 4.17.5 on amd64.
Perhaps this is related to the kernel config file, as I used "make defconfig"
with virtio SCSI enabled manually.

[0.905328] scsi host2: Virtio SCSI HBA
[0.905810] scsi 2:0:0:0: Direct-Access     QEMU     QEMU HARDDISK    2.5+
PQ: 0 ANSI: 5
[0.912323] FDC 0 is a S82078B
[0.927547] PCI Interrupt Link [LNKC] enabled at IRQ 10
[1.250295] e1000 0000:00:03.0 eth0: (PCI:33MHz:32-bit) 52:54:00:12:34:56
[1.250452] e1000 0000:00:03.0 eth0: Intel(R) PRO/1000 Network Connection
[1.251320] e1000 0000:00:03.0 ens3: renamed from eth0
[1.600095] tsc: Refined TSC clocksource calibration: 3392.142 MHz
[1.600250] clocksource: tsc: mask: 0xffffffffffffffff max_cycles:
0x30e54fbd081, max_idle_ns: 440795321209 ns
[7.696221] random: fast init done
[9.264477] sd 2:0:0:0: Power-on or device reset occurred
[9.264743] sd 2:0:0:0: Attached scsi generic sg1 type 0
[9.265183] sd 2:0:0:0: [sda] 62914560 512-byte logical blocks: (32.2 GB/30.0
GiB)
[9.265460] sd 2:0:0:0: [sda] Write Protect is off
[9.265576] sd 2:0:0:0: [sda] Mode Sense: 63 00 00 08
[9.265656] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled,
doesn't support DPO or FUA
[9.266258]  sda: sda1
[9.266638] sd 2:0:0:0: [sda] Attached SCSI disk


Dongli Zhang

On 07/12/2018 01:35 PM, Dongli Zhang wrote:
> Hi Chris,
> 
> On 07/12/2018 01:45 AM, Chris wrote:
>> On Wed, Jul 11, 2018 at 12:43 PM, Greg Kurz  wrote:
>>> I've been observing a similar delay on ppc64 with fedora28 guests:
>>>
>>> # dmesg | egrep 'scsi| sd '
>>> [1.530946] scsi host0: Virtio SCSI HBA
>>> [1.532452] scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK
>>> 2.5+ PQ: 0 ANSI: 5
>>> [   21.928378] sd 0:0:0:0: Power-on or device reset occurred
>>> [   21.930012] sd 0:0:0:0: Attached scsi generic sg0 type 0
>>> [   21.931554] sd 0:0:0:0: [sda] 83886080 512-byte logical blocks: (42.9 
>>> GB/40.0 GiB)
>>> [   21.931929] sd 0:0:0:0: [sda] Write Protect is off
>>> [   21.933110] sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
>>> [   21.934084] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, 
>>> doesn't support DPO or FUA
>>> [   21.943566] sd 0:0:0:0: [sda] Attached SCSI disk
>>>
>>> Kernel version is 4.16.16-300.fc28.ppc64. And I cannot reproduce the
>>> issue with other distros that have an older kernel, eg, ubuntu 18.04
>>> with kernel 4.15.0-23-generic.
>>>
>>> My first guess is that it might be a kernel-side regression introduced
>>> in 4.16... maybe bisect ?
>>
>> Interesting. I just tried kernel 4.17.5 from the mainline ppa on
>> Ubuntu 18.04 and now there is a delay. It's only 7.5 seconds but still
>> noticeable. There was previously no delay with the 4.15 kernel.
> 
> I did not observe any delay with stable 4.17.5 on ubuntu 18.04 (I built the
> kernel myself with CONFIG_SCSI_VIRTIO=y):
> 
> # qemu-system-x86_64 -drive
> file=/home/zhang/img/ubuntu1804.qcow2,format=qcow2,if=none,id=virt1 -device
> virtio-scsi-pci,id=virt1 -device scsi-hd,drive=virt1 -m 4096M -enable-kvm 
> -smp 6
> -net nic -net user,hostfwd=tcp::5022-:22 -kernel
> /home/zhang/test/linux-4.17.5/arch/x86_64/boot/bzImage -append "root=/dev/sda1
> init=/sbin/init text" -enable-kvm
> 
> 
> # uname -r
> 4.17.5
> 
> [0.190434] scsi host0: Virtio SCSI HBA
> [0.190924] scsi 0:0:0:0: Direct-Access     QEMU     QEMU HARDDISK    2.5+
> PQ: 0 ANSI: 5
> [0.206282] random: fast init done
> [0.216651] sd 0:0:0:0: Power-on or device reset occurred
> [0.216907] sd 0:0:0:0: Attached scsi generic sg0 type 0
> [0.217207] ata_piix 0000:00:01.1: version 2.13
> [0.217613] sd 0:0:0:0: [sda] 62914560 512-byte logical blocks: (32.2 GB/30.0 
> GiB)
> [0.217959] sd 0:0:0:0: [sda] Write Protect is off
> [0.218152] sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
> [0.218193] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, 
> doesn't
> support DPO or FUA
> 
> 
> In addition, I could not reproduce with 'v4.18-rc4' or 'v4.17-rc7'.
> 
> 
> Dongli Zhang
> 
>>
>> Definitely seems like it could be something introduced in kernel 4.16.
>>
>> Chris
>>
> 



Re: [Qemu-discuss] virtio-scsi really slow init with ArchLinux kernel

2018-07-11 Thread Dongli Zhang
Hi Chris,

On 07/12/2018 01:45 AM, Chris wrote:
> On Wed, Jul 11, 2018 at 12:43 PM, Greg Kurz  wrote:
>> I've been observing a similar delay on ppc64 with fedora28 guests:
>>
>> # dmesg | egrep 'scsi| sd '
>> [1.530946] scsi host0: Virtio SCSI HBA
>> [1.532452] scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK
>> 2.5+ PQ: 0 ANSI: 5
>> [   21.928378] sd 0:0:0:0: Power-on or device reset occurred
>> [   21.930012] sd 0:0:0:0: Attached scsi generic sg0 type 0
>> [   21.931554] sd 0:0:0:0: [sda] 83886080 512-byte logical blocks: (42.9 
>> GB/40.0 GiB)
>> [   21.931929] sd 0:0:0:0: [sda] Write Protect is off
>> [   21.933110] sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
>> [   21.934084] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, 
>> doesn't support DPO or FUA
>> [   21.943566] sd 0:0:0:0: [sda] Attached SCSI disk
>>
>> Kernel version is 4.16.16-300.fc28.ppc64. And I cannot reproduce the
>> issue with other distros that have an older kernel, eg, ubuntu 18.04
>> with kernel 4.15.0-23-generic.
>>
>> My first guess is that it might be a kernel-side regression introduced
>> in 4.16... maybe bisect ?
> 
> Interesting. I just tried kernel 4.17.5 from the mainline ppa on
> Ubuntu 18.04 and now there is a delay. It's only 7.5 seconds but still
> noticeable. There was previously no delay with the 4.15 kernel.

I did not observe any delay with stable 4.17.5 on ubuntu 18.04 (I built the
kernel myself with CONFIG_SCSI_VIRTIO=y):

# qemu-system-x86_64 -drive
file=/home/zhang/img/ubuntu1804.qcow2,format=qcow2,if=none,id=virt1 -device
virtio-scsi-pci,id=virt1 -device scsi-hd,drive=virt1 -m 4096M -enable-kvm -smp 6
-net nic -net user,hostfwd=tcp::5022-:22 -kernel
/home/zhang/test/linux-4.17.5/arch/x86_64/boot/bzImage -append "root=/dev/sda1
init=/sbin/init text" -enable-kvm


# uname -r
4.17.5

[0.190434] scsi host0: Virtio SCSI HBA
[0.190924] scsi 0:0:0:0: Direct-Access     QEMU     QEMU HARDDISK    2.5+
PQ: 0 ANSI: 5
[0.206282] random: fast init done
[0.216651] sd 0:0:0:0: Power-on or device reset occurred
[0.216907] sd 0:0:0:0: Attached scsi generic sg0 type 0
[0.217207] ata_piix 0000:00:01.1: version 2.13
[0.217613] sd 0:0:0:0: [sda] 62914560 512-byte logical blocks: (32.2 GB/30.0 
GiB)
[0.217959] sd 0:0:0:0: [sda] Write Protect is off
[0.218152] sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
[0.218193] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't
support DPO or FUA


In addition, I could not reproduce with 'v4.18-rc4' or 'v4.17-rc7'.
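
If it does turn out to be a regression introduced in 4.16, bisecting between
the last good and first bad releases would be the next step, e.g. (a sketch;
build the kernel and boot the guest at each step, then mark it good or bad):

host# git bisect start
host# git bisect bad v4.16
host# git bisect good v4.15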


Dongli Zhang

> 
> Definitely seems like it could be something introduced in kernel 4.16.
> 
> Chris
> 



Re: [Qemu-discuss] IRQ per CPU

2018-07-05 Thread Dongli Zhang



On 07/05/2018 12:12 PM, Probir Roy wrote:
>> Does 'per CPU basis' indicate irq per cpu, or irq per device queue?
> 
> IRQ per CPU core, meaning that IRQ will be raised at and served by
> that CPU. Does IRQ per queue mean the same thing?
> 

About 'IRQ per queue': the device may create multiple queues in the OS driver.
The number of queues is usually proportional to the number of CPU cores/threads
(although this is usually configurable by the OS driver).

Usually each per-queue irq/vector is bound to one CPU. As the number of
queues/irqs often matches the number of CPUs, this amounts to a 'per CPU
basis': each CPU serves the irq for its own queue.
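
You can see this binding inside a guest: each virtio queue gets its own vector
line in /proc/interrupts, and the affinity of each irq is visible under
/proc/irq/. A sketch (the irq number 35 is hypothetical):

guest# grep virtio /proc/interrupts
guest# cat /proc/irq/35/smp_affinity_list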

Dongli Zhang



Re: [Qemu-discuss] IRQ per CPU

2018-07-04 Thread Dongli Zhang



On 07/04/2018 10:32 PM, Probir Roy wrote:
> I am writing a virtual device that would generate IRQs on a per CPU basis. I

Does 'per CPU basis' indicate irq per cpu, or irq per device queue?

AFAIK, the device may create multiple queues in the driver (in the OS), and we
would have one irq (vector) per queue?

If you are talking about irq (vector) per queue, I would suggest reading the
following nvme code, which involves per-queue vectors:

https://github.com/qemu/qemu/blob/master/hw/block/nvme.c

Although I am not an expert on qemu, in my opinion the qemu nvme code is very
helpful for understanding per-queue vectors.
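
As a quick guest-side check, the number of MSI-X vectors a device exposes (for
nvme, one per I/O queue plus the admin queue) shows up in lspci; a sketch, with
a hypothetical PCI address:

guest# lspci -vv -s 00:04.0 | grep MSI-X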

> have written a PCI device from the template, which does not generate
> IRQs per CPU. How can I write such a device in QEMU?
> 
> The code of current device is here:
> https://gist.github.com/proywm/6ca98d3e8ca001965e2c8792fcf97911
> 
> Regards,
> Probir
> 

Dongli Zhang