Re: [PATCH 3/3] target/i386/kvm: get and put AMD pmu registers

2022-11-21 Thread Liang Yan

A little bit more information from the kernel perspective.

https://lkml.org/lkml/2022/10/31/476


I was thinking of the same idea, but I am not sure if it is expected
from a bare-metal perspective, since the four legacy MSRs are always
there. I am also not sure whether they are used by other applications.


~Liang


On 11/19/22 07:29, Dongli Zhang wrote:

The QEMU side calls kvm_get_msrs() to save the pmu registers from the KVM
side to QEMU, and calls kvm_put_msrs() to store the pmu registers back to
the KVM side.

However, only the Intel gp/fixed/global pmu registers are involved. There
is no implementation for AMD pmu registers. The
'has_architectural_pmu_version' and 'num_architectural_pmu_gp_counters' are
calculated at kvm_arch_init_vcpu() via cpuid(0xa). This does not work for
AMD. Before AMD PerfMonV2, the number of gp registers is decided based on
the CPU version.

This patch adds support for the AMD version=1 pmu, to get and put AMD
pmu registers. Otherwise, there is a bug:

1. The VM resets (e.g., via QEMU system_reset or VM kdump/kexec) while it
is running "perf top". The pmu registers are not disabled gracefully.

2. Although the x86_cpu_reset() resets many registers to zero, the
kvm_put_msrs() does not put AMD pmu registers to the KVM side. As a result,
some pmu events are still enabled at the KVM side.

3. The KVM pmc_speculative_in_use() always returns true so that the events
will not be reclaimed. The kvm_pmc->perf_event is still active.

4. After the reboot, the VM kernel reports below error:

[0.092011] Performance Events: Fam17h+ core perfctr, Broken BIOS detected, 
complain to your hardware vendor.
[0.092023] [Firmware Bug]: the BIOS has corrupted hw-PMU resources (MSR 
c0010200 is 530076)

5. In the worst case, the active kvm_pmc->perf_event is still able to
inject unknown NMIs randomly into the VM kernel.

[...] Uhhuh. NMI received for unknown reason 30 on CPU 0.

The patch is to fix the issue by resetting AMD pmu registers during the
reset.

Cc: Joe Jin 
Signed-off-by: Dongli Zhang 
---
  target/i386/cpu.h |  5 +++
  target/i386/kvm/kvm.c | 83 +--
  2 files changed, 86 insertions(+), 2 deletions(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index d4bc19577a..4cf0b98817 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -468,6 +468,11 @@ typedef enum X86Seg {
  #define MSR_CORE_PERF_GLOBAL_CTRL   0x38f
  #define MSR_CORE_PERF_GLOBAL_OVF_CTRL   0x390
  
+#define MSR_K7_EVNTSEL0 0xc0010000
+#define MSR_K7_PERFCTR0 0xc0010004
+#define MSR_F15H_PERF_CTL0  0xc0010200
+#define MSR_F15H_PERF_CTR0  0xc0010201
+
  #define MSR_MC0_CTL 0x400
  #define MSR_MC0_STATUS  0x401
  #define MSR_MC0_ADDR0x402
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 0b1226ff7f..023fcbce48 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -2005,6 +2005,32 @@ int kvm_arch_init_vcpu(CPUState *cs)
  }
  }
  
+    if (IS_AMD_CPU(env)) {
+        int64_t family;
+
+        family = (env->cpuid_version >> 8) & 0xf;
+        if (family == 0xf) {
+            family += (env->cpuid_version >> 20) & 0xff;
+        }
+
+        /*
+         * If KVM_CAP_PMU_CAPABILITY is not supported, there is no way to
+         * disable the AMD pmu virtualization.
+         *
+         * If KVM_CAP_PMU_CAPABILITY is supported, "!has_pmu_cap" indicates
+         * the KVM side has already disabled the pmu virtualization.
+         */
+        if (family >= 6 && (!has_pmu_cap || cpu->enable_pmu)) {
+            has_architectural_pmu_version = 1;
+
+            if (env->features[FEAT_8000_0001_ECX] & CPUID_EXT3_PERFCORE) {
+                num_architectural_pmu_gp_counters = 6;
+            } else {
+                num_architectural_pmu_gp_counters = 4;
+            }
+        }
+    }
+
    cpu_x86_cpuid(env, 0x80000000, 0, &limit, &unused, &unused, &unused);
  
    for (i = 0x80000000; i <= limit; i++) {

@@ -3326,7 +3352,7 @@ static int kvm_put_msrs(X86CPU *cpu, int level)
  kvm_msr_entry_add(cpu, MSR_KVM_POLL_CONTROL, 
env->poll_control_msr);
  }
  
-    if (has_architectural_pmu_version > 0) {
+    if (has_architectural_pmu_version > 0 && IS_INTEL_CPU(env)) {
  if (has_architectural_pmu_version > 1) {
  /* Stop the counter.  */
  kvm_msr_entry_add(cpu, MSR_CORE_PERF_FIXED_CTR_CTRL, 0);
@@ -3357,6 +3383,26 @@ static int kvm_put_msrs(X86CPU *cpu, int level)
env->msr_global_ctrl);
  }
  }
+
+    if (has_architectural_pmu_version > 0 && IS_AMD_CPU(env)) {
+        uint32_t sel_base = MSR_K7_EVNTSEL0;
+        uint32_t ctr_base = MSR_K7_PERFCTR0;
+        uint32_t step = 1;
+
+        if (num_architectural_pmu_gp_counters == 6) {
+            sel_base = MSR_F15H_PERF_CTL0;

Re: [PATCH 2/3] i386: kvm: disable KVM_CAP_PMU_CAPABILITY if "pmu" is disabled

2022-11-21 Thread Liang Yan



On 11/21/22 06:03, Greg Kurz wrote:

On Sat, 19 Nov 2022 04:29:00 -0800
Dongli Zhang  wrote:


The "perf stat" at the VM side still works even if we set "-cpu host,-pmu" in
the QEMU command line. That is, neither "-cpu host,-pmu" nor "-cpu EPYC"
can disable the pmu virtualization in an AMD environment.

We still see below at VM kernel side ...

[0.510611] Performance Events: Fam17h+ core perfctr, AMD PMU driver.

... although we expect something like below.

[0.596381] Performance Events: PMU not available due to virtualization, 
using software events only.
[0.600972] NMI watchdog: Perf NMI watchdog permanently disabled

This is because the AMD pmu (v1) does not rely on cpuid to decide if the
pmu virtualization is supported.

We disable KVM_CAP_PMU_CAPABILITY if the 'pmu' is disabled in the vcpu
properties.

Cc: Joe Jin 
Signed-off-by: Dongli Zhang 
---
  target/i386/kvm/kvm.c | 17 +
  1 file changed, 17 insertions(+)

diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 8fec0bc5b5..0b1226ff7f 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -137,6 +137,8 @@ static int has_triple_fault_event;
  
  static bool has_msr_mcg_ext_ctl;
  
+static int has_pmu_cap;
+
  static struct kvm_cpuid2 *cpuid_cache;
  static struct kvm_cpuid2 *hv_cpuid_cache;
  static struct kvm_msr_list *kvm_feature_msrs;
@@ -1725,6 +1727,19 @@ static void kvm_init_nested_state(CPUX86State *env)
  
  void kvm_arch_pre_create_vcpu(CPUState *cs)
  {
+    X86CPU *cpu = X86_CPU(cs);
+    int ret;
+
+    if (has_pmu_cap && !cpu->enable_pmu) {
+        ret = kvm_vm_enable_cap(kvm_state, KVM_CAP_PMU_CAPABILITY, 0,
+                                KVM_PMU_CAP_DISABLE);

It doesn't seem conceptually correct to configure VM level stuff out of
a vCPU property, which could theoretically be different for each vCPU,
even if this isn't the case with the current code base.

Maybe consider controlling PMU with a machine property and this
could be done in kvm_arch_init() like other VM level stuff ?



There is already a 'pmu' property for x86_cpu with the variable 'enable_pmu',
as we see in the above code. It is mainly used by Intel CPUs and has been
set to off by default since QEMU 1.5.


And this property is spread to AMD CPUs too.

I think you may need to set up a machine property to disable it for the
current machine model. Otherwise, it will break the live-migration scenario.




+        if (ret < 0) {
+            error_report("kvm: Failed to disable pmu cap: %s",
+                         strerror(-ret));
+        }
+
+        has_pmu_cap = 0;
+    }
  }
  
  int kvm_arch_init_vcpu(CPUState *cs)

@@ -2517,6 +2532,8 @@ int kvm_arch_init(MachineState *ms, KVMState *s)
  }
  }
  
+    has_pmu_cap = kvm_check_extension(s, KVM_CAP_PMU_CAPABILITY);
+
  ret = kvm_get_supported_msrs(s);
  if (ret < 0) {
  return ret;






Re: [PATCH] target/i386/cpu: disable PERFCORE for AMD when cpu.pmu is off

2022-11-01 Thread Liang Yan

Hey Vitaly,

On 10/31/22 6:07 AM, Vitaly Kuznetsov wrote:

Liang Yan  writes:


With cpu.pmu=off, perfctr_core can still be seen in an AMD guest cpuid.
By further digging, I found cpu.perfctr_core did the trick. However,
considering that the 'enable_pmu' in KVM works on both Intel and AMD,
we may add AMD PMU control under 'enable_pmu' in QEMU too.

This change will override the property 'perfctr_core' and change the AMD PMU
to off by default.

Signed-off-by: Liang Yan 
---
  target/i386/cpu.c | 4 
  1 file changed, 4 insertions(+)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 22b681ca37..edf5413c90 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -5706,6 +5706,10 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, 
uint32_t count,
  *ecx |= 1 << 1;/* CmpLegacy bit */
  }
  }
+
+        if (!cpu->enable_pmu) {
+            *ecx &= ~CPUID_EXT3_PERFCORE;
+        }
  break;
  case 0x8002:
  case 0x8003:

I may be missing something, but my first impression is that this will
make the CPUID_EXT3_PERFCORE bit disappear when a !enable_pmu VM is migrated
from an old QEMU (pre-patch) to a new one. If so, then additional
precautions should be taken against that (e.g. tying the change to
CPU/machine model versions).

Thanks for the reply, it is quite a good point. I struggled with it
a little bit earlier because cpu.pmu has such an operation for Intel CPUs.
After further talks with AMD people, I noticed that the AMD PMU is more than
perfctr_core; it has some legacy counters in use. I will dig a little
further and send a v2 with extra cpu counters and live-migration
compatibility.



Regards,

Liang







[PATCH] target/i386/cpu: disable PERFCORE for AMD when cpu.pmu is off

2022-10-28 Thread Liang Yan
With cpu.pmu=off, perfctr_core can still be seen in an AMD guest cpuid.
By further digging, I found cpu.perfctr_core did the trick. However,
considering that the 'enable_pmu' in KVM works on both Intel and AMD,
we may add AMD PMU control under 'enable_pmu' in QEMU too.

This change will override the property 'perfctr_core' and change the AMD PMU
to off by default.

Signed-off-by: Liang Yan 
---
 target/i386/cpu.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 22b681ca37..edf5413c90 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -5706,6 +5706,10 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, 
uint32_t count,
 *ecx |= 1 << 1;/* CmpLegacy bit */
 }
 }
+
+        if (!cpu->enable_pmu) {
+            *ecx &= ~CPUID_EXT3_PERFCORE;
+        }
 break;
 case 0x8002:
 case 0x8003:
-- 
2.34.1




Re: Deadlock between bdrv_drain_all_begin and prepare_mmio_access

2022-08-08 Thread Liang Yan


On 8/2/22 08:35, Kevin Wolf wrote:

Am 24.07.2022 um 23:41 hat Liang Yan geschrieben:

Hello All,

I am facing a lock situation between main-loop thread 1 and vcpu thread 4
when doing a qmp snapshot. QEMU is running on 6.0.x; I checked the upstream
code and did not see any big change since then. The guest is a Windows 10 VM.
Unfortunately, I could not get into the Windows VM or reproduce the issue by
myself. No iothread is used here, native aio only.

 From the code,

-> AIO_WAIT_WHILE(NULL, bdrv_drain_all_poll());

--> aio_poll(qemu_get_aio_context(), true);

The main-loop mutex is locked when the snapshot starts in thread 1; the vcpu
released the lock in address_space_rw and tries to take it again in
prepare_mmio_access.

It seems the main-loop thread is stuck at aio_poll with blocking, but I
cannot figure out what the addr=4275044592 from the mmio read belongs to.

I do not quite understand what really happens here: either the block jobs
never drained out, or maybe a block I/O read from the vcpu caused a
deadlock? I hope domain experts here can help figure out the root cause.
Thanks in advance, and let me know if you need any further information.

This does not look like a deadlock to me: Thread 4 is indeed waiting for
thread 1 to release the lock, but I don't think thread 1 is waiting in
any way for thread 4.

In thread 1, bdrv_drain_all_begin() waits for all in-flight I/O requests
to complete. So it looks a bit like some I/O request got stuck. If you
want to debug this a bit further, try to check what it is that makes
bdrv_drain_poll() still return true.


Thanks for the reply.

I agree it is not a pure deadlock; thread 1 seems to have more
responsibility here.


Do you know if there is a way to check the in-flight I/O requests here? Is
it possible that the I/O request is the mmio_read from thread 4?


I can only see addr=4275044592, but I cannot identify which
address space it belongs to.
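On the in-flight question: from a core dump you cannot call bdrv_drain_poll(), but you can walk the BlockDriverState list by hand. A sketch, under the assumption that the debuginfo resolves the block.c statics (`all_bdrv_states` and its `bs_list` link; the exact field names may differ across QEMU versions):

```gdb
(gdb) set $bs = all_bdrv_states.tqh_first
(gdb) while $bs != 0
 >printf "%s: in_flight=%u\n", $bs->node_name, $bs->in_flight
 >set $bs = $bs->bs_list.tqe_next
 >end
```

Any node that still reports a non-zero in_flight would be what keeps bdrv_drain_poll() returning true.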



I am also pretty curious why bdrv_drain_poll() always returns true. Any
chance that it is blocked in aio_poll(qemu_get_aio_context(), true)?


while ((cond)) { \
    if (ctx_) { \
        aio_context_release(ctx_); \
    } \
    aio_poll(qemu_get_aio_context(), true); \
    if (ctx_) { \
        aio_context_acquire(ctx_); \
    } \
    waited_ = true; \
}


As mentioned, I only have a dump file, could not reproduce it in my 
local environment.


Though, I have been working on a log patch to print all fd/aio-handlers
that the main loop dispatches.




Please also add the QEMU command line you're using, especially the
configuration of the block device backends (for example, does this use
Linux AIO, the thread pool or io_uring?).


It uses native Linux AIO, and no extra iothread is assigned here.

-blockdev 
{"driver":"file","filename":".raw","aio":"native","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}


-device 
virtio-blk-pci,bus=pci.0,addr=0x6,drive=libvirt-2-format,id=virtio-disk0,bootindex=1,write-cache=on



Let me know if you need more information, and thanks for looking into this
issue.


~Liang


Kevin



Deadlock between bdrv_drain_all_begin and prepare_mmio_access

2022-07-24 Thread Liang Yan

Hello All,

I am facing a lock situation between main-loop thread 1 and vcpu thread
4 when doing a qmp snapshot. QEMU is running on 6.0.x; I checked the
upstream code and did not see any big change since then. The guest is a
Windows 10 VM. Unfortunately, I could not get into the Windows VM or
reproduce the issue by myself. No iothread is used here, native aio only.


From the code,

-> AIO_WAIT_WHILE(NULL, bdrv_drain_all_poll());

--> aio_poll(qemu_get_aio_context(), true);

The main-loop mutex is locked when the snapshot starts in thread 1; the
vcpu released the lock in address_space_rw and tries to take it again in
prepare_mmio_access.


It seems the main-loop thread is stuck at aio_poll with blocking, but I
cannot figure out what the addr=4275044592 from the mmio read belongs to.


I do not quite understand what really happens here: either the block jobs
never drained out, or maybe a block I/O read from the vcpu caused a
deadlock? I hope domain experts here can help figure out the root cause.
Thanks in advance, and let me know if you need any further information.



Regards,

Liang


(gdb) thread 1
[Switching to thread 1 (Thread 0x7f9ebcf96040 (LWP 358660))]
#0  0x7f9ec6eb4ac6 in __ppoll (fds=0x562dda80bc90, nfds=2, 
timeout=<optimized out>, timeout@entry=0x0, sigmask=sigmask@entry=0x0) 
at ../sysdeps/unix/sysv/linux/ppoll.c:44

44    ../sysdeps/unix/sysv/linux/ppoll.c: No such file or directory.
(gdb) bt
#0  0x7f9ec6eb4ac6 in __ppoll (fds=0x562dda80bc90, nfds=2, 
timeout=<optimized out>, timeout@entry=0x0, sigmask=sigmask@entry=0x0) 
at ../sysdeps/unix/sysv/linux/ppoll.c:44
#1  0x562dd7f5a409 in ppoll (__ss=0x0, __timeout=0x0, 
__nfds=<optimized out>, __fds=<optimized out>) at 
/usr/include/x86_64-linux-gnu/bits/poll2.h:77
#2  qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, 
timeout=timeout@entry=-1) at ../../util/qemu-timer.c:336
#3  0x562dd7f93de9 in fdmon_poll_wait (ctx=0x562dda193860, 
ready_list=0x7ffedaeb3f48, timeout=-1) at ../../util/fdmon-poll.c:80
#4  0x562dd7f6d05b in aio_poll (ctx=<optimized out>, 
blocking=blocking@entry=true) at ../../util/aio-posix.c:607

#5  0x562dd7e67e54 in bdrv_drain_all_begin () at ../../block/io.c:642
#6  bdrv_drain_all_begin () at ../../block/io.c:607
#7  0x562dd7e68a6d in bdrv_drain_all () at ../../block/io.c:693
#8  0x562dd7e54963 in qmp_transaction 
(dev_list=dev_list@entry=0x7ffedaeb4070, 
has_props=has_props@entry=false, props=0x562dda803910, props@entry=0x0, 
errp=errp@entry=0x7ffedaeb4128)

    at ../../blockdev.c:2348
#9  0x562dd7e54d5b in blockdev_do_action (errp=0x7ffedaeb4128, 
action=0x7ffedaeb4060) at ../../blockdev.c:1055
#10 qmp_blockdev_snapshot_sync (has_device=<optimized out>, 
device=<optimized out>, has_node_name=<optimized out>, 
node_name=<optimized out>, snapshot_file=<optimized out>,
    has_snapshot_node_name=<optimized out>, 
snapshot_node_name=0x562dda83c970 "hvd-snapshot", has_format=false, 
format=0x0, has_mode=false, mode=NEW_IMAGE_MODE_EXISTING, 
errp=0x7ffedaeb4128)

    at ../../blockdev.c:1083
#11 0x562dd7f0e5aa in qmp_marshal_blockdev_snapshot_sync 
(args=<optimized out>, ret=<optimized out>, errp=0x7f9ebc61ae90) at 
qapi/qapi-commands-block-core.c:221
#12 0x562dd7f5c5db in do_qmp_dispatch_bh (opaque=0x7f9ebc61aea0) at 
../../qapi/qmp-dispatch.c:131
#13 0x562dd7f5dc27 in aio_bh_call (bh=0x7f9e3000b760) at 
../../util/async.c:164

#14 aio_bh_poll (ctx=ctx@entry=0x562dda193860) at ../../util/async.c:164
#15 0x562dd7f6ca82 in aio_dispatch (ctx=0x562dda193860) at 
../../util/aio-posix.c:381
#16 0x562dd7f5da42 in aio_ctx_dispatch (source=<optimized out>, 
callback=<optimized out>, user_data=<optimized out>) at 
../../util/async.c:306
#17 0x7f9ec7ade17d in g_main_context_dispatch () from 
/lib/x86_64-linux-gnu/libglib-2.0.so.0

#18 0x562dd7f4f320 in glib_pollfds_poll () at ../../util/main-loop.c:231
#19 os_host_main_loop_wait (timeout=<optimized out>) at 
../../util/main-loop.c:254
#20 main_loop_wait (nonblocking=nonblocking@entry=0) at 
../../util/main-loop.c:530

#21 0x562dd7d3cfd9 in qemu_main_loop () at ../../softmmu/runstate.c:725
#22 0x562dd7b7aa82 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../../softmmu/main.c:50



(gdb) thread 4
[Switching to thread 4 (Thread 0x7f9e377fd700 (LWP 358668))]
#0  __lll_lock_wait (futex=futex@entry=0x562dd8337a60 
<qemu_global_mutex>, private=0) at lowlevellock.c:52

52    lowlevellock.c: No such file or directory.
(gdb) bt
#0  __lll_lock_wait (futex=futex@entry=0x562dd8337a60 
<qemu_global_mutex>, private=0) at lowlevellock.c:52
#1  0x7f9ec6f9f0a3 in __GI___pthread_mutex_lock 
(mutex=mutex@entry=0x562dd8337a60 <qemu_global_mutex>) at 
../nptl/pthread_mutex_lock.c:80
#2  0x562dd7f667c8 in qemu_mutex_lock_impl (mutex=0x562dd8337a60 
<qemu_global_mutex>, file=0x562dd804c76c "../../softmmu/physmem.c", 
line=2742) at ../../util/qemu-thread-posix.c:79
#3  0x562dd7dca8ce in qemu_mutex_lock_iothread_impl 
(file=file@entry=0x562dd804c76c "../../softmmu/physmem.c", 
line=line@entry=2742) at ../../softmmu/cpus.c:491
#4  0x562dd7da2e91 in prepare_mmio_access (mr=<optimized out>) at 
../../softmmu/physmem.c:2742
#5  0x562dd7da8bbb in flatview_read_continue 
(fv=fv@entry=0x7f9e2827a4c0, addr=addr@entry=4275044592, attrs=..., 
ptr=ptr@entry=0x7f9ebcef7028, len=len@entry=4, addr1=<optimized out>, 
l=<optimized out>,

    

Re: [PATCH v16 00/99] arm tcg/kvm refactor and split with kvm only support

2021-09-20 Thread Liang Yan

Hi Alex,

I am wondering about the current status of this patch series.

I have been working on it recently. I am wondering if you have a WIP git repo;
I can send my patches there if they are not duplicated.

Otherwise, I can resend the series here with a new rebase and some fixes based
on the comments.

Let me know what you think, thanks.


Regards,

Liang


On 6/4/21 11:51, Alex Bennée wrote:

Hi,

I have picked up the baton from Claudio to try and get the ARM
re-factoring across the line. Most of the patches from Claudio remain
unchanged and have just had minor fixups from re-basing against the
moving target. I've done my best to make sure any fixes that have been
made in the meantime weren't lost.

I've included Phillipe's qtest_has_accel v7 patches (I had problems
with v8) to aid in my aarch64 testing. I'm expecting them to be
up-streamed by Phillipe in due course. I've also nabbed one of
Phillipe's Kconfig tweaks to allow for target specific expression of
some config variables.

The main thing that enables the --disable-tcg build is the addition of
--with-devices-FOO configure option which is a mechanism to override
the existing default device configurations. The two that I've been
testing are a 64 bit only build on x86:

   '../../configure' '--without-default-features' \
  '--target-list=arm-softmmu,aarch64-softmmu' \
  '--with-devices-aarch64=../../configs/aarch64-softmmu/64bit-only.mak'

which results in the aarch64-softmmu build only supporting sbsa-ref,
virt and xlnx-versal-virt.

The second is a KVM only cross build:

   '../../configure' '--disable-docs' \
 '--target-list=aarch64-softmmu' \
 '--enable-kvm' '--disable-tcg' \
 '--cross-prefix=aarch64-linux-gnu-' \
 '--with-devices-aarch64=../../configs/aarch64-softmmu/virt-only.mak'

Finally I've made a few minor Kconfig and testing tweaks before adding
some gitlab coverage. As a result I was able to drop the Revert: idau
patch because I can properly build an image without stray devices in
the qtree.

The following need review:

  - gitlab: defend the new stripped down arm64 configs
  - tests/qtest: make xlnx-can-test conditional on being configured
  - tests/qtest: split the cdrom-test into arm/aarch64
  - hw/arm: add dependency on OR_IRQ for XLNX_VERSAL
  - target/arm: move CONFIG_V7M out of default-devices

Alex Bennée (6):
   target/arm: move CONFIG_V7M out of default-devices
   hw/arm: add dependency on OR_IRQ for XLNX_VERSAL
   tests/qtest: split the cdrom-test into arm/aarch64
   tests/qtest: make xlnx-can-test conditional on being configured
   configure: allow the overriding of default-config in the build
   gitlab: defend the new stripped down arm64 configs

Claudio Fontana (80):
   meson: add target_user_arch
   accel: add cpu_reset
   target/arm: move translate modules to tcg/
   target/arm: move helpers to tcg/
   arm: tcg: only build under CONFIG_TCG
   target/arm: tcg: add sysemu and user subdirs
   target/arm: tcg: split mte_helper user-only and sysemu code
   target/arm: tcg: move sysemu-only parts of debug_helper
   target/arm: tcg: split tlb_helper user-only and sysemu-only parts
   target/arm: tcg: split m_helper user-only and sysemu-only parts
   target/arm: only build psci for TCG
   target/arm: split off cpu-sysemu.c
   target/arm: tcg: fix comment style before move to cpu-mmu
   target/arm: move physical address translation to cpu-mmu
   target/arm: fix style in preparation of new cpregs module
   target/arm: split cpregs from tcg/helper.c
   target/arm: move cpu definitions to common cpu module
   target/arm: only perform TCG cpu and machine inits if TCG enabled
   target/arm: tcg: add stubs for some helpers for non-tcg builds
   target/arm: move cpsr_read, cpsr_write to cpu_common
   target/arm: add temporary stub for arm_rebuild_hflags
   target/arm: move arm_hcr_el2_eff from tcg/ to common_cpu
   target/arm: split vfp state setting from tcg helpers
   target/arm: move arm_mmu_idx* to cpu-mmu
   target/arm: move sve_zcr_len_for_el to common_cpu
   target/arm: move arm_sctlr away from tcg helpers
   target/arm: move arm_cpu_list to common_cpu
   target/arm: move aarch64_sync_32_to_64 (and vv) to cpu code
   target/arm: new cpu32 ARM 32 bit CPU Class
   target/arm: split 32bit and 64bit arm dump state
   target/arm: move a15 cpu model away from the TCG-only models
   target/arm: fixup sve_exception_el code style before move
   target/arm: move sve_exception_el out of TCG helpers
   target/arm: fix comments style of fp_exception_el before moving it
   target/arm: move fp_exception_el out of TCG helpers
   target/arm: remove now useless ifndef from fp_exception_el
   target/arm: make further preparation for the exception code to move
   target/arm: fix style of arm_cpu_do_interrupt functions before move
   target/arm: move exception code out of tcg/helper.c
   target/arm: rename handle_semihosting to tcg_handle_semihosting
   target/arm: replace CONFIG_TCG with tcg_enabled
   target/arm: move TCGCPUOps to 

Re: [PATCH v1 1/1] vfio: Make migration support non experimental by default.

2021-07-21 Thread Liang Yan


On 7/14/21 6:19 AM, Kirti Wankhede wrote:
>
>
> On 7/10/2021 1:14 PM, Claudio Fontana wrote:
>> On 3/8/21 5:09 PM, Tarun Gupta wrote:
>>> VFIO migration support in QEMU is experimental as of now, which was
>>> done to
>>> provide soak time and resolve concerns regarding bit-stream.
>>> But, with the patches discussed in
>>> https://www.mail-archive.com/qemu-devel@nongnu.org/msg784931.html
>>> , we have
>>> corrected ordering of saving PCI config space and bit-stream.
>>>
>>> So, this patch proposes to make vfio migration support in QEMU to be
>>> enabled
>>> by default. Tested by successfully migrating mdev device.
>>>
>>> Signed-off-by: Tarun Gupta 
>>> Signed-off-by: Kirti Wankhede 
>>> ---
>>>   hw/vfio/pci.c | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
>>> index f74be78209..15e26f460b 100644
>>> --- a/hw/vfio/pci.c
>>> +++ b/hw/vfio/pci.c
>>> @@ -3199,7 +3199,7 @@ static Property vfio_pci_dev_properties[] = {
>>>   DEFINE_PROP_BIT("x-igd-opregion", VFIOPCIDevice, features,
>>>   VFIO_FEATURE_ENABLE_IGD_OPREGION_BIT, false),
>>>   DEFINE_PROP_BOOL("x-enable-migration", VFIOPCIDevice,
>>> - vbasedev.enable_migration, false),
>>> + vbasedev.enable_migration, true),
>>>   DEFINE_PROP_BOOL("x-no-mmap", VFIOPCIDevice, vbasedev.no_mmap,
>>> false),
>>>   DEFINE_PROP_BOOL("x-balloon-allowed", VFIOPCIDevice,
>>>    vbasedev.ram_block_discard_allowed, false),
>>>
>>
>> Hello,
>>
>> has plain snapshot been tested?
>
> Yes.
>
>> If I issue the HMP command "savevm", and then "loadvm", will things
>> work fine?
>
> Yes
>

Hello Kirti,

I enabled x-enable-migration and did some hacking on failover_pair_id,
and finally made "virsh save/restore" and "savevm/loadvm" work. However,
it seems the vGPU did not get involved in the real migration process;
the qemu trace file confirms it, as there is no vfio section for
savevm_section_start at all.

I am using kernel 5.8 and the latest qemu, vGPU 12.2 with one V100. I am
wondering if there is a version compatibility requirement or extra setup
needed. Could you share your test setup here? Thanks in advance.

Regards,

Liang



> Thanks,
> Kirti
>



Re: [RFC][PATCH v2 1/3] hw/misc: Add implementation of ivshmem revision 2 device

2020-04-28 Thread Liang Yan
A quick check with checkpatch.pl shows a few issues, pretty straightforward to fix.

ERROR: return is not a function, parentheses are not required
#211: FILE: hw/misc/ivshmem2.c:138:
+return (ivs->features & (1 << feature));

ERROR: memory barrier without comment
#255: FILE: hw/misc/ivshmem2.c:182:
+smp_mb();

ERROR: braces {} are necessary for all arms of this statement
#626: FILE: hw/misc/ivshmem2.c:553:
+if (msg->vector == 0)
[...]

Best,
Liang


On 1/7/20 9:36 AM, Jan Kiszka wrote:
> From: Jan Kiszka 
> 
> This adds a reimplementation of ivshmem in its new revision 2 as
> separate device. The goal of this is not to enable sharing with v1,
> rather to allow explore the properties and potential limitation of the
> new version prior to discussing its integration with the existing code.
> 
> v2 always requires a server to interconnect two or more QEMU
> instances because it provides signaling between peers unconditionally.
> Therefore, only the interconnecting chardev, master mode, and the usage
> of ioeventfd can be configured at device level. All other parameters are
> defined by the server instance.
> 
> A new server protocol is introduced along this. Its primary difference
> is the introduction of a single welcome message that contains all peer
> parameters, rather than a series of single-word messages pushing them
> piece by piece.
> 
> A complicating difference in interrupt handling, compared to v1, is the
> auto-disable mode of v2: When this is active, interrupt delivery is
> disabled by the device after each interrupt event. This prevents the
> usage of irqfd on the receiving side, but it lowers the handling cost
> for guests that implemented interrupt throttling this way (specifically
> when exposing the device via UIO).
> 
> No changes have been made to the ivshmem device regarding migration:
> Only the master can live-migrate, slave peers have to hot-unplug the
> device first.
> 
> The details of the device model will be specified in a succeeding
> commit. Drivers for this device can currently be found under
> 
> http://git.kiszka.org/?p=linux.git;a=shortlog;h=refs/heads/queues/ivshmem2
> 
> To instantiate a ivshmem v2 device, just add
> 
>  ... -chardev socket,path=/tmp/ivshmem_socket,id=ivshmem \
>  -device ivshmem,chardev=ivshmem
> 
> provided the server opened its socket under the default path.
> 
> Signed-off-by: Jan Kiszka 
> ---
>  hw/misc/Makefile.objs  |2 +-
>  hw/misc/ivshmem2.c | 1085 
> 
>  include/hw/misc/ivshmem2.h |   48 ++
>  include/hw/pci/pci_ids.h   |2 +
>  4 files changed, 1136 insertions(+), 1 deletion(-)
>  create mode 100644 hw/misc/ivshmem2.c
>  create mode 100644 include/hw/misc/ivshmem2.h
> 
> diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
> index ba898a5781..90a4a6608c 100644
> --- a/hw/misc/Makefile.objs
> +++ b/hw/misc/Makefile.objs
> @@ -26,7 +26,7 @@ common-obj-$(CONFIG_PUV3) += puv3_pm.o
>  
>  common-obj-$(CONFIG_MACIO) += macio/
>  
> -common-obj-$(CONFIG_IVSHMEM_DEVICE) += ivshmem.o
> +common-obj-$(CONFIG_IVSHMEM_DEVICE) += ivshmem.o ivshmem2.o
>  
>  common-obj-$(CONFIG_REALVIEW) += arm_sysctl.o
>  common-obj-$(CONFIG_NSERIES) += cbus.o
> diff --git a/hw/misc/ivshmem2.c b/hw/misc/ivshmem2.c
> new file mode 100644
> index 00..d5f88ed0e9
> --- /dev/null
> +++ b/hw/misc/ivshmem2.c
> @@ -0,0 +1,1085 @@
> +/*
> + * Inter-VM Shared Memory PCI device, version 2.
> + *
> + * Copyright (c) Siemens AG, 2019
> + *
> + * Authors:
> + *  Jan Kiszka 
> + *
> + * Based on ivshmem.c by Cam Macdonell 
> + *
> + * This code is licensed under the GNU GPL v2.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "qemu/units.h"
> +#include "qapi/error.h"
> +#include "qemu/cutils.h"
> +#include "hw/hw.h"
> +#include "hw/pci/pci.h"
> +#include "hw/pci/msi.h"
> +#include "hw/pci/msix.h"
> +#include "hw/qdev-properties.h"
> +#include "sysemu/kvm.h"
> +#include "migration/blocker.h"
> +#include "migration/vmstate.h"
> +#include "qemu/error-report.h"
> +#include "qemu/event_notifier.h"
> +#include "qemu/module.h"
> +#include "qom/object_interfaces.h"
> +#include "chardev/char-fe.h"
> +#include "sysemu/qtest.h"
> +#include "qapi/visitor.h"
> +
> +#include "hw/misc/ivshmem2.h"
> +
> +#define PCI_VENDOR_ID_IVSHMEM   PCI_VENDOR_ID_SIEMENS
> +#define PCI_DEVICE_ID_IVSHMEM   0x4106
> +
> +#define IVSHMEM_MAX_PEERS   UINT16_MAX
> +#define IVSHMEM_IOEVENTFD   0
> +#define IVSHMEM_MSI 1
> +
> +#define IVSHMEM_REG_BAR_SIZE0x1000
> +
> +#define IVSHMEM_REG_ID  0x00
> +#define IVSHMEM_REG_MAX_PEERS   0x04
> +#define IVSHMEM_REG_INT_CTRL0x08
> +#define IVSHMEM_REG_DOORBELL0x0c
> +#define IVSHMEM_REG_STATE   0x10
> +
> +#define IVSHMEM_INT_ENABLE  0x1
> +
> +#define IVSHMEM_ONESHOT_MODE0x1
> +
> +#define IVSHMEM_DEBUG 0
> +#define IVSHMEM_DPRINTF(fmt, ...)   \
> +do {\
> +if (IVSHMEM_DEBUG) {

Re: [RFC][PATCH v2 3/3] contrib: Add server for ivshmem revision 2

2020-04-28 Thread Liang Yan
A quick check with checkpatch.pl shows a few issues, pretty straightforward to fix.

ERROR: memory barrier without comment
#205: FILE: contrib/ivshmem2-server/ivshmem2-server.c:106:
+smp_mb();

ERROR: spaces required around that '*' (ctx:VxV)
#753: FILE: contrib/ivshmem2-server/main.c:22:
+#define IVSHMEM_SERVER_DEFAULT_SHM_SIZE   (4*1024*1024)
 ^

ERROR: spaces required around that '*' (ctx:VxV)
#753: FILE: contrib/ivshmem2-server/main.c:22:
+#define IVSHMEM_SERVER_DEFAULT_SHM_SIZE   (4*1024*1024)


Best,
Liang



On 1/7/20 9:36 AM, Jan Kiszka wrote:
> From: Jan Kiszka 
> 
> This implements the server process for ivshmem v2 device models of QEMU.
> Again, no effort has been spent yet on sharing code with the v1 server.
> Parts have been copied, others were rewritten.
> 
> In addition to parameters of v1, this server now also specifies
> 
>  - the maximum number of peers to be connected (required to know in
>advance because of v2's state table)
>  - the size of the output sections (can be 0)
>  - the protocol ID to be published to all peers
> 
> When a virtio protocol ID is chosen, only 2 peers can be connected.
> Furthermore, the server will signal the backend variant of the ID to the
> master instance and the frontend ID to the slave peer.
> 
> To start, e.g., a server that allows virtio console over ivshmem, call
> 
> ivshmem2-server -F -l 64K -n 2 -V 3 -P 0x8003
> 
> TODO: specify the new server protocol.
> 
> Signed-off-by: Jan Kiszka 
> ---
>  Makefile  |   3 +
>  Makefile.objs |   1 +
>  configure |   1 +
>  contrib/ivshmem2-server/Makefile.objs |   1 +
>  contrib/ivshmem2-server/ivshmem2-server.c | 462 
> ++
>  contrib/ivshmem2-server/ivshmem2-server.h | 158 ++
>  contrib/ivshmem2-server/main.c| 313 
>  7 files changed, 939 insertions(+)
>  create mode 100644 contrib/ivshmem2-server/Makefile.objs
>  create mode 100644 contrib/ivshmem2-server/ivshmem2-server.c
>  create mode 100644 contrib/ivshmem2-server/ivshmem2-server.h
>  create mode 100644 contrib/ivshmem2-server/main.c
> 
> diff --git a/Makefile b/Makefile
> index 6b5ad1121b..33bb0eefdb 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -427,6 +427,7 @@ dummy := $(call unnest-vars,, \
>  elf2dmp-obj-y \
>  ivshmem-client-obj-y \
>  ivshmem-server-obj-y \
> +ivshmem2-server-obj-y \
>  rdmacm-mux-obj-y \
>  libvhost-user-obj-y \
>  vhost-user-scsi-obj-y \
> @@ -655,6 +656,8 @@ ivshmem-client$(EXESUF): $(ivshmem-client-obj-y) 
> $(COMMON_LDADDS)
>   $(call LINK, $^)
>  ivshmem-server$(EXESUF): $(ivshmem-server-obj-y) $(COMMON_LDADDS)
>   $(call LINK, $^)
> +ivshmem2-server$(EXESUF): $(ivshmem2-server-obj-y) $(COMMON_LDADDS)
> + $(call LINK, $^)
>  endif
>  vhost-user-scsi$(EXESUF): $(vhost-user-scsi-obj-y) libvhost-user.a
>   $(call LINK, $^)
> diff --git a/Makefile.objs b/Makefile.objs
> index 02bf5ce11d..ce243975ef 100644
> --- a/Makefile.objs
> +++ b/Makefile.objs
> @@ -115,6 +115,7 @@ qga-vss-dll-obj-y = qga/
>  elf2dmp-obj-y = contrib/elf2dmp/
>  ivshmem-client-obj-$(CONFIG_IVSHMEM) = contrib/ivshmem-client/
>  ivshmem-server-obj-$(CONFIG_IVSHMEM) = contrib/ivshmem-server/
> +ivshmem2-server-obj-$(CONFIG_IVSHMEM) = contrib/ivshmem2-server/
>  libvhost-user-obj-y = contrib/libvhost-user/
>  vhost-user-scsi.o-cflags := $(LIBISCSI_CFLAGS)
>  vhost-user-scsi.o-libs := $(LIBISCSI_LIBS)
> diff --git a/configure b/configure
> index 747d3b4120..1cb1427f1b 100755
> --- a/configure
> +++ b/configure
> @@ -6165,6 +6165,7 @@ if test "$want_tools" = "yes" ; then
>fi
>if [ "$ivshmem" = "yes" ]; then
>  tools="ivshmem-client\$(EXESUF) ivshmem-server\$(EXESUF) $tools"
> +tools="ivshmem2-server\$(EXESUF) $tools"
>fi
>if [ "$curl" = "yes" ]; then
>tools="elf2dmp\$(EXESUF) $tools"
> diff --git a/contrib/ivshmem2-server/Makefile.objs 
> b/contrib/ivshmem2-server/Makefile.objs
> new file mode 100644
> index 00..d233e18ec8
> --- /dev/null
> +++ b/contrib/ivshmem2-server/Makefile.objs
> @@ -0,0 +1 @@
> +ivshmem2-server-obj-y = ivshmem2-server.o main.o
> diff --git a/contrib/ivshmem2-server/ivshmem2-server.c 
> b/contrib/ivshmem2-server/ivshmem2-server.c
> new file mode 100644
> index 00..b341f1fcd0
> --- /dev/null
> +++ b/contrib/ivshmem2-server/ivshmem2-server.c
> @@ -0,0 +1,462 @@
> +/*
> + * Copyright 6WIND S.A., 2014
> + * Copyright (c) Siemens AG, 2019
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or
> + * (at your option) any later version.  See the COPYING file in the
> + * top-level directory.
> + */
> +#include "qemu/osdep.h"
> +#include "qemu/host-utils.h"
> +#include "qemu/sockets.h"
> +#include "qemu/atomic.h"
> +
> +#include 
> +#include 
> +
> 

Re: [RFC][PATCH v2 0/3] IVSHMEM version 2 device for QEMU

2020-04-28 Thread Liang Yan
Hi, All,

I did a test of these patches; all looked fine.

Test environment:
Host: opensuse tumbleweed + latest upstream qemu  + these three patches
Guest: opensuse tumbleweed root fs + custom kernel(5.5) + related
uio-ivshmem driver + ivshmem-console/ivshmem-block tools


1. lspci show

00:04.0 Unassigned class [ff80]: Siemens AG Device 4106 (prog-if 02)
Subsystem: Red Hat, Inc. Device 1100
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
Stepping- SERR+ FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR- 
Capabilities: [40] MSI-X: Enable+ Count=2 Masked-
Vector table: BAR=1 offset=
PBA: BAR=1 offset=0800
Kernel driver in use: virtio-ivshmem


2. virtio-ivshmem-console test
2.1 ivshmem2-server(host)

airey:~/ivshmem/qemu/:[0]# ./ivshmem2-server -F -l 64K -n 2 -V 3 -P 0x8003
*** Example code, do not use in production ***

2.2 guest vm backend(test-01)
localhost:~ # echo "110a 4106 1af4 1100 ffc003 ff" >
/sys/bus/pci/drivers/uio_ivshmem/new_id
[  185.831277] uio_ivshmem :00:04.0: state_table at
0xfd80, size 0x1000
[  185.835129] uio_ivshmem :00:04.0: rw_section at
0xfd801000, size 0x7000

localhost:~ # virtio/virtio-ivshmem-console /dev/uio0
Waiting for peer to be ready...

2.3 guest vm frontend(test-02)
need to boot or reboot after backend is done

2.4 backend will serial output of frontend

localhost:~ # virtio/virtio-ivshmem-console /dev/uio0
Waiting for peer to be ready...

localhost:~/virtio # ./virtio-ivshmem-console /dev/uio0
Waiting for peer to be ready...
Starting virtio device
device_status: 0x0
device_status: 0x1
device_status: 0x3
device_features_sel: 1
device_features_sel: 0
driver_features_sel: 1
driver_features[1]: 0x13
driver_features_sel: 0
driver_features[0]: 0x1
device_status: 0xb
queue_sel: 0
queue size: 8
queue driver vector: 1
queue desc: 0x200
queue driver: 0x280
queue device: 0x2c0
queue enable: 1
queue_sel: 1
queue size: 8
queue driver vector: 2
queue desc: 0x400
queue driver: 0x480
queue device: 0x4c0
queue enable: 1
device_status: 0xf

Welcome to openSUSE Tumbleweed 20200326 - Kernel 5.5.0-rc5-1-default+
(hvc0).

enp0s3:


localhost login:

2.5 close backend and frontend will show
localhost:~ # [  185.685041] virtio-ivshmem :00:04.0: backend failed!

3. virtio-ivshmem-block test

3.1 ivshmem2-server(host)
airey:~/ivshmem/qemu/:[0]# ./ivshmem2-server -F -l 1M -n 2 -V 2 -P 0x8002
*** Example code, do not use in production ***

3.2 guest vm backend(test-01)

localhost:~ # echo "110a 4106 1af4 1100 ffc002 ff" >
/sys/bus/pci/drivers/uio_ivshmem/new_id
[   77.701462] uio_ivshmem :00:04.0: state_table at
0xfd80, size 0x1000
[   77.705231] uio_ivshmem :00:04.0: rw_section at
0xfd801000, size 0x000ff000

localhost:~ # virtio/virtio-ivshmem-block /dev/uio0 /root/disk.img
Waiting for peer to be ready...

3.3 guest vm frontend(test-02)
need to boot or reboot after backend is done

3.4 guest vm backend(test-01)
localhost:~ # virtio/virtio-ivshmem-block /dev/uio0 /root/disk.img
Waiting for peer to be ready...
Starting virtio device
device_status: 0x0
device_status: 0x1
device_status: 0x3
device_features_sel: 1
device_features_sel: 0
driver_features_sel: 1
driver_features[1]: 0x13
driver_features_sel: 0
driver_features[0]: 0x206
device_status: 0xb
queue_sel: 0
queue size: 8
queue driver vector: 1
queue desc: 0x200
queue driver: 0x280
queue device: 0x2c0
queue enable: 1
device_status: 0xf

3.5 guest vm frontend(test-02), a new disk is attached:

fdisk /dev/vdb

Disk /dev/vdb: 192 KiB, 196608 bytes, 384 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

3.6 close backend and frontend will show
localhost:~ # [ 1312.284301] virtio-ivshmem :00:04.0: backend failed!



Tested-by: Liang Yan 

On 1/7/20 9:36 AM, Jan Kiszka wrote:
> Overdue update of the ivshmem 2.0 device model as presented at [1].
> 
> Changes in v2:
>  - changed PCI device ID to Siemens-granted one,
>adjusted PCI device revision to 0
>  - removed unused feature register from device
>  - addressed feedback on specification document
>  - rebased over master
> 
> This version is now fully in sync with the implementation for Jailhouse
> that is currently under review [2][3], UIO and virtio-ivshmem drivers
> are shared. Jailhouse will very likely pick up this revision of the
> device in order to move forward with stressing it.
> 
> More details on the usage with QEMU were in the original cover letter
> (with adjustments to the new device ID):
> 
> If you want to play with this, the basic setup of the shared memory
> device is described in patch 1 and 3. UIO driver and also the
> virtio-ivshmem prototype can be found at
> 
> http://git.kiszka.org/?p=linux.

Re: [RFC][PATCH v2 0/3] IVSHMEM version 2 device for QEMU

2020-04-09 Thread Liang Yan



On 4/9/20 10:11 AM, Jan Kiszka wrote:
> On 09.04.20 15:52, Liang Yan wrote:
>>
>>
>> On 1/7/20 9:36 AM, Jan Kiszka wrote:
>>> Overdue update of the ivshmem 2.0 device model as presented at [1].
>>>
>>> Changes in v2:
>>>   - changed PCI device ID to Siemens-granted one,
>>>     adjusted PCI device revision to 0
>>>   - removed unused feature register from device
>>>   - addressed feedback on specification document
>>>   - rebased over master
>>>
>>> This version is now fully in sync with the implementation for Jailhouse
>>> that is currently under review [2][3], UIO and virtio-ivshmem drivers
>>> are shared. Jailhouse will very likely pick up this revision of the
>>> device in order to move forward with stressing it.
>>>
>>> More details on the usage with QEMU were in the original cover letter
>>> (with adjustments to the new device ID):
>>>
>>> If you want to play with this, the basic setup of the shared memory
>>> device is described in patch 1 and 3. UIO driver and also the
>>> virtio-ivshmem prototype can be found at
>>>
>>> 
>>> http://git.kiszka.org/?p=linux.git;a=shortlog;h=refs/heads/queues/ivshmem2
>>>
>>>
>>> Accessing the device via UIO is trivial enough. If you want to use it
>>> for virtio, this is additionally to the description in patch 3 needed on
>>> the virtio console backend side:
>>>
>>>  modprobe uio_ivshmem
>>>  echo "110a 4106 1af4 1100 ffc003 ff" >
>>> /sys/bus/pci/drivers/uio_ivshmem/new_id
>>>  linux/tools/virtio/virtio-ivshmem-console /dev/uio0
>>>
>>> And for virtio block:
>>>
>>>  echo "110a 4106 1af4 1100 ffc002 ff" >
>>> /sys/bus/pci/drivers/uio_ivshmem/new_id
>>>  linux/tools/virtio/virtio-ivshmem-console /dev/uio0
>>> /path/to/disk.img
>>>
>>> After that, you can start the QEMU frontend instance with the
>>> virtio-ivshmem driver installed which can use the new /dev/hvc* or
>>> /dev/vda* as usual.
>>>
>> Hi, Jan,
>>
>> Nice work.
>>
>> I did a full test of this new version. The QEMU device part looks
>> good, and the virtio console worked as expected. I just had some issues
>> with the virtio-ivshmem-block tests here.
>>
>> I suppose you mean "linux/tools/virtio/virtio-ivshmem-block"?
> 
> Yes, copy mistake, had the same issue over in
> https://github.com/siemens/jailhouse/blob/master/Documentation/inter-cell-communication.md
> 
> 
>>
>> Noticed "ffc002" is the main difference; however, I saw no response
>> when running the echo command here. Is there anything I need to prepare?
>>
>> I built the driver into the guest kernel already.
>>
>> Do I need a new protocol or anything for the command line below?
>> ivshmem2-server -F -l 64K -n 2 -V 3 -P 0x8003
> 
> Yes, you need to adjust that command line - didn't I document that
> somewhere? Looks like I didn't:
> 
> ivshmem2-server -F -l 1M -n 2 -V 2 -P 0x8002
> 
> i.e. a bit more memory is good (but this isn't speed-optimized anyway),
> you only need 2 vectors here (but more do not harm), and the protocol
> indeed needs adjustment (that is the key).
> 

Thanks for the reply. I just confirmed that virtio-ivshmem-block works
with the new configuration; a "vdb" disk is mounted in the frontend VM. I
will send out a full test summary later.

Best,
Liang



> Jan
> 



Re: [RFC][PATCH v2 0/3] IVSHMEM version 2 device for QEMU

2020-04-09 Thread Liang Yan



On 1/7/20 9:36 AM, Jan Kiszka wrote:
> Overdue update of the ivshmem 2.0 device model as presented at [1].
> 
> Changes in v2:
>  - changed PCI device ID to Siemens-granted one,
>adjusted PCI device revision to 0
>  - removed unused feature register from device
>  - addressed feedback on specification document
>  - rebased over master
> 
> This version is now fully in sync with the implementation for Jailhouse
> that is currently under review [2][3], UIO and virtio-ivshmem drivers
> are shared. Jailhouse will very likely pick up this revision of the
> device in order to move forward with stressing it.
> 
> More details on the usage with QEMU were in the original cover letter
> (with adjustments to the new device ID):
> 
> If you want to play with this, the basic setup of the shared memory
> device is described in patch 1 and 3. UIO driver and also the
> virtio-ivshmem prototype can be found at
> 
> http://git.kiszka.org/?p=linux.git;a=shortlog;h=refs/heads/queues/ivshmem2
> 
> Accessing the device via UIO is trivial enough. If you want to use it
> for virtio, this is additionally to the description in patch 3 needed on
> the virtio console backend side:
> 
> modprobe uio_ivshmem
> echo "110a 4106 1af4 1100 ffc003 ff" > 
> /sys/bus/pci/drivers/uio_ivshmem/new_id
> linux/tools/virtio/virtio-ivshmem-console /dev/uio0
> 
> And for virtio block:
> 
> echo "110a 4106 1af4 1100 ffc002 ff" > 
> /sys/bus/pci/drivers/uio_ivshmem/new_id
> linux/tools/virtio/virtio-ivshmem-console /dev/uio0 /path/to/disk.img
> 
> After that, you can start the QEMU frontend instance with the
> virtio-ivshmem driver installed which can use the new /dev/hvc* or
> /dev/vda* as usual.
> 
Hi, Jan,

Nice work.

I did a full test of this new version. The QEMU device part looks
good, and the virtio console worked as expected. I just had some issues
with the virtio-ivshmem-block tests here.

I suppose you mean "linux/tools/virtio/virtio-ivshmem-block"?

Noticed "ffc002" is the main difference; however, I saw no response
when running the echo command here. Is there anything I need to prepare?

I built the driver into the guest kernel already.

Do I need a new protocol or anything for the command line below?
ivshmem2-server -F -l 64K -n 2 -V 3 -P 0x8003

Best,
Liang



> Any feedback welcome!
> 
> Jan
> 
> PS: Let me know if I missed someone potentially interested in this topic
> on CC - or if you would like to be dropped from the list.
> 
> [1] https://kvmforum2019.sched.com/event/TmxI
> [2] https://groups.google.com/forum/#!topic/jailhouse-dev/ffnCcRh8LOs
> [3] https://groups.google.com/forum/#!topic/jailhouse-dev/HX-0AGF1cjg
> 
> Jan Kiszka (3):
>   hw/misc: Add implementation of ivshmem revision 2 device
>   docs/specs: Add specification of ivshmem device revision 2
>   contrib: Add server for ivshmem revision 2
> 
>  Makefile  |3 +
>  Makefile.objs |1 +
>  configure |1 +
>  contrib/ivshmem2-server/Makefile.objs |1 +
>  contrib/ivshmem2-server/ivshmem2-server.c |  462 
>  contrib/ivshmem2-server/ivshmem2-server.h |  158 +
>  contrib/ivshmem2-server/main.c|  313 +
>  docs/specs/ivshmem-2-device-spec.md   |  376 ++
>  hw/misc/Makefile.objs |2 +-
>  hw/misc/ivshmem2.c| 1085 
> +
>  include/hw/misc/ivshmem2.h|   48 ++
>  include/hw/pci/pci_ids.h  |2 +
>  12 files changed, 2451 insertions(+), 1 deletion(-)
>  create mode 100644 contrib/ivshmem2-server/Makefile.objs
>  create mode 100644 contrib/ivshmem2-server/ivshmem2-server.c
>  create mode 100644 contrib/ivshmem2-server/ivshmem2-server.h
>  create mode 100644 contrib/ivshmem2-server/main.c
>  create mode 100644 docs/specs/ivshmem-2-device-spec.md
>  create mode 100644 hw/misc/ivshmem2.c
>  create mode 100644 include/hw/misc/ivshmem2.h
> 



[Bug 1865626] Re: qemu hang when ipl boot from a mdev dasd

2020-03-02 Thread liang yan
s390zp12:~ # cat test.sh
/root/qemu/s390x-softmmu/qemu-system-s390x \
-machine s390-ccw-virtio,accel=kvm \
-nographic \
-bios /root/qemu/pc-bios/s390-ccw/s390-ccw.img \
-device 
vfio-ccw,id=hostdev0,sysfsdev=/sys/bus/mdev/devices/08e8c006-146d-48d3-b21a-c005f9d3a04b,devno=fe.0.1234,bootindex=1
 \
-global vfio-ccw.force-orb-pfch=yes

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1865626

Title:
  qemu hang when ipl boot from a mdev dasd

Status in QEMU:
  New

Bug description:
  qemu latest
  kernel 5.3.18

  I am using a passthrough dasd as the boot device; the installation looks
  fine and gets into the reboot process. However, the VM could not boot and
  just hangs as below after that. I have been checking the "s390: vfio-ccw
  dasd ipl support" series but have no clue yet. Could anyone take a look
  at it? Thanks.


  s390vsw188:~ # bash test.sh
  LOADPARM=[]
  executing ccw chain at : 0x0018
  executing ccw chain at : 0xe000

  2020-03-01T06:24:56.879314Z qemu-system-s390x: warning: vfio-ccw
  (devno fe.0.): PFCH flag forced


  s390zp12:~ # cat test.sh
  /root/qemu/s390x-softmmu/qemu-system-s390x \
  -machine s390-ccw-virtio,accel=kvm \
  -nographic \
  -bios /root/qemu/pc-bios/s390-ccw/s390-ccw.img \
  -device 
vfio-ccw,id=hostdev0,sysfsdev=/sys/bus/mdev/devices/08e8c006-146d-48d3-b21a-c005f9d3a04b,,devno=fe.0.,bootindex=1
 \
  -global vfio-ccw.force-orb-pfch=yes \

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1865626/+subscriptions



[Bug 1865626] [NEW] qemu hang when ipl boot from a mdev dasd

2020-03-02 Thread liang yan
Public bug reported:

qemu latest
kernel 5.3.18

I am using a passthrough dasd as the boot device; the installation looks fine
and gets into the reboot process. However, the VM could not boot and just
hangs as below after that. I have been checking the "s390: vfio-ccw dasd ipl
support" series but have no clue yet. Could anyone take a look at it? Thanks.


s390vsw188:~ # bash test.sh
LOADPARM=[]
executing ccw chain at : 0x0018
executing ccw chain at : 0xe000

2020-03-01T06:24:56.879314Z qemu-system-s390x: warning: vfio-ccw (devno
fe.0.): PFCH flag forced


s390zp12:~ # cat test.sh
/root/qemu/s390x-softmmu/qemu-system-s390x \
-machine s390-ccw-virtio,accel=kvm \
-nographic \
-bios /root/qemu/pc-bios/s390-ccw/s390-ccw.img \
-device 
vfio-ccw,id=hostdev0,sysfsdev=/sys/bus/mdev/devices/08e8c006-146d-48d3-b21a-c005f9d3a04b,,devno=fe.0.,bootindex=1
 \
-global vfio-ccw.force-orb-pfch=yes \

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1865626

Title:
  qemu hang when ipl boot from a mdev dasd

Status in QEMU:
  New

Bug description:
  qemu latest
  kernel 5.3.18

  I am using a passthrough dasd as the boot device; the installation looks
  fine and gets into the reboot process. However, the VM could not boot and
  just hangs as below after that. I have been checking the "s390: vfio-ccw
  dasd ipl support" series but have no clue yet. Could anyone take a look
  at it? Thanks.


  s390vsw188:~ # bash test.sh
  LOADPARM=[]
  executing ccw chain at : 0x0018
  executing ccw chain at : 0xe000

  2020-03-01T06:24:56.879314Z qemu-system-s390x: warning: vfio-ccw
  (devno fe.0.): PFCH flag forced


  s390zp12:~ # cat test.sh
  /root/qemu/s390x-softmmu/qemu-system-s390x \
  -machine s390-ccw-virtio,accel=kvm \
  -nographic \
  -bios /root/qemu/pc-bios/s390-ccw/s390-ccw.img \
  -device 
vfio-ccw,id=hostdev0,sysfsdev=/sys/bus/mdev/devices/08e8c006-146d-48d3-b21a-c005f9d3a04b,,devno=fe.0.,bootindex=1
 \
  -global vfio-ccw.force-orb-pfch=yes \

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1865626/+subscriptions



Re: [PATCH v4] target/arm/monitor: query-cpu-model-expansion crashed qemu when using machine type none

2020-02-03 Thread Liang Yan



On 2/3/20 8:54 AM, Peter Maydell wrote:
> On Mon, 3 Feb 2020 at 13:44, Liang Yan  wrote:
>>
>> Commit e19afd566781 mentioned that target-arm only supports queryable
>> cpu models 'max', 'host', and the current type when KVM is in use.
>> The logic works well until using machine type none.
>>
>> For machine type none, cpu_type will be null if the cpu option is not
>> set on the command line, and strlen(cpu_type) will terminate the process.
>> So we add a check above it.
>>
>> This won't affect i386 and s390x since they do not use current_cpu.
>>
>> Signed-off-by: Liang Yan 
>> ---
>>  v4: change code style based on the review from Andrew Jones
>>  v3: change git commit message
>>  v2: fix code style issue
> 
> If a reviewer says "with these changes, reviewed-by:", or
> "otherwise, reviewed-by...", then you should add those tags to
> your commit message, assuming you've only made the changes
> they asked for. That saves them having to look at and reply
> to the patchset again.
> 
> In this case I'll just add them as I add this patch to
> target-arm.next, but if you could handle tags across versions
> for future patchset submissions it makes life a little
> easier for us.
> 
Thanks for the tips, will definitely do in the future.

Best,
Liang

> Applied to target-arm.next, thanks.
> 
> thanks
> -- PMM
> 



[PATCH v4] target/arm/monitor: query-cpu-model-expansion crashed qemu when using machine type none

2020-02-03 Thread Liang Yan
Commit e19afd566781 mentioned that target-arm only supports queryable
cpu models 'max', 'host', and the current type when KVM is in use.
The logic works well until using machine type none.

For machine type none, cpu_type will be null if the cpu option is not
set on the command line, and strlen(cpu_type) will terminate the process.
So we add a check above it.

This won't affect i386 and s390x since they do not use current_cpu.

Signed-off-by: Liang Yan 
---
 v4: change code style based on the review from Andrew Jones
 v3: change git commit message
 v2: fix code style issue
---
 target/arm/monitor.c | 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/target/arm/monitor.c b/target/arm/monitor.c
index 9725dfff16..c2dc7908de 100644
--- a/target/arm/monitor.c
+++ b/target/arm/monitor.c
@@ -137,17 +137,20 @@ CpuModelExpansionInfo 
*qmp_query_cpu_model_expansion(CpuModelExpansionType type,
 }
 
 if (kvm_enabled()) {
-const char *cpu_type = current_machine->cpu_type;
-int len = strlen(cpu_type) - strlen(ARM_CPU_TYPE_SUFFIX);
 bool supported = false;
 
 if (!strcmp(model->name, "host") || !strcmp(model->name, "max")) {
 /* These are kvmarm's recommended cpu types */
 supported = true;
-} else if (strlen(model->name) == len &&
-   !strncmp(model->name, cpu_type, len)) {
-/* KVM is enabled and we're using this type, so it works. */
-supported = true;
+} else if (current_machine->cpu_type) {
+const char *cpu_type = current_machine->cpu_type;
+int len = strlen(cpu_type) - strlen(ARM_CPU_TYPE_SUFFIX);
+
+if (strlen(model->name) == len &&
+!strncmp(model->name, cpu_type, len)) {
+/* KVM is enabled and we're using this type, so it works. */
+supported = true;
+}
 }
 if (!supported) {
 error_setg(errp, "We cannot guarantee the CPU type '%s' works "
-- 
2.25.0




Re: [PATCH v3] target/arm/monitor: query-cpu-model-expansion crashed qemu when using machine type none

2020-02-03 Thread Liang Yan



On 2/3/20 8:08 AM, Andrew Jones wrote:
> On Fri, Jan 31, 2020 at 10:46:49PM -0500, Liang Yan wrote:
>> Commit e19afd56 mentioned that target-arm only supports queryable
> 
> Please use more hexdigits. I'm not sure QEMU has a policy for that,
> but I'd go with 12.
> 
>> cpu models 'max', 'host', and the current type when KVM is in use.
>> The logic works well until using machine type none.
>>
>> For machine type none, cpu_type will be null if the cpu option is not
>> set on the command line, and strlen(cpu_type) will terminate the process.
>> So we add a check above it.
>>
>> This won't affect i386 and s390x since they do not use current_cpu.
>>
>> Signed-off-by: Liang Yan 
>> ---
>>  v3: change git commit message
>>  v2: fix code style issue
>> ---
>>  target/arm/monitor.c | 14 --
>>  1 file changed, 8 insertions(+), 6 deletions(-)
>>
>> diff --git a/target/arm/monitor.c b/target/arm/monitor.c
>> index 9725dfff16..3350cd65d0 100644
>> --- a/target/arm/monitor.c
>> +++ b/target/arm/monitor.c
>> @@ -137,17 +137,19 @@ CpuModelExpansionInfo 
>> *qmp_query_cpu_model_expansion(CpuModelExpansionType type,
>>  }
>>  
>>  if (kvm_enabled()) {
>> -const char *cpu_type = current_machine->cpu_type;
>> -int len = strlen(cpu_type) - strlen(ARM_CPU_TYPE_SUFFIX);
>>  bool supported = false;
>>  
>>  if (!strcmp(model->name, "host") || !strcmp(model->name, "max")) {
>>  /* These are kvmarm's recommended cpu types */
>>  supported = true;
>> -} else if (strlen(model->name) == len &&
>> -   !strncmp(model->name, cpu_type, len)) {
>> -/* KVM is enabled and we're using this type, so it works. */
>> -supported = true;
>> +} else if (current_machine->cpu_type) {
>> +const char *cpu_type = current_machine->cpu_type;
>> +int len = strlen(cpu_type) - strlen(ARM_CPU_TYPE_SUFFIX);
> 
> Need a blank line here.
> 
>> +if (strlen(model->name) == len &&
>> +!strncmp(model->name, cpu_type, len)) {
> 
> Four spaces of indent too many on the line above.
> 
>> +/* KVM is enabled and we're using this type, so it works. */
>> +supported = true;
>> +}
>>  }
>>  if (!supported) {
>>  error_setg(errp, "We cannot guarantee the CPU type '%s' works "
>> -- 
>> 2.25.0
>>
>>
> 
> With the three changes above
> 
> 
> Reviewed-by: Andrew Jones 
> Tested-by: Andrew Jones 
> 

Thanks for the review, I will update soon.

> 
> It'd be nice to extend tests/qtest/arm-cpu-features.c to also do
> some checks with machine='none' with and without KVM.
> 

Will do it later, thanks for the suggestion.

Best,
Liang

> Thanks,
> drew
> 



[PATCH v3] target/arm/monitor: query-cpu-model-expansion crashed qemu when using machine type none

2020-01-31 Thread Liang Yan
Commit e19afd56 mentioned that target-arm only supports queryable
cpu models 'max', 'host', and the current type when KVM is in use.
The logic works well until using machine type none.

For machine type none, cpu_type will be null if the cpu option is not
set on the command line, and strlen(cpu_type) will terminate the process.
So we add a check above it.

This won't affect i386 and s390x since they do not use current_cpu.

Signed-off-by: Liang Yan 
---
 v3: change git commit message
 v2: fix code style issue
---
 target/arm/monitor.c | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/target/arm/monitor.c b/target/arm/monitor.c
index 9725dfff16..3350cd65d0 100644
--- a/target/arm/monitor.c
+++ b/target/arm/monitor.c
@@ -137,17 +137,19 @@ CpuModelExpansionInfo 
*qmp_query_cpu_model_expansion(CpuModelExpansionType type,
 }
 
 if (kvm_enabled()) {
-const char *cpu_type = current_machine->cpu_type;
-int len = strlen(cpu_type) - strlen(ARM_CPU_TYPE_SUFFIX);
 bool supported = false;
 
 if (!strcmp(model->name, "host") || !strcmp(model->name, "max")) {
 /* These are kvmarm's recommended cpu types */
 supported = true;
-} else if (strlen(model->name) == len &&
-   !strncmp(model->name, cpu_type, len)) {
-/* KVM is enabled and we're using this type, so it works. */
-supported = true;
+} else if (current_machine->cpu_type) {
+const char *cpu_type = current_machine->cpu_type;
+int len = strlen(cpu_type) - strlen(ARM_CPU_TYPE_SUFFIX);
+if (strlen(model->name) == len &&
+!strncmp(model->name, cpu_type, len)) {
+/* KVM is enabled and we're using this type, so it works. */
+supported = true;
+}
 }
 if (!supported) {
 error_setg(errp, "We cannot guarantee the CPU type '%s' works "
-- 
2.25.0




[PATCH v2] target/arm/monitor: query-cpu-model-expansion crashed qemu when using machine type none

2020-01-31 Thread Liang Yan
From commit e19afd56, we know target-arm restricts the list of
queryable cpu models to 'max', 'host', and the current type when
KVM is in use. The logic works well until using machine type none.

For machine type none, cpu_type will be null, and strlen(cpu_type)
will terminate the process. So I add a check above it.

This won't affect i386 and s390x, because they do not use
current_cpu.

Signed-off-by: Liang Yan 
---
 v2: fix code style issue
---
 target/arm/monitor.c | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/target/arm/monitor.c b/target/arm/monitor.c
index 9725dfff16..3350cd65d0 100644
--- a/target/arm/monitor.c
+++ b/target/arm/monitor.c
@@ -137,17 +137,19 @@ CpuModelExpansionInfo 
*qmp_query_cpu_model_expansion(CpuModelExpansionType type,
 }
 
 if (kvm_enabled()) {
-const char *cpu_type = current_machine->cpu_type;
-int len = strlen(cpu_type) - strlen(ARM_CPU_TYPE_SUFFIX);
 bool supported = false;
 
 if (!strcmp(model->name, "host") || !strcmp(model->name, "max")) {
 /* These are kvmarm's recommended cpu types */
 supported = true;
-} else if (strlen(model->name) == len &&
-   !strncmp(model->name, cpu_type, len)) {
-/* KVM is enabled and we're using this type, so it works. */
-supported = true;
+} else if (current_machine->cpu_type) {
+const char *cpu_type = current_machine->cpu_type;
+int len = strlen(cpu_type) - strlen(ARM_CPU_TYPE_SUFFIX);
+if (strlen(model->name) == len &&
+!strncmp(model->name, cpu_type, len)) {
+/* KVM is enabled and we're using this type, so it works. */
+supported = true;
+}
 }
 if (!supported) {
 error_setg(errp, "We cannot guarantee the CPU type '%s' works "
-- 
2.25.0




[PATCH] target/arm/monitor: query-cpu-model-expansion crashed qemu when using machine type none

2020-01-31 Thread Liang Yan
From commit e19afd56, we know target-arm restricts the list of
queryable cpu models to 'max', 'host', and the current type when
KVM is in use. The logic works well until using machine type none.

For machine type none, cpu_type will be null, and strlen(cpu_type)
will terminate the process. So I add a check above it.

This won't affect i386 and s390x, because they do not use
current_cpu.

Signed-off-by: Liang Yan 
---
 target/arm/monitor.c | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/target/arm/monitor.c b/target/arm/monitor.c
index 9725dfff16..0c0130c1af 100644
--- a/target/arm/monitor.c
+++ b/target/arm/monitor.c
@@ -137,17 +137,19 @@ CpuModelExpansionInfo 
*qmp_query_cpu_model_expansion(CpuModelExpansionType type,
 }
 
 if (kvm_enabled()) {
-const char *cpu_type = current_machine->cpu_type;
-int len = strlen(cpu_type) - strlen(ARM_CPU_TYPE_SUFFIX);
 bool supported = false;
 
 if (!strcmp(model->name, "host") || !strcmp(model->name, "max")) {
 /* These are kvmarm's recommended cpu types */
 supported = true;
-} else if (strlen(model->name) == len &&
-   !strncmp(model->name, cpu_type, len)) {
-/* KVM is enabled and we're using this type, so it works. */
-supported = true;
+} else if (current_machine->cpu_type) {
+const char *cpu_type = current_machine->cpu_type;
+int len = strlen(cpu_type) - strlen(ARM_CPU_TYPE_SUFFIX);
+if (strlen(model->name) == len &&
+   !strncmp(model->name, cpu_type, len)) {
+   /* KVM is enabled and we're using this type, so it works. */
+   supported = true;
+}
 }
 if (!supported) {
 error_setg(errp, "We cannot guarantee the CPU type '%s' works "
-- 
2.25.0




Re: [PATCH v2 0/2] Add support for 2nd generation AMD EPYC processors

2020-01-07 Thread Liang Yan
Kindly ping.
Just wondering whether there are any plans for this.

Best,
Liang


On 11/7/19 1:00 PM, Moger, Babu wrote:
> The following series adds the support for 2nd generation AMD EPYC Processors
> on qemu guests. The model display name for 2nd generation will be EPYC-Rome.
> 
> Also fixes few missed cpu feature bits in 1st generation EPYC models.
> 
> The Reference documents are available at
> https://developer.amd.com/wp-content/resources/55803_0.54-PUB.pdf
> https://www.amd.com/system/files/TechDocs/24594.pdf
> 
> ---
> v2: Used the versioned CPU models instead of machine-type-based CPU
> compatibility (commented by Eduardo).
> 
> Babu Moger (2):
>   i386: Add missing cpu feature bits in EPYC model
>   i386: Add 2nd Generation AMD EPYC processors
> 
> 
>  target/i386/cpu.c |  119 
> +++--
>  target/i386/cpu.h |2 +
>  2 files changed, 116 insertions(+), 5 deletions(-)
> 
> --
> 


Re: [RFC][PATCH 0/3] IVSHMEM version 2 device for QEMU

2019-11-27 Thread Liang Yan


On 11/11/19 7:57 AM, Jan Kiszka wrote:
> To get the ball rolling after my presentation of the topic at KVM Forum
> [1] and many fruitful discussions around it, this is a first concrete
> code series. As discussed, I'm starting with the IVSHMEM implementation
> of a QEMU device and server. It's RFC because, besides specification and
> implementation details, there will still be some decisions needed about
> how to integrate the new version best into the existing code bases.
> 
> If you want to play with this, the basic setup of the shared memory
> device is described in patch 1 and 3. UIO driver and also the
> virtio-ivshmem prototype can be found at
> 
> http://git.kiszka.org/?p=linux.git;a=shortlog;h=refs/heads/queues/ivshmem2
> 
> Accessing the device via UIO is trivial enough. If you want to use it
> for virtio, this is additionally to the description in patch 3 needed on
> the virtio console backend side:
> 
> modprobe uio_ivshmem
> echo "1af4 1110 1af4 1100 ffc003 ff" > 
> /sys/bus/pci/drivers/uio_ivshmem/new_id
> linux/tools/virtio/virtio-ivshmem-console /dev/uio0
> 
> And for virtio block:
> 
> echo "1af4 1110 1af4 1100 ffc002 ff" > 
> /sys/bus/pci/drivers/uio_ivshmem/new_id
> linux/tools/virtio/virtio-ivshmem-console /dev/uio0 /path/to/disk.img
> 
> After that, you can start the QEMU frontend instance with the
> virtio-ivshmem driver installed which can use the new /dev/hvc* or
> /dev/vda* as usual.
> 
> Any feedback welcome!

Hi, Jan,

I have been playing with your code for the last few weeks, mostly studying
and testing it, of course. Really nice work. I have a few questions here:

First, the QEMU part looks good. I tested between a couple of VMs, and the
device popped up correctly for all of them, but I had some problems when
trying to load the driver. For example, with two VMs, vm1 and vm2, and the
ivshmem server started as you suggested: vm1 could load uio_ivshmem and
virtio_ivshmem correctly, while vm2 could load uio_ivshmem but "/dev/uio0"
did not show up, and virtio_ivshmem could not be loaded at all. This
persists even if I switch the load order of vm1 and vm2, and sometimes
resetting "virtio_ivshmem" crashes both vm1 and vm2. I am not sure whether
this is a bug or an "Ivshmem Mode" issue; I went through the ivshmem-server
code but did not find any related information.

I started some code work recently, such as fixing code-style issues and
making some changes based on the testing above. However, I know you are
also working on RFC v2, and the server-client and client-client protocols
are not finalized yet either, so things may change. I would much appreciate
it if you could squeeze me into your development schedule and share some
plans, :-)  Maybe I could send some pull requests to your GitHub repo?

I personally like this project a lot; there is a lot of potential and many
use cases for it, especially for devices like ivshmem-net/ivshmem-block.
Anyway, thanks for adding me to the list, and I am looking forward to your
reply.

Best,
Liang

> 
> Jan
> 
> PS: Let me know if I missed someone potentially interested in this topic
> on CC - or if you would like to be dropped from the list.
> 
> PPS: The Jailhouse queues are currently out of sync /wrt minor details
> of this one, primarily the device ID. Will update them when the general
> direction is clear.
> 
> [1] https://kvmforum2019.sched.com/event/TmxI
> 
> Jan Kiszka (3):
>   hw/misc: Add implementation of ivshmem revision 2 device
>   docs/specs: Add specification of ivshmem device revision 2
>   contrib: Add server for ivshmem revision 2
> 
>  Makefile  |3 +
>  Makefile.objs |1 +
>  configure |1 +
>  contrib/ivshmem2-server/Makefile.objs |1 +
>  contrib/ivshmem2-server/ivshmem2-server.c |  462 
>  contrib/ivshmem2-server/ivshmem2-server.h |  158 +
>  contrib/ivshmem2-server/main.c|  313 +
>  docs/specs/ivshmem-2-device-spec.md   |  333 +
>  hw/misc/Makefile.objs |2 +-
>  hw/misc/ivshmem2.c| 1091 
> +
>  include/hw/misc/ivshmem2.h|   48 ++
>  11 files changed, 2412 insertions(+), 1 deletion(-)
>  create mode 100644 contrib/ivshmem2-server/Makefile.objs
>  create mode 100644 contrib/ivshmem2-server/ivshmem2-server.c
>  create mode 100644 contrib/ivshmem2-server/ivshmem2-server.h
>  create mode 100644 contrib/ivshmem2-server/main.c
>  create mode 100644 docs/specs/ivshmem-2-device-spec.md
>  create mode 100644 hw/misc/ivshmem2.c
>  create mode 100644 include/hw/misc/ivshmem2.h
> 


[Qemu-devel] [Bug 1815143] Re: qemu-system-s390x fails when running without kvm: fatal: EXECUTE on instruction prefix 0x7f4 not implemented

2019-02-25 Thread liang yan
Confirmed the fix, thanks for the help.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1815143

Title:
   qemu-system-s390x fails when running without kvm: fatal: EXECUTE on
  instruction prefix 0x7f4 not implemented

Status in QEMU:
  Fix Released

Bug description:
  just wondering if TCG implements instruction prefix 0x7f4

  server3:~ # zcat /boot/vmlinux-4.4.162-94.72-default.gz > /tmp/kernel

  --> starting qemu with kvm enabled works fine
  server3:~ # qemu-system-s390x -nographic -kernel /tmp/kernel -initrd 
/boot/initrd -enable-kvm
  Initializing cgroup subsys cpuset
  Initializing cgroup subsys cpu
  Initializing cgroup subsys cpuacct
  Linux version 4.4.162-94.72-default (geeko@buildhost) (gcc version 4.8.5 
(SUSE Linux) ) #1 SMP Mon Nov 12 18:57:45 UTC 2018 (9de753f)
  setup.289988: Linux is running under KVM in 64-bit mode
  setup.b050d0: The maximum memory size is 128MB
  numa.196305: NUMA mode: plain
  Write protected kernel read-only data: 8692k
  [...]

  --> but starting qemu without kvm enabled fails
  server3:~ # qemu-system-s390x -nographic -kernel /tmp/kernel -initrd 
/boot/initrd 
  qemu: fatal: EXECUTE on instruction prefix 0x7f4 not implemented

  PSW=mask 00018000 addr 0067ed6e cc 00
  R00=8000 R01=0067ed76 R02= 
R03=
  R04=00111548 R05= R06= 
R07=
  R08=000100f6 R09= R10= 
R11=
  R12=00ae2000 R13=00681978 R14=00111548 
R15=bef0
  F00= F01= F02= 
F03=
  F04= F05= F06= 
F07=
  F08= F09= F10= 
F11=
  F12= F13= F14= 
F15=
  V00= V01=
  V02= V03=
  V04= V05=
  V06= V07=
  V08= V09=
  V10= V11=
  V12= V13=
  V14= V15=
  V16= V17=
  V18= V19=
  V20= V21=
  V22= V23=
  V24= V25=
  V26= V27=
  V28= V29=
  V30= V31=
  C00= C01= C02= 
C03=
  C04= C05= C06= 
C07=
  C08= C09= C10= 
C11=
  C12= C13= C14= 
C15=

  Aborted (core dumped)

  
  server3:~ # lscpu
  Architecture:  s390x
  CPU op-mode(s):32-bit, 64-bit
  Byte Order:Big Endian
  CPU(s):2
  On-line CPU(s) list:   0,1
  Thread(s) per core:1
  Core(s) per socket:1
  Socket(s) per book:1
  Book(s) per drawer:1
  Drawer(s): 2
  NUMA node(s):  1
  Vendor ID: IBM/S390
  Machine type:  2964
  BogoMIPS:  20325.00
  Hypervisor:z/VM 6.4.0
  Hypervisor vendor: IBM
  Virtualization type:   full
  Dispatching mode:  horizontal
  L1d cache: 128K
  L1i cache: 96K
  L2d cache: 2048K
  L2i cache: 2048K
  L3 cache:  65536K
  L4 cache:  491520K
  NUMA node0 CPU(s): 0-63
  Flags: esan3 zarch stfle msa ldisp eimm dfp edat etf3eh 
highgprs te vx sie
  server3:~ # uname -a
  Linux server3 4.4.126-94.22-default #1 SMP Wed Apr 11 07:45:03 UTC 2018 
(9649989) s390x s390x s390x GNU/Linux
  server3:~ #

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1815143/+subscriptions



[Qemu-devel] [Bug 1815143] Re: qemu-system-s390x fails when running without kvm: fatal: EXECUTE on instruction prefix 0x7f4 not implemented

2019-02-11 Thread liang yan
A little bit confused here: I tried to bisect it from 2.10, but it was
always good on that branch; then I went back to 2.9.1, and it always
crashed. Machine-type related?

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1815143

Title:
   qemu-system-s390x fails when running without kvm: fatal: EXECUTE on
  instruction prefix 0x7f4 not implemented

Status in QEMU:
  Incomplete


To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1815143/+subscriptions



[Qemu-devel] [Bug 1815143] Re: qemu-system-s390x fails when running without kvm: fatal: EXECUTE on instruction prefix 0x7f4 not implemented

2019-02-11 Thread liang yan
Hi, Thomas, you are right, I am using 2.9.1, and it does look OK in
2.10. Would you mind pointing me to the code that fixed it? Thanks.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1815143

Title:
   qemu-system-s390x fails when running without kvm: fatal: EXECUTE on
  instruction prefix 0x7f4 not implemented

Status in QEMU:
  Incomplete


To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1815143/+subscriptions



[Qemu-devel] [Bug 1815143] [NEW] qemu-system-s390x fails when running without kvm: fatal: EXECUTE on instruction prefix 0x7f4 not implemented

2019-02-07 Thread liang yan
Public bug reported:

just wondering if TCG implements instruction prefix 0x7f4

server3:~ # zcat /boot/vmlinux-4.4.162-94.72-default.gz > /tmp/kernel

--> starting qemu with kvm enabled works fine
server3:~ # qemu-system-s390x -nographic -kernel /tmp/kernel -initrd 
/boot/initrd -enable-kvm
Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
Initializing cgroup subsys cpuacct
Linux version 4.4.162-94.72-default (geeko@buildhost) (gcc version 4.8.5 (SUSE 
Linux) ) #1 SMP Mon Nov 12 18:57:45 UTC 2018 (9de753f)
setup.289988: Linux is running under KVM in 64-bit mode
setup.b050d0: The maximum memory size is 128MB
numa.196305: NUMA mode: plain
Write protected kernel read-only data: 8692k
[...]

--> but starting qemu without kvm enabled fails
server3:~ # qemu-system-s390x -nographic -kernel /tmp/kernel -initrd 
/boot/initrd 
qemu: fatal: EXECUTE on instruction prefix 0x7f4 not implemented

PSW=mask 00018000 addr 0067ed6e cc 00
R00=8000 R01=0067ed76 R02= 
R03=
R04=00111548 R05= R06= 
R07=
R08=000100f6 R09= R10= 
R11=
R12=00ae2000 R13=00681978 R14=00111548 
R15=bef0
F00= F01= F02= 
F03=
F04= F05= F06= 
F07=
F08= F09= F10= 
F11=
F12= F13= F14= 
F15=
V00= V01=
V02= V03=
V04= V05=
V06= V07=
V08= V09=
V10= V11=
V12= V13=
V14= V15=
V16= V17=
V18= V19=
V20= V21=
V22= V23=
V24= V25=
V26= V27=
V28= V29=
V30= V31=
C00= C01= C02= 
C03=
C04= C05= C06= 
C07=
C08= C09= C10= 
C11=
C12= C13= C14= 
C15=

Aborted (core dumped)


server3:~ # lscpu
Architecture:  s390x
CPU op-mode(s):32-bit, 64-bit
Byte Order:Big Endian
CPU(s):2
On-line CPU(s) list:   0,1
Thread(s) per core:1
Core(s) per socket:1
Socket(s) per book:1
Book(s) per drawer:1
Drawer(s): 2
NUMA node(s):  1
Vendor ID: IBM/S390
Machine type:  2964
BogoMIPS:  20325.00
Hypervisor:z/VM 6.4.0
Hypervisor vendor: IBM
Virtualization type:   full
Dispatching mode:  horizontal
L1d cache: 128K
L1i cache: 96K
L2d cache: 2048K
L2i cache: 2048K
L3 cache:  65536K
L4 cache:  491520K
NUMA node0 CPU(s): 0-63
Flags: esan3 zarch stfle msa ldisp eimm dfp edat etf3eh 
highgprs te vx sie
server3:~ # uname -a
Linux server3 4.4.126-94.22-default #1 SMP Wed Apr 11 07:45:03 UTC 2018 
(9649989) s390x s390x s390x GNU/Linux
server3:~ #

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1815143

Title:
   qemu-system-s390x fails when running without kvm: fatal: EXECUTE on
  instruction prefix 0x7f4 not implemented

Status in QEMU:
  New


Re: [Qemu-devel] [Qemu-block] [PATCH] iotests: update qemu-iotests/082.out after 9cbef9d68ee

2018-10-18 Thread Liang Yan



On 10/18/18 4:46 PM, Eric Blake wrote:
> On 10/18/18 3:22 PM, Liang Yan wrote:
>> The qemu-img help output changed after commit 9cbef9d68ee; this is an
>> update for the iotests.
>>
>> Signed-off-by: Liang Yan 
>> ---
>>   tests/qemu-iotests/082.out | 956 ++---
>>   1 file changed, 478 insertions(+), 478 deletions(-)
> 
> Thanks, but we're still discussing possible other changes to the output:
> 
> https://lists.gnu.org/archive/html/qemu-devel/2018-10/msg03042.html
> 

Good to know, and thanks for the reminder.

~Liang



[Qemu-devel] [PATCH] iotests: update qemu-iotests/082.out after 9cbef9d68ee

2018-10-18 Thread Liang Yan
The qemu-img help output changed after commit 9cbef9d68ee; this is an
update for the iotests.

Signed-off-by: Liang Yan 
---
 tests/qemu-iotests/082.out | 956 ++---
 1 file changed, 478 insertions(+), 478 deletions(-)

diff --git a/tests/qemu-iotests/082.out b/tests/qemu-iotests/082.out
index 19e9fb13ff..2672349e1d 100644
--- a/tests/qemu-iotests/082.out
+++ b/tests/qemu-iotests/082.out
@@ -44,171 +44,171 @@ cluster_size: 8192
 
 Testing: create -f qcow2 -o help TEST_DIR/t.qcow2 128M
 Supported options:
-size Virtual disk size
-compat   Compatibility level (0.10 or 1.1)
-backing_file File name of a base image
-backing_fmt  Image format of the base image
-encryption   Encrypt the image with format 'aes'. (Deprecated in favor of 
encrypt.format=aes)
-encrypt.format   Encrypt the image, format choices: 'aes', 'luks'
-encrypt.key-secret ID of secret providing qcow AES key or LUKS passphrase
-encrypt.cipher-alg Name of encryption cipher algorithm
-encrypt.cipher-mode Name of encryption cipher mode
-encrypt.ivgen-alg Name of IV generator algorithm
-encrypt.ivgen-hash-alg Name of IV generator hash algorithm
-encrypt.hash-alg Name of encryption hash algorithm
-encrypt.iter-time Time to spend in PBKDF in milliseconds
-cluster_size qcow2 cluster size
-preallocationPreallocation mode (allowed values: off, metadata, falloc, 
full)
-lazy_refcounts   Postpone refcount updates
-refcount_bitsWidth of a reference count entry in bits
-nocowTurn off copy-on-write (valid only on btrfs)
+backing_file=str - File name of a base image
+backing_fmt=str - Image format of the base image
+cluster_size=size - qcow2 cluster size
+compat=str - Compatibility level (0.10 or 1.1)
+encrypt.cipher-alg=str - Name of encryption cipher algorithm
+encrypt.cipher-mode=str - Name of encryption cipher mode
+encrypt.format=str - Encrypt the image, format choices: 'aes', 'luks'
+encrypt.hash-alg=str - Name of encryption hash algorithm
+encrypt.iter-time=num - Time to spend in PBKDF in milliseconds
+encrypt.ivgen-alg=str - Name of IV generator algorithm
+encrypt.ivgen-hash-alg=str - Name of IV generator hash algorithm
+encrypt.key-secret=str - ID of secret providing qcow AES key or LUKS passphrase
+encryption=bool (on/off) - Encrypt the image with format 'aes'. (Deprecated in 
favor of encrypt.format=aes)
+lazy_refcounts=bool (on/off) - Postpone refcount updates
+nocow=bool (on/off) - Turn off copy-on-write (valid only on btrfs)
+preallocation=str - Preallocation mode (allowed values: off, metadata, falloc, 
full)
+refcount_bits=num - Width of a reference count entry in bits
+size=size - Virtual disk size
 
 Testing: create -f qcow2 -o ? TEST_DIR/t.qcow2 128M
 Supported options:
-size Virtual disk size
-compat   Compatibility level (0.10 or 1.1)
-backing_file File name of a base image
-backing_fmt  Image format of the base image
-encryption   Encrypt the image with format 'aes'. (Deprecated in favor of 
encrypt.format=aes)
-encrypt.format   Encrypt the image, format choices: 'aes', 'luks'
-encrypt.key-secret ID of secret providing qcow AES key or LUKS passphrase
-encrypt.cipher-alg Name of encryption cipher algorithm
-encrypt.cipher-mode Name of encryption cipher mode
-encrypt.ivgen-alg Name of IV generator algorithm
-encrypt.ivgen-hash-alg Name of IV generator hash algorithm
-encrypt.hash-alg Name of encryption hash algorithm
-encrypt.iter-time Time to spend in PBKDF in milliseconds
-cluster_size qcow2 cluster size
-preallocationPreallocation mode (allowed values: off, metadata, falloc, 
full)
-lazy_refcounts   Postpone refcount updates
-refcount_bitsWidth of a reference count entry in bits
-nocowTurn off copy-on-write (valid only on btrfs)
+backing_file=str - File name of a base image
+backing_fmt=str - Image format of the base image
+cluster_size=size - qcow2 cluster size
+compat=str - Compatibility level (0.10 or 1.1)
+encrypt.cipher-alg=str - Name of encryption cipher algorithm
+encrypt.cipher-mode=str - Name of encryption cipher mode
+encrypt.format=str - Encrypt the image, format choices: 'aes', 'luks'
+encrypt.hash-alg=str - Name of encryption hash algorithm
+encrypt.iter-time=num - Time to spend in PBKDF in milliseconds
+encrypt.ivgen-alg=str - Name of IV generator algorithm
+encrypt.ivgen-hash-alg=str - Name of IV generator hash algorithm
+encrypt.key-secret=str - ID of secret providing qcow AES key or LUKS passphrase
+encryption=bool (on/off) - Encrypt the image with format 'aes'. (Deprecated in 
favor of encrypt.format=aes)
+lazy_refcounts=bool (on/off) - Postpone refcount updates
+nocow=bool (on/off) - Turn off copy-on-write (valid only on btrfs)
+preallocation=str - Preallocation mode (allowed values: off, metadata, falloc, 
full)
+refcount_bits=num - Width of a reference count entry in bits
+size=size - Virtual disk size
 
 Testing: create -f qcow2 -o cluster_size=4k,help TEST_DIR/t.qcow2 128M

[Qemu-devel] [Bug 1441443] Re: Is there a way to create a 10G network interface for VMs in KVM2.0?

2018-04-05 Thread liang yan
Not unless you are using SR-IOV or DPDK, which both need hardware support.
If the NIC supports SR-IOV, then using IOMMU+VFIO and passing the virtual
function through to the VM will get a close number. Alternatively, DPDK,
using a user-space driver plus vhost-net, will also get a pretty good
value.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1441443

Title:
  Is there a way to create a 10G network interface for VMs in KVM2.0?

Status in QEMU:
  Incomplete

Bug description:

  We have installed & configured the KVM 2.0 (qemu-kvm 2.0.0+dfsg-
  2ubuntu1.10) on Ubuntu 14.04. The physical server is connected to 10G
  network, KVM is configured in Bridged mode But the issue is, when we
  create Network interface on VMs, we have only 1G device as an options
  for vmhosts. Is this the limit of the KVM or is there a way to create
  a 10G network interface for VMs? Available device models

  E1000
  Ne2k_pci
  Pcnet
  Rtl8139
  virtio

  Please find the network configuration details

  Source device : Host device vnet1 (Bridge ‘br0’)
  Device model : virtio 

  Network configuration in the host /etc/network/interfaces

  auto br0
  iface br0 inet static
  address 10.221.x.10
  netmask 255.255.255.0
  network 10.221.x.0
  broadcast 10.221.x.255
  gateway 10.221.x.1
  # dns-* options are implemented by the resolvconf package, if 
installed
  dns-nameservers X.X.X.X
  dns-search XXX.NET
  bridge_ports em1
  bridge_fd 0
  bridge_stp off
  bridge_maxwait 0

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1441443/+subscriptions



[Qemu-devel] [Bug 1736042] Re: qemu-system-x86_64 does not boot image reliably

2017-12-08 Thread liang yan
This does not look like a QEMU bug to me. You may drop "-curses" first and
run again. Once you get inside, change the grub file (/etc/default/grub)
by uncommenting GRUB_TERMINAL=console. It should work then. If it still
does not, blacklist vga16fb and add "fbcon=map:99 text" to the grub
command line. Remember to run update-grub after changing the configuration
file.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1736042

Title:
  qemu-system-x86_64 does not boot image reliably

Status in QEMU:
  New

Bug description:
  Booting the image as root with the following command succeeds only sporadically.

  ./qemu-system-x86_64 -enable-kvm -curses -smp cpus=4 -m 8192
  /root/ructfe2917_g.qcow2

  Most of the time it ends up stuck on "800x600 Graphic mode" (for as long
  as 4 hours before being killed), but about 1 out of 20 times it boots the
  image correctly (and instantly).

  This is visible in v2.5.0 build from sources, v2.5.0 from Ubuntu
  Xenial and v2.1.2 from Debian Jessie.

  The image in question was converted from vmdk using:

  qemu-img convert -O qcow2 file.vmdk file.qcow2

  The image contains Ubuntu with grub.

  I can provide debug logs, but will need guidance how to enable
  them(and what logs are necessary).

  As a side note, it seems that booting is more reliable after connecting
  (or mounting) a partition using qemu-nbd/mount.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1736042/+subscriptions



Re: [Qemu-devel] [PATCH v2] hw/display/xenfb: Simulate auto-repeat key events

2017-11-02 Thread Liang Yan
This patch does not work; my test environment was actually broken. I am
so sorry for the mess!
Please just ignore it.

On 11/2/17 1:40 PM, Daniel P. Berrange wrote:
> On Thu, Nov 02, 2017 at 05:26:50PM +, Peter Maydell wrote:
>> On 2 November 2017 at 17:18, Liang Yan <l...@suse.com> wrote:
>>> New tigervnc changes the way it sends a long-pressed key,
>>> from "down up down up ..." to "down down ... up"; this only
>>> affects xen pv console mode. I sent a patch to the latest
>>> kernel side, but it may need a fix in the qemu backend for
>>> backward compatibility, because guest VMs may use very old kernels.
>>> This patch inserts an up event after each regular key-down
>>> event to simulate an auto-repeat key event for the xen keyboard
>>> frontend driver.
>>>
>>> Signed-off-by: Liang Yan <l...@suse.com>
>>> ---
>>> v2:
>>> - exclude extended key
>>> - change log comment
>>>
>>>  hw/display/xenfb.c | 5 +
>>>  1 file changed, 5 insertions(+)
>>>
>>> diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
>>> index 8e2547ac05..1bc5b41ab7 100644
>>> --- a/hw/display/xenfb.c
>>> +++ b/hw/display/xenfb.c
>>> @@ -292,6 +292,11 @@ static void xenfb_key_event(void *opaque, int scancode)
>>>  }
>>>  trace_xenfb_key_event(opaque, scancode2linux[scancode], down);
>>>  xenfb_send_key(xenfb, down, scancode2linux[scancode]);
>>> +
>>> +/* insert an up event for regular down key event */
>>> +if (down && !xenfb->extended) {
>>> +xenfb_send_key(xenfb, 0, scancode2linux[scancode]);
>>> +}
>>>  }
>> This doesn't look to me like the right place to fix this bug.
>> The xenfb key event handler is just one QEMU keyboard backend
>> (in a setup where there are many possible sources of keyboard
>> events: vnc, gtk, SDL, cocoa UI frontends; and many possible
>> sinks: xenfb's key handling, ps2 keyboard emulator, etc etc).
You are right, this is not a good place. Sorry, I did not think it through
at first.
>> We need to be clear in our definition of generic QEMU key
>> events how key repeat is supposed to be handled, and then
>> every consumer and every producer needs to follow that.
That is good, but I do not think it can be done easily.
>> In the specific case of the vnc UI frontend, we need to
>> also look at what the VNC protocol specifies for key repeat.
>> That then tells us whether the bug to be fixed is in QEMU,
>> or in a particular VNC client.
> I'm somewhat inclined to say this is a Tigervnc bug. We fixed this
> same issue in GTK-VNC ~10 years ago. While X11 would send a sequence
> of press,release,press,release,  GTK would turn this into
> press,press,press,press,release which broke some VNC servers.
> So GTK-VNC undoes this optimization from GTK to ensure a full set
> of press,release,press,release pairs is always sent.
Yeah, I saw both here and there. This one looks like the reverse case.
> The official RFC for VNC does not specify any auto-repeat behaviour
>
>   https://tools.ietf.org/html/rfc6143#section-7.5.4
>
> The unofficial community authored extension to the RFC suggests
> taking the press,press,press,release approach to allow servers to
The kernel input subsystem seems to use this approach now.

Best,
Liang
> distinguish auto-repeat from manual repeat, but I'm not really
> convinced by that justification really
>
>   http://vncdotool.readthedocs.io/en/latest/rfbproto.html#keyevent
>
> Regards,
> Daniel




Re: [Qemu-devel] [PATCH v2] hw/display/xenfb: Simulate auto-repeat key events

2017-11-02 Thread Liang Yan


On 11/2/17 1:40 PM, Daniel P. Berrange wrote:
> On Thu, Nov 02, 2017 at 05:26:50PM +, Peter Maydell wrote:
>> On 2 November 2017 at 17:18, Liang Yan <l...@suse.com> wrote:
>>> New tigervnc changes the way it sends a long-pressed key,
>>> from "down up down up ..." to "down down ... up"; this only
>>> affects xen pv console mode. I sent a patch to the latest
>>> kernel side, but it may need a fix in the qemu backend for
>>> backward compatibility, because guest VMs may use very old kernels.
>>> This patch inserts an up event after each regular key-down
>>> event to simulate an auto-repeat key event for the xen keyboard
>>> frontend driver.
>>>
>>> Signed-off-by: Liang Yan <l...@suse.com>
>>> ---
>>> v2:
>>> - exclude extended key
>>> - change log comment
>>>
>>>  hw/display/xenfb.c | 5 +
>>>  1 file changed, 5 insertions(+)
>>>
>>> diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
>>> index 8e2547ac05..1bc5b41ab7 100644
>>> --- a/hw/display/xenfb.c
>>> +++ b/hw/display/xenfb.c
>>> @@ -292,6 +292,11 @@ static void xenfb_key_event(void *opaque, int scancode)
>>>  }
>>>  trace_xenfb_key_event(opaque, scancode2linux[scancode], down);
>>>  xenfb_send_key(xenfb, down, scancode2linux[scancode]);
>>> +
>>> +/* insert an up event for regular down key event */
>>> +if (down && !xenfb->extended) {
>>> +xenfb_send_key(xenfb, 0, scancode2linux[scancode]);
>>> +}
>>>  }
>> This doesn't look to me like the right place to fix this bug.
>> The xenfb key event handler is just one QEMU keyboard backend
>> (in a setup where there are many possible sources of keyboard
>> events: vnc, gtk, SDL, cocoa UI frontends; and many possible
>> sinks: xenfb's key handling, ps2 keyboard emulator, etc etc).
>>
>> We need to be clear in our definition of generic QEMU key
>> events how key repeat is supposed to be handled, and then
>> every consumer and every producer needs to follow that.
>> In the specific case of the vnc UI frontend, we need to
>> also look at what the VNC protocol specifies for key repeat.
>> That then tells us whether the bug to be fixed is in QEMU,
>> or in a particular VNC client.
> I'm somewhat inclined to say this is a Tigervnc bug. We fixed this
> same issue in GTK-VNC ~10 years ago. While X11 would send a sequence
> of press,release,press,release,  GTK would turn this into
> press,press,press,press,release which broke some VNC servers.
> So GTK-VNC undoes this optimization from GTK to ensure a full set
> of press,release,press,release pairs is always sent.
Tigervnc uses "press press press ... release" now; this problem is actually
because xenkbd cannot handle these repeat events. I sent a fix to the
front-end side, and this patch here is for backward compatibility only;
otherwise we would need to patch all those guest VMs even when running a
newer host.

Thanks,
Liang
> The official RFC for VNC does not specify any auto-repeat behaviour
>
>   https://tools.ietf.org/html/rfc6143#section-7.5.4
>
> The unofficial community authored extension to the RFC suggests
> taking the press,press,press,release approach to allow servers to
> distinguish auto-repeat from manual repeat, but I'm not really
> convinced by that justification really
>
>   http://vncdotool.readthedocs.io/en/latest/rfbproto.html#keyevent
>
> Regards,
> Daniel




Re: [Qemu-devel] [PATCH v2] hw/display/xenfb: Simulate auto-repeat key events

2017-11-02 Thread Liang Yan
Thanks for the reply.

On 11/2/17 1:26 PM, Peter Maydell wrote:
> On 2 November 2017 at 17:18, Liang Yan <l...@suse.com> wrote:
>> New tigervnc changes the way it sends a long-pressed key,
>> from "down up down up ..." to "down down ... up"; this only
>> affects xen pv console mode. I sent a patch to the kernel
>> side, but it may need a fix in the qemu backend for backward
>> compatibility because guest VMs may use very old kernels.
>> This patch inserts an up event after each regular key down
>> event to simulate an auto-repeat key event for xen keyboard
>> frontend driver.
>>
>> Signed-off-by: Liang Yan <l...@suse.com>
>> ---
>> v2:
>> - exclude extended key
>> - change log comment
>>
>>  hw/display/xenfb.c | 5 +++++
>>  1 file changed, 5 insertions(+)
>>
>> diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
>> index 8e2547ac05..1bc5b41ab7 100644
>> --- a/hw/display/xenfb.c
>> +++ b/hw/display/xenfb.c
>> @@ -292,6 +292,11 @@ static void xenfb_key_event(void *opaque, int scancode)
>>  }
>>  trace_xenfb_key_event(opaque, scancode2linux[scancode], down);
>>  xenfb_send_key(xenfb, down, scancode2linux[scancode]);
>> +
>> +/* insert an up event for regular down key event */
>> +if (down && !xenfb->extended) {
>> +xenfb_send_key(xenfb, 0, scancode2linux[scancode]);
>> +}
>>  }
> This doesn't look to me like the right place to fix this bug.
> The xenfb key event handler is just one QEMU keyboard backend
> (in a setup where there are many possible sources of keyboard
> events: vnc, gtk, SDL, cocoa UI frontends; and many possible
> sinks: xenfb's key handling, ps2 keyboard emulator, etc etc).
QEMU actually just forwards what it receives (vnc, sdl) to the
different backend handlers; usually the front end and the back (device)
end work together to handle those events. For this one, it could and
should be fixed in the front-end driver, but there are so many
different guest kernels, especially old versions, that patching them
all would be a real pain. That is why I came back to the backend side.
BTW, I saw the same logic in other places in qemu too.

Best,
Liang
> We need to be clear in our definition of generic QEMU key
> events how key repeat is supposed to be handled, and then
> every consumer and every producer needs to follow that.
> In the specific case of the vnc UI frontend, we need to
> also look at what the VNC protocol specifies for key repeat.
> That then tells us whether the bug to be fixed is in QEMU,
> or in a particular VNC client.
>
> thanks
> -- PMM
>
>




[Qemu-devel] [PATCH v2] hw/display/xenfb: Simulate auto-repeat key events

2017-11-02 Thread Liang Yan
New tigervnc changes the way it sends a long-pressed key,
from "down up down up ..." to "down down ... up"; this only
affects xen pv console mode. I sent a patch to the kernel
side, but it may need a fix in the qemu backend for backward
compatibility because guest VMs may use very old kernels.
This patch inserts an up event after each regular key down
event to simulate an auto-repeat key event for xen keyboard
frontend driver.

Signed-off-by: Liang Yan <l...@suse.com>
---
v2:
- exclude extended key
- change log comment

 hw/display/xenfb.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
index 8e2547ac05..1bc5b41ab7 100644
--- a/hw/display/xenfb.c
+++ b/hw/display/xenfb.c
@@ -292,6 +292,11 @@ static void xenfb_key_event(void *opaque, int scancode)
 }
 trace_xenfb_key_event(opaque, scancode2linux[scancode], down);
 xenfb_send_key(xenfb, down, scancode2linux[scancode]);
+
+/* insert an up event for regular down key event */
+if (down && !xenfb->extended) {
+xenfb_send_key(xenfb, 0, scancode2linux[scancode]);
+}
 }
 
 /*
-- 
2.14.2




[Qemu-devel] [PATCH] hw/display/xenfb: Simulate auto-repeat key events

2017-10-27 Thread Liang Yan
The new tigervnc server changes the way it sends a long-pressed key,
from "down up down up ..." to "down down ... up". So we insert
an up event after each key-down event to simulate auto-repeat
key events for the xen keyboard frontend driver.

Signed-off-by: Liang Yan <l...@suse.com>
---
 hw/display/xenfb.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
index 8e2547ac05..a5f787a3f3 100644
--- a/hw/display/xenfb.c
+++ b/hw/display/xenfb.c
@@ -292,6 +292,9 @@ static void xenfb_key_event(void *opaque, int scancode)
 }
 trace_xenfb_key_event(opaque, scancode2linux[scancode], down);
 xenfb_send_key(xenfb, down, scancode2linux[scancode]);
+if (down) { /* simulate auto-repeat key events */
+xenfb_send_key(xenfb, 0, scancode2linux[scancode]);
+}
 }
 
 /*
-- 
2.14.2




Re: [Qemu-devel] [Bug 1721788] Re: Failed to get shared "write" lock with 'qemu-img info'

2017-10-11 Thread Liang Yan
This does not affect only qemu-img; it also stops libvirt ""
from working when two VMs are running with a shared disk
image. Is there a workaround for this situation?

Best,
Liang

On 10/6/17 10:30 AM, Daniel Berrange wrote:
> I've just noticed, however, that '--force-share' appears totally
> undocumented in both CLI help output and the man page. So that's
> certainly something that should be fixed
>




[Qemu-devel] [PATCH] chardev/baum: fix baum that releases brlapi twice

2017-09-22 Thread Liang Yan
The error path of baum_chr_open needs to set baum->brlapi to NULL so it
won't get released twice in char_braille_finalize, which would cause:
"/usr/bin/qemu-system-x86_64: double free or corruption (!prev)"

Signed-off-by: Liang Yan <l...@suse.com>
---
 chardev/baum.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/chardev/baum.c b/chardev/baum.c
index 302dd9666c..67fd783a59 100644
--- a/chardev/baum.c
+++ b/chardev/baum.c
@@ -643,6 +643,7 @@ static void baum_chr_open(Chardev *chr,
 error_setg(errp, "brlapi__openConnection: %s",
brlapi_strerror(brlapi_error_location()));
 g_free(handle);
+baum->brlapi = NULL;
 return;
 }
 baum->deferred_init = 0;
-- 
2.14.1
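The fix above is the classic free-then-NULL pattern. Here is a minimal, self-contained sketch of that pattern; the struct and function names are illustrative stand-ins, not the actual brlapi/QEMU API:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative device state; "handle" stands in for baum->brlapi. */
struct dev_state {
    void *handle;
};

/* Open path that fails after allocating: freeing without clearing the
 * stored pointer would leave it dangling for a later finalize. */
int dev_open(struct dev_state *s)
{
    s->handle = malloc(64);
    if (!s->handle) {
        return -1;
    }
    /* ... suppose the connection attempt fails here ... */
    free(s->handle);
    s->handle = NULL;   /* the one-line fix: prevent a double free */
    return -1;
}

/* Finalize path that unconditionally frees the stored handle. */
void dev_finalize(struct dev_state *s)
{
    free(s->handle);    /* free(NULL) is a safe no-op */
    s->handle = NULL;
}
```

Without the `s->handle = NULL;` line in the failure path, `dev_finalize()` would free the same pointer a second time — exactly the "double free or corruption" abort described above.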




[Qemu-devel] [PATCH v2] hw/display/xenfb.c: Add trace_xenfb_key_event

2017-08-23 Thread Liang Yan
It may be helpful to add a trace event to monitor the last moment of
a key event as it passes from QEMU to the guest VM.

Signed-off-by: Liang Yan <l...@suse.com>
---
 hw/display/trace-events | 1 +
 hw/display/xenfb.c  | 1 +
 2 files changed, 2 insertions(+)

diff --git a/hw/display/trace-events b/hw/display/trace-events
index ed8cca0755..da498c1def 100644
--- a/hw/display/trace-events
+++ b/hw/display/trace-events
@@ -6,6 +6,7 @@ jazz_led_write(uint64_t addr, uint8_t new) "write 
addr=0x%"PRIx64": 0x%x"
 
 # hw/display/xenfb.c
 xenfb_mouse_event(void *opaque, int dx, int dy, int dz, int button_state, int 
abs_pointer_wanted) "%p x %d y %d z %d bs 0x%x abs %d"
+xenfb_key_event(void *opaque, int scancode, int button_state) "%p scancode %d 
bs 0x%x"
 xenfb_input_connected(void *xendev, int abs_pointer_wanted) "%p abs %d"
 
 # hw/display/g364fb.c
diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
index df8b78f6f4..8e2547ac05 100644
--- a/hw/display/xenfb.c
+++ b/hw/display/xenfb.c
@@ -290,6 +290,7 @@ static void xenfb_key_event(void *opaque, int scancode)
scancode |= 0x80;
xenfb->extended = 0;
 }
+trace_xenfb_key_event(opaque, scancode2linux[scancode], down);
 xenfb_send_key(xenfb, down, scancode2linux[scancode]);
 }
 
-- 
2.14.1




[Qemu-devel] [PATCH] hw/display/xenfb.c: Add trace_xenfb_key_event

2017-08-23 Thread Liang Yan
It may be helpful to add a trace event to monitor the last moment of
a key event as it passes from QEMU to the guest VM.

Signed-off-by: Liang Yan <l...@suse.com>
---
 hw/display/trace-events | 1 +
 hw/display/xenfb.c  | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/hw/display/trace-events b/hw/display/trace-events
index ed8cca0755..da498c1def 100644
--- a/hw/display/trace-events
+++ b/hw/display/trace-events
@@ -6,6 +6,7 @@ jazz_led_write(uint64_t addr, uint8_t new) "write 
addr=0x%"PRIx64": 0x%x"
 
 # hw/display/xenfb.c
 xenfb_mouse_event(void *opaque, int dx, int dy, int dz, int button_state, int 
abs_pointer_wanted) "%p x %d y %d z %d bs 0x%x abs %d"
+xenfb_key_event(void *opaque, int scancode, int button_state) "%p scancode %d 
bs 0x%x"
 xenfb_input_connected(void *xendev, int abs_pointer_wanted) "%p abs %d"
 
 # hw/display/g364fb.c
diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
index df8b78f6f4..afc43f33a2 100644
--- a/hw/display/xenfb.c
+++ b/hw/display/xenfb.c
@@ -290,6 +290,8 @@ static void xenfb_key_event(void *opaque, int scancode)
scancode |= 0x80;
xenfb->extended = 0;
 }
+
+trace_xenfb_key_event(opaque, scancode2linux[scancode], down);
 xenfb_send_key(xenfb, down, scancode2linux[scancode]);
 }
 
-- 
2.14.1




Re: [Qemu-devel] [Bug 1686390] [NEW] vnc server closed socket after arrow "down" keyevent

2017-04-26 Thread Liang Yan


On 4/26/17 9:12 AM, leon wrote:
> Public bug reported:
>
> This is a rewrite for https://bugs.launchpad.net/qemu/+bug/1670377
>
> QEMU 2.6 or later
> tigervncviwer 1.6  
>
> Once get into grub boot interface(choose boot os, or recovery mode),
> keep pressing arrow down button for couple times, qemu will close the
> connection, vnc used zrle mode.
One correction: hold the "down" key pressed and do not release it
for a while; then the connection will be closed.
>
> Interesting place:
> 1. when stopped at grub interface, only arrow up and down key could trigger 
> it, 
> 2.  only in zrle or tight mode, could work well in raw mode
> 2. it only triggered by remote connection, not happen if local vncviewer and 
> vnc server
According to the trace file, it looks like the socket is closed right
after qio_channel_socket_writev, so something may be going wrong when
updating the framebuffer in zrle mode. Does anyone know how to trace
the buffer size change?


46531@1493059183.573496:ps2_put_keycode 0x55f2943ae3b0 keycode 208
46531@1493059183.573498:system_wakeup_request reason=3
46531@1493059183.573500:kvm_vm_ioctl type 0xc008ae67, arg 0x7ffe75347160
46531@1493059183.573503:apic_report_irq_delivered coalescing 32932
46531@1493059183.573505:kvm_vm_ioctl type 0xc008ae67, arg 0x7ffe75347180
46531@1493059183.573507:apic_report_irq_delivered coalescing 32934
46531@1493059183.573509:input_event_sync
46531@1493059183.573518:buffer_resize vnc-input/0x55f293ec3c30: old
4096, new 4096
46531@1493059183.573521:object_class_dynamic_cast_assert
qio-channel-socket->qio-channel (io/channel.c:60:qio_channel_writev_full)
46531@1493059183.573524:object_dynamic_cast_assert
qio-channel-socket->qio-channel-socket
(io/channel-socket.c:508:qio_channel_socket_writev)
46531@1493059183.573528:object_class_dynamic_cast_assert
qio-channel-socket->qio-channel (io/channel.c:123:qio_channel_close)
46531@1493059183.573531:object_dynamic_cast_assert
qio-channel-socket->qio-channel-socket
(io/channel-socket.c:688:qio_channel_socket_close)

Thanks,
Liang
>
> A trace is attached.
>
> Thanks
>
> ** Affects: qemu
>  Importance: Undecided
>  Status: New
>
> ** Attachment added: "qemu trace file"
>
> https://bugs.launchpad.net/bugs/1686390/+attachment/4868238/+files/trace.txt
>





Re: [Qemu-devel] some error when compile qemu

2016-03-15 Thread liang yan



On 03/15/2016 10:46 AM, Alex Bennée wrote:

高强  writes:


Hi,alls

I am compiling qemu on ubuntu 12.04; when running "make", some errors
appear:

Start by ensuring you have all your build dependencies installed. On
Ubuntu:

 apt-get build-dep qemu


Yes, according to your error message, you may be missing the libnuma.so library.

migration/rdma.c: In function 'qemu_rdma_dump_id':
migration/rdma.c:738:21: error: 'struct ibv_port_attr' has no member named 'link_layer'
migration/rdma.c:739:22: error: 'struct ibv_port_attr' has no member named 'link_layer'
migration/rdma.c:739:37: error: 'IBV_LINK_LAYER_INFINIBAND' undeclared (first use in this function)
migration/rdma.c:739:37: note: each undeclared identifier is reported only once for each function it appears in
migration/rdma.c:740:24: error: 'struct ibv_port_attr' has no member named 'link_layer'
migration/rdma.c:740:39: error: 'IBV_LINK_LAYER_ETHERNET' undeclared (first use in this function)
migration/rdma.c: In function 'qemu_rdma_broken_ipv6_kernel':
migration/rdma.c:839:26: error: 'struct ibv_port_attr' has no member named 'link_layer'
migration/rdma.c:839:41: error: 'IBV_LINK_LAYER_INFINIBAND' undeclared (first use in this function)
migration/rdma.c:841:33: error: 'struct ibv_port_attr' has no member named 'link_layer'
migration/rdma.c:841:48: error: 'IBV_LINK_LAYER_ETHERNET' undeclared (first use in this function)
migration/rdma.c:880:18: error: 'struct ibv_port_attr' has no member named 'link_layer'
make: *** [migration/rdma.o] Error 1

Looking at the source code, maybe some header file is missing. Does
anybody know how to deal with it?

Thanks


--
Alex Bennée







Re: [Qemu-devel] [edk2] Could not add PCI device with big memory to aarch64 VMs

2015-12-02 Thread liang yan

Hi, Laszlo,

On 11/30/2015 06:45 PM, Laszlo Ersek wrote:

On 12/01/15 01:46, liang yan wrote:

Hello, Laszlo,

On 11/30/2015 03:05 PM, Laszlo Ersek wrote:

[snip]


If you need more room (with large alignments), then there's no way
around supporting QEMU's 64 bit aperture, VIRT_PCIE_MMIO_HIGH (see my
earlier email).

I checked the function create_pcie from pathtoqemu/hw/arm/virt.c; it has
a flag use_highmem (which defaults to "true").

It sets base_mmio_high and size_mmio_high in the device tree via the call below:

 qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
  1, FDT_PCI_RANGE_IOPORT, 2, 0,
  2, base_pio, 2, size_pio,
  1, FDT_PCI_RANGE_MMIO, 2, base_mmio,
  2, base_mmio, 2, size_mmio,
  1, FDT_PCI_RANGE_MMIO_64BIT,
  2, base_mmio_high,
  2, base_mmio_high, 2, size_mmio_high);

So basically, I need to add two UINT64 PCDs, something like mmio_high_base
and mmio_high_size, in the function ProcessPciHost (VirtFdtDxe.c),
and try to use this high base address and size as the new aperture.

Is this correct?

It is correct, but that's only part of the story.

Parsing the 64-bit aperture from the DTB into new PCDs in
ArmVirtPkg/VirtFdtDxe is the easy part.

The hard part is modifying ArmVirtPkg/PciHostBridgeDxe, so that BAR
allocation requests (submitted by the platform independent PCI bus
driver that resides in "MdeModulePkg/Bus/Pci/PciBusDxe") are actually
serviced from this high aperture too.


Unfortunately I can't readily help with that in the
"ArmVirtPkg/PciHostBridgeDxe" driver; there's no such (open-source)
example in the edk2 tree. Of course, I could experiment with it myself
-- only not right now.

If possible, I do want to finish this part or help you finish it. I have
only recently started working on UEFI; thank you so much for your patient
and detailed explanation. I really appreciate it.

I guess copying and adapting the TypeMem32 logic to TypeMem64 (currently
short-circuited with EFI_ABORTED) could work.

Is 32-bit vs. 64-bit determined by the BAR (bits 2-3) or by the PCI
device's memory size? Is there an option in QEMU?

I can't tell. :)


Does TypeMem32 still keep the "VIRT_PCIE_MMIO" aperture while TypeMem64
uses the "VIRT_PCIE_MMIO_HIGH" aperture? Or is it more of a device
property controlled by QEMU's device simulation?

Good question. I don't know. I think in order to answer this question,
we should understand the whole dance between the PCI root bridge / host
bridge driver and the generic PCI bus driver.

The documentation I know of is in the Platform Init spec, version 1.4,
Volume 5, Chapter 10 "PCI Host Bridge". I've taken multiple stabs at
that chapter earlier, but I've always given up.

Sorry I can't help more, but this is new area for me as well.

No, already a big help, Really appreciate your generous sharing.

I also have a problem from guest vm kernel side.

Even if we change the UEFI side, does the guest kernel still need to be
modified? I noticed that the kernel will do a rescan; see the 256M
example below.


On the UEFI side, there are two setups; I am not sure which is the real one.

PciBus: Discovered PCI @ [00|01|00]
   BAR[0]: Type =  Mem32; Alignment = 0xFFF;Length = 0x100; Offset 
= 0x10
   BAR[1]: Type =  Mem32; Alignment = 0xFFF;Length = 0x1000; Offset 
= 0x14
   BAR[2]: Type = PMem64; Alignment = 0xFFF;Length = 
0x1000;Offset = 0x18

 = PMem64 here


PciBus: Resource Map for Root Bridge PciRoot(0x0)
Type =  Mem32; Base = 0x2000;Length = 0x1010; Alignment = 
0xFFF
   Base = 0x2000;Length = 0x1000;Alignment = 
0xFFF;Owner = PCI [00|01|00:18]; Type = PMem32 
>Mem32 here
   Base = 0x3000;Length = 0x1000;Alignment = 0xFFF; Owner = 
PCI [00|01|00:14]
   Base = 0x30001000;Length = 0x100;Alignment = 0xFFF; Owner = 
PCI [00|01|00:10]



but on the kernel side it becomes 64-bit pref:

[3.005355] pci_bus :00: root bus resource [mem 
0x1000-0x3efe window]

[3.006028] pci_bus :00: root bus resource [bus 00-0f]
.
.
.
[3.135847] pci :00:01.0: BAR 2: assigned [mem 
0x1000-0x1fff 64bit pref]

[3.137099] pci :00:01.0: BAR 1: assigned [mem 0x2000-0x2fff]
[3.137382] pci :00:01.0: BAR 0: assigned [mem 0x20001000-0x200010ff]



Also, I found that [mem 0x80 window] was ignored,

[2.769608] PCI ECAM: ECAM for domain  [bus 00-0f] at [mem 
0x3f00-0x3fff] (base 0x3f00)

[2.962930] ACPI: PCI Root Bridge [PCI0] (domain  [bus 00-0f])
[2.965787] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM 
ClockPM Segments MSI]
[2.990794] acpi PNP0A08:00: _OSC: OS now controls [PCIe

Re: [Qemu-devel] Could not add PCI device with big memory to aarch64 VMs

2015-11-30 Thread liang yan



On 11/04/2015 05:53 PM, Laszlo Ersek wrote:

On 11/04/15 23:22, liang yan wrote:

Hello, Laszlo,


(2) There is also a problem: once I use a memory size bigger than 256M
for ivshmem, it cannot get through UEFI.
The error message is:

PciBus: Discovered PCI @ [00|01|00]
BAR[0]: Type =  Mem32; Alignment = 0xFFF;Length = 0x100; Offset =
0x10
BAR[1]: Type =  Mem32; Alignment = 0xFFF;Length = 0x1000; Offset
= 0x14
BAR[2]: Type = PMem64; Alignment = 0x3FFF;Length =
0x4000;Offset = 0x18

PciBus: HostBridge->SubmitResources() - Success
ASSERT
/home/liang/studio/edk2/ArmVirtPkg/PciHostBridgeDxe/PciHostBridge.c(449): 
((BOOLEAN)(0==1))


I am wondering if there is a memory limitation for pcie devices under
the Qemu environment?


Thank you in advance; any information would be appreciated.

(CC'ing Ard.)

"Apparently", the firmware-side counterpart of QEMU commit 5125f9cd2532
has never been contributed to edk2.

Therefore the the ProcessPciHost() function in
"ArmVirtPkg/VirtFdtDxe/VirtFdtDxe.c" ignores the
DTB_PCI_HOST_RANGE_MMIO64 type range from the DTB. (Thus only
DTB_PCI_HOST_RANGE_MMIO32 is recognized as PCI MMIO aperture.)

However, even if said driver was extended to parse the new 64-bit
aperture into PCDs (which wouldn't be hard), the
ArmVirtPkg/PciHostBridgeDxe driver would still have to be taught to look
at that aperture (from the PCDs) and to serve MMIO BAR allocation
requests from it. That could be hard.

Please check edk2 commits e48f1f15b0e2^..e5ceb6c9d390, approximately,
for the background on the current code. See also chapter 13 "Protocols -
PCI Bus Support" in the UEFI spec.

Patches welcome. :)

(A separate note on ACPI vs. DT: the firmware forwards *both* from QEMU
to the runtime guest OS. However, the firmware parses only the DT for
its own purposes.)

Hello, Laszlo,

Thanks for your advice above; it's very helpful.

When debugging, I also found some problems with 32-bit PCI devices.
I hope I could get some clues from you.

I checked 512M, 1G, and 2G devices. (4G returns an invalid parameter
error, so I think it may be treated as a 64-bit device; is this right?)



First,

All devices start from base address 3EFE.

ProcessPciHost: Config[0x3F00+0x100) Bus[0x0..0xF] 
Io[0x0+0x1)@0x3EFF Mem[0x1000+0x2EFF)@0x0


PcdPciMmio32Base is  1000=
PcdPciMmio32Size is  2EFF=


Second,

It could not get new base address when searching memory space in GCD map.

For 512M devices,

*BaseAddress = (*BaseAddress + 1 - Length) & (~AlignmentMask);

BaseAddress is 3EFE==
new BaseAddress is 1EEF==
~AlignmentMask is E000==
Final BaseAddress is 

Status = CoreSearchGcdMapEntry (*BaseAddress, Length, , 
, Map);




For bigger devices:

allocation stops when searching the memory space because of the code
below: Length will be bigger than MaxAddress (3EFE)


if ((Entry->BaseAddress + Length) > MaxAddress) {
 continue;
}


I also checked on ArmVirtQemu.dsc which all set to 0.

  gArmPlatformTokenSpaceGuid.PcdPciBusMin|0x0
  gArmPlatformTokenSpaceGuid.PcdPciBusMax|0x0
  gArmPlatformTokenSpaceGuid.PcdPciIoBase|0x0
  gArmPlatformTokenSpaceGuid.PcdPciIoSize|0x0
  gArmPlatformTokenSpaceGuid.PcdPciIoTranslation|0x0
  gArmPlatformTokenSpaceGuid.PcdPciMmio32Base|0x0
  gArmPlatformTokenSpaceGuid.PcdPciMmio32Size|0x0
  gEfiMdePkgTokenSpaceGuid.PcdPciExpressBaseAddress|0x0


Do you think I should change PcdPciMmio32Base and PcdPciMmio32Size,
or make some changes to the GCD entry list, so that it can allocate
resources for PCI devices (CoreSearchGcdMapEntry)?



Looking forward to your reply.


Thanks,
Liang


Thanks
Laszlo






Re: [Qemu-devel] [edk2] Could not add PCI device with big memory to aarch64 VMs

2015-11-30 Thread liang yan

Hello, Laszlo,

On 11/30/2015 03:05 PM, Laszlo Ersek wrote:

On 11/30/15 19:45, liang yan wrote:


On 11/04/2015 05:53 PM, Laszlo Ersek wrote:

On 11/04/15 23:22, liang yan wrote:

Hello, Laszlo,


(2) There is also a problem: once I use a memory size bigger than 256M
for ivshmem, it cannot get through UEFI.
The error message is:

PciBus: Discovered PCI @ [00|01|00]
 BAR[0]: Type =  Mem32; Alignment = 0xFFF;Length = 0x100;
Offset =
0x10
 BAR[1]: Type =  Mem32; Alignment = 0xFFF;Length = 0x1000; Offset
= 0x14
 BAR[2]: Type = PMem64; Alignment = 0x3FFF;Length =
0x4000;Offset = 0x18

PciBus: HostBridge->SubmitResources() - Success
ASSERT
/home/liang/studio/edk2/ArmVirtPkg/PciHostBridgeDxe/PciHostBridge.c(449):
((BOOLEAN)(0==1))


I am wondering if there is a memory limitation for pcie devices under
the Qemu environment?


Thank you in advance; any information would be appreciated.

(CC'ing Ard.)

"Apparently", the firmware-side counterpart of QEMU commit 5125f9cd2532
has never been contributed to edk2.

Therefore the the ProcessPciHost() function in
"ArmVirtPkg/VirtFdtDxe/VirtFdtDxe.c" ignores the
DTB_PCI_HOST_RANGE_MMIO64 type range from the DTB. (Thus only
DTB_PCI_HOST_RANGE_MMIO32 is recognized as PCI MMIO aperture.)

However, even if said driver was extended to parse the new 64-bit
aperture into PCDs (which wouldn't be hard), the
ArmVirtPkg/PciHostBridgeDxe driver would still have to be taught to look
at that aperture (from the PCDs) and to serve MMIO BAR allocation
requests from it. That could be hard.

Please check edk2 commits e48f1f15b0e2^..e5ceb6c9d390, approximately,
for the background on the current code. See also chapter 13 "Protocols -
PCI Bus Support" in the UEFI spec.

Patches welcome. :)

(A separate note on ACPI vs. DT: the firmware forwards *both* from QEMU
to the runtime guest OS. However, the firmware parses only the DT for
its own purposes.)

Hello, Laszlo,

Thanks for your advice above; it's very helpful.

When debugging, I also found some problems with 32-bit PCI devices.
I hope I could get some clues from you.

I checked 512M, 1G, and 2G devices. (4G returns an invalid parameter
error, so I think it may be treated as a 64-bit device; is this right?)

I guess so.



First,

All devices start from base address 3EFE.

According to the below:


ProcessPciHost: Config[0x3F00+0x100) Bus[0x0..0xF]
Io[0x0+0x1)@0x3EFF Mem[0x1000+0x2EFF)@0x0

the address you mention (0x3EFE) is the *highest* inclusive
guest-phys address that an MMIO BAR can take. Not sure if that's what
you meant.


Yes, you are right; the current allocation is
EfiGcdAllocateMaxAddressSearchTopDown, so the base address here is the
highest inclusive address.



The size of the MMIO aperture for the entire PCI host is 0x2EFF
bytes: a little less than 752 MB. So devices that need 1G and 2G MMIO
BARs have no chance.


PcdPciMmio32Base is  1000=
PcdPciMmio32Size is  2EFF=


Second,

It could not get new base address when searching memory space in GCD map.

For 512M devices,

*BaseAddress = (*BaseAddress + 1 - Length) & (~AlignmentMask);

This seems to be from CoreAllocateSpace()
[MdeModulePkg/Core/Dxe/Gcd/Gcd.c]. AlignmentMask is computed from the
Alignment input parameter.

Which in turn seems to come from the BitsOfAlignment parameter computed
in NotifyPhase(), "ArmVirtPkg/PciHostBridgeDxe/PciHostBridge.c".


BaseAddress is 3EFE==

So this is the highest address (inclusive) where the 512MB BAR could end.


new BaseAddress is 1EEF==

This is the highest address (inclusive) where the 512MB BAR could start.

This should be rounded down to a 512MB alignment (I believe), and then
checked to see whether that is still in the MMIO aperture.

512MB is 0x2000_.

Rounding 0x1EEF_ down to an integral multiple of 0x2000_ results
in zero:


~AlignmentMask is E000==
Final BaseAddress is 

And that address does not fall into the MMIO aperture.

In other words, although your size requirement of 512MB could be
theoretically satisfied from the aperture (which extends from exactly
256 MB to a little lower than 1008 MB), if you *also* require the base
address to be aligned at 512MB, then that cannot be satisfied.
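The arithmetic above can be checked with a few lines of C. The exact figures are truncated in the quoted log, so the aperture top used below (0x3EFEFFFF, i.e. just under 1008 MB) is an assumption consistent with the surrounding text; the point it demonstrates is that any candidate start below 512 MB rounds down to 0, outside the aperture:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the GCD top-down computation quoted above:
 *   *BaseAddress = (*BaseAddress + 1 - Length) & (~AlignmentMask); */
uint64_t align_down_base(uint64_t max_address, uint64_t length,
                         uint64_t alignment_mask)
{
    /* Highest address at which a BAR of this size could start... */
    uint64_t highest_start = max_address + 1 - length;
    /* ...rounded down to the required alignment. */
    return highest_start & ~alignment_mask;
}
```

Because the aperture spans 256 MB up to a little under 1008 MB, the highest possible start for a 512 MB BAR is below 512 MB, and rounding it down to a 512 MB boundary lands at 0 — hence the allocation failure.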

Thanks for the detailed explanation above; I will write emails with
detailed information too. I really appreciate it.



Status = CoreSearchGcdMapEntry (*BaseAddress, Length, ,
, Map);



For bigger devices:

allocation stops when searching the memory space because of the code
below: Length will be bigger than MaxAddress (3EFE)

if ((Entry->BaseAddress + Length) > MaxAddress) {
  continue;
}


I also checked on ArmVirtQemu.dsc which all set to 0.

   gArmPlatformTokenSpaceGuid.PcdPciBusMin|0x0
   gArmPlatformTokenSpaceGuid.PcdPciBusMax|0x0
   gArmPlatformTokenSpaceGuid.PcdPciIoBa

[Qemu-devel] Could not add PCI device with big memory to aarch64 VMs

2015-11-04 Thread liang yan

Hello, Laszlo,

(1) I am trying to add an ivshmem device (a PCI device with big memory)
to my aarch64 VM.
So far, I can find the device information from the VM, but it seems the
VM did not create the correct resource file for this device. Do you have
any idea why this happens?


I used the upstream EDK2 to build my UEFI firmware.

There are three BARs for this device, and the memory map is assigned
too, but only one resource file is created.

My qemu supports ACPI 5.1 and the command line is :

  -device ivshmem,size=256M,chardev=ivshmem,msi=on,ioeventfd=on \
  -chardev socket,path=/tmp/ivshmem_socket,id=ivshmem \

The lspci information:

00:00.0 Host bridge: Red Hat, Inc. Device 0008
Subsystem: Red Hat, Inc Device 1100
Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR- FastB2B- DisINTx-
Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- 
SERR- 

00:01.0 RAM memory: Red Hat, Inc Inter-VM shared memory
Subsystem: Red Hat, Inc QEMU Virtual Machine
Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- 
SERR- 
Interrupt: pin A routed to IRQ 255
Region 0: Memory at 20001000 (32-bit, non-prefetchable) [disabled] 
[size=256]
Region 1: Memory at 2000 (32-bit, non-prefetchable) [disabled] 
[size=4K]
Region 2: Memory at 1000 (64-bit, prefetchable) [disabled] 
[size=256M]

Capabilities: [40] MSI-X: Enable- Count=1 Masked-
Vector table: BAR=1 offset=
PBA: BAR=1 offset=0800

Boot information:

[2.380924] pci :00:01.0: BAR 2: assigned [mem 
0x1000-0x1fff 64bit pref]

[2.382836] pci :00:01.0: BAR 1: assigned [mem 0x2000-0x2fff]
[2.383557] pci :00:01.0: BAR 0: assigned [mem 0x20001000-0x200010ff]


Files under /sys/devices/pci:00/:00:01.0

broken_parity_status  devspec   local_cpus  resource
class  dma_mask_bitsmodaliassubsystem
config  driver_override  msi_bus subsystem_device
consistent_dma_mask_bits  enable   power   subsystem_vendor
d3cold_allowed  irq   remove  uevent
device  local_cpulistrescan  vendor

Information for resource:

0x20001000 0x200010ff 0x00040200
0x2000 0x2fff 0x00040200
0x1000 0x1fff 0x0014220c
0x 0x 0x
0x 0x 0x
0x 0x 0x
0x 0x 0x




(2) There is also a problem: once I use a memory size bigger than 256M
for ivshmem, it cannot get through UEFI.

The error message is:

PciBus: Discovered PCI @ [00|01|00]
   BAR[0]: Type =  Mem32; Alignment = 0xFFF;Length = 0x100; Offset 
= 0x10
   BAR[1]: Type =  Mem32; Alignment = 0xFFF;Length = 0x1000; Offset 
= 0x14
   BAR[2]: Type = PMem64; Alignment = 0x3FFF;Length = 
0x4000;Offset = 0x18


PciBus: HostBridge->SubmitResources() - Success
ASSERT 
/home/liang/studio/edk2/ArmVirtPkg/PciHostBridgeDxe/PciHostBridge.c(449): ((BOOLEAN)(0==1))


I am wondering if there is a memory limitation for pcie devices under
the Qemu environment?



Thank you in advance; any information would be appreciated.



Best,
Liang



Re: [Qemu-devel] [PATCH v9 00/24] Generate ACPI v5.1 tables and expose them to guest over fw_cfg on ARM

2015-10-13 Thread liang yan

Hello, Laszlo,

On 10/10/15 00:34, liang yan wrote:
>/Hello, Shannon,/
>/> From: Shannon Zhao <address@hidden>/
>/>/
>/> This patch series generate seven ACPI tables for machine virt on ARM./
>/> The set of generated tables are:/
>/> - RSDP/
>/> - RSDT/
>/> - MADT/
>/> - GTDT/
>/> - FADT/
>/> - DSDT/
>/> - MCFG (For PCIe host bridge)/
>/>/
>/> These tables are created dynamically using the function of aml-build.c,/
>/> taking into account the needed information passed from the virt machine/
>/> model. When the generation is finalized, it use fw_cfg to expose the/
>/> tables to guest./
>/>/
>/> You can fetch this from following repo:/
>/> http://git.linaro.org/people/shannon.zhao/qemu.git ACPI_ARM_v9/
>/>/
>/> And this patchset refers to Alexander Spyridakis's patches which are/
>/> sent to qemu-devel mailing list before./
>/> /
>/> http://lists.gnu.org/archive/html/qemu-devel/2014-10/msg03987.html/
>/>/
>/> Thanks to Laszlo's work on UEFI (ArmVirtualizationQemu) supporting/
>/> downloading ACPI tables over fw_cfg, we now can use ACPI in VM./
>/>/
>/> Now upstream kernel applies ACPI patchset, so we can boot it with ACPI,/
>/> while we need to apply patches[1] to make tty work, patch[2] to make/
>/> virtio-mmio work and apply patch[3] and the relevant patches to make 
PCI/

>/> devices works, e.g. virtio-net-pci, e1000./
>/> On the other hand, you can directly use the Fedora Linux kernel from/
>/> following address:/
>/> https://git.fedorahosted.org/cgit/kernel-arm64.git/log/?h=devel/
>/>/
>/> I've done test with following VM:/
>/> xp, windows2008, sles11 on X86/
>/> upstream kernel and Fedora Linux kernel on ARM64/
>/>/
>/> In addtion, dump all the acpi tables, use iasl -d *.dat to convert to/
>/> *.asl and use iasl -tc *.asl to compile them to *.hex. No error 
appears./

>/>/
>/> If you want to test, you could get kernel Image from [4] which contains/
>/> uart, virtio-mmio, pci drivers, UEFI binary from [5] and Qemu command/
>/> line example from [6]./
>/I tested with your kernel and bios, all runs well. But when I try to/
>/build a new debian(upstream) with your qemu patch and bios,/
>/it always told me could find the right driver, or could not enable ACPI/
>/from kernel command line. Do you have a full vm for fedora or/
>/you just use the kernel there? Could you tell me more about your detail?/
>/Thanks./
>//
>/Also, we have our own EDK-II, and it could not work now, so I need to do/
>/patches too. Do you mind telling me how you build your QEMU.fd? Where/
>/can I access those source code? Thanks./

The relevant edk2 patches have been in the upstream repo for quite some
time now; you shouldn't need anything extra.

I built a new QEMU.fd file, and it worked fine. Thanks for your reply.

Best,
Liang


You can clone the repo from <https://github.com/tianocore/edk2.git>.

Build instructions are written up for example in the linaro wiki
<https://wiki.linaro.org/LEG/UEFIforQEMU>, but someone else asked about
the same just the other day on the edk2 mailing list, and I answered there:

http://news.gmane.org/address@hidden

Wrt. the QEMU command line, I recommend something like:

   # recreate first flash drive from most recent firmware build
   cat \
 .../Build/ArmVirtQemu-AARCH64/DEBUG_GCC48/FV/QEMU_EFI.fd \
 /dev/zero \
   | head -c $((64 * 1024 * 1024)) >| flash0.img

   # create second flash drive (varstore) if it doesn't exist
   if ! [ -e flash1.img ]; then
 head -c $((64 * 1024 * 1024)) /dev/zero > flash1.img
   fi

   # launch qemu (TCG)
   .../qemu-system-aarch64 \
 -nodefaults \
 -nodefconfig \
 -nographic \
 \
 -m 2048 \
 -cpu cortex-a57 \
 -M virt \
 \
 -drive if=pflash,format=raw,file=flash0.img,readonly \
 -drive if=pflash,format=raw,file=flash1.img \
 \
 -chardev stdio,signal=off,mux=on,id=char0 \
 -mon chardev=char0,mode=readline,default \
 -serial chardev:char0 \
 \
 ...

These commands are appropriate for a "persistent" virtual machine (ie.
one where you want to preserve the non-volatile UEFI variables from boot
to boot).

If you want to start again with an empty varstore, just delete
"flash1.img". (Normally, you'd only do that when also zapping the VM's
system disk.)

However, if you can (and are willing to) use libvirt, I certainly
recommend that you do. See eg.
<https://fedoraproject.org/wiki/Architectures/AArch64/Install_with_QEMU>
(although virt-install has become even more convenient since then).

HTH
Laszlo


>
> Best,
> Liang
>
> > [1] http://git.linaro.org/leg/acpi/acpi.git/shortlog/refs/heads/acpi-sbsa
> > [2

Re: [Qemu-devel] [PATCH v9 00/24] Generate ACPI v5.1 tables and expose them to guest over fw_cfg on ARM

2015-10-09 Thread liang yan

Hello, Shannon,

From: Shannon Zhao 

This patch series generate seven ACPI tables for machine virt on ARM.
The set of generated tables are:
- RSDP
- RSDT
- MADT
- GTDT
- FADT
- DSDT
- MCFG (For PCIe host bridge)

These tables are created dynamically using the functions of aml-build.c,
taking into account the needed information passed from the virt machine
model. When the generation is finalized, it uses fw_cfg to expose the
tables to the guest.
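Each of the tables listed above carries the standard ACPI header whose
checksum byte must make the whole table sum to zero modulo 256 — the byte
that generators patch last, after the table body is final. Below is a
minimal, generic Python sketch of that rule; it is not QEMU's actual
aml-build code, and the helper names and sample values are invented for
illustration:

```python
# Hedged sketch (not QEMU's aml-build code): every ACPI table begins with
# a 36-byte header whose "Checksum" byte (offset 9) makes the sum of all
# bytes in the table equal 0 modulo 256.
import struct

def make_acpi_header(signature: bytes, length: int, revision: int = 1,
                     oem_id: bytes = b"BOCHS ") -> bytearray:
    """Build a 36-byte ACPI table header with a zeroed checksum field."""
    assert len(signature) == 4 and len(oem_id) == 6
    hdr = bytearray(36)
    hdr[0:4] = signature
    struct.pack_into("<I", hdr, 4, length)   # total table length in bytes
    hdr[8] = revision
    hdr[9] = 0                               # checksum, patched last
    hdr[10:16] = oem_id
    return hdr

def patch_checksum(table: bytearray) -> None:
    """Set byte 9 so the whole table sums to 0 mod 256."""
    table[9] = 0
    table[9] = (-sum(table)) & 0xFF

def checksum_ok(table: bytes) -> bool:
    return sum(table) & 0xFF == 0

table = make_acpi_header(b"GTDT", 36)
patch_checksum(table)
print(checksum_ok(table))   # True
```

The same zero-sum rule is what firmware and tools like iasl verify when
they load a table, which is why a stale checksum shows up as a corrupt
table in the guest.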

You can fetch this from following repo:
 http://git.linaro.org/people/shannon.zhao/qemu.git   ACPI_ARM_v9

And this patchset refers to Alexander Spyridakis's patches which are
sent to qemu-devel mailing list before.
 http://lists.gnu.org/archive/html/qemu-devel/2014-10/msg03987.html

Thanks to Laszlo's work on UEFI (ArmVirtualizationQemu) supporting
downloading ACPI tables over fw_cfg, we can now use ACPI in the VM.

The upstream kernel has now applied the ACPI patchset, so we can boot it
with ACPI, but we still need to apply patches [1] to make tty work,
patch [2] to make virtio-mmio work, and patch [3] plus the relevant patches
to make PCI devices work, e.g. virtio-net-pci, e1000.
On the other hand, you can directly use the Fedora Linux kernel from
following address:
https://git.fedorahosted.org/cgit/kernel-arm64.git/log/?h=devel

I've done tests with the following VMs:
xp, windows2008, sles11 on X86
upstream kernel and Fedora Linux kernel on ARM64

In addition, dump all the acpi tables, use iasl -d *.dat to convert to
*.asl and use iasl -tc *.asl to compile them to *.hex. No error appears.
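As a side note on what `iasl -d` reads first from those raw *.dat dumps:
each dump begins with the standard 36-byte ACPI header (signature, length,
revision, checksum, OEM ID, ...). A hedged Python sketch parsing that
header from synthetic bytes — the sample values below are invented, not
taken from a real dump:

```python
# Hedged sketch: parse the leading fields of the standard 36-byte ACPI
# table header that tools like "iasl -d" read from a raw *.dat dump.
import struct

def parse_acpi_header(raw: bytes) -> dict:
    # <4sIBB6s = signature, length, revision, checksum, OEM ID (16 bytes)
    sig, length, rev, chk, oem = struct.unpack_from("<4sIBB6s", raw, 0)
    return {
        "signature": sig.decode("ascii"),
        "length": length,          # total table size in bytes
        "revision": rev,
        "checksum": chk,
        "oem_id": oem.decode("ascii").rstrip(),
    }

# Synthetic MADT-like header: "APIC", length 44, revision 3, OEM "LINARO"
raw = b"APIC" + struct.pack("<I", 44) + bytes([3, 0]) + b"LINARO" + bytes(20)
hdr = parse_acpi_header(raw)
print(hdr["signature"], hdr["length"])   # APIC 44
```

If the signature or length looks wrong at this stage, iasl refuses the
dump, which is a quick sanity check before attempting a full disassembly.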

If you want to test, you could get kernel Image from [4] which contains
uart, virtio-mmio, pci drivers, UEFI binary from [5] and Qemu command
line example from [6].
I tested with your kernel and BIOS, and everything ran well. But when I
tried to build a new Debian (upstream) VM with your qemu patch and BIOS,
it always told me it could not find the right driver, or could not enable
ACPI from the kernel command line. Do you have a full Fedora VM, or did
you just use the kernel there? Could you tell me more about your setup?
Thanks.


Also, we have our own EDK-II build, and it does not work right now, so I
need to do some patching too. Do you mind telling me how you build your
QEMU.fd? Where can I access the source code? Thanks.



Best,
Liang



[1]http://git.linaro.org/leg/acpi/acpi.git/shortlog/refs/heads/acpi-sbsa
[2]
http://git.linaro.org/leg/acpi/acpi.git/commit/57acba56d55e3fb521fd6ce767446459ef7a4943
[3]
https://git.fedorahosted.org/cgit/kernel-arm64.git/commit/?h=devel&id=8cf58cbe94b982b680229e5b164231eea0ca2d11
[4]http://people.linaro.org/~shannon.zhao/ACPI_ARM/Image.gz 

[5]http://people.linaro.org/~shannon.zhao/ACPI_ARM/QEMU_EFI.fd 

[6]http://people.linaro.org/~shannon.zhao/ACPI_ARM/acpi_test.sh 



changes since v8:
   * remove empty _CRS in processor device node and use a define macro
 for SPI base (Igor)
   * Add some reviewd-bys from Igor and Alex

changes since v7:
   * replace build_append_uint32 with 4 build_append_byte (Igor)
   * Fix byte order of aml_unicode() (Igor)
   * Use upper case for enum values and fix enums in aml-build.h (Michael)
   * implement aml_interrupt() based on ACPI 5.0 (Igor)
   * use separate assert (Laszlo)
   * some doc comments fix (Igor & Michael)

changes since v6:
   * add build_append_uint32 (Peter)
   * drop some unnecessary headers and adjust the order of headers (Peter)
   * drop struct AcpiDsdtInfo, AcpiMadtInfo, AcpiGtdtInfo, AcpiPcieInfo
 and reuse MemMapEntry[] and irqmap[] (Peter)
   * record PCI ranges info in MemMapEntry[], not calculate those (Peter)
   * add a separate patch for splitting CONFIG_ACPI (Peter)
   * use VMSTATE_BOOL (Alex)

changes since v5:
   * Fix table version (Igor)
   * only create CPU device objects for present CPUs (Igor)
   * drop madt->local_apic_address and madt->flags (Igor)
   * adjust implementation of ToUUID macro (Igor)
   * Fix aml_buffer() (Michael & Igor)
   * Fix aml_not()

changes since v4:
   * use trace_* instead of DPRINTF (Igor & Alex)
   * use standard QEMU style for structs (Michael)
   * add "-no-acpi" option support for arm
   * use extractNN for bits operation (Alex)
   * use AmlReadAndWrite enum for rw flags (Igor)
   * s/uint64_t/uint32_t/ (Igor)
   * use enum for interrupt flag (Igor)
   * simplify aml_device use in DSDT (Alex)
   * share RSDT table generating code with x86 (Igor)
   * remove unnecessary 1 in MCFG table generating code (Alex & Peter)
   * use string for ToUUID macro (Igor)
   * aml_or and aml_and use two args (Igor)
   * add comments on UUID (Michael)
   * change PCI MMIO region non-cacheable (Peter)
   * fix wrong io map (Peter)
   * add several reviewed-by's from Alex, thanks

changes since v3:
   * rebase on upstream qemu
   * fix _HID of CPU (Heyi Guo)
   * Add PCIe host bridge

changes since 

Re: [Qemu-devel] Could not boot a guest vm from kvm mode based on APM X-Gene Host and latest qemu

2015-09-16 Thread liang yan



On 09/16/2015 08:34 AM, Alex Bennée wrote:

liang yan <lia...@hpe.com> writes:


Hello, All,

I am trying to enable kvm for a guest vm on an APM X-Gene Host with
latest qemu, but could not make it work.

The host is APM X-Gene 8-core, Linux kernel is 4.1.0-rc7-1-arm64,

Guest kernel is linux-3.16rc3

QEMU is latest version

Host has these dmesg info
[2.708259] kvm [1]: GICH base=0x780c, GICV base=0x780e, IRQ=25
[2.708327] kvm [1]: timer IRQ30
[2.708335] kvm [1]: Hyp mode initialized successfully

Host has dev/kvm.

command-line is
aarch64-softmmu/qemu-system-aarch64 -machine virt,kernel_irqchip=off
-cpu cortex-a57 -machine accel=kvm -nographic -smp 1 -m 2048 -kernel
aarch64-linux-3.16rc3-buildroot.img  --append "console=ttyAMA0"

I thought I recognised one of my images ;-)

Why are you running with kernel_irqchip=off?



Hello, Alex,

Thanks for your reply and your pre-boot image. Very helpful.

The reason I ran with "kernel_irqchip=off" is that without it QEMU showed
the error "Create kernel irqchip failed: No such device".

Are you familiar with that error?

I have not tried kernel 4.3.x yet; have you released it on your Linaro
website now?


Thanks for your log file; it shows something wrong with the vCPU. I am
going to check the KVM code in the host kernel now.

Will let you know the update.


Best,
Liang


Without it I can boot the image fine on my APM running 4.3.0-rc1-ajb but
with it I helpfully seg the kernel:

=


[16035.990518] Bad mode in Synchronous Abort handler detected, code 0x8606 
-- IABT (current EL)

=

[16035.997970] CPU: 1 PID: 21328 Comm: qemu-system-aar Not tainted 
4.3.0-rc1-ajb #446
[16036.004203] Hardware name: APM X-Gene Mustang board (DT)
[16036.008191] task: ffc3ecea8000 ti: ffc3d8078000 task.ti: 
ffc3d8078000
[16036.014338] PC is at 0x0
[16036.015564] LR is at kvm_vgic_map_resources+0x30/0x3c
[16036.019291] pc : [<>] lr : [] pstate: 
0145
[16036.025350] sp : ffc3d807bb20
[16036.027348] x29: ffc3d807bb20 x28: ffc3d8078000
[16036.031355] x27: ffc000642000 x26: 001d
[16036.035361] x25: 011b x24: ffc3d80c1000
[16036.039368] x23:  x22: 
[16036.043374] x21: ffc0fa24 x20: ffc0fa807800
[16036.047380] x19: ffc0fa807800 x18: 007f97af20e0
[16036.051387] x17: 007f99c44810 x16: ffc0001fb030
[16036.055394] x15: 007f99cc9588 x14: 00922000
[16036.059401] x13: 0097eb80 x12: 004de0f0
[16036.063406] x11: 0038 x10: 
[16036.067413] x9 : 007f97af2480 x8 : 0050
[16036.071419] x7 : ffc3ec24c840 x6 : 
[16036.075424] x5 : 0003 x4 : ffc3ece72080
[16036.079430] x3 : ffc3ece72080 x2 : 
[16036.083436] x1 : ffc000a26260 x0 : ffc0fa807800



[16036.087628] Internal error: Oops - bad mode: 0 [#1] SMP




[16036.091528] Modules linked in:
[16036.093278] CPU: 1 PID: 21328 Comm: qemu-system-aar Not tainted 
4.3.0-rc1-ajb #446
[16036.099510] Hardware name: APM X-Gene Mustang board (DT)
[16036.103497] task: ffc3ecea8000 ti: ffc3d8078000 task.ti: 
ffc3d8078000
[16036.109642] PC is at 0x0
[16036.110864] LR is at kvm_vgic_map_resources+0x30/0x3c
[16036.114590] pc : [<>] lr : [] pstate: 
0145
[16036.120649] sp : ffc3d807bb20
[16036.122648] x29: ffc3d807bb20 x28: ffc3d8078000
[16036.126654] x27: ffc000642000 x26: 001d
[16036.130659] x25: 011b x24: ffc3d80c1000
[16036.134666] x23:  x22: 
[16036.138671] x21: ffc0fa24 x20: ffc0fa807800
[16036.142678] x19: ffc0fa807800 x18: 007f97af20e0
[16036.146685] x17: 007f99c44810 x16: ffc0001fb030
[16036.150690] x15: 007f99cc9588 x14: 00922000
[16036.154696] x13: 0097eb80 x12: 004de0f0
[16036.158701] x11: 0038 x10: 
[16036.162706] x9 : 007f97af2480 x8 : 0050
[16036.166712] x7 : ffc3ec24c840 x6 : 
[16036.170719] x5 : 0003 x4 : ffc3ece72080
[16036.174725] x3 : ffc3ece72080 x2 : 
[16036.178731] x1 : ffc000a26260 x0 : ffc0fa807800



when using cpu "cortex-a57", got the error "kvm_init_vcpu failed:
Invalid argument"
when using cpu "host", got the error "Failed to retrieve host CPU features!"

By the way, all the command line works well under "tcg" mode.
Anyone has a quick idea? Thanks in advance.

Best,
Liang





Re: [Qemu-devel] Could not boot a guest vm from kvm mode based on APM X-Gene Host and latest qemu

2015-09-16 Thread liang yan



On 09/16/2015 03:24 AM, Tushar Jagad wrote:

Hi,

On Mon, Sep 14, 2015 at 06:03:48PM -0600, liang yan wrote:

Hello, All,

I am trying to enable kvm for a guest vm on an APM X-Gene Host with
latest qemu, but could not make it work.

The host is APM X-Gene 8-core, Linux kernel is 4.1.0-rc7-1-arm64,

Guest kernel is linux-3.16rc3

QEMU is latest version

Host has these dmesg info
[2.708259] kvm [1]: GICH base=0x780c, GICV base=0x780e, IRQ=25
[2.708327] kvm [1]: timer IRQ30
[2.708335] kvm [1]: Hyp mode initialized successfully

Host has dev/kvm.

command-line is
aarch64-softmmu/qemu-system-aarch64 -machine virt,kernel_irqchip=off
-cpu cortex-a57 -machine accel=kvm -nographic -smp 1 -m 2048 -kernel
aarch64-linux-3.16rc3-buildroot.img  --append "console=ttyAMA0"


when using cpu "cortex-a57", got the error "kvm_init_vcpu failed:
Invalid argument"

Currently, it is not possible to run qemu with a cpu type other than the
host. I'm currently in the process of adding the necessary support and
have posted the necessary RFC patches for kvm[1] and qemu[2].

[1] http://comments.gmane.org/gmane.linux.ports.arm.kernel/438744
[2] https://lists.gnu.org/archive/html/qemu-devel/2015-09/msg02374.html
--
Regards,
Tushar


Hello, Tushar,

Thanks for your reply.

Actually, I already applied your patches there, and also used the command
"-cpu cortex-a57,bpts=2,wpts=2", but it did not change the output.

I am checking the KVM code in the Linux kernel now.

By the way, I am very interested in your work here; feel free to let me
know if there is anything I can get involved in.

Best,
Liang

when using cpu "host", got the error "Failed to retrieve host CPU features!"

By the way, all the command line works well under "tcg" mode.
Anyone has a quick idea? Thanks in advance.

Best,
Liang







[Qemu-devel] Could not boot a guest vm from kvm mode based on APM X-Gene Host and latest qemu

2015-09-15 Thread liang yan

Hello, All,

I am trying to enable kvm for a guest vm on an APM X-Gene Host with 
latest qemu, but could not make it work.


The host is APM X-Gene 8-core, Linux kernel is 4.1.0-rc7-1-arm64,

Guest kernel is linux-3.16rc3

QEMU is latest version

Host has these dmesg info
[2.708259] kvm [1]: GICH base=0x780c, GICV base=0x780e, IRQ=25
[2.708327] kvm [1]: timer IRQ30
[2.708335] kvm [1]: Hyp mode initialized successfully

Host has dev/kvm.

command-line is
aarch64-softmmu/qemu-system-aarch64 -machine virt,kernel_irqchip=off 
-cpu cortex-a57 -machine accel=kvm -nographic -smp 1 -m 2048 -kernel 
aarch64-linux-3.16rc3-buildroot.img  --append "console=ttyAMA0"



when using cpu "cortex-a57", got the error "kvm_init_vcpu failed: 
Invalid argument"

when using cpu "host", got the error "Failed to retrieve host CPU features!"

By the way, all the command line works well under "tcg" mode.
Anyone has a quick idea? Thanks in advance.

Best,
Liang