Re: [PATCH v9 17/20] ebpf: Fix RSS error handling
On 2024/04/13 21:16, Yuri Benditovich wrote: On Wed, Apr 3, 2024 at 2:12 PM Akihiko Odaki wrote: calculate_rss_hash() was using hash value 0 to tell if it calculated a hash, but the hash value may be 0 on a rare occasion. Have a distinct bool value for correctness. It is an interesting question whether the hash value can really be 0 or not. On one hand, this seems like a genuine fix. On the other hand, it adds computation cycles for each packet, and the corner case it targets seems hardly reachable, if at all. An optimistic estimate is 2.5*10^-8 percent of address:address:port triplets. I would suggest at least finding some proof that the calculated hash can be 0 in a real case where source addresses are not random. Your estimation, which suggests the low probability of the error, and the fact that an error in RSS is not fatal and only regresses performance, should be sufficient to determine how this patch should be handled; it is nice to have, but not worth backporting to stable or merging before the 9.0 release. Regards, Akihiko Odaki
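For illustration, the pattern the patch adopts — reporting "hash computed" through a separate flag instead of overloading hash value 0 — can be sketched in isolation (toy hash and hypothetical names, not QEMU's actual code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Toy illustration of the fix under discussion: success is reported
 * through a bool out-parameter, so a legitimately computed hash of 0
 * is no longer mistaken for "no hash". Names are hypothetical.
 */
static uint32_t toy_hash(const uint8_t *pkt, size_t len, bool *have_hash)
{
    uint32_t h = 0;

    *have_hash = false;
    if (!pkt || len == 0) {
        return 0;               /* no hash computed */
    }
    for (size_t i = 0; i < len; i++) {
        h ^= pkt[i];            /* may legitimately end up as 0 */
    }
    *have_hash = true;          /* 0 is now a valid hash value */
    return h;
}
```

With the old `hash == 0` convention, an input such as `{0x5a, 0x5a}` hashes to 0 and would be misreported as "no hash computed" even though a hash was calculated — the rare case the patch guards against.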
Re: [PATCH] target/sparc: resolve ASI_USERTXT correctly
Hi Richard, On Thu, Apr 11, 2024 at 10:16 PM Richard Henderson wrote: > > On 4/11/24 18:15, M Bazz wrote: > > diff --git a/target/sparc/ldst_helper.c b/target/sparc/ldst_helper.c > > index e581bb42ac..4f87e44a93 100644 > > --- a/target/sparc/ldst_helper.c > > +++ b/target/sparc/ldst_helper.c > > @@ -684,6 +684,7 @@ uint64_t helper_ld_asi(CPUSPARCState *env, > > target_ulong addr, > > case ASI_M_DIAGS: /* Turbosparc DTLB Diagnostic */ > > case ASI_M_IODIAG: /* Turbosparc IOTLB Diagnostic */ > > break; > > +case ASI_USERTXT: /* User code access */ > > case ASI_KERNELTXT: /* Supervisor code access */ > > oi = make_memop_idx(memop, cpu_mmu_index(env_cpu(env), true)); > > No, this also does not work, because it uses the wrong permissions (kernel > instead of > user). I have just sent a patch to fix both problems. This thought just came to me. `lda` is a privileged instruction. It has to run in supervisor mode. So, I'm struggling to understand how the kernel permission was wrong. Isn't that the right permission for this instruction? I just want to have the right understanding, so that I'm more prepared for the next time. Here is a related excerpt from the Sparcv8 manual: "For each instruction access and each normal data access, the IU appends to the 32-bit memory address an 8-bit address space identifier, or ASI. The ASI encodes whether the processor is in supervisor or user mode, and whether the access is an instruction or data access. There are also privileged load/store alternate instructions (see below) that can provide an arbitrary ASI with their data addresses." Thank you, -bazz > > > r~
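One way to read Richard's objection: `lda` itself is indeed privileged, but the ASI it supplies selects *which* address space and permission set the access goes through, and `ASI_USERTXT` is supposed to go through the user instruction context. With both ASIs sharing the same `cpu_mmu_index(env_cpu(env), true)` line, the user-text access is translated with supervisor permissions. A purely illustrative sketch of the distinction (this is an assumption about the shape of the fix — Richard's actual patch is referenced in the thread but not shown here, and the `MMU_USER_IDX` choice is a guess):

```c
case ASI_USERTXT:   /* User code access */
case ASI_KERNELTXT: /* Supervisor code access */
    /*
     * Illustrative only: pick the instruction MMU context matching the
     * ASI, so ASI_USERTXT uses the user mapping/permissions even though
     * the lda instruction executes in supervisor mode.
     */
    oi = make_memop_idx(memop, asi == ASI_USERTXT
                               ? MMU_USER_IDX
                               : cpu_mmu_index(env_cpu(env), true));
    break;
```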
Re: Point where target instructions are read
Ah I had my .tlb_fill callback set to an empty function with just returning true. I need to put the actual code there. Let me fill this function up and see what happens. -Gautam. On Thu, Apr 11, 2024 at 2:45 AM Gautam Bhat wrote: > > On Tue, Apr 9, 2024 at 2:23 PM Peter Maydell wrote: > > > That sounds like a problem with your binary. If the reset vector > > needs to be at 0xFFFE then it needs to be there, and you > > should arrange for it to be built correctly. It doesn't matter > > whether it's an ELF file or a raw binary file, the data has > > to be in the right place. (Generally when objcopy creates a raw > > binary from an ELF file it doesn't change the address where > > the data is, assuming you load the resulting raw binary to the > > right starting address.) > > > > -- PMM > > It was a problem with me loading it to the right address. Went through > the manual again and I found that the ROM address starts at 0xC000. > Hence I was supposed to load at that address. Loading at that address > places the reset vector interrupt in the right location. Now I get the > right program counter value which is 0xC000 which is the code in the > ROM. > > I went ahead to check if I now get the right opcode but I am still > getting zeroes. Digging through the code when I do the following for > translate: > > static void translate(DisasContext *ctx) > { > uint32_t opcode; > > opcode = cpu_lduw_code(ctx->msp430_cpu_state, > ctx->base.pc_next); > qemu_fprintf(stderr, "Opcode: 0x%x\n", opcode); > } > > > cpu_lduw_code calls mmu_index callback. In my callback I have > > static int msp430_cpu_mmu_index(CPUState *cp, bool ifetch) > { > return MMU_CODE_DATA_IDX; > } > > Here I have just set the MMU_CODE_DATA_IDX to 1 which I know does not > make sense. I am not sure how this index is supposed to be computed. > Any idea on what to look at to understand it? > > -Gautam.
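As a reference point for filling in that callback: for a target without an MMU, `.tlb_fill` typically just installs an identity mapping with full permissions. Roughly (a sketch only — it compiles only inside a QEMU target build, and the function name follows Gautam's naming):

```c
static bool msp430_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                                MMUAccessType access_type, int mmu_idx,
                                bool probe, uintptr_t retaddr)
{
    /* No MMU: virtual == physical, all access types allowed */
    tlb_set_page(cs, address & TARGET_PAGE_MASK, address & TARGET_PAGE_MASK,
                 PAGE_READ | PAGE_WRITE | PAGE_EXEC, mmu_idx,
                 TARGET_PAGE_SIZE);
    return true;
}
```

With a single flat address space like this, a constant mmu_index (e.g. always 0) is fine: distinct MMU indices only matter when the same virtual address can translate or fault differently depending on mode (e.g. user vs. supervisor).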
Questions about "QEMU gives wrong MPIDR value for Arm CPU types with MT=1" (gitlab issue #1608)
Hi, Hope everyone is doing well. I was looking at "Bite Sized" tagged QEMU issues in gitlab to see if there is anything that makes sense for me as a first-time contributor. I see this issue "QEMU gives wrong MPIDR value for Arm CPU types with MT=1" (gitlab URL: https://gitlab.com/qemu-project/qemu/-/issues/1608 ). From the bug ticket description, it is very clear that I will need to add a bool member variable to the "ArchCPU" struct in the "target/arm/cpu.h" file. I believe the next logical change is to set this member variable to "true" in the corresponding arm cpu "initfn" functions (a55, a76, neoverse_n1) in the "target/arm/cpu64.c" file. I have a few questions about the following steps as I am looking through the code. 1. I believe I will need to update the "arm_build_mp_affinity" function in the "target/arm/cpu.c" file to also take a bool parameter indicating whether the function should put the "core index" in the "aff1" bits instead of the existing "aff0" bits, and the cluster id in the "aff2" bits instead of the existing "aff1" bits. But I see this function being invoked from 3 other files: "hw/arm/sbsa-ref.c", "hw/arm/virt.c", "hw/arm/npcm7xx.c". Should the function calls in these files always have that corresponding argument set to "false"? 2. As per the bug ticket description, I will also need to update the "mpidr_read_val" function in the "target/arm/helper.c" file to only set the MT bit (24th) to 1 if the member variable is true. I think there is nothing else to be done in this function apart from checking and then setting the MT bit. Is my assumption correct? I think doing the above steps should fix the bug and probably we don't need anything else. It would be great if someone could help answer these questions, and any suggestions would be welcome if my assumptions are wrong. Thanks. Regards, Dorjoy.
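To make question 1 concrete, the proposed affinity packing can be sketched standalone. The shift constants follow the MPIDR Aff0/Aff1/Aff2 field layout; the `mt` parameter and this exact packing are assumptions based on the bug report, not the merged fix:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define AFF1_SHIFT 8
#define AFF2_SHIFT 16

/*
 * Standalone sketch of the arm_build_mp_affinity() change being asked
 * about: with MT=1 the thread id occupies Aff0, so the core index moves
 * up to Aff1 and the cluster id to Aff2. For MT=0 the existing packing
 * (core in Aff0, cluster in Aff1) is kept.
 */
static uint64_t build_mp_affinity(int idx, int clustersz, bool mt)
{
    uint32_t cluster = idx / clustersz;
    uint32_t core = idx % clustersz;

    if (mt) {
        /* Guests model one thread per core here, so Aff0 stays 0 */
        return ((uint64_t)cluster << AFF2_SHIFT) |
               ((uint64_t)core << AFF1_SHIFT);
    }
    return ((uint64_t)cluster << AFF1_SHIFT) | core;
}
```

Under this sketch, callers modelling non-MT CPUs (as in sbsa-ref, virt, npcm7xx today) would presumably pass `false` — or better, pass the per-CPU flag so the same call site works for both.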
Qemu for TC377
Hello All, I see that the latest QEMU supports the TriCore TC277 and TC377. But when I downloaded the source code and checked for TC377-related files, I didn't find anything. I want to run RTOS/bare-metal code on the TC377. Could you please let me know how to start QEMU for the TC377? Here is the latest version of QEMU I have; I didn't download 9.0. Regards Sameer
Re: [PATCH v9 17/20] ebpf: Fix RSS error handling
On Wed, Apr 3, 2024 at 2:12 PM Akihiko Odaki wrote: > > calculate_rss_hash() was using hash value 0 to tell if it calculated > a hash, but the hash value may be 0 on a rare occasion. Have a > distinct bool value for correctness. This is interesting question whether in reality the hash value might be 0 or not. On one hand - this seems like a kind of fix. On another hard - this adds computation cycles for each packet, and the corner case that this targets to fix seems hardly reachable if at all. Optimistic estimation is 2.5*10^-8 percent of address:address:port triplets. I would suggest at least to find some proof of the fact that the calculated hash might be 0 in real case where source addresses are not random. > > Fixes: f3fa412de2 ("ebpf: Added eBPF RSS program.") > Signed-off-by: Akihiko Odaki > --- > ebpf/rss.bpf.skeleton.h | 1210 > +++ > tools/ebpf/rss.bpf.c| 20 +- > 2 files changed, 610 insertions(+), 620 deletions(-) > > diff --git a/ebpf/rss.bpf.skeleton.h b/ebpf/rss.bpf.skeleton.h > index aed4ef9a0335..e41ed8890191 100644 > --- a/ebpf/rss.bpf.skeleton.h > +++ b/ebpf/rss.bpf.skeleton.h > @@ -165,7 +165,7 @@ rss_bpf__create_skeleton(struct rss_bpf *obj) > s->progs[0].prog = &obj->progs.tun_rss_steering_prog; > s->progs[0].link = &obj->links.tun_rss_steering_prog; > > - s->data = (void *)rss_bpf__elf_bytes(&s->data_sz); > + s->data = rss_bpf__elf_bytes(&s->data_sz); > > obj->skeleton = s; > return 0; > @@ -176,194 +176,188 @@ err: > > static inline const void *rss_bpf__elf_bytes(size_t *sz) > { > - *sz = 20600; > - return (const void *)"\ > + static const char data[] __attribute__((__aligned__(8))) = "\ > > \x7f\x45\x4c\x46\x02\x01\x01\0\0\0\0\0\0\0\0\0\x01\0\xf7\0\x01\0\0\0\0\0\0\0\0\ > -\0\0\0\0\0\0\0\0\0\0\0\x38\x4d\0\0\0\0\0\0\0\0\0\0\x40\0\0\0\0\0\x40\0\x0d\0\ > -\x01\0\xbf\x19\0\0\0\0\0\0\xb7\x01\0\0\0\0\0\0\x63\x1a\x4c\xff\0\0\0\0\xbf\xa7\ > -\0\0\0\0\0\0\x07\x07\0\0\x4c\xff\xff\xff\x18\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ > 
+\0\0\0\0\0\0\0\0\0\0\0\xb8\x4b\0\0\0\0\0\0\0\0\0\0\x40\0\0\0\0\0\x40\0\x0d\0\ > +\x01\0\xbf\x19\0\0\0\0\0\0\xb7\x01\0\0\0\0\0\0\x63\x1a\x54\xff\0\0\0\0\xbf\xa7\ > +\0\0\0\0\0\0\x07\x07\0\0\x54\xff\xff\xff\x18\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ > > \xbf\x72\0\0\0\0\0\0\x85\0\0\0\x01\0\0\0\xbf\x06\0\0\0\0\0\0\x18\x01\0\0\0\0\0\ > > \0\0\0\0\0\0\0\0\0\xbf\x72\0\0\0\0\0\0\x85\0\0\0\x01\0\0\0\xbf\x07\0\0\0\0\0\0\ > -\x18\0\0\0\xff\xff\xff\xff\0\0\0\0\0\0\0\0\x15\x06\x61\x02\0\0\0\0\xbf\x78\0\0\ > -\0\0\0\0\x15\x08\x5f\x02\0\0\0\0\x71\x61\0\0\0\0\0\0\x55\x01\x01\0\0\0\0\0\x05\ > -\0\x58\x02\0\0\0\0\xb7\x01\0\0\0\0\0\0\x63\x1a\xc0\xff\0\0\0\0\x7b\x1a\xb8\xff\ > -\0\0\0\0\x7b\x1a\xb0\xff\0\0\0\0\x7b\x1a\xa8\xff\0\0\0\0\x7b\x1a\xa0\xff\0\0\0\ > -\0\x63\x1a\x98\xff\0\0\0\0\x7b\x1a\x90\xff\0\0\0\0\x7b\x1a\x88\xff\0\0\0\0\x7b\ > -\x1a\x80\xff\0\0\0\0\x7b\x1a\x78\xff\0\0\0\0\x7b\x1a\x70\xff\0\0\0\0\x7b\x1a\ > -\x68\xff\0\0\0\0\x7b\x1a\x60\xff\0\0\0\0\x7b\x1a\x58\xff\0\0\0\0\x7b\x1a\x50\ > -\xff\0\0\0\0\x15\x09\x47\x02\0\0\0\0\x6b\x1a\xc8\xff\0\0\0\0\xbf\xa3\0\0\0\0\0\ > -\0\x07\x03\0\0\xc8\xff\xff\xff\xbf\x91\0\0\0\0\0\0\xb7\x02\0\0\x0c\0\0\0\xb7\ > +\x18\0\0\0\xff\xff\xff\xff\0\0\0\0\0\0\0\0\x15\x06\x4f\x02\0\0\0\0\xbf\x78\0\0\ > +\0\0\0\0\x15\x08\x4d\x02\0\0\0\0\x71\x61\0\0\0\0\0\0\x55\x01\x01\0\0\0\0\0\x05\ > +\0\x46\x02\0\0\0\0\xb7\x01\0\0\0\0\0\0\x63\x1a\xc8\xff\0\0\0\0\x7b\x1a\xc0\xff\ > +\0\0\0\0\x7b\x1a\xb8\xff\0\0\0\0\x7b\x1a\xb0\xff\0\0\0\0\x7b\x1a\xa8\xff\0\0\0\ > +\0\x63\x1a\xa0\xff\0\0\0\0\x7b\x1a\x98\xff\0\0\0\0\x7b\x1a\x90\xff\0\0\0\0\x7b\ > +\x1a\x88\xff\0\0\0\0\x7b\x1a\x80\xff\0\0\0\0\x7b\x1a\x78\xff\0\0\0\0\x7b\x1a\ > +\x70\xff\0\0\0\0\x7b\x1a\x68\xff\0\0\0\0\x7b\x1a\x60\xff\0\0\0\0\x7b\x1a\x58\ > +\xff\0\0\0\0\x15\x09\x35\x02\0\0\0\0\x6b\x1a\xd0\xff\0\0\0\0\xbf\xa3\0\0\0\0\0\ > +\0\x07\x03\0\0\xd0\xff\xff\xff\xbf\x91\0\0\0\0\0\0\xb7\x02\0\0\x0c\0\0\0\xb7\ > > \x04\0\0\x02\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x44\0\0\0\x67\0\0\0\x20\0\0\0\ > 
-\x77\0\0\0\x20\0\0\0\x55\0\x3c\x02\0\0\0\0\xb7\x02\0\0\x10\0\0\0\x69\xa1\xc8\ > +\x77\0\0\0\x20\0\0\0\x55\0\x2a\x02\0\0\0\0\xb7\x02\0\0\x10\0\0\0\x69\xa1\xd0\ > > \xff\0\0\0\0\xbf\x13\0\0\0\0\0\0\xdc\x03\0\0\x10\0\0\0\x15\x03\x02\0\0\x81\0\0\ > > \x55\x03\x0b\0\xa8\x88\0\0\xb7\x02\0\0\x14\0\0\0\xbf\xa3\0\0\0\0\0\0\x07\x03\0\ > -\0\xc8\xff\xff\xff\xbf\x91\0\0\0\0\0\0\xb7\x04\0\0\x02\0\0\0\xb7\x05\0\0\0\0\0\ > -\0\x85\0\0\0\x44\0\0\0\x67\0\0\0\x20\0\0\0\x77\0\0\0\x20\0\0\0\x55\0\x2c\x02\0\ > -\0\0\0\x69\xa1\xc8\xff\0\0\0\0\x15\x01\x2a\x02\0\0\0\0\x7b\x9a\x38\xff\0\0\0\0\ > -\x15\x01\x56\0\x86\xdd\0\0\x55\x01\x3b\0\x08\0\0\0\xb7\x01\0\0\x01\0\0\0\x73\ > -\x1a\x50\xff\0\0\0\0\xb7\x01\0\0\0\0\0\0\x63\x1a\xd8\xff\0\0\0\0\x7b\x1a\xd0\ > -\xff\0\0\0\0\x7b\x1a\xc8\xff\0\0\0\0\xbf\xa3\0\0\0\0\0\0\x07\x03\0\0\xc8\xff\ > -\xff\xff\x79\xa1\x38\xff\0\0\0\0\xb7\x02\0\0\0\0\0\0\xb7\x04\0\0\x14\0\0\0\xb7\ > +\0\xd0\xff\xf
Re: [PATCH v3 03/27] util/hexdump: Use a GString for qemu_hexdump_line
On Sat, 13 Apr 2024 at 05:46, Philippe Mathieu-Daudé wrote: > > On 12/4/24 20:59, Richard Henderson wrote: > > On 4/12/24 10:41, Philippe Mathieu-Daudé wrote: > >>> -void qemu_hexdump_line(char *line, const void *bufptr, size_t len) > >>> +GString *qemu_hexdump_line(GString *str, const void *vbuf, size_t len) > >>> { > >>> -const char *buf = bufptr; > >>> -int i, c; > >>> +const uint8_t *buf = vbuf; > >>> +size_t i; > >>> -if (len > QEMU_HEXDUMP_LINE_BYTES) { > >>> -len = QEMU_HEXDUMP_LINE_BYTES; > >>> +if (str == NULL) { > >>> +/* Estimate the length of the output to avoid reallocs. */ > >>> +i = len * 3 + len / 4; > >>> +str = g_string_sized_new(i + 1); > >>> } > >> > >> [*] > >> else { > >> g_string_truncate(str, 0); > >> } > >> > > ... > >>> @@ -49,24 +52,26 @@ static void asciidump_line(char *line, const void > >>> *bufptr, size_t len) > >>> *line = '\0'; > >>> } > >>> +#define QEMU_HEXDUMP_LINE_BYTES 16 > >>> #define QEMU_HEXDUMP_LINE_WIDTH \ > >>> (QEMU_HEXDUMP_LINE_BYTES * 2 + QEMU_HEXDUMP_LINE_BYTES / 4) > >>> void qemu_hexdump(FILE *fp, const char *prefix, > >>> const void *bufptr, size_t size) > >>> { > >>> -char line[QEMU_HEXDUMP_LINE_LEN]; > >>> +g_autoptr(GString) str = > >>> g_string_sized_new(QEMU_HEXDUMP_LINE_WIDTH + 1); > >>> char ascii[QEMU_HEXDUMP_LINE_BYTES + 1]; > >>> size_t b, len; > >>> for (b = 0; b < size; b += len) { > >>> len = MIN(size - b, QEMU_HEXDUMP_LINE_BYTES); > >>> -qemu_hexdump_line(line, bufptr + b, len); > >>> +g_string_truncate(str, 0); > >> > >> Shouldn't we truncate in [*] ? > > > > The usage in tpm puts several lines together in one string, > > adding \n in between, for output in one go. > > I see the trace_tpm_util_show_buffer() call. However this > isn't a recommended use of the tracing API (Cc'ing Stefan). > It breaks the "log" backend output, and is sub-optimal for > all other backends. > > IMHO the TPM buffer should be traced by multiple calls of > (offset, hexbuf) instead. I think so too. Stefan
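As a standalone illustration of the size estimate being reviewed (`len * 3 + len / 4`: three characters per byte plus one extra separator per 4-byte group) — the names and exact output format below are illustrative, not QEMU's:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Illustrative hexdump-line formatter: emits "xx " per byte plus an
 * extra space every 4 bytes, so one line needs at most
 * len * 3 + len / 4 characters (plus the NUL terminator), matching
 * the allocation estimate discussed in the patch.
 */
static size_t hexdump_line(char *out, size_t out_sz,
                           const uint8_t *buf, size_t len)
{
    size_t pos = 0;

    for (size_t i = 0; i < len && pos + 4 < out_sz; i++) {
        if (i && i % 4 == 0) {
            out[pos++] = ' ';   /* group separator */
        }
        pos += snprintf(out + pos, out_sz - pos, "%02x ", buf[i]);
    }
    out[pos] = '\0';
    return pos;
}
```

Sizing the buffer (or GString) once from this estimate is what lets the caller reuse it across lines with a truncate-to-zero between iterations, instead of reallocating per line.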
Re: [PATCH v7 07/10] virtio-gpu: Handle resource blob commands
On 2024/04/11 19:19, Dmitry Osipenko wrote: From: Antonio Caggiano Support BLOB resources creation, mapping and unmapping by calling the new stable virglrenderer 0.10 interface. Only enabled when available and via the blob config. E.g. -device virtio-vga-gl,blob=true Signed-off-by: Antonio Caggiano Signed-off-by: Xenia Ragiadakou Signed-off-by: Huang Rui Signed-off-by: Dmitry Osipenko --- hw/display/virtio-gpu-virgl.c | 196 + hw/display/virtio-gpu.c| 4 +- include/hw/virtio/virtio-gpu.h | 1 + 3 files changed, 200 insertions(+), 1 deletion(-) diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c index c2057b0c2147..ec63f5d698b7 100644 --- a/hw/display/virtio-gpu-virgl.c +++ b/hw/display/virtio-gpu-virgl.c @@ -32,6 +32,55 @@ virgl_get_egl_display(G_GNUC_UNUSED void *cookie) } #endif +#ifdef HAVE_VIRGL_RESOURCE_BLOB +static int +virtio_gpu_virgl_map_resource_blob(VirtIOGPU *g, + struct virtio_gpu_simple_resource *res, + uint64_t offset) +{ +VirtIOGPUBase *b = VIRTIO_GPU_BASE(g); +uint64_t size; +void *data; +int ret; + +if (!virtio_gpu_hostmem_enabled(b->conf)) { +return -EOPNOTSUPP; +} + +ret = virgl_renderer_resource_map(res->resource_id, &data, &size); +if (ret) { +return -ret; +} + +res->mr = g_new0(MemoryRegion, 1); +memory_region_init_ram_ptr(res->mr, OBJECT(res->mr), "blob", + size, data); +memory_region_add_subregion(&b->hostmem, offset, res->mr); +memory_region_set_enabled(res->mr, true); + +return 0; +} + +static void +virtio_gpu_virgl_unmap_resource_blob(VirtIOGPU *g, + struct virtio_gpu_simple_resource *res) +{ +VirtIOGPUBase *b = VIRTIO_GPU_BASE(g); + +if (!res->mr) { +return; +} + +memory_region_set_enabled(res->mr, false); +memory_region_del_subregion(&b->hostmem, res->mr); + +/* memory region owns res->mr object and frees it when mr is released */ +res->mr = NULL; + +virgl_renderer_resource_unmap(res->resource_id); Hi, First, thanks for keeping working on this. 
This patch has some changes since the previous version, but it is still vulnerable to the race condition pointed out. The memory region is asynchronously unmapped from the guest address space, but the backing memory on the host address space is unmapped synchronously before that. This results in use-after-free. The whole unmapping operation needs to be implemented in an asynchronous manner. Regards, Akihiko Odaki +} +#endif /* HAVE_VIRGL_RESOURCE_BLOB */ + static void virgl_cmd_create_resource_2d(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd) { @@ -145,6 +194,10 @@ static void virgl_cmd_resource_unref(VirtIOGPU *g, return; } +#ifdef HAVE_VIRGL_RESOURCE_BLOB +virtio_gpu_virgl_unmap_resource_blob(g, res); +#endif + virgl_renderer_resource_detach_iov(unref.resource_id, &res_iovs, &num_iovs); @@ -495,6 +548,140 @@ static void virgl_cmd_get_capset(VirtIOGPU *g, } #ifdef HAVE_VIRGL_RESOURCE_BLOB +static void virgl_cmd_resource_create_blob(VirtIOGPU *g, + struct virtio_gpu_ctrl_command *cmd) +{ +struct virgl_renderer_resource_create_blob_args virgl_args = { 0 }; +struct virtio_gpu_resource_create_blob cblob; +struct virtio_gpu_simple_resource *res; +int ret; + +if (!virtio_gpu_blob_enabled(g->parent_obj.conf)) { +cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER; +return; +} + +VIRTIO_GPU_FILL_CMD(cblob); +virtio_gpu_create_blob_bswap(&cblob); +trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size); + +if (cblob.resource_id == 0) { +qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n", + __func__); +cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID; +return; +} + +res = virtio_gpu_find_resource(g, cblob.resource_id); +if (res) { +qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n", + __func__, cblob.resource_id); +cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID; +return; +} + +res = g_new0(struct virtio_gpu_simple_resource, 1); +res->resource_id = cblob.resource_id; +res->blob_size = cblob.size; +res->dmabuf_fd = -1; 
+ +if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) { +ret = virtio_gpu_create_mapping_iov(g, cblob.nr_entries, sizeof(cblob), +cmd, &res->addrs, +&res->iov, &res->iov_cnt); +if (!ret) { +g_free(res); +cmd->error = VIRTIO_G
[PATCH v4] target/riscv/kvm/kvm-cpu.c: kvm_riscv_handle_sbi() fail with vendor-specific SBI
kvm_riscv_handle_sbi() may now return a "not supported" error code, so that vendor-specific SBI calls no longer trigger a QEMU abort. Added defines for the SBI-related return codes. Signed-off-by: Alexei Filippov Fixes: 4eb47125 ("target/riscv: Handle KVM_EXIT_RISCV_SBI exit") --- Changes since v3: -Clear Reviewed-by tags target/riscv/kvm/kvm-cpu.c | 13 + target/riscv/sbi_ecall_interface.h | 12 2 files changed, 17 insertions(+), 8 deletions(-) diff --git a/target/riscv/kvm/kvm-cpu.c b/target/riscv/kvm/kvm-cpu.c index 6a6c6cae80..844942d9ba 100644 --- a/target/riscv/kvm/kvm-cpu.c +++ b/target/riscv/kvm/kvm-cpu.c @@ -1392,7 +1392,6 @@ bool kvm_arch_stop_on_emulation_error(CPUState *cs) static int kvm_riscv_handle_sbi(CPUState *cs, struct kvm_run *run) { -int ret = 0; unsigned char ch; switch (run->riscv_sbi.extension_id) { case SBI_EXT_0_1_CONSOLE_PUTCHAR: @@ -1400,22 +1399,20 @@ static int kvm_riscv_handle_sbi(CPUState *cs, struct kvm_run *run) qemu_chr_fe_write(serial_hd(0)->be, &ch, sizeof(ch)); break; case SBI_EXT_0_1_CONSOLE_GETCHAR: -ret = qemu_chr_fe_read_all(serial_hd(0)->be, &ch, sizeof(ch)); -if (ret == sizeof(ch)) { +if (qemu_chr_fe_read_all(serial_hd(0)->be, &ch, sizeof(ch)) == sizeof(ch)) { run->riscv_sbi.ret[0] = ch; } else { -run->riscv_sbi.ret[0] = -1; +run->riscv_sbi.ret[0] = SBI_ERR_FAILURE; } -ret = 0; break; default: qemu_log_mask(LOG_UNIMP, - "%s: un-handled SBI EXIT, specific reasons is %lu\n", + "%s: Unhandled SBI exit with extension-id %lu\n", __func__, run->riscv_sbi.extension_id); -ret = -1; +run->riscv_sbi.ret[0] = SBI_ERR_NOT_SUPPORTED; break; } -return ret; +return 0; } int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run) diff --git a/target/riscv/sbi_ecall_interface.h b/target/riscv/sbi_ecall_interface.h index 43899d08f6..a2e21d9b8c 100644 --- a/target/riscv/sbi_ecall_interface.h +++ b/target/riscv/sbi_ecall_interface.h @@ -69,4 +69,16 @@ #define SBI_EXT_VENDOR_END 0x09FF /* clang-format on */ +/* SBI return error codes */ +#define SBI_SUCCESS 0 +#define 
SBI_ERR_FAILURE -1 +#define SBI_ERR_NOT_SUPPORTED -2 +#define SBI_ERR_INVALID_PARAM -3 +#define SBI_ERR_DENIED -4 +#define SBI_ERR_INVALID_ADDRESS -5 +#define SBI_ERR_ALREADY_AVAILABLE -6 +#define SBI_ERR_ALREADY_STARTED -7 +#define SBI_ERR_ALREADY_STOPPED -8 +#define SBI_ERR_NO_SHMEM-9 + #endif -- 2.34.1
[PATCH 2/2] target/riscv: do not set mtval2 for non guest-page faults
The previous patch fixed the PMP priority in raise_mmu_exception(), but we are still setting mtval2 incorrectly. In riscv_cpu_tlb_fill(), after the PMP check in the two-stage translation part, mtval2 gets set when the two-stage translation succeeds but the PMP check fails. In that case riscv_cpu_tlb_fill() sets mtval2 from env->guest_phys_fault_addr as if this were a guest page fault, but it is not, and mtval2 should be zero, according to the RISC-V privileged spec, section 9.4.4: When a guest page fault is taken into M-mode, mtval2 is written with either zero or the guest physical address that faulted, shifted by 2 bits. *For other traps, mtval2 is set to zero...* Signed-off-by: Alexei Filippov --- target/riscv/cpu_helper.c | 12 ++-- 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c index 196166f8dd..89e659fe3a 100644 --- a/target/riscv/cpu_helper.c +++ b/target/riscv/cpu_helper.c @@ -1410,17 +1410,17 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size, __func__, pa, ret, prot_pmp, tlb_size); prot &= prot_pmp; -} - -if (ret != TRANSLATE_SUCCESS) { +} else { /* * Guest physical address translation failed, this is a HS * level exception */ first_stage_error = false; -env->guest_phys_fault_addr = (im_address | - (address & - (TARGET_PAGE_SIZE - 1))) >> 2; +if (ret != TRANSLATE_PMP_FAIL) { +env->guest_phys_fault_addr = (im_address | + (address & + (TARGET_PAGE_SIZE - 1))) >> 2; +} } } } else { -- 2.34.1
[PATCH 1/2] target/riscv: prioritize pmp errors in raise_mmu_exception()
From: Daniel Henrique Barboza raise_mmu_exception(), as it is today, prioritizes guest page faults by checking first if virt_enabled && !first_stage, and then considering the regular inst/load/store faults. There's no mention in the spec of guest page faults being a higher priority than PMP faults. In fact, privileged spec section 3.7.1 says: "Attempting to fetch an instruction from a PMP region that does not have execute permissions raises an instruction access-fault exception. Attempting to execute a load or load-reserved instruction which accesses a physical address within a PMP region without read permissions raises a load access-fault exception. Attempting to execute a store, store-conditional, or AMO instruction which accesses a physical address within a PMP region without write permissions raises a store access-fault exception." So, in fact, we're doing it wrong - PMP faults should always be thrown, regardless of also being a first or second stage fault. The way riscv_cpu_tlb_fill() and get_physical_address() work is adequate: a TRANSLATE_PMP_FAIL error is immediately reported and reflected in the 'pmp_violation' flag. What we need is to change raise_mmu_exception() to prioritize it.
Reported-by: Joseph Chan Fixes: 82d53adfbb ("target/riscv/cpu_helper.c: Invalid exception on MMU translation stage") Signed-off-by: Daniel Henrique Barboza --- target/riscv/cpu_helper.c | 22 -- 1 file changed, 12 insertions(+), 10 deletions(-) diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c index bc70ab5abc..196166f8dd 100644 --- a/target/riscv/cpu_helper.c +++ b/target/riscv/cpu_helper.c @@ -1203,28 +1203,30 @@ static void raise_mmu_exception(CPURISCVState *env, target_ulong address, switch (access_type) { case MMU_INST_FETCH: -if (env->virt_enabled && !first_stage) { +if (pmp_violation) { +cs->exception_index = RISCV_EXCP_INST_ACCESS_FAULT; +} else if (env->virt_enabled && !first_stage) { cs->exception_index = RISCV_EXCP_INST_GUEST_PAGE_FAULT; } else { -cs->exception_index = pmp_violation ? -RISCV_EXCP_INST_ACCESS_FAULT : RISCV_EXCP_INST_PAGE_FAULT; +cs->exception_index = RISCV_EXCP_INST_PAGE_FAULT; } break; case MMU_DATA_LOAD: -if (two_stage && !first_stage) { +if (pmp_violation) { +cs->exception_index = RISCV_EXCP_LOAD_ACCESS_FAULT; +} else if (two_stage && !first_stage) { cs->exception_index = RISCV_EXCP_LOAD_GUEST_ACCESS_FAULT; } else { -cs->exception_index = pmp_violation ? -RISCV_EXCP_LOAD_ACCESS_FAULT : RISCV_EXCP_LOAD_PAGE_FAULT; +cs->exception_index = RISCV_EXCP_LOAD_PAGE_FAULT; } break; case MMU_DATA_STORE: -if (two_stage && !first_stage) { +if (pmp_violation) { +cs->exception_index = RISCV_EXCP_STORE_AMO_ACCESS_FAULT; +} else if (two_stage && !first_stage) { cs->exception_index = RISCV_EXCP_STORE_GUEST_AMO_ACCESS_FAULT; } else { -cs->exception_index = pmp_violation ? -RISCV_EXCP_STORE_AMO_ACCESS_FAULT : -RISCV_EXCP_STORE_PAGE_FAULT; +cs->exception_index = RISCV_EXCP_STORE_PAGE_FAULT; } break; default: -- 2.34.1
Re: [PATCH v2] ppc440_pcix: Do not expose a bridge device on PCI bus
On 11/4/24 21:24, BALATON Zoltan wrote: Real 460EX SoC apparently does not expose a bridge device and having it appear on PCI bus confuses an AmigaOS file system driver that uses this to detect which machine it is running on. Signed-off-by: BALATON Zoltan --- Here's another version that keeps the values and only drops the device so it's even less likely it could break anything, in case this can be accepted for 9.0. hw/pci-host/ppc440_pcix.c | 11 --- 1 file changed, 4 insertions(+), 7 deletions(-) Reviewed-by: Philippe Mathieu-Daudé
Re: [PATCH 0/3] vvfat: Fix bugs in writing and reading files -- PING
Ping to the vvfat maintainers.
Re: [PATCH 2/2] hw: Add a Kconfig switch for the TYPE_CPU_CLUSTER device
On 12/4/24 11:15, Thomas Huth wrote: On 12/04/2024 08.20, Thomas Huth wrote: The cpu-cluster device is only needed for a few arm and riscv machines. Let's avoid compiling and linking it if it is not really necessary. I expect clustering to become the new default for heterogeneous machines, but we are not there yet. Signed-off-by: Thomas Huth --- hw/arm/Kconfig | 3 +++ hw/cpu/Kconfig | 3 +++ hw/cpu/meson.build | 2 +- hw/riscv/Kconfig | 2 ++ 4 files changed, 9 insertions(+), 1 deletion(-) diff --git a/hw/cpu/meson.build b/hw/cpu/meson.build index 38cdcfbe57..43a34c4c6e 100644 --- a/hw/cpu/meson.build +++ b/hw/cpu/meson.build @@ -1,4 +1,4 @@ -system_ss.add(files('core.c', 'cluster.c')) +system_ss.add(when: 'CONFIG_CPU_CLUSTER', if_true: files('core.c', 'cluster.c')) Oops, sorry, the switch should only be used for cluster.c, not for core.c. I'll change it in v2 ... For v2: Reviewed-by: Philippe Mathieu-Daudé
Re: [PATCH 1/2] hw: Fix problem with the A*MPCORE switches in the Kconfig files
On 12/4/24 13:32, Thomas Huth wrote: On 12/04/2024 13.10, Philippe Mathieu-Daudé wrote: On 12/4/24 08:20, Thomas Huth wrote: A9MPCORE, ARM11MPCORE and A15MPCORE are defined twice, once in hw/cpu/Kconfig and once in hw/arm/Kconfig. This is only possible by accident, since hw/cpu/Kconfig is never included from hw/Kconfig. Fix it by declaring the switches only in hw/cpu/Kconfig (since the related files reside in the hw/cpu/ folder) and by making sure that the file hw/cpu/Kconfig is now properly included from hw/Kconfig. Signed-off-by: Thomas Huth --- hw/Kconfig | 1 + hw/arm/Kconfig | 15 --- hw/cpu/Kconfig | 12 +--- 3 files changed, 10 insertions(+), 18 deletions(-) diff --git a/hw/Kconfig b/hw/Kconfig index 2c00936c28..9567cc475d 100644 --- a/hw/Kconfig +++ b/hw/Kconfig @@ -48,6 +48,7 @@ source watchdog/Kconfig # arch Kconfig source arm/Kconfig +source cpu/Kconfig source alpha/Kconfig source avr/Kconfig source cris/Kconfig diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig index 893a7bff66..d97015c45c 100644 --- a/hw/arm/Kconfig +++ b/hw/arm/Kconfig @@ -678,21 +678,6 @@ config ZAURUS select NAND select ECC -config A9MPCORE - bool - select A9_GTIMER - select A9SCU # snoop control unit - select ARM_GIC - select ARM_MPTIMER - -config A15MPCORE - bool - select ARM_GIC - -config ARM11MPCORE - bool - select ARM11SCU - config ARMSSE bool select ARM_V7M diff --git a/hw/cpu/Kconfig b/hw/cpu/Kconfig index 1767d028ac..f776e884cd 100644 --- a/hw/cpu/Kconfig +++ b/hw/cpu/Kconfig @@ -1,8 +1,14 @@ -config ARM11MPCORE - bool - config A9MPCORE bool + select A9_GTIMER + select A9SCU # snoop control unit + select ARM_GIC + select ARM_MPTIMER config A15MPCORE bool + select ARM_GIC + +config ARM11MPCORE + bool + select ARM11SCU I thought https://lore.kernel.org/qemu-devel/20231212162935.42910-6-phi...@linaro.org/ was already merged :/ I'll look at what is missing and respin. Oh, ok. But I'd prefer to keep the hw/cpu/Kconfig file since it will be used in the 2nd patch here, too. 
Fine, Reviewed-by: Philippe Mathieu-Daudé
Re: [PATCH v3 03/27] util/hexdump: Use a GString for qemu_hexdump_line
On 12/4/24 20:59, Richard Henderson wrote: On 4/12/24 10:41, Philippe Mathieu-Daudé wrote: -void qemu_hexdump_line(char *line, const void *bufptr, size_t len) +GString *qemu_hexdump_line(GString *str, const void *vbuf, size_t len) { - const char *buf = bufptr; - int i, c; + const uint8_t *buf = vbuf; + size_t i; - if (len > QEMU_HEXDUMP_LINE_BYTES) { - len = QEMU_HEXDUMP_LINE_BYTES; + if (str == NULL) { + /* Estimate the length of the output to avoid reallocs. */ + i = len * 3 + len / 4; + str = g_string_sized_new(i + 1); } [*] else { g_string_truncate(str, 0); } ... @@ -49,24 +52,26 @@ static void asciidump_line(char *line, const void *bufptr, size_t len) *line = '\0'; } +#define QEMU_HEXDUMP_LINE_BYTES 16 #define QEMU_HEXDUMP_LINE_WIDTH \ (QEMU_HEXDUMP_LINE_BYTES * 2 + QEMU_HEXDUMP_LINE_BYTES / 4) void qemu_hexdump(FILE *fp, const char *prefix, const void *bufptr, size_t size) { - char line[QEMU_HEXDUMP_LINE_LEN]; + g_autoptr(GString) str = g_string_sized_new(QEMU_HEXDUMP_LINE_WIDTH + 1); char ascii[QEMU_HEXDUMP_LINE_BYTES + 1]; size_t b, len; for (b = 0; b < size; b += len) { len = MIN(size - b, QEMU_HEXDUMP_LINE_BYTES); - qemu_hexdump_line(line, bufptr + b, len); + g_string_truncate(str, 0); Shouldn't we truncate in [*] ? The usage in tpm puts several lines together in one string, adding \n in between, for output in one go. I see the trace_tpm_util_show_buffer() call. However this isn't a recommended use of the tracing API (Cc'ing Stefan). It breaks the "log" backend output, and is sub-optimal for all other backends. IMHO the TPM buffer should be traced by multiple calls of (offset, hexbuf) instead. Regards, Phil.
Re: [PATCH 0/6] disas/cris: Use GString instead of sprintf
On Sat, 13 Apr 2024 at 06:25, Richard Henderson wrote: > > More sprintf cleanup encouraged by the Apple deprecation. > Probably there's a more minimal patch. On the other hand, > there's certainly a larger cleanup possible. The CRIS architecture is deprecated, so all this code will be deleted in 9.2; that seems to me to limit the utility of doing big cleanups on it. thanks -- PMM