How to read RISC-V mcycle CSR from Linux userspace app?
Hi all,

Stupid question, but I can't for the life of me figure this out even with
all the docs open. I need a rough cycle count for a busy loop in one of my
small Linux userspace apps. The app in question runs in qemu-system-riscv64,
which I've compiled myself. The full code is:

  #include <stdint.h>
  #include <stdio.h>

  uint64_t get_mcycle()
  {
      uint64_t mcycle = 0;
      asm volatile ("csrr %0, mcycle" : "=r" (mcycle));
      return mcycle;
  }

  int main(int argc, char **argv)
  {
      printf("Hello\n");
      printf("mcycle is %lu\n", get_mcycle());
      return 0;
  }

Now I get SIGILL when I hit the `csrr` insn, which makes sense. According
to the "Privileged Architecture Version 1.10", page 32 [1], we need to set
the low bits of mcounteren, hcounteren, and scounteren to 1 in order for
the mcycle CSR to become available in userspace. So I added the following
function:

  void enable_mcount()
  {
      /* Enable IR, TM, CY */
      uint64_t counteren = 0x7;
      asm volatile ("csrw mcounteren, %0" : : "r" (counteren));
      asm volatile ("csrw hcounteren, %0" : : "r" (counteren));
      asm volatile ("csrw scounteren, %0" : : "r" (counteren));
  }

and call it before get_mcycle(), but this (unsurprisingly) also triggers
SIGILL, since these CSRs are themselves privileged. So there's a bit of a
chicken-and-egg problem. Could someone more knowledgeable please suggest
what the course of action here is?

I've got QEMU revision f45fd24c90 checked out, and I'm staring at
qemu/target/riscv/csr.c:71, which seems to decide whether or not to raise
an illegal-instruction exception upon access. I can see a condition for
when we're in 'S' mode, but nothing for 'U' mode. Does that mean there is
fundamentally no access to these CSRs from 'U' mode? Is it possible to
just hack it in?

Maxim

[1]: https://riscv.org/wp-content/uploads/2017/05/riscv-privileged-v1.10.pdf
[Qemu-devel] RISC-V: insn32.decode: Confusing encodings
Hi all,

I've been going through the insn32.decode file and found some confusing
inconsistencies with the ISA spec that I don't understand. I hope some of
you can clarify.

There is a field defined called "%sh10" as follows:

  %sh10    20:10

which is used in the "@sh" format as follows:

  @sh      ..  ..........  ..... ... ..... .......  &shift shamt=%sh10 %rs1 %rd

and the "@sh" format specifier is used for the following rv32i
instruction defs:

  slli     00 ..........  ..... 001 ..... 0010011 @sh
  srli     00 ..........  ..... 101 ..... 0010011 @sh
  srai     01 ..........  ..... 101 ..... 0010011 @sh

First question: why does the %sh10 field exist? There are no 10-bit shamt
fields anywhere in the spec.

Second question: for rv32i, "SLLI" is defined as follows in the spec:

  0000000 shamt[4:0] rs1[4:0] 001 rd[4:0] 0010011 | SLLI

That is, the first 7 bits *must* be zero. So why does the QEMU definition
above only specify the first 2 bits, and treat the next 10 bits as a
so-called "sh10" field? Surely that shouldn't work and will match false
instructions, right? And even if it does work, surely we would want an
explicit definition, something more like:

  %sh5     20:5

  @sh      .......  ..... ..... ... ..... .......  &shift shamt=%sh5 %rs1 %rd

  slli     0000000  ..... ..... 001 ..... 0010011 @sh
  srli     0000000  ..... ..... 101 ..... 0010011 @sh
  srai     0100000  ..... ..... 101 ..... 0010011 @sh

Another thing I noticed is that the rv64i ISA redefines the slli, srli and
srai encodings by stealing a bit from the immediate field, like so:

  000000 shamt[5:0] rs1[4:0] 001 rd[4:0] 0010011 | SLLI

Consider the case that we have a 32-bit CPU and we wanted a custom
instruction encoded like so:

        This bit set
              |
              v
  0000001 shamt[4:0] rs1[4:0] 001 rd[4:0] 0010011 | MY_INSN

In 64-bit RISC-V we can't have that instruction, because that bit is used
in the shift field of the SLLI instruction. But it should be fine to use
in 32-bit RISC-V.

There are two files currently: insn32.decode and insn32-64.decode. The
insn32-64.decode file is additive, but some instructions are simply
encoded differently in 64-bit mode. Why not have two separate
insn32.decode and insn64.decode files?
I hope I'm understanding the ISA correctly... Maxim