On 23.03.19 02:25, João Reis wrote:
On Friday, 22 March 2019 at 18:32:24 UTC, J. Kiszka wrote:
On 22.03.19 18:12, João Reis wrote:
On Friday, 22 March 2019 at 16:39:17 UTC, J. Kiszka wrote:
On 20.03.19 23:24, João Reis wrote:
Hello everyone,

Lately I've been trying to share memory between two cells using the uio_ivshmem 
driver (https://github.com/henning-schild-work/ivshmem-guest-code), as it has 
been recommended here in multiple threads, on an UltraScale+ (arm64).

So first, I am running ivshmem-demo.bin in the non-root cell to test the 
uio_ivshmem driver. I am using a customized ivshmem-demo.c from another user 
who tweaked ivshmem-demo to work on arm64. When debugging this source file 
(using printks), I've noticed that the code stops running when mmio_read16() is 
called (the printks stop there).

mmio_read16 for accessing the MMCFG space? At least that is what your code looks
like it is doing.

Note that upstream inmate/lib for ARM does not support PCI yet, thus does not
map that region. So you may simply trigger a guest-side page fault. Or are you
using a different code base which does that?

Jan


Any idea what the problem might be?

I attach the log file of the session where I issue the commands to enable 
ivshmem and let the cells share memory.

(NOTE: Some additional information: PCI_CFG_BASE=0xfc000000

lspci -v
00:00.0 Unassigned class [ff80]: Red Hat, Inc Inter-VM shared memory
        Subsystem: Red Hat, Inc Inter-VM shared memory
        Flags: bus master, fast devsel, latency 0, IRQ 55
        Memory at fc100000 (64-bit, non-prefetchable) [size=256]
        Kernel driver in use: uio_ivshmem

cat /proc/interrupts
55:          0          0     GICv2 136 Edge      uio_ivshmem

cat /proc/iomem
fc000000-fc0fffff : PCI ECAM
fc100000-fc101fff : //pci@0
     fc100000-fc1000ff : 0000:00:00.0
       fc100000-fc1000ff : ivshmem

)


--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux

I've applied some patches that have been posted here before 
(https://groups.google.com/forum/#!searchin/jailhouse-dev/ivshmem-demo|sort:date/jailhouse-dev/L2sjyl1xFDg/MrM5u8IHDQAJ)
 that add PCI support for ARM. The proof is that I can access IVSHMEM from the 
Erika inmate using the mmio_readXX() and mmio_writeXX() functions. I don't 
understand why I cannot access any memory from the ivshmem-demo.bin inmate.


As I suspected: those patches just implement the MMCFG accessors. They do not
perform any mapping into the inmate page table. Maybe they pre-date our enabling
of the MMU for ARM inmates.

So you will need to call map_range(PCI_CFG_BASE, <size-of-region>, MAP_UNCACHED);

Jan

--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux

Thanks! It is working now!

I've got some questions:
1) My non-root cell pin_bitmap is like this:
.pin_bitmap = {
                1 << (53 - 32),
                0,
                0,
                (1 << (140 - 128)) | (1 << (142 - 128))
        },
So I suppose my IVSHMEM_IRQ in ivshmem-demo.c must be:
#define IVSHMEM_IRQ 140 ?

Yes, it is GIC interrupt 140 (or SPI 108) on slot 0 of the virtual PCI host
controller, and 142 on slot 2. In fact, the second slot is not used in the
upstream config; I just copied those bits from the zcu102, where we have two
ivshmem devices registered.


2) The output says that my device is not MSI-X capable. Does this mean that I 
cannot use interrupts between cells? If I can, how do I solve this?

You can. Ivshmem falls back to INTx (line-based interrupts) in that case.


3) When I map_range() ivshmem using MAP_CACHED, I cannot "see" the ivshmem from the 
root cell (random values), but when I use MAP_UNCACHED I can "see" it (the values 
that I wrote). Is there any explanation for this?

Usually a sign of inconsistent mappings. Make sure that both cell configs as
well as both guest drivers map the shared memory cached.

Jan
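To illustrate what a consistent setup means on the configuration side: the shared-memory region should appear with matching attributes in both cell configs, and both guest drivers must then map it the same way. A hedged sketch of such a region entry, using the types from Jailhouse's include/jailhouse/cell-config.h (addresses and size are placeholders, not values from this thread):

```c
/* Fragment of a cell config: the ivshmem shared-memory region. The same
 * region (same flags) must appear in the root-cell config as well; the
 * JAILHOUSE_MEM_ROOTSHARED flag marks it as shared with the root cell. */
{
	.phys_start = 0x7bf00000,	/* placeholder */
	.virt_start = 0x7bf00000,	/* placeholder */
	.size       = 0x100000,		/* placeholder */
	.flags      = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
		      JAILHOUSE_MEM_ROOTSHARED,
},
```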
