CV-Bowen opened a new pull request, #17925:
URL: https://github.com/apache/nuttx/pull/17925

   *Note: Please adhere to [Contributing 
Guidelines](https://github.com/apache/nuttx/blob/master/CONTRIBUTING.md).*
   
   ## Summary
   
   This PR enhances the RPMSG subsystem by unifying signal handling across all 
transport types and adding signal routing support for the RPMSG router 
transport.
   
   ## Changes
   
   ### 1. Unify RPMSG Signals Architecture (c1a7b0c)
   - **Unified signal storage**: Moved signal handling from each transport's private structure into the common `struct rpmsg_s` (see the sketch below)
   - **Simplified implementation**: Removed redundant signal fields from 
transport-specific structures
   - **Improved consistency**: All RPMSG transports now use the same signal 
mechanism
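   
   The intended shape, as a minimal hedged sketch (the member names `semtx`/`semrx` and the helpers `rpmsg_signal_post()`/`rpmsg_signal_wait()` below are illustrative, not the in-tree definitions): the wait/post state lives once in the common object that every transport embeds, so the virtio, port, and router backends share one signal implementation instead of each carrying its own copy.
   
   ```c
   /* Illustrative sketch only -- the real struct rpmsg_s has more members
    * and may use different names.  The point is that the signal state is
    * stored once in the common structure that every transport embeds.
    */
   
   #include <nuttx/config.h>
   #include <nuttx/semaphore.h>
   
   struct rpmsg_s
   {
     /* ... existing common members (ops, device pointer, node, ...) ... */
   
     sem_t semtx;                   /* Posted when TX buffers become available */
     sem_t semrx;                   /* Posted when RX work is pending */
   };
   
   /* Shared helpers: every transport calls these on the common object
    * instead of duplicating the signal logic in its private structure.
    */
   
   static void rpmsg_signal_post(FAR sem_t *sem)
   {
     int count = 0;
   
     nxsem_get_value(sem, &count);
     if (count < 1)
       {
         nxsem_post(sem);           /* Wake at most one pending waiter */
       }
   }
   
   static int rpmsg_signal_wait(FAR sem_t *sem)
   {
     return nxsem_wait_uninterruptible(sem);
   }
   ```
   
   Each transport's private structure then only needs the embedded common `struct rpmsg_s`, which is what allows the redundant per-transport signal fields to be removed.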
   
   ### 2. Add Signals Router for RPMSG Router (4050398)
   - **Router signal support**: RPMSG router transport now supports signal 
propagation
   - **Signal routing**: Signals from physical transports (port, virtio) are properly routed through the router (see the sketch below)
   - **Enhanced functionality**: Router can now handle and forward signal 
states between different transport endpoints
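   
   One way the signal routing can be pictured, as a hedged sketch (the `rpmsg_router_s` layout and the `rpmsg_router_forward_signal()` helper are hypothetical stand-ins, not the code in this PR): when the physical transport raises a signal, the router forwards it to its own rpmsg instance so endpoints that sit behind the router wake up too.
   
   ```c
   /* Hypothetical sketch of signal forwarding through the router.  The
    * structure layout and helper names are illustrative; only the idea
    * matches the PR: signals raised on the physical transport (port,
    * virtio) must also reach endpoints behind the router.
    */
   
   #include <nuttx/config.h>
   #include <nuttx/semaphore.h>
   
   #include <stdbool.h>
   
   struct rpmsg_s                        /* Common part, as in the sketch above */
   {
     sem_t semtx;                        /* TX buffers available */
     sem_t semrx;                        /* RX work pending */
   };
   
   struct rpmsg_router_s
   {
     struct rpmsg_s rpmsg;               /* Router's own rpmsg instance */
     FAR struct rpmsg_s *edge;           /* Physical transport underneath */
   };
   
   static void rpmsg_router_forward_signal(FAR struct rpmsg_router_s *router,
                                           bool is_tx)
   {
     FAR sem_t *edge_sem   = is_tx ? &router->edge->semtx : &router->edge->semrx;
     FAR sem_t *router_sem = is_tx ? &router->rpmsg.semtx : &router->rpmsg.semrx;
     int count;
   
     /* Wake the physical transport's waiters as before... */
   
     count = 0;
     nxsem_get_value(edge_sem, &count);
     if (count < 1)
       {
         nxsem_post(edge_sem);
       }
   
     /* ...and forward the same signal to the router instance so endpoints
      * behind the router are not left blocked.
      */
   
     count = 0;
     nxsem_get_value(router_sem, &count);
     if (count < 1)
       {
         nxsem_post(router_sem);
       }
   }
   ```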
   
   ## Impact
   RPMSG Port and RPMSG Router transports
   
   ## Testing
   Tested with the `qemuv8a:rpserver` and `rpproxy` configurations under QEMU:
   ```
   ❯ qemu-system-aarch64 -cpu cortex-a53 -nographic \
   -machine virt,virtualization=on,gic-version=3 \
   -chardev stdio,id=con,mux=on -serial chardev:con \
   -object memory-backend-file,discard-data=on,id=shmmem-shmem0,mem-path=/dev/shm/my_shmem0,size=4194304,share=yes \
   -device ivshmem-plain,id=shmem0,memdev=shmmem-shmem0,addr=0xb \
   -device virtio-serial-device,bus=virtio-mmio-bus.0 \
   -chardev socket,path=/tmp/rpmsg_port_uart_socket,server=on,wait=off,id=foo \
   -device virtconsole,chardev=foo \
   -mon chardev=con,mode=readline -kernel ./nuttx/cmake_out/v8a_server/nuttx \
   -gdb tcp::7775
   [    0.000000] [ 0] [  INFO] [server] pci_register_rptun_ivshmem_driver: Register ivshmem driver, id=0, cpuname=proxy, master=1
   [    0.000000] [ 3] [  INFO] [server] pci_scan_bus: pci_scan_bus for bus 0
   [    0.000000] [ 3] [  INFO] [server] pci_scan_bus: class = 00000600, hdr_type = 00000000
   [    0.000000] [ 3] [  INFO] [server] pci_scan_bus: 00:00 [1b36:0008]
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar0 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar1 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar2 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar3 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar4 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar5 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_scan_bus: class = 00000200, hdr_type = 00000000
   [    0.000000] [ 3] [  INFO] [server] pci_scan_bus: 00:08 [1af4:1000]
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar0: mask64=fffffffe 32bytes
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar1: mask64=fffffff0 4096bytes
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar2 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar3 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar4: mask64=fffffffffffffff0 16384bytes
   [    0.000000] [ 3] [  INFO] [server] pci_scan_bus: class = 00000500, hdr_type = 00000000
   [    0.000000] [ 3] [  INFO] [server] pci_scan_bus: 00:58 [1af4:1110]
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar0: mask64=fffffff0 256bytes
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar1 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar2: mask64=fffffffffffffff0 4194304bytes
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar4 set bad mask
   [    0.000000] [ 3] [  INFO] [server] pci_setup_device: pbar5 set bad mask
   [    0.000000] [ 3] [  INFO] [server] ivshmem_probe: shmem addr=0x10400000 size=4194304 reg=0x10008000
   [    0.000000] [ 3] [  INFO] [server] rptun_ivshmem_probe: shmem addr=0x10400000 size=4194304
   
   NuttShell (NSH) NuttX-12.10.0
   server> 
   server> 
   server> [    0.000000] [ 0] [  INFO] [proxy] pci_register_rptun_ivshmem_driver: Register ivshmem driver, id=0, cpuname=server, master=0
   [    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: pci_scan_bus for bus 0
   [    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: class = 00000600, hdr_type = 00000000
   [    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: 00:00 [1b36:0008]
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar0 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar1 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar2 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar3 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar4 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar5 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: class = 00000200, hdr_type = 00000000
   [    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: 00:08 [1af4:1000]
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar0: mask64=fffffffe 32bytes
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar1: mask64=fffffff0 4096bytes
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar2 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar3 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar4: mask64=fffffffffffffff0 16384bytes
   [    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: class = 00000500, hdr_type = 00000000
   [    0.000000] [ 3] [  INFO] [proxy] pci_scan_bus: 00:58 [1af4:1110]
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar0: mask64=fffffff0 256bytes
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar1 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar2: mask64=fffffffffffffff0 4194304bytes
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar4 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] pci_setup_device: pbar5 set bad mask
   [    0.000000] [ 3] [  INFO] [proxy] ivshmem_probe: shmem addr=0x10400000 size=4194304 reg=0x10008000
   [    0.000000] [ 3] [  INFO] [proxy] rptun_ivshmem_probe: shmem addr=0x10400000 size=4194304
   [    0.000000] [ 3] [  INFO] [proxy] rptun_ivshmem_probe: Start the wdog
   
   server> 
   server> 
   server> 
   server> ps
     TID   PID  PPID PRI POLICY   TYPE    NPX STATE    EVENT     SIGMASK          STACK    USED FILLED COMMAND
       0     0     0   0 FIFO     Kthread   - Ready              0000000000000000 0008160 0001792  21.9%  Idle_Task
       1     0     0 192 FIFO     Kthread   - Waiting  Semaphore 0000000000000000 0008096 0001344  16.6%  hpwork 0x40475c60 0x40475ce0
       2     0     0 100 FIFO     Kthread   - Waiting  Semaphore 0000000000000000 0008096 0001344  16.6%  lpwork 0x40475d10 0x40475d90
       4     0     0 224 FIFO     Kthread   - Waiting  Semaphore 0000000000000000 0008096 0002032  25.0%  rptun proxy 0x40495bb8
       5     0     0 224 FIFO     Kthread   - Waiting  Semaphore 0000000000000000 0008096 0001952  24.1%  rpmsg-uart-rx proxy2 0x404a11f0
       6     0     0 224 FIFO     Kthread   - Waiting  Semaphore 0000000000000000 0008096 0001968  24.3%  rpmsg-uart-tx proxy2 0x404a11f0
       7     7     0 100 FIFO     Task      - Running            0000000000000000 0008128 0004176  51.3%  nsh_main
   server> 
   server> 
   server> rpmsg ping all 1 1 1 1
   [    0.000000] [ 7] [ EMERG] [server] ping times: 1
   [    0.000000] [ 7] [ EMERG] [server] buffer_len: 2024, send_len: 17
   [    0.000000] [ 7] [ EMERG] [server] avg: 0 s, 18974880 ns
   [    0.000000] [ 7] [ EMERG] [server] min: 0 s, 18974880 ns
   [    0.000000] [ 7] [ EMERG] [server] max: 0 s, 18974880 ns
   [    0.000000] [ 7] [ EMERG] [server] rate: 0.007167 Mbits/sec
   server> 
   server> 
   server> rptun ping all 1 1 1 1
   [    0.000000] [ 7] [ EMERG] [server] ping times: 1
   [    0.000000] [ 7] [ EMERG] [server] buffer_len: 2032, send_len: 17
   [    0.000000] [ 7] [ EMERG] [server] avg: 0 s, 25224240 ns
   [    0.000000] [ 7] [ EMERG] [server] min: 0 s, 25224240 ns
   [    0.000000] [ 7] [ EMERG] [server] max: 0 s, 25224240 ns
   [    0.000000] [ 7] [ EMERG] [server] rate: 0.005391 Mbits/sec
   server> uname -a
   NuttX server 12.10.0 4050398e2de Jan 15 2026 14:30:04 arm64 qemu-armv8a
   server> rpmsg dump all
   [    0.000000] [ 7] [ EMERG] [server] Remote: proxy2 state: 1
   [    0.000000] [ 7] [ EMERG] [server] ept NS
   [    0.000000] [ 7] [ EMERG] [server] ept rpmsg-sensor
   [    0.000000] [ 7] [ EMERG] [server] ept rpmsg-ping
   [    0.000000] [ 7] [ EMERG] [server] rpmsg_port queue RX: {used: 0, avail: 8}
   [    0.000000] [ 7] [ EMERG] [server] rpmsg buffer list:
   [    0.000000] [ 7] [ EMERG] [server] rpmsg_port queue TX: {used: 0, avail: 8}
   [    0.000000] [ 7] [ EMERG] [server] rpmsg buffer list:
   server> rptun dump all
   [    0.000000] [ 7] [ EMERG] [server] Remote: proxy headrx 8
   [    0.000000] [ 7] [ EMERG] [server] Dump rpmsg info between cpu (master: yes)server <==> proxy:
   [    0.000000] [ 7] [ EMERG] [server] rpmsg vq RX:
   [    0.000000] [ 7] [ EMERG] [server] rpmsg vq TX:
   [    0.000000] [ 7] [ EMERG] [server]   rpmsg ept list:
   [    0.000000] [ 7] [ EMERG] [server]     ept NS
   [    0.000000] [ 7] [ EMERG] [server]     ept rpmsg-sensor
   [    0.000000] [ 7] [ EMERG] [server]     ept rpmsg-ping
   [    0.000000] [ 7] [ EMERG] [server]     ept rpmsg-syslog
   [    0.000000] [ 7] [ EMERG] [server]   rpmsg buffer list:
   [    0.000000] [ 7] [ EMERG] [server]     RX buffer, total 8, pending 0
   [    0.000000] [ 7] [ EMERG] [server]     TX buffer, total 8, pending 0
   server> 
   ```
   

