Re: [Qemu-devel] TCP based PCIE request forwarding

2012-11-22 Thread Stefan Hajnoczi
On Wed, Nov 21, 2012 at 03:27:48PM +0100, lementec fabien wrote:
 usage
 -
 PCIEFW devices are instantiated using the following QEMU options:
 -device \
  pciefw,\
  laddr=local_addr,\
  lport=local_port,\
  raddr=remote_addr,\
  rport=remote_port

Take a look at qemu_socket.h:socket_parse().  It should allow you to
support TCP, UNIX domain sockets, and arbitrary file descriptors.

 implementation
 --
 PCIEFW is a PCIE access forwarding device added to the QEMU source tree. At
 initialization, this device opens a bidirectional point-to-point communication
 channel with an external process. This process actually implements the PCIE
 endpoint. That is, a PCIE access made by QEMU is forwarded to the process.
 Reciprocally, replies and interrupt messages from the process are forwarded
 to QEMU.

 The commnication currently relies on a bidirectionnal point to point TCP

s/commnication/communication/

 socket based channel. Byte ordering is little endian.

 PCIEFW initiates a request upon access from QEMU. It sends a message whose
 format is described by the pciefw_msg_t type:
 
 typedef struct pciefw_msg
 {
 #define PCIEFW_MSG_MAX_SIZE (offsetof(pciefw_msg_t, data) + 0x1000)

The size field is uint16_t.  Do you really want to limit to 4 KB of
data?


   pciefw_header_t header;

 #define PCIEFW_OP_READ_CONFIG 0
 #define PCIEFW_OP_WRITE_CONFIG 1
 #define PCIEFW_OP_READ_MEM 2
 #define PCIEFW_OP_WRITE_MEM 3
 #define PCIEFW_OP_READ_IO 4
 #define PCIEFW_OP_WRITE_IO 5
 #define PCIEFW_OP_INT 6
 #define PCIEFW_OP_MSI 7
 #define PCIEFW_OP_MSIX 8

   uint8_t op; /* in PCIEFW_OP_XXX */
   uint8_t bar; /* in [0:5] */
   uint8_t width; /* access in 1, 2, 4, 8 */
   uint64_t addr;
   uint16_t size; /* data size, in bytes */

Why are there both width and size fields?  For read-type operations the
size field would indicate how many bytes to read.  For write-type
operations the size field would indicate how many bytes are included in
data[].

   uint8_t data[1];

 } __attribute__((packed)) pciefw_msg_t;

 Note that data is a variable length field.

 The PCIE endpoint process replies with a pciefw_reply_t formatted message:

 typedef struct pciefw_reply
 {
   pciefw_header_t header;
   uint8_t status;

What values does this field take?

   uint8_t data[8];
 } __attribute__((packed)) pciefw_reply_t;

 The PCIE endpoint process can initiate pciefw_msg_t messages to perform write
 operations of its own. This is used to perform data transfers (DMA engines ...)
 and send interrupts.

Any flow control rules?  For example, can the endpoint raise an
interrupt while processing a message (before it sends a reply)?

 Both types start with a pciefw_header containing the total size:

 typedef struct pciefw_header
 {
   uint16_t size;
 } __attribute__((packed)) pciefw_header_t;

A hello message type would be useful so that you can extend the
protocol in the future.  The message would contain feature bits or a
version number.

Stefan



Re: [Qemu-devel] TCP based PCIE request forwarding

2012-11-22 Thread lementec fabien
Hi,

Thanks for the feedback. I will modify the previous document
to include the changes you mentioned; I reply here too.

2012/11/22 Stefan Hajnoczi stefa...@gmail.com:
 On Wed, Nov 21, 2012 at 03:27:48PM +0100, lementec fabien wrote:
 usage
 -
 PCIEFW devices are instantiated using the following QEMU options:
 -device \
  pciefw,\
  laddr=local_addr,\
  lport=local_port,\
  raddr=remote_addr,\
  rport=remote_port

 Take a look at qemu_socket.h:socket_parse().  It should allow you to
 support TCP, UNIX domain sockets, and arbitrary file descriptors.


OK, I will have a look at what it implies to support arbitrary file descriptors.
For instance, my current implementation does not work with UDP sockets:
it assumes a reliable, ordered transport layer whose OS API is not datagram
oriented.
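
For illustration, the kind of receive helper this implies on a stream socket
looks roughly like this (a simplified sketch, not the exact code from pciefw.c):

#include <stddef.h>
#include <unistd.h>

/* Read exactly len bytes from a stream socket, handling partial reads.
   This only works on a reliable, ordered, stream oriented transport
   (TCP, UNIX domain sockets ...), not on datagram sockets. */
static int read_all(int fd, void *buf, size_t len)
{
  unsigned char *p = buf;
  while (len) {
    const ssize_t n = read(fd, p, len);
    if (n <= 0) return -1; /* error or peer closed the connection */
    p += n;
    len -= (size_t)n;
  }
  return 0;
}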

 implementation
 --
 PCIEFW is a PCIE access forwarding device added to the QEMU source tree. At
 initialization, this device opens a bidirectional point-to-point communication
 channel with an external process. This process actually implements the PCIE
 endpoint. That is, a PCIE access made by QEMU is forwarded to the process.
 Reciprocally, replies and interrupt messages from the process are forwarded
 to QEMU.

 The commnication currently relies on a bidirectionnal point to point TCP

 s/commnication/communication/

 socket based channel. Byte ordering is little endian.

 PCIEFW initiates a request upon access from QEMU. It sends a message whose
 format is described by the pciefw_msg_t type:

 typedef struct pciefw_msg
 {
 #define PCIEFW_MSG_MAX_SIZE (offsetof(pciefw_msg_t, data) + 0x1000)

 The size field is uint16_t.  Do you really want to limit to 4 KB of
 data?


My first implementation required allocating a fixed size buffer. That is no
longer the case (with non datagram oriented IO operations) since I included the
header that contains the message size. Since the PCIE maximum payload size is
0x1000 bytes, it was an obvious choice. Of course, it remains an arbitrary one.
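
To make the framing explicit, a sender could size a write request like this
(sketch only, assuming the pciefw_msg_t draft quoted above; the helper name is
illustrative and the direct field assignments assume a little endian host):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Build a memory write request; len must not exceed 0x1000 so the total
   message stays below PCIEFW_MSG_MAX_SIZE. Returns the number of bytes
   to send on the socket. */
static size_t make_write_msg(pciefw_msg_t *msg, uint64_t addr,
                             const void *buf, uint16_t len)
{
  const size_t total = offsetof(pciefw_msg_t, data) + len;
  msg->header.size = (uint16_t)total; /* total message size */
  msg->op = PCIEFW_OP_WRITE_MEM;
  msg->bar = 0;
  msg->width = 4;
  msg->addr = addr;
  msg->size = len;
  memcpy(msg->data, buf, len);
  return total;
}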


   pciefw_header_t header;

 #define PCIEFW_OP_READ_CONFIG 0
 #define PCIEFW_OP_WRITE_CONFIG 1
 #define PCIEFW_OP_READ_MEM 2
 #define PCIEFW_OP_WRITE_MEM 3
 #define PCIEFW_OP_READ_IO 4
 #define PCIEFW_OP_WRITE_IO 5
 #define PCIEFW_OP_INT 6
 #define PCIEFW_OP_MSI 7
 #define PCIEFW_OP_MSIX 8

   uint8_t op; /* in PCIEFW_OP_XXX */
   uint8_t bar; /* in [0:5] */
   uint8_t width; /* access in 1, 2, 4, 8 */
   uint64_t addr;
   uint16_t size; /* data size, in bytes */

 Why are there both width and size fields?  For read-type operations the
 size field would indicate how many bytes to read.  For write-type
 operations the size field would indicate how many bytes are included in
 data[].


Actually, the width field is currently not required. I included it to allow
multiple contiguous accesses in one operation (where count = size / width);
the device would still need to know the width of the individual accesses in
that case. But this is not used yet.
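
For the record, the multi-access interpretation would have looked roughly like
this (illustration only, not implemented):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical handling of a request carrying several contiguous accesses
   of the same width (count = size / width). Kept for reference only. */
static void handle_multi_access(const pciefw_msg_t *msg)
{
  const size_t count = msg->size / msg->width;
  size_t i;
  for (i = 0; i < count; ++i) {
    const uint64_t addr = msg->addr + i * msg->width;
    /* endpoint_access(addr, msg->width, &msg->data[i * msg->width]); */
    (void)addr;
  }
}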

   uint8_t data[1];

 } __attribute__((packed)) pciefw_msg_t;

 Note that data is a variable length field.

 The PCIE endpoint process replies with a pciefw_reply_t formatted message:

 typedef struct pciefw_reply
 {
   pciefw_header_t header;
   uint8_t status;

 What values does this field take?


I will define PCIEFW_STATUS_XXX values.
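
Something along these lines, for instance (a proposal only, values open to
discussion):

/* Proposed completion status values for pciefw_reply_t.status. */
#define PCIEFW_STATUS_SUCCESS 0 /* access completed, data[] is valid */
#define PCIEFW_STATUS_UNSUPPORTED 1 /* operation not handled by the endpoint */
#define PCIEFW_STATUS_ERROR 2 /* endpoint internal error */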

   uint8_t data[8];
 } __attribute__((packed)) pciefw_reply_t;

 The PCIE endpoint process can initiate pciefw_msg_t messages to perform write
 operations of its own. This is used to perform data transfers (DMA engines ...)
 and send interrupts.

 Any flow control rules?  For example, can the endpoint raise an
 interrupt while processing a message (before it sends a reply)?


Currently, messages are not identified, so delivery is assumed to be in order.
In practice it works because the LINUX application I use does not start 2 DMA
transfers in parallel, but a protocol cannot rely on such assumptions. Plus,
I assume QEMU can eventually make 2 concurrent PCIE accesses to the same
device, which would lead to 2 outstanding replies. So I will add an
identifier field.
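
Roughly, the QEMU side could allocate tags and track outstanding requests like
this (an illustrative sketch, not implemented yet; names and sizes are not
final):

#include <stddef.h>
#include <stdint.h>

#define PCIEFW_PENDING_MAX 16

/* One slot per outstanding request; the endpoint copies the tag back
   into its reply so it can be matched even if replies arrive out of order. */
struct pciefw_pending
{
  uint32_t tag;
  int in_use;
  uint8_t data[8]; /* reply payload is copied here when it arrives */
};

static struct pciefw_pending pending[PCIEFW_PENDING_MAX];
static uint32_t next_tag; /* simple incrementing counter */

static struct pciefw_pending *pciefw_request_begin(void)
{
  unsigned int i;
  for (i = 0; i < PCIEFW_PENDING_MAX; ++i) {
    if (!pending[i].in_use) {
      pending[i].in_use = 1;
      pending[i].tag = next_tag++;
      return &pending[i];
    }
  }
  return NULL; /* too many outstanding requests */
}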

 Both types start with a pciefw_header containing the total size:

 typedef struct pciefw_header
 {
   uint16_t size;
 } __attribute__((packed)) pciefw_header_t;

 A hello message type would be useful so that you can extend the
 protocol in the future.  The message would contain feature bits or a
 version number.


I did think about it. More generally, it would be useful to have a control
message that allows an endpoint to be disconnected and then reconnected
without having to reboot QEMU. That is very useful when developing a
new device.
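
An init (hello) message could carry a version and feature bits, for example
(a proposal only; the field names and the reconnect feature bit are
illustrative, and it reuses the pciefw_header_t quoted above):

#include <stdint.h>

/* Proposed init message, exchanged once per connection so that the
   protocol can evolve without breaking existing peers. */
typedef struct pciefw_init
{
  pciefw_header_t header;

  uint8_t version; /* protocol version, starting at 1 */

#define PCIEFW_FEATURE_RECONNECT (1 << 0) /* endpoint may disconnect and reconnect */

  uint32_t features; /* PCIEFW_FEATURE_XXX bits supported by the sender */

} __attribute__((packed)) pciefw_init_t;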

 Stefan

I will send you the modified document,

Thanks,

Fabien.



Re: [Qemu-devel] TCP based PCIE request forwarding

2012-11-22 Thread Paolo Bonzini
On 22/11/2012 09:19, Stefan Hajnoczi wrote:
  usage
  -
  PCIEFW devices are instantiated using the following QEMU options:
  -device \
   pciefw,\
   laddr=local_addr,\
   lport=local_port,\
   raddr=remote_addr,\
   rport=remote_port
 Take a look at qemu_socket.h:socket_parse().  It should allow you to
 support TCP, UNIX domain sockets, and arbitrary file descriptors.
 

Even better it could just be a chardev.  socket_parse() is only used by
the (human) monitor interface.

Paolo



Re: [Qemu-devel] TCP based PCIE request forwarding

2012-11-22 Thread lementec fabien
Hi,

I modified the protocol so that new message types can be
added easily. This is necessary for control-related messages,
such as the hello one (I called it init). A type field has
been added to the header.

I did not include an is_reply (or is_request) field, and
preferred having 2 distinct message types, because one may
imagine a message type that has no reply (e.g. ping ...).

Out of order reception is allowed by the use of a tag field
in requests and replies. I did not include the tag in the
header, since not all messages may need a tag. I plan to
implement this tag as a simple incrementing counter, so I
made it large enough.
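
Concretely, the revised header and a tagged request could look like this (a
sketch of what I have in mind, names and sizes not final):

#include <stdint.h>

/* Every message now starts with its total size and a type, so new
   message types can be added later. */
typedef struct pciefw_header
{
  uint16_t size; /* total message size in bytes, little endian */
  uint8_t type;  /* PCIEFW_MSG_XXX: request, reply, init, ping ... */
} __attribute__((packed)) pciefw_header_t;

/* Requests and replies carry a tag so that replies can be matched out
   of order; control messages that need no reply simply do not have one. */
typedef struct pciefw_request
{
  pciefw_header_t header;
  uint32_t tag; /* copied back verbatim in the matching reply */
  /* op, bar, width, addr, size, data[] as in the current draft */
} __attribute__((packed)) pciefw_request_t;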

I have not implemented these modifications yet, since I prefer
to get feedback first. Nor have I had a look at the command
line option parsing yet.

Regards,

Fabien.

2012/11/22 Paolo Bonzini pbonz...@redhat.com:
 On 22/11/2012 09:19, Stefan Hajnoczi wrote:
  usage
  -
  PCIEFW devices are instantiated using the following QEMU options:
  -device \
   pciefw,\
   laddr=local_addr,\
   lport=local_port,\
   raddr=remote_addr,\
   rport=remote_port
 Take a look at qemu_socket.h:socket_parse().  It should allow you to
 support TCP, UNIX domain sockets, and arbitrary file descriptors.


 Even better it could just be a chardev.  socket_parse() is only used by
 the (human) monitor interface.

 Paolo


pciefw.protocol
Description: Binary data


Re: [Qemu-devel] TCP based PCIE request forwarding

2012-11-22 Thread Stefan Hajnoczi
On Thu, Nov 22, 2012 at 11:21:58AM +0100, Paolo Bonzini wrote:
 On 22/11/2012 09:19, Stefan Hajnoczi wrote:
   usage
   -
   PCIEFW devices are instantiated using the following QEMU options:
   -device \
pciefw,\
laddr=local_addr,\
lport=local_port,\
raddr=remote_addr,\
rport=remote_port
  Take a look at qemu_socket.h:socket_parse().  It should allow you to
  support TCP, UNIX domain sockets, and arbitrary file descriptors.
  
 
 Even better it could just be a chardev.  socket_parse() is only used by
 the (human) monitor interface.

The issue with chardev is that it's asynchronous.

In this case we cannot return from MemoryRegionOps->read() or
MemoryRegionOps->write() back to the event loop.
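
In other words the access path has to stay synchronous, roughly like this (a
simplified illustration only; the callback signature is approximate and error
handling is omitted):

#include <stdint.h>

/* The MMIO read callback must return the value immediately, so the
   socket round trip has to block right here instead of yielding back
   to the event loop. */
static uint64_t pciefw_mmio_read(void *opaque, uint64_t addr, unsigned size)
{
  uint64_t val = 0;
  /* send_request(opaque, PCIEFW_OP_READ_MEM, addr, size); */
  /* wait_for_reply(opaque, &val); blocks until the endpoint answers */
  (void)opaque; (void)addr; (void)size;
  return val;
}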

Stefan



Re: [Qemu-devel] TCP based PCIE request forwarding

2012-11-21 Thread lementec fabien
Hi,

As far as I know, all the PCIE devices implemented here
work with a 256-byte config header.

Cheers,

Fabien.

2012/11/20 Jason Baron jba...@redhat.com:
 On Fri, Nov 16, 2012 at 09:39:07AM +0100, lementec fabien wrote:
 Hi,

 I am a software engineer who works in an electronics group. Using QEMU
 to emulate devices allows me to start writing and testing LINUX software
 before the device is actually available. In the group, we are mostly
 working with XILINX FPGAs, communicating with the host via PCIE. The
 devices are implemented in VHDL.

 As you know, the current PCI config space is limited to 256 bytes on x86. I was
 wondering, then, if you needed to work around this limitation in any way,
 since you've mentioned you're using PCIE (which has a 4k config space)?

 Thanks,

 -Jason




Re: [Qemu-devel] TCP based PCIE request forwarding

2012-11-21 Thread lementec fabien
I have attached a doc describing the current small protocol implementation.

2012/11/19 Stefan Hajnoczi stefa...@gmail.com:
 On Fri, Nov 16, 2012 at 02:05:29PM +0100, lementec fabien wrote:
 Actually, I wanted to be independent of the QEMU event loop. Plus,
 some proprietary simulation environments provide a closed socket
 based interface to 'stimulate' the emulated device, at the PCIE level
 for instance. These environments are sometimes installed on clusters
 not running QEMU. The socket based approach fits quite well.

 Not knowing about QEMU internals, I spent some hours trying to find
 out the best way to plug into QEMU, and did not find ivshmem appropriate.
 Honestly, I wanted to have a working solution asap, and it did not take
 long before I opted for the socket based approach. Now that it is working,
 I can take time to reconsider things according to others' needs, and ideally
 an integration into QEMU.

 I suggest writing up a spec for the socket protocol.  It can be put in
 docs/specs/ (like the ivshmem spec).

 This is both a good way to increase discussion and important for others
 who may wish to make use of this feature.

 Stefan


pciefw.protocol
Description: Binary data


Re: [Qemu-devel] TCP based PCIE request forwarding

2012-11-20 Thread Jason Baron
On Fri, Nov 16, 2012 at 09:39:07AM +0100, lementec fabien wrote:
 Hi,
 
 I am a software engineer who works in an electronics group. Using QEMU
 to emulate devices allows me to start writing and testing LINUX software
 before the device is actually available. In the group, we are mostly
 working with XILINX FPGAs, communicating with the host via PCIE. The
 devices are implemented in VHDL.

As you know, the current PCI config space is limited to 256 bytes on x86. I was
wondering, then, if you needed to work around this limitation in any way,
since you've mentioned you're using PCIE (which has a 4k config space)?

Thanks,

-Jason




Re: [Qemu-devel] TCP based PCIE request forwarding

2012-11-19 Thread Stefan Hajnoczi
On Fri, Nov 16, 2012 at 02:05:29PM +0100, lementec fabien wrote:
 Actually, I wanted to be independent of the QEMU event loop. Plus,
 some proprietary simulation environments provide a closed socket
 based interface to 'stimulate' the emulated device, at the PCIE level
 for instance. These environments are sometimes installed on clusters
 not running QEMU. The socket based approach fits quite well.
 
 Not knowing about QEMU internals, I spent some hours trying to find
 out the best way to plug into QEMU, and did not find ivshmem appropriate.
 Honestly, I wanted to have a working solution asap, and it did not take
 long before I opted for the socket based approach. Now that it is working,
 I can take time to reconsider things according to others' needs, and ideally
 an integration into QEMU.

I suggest writing up a spec for the socket protocol.  It can be put in
docs/specs/ (like the ivshmem spec).

This is both a good way to increase discussion and important for others
who may wish to make use of this feature.

Stefan



Re: [Qemu-devel] TCP based PCIE request forwarding

2012-11-19 Thread lementec fabien
Hi,

Thanks, it is actually a good idea to start with. I will write a spec
based on an improved version of what I have already implemented.
I think I will have some time this week; I will keep you updated soon.

Best regards,

Fabien.

2012/11/19 Stefan Hajnoczi stefa...@gmail.com:
 On Fri, Nov 16, 2012 at 02:05:29PM +0100, lementec fabien wrote:
 Actually, I wanted to be independent of the QEMU event loop. Plus,
 some proprietary simulation environments provide a closed socket
 based interface to 'stimulate' the emulated device, at the PCIE level
 for instance. These environments are sometimes installed on clusters
 not running QEMU. The socket based approach fits quite well.

 Not knowing about QEMU internals, I spent some hours trying to find
 out the best way to plug into QEMU, and did not find ivshmem appropriate.
 Honestly, I wanted to have a working solution asap, and it did not take
 long before I opted for the socket based approach. Now that it is working,
 I can take time to reconsider things according to others' needs, and ideally
 an integration into QEMU.

 I suggest writing up a spec for the socket protocol.  It can be put in
 docs/specs/ (like the ivshmem spec).

 This is both a good way to increase discussion and important for others
 who may wish to make use of this feature.

 Stefan



[Qemu-devel] TCP based PCIE request forwarding

2012-11-16 Thread lementec fabien
Hi,

I am a software engineer who works in an electronics group. Using QEMU
to emulate devices allows me to start writing and testing LINUX software
before the device is actually available. In the group, we are mostly
working with XILINX FPGAs, communicating with the host via PCIE. The
devices are implemented in VHDL.

I wanted to be able to reuse our VHDL designs in QEMU. To this end,
I implemented a QEMU TCP based PCIE request forwarder, so that I can
emulate our device in a standard process, and use the GHDL GCC frontend
plus some glue.

The fact that it is TCP based allows me to run the device on another
machine, which is a requirement.

The whole thing is available here:
https://github.com/texane/vpcie

The request forwarder is available here:
https://github.com/texane/vpcie/blob/master/qemu/pciefw.c

It requires a patch to QEMU, available here:
https://github.com/texane/vpcie/blob/master/qemu/qemu_58617a795c8067b2f9800cffce60f38707d3aa31.diff

Since I am the only one using it and I wanted a working version soon,
I use a naive method to plug into QEMU, which can block the VM. Plus,
I did not take care of some PCIE related details. But it works well
enough.

Do you think the approach of forwarding PCIE requests over TCP could
be integrated into QEMU? If so, what kind of modifications should
be made to this patch?

Best regards,

Fabien Le Mentec.



Re: [Qemu-devel] TCP based PCIE request forwarding

2012-11-16 Thread Stefan Hajnoczi
On Fri, Nov 16, 2012 at 9:39 AM, lementec fabien
fabien.lemen...@gmail.com wrote:
 I am a software engineer who works in an electronics group. Using QEMU
 to emulate devices allows me to start writing and testing LINUX software
 before the device is actually available. In the group, we are mostly
 working with XILINX FPGAs, communicating with the host via PCIE. The
 devices are implemented in VHDL.

 I wanted to be able to reuse our VHDL designs in QEMU. To this end,
 I implemented a QEMU TCP based PCIE request forwarder, so that I can
 emulate our device in a standard process, and use the GHDL GCC frontend
 plus some glue.

 The fact that it is TCP based allows me to run the device on another
 machine, which is a requirement.

 The whole thing is available here:
 https://github.com/texane/vpcie

 The request forwarder is available here:
 https://github.com/texane/vpcie/blob/master/qemu/pciefw.c

 It requires a patch to QEMU, available here:
 https://github.com/texane/vpcie/blob/master/qemu/qemu_58617a795c8067b2f9800cffce60f38707d3aa31.diff

 Since I am the only one using it and I wanted a working version soon,
 I use a naive method to plug into QEMU, which can block the VM. Plus,
 I did not take care of some PCIE related details. But it works well
 enough.

 Do you think the approach of forwarding PCIE requests over TCP could
 be integrated into QEMU? If so, what kind of modifications should
 be made to this patch?

Thanks for sharing your code.  There is definitely interest in
integrating hardware simulation with QEMU in the wider community.

There is a little bit of overlap with hw/ivshmem.c but I don't think
ivshmem is as flexible for modelling arbitrary PCIe adapters.

I guess the reason you didn't try linking the GHDL object files
against QEMU is that you wanted full control over the process (e.g. so
you don't need to worry about QEMU's event loop)?

Stefan



Re: [Qemu-devel] TCP based PCIE request forwarding

2012-11-16 Thread lementec fabien
Hi,

Thanks for your reply.

Actually, I wanted to be independent of the QEMU event loop. Plus,
some proprietary simulation environments provide a closed socket
based interface to 'stimulate' the emulated device, at the PCIE level
for instance. These environments are sometimes installed on clusters
not running QEMU. The socket based approach fits quite well.

Not knowing about QEMU internals, I spent some hours trying to find
out the best way to plug into QEMU, and did not find ivshmem appropriate.
Honestly, I wanted to have a working solution asap, and it did not take
long before I opted for the socket based approach. Now that it is working,
I can take time to reconsider things according to others' needs, and ideally
an integration into QEMU.

Fabien.

2012/11/16 Stefan Hajnoczi stefa...@gmail.com:
 On Fri, Nov 16, 2012 at 9:39 AM, lementec fabien
 fabien.lemen...@gmail.com wrote:
 I am a software engineer who works in an electronics group. Using QEMU
 to emulate devices allows me to start writing and testing LINUX software
 before the device is actually available. In the group, we are mostly
 working with XILINX FPGAs, communicating with the host via PCIE. The
 devices are implemented in VHDL.

 I wanted to be able to reuse our VHDL designs in QEMU. To this end,
 I implemented a QEMU TCP based PCIE request forwarder, so that I can
 emulate our device in a standard process, and use the GHDL GCC frontend
 plus some glue.

 The fact that it is TCP based allows me to run the device on another
 machine, which is a requirement.

 The whole thing is available here:
 https://github.com/texane/vpcie

 The request forwarder is available here:
 https://github.com/texane/vpcie/blob/master/qemu/pciefw.c

 It requires a patch to QEMU, available here:
 https://github.com/texane/vpcie/blob/master/qemu/qemu_58617a795c8067b2f9800cffce60f38707d3aa31.diff

 Since I am the only one using it and I wanted a working version soon,
 I use a naive method to plug into QEMU, which can block the VM. Plus,
 I did not take care of some PCIE related details. But it works well
 enough.

 Do you think the approach of forwarding PCIE requests over TCP could
 be integrated into QEMU? If so, what kind of modifications should
 be made to this patch?

 Thanks for sharing your code.  There is definitely interest in
 integrating hardware simulation with QEMU in the wider community.

 There is a little bit of overlap with hw/ivshmem.c but I don't think
 ivshmem is as flexible for modelling arbitrary PCIe adapters.

 I guess the reason you didn't try linking the GHDL object files
 against QEMU is that you wanted full control over the process (e.g. so
 you don't need to worry about QEMU's event loop)?

 Stefan