On 11/03/2010 07:46 PM, Adam Litke wrote:
On Wed, 2010-11-03 at 10:28 -0500, Michael Roth wrote:
Process VPPackets coming in from channel and send them to the
appropriate server/client connections.

Signed-off-by: Michael Roth <mdr...@linux.vnet.ibm.com>
---
  virtproxy.c |   42 ++++++++++++++++++++++++++++++++++++++++++
  1 files changed, 42 insertions(+), 0 deletions(-)

diff --git a/virtproxy.c b/virtproxy.c
index 6c3611b..57ab2b0 100644
--- a/virtproxy.c
+++ b/virtproxy.c
@@ -235,6 +235,48 @@ static void vp_channel_accept(void *opaque)
      vp_set_fd_handler(drv->listen_fd, NULL, NULL, NULL);
  }

+/* handle data packets
+ *
+ * process VPPackets containing data and send them to the corresponding
+ * FDs
+ */
+static int vp_handle_data_packet(void *drv, const VPPacket *pkt)
+{
+    int fd, ret;
+
+    TRACE("called with drv: %p", drv);
+
+    if (pkt->type == VP_PKT_CLIENT) {
+        TRACE("received client packet, client fd: %d, server fd: %d",
+              pkt->payload.proxied.client_fd, pkt->payload.proxied.server_fd);
+        fd = pkt->payload.proxied.server_fd;
+    } else if (pkt->type == VP_PKT_SERVER) {
+        TRACE("received server packet, client fd: %d, server fd: %d",
+              pkt->payload.proxied.client_fd, pkt->payload.proxied.server_fd);
+        fd = pkt->payload.proxied.client_fd;
+    } else {
+        TRACE("unknown packet type");
+        return -1;
+    }
+
+    /* TODO: proxying in non-blocking mode can cause us to spin here
+     * for slow servers/clients. need to use write()s and maintain
+     * a per-conn write queue that we clear out before sending any
+     * more data to the fd
+     */

Hmm, so a guest can cause a denial of service in the host qemu process?
Are you working on adding the write queues?


Not a guest...though they could DoS the agent in this manner. But there seems to be an analogous situation in qemu currently: a malicious client connects to, say, a socket set up to listen for telnet connections to the qemu monitor, sends a bunch of info commands, and never reads anything coming back. That could eventually cause the qemu process to hang when a monitor_flush->qemu_chr_write->send_all(client_fd) happens. A malicious client connecting to a forwarding port/socket on the host side could cause a similar situation.

So if it's a problem, it seems like a problem that should be addressed more generally. A general, asynchronous implementation of send_all for instance.
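A minimal sketch of what such a per-connection write queue might look like (the names WriteQueue, wq_drain, and wq_send are illustrative only, not from the virtproxy source): try a non-blocking write, stash whatever the kernel would not take, and drain the stash before accepting new data for that fd.

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical per-connection write queue. */
typedef struct WriteQueue {
    char *buf;    /* pending bytes the kernel hasn't accepted yet */
    size_t len;
} WriteQueue;

/* Flush as much queued data as the fd will take right now.
 * Returns 0 if the queue is empty, 1 if data remains, -1 on error. */
static int wq_drain(WriteQueue *q, int fd)
{
    while (q->len > 0) {
        ssize_t n = write(fd, q->buf, q->len);
        if (n == -1) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return 1;   /* fd not writable; retry from the handler */
            return -1;
        }
        memmove(q->buf, q->buf + n, q->len - n);
        q->len -= n;
    }
    free(q->buf);
    q->buf = NULL;
    return 0;
}

/* Send data without blocking: drain old data first, attempt a direct
 * write, and queue any unwritten tail instead of spinning. */
static int wq_send(WriteQueue *q, int fd, const void *data, size_t len)
{
    size_t off = 0;
    if (wq_drain(q, fd) == -1)
        return -1;
    if (q->len == 0) {          /* queue empty: try writing directly */
        ssize_t n = write(fd, data, len);
        if (n == -1 && errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;
        if (n > 0)
            off = (size_t)n;
    }
    if (off < len) {            /* stash the unwritten tail */
        q->buf = realloc(q->buf, q->len + (len - off));
        memcpy(q->buf + q->len, (const char *)data + off, len - off);
        q->len += len - off;
    }
    return 0;
}
```

The fd-writable callback would then call wq_drain() until it returns 0 before the connection accepts more packets, so a slow peer backs up its own queue rather than stalling the whole process.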

+    ret = vp_send_all(fd, (void *)pkt->payload.proxied.data,
+            pkt->payload.proxied.bytes);
+    if (ret == -1) {
+        LOG("error sending data over channel");
+        return -1;
+    } else if (ret != pkt->payload.proxied.bytes) {
+        TRACE("buffer full?");

This can happen?  Does this bring the whole connection down?  The whole
virtproxy instance?


It can only happen if write() returns a short count, so only if we do a non-blocking write() inside vp_send_all/send_all. It's basically a TODO: right now the writes are blocking, so it shouldn't happen. If we used non-blocking writes, though, we'd need to call vp_send_all(fd, (void *)pkt->payload.proxied.data + ret, pkt->payload.proxied.bytes - ret) in a loop, and a short count wouldn't be an error case. I'll clean this up a bit.
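For reference, the retry loop described above would look roughly like this (send_all_loop is an illustrative name, not the actual vp_send_all): advance an offset into the buffer on each short write instead of treating it as an error.

```c
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* Sketch of a send-all loop that retries short writes by advancing
 * into the buffer rather than reporting "buffer full". */
static ssize_t send_all_loop(int fd, const void *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        ssize_t n = write(fd, (const char *)buf + off, len - off);
        if (n == -1) {
            if (errno == EINTR)
                continue;      /* interrupted: just retry */
            return -1;         /* real error */
        }
        off += (size_t)n;      /* short write: advance and retry */
    }
    return (ssize_t)off;
}
```

With blocking fds this loop terminates quickly; with non-blocking fds it degenerates into the spin the TODO warns about, which is why the write-queue approach is the eventual fix.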
