[Qemu-devel] [Bug 818673] Re: virtio: trying to map MMIO memory

2011-08-13 Thread Rick Vernam
So that would point to virtio.  This appears to be the place for virtio bugs, 
correct?
Should I be doing anything to help usher this along?

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/818673

Title:
  virtio: trying to map MMIO memory

Status in QEMU:
  New

Bug description:
  Qemu host is Core i7, running Linux.  Guest is Windows XP sp3.
  Often, qemu will crash shortly after starting (1-5 minutes) with the message 
"qemu-system-x86_64: virtio: trying to map MMIO memory".
  This has occurred with qemu-kvm 0.14, qemu-kvm 0.14.1, qemu 0.15.0-rc0 and 
qemu 0.15.0-rc1.
  Qemu is started as such:
  qemu-system-x86_64 -cpu host -enable-kvm -pidfile /home/rick/qemu/hds/wxp.pid 
-drive file=/home/rick/qemu/hds/wxp.raw,if=virtio -m 768 -name WinXP -net 
nic,model=virtio -net user -localtime -usb -vga qxl -device virtio-serial 
-chardev spicevmc,name=vdagent,id=vdagent -device 
virtserialport,chardev=vdagent,name=com.redhat.spice.0 -spice 
port=1234,disable-ticketing -daemonize -monitor 
telnet:localhost:12341,server,nowait
  The WXP guest has virtio 1.1.16 drivers for net and scsi, and the most 
current spice binaries from spice-space.org.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/818673/+subscriptions



Re: [Qemu-devel] [net-next RFC PATCH 0/7] multiqueue support for tun/tap

2011-08-13 Thread Jason Wang


- Original Message -
> On Fri, 2011-08-12 at 09:54 +0800, Jason Wang wrote:
> > As multi-queue NICs are commonly used in high-end servers,
> > the current single-queue tap cannot satisfy the
> > requirement of scaling guest network performance as the
> > number of vcpus increases. So the following series
> > implements multi-queue support in tun/tap.
> >
> > In order to take advantage of this, a multi-queue capable
> > driver and qemu are also needed. I just rebased the latest
> > version of Krishna's multi-queue virtio-net driver into this
> > series to simplify testing. For multiqueue-capable
> > qemu, you can refer to the patches I posted at
> > http://www.spinics.net/lists/kvm/msg52808.html. Vhost is
> > also a must to achieve high performance, and its code can
> > be used for multi-queue without modification. Alternatively,
> > this series can also be used with Krishna's M:N
> > implementation of multiqueue, but I didn't test it.
> >
> > The idea is simple: each socket is abstracted as a queue
> > for tun/tap, and userspace may open as many files as
> > required and then attach them to the device. In order to
> > keep ABI compatibility, device creation is still
> > done in TUNSETIFF, and two new ioctls, TUNATTACHQUEUE and
> > TUNDETACHQUEUE, are added for the user to manipulate the
> > number of queues of the tun/tap device.
> 
> Is it possible to have tap create these queues automatically when
> TUNSETIFF is called, instead of having userspace do the new
> ioctls? I am just wondering if it is possible to get multi-queue
> enabled without any changes to qemu. I guess the number of
> queues
> could be based on the number of vhost threads/guest virtio-net queues.

It's possible, but we would need to at least pass the number of
queues through TUNSETIFF, which may break the ABI. And this method
is not as flexible as adding new ioctls: consider that we may want
to disable some queues for reasons such as running a single-queue
guest or PXE on a multiqueue virtio-net backend.

> 
> Also, is it possible to enable multi-queue on the host alone without
> any guest virtio-net changes?

If we use the current driver without changes, it can run on a host
with multiqueue enabled, but it cannot make use of all of the
queues.

> 
> Have you done any multiple TCP_RR/UDP_RR testing with small packet
> sizes? 256byte request/response with 50-100 instances?

Not yet; I will do it after I am back from the KVM Forum.

> 
> >
> > I've done some basic performance testing of multi queue
> > tap. For tun, I just test it through vpnc.
> >
> > Notes:
> > - Tests show improvement when receiving packets from a
> > local/external host to the guest, and when sending big packets
> > from the guest to a local/external host.
> > - The current multiqueue-based virtio-net/tap introduces a
> > regression when sending small packets (512 bytes) from the
> > guest to a local/external host. I suspect it's an issue of
> > queue selection in both the guest driver and tap. I will
> > continue to investigate.
> > - I will post the performance numbers as a reply to this
> > mail.
> >
> > TODO:
> > - solve the issue of small-packet transmission.
> > - address the comments on the virtio-net driver
> > - performance tuning
> >
> > Please review and comment. Thanks.
> >
> > ---
> >
> > Jason Wang (5):
> >   tuntap: move socket/sock related structures to tun_file
> >   tuntap: categorize ioctl
> >   tuntap: introduce multiqueue related flags
> >   tuntap: multiqueue support
> >   tuntap: add ioctls to attach or detach a file form tap device
> >
> > Krishna Kumar (2):
> >   Change virtqueue structure
> >   virtio-net changes
> >
> >
> >  drivers/net/tun.c | 738 ++-
> >  drivers/net/virtio_net.c | 578 --
> >  drivers/virtio/virtio_pci.c | 10 -
> >  include/linux/if_tun.h | 5
> >  include/linux/virtio.h | 1
> >  include/linux/virtio_net.h | 3
> >  6 files changed, 867 insertions(+), 468 deletions(-)
> >
> 
> --
> To unsubscribe from this list: send the line "unsubscribe
> linux-kernel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/



Re: [Qemu-devel] [net-next RFC PATCH 4/7] tuntap: multiqueue support

2011-08-13 Thread Jason Wang


- Original Message -
> On Fri, Aug 12, 2011 at 09:55:20AM +0800, Jason Wang wrote:
> > With the abstraction that each socket is the backend of a
> > queue for userspace, this patch adds multiqueue support for
> > the tap device by allowing multiple sockets to be attached to a
> > tap device. We can then parallelize transmission by putting
> > packets into different sockets.
> >
> > As queue-related information is stored in the private_data of
> > the file, we can simply implement multiqueue support
> > by storing an array of pointers to sockets in the
> > tap device. Ioctls may then be added to manipulate those
> > pointers for adding or removing queues.
> >
> > In order to make the tx path lockless, NETIF_F_LLTX is used
> > for the multiqueue tap device. And RCU is used for
> > synchronization between packet handling and system calls
> > such as removing queues.
> >
> > Currently, multiqueue support is limited to tap, but it's
> > easy to also enable it for tun if we find it is helpful.
> 
> Question below about calls to tun_get_slot().
> 
> Thanx, Paul
> 
> > Signed-off-by: Jason Wang 
> > ---
> >  drivers/net/tun.c | 376
> >  ++---
> >  1 files changed, 243 insertions(+), 133 deletions(-)
> >
> > diff --git a/drivers/net/tun.c b/drivers/net/tun.c
> > index 4cd292a..8bc6dff 100644
> > --- a/drivers/net/tun.c
> > +++ b/drivers/net/tun.c
> > @@ -108,6 +108,8 @@ struct tap_filter {
> > unsigned char addr[FLT_EXACT_COUNT][ETH_ALEN];
> >  };
> >
> > +#define MAX_TAP_QUEUES (NR_CPUS < 16 ? NR_CPUS : 16)
> > +
> >  struct tun_file {
> > struct sock sk;
> > struct socket socket;
> > @@ -115,7 +117,7 @@ struct tun_file {
> > int vnet_hdr_sz;
> > struct tap_filter txflt;
> > atomic_t count;
> > - struct tun_struct *tun;
> > + struct tun_struct __rcu *tun;
> > struct net *net;
> > struct fasync_struct *fasync;
> > unsigned int flags;
> > @@ -124,7 +126,8 @@ struct tun_file {
> >  struct tun_sock;
> >
> >  struct tun_struct {
> > - struct tun_file *tfile;
> > + struct tun_file *tfiles[MAX_TAP_QUEUES];
> > + unsigned int numqueues;
> > unsigned int flags;
> > uid_t owner;
> > gid_t group;
> > @@ -139,80 +142,183 @@ struct tun_struct {
> >  #endif
> >  };
> >
> > -static int tun_attach(struct tun_struct *tun, struct file *file)
> > +static DEFINE_SPINLOCK(tun_lock);
> > +
> > +/*
> > + * get_slot: return a [unused/occupied] slot in tun->tfiles[]:
> > + * - if 'f' is NULL, return the first empty slot;
> > + * - otherwise, return the slot this pointer occupies.
> > + */
> > +static int tun_get_slot(struct tun_struct *tun, struct tun_file
> > *tfile)
> >  {
> > - struct tun_file *tfile = file->private_data;
> > - int err;
> > + int i;
> >
> > - ASSERT_RTNL();
> > + for (i = 0; i < MAX_TAP_QUEUES; i++) {
> > + if (rcu_dereference(tun->tfiles[i]) == tfile)
> > + return i;
> > + }
> >
> > - netif_tx_lock_bh(tun->dev);
> > + /* Should never happen */
> > + BUG_ON(1);
> > +}
> >
> > - err = -EINVAL;
> > - if (tfile->tun)
> > - goto out;
> > +/*
> > + * tun_get_queue(): calculate the queue index
> > + * - if skbs comes from mq nics, we can just borrow
> > + * - if not, calculate from the hash
> > + */
> > +static struct tun_file *tun_get_queue(struct net_device *dev,
> > + struct sk_buff *skb)
> > +{
> > + struct tun_struct *tun = netdev_priv(dev);
> > + struct tun_file *tfile = NULL;
> > + int numqueues = tun->numqueues;
> > + __u32 rxq;
> >
> > - err = -EBUSY;
> > - if (tun->tfile)
> > + BUG_ON(!rcu_read_lock_held());
> > +
> > + if (!numqueues)
> > goto out;
> >
> > - err = 0;
> > - tfile->tun = tun;
> > - tun->tfile = tfile;
> > - netif_carrier_on(tun->dev);
> > - dev_hold(tun->dev);
> > - sock_hold(&tfile->sk);
> > - atomic_inc(&tfile->count);
> > + if (likely(skb_rx_queue_recorded(skb))) {
> > + rxq = skb_get_rx_queue(skb);
> > +
> > + while (unlikely(rxq >= numqueues))
> > + rxq -= numqueues;
> > +
> > + tfile = rcu_dereference(tun->tfiles[rxq]);
> > + if (tfile)
> > + goto out;
> > + }
> > +
> > + /* Check if we can use flow to select a queue */
> > + rxq = skb_get_rxhash(skb);
> > + if (rxq) {
> > + tfile = rcu_dereference(tun->tfiles[rxq % numqueues]);
> > + if (tfile)
> > + goto out;
> > + }
> > +
> > + /* Everything failed - find first available queue */
> > + for (rxq = 0; rxq < MAX_TAP_QUEUES; rxq++) {
> > + tfile = rcu_dereference(tun->tfiles[rxq]);
> > + if (tfile)
> > + break;
> > + }
> >
> >  out:
> > - netif_tx_unlock_bh(tun->dev);
> > - return err;
> > + return tfile;
> >  }
> >
> > -static void __tun_detach(struct tun_struct *tun)
> > +static int tun_detach(struct tun_file *tfile, bool clean)
> >  {
> > - struct tun_file *tfile = tun->tfile;
> > - /* Detach from net device */
> > - netif_tx_lock_bh(tun->dev);
> > - netif_carrier_off(tun->dev);
> > - tun->tfile = NULL;
> > - netif_tx_unlock_bh(tun->dev);
> > -
> > - /* Drop read queue */
> > - skb_queue_purge(&tfile->socket.sk->sk_re

Re: [Qemu-devel] [net-next RFC PATCH 4/7] tuntap: multiqueue support

2011-08-13 Thread Jason Wang


- Original Message -
> On Friday, 12 August 2011 at 09:55 +0800, Jason Wang wrote:
> 
> >+ rxq = skb_get_rxhash(skb);
> >+ if (rxq) {
> >+ tfile = rcu_dereference(tun->tfiles[rxq % numqueues]);
> >+ if (tfile)
> >+ goto out;
> >+ }
> 
> You can avoid an expensive divide with following trick :
> 
> u32 idx = ((u64)rxq * numqueues) >> 32;
> 

Sure.

> 
> 
> > -static struct tun_struct *tun_get(struct file *file)
> > +static void tun_detach_all(struct net_device *dev)
> >  {
> > - return __tun_get(file->private_data);
> > + struct tun_struct *tun = netdev_priv(dev);
> > + struct tun_file *tfile, *tfile_list[MAX_TAP_QUEUES];
> > + int i, j = 0;
> > +
> > + spin_lock(&tun_lock);
> > +
> > + for (i = 0; i < MAX_TAP_QUEUES && tun->numqueues; i++) {
> > + tfile = rcu_dereference_protected(tun->tfiles[i],
> > + lockdep_is_held(&tun_lock));
> > + if (tfile) {
> > + wake_up_all(&tfile->wq.wait);
> > + tfile_list[i++] = tfile;
> 
> typo here, you want tfile_list[j++] = tfile;
> 

Yes, thanks for catching this.

> > + rcu_assign_pointer(tun->tfiles[i], NULL);
> > + rcu_assign_pointer(tfile->tun, NULL);
> > + --tun->numqueues;
> > + }
> > + }
> > + BUG_ON(tun->numqueues != 0);
> > + spin_unlock(&tun_lock);
> > +
> > + synchronize_rcu();
> > + for(--j; j >= 0; j--)
> > + sock_put(&tfile_list[j]->sk);
> >  }
> >
> 
> Could you take a look at net/packet/af_packet.c, to check how David
> did
> the whole fanout thing ?
> 
> __fanout_unlink()
> 
> Trick is to not leave NULL entries in the tun->tfiles[] array.
> 
> It makes things easier in hot path.

Sure, I will take a look at this.

> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html



Re: [Qemu-devel] [net-next RFC PATCH 7/7] virtio-net changes

2011-08-13 Thread Jason Wang


- Original Message -
> On Fri, 2011-08-12 at 09:55 +0800, Jason Wang wrote:
> > From: Krishna Kumar 
> >
> > Implement mq virtio-net driver.
> >
> > Though struct virtio_net_config changes, it works with the old
> > qemu since the last element is not accessed unless qemu sets
> > VIRTIO_NET_F_MULTIQUEUE.
> >
> > Signed-off-by: Krishna Kumar 
> > Signed-off-by: Jason Wang 
> > ---
> 
> Could these changes be documented to virtio-spec as well?

Yes, definitely.

> 
> >  drivers/net/virtio_net.c | 578
> >  +++-
> >  include/linux/virtio_net.h | 3
> >  2 files changed, 411 insertions(+), 170 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 0c7321c..03a199d 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -49,16 +49,48 @@ struct virtnet_stats {
> > u64 rx_packets;
> >  };
> >
> > -struct virtnet_info {
> > - struct virtio_device *vdev;
> > - struct virtqueue *rvq, *svq, *cvq;
> > - struct net_device *dev;
> > +/* Internal representation of a send virtqueue */
> > +struct send_queue {
> > + /* Virtqueue associated with this send _queue */
> > + struct virtqueue *svq;
> > +
> > + /* TX: fragments + linear part + virtio header */
> > + struct scatterlist tx_sg[MAX_SKB_FRAGS + 2];
> > +};
> > +
> > +/* Internal representation of a receive virtqueue */
> > +struct receive_queue {
> > + /* Virtqueue associated with this receive_queue */
> > + struct virtqueue *rvq;
> > +
> > + /* Back pointer to the virtnet_info */
> > + struct virtnet_info *vi;
> > +
> > struct napi_struct napi;
> > - unsigned int status;
> >
> > /* Number of input buffers, and max we've ever had. */
> > unsigned int num, max;
> >
> > + /* Work struct for refilling if we run low on memory. */
> > + struct delayed_work refill;
> > +
> > + /* Chain pages by the private ptr. */
> > + struct page *pages;
> > +
> > + /* RX: fragments + linear part + virtio header */
> > + struct scatterlist rx_sg[MAX_SKB_FRAGS + 2];
> > +};
> > +
> > +struct virtnet_info {
> > + struct send_queue **sq;
> > + struct receive_queue **rq;
> > +
> > + int numtxqs; /* # of rxqs/txqs */
> > + struct virtio_device *vdev;
> > + struct virtqueue *cvq;
> > + struct net_device *dev;
> > + unsigned int status;
> > +
> > /* I like... big packets and I cannot lie! */
> > bool big_packets;
> >
> > @@ -67,16 +99,6 @@ struct virtnet_info {
> >
> > /* Active statistics */
> > struct virtnet_stats __percpu *stats;
> > -
> > - /* Work struct for refilling if we run low on memory. */
> > - struct delayed_work refill;
> > -
> > - /* Chain pages by the private ptr. */
> > - struct page *pages;
> > -
> > - /* fragments + linear part + virtio header */
> > - struct scatterlist rx_sg[MAX_SKB_FRAGS + 2];
> > - struct scatterlist tx_sg[MAX_SKB_FRAGS + 2];
> >  };
> >
> >  struct skb_vnet_hdr {
> > @@ -106,22 +128,22 @@ static inline struct skb_vnet_hdr
> > *skb_vnet_hdr(struct sk_buff *skb)
> >   * private is used to chain pages for big packets, put the whole
> >   * most recent used list in the beginning for reuse
> >   */
> > -static void give_pages(struct virtnet_info *vi, struct page *page)
> > +static void give_pages(struct receive_queue *rq, struct page *page)
> >  {
> > struct page *end;
> >
> > /* Find end of list, sew whole thing into vi->pages. */
> > for (end = page; end->private; end = (struct page *)end->private);
> > - end->private = (unsigned long)vi->pages;
> > - vi->pages = page;
> > + end->private = (unsigned long)rq->pages;
> > + rq->pages = page;
> >  }
> >
> > -static struct page *get_a_page(struct virtnet_info *vi, gfp_t
> > gfp_mask)
> > +static struct page *get_a_page(struct receive_queue *rq, gfp_t
> > gfp_mask)
> >  {
> > - struct page *p = vi->pages;
> > + struct page *p = rq->pages;
> >
> > if (p) {
> > - vi->pages = (struct page *)p->private;
> > + rq->pages = (struct page *)p->private;
> > /* clear private here, it is used to chain pages */
> > p->private = 0;
> > } else
> > @@ -132,12 +154,13 @@ static struct page *get_a_page(struct
> > virtnet_info *vi, gfp_t gfp_mask)
> >  static void skb_xmit_done(struct virtqueue *svq)
> >  {
> > struct virtnet_info *vi = svq->vdev->priv;
> > + int qnum = svq->queue_index / 2; /* RX/TX vqs are allocated in
> > pairs */
> >
> > /* Suppress further interrupts. */
> > virtqueue_disable_cb(svq);
> >
> > /* We were probably waiting for more output buffers. */
> > - netif_wake_queue(vi->dev);
> > + netif_wake_subqueue(vi->dev, qnum);
> >  }
> >
> >  static void set_skb_frag(struct sk_buff *skb, struct page *page,
> > @@ -157,9 +180,10 @@ static void set_skb_frag(struct sk_buff *skb,
> > struct page *page,
> > *len -= f->size;
> >  }
> >
> > -static struct sk_buff *page_to_skb(struct virtnet_info *vi,
> > +static struct sk_buff *page_to_skb(struct receive_queue *rq,
> >struct page *p

[Qemu-devel] [PATCH v2] posix-aio-compat: fix latency issues

2011-08-13 Thread Avi Kivity
In certain circumstances, posix-aio-compat can incur a lot of latency:
 - threads are created by vcpu threads, so if vcpu affinity is set,
   aio threads inherit vcpu affinity.  This can cause many aio threads
   to compete for one cpu.
 - we can create up to max_threads (64) aio threads in one go; since a
   pthread_create can take around 30μs, we have up to 2ms of cpu time
   under a global lock.

Fix by:
 - moving thread creation to the main thread, so we inherit the main
   thread's affinity instead of the vcpu thread's affinity.
 - if a thread is currently being created and we need to create yet
   another thread, let the thread being born create the new thread, reducing
   the amount of time we spend in the main thread.
 - drop the local lock while creating a thread (we may still hold the
   global mutex, though)

Note this doesn't eliminate latency completely; scheduler artifacts or
lack of host cpu resources can still cause it.  We may want pre-allocated
threads when this cannot be tolerated.

Thanks to Uli Obergfell of Red Hat for his excellent analysis and suggestions.

Signed-off-by: Avi Kivity 
---
v2: simplify do_spawn_thread() locking

 posix-aio-compat.c |   44 ++--
 1 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/posix-aio-compat.c b/posix-aio-compat.c
index 8dc00cb..c3febfb 100644
--- a/posix-aio-compat.c
+++ b/posix-aio-compat.c
@@ -30,6 +30,7 @@
 
 #include "block/raw-posix-aio.h"
 
+static void do_spawn_thread(void);
 
 struct qemu_paiocb {
 BlockDriverAIOCB common;
@@ -64,6 +65,9 @@ static pthread_attr_t attr;
 static int max_threads = 64;
 static int cur_threads = 0;
 static int idle_threads = 0;
+static int new_threads = 0; /* backlog of threads we need to create */
+static int pending_threads = 0; /* threads created but not running yet */
+static QEMUBH *new_thread_bh;
 static QTAILQ_HEAD(, qemu_paiocb) request_list;
 
 #ifdef CONFIG_PREADV
@@ -311,6 +315,11 @@ static void *aio_thread(void *unused)
 
 pid = getpid();
 
+mutex_lock(&lock);
+pending_threads--;
+mutex_unlock(&lock);
+do_spawn_thread();
+
 while (1) {
 struct qemu_paiocb *aiocb;
 ssize_t ret = 0;
@@ -381,11 +390,20 @@ static void *aio_thread(void *unused)
 return NULL;
 }
 
-static void spawn_thread(void)
+static void do_spawn_thread(void)
 {
 sigset_t set, oldset;
 
-cur_threads++;
+mutex_lock(&lock);
+if (!new_threads) {
+mutex_unlock(&lock);
+return;
+}
+
+new_threads--;
+pending_threads++;
+
+mutex_unlock(&lock);
 
 /* block all signals */
 if (sigfillset(&set)) die("sigfillset");
@@ -396,6 +414,27 @@ static void spawn_thread(void)
 if (sigprocmask(SIG_SETMASK, &oldset, NULL)) die("sigprocmask restore");
 }
 
+static void spawn_thread_bh_fn(void *opaque)
+{
+do_spawn_thread();
+}
+
+static void spawn_thread(void)
+{
+cur_threads++;
+new_threads++;
+/* If there are threads being created, they will spawn new workers, so
+ * we don't spend time creating many threads in a loop holding a mutex or
+ * starving the current vcpu.
+ *
+ * If there are no idle threads, ask the main thread to create one, so we
+ * inherit the correct affinity instead of the vcpu affinity.
+ */
+if (!pending_threads) {
+qemu_bh_schedule(new_thread_bh);
+}
+}
+
 static void qemu_paio_submit(struct qemu_paiocb *aiocb)
 {
 aiocb->ret = -EINPROGRESS;
@@ -665,6 +704,7 @@ int paio_init(void)
 die2(ret, "pthread_attr_setdetachstate");
 
 QTAILQ_INIT(&request_list);
+new_thread_bh = qemu_bh_new(spawn_thread_bh_fn, NULL);
 
 posix_aio_state = s;
 return 0;
-- 
1.7.5.3




Re: [Qemu-devel] [PATCH] posix-aio-compat: fix latency issues

2011-08-13 Thread Avi Kivity

On 08/12/2011 06:24 AM, Anthony Liguori wrote:

On 08/08/2011 06:37 AM, Avi Kivity wrote:


+static void spawn_thread_bh_fn(void *opaque)
+{
+mutex_lock(&lock);
+do_spawn_thread();
+mutex_unlock(&lock);
+}


The locking here is odd.  Why not call do_spawn_thread() without the 
lock, and acquire the lock for the section that needs to hold it?


Just the way the code evolved.  Note that aio_thread() does need to take 
the lock.  However, it is indeed cleaner to take the lock when needed 
rather than drop it when not.




Otherwise, the logic seems correct to me.

Kevin, could you also take a look at this patch?


Yes please.

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.




Re: [Qemu-devel] [PATCH 2/3] memory: add API for creating ROM/device regions

2011-08-13 Thread Avi Kivity

On 08/12/2011 06:48 AM, Anthony Liguori wrote:

  target_phys_addr_t offset_in_region;
  AddrRange addr;
  uint8_t dirty_log_mask;
+bool readable;

@@ -125,6 +125,7 @@ struct FlatRange {

In a follow up, it might be good to add a comment explaining that this 
whole readable thing is not just an optimization, but a hard 
requirement for KVM in order to be able to execute code from ROM.


This has nothing to do with kvm (in fact, I think we cannot support it 
under kvm with current interfaces).  It's there to support devices that 
sometimes act as RAM and sometimes as MMIO.


We could also do it with a container and playing with subregions.  I 
tried it and it was pretty complicated.


--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.




[Qemu-devel] [PATCH] Improvements to libtool support.

2011-08-13 Thread Brad
Improvements to the libtool support in QEMU: replace the hard-coded
libtool in the build infrastructure with $(LIBTOOL) and allow
overriding the libtool binary used via the configure script.

Signed-off-by: Brad Smith 

---
 Makefile.objs  |2 +-
 configure  |5 ++---
 libcacard/Makefile |   10 +-
 rules.mak  |2 +-
 4 files changed, 9 insertions(+), 10 deletions(-)
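Since configure now initializes the variable as `libtool="${LIBTOOL-${cross_prefix}libtool}"`, the binary can be chosen from the environment at configure time. A hypothetical invocation (the path and prefix are examples only, not from the patch):

```shell
# Use a non-default libtool binary (e.g. GNU libtool installed under a
# different name or path) when configuring QEMU.
LIBTOOL=/usr/local/bin/glibtool ./configure --prefix=/usr/local
```

If LIBTOOL is unset, the old behavior is preserved: configure probes for `${cross_prefix}libtool` and disables libtool support when it is not found.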

diff --git a/Makefile.objs b/Makefile.objs
index 16eef38..6c0be08 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -369,7 +369,7 @@ trace-dtrace.lo: trace-dtrace.dtrace
@echo "missing libtool. please install and rerun configure."; exit 1
 else
 trace-dtrace.lo: trace-dtrace.dtrace
-   $(call quiet-command,libtool --mode=compile --tag=CC dtrace -o $@ -G -s 
$<, "  lt GEN trace-dtrace.o")
+   $(call quiet-command,$(LIBTOOL) --mode=compile --tag=CC dtrace -o $@ -G 
-s $<, "  lt GEN trace-dtrace.o")
 endif
 
 simpletrace.o: simpletrace.c $(GENERATED_HEADERS)
diff --git a/configure b/configure
index 6c77067..eb7497b 100755
--- a/configure
+++ b/configure
@@ -224,6 +224,7 @@ cc="${CC-${cross_prefix}gcc}"
 ar="${AR-${cross_prefix}ar}"
 objcopy="${OBJCOPY-${cross_prefix}objcopy}"
 ld="${LD-${cross_prefix}ld}"
+libtool="${LIBTOOL-${cross_prefix}libtool}"
 strip="${STRIP-${cross_prefix}strip}"
 windres="${WINDRES-${cross_prefix}windres}"
 pkg_config="${PKG_CONFIG-${cross_prefix}pkg-config}"
@@ -1341,10 +1342,8 @@ fi
 ##
 # libtool probe
 
-if ! has libtool; then
+if ! has $libtool; then
 libtool=
-else
-libtool=libtool
 fi
 
 ##
diff --git a/libcacard/Makefile b/libcacard/Makefile
index fe9747a..56dc974 100644
--- a/libcacard/Makefile
+++ b/libcacard/Makefile
@@ -37,7 +37,7 @@ install-libcacard:
@echo "libtool is missing, please install and rerun configure"; exit 1
 else
 libcacard.la: $(libcacard.lib-y) $(QEMU_OBJS_LIB)
-   $(call quiet-command,libtool --mode=link --quiet --tag=CC $(CC) 
$(libcacard_libs) -lrt -rpath $(libdir) -o $@ $^,"  lt LINK $@")
+   $(call quiet-command,$(LIBTOOL) --mode=link --quiet --tag=CC $(CC) 
$(libcacard_libs) -lrt -rpath $(libdir) -o $@ $^,"  lt LINK $@")
 
 libcacard.pc: $(libcacard_srcpath)/libcacard.pc.in
sed -e 's|@LIBDIR@|$(libdir)|' \
@@ -53,10 +53,10 @@ install-libcacard: libcacard.pc libcacard.la vscclient
$(INSTALL_DIR) "$(DESTDIR)$(libdir)/pkgconfig"
$(INSTALL_DIR) "$(DESTDIR)$(libcacard_includedir)"
$(INSTALL_DIR) "$(DESTDIR)$(bindir)"
-   libtool --mode=install $(INSTALL_PROG) vscclient "$(DESTDIR)$(bindir)"
-   libtool --mode=install $(INSTALL_DATA) libcacard.la 
"$(DESTDIR)$(libdir)"
-   libtool --mode=install $(INSTALL_DATA) libcacard.pc 
"$(DESTDIR)$(libdir)/pkgconfig"
+   $(LIBTOOL) --mode=install $(INSTALL_PROG) vscclient 
"$(DESTDIR)$(bindir)"
+   $(LIBTOOL) --mode=install $(INSTALL_DATA) libcacard.la 
"$(DESTDIR)$(libdir)"
+   $(LIBTOOL) --mode=install $(INSTALL_DATA) libcacard.pc 
"$(DESTDIR)$(libdir)/pkgconfig"
for inc in *.h; do \
-   libtool --mode=install $(INSTALL_DATA) 
$(libcacard_srcpath)/$$inc "$(DESTDIR)$(libcacard_includedir)"; \
+   $(LIBTOOL) --mode=install $(INSTALL_DATA) 
$(libcacard_srcpath)/$$inc "$(DESTDIR)$(libcacard_includedir)"; \
done
 endif
diff --git a/rules.mak b/rules.mak
index 1a2622c..884d421 100644
--- a/rules.mak
+++ b/rules.mak
@@ -22,7 +22,7 @@ ifeq ($(LIBTOOL),)
@echo "missing libtool. please install and rerun configure"; exit 1
 else
 %.lo: %.c
-   $(call quiet-command,libtool --mode=compile --quiet --tag=CC $(CC) 
$(QEMU_INCLUDES) $(QEMU_CFLAGS) $(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  lt CC 
$@")
+   $(call quiet-command,$(LIBTOOL) --mode=compile --quiet --tag=CC $(CC) 
$(QEMU_INCLUDES) $(QEMU_CFLAGS) $(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  lt CC 
$@")
 endif
 
 %.o: %.S
-- 
1.7.6


-- 
This message has been scanned for viruses and
dangerous content by MailScanner, and is
believed to be clean.




Re: [Qemu-devel] [PATCH] libcacard: use INSTALL_DATA for data

2011-08-13 Thread Alon Levy
On Sat, Aug 13, 2011 at 05:23:57PM -0400, Brad wrote:
> libcacard: use INSTALL_DATA for data
> 
> Signed-off-by: Brad Smith 
> 

Thanks, I'll add it to the next pull request.

> ---
>  libcacard/Makefile |2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/libcacard/Makefile b/libcacard/Makefile
> index 5cd7594..fe9747a 100644
> --- a/libcacard/Makefile
> +++ b/libcacard/Makefile
> @@ -54,7 +54,7 @@ install-libcacard: libcacard.pc libcacard.la vscclient
>   $(INSTALL_DIR) "$(DESTDIR)$(libcacard_includedir)"
>   $(INSTALL_DIR) "$(DESTDIR)$(bindir)"
>   libtool --mode=install $(INSTALL_PROG) vscclient "$(DESTDIR)$(bindir)"
> - libtool --mode=install $(INSTALL_PROG) libcacard.la 
> "$(DESTDIR)$(libdir)"
> + libtool --mode=install $(INSTALL_DATA) libcacard.la 
> "$(DESTDIR)$(libdir)"
>   libtool --mode=install $(INSTALL_DATA) libcacard.pc 
> "$(DESTDIR)$(libdir)/pkgconfig"
>   for inc in *.h; do \
>   libtool --mode=install $(INSTALL_DATA) 
> $(libcacard_srcpath)/$$inc "$(DESTDIR)$(libcacard_includedir)"; \
> -- 
> 1.7.6
> 
> 
> 



[Qemu-devel] [Bug 825776] Re: -boot -hda //.//physicaldrivex does not work if it is USB drive

2011-08-13 Thread Steve Si
All working now, I think some BIOS files were missing. Please close this
bug report.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/825776

Title:
  -boot -hda //.//physicaldrivex does not work if it is USB drive

Status in QEMU:
  New

Bug description:
  qemu-system-x86_64.exe -L . -name "RMPrepUSB Emulation Session" -boot
  c -m 500 -hda //./PhysicalDrive1

  just opens a blank QEMU window (no BIOS POSt messages) and does
  nothing

  qemu v 0.15.0
  Under Windows 7 64-bit
  drive1 is a USB Flash drive

  The previous version of qemu-system-x86_64 (Jan 2010) works fine. If I
  replace it with the new version (or the RC2 version), it does not work.

  If I use a harddisk.img raw file instead of the USB physical device, I get 
BIOS POST messages and it boots the image OK.
  So it appears to be a USB or physical-disk support issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/825776/+subscriptions



[Qemu-devel] [PATCH] libcacard: use INSTALL_DATA for data

2011-08-13 Thread Brad
libcacard: use INSTALL_DATA for data

Signed-off-by: Brad Smith 

---
 libcacard/Makefile |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/libcacard/Makefile b/libcacard/Makefile
index 5cd7594..fe9747a 100644
--- a/libcacard/Makefile
+++ b/libcacard/Makefile
@@ -54,7 +54,7 @@ install-libcacard: libcacard.pc libcacard.la vscclient
$(INSTALL_DIR) "$(DESTDIR)$(libcacard_includedir)"
$(INSTALL_DIR) "$(DESTDIR)$(bindir)"
libtool --mode=install $(INSTALL_PROG) vscclient "$(DESTDIR)$(bindir)"
-   libtool --mode=install $(INSTALL_PROG) libcacard.la 
"$(DESTDIR)$(libdir)"
+   libtool --mode=install $(INSTALL_DATA) libcacard.la 
"$(DESTDIR)$(libdir)"
libtool --mode=install $(INSTALL_DATA) libcacard.pc 
"$(DESTDIR)$(libdir)/pkgconfig"
for inc in *.h; do \
libtool --mode=install $(INSTALL_DATA) 
$(libcacard_srcpath)/$$inc "$(DESTDIR)$(libcacard_includedir)"; \
-- 
1.7.6






Re: [Qemu-devel] VNC server running on `127.0.0.1:5900'

2011-08-13 Thread Mulyadi Santosa
On Sat, Aug 13, 2011 at 21:54, Nithish R  wrote:
> Hi
> Thanx a lot... It worked...

Nice. By the way, please reply to the list too; I don't want to turn
this into a private discussion.

> By the way, do you know how the log file is generated in the qemu monitor
> when we give commands like log out_asm, in_asm, op, etc.? I am working on a
> project which involves providing network support for virtualization. So as a
> proof of concept, I have to generate the assembly code. I need to modify it,
> but as of now I am not finding anything about logging in monitor.c —
> lots of functions involve logging, but none handles out_asm or any
> command associated with it.

The log? AFAICT, it's a continuous dump of the currently running TBs
(translated blocks). Can't tell much more... sorry.

-- 
regards,

Mulyadi Santosa
Freelance Linux trainer and consultant

blog: the-hydra.blogspot.com
training: mulyaditraining.blogspot.com



Re: [Qemu-devel] [PATCH] hw/scsi-bus.c: Fix use of uninitialised variable

2011-08-13 Thread Blue Swirl
On Fri, Aug 12, 2011 at 7:22 PM, Blue Swirl  wrote:
> On Fri, Aug 12, 2011 at 4:49 PM, Peter Maydell  
> wrote:
>> Don't use req before it has been initialised in scsi_req_new().
>> This fixes a compile failure due to gcc complaining about this.
>
> It fixes a crash if the warning is ignored:
> Configuration device id QEMU version 1 machine id 32
>
> Program received signal SIGSEGV, Segmentation fault.
> scsi_req_new (d=0x15e46b0, tag=0x0, lun=0x0, buf=0x7fffde41 "\022",
>    hba_private=) at /src/qemu/hw/scsi-bus.c:375
> 375             if (req->cmd.lba != -1) {
> (gdb) bt
> #0  scsi_req_new (d=0x15e46b0, tag=0x0, lun=0x0, buf=0x7fffde41 "\022",
>    hba_private=) at /src/qemu/hw/scsi-bus.c:375
> #1  0x0052c6ef in do_busid_cmd (s=0x15e2790, buf=0x0,
>    busid=) at /src/qemu/hw/esp.c:247
> #2  0x0052cc5d in do_cmd (s=0x15e2790) at /src/qemu/hw/esp.c:270
> #3  handle_satn (s=0x15e2790) at /src/qemu/hw/esp.c:284
> #4  0x0052d174 in esp_mem_writeb (opaque=0x15e2790,
>    addr=, val=0xc2) at /src/qemu/hw/esp.c:640
> #5  0x4003d1f5 in ?? ()
> #6  0x01632330 in ?? ()
> #7  0x01632280 in ?? ()
> #8  0x7fffe180 in ?? ()
> #9  0x3d3d87e90d932400 in ?? ()
> #10 0x77eefd00 in ?? ()
> #11 0x004dc558 in tb_reset_jump_recursive2 (tb=0xffee100c)
>    at /src/qemu/exec.c:1389
> #12 tb_reset_jump_recursive (tb=0xffee100c) at /src/qemu/exec.c:1395
> #13 0x0040bdea in qemu_notify_event () at /src/qemu/cpus.c:616
> #14 
> #15 0x004de681 in cpu_sparc_exec (env=0x1059600)
>    at /src/qemu/cpu-exec.c:528
> #16 0x0040c1fc in tcg_cpu_exec () at /src/qemu/cpus.c:1064
> #17 cpu_exec_all () at /src/qemu/cpus.c:1105
> #18 0x00519497 in main_loop (argc=,
>    argv=, envp=)
>    at /src/qemu/vl.c:1392
> #19 main (argc=, argv=,
>    envp=) at /src/qemu/vl.c:3356
> (gdb) p req
> $1 = 
> (gdb) p req->cmd
> Cannot access memory at address 0x28
> (gdb) p req->cmd.lba
> Cannot access memory at address 0x48
>
>> Signed-off-by: Peter Maydell 
>> ---
>>  hw/scsi-bus.c |    2 +-
>>  1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/hw/scsi-bus.c b/hw/scsi-bus.c
>> index f2af6cd..559d5a4 100644
>> --- a/hw/scsi-bus.c
>> +++ b/hw/scsi-bus.c
>> @@ -372,7 +372,7 @@ SCSIRequest *scsi_req_new(SCSIDevice *d, uint32_t tag, 
>> uint32_t lun,
>>     } else {
>>         trace_scsi_req_parsed(d->id, lun, tag, buf[0],
>>                               cmd.mode, cmd.xfer);
>> -        if (req->cmd.lba != -1) {
>> +        if (cmd.lba != -1) {
>>             trace_scsi_req_parsed_lba(d->id, lun, tag, buf[0],
>>                                       cmd.lba);
>>         }

Something is very wrong with SCSI. OpenBIOS can't boot anymore:
Configuration device id QEMU version 1 machine id 32
CPUs: 1 x FMI,MB86904
UUID: ----
Welcome to OpenBIOS v1.0 built on Jul 20 2011 21:16
  Type 'help' for detailed information
Trying cdrom:d...
Unhandled Exception 0x002a
PC = 0xffd10bdc NPC = 0xffd10be0
Stopping execution

This is due to a division by zero in OpenBIOS drivers/esp.c. Bisecting
reveals that the cause is c7b488721d6aafe32994ac63f8d690ae6d4729fa,
"SCSI devices now report Unit Attention status after reset". OpenBIOS
does not handle this case and fails (it reads a block size of 0).
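A minimal model of the failure mode, assuming nothing about the real OpenBIOS driver beyond what is described above (the device model and all names below are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch only -- not OpenBIOS or QEMU code.  After a reset,
 * a SCSI device may answer the first command with CHECK CONDITION and a
 * Unit Attention sense key; the initiator is expected to clear it (the
 * attention is reported once) and retry, instead of trusting the data
 * phase of the failed command. */
#define STATUS_GOOD            0x00
#define STATUS_CHECK_CONDITION 0x02

/* Toy device model: reports Unit Attention exactly once after reset. */
struct toy_dev {
    bool ua_pending;
    uint32_t block_size;
};

static uint8_t toy_read_capacity(struct toy_dev *d, uint32_t *bs)
{
    if (d->ua_pending) {
        d->ua_pending = false;  /* reporting the condition clears it */
        *bs = 0;                /* data phase of a failed command is junk */
        return STATUS_CHECK_CONDITION;
    }
    *bs = d->block_size;
    return STATUS_GOOD;
}

/* Initiator-side handling: retry once when the command failed while a
 * Unit Attention was pending.  Without this, a naive caller divides by
 * a block size of 0, which is the OpenBIOS failure described above. */
static uint32_t robust_read_capacity(struct toy_dev *d)
{
    uint32_t bs;
    if (toy_read_capacity(d, &bs) != STATUS_GOOD) {
        (void)toy_read_capacity(d, &bs);
    }
    return bs;
}
```

The point of the sketch is only that the retry has to live in the initiator (here, OpenBIOS), since the device is behaving as the SCSI spec allows.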

First OpenBIOS issues an Inquiry command, and then, if a device is present,
Read Capacity. I tried adding a Request Sense command after Inquiry, but
then QEMU crashes:

Configuration device id QEMU version 1 machine id 32
Initializing SCSI...dma1: Revision 2
ESP at 0xffdaa230, buffer va 0xffdab000 dva 0xfe00
done
Initializing SCSI devices...do_command: id 0, cmd[0] 0x80, status 0x80
do_command: id 1, cmd[0] 0x80, status 0x80
do_command: id 2, cmd[0] 0x80, status 0x91
do_command_reply: status 0x93
do_command: id 2, cmd[0] 0x80, status 0x93
request_sense id 2 failed
request_sense id 2 sense key 128
do_command: id 2, cmd[0] 0x80, status 0x91
do_command_reply: status 0x93
read_capacity id 2 bs 2048 sectors 1295228
do_command: id 2, cmd[0] 0x80, status 0x91
*** glibc detected ***
/src/qemu/obj-amd64/sparc-softmmu/qemu-system-sparc: double free or
corruption (out): 0x016407e0 ***
======= Backtrace: =========
/lib/libc.so.6(+0x71ad6)[0x73de1ad6]
/lib/libc.so.6(cfree+0x6c)[0x73de684c]
/src/qemu/obj-amd64/sparc-softmmu/qemu-system-sparc[0x46b41e]
/src/qemu/obj-amd64/sparc-softmmu/qemu-system-sparc[0x46b7cf]
/src/qemu/obj-amd64/sparc-softmmu/qemu-system-sparc[0x46a037]
/src/qemu/obj-amd64/sparc-softmmu/qemu-system-sparc[0x46b4ad]
/src/qemu/obj-amd64/sparc-softmmu/qemu-system-sparc[0x52f057]
/src/qemu/obj-amd64/sparc-softmmu/qemu-system-sparc[0x52f331]
/src/qemu/obj-amd64/sparc-softmmu/qemu-system-sparc[0x52f95d]
[0x4004113b]
======= Memory map: ========
0040-005fa000 r-xp  08:13 3827028
  /src/qemu/obj-amd64/sparc-softmmu/qemu-system-sparc
007fa000-0080f000 rw-p 001fa000 08:13 3827028
  /src/qemu/obj-amd64/sparc-softmmu/qemu-sy

[Qemu-devel] full allocation and file fragmentation.

2011-08-13 Thread Frediano Ziglio
Looking at your patch for full image preallocation, I noted that you used
bdrv_write to fill the space with zeroes. Nothing wrong with that, but I
would add a call to fallocate or posix_fallocate in order to get less
fragmentation on the file system. Perhaps there should be a
bdrv_preallocate call?
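As a sketch of the idea (bdrv_preallocate and the helper below are hypothetical, not existing QEMU API):

```c
#define _POSIX_C_SOURCE 200809L
#include <assert.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical bdrv_preallocate-style helper.  posix_fallocate() asks
 * the filesystem to reserve the blocks up front, which tends to yield
 * far less fragmentation than growing the file one zero-filled write
 * at a time. */
static int preallocate_file(int fd, off_t len)
{
    int err = posix_fallocate(fd, 0, len);
    if (err != 0) {
        /* e.g. EINVAL/EOPNOTSUPP on filesystems without support: not
         * fatal, the caller's zero-fill writes still allocate space. */
        return err;
    }
    return 0;
}
```

A block driver could call this once before the zero-fill loop and simply ignore a failure, keeping the current behaviour as the fallback.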

Regards
  Frediano



[Qemu-devel] [Bug 818673] Re: virtio: trying to map MMIO memory

2011-08-13 Thread David Rando
It's something related to Windows. I have on the same machine a Linux
server working with spice enabled, and it is rock solid. The Windows
machines crash with that error randomly.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/818673

Title:
  virtio: trying to map MMIO memory

Status in QEMU:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/818673/+subscriptions



[Qemu-devel] [Bug 825776] [NEW] -boot -hda //.//physicaldrivex does not work if it is USB drive

2011-08-13 Thread Steve Si
Public bug reported:

qemu-system-x86_64.exe -L . -name "RMPrepUSB Emulation Session" -boot c
-m 500 -hda //./PhysicalDrive1

just opens a blank QEMU window (no BIOS POST messages) and does nothing

qemu v 0.15.0
Under Windows 7 64-bit
drive1 is a USB Flash drive

The previous version of qemu-system-x86_64 (from Jan 2010) works fine. If I
replace it with the new version (or the RC2 version), it does not work.

If I use a harddisk.img raw file instead of the USB physical device, I get
BIOS POST messages and it boots the image OK.
So this appears to be a USB or physical-disk support issue.

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/825776

Title:
  -boot -hda //.//physicaldrivex does not work if it is USB drive

Status in QEMU:
  New

Bug description:
  qemu-system-x86_64.exe -L . -name "RMPrepUSB Emulation Session" -boot
  c -m 500 -hda //./PhysicalDrive1

  just opens a blank QEMU window (no BIOS POSt messages) and does
  nothing

  qemu v 0.15.0
  Under Windows 7 64-bit
  drive1 is a USB Flash drive

  Previous version of x86_64 (Jan 2010) works fine. If replace with new
  version (or RC2 version) then does not work.

  if use harddisk.img raw file instead of USB physical device then I get BIOS 
POST messages and it boots to image OK.
  So appears to be USB or physicaldisk support issue???

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/825776/+subscriptions



Re: [Qemu-devel] New patches for w32, w64

2011-08-13 Thread Blue Swirl
On Sat, Aug 13, 2011 at 9:38 AM, Stefan Weil  wrote:
> Am 15.07.2011 21:38, schrieb Stefan Weil:
>>
>> Here are some small patches needed for the w32 and w64 platforms.
>>
>> [PATCH 1/3] Fix conversions from pointer to tcg_target_long
>> [PATCH 2/3] w64: Add definition of FMT_pid
>> [PATCH 3/3] w32: Fix format string regression
>>
>> Kind regards,
>> Stefan
>
> What about these patches?
>
> Patch 1 was acked by two developers:
>    http://patchwork.ozlabs.org/patch/104889/
>
> Patch 2 was reviewed by Andreas:
>    http://patchwork.ozlabs.org/patch/104884/
>
> Patch 3 was discussed a lot:
>    http://patchwork.ozlabs.org/patch/104885/
>
> At least the first two patches can be applied immediately.

Yes.

> I also think that the third patch is correct and should be applied.
> It's only MinGW related and fixes a compiler error (regression) there.

The interfaces are broken in so many ways that I don't know what the
right fix is; maybe introducing a qemu_getpid() which always returns
pid_t could hide this madness. Then 3/3 would not be needed.
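A rough sketch of what such a wrapper could look like (qemu_getpid() is the hypothetical function proposed above, not existing QEMU code):

```c
#include <assert.h>
#include <sys/types.h>
#include <unistd.h>

/* Sketch of the qemu_getpid() idea: give every host a single entry
 * point that returns pid_t, so callers need only one type and one
 * format string (FMT_pid) regardless of platform. */
#ifdef _WIN32
#include <windows.h>
static pid_t qemu_getpid(void)
{
    /* GetCurrentProcessId() returns a DWORD; narrow it deliberately
     * in one place instead of at every call site. */
    return (pid_t)GetCurrentProcessId();
}
#else
static pid_t qemu_getpid(void)
{
    return getpid();
}
#endif
```

With the conversion confined to this one function, the w32/w64 format-string question in 3/3 would indeed disappear from the call sites.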



Re: [Qemu-devel] New patches for w32, w64

2011-08-13 Thread Stefan Weil

Am 15.07.2011 21:38, schrieb Stefan Weil:

Here are some small patches needed for the w32 and w64 platforms.

[PATCH 1/3] Fix conversions from pointer to tcg_target_long
[PATCH 2/3] w64: Add definition of FMT_pid
[PATCH 3/3] w32: Fix format string regression

Kind regards,
Stefan


What about these patches?

Patch 1 was acked by two developers:
http://patchwork.ozlabs.org/patch/104889/

Patch 2 was reviewed by Andreas:
http://patchwork.ozlabs.org/patch/104884/

Patch 3 was discussed a lot:
http://patchwork.ozlabs.org/patch/104885/

At least the first two patches can be applied immediately.

I also think that the third patch is correct and should be applied.
It's only MinGW related and fixes a compiler error (regression) there.

Cheers,
Stefan







Re: [Qemu-devel] [PATCH] sh4: Fix potential crash in debug code

2011-08-13 Thread Stefan Weil

Am 20.07.2011 20:56, schrieb Stefan Weil:

cppcheck reports this error:

qemu/hw/sh_intc.c:390: error: Possible null pointer dereference:
  s - otherwise it is redundant to check if s is null at line 385

If s were NULL, the printf() statement would crash.
Adding braces fixes this bug.

Signed-off-by: Stefan Weil
---
  hw/sh_intc.c |9 +
  1 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/hw/sh_intc.c b/hw/sh_intc.c
index 0734da9..f73a4b0 100644
--- a/hw/sh_intc.c
+++ b/hw/sh_intc.c
@@ -382,13 +382,14 @@ void sh_intc_register_sources(struct intc_desc *desc,

sh_intc_register_source(desc, vect->enum_id, groups, nr_groups);
s = sh_intc_source(desc, vect->enum_id);
-   if (s)
-   s->vect = vect->vect;
+if (s) {
+s->vect = vect->vect;

  #ifdef DEBUG_INTC_SOURCES
-   printf("sh_intc: registered source %d ->  0x%04x (%d/%d)\n",
-  vect->enum_id, s->vect, s->enable_count, s->enable_max);
+printf("sh_intc: registered source %d ->  0x%04x (%d/%d)\n",
+   vect->enum_id, s->vect, s->enable_count, s->enable_max);
  #endif
+}
  }

  if (groups) {
   


Ping?




Re: [Qemu-devel] [PATCH] tcg/arm: Fix compiler error caused by unused function

2011-08-13 Thread Stefan Weil

Am 29.07.2011 22:07, schrieb Stefan Weil:

Function tcg_out_addi is obviously unused and causes a compiler
error (tested with cross compilation on x86 Debian Linux):

tcg/arm/tcg-target.c:1824: error: ‘tcg_out_addi’ defined but not used

Don't remove it because it might be useful in the future,
but declare it inline. This fixes the compiler error and
is similar to other tcg targets.

Cc: Andrzej Zaborowski
Signed-off-by: Stefan Weil
---
  tcg/arm/tcg-target.c |2 +-
  1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/tcg/arm/tcg-target.c b/tcg/arm/tcg-target.c
index 93eb0f1..c94a354 100644
--- a/tcg/arm/tcg-target.c
+++ b/tcg/arm/tcg-target.c
@@ -1820,7 +1820,7 @@ static inline void tcg_out_st(TCGContext *s, TCGType 
type, int arg,
  tcg_out_st32(s, COND_AL, arg, arg1, arg2);
  }

-static void tcg_out_addi(TCGContext *s, int reg, tcg_target_long val)
+static inline void tcg_out_addi(TCGContext *s, int reg, tcg_target_long val)
  {
  if (val>  0)
  if (val<  0x100)
   


Ping?



Re: [Qemu-devel] [PATCH 3/3] target-mips:Support for Cavium specific instructions

2011-08-13 Thread Khansa Butt
On Thu, Aug 4, 2011 at 4:22 PM, Peter Maydell wrote:

> On 5 July 2011 10:19,   wrote:
> > ---
> >  host-utils.c|1 +
> >  target-mips/cpu.h   |7 +
> >  target-mips/helper.h|5 +
> >  target-mips/op_helper.c |   67 +++
> >  target-mips/translate.c |  443
> ++-
> >  5 files changed, 514 insertions(+), 9 deletions(-)
>
> Don't you also need to add support for the new instructions
> to the disassembler in mips-dis.c ?
>
>

The ISA for the Cavium Networks Octeon processor consists of MIPS64r2 plus
Cavium-specific instructions. These are 27 user-mode instructions, which we
implemented. Some of them conflict with MIPS and Loongson instructions. For
example, the branch-on-bit-clear/set instructions (four in all) consume
major opcodes of MIPS COP2 instructions (e.g. LWC2), and V3MULU and VMM0
have the same opcode and function fields as two of Loongson's instructions.
To detect the correct instruction during disassembly, can I add a
CPU-specific flag in DisasContext, pass it to log_target_disas() in
disas.c, and set some of the top 16 bits in disassemble_info's flags? On
that basis I can pick the correct instruction in print_insn_mips() in
mips-dis.c. In the future this flag could be used for other vendor-specific
instructions as well.
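The flag scheme could be sketched roughly like this (all constant and function names are illustrative placeholders, not existing QEMU/binutils identifiers):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the proposed scheme: reserve the top 16 bits of the
 * disassemble_info flags word for a vendor id, set it from the
 * CPU-specific flag carried in DisasContext, and have the MIPS
 * print routine consult it when an encoding is claimed by more
 * than one vendor. */
#define INSN_VENDOR_SHIFT     16
#define INSN_VENDOR_MASK      (0xffffu << INSN_VENDOR_SHIFT)
#define INSN_VENDOR_CAVIUM    (1u << INSN_VENDOR_SHIFT)
#define INSN_VENDOR_LOONGSON  (2u << INSN_VENDOR_SHIFT)

/* Merge a vendor id into an existing flags word, preserving the
 * low bits already used for other purposes. */
static uint32_t insn_flags_set_vendor(uint32_t flags, uint32_t vendor)
{
    return (flags & ~INSN_VENDOR_MASK) | vendor;
}

/* Disassembler side: choose the decode table for clashing opcodes. */
static int insn_flags_is_cavium(uint32_t flags)
{
    return (flags & INSN_VENDOR_MASK) == INSN_VENDOR_CAVIUM;
}
```

Masking rather than testing single bits keeps the vendor field a small enum, so adding further vendors later does not consume one bit each.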

Please guide me in this regard. Making a separate print function for
Cavium would not suit me, since Cavium includes all the MIPS64r2
instructions, so there would be a lot of repetition.

Thanks.