Re: [v2] vhost: add vsock compat ioctl

2018-03-22 Thread Sonny Rao
On Thu, Mar 22, 2018 at 2:25 AM, Stefan Hajnoczi wrote:
> On Fri, Mar 16, 2018 at 7:30 PM, David Miller wrote:
>> Although the top level ioctls are probably size and layout compatible,
>> I do not think that the deeper ioctls can be called by compat binaries
>> without some translations in order for them to work.
>
> I audited the vhost ioctl code when reviewing this patch and was
> unable to find anything that would break for a 32-bit userspace
> process.
>
> drivers/vhost/net.c does the same thing already, which doesn't prove
> it's correct but makes me more confident I didn't miss something while
> auditing the vhost ioctl code.
>
> Did you have a specific ioctl in mind?

I think he means that we need to use the compat_ptr macro on any other
pointers we get from userspace in those other ioctls.  On most
architectures this macro doesn't do much, but on some it does -- s390,
for example, modifies the pointer.
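
For reference, here is roughly what the two flavors look like (a sketch
from my reading of the per-arch compat headers, not something this
patch adds):

/* Generic flavor: most 64-bit architectures just zero-extend the
 * 32-bit user pointer. */
static inline void __user *compat_ptr(compat_uptr_t uptr)
{
	return (void __user *)(unsigned long)uptr;
}

/* s390 flavor: compat tasks run in 31-bit addressing mode, so the
 * top bit of the 32-bit value must be masked off as well. */
static inline void __user *compat_ptr(compat_uptr_t uptr)
{
	return (void __user *)(unsigned long)(uptr & 0x7fffffffUL);
}

So any pointer read out of a userspace struct would need the same
treatment before being dereferenced on such architectures.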

>
> Stefan


Re: [v2] vhost: add vsock compat ioctl

2018-03-16 Thread Sonny Rao
On Fri, Mar 16, 2018 at 12:30 PM, David Miller wrote:
>
> Although the top level ioctls are probably size and layout compatible,
> I do not think that the deeper ioctls can be called by compat binaries
> without some translations in order for them to work.

Ok, thanks -- I have only tested VHOST_VSOCK_SET_GUEST_CID and
VHOST_VSOCK_SET_RUNNING, but by deeper ioctls I think you mean the
ioctls handled under vhost_dev_ioctl() and vhost_vring_ioctl()?
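
For context, the dispatch I'm referring to looks roughly like this
(abridged from my reading of drivers/vhost/vsock.c; error paths
trimmed, and vhost_vsock_handle_own_ioctl() is a hypothetical stand-in
collapsing the two cases handled directly):

static long vhost_vsock_dev_ioctl(struct file *f, unsigned int ioctl,
				  unsigned long arg)
{
	struct vhost_vsock *vsock = f->private_data;
	void __user *argp = (void __user *)arg;
	long r;

	switch (ioctl) {
	case VHOST_VSOCK_SET_GUEST_CID:
	case VHOST_VSOCK_SET_RUNNING:
		/* the two ioctls I tested, handled directly here;
		 * collapsed into a hypothetical helper for brevity */
		return vhost_vsock_handle_own_ioctl(vsock, ioctl, argp);
	default:
		/* the "deeper" ioctls: anything unrecognized falls
		 * through to the generic vhost handlers */
		mutex_lock(&vsock->dev.mutex);
		r = vhost_dev_ioctl(&vsock->dev, ioctl, argp);
		if (r == -ENOIOCTLCMD)
			r = vhost_vring_ioctl(&vsock->dev, ioctl, argp);
		mutex_unlock(&vsock->dev.mutex);
		return r;
	}
}

If that reading is right, the pointers those generic handlers pull out
of argp are what would need compat translation.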


[PATCH v2] vhost: add vsock compat ioctl

2018-03-14 Thread Sonny Rao
This will allow usage of vsock from 32-bit binaries on a 64-bit
kernel.

Signed-off-by: Sonny Rao <sonny...@chromium.org>
---
 drivers/vhost/vsock.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 0d14e2ff19f16..ee0c385d9fe54 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -699,12 +699,23 @@ static long vhost_vsock_dev_ioctl(struct file *f, unsigned int ioctl,
 	}
 }
 
+#ifdef CONFIG_COMPAT
+static long vhost_vsock_dev_compat_ioctl(struct file *f, unsigned int ioctl,
+					 unsigned long arg)
+{
+	return vhost_vsock_dev_ioctl(f, ioctl, (unsigned long)compat_ptr(arg));
+}
+#endif
+
 static const struct file_operations vhost_vsock_fops = {
 	.owner          = THIS_MODULE,
 	.open           = vhost_vsock_dev_open,
 	.release        = vhost_vsock_dev_release,
 	.llseek         = noop_llseek,
 	.unlocked_ioctl = vhost_vsock_dev_ioctl,
+#ifdef CONFIG_COMPAT
+	.compat_ioctl   = vhost_vsock_dev_compat_ioctl,
+#endif
 };
 
 static struct miscdevice vhost_vsock_misc = {
-- 
2.13.5
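
For completeness, a minimal sketch of how a 32-bit process would
exercise the new path (a standalone illustration, not part of the
patch; it assumes the vhost_vsock module is loaded and guest CID 3 is
free). Built with -m32 and run on a 64-bit kernel, the ioctl below now
reaches vhost_vsock_dev_compat_ioctl() instead of failing:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
	uint64_t cid = 3;	/* arbitrary guest CID (2 is the host) */
	int fd = open("/dev/vhost-vsock", O_RDWR);

	if (fd < 0) {
		perror("open /dev/vhost-vsock");
		return 1;
	}
	/* 32-bit caller, 64-bit kernel: dispatched via .compat_ioctl */
	if (ioctl(fd, VHOST_VSOCK_SET_GUEST_CID, &cid) < 0) {
		perror("VHOST_VSOCK_SET_GUEST_CID");
		return 1;
	}
	return 0;
}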



Re: [PATCH] vhost: add vsock compat ioctl

2018-03-14 Thread Sonny Rao
On Wed, Mar 14, 2018 at 12:05 PM, Michael S. Tsirkin <m...@redhat.com> wrote:
> On Wed, Mar 14, 2018 at 10:26:05AM -0700, Sonny Rao wrote:
>> This will allow usage of vsock from 32-bit binaries on a 64-bit
>> kernel.
>>
>> Signed-off-by: Sonny Rao <sonny...@chromium.org>
>
> I think you need to convert the pointer argument though.
> Something along the lines of:
>
> #ifdef CONFIG_COMPAT
> static long vhost_vsock_dev_compat_ioctl(struct file *f, unsigned int ioctl,
>                                          unsigned long arg)
> {
>         return vhost_vsock_dev_ioctl(f, ioctl, (unsigned long)compat_ptr(arg));
> }
> #endif

Ok, thanks for pointing that out -- it has worked for me so far, but
I'll re-spin as you suggested.

>
>
>
>> ---
>>  drivers/vhost/vsock.c | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
>> index 0d14e2ff19f16..d0e65e92110e5 100644
>> --- a/drivers/vhost/vsock.c
>> +++ b/drivers/vhost/vsock.c
>> @@ -705,6 +705,7 @@ static const struct file_operations vhost_vsock_fops = {
>>  	.release        = vhost_vsock_dev_release,
>>  	.llseek         = noop_llseek,
>>  	.unlocked_ioctl = vhost_vsock_dev_ioctl,
>> +	.compat_ioctl   = vhost_vsock_dev_ioctl,
>>  };
>>
>>  static struct miscdevice vhost_vsock_misc = {
>> --
>> 2.13.5


[PATCH] vhost: add vsock compat ioctl

2018-03-14 Thread Sonny Rao
This will allow usage of vsock from 32-bit binaries on a 64-bit
kernel.

Signed-off-by: Sonny Rao <sonny...@chromium.org>
---
 drivers/vhost/vsock.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 0d14e2ff19f16..d0e65e92110e5 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -705,6 +705,7 @@ static const struct file_operations vhost_vsock_fops = {
 	.release        = vhost_vsock_dev_release,
 	.llseek         = noop_llseek,
 	.unlocked_ioctl = vhost_vsock_dev_ioctl,
+	.compat_ioctl   = vhost_vsock_dev_ioctl,
 };
 
 static struct miscdevice vhost_vsock_misc = {
-- 
2.13.5



[PATCH] vhost: fix vhost ioctl signature to build with clang

2018-03-14 Thread Sonny Rao
Clang is particularly anal about signed vs unsigned comparisons and
doesn't like the fact that some ioctl numbers set the MSB, so we get
this error when trying to build vhost on aarch64:

drivers/vhost/vhost.c:1400:7: error: overflow converting case value to
 switch condition type (3221794578 to 18446744072636378898)
 [-Werror, -Wswitch]
case VHOST_GET_VRING_BASE:

3221794578 is 0xC008AF12 in hex
18446744072636378898 is 0xFFFFFFFFC008AF12 in hex

Fix this by using unsigned ints in the function signature for
vhost_vring_ioctl().
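
To make the conversion concrete, here is a minimal standalone
reproducer (an illustration, not kernel code; VSOCK_DEMO_IOCTL is a
made-up name carrying the VHOST_GET_VRING_BASE value from the error
above). Compiling with clang and -Werror reproduces the diagnostic on
the first function; the second shows the fix:

/* An _IOWR() number sets the direction bits (30-31), so this does
 * not fit in a signed int. */
#define VSOCK_DEMO_IOCTL 0xC008AF12u	/* 3221794578 */

long demo_signed(int ioctl)
{
	switch (ioctl) {
	case VSOCK_DEMO_IOCTL:	/* clang: case value overflows when
				 * converted to int, i.e. sign-extends
				 * to 0xFFFFFFFFC008AF12 */
		return 0;
	}
	return -1;
}

long demo_unsigned(unsigned int ioctl)	/* the fix: unsigned condition */
{
	switch (ioctl) {
	case VSOCK_DEMO_IOCTL:	/* fits, no conversion needed */
		return 0;
	}
	return -1;
}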

Signed-off-by: Sonny Rao <sonny...@chromium.org>
---
 drivers/vhost/vhost.c | 2 +-
 drivers/vhost/vhost.h | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 1b3e8d2d5c8b4..5316319d84081 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -1337,7 +1337,7 @@ static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
 	return -EFAULT;
 }
 
-long vhost_vring_ioctl(struct vhost_dev *d, int ioctl, void __user *argp)
+long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
 {
 	struct file *eventfp, *filep = NULL;
 	bool pollstart = false, pollstop = false;
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index ac4b6056f19ae..d8ee85ae8fdcc 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -45,7 +45,7 @@ void vhost_poll_stop(struct vhost_poll *poll);
 void vhost_poll_flush(struct vhost_poll *poll);
 void vhost_poll_queue(struct vhost_poll *poll);
 void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work);
-long vhost_vring_ioctl(struct vhost_dev *d, int ioctl, void __user *argp);
+long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp);
 
 struct vhost_log {
u64 addr;
@@ -177,7 +177,7 @@ void vhost_dev_reset_owner(struct vhost_dev *, struct vhost_umem *);
 void vhost_dev_cleanup(struct vhost_dev *);
 void vhost_dev_stop(struct vhost_dev *);
 long vhost_dev_ioctl(struct vhost_dev *, unsigned int ioctl, void __user *argp);
-long vhost_vring_ioctl(struct vhost_dev *d, int ioctl, void __user *argp);
+long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp);
 int vhost_vq_access_ok(struct vhost_virtqueue *vq);
 int vhost_log_access_ok(struct vhost_dev *);
 
-- 
2.13.5



kernel 2.4 vs 2.6 Traffic Controller performance

2007-10-03 Thread Sonny
Hello
This is a repost; there seems to have been a misunderstanding before.

I hope this is the right place to ask this. Does anyone know if there is a
substantial difference in the performance of the traffic controller
between kernels 2.4 and 2.6? We tested it using 1 iperf server with
250 and 500 clients, altering the burst.

This is the set-up:
iperf client -> router (w/ traffic controller) -> iperf server

We use the top command inside the router to check its idle time. The
results we got from the 2.4 kernel show around 65-70% idle time, while
2.6 shows 60-65%. We tried to use MRTG but we're not getting any
results either. We want to know if we could improve the bandwidth by
upgrading the kernel; otherwise we would have to get a new bandwidth
manager. Has anyone performed a similar test, or can anyone suggest a
better way to do this? Thanks in advance.


Re: kernel 2.4 vs 2.6 Traffic Controller performance

2007-10-03 Thread Sonny
On 10/3/07, Eric Dumazet wrote:
> Sonny wrote:
> > Hello
> > This is a repost; there seems to have been a misunderstanding before.
> >
> > I hope this is the right place to ask this. Does anyone know if there is a
> > substantial difference in the performance of the traffic controller
> > between kernels 2.4 and 2.6? We tested it using 1 iperf server with
> > 250 and 500 clients, altering the burst.
> >
> > This is the set-up:
> > iperf client -> router (w/ traffic controller) -> iperf server
> >
> > We use the top command inside the router to check its idle time. The
> > results we got from the 2.4 kernel show around 65-70% idle time, while
> > 2.6 shows 60-65%. We tried to use MRTG but we're not getting any
> > results either. We want to know if we could improve the bandwidth by
> > upgrading the kernel; otherwise we would have to get a new bandwidth
> > manager. Has anyone performed a similar test, or can anyone suggest a
> > better way to do this? Thanks in advance.
>
> Hi Sonny
>
> I am not sure what you are asking here. 65-70% idle time (or 60-65%) is fine.
>
> 2.6 is also not very meaningful, there are a lot of changes between 2.6.0 and
> 2.6.23 :)

we're using 2.6.22

> Why should you upgrade the kernel?

we would like to test the performance difference between the two kernels

> What bandwidth do you handle?

10 Mbps

> What kind of platform is it? (A new kernel won't help much if it's a really
> old machine, or has old NICs.)

it's a Pentium 4 2.8 GHz (HT) with 512 MB of RAM

> You seem to have some bandwidth problem but focus on CPU affairs...

Bandwidth is not a problem; we can get 10 Mbps without a hitch. But we
would like to know how CPU usage scales with the number of clients. So
far, for both kernels, we're getting 50% CPU utilization with 500
clients and a 384 kbps burst each.


Kernel 2.4 vs 2.6 Traffic Controller Performance

2007-10-02 Thread Sonny
Hello
I hope this is the right place to ask this. Does anyone know if there is a
substantial difference in the performance of the traffic controller
between kernels 2.4 and 2.6? We tested it using 1 iperf server with
250 and 500 clients, altering the burst. We use the top command to
check the idle time of our router. The results we got from
the 2.4 kernel show around 65-70% idle time, while 2.6 shows
60-65%. We tried to use MRTG but we're not getting any
results either. We want to know if we could improve the bandwidth by
upgrading the kernel; otherwise we would have to get a new bandwidth
manager. Has anyone run a similar test, or can anyone suggest
a better way to do this? Thanks in advance.