deadline extension 08/16/11 -- SI on Data Intensive Computing in the Clouds in JGC

2011-07-17 Thread Ioan Raicu
Hi all,
Due to numerous requests, we have extended the deadline for the Special 
Issue on Data Intensive Computing in the Clouds in the Springer Journal 
of Grid Computing to August 16th, 2011. Please see below for the CFP 
announcement.

Regards,
Ioan Raicu and Tevfik Kosar
Guest Editors of the Special Issue on Data Intensive Computing in the Clouds
Springer Journal of Grid Computing
http://datasys.cs.iit.edu/events/JGC-DataCloud-2012/index.html

-
*** Call for Papers *** 

  Springer Journal of Grid Computing
Special Issue on Data Intensive Computing in the Clouds
 http://datasys.cs.iit.edu/events/JGC-DataCloud-2012/
-

Applications and experiments in all areas of science are becoming
increasingly complex and more demanding in terms of their computational
and data requirements. Some applications generate data volumes reaching
hundreds of terabytes and even petabytes. As scientific applications
become more data intensive, the management of data resources and the
dataflow between storage and compute resources is becoming the main
bottleneck. Analyzing, visualizing, and disseminating these large data
sets has become a major challenge, and data intensive computing is now
considered the "fourth paradigm" of scientific discovery, after the
empirical, theoretical, and computational approaches.

The Special Issue on Data Intensive Computing in the Clouds will provide
the scientific community a dedicated forum, within the prestigious
Springer Journal of Grid Computing, for presenting new research,
development, and deployment efforts in running data-intensive computing
workloads on cloud computing infrastructures. This special issue will
focus on the use of cloud-based technologies to meet new data-intensive
scientific challenges that are not well served by current supercomputers,
grids, or compute-intensive clouds. We believe this venue will be an
excellent place to help the community define the current state of the
field, determine future goals, and present architectures and services for
future clouds that support data-intensive computing.


TOPICS
-
- Data-intensive cloud computing applications, characteristics, challenges
- Case studies of data intensive computing in the clouds
- Performance evaluation of data clouds, data grids, and data centers
- Energy-efficient data cloud design and management
- Data placement, scheduling, and interoperability in the clouds
- Accountability, QoS, and SLAs
- Data privacy and protection in a public cloud environment
- Distributed file systems for clouds
- Data streaming and parallelization
- New programming models for data-intensive cloud computing
- Scalability issues in clouds
- Social computing and massively social gaming
- 3D Internet and implications
- Future research challenges in data-intensive cloud computing

Important Dates
---
*   Papers Due: August 16, 2011
*   First Round Decisions:  October 15, 2011
*   Major Revisions if needed:  November 15, 2011
*   Second Round Decisions: December 15, 2011
*   Minor Revisions if needed:  January 15, 2012
*   Final Decision: February 1, 2012
*   Publication Date:   June 2012

PAPER SUBMISSION
-
Authors are invited to submit original and unpublished technical papers.
All submissions will be peer-reviewed and judged on correctness,
originality, technical strength, significance, quality of presentation,
and relevance to the special issue topics of interest. Submitted papers
may not have appeared in, or be under consideration for, another
workshop, conference, or journal, nor may they be under review or
submitted to another forum during the review process. Submitted papers
may not exceed 20 single-spaced double-column pages in a 10-point font on
8.5x11 inch pages (1" margins), including figures, tables, and
references; note that accepted papers will likely run between 15 and 20
pages, depending on a variety of factors. For more information on
preparing submissions, please see
http://www.springer.com/computer/communication+networks/journal/10723,
under "Instructions for Authors". Papers (PDF format) must be submitted
online at http://grid.edmgr.com/ before the extended deadline of
August 16th, 2011, at 11:59PM PST. For any questions on the submission
process, please email the guest editors at
jgc-datacloud-2...@datasys.cs.iit.edu.

Guest Editors
-
Ioan Raicu and Tevfik Kosar

Re: [PATCHv9] vhost: experimental tx zero-copy support

2011-07-17 Thread Jesper Juhl
On Sun, 17 Jul 2011, Michael S. Tsirkin wrote:

> From: Shirley Ma 
> 
> This adds experimental zero copy support in vhost-net,
> disabled by default. To enable, set the zerocopytx
> module option to 1.
> 
> This patch maintains the outstanding userspace buffers in the
> sequence in which they are delivered to vhost. An outstanding
> userspace buffer is marked as done once the lower device's DMA on
> that buffer has finished. This is monitored through the
> last-reference kfree_skb callback. Two buffer indices are used for
> this purpose.
> 
> The vhost-net device passes the userspace buffer info to the lower
> device skb through message control. DMA-done status checking and
> guest notification are handled by handle_tx: in the worst case, all
> buffers in the vq are in pending/done status, so we need to notify
> the guest to release DMA-done buffers before we fetch any new
> buffers from the vq.
> 
> One known problem is that if the guest stops submitting
> buffers, buffers might never get used until some
> further action, e.g. device reset. This does not
> seem to affect linux guests.
> 
> Signed-off-by: Shirley 
> Signed-off-by: Michael S. Tsirkin 
> ---
> 
> The below is what I came up with. We add the feature enabled
> by default for now as there are known issues, 

You mean "disabled" - right?


> but some
> guests can benefit so there's value in putting this
> in tree, to help the code get wider testing.
> 
>  drivers/vhost/net.c   |   73 +-
>  drivers/vhost/vhost.c |   85 +
>  drivers/vhost/vhost.h |   29 +
>  3 files changed, 186 insertions(+), 1 deletions(-)
> 
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index e224a92..226ca6b 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -12,6 +12,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -28,10 +29,18 @@
>  
>  #include "vhost.h"
>  
> +static int zcopytx;
> +module_param(zcopytx, int, 0444);

Should everyone be able to read this? How about "0440" just to be 
paranoid? or?
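
For context: the third argument to module_param() is the file mode of the
parameter's entry under /sys/module/<module>/parameters/, so the question
above is about who may read the knob at runtime. A minimal sketch of the
more restrictive variant suggested (illustrative only, not part of the
patch):

#include <linux/module.h>

static int zcopytx;
/* 0440: readable by root and the owning group only, instead of 0444
 * (world-readable); not writable at runtime in either case. */
module_param(zcopytx, int, 0440);
MODULE_PARM_DESC(zcopytx, "Enable Zero Copy Transmit");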

> +MODULE_PARM_DESC(lnksts, "Enable Zero Copy Transmit");
> +
>  /* Max number of bytes transferred before requeueing the job.
>   * Using this limit prevents one virtqueue from starving others. */
>  #define VHOST_NET_WEIGHT 0x8
>  
> +/* MAX number of TX used buffers for outstanding zerocopy */
> +#define VHOST_MAX_PEND 128
> +#define VHOST_GOODCOPY_LEN 256
> +
>  enum {
>   VHOST_NET_VQ_RX = 0,
>   VHOST_NET_VQ_TX = 1,
> @@ -54,6 +63,11 @@ struct vhost_net {
>   enum vhost_net_poll_state tx_poll_state;
>  };
>  
> +static bool vhost_sock_zcopy(struct socket *sock)
> +{
> + return unlikely(zcopytx) && sock_flag(sock->sk, SOCK_ZEROCOPY);
> +}
> +
>  /* Pop first len bytes from iovec. Return number of segments used. */
>  static int move_iovec_hdr(struct iovec *from, struct iovec *to,
> size_t len, int iov_count)
> @@ -129,6 +143,8 @@ static void handle_tx(struct vhost_net *net)
>   int err, wmem;
>   size_t hdr_size;
>   struct socket *sock;
> + struct vhost_ubuf_ref *uninitialized_var(ubufs);
> + bool zcopy;
>  
>   /* TODO: check that we are running from vhost_worker? */
>   sock = rcu_dereference_check(vq->private_data, 1);
> @@ -149,8 +165,13 @@ static void handle_tx(struct vhost_net *net)
>   if (wmem < sock->sk->sk_sndbuf / 2)
>   tx_poll_stop(net);
>   hdr_size = vq->vhost_hlen;
> + zcopy = vhost_sock_zcopy(sock);
>  
>   for (;;) {
> + /* Release DMAs done buffers first */
> + if (zcopy)
> + vhost_zerocopy_signal_used(vq);
> +
>   head = vhost_get_vq_desc(&net->dev, vq, vq->iov,
>ARRAY_SIZE(vq->iov),
>&out, &in,
> @@ -166,6 +187,12 @@ static void handle_tx(struct vhost_net *net)
>   set_bit(SOCK_ASYNC_NOSPACE, &sock->flags);
>   break;
>   }
> + /* If more outstanding DMAs, queue the work */
> + if (vq->upend_idx - vq->done_idx > VHOST_MAX_PEND) {
> + tx_poll_start(net, sock);
> + set_bit(SOCK_ASYNC_NOSPACE, &sock->flags);
> + break;
> + }
>   if (unlikely(vhost_enable_notify(&net->dev, vq))) {
>   vhost_disable_notify(&net->dev, vq);
>   continue;
> @@ -188,9 +215,39 @@ static void handle_tx(struct vhost_net *net)
>  iov_length(vq->hdr, s), hdr_size);
>   break;
>   }
> + /* use msg_control to pass vhost zerocopy ubuf info to skb */
> + if (zcopy) {
> + vq->heads[vq->upend_idx].i

Re: [PATCHv9] vhost: experimental tx zero-copy support

2011-07-17 Thread David Miller
From: "Michael S. Tsirkin" 
Date: Sun, 17 Jul 2011 22:36:14 +0300

> The below is what I came up with. We add the feature enabled
> by default ...

s/enabled/disabled/  Well, at least you got it right in the
commit message where it counts :-)


[PATCHv9] vhost: experimental tx zero-copy support

2011-07-17 Thread Michael S. Tsirkin
From: Shirley Ma 

This adds experimental zero copy support in vhost-net,
disabled by default. To enable, set the zerocopytx
module option to 1.

This patch maintains the outstanding userspace buffers in the
sequence in which they are delivered to vhost. An outstanding
userspace buffer is marked as done once the lower device's DMA on
that buffer has finished. This is monitored through the
last-reference kfree_skb callback. Two buffer indices are used for
this purpose.

The vhost-net device passes the userspace buffer info to the lower
device skb through message control. DMA-done status checking and
guest notification are handled by handle_tx: in the worst case, all
buffers in the vq are in pending/done status, so we need to notify
the guest to release DMA-done buffers before we fetch any new
buffers from the vq.
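
As a rough illustration of the two-index scheme described above, here is a
minimal, self-contained sketch (illustrative only -- this is not the vhost
code; the pend_ring structure, DMA_DONE_LEN marker, and fixed RING_SIZE are
assumptions made for the example):

/*
 * Outstanding zero-copy buffers are remembered in submission order.
 * done_idx chases upend_idx as the lower device completes DMA on each
 * buffer; completed entries are released back to the guest in order.
 * (The real driver additionally bounds the number of outstanding
 * entries, e.g. via VHOST_MAX_PEND, so the ring never overflows.)
 */
#define RING_SIZE    256
#define DMA_DONE_LEN 1      /* marker meaning "DMA finished on this entry" */

struct pend_entry {
        unsigned int id;    /* descriptor head handed to the device */
        unsigned int len;   /* set to DMA_DONE_LEN on completion */
};

struct pend_ring {
        struct pend_entry heads[RING_SIZE];
        unsigned int upend_idx;  /* next slot to fill on transmit */
        unsigned int done_idx;   /* oldest slot not yet returned to guest */
};

/* Transmit path: remember a buffer that is now in flight. */
static void pend_add(struct pend_ring *r, unsigned int id)
{
        r->heads[r->upend_idx].id = id;
        r->heads[r->upend_idx].len = 0;
        r->upend_idx = (r->upend_idx + 1) % RING_SIZE;
}

/* Completion callback (last-reference skb destructor): mark DMA as done. */
static void pend_complete(struct pend_ring *r, unsigned int slot)
{
        r->heads[slot].len = DMA_DONE_LEN;
}

/* Top of handle_tx: release completed buffers, in order, before fetching
 * new descriptors from the virtqueue. */
static unsigned int pend_reap(struct pend_ring *r)
{
        unsigned int released = 0;

        while (r->done_idx != r->upend_idx &&
               r->heads[r->done_idx].len == DMA_DONE_LEN) {
                /* the real code would call vhost_add_used_and_signal() here */
                r->done_idx = (r->done_idx + 1) % RING_SIZE;
                released++;
        }
        return released;
}

In the patch itself, vq->upend_idx and vq->done_idx play these roles, the
completion is signalled from the skb's last-reference kfree_skb callback,
and the release step runs at the top of handle_tx before new descriptors
are fetched from the vq.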

One known problem is that if the guest stops submitting
buffers, buffers might never get used until some
further action, e.g. device reset. This does not
seem to affect linux guests.

Signed-off-by: Shirley 
Signed-off-by: Michael S. Tsirkin 
---

The below is what I came up with. We add the feature enabled
by default for now as there are known issues, but some
guests can benefit so there's value in putting this
in tree, to help the code get wider testing.

 drivers/vhost/net.c   |   73 +-
 drivers/vhost/vhost.c |   85 +
 drivers/vhost/vhost.h |   29 +
 3 files changed, 186 insertions(+), 1 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index e224a92..226ca6b 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -28,10 +29,18 @@
 
 #include "vhost.h"
 
+static int zcopytx;
+module_param(zcopytx, int, 0444);
+MODULE_PARM_DESC(lnksts, "Enable Zero Copy Transmit");
+
 /* Max number of bytes transferred before requeueing the job.
  * Using this limit prevents one virtqueue from starving others. */
 #define VHOST_NET_WEIGHT 0x8
 
+/* MAX number of TX used buffers for outstanding zerocopy */
+#define VHOST_MAX_PEND 128
+#define VHOST_GOODCOPY_LEN 256
+
 enum {
VHOST_NET_VQ_RX = 0,
VHOST_NET_VQ_TX = 1,
@@ -54,6 +63,11 @@ struct vhost_net {
enum vhost_net_poll_state tx_poll_state;
 };
 
+static bool vhost_sock_zcopy(struct socket *sock)
+{
+   return unlikely(zcopytx) && sock_flag(sock->sk, SOCK_ZEROCOPY);
+}
+
 /* Pop first len bytes from iovec. Return number of segments used. */
 static int move_iovec_hdr(struct iovec *from, struct iovec *to,
  size_t len, int iov_count)
@@ -129,6 +143,8 @@ static void handle_tx(struct vhost_net *net)
int err, wmem;
size_t hdr_size;
struct socket *sock;
+   struct vhost_ubuf_ref *uninitialized_var(ubufs);
+   bool zcopy;
 
/* TODO: check that we are running from vhost_worker? */
sock = rcu_dereference_check(vq->private_data, 1);
@@ -149,8 +165,13 @@ static void handle_tx(struct vhost_net *net)
if (wmem < sock->sk->sk_sndbuf / 2)
tx_poll_stop(net);
hdr_size = vq->vhost_hlen;
+   zcopy = vhost_sock_zcopy(sock);
 
for (;;) {
+   /* Release DMAs done buffers first */
+   if (zcopy)
+   vhost_zerocopy_signal_used(vq);
+
head = vhost_get_vq_desc(&net->dev, vq, vq->iov,
 ARRAY_SIZE(vq->iov),
 &out, &in,
@@ -166,6 +187,12 @@ static void handle_tx(struct vhost_net *net)
set_bit(SOCK_ASYNC_NOSPACE, &sock->flags);
break;
}
+   /* If more outstanding DMAs, queue the work */
+   if (vq->upend_idx - vq->done_idx > VHOST_MAX_PEND) {
+   tx_poll_start(net, sock);
+   set_bit(SOCK_ASYNC_NOSPACE, &sock->flags);
+   break;
+   }
if (unlikely(vhost_enable_notify(&net->dev, vq))) {
vhost_disable_notify(&net->dev, vq);
continue;
@@ -188,9 +215,39 @@ static void handle_tx(struct vhost_net *net)
   iov_length(vq->hdr, s), hdr_size);
break;
}
+   /* use msg_control to pass vhost zerocopy ubuf info to skb */
+   if (zcopy) {
+   vq->heads[vq->upend_idx].id = head;
+   if (len < VHOST_GOODCOPY_LEN) {
+   /* copy don't need to wait for DMA done */
+   vq->heads[vq->upend_idx].len =
+   VHOST_DMA_DONE_LEN;
+   msg.msg_control = NULL;

Re: Large Patch Series in Email (was Re: [PATCH 0000/0117] Staging: hv: Driver cleanup)

2011-07-17 Thread Florian Mickler
On Sat, 16 Jul 2011 00:07:39 +0100
Alan Cox  wrote:

> >   Do not send more than 15 patches at once to the vger
> >   mailing lists!!!
> > 
> > and, accordingly, I went to the trouble of setting up a GitHub
> > account to host a repo from which I could issue *one* single
> > PULL request email; I get a little miffed every time my
> > inbox gets blasted with hundreds of patches when others don't
> > do similarly.
> 
> The problem with dumping stuff that needs review into a git tree is it's
> a lot of hassle to review so the advice is kind of outdated in such cases.
> It's good advice for things like big new subsystems perhaps but not for
> review.
> 
> As is always the case social norms evolve faster than the people who feel
> compelled to attempt to document them.
> 
> There are lots of web archives of the list and it's also not hard to set
> up mail tools to shuffle long emails into a folder so there are plenty of
> ways to manage and read the lists without being part of it.
> 
> And someone should probably update the CodingStyle document to reflect
> reality 8)
> 
> Alan

And while at it, maybe add some tips on how to keep patch series
small... (people probably have more / better suggestions, please):

- 'submit early, submit often' instead of time-based submission (like...
  oh, I hacked 24/7 this week and now it's Friday, let's see how many
  patches I've got...)
- put controversial stuff at the end (so at least the uncontroversial
  stuff can be applied)
- try to split a patch series into pieces that can be applied
  independently of one another.
- ...

Regards,
Flo


Re: RFT: virtio_net: limit xmit polling

2011-07-17 Thread Michael S. Tsirkin
On Thu, Jul 14, 2011 at 12:38:05PM -0700, Roopa Prabhu wrote:
> Michael, below are some numbers I got from one round of runs.
> Thanks,
> Roopa

Thanks!
At this point it does not appear that there is any measurable
impact from moving the polling around.

-- 
MST