On Thu, Jun 07, 2012 at 11:47:56AM +0800, Liu Yuan wrote:
On 06/07/2012 06:23 AM, Christoph Hellwig wrote:
Applied the first one, and the other needs rebasing to master.
they need the "sheep: fix nr_nodes calculation in local_stat_cluster"
fix applied first. I really don't think merging a minimally
Allow users to override the address advertised to other sheep. This is
important for setups where the computers running sheep nodes have multiple
network interfaces and we need to use a specific one.
Signed-off-by: Christoph Hellwig h...@lst.de
Index: sheepdog/sheep/group.c
On 06/07/2012 04:30 PM, Christoph Hellwig wrote:
On Thu, Jun 07, 2012 at 11:47:56AM +0800, Liu Yuan wrote:
On 06/07/2012 06:23 AM, Christoph Hellwig wrote:
Applied the first one, and the other needs rebasing to master.
they need the "sheep: fix nr_nodes calculation in local_stat_cluster"
fix applied first.
On Thu, Jun 07, 2012 at 04:44:20PM +0800, Liu Yuan wrote:
But if we already see the problem, we only need to fix it once. You
added one fix in the previous patch and then removed those lines in
the next fix; since these two fixes happen one after another, I don't
think there is any point
On 06/07/2012 06:23 AM, Christoph Hellwig wrote:
Applied this series. Thanks,
Yuan
--
sheepdog mailing list
sheepdog@lists.wpkg.org
http://lists.wpkg.org/mailman/listinfo/sheepdog
On 06/07/2012 04:40 PM, Christoph Hellwig wrote:
Allow users to override the address advertised to other sheep. This is
important for setups where the computers running sheep nodes have multiple
network interfaces and we need to use a specific one.
Applied after adjusting the paragraph indentation,
I have a question about migrating a QEMU virtual machine from
one node to another...
Is there something I have to pay attention to when using the
Farm cache mechanisms?
For example, is it possible that I run my virtual machine V
on host A, start a live migration to host B, but some content
in
On 06/05/2012 06:22 PM, MORITA Kazutaka wrote:
At Tue, 05 Jun 2012 16:43:05 +0800,
Liu Yuan wrote:
On 06/04/2012 04:53 PM, MORITA Kazutaka wrote:
One possibility is that if forward_write_obj_req() fails before
receiving data, the next forward_(read|write)_obj_req() could be
interleaved.
From: Liu Yuan tailai...@taobao.com
This patch addresses one very subtle problem, to quote from Kazutaka:
One possibility is that if forward_write_obj_req() fails before
receiving data, the next forward_(read|write)_obj_req() could be
interleaved.
The interleaved requests
From: Liu Yuan tailai...@taobao.com
The cache pool is a TCP connection, so we don't need a timeout on it: if
the peer doesn't return data to us, it means it is really busy preparing
the response or doing other work, but it will send the data to us eventually.
If the node fails without
On 06/07/2012 05:19 PM, Bastian Scholz wrote:
I have a question about migrating a QEMU virtual machine from
one node to another...
Is there something I have to pay attention to when using the
Farm cache mechanisms?
Farm doesn't cache any data for regular IO. Instead, Farm caches
snapshot
This patch looks good to me: since we have removed the
'nr_outstanding_io' counter, socket timeout handling is not
necessary anymore.
On Thu, Jun 7, 2012 at 5:29 PM, Liu Yuan namei.u...@gmail.com wrote:
From: Liu Yuan tailai...@taobao.com
The cache pool is a TCP connection, so we don't
At Thu, 7 Jun 2012 17:29:35 +0800,
Liu Yuan wrote:
From: Liu Yuan tailai...@taobao.com
This patch addresses one very subtle problem, to quote from Kazutaka:
One possibility is that if forward_write_obj_req() fails before
receiving data, the next
At Thu, 7 Jun 2012 17:29:36 +0800,
Liu Yuan wrote:
From: Liu Yuan tailai...@taobao.com
The cache pool is a TCP connection, so we don't need a timeout on it: if
the peer doesn't return data to us, it means it is really busy preparing
the response or doing other work, but it will send
On 06/07/2012 11:07 PM, MORITA Kazutaka wrote:
5 seconds is actually too short, but is it really good to remove
timeout completely? Without timeout, how long does send/recv/poll
block when network error happens, and how long do guest OSes wait for
read/write/flush to return?
How about set
On 06/07/2012 11:20 PM, Liu Yuan wrote:
On 06/07/2012 11:07 PM, MORITA Kazutaka wrote:
5 seconds is actually too short, but is it really good to remove
timeout completely? Without timeout, how long does send/recv/poll
block when network error happens, and how long do guest OSes wait for
At Thu, 07 Jun 2012 23:25:39 +0800,
Liu Yuan wrote:
On 06/07/2012 11:20 PM, Liu Yuan wrote:
On 06/07/2012 11:07 PM, MORITA Kazutaka wrote:
5 seconds is actually too short, but is it really good to remove
timeout completely? Without timeout, how long does send/recv/poll
block when
The work structure in struct flush_work must be initialized to zero
so that work->attr is WORK_SIMPLE.
Signed-off-by: MORITA Kazutaka morita.kazut...@lab.ntt.co.jp
---
 sheep/object_cache.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/sheep/object_cache.c
On 06/08/2012 12:06 AM, MORITA Kazutaka wrote:
At Fri, 08 Jun 2012 00:01:20 +0800,
Liu Yuan wrote:
On 06/07/2012 11:50 PM, MORITA Kazutaka wrote:
The reason we use a timeout for socket connections is that, when a
membership change happens, the gateway should retry I/Os with the new
membership