NBD can become contended on its single connection. We have to serialize all
writes and we can only process one read response at a time. Fix this by
allowing userspace to provide multiple connections to a single nbd device. This
coupled with block-mq drastically increases performance in multi-
On Thu, Oct 06, 2016 at 06:16:30AM -0700, Christoph Hellwig wrote:
> On Thu, Oct 06, 2016 at 03:09:49PM +0200, Wouter Verhelst wrote:
> > Okay, I've updated the proto.md file then, to clarify that in the case
> > of multiple connections, a client MUST NOT send a flush request until it
> > has seen
On Thu, Oct 06, 2016 at 03:09:49PM +0200, Wouter Verhelst wrote:
> Okay, I've updated the proto.md file then, to clarify that in the case
> of multiple connections, a client MUST NOT send a flush request until it
> has seen the replies to the write requests that it cares about. That
> should be eno
On Thu, Oct 06, 2016 at 03:31:55AM -0700, Christoph Hellwig wrote:
> No, they match the cache flush semantics in every other storage protocol
> known to me, and they match the expectations of both the Linux kernel
> and any other OS or consumer I know about perfectly.
Okay, I've updated the proto.
> On 6 Oct 2016, at 11:15, Wouter Verhelst wrote:
>
>>
>>
>> but I still think it would be helpful if the protocol helped out
>> the end user of the client and refused to negotiate multichannel
>> connections when they are unsafe. How is the end client meant to know
>> whether the back en
On Thu, Oct 06, 2016 at 11:04:15AM +0200, Wouter Verhelst wrote:
> In the current situation, a client could opportunistically send a number
> of write requests immediately followed by a flush and hope for the best.
> However, in that case there is no guarantee that for the write requests
> that the
On Thu, Oct 06, 2016 at 10:41:36AM +0100, Alex Bligh wrote:
> Wouter,
[...]
> > Given that, given the issue in the previous
> > paragraph, and given the uncertainty introduced with multiple
> > connections, I think it is reasonable to say that a client should just
> > not assume a flush touches any
Wouter,
> On 6 Oct 2016, at 10:04, Wouter Verhelst wrote:
>
> Hi Alex,
>
> On Tue, Oct 04, 2016 at 10:35:03AM +0100, Alex Bligh wrote:
>> Wouter,
>>> I see now that it should be closer
>>> to the former; a more useful definition is probably something along the
>>> following lines:
>>>
>>> Al
Hi Alex,
On Tue, Oct 04, 2016 at 10:35:03AM +0100, Alex Bligh wrote:
> Wouter,
> > I see now that it should be closer
> > to the former; a more useful definition is probably something along the
> > following lines:
> >
> > All write commands (that includes NBD_CMD_WRITE and NBD_CMD_TRIM)
> >
Wouter,
>>> It is impossible for nbd to make such a guarantee, due to head-of-line
>>> blocking on TCP.
>>
>> this is perfectly accurate as far as it goes, but this isn't the current
>> NBD definition of 'flush'.
>
> I didn't read it that way.
>
>> That is (from the docs):
>>
>>> All write com
Alex,
Christoph,
On Mon, Oct 03, 2016 at 12:34:33PM +0100, Alex Bligh wrote:
> On 3 Oct 2016, at 08:57, Christoph Hellwig wrote:
> >> Can you clarify what you mean by that? Why is it an "odd flush
> >> definition", and how would you "properly" define it?
> >
> > E.g. take the definition from NVMe
Josef,
> On 3 Oct 2016, at 15:32, Josef Bacik wrote:
>
> Ok I understand your objections now. You aren't arguing that we are unsafe
> by default, only that we are unsafe with servers that do something special
> beyond simply writing to a single disk or file. I agree this is problematic,
> b
On 10/03/2016 07:34 AM, Alex Bligh wrote:
On 3 Oct 2016, at 08:57, Christoph Hellwig wrote:
Can you clarify what you mean by that? Why is it an "odd flush
definition", and how would you "properly" define it?
E.g. take the definition from NVMe which also supports multiple queues:
"The Flush
> On 3 Oct 2016, at 08:57, Christoph Hellwig wrote:
>
>> Can you clarify what you mean by that? Why is it an "odd flush
>> definition", and how would you "properly" define it?
>
> E.g. take the definition from NVMe which also supports multiple queues:
>
> "The Flush command shall commit data an
On Mon, Oct 03, 2016 at 09:51:49AM +0200, Wouter Verhelst wrote:
> Actually, I was pointing out the TCP head-of-line issue, where a delay
> on the socket that contains the flush reply would result in the arrival
> in the kernel block layer of a write reply before the said flush reply,
> resulting i
On Mon, Oct 03, 2016 at 12:20:49AM -0700, Christoph Hellwig wrote:
> On Mon, Oct 03, 2016 at 01:47:06AM +, Josef Bacik wrote:
> > It's not "broken", it's working as designed, and any fs on top of this
> > patch will be perfectly safe because they all wait for their io to complete
> > before iss
On Sun, Oct 02, 2016 at 05:17:14PM +0100, Alex Bligh wrote:
> On 29 Sep 2016, at 17:59, Josef Bacik wrote:
> > Huh I missed that. Yeah that's not possible for us for sure, I think my
> > option
> > idea is the less awful way forward if we want to address that limitation.
> > Thanks,
>
> I th
On Mon, Oct 03, 2016 at 01:47:06AM +, Josef Bacik wrote:
> It's not "broken", it's working as designed, and any fs on top of this patch
> will be perfectly safe because they all wait for their io to complete before
> issuing the FLUSH. If somebody wants to address the paranoid case later the
> On Oct 2, 2016, at 12:17 PM, Alex Bligh wrote:
>
>
>> On 29 Sep 2016, at 17:59, Josef Bacik wrote:
>>
>> On 09/29/2016 12:41 PM, Wouter Verhelst wrote:
On Thu, Sep 29, 2016 at 10:03:50AM -0400, Josef Bacik wrote:
So think of it like normal disks with multiple channels. We don't s
> On 29 Sep 2016, at 17:59, Josef Bacik wrote:
>
> On 09/29/2016 12:41 PM, Wouter Verhelst wrote:
>> On Thu, Sep 29, 2016 at 10:03:50AM -0400, Josef Bacik wrote:
>>> So think of it like normal disks with multiple channels. We don't send
>>> flushes
>>> down all the hwq's to make sure they are
On 09/29/2016 12:41 PM, Wouter Verhelst wrote:
On Thu, Sep 29, 2016 at 10:03:50AM -0400, Josef Bacik wrote:
So think of it like normal disks with multiple channels. We don't send flushes
down all the hwq's to make sure they are clear, we leave that decision up to the
application (usually a FS o
On Thu, Sep 29, 2016 at 10:03:50AM -0400, Josef Bacik wrote:
> So think of it like normal disks with multiple channels. We don't send
> flushes
> down all the hwq's to make sure they are clear, we leave that decision up to
> the
> application (usually a FS of course).
Well, when I asked earli
On 09/29/2016 05:52 AM, Wouter Verhelst wrote:
Hi Josef,
On Wed, Sep 28, 2016 at 04:01:32PM -0400, Josef Bacik wrote:
NBD can become contended on its single connection. We have to serialize all
writes and we can only process one read response at a time. Fix this by
allowing userspace to provi
Hi Josef,
On Wed, Sep 28, 2016 at 04:01:32PM -0400, Josef Bacik wrote:
> NBD can become contended on its single connection. We have to serialize all
> writes and we can only process one read response at a time. Fix this by
> allowing userspace to provide multiple connections to a single nbd devi
NBD can become contended on its single connection. We have to serialize all
writes and we can only process one read response at a time. Fix this by
allowing userspace to provide multiple connections to a single nbd device. This
coupled with block-mq drastically increases performance in multi-pro