Re: [PATCH RFC 01/10] vmalloc: Add basic perm alloc implementation

2020-11-24 Thread h...@infradead.org
On Mon, Nov 23, 2020 at 08:44:12PM +, Edgecombe, Rick P wrote: > Well, there were two reasons: > 1. Non-standard naming for the PAGE_FOO flags. For example, > PAGE_KERNEL_ROX vs PAGE_KERNEL_READ_EXEC. This could be unified. I > think it's just riscv that breaks the conventions. Others are just

Re: [RFC PATCH 1/9] cxl/acpi: Add an acpi_cxl module for the CXL interconnect

2020-11-10 Thread h...@infradead.org
On Wed, Nov 11, 2020 at 07:30:34AM +, Verma, Vishal L wrote: > Hi Christoph, > > I thought 100 col. lines were acceptable now. Quote from the coding style document: "The preferred limit on the length of a single line is 80 columns. Statements longer than 80 columns should be broken into

Re: [dm-devel] [PATCH 0/2] block layer filter and block device snapshot module

2020-10-23 Thread h...@infradead.org
On Fri, Oct 23, 2020 at 12:31:05PM +0200, Hannes Reinecke wrote: > My thoughts went more into the direction of hooking into ->submit_bio, > seeing that it's a NULL pointer for most (all?) block drivers. > > But sure, I'll check how the interposer approach would turn out. submit_bio is owned by
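A rough sketch of the kind of hook under discussion, with invented names (this is not the posted patch); the objection forming in the truncated reply is that ->submit_bio is owned by the driver, so a filter needs its own callout in the generic submission path rather than overwriting the driver's pointer:

    #include <linux/bio.h>
    #include <linux/blkdev.h>

    /* Hypothetical filter attached to a block device. */
    struct bdev_interposer {
            /* return true if the bio was consumed, e.g. redirected for COW */
            bool (*intercept)(struct bdev_interposer *ip, struct bio *bio);
    };

    /* Callout the generic submission path would make before the driver. */
    static bool blk_interpose_bio(struct bdev_interposer *ip, struct bio *bio)
    {
            if (ip && ip->intercept)
                    return ip->intercept(ip, bio);
            return false;   /* fall through to the normal driver path */
    }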

Re: [PATCH 0/2] block layer filter and block device snapshot module

2020-10-23 Thread h...@infradead.org
On Thu, Oct 22, 2020 at 01:54:16PM -0400, Mike Snitzer wrote: > On Thu, Oct 22, 2020 at 11:14 AM Darrick J. Wong > > Stupid question: Why don't you change the block layer to make it > > possible to insert device mapper devices after the blockdev has been set > > up? > > Not a stupid question.

Re: [PATCH v4 6/6] io_uring: add support for zone-append

2020-09-08 Thread h...@infradead.org
On Mon, Sep 07, 2020 at 12:31:42PM +0530, Kanchan Joshi wrote: > But there are use-cases which benefit from supporting zone-append on > raw block-dev path. > Certain user-space log-structured/cow FS/DB will use the device that > way. Aerospike is one example. > Pass-through is synchronous, and we

Re: [PATCH] PCI/ASPM: Enable ASPM for links under VMD domain

2020-08-29 Thread h...@infradead.org
On Thu, Aug 27, 2020 at 05:49:40PM +, Limonciello, Mario wrote: > Can you further elaborate what exactly you're wanting here? VMD > enable/disable > is something that is configured in firmware setup as the firmware does the > early > configuration for the silicon related to it. So it's up

Re: [PATCH] PCI/ASPM: Enable ASPM for links under VMD domain

2020-08-29 Thread h...@infradead.org
On Thu, Aug 27, 2020 at 02:33:56PM -0700, Dan Williams wrote: > > Just a few benefits and there are other users with unique use cases: > > 1. Passthrough of the endpoint to OSes which don't natively support > > hotplug can enable hotplug for that OS using the guest VMD driver > > 2. Some

Re: [PATCH] PCI/ASPM: Enable ASPM for links under VMD domain

2020-08-27 Thread h...@infradead.org
On Thu, Aug 27, 2020 at 04:45:53PM +, Derrick, Jonathan wrote: > Just a few benefits and there are other users with unique use cases: > 1. Passthrough of the endpoint to OSes which don't natively support > hotplug can enable hotplug for that OS using the guest VMD driver Or they could just

Re: [PATCH] PCI/ASPM: Enable ASPM for links under VMD domain

2020-08-27 Thread h...@infradead.org
On Thu, Aug 27, 2020 at 04:13:44PM +, Derrick, Jonathan wrote: > On Thu, 2020-08-27 at 06:34 +0000, h...@infradead.org wrote: > > On Wed, Aug 26, 2020 at 09:43:27PM +, Derrick, Jonathan wrote: > > > Feel free to review my set to disable the MSI remapping whi

Re: [PATCH] PCI/ASPM: Enable ASPM for links under VMD domain

2020-08-27 Thread h...@infradead.org
On Wed, Aug 26, 2020 at 09:43:27PM +, Derrick, Jonathan wrote: > Feel free to review my set to disable the MSI remapping which will make > it perform as well as direct-attached: > > https://patchwork.kernel.org/project/linux-pci/list/?series=325681 So that then we have to deal with your

Re: [PATCH v4 6/6] io_uring: add support for zone-append

2020-08-14 Thread h...@infradead.org
On Fri, Aug 14, 2020 at 08:27:13AM +, Damien Le Moal wrote: > > > > O_APPEND pretty much implies out of order, as there is no way for an > > application to know which thread wins the race to write the next chunk. > > Yes and no. If the application threads do not synchronize their calls to >

Re: [PATCH v4 6/6] io_uring: add support for zone-append

2020-08-14 Thread h...@infradead.org
On Wed, Aug 05, 2020 at 07:35:28AM +, Damien Le Moal wrote: > > the write pointer. The only interesting addition is that we also want > > to report where we wrote. So I'd rather have RWF_REPORT_OFFSET or so. > > That works for me. But that rules out having the same interface for raw block >

Re: [PATCH v4 6/6] io_uring: add support for zone-append

2020-07-31 Thread h...@infradead.org
And FYI, this is what I'd do for a hacky aio-only prototype (untested): diff --git a/fs/aio.c b/fs/aio.c index 91e7cc4a9f179b..42b1934e38758b 100644 --- a/fs/aio.c +++ b/fs/aio.c @@ -1438,7 +1438,10 @@ static void aio_complete_rw(struct kiocb *kiocb, long res, long res2) }
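The diff is truncated by the archive; the idea in the thread is to report back where an append write actually landed. A minimal userspace consumer of such a prototype could look like the following, assuming (as the prototype does; current kernels do not) that the starting offset is returned in the completion's res2 field:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/aio_abi.h>

    int main(void)
    {
            aio_context_t ctx = 0;
            struct iocb cb = { 0 };
            struct iocb *cbs[1] = { &cb };
            struct io_event ev;
            static char buf[4096] = "appended data\n";
            int fd = open("testfile", O_WRONLY | O_CREAT | O_APPEND, 0644);

            if (fd < 0 || syscall(SYS_io_setup, 1, &ctx) < 0)
                    return 1;

            cb.aio_lio_opcode = IOCB_CMD_PWRITE;
            cb.aio_fildes = fd;
            cb.aio_buf = (unsigned long)buf;
            cb.aio_nbytes = sizeof(buf);
            /* aio_offset is ignored for O_APPEND; the kernel picks the spot */

            if (syscall(SYS_io_submit, ctx, 1, cbs) != 1)
                    return 1;
            if (syscall(SYS_io_getevents, ctx, 1, 1, &ev, NULL) != 1)
                    return 1;

            /* prototype assumption: res = bytes written, res2 = where */
            printf("wrote %lld bytes at offset %lld\n",
                   (long long)ev.res, (long long)ev.res2);
            return 0;
    }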

Re: [PATCH v4 6/6] io_uring: add support for zone-append

2020-07-31 Thread h...@infradead.org
On Fri, Jul 31, 2020 at 10:16:49AM +, Damien Le Moal wrote: > > > > Let's keep semantics and implementation separate. For the case > > where we report the actual offset we need a size limitation and no > > short writes. > > OK. So the name of the flag confused me. The flag name should

Re: [PATCH v4 6/6] io_uring: add support for zone-append

2020-07-31 Thread h...@infradead.org
On Fri, Jul 31, 2020 at 09:34:50AM +, Damien Le Moal wrote: > Sync writes are done under the inode lock, so there cannot be other writers at > the same time. And for the sync case, since the actual written offset is > necessarily equal to the file size before the write, there is no need to >

Re: [PATCH v4 6/6] io_uring: add support for zone-append

2020-07-31 Thread h...@infradead.org
On Fri, Jul 31, 2020 at 08:14:22AM +, Damien Le Moal wrote: > > > This was one of the reason why we chose to isolate the operation by a > > different IOCB flag and not by IOCB_APPEND alone. > > For zonefs, the plan is: > * For the sync write case, zone append is always used. > * For the

Re: [PATCH v4 6/6] io_uring: add support for zone-append

2020-07-31 Thread h...@infradead.org
On Fri, Jul 31, 2020 at 06:42:10AM +, Damien Le Moal wrote: > > - We may not be able to use RWF_APPEND, and need exposing a new > > type/flag (RWF_INDIRECT_OFFSET etc.) user-space. Not sure if this > > sounds outrageous, but is it OK to have uring-only flag which can be > > combined with

Re: [v6 PATCH] RISC-V: Remove unsupported isa string info print

2019-10-07 Thread h...@infradead.org
On Wed, Oct 02, 2019 at 06:28:59AM +, Atish Patra wrote: > On Wed, 2019-10-02 at 09:53 +0800, Alan Kao wrote: > > On Tue, Oct 01, 2019 at 03:10:16AM -0700, h...@infradead.org wrote: > > > On Tue, Oct 01, 2019 at 08:22:37AM +, Atish Patra wrote: > > > > riscv_

Re: [v6 PATCH] RISC-V: Remove unsupported isa string info print

2019-10-01 Thread h...@infradead.org
On Tue, Oct 01, 2019 at 08:22:37AM +, Atish Patra wrote: > riscv_of_processor_hartid() or seems to be a better candidate. We > already check if "rv" is present in isa string or not. I will extend > that to check for rv64i or rv32i. Is that okay ? I'd rather lift the checks out of that into a
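A trivial sketch of the lifted-out check (hypothetical helper name; simplified to the i base, ignoring rv32e):

    #include <linux/string.h>

    /* Hypothetical: accept only ISA strings with a valid rv32i/rv64i base. */
    static bool riscv_isa_string_valid(const char *isa)
    {
            return !strncmp(isa, "rv32i", 5) || !strncmp(isa, "rv64i", 5);
    }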

Re: [RFC PATCH 0/2] Add support for SBI version to 0.2

2019-09-16 Thread h...@infradead.org
On Fri, Sep 13, 2019 at 08:54:27AM -0700, Palmer Dabbelt wrote: > On Tue, Sep 3, 2019 at 12:38 AM h...@infradead.org wrote: > > > On Fri, Aug 30, 2019 at 11:13:25PM +, Atish Patra wrote: > > > If I understood you clearly, you want to call it legacy in the spec an

Re: [v5 PATCH] RISC-V: Fix unsupported isa string info.

2019-09-10 Thread h...@infradead.org
On Fri, Sep 06, 2019 at 11:27:57PM +, Atish Patra wrote: > > Agreed. May be something like this ? > > > > Let's say f/d is enabled in kernel but cpu doesn't support it. > > "unsupported isa" will only appear if there are any unsupported isa. > > > > processor : 3 > > hart:

Re: [RFC PATCH 0/2] Add support for SBI version to 0.2

2019-09-03 Thread h...@infradead.org
On Fri, Aug 30, 2019 at 11:13:25PM +, Atish Patra wrote: > If I understood you clearly, you want to call it legacy in the spec and > just say v0.1 extensions. > > The whole idea of marking them as legacy extensions to indicate that it > would be obsolete in the future. > > But I am not too

Re: [RFC PATCH 0/2] Add support for SBI version to 0.2

2019-08-29 Thread h...@infradead.org
On Tue, Aug 27, 2019 at 10:19:42PM +, Atish Patra wrote: > I did not understand this part. All the legacy SBI calls are defined as > a separate extension ID not single extension. How did it break the > backward compatibility ? Yes, sorry I misread this. The way it is defined is rather

Re: [RFC PATCH 1/2] RISC-V: Mark existing SBI as legacy SBI.

2019-08-29 Thread h...@infradead.org
On Tue, Aug 27, 2019 at 08:37:27PM +, Atish Patra wrote: > That would split the implementation between C file & assembly file for > no good reason. > > How about moving everything in sbi.c and just write everything inline > assembly there. Well, if we implement it in pure assembly that would

Re: [v2 PATCH] RISC-V: Optimize tlb flush path.

2019-08-21 Thread h...@infradead.org
Btw, for the next version it also might make sense to do one optimization at a time. E.g. the empty-cpumask check as the first patch, the direct local-cpu flush next, and the threshold-based full flush as a third.

Re: [v2 PATCH] RISC-V: Optimize tlb flush path.

2019-08-21 Thread h...@infradead.org
On Tue, Aug 20, 2019 at 08:29:47PM +, Atish Patra wrote: > Sounds good to me. Christoph has already mm/tlbflush.c in his mmu > series. I will rebase on top of it. It wasn't really intended for the nommu series but for the native clint prototype. But the nommu series grew so many cleanups and

Re: [v2 PATCH] RISC-V: Optimize tlb flush path.

2019-08-21 Thread h...@infradead.org
On Wed, Aug 21, 2019 at 09:22:48AM +0530, Anup Patel wrote: > I agree that IPI mechanism should be standardized for RISC-V but I > don't support the idea of mandating CLINT as part of the UNIX > platform spec. For example, the AndesTech SOC does not use CLINT > instead they have PLMT for per-HART

Re: [v2 PATCH] RISC-V: Optimize tlb flush path.

2019-08-20 Thread h...@infradead.org
On Wed, Aug 21, 2019 at 09:29:22AM +0800, Alan Kao wrote: > IMHO, this approach should be avoided because CLINT is compatible to but > not mandatory in the privileged spec. In other words, it is possible that > a Linux-capable RISC-V platform does not contain a CLINT component but > rely on some

Re: [v2 PATCH] RISC-V: Optimize tlb flush path.

2019-08-20 Thread h...@infradead.org
On Tue, Aug 20, 2019 at 08:28:36PM +, Atish Patra wrote: > > http://git.infradead.org/users/hch/riscv.git/commitdiff/ea4067ae61e20fcfcf46a6f6bd1cc25710ce3afe > > This does seem a lot cleaner to me. We can reuse some of the code for > this patch as well. Based on NATIVE_CLINT configuration, it

Re: [v2 PATCH] RISC-V: Optimize tlb flush path.

2019-08-20 Thread h...@infradead.org
On Tue, Aug 20, 2019 at 08:42:19AM +, Atish Patra wrote: > cmask NULL is pretty common case and we would be unnecessarily > executing bunch of instructions every time while not saving much. Kernel > still have to make an SBI call and OpenSBI is doing a local flush > anyways. > > Looking at

Re: [v2 PATCH] RISC-V: Optimize tlb flush path.

2019-08-20 Thread h...@infradead.org
On Tue, Aug 20, 2019 at 09:14:58AM +0200, Andreas Schwab wrote: > On Aug 19 2019, "h...@infradead.org" wrote: > > > This looks a little odd to me and assumes we never pass a size smaller > than PAGE_SIZE. While that is probably true, why not something like: >

Re: [v2 PATCH] RISC-V: Optimize tlb flush path.

2019-08-19 Thread h...@infradead.org
On Mon, Aug 19, 2019 at 05:47:35PM -0700, Atish Patra wrote: > In RISC-V, tlb flush happens via SBI which is expensive. > If the target cpumask contains a local hartid, some cost > can be saved by issuing a local tlb flush as we do that > in OpenSBI anyways. There is also no need of SBI call if >
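A condensed sketch of the two shortcuts being proposed (simplified: the real patch builds a hart mask from the cpumask and also handles flush ranges and thresholds):

    static void sbi_tlb_flush(struct cpumask *cmask, unsigned long start,
                              unsigned long size)
    {
            unsigned int cpu = get_cpu();

            if (cpumask_empty(cmask))
                    goto out;               /* no SBI call needed at all */

            if (cpumask_equal(cmask, cpumask_of(cpu))) {
                    local_flush_tlb_all();  /* skip the expensive SBI round-trip */
            } else {
                    /* the real code converts cpu ids to hart ids first */
                    sbi_remote_sfence_vma(cpumask_bits(cmask), start, size);
            }
    out:
            put_cpu();
    }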

Re: [PATCH] RISC-V: Issue a local tlb flush if possible.

2019-08-19 Thread h...@infradead.org
On Mon, Aug 19, 2019 at 08:39:02PM +0530, Anup Patel wrote: > If we were using ASID then yes we don't need to flush anything > but currently we don't use ASID due to lack of HW support and > HW can certainly do speculatively page table walks so flushing > local TLB when MM mask is empty might

Re: [PATCH] RISC-V: Issue a local tlb flush if possible.

2019-08-19 Thread h...@infradead.org
On Thu, Aug 15, 2019 at 08:37:04PM +, Atish Patra wrote: > We get ton of them. Here is the stack dump. Looks like we might not need to flush anything at all here as the mm_struct was never scheduled to run on any cpu?

Re: [v5 PATCH] RISC-V: Fix unsupported isa string info.

2019-08-18 Thread h...@infradead.org
On Fri, Aug 16, 2019 at 07:21:52PM +, Atish Patra wrote: > > > + if (isa[0] != '\0') { > > > + /* Add remainging isa strings */ > > > + for (e = isa; *e != '\0'; ++e) { > > > +#if !defined(CONFIG_VIRTUALIZATION) > > > + if (e[0] != 'h') > > > +#endif > > > +

Re: [PATCH] RISC-V: Issue a local tlb flush if possible.

2019-08-13 Thread h...@infradead.org
On Tue, Aug 13, 2019 at 12:15:15AM +, Atish Patra wrote: > I thought if it receives an empty cpumask, then it should at least do a > local flush is the expected behavior. Are you saying that we should > just skip all and return ? How could we ever receive an empty cpu mask? I think it could

Re: [Xen-devel] [RFC] virtio_ring: check dma_mem for xen_domain

2019-01-28 Thread h...@infradead.org
On Fri, Jan 25, 2019 at 09:45:26AM +, Peng Fan wrote: > Just have a question, > > Since vmalloc_to_page is ok for cma area, no need to take cma and per device > cma into consideration right? The CMA area itself is a physical memory region. If it is a non-highmem region you can call

Re: [Xen-devel] [RFC] virtio_ring: check dma_mem for xen_domain

2019-01-23 Thread h...@infradead.org
On Wed, Jan 23, 2019 at 01:04:33PM -0800, Stefano Stabellini wrote: > If vring_use_dma_api is actually supposed to return true when > dma_dev->dma_mem is set, then both Peng's patch and the patch I wrote > are not fixing the real issue here. > > I don't know enough about remoteproc to know where

Re: [Xen-devel] [RFC] virtio_ring: check dma_mem for xen_domain

2019-01-22 Thread h...@infradead.org
On Tue, Jan 22, 2019 at 11:59:31AM -0800, Stefano Stabellini wrote: > > if (!virtio_has_iommu_quirk(vdev)) > > return true; > > > > @@ -260,7 +262,7 @@ static bool vring_use_dma_api(struct virtio_device > > *vdev) > > * the DMA API if we're a Xen guest, which at least
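For context, the function under discussion, with the RFC's proposed addition marked (a reconstructed sketch, not the verbatim patch):

    static bool vring_use_dma_api(struct virtio_device *vdev)
    {
            if (!virtio_has_iommu_quirk(vdev))
                    return true;

            /* Xen guests always need the DMA API (grant mappings). */
            if (xen_domain())
                    return true;

            /* RFC addition: so do devices backed by a per-device DMA pool. */
            if (vdev->dev.parent && vdev->dev.parent->dma_mem)
                    return true;

            return false;
    }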

Re: [RFC] virtio_ring: check dma_mem for xen_domain

2019-01-21 Thread h...@infradead.org
On Mon, Jan 21, 2019 at 04:51:57AM +, Peng Fan wrote: > on i.MX8QM, M4_1 is communicating with DomU using rpmsg with a fixed > address as the dma mem buffer which is predefined. > > Without this patch, the flow is: > vring_map_one_sg -> vring_use_dma_api > -> dma_map_page >

Re: [PATCH v4] RISC-V: defconfig: Enable Generic PCIE by default

2019-01-15 Thread h...@infradead.org
On Tue, Jan 15, 2019 at 05:45:34PM +, Alistair Francis wrote: > > Alistair, is there a ready-made qemu machine with riscv + pcie? > > That would be really useful for testing.. > > Yep, if you use master QEMU (it isn't in a release yet) the virt > machine has PCIe support. Cool. Time to play

Re: [PATCH v4 5/7] fs: prioritize and separate direct_io from dax_io

2016-05-08 Thread h...@infradead.org
On Thu, May 05, 2016 at 09:39:14PM +, Verma, Vishal L wrote: > How is it any 'less direct'? All it does now is follow the blockdev > O_DIRECT path. There still isn't any page cache involved.. It's still more overhead than the plain DAX I/O path.

Re: [PATCH v4 5/7] fs: prioritize and separate direct_io from dax_io

2016-05-08 Thread h...@infradead.org
On Thu, May 05, 2016 at 09:45:07PM +, Verma, Vishal L wrote: > I'm not sure I completely understand how this will work? Can you explain > a bit? Would we have to export rw_bytes up to layers above the pmem > driver? Where does get_user_pages come in? A DAX filesystem can directly use the

Re: [PATCH v2 5/5] dax: handle media errors in dax_do_io

2016-04-26 Thread h...@infradead.org
On Mon, Apr 25, 2016 at 05:14:36PM +, Verma, Vishal L wrote: > - Application hits EIO doing dax_IO or load/store io > > - It checks badblocks and discovers it's files have lost data > > - It write()s those sectors (possibly converted to file offsets using > fiemap) > * This triggers
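A userspace sketch of the recovery step in that flow, assuming the bad range's file offset is already known (e.g. via fiemap); rewriting through write() bypasses the DAX load/store path and lets the driver clear the poisoned blocks:

    #include <errno.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Rewrite [off, off + len) with zeroes after a read hit EIO. */
    static int clear_bad_range(int fd, off_t off, size_t len)
    {
            static char zeroes[4096];

            while (len) {
                    size_t chunk = len < sizeof(zeroes) ? len : sizeof(zeroes);
                    ssize_t n = pwrite(fd, zeroes, chunk, off);

                    if (n < 0)
                            return -errno;  /* still failing: give up */
                    off += n;
                    len -= (size_t)n;
            }
            return fsync(fd) ? -errno : 0;  /* push the clearing to media */
    }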

Re: [PATCH v2 5/5] dax: handle media errors in dax_do_io

2016-04-26 Thread h...@infradead.org
On Mon, Apr 25, 2016 at 11:32:08AM -0400, Jeff Moyer wrote: > > EINVAL is a concern here. Not due to the right error reported, but > > because it means your current scheme is fundamentally broken - we > > need to support I/O at any alignment for DAX I/O, and not fail due to > > alignment

Re: [PATCH v2 5/5] dax: handle media errors in dax_do_io

2016-04-25 Thread h...@infradead.org
On Sat, Apr 23, 2016 at 06:08:37PM +, Verma, Vishal L wrote: > direct_IO might fail with -EINVAL due to misalignment, or -ENOMEM due > to some allocation failing, and I thought we should return the original > -EIO in such cases so that the application doesn't lose the information > that the

Re: [PATCH 0/4] Drivers: scsi: storvsc: Fix miscellaneous issues

2014-12-30 Thread h...@infradead.org
On Mon, Dec 29, 2014 at 09:07:59PM +, KY Srinivasan wrote: > Should I be resending these patches. I don't need a resend, I need a review for the patches. Note that for driver patches I'm also fine with a review from a co-worker, as long as it's a real review not just a rubber stamp. Talking

Re: [PATCH 1/1] [SCSI] Fix a bug in deriving the FLUSH_TIMEOUT from the basic I/O timeout

2014-07-18 Thread h...@infradead.org
On Fri, Jul 18, 2014 at 10:12:38AM -0700, h...@infradead.org wrote: > This is what I plan to put in after it passes basic testing: And that one was on top of my previous version. One that applies against core-for-3.17 below: --- From 8a79783e5f72ec034a724e16c1f46604bd97bf68 Mon Sep 17 00

Re: [PATCH 1/1] [SCSI] Fix a bug in deriving the FLUSH_TIMEOUT from the basic I/O timeout

2014-07-18 Thread h...@infradead.org
This is what I plan to put in after it passes basic testing: --- From bb617c9465b839d70ecbbc69002a20ccf5f935bd Mon Sep 17 00:00:00 2001 From: "K. Y. Srinivasan" Date: Fri, 18 Jul 2014 19:12:58 +0200 Subject: sd: fix a bug in deriving the FLUSH_TIMEOUT from the basic I/O timeout Commit ID:

Re: [PATCH 1/1] [SCSI] Fix a bug in deriving the FLUSH_TIMEOUT from the basic I/O timeout

2014-07-18 Thread h...@infradead.org
On Fri, Jul 18, 2014 at 04:57:13PM +, James Bottomley wrote: > Actually, no you didn't. The difference is in the derivation of the > timeout. Christoph's patch is absolute in terms of SD_TIMEOUT; yours is > relative to the queue timeout setting ... I thought there was a reason > for
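Roughly, the two derivations being contrasted (a sketch with a hypothetical helper; SD_TIMEOUT is sd's fixed default, rq_timeout is the admin-tunable queue timeout):

    /* Hypothetical helper contrasting the two approaches. */
    static void sd_set_flush_timeout(struct request *rq, bool relative)
    {
            if (relative)   /* scales with the admin-tuned queue timeout */
                    rq->timeout = rq->q->rq_timeout * SD_FLUSH_TIMEOUT_MULTIPLIER;
            else            /* fixed driver default, ignoring queue tuning */
                    rq->timeout = SD_TIMEOUT;
    }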

Re: [PATCH 1/1] [SCSI] Fix a bug in deriving the FLUSH_TIMEOUT from the basic I/O timeout

2014-07-18 Thread Christoph Hellwig (h...@infradead.org)
On Fri, Jul 18, 2014 at 12:51:06AM +, Elliott, Robert (Server Storage) wrote: > SYNCHRONIZE CACHE (16) should be favored over SYNCHRONIZE > CACHE (10) unless SYNCHRONIZE CACHE (10) is not supported. I guess you mean (16) for the last occurrence? What's the benefit of using SYNCHRONIZE CACHE

Re: [PATCH 1/1] [SCSI] Fix a bug in deriving the FLUSH_TIMEOUT from the basic I/O timeout

2014-07-18 Thread Christoph Hellwig (h...@infradead.org)
On Thu, Jul 17, 2014 at 11:53:33PM +, KY Srinivasan wrote: > I still see this problem. There was talk of fixing it elsewhere. Well, what we have right now is entirely broken, given that the block layer doesn't initialize ->timeout on TYPE_FS requests. We either need to revert that initial

Re: [PATCH 4/8] Drivers: scsi: storvsc: Filter WRITE_SAME_16

2014-07-17 Thread h...@infradead.org
On Wed, Jul 16, 2014 at 03:20:00PM -0400, Martin K. Petersen wrote: > The block layer can only describe one contiguous block range in a > request. My copy offload patches introduces the bi_special field that > allows us to attach additional information to an I/O. I have > experimented with doing

Re: [PATCH 4/8] Drivers: scsi: storvsc: Filter WRITE_SAME_16

2014-07-16 Thread h...@infradead.org
On Wed, Jul 16, 2014 at 01:47:35PM -0400, Martin K. Petersen wrote: > There were several SSDs that did not want to support wearing out flash > by writing gobs of zeroes and only support the UNMAP case. Given that SSDs usually aren't hard provisioned anyway that seems like an odd decision. But

Re: [PATCH 4/8] Drivers: scsi: storvsc: Filter WRITE_SAME_16

2014-07-16 Thread h...@infradead.org
On Wed, Jul 16, 2014 at 11:44:18AM -0400, Martin K. Petersen wrote: > There are lots of devices out there that support WRITE SAME(10) or (16) > without the UNMAP bit. And there are devices that support WRITE SAME w/ > UNMAP functionality but not "regular" WRITE SAME. Oh, we actually have devices
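That capability matrix is why sd picks a provisioning mode per device; the selection is roughly the following (simplified from sd_read_block_limits()):

    /* pick a discard backend from the LBP VPD bits, roughly as sd does */
    if (sdkp->lbpu && sdkp->max_unmap_blocks)
            sd_config_discard(sdkp, SD_LBP_UNMAP);    /* UNMAP command */
    else if (sdkp->lbpws)
            sd_config_discard(sdkp, SD_LBP_WS16);     /* WRITE SAME(16) w/ UNMAP */
    else if (sdkp->lbpws10)
            sd_config_discard(sdkp, SD_LBP_WS10);     /* WRITE SAME(10) w/ UNMAP */
    else
            sd_config_discard(sdkp, SD_LBP_DISABLE);  /* no usable mechanism */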

Re: [PATCH 4/8] Drivers: scsi: storvsc: Filter WRITE_SAME_16

2014-07-16 Thread h...@infradead.org
On Sun, Jul 13, 2014 at 08:58:34AM -0400, Martin K. Petersen wrote: > > "KY" == KY Srinivasan writes: > > KY> Windows hosts do support UNMAP and set the field in the > KY> EVPD. However, since the host advertises SPC-2 compliance, Linux > KY> does not even query the VPD page. > > >> If we

Re: [PATCH 4/8] Drivers: scsi: storvsc: Filter WRITE_SAME_16

2014-07-11 Thread h...@infradead.org
On Wed, Jul 09, 2014 at 10:27:24PM +, James Bottomley wrote: > If we fix it at source, why would there be any need to filter? That's > the reason the no_write_same flag was introduced. If we can find and > fix the bug, it can go back into the stable trees as a bug fix, hence > nothing should

Re: [PATCH 4/8] Drivers: scsi: storvsc: Filter WRITE_SAME_16

2014-07-10 Thread h...@infradead.org
On Wed, Jul 09, 2014 at 10:36:26PM +, KY Srinivasan wrote: > Ok; I am concerned about older kernels that do not have no_write_same flag. > I suppose I can work directly with these Distros and give them a choice: > either implement > the no_write_same flag or filter the command in our driver.
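On kernels that do have the flag, the opt-out is a one-liner in the LLD's device configure hook; a sketch of essentially what storvsc ended up doing:

    static int storvsc_device_configure(struct scsi_device *sdevice)
    {
            /* the host advertises WRITE SAME but cannot complete it reliably */
            sdevice->no_write_same = 1;
            return 0;
    }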
