Re: [LSF/MM TOPIC NOTES] x86 ZONE_DMA love

2018-04-26 Thread Christoph Hellwig
On Thu, Apr 26, 2018 at 09:54:06PM +0000, Luis R. Rodriguez wrote:
> In practice if you don't have a floppy device on x86, you don't need ZONE_DMA,

I call BS on that, and you actually explain later why it is BS due
to some drivers using it more explicitly.  But even more importantly
we have plenty of drivers using it through dma_alloc_* and a small DMA
mask, and they are in use - we actually had a 4.16 regression due to
them.

> SCSI is *severely* affected:

Not really.  We have unchecked_isa_dma to support about 4 drivers,
and less than a handful of drivers doing stupid things, which can
be fixed easily, and just need a volunteer.
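
For reference, a minimal sketch of how such a legacy ISA driver signals the
need for 24-bit bounce buffering; aha1542 is one of the roughly four, and
the template below is abbreviated for illustration, not the real one:

/* Hedged sketch: a legacy ISA HBA host template.  Setting
 * unchecked_isa_dma makes the midlayer allocate ISA-addressable
 * (24-bit) buffers for this host. */
static struct scsi_host_template aha1542_template = {
	.module			= THIS_MODULE,
	.name			= "aha1542",
	.unchecked_isa_dma	= 1,
};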

> That's the end of the review of all current explicit callers on x86.
> 
> # dma_alloc_coherent_gfp_flags() and dma_generic_alloc_coherent()
> 
> dma_alloc_coherent_gfp_flags() and dma_generic_alloc_coherent() set
> GFP_DMA if (dma_mask <= DMA_BIT_MASK(24))

All that code is long gone and replaced with dma-direct.  Which still
uses GFP_DMA based on the dma mask, though - see above.
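
Roughly, the mask-to-zone selection in dma-direct looks like the sketch
below; abbreviated from memory, so treat it as illustrative rather than
the exact kernel code:

/* Hedged sketch of dma-direct's zone selection: a small coherent DMA
 * mask still steers the allocation into ZONE_DMA or ZONE_DMA32. */
static gfp_t dma_direct_gfp_flags(struct device *dev, gfp_t gfp)
{
	u64 mask = dev->coherent_dma_mask;

	if (mask <= DMA_BIT_MASK(24))
		gfp |= GFP_DMA;
	else if (mask <= DMA_BIT_MASK(32))
		gfp |= GFP_DMA32;
	return gfp;
}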


Re: [PATCH] usb-storage: stop using block layer bounce buffers

2018-04-26 Thread Christoph Hellwig
On Sun, Apr 15, 2018 at 11:24:11AM -0400, Alan Stern wrote:
> On Sun, 15 Apr 2018, Christoph Hellwig wrote:
> 
> > USB host controllers now must handle highmem, so we can get rid of bounce
> > buffering highmem pages in the block layer.
> 
> Sorry, I don't quite understand what you are saying.  Do you mean that
> all USB host controllers now magically _do_ handle highmem?  Or do you
> mean that if they _don't_ handle highmem, we will not support them any
> more?

USB controllers themselves never cared about highmem, drivers did.  For
PIO based controllers they'd have to kmap any address not in the kernel
direct mapping.

Nothing in drivers/usb/host or the other directories related to host
drivers calls page_address (only used in a single gadget) or sg_virt
(only used in a few upper level drivers), which makes me assume
semi-confidently that every USB host driver is highmem aware these
days.
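
For illustration, a minimal sketch of the kmap dance a PIO-only host driver
needs for a scatterlist segment that may live in highmem; pio_write_fifo()
is a placeholder, not a real HCD helper:

/* Hedged sketch: PIO transfer of one scatterlist segment, mapping the
 * page first because it may not be in the kernel direct mapping. */
static void pio_xfer_sg(struct scatterlist *sg)
{
	void *vaddr = kmap_atomic(sg_page(sg));

	pio_write_fifo(vaddr + sg->offset, sg->length);
	kunmap_atomic(vaddr);
}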

Greg, does this match your observation as the USB maintainer?


Re: [PATCH] scsi_transport_sas: don't bounce highmem pages for the smp handler

2018-04-26 Thread Christoph Hellwig
Johannes,

can you take a look at this?  You are one of the few persons who cared
about SMP passthrough in the recent past.

On Sun, Apr 15, 2018 at 04:52:37PM +0200, Christoph Hellwig wrote:
> All three instance of ->smp_handler deal with highmem backed requests
> just fine.
> 
> Signed-off-by: Christoph Hellwig 
> ---
>  drivers/scsi/scsi_transport_sas.c | 4 
>  1 file changed, 4 deletions(-)
> 
> diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
> index 08acbabfae07..a22baf206071 100644
> --- a/drivers/scsi/scsi_transport_sas.c
> +++ b/drivers/scsi/scsi_transport_sas.c
> @@ -223,10 +223,6 @@ static int sas_bsg_initialize(struct Scsi_Host *shost, struct sas_rphy *rphy)
>   to_sas_host_attrs(shost)->q = q;
>   }
>  
> - /*
> -  * by default assume old behaviour and bounce for any highmem page
> -  */
> - blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
>   blk_queue_flag_set(QUEUE_FLAG_BIDI, q);
>   return 0;
>  }
> -- 
> 2.17.0
---end quoted text---


Re: [PATCH 07/21] qedf: Add dcbx_not_wait module parameter so we won't wait for DCBX convergence to start discovery.

2018-04-26 Thread kbuild test robot
Hi Chad,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on mkp-scsi/for-next]
[also build test WARNING on v4.17-rc2 next-20180426]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:
https://github.com/0day-ci/linux/commits/Chad-Dupuis/qedf-Update-driver-to-8-33-16-20/20180427-062801
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git for-next
reproduce:
# apt-get install sparse
make ARCH=x86_64 allmodconfig
make C=1 CF=-D__CHECK_ENDIAN__


sparse warnings: (new ones prefixed by >>)

>> drivers/scsi/qedf/qedf_main.c:92:6: sparse: symbol 'qedf_dcbx_no_wait' was not declared. Should it be static?
   drivers/scsi/qedf/qedf_main.c:1878:18: sparse: incorrect type in assignment (different base types) @@ expected unsigned short [unsigned] [usertype] prod_idx @@ got restricted __le16 @@
   drivers/scsi/qedf/qedf_main.c:1878:18:    expected unsigned short [unsigned] [usertype] prod_idx
   drivers/scsi/qedf/qedf_main.c:1878:18:    got restricted __le16
   drivers/scsi/qedf/qedf_main.c:1908:18: sparse: incorrect type in assignment (different base types) @@ expected unsigned short [unsigned] [usertype] prod_idx @@ got restricted __le16 @@
   drivers/scsi/qedf/qedf_main.c:1908:18:    expected unsigned short [unsigned] [usertype] prod_idx
   drivers/scsi/qedf/qedf_main.c:1908:18:    got restricted __le16
   drivers/scsi/qedf/qedf_main.c:1926:33: sparse: restricted __le32 degrades to integer
   drivers/scsi/qedf/qedf_main.c:1944:26: sparse: restricted __le32 degrades to integer
   include/linux/qed/qed_if.h:988:33: sparse: incorrect type in assignment (different base types) @@ expected restricted __le32 [usertype] sb_id_and_flags @@ got unsigned int [usertype] sb_id_and_flags @@
   include/linux/qed/qed_if.h:988:33:    expected restricted __le32 [usertype] sb_id_and_flags
   include/linux/qed/qed_if.h:988:33:    got unsigned int
   include/linux/qed/qed_if.h:995:9: sparse: cast from restricted __le32
   include/linux/qed/qed_if.h:988:33: sparse: incorrect type in assignment (different base types) @@ expected restricted __le32 [usertype] sb_id_and_flags @@ got unsigned int [usertype] sb_id_and_flags @@
   include/linux/qed/qed_if.h:988:33:    expected restricted __le32 [usertype] sb_id_and_flags
   include/linux/qed/qed_if.h:988:33:    got unsigned int
   include/linux/qed/qed_if.h:995:9: sparse: cast from restricted __le32
   drivers/scsi/qedf/qedf_main.c:2160:20: sparse: incorrect type in assignment (different base types) @@ expected unsigned int [unsigned] [usertype] fr_crc @@ got restricted __le32 [addressable] [usertype] fcoe_crc32 @@
   drivers/scsi/qedf/qedf_main.c:2160:20:    expected unsigned int [unsigned] [usertype] fr_crc
   drivers/scsi/qedf/qedf_main.c:2160:20:    got restricted __le32 [addressable] [usertype] fcoe_crc32
   drivers/scsi/qedf/qedf_main.c:2346:34: sparse: restricted __le32 degrades to integer
   drivers/scsi/qedf/qedf_main.c:2456:25: sparse: restricted __le32 degrades to integer
   drivers/scsi/qedf/qedf_main.c:2459:18: sparse: restricted __le32 degrades to integer
   drivers/scsi/qedf/qedf_main.c:2808:28: sparse: expression using sizeof(void)
   drivers/scsi/qedf/qedf_main.c:2808:28: sparse: expression using sizeof(void)
   include/scsi/fc/fc_fcoe.h:101:36: sparse: cast truncates bits from constant value (efc becomes fc)
   include/scsi/fc/fc_fcoe.h:102:23: sparse: cast truncates bits from constant value (efc00 becomes 0)
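
These __le16/__le32 warnings follow the usual missing-endianness-conversion
pattern. A hedged sketch of the typical fix, with the helper and its
argument assumed from the warning text rather than copied from the qedf
driver:

/* Hedged sketch: read a little-endian producer index from hardware
 * with an explicit conversion instead of a plain assignment, which is
 * what sparse is complaining about above. */
static u16 qedf_read_prod_idx(const __le16 *hw_prod)
{
	return le16_to_cpu(*hw_prod);
}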

Please review and possibly fold the followup patch.

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation


[RFC PATCH] qedf: qedf_dcbx_no_wait can be static

2018-04-26 Thread kbuild test robot

Fixes: d9867ecbae88 ("qedf: Add dcbx_not_wait module parameter so we won't wait for DCBX convergence to start discovery.")
Signed-off-by: Fengguang Wu 
---
 qedf_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
index 8df151e..b96c928 100644
--- a/drivers/scsi/qedf/qedf_main.c
+++ b/drivers/scsi/qedf/qedf_main.c
@@ -89,7 +89,7 @@ module_param_named(retry_delay, qedf_retry_delay, bool, S_IRUGO | S_IWUSR);
 MODULE_PARM_DESC(retry_delay, " Enable/disable handling of FCP_RSP IU retry "
"delay handling (default off).");
 
-bool qedf_dcbx_no_wait;
+static bool qedf_dcbx_no_wait;
 module_param_named(dcbx_no_wait, qedf_dcbx_no_wait, bool, S_IRUGO | S_IWUSR);
 MODULE_PARM_DESC(dcbx_no_wait, " Do not wait for DCBX convergence to start "
"sending FIP VLAN requests on link up (Default: off).");


Re: [Lsf-pc] [LSF/MM TOPIC NOTES] x86 ZONE_DMA love

2018-04-26 Thread Rik van Riel
On Thu, 2018-04-26 at 21:54 +0000, Luis R. Rodriguez wrote:
> Below are my notes on the ZONE_DMA discussion at LSF/MM 2018. There was
> some earlier discussion, prior to my arrival to the session, about moving
> ZONE_DMA around; if someone has notes on that please share too :)

We took notes during LSF/MM 2018. Not a whole lot
on your topic, but most of the MM and plenary
topics have some notes.

https://etherpad.wikimedia.org/p/LSFMM2018

-- 
All Rights Reversed.



Re: [PATCH] bsg referencing bus driver module

2018-04-26 Thread Anatoliy Glagolev
Any thoughts on this? Can we really drop a reference from a child device
(bsg_class_device) to a parent device (Scsi_Host) while the child device
is still around at fc_bsg_remove time?

If not, please consider a fix with module references. I realized that
the previous version of the fix had a problem since bsg_open may run
more often than bsg_release. Sending a newer version... The new fix
piggybacks on the bsg layer logic allocating/freeing bsg_device structs.
When all those structs are gone there are no references to Scsi_Host from
the user-mode side. The only remaining references are from a SCSI bus
driver (like qla2xxx) itself; it is safe to drop the module reference
at that time.


From c744d4fd93578545ad12faa35a3354364793b124 Mon Sep 17 00:00:00 2001
From: Anatoliy Glagolev 
Date: Wed, 25 Apr 2018 19:16:10 -0600
Subject: [PATCH] bsg referencing parent module
Signed-off-by: Anatoliy Glagolev 

Fix a bug where the bsg layer holds the last reference to a device
whose module has already been unloaded. Upon dropping that reference,
the device's release function may touch memory of the unloaded module.
---
 block/bsg-lib.c  | 24 ++--
 block/bsg.c  | 22 +-
 drivers/scsi/scsi_transport_fc.c |  8 ++--
 include/linux/bsg-lib.h  |  4 
 include/linux/bsg.h  |  5 +
 5 files changed, 58 insertions(+), 5 deletions(-)

diff --git a/block/bsg-lib.c b/block/bsg-lib.c
index fc2e5ff..bb11786 100644
--- a/block/bsg-lib.c
+++ b/block/bsg-lib.c
@@ -309,6 +309,25 @@ struct request_queue *bsg_setup_queue(struct device *dev, const char *name,
bsg_job_fn *job_fn, int dd_job_size,
void (*release)(struct device *))
 {
+   return bsg_setup_queue_ex(dev, name, job_fn, dd_job_size, release,
+   NULL);
+}
+EXPORT_SYMBOL_GPL(bsg_setup_queue);
+
+/**
+ * bsg_setup_queue_ex - Create and add the bsg hooks so we can receive requests
+ * @dev: device to attach bsg device to
+ * @name: device to give bsg device
+ * @job_fn: bsg job handler
+ * @dd_job_size: size of LLD data needed for each job
+ * @release: @dev release function
+ * @dev_module: @dev's module
+ */
+struct request_queue *bsg_setup_queue_ex(struct device *dev, const char *name,
+   bsg_job_fn *job_fn, int dd_job_size,
+   void (*release)(struct device *),
+   struct module *dev_module)
+{
struct request_queue *q;
int ret;
 
@@ -331,7 +350,8 @@ struct request_queue *bsg_setup_queue(struct device *dev, const char *name,
blk_queue_softirq_done(q, bsg_softirq_done);
blk_queue_rq_timeout(q, BLK_DEFAULT_SG_TIMEOUT);
 
-   ret = bsg_register_queue(q, dev, name, &bsg_transport_ops, release);
+   ret = bsg_register_queue_ex(q, dev, name, &bsg_transport_ops, release,
+   dev_module);
if (ret) {
printk(KERN_ERR "%s: bsg interface failed to "
   "initialize - register queue\n", dev->kobj.name);
@@ -343,4 +363,4 @@ struct request_queue *bsg_setup_queue(struct device *dev, const char *name,
blk_cleanup_queue(q);
return ERR_PTR(ret);
 }
-EXPORT_SYMBOL_GPL(bsg_setup_queue);
+EXPORT_SYMBOL_GPL(bsg_setup_queue_ex);
diff --git a/block/bsg.c b/block/bsg.c
index defa06c..950cd31 100644
--- a/block/bsg.c
+++ b/block/bsg.c
@@ -666,6 +666,7 @@ static int bsg_put_device(struct bsg_device *bd)
 {
int ret = 0, do_free;
struct request_queue *q = bd->queue;
+   struct module *parent_module = q->bsg_dev.parent_module;
 
mutex_lock(&bsg_mutex);
 
@@ -695,8 +696,11 @@ static int bsg_put_device(struct bsg_device *bd)
kfree(bd);
 out:
kref_put(&q->bsg_dev.ref, bsg_kref_release_function);
-   if (do_free)
+   if (do_free) {
blk_put_queue(q);
+   if (parent_module)
+   module_put(parent_module);
+   }
return ret;
 }
 
@@ -706,12 +710,19 @@ static struct bsg_device *bsg_add_device(struct inode *inode,
 {
struct bsg_device *bd;
unsigned char buf[32];
+   struct module *parent_module = rq->bsg_dev.parent_module;
 
if (!blk_get_queue(rq))
return ERR_PTR(-ENXIO);
 
+   if (parent_module) {
+   if (!try_module_get(parent_module))
+   return ERR_PTR(-ENODEV);
+   }
bd = bsg_alloc_device();
if (!bd) {
+   if (parent_module)
+   module_put(parent_module);
blk_put_queue(rq);
return ERR_PTR(-ENOMEM);
}
@@ -922,6 +933,14 @@ int bsg_register_queue(struct request_queue *q, struct device *parent,
const char *name, const struct bsg_ops *ops,
void (*release)(struct device *))
 {
+   return bsg_register_queue_ex(q, parent, name, ops, release, NULL);
+}
+
+int 

Proposal

2018-04-26 Thread MS Zeliha Omer Faruk



Hello

   Greetings to you today. I asked before but I didn't get a response. Please,
I know this might come to you as a surprise because you do not know me
personally. I have a business proposal for you; please reply for more
info.



Best Regards,

Esentepe Mahallesi Büyükdere
Caddesi Kristal Kule Binasi
No:215
 Sisli - Istanbul, Turkey



[LSF/MM TOPIC NOTES] x86 ZONE_DMA love

2018-04-26 Thread Luis R. Rodriguez
Below are my notes on the ZONE_DMA discussion at LSF/MM 2018. There was some
earlier discussion, prior to my arrival to the session, about moving
ZONE_DMA around; if someone has notes on that please share too :)

PS. I'm not subscribed to linux-mm

  Luis

Determining you don't need to support ZONE_DMA on x86 at run time
=================================================================

In practice if you don't have a floppy device on x86, you don't need ZONE_DMA,
and in that case you don't need to support ZONE_DMA at all. Currently, however,
disabling it is only possible at compile time, and we won't know for sure
until boot time whether such a device is present. If we didn't need ZONE_DMA
we would not have to deal with slab allocator caches for it, or the special
casing for it in a slew of places. In particular even kmalloc() has a branch
which is always run if CONFIG_ZONE_DMA is enabled.

ZONE_DMA is needed for old devices that require low addresses, since it
allows such allocations to be done more reliably. And there are more devices
that require this than just the floppy driver.

Christoph Lameter added CONFIG_ZONE_DMA to disable ZONE_DMA at build time,
but most distributions enable it. If we could disable ZONE_DMA at run time,
once we know no device present requires it, we would get the same benefit
as compiling without CONFIG_ZONE_DMA.

It used to be that disabling CONFIG_ZONE_DMA could help with performance,
but we don't have modern benchmarks of the possible gains from removing it.
Are the gains no longer expected to be significant? Very likely there are
no performance gains. The assumption then is that the main advantage of
being able to disable ZONE_DMA on x86 these days would be pure aesthetics,
and having x86 work more like other architectures with allocations. Use of
ZONE_DMA in a driver is also a good sign the driver is old or may be
deprecated. Perhaps some of these on x86 should be moved to staging.

Note that some architectures rely on ZONE_DMA as well; the above notes
apply only to x86.

We can use certain kernel mechanisms to disable certain x86 features at run
time. Below are a few options (a static-key sketch follows the list):

  * x86 binary patching
  * ACPI_SIG_FADT
  * static keys
  * compiler multiverse (at least the gcc proof of concept is now complete)
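
As an illustration of the static-key option, a minimal sketch, assuming a
hypothetical zone_dma_needed key that early boot flips off once no device
requiring 24-bit DMA has been found:

/* Hedged sketch, not kernel code: gate ZONE_DMA-only paths behind a
 * static key so the common case compiles down to a patched-out branch.
 * The key and the boot-time probe calling zone_dma_disable() are
 * hypothetical. */
DEFINE_STATIC_KEY_TRUE(zone_dma_needed);

static inline bool zone_dma_enabled(void)
{
	return static_branch_likely(&zone_dma_needed);
}

static void zone_dma_disable(void)
{
	static_branch_disable(&zone_dma_needed);
}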

Detecting legacy x86 devices with ACPI ACPI_SIG_FADT
----------------------------------------------------

We could expand on ACPI_SIG_FADT with more legacy devices. This mechanism was
used with paravirtualization to help determine whether certain legacy x86
devices are present. For instance (see the sketch after this list):

  * ACPI_FADT_NO_VGA
  * ACPI_FADT_NO_CMOS_RTC
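
A sketch of how x86 early boot consults these FADT boot flags; the flag
names are real, while the surrounding function and the x86_platform.legacy
field names are abbreviated from memory, so treat this as illustrative:

/* Hedged sketch: mark legacy devices absent based on FADT boot flags,
 * roughly as arch/x86/kernel/acpi/boot.c does. */
static void __init fadt_probe_legacy_devices(void)
{
	if (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC)
		x86_platform.legacy.rtc = 0;	/* no CMOS RTC present */

	if (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_VGA)
		x86_platform.legacy.no_vga = 1;	/* no VGA device present */
}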

CONFIG_ZONE_DMA
---------------

Christoph Lameter added CONFIG_ZONE_DMA through commit 4b51d66989218
("[PATCH] optional ZONE_DMA: optional ZONE_DMA in the VM") merged on
v2.6.21.

On x86 ZONE_DMA is defined as follows:

config ZONE_DMA
	bool "DMA memory allocation support" if EXPERT
	default y
	help
	  DMA memory allocation support allows devices with less than 32-bit
	  addressing to allocate within the first 16MB of address space.
	  Disable if no such devices will be used.

	  If unsure, say Y.

Most distributions enable CONFIG_ZONE_DMA.

Immediate impact of CONFIG_ZONE_DMA
-----------------------------------

CONFIG_ZONE_DMA implicates kmalloc() as follows:

struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
{
	...
#ifdef CONFIG_ZONE_DMA
	if (unlikely((flags & GFP_DMA)))
		return kmalloc_dma_caches[index];
#endif
	...
}

ZONE_DMA users
==============

Turns out there are many more users of ZONE_DMA than expected, even on x86.

Explicit requirements for ZONE_DMA with gfp flags
-------------------------------------------------

All drivers which explicitly use any of these flags implicate use of
ZONE_DMA for allocations (a usage sketch follows the list):

  * GFP_DMA
  * __GFP_DMA
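
A minimal sketch of the explicit pattern this audit looks for; the wrapper
function is a hypothetical example, not taken from any driver:

/* Hedged sketch: an explicit ZONE_DMA allocation.  GFP_DMA steers the
 * allocation into ZONE_DMA (the first 16MB on x86). */
static void *legacy_dev_alloc_cmd_buf(void)
{
	return kmalloc(64, GFP_KERNEL | GFP_DMA);
}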

Implicit ZONE_DMA users
-----------------------

There are a series of implicit users of ZONE_DMA which use helpers. These are,
with details documented further below:

  * blk_queue_bounce()
  * blk_queue_bounce_limit()
  * dma_alloc_coherent_gfp_flags()
  * dma_generic_alloc_coherent()
  * intel_alloc_coherent()
  * _regmap_raw_write()
  * mempool_alloc_pages_isa()

x86 implicit and explicit ZONE_DMA users
----------------------------------------

We list below all x86 implicit and explicit ZONE_DMA users.

# Explicit x86 users of GFP_DMA or __GFP_DMA

  * drivers/iio/common/ssp_sensors - wonder if enabling this on x86 was a
    mistake. Note that this needs SPI, and SPI needs HAS_IOMEM. I only see
    HAS_IOMEM on s390? But I do think the Intel Minnowboard has SPI, but
    doubt it has the ssp sensor stuff.

  * drivers/input/rmi4/rmi_spi.c - same SPI question
  * drivers/media/common/siano/ - make allyesconfig yields it enabled, but
    not sure if this should ever be on x86
  * 

Re: [PATCH v3 0/6] scsi: handle special return codes for ABORTED COMMAND

2018-04-26 Thread Martin Wilck
On Fri, 2018-04-20 at 19:15 -0400, Martin K. Petersen wrote:
> 
> Much better, thanks for reworking this. Applied to 4.18/scsi-queue.

Thank you!

By the way, I've been wondering whether declaring blist_flags_t
__bitwise was a wise decision. blist_flags_t is kernel-internal, thus
endianness doesn't matter. OTOH, using __bitwise requires explicit
casts in many places, which may suppress warnings about integer size
mismatches, and which made me overlook some places where I had to change
"unsigned long" to "unsigned long long" in the first place
(in the submitted and applied version I think I caught them all).
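
For readers following along, the pattern in question looks roughly like the
sketch below; abbreviated from include/scsi/scsi_devinfo.h from memory, so
double-check the real header:

/* Hedged sketch of the __bitwise pattern under discussion.  sparse
 * flags any mixing of blist_flags_t with plain integers, at the cost
 * of __force casts wherever a raw value is needed. */
typedef __u64 __bitwise blist_flags_t;

#define BLIST_NOLUN	((__force blist_flags_t)(1ULL << 0))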

Regards,
Martin

-- 
Dr. Martin Wilck , Tel. +49 (0)911 74053 2107
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)



Re: [PATCH 2/5] ide: kill ide_toggle_bounce

2018-04-26 Thread Jens Axboe
On 4/26/18 1:20 AM, Christoph Hellwig wrote:
> On Tue, Apr 24, 2018 at 08:02:56PM -0600, Jens Axboe wrote:
>> On 4/24/18 12:16 PM, Christoph Hellwig wrote:
>>> ide_toggle_bounce did select various strange block bounce limits, including
>>> not bouncing at all as soon as an iommu is present in the system.  Given
>>> that the dma_map routines now handle any required bounce buffering except
>>> for ISA DMA, and the ide code already must handle either ISA DMA or highmem
>>> at least for iommu equipped systems we can get rid of the block layer
>>> bounce limit setting entirely.
>>
>> Pretty sure I was the one to add this code, when highmem page IO was
>> enabled about two decades ago...
>>
>> Outside of DMA, the issue was that the PIO code could not handle
>> highmem. That's not the case anymore, so this should be fine.
> 
> Yes, that is the rationale.  Any chance to you could look over the
> other patches as well?  Except for the networking one for which I'd
> really like to see a review from Dave all the users of the interface
> are block related.

You can add my reviewed-by to 1-3, and 5. Looks good to me.

-- 
Jens Axboe



Re: MegaCli fails to communicate with Raid-Controller

2018-04-26 Thread Volker Schwicking
On 23. Apr 2018, at 11:03, Volker Schwicking  
wrote:
> 
> I will add the printk to dma_alloc_coherent() as well to see which request
> actually fails. But I have to be a bit patient since it's a production system
> and the customers aren't too happy about reboots.

Alright, here are some results.

Looking at my debug lines I can tell that requests for either 2048 or 4
bytes regularly fail. Other values don't ever show up as failed, but there
are several, as you can see in the attached log.

The failed requests:
###
$ grep 'GD IOV-len FAILED' /var/log/kern.log | awk '{ print $9, $10 }' | sort | uniq -c
 59 FAILED: 2048
 64 FAILED: 4
###
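
For reference, the instrumentation behind these "GD IOV-len" lines is
essentially a printk wrapped around the coherent allocation in the
megaraid_sas IOCTL path. A hedged sketch, with the helper and variable
names assumed rather than copied from the driver:

/* Hedged sketch of the debug instrumentation: log every per-IOV
 * dma_alloc_coherent() attempt with its length and outcome. */
static void *gd_alloc_and_log(struct pci_dev *pdev, u32 len,
			      dma_addr_t *handle)
{
	void *buf = dma_alloc_coherent(&pdev->dev, len, handle, GFP_KERNEL);

	printk(KERN_INFO "GD IOV-len %s: %u\n",
	       buf ? "OK" : "FAILED", len);
	return buf;
}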

I attached full debugging output from several executions of “megacli -ldpdinfo
-a0” in 5-second intervals, successful and failed, and the contents of
/proc/buddyinfo again.

Can you make any sense of that? Where should I go from here?

Apr 26 16:33:48 xh643 kernel: [32258.217933] GD IOV-len FAILED: 2048
Apr 26 16:33:48 xh643 kernel: [32258.218047] megaraid_sas 0000:03:00.0: Failed to alloc kernel SGL buffer for IOCTL
next success
Apr 26 16:33:53 xh643 kernel: [32263.226368] GD IOV-len OK: 2048
Apr 26 16:33:53 xh643 kernel: [32263.226654] GD IOV-len OK: 32
Apr 26 16:33:53 xh643 kernel: [32263.226804] GD IOV-len OK: 320
Apr 26 16:33:53 xh643 kernel: [32263.226952] GD IOV-len OK: 616
Apr 26 16:33:53 xh643 kernel: [32263.227146] GD IOV-len OK: 1664
Apr 26 16:33:53 xh643 kernel: [32263.227296] GD IOV-len OK: 32
Apr 26 16:33:53 xh643 kernel: [32263.227424] GD IOV-len OK: 8
Apr 26 16:33:53 xh643 kernel: [32263.227552] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.227713] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.227845] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.228004] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.228123] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.228241] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.228367] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.228496] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.228615] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.228741] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.228870] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.228998] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.229139] GD IOV-len OK: 2048
Apr 26 16:33:53 xh643 kernel: [32263.229311] GD IOV-len OK: 24
Apr 26 16:33:53 xh643 kernel: [32263.229468] GD IOV-len OK: 32
Apr 26 16:33:53 xh643 kernel: [32263.229606] GD IOV-len OK: 320
Apr 26 16:33:53 xh643 kernel: [32263.229822] GD IOV-len OK: 616
Apr 26 16:33:53 xh643 kernel: [32263.229972] GD IOV-len OK: 1664
Apr 26 16:33:53 xh643 kernel: [32263.230113] GD IOV-len OK: 32
Apr 26 16:33:53 xh643 kernel: [32263.230253] GD IOV-len OK: 8
Apr 26 16:33:53 xh643 kernel: [32263.230395] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.230552] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.230699] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.230818] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.230935] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.231064] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.231195] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.231313] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.231429] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.231564] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.231690] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.231810] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.231935] GD IOV-len OK: 2048
Apr 26 16:33:53 xh643 kernel: [32263.232250] GD IOV-len OK: 384
Apr 26 16:33:53 xh643 kernel: [32263.232414] GD IOV-len OK: 24
Apr 26 16:33:53 xh643 kernel: [32263.232540] GD IOV-len OK: 12
Apr 26 16:33:53 xh643 kernel: [32263.232896] GD IOV-len OK: 512
Apr 26 16:33:53 xh643 kernel: [32263.233038] GD IOV-len OK: 168
Apr 26 16:33:53 xh643 kernel: [32263.233211] GD IOV-len OK: 256
Apr 26 16:33:53 xh643 kernel: [32263.233677] GD IOV-len OK: 24
Apr 26 16:33:53 xh643 kernel: [32263.233991] GD IOV-len OK: 256
Apr 26 16:33:53 xh643 kernel: [32263.234437] GD IOV-len OK: 24
Apr 26 16:33:53 xh643 kernel: [32263.234677] GD IOV-len OK: 384
Apr 26 16:33:53 xh643 kernel: [32263.234815] GD IOV-len OK: 24
Apr 26 16:33:53 xh643 kernel: [32263.234956] GD IOV-len OK: 12
Apr 26 16:33:53 xh643 kernel: [32263.235323] GD IOV-len OK: 256
Apr 26 16:33:53 xh643 kernel: [32263.235780] GD IOV-len OK: 24
Apr 26 16:33:53 xh643 kernel: [32263.236042] GD IOV-len OK: 256
Apr 26 16:33:53 xh643 kernel: [32263.236506] GD IOV-len OK: 24
Apr 26 16:33:53 xh643 kernel: [32263.236780] GD IOV-len OK: 256
Apr 26 16:33:53 xh643 kernel: [32263.237201] GD IOV-len OK: 24
Apr 26 16:33:53 xh643 kernel: [32263.237458] GD IOV-len OK: 256
Apr 26 16:33:53 xh643 kernel: [32263.237920] GD IOV-len OK: 24
Apr 26 16:33:53 xh643 kernel: [32263.238204] GD IOV-len 

[Bug 199435] HPSA + P420i resetting logical Direct-Access never complete

2018-04-26 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=199435

--- Comment #20 from Anthony Hausman (anthonyhaussm...@gmail.com) ---
So here are all my test.
With the agent enabled, using the HP disk check commands (hpacucli/ssacli and
hpssacli) and launching a sg_reset, the reset has no problem on the
problematic disk:

Apr 26 14:31:20 kernel: hpsa 0000:08:00.0: scsi 0:1:0:0: resetting logical Direct-Access HP   LOGICAL VOLUME   RAID-0 SSDSmartPathCap- En- Exp=1
Apr 26 14:31:21 kernel: hpsa 0000:08:00.0: device is ready.
Apr 26 14:31:21 kernel: hpsa 0000:08:00.0: scsi 0:1:0:0: reset logical completed successfully Direct-Access HP   LOGICAL VOLUME   RAID-0 SSDSmartPathCap- En- Exp=1

The reset only took 1 second.

The "bug" seems to appear only when the disk returns errors concerning
Unrecovered read error (when using badblocks read-only test by example).

I try to reproduce it.

-- 
You are receiving this mail because:
You are the assignee for the bug.


Re: [PATCH 2/5] ide: kill ide_toggle_bounce

2018-04-26 Thread Christoph Hellwig
On Tue, Apr 24, 2018 at 08:02:56PM -0600, Jens Axboe wrote:
> On 4/24/18 12:16 PM, Christoph Hellwig wrote:
> > ide_toggle_bounce did select various strange block bounce limits, including
> > not bouncing at all as soon as an iommu is present in the system.  Given
> > that the dma_map routines now handle any required bounce buffering except
> > for ISA DMA, and the ide code already must handle either ISA DMA or highmem
> > at least for iommu equipped systems we can get rid of the block layer
> > bounce limit setting entirely.
> 
> Pretty sure I was the one to add this code, when highmem page IO was
> enabled about two decades ago...
> 
> Outside of DMA, the issue was that the PIO code could not handle
> highmem. That's not the case anymore, so this should be fine.

Yes, that is the rationale.  Any chance to you could look over the
other patches as well?  Except for the networking one for which I'd
really like to see a review from Dave all the users of the interface
are block related.
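
For readers without the tree handy, the function being removed selected
bounce limits roughly as in the sketch below; this is reconstructed from
memory, so treat it as an approximation rather than the verbatim source:

/* Hedged reconstruction of ide_toggle_bounce(): bounce any highmem
 * page by default, but stop bouncing entirely as soon as an iommu is
 * present (PCI_DMA_BUS_IS_PHYS == 0), or honor the device's DMA mask. */
static void ide_toggle_bounce(ide_drive_t *drive, int on)
{
	u64 addr = BLK_BOUNCE_HIGH;	/* old default: bounce highmem */

	if (on && drive->media == ide_disk) {
		if (!PCI_DMA_BUS_IS_PHYS)
			addr = BLK_BOUNCE_ANY;
		else if (drive->hwif->dev && drive->hwif->dev->dma_mask)
			addr = *drive->hwif->dev->dma_mask;
	}

	if (drive->queue)
		blk_queue_bounce_limit(drive->queue, addr);
}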