Re: [GIT PULL] SCSI fixes for 4.7-rc2

2016-06-13 Thread Linus Torvalds
On Mon, Jun 13, 2016 at 12:04 AM, Hannes Reinecke  wrote:
>
> And we have been running the very patch in SLES for over a year now,
> without a single issue being reported.

Oh, ok. So it's not "all qemu kvm instances are broken", it was a very
unusual issue, and the patch has actually gotten wider testing.

That makes me much happier about it.

Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH] nvmet/rdma: fix ptr_ret.cocci warnings

2016-06-13 Thread kbuild test robot
drivers/nvme/target/rdma.c:1441:1-3: WARNING: PTR_ERR_OR_ZERO can be used


 Use PTR_ERR_OR_ZERO rather than if(IS_ERR(...)) + PTR_ERR
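
For reference, PTR_ERR_OR_ZERO() is defined in include/linux/err.h and is
roughly the open-coded pattern it replaces:

static inline int __must_check PTR_ERR_OR_ZERO(__force const void *ptr)
{
	if (IS_ERR(ptr))
		return PTR_ERR(ptr);
	else
		return 0;
}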

Generated by: scripts/coccinelle/api/ptr_ret.cocci

Signed-off-by: Fengguang Wu 
---

 rdma.c |5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -1438,10 +1438,7 @@ static int nvmet_rdma_add_port(struct nv
mutex_unlock(&nvmet_rdma_ports_mutex);
 
rdma_port = nvmet_rdma_listen_cmid(pb);
-   if (IS_ERR(rdma_port))
-   return PTR_ERR(rdma_port);
-
-   return 0;
+   return PTR_ERR_OR_ZERO(rdma_port);
 }
 
 static void nvmet_rdma_remove_port(struct nvmet_port_binding *pb)
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[target:nvmet-configfs-ng 22/27] drivers/nvme/target/rdma.c:1441:1-3: WARNING: PTR_ERR_OR_ZERO can be used

2016-06-13 Thread kbuild test robot
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git 
nvmet-configfs-ng
head:   9589c853cae62cadabb17a3c48f9fdb3da5fa83c
commit: 49503f06b9b36b6c691435b25a31f6ab6c808f13 [22/27] nvmet/rdma: Convert to 
struct nvmet_port_binding


coccinelle warnings: (new ones prefixed by >>)

>> drivers/nvme/target/rdma.c:1441:1-3: WARNING: PTR_ERR_OR_ZERO can be used

Please review and possibly fold the followup patch.

---
0-DAY kernel test infrastructure            Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[target:nvmet-configfs-ng 19/27] drivers/nvme/target/configfs-ng.c:321:15: error: passing argument 1 of 'atomic64_inc' from incompatible pointer type

2016-06-13 Thread kbuild test robot
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git 
nvmet-configfs-ng
head:   9589c853cae62cadabb17a3c48f9fdb3da5fa83c
commit: 002901c382c5432032722aba752f578ebe5bfeea [19/27] nvmet: Add support for 
configfs-ng multi-tenant logic
config: i386-allmodconfig (attached as .config)
compiler: gcc-6 (Debian 6.1.1-1) 6.1.1 20160430
reproduce:
git checkout 002901c382c5432032722aba752f578ebe5bfeea
# save the attached .config to linux build tree
make ARCH=i386 

All error/warnings (new ones prefixed by >>):

   drivers/nvme/target/configfs-ng.c: In function 'nvmet_port_disable':
>> drivers/nvme/target/configfs-ng.c:321:15: error: passing argument 1 of 
>> 'atomic64_inc' from incompatible pointer type 
>> [-Werror=incompatible-pointer-types]
 atomic64_inc(&nvmet_genctr);
  ^
   In file included from arch/x86/include/asm/atomic.h:237:0,
from arch/x86/include/asm/msr.h:66,
from arch/x86/include/asm/processor.h:20,
from arch/x86/include/asm/cpufeature.h:4,
from arch/x86/include/asm/thread_info.h:52,
from include/linux/thread_info.h:54,
from arch/x86/include/asm/preempt.h:6,
from include/linux/preempt.h:59,
from include/linux/spinlock.h:50,
from include/linux/seqlock.h:35,
from include/linux/time.h:5,
from include/linux/stat.h:18,
from include/linux/module.h:10,
from drivers/nvme/target/configfs-ng.c:5:
   arch/x86/include/asm/atomic64_32.h:218:20: note: expected 'atomic64_t * {aka 
struct <anonymous> *}' but argument is of type 'atomic_long_t * {aka struct 
<anonymous> *}'
static inline void atomic64_inc(atomic64_t *v)
   ^~~~
   drivers/nvme/target/configfs-ng.c: In function 'nvmet_port_enable_store':
   drivers/nvme/target/configfs-ng.c:362:16: error: passing argument 1 of 
'atomic64_inc' from incompatible pointer type 
[-Werror=incompatible-pointer-types]
  atomic64_inc(&nvmet_genctr);
   ^
   In file included from arch/x86/include/asm/atomic.h:237:0,
from arch/x86/include/asm/msr.h:66,
from arch/x86/include/asm/processor.h:20,
from arch/x86/include/asm/cpufeature.h:4,
from arch/x86/include/asm/thread_info.h:52,
from include/linux/thread_info.h:54,
from arch/x86/include/asm/preempt.h:6,
from include/linux/preempt.h:59,
from include/linux/spinlock.h:50,
from include/linux/seqlock.h:35,
from include/linux/time.h:5,
from include/linux/stat.h:18,
from include/linux/module.h:10,
from drivers/nvme/target/configfs-ng.c:5:
   arch/x86/include/asm/atomic64_32.h:218:20: note: expected 'atomic64_t * {aka 
struct <anonymous> *}' but argument is of type 'atomic_long_t * {aka struct 
<anonymous> *}'
static inline void atomic64_inc(atomic64_t *v)
   ^~~~
   cc1: some warnings being treated as errors
--
   drivers/nvme/target/discovery.c: In function 'nvmet_referral_enable':
>> drivers/nvme/target/discovery.c:29:16: error: passing argument 1 of 
>> 'atomic64_inc' from incompatible pointer type 
>> [-Werror=incompatible-pointer-types]
  atomic64_inc(&nvmet_genctr);
   ^
   In file included from arch/x86/include/asm/atomic.h:237:0,
from arch/x86/include/asm/msr.h:66,
from arch/x86/include/asm/processor.h:20,
from arch/x86/include/asm/cpufeature.h:4,
from arch/x86/include/asm/thread_info.h:52,
from include/linux/thread_info.h:54,
from arch/x86/include/asm/preempt.h:6,
from include/linux/preempt.h:59,
from include/linux/spinlock.h:50,
from include/linux/mmzone.h:7,
from include/linux/gfp.h:5,
from include/linux/slab.h:14,
from drivers/nvme/target/discovery.c:15:
   arch/x86/include/asm/atomic64_32.h:218:20: note: expected 'atomic64_t * {aka 
struct <anonymous> *}' but argument is of type 'atomic_long_t * {aka struct 
<anonymous> *}'
static inline void atomic64_inc(atomic64_t *v)
   ^~~~
   drivers/nvme/target/discovery.c: In function 'nvmet_referral_disable':
   drivers/nvme/target/discovery.c:40:16: error: passing argument 1 of 
'atomic64_inc' from incompatible pointer type 
[-Werror=incompatible-pointer-types]
  atomic64_inc(&nvmet_genctr);
   ^
   In file included from arch/x86/include/asm/atomic.h:237:0,
from arch/x86/include/asm/msr.h:66,
from arch/x86/include/asm/processor.h:20,
from 
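
The errors above boil down to nvmet_genctr having been converted to
atomic_long_t while these call sites still use the atomic64_* API.  A
minimal sketch of the mismatch and the likely fix (hypothetical helper
name, not a posted patch):

#include <linux/atomic.h>

static atomic_long_t nvmet_genctr = ATOMIC_LONG_INIT(0);

static void nvmet_bump_genctr(void)
{
	/* atomic64_inc(&nvmet_genctr); is what gcc rejects: it expects atomic64_t * */
	atomic_long_inc(&nvmet_genctr);	/* matches the atomic_long_t declaration */
}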

[PATCH] 53c700: fix BUG on untagged commands

2016-06-13 Thread James Bottomley
The untagged command case in the 53c700 driver has been broken since
host wide tags were enabled because the replaced scsi_find_tag()
function had a special case for the tag value SCSI_NO_TAG to retrieve
sdev->current_cmnd.  The replacement function scsi_host_find_tag() has
no such special case and returns NULL causing untagged commands to
trigger a BUG() in the driver.  Inspection shows that the 53c700 is the
only driver using this SCSI_NO_TAG case, so a local fix in the driver
suffices to fix this problem globally.
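
For context, the special case being referred to looked roughly like this
in the old scsi_tcq.h helper (reconstructed from memory, not the current
tree):

static inline struct scsi_cmnd *scsi_find_tag(struct scsi_device *sdev, int tag)
{
	struct request *req;

	if (tag != SCSI_NO_TAG) {
		req = blk_queue_find_tag(sdev->request_queue, tag);
		return req ? (struct scsi_cmnd *)req->special : NULL;
	}

	/* single command, look in space of dev */
	return sdev->current_cmnd;
}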

Fixes: 64d513ac31b - "scsi: use host wide tags by default"
Cc: sta...@vger.kernel.org  # 4.4+
Reported-by: Helge Deller 
Tested-by: Helge Deller 
Signed-off-by: James Bottomley 

---

diff --git a/drivers/scsi/53c700.c b/drivers/scsi/53c700.c
index d4c2856..3ddc85e 100644
--- a/drivers/scsi/53c700.c
+++ b/drivers/scsi/53c700.c
@@ -1122,7 +1122,7 @@ process_script_interrupt(__u32 dsps, __u32 dsp, struct 
scsi_cmnd *SCp,
} else {
struct scsi_cmnd *SCp;
 
-   SCp = scsi_host_find_tag(SDp->host, SCSI_NO_TAG);
+   SCp = SDp->current_cmnd;
if(unlikely(SCp == NULL)) {
sdev_printk(KERN_ERR, SDp,
"no saved request for untagged cmd\n");
@@ -1826,7 +1826,7 @@ NCR_700_queuecommand_lck(struct scsi_cmnd *SCp, void 
(*done)(struct scsi_cmnd *)
   slot->tag, slot);
} else {
slot->tag = SCSI_NO_TAG;
-   /* must populate current_cmnd for scsi_host_find_tag to work */
+   /* save current command for reselection */
SCp->device->current_cmnd = SCp;
}
/* sanity check: some of the commands generated by the mid-layer


--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[RFC-v2 03/11] nvmet: Add support for configfs-ng multi-tenant logic

2016-06-13 Thread Nicholas A. Bellinger
From: Nicholas Bellinger 

This patch introduces support for configfs-ng, which allows
multi-tenant /sys/kernel/config/nvmet/subsystems/$SUBSYS_NQN/
operation by letting existing /sys/kernel/config/target/core/
backends from target-core be configfs-symlinked in as
nvme-target subsystem NQN namespaces.

Here's how the layout looks:

/sys/kernel/config/nvmet/
└── subsystems
└── nqn.2003-01.org.linux-iscsi.NVMf.skylake-ep
├── namespaces
│   └── 1
│   └── ramdisk0 -> ../../../../../target/core/rd_mcp_1/ramdisk0
└── ports
└── loop
├── addr_adrfam
├── addr_portid
├── addr_traddr
├── addr_treq
├── addr_trsvcid
├── addr_trtype
└── enable

It converts nvmet_find_get_subsys() to use port_binding_list, and
does the same for nvmet_host_discovery_allowed().

It also converts nvmet_genctr to atomic_long_t, so it can be used
outside of nvmet_config_sem.

Cc: Jens Axboe 
Cc: Christoph Hellwig 
Cc: Martin Petersen 
Cc: Sagi Grimberg 
Cc: Hannes Reinecke 
Cc: Mike Christie 
Signed-off-by: Nicholas Bellinger 
---
 drivers/nvme/target/Makefile  |   2 +-
 drivers/nvme/target/configfs-ng.c | 662 ++
 drivers/nvme/target/configfs.c|  12 +-
 drivers/nvme/target/core.c|  91 --
 drivers/nvme/target/discovery.c   |  31 +-
 drivers/nvme/target/nvmet.h   |  50 ++-
 6 files changed, 812 insertions(+), 36 deletions(-)
 create mode 100644 drivers/nvme/target/configfs-ng.c

diff --git a/drivers/nvme/target/Makefile b/drivers/nvme/target/Makefile
index b7a0623..2799e07 100644
--- a/drivers/nvme/target/Makefile
+++ b/drivers/nvme/target/Makefile
@@ -3,7 +3,7 @@ obj-$(CONFIG_NVME_TARGET)   += nvmet.o
 obj-$(CONFIG_NVME_TARGET_LOOP) += nvme-loop.o
 obj-$(CONFIG_NVME_TARGET_RDMA) += nvmet-rdma.o
 
-nvmet-y		+= core.o configfs.o admin-cmd.o io-cmd.o fabrics-cmd.o \
+nvmet-y		+= core.o configfs-ng.o admin-cmd.o io-cmd.o fabrics-cmd.o \
 			discovery.o
 nvme-loop-y+= loop.o
 nvmet-rdma-y   += rdma.o
diff --git a/drivers/nvme/target/configfs-ng.c 
b/drivers/nvme/target/configfs-ng.c
new file mode 100644
index 000..28dc24b
--- /dev/null
+++ b/drivers/nvme/target/configfs-ng.c
@@ -0,0 +1,662 @@
+/*
+ * Based on target_core_fabric_configfs.c code
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "nvmet.h"
+
+/*
+ * NVMf host CIT
+ */
+static void nvmet_host_release(struct config_item *item)
+{
+   struct nvmet_host *host = to_host(item);
+   struct nvmet_subsys *subsys = host->subsys;
+
+   mutex_lock(&subsys->hosts_mutex);
+   list_del_init(&host->node);
+   mutex_unlock(&subsys->hosts_mutex);
+
+   kfree(host);
+}
+
+static struct configfs_item_operations nvmet_host_item_opts = {
+   .release= nvmet_host_release,
+};
+
+static struct config_item_type nvmet_host_type = {
+   .ct_item_ops    = &nvmet_host_item_opts,
+   .ct_attrs   = NULL,
+   .ct_owner   = THIS_MODULE,
+
+};
+
+static struct config_group *nvmet_make_hosts(struct config_group *group,
+   const char *name)
+{
+   struct nvmet_subsys *subsys = ports_to_subsys(&group->cg_item);
+   struct nvmet_host *host;
+
+   host = kzalloc(sizeof(*host), GFP_KERNEL);
+   if (!host)
+   return ERR_PTR(-ENOMEM);
+
+   INIT_LIST_HEAD(&host->node);
+   host->subsys = subsys;
+
+   mutex_lock(&subsys->hosts_mutex);
+   list_add_tail(&host->node, &subsys->hosts);
+   mutex_unlock(&subsys->hosts_mutex);
+
+   config_group_init_type_name(&host->group, name, &nvmet_host_type);
+
+   return &host->group;
+}
+
+static void nvmet_drop_hosts(struct config_group *group, struct config_item 
*item)
+{
+   config_item_put(item);
+}
+
+static struct configfs_group_operations nvmet_hosts_group_ops = {
+   .make_group = nvmet_make_hosts,
+   .drop_item  = nvmet_drop_hosts,
+};
+
+static struct config_item_type nvmet_hosts_type = {
+   .ct_group_ops   = &nvmet_hosts_group_ops,
+   .ct_item_ops= NULL,
+   .ct_attrs   = NULL,
+   .ct_owner   = THIS_MODULE,
+};
+
+/*
+ * nvmet_port Generic ConfigFS definitions.
+ */
+static ssize_t nvmet_port_addr_adrfam_show(struct config_item *item,
+   char *page)
+{
+   switch (to_nvmet_port_binding(item)->disc_addr.adrfam) {
+   case NVMF_ADDR_FAMILY_IP4:
+   return sprintf(page, "ipv4\n");
+   case NVMF_ADDR_FAMILY_IP6:
+   return sprintf(page, "ipv6\n");
+   case NVMF_ADDR_FAMILY_IB:
+   return sprintf(page, "ib\n");
+   default:
+   return sprintf(page, "\n");
+   

[RFC-v2 09/11] nvmet/io-cmd: Hookup sbc_ops->execute_unmap backend ops

2016-06-13 Thread Nicholas A. Bellinger
From: Nicholas Bellinger 

This patch converts nvmet_execute_discard() to utilize
sbc_ops->execute_unmap() for target_iostate submission
into existing backend drivers via configfs in
/sys/kernel/config/target/core/.

Cc: Jens Axboe 
Cc: Christoph Hellwig 
Cc: Martin Petersen 
Cc: Sagi Grimberg 
Cc: Hannes Reinecke 
Cc: Mike Christie 
Signed-off-by: Nicholas Bellinger 
---
 drivers/nvme/target/io-cmd.c | 47 
 1 file changed, 34 insertions(+), 13 deletions(-)

diff --git a/drivers/nvme/target/io-cmd.c b/drivers/nvme/target/io-cmd.c
index 23905a8..605f560 100644
--- a/drivers/nvme/target/io-cmd.c
+++ b/drivers/nvme/target/io-cmd.c
@@ -126,52 +126,73 @@ static void nvmet_execute_flush(struct nvmet_req *req)
rc = sbc_ops->execute_sync_cache(ios, false);
 }
 
-#if 0
-static u16 nvmet_discard_range(struct nvmet_ns *ns,
-   struct nvme_dsm_range *range, int type, struct bio **bio)
+static u16 nvmet_discard_range(struct nvmet_req *req, struct sbc_ops *sbc_ops,
+   struct nvme_dsm_range *range, struct bio **bio)
 {
-   if (__blkdev_issue_discard(ns->bdev,
+   struct nvmet_ns *ns = req->ns;
+   sense_reason_t rc;
+
+   rc = sbc_ops->execute_unmap(&req->t_iostate,
le64_to_cpu(range->slba) << (ns->blksize_shift - 9),
le32_to_cpu(range->nlb) << (ns->blksize_shift - 9),
-   GFP_KERNEL, type, bio))
+   bio);
+   if (rc)
return NVME_SC_INTERNAL | NVME_SC_DNR;
 
return 0;
 }
-#endif
+
+static void nvmet_discard_bio_done(struct bio *bio)
+{
+   struct nvmet_req *req = bio->bi_private;
+   int err = bio->bi_error;
+
+   bio_put(bio);
+   nvmet_req_complete(req, err ? NVME_SC_INTERNAL | NVME_SC_DNR : 0);
+}
 
 static void nvmet_execute_discard(struct nvmet_req *req)
 {
-#if 0
-   struct nvme_dsm_range range;
+   struct target_iostate *ios = &req->t_iostate;
+   struct se_device *dev = rcu_dereference_raw(req->ns->dev);
+   struct sbc_ops *sbc_ops = dev->transport->sbc_ops;
struct bio *bio = NULL;
-   int type = REQ_WRITE | REQ_DISCARD, i;
+   struct nvme_dsm_range range;
+   int i;
u16 status;
 
+   if (!sbc_ops || !sbc_ops->execute_unmap) {
+   nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+   return;
+   }
+
+   ios->se_dev = dev;
+   ios->iomem = NULL;
+   ios->t_comp_func = NULL;
+
for (i = 0; i <= le32_to_cpu(req->cmd->dsm.nr); i++) {
status = nvmet_copy_from_sgl(req, i * sizeof(range), &range,
sizeof(range));
if (status)
break;
 
-   status = nvmet_discard_range(req->ns, &range, type, &bio);
+   status = nvmet_discard_range(req, sbc_ops, &range, &bio);
if (status)
break;
}
 
if (bio) {
bio->bi_private = req;
-   bio->bi_end_io = nvmet_bio_done;
+   bio->bi_end_io = nvmet_discard_bio_done;
if (status) {
bio->bi_error = -EIO;
bio_endio(bio);
} else {
-   submit_bio(type, bio);
+   submit_bio(REQ_WRITE | REQ_DISCARD, bio);
}
} else {
nvmet_req_complete(req, status);
}
-#endif
 }
 
 static void nvmet_execute_dsm(struct nvmet_req *req)
-- 
1.9.1

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[RFC-v2 11/11] nvmet/loop: Add support for bio integrity handling

2016-06-13 Thread Nicholas A. Bellinger
From: Nicholas Bellinger 

This patch adds support for nvme/loop block integrity,
based upon the reported ID_NS.ms + ID_NS.dps feature
bits in nvmet_execute_identify_ns().

Cc: Jens Axboe 
Cc: Christoph Hellwig 
Cc: Martin Petersen 
Cc: Sagi Grimberg 
Cc: Hannes Reinecke 
Cc: Mike Christie 
Signed-off-by: Nicholas Bellinger 
---
 drivers/nvme/target/loop.c | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index e9f31d4..480a7ef 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -42,6 +42,7 @@ struct nvme_loop_iod {
struct nvme_loop_queue  *queue;
struct work_struct  work;
struct sg_table sg_table;
+   struct scatterlist  meta_sg;
struct scatterlist  first_sgl[];
 };
 
@@ -201,6 +202,23 @@ static int nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx,
BUG_ON(iod->req.sg_cnt > req->nr_phys_segments);
}
 
+   if (blk_integrity_rq(req)) {
+   int count;
+
+   if (blk_rq_count_integrity_sg(hctx->queue, req->bio) != 1)
+   BUG_ON(1);
+
+   sg_init_table(&iod->meta_sg, 1);
+   count = blk_rq_map_integrity_sg(hctx->queue, req->bio,
+   &iod->meta_sg);
+
+   iod->req.prot_sg = &iod->meta_sg;
+   iod->req.prot_sg_cnt = 1;
+
+   pr_debug("nvme/loop: Set prot_sg %p and prot_sg_cnt: %d\n",
+   iod->req.prot_sg, iod->req.prot_sg_cnt);
+   }
+
iod->cmd.common.command_id = req->tag;
blk_mq_start_request(req);
 
-- 
1.9.1

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[RFC-v2 07/11] nvmet/io-cmd: Hookup sbc_ops->execute_rw backend ops

2016-06-13 Thread Nicholas A. Bellinger
From: Nicholas Bellinger 

This patch converts nvmet_execute_rw() to utilize sbc_ops->execute_rw()
for target_iostate + target_iomem based I/O submission into existing
backend drivers via configfs in /sys/kernel/config/target/core/.

This includes support for passing T10-PI scatterlists via target_iomem
into existing sbc_ops->execute_rw() logic, and is functioning with
IBLOCK, FILEIO, and RAMDISK.

Note the preceding target/iblock patch absorbs inline bio + bvecs
and blk_poll() optimizations from Ming + Sagi in nvmet/io-cmd into
target_core_iblock.c code.

Cc: Jens Axboe 
Cc: Christoph Hellwig 
Cc: Martin Petersen 
Cc: Sagi Grimberg 
Cc: Hannes Reinecke 
Cc: Mike Christie 
Signed-off-by: Nicholas Bellinger 
---
 drivers/nvme/target/io-cmd.c | 116 ++-
 drivers/nvme/target/nvmet.h  |   7 +++
 2 files changed, 67 insertions(+), 56 deletions(-)

diff --git a/drivers/nvme/target/io-cmd.c b/drivers/nvme/target/io-cmd.c
index 38c2e97..133a14a 100644
--- a/drivers/nvme/target/io-cmd.c
+++ b/drivers/nvme/target/io-cmd.c
@@ -14,20 +14,16 @@
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 #include 
 #include 
+#include 
+#include 
 #include "nvmet.h"
 
-#if 0
-static void nvmet_bio_done(struct bio *bio)
+static void nvmet_complete_ios(struct target_iostate *ios, u16 status)
 {
-   struct nvmet_req *req = bio->bi_private;
-
-   nvmet_req_complete(req,
-   bio->bi_error ? NVME_SC_INTERNAL | NVME_SC_DNR : 0);
+   struct nvmet_req *req = container_of(ios, struct nvmet_req, t_iostate);
 
-   if (bio != &req->inline_bio)
-   bio_put(bio);
+   nvmet_req_complete(req, status ? NVME_SC_INTERNAL | NVME_SC_DNR : 0);
 }
-#endif
 
 static inline u32 nvmet_rw_len(struct nvmet_req *req)
 {
@@ -35,72 +31,80 @@ static inline u32 nvmet_rw_len(struct nvmet_req *req)
req->ns->blksize_shift;
 }
 
-#if 0
-static void nvmet_inline_bio_init(struct nvmet_req *req)
-{
-   struct bio *bio = &req->inline_bio;
-
-   bio_init(bio);
-   bio->bi_max_vecs = NVMET_MAX_INLINE_BIOVEC;
-   bio->bi_io_vec = req->inline_bvec;
-}
-#endif
-
 static void nvmet_execute_rw(struct nvmet_req *req)
 {
-#if 0
-   int sg_cnt = req->sg_cnt;
-   struct scatterlist *sg;
-   struct bio *bio;
+   struct target_iostate *ios = &req->t_iostate;
+   struct target_iomem *iomem = &req->t_iomem;
+   struct se_device *dev = rcu_dereference_raw(req->ns->dev);
+   struct sbc_ops *sbc_ops = dev->transport->sbc_ops;
sector_t sector;
-   blk_qc_t cookie;
-   int rw, i;
-#endif
+   enum dma_data_direction data_direction;
+   sense_reason_t rc;
+   bool fua_write = false, prot_enabled = false;
+
+   if (!sbc_ops || !sbc_ops->execute_rw) {
+   nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+   return;
+   }
+
if (!req->sg_cnt) {
nvmet_req_complete(req, 0);
return;
}
-#if 0
+
if (req->cmd->rw.opcode == nvme_cmd_write) {
if (req->cmd->rw.control & cpu_to_le16(NVME_RW_FUA))
-   rw = WRITE_FUA;
-   else
-   rw = WRITE;
+   fua_write = true;
+
+   data_direction = DMA_TO_DEVICE;
} else {
-   rw = READ;
+   data_direction = DMA_FROM_DEVICE;
}
 
sector = le64_to_cpu(req->cmd->rw.slba);
sector <<= (req->ns->blksize_shift - 9);
 
-   nvmet_inline_bio_init(req);
-   bio = &req->inline_bio;
-   bio->bi_bdev = req->ns->bdev;
-   bio->bi_iter.bi_sector = sector;
-   bio->bi_private = req;
-   bio->bi_end_io = nvmet_bio_done;
-
-   for_each_sg(req->sg, sg, req->sg_cnt, i) {
-   while (bio_add_page(bio, sg_page(sg), sg->length, sg->offset)
-   != sg->length) {
-   struct bio *prev = bio;
-
-   bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
-   bio->bi_bdev = req->ns->bdev;
-   bio->bi_iter.bi_sector = sector;
-
-   bio_chain(bio, prev);
-   cookie = submit_bio(rw, prev);
-   }
+   ios->t_task_lba = sector;
+   ios->data_length = nvmet_rw_len(req);
+   ios->data_direction = data_direction;
+   iomem->t_data_sg = req->sg;
+   iomem->t_data_nents = req->sg_cnt;
+   iomem->t_prot_sg = req->prot_sg;
+   iomem->t_prot_nents = req->prot_sg_cnt;
+
+   // XXX: Make common between sbc_check_prot and nvme-target
+   switch (dev->dev_attrib.pi_prot_type) {
+   case TARGET_DIF_TYPE3_PROT:
+   ios->reftag_seed = 0x;
+   prot_enabled = true;
+   break;
+   case 

[RFC-v2 10/11] nvmet/admin-cmd: Hookup T10-PI to ID_NS.ms + ID_NS.dps feature bits

2016-06-13 Thread Nicholas A. Bellinger
From: Nicholas Bellinger 

This patch updates nvmet_execute_identify_ns() to report
target-core backend T10-PI related feature bits to the
NVMe host controller.

Note this assumes support for NVME_NS_DPC_PI_TYPE1 and
NVME_NS_DPC_PI_TYPE3 as reported by backend drivers via
/sys/kernel/config/target/core/*/*/attrib/pi_prot_type.

Cc: Jens Axboe 
Cc: Christoph Hellwig 
Cc: Martin Petersen 
Cc: Sagi Grimberg 
Cc: Hannes Reinecke 
Cc: Mike Christie 
Signed-off-by: Nicholas Bellinger 
---
 drivers/nvme/target/admin-cmd.c | 17 +
 1 file changed, 17 insertions(+)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 240e323..3a808dc 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -200,6 +200,7 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
 {
struct nvmet_ns *ns;
struct nvme_id_ns *id;
+   struct se_device *dev;
u16 status = 0;
 
ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->identify.nsid);
@@ -228,6 +229,22 @@ static void nvmet_execute_identify_ns(struct nvmet_req 
*req)
id->nlbaf = 0;
id->flbas = 0;
 
+   /* Populate bits for T10-PI from se_device backend */
+   rcu_read_lock();
+   dev = rcu_dereference(ns->dev);
+   if (dev && dev->dev_attrib.pi_prot_type) {
+   int pi_prot_type = dev->dev_attrib.pi_prot_type;
+
+   id->lbaf[0].ms = cpu_to_le16(sizeof(struct t10_pi_tuple));
+   printk("nvmet_set_id_ns: ms: %u\n", id->lbaf[0].ms);
+
+   if (pi_prot_type == 1)
+   id->dps = NVME_NS_DPC_PI_TYPE1;
+   else if (pi_prot_type == 3)
+   id->dps = NVME_NS_DPC_PI_TYPE3;
+   }
+   rcu_read_unlock();
+
/*
 * Our namespace might always be shared.  Not just with other
 * controllers, but also with any other user of the block device.
-- 
1.9.1

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[RFC-v2 06/11] nvmet/rdma: Convert to struct nvmet_port_binding

2016-06-13 Thread Nicholas A. Bellinger
From: Nicholas Bellinger 

This patch converts nvmet/rdma to nvmet_port_binding in
configfs-ng, and introduces an nvmet_rdma_port that allows
for multiple nvmet_subsys nvmet_port_bindings to be mapped
to a single nvmet_rdma_port rdma_cm_id listener.

It moves rdma_cm_id setup into nvmet_rdma_listen_cmid(),
and rdma_cm_id destroy into nvmet_rdma_destroy_cmid()
using nvmet_rdma_port->ref.

It also updates nvmet_rdma_add_port() to do an internal
port lookup matching traddr and trsvcid, and to grab
nvmet_rdma_port->ref if a matching port already exists, as
sketched below.
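
A rough sketch of that lookup path (hypothetical helper name and details,
not the posted hunk):

static struct nvmet_rdma_port *nvmet_rdma_port_lookup(struct nvmet_port_binding *pb)
{
	struct nvmet_rdma_port *rdma_port;

	mutex_lock(&nvmet_rdma_ports_mutex);
	list_for_each_entry(rdma_port, &nvmet_rdma_ports, node) {
		if (!strcmp(rdma_port->port_addr.traddr, pb->disc_addr.traddr) &&
		    !strcmp(rdma_port->port_addr.trsvcid, pb->disc_addr.trsvcid) &&
		    kref_get_unless_zero(&rdma_port->ref)) {
			/* Reuse the existing rdma_cm_id listener for this traddr/trsvcid. */
			mutex_unlock(&nvmet_rdma_ports_mutex);
			return rdma_port;
		}
	}
	mutex_unlock(&nvmet_rdma_ports_mutex);
	return NULL;
}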

Cc: Jens Axboe 
Cc: Christoph Hellwig 
Cc: Keith Busch 
Cc: Jay Freyensee 
Cc: Martin Petersen 
Cc: Sagi Grimberg 
Cc: Hannes Reinecke 
Cc: Mike Christie 
Signed-off-by: Nicholas Bellinger 
---
 drivers/nvme/target/rdma.c | 127 -
 1 file changed, 114 insertions(+), 13 deletions(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index fccb01d..62638f7af 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -118,6 +118,17 @@ struct nvmet_rdma_device {
struct list_headentry;
 };
 
+struct nvmet_rdma_port {
+   atomic_tenabled;
+
+   struct rdma_cm_id   *cm_id;
+   struct nvmf_disc_rsp_page_entry port_addr;
+
+   struct list_headnode;
+   struct kref ref;
+   struct nvmet_port   port;
+};
+
 static bool nvmet_rdma_use_srq;
 module_param_named(use_srq, nvmet_rdma_use_srq, bool, 0444);
 MODULE_PARM_DESC(use_srq, "Use shared receive queue.");
@@ -129,6 +140,9 @@ static DEFINE_MUTEX(nvmet_rdma_queue_mutex);
 static LIST_HEAD(device_list);
 static DEFINE_MUTEX(device_list_mutex);
 
+static LIST_HEAD(nvmet_rdma_ports);
+static DEFINE_MUTEX(nvmet_rdma_ports_mutex);
+
 static bool nvmet_rdma_execute_command(struct nvmet_rdma_rsp *rsp);
 static void nvmet_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc);
 static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc);
@@ -1127,6 +1141,7 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id 
*cm_id,
 {
struct nvmet_rdma_device *ndev;
struct nvmet_rdma_queue *queue;
+   struct nvmet_rdma_port *rdma_port;
int ret = -EINVAL;
 
ndev = nvmet_rdma_find_get_device(cm_id);
@@ -1141,7 +1156,8 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id 
*cm_id,
ret = -ENOMEM;
goto put_device;
}
-   queue->port = cm_id->context;
+   rdma_port = cm_id->context;
+   queue->port = &rdma_port->port;
 
ret = nvmet_rdma_cm_accept(cm_id, queue, &event->param.conn);
if (ret)
@@ -1306,26 +1322,50 @@ static void nvmet_rdma_delete_ctrl(struct nvmet_ctrl 
*ctrl)
nvmet_rdma_queue_disconnect(queue);
 }
 
-static int nvmet_rdma_add_port(struct nvmet_port *port)
+static struct nvmet_rdma_port *nvmet_rdma_listen_cmid(struct 
nvmet_port_binding *pb)
 {
+   struct nvmet_rdma_port *rdma_port;
struct rdma_cm_id *cm_id;
struct sockaddr_in addr_in;
u16 port_in;
int ret;
 
-   ret = kstrtou16(port->disc_addr.trsvcid, 0, &port_in);
+   rdma_port = kzalloc(sizeof(*rdma_port), GFP_KERNEL);
+   if (!rdma_port)
+   return ERR_PTR(-ENOMEM);
+
+   INIT_LIST_HEAD(&rdma_port->node);
+   kref_init(&rdma_port->ref);
+   mutex_init(&rdma_port->port.port_binding_mutex);
+   INIT_LIST_HEAD(&rdma_port->port.port_binding_list);
+   rdma_port->port.priv = rdma_port;
+   rdma_port->port.nf_subsys = pb->nf_subsys;
+   rdma_port->port.nf_ops = pb->nf_ops;
+   pb->port = &rdma_port->port;
+
+   memcpy(&rdma_port->port_addr, &pb->disc_addr,
+   sizeof(struct nvmf_disc_rsp_page_entry));
+
+   nvmet_port_binding_enable(pb, &rdma_port->port);
+
+   mutex_lock(&nvmet_rdma_ports_mutex);
+   list_add_tail(&rdma_port->node, &nvmet_rdma_ports);
+   mutex_unlock(&nvmet_rdma_ports_mutex);
+
+   ret = kstrtou16(pb->disc_addr.trsvcid, 0, &port_in);
if (ret)
-   return ret;
+   goto out_port_disable;
 
addr_in.sin_family = AF_INET;
-   addr_in.sin_addr.s_addr = in_aton(port->disc_addr.traddr);
+   addr_in.sin_addr.s_addr = in_aton(pb->disc_addr.traddr);
addr_in.sin_port = htons(port_in);
 
-   cm_id = rdma_create_id(&init_net, nvmet_rdma_cm_handler, port,
+   cm_id = rdma_create_id(&init_net, nvmet_rdma_cm_handler, rdma_port,
RDMA_PS_TCP, IB_QPT_RC);
if (IS_ERR(cm_id)) {
pr_err("CM ID creation failed\n");
-   return PTR_ERR(cm_id);
+   ret = PTR_ERR(cm_id);
+   goto out_port_disable;
}
 
ret = rdma_bind_addr(cm_id, (struct sockaddr *)&addr_in);
@@ -1340,21 +1380,82 @@ static int nvmet_rdma_add_port(struct 

[RFC-v2 01/11] nvme-fabrics: Export nvmf_host_add + generate hostnqn if necessary

2016-06-13 Thread Nicholas A. Bellinger
From: Nicholas Bellinger 

This patch allows nvmf_host_add() to be used externally,
and, if no hostnqn is passed, optionally generates one
based on host->id, following nvmf_host_default().

Note it's required for nvme-loop multi-controller support,
in order to drive nvmet_port creation directly via configfs
attribute write from within ../nvmet/subsystems/$NQN/ports/$PORT/
group context.

Cc: Jens Axboe 
Cc: Christoph Hellwig 
Cc: Keith Busch 
Cc: Jay Freyensee 
Cc: Martin Petersen 
Cc: Sagi Grimberg 
Cc: Hannes Reinecke 
Cc: Mike Christie 
Signed-off-by: Nicholas Bellinger 
---
 drivers/nvme/host/fabrics.c | 18 +-
 drivers/nvme/host/fabrics.h |  1 +
 2 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index ee4b7f1..2e0086a 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -41,28 +41,36 @@ static struct nvmf_host *__nvmf_host_find(const char 
*hostnqn)
return NULL;
 }
 
-static struct nvmf_host *nvmf_host_add(const char *hostnqn)
+struct nvmf_host *nvmf_host_add(const char *hostnqn)
 {
struct nvmf_host *host;
 
mutex_lock(&nvmf_hosts_mutex);
-   host = __nvmf_host_find(hostnqn);
-   if (host)
-   goto out_unlock;
+   if (hostnqn) {
+   host = __nvmf_host_find(hostnqn);
+   if (host)
+   goto out_unlock;
+   }
 
host = kmalloc(sizeof(*host), GFP_KERNEL);
if (!host)
goto out_unlock;
 
kref_init(&host->ref);
-   memcpy(host->nqn, hostnqn, NVMF_NQN_SIZE);
uuid_le_gen(&host->id);
 
+   if (hostnqn)
+   memcpy(host->nqn, hostnqn, NVMF_NQN_SIZE);
+   else
+   snprintf(host->nqn, NVMF_NQN_SIZE,
+   "nqn.2014-08.org.nvmexpress:NVMf:uuid:%pUl", >id);
+
list_add_tail(&host->list, &nvmf_hosts);
 out_unlock:
mutex_unlock(&nvmf_hosts_mutex);
return host;
 }
+EXPORT_SYMBOL_GPL(nvmf_host_add);
 
 static struct nvmf_host *nvmf_host_default(void)
 {
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index b540674..956eab4 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -128,6 +128,7 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl);
 int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid);
 void nvmf_register_transport(struct nvmf_transport_ops *ops);
 void nvmf_unregister_transport(struct nvmf_transport_ops *ops);
+struct nvmf_host *nvmf_host_add(const char *hostnqn);
 void nvmf_free_options(struct nvmf_ctrl_options *opts);
 const char *nvmf_get_subsysnqn(struct nvme_ctrl *ctrl);
 int nvmf_get_address(struct nvme_ctrl *ctrl, char *buf, int size);
-- 
1.9.1

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[RFC-v2 08/11] nvmet/io-cmd: Hookup sbc_ops->execute_sync_cache backend ops

2016-06-13 Thread Nicholas A. Bellinger
From: Nicholas Bellinger 

This patch converts nvmet_execute_flush() to utilize
sbc_ops->execute_sync_cache() for target_iostate
submission into existing backend drivers via
configfs in /sys/kernel/config/target/core/.

Cc: Jens Axboe 
Cc: Christoph Hellwig 
Cc: Martin Petersen 
Cc: Sagi Grimberg 
Cc: Hannes Reinecke 
Cc: Mike Christie 
Signed-off-by: Nicholas Bellinger 
---
 drivers/nvme/target/io-cmd.c | 21 -
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/drivers/nvme/target/io-cmd.c b/drivers/nvme/target/io-cmd.c
index 133a14a..23905a8 100644
--- a/drivers/nvme/target/io-cmd.c
+++ b/drivers/nvme/target/io-cmd.c
@@ -109,18 +109,21 @@ static void nvmet_execute_rw(struct nvmet_req *req)
 
 static void nvmet_execute_flush(struct nvmet_req *req)
 {
-#if 0
-   struct bio *bio;
+   struct target_iostate *ios = &req->t_iostate;
+   struct se_device *dev = rcu_dereference_raw(req->ns->dev);
+   struct sbc_ops *sbc_ops = dev->transport->sbc_ops;
+   sense_reason_t rc;
 
-   nvmet_inline_bio_init(req);
-   bio = &req->inline_bio;
+   if (!sbc_ops || !sbc_ops->execute_sync_cache) {
+   nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+   return;
+   }
 
-   bio->bi_bdev = req->ns->bdev;
-   bio->bi_private = req;
-   bio->bi_end_io = nvmet_bio_done;
+   ios->se_dev = dev;
+   ios->iomem = NULL;
+   ios->t_comp_func = &nvmet_complete_ios;
 
-   submit_bio(WRITE_FLUSH, bio);
-#endif
+   rc = sbc_ops->execute_sync_cache(ios, false);
 }
 
 #if 0
-- 
1.9.1

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[RFC-v2 02/11] nvmet: Add nvmet_fabric_ops get/put transport helpers

2016-06-13 Thread Nicholas A. Bellinger
From: Nicholas Bellinger 

This patch introduces two helpers for obtaining + releasing
nvmet_fabric_ops for nvmet_port usage, and the associated
struct module ops->owner reference.

This is required in order to support nvmet/configfs-ng
and multiple nvmet_port configfs groups living under
/sys/kernel/config/nvmet/subsystems/$SUBSYS_NQN/ports/

Cc: Jens Axboe 
Cc: Christoph Hellwig 
Cc: Martin Petersen 
Cc: Sagi Grimberg 
Cc: Hannes Reinecke 
Cc: Mike Christie 
Signed-off-by: Nicholas Bellinger 
---
 drivers/nvme/target/core.c  | 32 
 drivers/nvme/target/nvmet.h |  4 
 2 files changed, 36 insertions(+)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index e0b3f01..689ad4c 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -191,6 +191,38 @@ void nvmet_disable_port(struct nvmet_port *port)
module_put(ops->owner);
 }
 
+struct nvmet_fabrics_ops *
+nvmet_get_transport(struct nvmf_disc_rsp_page_entry *disc_addr)
+{
+   struct nvmet_fabrics_ops *ops;
+
+   down_write(&nvmet_config_sem);
+   ops = nvmet_transports[disc_addr->trtype];
+   if (!ops) {
+   pr_err("transport type %d not supported\n",
+   disc_addr->trtype);
+   return ERR_PTR(-EINVAL);
+   }
+
+   if (!try_module_get(ops->owner)) {
+   up_write(&nvmet_config_sem);
+   return ERR_PTR(-EINVAL);
+   }
+   up_write(&nvmet_config_sem);
+
+   return ops;
+}
+
+void nvmet_put_transport(struct nvmf_disc_rsp_page_entry *disc_addr)
+{
+   struct nvmet_fabrics_ops *ops;
+
+   down_write(&nvmet_config_sem);
+   ops = nvmet_transports[disc_addr->trtype];
+   module_put(ops->owner);
+   up_write(&nvmet_config_sem);
+}
+
 static void nvmet_keep_alive_timer(struct work_struct *work)
 {
struct nvmet_ctrl *ctrl = container_of(to_delayed_work(work),
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 57dd6d8..17fd217 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -299,6 +299,10 @@ void nvmet_unregister_transport(struct nvmet_fabrics_ops 
*ops);
 int nvmet_enable_port(struct nvmet_port *port);
 void nvmet_disable_port(struct nvmet_port *port);
 
+struct nvmet_fabrics_ops *nvmet_get_transport(
+   struct nvmf_disc_rsp_page_entry *disc_addr);
+void nvmet_put_transport(struct nvmf_disc_rsp_page_entry *disc_addr);
+
 void nvmet_referral_enable(struct nvmet_port *parent, struct nvmet_port *port);
 void nvmet_referral_disable(struct nvmet_port *port);
 
-- 
1.9.1

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[RFC-v2 04/11] nvmet: Hookup nvmet_ns->dev to nvmet_ns_enable

2016-06-13 Thread Nicholas A. Bellinger
From: Nicholas Bellinger 

This patch hooks up nvmet_ns_enable() to accept the RCU protected
struct se_device provided as a configfs symlink from existing
/sys/kernel/config/target/core/ driver backends.

Also, drop the now unused internal ns->bdev + ns->device_path
usage, and add WIP stubs for nvmet/io-cmd sbc_ops backend
conversion to be added in subsequent patches.

Cc: Jens Axboe 
Cc: Christoph Hellwig 
Cc: Martin Petersen 
Cc: Sagi Grimberg 
Cc: Hannes Reinecke 
Cc: Mike Christie 
Signed-off-by: Nicholas Bellinger 
---
 drivers/nvme/target/configfs-ng.c |  3 +--
 drivers/nvme/target/core.c| 30 --
 drivers/nvme/target/io-cmd.c  | 17 +++--
 drivers/nvme/target/nvmet.h   |  6 ++
 4 files changed, 26 insertions(+), 30 deletions(-)

diff --git a/drivers/nvme/target/configfs-ng.c 
b/drivers/nvme/target/configfs-ng.c
index 28dc24b..1cd1e8e 100644
--- a/drivers/nvme/target/configfs-ng.c
+++ b/drivers/nvme/target/configfs-ng.c
@@ -470,8 +470,7 @@ static int nvmet_ns_link(struct config_item *ns_ci, struct 
config_item *dev_ci)
return -ENOSYS;
}
 
-   // XXX: Pass in struct se_device into nvmet_ns_enable
-   return nvmet_ns_enable(ns);
+   return nvmet_ns_enable(ns, dev);
 }
 
 static int nvmet_ns_unlink(struct config_item *ns_ci, struct config_item 
*dev_ci)
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 3357696..e2176e0 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -13,6 +13,8 @@
  */
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 #include 
+#include 
+#include 
 #include "nvmet.h"
 
 static struct nvmet_fabrics_ops *nvmet_transports[NVMF_TRTYPE_MAX];
@@ -292,7 +294,7 @@ void nvmet_put_namespace(struct nvmet_ns *ns)
percpu_ref_put(>ref);
 }
 
-int nvmet_ns_enable(struct nvmet_ns *ns)
+int nvmet_ns_enable(struct nvmet_ns *ns, struct se_device *dev)
 {
struct nvmet_subsys *subsys = ns->subsys;
struct nvmet_ctrl *ctrl;
@@ -302,23 +304,14 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
if (!list_empty(&ns->dev_link))
goto out_unlock;
 
-   ns->bdev = blkdev_get_by_path(ns->device_path, FMODE_READ | FMODE_WRITE,
-   NULL);
-   if (IS_ERR(ns->bdev)) {
-   pr_err("nvmet: failed to open block device %s: (%ld)\n",
-   ns->device_path, PTR_ERR(ns->bdev));
-   ret = PTR_ERR(ns->bdev);
-   ns->bdev = NULL;
-   goto out_unlock;
-   }
-
-   ns->size = i_size_read(ns->bdev->bd_inode);
-   ns->blksize_shift = blksize_bits(bdev_logical_block_size(ns->bdev));
+   rcu_assign_pointer(ns->dev, dev);
+   ns->size = dev->transport->get_blocks(dev) * 
dev->dev_attrib.hw_block_size;
+   ns->blksize_shift = blksize_bits(dev->dev_attrib.hw_block_size);
 
ret = percpu_ref_init(&ns->ref, nvmet_destroy_namespace,
0, GFP_KERNEL);
if (ret)
-   goto out_blkdev_put;
+   goto out_unlock;
 
if (ns->nsid > subsys->max_nsid)
subsys->max_nsid = ns->nsid;
@@ -348,10 +341,6 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
 out_unlock:
mutex_unlock(&subsys->lock);
return ret;
-out_blkdev_put:
-   blkdev_put(ns->bdev, FMODE_WRITE|FMODE_READ);
-   ns->bdev = NULL;
-   goto out_unlock;
 }
 
 void nvmet_ns_disable(struct nvmet_ns *ns)
@@ -384,16 +373,13 @@ void nvmet_ns_disable(struct nvmet_ns *ns)
list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
nvmet_add_async_event(ctrl, NVME_AER_TYPE_NOTICE, 0, 0);
 
-   if (ns->bdev)
-   blkdev_put(ns->bdev, FMODE_WRITE|FMODE_READ);
+   rcu_assign_pointer(ns->dev, NULL);
mutex_unlock(&subsys->lock);
 }
 
 void nvmet_ns_free(struct nvmet_ns *ns)
 {
nvmet_ns_disable(ns);
-
-   kfree(ns->device_path);
kfree(ns);
 }
 
diff --git a/drivers/nvme/target/io-cmd.c b/drivers/nvme/target/io-cmd.c
index 76dbf73..38c2e97 100644
--- a/drivers/nvme/target/io-cmd.c
+++ b/drivers/nvme/target/io-cmd.c
@@ -16,6 +16,7 @@
 #include 
 #include "nvmet.h"
 
+#if 0
 static void nvmet_bio_done(struct bio *bio)
 {
struct nvmet_req *req = bio->bi_private;
@@ -26,6 +27,7 @@ static void nvmet_bio_done(struct bio *bio)
if (bio != &req->inline_bio)
bio_put(bio);
 }
+#endif
 
 static inline u32 nvmet_rw_len(struct nvmet_req *req)
 {
@@ -33,6 +35,7 @@ static inline u32 nvmet_rw_len(struct nvmet_req *req)
req->ns->blksize_shift;
 }
 
+#if 0
 static void nvmet_inline_bio_init(struct nvmet_req *req)
 {
struct bio *bio = &req->inline_bio;
@@ -41,21 +44,23 @@ static void nvmet_inline_bio_init(struct nvmet_req *req)
bio->bi_max_vecs = NVMET_MAX_INLINE_BIOVEC;

[RFC-v2 05/11] nvmet/loop: Add support for controller-per-port model + nvmet_port_binding

2016-06-13 Thread Nicholas A. Bellinger
From: Nicholas Bellinger 

This patch introduces loopback support for an nvme host
controller per nvmet_port instance model, following what
we've done in drivers/target/loopback/ for allowing
multiple host LLDs to co-exist.

It changes nvme_loop_add_port() to use struct nvme_loop_port
and take the nvmf_host_add() reference, and invokes
device_register() to trigger nvme_loop_driver_probe() and kick
off controller creation within nvme_loop_create_ctrl().

This allows nvme_loop_queue_rq() to set iod->req.port to
the per-nvmet_port pointer, instead of a single hardcoded
global nvmet_loop_port.

Subsequently, it also adds nvme_loop_remove_port(), which calls
device_unregister(), nvme_loop_del_ctrl() and
nvmf_free_options() to drop nvmet_port's struct nvmf_host
reference when the nvmet_port_binding is being removed
from the associated nvmet_subsys.

Cc: Jens Axboe 
Cc: Christoph Hellwig 
Cc: Keith Busch 
Cc: Jay Freyensee 
Cc: Martin Petersen 
Cc: Sagi Grimberg 
Cc: Hannes Reinecke 
Cc: Mike Christie 
Signed-off-by: Nicholas Bellinger 
---
 drivers/nvme/target/loop.c | 205 -
 1 file changed, 185 insertions(+), 20 deletions(-)

diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 08b4fbb..e9f31d4 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -45,6 +45,13 @@ struct nvme_loop_iod {
struct scatterlist  first_sgl[];
 };
 
+struct nvme_loop_port {
+   struct device   dev;
+   struct nvmf_ctrl_options *opts;
+   struct nvme_ctrl*ctrl;
+   struct nvmet_port   port;
+};
+
 struct nvme_loop_ctrl {
spinlock_t  lock;
struct nvme_loop_queue  *queues;
@@ -61,6 +68,8 @@ struct nvme_loop_ctrl {
struct nvmet_ctrl   *target_ctrl;
struct work_struct  delete_work;
struct work_struct  reset_work;
+
+   struct nvme_loop_port   *port;
 };
 
 static inline struct nvme_loop_ctrl *to_loop_ctrl(struct nvme_ctrl *ctrl)
@@ -74,8 +83,6 @@ struct nvme_loop_queue {
struct nvme_loop_ctrl   *ctrl;
 };
 
-static struct nvmet_port *nvmet_loop_port;
-
 static LIST_HEAD(nvme_loop_ctrl_list);
 static DEFINE_MUTEX(nvme_loop_ctrl_mutex);
 
@@ -172,7 +179,8 @@ static int nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx,
return ret;
 
iod->cmd.common.flags |= NVME_CMD_SGL_METABUF;
-   iod->req.port = nvmet_loop_port;
+   iod->req.port = &queue->ctrl->port->port;
+
if (!nvmet_req_init(&iod->req, &queue->nvme_cq,
&queue->nvme_sq, &nvme_loop_ops)) {
nvme_cleanup_cmd(req);
@@ -599,6 +607,8 @@ out_destroy_queues:
 static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
struct nvmf_ctrl_options *opts)
 {
+   struct nvme_loop_port *loop_port = container_of(dev,
+   struct nvme_loop_port, dev);
struct nvme_loop_ctrl *ctrl;
bool changed;
int ret;
@@ -607,6 +617,7 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct 
device *dev,
if (!ctrl)
return ERR_PTR(-ENOMEM);
ctrl->ctrl.opts = opts;
+   ctrl->port = loop_port;
INIT_LIST_HEAD(&ctrl->list);
 
INIT_WORK(&ctrl->delete_work, nvme_loop_del_ctrl_work);
@@ -681,29 +692,135 @@ out_put_ctrl:
return ERR_PTR(ret);
 }
 
-static int nvme_loop_add_port(struct nvmet_port *port)
+static int nvme_loop_driver_probe(struct device *dev)
 {
-   /*
-* XXX: disalow adding more than one port so
-* there is no connection rejections when a
-* a subsystem is assigned to a port for which
-* loop doesn't have a pointer.
-* This scenario would be possible if we allowed
-* more than one port to be added and a subsystem
-* was assigned to a port other than nvmet_loop_port.
-*/
+   struct nvme_loop_port *loop_port = container_of(dev,
+   struct nvme_loop_port, dev);
+   struct nvme_ctrl *ctrl;
 
-   if (nvmet_loop_port)
-   return -EPERM;
+   ctrl = nvme_loop_create_ctrl(dev, loop_port->opts);
+   if (IS_ERR(ctrl))
+   return PTR_ERR(ctrl);
 
-   nvmet_loop_port = port;
+   loop_port->ctrl = ctrl;
return 0;
 }
 
-static void nvme_loop_remove_port(struct nvmet_port *port)
+static int nvme_loop_driver_remove(struct device *dev)
+{
+   struct nvme_loop_port *loop_port = container_of(dev,
+   struct nvme_loop_port, dev);
+   struct nvme_ctrl *ctrl = loop_port->ctrl;
+   struct nvmf_ctrl_options *opts = loop_port->opts;
+
+   nvme_loop_del_ctrl(ctrl);
+   nvmf_free_options(opts);
+   return 0;
+}
+
+static int pseudo_bus_match(struct device *dev,
+

[RFC-v2 00/11] nvmet: Add support for multi-tenant configfs

2016-06-13 Thread Nicholas A. Bellinger
From: Nicholas Bellinger 

Hi folks,

Here's the second pass of an nvmet multi-tenant configfs layout,
following what we've learned in target_core_fabric_configfs.c
wrt independent operation of storage endpoints.

Namely, it allows existing /sys/kernel/config/target/core/ backends
to be configfs symlinked into ../nvmet/subsystems/$SUBSYS_NQN/
as nvme namespaces.

Here is how the running RFC-v2 code currently looks:

/sys/kernel/config/nvmet/subsystems/
└── nqn.2003-01.org.linux-iscsi.NVMf.skylake-ep
├── hosts
├── namespaces
│   └── 1
│   └── ramdisk0 -> ../../../../../target/core/rd_mcp_1/ramdisk0
└── ports
└── loop
├── addr_adrfam
├── addr_portid
├── addr_traddr
├── addr_treq
├── addr_trsvcid
├── addr_trtype
└── enable

The series exposes T10-PI from /sys/kernel/config/target/core/ as
ID_NS.ms + ID_NS.dps feature bits, and enables block integrity
support with nvmet/loop driver.

Note this series depends upon the following prerequisites of
target-core:

  http://marc.info/?l=linux-scsi&m=146527281416606&w=2

and of course, last week's earlier release of nvmet + friends:

  http://lists.infradead.org/pipermail/linux-nvme/2016-June/004754.html

Note the full set of patches is available from:

  
https://git.kernel.org/cgit/linux/kernel/git/nab/target-pending.git/log/?h=nvmet-configfs-ng

v2 changes:

  - Introduce struct nvmet_port_binding in configfs-ng.c, in order
to support 1:N mappings.
  - Convert nvmet_find_get_subsys() + discovery.c logic to use
nvmet_port->port_binding_list.
  - Convert nvmet/loop to use nvmet_port_binding. (1:1 mapping)
  - Convert nvmet/rdma to use nvmet_port_binding + nvmet_rdma_ports.
(1:N mapping)
  - Export nvmf_host_add + generate hostnqn if necessary. (HCH)
  - Make nvmet/loop multi-controller allocate its own nvmf_host
per controller. (HCH)
  - Change nvmet_fabric_ops get/put to use nvmf_disc_rsp_page_entry.
  - Convert nvmet_genctr to atomic_long_t.
  - Enable ../nvmet/subsystems/$NQN/hosts/$HOSTNQN group usage
in configfs-ng.c.

Comments..?

--nab

Nicholas Bellinger (11):
  nvme-fabrics: Export nvmf_host_add + generate hostnqn if necessary
  nvmet: Add nvmet_fabric_ops get/put transport helpers
  nvmet: Add support for configfs-ng multi-tenant logic
  nvmet: Hookup nvmet_ns->dev to nvmet_ns_enable
  nvmet/loop: Add support for controller-per-port model +
nvmet_port_binding
  nvmet/rdma: Convert to struct nvmet_port_binding
  nvmet/io-cmd: Hookup sbc_ops->execute_rw backend ops
  nvmet/io-cmd: Hookup sbc_ops->execute_sync_cache backend ops
  nvmet/io-cmd: Hookup sbc_ops->execute_unmap backend ops
  nvmet/admin-cmd: Hookup T10-PI to ID_NS.ms + ID_NS.dps feature bits
  nvmet/loop: Add support for bio integrity handling

 drivers/nvme/host/fabrics.c   |  18 +-
 drivers/nvme/host/fabrics.h   |   1 +
 drivers/nvme/target/Makefile  |   2 +-
 drivers/nvme/target/admin-cmd.c   |  17 +
 drivers/nvme/target/configfs-ng.c | 661 ++
 drivers/nvme/target/configfs.c|  12 +-
 drivers/nvme/target/core.c| 153 ++---
 drivers/nvme/target/discovery.c   |  31 +-
 drivers/nvme/target/io-cmd.c  | 169 ++
 drivers/nvme/target/loop.c| 223 +++--
 drivers/nvme/target/nvmet.h   |  67 +++-
 drivers/nvme/target/rdma.c| 127 +++-
 12 files changed, 1317 insertions(+), 164 deletions(-)
 create mode 100644 drivers/nvme/target/configfs-ng.c

-- 
1.9.1

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC] [PATCH 0/3] Convert ipr to use ata_port_operations->error_handler

2016-06-13 Thread Tejun Heo
On Fri, Jun 10, 2016 at 04:33:09PM -0500, Brian King wrote:
> On 05/26/2016 09:12 AM, Brian King wrote:
> > On 05/25/2016 02:32 PM, Tejun Heo wrote:
> >> Hello, Brian.
> >>
> >> So, of all the ata drivers, ipr seems to be the only driver which
> >> doesn't implement ata_port_opeations->error_handler and thus depends
> >> on the old error handling path. I'm wondering whether it'd be possible
> >> to convert ipr to implement ->error_handler and drop the old path.
> >> Would that be difficult?
> > 
> > Last time I looked at that there were a number of challenges in doing that,
> > but let me take another look and see if we can figure out a way to do that.
> 
> Here is an initial attempt to do this conversion. Probably plenty of bugs
> in it, and more testing is needed before this would be ready to apply.

Thanks a lot for working on this. :)

-- 
tejun
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] scsi: disable VPD page check on error

2016-06-13 Thread Ewan D. Milne
On Mon, 2016-06-13 at 14:48 +0200, Hannes Reinecke wrote:
> If we encounter an error during VPD page scanning we should be
> setting the 'skip_vpd_pages' bit to avoid further accesses.
> 
> Signed-off-by: Hannes Reinecke 
> ---
>  drivers/scsi/scsi.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
> index 1deb6ad..0359864 100644
> --- a/drivers/scsi/scsi.c
> +++ b/drivers/scsi/scsi.c
> @@ -796,6 +796,7 @@ retry_pg0:
>   result = scsi_vpd_inquiry(sdev, vpd_buf, 0, vpd_len);
>   if (result < 0) {
>   kfree(vpd_buf);
> + sdev->skip_vpd_pages = 1;
>   return;
>   }
>   if (result > vpd_len) {
> @@ -822,6 +823,7 @@ retry_pg80:
>   result = scsi_vpd_inquiry(sdev, vpd_buf, 0x80, vpd_len);
>   if (result < 0) {
>   kfree(vpd_buf);
> + sdev->skip_vpd_pages = 1;
>   return;
>   }
>   if (result > vpd_len) {
> @@ -851,6 +853,7 @@ retry_pg83:
>   result = scsi_vpd_inquiry(sdev, vpd_buf, 0x83, vpd_len);
>   if (result < 0) {
>   kfree(vpd_buf);
> + sdev->skip_vpd_pages = 1;
>   return;
>   }
>   if (result > vpd_len) {

So, this changes scsi_attach_vpd() but not scsi_get_vpd_page() ?

This particular implementation worries me.  If we get an error
performing a VPD inquiry, we will never, ever, attempt one again?  What
happens if a path is down at the time?  The idea behind getting updated
VPD info was that it might change, so if that does happen we don't want
to stop updating after an isolated error.

I think what we want to do is check if the VPD inquiry is supported on
the *initial* inquiry, and if that fails then suppress further updates.
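
A sketch of that idea (hypothetical 'initial_scan' parameter, not a tested
patch) would be along the lines of:

static void scsi_attach_vpd_pg0(struct scsi_device *sdev, bool initial_scan)
{
	unsigned char *vpd_buf;
	int vpd_len = 255;
	int result;

	vpd_buf = kmalloc(vpd_len, GFP_KERNEL);
	if (!vpd_buf)
		return;

	result = scsi_vpd_inquiry(sdev, vpd_buf, 0, vpd_len);
	if (result < 0) {
		/* Only give up for good if the very first scan failed. */
		if (initial_scan)
			sdev->skip_vpd_pages = 1;
		kfree(vpd_buf);
		return;
	}
	kfree(vpd_buf);
}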

-Ewan




--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] scsi: disable VPD page check on error

2016-06-13 Thread Hannes Reinecke
On 06/13/2016 03:50 PM, Ewan D. Milne wrote:
> On Mon, 2016-06-13 at 14:48 +0200, Hannes Reinecke wrote:
>> If we encounter an error during VPD page scanning we should be
>> setting the 'skip_vpd_pages' bit to avoid further accesses.
>>
>> Signed-off-by: Hannes Reinecke 
>> ---
>>  drivers/scsi/scsi.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
>> index 1deb6ad..0359864 100644
>> --- a/drivers/scsi/scsi.c
>> +++ b/drivers/scsi/scsi.c
>> @@ -796,6 +796,7 @@ retry_pg0:
>>  result = scsi_vpd_inquiry(sdev, vpd_buf, 0, vpd_len);
>>  if (result < 0) {
>>  kfree(vpd_buf);
>> +sdev->skip_vpd_pages = 1;
>>  return;
>>  }
>>  if (result > vpd_len) {
>> @@ -822,6 +823,7 @@ retry_pg80:
>>  result = scsi_vpd_inquiry(sdev, vpd_buf, 0x80, vpd_len);
>>  if (result < 0) {
>>  kfree(vpd_buf);
>> +sdev->skip_vpd_pages = 1;
>>  return;
>>  }
>>  if (result > vpd_len) {
>> @@ -851,6 +853,7 @@ retry_pg83:
>>  result = scsi_vpd_inquiry(sdev, vpd_buf, 0x83, vpd_len);
>>  if (result < 0) {
>>  kfree(vpd_buf);
>> +sdev->skip_vpd_pages = 1;
>>  return;
>>  }
>>  if (result > vpd_len) {
> 
> So, this changes scsi_attach_vpd() but not scsi_get_vpd_page() ?
> 
Yes.
scsi_get_vpd_page() is just checking 'skip_vpd_pages', but none of the
other settings (i.e. it's not using scsi_device_supports_vpd()).
So we already come to different results when asking for VPD pages via
scsi_get_vpd_page() and scsi_attach_vpd().
Ideally we would be using scsi_device_supports_vpd() in both instances,
but that would require a further audit and I deemed it beyond the scope
of this patch.
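
For reference, scsi_device_supports_vpd() in include/scsi/scsi_device.h is
roughly:

static inline int scsi_device_supports_vpd(struct scsi_device *sdev)
{
	/* Attempt VPD inquiry if the device blacklist explicitly calls for it */
	if (sdev->try_vpd_pages)
		return 1;
	/* Otherwise only ask SPC-3 and higher devices that aren't flagged */
	if (sdev->scsi_level > SCSI_SPC_2 && !sdev->skip_vpd_pages)
		return 1;
	return 0;
}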

> This particular implementation worries me.  If we get an error
> performing a VPD inquiry, we will never, ever, attempt one again?  What
> happens if a path is down at the time?  The idea behind getting updated
> VPD info was that it might change, so if that does happen we don't want
> to stop updating after an isolated error.
> 
> I think what we want to do is check if the VPD inquiry is supported on
> the *initial* inquiry, and if that fails then suppress further updates.
> 
Fair point. Will be updating the patch.

Cheers,

Hannes
-- 
Dr. Hannes ReineckeTeamlead Storage & Networking
h...@suse.de   +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH resend] USB: uas: Fix slave queue_depth not being set

2016-06-13 Thread Hans de Goede

Hi,

On 13-06-16 14:05, Oliver Neukum wrote:

On Tue, 2016-05-31 at 09:18 +0200, Hans de Goede wrote:

Commit 198de51dbc34 ("USB: uas: Limit qdepth at the scsi-host level")
removed the scsi_change_queue_depth() call from uas_slave_configure()
assuming that the slave would inherit the host's queue_depth, which
that commit sets to the same value.

This is incorrect, without the scsi_change_queue_depth() call the slave's
queue_depth defaults to 1, introducing a performance regression.


Hans, may I ask what became of this patch? I don't see it in the queue.


It is here:

https://git.kernel.org/cgit/linux/kernel/git/gregkh/usb.git/commit/?h=usb-linus&id=593224ea77b1ca842f45cf76f4deeef44dfbacd1

Which is part of:

https://git.kernel.org/cgit/linux/kernel/git/gregkh/usb.git/log/?h=usb-linus
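
For reference, the regression described above comes down to a single call
in uas_slave_configure(); a rough sketch (the depth argument here is a
guess, the actual commit differs in detail):

static int uas_slave_configure(struct scsi_device *sdev)
{
	/*
	 * Sketch only: without an explicit call the sdev queue_depth stays
	 * at its default of 1, the host-wide limit is not inherited.
	 */
	scsi_change_queue_depth(sdev, sdev->host->can_queue);
	return 0;
}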

Regards,

Hans
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH] scsi: disable VPD page check on error

2016-06-13 Thread Hannes Reinecke
If we encounter an error during VPD page scanning we should be
setting the 'skip_vpd_pages' bit to avoid further accesses.

Signed-off-by: Hannes Reinecke 
---
 drivers/scsi/scsi.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
index 1deb6ad..0359864 100644
--- a/drivers/scsi/scsi.c
+++ b/drivers/scsi/scsi.c
@@ -796,6 +796,7 @@ retry_pg0:
result = scsi_vpd_inquiry(sdev, vpd_buf, 0, vpd_len);
if (result < 0) {
kfree(vpd_buf);
+   sdev->skip_vpd_pages = 1;
return;
}
if (result > vpd_len) {
@@ -822,6 +823,7 @@ retry_pg80:
result = scsi_vpd_inquiry(sdev, vpd_buf, 0x80, vpd_len);
if (result < 0) {
kfree(vpd_buf);
+   sdev->skip_vpd_pages = 1;
return;
}
if (result > vpd_len) {
@@ -851,6 +853,7 @@ retry_pg83:
result = scsi_vpd_inquiry(sdev, vpd_buf, 0x83, vpd_len);
if (result < 0) {
kfree(vpd_buf);
+   sdev->skip_vpd_pages = 1;
return;
}
if (result > vpd_len) {
-- 
1.8.5.6

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH resend] USB: uas: Fix slave queue_depth not being set

2016-06-13 Thread Oliver Neukum
On Tue, 2016-05-31 at 09:18 +0200, Hans de Goede wrote:
> Commit 198de51dbc34 ("USB: uas: Limit qdepth at the scsi-host level")
> removed the scsi_change_queue_depth() call from uas_slave_configure()
> assuming that the slave would inherit the host's queue_depth, which
> that commit sets to the same value.
> 
> This is incorrect, without the scsi_change_queue_depth() call the slave's
> queue_depth defaults to 1, introducing a performance regression.

Hans, may I ask what became of this patch? I don't see it in the queue.

Regards
Oliver


--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] scsi:stex.c Support Pegasus 3 product

2016-06-13 Thread Julian Calaby
Hi Charles,

On Mon, Jun 13, 2016 at 9:40 PM, Charles Chiou  wrote:
> Hi Julian,
>
>
> On 06/10/2016 08:10 AM, Julian Calaby wrote:
>>
>> Hi Charles,
>>
>> On Mon, Jun 6, 2016 at 5:53 PM, Charles Chiou 
>> wrote:
>>>
>>> From: Charles 
>>>
>>> The Pegasus series is a family of RAID products built on Thunderbolt
>>> technology.
>>>
>>> The newest product, Pegasus 3, supports Thunderbolt 3 and uses a
>>> different chip.
>>>
>>> 1. Bump the driver version.
>>>
>>> 2. Add the Pegasus 3 VID and DID and define its device addresses.
>>>
>>> 3. Pegasus 3 uses MSI interrupts, so stex_request_irq enables MSI for the
>>> P3 type.
>>>
>>> 4. For hibernation, take msi_lock in stex_ss_handshake to prevent the msi
>>> register from being written again while handshaking.
>>>
>>> 5. Pegasus 3 does not need a read() back as a flush.
>>>
>>> 6. In stex_ss_intr & stex_abort, P3 only clears the interrupt register when
>>> a vendor-defined interrupt is received.
>>>
>>> 7. Add a reboot notifier and register it in stex_probe for all supported
>>> devices.
>>>
>>> 8. In the restart flow for all supported devices, the notifier callback sets
>>> S6flag so that stex_shutdown & stex_hba_stop send a restart command to the
>>> firmware.
>>>
>>> Signed-off-by: Charles 
>>> Signed-off-by: Paul 
>>> ---
>>>   drivers/scsi/stex.c | 282
>>> +++-
>>>   1 file changed, 214 insertions(+), 68 deletions(-)
>>>
>>> diff --git a/drivers/scsi/stex.c b/drivers/scsi/stex.c
>>> index 5b23175..9de2de2 100644
>>> --- a/drivers/scsi/stex.c
>>> +++ b/drivers/scsi/stex.c
>>> @@ -87,7 +95,7 @@ enum {
>>>  MU_STATE_STOP   = 5,
>>>  MU_STATE_NOCONNECT  = 6,
>>>
>>> -   MU_MAX_DELAY= 120,
>>> +   MU_MAX_DELAY= 50,
>>
>>
>> This won't cause problems for older adapters, right?
>
>
> Correct.
>
>>
>>>  MU_HANDSHAKE_SIGNATURE  = 0x5555,
>>>  MU_HANDSHAKE_SIGNATURE_HALF = 0x5a5a,
>>>  MU_HARD_RESET_WAIT  = 3,
>>> @@ -540,11 +556,15 @@ stex_ss_send_cmd(struct st_hba *hba, struct req_msg
>>> *req, u16 tag)
>>>
>>>  ++hba->req_head;
>>>  hba->req_head %= hba->rq_count+1;
>>> -
>>> -   writel((addr >> 16) >> 16, hba->mmio_base + YH2I_REQ_HI);
>>> -   readl(hba->mmio_base + YH2I_REQ_HI); /* flush */
>>> -   writel(addr, hba->mmio_base + YH2I_REQ);
>>> -   readl(hba->mmio_base + YH2I_REQ); /* flush */
>>> +   if (hba->cardtype == st_P3) {
>>> +   writel((addr >> 16) >> 16, hba->mmio_base + YH2I_REQ_HI);
>>> +   writel(addr, hba->mmio_base + YH2I_REQ);
>>> +   } else {
>>> +   writel((addr >> 16) >> 16, hba->mmio_base + YH2I_REQ_HI);
>>> +   readl(hba->mmio_base + YH2I_REQ_HI); /* flush */
>>> +   writel(addr, hba->mmio_base + YH2I_REQ);
>>> +   readl(hba->mmio_base + YH2I_REQ); /* flush */
>>> +   }
>>
>>
>> The first writel() lines in each branch of the if statement are
>> identical, so they could be outside of it.
>
>
> I'll revise it in the next patch.

On second thought, don't worry about doing this, keep the two
(slightly) different sets of code separate.

>>
>> Would it make sense to add a helper that does the readl() flush only
>> for non-st_P3? This could be a function pointer in the hba structure
>> which shouldn't slow stuff down.
>>
>
> Do you mean registering another function pointer in "struct
> st_card_info" and then pointing to it from the hba structure for non-st_P3?
>
> struct st_card_info {
> struct req_msg * (*alloc_rq) (struct st_hba *);
> int (*map_sg)(struct st_hba *, struct req_msg *, struct st_ccb *);
> void (*send) (struct st_hba *, struct req_msg *, u16);
> unsigned int max_id;
> unsigned int max_lun;
> unsigned int max_channel;
> u16 rq_count;
> u16 rq_size;
> u16 sts_count;
> };

Again, on second thought, don't worry about it.
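(For readers curious what the per-card hook floated above might have looked like,
here is a rough sketch; the hook name and its selection at probe time are
hypothetical, not part of the driver:)

/* Hypothetical: pick the request-posting routine once per card type so the
 * hot path does not branch on hba->cardtype. */
static void stex_post_req_flush(struct st_hba *hba, dma_addr_t addr)
{
        writel((addr >> 16) >> 16, hba->mmio_base + YH2I_REQ_HI);
        readl(hba->mmio_base + YH2I_REQ_HI); /* flush */
        writel(addr, hba->mmio_base + YH2I_REQ);
        readl(hba->mmio_base + YH2I_REQ); /* flush */
}

static void stex_post_req_noflush(struct st_hba *hba, dma_addr_t addr)
{
        writel((addr >> 16) >> 16, hba->mmio_base + YH2I_REQ_HI);
        writel(addr, hba->mmio_base + YH2I_REQ);
}

/* chosen once at probe time, e.g.
 *      hba->post_req = (hba->cardtype == st_P3) ?
 *                      stex_post_req_noflush : stex_post_req_flush;
 */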

>>>   }
>>>
>>>   static void return_abnormal_state(struct st_hba *hba, int status)
>>> @@ -,30 +1160,63 @@ static int stex_ss_handshake(struct st_hba *hba)
>>>  scratch_size = (hba->sts_count+1)*sizeof(u32);
>>>  h->scratch_size = cpu_to_le32(scratch_size);
>>>
>>> -   data = readl(base + YINT_EN);
>>> -   data &= ~4;
>>> -   writel(data, base + YINT_EN);
>>> -   writel((hba->dma_handle >> 16) >> 16, base + YH2I_REQ_HI);
>>> -   readl(base + YH2I_REQ_HI);
>>> -   writel(hba->dma_handle, base + YH2I_REQ);
>>> -   readl(base + YH2I_REQ); /* flush */
>>> +   if (hba->cardtype == st_yel) {
>>
>>
>> Same question again.
>
>
> I'll revise it in the next patch.
>
>>
>>> +   data = readl(base + YINT_EN);
>>> +   data &= ~4;
>>> +   writel(data, base + YINT_EN);
>>> +  

Re: [PATCH] scsi:stex.c Support Pegasus 3 product

2016-06-13 Thread Charles Chiou

Hi Julian,

On 06/10/2016 08:10 AM, Julian Calaby wrote:

Hi Charles,

On Mon, Jun 6, 2016 at 5:53 PM, Charles Chiou  wrote:

From: Charles 

The Pegasus series is a family of RAID products built on Thunderbolt technology.

The newest product, Pegasus 3, supports Thunderbolt 3 and uses a different
chip.

1. Bump the driver version.

2. Add the Pegasus 3 VID and DID and define its device addresses.

3. Pegasus 3 uses MSI interrupts, so stex_request_irq enables MSI for the P3 type.

4. For hibernation, take msi_lock in stex_ss_handshake to prevent the msi register
from being written again while handshaking.

5. Pegasus 3 does not need a read() back as a flush.

6. In stex_ss_intr & stex_abort, P3 only clears the interrupt register when a
vendor-defined interrupt is received.

7. Add a reboot notifier and register it in stex_probe for all supported devices.

8. In the restart flow for all supported devices, the notifier callback sets
S6flag so that stex_shutdown & stex_hba_stop send a restart command to the firmware.

Signed-off-by: Charles 
Signed-off-by: Paul 
---
  drivers/scsi/stex.c | 282 +++-
  1 file changed, 214 insertions(+), 68 deletions(-)

diff --git a/drivers/scsi/stex.c b/drivers/scsi/stex.c
index 5b23175..9de2de2 100644
--- a/drivers/scsi/stex.c
+++ b/drivers/scsi/stex.c
@@ -87,7 +95,7 @@ enum {
 MU_STATE_STOP   = 5,
 MU_STATE_NOCONNECT  = 6,

-   MU_MAX_DELAY= 120,
+   MU_MAX_DELAY= 50,


This won't cause problems for older adapters, right?


Correct.




 MU_HANDSHAKE_SIGNATURE  = 0x5555,
 MU_HANDSHAKE_SIGNATURE_HALF = 0x5a5a,
 MU_HARD_RESET_WAIT  = 3,
@@ -540,11 +556,15 @@ stex_ss_send_cmd(struct st_hba *hba, struct req_msg *req, 
u16 tag)

 ++hba->req_head;
 hba->req_head %= hba->rq_count+1;
-
-   writel((addr >> 16) >> 16, hba->mmio_base + YH2I_REQ_HI);
-   readl(hba->mmio_base + YH2I_REQ_HI); /* flush */
-   writel(addr, hba->mmio_base + YH2I_REQ);
-   readl(hba->mmio_base + YH2I_REQ); /* flush */
+   if (hba->cardtype == st_P3) {
+   writel((addr >> 16) >> 16, hba->mmio_base + YH2I_REQ_HI);
+   writel(addr, hba->mmio_base + YH2I_REQ);
+   } else {
+   writel((addr >> 16) >> 16, hba->mmio_base + YH2I_REQ_HI);
+   readl(hba->mmio_base + YH2I_REQ_HI); /* flush */
+   writel(addr, hba->mmio_base + YH2I_REQ);
+   readl(hba->mmio_base + YH2I_REQ); /* flush */
+   }


The first writel() lines in each branch of the if statement are
identical, so they could be outside of it.


I'll revise it in the next patch.



Would it make sense to add a helper that does the readl() flush only
for non-st_P3? This could be a function pointer in the hba structure
which shouldn't slow stuff down.



Do you mean registering another function pointer in "struct
st_card_info" and then pointing to it from the hba structure for non-st_P3?


struct st_card_info {
struct req_msg * (*alloc_rq) (struct st_hba *);
int (*map_sg)(struct st_hba *, struct req_msg *, struct st_ccb *);
void (*send) (struct st_hba *, struct req_msg *, u16);
unsigned int max_id;
unsigned int max_lun;
unsigned int max_channel;
u16 rq_count;
u16 rq_size;
u16 sts_count;
};


  }

  static void return_abnormal_state(struct st_hba *hba, int status)
@@ -974,15 +994,31 @@ static irqreturn_t stex_ss_intr(int irq, void *__hba)

 spin_lock_irqsave(hba->host->host_lock, flags);

-   data = readl(base + YI2H_INT);
-   if (data && data != 0xffffffff) {
-   /* clear the interrupt */
-   writel(data, base + YI2H_INT_C);
-   stex_ss_mu_intr(hba);
-   spin_unlock_irqrestore(hba->host->host_lock, flags);
-   if (unlikely(data & SS_I2H_REQUEST_RESET))
-   queue_work(hba->work_q, &hba->reset_work);
-   return IRQ_HANDLED;
+   if (hba->cardtype == st_yel) {


I note that there are a few different card types beyond st_yel and
st_P3. Does this function only get called for st_yel and st_P3?



This function is only called for st_yel & st_P3.


+   data = readl(base + YI2H_INT);
+   if (data && data != 0xffffffff) {
+   /* clear the interrupt */
+   writel(data, base + YI2H_INT_C);
+   stex_ss_mu_intr(hba);
+   spin_unlock_irqrestore(hba->host->host_lock, flags);
+   if (unlikely(data & SS_I2H_REQUEST_RESET))
+   queue_work(hba->work_q, &hba->reset_work);
+   return IRQ_HANDLED;
+   }
+   } else {
+   data = readl(base + 

[target:nvmet-configfs-ng 30/35] drivers/nvme/target/configfs-ng.c:253:18: error: 'struct nvmet_port' has no member named 'port_binding_mutex'

2016-06-13 Thread kbuild test robot
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git 
nvmet-configfs-ng
head:   b75ad796462431e38bba0fb04d277fd83c919575
commit: b57da10630e0fb2243e901f2df910c5c980e922e [30/35] nvmet/configfs-ng: 
Introduce struct nvmet_port_binding
config: openrisc-allmodconfig (attached as .config)
compiler: or32-linux-gcc (GCC) 4.5.1-or32-1.0rc1
reproduce:
wget 
https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross
 -O ~/bin/make.cross
chmod +x ~/bin/make.cross
git checkout b57da10630e0fb2243e901f2df910c5c980e922e
# save the attached .config to linux build tree
make.cross ARCH=openrisc 

Note: the target/nvmet-configfs-ng HEAD 
b75ad796462431e38bba0fb04d277fd83c919575 builds fine.
  It only hurts bisectibility.

All error/warnings (new ones prefixed by >>):

   drivers/nvme/target/configfs-ng.c: In function 'nvmet_port_disable':
>> drivers/nvme/target/configfs-ng.c:253:18: error: 'struct nvmet_port' has no 
>> member named 'port_binding_mutex'
   drivers/nvme/target/configfs-ng.c:255:19: error: 'struct nvmet_port_binding' 
has no member named 'subsys_node'
   drivers/nvme/target/configfs-ng.c:256:20: error: 'struct nvmet_port' has no 
member named 'port_binding_mutex'
>> drivers/nvme/target/configfs-ng.c:262:2: warning: passing argument 1 of 
>> 'ops->remove_port' from incompatible pointer type
   drivers/nvme/target/configfs-ng.c:262:2: note: expected 'struct nvmet_port 
*' but argument is of type 'struct nvmet_port_binding *'
   drivers/nvme/target/configfs-ng.c: In function 'nvmet_port_enable_store':
>> drivers/nvme/target/configfs-ng.c:302:3: warning: passing argument 1 of 
>> 'ops->add_port' from incompatible pointer type
   drivers/nvme/target/configfs-ng.c:302:3: note: expected 'struct nvmet_port 
*' but argument is of type 'struct nvmet_port_binding *'
   drivers/nvme/target/configfs-ng.c:309:19: error: 'struct nvmet_port' has no 
member named 'port_binding_mutex'
   drivers/nvme/target/configfs-ng.c:311:20: error: 'struct nvmet_port_binding' 
has no member named 'subsys_node'
   drivers/nvme/target/configfs-ng.c:312:21: error: 'struct nvmet_port' has no 
member named 'port_binding_mutex'

vim +253 drivers/nvme/target/configfs-ng.c

   247  struct nvmet_subsys *subsys = pb->nf_subsys;
   248  struct nvmet_port *port = pb->port;
   249  
   250  if (!ops || !port)
   251  return;
   252  
  > 253  mutex_lock(&port->port_binding_mutex);
    254  pb->enabled = false;
    255  list_del_init(&pb->subsys_node);
    256  mutex_unlock(&port->port_binding_mutex);
   257  
   258  mutex_lock(>pb_list_mutex);
    259  list_del_init(&pb->node);
   260  mutex_unlock(>pb_list_mutex);
   261  
 > 262  ops->remove_port(pb);
    263  nvmet_put_transport(&pb->disc_addr);
   264  pb->nf_ops = NULL;
   265  }
   266  
   267  static ssize_t nvmet_port_enable_show(struct config_item *item, char 
*page)
   268  {
   269  struct nvmet_port_binding *pb = to_nvmet_port_binding(item);
   270  
   271  return sprintf(page, "%d\n", pb->enabled);
   272  }
   273  
   274  static ssize_t nvmet_port_enable_store(struct config_item *item,
   275  const char *page, size_t count)
   276  {
   277  struct nvmet_port *port;
   278  struct nvmet_port_binding *pb = to_nvmet_port_binding(item);
   279  struct nvmet_subsys *subsys = pb->nf_subsys;
   280  struct nvmet_fabrics_ops *ops;
   281  bool enable;
   282  int rc;
   283  
   284  printk("Entering port enable %d\n", pb->disc_addr.trtype);
   285  
    286  if (strtobool(page, &enable))
   287  return -EINVAL;
   288  
   289  if (enable) {
   290  if (pb->enabled) {
   291  pr_warn("port already enabled: %d\n",
   292  pb->disc_addr.trtype);
   293  goto out;
   294  }
   295  
    296  ops = nvmet_get_transport(&pb->disc_addr);
   297  if (IS_ERR(ops))
   298  return PTR_ERR(ops);
   299  
   300  pb->nf_ops = ops;
   301  
 > 302  rc = ops->add_port(pb);
   303  if (rc) {
    304  nvmet_put_transport(&pb->disc_addr);
   305  return rc;

---
0-DAY kernel test infrastructureOpen Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation




[target:nvmet-configfs-ng 30/35] drivers/nvme/target/configfs-ng.c:253:18: error: 'struct nvmet_port' has no member named 'port_binding_mutex'; did you mean 'port_binding_list'?

2016-06-13 Thread kbuild test robot
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git 
nvmet-configfs-ng
head:   b75ad796462431e38bba0fb04d277fd83c919575
commit: b57da10630e0fb2243e901f2df910c5c980e922e [30/35] nvmet/configfs-ng: 
Introduce struct nvmet_port_binding
config: i386-allmodconfig (attached as .config)
compiler: gcc-6 (Debian 6.1.1-1) 6.1.1 20160430
reproduce:
git checkout b57da10630e0fb2243e901f2df910c5c980e922e
# save the attached .config to linux build tree
make ARCH=i386 

Note: the target/nvmet-configfs-ng HEAD 
b75ad796462431e38bba0fb04d277fd83c919575 builds fine.
  It only hurts bisectibility.

All errors (new ones prefixed by >>):

   In file included from include/linux/notifier.h:13:0,
from include/linux/memory_hotplug.h:6,
from include/linux/mmzone.h:737,
from include/linux/gfp.h:5,
from include/linux/kmod.h:22,
from include/linux/module.h:13,
from drivers/nvme/target/configfs-ng.c:5:
   drivers/nvme/target/configfs-ng.c: In function 'nvmet_port_disable':
>> drivers/nvme/target/configfs-ng.c:253:18: error: 'struct nvmet_port' has no 
>> member named 'port_binding_mutex'; did you mean 'port_binding_list'?
 mutex_lock(&port->port_binding_mutex);
 ^
   include/linux/mutex.h:146:44: note: in definition of macro 'mutex_lock'
#define mutex_lock(lock) mutex_lock_nested(lock, 0)
   ^~~~
>> drivers/nvme/target/configfs-ng.c:255:19: error: 'struct nvmet_port_binding' 
>> has no member named 'subsys_node'
  list_del_init(&pb->subsys_node);
  ^~
   drivers/nvme/target/configfs-ng.c:256:20: error: 'struct nvmet_port' has no 
member named 'port_binding_mutex'; did you mean 'port_binding_list'?
 mutex_unlock(&port->port_binding_mutex);
   ^~
>> drivers/nvme/target/configfs-ng.c:262:19: error: passing argument 1 of 
>> 'ops->remove_port' from incompatible pointer type 
>> [-Werror=incompatible-pointer-types]
 ops->remove_port(pb);
  ^~
   drivers/nvme/target/configfs-ng.c:262:19: note: expected 'struct nvmet_port 
*' but argument is of type 'struct nvmet_port_binding *'
   drivers/nvme/target/configfs-ng.c: In function 'nvmet_port_enable_store':
>> drivers/nvme/target/configfs-ng.c:302:22: error: passing argument 1 of 
>> 'ops->add_port' from incompatible pointer type 
>> [-Werror=incompatible-pointer-types]
  rc = ops->add_port(pb);
 ^~
   drivers/nvme/target/configfs-ng.c:302:22: note: expected 'struct nvmet_port 
*' but argument is of type 'struct nvmet_port_binding *'
   In file included from include/linux/notifier.h:13:0,
from include/linux/memory_hotplug.h:6,
from include/linux/mmzone.h:737,
from include/linux/gfp.h:5,
from include/linux/kmod.h:22,
from include/linux/module.h:13,
from drivers/nvme/target/configfs-ng.c:5:
   drivers/nvme/target/configfs-ng.c:309:19: error: 'struct nvmet_port' has no 
member named 'port_binding_mutex'; did you mean 'port_binding_list'?
  mutex_lock(&port->port_binding_mutex);
  ^
   include/linux/mutex.h:146:44: note: in definition of macro 'mutex_lock'
#define mutex_lock(lock) mutex_lock_nested(lock, 0)
   ^~~~
   drivers/nvme/target/configfs-ng.c:311:20: error: 'struct nvmet_port_binding' 
has no member named 'subsys_node'
  list_add_tail(&pb->subsys_node, &port->port_binding_list);
   ^~
   drivers/nvme/target/configfs-ng.c:312:21: error: 'struct nvmet_port' has no 
member named 'port_binding_mutex'; did you mean 'port_binding_list'?
  mutex_unlock(&port->port_binding_mutex);
^~
   cc1: some warnings being treated as errors

vim +253 drivers/nvme/target/configfs-ng.c

   247  struct nvmet_subsys *subsys = pb->nf_subsys;
   248  struct nvmet_port *port = pb->port;
   249  
   250  if (!ops || !port)
   251  return;
   252  
  > 253  mutex_lock(&port->port_binding_mutex);
    254  pb->enabled = false;
  > 255  list_del_init(&pb->subsys_node);
  > 256  mutex_unlock(&port->port_binding_mutex);
   257  
   258  mutex_lock(>pb_list_mutex);
    259  list_del_init(&pb->node);
   260  mutex_unlock(>pb_list_mutex);
   261  
 > 262  ops->remove_port(pb);
    263  nvmet_put_transport(&pb->disc_addr);
   264  pb->nf_ops = NULL;
   265  }
   266  
   267  static ssize_t nvmet_port_enable_show(struct config_item *item, char 
*page)
   268  {
   269  struct nvmet_port_binding *pb = to_nvmet_port_binding(item);
   270  
   271  return sprintf(page, "%d\n", pb->enabled);
   272  }
   273  
   274  static ssize_t nvmet_port_enable_store(struct config_item *item,
   

Re: kernel BUG in drivers/scsi/53c700.c:1129

2016-06-13 Thread Christoph Hellwig
On Fri, Jun 10, 2016 at 02:43:52PM -0700, James Bottomley wrote:
> OK, I checked: snic and fnic use SCSI_NO_TAG but they don't save
> anything in current_cmnd, so they can't rely on the original behaviour.
>  I think we'll be safe with a local change in 53c700.c

Please move the current_cmnd field in struct scsi_device into the 53c700
driver while you're at it, so that others don't accidentally rely on it.
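Roughly, "move the field into the driver" would mean keeping the pointer in
53c700's own per-device data instead of the shared struct scsi_device; a hedged
sketch (field placement and helper name are illustrative, not an actual patch):

/* Illustrative only: a driver-private home for the pointer. */
struct NCR_700_Device_Parameters {
        /* ... existing per-device fields ... */
        struct scsi_cmnd *current_cmnd; /* was sdev->current_cmnd */
};

static inline void NCR_700_set_current_cmnd(struct scsi_device *sdev,
                                            struct scsi_cmnd *cmnd)
{
        struct NCR_700_Device_Parameters *p = sdev->hostdata;

        p->current_cmnd = cmnd;
}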


Re: block: don't check request size in blk_cloned_rq_check_limits()

2016-06-13 Thread Christoph Hellwig
On Sat, Jun 11, 2016 at 03:10:06PM +0200, Hannes Reinecke wrote:
> Well, the primary issue is that 'blk_cloned_rq_check_limits()' doesn't check
> for BLOCK_PC, so this particular check would be applied for every request.

So fix it..

> But as it turns out, even adding a check for BLOCK_PC doesn't help, so we're
> indeed seeing REQ_TYPE_FS requests with larger max_sector counts.
> 
> As to _why_ this happens I frankly have no idea. I have been staring at this
> particular code for over a year now (I've got another bug pending where we
> hit the _other_ if clause), but to no avail.
> So I've resolved to drop the check altogether, seeing that max_sector size
> is _not_ something which gets changed during failover.
> Therefore if the max_sector count is wrong for the cloned request it was
> already wrong for the original request, and we should've errored it out far
> earlier.
> The max_segments count, OTOH, _might_ change during failover (different
> hardware has different max_segments setting, and this is being changed
> during sg mapping), so there is some value to be had from testing it here.
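For readers following along, the function under discussion performs two checks,
roughly of this shape (a paraphrased sketch based on the thread, not the verbatim
block-layer source):

/* Paraphrased sketch of the two checks being debated. */
static int cloned_rq_check_limits_sketch(struct request_queue *q,
                                         struct request *rq)
{
        /* the size check Hannes wants to drop (or gate on BLOCK_PC) */
        if (blk_rq_sectors(rq) > blk_queue_get_max_sectors(q, rq->cmd_flags))
                return -EIO;

        /* the segment check stays: max_segments can legitimately differ
         * between paths after a failover */
        blk_recalc_rq_segments(rq);
        if (rq->nr_phys_segments > queue_max_segments(q))
                return -EIO;

        return 0;
}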

I really think we need to drill down and figure out what's going on here
first.


Re: [PATCH v2] Add support for SCT Write Same

2016-06-13 Thread Christoph Hellwig
On Thu, Jun 09, 2016 at 10:24:40PM -0500, Shaun Tancheff wrote:
> >  - The plain page_address above looks harmful, how do we know that
> >the page is mapped into kernel memory?  This might actually be broken
> >already, though.
> 
> I think it just happens to work because it's always used with a recently
> allocated page. Fixing it to include the (possible) offset is just a good 
> thing.

I haven't spent much time with libata recently, but what prevents an
arbitrary user page ending up here through SG_IO?
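The worry is a page without a permanent kernel mapping (highmem, or a user page
arriving via SG_IO); the usual defensive pattern is a temporary mapping, roughly
as below ('sg', 'dst' and 'len' are placeholders for whatever the libata path
actually carries):

        /* Map the scatterlist page instead of trusting page_address(). */
        void *vaddr = kmap_atomic(sg_page(sg));

        memcpy(dst, vaddr + sg->offset, len);
        kunmap_atomic(vaddr);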


Re: [PATCH v3 1/3] Add bio/request flags for using ZBC/ZAC commands

2016-06-13 Thread Christoph Hellwig
On Fri, Jun 10, 2016 at 02:13:53AM -0500, Shaun Tancheff wrote:
> T10 ZBC and T13 ZAC specify operations for Zoned devices.
> 
> To be able to access zone information and to open and close zones,
> flags for the report zones command (REQ_REPORT_ZONES) and for
> open and close zone (REQ_OPEN_ZONE and REQ_CLOSE_ZONE) are added
> for use by struct bio's bi_rw and by struct request's cmd_flags.

These need to be new operations, e.g. they should be in the REQ_OP_*
enum.  And please use a separate opcode for each actual operation.
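Concretely, "new operations rather than flags" would mean something along these
lines (a purely hypothetical illustration; the names are made up, not what was
eventually merged):

/* Hypothetical: zone management as first-class opcodes, one per action,
 * instead of extra flag bits on bi_rw / cmd_flags. */
enum example_req_op {
        EXAMPLE_REQ_OP_READ,
        EXAMPLE_REQ_OP_WRITE,
        EXAMPLE_REQ_OP_DISCARD,
        EXAMPLE_REQ_OP_ZONE_REPORT,
        EXAMPLE_REQ_OP_ZONE_OPEN,
        EXAMPLE_REQ_OP_ZONE_CLOSE,
};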


Re: [GIT PULL] SCSI fixes for 4.7-rc2

2016-06-13 Thread Hannes Reinecke
On 06/11/2016 11:03 PM, James Bottomley wrote:
> On Sat, 2016-06-11 at 13:25 -0700, Linus Torvalds wrote:
>> On Sat, Jun 11, 2016 at 12:41 PM, James Bottomley
>>  wrote:
>>>
>>> The QEMU people have accepted it as their bug and are fixing it.
>>
>> Of course they are. Somebody found a bug in their device model, I'd
>> expect nothing else.
>>
>> But I'm not worried about qemu. I'm worried about all the other 
>> random devices that have never been tested.
> 
> Most of the other devices that are likely to misbehave don't advertise
> high levels of SCSI conformance, so we seem to be mostly covered.
> 
And we have been running the very patch in SLES for over a year now,
without a single issue being reported.

>>>  There's no other course of action, really because we can't stop
>>> people
>>> sending this command using the BLOCK_PC interface from user space,
>>> so
>>> it's now a known and easy to use way of stopping the device from
>>> responding.
>>
>> Bah. That's not an argument from kernel space. We've had that 
>> forever. Broken device that hangs up when you try to read past the 
>> end? If you can open the raw device for reading, you can still do a
>> SCSI_IOCTL_SEND_COMMAND to send that read command past the end.
>>
>> The fact that you can craft special commands that can cause problems
>> for specific devices (if you have access to the raw device) does 
>> *not* at all argue that the kernel should then do those accesses of 
>> its own volition.
>>
>> My worry basically comes down to: we're clearly now doing something
>> that has never ever been tested by anybody before.
>>
Not quite. See above.

The reported issue came from someone who has been running the very
latest linux kernel in a VM which was hosted on an ancient version of
QEMU. Hardly a common scenario.

>> And I think that the assumption that the bug would magically be
>> limited to qemu is a *big* assumption.
> 
> How do we ever find out if we don't test it, though?  I'm sure some
> obscure minor celebrity trying to get on the chat show circuit once
> said "what is userspace except a test case for the kernel?"
> 
> If this is the only problem that turns up, I think we're done.  If we
> get any more we can consider either blacklisting all CD type devices or
> raising the conformance bar to SPC-3.
> 
I'm fully with James here.
The alternative would be to whitelist _every_ conformant device,
resulting in lots of unhappy customers until we've got the whitelist
settled.

Having to discuss with customers why Linux doesn't follow the specs is
infinitely harder than discussing with customers whose _hardware_
doesn't follow the specs.

Cheers,

Hannes
-- 
Dr. Hannes ReineckeTeamlead Storage & Networking
h...@suse.de   +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)