Re: VL805 USB 3.0 does not see connected devices (only on x86_64) (x86 is ok)

2017-02-18 Thread c400
[  804.441424] usb 4-1: new SuperSpeed USB device number 2 using xhci_hcd
[  804.462165] usb 4-1: New USB device found, idVendor=0951, idProduct=1656
[  804.462169] usb 4-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[  804.462172] usb 4-1: Product: DT Ultimate G2
[  804.462174] usb 4-1: Manufacturer: Kingston
[  804.462176] usb 4-1: SerialNumber: 0018F3D97163BB71B1480004
[  804.462182] device: '4-1': device_add
[  804.462219] bus: 'usb': add device 4-1
[  804.462226] PM: Adding info for usb:4-1
[  804.462266] bus: 'usb': driver_probe_device: matched device 4-1
with driver usb
[  804.462267] bus: 'usb': really_probe: probing driver usb with device 4-1
[  804.462270] devices_kset: Moving 4-1 to end of list
[  804.462652] device: '4-1:1.0': device_add
[  804.462664] bus: 'usb': add device 4-1:1.0
[  804.462668] PM: Adding info for usb:4-1:1.0
[  804.462686] bus: 'usb': driver_probe_device: matched device 4-1:1.0
with driver usb-storage
[  804.462688] bus: 'usb': really_probe: probing driver usb-storage
with device 4-1:1.0
[  804.462691] devices_kset: Moving 4-1:1.0 to end of list
[  804.462697] usb-storage 4-1:1.0: USB Mass Storage device detected
[  804.463039] scsi host7: usb-storage 4-1:1.0
[  804.463043] device: 'host7': device_add
[  804.463049] bus: 'scsi': add device host7
[  804.463077] PM: Adding info for scsi:host7
[  804.463083] device: 'host7': device_add
[  804.463098] PM: Adding info for No Bus:host7
[  804.463106] driver: 'usb-storage': driver_bound: bound to device '4-1:1.0'
[  804.463107] bus: 'usb': really_probe: bound device 4-1:1.0 to
driver usb-storage
[  804.463109] device: 'ep_81': device_add
[  804.463117] PM: Adding info for No Bus:ep_81
[  804.463119] device: 'ep_02': device_add
[  804.463126] PM: Adding info for No Bus:ep_02
[  804.463128] driver: 'usb': driver_bound: bound to device '4-1'
[  804.463129] bus: 'usb': really_probe: bound device 4-1 to driver usb
[  804.463134] device: 'ep_00': device_add
[  804.463147] PM: Adding info for No Bus:ep_00
[  805.473481] scsi 7:0:0:0: Direct-Access Kingston DT Ultimate G2
  PMAP PQ: 0 ANSI: 0 CCS
[  805.473484] device: 'target7:0:0': device_add
[  805.473488] bus: 'scsi': add device target7:0:0
[  805.473494] PM: Adding info for scsi:target7:0:0
[  805.473503] device: '7:0:0:0': device_add
[  805.473519] bus: 'scsi': add device 7:0:0:0
[  805.473524] PM: Adding info for scsi:7:0:0:0
[  805.473528] bus: 'scsi': driver_probe_device: matched device
7:0:0:0 with driver sd
[  805.473529] bus: 'scsi': really_probe: probing driver sd with device 7:0:0:0
[  805.473531] devices_kset: Moving 7:0:0:0 to end of list
[  805.473537] device: '7:0:0:0': device_add
[  805.473549] PM: Adding info for No Bus:7:0:0:0
[  805.473555] driver: 'sd': driver_bound: bound to device '7:0:0:0'
[  805.473556] bus: 'scsi': really_probe: bound device 7:0:0:0 to driver sd
[  805.473558] device: '7:0:0:0': device_add
[  805.473580] PM: Adding info for No Bus:7:0:0:0
[  805.473593] device: 'sg7': device_add
[  805.473608] PM: Adding info for No Bus:sg7
[  805.473679] sd 7:0:0:0: Attached scsi generic sg7 type 0
[  805.473682] device: '7:0:0:0': device_add
[  805.473690] PM: Adding info for No Bus:7:0:0:0
[  805.473936] sd 7:0:0:0: [sdd] 31293440 512-byte logical blocks:
(16.0 GB/14.9 GiB)
[  805.474070] sd 7:0:0:0: [sdd] Write Protect is off
[  805.474073] sd 7:0:0:0: [sdd] Mode Sense: 23 00 00 00
[  805.474213] sd 7:0:0:0: [sdd] No Caching mode page found
[  805.474214] sd 7:0:0:0: [sdd] Assuming drive cache: write through
[  805.474219] device: '8:48': device_add
[  805.474233] PM: Adding info for No Bus:8:48
[  805.474244] device: 'sdd': device_add
[  805.474258] PM: Adding info for No Bus:sdd
[  805.770620]  sdd: sdd1
[  805.770627] device: 'sdd1': device_add
[  805.770642] PM: Adding info for No Bus:sdd1
[  805.771818] sd 7:0:0:0: [sdd] Attached SCSI removable disk
[  805.811101] DMAR: DRHD: handling fault status reg 2
[  805.811108] DMAR: [DMA Read] Request device [07:00.0] fault addr
fff98000 [fault reason 06] PTE Read access is not set
[  805.811139] xhci_hcd 0000:07:00.0: WARNING: Host System Error
[  805.848184] xhci_hcd 0000:07:00.0: Host not halted after 16000 microseconds.

2017-02-11 16:32 GMT+03:00 c400 :
> no, I've already tested that and wrote about those unsuccessful experiments:
> https://www.mail-archive.com/linux-usb@vger.kernel.org/msg80329.html
>
> 2017-02-11 0:45 GMT+03:00 c400 :
>> sorry, my copy of this mail thread got cleaned up, so I hadn't seen your reply
>>
>> tested on 4.9.9 kernel
>>
>> [13964.125187] sd 7:0:0:0: [sdc] Attached SCSI removable disk
>> [13964.150525] DMAR: DRHD: handling fault status reg 2
>> [13964.150532] DMAR: [DMA Read] Request device [02:00.0] fault addr
>> fffb1000 [fault reason 06] PTE Read access is not set
>> [13995.848713] usb 4-2: reset SuperSpeed USB device number 2 using xhci_hcd
>> [14001.072566] usb 4-2: device descriptor read/8, error -110
>> [14001.179732] usb 4-2: reset SuperSpeed USB device number 2 using xhci_hcd
>> [14

Re: [RFC v2 11/20] scsi: megaraid: Replace PCI pool old API

2017-02-18 Thread Peter Senna Tschudin
On Sat, Feb 18, 2017 at 09:35:47AM +0100, Romain Perier wrote:
> The PCI pool API is deprecated. This commit replaces the old PCI pool
> API with the appropriate functions from the DMA pool API.

Did not apply on linux-next-20170217


> 
> Signed-off-by: Romain Perier 
> ---
>  drivers/scsi/megaraid/megaraid_mbox.c   | 30 -
>  drivers/scsi/megaraid/megaraid_mm.c | 29 
>  drivers/scsi/megaraid/megaraid_sas_base.c   | 25 +++---
>  drivers/scsi/megaraid/megaraid_sas_fusion.c | 51 
> +++--
>  4 files changed, 70 insertions(+), 65 deletions(-)
> 
> diff --git a/drivers/scsi/megaraid/megaraid_mbox.c 
> b/drivers/scsi/megaraid/megaraid_mbox.c
> index f0987f2..6d0bd3a 100644
> --- a/drivers/scsi/megaraid/megaraid_mbox.c
> +++ b/drivers/scsi/megaraid/megaraid_mbox.c
> @@ -1153,8 +1153,8 @@ megaraid_mbox_setup_dma_pools(adapter_t *adapter)
>  
>  
>   // Allocate memory for 16-bytes aligned mailboxes
> - raid_dev->mbox_pool_handle = pci_pool_create("megaraid mbox pool",
> - adapter->pdev,
> + raid_dev->mbox_pool_handle = dma_pool_create("megaraid mbox pool",
> + &adapter->pdev->dev,
>   sizeof(mbox64_t) + 16,
>   16, 0);
>  
> @@ -1164,7 +1164,7 @@ megaraid_mbox_setup_dma_pools(adapter_t *adapter)
>  
>   mbox_pci_blk = raid_dev->mbox_pool;
>   for (i = 0; i < MBOX_MAX_SCSI_CMDS; i++) {
> - mbox_pci_blk[i].vaddr = pci_pool_alloc(
> + mbox_pci_blk[i].vaddr = dma_pool_alloc(
>   raid_dev->mbox_pool_handle,
>   GFP_KERNEL,
>   &mbox_pci_blk[i].dma_addr);
> @@ -1181,8 +1181,8 @@ megaraid_mbox_setup_dma_pools(adapter_t *adapter)
>* share common memory pool. Passthru structures piggyback on memory
>* allocted to extended passthru since passthru is smaller of the two
>*/
> - raid_dev->epthru_pool_handle = pci_pool_create("megaraid mbox pthru",
> - adapter->pdev, sizeof(mraid_epassthru_t), 128, 0);
> + raid_dev->epthru_pool_handle = dma_pool_create("megaraid mbox pthru",
> + &adapter->pdev->dev, sizeof(mraid_epassthru_t), 128, 0);
>  
>   if (raid_dev->epthru_pool_handle == NULL) {
>   goto fail_setup_dma_pool;
> @@ -1190,7 +1190,7 @@ megaraid_mbox_setup_dma_pools(adapter_t *adapter)
>  
>   epthru_pci_blk = raid_dev->epthru_pool;
>   for (i = 0; i < MBOX_MAX_SCSI_CMDS; i++) {
> - epthru_pci_blk[i].vaddr = pci_pool_alloc(
> + epthru_pci_blk[i].vaddr = dma_pool_alloc(
>   raid_dev->epthru_pool_handle,
>   GFP_KERNEL,
>   &epthru_pci_blk[i].dma_addr);
> @@ -1202,8 +1202,8 @@ megaraid_mbox_setup_dma_pools(adapter_t *adapter)
>  
>   // Allocate memory for each scatter-gather list. Request for 512 bytes
>   // alignment for each sg list
> - raid_dev->sg_pool_handle = pci_pool_create("megaraid mbox sg",
> - adapter->pdev,
> + raid_dev->sg_pool_handle = dma_pool_create("megaraid mbox sg",
> + &adapter->pdev->dev,
>   sizeof(mbox_sgl64) * MBOX_MAX_SG_SIZE,
>   512, 0);
>  
> @@ -1213,7 +1213,7 @@ megaraid_mbox_setup_dma_pools(adapter_t *adapter)
>  
>   sg_pci_blk = raid_dev->sg_pool;
>   for (i = 0; i < MBOX_MAX_SCSI_CMDS; i++) {
> - sg_pci_blk[i].vaddr = pci_pool_alloc(
> + sg_pci_blk[i].vaddr = dma_pool_alloc(
>   raid_dev->sg_pool_handle,
>   GFP_KERNEL,
>   &sg_pci_blk[i].dma_addr);
> @@ -1249,29 +1249,29 @@ megaraid_mbox_teardown_dma_pools(adapter_t *adapter)
>  
>   sg_pci_blk = raid_dev->sg_pool;
>   for (i = 0; i < MBOX_MAX_SCSI_CMDS && sg_pci_blk[i].vaddr; i++) {
> - pci_pool_free(raid_dev->sg_pool_handle, sg_pci_blk[i].vaddr,
> + dma_pool_free(raid_dev->sg_pool_handle, sg_pci_blk[i].vaddr,
>   sg_pci_blk[i].dma_addr);
>   }
>   if (raid_dev->sg_pool_handle)
> - pci_pool_destroy(raid_dev->sg_pool_handle);
> + dma_pool_destroy(raid_dev->sg_pool_handle);
>  
>  
>   epthru_pci_blk = raid_dev->epthru_pool;
>   for (i = 0; i < MBOX_MAX_SCSI_CMDS && epthru_pci_blk[i].vaddr; i++) {
> - pci_pool_free(raid_dev->epthru_pool_handle,
> + dma_pool_free(raid_dev->epthru_pool_handle,
>   epthru_pci_blk[i].vaddr, epthr

Re: [RFC v2 00/20] Replace PCI pool by DMA pool API

2017-02-18 Thread Peter Senna Tschudin
On Sat, Feb 18, 2017 at 09:35:36AM +0100, Romain Perier wrote:

Tested all patches by compilation and checkpatch. All of them compile
fine, but patches 11 and 12 need some fixes. You can resend as
PATCH instead of RFC.

> The current PCI pool API consists of simple macros that expand directly
> to the corresponding DMA pool functions. The prototypes are almost the
> same and, semantically, they are very similar. I propose using the DMA
> pool API directly and getting rid of the old API.
> 
> This set of patches replaces the old API with the DMA pool API, adds a
> checkpatch.pl warning about the old API, and removes the defines.
> 
> Changes in v2:
> - Introduced patch 18/20
> - Fixed cosmetic issues: spaces before braces, lines over 80 characters
> - Removed some of the checks for NULL pointers before calling dma_pool_destroy
> - Improved the regexp in checkpatch for pci_pool, thanks to Joe Perches
> - Added Tested-by and Acked-by tags
> 
> Romain Perier (20):
>   block: DAC960: Replace PCI pool old API
>   dmaengine: pch_dma: Replace PCI pool old API
>   IB/mthca: Replace PCI pool old API
>   net: e100: Replace PCI pool old API
>   mlx4: Replace PCI pool old API
>   mlx5: Replace PCI pool old API
>   wireless: ipw2200: Replace PCI pool old API
>   scsi: be2iscsi: Replace PCI pool old API
>   scsi: csiostor: Replace PCI pool old API
>   scsi: lpfc: Replace PCI pool old API
>   scsi: megaraid: Replace PCI pool old API
>   scsi: mpt3sas: Replace PCI pool old API
>   scsi: mvsas: Replace PCI pool old API
>   scsi: pmcraid: Replace PCI pool old API
>   usb: gadget: amd5536udc: Replace PCI pool old API
>   usb: gadget: net2280: Replace PCI pool old API
>   usb: gadget: pch_udc: Replace PCI pool old API
>   usb: host: Remove remaining pci_pool in comments
>   PCI: Remove PCI pool macro functions
>   checkpatch: warn for use of old PCI pool API
> 
>  drivers/block/DAC960.c| 36 ++---
>  drivers/block/DAC960.h|  4 +-
>  drivers/dma/pch_dma.c | 12 ++---
>  drivers/infiniband/hw/mthca/mthca_av.c| 10 ++--
>  drivers/infiniband/hw/mthca/mthca_cmd.c   |  8 +--
>  drivers/infiniband/hw/mthca/mthca_dev.h   |  4 +-
>  drivers/net/ethernet/intel/e100.c | 12 ++---
>  drivers/net/ethernet/mellanox/mlx4/cmd.c  | 10 ++--
>  drivers/net/ethernet/mellanox/mlx4/mlx4.h |  2 +-
>  drivers/net/ethernet/mellanox/mlx5/core/cmd.c | 11 ++--
>  drivers/net/wireless/intel/ipw2x00/ipw2200.c  | 13 ++---
>  drivers/scsi/be2iscsi/be_iscsi.c  |  6 +--
>  drivers/scsi/be2iscsi/be_main.c   |  6 +--
>  drivers/scsi/be2iscsi/be_main.h   |  2 +-
>  drivers/scsi/csiostor/csio_hw.h   |  2 +-
>  drivers/scsi/csiostor/csio_init.c | 11 ++--
>  drivers/scsi/csiostor/csio_scsi.c |  6 +--
>  drivers/scsi/lpfc/lpfc.h  | 10 ++--
>  drivers/scsi/lpfc/lpfc_init.c |  6 +--
>  drivers/scsi/lpfc/lpfc_mem.c  | 73 
> +--
>  drivers/scsi/lpfc/lpfc_scsi.c | 12 ++---
>  drivers/scsi/megaraid/megaraid_mbox.c | 30 +--
>  drivers/scsi/megaraid/megaraid_mm.c   | 29 ++-
>  drivers/scsi/megaraid/megaraid_sas_base.c | 25 -
>  drivers/scsi/megaraid/megaraid_sas_fusion.c   | 51 ++-
>  drivers/scsi/mpt3sas/mpt3sas_base.c   | 73 
> +--
>  drivers/scsi/mvsas/mv_init.c  |  6 +--
>  drivers/scsi/mvsas/mv_sas.c   |  6 +--
>  drivers/scsi/pmcraid.c| 10 ++--
>  drivers/scsi/pmcraid.h|  2 +-
>  drivers/usb/gadget/udc/amd5536udc.c   |  8 +--
>  drivers/usb/gadget/udc/amd5536udc.h   |  4 +-
>  drivers/usb/gadget/udc/net2280.c  | 12 ++---
>  drivers/usb/gadget/udc/net2280.h  |  2 +-
>  drivers/usb/gadget/udc/pch_udc.c  | 31 ++--
>  drivers/usb/host/ehci-hcd.c   |  2 +-
>  drivers/usb/host/fotg210-hcd.c|  2 +-
>  drivers/usb/host/oxu210hp-hcd.c   |  2 +-
>  include/linux/mlx5/driver.h   |  2 +-
>  include/linux/pci.h   |  9 
>  scripts/checkpatch.pl |  9 +++-
>  41 files changed, 284 insertions(+), 287 deletions(-)
> 
> -- 
> 2.9.3
> 
--
To unsubscribe from this list: send the line "unsubscribe linux-usb" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC v2 12/20] scsi: mpt3sas: Replace PCI pool old API

2017-02-18 Thread Peter Senna Tschudin
On Sat, Feb 18, 2017 at 09:35:48AM +0100, Romain Perier wrote:
> The PCI pool API is deprecated. This commit replaces the old PCI pool
> API with the appropriate functions from the DMA pool API.

Please run checkpatch, fix the style issue and resend.

> 
> Signed-off-by: Romain Perier 
> ---
>  drivers/scsi/mpt3sas/mpt3sas_base.c | 73 
> +
>  1 file changed, 34 insertions(+), 39 deletions(-)
> 
> diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c 
> b/drivers/scsi/mpt3sas/mpt3sas_base.c
> index a3fe1fb..3c2206d 100644
> --- a/drivers/scsi/mpt3sas/mpt3sas_base.c
> +++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
> @@ -3210,9 +3210,8 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
>   }
>  
>   if (ioc->sense) {
> - pci_pool_free(ioc->sense_dma_pool, ioc->sense, ioc->sense_dma);
> - if (ioc->sense_dma_pool)
> - pci_pool_destroy(ioc->sense_dma_pool);
> + dma_pool_free(ioc->sense_dma_pool, ioc->sense, ioc->sense_dma);
> + dma_pool_destroy(ioc->sense_dma_pool);
>   dexitprintk(ioc, pr_info(MPT3SAS_FMT
>   "sense_pool(0x%p): free\n",
>   ioc->name, ioc->sense));
> @@ -3220,9 +3219,8 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
>   }
>  
>   if (ioc->reply) {
> - pci_pool_free(ioc->reply_dma_pool, ioc->reply, ioc->reply_dma);
> - if (ioc->reply_dma_pool)
> - pci_pool_destroy(ioc->reply_dma_pool);
> + dma_pool_free(ioc->reply_dma_pool, ioc->reply, ioc->reply_dma);
> + dma_pool_destroy(ioc->reply_dma_pool);
>   dexitprintk(ioc, pr_info(MPT3SAS_FMT
>   "reply_pool(0x%p): free\n",
>   ioc->name, ioc->reply));
> @@ -3230,10 +3228,9 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
>   }
>  
>   if (ioc->reply_free) {
> - pci_pool_free(ioc->reply_free_dma_pool, ioc->reply_free,
> + dma_pool_free(ioc->reply_free_dma_pool, ioc->reply_free,
>   ioc->reply_free_dma);
> - if (ioc->reply_free_dma_pool)
> - pci_pool_destroy(ioc->reply_free_dma_pool);
> + dma_pool_destroy(ioc->reply_free_dma_pool);
>   dexitprintk(ioc, pr_info(MPT3SAS_FMT
>   "reply_free_pool(0x%p): free\n",
>   ioc->name, ioc->reply_free));
> @@ -3244,7 +3241,7 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
>   do {
>   rps = &ioc->reply_post[i];
>   if (rps->reply_post_free) {
> - pci_pool_free(
> + dma_pool_free(
>   ioc->reply_post_free_dma_pool,
>   rps->reply_post_free,
>   rps->reply_post_free_dma);
> @@ -3256,8 +3253,7 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
>   } while (ioc->rdpq_array_enable &&
>  (++i < ioc->reply_queue_count));
>  
> - if (ioc->reply_post_free_dma_pool)
> - pci_pool_destroy(ioc->reply_post_free_dma_pool);
> + dma_pool_destroy(ioc->reply_post_free_dma_pool);
>   kfree(ioc->reply_post);
>   }
>  
> @@ -3278,12 +3274,11 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER 
> *ioc)
>   if (ioc->chain_lookup) {
>   for (i = 0; i < ioc->chain_depth; i++) {
>   if (ioc->chain_lookup[i].chain_buffer)
> - pci_pool_free(ioc->chain_dma_pool,
> + dma_pool_free(ioc->chain_dma_pool,
>   ioc->chain_lookup[i].chain_buffer,
>   ioc->chain_lookup[i].chain_buffer_dma);
>   }
> - if (ioc->chain_dma_pool)
> - pci_pool_destroy(ioc->chain_dma_pool);
> + dma_pool_destroy(ioc->chain_dma_pool);
>   free_pages((ulong)ioc->chain_lookup, ioc->chain_pages);
>   ioc->chain_lookup = NULL;
>   }
> @@ -3458,23 +3453,23 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER 
> *ioc)
>   ioc->name);
>   goto out;
>   }
> - ioc->reply_post_free_dma_pool = pci_pool_create("reply_post_free pool",
> - ioc->pdev, sz, 16, 0);
> + ioc->reply_post_free_dma_pool = dma_pool_create("reply_post_free pool",
> + &ioc->pdev->dev, sz, 16, 0);
>   if (!ioc->reply_post_free_dma_pool) {
>   pr_err(MPT3SAS_FMT
> -  "reply_post_free pool: pci_pool_create failed\n",
> +  "reply_post_free pool: dma_pool_create failed\n",
>ioc->name);
>   goto out;
>   }
>   i = 0;
>   do {
>   ioc->reply_post[i].reply_post_free =
> -  

Re: [RFC v2 11/20] scsi: megaraid: Replace PCI pool old API

2017-02-18 Thread Peter Senna Tschudin
On Sat, Feb 18, 2017 at 09:35:47AM +0100, Romain Perier wrote:

Hi Romain,

Checkpatch gives some warnings you can fix, related to NULL tests before
dma_pool_destroy(), and you changed the indentation style in some of your
changes. Sometimes it is important to keep the style consistent within a
file even if that style is not the default. Please fix and resend.


> The PCI pool API is deprecated. This commit replaces the old PCI pool
> API with the appropriate functions from the DMA pool API.
> 
> Signed-off-by: Romain Perier 
> ---
>  drivers/scsi/megaraid/megaraid_mbox.c   | 30 -
>  drivers/scsi/megaraid/megaraid_mm.c | 29 
>  drivers/scsi/megaraid/megaraid_sas_base.c   | 25 +++---
>  drivers/scsi/megaraid/megaraid_sas_fusion.c | 51 
> +++--
>  4 files changed, 70 insertions(+), 65 deletions(-)
> 
> diff --git a/drivers/scsi/megaraid/megaraid_mbox.c 
> b/drivers/scsi/megaraid/megaraid_mbox.c
> index f0987f2..6d0bd3a 100644
> --- a/drivers/scsi/megaraid/megaraid_mbox.c
> +++ b/drivers/scsi/megaraid/megaraid_mbox.c
> @@ -1153,8 +1153,8 @@ megaraid_mbox_setup_dma_pools(adapter_t *adapter)
>  
>  
>   // Allocate memory for 16-bytes aligned mailboxes
> - raid_dev->mbox_pool_handle = pci_pool_create("megaraid mbox pool",
> - adapter->pdev,
> + raid_dev->mbox_pool_handle = dma_pool_create("megaraid mbox pool",
> + &adapter->pdev->dev,
>   sizeof(mbox64_t) + 16,
>   16, 0);
>  
> @@ -1164,7 +1164,7 @@ megaraid_mbox_setup_dma_pools(adapter_t *adapter)
>  
>   mbox_pci_blk = raid_dev->mbox_pool;
>   for (i = 0; i < MBOX_MAX_SCSI_CMDS; i++) {
> - mbox_pci_blk[i].vaddr = pci_pool_alloc(
> + mbox_pci_blk[i].vaddr = dma_pool_alloc(
>   raid_dev->mbox_pool_handle,
>   GFP_KERNEL,
>   &mbox_pci_blk[i].dma_addr);
> @@ -1181,8 +1181,8 @@ megaraid_mbox_setup_dma_pools(adapter_t *adapter)
>* share common memory pool. Passthru structures piggyback on memory
>* allocted to extended passthru since passthru is smaller of the two
>*/
> - raid_dev->epthru_pool_handle = pci_pool_create("megaraid mbox pthru",
> - adapter->pdev, sizeof(mraid_epassthru_t), 128, 0);
> + raid_dev->epthru_pool_handle = dma_pool_create("megaraid mbox pthru",
> + &adapter->pdev->dev, sizeof(mraid_epassthru_t), 128, 0);
>  
>   if (raid_dev->epthru_pool_handle == NULL) {
>   goto fail_setup_dma_pool;
> @@ -1190,7 +1190,7 @@ megaraid_mbox_setup_dma_pools(adapter_t *adapter)
>  
>   epthru_pci_blk = raid_dev->epthru_pool;
>   for (i = 0; i < MBOX_MAX_SCSI_CMDS; i++) {
> - epthru_pci_blk[i].vaddr = pci_pool_alloc(
> + epthru_pci_blk[i].vaddr = dma_pool_alloc(
>   raid_dev->epthru_pool_handle,
>   GFP_KERNEL,
>   &epthru_pci_blk[i].dma_addr);
> @@ -1202,8 +1202,8 @@ megaraid_mbox_setup_dma_pools(adapter_t *adapter)
>  
>   // Allocate memory for each scatter-gather list. Request for 512 bytes
>   // alignment for each sg list
> - raid_dev->sg_pool_handle = pci_pool_create("megaraid mbox sg",
> - adapter->pdev,
> + raid_dev->sg_pool_handle = dma_pool_create("megaraid mbox sg",
> + &adapter->pdev->dev,
>   sizeof(mbox_sgl64) * MBOX_MAX_SG_SIZE,
>   512, 0);
>  
> @@ -1213,7 +1213,7 @@ megaraid_mbox_setup_dma_pools(adapter_t *adapter)
>  
>   sg_pci_blk = raid_dev->sg_pool;
>   for (i = 0; i < MBOX_MAX_SCSI_CMDS; i++) {
> - sg_pci_blk[i].vaddr = pci_pool_alloc(
> + sg_pci_blk[i].vaddr = dma_pool_alloc(
>   raid_dev->sg_pool_handle,
>   GFP_KERNEL,
>   &sg_pci_blk[i].dma_addr);
> @@ -1249,29 +1249,29 @@ megaraid_mbox_teardown_dma_pools(adapter_t *adapter)
>  
>   sg_pci_blk = raid_dev->sg_pool;
>   for (i = 0; i < MBOX_MAX_SCSI_CMDS && sg_pci_blk[i].vaddr; i++) {
> - pci_pool_free(raid_dev->sg_pool_handle, sg_pci_blk[i].vaddr,
> + dma_pool_free(raid_dev->sg_pool_handle, sg_pci_blk[i].vaddr,
>   sg_pci_blk[i].dma_addr);
>   }
>   if (raid_dev->sg_pool_handle)
> - pci_pool_destroy(raid_dev->sg_pool_handle);
> + dma_pool_destroy(raid_dev->sg_pool_handle);
>  
>  
>   epthru_pci_blk = raid_dev->epthru_pool;
>   

Re: [RFC v2 00/20] Replace PCI pool by DMA pool API

2017-02-18 Thread Romain Perier


Le 18/02/2017 à 14:06, Greg Kroah-Hartman a écrit :
> On Sat, Feb 18, 2017 at 09:35:36AM +0100, Romain Perier wrote:
>> The current PCI pool API consists of simple macros that expand directly
>> to the corresponding DMA pool functions. The prototypes are almost the
>> same and, semantically, they are very similar. I propose using the DMA
>> pool API directly and getting rid of the old API.
>>
>> This set of patches replaces the old API with the DMA pool API, adds a
>> checkpatch.pl warning about the old API, and removes the defines.
> Why is this a "RFC" series?  Personally, I never apply those as it
> implies that the author doesn't think they are ready to be merged :)
>
> thanks,
>
> greg k-h
Hi,

I was not sure about this. I have noticed that most API changes are
tagged as RFC. I can resend a v3 without the RFC prefix if you prefer.

Thanks,
Romain


Re: [RFC v2 00/20] Replace PCI pool by DMA pool API

2017-02-18 Thread Greg Kroah-Hartman
On Sat, Feb 18, 2017 at 09:35:36AM +0100, Romain Perier wrote:
> The current PCI pool API consists of simple macros that expand directly
> to the corresponding DMA pool functions. The prototypes are almost the
> same and, semantically, they are very similar. I propose using the DMA
> pool API directly and getting rid of the old API.
> 
> This set of patches replaces the old API with the DMA pool API, adds a
> checkpatch.pl warning about the old API, and removes the defines.

Why is this a "RFC" series?  Personally, I never apply those as it
implies that the author doesn't think they are ready to be merged :)

thanks,

greg k-h


Re: [RFC v2 01/20] block: DAC960: Replace PCI pool old API

2017-02-18 Thread Peter Senna Tschudin
On Sat, Feb 18, 2017 at 09:35:37AM +0100, Romain Perier wrote:
> The PCI pool API is deprecated. This commit replaces the old PCI pool
> API with the appropriate functions from the DMA pool API.
> 

No new errors added; tested by compilation only.

> Signed-off-by: Romain Perier 
> Acked-by: Peter Senna Tschudin 
> Tested-by: Peter Senna Tschudin 
> ---
>  drivers/block/DAC960.c | 36 ++--
>  drivers/block/DAC960.h |  4 ++--
>  2 files changed, 20 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/block/DAC960.c b/drivers/block/DAC960.c
> index 26a51be..2b221cc 100644
> --- a/drivers/block/DAC960.c
> +++ b/drivers/block/DAC960.c
> @@ -268,17 +268,17 @@ static bool 
> DAC960_CreateAuxiliaryStructures(DAC960_Controller_T *Controller)
>void *AllocationPointer = NULL;
>void *ScatterGatherCPU = NULL;
>dma_addr_t ScatterGatherDMA;
> -  struct pci_pool *ScatterGatherPool;
> +  struct dma_pool *ScatterGatherPool;
>void *RequestSenseCPU = NULL;
>dma_addr_t RequestSenseDMA;
> -  struct pci_pool *RequestSensePool = NULL;
> +  struct dma_pool *RequestSensePool = NULL;
>  
>if (Controller->FirmwareType == DAC960_V1_Controller)
>  {
>CommandAllocationLength = offsetof(DAC960_Command_T, V1.EndMarker);
>CommandAllocationGroupSize = DAC960_V1_CommandAllocationGroupSize;
> -  ScatterGatherPool = pci_pool_create("DAC960_V1_ScatterGather",
> - Controller->PCIDevice,
> +  ScatterGatherPool = dma_pool_create("DAC960_V1_ScatterGather",
> + &Controller->PCIDevice->dev,
>   DAC960_V1_ScatterGatherLimit * sizeof(DAC960_V1_ScatterGatherSegment_T),
>   sizeof(DAC960_V1_ScatterGatherSegment_T), 0);
>if (ScatterGatherPool == NULL)
> @@ -290,18 +290,18 @@ static bool 
> DAC960_CreateAuxiliaryStructures(DAC960_Controller_T *Controller)
>  {
>CommandAllocationLength = offsetof(DAC960_Command_T, V2.EndMarker);
>CommandAllocationGroupSize = DAC960_V2_CommandAllocationGroupSize;
> -  ScatterGatherPool = pci_pool_create("DAC960_V2_ScatterGather",
> - Controller->PCIDevice,
> +  ScatterGatherPool = dma_pool_create("DAC960_V2_ScatterGather",
> + &Controller->PCIDevice->dev,
>   DAC960_V2_ScatterGatherLimit * sizeof(DAC960_V2_ScatterGatherSegment_T),
>   sizeof(DAC960_V2_ScatterGatherSegment_T), 0);
>if (ScatterGatherPool == NULL)
>   return DAC960_Failure(Controller,
>   "AUXILIARY STRUCTURE CREATION (SG)");
> -  RequestSensePool = pci_pool_create("DAC960_V2_RequestSense",
> - Controller->PCIDevice, sizeof(DAC960_SCSI_RequestSense_T),
> +  RequestSensePool = dma_pool_create("DAC960_V2_RequestSense",
> + &Controller->PCIDevice->dev, sizeof(DAC960_SCSI_RequestSense_T),
>   sizeof(int), 0);
>if (RequestSensePool == NULL) {
> - pci_pool_destroy(ScatterGatherPool);
> + dma_pool_destroy(ScatterGatherPool);
>   return DAC960_Failure(Controller,
>   "AUXILIARY STRUCTURE CREATION (SG)");
>}
> @@ -335,16 +335,16 @@ static bool 
> DAC960_CreateAuxiliaryStructures(DAC960_Controller_T *Controller)
>Command->Next = Controller->FreeCommands;
>Controller->FreeCommands = Command;
>Controller->Commands[CommandIdentifier-1] = Command;
> -  ScatterGatherCPU = pci_pool_alloc(ScatterGatherPool, GFP_ATOMIC,
> +  ScatterGatherCPU = dma_pool_alloc(ScatterGatherPool, GFP_ATOMIC,
>   &ScatterGatherDMA);
>if (ScatterGatherCPU == NULL)
> return DAC960_Failure(Controller, "AUXILIARY STRUCTURE CREATION");
>  
>if (RequestSensePool != NULL) {
> -   RequestSenseCPU = pci_pool_alloc(RequestSensePool, GFP_ATOMIC,
> +   RequestSenseCPU = dma_pool_alloc(RequestSensePool, GFP_ATOMIC,
>   &RequestSenseDMA);
> if (RequestSenseCPU == NULL) {
> -pci_pool_free(ScatterGatherPool, ScatterGatherCPU,
> +dma_pool_free(ScatterGatherPool, ScatterGatherCPU,
>  ScatterGatherDMA);
>   return DAC960_Failure(Controller,
>   "AUXILIARY STRUCTURE CREATION");
> @@ -379,8 +379,8 @@ static bool 
> DAC960_CreateAuxiliaryStructures(DAC960_Controller_T *Controller)
>  static void DAC960_DestroyAuxiliaryStructures(DAC960_Controller_T 
> *Controller)
>  {
>int i;
> -  struct pci_pool *ScatterGatherPool = Controller->ScatterGatherPool;
> -  struct pci_pool *RequestSensePool = NULL;
> +  struct dma_pool *ScatterGatherPool = Controller->ScatterGatherPool;
> +  struct dma_pool *RequestSensePool = NULL;
>void *ScatterGatherCPU;
>dma_addr_t ScatterGatherDMA;
>void *RequestSenseCPU;
> @@ -411,9 +411,9 @@ static void 
> DAC960_DestroyAuxiliaryStructures(DAC960_Controller_T *Controller)
> RequestS

Re: v4.10-rc8 (-rc6) boot regression on Intel desktop, does not boot after cold boots, boots after reboot

2017-02-18 Thread Pavel Machek
On Thu 2017-02-16 12:21:13, Linus Torvalds wrote:
> On Thu, Feb 16, 2017 at 12:06 PM, Pavel Machek  wrote:
> >
> > Hmm, that would explain problems at boot _and_ problems during
> > suspend/resume.
> 
> I've committed the revert, and I'm just assuming that the revert also
> fixed your suspend/resume issues, but I wanted to just double-check
> that, since it's only implied, not stated explicitly.

So the boot issue is fixed, but it hung on resume again. v4.9 worked
OK. The display is restored when it hangs on resume, but the mouse is
dead; I guess that means there should be some chance of getting
debugging messages out during resume.

Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

