Re: [PATCH bpf-next v2 27/35] bpf: eliminate rlimit-based memory accounting infra for bpf maps

2020-07-27 Thread Andrii Nakryiko
On Mon, Jul 27, 2020 at 10:47 PM Song Liu  wrote:
>
> On Mon, Jul 27, 2020 at 12:26 PM Roman Gushchin  wrote:
> >
> > Remove rlimit-based accounting infrastructure code, which is not used
> > anymore.
> >
> > Signed-off-by: Roman Gushchin 
> [...]
> >
> >  static void bpf_map_put_uref(struct bpf_map *map)
> > @@ -541,7 +484,7 @@ static void bpf_map_show_fdinfo(struct seq_file *m, 
> > struct file *filp)
> >"value_size:\t%u\n"
> >"max_entries:\t%u\n"
> >"map_flags:\t%#x\n"
> > -  "memlock:\t%llu\n"
> > +  "memlock:\t%llu\n" /* deprecated */
>
> I am not sure whether we can deprecate this one... How difficult is it
> to keep these statistics?
>

It's factually correct now that a BPF map doesn't use any memlock memory, no?

This is actually one way to detect whether RLIMIT_MEMLOCK is necessary
or not: create a small map and check whether its fdinfo has memlock: 0 or not
:)

> Thanks,
> Song


Re: [PATCH bpf-next v2 22/35] bpf: eliminate rlimit-based memory accounting for bpf ringbuffer

2020-07-27 Thread Andrii Nakryiko
On Mon, Jul 27, 2020 at 12:21 PM Roman Gushchin  wrote:
>
> Do not use rlimit-based memory accounting for bpf ringbuffer.
> It has been replaced with the memcg-based memory accounting.
>
> bpf_ringbuf_alloc() can't return anything except ERR_PTR(-ENOMEM)
> and a valid pointer, so to simplify the code make it return NULL
> in the first case. This allows dropping a couple of lines in
> ringbuf_map_alloc() and also makes it look similar to other memory-
> allocating functions like kmalloc().
>
> Signed-off-by: Roman Gushchin 
> ---

LGTM.

Acked-by: Andrii Nakryiko 

>  kernel/bpf/ringbuf.c | 24 
>  1 file changed, 4 insertions(+), 20 deletions(-)
>

[...]


[PATCH -mmotm] pinctrl: mediatek: fix build for tristate changes

2020-07-27 Thread Randy Dunlap
From: Randy Dunlap 

Export mtk_is_virt_gpio() for the case when
CONFIG_PINCTRL_MTK=y
CONFIG_PINCTRL_MTK_V2=y
CONFIG_PINCTRL_MTK_MOORE=y
CONFIG_PINCTRL_MTK_PARIS=m

to fix this build error:

ERROR: modpost: "mtk_is_virt_gpio" [drivers/pinctrl/mediatek/pinctrl-paris.ko] 
undefined!

Signed-off-by: Randy Dunlap 
Cc: Sean Wang 
Cc: linux-media...@lists.infradead.org
---
 drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c |1 +
 1 file changed, 1 insertion(+)

--- mmotm-2020-0727-1818.orig/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
+++ mmotm-2020-0727-1818/drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
@@ -264,6 +264,7 @@ bool mtk_is_virt_gpio(struct mtk_pinctrl
 
return virt_gpio;
 }
+EXPORT_SYMBOL_GPL(mtk_is_virt_gpio);
 
 static int mtk_xt_get_gpio_n(void *data, unsigned long eint_n,
 unsigned int *gpio_n,



Re: [PATCH bpf-next v2 28/35] bpf: eliminate rlimit-based memory accounting for bpf progs

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 12:21 PM Roman Gushchin  wrote:
>
> Do not use rlimit-based memory accounting for bpf progs. It has been
> replaced with memcg-based memory accounting.
>
> Signed-off-by: Roman Gushchin 

Acked-by: Song Liu 


Re: [PATCH 01/24] asm-generic: add generic versions of mmu context functions

2020-07-27 Thread kernel test robot
Hi Nicholas,

I love your patch! Yet something to improve:

[auto build test ERROR on openrisc/for-next]
[also build test ERROR on sparc/master linus/master asm-generic/master v5.8-rc7 
next-20200727]
[cannot apply to nios2/for-linus]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:
https://github.com/0day-ci/linux/commits/Nicholas-Piggin/Use-asm-generic-for-mmu_context-no-op-functions/20200728-113854
base:   https://github.com/openrisc/linux.git for-next
config: c6x-allyesconfig (attached as .config)
compiler: c6x-elf-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=c6x 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All errors (new ones prefixed by >>):

   In file included from ./arch/c6x/include/generated/asm/mmu_context.h:1,
from include/linux/mmu_context.h:5,
from kernel//sched/sched.h:54,
from kernel//sched/core.c:9:
   include/asm-generic/mmu_context.h: In function 'activate_mm':
>> include/asm-generic/mmu_context.h:59:2: error: implicit declaration of 
>> function 'switch_mm' [-Werror=implicit-function-declaration]
  59 |  switch_mm(prev_mm, next_mm, current);
 |  ^
   cc1: some warnings being treated as errors
--
   In file included from ./arch/c6x/include/generated/asm/mmu_context.h:1,
from include/linux/mmu_context.h:5,
from kernel//sched/sched.h:54,
from kernel//sched/rt.c:6:
   include/asm-generic/mmu_context.h: In function 'activate_mm':
>> include/asm-generic/mmu_context.h:59:2: error: implicit declaration of 
>> function 'switch_mm' [-Werror=implicit-function-declaration]
  59 |  switch_mm(prev_mm, next_mm, current);
 |  ^
   kernel//sched/rt.c: At top level:
   kernel//sched/rt.c:668:6: warning: no previous prototype for 
'sched_rt_bandwidth_account' [-Wmissing-prototypes]
 668 | bool sched_rt_bandwidth_account(struct rt_rq *rt_rq)
 |  ^~
   cc1: some warnings being treated as errors

vim +/switch_mm +59 include/asm-generic/mmu_context.h

49  
50  /**
51   * activate_mm - called after exec switches the current task to a new 
mm, to switch to it
52   * @prev_mm: previous mm of this task
53   * @next_mm: new mm
54   */
55  #ifndef activate_mm
56  static inline void activate_mm(struct mm_struct *prev_mm,
57 struct mm_struct *next_mm)
58  {
  > 59  switch_mm(prev_mm, next_mm, current);
60  }
61  #endif
62  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org




Re: [PATCH] USB: Fix kernel NULL pointer when unbinding UHCI from vfio-pci

2020-07-27 Thread WeitaoWang-oc

On Fri, 24 Jul 2020 19:17:49 + Alex wrote:
> On Fri, 24 Jul 2020 12:57:49 +
> WeitaoWang-oc  wrote:
> 
> > On Thu, 23 Jul 2020 12:38:21 -0400, Alan wrote:
> > > On Thu, Jul 23, 2020 at 10:17:35AM -0600, Alex Williamson wrote:
> > > > The IOMMU grouping restriction does solve the hardware issue, so long
> > > > as one driver doesn't blindly assume the driver private data for
> > > > another device and modify it.
> > >
> > > Correction: The IOMMU grouping restriction solves the hardware issue for
> > > vfio-pci.  It won't necessarily help if some other driver comes along
> > > and wants to bind to this hardware.
> > >
> > > >   I do agree that your solution would
> > > > work, requiring all devices are unbound before any can be bound, but it
> > > > also seems difficult to manage.  The issue is largely unique to USB
> > > > AFAIK.  On the other hand, drivers coordinating with each other to
> > > > register their _private_ data as share-able within a set of drivers
> > > > seems like a much more direct and explicit interaction between the
> > > > drivers.  Thanks,
> > >
> > > Yes, that makes sense.  But it would have to be implemented in the
> > > driver core, not in particular subsystems like USB or PCI.  And it might
> > > be seen as overkill, given that only UHCI/OHCI/EHCI devices require this
> > > sort of sharing AFAIK.
> > >
> > > Also, when you think about it, what form would such coordination among
> > > drivers take?  From your description, it sounds like the drivers would
> > > agree to avoid accessing each other's private data if the proper
> > > registration wasn't in place.
> > >
> > > On the other hand, a stronger and perhaps more robust approach would be
> > > to enforce the condition that non-cooperating drivers are never bound to
> > > devices in the same group at the same time.  That's basically what I'm
> > > proposing here -- the question is whether the enforcement should be
> > > instituted in the kernel or should merely be part of a standard protocol
> > > followed by userspace drivers.
> > >
> > > Given that it's currently needed in only one place, it seems reasonable
> > > to leave this as a "gentlemen's agreement" in userspace for the time
> > > being instead of adding it to the kernel.
> > >
> >
> > Suppose the EHCI and UHCI host controllers declare that they do not
> > support P2P (as if they had ACS). Then we can assign the EHCI and UHCI
> > host controllers to separate IOMMU groups: the EHCI host controller to
> > the host and the UHCI host controller to a VM. An ehci_hcd driver
> > load/unload operation in the host will then cause the same issue as discussed
> 
> And you have an example of such a device?  I expect these do not exist,
> nor should they.  It seems like it would be an improper use of ACS.
> Thanks,


In the kernel source tree, drivers/pci/quirks.c declares a device list,
pci_dev_acs_enabled. Devices in that list, such as multi-function devices
without the ACS capability that do not allow peer-to-peer between their
functions, can be assigned to different IOMMU groups separately.

Thanks,
Weitao



Re: [PATCH v3 3/3] remoteproc: qcom_q6v5_mss: Add modem debug policy support

2020-07-27 Thread Bjorn Andersson
On Wed 22 Jul 13:10 PDT 2020, Sibi Sankar wrote:

> Add modem debug policy support which will enable coredumps and live
> debug support when the msadp firmware is present on secure devices.
> 
> Signed-off-by: Sibi Sankar 

Reviewed-by: Bjorn Andersson 

> ---
> 
> v3:
>  * Fix dp_fw leak and create a separate func for dp load [Bjorn]
>  * Reset dp_size on mba_reclaim
> 
> v2:
>  * Use request_firmware_direct [Bjorn]
>  * Use Bjorn's template to show if debug policy is present
>  * Add size check to prevent memcpy out of bounds [Bjorn]
> 
>  drivers/remoteproc/qcom_q6v5_mss.c | 25 -
>  1 file changed, 24 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/remoteproc/qcom_q6v5_mss.c 
> b/drivers/remoteproc/qcom_q6v5_mss.c
> index f4aa61ba220dc..da99c8504a346 100644
> --- a/drivers/remoteproc/qcom_q6v5_mss.c
> +++ b/drivers/remoteproc/qcom_q6v5_mss.c
> @@ -191,6 +191,7 @@ struct q6v5 {
>   phys_addr_t mba_phys;
>   void *mba_region;
>   size_t mba_size;
> + size_t dp_size;
>  
>   phys_addr_t mpss_phys;
>   phys_addr_t mpss_reloc;
> @@ -408,6 +409,21 @@ static int q6v5_xfer_mem_ownership(struct q6v5 *qproc, 
> int *current_perm,
>  current_perm, next, perms);
>  }
>  
> +static void q6v5_debug_policy_load(struct q6v5 *qproc)
> +{
> + const struct firmware *dp_fw;
> +
> + if (request_firmware_direct(&dp_fw, "msadp", qproc->dev))
> + return;
> +
> + if (SZ_1M + dp_fw->size <= qproc->mba_size) {
> + memcpy(qproc->mba_region + SZ_1M, dp_fw->data, dp_fw->size);
> + qproc->dp_size = dp_fw->size;
> + }
> +
> + release_firmware(dp_fw);
> +}
> +
>  static int q6v5_load(struct rproc *rproc, const struct firmware *fw)
>  {
>   struct q6v5 *qproc = rproc->priv;
> @@ -419,6 +435,7 @@ static int q6v5_load(struct rproc *rproc, const struct 
> firmware *fw)
>   }
>  
>   memcpy(qproc->mba_region, fw->data, fw->size);
> + q6v5_debug_policy_load(qproc);
>  
>   return 0;
>  }
> @@ -928,6 +945,10 @@ static int q6v5_mba_load(struct q6v5 *qproc)
>   }
>  
>   writel(qproc->mba_phys, qproc->rmb_base + RMB_MBA_IMAGE_REG);
> + if (qproc->dp_size) {
> + writel(qproc->mba_phys + SZ_1M, qproc->rmb_base + 
> RMB_PMI_CODE_START_REG);
> + writel(qproc->dp_size, qproc->rmb_base + 
> RMB_PMI_CODE_LENGTH_REG);
> + }
>  
>   ret = q6v5proc_reset(qproc);
>   if (ret)
> @@ -996,6 +1017,7 @@ static void q6v5_mba_reclaim(struct q6v5 *qproc)
>   u32 val;
>  
>   qproc->dump_mba_loaded = false;
> + qproc->dp_size = 0;
>  
>   q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_q6);
>   q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_modem);
> @@ -1290,7 +1312,8 @@ static int q6v5_start(struct rproc *rproc)
>   if (ret)
>   return ret;
>  
> - dev_info(qproc->dev, "MBA booted, loading mpss\n");
> + dev_info(qproc->dev, "MBA booted with%s debug policy, loading mpss\n",
> +  qproc->dp_size ? "" : "out");
>  
>   ret = q6v5_mpss_load(qproc);
>   if (ret)
> -- 
> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project
> 


Re: [PATCH 1/4] ACPI: APD: Change name from ST to FCH

2020-07-27 Thread Agrawal, Akshu



On 7/27/2020 6:58 PM, Rafael J. Wysocki wrote:

On Mon, Jul 20, 2020 at 7:06 AM Akshu Agrawal  wrote:

AMD SoC general purpose clk is present in new platforms with the
same MMIO mappings, so we can reuse the same clk handler support
for other platforms. Hence, change the name from ST (SoC) to FCH (IP).

Signed-off-by: Akshu Agrawal 

This patch and the [3/4] appear to be part of a larger series which
isn't visible to me as a whole.


Link to other patches:

https://patchwork.kernel.org/patch/11672857/

https://patchwork.kernel.org/patch/11672861/



Do you want me to apply them nevertheless?


This patch on its own will cause a compilation error, as we need the second
patch.


Since there is a dependency, we need them to be merged together. Can you
or Stephen please suggest a way forward?



Thanks,

Akshu


---
  drivers/acpi/acpi_apd.c| 14 +++---
  .../linux/platform_data/{clk-st.h => clk-fch.h}| 10 +-
  2 files changed, 12 insertions(+), 12 deletions(-)
  rename include/linux/platform_data/{clk-st.h => clk-fch.h} (53%)

diff --git a/drivers/acpi/acpi_apd.c b/drivers/acpi/acpi_apd.c
index ba2612e9a0eb..2d99e46add1a 100644
--- a/drivers/acpi/acpi_apd.c
+++ b/drivers/acpi/acpi_apd.c
@@ -8,7 +8,7 @@
   */

  #include 
-#include 
+#include 
  #include 
  #include 
  #include 
@@ -79,11 +79,11 @@ static int misc_check_res(struct acpi_resource *ares, void 
*data)
 return !acpi_dev_resource_memory(ares, &res);
  }

-static int st_misc_setup(struct apd_private_data *pdata)
+static int fch_misc_setup(struct apd_private_data *pdata)
  {
 struct acpi_device *adev = pdata->adev;
 struct platform_device *clkdev;
-   struct st_clk_data *clk_data;
+   struct fch_clk_data *clk_data;
 struct resource_entry *rentry;
 struct list_head resource_list;
 int ret;
@@ -106,7 +106,7 @@ static int st_misc_setup(struct apd_private_data *pdata)

 acpi_dev_free_resource_list(&resource_list);

-   clkdev = platform_device_register_data(&adev->dev, "clk-st",
+   clkdev = platform_device_register_data(&adev->dev, "clk-fch",
PLATFORM_DEVID_NONE, clk_data,
sizeof(*clk_data));
 return PTR_ERR_OR_ZERO(clkdev);
@@ -135,8 +135,8 @@ static const struct apd_device_desc cz_uart_desc = {
 .properties = uart_properties,
  };

-static const struct apd_device_desc st_misc_desc = {
-   .setup = st_misc_setup,
+static const struct apd_device_desc fch_misc_desc = {
+   .setup = fch_misc_setup,
  };
  #endif

@@ -239,7 +239,7 @@ static const struct acpi_device_id acpi_apd_device_ids[] = {
 { "AMD0020", APD_ADDR(cz_uart_desc) },
 { "AMDI0020", APD_ADDR(cz_uart_desc) },
 { "AMD0030", },
-   { "AMD0040", APD_ADDR(st_misc_desc)},
+   { "AMD0040", APD_ADDR(fch_misc_desc)},
  #endif
  #ifdef CONFIG_ARM64
 { "APMC0D0F", APD_ADDR(xgene_i2c_desc) },
diff --git a/include/linux/platform_data/clk-st.h 
b/include/linux/platform_data/clk-fch.h
similarity index 53%
rename from include/linux/platform_data/clk-st.h
rename to include/linux/platform_data/clk-fch.h
index 7cdb6a402b35..850ca776156d 100644
--- a/include/linux/platform_data/clk-st.h
+++ b/include/linux/platform_data/clk-fch.h
@@ -1,17 +1,17 @@
  /* SPDX-License-Identifier: MIT */
  /*
- * clock framework for AMD Stoney based clock
+ * clock framework for AMD misc clocks
   *
   * Copyright 2018 Advanced Micro Devices, Inc.
   */

-#ifndef __CLK_ST_H
-#define __CLK_ST_H
+#ifndef __CLK_FCH_H
+#define __CLK_FCH_H

  #include 

-struct st_clk_data {
+struct fch_clk_data {
 void __iomem *base;
  };

-#endif /* __CLK_ST_H */
+#endif /* __CLK_FCH_H */
--
2.20.1



linux-next: manual merge of the devicetree tree with the pci tree

2020-07-27 Thread Stephen Rothwell
Hi all,

Today's linux-next merge of the devicetree tree got a conflict in:

  Documentation/devicetree/bindings/pci/qcom,pcie.txt

between commits:

  736ae5c91712 ("dt-bindings: PCI: qcom: Add missing clks")
  b11b8cc161de ("dt-bindings: PCI: qcom: Add ext reset")
  d511580ea9c2 ("dt-bindings: PCI: qcom: Add ipq8064 rev 2 variant")

from the pci tree and commit:

  70172d196947 ("dt-bindings: pci: convert QCOM pci bindings to YAML")

from the devicetree tree.

I don't know how to fix it up so I just left the latter one. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell




Re: [PATCH v3 1/3] remoteproc: qcom_q6v5_mss: Validate MBA firmware size before load

2020-07-27 Thread Bjorn Andersson
On Wed 22 Jul 13:10 PDT 2020, Sibi Sankar wrote:

> The following mem abort is observed when the mba firmware size exceeds
> the allocated mba region. MBA firmware size is restricted to a maximum
> size of 1M and the remaining memory region is used by modem debug policy
> firmware when available. Hence verify whether the MBA firmware size lies
> within the allocated memory region and is not greater than 1M before
> loading.
> 
> Err Logs:
> Unable to handle kernel paging request at virtual address
> Mem abort info:
> ...
> Call trace:
>   __memcpy+0x110/0x180
>   rproc_start+0x40/0x218
>   rproc_boot+0x5b4/0x608
>   state_store+0x54/0xf8
>   dev_attr_store+0x44/0x60
>   sysfs_kf_write+0x58/0x80
>   kernfs_fop_write+0x140/0x230
>   vfs_write+0xc4/0x208
>   ksys_write+0x74/0xf8
>   __arm64_sys_write+0x24/0x30
> ...
> 
> Fixes: 051fb70fd4ea4 ("remoteproc: qcom: Driver for the self-authenticating 
> Hexagon v5")
> Cc: sta...@vger.kernel.org

Reviewed-by: Bjorn Andersson 

> Signed-off-by: Sibi Sankar 
> ---
>  drivers/remoteproc/qcom_q6v5_mss.c | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/drivers/remoteproc/qcom_q6v5_mss.c 
> b/drivers/remoteproc/qcom_q6v5_mss.c
> index 718acebae777f..4e72c9e30426c 100644
> --- a/drivers/remoteproc/qcom_q6v5_mss.c
> +++ b/drivers/remoteproc/qcom_q6v5_mss.c
> @@ -412,6 +412,12 @@ static int q6v5_load(struct rproc *rproc, const struct 
> firmware *fw)
>  {
>   struct q6v5 *qproc = rproc->priv;
>  
> + /* MBA is restricted to a maximum size of 1M */
> + if (fw->size > qproc->mba_size || fw->size > SZ_1M) {
> + dev_err(qproc->dev, "MBA firmware load failed\n");

I'll change this to "MBA firmware exceeds size limit\n". Please let me
know if you object.

Regards,
Bjorn

> + return -EINVAL;
> + }
> +
>   memcpy(qproc->mba_region, fw->data, fw->size);
>  
>   return 0;
> -- 
> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project
> 


Re: linux-next: Tree for Jul 27 (kernel/bpf/syscall.o)

2020-07-27 Thread Andrii Nakryiko
On Mon, Jul 27, 2020 at 11:58 AM Randy Dunlap  wrote:
>
> On 7/27/20 6:23 AM, Stephen Rothwell wrote:
> > Hi all,
> >
> > Changes since 20200724:
> >
>
> on i386:
> when CONFIG_XPS is not set/enabled:
>
> ld: kernel/bpf/syscall.o: in function `__do_sys_bpf':
> syscall.c:(.text+0x4482): undefined reference to `bpf_xdp_link_attach'
>

I can't repro this on x86-64 with CONFIG_XPS unset. Do you mind
sharing the exact config you've used?

I see that kernel/bpf/syscall.c doesn't include linux/netdevice.h
directly, so something must be preventing netdevice.h from eventually
reaching bpf/syscall.c, but instead of guessing at the fix, I'd like to
repro it first. Thanks!


>
> --
> ~Randy
> Reported-by: Randy Dunlap 


Re: [PATCH bpf-next v2 27/35] bpf: eliminate rlimit-based memory accounting infra for bpf maps

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 12:26 PM Roman Gushchin  wrote:
>
> Remove rlimit-based accounting infrastructure code, which is not used
> anymore.
>
> Signed-off-by: Roman Gushchin 
[...]
>
>  static void bpf_map_put_uref(struct bpf_map *map)
> @@ -541,7 +484,7 @@ static void bpf_map_show_fdinfo(struct seq_file *m, 
> struct file *filp)
>"value_size:\t%u\n"
>"max_entries:\t%u\n"
>"map_flags:\t%#x\n"
> -  "memlock:\t%llu\n"
> +  "memlock:\t%llu\n" /* deprecated */

I am not sure whether we can deprecate this one... How difficult is it
to keep these statistics?

Thanks,
Song


RE: rtsx_pci not restoring ASPM state after suspend/resume

2020-07-27 Thread 吳昊澄 Ricky

> On Mon, Jul 27, 2020 at 08:52:25PM +0100, James Ettle wrote:
> > On Mon, 2020-07-27 at 09:14 -0500, Bjorn Helgaas wrote:
> > > I don't know the connection between ASPM and package C-states, so I
> > > need to simplify this even more.  All I want to do right now is
> > > verify
> > > that if we don't have any outside influences on the ASPM
> > > configuration
> > > (eg, no manual changes and no udev rules), it stays the same across
> > > suspend/resume.
> >
> > Basically this started from me observing deep package C-states weren't
> > being used, until I went and fiddled with the ASPM state of the
> > rtsx_pci card reader under sysfs -- so phenomenological poking on my
> > part.
> >
> > > So let's read the ASPM state directly from the
> > > hardware like this:
> > >
> > >   sudo lspci -vvs 00:1d.0 | egrep "^0|Lnk|L1|LTR|snoop"
> > >   sudo lspci -vvs 01:00   | egrep "^0|Lnk|L1|LTR|snoop"
> > >
> > > Can you try that before and after suspend/resume?
> >
> > I've attached these to the bugzilla entry at:
> >
> > https://bugzilla.kernel.org/show_bug.cgi?id=208117
> >
> > Spoiler: With no udev rules or suspend hooks, things are the same
> > before and after suspend/resume. One thing I do see (both before and
> > after) is that ASPM L0s and L1 is enabled for the card reader, but
> > disabled for the ethernet chip (does r8169 fiddle with ASPM too?).
> 
> Thank you!  It's good that this stays the same across suspend/resume.
> Do you see different C-state behavior before vs after?
> 
> This is the config I see:
> 
>   00:1d.0 bridge to [bus 01]: ASPM L1 supported; ASPM Disabled
>   01:00.0 card reader:ASPM L0s L1 supported; L0s L1 Enabled
>   01:00.1 GigE NIC:   ASPM L0s L1 supported; ASPM Disabled
> 
> This is actually illegal because PCIe r5.0, sec 5.4.1.3, says software
> must not enable L0s in either direction unless components on both ends
> of the link support L0s.  The bridge (00:1d.0) does not support L0s,
> so it's illegal to enable L0s on 01:00.0.  I don't know whether this
> causes problems in practice.
> 

If the system wants to enter a deep C-state, the system has to support L1; the
host bridge handshakes with the device to determine whether to enter the L1 state.
Our card reader driver did not set L0s, so we need to check who set this, but we
think this L0s enable should not cause the host bridge's ASPM to be disabled.
 

> I don't see anything in rtsx that enables L0s.  Can you collect the
> dmesg log when booting with "pci=earlydump"?  That will show whether
> the BIOS left it this way.  The PCI core isn't supposed to do this, so
> if it did, we need to fix that.
> 
> I don't know whether r8169 mucks with ASPM.  It is legal to have
> different configurations for the two functions, even though they share
> the same link.  Sec 5.4.1 has rules about how hardware resolves
> differences.
> 
> > [Oddly when I set ASPM (e.g. using udev) the lspci tools show ASPM
> > enabled after a suspend/resume, but still no deep package C-states
> > until I manually fiddle via sysfs on the card reader. Sorry if this
> > only muddies the water further!]
> 
> Let's defer this for now.  It sounds worth pursuing, but I can't keep
> everything in my head at once.
> 
> Bjorn
> 


Re: [PATCH bpf-next v2 26/35] bpf: eliminate rlimit-based memory accounting for xskmap maps

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 12:21 PM Roman Gushchin  wrote:
>
> Do not use rlimit-based memory accounting for xskmap maps.
> It has been replaced with the memcg-based memory accounting.
>
> Signed-off-by: Roman Gushchin 

Acked-by: Song Liu 


> ---
>  net/xdp/xskmap.c | 10 +-
>  1 file changed, 1 insertion(+), 9 deletions(-)
>
> diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
> index e574b22defe5..0366013f13c6 100644
> --- a/net/xdp/xskmap.c
> +++ b/net/xdp/xskmap.c
> @@ -74,7 +74,6 @@ static void xsk_map_sock_delete(struct xdp_sock *xs,
>
>  static struct bpf_map *xsk_map_alloc(union bpf_attr *attr)
>  {
> -   struct bpf_map_memory mem;
> int err, numa_node;
> struct xsk_map *m;
> u64 size;
> @@ -90,18 +89,11 @@ static struct bpf_map *xsk_map_alloc(union bpf_attr *attr)
> numa_node = bpf_map_attr_numa_node(attr);
> size = struct_size(m, xsk_map, attr->max_entries);
>
> -   err = bpf_map_charge_init(&mem, size);
> -   if (err < 0)
> -   return ERR_PTR(err);
> -
> m = bpf_map_area_alloc(size, numa_node);
> -   if (!m) {
> -   bpf_map_charge_finish(&mem);
> +   if (!m)
> return ERR_PTR(-ENOMEM);
> -   }
>
> bpf_map_init_from_attr(&m->map, attr);
> -   bpf_map_charge_move(&m->map.memory, &mem);
> spin_lock_init(&m->lock);
>
> return &m->map;
> --
> 2.26.2
>


Re: [PATCH bpf-next v2 25/35] bpf: eliminate rlimit-based memory accounting for socket storage maps

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 12:26 PM Roman Gushchin  wrote:
>
> Do not use rlimit-based memory accounting for socket storage maps.
> It has been replaced with the memcg-based memory accounting.
>
> Signed-off-by: Roman Gushchin 

Acked-by: Song Liu 

> ---
>  net/core/bpf_sk_storage.c | 11 ---
>  1 file changed, 11 deletions(-)
>
> diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
> index fbcd03cd00d3..c0a35b6368af 100644
> --- a/net/core/bpf_sk_storage.c
> +++ b/net/core/bpf_sk_storage.c
> @@ -676,8 +676,6 @@ static struct bpf_map *bpf_sk_storage_map_alloc(union 
> bpf_attr *attr)
> struct bpf_sk_storage_map *smap;
> unsigned int i;
> u32 nbuckets;
> -   u64 cost;
> -   int ret;
>
> smap = kzalloc(sizeof(*smap), GFP_USER | __GFP_NOWARN | 
> __GFP_ACCOUNT);
> if (!smap)
> @@ -688,18 +686,9 @@ static struct bpf_map *bpf_sk_storage_map_alloc(union 
> bpf_attr *attr)
> /* Use at least 2 buckets, select_bucket() is undefined behavior with 
> 1 bucket */
> nbuckets = max_t(u32, 2, nbuckets);
> smap->bucket_log = ilog2(nbuckets);
> -   cost = sizeof(*smap->buckets) * nbuckets + sizeof(*smap);
> -
> -   ret = bpf_map_charge_init(&smap->map.memory, cost);
> -   if (ret < 0) {
> -   kfree(smap);
> -   return ERR_PTR(ret);
> -   }
> -
> smap->buckets = kvcalloc(sizeof(*smap->buckets), nbuckets,
>  GFP_USER | __GFP_NOWARN | __GFP_ACCOUNT);
> if (!smap->buckets) {
> -   bpf_map_charge_finish(&smap->map.memory);
> kfree(smap);
> return ERR_PTR(-ENOMEM);
> }
> --
> 2.26.2
>


Re: [PATCH V2 3/3] tty: serial: qcom_geni_serial: Fix the UART wakeup issue

2020-07-27 Thread Akash Asthana



On 7/24/2020 9:28 AM, satya priya wrote:

As a part of system suspend we call uart_port_suspend from the
Serial driver, which calls set_mctrl passing mctrl as NULL. This
makes RFR high (NOT_READY) during suspend.

Due to this, the BT SoC is not able to send wakeup bytes to the UART during
suspend. An if check for the non-suspend case is included to keep RFR low
during suspend.



Reviewed-by: Akash Asthana 


Signed-off-by: satya priya 
---
Changes in V2:
  - This patch fixes the UART flow control issue during suspend.
Newly added in V2.

  drivers/tty/serial/qcom_geni_serial.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/tty/serial/qcom_geni_serial.c 
b/drivers/tty/serial/qcom_geni_serial.c
index 07b7b6b..7108dfc 100644
--- a/drivers/tty/serial/qcom_geni_serial.c
+++ b/drivers/tty/serial/qcom_geni_serial.c
@@ -242,7 +242,7 @@ static void qcom_geni_serial_set_mctrl(struct uart_port 
*uport,
if (mctrl & TIOCM_LOOP)
port->loopback = RX_TX_CTS_RTS_SORTED;
  
-	if (!(mctrl & TIOCM_RTS))

+   if ((!(mctrl & TIOCM_RTS)) && (!(uport->suspended)))
uart_manual_rfr = UART_MANUAL_RFR_EN | UART_RFR_NOT_READY;
writel(uart_manual_rfr, uport->membase + SE_UART_MANUAL_RFR);
  }


--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



Re: [PATCH V2 2/3] arm64: dts: qcom: sc7180: Add sleep pin ctrl for BT uart

2020-07-27 Thread Akash Asthana



On 7/24/2020 9:28 AM, satya priya wrote:

Add sleep pin ctrl for BT uart, and also change the bias
configuration to match Bluetooth module.



Reviewed-by: Akash Asthana 


Signed-off-by: satya priya 
---
Changes in V2:
  - This patch adds sleep state for BT UART. Newly added in V2.

  arch/arm64/boot/dts/qcom/sc7180-idp.dts | 42 -
  1 file changed, 36 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/boot/dts/qcom/sc7180-idp.dts 
b/arch/arm64/boot/dts/qcom/sc7180-idp.dts
index 26cc491..bc919f2 100644
--- a/arch/arm64/boot/dts/qcom/sc7180-idp.dts
+++ b/arch/arm64/boot/dts/qcom/sc7180-idp.dts
@@ -469,20 +469,50 @@
  
 &qup_uart3_default {

pinconf-cts {
-   /*
-* Configure a pull-down on 38 (CTS) to match the pull of
-* the Bluetooth module.
-*/
+   /* Configure no pull on 38 (CTS) to match Bluetooth module */
pins = "gpio38";
+   bias-disable;
+   };
+
+   pinconf-rts {
+   /* We'll drive 39 (RTS), so configure pull-down */
+   pins = "gpio39";
+   drive-strength = <2>;
bias-pull-down;
+   };
+
+   pinconf-tx {
+   /* We'll drive 40 (TX), so no pull */
+   pins = "gpio40";
+   drive-strength = <2>;
+   bias-disable;
output-high;
};
  
+	pinconf-rx {

+   /*
+* Configure a pull-up on 41 (RX). This is needed to avoid
+* garbage data when the TX pin of the Bluetooth module is
+* in tri-state (module powered off or not driving the
+* signal yet).
+*/
+   pins = "gpio41";
+   bias-pull-up;
+   };
+};
+
+&qup_uart3_sleep {
+   pinconf-cts {
+   /* Configure no-pull on 38 (CTS) to match Bluetooth module */
+   pins = "gpio38";
+   bias-disable;
+   };
+
pinconf-rts {
-   /* We'll drive 39 (RTS), so no pull */
+   /* We'll drive 39 (RTS), so configure pull-down */
pins = "gpio39";
drive-strength = <2>;
-   bias-disable;
+   bias-pull-down;
};
  
  	pinconf-tx {


--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



Re: [PATCH bpf-next v2 24/35] bpf: eliminate rlimit-based memory accounting for stackmap maps

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 12:22 PM Roman Gushchin  wrote:
>
> Do not use rlimit-based memory accounting for stackmap maps.
> It has been replaced with the memcg-based memory accounting.
>
> Signed-off-by: Roman Gushchin 

Acked-by: Song Liu 

> ---
>  kernel/bpf/stackmap.c | 16 +++-
>  1 file changed, 3 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
> index 5beb2f8c23da..9ac0f405beef 100644
> --- a/kernel/bpf/stackmap.c
> +++ b/kernel/bpf/stackmap.c
> @@ -90,7 +90,6 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
>  {
> u32 value_size = attr->value_size;
> struct bpf_stack_map *smap;
> -   struct bpf_map_memory mem;
> u64 cost, n_buckets;
> int err;
>
> @@ -119,15 +118,9 @@ static struct bpf_map *stack_map_alloc(union bpf_attr 
> *attr)
>
> cost = n_buckets * sizeof(struct stack_map_bucket *) + sizeof(*smap);
> cost += n_buckets * (value_size + sizeof(struct stack_map_bucket));
> -   err = bpf_map_charge_init(&mem, cost);
> -   if (err)
> -   return ERR_PTR(err);
> -
> smap = bpf_map_area_alloc(cost, bpf_map_attr_numa_node(attr));
> -   if (!smap) {
> -   bpf_map_charge_finish(&mem);
> +   if (!smap)
> return ERR_PTR(-ENOMEM);
> -   }
>
> bpf_map_init_from_attr(&smap->map, attr);
> smap->map.value_size = value_size;
> @@ -135,20 +128,17 @@ static struct bpf_map *stack_map_alloc(union bpf_attr 
> *attr)
>
> err = get_callchain_buffers(sysctl_perf_event_max_stack);
> if (err)
> -   goto free_charge;
> +   goto free_smap;
>
> err = prealloc_elems_and_freelist(smap);
> if (err)
> goto put_buffers;
>
> -   bpf_map_charge_move(&smap->map.memory, &mem);
> -
> return &smap->map;
>
>  put_buffers:
> put_callchain_buffers();
> -free_charge:
> -   bpf_map_charge_finish(&mem);
> +free_smap:
> bpf_map_area_free(smap);
> return ERR_PTR(err);
>  }
> --
> 2.26.2
>


Re: [PATCH bpf-next v2 23/35] bpf: eliminate rlimit-based memory accounting for sockmap and sockhash maps

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 12:21 PM Roman Gushchin  wrote:
>
> Do not use rlimit-based memory accounting for sockmap and sockhash maps.
> It has been replaced with the memcg-based memory accounting.
>
> Signed-off-by: Roman Gushchin 

Acked-by: Song Liu 

> ---
>  net/core/sock_map.c | 33 ++---
>  1 file changed, 6 insertions(+), 27 deletions(-)
>
> diff --git a/net/core/sock_map.c b/net/core/sock_map.c
> index bc797adca44c..07c90baf8db1 100644
> --- a/net/core/sock_map.c
> +++ b/net/core/sock_map.c
> @@ -26,8 +26,6 @@ struct bpf_stab {
>  static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
>  {
> struct bpf_stab *stab;
> -   u64 cost;
> -   int err;
>
> if (!capable(CAP_NET_ADMIN))
> return ERR_PTR(-EPERM);
> @@ -45,22 +43,15 @@ static struct bpf_map *sock_map_alloc(union bpf_attr 
> *attr)
> bpf_map_init_from_attr(&stab->map, attr);
> raw_spin_lock_init(&stab->lock);
>
> -   /* Make sure page count doesn't overflow. */
> -   cost = (u64) stab->map.max_entries * sizeof(struct sock *);
> -   err = bpf_map_charge_init(&stab->map.memory, cost);
> -   if (err)
> -   goto free_stab;
> -
> stab->sks = bpf_map_area_alloc(stab->map.max_entries *
>sizeof(struct sock *),
>stab->map.numa_node);
> -   if (stab->sks)
> -   return &stab->map;
> -   err = -ENOMEM;
> -   bpf_map_charge_finish(&stab->map.memory);
> -free_stab:
> -   kfree(stab);
> -   return ERR_PTR(err);
> +   if (!stab->sks) {
> +   kfree(stab);
> +   return ERR_PTR(-ENOMEM);
> +   }
> +
> +   return &stab->map;
>  }
>
>  int sock_map_get_from_fd(const union bpf_attr *attr, struct bpf_prog *prog)
> @@ -999,7 +990,6 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr 
> *attr)
>  {
> struct bpf_shtab *htab;
> int i, err;
> -   u64 cost;
>
> if (!capable(CAP_NET_ADMIN))
> return ERR_PTR(-EPERM);
> @@ -1027,21 +1017,10 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr 
> *attr)
> goto free_htab;
> }
>
> -   cost = (u64) htab->buckets_num * sizeof(struct bpf_shtab_bucket) +
> -  (u64) htab->elem_size * htab->map.max_entries;
> -   if (cost >= U32_MAX - PAGE_SIZE) {
> -   err = -EINVAL;
> -   goto free_htab;
> -   }
> -   err = bpf_map_charge_init(&htab->map.memory, cost);
> -   if (err)
> -   goto free_htab;
> -
> htab->buckets = bpf_map_area_alloc(htab->buckets_num *
>sizeof(struct bpf_shtab_bucket),
>htab->map.numa_node);
> if (!htab->buckets) {
> -   bpf_map_charge_finish(&htab->map.memory);
> err = -ENOMEM;
> goto free_htab;
> }
> --
> 2.26.2
>


Re: [PATCH bpf-next v2 22/35] bpf: eliminate rlimit-based memory accounting for bpf ringbuffer

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 12:22 PM Roman Gushchin  wrote:
>
> Do not use rlimit-based memory accounting for bpf ringbuffer.
> It has been replaced with the memcg-based memory accounting.
>
> bpf_ringbuf_alloc() can't return anything except ERR_PTR(-ENOMEM)
> and a valid pointer, so to simplify the code make it return NULL
> in the first case. This allows to drop a couple of lines in
> ringbuf_map_alloc() and also makes it look similar to other memory
> allocating function like kmalloc().
>
> Signed-off-by: Roman Gushchin 

Acked-by: Song Liu 

> ---
>  kernel/bpf/ringbuf.c | 24 
>  1 file changed, 4 insertions(+), 20 deletions(-)
>
> diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
> index e8e2c39cbdc9..e687b798d097 100644
> --- a/kernel/bpf/ringbuf.c
> +++ b/kernel/bpf/ringbuf.c
> @@ -48,7 +48,6 @@ struct bpf_ringbuf {
>
>  struct bpf_ringbuf_map {
> struct bpf_map map;
> -   struct bpf_map_memory memory;
> struct bpf_ringbuf *rb;
>  };
>
> @@ -135,7 +134,7 @@ static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t 
> data_sz, int numa_node)
>
> rb = bpf_ringbuf_area_alloc(data_sz, numa_node);
> if (!rb)
> -   return ERR_PTR(-ENOMEM);
> +   return NULL;
>
> spin_lock_init(&rb->spinlock);
> init_waitqueue_head(&rb->waitq);
> @@ -151,8 +150,6 @@ static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t 
> data_sz, int numa_node)
>  static struct bpf_map *ringbuf_map_alloc(union bpf_attr *attr)
>  {
> struct bpf_ringbuf_map *rb_map;
> -   u64 cost;
> -   int err;
>
> if (attr->map_flags & ~RINGBUF_CREATE_FLAG_MASK)
> return ERR_PTR(-EINVAL);
> @@ -174,26 +171,13 @@ static struct bpf_map *ringbuf_map_alloc(union bpf_attr 
> *attr)
>
> bpf_map_init_from_attr(&rb_map->map, attr);
>
> -   cost = sizeof(struct bpf_ringbuf_map) +
> -  sizeof(struct bpf_ringbuf) +
> -  attr->max_entries;
> -   err = bpf_map_charge_init(&rb_map->map.memory, cost);
> -   if (err)
> -   goto err_free_map;
> -
> rb_map->rb = bpf_ringbuf_alloc(attr->max_entries, 
> rb_map->map.numa_node);
> -   if (IS_ERR(rb_map->rb)) {
> -   err = PTR_ERR(rb_map->rb);
> -   goto err_uncharge;
> +   if (!rb_map->rb) {
> +   kfree(rb_map);
> +   return ERR_PTR(-ENOMEM);
> }
>
> return &rb_map->map;
> -
> -err_uncharge:
> -   bpf_map_charge_finish(&rb_map->map.memory);
> -err_free_map:
> -   kfree(rb_map);
> -   return ERR_PTR(err);
>  }
>
>  static void bpf_ringbuf_free(struct bpf_ringbuf *rb)
> --
> 2.26.2
>


[Linux-kernel-mentees] [PATCH net v2] xdp: Prevent kernel-infoleak in xsk_getsockopt()

2020-07-27 Thread Peilin Ye
xsk_getsockopt() is copying uninitialized stack memory to userspace when
`extra_stats` is `false`. Fix it.

Fixes: 8aa5a33578e9 ("xsk: Add new statistics")
Suggested-by: Dan Carpenter 
Signed-off-by: Peilin Ye 
---
Doing `= {};` is sufficient since currently `struct xdp_statistics` is
defined as follows:

struct xdp_statistics {
__u64 rx_dropped;
__u64 rx_invalid_descs;
__u64 tx_invalid_descs;
__u64 rx_ring_full;
__u64 rx_fill_ring_empty_descs;
__u64 tx_ring_empty_descs;
};

When being copied to the userspace, `stats` will not contain any
uninitialized "holes" between struct fields.

Changes in v2:
- Remove the "Cc: sta...@vger.kernel.org" tag. (Suggested by Song Liu
  )
- Initialize `stats` by assignment instead of using memset().
  (Suggested by Song Liu )

 net/xdp/xsk.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 26e3bba8c204..b2b533eddebf 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -840,7 +840,7 @@ static int xsk_getsockopt(struct socket *sock, int level, 
int optname,
switch (optname) {
case XDP_STATISTICS:
{
-   struct xdp_statistics stats;
+   struct xdp_statistics stats = {};
bool extra_stats = true;
size_t stats_size;
 
-- 
2.25.1



Re: [PATCH bpf-next v2 21/35] bpf: eliminate rlimit-based memory accounting for reuseport_array maps

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 12:23 PM Roman Gushchin  wrote:
>
> Do not use rlimit-based memory accounting for reuseport_array maps.
> It has been replaced with the memcg-based memory accounting.
>
> Signed-off-by: Roman Gushchin 

Acked-by: Song Liu 

> ---
>  kernel/bpf/reuseport_array.c | 12 ++--
>  1 file changed, 2 insertions(+), 10 deletions(-)
>
> diff --git a/kernel/bpf/reuseport_array.c b/kernel/bpf/reuseport_array.c
> index 90b29c5b1da7..9d0161fdfec7 100644
> --- a/kernel/bpf/reuseport_array.c
> +++ b/kernel/bpf/reuseport_array.c
> @@ -150,9 +150,8 @@ static void reuseport_array_free(struct bpf_map *map)
>
>  static struct bpf_map *reuseport_array_alloc(union bpf_attr *attr)
>  {
> -   int err, numa_node = bpf_map_attr_numa_node(attr);
> +   int numa_node = bpf_map_attr_numa_node(attr);
> struct reuseport_array *array;
> -   struct bpf_map_memory mem;
> u64 array_size;
>
> if (!bpf_capable())
> @@ -161,20 +160,13 @@ static struct bpf_map *reuseport_array_alloc(union 
> bpf_attr *attr)
> array_size = sizeof(*array);
> array_size += (u64)attr->max_entries * sizeof(struct sock *);
>
> -   err = bpf_map_charge_init(&mem, array_size);
> -   if (err)
> -   return ERR_PTR(err);
> -
> /* allocate all map elements and zero-initialize them */
> array = bpf_map_area_alloc(array_size, numa_node);
> -   if (!array) {
> -   bpf_map_charge_finish(&mem);
> +   if (!array)
> return ERR_PTR(-ENOMEM);
> -   }
>
> /* copy mandatory map attributes */
> bpf_map_init_from_attr(&array->map, attr);
> -   bpf_map_charge_move(&array->map.memory, &mem);
>
> return &array->map;
>  }
> --
> 2.26.2
>


drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c:123:39: sparse: sparse: restricted __le32 degrades to integer

2020-07-27 Thread kernel test robot
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 
master
head:   92ed301919932f13b9172e525674157e983d
commit: 93c7f4d357de68f1e3a998b2fc775466d75c4c07 crypto: sun8i-ce - enable 
working on big endian
date:   8 months ago
config: arm64-randconfig-s031-20200728 (attached as .config)
compiler: aarch64-linux-gcc (GCC) 9.3.0
reproduce:
wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
chmod +x ~/bin/make.cross
# apt-get install sparse
# sparse version: v0.6.2-94-geb6779f6-dirty
git checkout 93c7f4d357de68f1e3a998b2fc775466d75c4c07
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross C=1 
CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=arm64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 


sparse warnings: (new ones prefixed by >>)

>> drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c:123:39: sparse: sparse: 
>> restricted __le32 degrades to integer

vim +123 drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c

06f751b613296c Corentin Labbe 2019-10-23  109  
06f751b613296c Corentin Labbe 2019-10-23  110   mutex_lock(&ce->mlock);
06f751b613296c Corentin Labbe 2019-10-23  111  
06f751b613296c Corentin Labbe 2019-10-23  112   v = readl(ce->base + CE_ICR);
06f751b613296c Corentin Labbe 2019-10-23  113   v |= 1 << flow;
06f751b613296c Corentin Labbe 2019-10-23  114   writel(v, ce->base + CE_ICR);
06f751b613296c Corentin Labbe 2019-10-23  115  
06f751b613296c Corentin Labbe 2019-10-23  116   reinit_completion(&ce->chanlist[flow].complete);
06f751b613296c Corentin Labbe 2019-10-23  117   writel(ce->chanlist[flow].t_phy, ce->base + CE_TDQ);
06f751b613296c Corentin Labbe 2019-10-23  118  
06f751b613296c Corentin Labbe 2019-10-23  119   ce->chanlist[flow].status = 0;
06f751b613296c Corentin Labbe 2019-10-23  120   /* Be sure all data is written 
before enabling the task */
06f751b613296c Corentin Labbe 2019-10-23  121   wmb();
06f751b613296c Corentin Labbe 2019-10-23  122  
06f751b613296c Corentin Labbe 2019-10-23 @123   v = 1 | (ce->chanlist[flow].tl->t_common_ctl & 0x7F) << 8;
06f751b613296c Corentin Labbe 2019-10-23  124   writel(v, ce->base + CE_TLR);
06f751b613296c Corentin Labbe 2019-10-23  125   mutex_unlock(&ce->mlock);
06f751b613296c Corentin Labbe 2019-10-23  126  
06f751b613296c Corentin Labbe 2019-10-23  127   wait_for_completion_interruptible_timeout(&ce->chanlist[flow].complete,
06f751b613296c Corentin Labbe 2019-10-23  128   msecs_to_jiffies(ce->chanlist[flow].timeout));
06f751b613296c Corentin Labbe 2019-10-23  129  
06f751b613296c Corentin Labbe 2019-10-23  130   if (ce->chanlist[flow].status 
== 0) {
06f751b613296c Corentin Labbe 2019-10-23  131   dev_err(ce->dev, "DMA 
timeout for %s\n", name);
06f751b613296c Corentin Labbe 2019-10-23  132   err = -EFAULT;
06f751b613296c Corentin Labbe 2019-10-23  133   }
06f751b613296c Corentin Labbe 2019-10-23  134   /* No need to lock for this 
read, the channel is locked so
06f751b613296c Corentin Labbe 2019-10-23  135* nothing could modify the 
error value for this channel
06f751b613296c Corentin Labbe 2019-10-23  136*/
06f751b613296c Corentin Labbe 2019-10-23  137   v = readl(ce->base + CE_ESR);
06f751b613296c Corentin Labbe 2019-10-23  138   if (v) {
06f751b613296c Corentin Labbe 2019-10-23  139   v >>= (flow * 4);
06f751b613296c Corentin Labbe 2019-10-23  140   v &= 0xFF;
06f751b613296c Corentin Labbe 2019-10-23  141   if (v) {
06f751b613296c Corentin Labbe 2019-10-23  142   
dev_err(ce->dev, "CE ERROR: %x for flow %x\n", v, flow);
06f751b613296c Corentin Labbe 2019-10-23  143   err = -EFAULT;
06f751b613296c Corentin Labbe 2019-10-23  144   }
06f751b613296c Corentin Labbe 2019-10-23  145   if (v & 
CE_ERR_ALGO_NOTSUP)
06f751b613296c Corentin Labbe 2019-10-23  146   
dev_err(ce->dev, "CE ERROR: algorithm not supported\n");
06f751b613296c Corentin Labbe 2019-10-23  147   if (v & CE_ERR_DATALEN)
06f751b613296c Corentin Labbe 2019-10-23  148   
dev_err(ce->dev, "CE ERROR: data length error\n");
06f751b613296c Corentin Labbe 2019-10-23  149   if (v & CE_ERR_KEYSRAM)
06f751b613296c Corentin Labbe 2019-10-23  150   
dev_err(ce->dev, "CE ERROR: keysram access error for AES\n");
06f751b613296c Corentin Labbe 2019-10-23  151   if (v & 
CE_ERR_ADDR_INVALID)
06f751b613296c Corentin Labbe 2019-10-23  152   
dev_err(ce->dev, "CE ERROR: address invalid\n");
06f751b613296c Corentin Labbe 2019-10-23  153   }
06f751b613296c Corentin Labbe 2019-10-23  154  
06f751b613296c Corentin Labbe 2019-10-23  155   return err;
06f751b613296c Corentin Labbe 2019-10-23  156  }
06f751b613296c Corentin Labbe 2019-10-23  157  

:: The code at line 123 was first 

Re: [PATCH V2 1/3] arm64: dts: sc7180: Add wakeup support over UART RX

2020-07-27 Thread Akash Asthana



On 7/24/2020 9:28 AM, satya priya wrote:

Add the necessary pinctrl and interrupts to make UART
wakeup capable.



Reviewed-by: Akash Asthana 


Signed-off-by: satya priya 
---
Changes in V2:
  - As per Matthias's comment added wakeup support for all the UARTs
of SC7180.

  arch/arm64/boot/dts/qcom/sc7180.dtsi | 98 ++--
  1 file changed, 84 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi 
b/arch/arm64/boot/dts/qcom/sc7180.dtsi
index 16df08d..044a4d0 100644
--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
+++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
@@ -787,9 +787,11 @@
 			reg = <0 0x00880000 0 0x4000>;
 			clock-names = "se";
 			clocks = <&gcc GCC_QUPV3_WRAP0_S0_CLK>;
-			pinctrl-names = "default";
+			pinctrl-names = "default", "sleep";
 			pinctrl-0 = <&qup_uart0_default>;
-			interrupts = <GIC_SPI 601 IRQ_TYPE_LEVEL_HIGH>;
+			pinctrl-1 = <&qup_uart0_sleep>;
+			interrupts-extended = <&intc GIC_SPI 601 IRQ_TYPE_LEVEL_HIGH>,
+					      <&tlmm 37 IRQ_TYPE_EDGE_FALLING>;
 			power-domains = <&rpmhpd SC7180_CX>;
 			operating-points-v2 = <&qup_opp_table>;
 			interconnects = <&qup_virt MASTER_QUP_CORE_0 &qup_virt SLAVE_QUP_CORE_0>,
@@ -839,9 +841,11 @@
 			reg = <0 0x00884000 0 0x4000>;
 			clock-names = "se";
 			clocks = <&gcc GCC_QUPV3_WRAP0_S1_CLK>;
-			pinctrl-names = "default";
+			pinctrl-names = "default", "sleep";
 			pinctrl-0 = <&qup_uart1_default>;
-			interrupts = <GIC_SPI 602 IRQ_TYPE_LEVEL_HIGH>;
+			pinctrl-1 = <&qup_uart1_sleep>;
+			interrupts-extended = <&intc GIC_SPI 602 IRQ_TYPE_LEVEL_HIGH>,
+					      <&tlmm 3 IRQ_TYPE_EDGE_FALLING>;
 			power-domains = <&rpmhpd SC7180_CX>;
 			operating-points-v2 = <&qup_opp_table>;
 			interconnects = <&qup_virt MASTER_QUP_CORE_0 &qup_virt SLAVE_QUP_CORE_0>,
@@ -925,9 +929,11 @@
 			reg = <0 0x0088c000 0 0x4000>;
 			clock-names = "se";
 			clocks = <&gcc GCC_QUPV3_WRAP0_S3_CLK>;
-			pinctrl-names = "default";
+			pinctrl-names = "default", "sleep";
 			pinctrl-0 = <&qup_uart3_default>;
-			interrupts = <GIC_SPI 604 IRQ_TYPE_LEVEL_HIGH>;
+			pinctrl-1 = <&qup_uart3_sleep>;
+			interrupts-extended = <&intc GIC_SPI 604 IRQ_TYPE_LEVEL_HIGH>,
+					      <&tlmm 41 IRQ_TYPE_EDGE_FALLING>;
 			power-domains = <&rpmhpd SC7180_CX>;
 			operating-points-v2 = <&qup_opp_table>;
 			interconnects = <&qup_virt MASTER_QUP_CORE_0 &qup_virt SLAVE_QUP_CORE_0>,
@@ -1011,9 +1017,11 @@
 			reg = <0 0x00894000 0 0x4000>;
 			clock-names = "se";
 			clocks = <&gcc GCC_QUPV3_WRAP0_S5_CLK>;
-			pinctrl-names = "default";
+			pinctrl-names = "default", "sleep";
 			pinctrl-0 = <&qup_uart5_default>;
-			interrupts = <GIC_SPI 606 IRQ_TYPE_LEVEL_HIGH>;
+			pinctrl-1 = <&qup_uart5_sleep>;
+			interrupts-extended = <&intc GIC_SPI 606 IRQ_TYPE_LEVEL_HIGH>,
+					      <&tlmm 28 IRQ_TYPE_EDGE_FALLING>;
 			power-domains = <&rpmhpd SC7180_CX>;
 			operating-points-v2 = <&qup_opp_table>;
 			interconnects = <&qup_virt MASTER_QUP_CORE_0 &qup_virt SLAVE_QUP_CORE_0>,
@@ -1078,9 +1086,11 @@
 			reg = <0 0x00a80000 0 0x4000>;
 			clock-names = "se";
 			clocks = <&gcc GCC_QUPV3_WRAP1_S0_CLK>;
-			pinctrl-names = "default";
+			pinctrl-names = "default", "sleep";
 			pinctrl-0 = <&qup_uart6_default>;
-			interrupts = <GIC_SPI 353 IRQ_TYPE_LEVEL_HIGH>;
+			pinctrl-1 = <&qup_uart6_sleep>;
+			interrupts-extended = <&intc GIC_SPI 353 IRQ_TYPE_LEVEL_HIGH>,
+					      <&tlmm 62 IRQ_TYPE_EDGE_FALLING>;
 			power-domains = <&rpmhpd SC7180_CX>;
 

Re: [PATCH bpf-next v2 20/35] bpf: eliminate rlimit-based memory accounting for queue_stack_maps maps

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 12:25 PM Roman Gushchin  wrote:
>
> Do not use rlimit-based memory accounting for queue_stack maps.
> It has been replaced with the memcg-based memory accounting.
>
> Signed-off-by: Roman Gushchin 

Acked-by: Song Liu 

> ---
>  kernel/bpf/queue_stack_maps.c | 16 
>  1 file changed, 4 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/bpf/queue_stack_maps.c b/kernel/bpf/queue_stack_maps.c
> index 44184f82916a..92e73c35a34a 100644
> --- a/kernel/bpf/queue_stack_maps.c
> +++ b/kernel/bpf/queue_stack_maps.c
> @@ -66,29 +66,21 @@ static int queue_stack_map_alloc_check(union bpf_attr 
> *attr)
>
>  static struct bpf_map *queue_stack_map_alloc(union bpf_attr *attr)
>  {
> -   int ret, numa_node = bpf_map_attr_numa_node(attr);
> -   struct bpf_map_memory mem = {0};
> +   int numa_node = bpf_map_attr_numa_node(attr);
> struct bpf_queue_stack *qs;
> -   u64 size, queue_size, cost;
> +   u64 size, queue_size;
>
> size = (u64) attr->max_entries + 1;
> -   cost = queue_size = sizeof(*qs) + size * attr->value_size;
> -
> -   ret = bpf_map_charge_init(&mem, cost);
> -   if (ret < 0)
> -   return ERR_PTR(ret);
> +   queue_size = sizeof(*qs) + size * attr->value_size;
>
> qs = bpf_map_area_alloc(queue_size, numa_node);
> -   if (!qs) {
> -   bpf_map_charge_finish(&mem);
> +   if (!qs)
> return ERR_PTR(-ENOMEM);
> -   }
>
> memset(qs, 0, sizeof(*qs));
>
> bpf_map_init_from_attr(&qs->map, attr);
>
> -   bpf_map_charge_move(&qs->map.memory, &mem);
> qs->size = size;
>
> raw_spin_lock_init(&qs->lock);
> --
> 2.26.2
>


Re: [PATCH v7 4/8] scsi: ufs: Add some debug infos to ufshcd_print_host_state

2020-07-27 Thread hongwus

On 2020-07-28 13:00, Can Guo wrote:
The info about the last interrupt status and its timestamp is very
helpful when debugging system stability issues, e.g. IRQ starvation, so
add it to ufshcd_print_host_state. Meanwhile, UFS device info like the
model name and its FW version also comes in handy during debug. In
addition, this change cleans up some prints in ufshcd_print_host_regs,
as similar prints are already available in ufshcd_print_host_state.

Signed-off-by: Can Guo 
Reviewed-by: Avri Altman 
---
 drivers/scsi/ufs/ufshcd.c | 31 ++-
 drivers/scsi/ufs/ufshcd.h |  5 +
 2 files changed, 23 insertions(+), 13 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 99bd3e4..eda4dc6 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -411,15 +411,6 @@ static void ufshcd_print_err_hist(struct ufs_hba 
*hba,

 static void ufshcd_print_host_regs(struct ufs_hba *hba)
 {
ufshcd_dump_regs(hba, 0, UFSHCI_REG_SPACE_SIZE, "host_regs: ");
-	dev_err(hba->dev, "hba->ufs_version = 0x%x, hba->capabilities = 
0x%x\n",

-   hba->ufs_version, hba->capabilities);
-   dev_err(hba->dev,
-   "hba->outstanding_reqs = 0x%x, hba->outstanding_tasks = 0x%x\n",
-   (u32)hba->outstanding_reqs, (u32)hba->outstanding_tasks);
-   dev_err(hba->dev,
-   "last_hibern8_exit_tstamp at %lld us, hibern8_exit_cnt = %d\n",
-   ktime_to_us(hba->ufs_stats.last_hibern8_exit_tstamp),
-   hba->ufs_stats.hibern8_exit_cnt);

	ufshcd_print_err_hist(hba, &hba->ufs_stats.pa_err, "pa_err");
	ufshcd_print_err_hist(hba, &hba->ufs_stats.dl_err, "dl_err");
@@ -438,8 +429,6 @@ static void ufshcd_print_host_regs(struct ufs_hba 
*hba)

	ufshcd_print_err_hist(hba, &hba->ufs_stats.host_reset, "host_reset");
	ufshcd_print_err_hist(hba, &hba->ufs_stats.task_abort, "task_abort");

-   ufshcd_print_clk_freqs(hba);
-
ufshcd_vops_dbg_register_dump(hba);
 }

@@ -499,6 +488,8 @@ static void ufshcd_print_tmrs(struct ufs_hba *hba,
unsigned long bitmap)

 static void ufshcd_print_host_state(struct ufs_hba *hba)
 {
+   struct scsi_device *sdev_ufs = hba->sdev_ufs_device;
+
dev_err(hba->dev, "UFS Host state=%d\n", hba->ufshcd_state);
dev_err(hba->dev, "outstanding reqs=0x%lx tasks=0x%lx\n",
hba->outstanding_reqs, hba->outstanding_tasks);
@@ -511,12 +502,24 @@ static void ufshcd_print_host_state(struct 
ufs_hba *hba)

dev_err(hba->dev, "Auto BKOPS=%d, Host self-block=%d\n",
hba->auto_bkops_enabled, hba->host->host_self_blocked);
dev_err(hba->dev, "Clk gate=%d\n", hba->clk_gating.state);
+   dev_err(hba->dev,
+   "last_hibern8_exit_tstamp at %lld us, hibern8_exit_cnt=%d\n",
+   ktime_to_us(hba->ufs_stats.last_hibern8_exit_tstamp),
+   hba->ufs_stats.hibern8_exit_cnt);
+   dev_err(hba->dev, "last intr at %lld us, last intr status=0x%x\n",
+   ktime_to_us(hba->ufs_stats.last_intr_ts),
+   hba->ufs_stats.last_intr_status);
dev_err(hba->dev, "error handling flags=0x%x, req. abort count=%d\n",
hba->eh_flags, hba->req_abort_count);
-   dev_err(hba->dev, "Host capabilities=0x%x, caps=0x%x\n",
-   hba->capabilities, hba->caps);
+   dev_err(hba->dev, "hba->ufs_version=0x%x, Host capabilities=0x%x,
caps=0x%x\n",
+   hba->ufs_version, hba->capabilities, hba->caps);
dev_err(hba->dev, "quirks=0x%x, dev. quirks=0x%x\n", hba->quirks,
hba->dev_quirks);
+   if (sdev_ufs)
+   dev_err(hba->dev, "UFS dev info: %.8s %.16s rev %.4s\n",
+   sdev_ufs->vendor, sdev_ufs->model, sdev_ufs->rev);
+
+   ufshcd_print_clk_freqs(hba);
 }

 /**
@@ -5951,6 +5954,8 @@ static irqreturn_t ufshcd_intr(int irq, void 
*__hba)


spin_lock(hba->host->host_lock);
intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
+   hba->ufs_stats.last_intr_status = intr_status;
+   hba->ufs_stats.last_intr_ts = ktime_get();

/*
 * There could be max of hba->nutrs reqs in flight and in worst case
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index 656c069..5b2cdaf 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -406,6 +406,8 @@ struct ufs_err_reg_hist {

 /**
  * struct ufs_stats - keeps usage/err statistics
+ * @last_intr_status: record the last interrupt status.
+ * @last_intr_ts: record the last interrupt timestamp.
  * @hibern8_exit_cnt: Counter to keep track of number of exits,
  * reset this after link-startup.
  * @last_hibern8_exit_tstamp: Set time after the hibern8 exit.
@@ -425,6 +427,9 @@ struct ufs_err_reg_hist {
  * @tsk_abort: tracks task abort events
  */
 struct ufs_stats {
+   u32 last_intr_status;
+   ktime_t last_intr_ts;
+
u32 hibern8_exit_cnt;

Re: [PATCH bpf-next v2 18/35] bpf: eliminate rlimit-based memory accounting for hashtab maps

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 12:21 PM Roman Gushchin  wrote:
>
> Do not use rlimit-based memory accounting for hashtab maps.
> It has been replaced with the memcg-based memory accounting.
>
> Signed-off-by: Roman Gushchin 

Acked-by: Song Liu 

> ---
>  kernel/bpf/hashtab.c | 19 +--
>  1 file changed, 1 insertion(+), 18 deletions(-)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index 9d0432170812..9372b559b4e7 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -422,7 +422,6 @@ static struct bpf_map *htab_map_alloc(union bpf_attr 
> *attr)
> bool percpu_lru = (attr->map_flags & BPF_F_NO_COMMON_LRU);
> bool prealloc = !(attr->map_flags & BPF_F_NO_PREALLOC);
> struct bpf_htab *htab;
> -   u64 cost;
> int err;
>
> htab = kzalloc(sizeof(*htab), GFP_USER | __GFP_ACCOUNT);
> @@ -459,26 +458,12 @@ static struct bpf_map *htab_map_alloc(union bpf_attr 
> *attr)
> htab->n_buckets > U32_MAX / sizeof(struct bucket))
> goto free_htab;
>
> -   cost = (u64) htab->n_buckets * sizeof(struct bucket) +
> -  (u64) htab->elem_size * htab->map.max_entries;
> -
> -   if (percpu)
> -   cost += (u64) round_up(htab->map.value_size, 8) *
> -   num_possible_cpus() * htab->map.max_entries;
> -   else
> -  cost += (u64) htab->elem_size * num_possible_cpus();
> -
> -   /* if map size is larger than memlock limit, reject it */
> -   err = bpf_map_charge_init(&htab->map.memory, cost);
> -   if (err)
> -   goto free_htab;
> -
> err = -ENOMEM;
> htab->buckets = bpf_map_area_alloc(htab->n_buckets *
>sizeof(struct bucket),
>htab->map.numa_node);
> if (!htab->buckets)
> -   goto free_charge;
> +   goto free_htab;
>
> if (htab->map.map_flags & BPF_F_ZERO_SEED)
> htab->hashrnd = 0;
> @@ -508,8 +493,6 @@ static struct bpf_map *htab_map_alloc(union bpf_attr 
> *attr)
> prealloc_destroy(htab);
>  free_buckets:
> bpf_map_area_free(htab->buckets);
> -free_charge:
> -   bpf_map_charge_finish(&htab->map.memory);
>  free_htab:
> kfree(htab);
> return ERR_PTR(err);
> --
> 2.26.2
>


Re: [PATCH bpf-next v2 19/35] bpf: eliminate rlimit-based memory accounting for lpm_trie maps

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 12:25 PM Roman Gushchin  wrote:
>
> Do not use rlimit-based memory accounting for lpm_trie maps.
> It has been replaced with the memcg-based memory accounting.
>
> Signed-off-by: Roman Gushchin 

Acked-by: Song Liu 

> ---
>  kernel/bpf/lpm_trie.c | 13 -
>  1 file changed, 13 deletions(-)
>
> diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
> index d85e0fc2cafc..c747f0835eb1 100644
> --- a/kernel/bpf/lpm_trie.c
> +++ b/kernel/bpf/lpm_trie.c
> @@ -540,8 +540,6 @@ static int trie_delete_elem(struct bpf_map *map, void 
> *_key)
>  static struct bpf_map *trie_alloc(union bpf_attr *attr)
>  {
> struct lpm_trie *trie;
> -   u64 cost = sizeof(*trie), cost_per_node;
> -   int ret;
>
> if (!bpf_capable())
> return ERR_PTR(-EPERM);
> @@ -567,20 +565,9 @@ static struct bpf_map *trie_alloc(union bpf_attr *attr)
>   offsetof(struct bpf_lpm_trie_key, data);
> trie->max_prefixlen = trie->data_size * 8;
>
> -   cost_per_node = sizeof(struct lpm_trie_node) +
> -   attr->value_size + trie->data_size;
> -   cost += (u64) attr->max_entries * cost_per_node;
> -
> -   ret = bpf_map_charge_init(&trie->map.memory, cost);
> -   if (ret)
> -   goto out_err;
> -
> spin_lock_init(&trie->lock);
>
> return &trie->map;
> -out_err:
> -   kfree(trie);
> -   return ERR_PTR(ret);
>  }
>
>  static void trie_free(struct bpf_map *map)
> --
> 2.26.2
>


Re: [PATCH v7 3/8] scsi: ufs-qcom: Remove testbus dump in ufs_qcom_dump_dbg_regs

2020-07-27 Thread hongwus

On 2020-07-28 13:00, Can Guo wrote:

Dumping testbus registers is heavy enough to sometimes cause stability
issues, so just remove them for now.

Signed-off-by: Can Guo 
---
 drivers/scsi/ufs/ufs-qcom.c | 32 
 1 file changed, 32 deletions(-)

diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
index 7da27ee..96e0999 100644
--- a/drivers/scsi/ufs/ufs-qcom.c
+++ b/drivers/scsi/ufs/ufs-qcom.c
@@ -1620,44 +1620,12 @@ int ufs_qcom_testbus_config(struct 
ufs_qcom_host *host)

return 0;
 }

-static void ufs_qcom_testbus_read(struct ufs_hba *hba)
-{
-   ufshcd_dump_regs(hba, UFS_TEST_BUS, 4, "UFS_TEST_BUS ");
-}
-
-static void ufs_qcom_print_unipro_testbus(struct ufs_hba *hba)
-{
-   struct ufs_qcom_host *host = ufshcd_get_variant(hba);
-   u32 *testbus = NULL;
-   int i, nminor = 256, testbus_len = nminor * sizeof(u32);
-
-   testbus = kmalloc(testbus_len, GFP_KERNEL);
-   if (!testbus)
-   return;
-
-   host->testbus.select_major = TSTBUS_UNIPRO;
-   for (i = 0; i < nminor; i++) {
-   host->testbus.select_minor = i;
-   ufs_qcom_testbus_config(host);
-   testbus[i] = ufshcd_readl(hba, UFS_TEST_BUS);
-   }
-   print_hex_dump(KERN_ERR, "UNIPRO_TEST_BUS ", DUMP_PREFIX_OFFSET,
-   16, 4, testbus, testbus_len, false);
-   kfree(testbus);
-}
-
 static void ufs_qcom_dump_dbg_regs(struct ufs_hba *hba)
 {
ufshcd_dump_regs(hba, REG_UFS_SYS1CLK_1US, 16 * 4,
 "HCI Vendor Specific Registers ");

-   /* sleep a bit intermittently as we are dumping too much data */
 	ufs_qcom_print_hw_debug_reg_all(hba, NULL, 
ufs_qcom_dump_regs_wrapper);

-   udelay(1000);
-   ufs_qcom_testbus_read(hba);
-   udelay(1000);
-   ufs_qcom_print_unipro_testbus(hba);
-   udelay(1000);
 }

 /**


Reviewed-by: Hongwu Su 


Re: [PATCH bpf-next v2 16/35] bpf: eliminate rlimit-based memory accounting for cgroup storage maps

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 12:21 PM Roman Gushchin  wrote:
>
> Do not use rlimit-based memory accounting for cgroup storage maps.
> It has been replaced with the memcg-based memory accounting.
>
> Signed-off-by: Roman Gushchin 

Acked-by: Song Liu 

> ---
>  kernel/bpf/local_storage.c | 21 +
>  1 file changed, 1 insertion(+), 20 deletions(-)
>
> diff --git a/kernel/bpf/local_storage.c b/kernel/bpf/local_storage.c
> index 117acb2e80fb..5f29a420849c 100644
> --- a/kernel/bpf/local_storage.c
> +++ b/kernel/bpf/local_storage.c
> @@ -288,8 +288,6 @@ static struct bpf_map *cgroup_storage_map_alloc(union 
> bpf_attr *attr)
>  {
> int numa_node = bpf_map_attr_numa_node(attr);
> struct bpf_cgroup_storage_map *map;
> -   struct bpf_map_memory mem;
> -   int ret;
>
> if (attr->key_size != sizeof(struct bpf_cgroup_storage_key) &&
> attr->key_size != sizeof(__u64))
> @@ -309,18 +307,10 @@ static struct bpf_map *cgroup_storage_map_alloc(union 
> bpf_attr *attr)
> /* max_entries is not used and enforced to be 0 */
> return ERR_PTR(-EINVAL);
>
> -   ret = bpf_map_charge_init(&mem, sizeof(struct bpf_cgroup_storage_map));
> -   if (ret < 0)
> -   return ERR_PTR(ret);
> -
> map = kmalloc_node(sizeof(struct bpf_cgroup_storage_map),
>__GFP_ZERO | GFP_USER | __GFP_ACCOUNT, numa_node);
> -   if (!map) {
> -   bpf_map_charge_finish(&mem);
> +   if (!map)
> return ERR_PTR(-ENOMEM);
> -   }
> -
> -   bpf_map_charge_move(&map->map.memory, &mem);
>
> /* copy mandatory map attributes */
> bpf_map_init_from_attr(&map->map, attr);
> @@ -509,9 +499,6 @@ struct bpf_cgroup_storage 
> *bpf_cgroup_storage_alloc(struct bpf_prog *prog,
>
> size = bpf_cgroup_storage_calculate_size(map, &pages);
>
> -   if (bpf_map_charge_memlock(map, pages))
> -   return ERR_PTR(-EPERM);
> -
> storage = kmalloc_node(sizeof(struct bpf_cgroup_storage), gfp,
>map->numa_node);
> if (!storage)
> @@ -533,7 +520,6 @@ struct bpf_cgroup_storage 
> *bpf_cgroup_storage_alloc(struct bpf_prog *prog,
> return storage;
>
>  enomem:
> -   bpf_map_uncharge_memlock(map, pages);
> kfree(storage);
> return ERR_PTR(-ENOMEM);
>  }
> @@ -560,16 +546,11 @@ void bpf_cgroup_storage_free(struct bpf_cgroup_storage 
> *storage)
>  {
> enum bpf_cgroup_storage_type stype;
> struct bpf_map *map;
> -   u32 pages;
>
> if (!storage)
> return;
>
> map = &storage->map->map;
> -
> -   bpf_cgroup_storage_calculate_size(map, &pages);
> -   bpf_map_uncharge_memlock(map, pages);
> -
> stype = cgroup_storage_type(map);
> if (stype == BPF_CGROUP_STORAGE_SHARED)
> call_rcu(&storage->rcu, free_shared_cgroup_storage_rcu);
> --
> 2.26.2
>


Re: [PATCH bpf-next v2 17/35] bpf: eliminate rlimit-based memory accounting for devmap maps

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 12:20 PM Roman Gushchin  wrote:
>
> Do not use rlimit-based memory accounting for devmap maps.
> It has been replaced with the memcg-based memory accounting.
>
> Signed-off-by: Roman Gushchin 

Acked-by: Song Liu 

> ---
>  kernel/bpf/devmap.c | 18 ++
>  1 file changed, 2 insertions(+), 16 deletions(-)
>
> diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
> index 05bf93088063..8148c7260a54 100644
> --- a/kernel/bpf/devmap.c
> +++ b/kernel/bpf/devmap.c
> @@ -109,8 +109,6 @@ static inline struct hlist_head 
> *dev_map_index_hash(struct bpf_dtab *dtab,
>  static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
>  {
> u32 valsize = attr->value_size;
> -   u64 cost = 0;
> -   int err;
>
> /* check sanity of attributes. 2 value sizes supported:
>  * 4 bytes: ifindex
> @@ -135,21 +133,13 @@ static int dev_map_init_map(struct bpf_dtab *dtab, 
> union bpf_attr *attr)
>
> if (!dtab->n_buckets) /* Overflow check */
> return -EINVAL;
> -   cost += (u64) sizeof(struct hlist_head) * dtab->n_buckets;
> -   } else {
> -   cost += (u64) dtab->map.max_entries * sizeof(struct 
> bpf_dtab_netdev *);
> }
>
> -   /* if map size is larger than memlock limit, reject it */
> -   err = bpf_map_charge_init(&dtab->map.memory, cost);
> -   if (err)
> -   return -EINVAL;
> -
> if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
> dtab->dev_index_head = dev_map_create_hash(dtab->n_buckets,
>
> dtab->map.numa_node);
> if (!dtab->dev_index_head)
> -   goto free_charge;
> +   return -ENOMEM;
>
> spin_lock_init(&dtab->index_lock);
> } else {
> @@ -157,14 +147,10 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
>   sizeof(struct bpf_dtab_netdev *),
>   dtab->map.numa_node);
> if (!dtab->netdev_map)
> -   goto free_charge;
> +   return -ENOMEM;
> }
>
> return 0;
> -
> -free_charge:
> -   bpf_map_charge_finish(&dtab->map.memory);
> -   return -ENOMEM;
>  }
>
>  static struct bpf_map *dev_map_alloc(union bpf_attr *attr)
> --
> 2.26.2
>


Re: [PATCH bpf-next v2 15/35] bpf: eliminate rlimit-based memory accounting for cpumap maps

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 12:22 PM Roman Gushchin  wrote:
>
> Do not use rlimit-based memory accounting for cpumap maps.
> It has been replaced with the memcg-based memory accounting.
>
> Signed-off-by: Roman Gushchin 

Acked-by: Song Liu 

> ---
>  kernel/bpf/cpumap.c | 16 +---
>  1 file changed, 1 insertion(+), 15 deletions(-)
>
> diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
> index 74ae9fcbe82e..50f3444a3301 100644
> --- a/kernel/bpf/cpumap.c
> +++ b/kernel/bpf/cpumap.c
> @@ -86,8 +86,6 @@ static struct bpf_map *cpu_map_alloc(union bpf_attr *attr)
> u32 value_size = attr->value_size;
> struct bpf_cpu_map *cmap;
> int err = -ENOMEM;
> -   u64 cost;
> -   int ret;
>
> if (!bpf_capable())
> return ERR_PTR(-EPERM);
> @@ -111,26 +109,14 @@ static struct bpf_map *cpu_map_alloc(union bpf_attr *attr)
> goto free_cmap;
> }
>
> -   /* make sure page count doesn't overflow */
> -   cost = (u64) cmap->map.max_entries * sizeof(struct bpf_cpu_map_entry *);
> -
> -   /* Notice returns -EPERM on if map size is larger than memlock limit */
> -   ret = bpf_map_charge_init(&cmap->map.memory, cost);
> -   if (ret) {
> -   err = ret;
> -   goto free_cmap;
> -   }
> -
> /* Alloc array for possible remote "destination" CPUs */
> cmap->cpu_map = bpf_map_area_alloc(cmap->map.max_entries *
>sizeof(struct bpf_cpu_map_entry *),
>cmap->map.numa_node);
> if (!cmap->cpu_map)
> -   goto free_charge;
> +   goto free_cmap;
>
> return &cmap->map;
> -free_charge:
> -   bpf_map_charge_finish(&cmap->map.memory);
>  free_cmap:
> kfree(cmap);
> return ERR_PTR(err);
> --
> 2.26.2
>


Re: [PATCH bpf-next v2 14/35] bpf: eliminate rlimit-based memory accounting for bpf_struct_ops maps

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 12:26 PM Roman Gushchin  wrote:
>
> Do not use rlimit-based memory accounting for bpf_struct_ops maps.
> It has been replaced with the memcg-based memory accounting.
>
> Signed-off-by: Roman Gushchin 

Acked-by: Song Liu 


Re: [PATCH net-next RFC 00/13] Add devlink reload level option

2020-07-27 Thread Vasundhara Volam
On Mon, Jul 27, 2020 at 4:36 PM Moshe Shemesh  wrote:
>
> Introduce new option on devlink reload API to enable the user to select the
> reload level required. Complete support for all levels in mlx5.
> The following reload levels are supported:
>   driver: Driver entities re-instantiation only.
>   fw_reset: Firmware reset and driver entities re-instantiation.
The name is a little confusing. I think it should be renamed to
fw_live_reset (in which both firmware and driver entities are
re-instantiated). For fw_reset alone, the driver should not undergo
reset (a driver reload is required for the firmware to undergo reset).

>   fw_live_patch: Firmware live patching only.
This level is not clear. Is this similar to flashing?

Also I have a basic query. The reload command is split into
reload_up/reload_down handlers (Please correct me if this behaviour is
changed with this patchset). What if the vendor-specific driver does
not support up/down and needs only a single handler to fire a firmware
reset or firmware live reset command?
>
> Each driver which support this command should expose the reload levels
> supported and the driver's default reload level.
> The uAPI is backward compatible, if the reload level option is omitted
> from the reload command, the driver's default reload level will be used.
>
> Patch 1 adds the new API reload level option to devlink.
> Patch 2 exposes the supported reload levels and default level on devlink
> dev get.
> Patches 3-8 add support on mlx5 for devlink reload level fw-reset and
> handle the firmware reset events.
> Patches 9-10 add devlink enable remote dev reset parameter and use it
>  in mlx5.
> Patches 11-12 mlx5 add devlink reload live patch support and event
>   handling.
> Patch 13 adds documentation file devlink-reload.rst
>
> Command examples:
>
> # Run reload command with fw-reset reload level:
> $ devlink dev reload pci/:82:00.0 level fw-reset
>
> # Run reload command with driver reload level:
> $ devlink dev reload pci/:82:00.0 level driver
>
> # Run reload command with driver's default level (backward compatible):
> $ devlink dev reload pci/:82:00.0
>
>
> Moshe Shemesh (13):
>   devlink: Add reload level option to devlink reload command
>   devlink: Add reload levels data to dev get
>   net/mlx5: Add functions to set/query MFRL register
>   net/mlx5: Set cap for pci sync for fw update event
>   net/mlx5: Handle sync reset request event
>   net/mlx5: Handle sync reset now event
>   net/mlx5: Handle sync reset abort event
>   net/mlx5: Add support for devlink reload level fw reset
>   devlink: Add enable_remote_dev_reset generic parameter
>   net/mlx5: Add devlink param enable_remote_dev_reset support
>   net/mlx5: Add support for fw live patch event
>   net/mlx5: Add support for devlink reload level live patch
>   devlink: Add Documentation/networking/devlink/devlink-reload.rst
>
>  .../networking/devlink/devlink-params.rst |   6 +
>  .../networking/devlink/devlink-reload.rst |  56 +++
>  Documentation/networking/devlink/index.rst|   1 +
>  drivers/net/ethernet/mellanox/mlx4/main.c |   6 +-
>  .../net/ethernet/mellanox/mlx5/core/Makefile  |   2 +-
>  .../net/ethernet/mellanox/mlx5/core/devlink.c | 114 +-
>  .../mellanox/mlx5/core/diag/fw_tracer.c   |  31 ++
>  .../mellanox/mlx5/core/diag/fw_tracer.h   |   1 +
>  .../ethernet/mellanox/mlx5/core/fw_reset.c| 328 ++
>  .../ethernet/mellanox/mlx5/core/fw_reset.h|  17 +
>  .../net/ethernet/mellanox/mlx5/core/health.c  |  74 +++-
>  .../net/ethernet/mellanox/mlx5/core/main.c|  13 +
>  drivers/net/ethernet/mellanox/mlxsw/core.c|   6 +-
>  drivers/net/netdevsim/dev.c   |   6 +-
>  include/linux/mlx5/device.h   |   1 +
>  include/linux/mlx5/driver.h   |  12 +
>  include/net/devlink.h |  10 +-
>  include/uapi/linux/devlink.h  |  22 ++
>  net/core/devlink.c|  95 -
>  19 files changed, 764 insertions(+), 37 deletions(-)
>  create mode 100644 Documentation/networking/devlink/devlink-reload.rst
>  create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
>  create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h
>
> --
> 2.17.1
>


Re: [Linux-kernel-mentees] [PATCH net] xdp: Prevent kernel-infoleak in xsk_getsockopt()

2020-07-27 Thread Peilin Ye
On Mon, Jul 27, 2020 at 10:07:20PM -0700, Song Liu wrote:
> On Mon, Jul 27, 2020 at 7:30 PM Peilin Ye  wrote:
> >
> > xsk_getsockopt() is copying uninitialized stack memory to userspace when
> > `extra_stats` is `false`. Fix it by initializing `stats` with memset().
> >
> > Cc: sta...@vger.kernel.org
> 
> 8aa5a33578e9 is not in stable branches yet, so we don't need to Cc stable.
> 
> > Fixes: 8aa5a33578e9 ("xsk: Add new statistics")
> > Suggested-by: Dan Carpenter 
> > Signed-off-by: Peilin Ye 
> > ---
> >  net/xdp/xsk.c | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > index 26e3bba8c204..acf001908a0d 100644
> > --- a/net/xdp/xsk.c
> > +++ b/net/xdp/xsk.c
> > @@ -844,6 +844,8 @@ static int xsk_getsockopt(struct socket *sock, int level, int optname,
> > bool extra_stats = true;
> > size_t stats_size;
> >
> > +   memset(&stats, 0, sizeof(stats));
> > +
> 
> xsk.c doesn't include linux/string.h directly, so using memset may break
> build for some config combinations. We can probably just use
> 
> struct xdp_statistics stats = {};

I see. I will send v2 soon. Thank you for reviewing the patch!

Peilin Ye


Re: [PATCH] drivers/net/wan/lapbether: Use needed_headroom instead of hard_header_len

2020-07-27 Thread Cong Wang
Hello,

On Mon, Jul 27, 2020 at 12:41 PM Xie He  wrote:
>
> Hi Cong Wang,
>
> I'm wishing to change a driver from using "hard_header_len" to using
> "needed_headroom" to declare its needed headroom. I submitted a patch
> and it is decided it needs to be reviewed. I see you participated in
> "hard_header_len vs needed_headroom" discussions in the past. Can you
> help me review this patch? Thanks!
>
> The patch is at:
> http://patchwork.ozlabs.org/project/netdev/patch/20200726110524.151957-1-xie.he.0...@gmail.com/
>
> In my understanding, hard_header_len should be the length of the header
> created by dev_hard_header. Any additional headroom needed should be
> declared in needed_headroom instead of hard_header_len. I came to this
> conclusion by examining the logic of net/packet/af_packet.c:packet_snd.

I am not familiar with this WAN driver, but I suggest you to look at
the following commit, which provides a lot of useful information:

commit 9454f7a895b822dd8fb4588fc55fda7c96728869
Author: Brian Norris 
Date:   Wed Feb 26 16:05:11 2020 -0800

mwifiex: set needed_headroom, not hard_header_len

hard_header_len provides limitations for things like AF_PACKET, such
that we don't allow transmitting packets smaller than this.

needed_headroom provides a suggested minimum headroom for SKBs, so that
we can trivally add our headers to the front.

The latter is the correct field to use in this case, while the former
mostly just prevents sending small AF_PACKET frames.

In any case, mwifiex already does its own bounce buffering [1] if we
don't have enough headroom, so hints (not hard limits) are all that are
needed.

This is the essentially the same bug (and fix) that brcmfmac had, fixed
in commit cb39288fd6bb ("brcmfmac: use ndev->needed_headroom to reserve
additional header space").

[1] mwifiex_hard_start_xmit():
if (skb_headroom(skb) < MWIFIEX_MIN_DATA_HEADER_LEN) {
[...]
/* Insufficient skb headroom - allocate a new skb */

Hope this helps.

Thanks.


drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c:1388:5: warning: 'strncpy' output may be truncated copying 32 bytes from a string of length 32

2020-07-27 Thread kernel test robot
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 
master
head:   92ed301919932f13b9172e525674157e983d
commit: df41017eafd267c08acbfff99d34e4f96bbfbc92 ia64: remove support for 
machvecs
date:   12 months ago
config: ia64-randconfig-r003-20200728 (attached as .config)
compiler: ia64-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
chmod +x ~/bin/make.cross
git checkout df41017eafd267c08acbfff99d34e4f96bbfbc92
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross 
ARCH=ia64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All warnings (new ones prefixed by >>):

   drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c: In function 
'ieee80211_softmac_init':
   drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c:2598:8: warning: cast 
between incompatible function types from 'void (*)(struct ieee80211_device *)' 
to 'void (*)(long unsigned int)' [-Wcast-function-type]
2598 |(void(*)(unsigned long)) ieee80211_sta_ps,
 |^
   drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c: In function 
'ieee80211_softmac_new_net':
>> drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c:1388:5: warning: 
>> 'strncpy' output may be truncated copying 32 bytes from a string of length 
>> 32 [-Wstringop-truncation]
1388 | strncpy(tmp_ssid, ieee->current_network.ssid, IW_ESSID_MAX_SIZE);
 | ^~~~
   drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c:1393:4: warning: 
'strncpy' output may be truncated copying 32 bytes from a string of length 32 
[-Wstringop-truncation]
1393 |strncpy(ieee->current_network.ssid, tmp_ssid, IW_ESSID_MAX_SIZE);
 |^~~~
   In function 'ieee80211_softmac_new_net',
   inlined from 'ieee80211_softmac_check_all_nets' at 
drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c:1455:4:
>> drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c:1388:5: warning: 
>> 'strncpy' output may be truncated copying 32 bytes from a string of length 
>> 32 [-Wstringop-truncation]
1388 | strncpy(tmp_ssid, ieee->current_network.ssid, IW_ESSID_MAX_SIZE);
 | ^~~~
   drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c:1393:4: warning: 
'strncpy' output may be truncated copying 32 bytes from a string of length 32 
[-Wstringop-truncation]
1393 |strncpy(ieee->current_network.ssid, tmp_ssid, IW_ESSID_MAX_SIZE);
 |^~~~

vim +/strncpy +1388 drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c

8fc8598e61f6f3 Jerry Chuang 2009-11-03  1341  
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1342  inline void 
ieee80211_softmac_new_net(struct ieee80211_device *ieee, struct 
ieee80211_network *net)
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1343  {
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1344u8 
tmp_ssid[IW_ESSID_MAX_SIZE + 1];
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1345int tmp_ssid_len = 0;
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1346  
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1347short apset, ssidset, 
ssidbroad, apmatch, ssidmatch;
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1348  
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1349/* we are interested in 
new new only if we are not associated
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1350 * and we are not 
associating / authenticating
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1351 */
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1352if (ieee->state != 
IEEE80211_NOLINK)
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1353return;
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1354  
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1355if ((ieee->iw_mode == 
IW_MODE_INFRA) && !(net->capability & WLAN_CAPABILITY_BSS))
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1356return;
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1357  
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1358if ((ieee->iw_mode == 
IW_MODE_ADHOC) && !(net->capability & WLAN_CAPABILITY_IBSS))
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1359return;
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1360  
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1361if (ieee->iw_mode == 
IW_MODE_INFRA || ieee->iw_mode == IW_MODE_ADHOC) {
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1362/* if the user 
specified the AP MAC, we need also the essid
8fc8598e61f6f3 Jerry Chuang 2009-11-03  1363 * This could 
be obtained by 

Re: [PATCH] MAINTAINERS: update entry to thermal governors file name prefixing

2020-07-27 Thread Amit Kucheria
On Tue, Jul 28, 2020 at 10:29 AM Lukas Bulwahn  wrote:
>
> Commit 0015d9a2a727 ("thermal/governors: Prefix all source files with
> gov_") renamed power_allocator.c to gov_power_allocator.c in
> ./drivers/thermal amongst some other file renames, but missed to adjust
> the MAINTAINERS entry.
>
> Hence, ./scripts/get_maintainer.pl --self-test=patterns complains:
>
>   warning: no file matchesF:drivers/thermal/power_allocator.c
>
> Update the file entry in MAINTAINERS to the new file name.
>
> Signed-off-by: Lukas Bulwahn 

Acked-by: Amit Kucheria 

> ---
> Amit, please ack.
>
> Daniel, please pick this non-urgent minor patch for your -next tree.
>
> applies cleanly on next-20200727
>
>  MAINTAINERS | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index aad65cc8f35d..aa5a11d71f71 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -17164,7 +17164,7 @@ M:  Lukasz Luba 
>  L: linux...@vger.kernel.org
>  S: Maintained
>  F: Documentation/driver-api/thermal/power_allocator.rst
> -F: drivers/thermal/power_allocator.c
> +F: drivers/thermal/gov_power_allocator.c
>  F: include/trace/events/thermal_power_allocator.h
>
>  THINKPAD ACPI EXTRAS DRIVER
> --
> 2.17.1
>


Re: [PATCH v2] ata: use generic power management

2020-07-27 Thread Vaibhav Gupta
On Mon, Jul 27, 2020 at 02:30:03PM -0600, Jens Axboe wrote:
> On 7/27/20 12:11 PM, Vaibhav Gupta wrote:
> > On Mon, Jul 27, 2020 at 11:59:05AM -0600, Jens Axboe wrote:
> >> On 7/27/20 11:51 AM, Vaibhav Gupta wrote:
> >>> On Mon, Jul 27, 2020 at 11:42:51AM -0600, Jens Axboe wrote:
>  On 7/27/20 11:40 AM, Vaibhav Gupta wrote:
> > Yes, I agree. Actually with previous drivers, I was able to get help
> > from maintainers and/or supporters for the hardware testing. Is that
> > possible for this patch?
> 
> It might be, you'll have to ask people to help you, very rarely do people
> just test patches unsolicited unless they have some sort of interest in the
> feature.
> 
> This is all part of what it takes to get code upstream. Writing the code
> is just a small part of it, the bigger part is usually getting it tested
> and providing some assurance that you are willing to fix issues when/if
> they come up.
> 
> You might want to consider splitting up the patchset a bit - you could
> have one patch for the generic bits, then one for each chipset. That
> would allow you to at least get some of the work upstream, once tested.
>
I think I can break this patch into one commit per driver. The reason that
all updates got into one single patch is that I made
ata_pci_device_suspend/resume() static and exported just the
ata_pci_device_pm_ops variable. Thus, all the drivers using .suspend()/.resume()
had to be updated in a single patch.

First I will make changes in drivers/ata/libata-core.c, but won't make any
function static. Thus, each driver can be updated in independent commits
without breaking anything. And then in the last commit, I can hide the
unnecessary .suspend()/.resume() callbacks. This will create a patch series of 55
or 56 patches.

Will this approach work?

Thanks
Vaibhav Gupta
> -- 
> Jens Axboe
> 


[PATCH 13/15] arch, drivers: replace for_each_membock() with for_each_mem_range()

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

There are several occurrences of the following pattern:

for_each_memblock(memory, reg) {
start = __pfn_to_phys(memblock_region_memory_base_pfn(reg);
end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));

/* do something with start and end */
}

Using for_each_mem_range() iterator is more appropriate in such cases and
allows simpler and cleaner code.

Signed-off-by: Mike Rapoport 
---
 arch/arm/kernel/setup.c  | 18 +++
 arch/arm/mm/mmu.c| 39 
 arch/arm/mm/pmsa-v7.c| 20 ++--
 arch/arm/mm/pmsa-v8.c| 17 +--
 arch/arm/xen/mm.c|  7 +++--
 arch/arm64/mm/kasan_init.c   |  8 ++---
 arch/arm64/mm/mmu.c  | 11 ++-
 arch/c6x/kernel/setup.c  |  9 +++---
 arch/microblaze/mm/init.c|  9 +++---
 arch/mips/cavium-octeon/dma-octeon.c | 12 
 arch/mips/kernel/setup.c | 31 +--
 arch/openrisc/mm/init.c  |  8 +++--
 arch/powerpc/kernel/fadump.c | 27 +++-
 arch/powerpc/mm/book3s64/hash_utils.c| 16 +-
 arch/powerpc/mm/book3s64/radix_pgtable.c | 11 +++
 arch/powerpc/mm/kasan/kasan_init_32.c|  8 ++---
 arch/powerpc/mm/mem.c| 16 ++
 arch/powerpc/mm/pgtable_32.c |  8 ++---
 arch/riscv/mm/init.c | 24 ++-
 arch/riscv/mm/kasan_init.c   | 10 +++---
 arch/s390/kernel/setup.c | 27 ++--
 arch/s390/mm/vmem.c  | 16 +-
 arch/sparc/mm/init_64.c  | 12 +++-
 drivers/bus/mvebu-mbus.c | 12 
 drivers/s390/char/zcore.c|  9 +++---
 25 files changed, 187 insertions(+), 198 deletions(-)

diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index d8e18cdd96d3..3f65d0ac9f63 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -843,19 +843,25 @@ early_param("mem", early_mem);
 
 static void __init request_standard_resources(const struct machine_desc *mdesc)
 {
-   struct memblock_region *region;
+   phys_addr_t start, end, res_end;
struct resource *res;
+   u64 i;
 
kernel_code.start   = virt_to_phys(_text);
kernel_code.end = virt_to_phys(__init_begin - 1);
kernel_data.start   = virt_to_phys(_sdata);
kernel_data.end = virt_to_phys(_end - 1);
 
-   for_each_memblock(memory, region) {
-   phys_addr_t start = __pfn_to_phys(memblock_region_memory_base_pfn(region));
-   phys_addr_t end = __pfn_to_phys(memblock_region_memory_end_pfn(region)) - 1;
+   for_each_mem_range(i, &start, &end) {
unsigned long boot_alias_start;
 
+   /*
+* In memblock, end points to the first byte after the
+* range while in resourses, end points to the last byte in
+* the range.
+*/
+   res_end = end - 1;
+
/*
 * Some systems have a special memory alias which is only
 * used for booting.  We need to advertise this region to
@@ -869,7 +875,7 @@ static void __init request_standard_resources(const struct machine_desc *mdesc)
  __func__, sizeof(*res));
res->name = "System RAM (boot alias)";
res->start = boot_alias_start;
-   res->end = phys_to_idmap(end);
+   res->end = phys_to_idmap(res_end);
res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
request_resource(_resource, res);
}
@@ -880,7 +886,7 @@ static void __init request_standard_resources(const struct machine_desc *mdesc)
  sizeof(*res));
res->name  = "System RAM";
res->start = start;
-   res->end = end;
+   res->end = res_end;
res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
 
request_resource(_resource, res);
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 628028bfbb92..a149d9cb4fdb 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1155,9 +1155,8 @@ phys_addr_t arm_lowmem_limit __initdata = 0;
 
 void __init adjust_lowmem_bounds(void)
 {
-   phys_addr_t memblock_limit = 0;
-   u64 vmalloc_limit;
-   struct memblock_region *reg;
+   phys_addr_t block_start, block_end, memblock_limit = 0;
+   u64 vmalloc_limit, i;
phys_addr_t lowmem_limit = 0;
 
/*
@@ -1173,26 +1172,18 @@ void __init adjust_lowmem_bounds(void)
 * The first usable region must be PMD aligned. Mark its start
 * as MEMBLOCK_NOMAP if it isn't
   

[PATCH 15/15] memblock: remove 'type' parameter from for_each_memblock()

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

for_each_memblock() is used exclusively to iterate over memblock.memory in
a few places that use data from memblock_region rather than the memory
ranges.

Remove type parameter from the for_each_memblock() iterator to improve
encapsulation of memblock internals from its users.

Signed-off-by: Mike Rapoport 
---
 arch/arm64/kernel/setup.c  |  2 +-
 arch/arm64/mm/numa.c   |  2 +-
 arch/mips/netlogic/xlp/setup.c |  2 +-
 include/linux/memblock.h   | 10 +++---
 mm/memblock.c  |  4 ++--
 mm/page_alloc.c|  8 
 6 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 93b3844cf442..23da7908cbed 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -217,7 +217,7 @@ static void __init request_standard_resources(void)
if (!standard_resources)
panic("%s: Failed to allocate %zu bytes\n", __func__, res_size);
 
-   for_each_memblock(memory, region) {
+   for_each_memblock(region) {
res = &standard_resources[i++];
if (memblock_is_nomap(region)) {
res->name  = "reserved";
diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
index 0cbdbcc885fb..08721d2c0b79 100644
--- a/arch/arm64/mm/numa.c
+++ b/arch/arm64/mm/numa.c
@@ -350,7 +350,7 @@ static int __init numa_register_nodes(void)
struct memblock_region *mblk;
 
/* Check that valid nid is set to memblks */
-   for_each_memblock(memory, mblk) {
+   for_each_memblock(mblk) {
int mblk_nid = memblock_get_region_node(mblk);
 
if (mblk_nid == NUMA_NO_NODE || mblk_nid >= MAX_NUMNODES) {
diff --git a/arch/mips/netlogic/xlp/setup.c b/arch/mips/netlogic/xlp/setup.c
index 1a0fc5b62ba4..e69d9fc468cf 100644
--- a/arch/mips/netlogic/xlp/setup.c
+++ b/arch/mips/netlogic/xlp/setup.c
@@ -70,7 +70,7 @@ static void nlm_fixup_mem(void)
const int pref_backup = 512;
struct memblock_region *mem;
 
-   for_each_memblock(memory, mem) {
+   for_each_memblock(mem) {
memblock_remove(mem->base + mem->size - pref_backup,
pref_backup);
}
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index d70c2835e913..c901cb8ecf92 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -527,9 +527,13 @@ static inline unsigned long 
memblock_region_reserved_end_pfn(const struct memblo
return PFN_UP(reg->base + reg->size);
 }
 
-#define for_each_memblock(memblock_type, region)   \
-   for (region = memblock.memblock_type.regions;   \
-region < (memblock.memblock_type.regions + memblock.memblock_type.cnt);\
+/**
+ * for_each_memblock - iterate over registered memory regions
+ * @region: loop variable
+ */
+#define for_each_memblock(region)  \
+   for (region = memblock.memory.regions;  \
+region < (memblock.memory.regions + memblock.memory.cnt);  \
 region++)
 
 extern void *alloc_large_system_hash(const char *tablename,
diff --git a/mm/memblock.c b/mm/memblock.c
index 2ad5e6e47215..550bb72cf6cb 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1694,7 +1694,7 @@ static phys_addr_t __init_memblock __find_max_addr(phys_addr_t limit)
 * the memory memblock regions, if the @limit exceeds the total size
 * of those regions, max_addr will keep original value PHYS_ADDR_MAX
 */
-   for_each_memblock(memory, r) {
+   for_each_memblock(r) {
if (limit <= r->size) {
max_addr = r->base + limit;
break;
@@ -1864,7 +1864,7 @@ void __init_memblock memblock_trim_memory(phys_addr_t align)
phys_addr_t start, end, orig_start, orig_end;
struct memblock_region *r;
 
-   for_each_memblock(memory, r) {
+   for_each_memblock(r) {
orig_start = r->base;
orig_end = r->base + r->size;
start = round_up(orig_start, align);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 95af111d69d3..8a19f46dc86e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5927,7 +5927,7 @@ overlap_memmap_init(unsigned long zone, unsigned long *pfn)
 
if (mirrored_kernelcore && zone == ZONE_MOVABLE) {
if (!r || *pfn >= memblock_region_memory_end_pfn(r)) {
-   for_each_memblock(memory, r) {
+   for_each_memblock(r) {
if (*pfn < memblock_region_memory_end_pfn(r))
break;
}
@@ -6528,7 +6528,7 @@ static unsigned long __init zone_absent_pages_in_node(int nid,
unsigned long start_pfn, end_pfn;
struct memblock_region *r;
 
- 

[PATCH 14/15] x86/numa: remove redundant iteration over memblock.reserved

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

numa_clear_kernel_node_hotplug() function first traverses numa_meminfo
regions to set the node ID in memblock.reserved and then traverses
memblock.reserved to update reserved_nodemask to include node IDs that were
set in the first loop.

Remove redundant traversal over memblock.reserved and update
reserved_nodemask while iterating over numa_meminfo.

Signed-off-by: Mike Rapoport 
---
 arch/x86/mm/numa.c | 26 ++
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 8ee952038c80..4078abd33938 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -498,31 +498,25 @@ static void __init numa_clear_kernel_node_hotplug(void)
 * and use those ranges to set the nid in memblock.reserved.
 * This will split up the memblock regions along node
 * boundaries and will set the node IDs as well.
+*
+* The nid will also be set in reserved_nodemask which is later
+* used to clear MEMBLOCK_HOTPLUG flag.
+*
+* [ Note, when booting with mem=nn[kMG] or in a kdump kernel,
+*   numa_meminfo might not include all memblock.reserved
+*   memory ranges, because quirks such as trim_snb_memory()
+*   reserve specific pages for Sandy Bridge graphics.
+*   These ranges will remain with nid == MAX_NUMNODES. ]
 */
for (i = 0; i < numa_meminfo.nr_blks; i++) {
struct numa_memblk *mb = numa_meminfo.blk + i;
int ret;
 
ret = memblock_set_node(mb->start, mb->end - mb->start, &memblock.reserved, mb->nid);
+   node_set(mb->nid, reserved_nodemask);
WARN_ON_ONCE(ret);
}
 
-   /*
-* Now go over all reserved memblock regions, to construct a
-* node mask of all kernel reserved memory areas.
-*
-* [ Note, when booting with mem=nn[kMG] or in a kdump kernel,
-*   numa_meminfo might not include all memblock.reserved
-*   memory ranges, because quirks such as trim_snb_memory()
-*   reserve specific pages for Sandy Bridge graphics. ]
-*/
-   for_each_memblock(reserved, mb_region) {
-   int nid = memblock_get_region_node(mb_region);
-
-   if (nid != MAX_NUMNODES)
-   node_set(nid, reserved_nodemask);
-   }
-
/*
 * Finally, clear the MEMBLOCK_HOTPLUG flag for all memory
 * belonging to the reserved node mask.
-- 
2.26.2



[PATCH 08/15] microblaze: drop unneeded NUMA and sparsemem initializations

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

microblaze supports neither NUMA nor SPARSEMEM, so there is no point in
calling memblock_set_node() and sparse_memory_present_with_active_regions()
during microblaze memory initialization.

Remove these calls and the surrounding code.

Signed-off-by: Mike Rapoport 
---
 arch/microblaze/mm/init.c | 17 +
 1 file changed, 1 insertion(+), 16 deletions(-)

diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c
index 521b59ba716c..49e0c241f9b1 100644
--- a/arch/microblaze/mm/init.c
+++ b/arch/microblaze/mm/init.c
@@ -105,9 +105,8 @@ static void __init paging_init(void)
 
 void __init setup_memory(void)
 {
-   struct memblock_region *reg;
-
 #ifndef CONFIG_MMU
+   struct memblock_region *reg;
u32 kernel_align_start, kernel_align_size;
 
/* Find main memory where is the kernel */
@@ -161,20 +160,6 @@ void __init setup_memory(void)
pr_info("%s: max_low_pfn: %#lx\n", __func__, max_low_pfn);
pr_info("%s: max_pfn: %#lx\n", __func__, max_pfn);
 
-   /* Add active regions with valid PFNs */
-   for_each_memblock(memory, reg) {
-   unsigned long start_pfn, end_pfn;
-
-   start_pfn = memblock_region_memory_base_pfn(reg);
-   end_pfn = memblock_region_memory_end_pfn(reg);
-   memblock_set_node(start_pfn << PAGE_SHIFT,
- (end_pfn - start_pfn) << PAGE_SHIFT,
- &memblock.memory, 0);
-   }
-
-   /* XXX need to clip this if using highmem? */
-   sparse_memory_present_with_active_regions(0);
-
paging_init();
 }
 
-- 
2.26.2



[PATCH 12/15] arch, mm: replace for_each_memblock() with for_each_mem_pfn_range()

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

There are several occurrences of the following pattern:

for_each_memblock(memory, reg) {
start_pfn = memblock_region_memory_base_pfn(reg);
end_pfn = memblock_region_memory_end_pfn(reg);

/* do something with start_pfn and end_pfn */
}

Rather than iterate over all memblock.memory regions and each time query
for their start and end PFNs, use for_each_mem_pfn_range() iterator to get
simpler and clearer code.

Signed-off-by: Mike Rapoport 
---
 arch/arm/mm/init.c   | 11 ---
 arch/arm64/mm/init.c | 11 ---
 arch/powerpc/kernel/fadump.c | 11 ++-
 arch/powerpc/mm/mem.c| 15 ---
 arch/powerpc/mm/numa.c   |  7 ++-
 arch/s390/mm/page-states.c   |  6 ++
 arch/sh/mm/init.c|  9 +++--
 mm/memblock.c|  6 ++
 mm/sparse.c  | 10 --
 9 files changed, 35 insertions(+), 51 deletions(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 626af348eb8f..bb56668b4f54 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -304,16 +304,14 @@ free_memmap(unsigned long start_pfn, unsigned long end_pfn)
  */
 static void __init free_unused_memmap(void)
 {
-   unsigned long start, prev_end = 0;
-   struct memblock_region *reg;
+   unsigned long start, end, prev_end = 0;
+   int i;
 
/*
 * This relies on each bank being in address order.
 * The banks are sorted previously in bootmem_init().
 */
-   for_each_memblock(memory, reg) {
-   start = memblock_region_memory_base_pfn(reg);
-
+   for_each_mem_pfn_range(i, NUMA_NO_NODE, &start, &end, NULL) {
 #ifdef CONFIG_SPARSEMEM
/*
 * Take care not to free memmap entries that don't exist
@@ -341,8 +339,7 @@ static void __init free_unused_memmap(void)
 * memmap entries are valid from the bank end aligned to
 * MAX_ORDER_NR_PAGES.
 */
-   prev_end = ALIGN(memblock_region_memory_end_pfn(reg),
-MAX_ORDER_NR_PAGES);
+   prev_end = ALIGN(end, MAX_ORDER_NR_PAGES);
}
 
 #ifdef CONFIG_SPARSEMEM
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 1e93cfc7c47a..271a8ea32482 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -473,12 +473,10 @@ static inline void free_memmap(unsigned long start_pfn, unsigned long end_pfn)
  */
 static void __init free_unused_memmap(void)
 {
-   unsigned long start, prev_end = 0;
-   struct memblock_region *reg;
-
-   for_each_memblock(memory, reg) {
-   start = __phys_to_pfn(reg->base);
+   unsigned long start, end, prev_end = 0;
+   int i;
 
+   for_each_mem_pfn_range(i, NUMA_NO_NODE, &start, &end, NULL) {
 #ifdef CONFIG_SPARSEMEM
/*
 * Take care not to free memmap entries that don't exist due
@@ -498,8 +496,7 @@ static void __init free_unused_memmap(void)
 * memmap entries are valid from the bank end aligned to
 * MAX_ORDER_NR_PAGES.
 */
-   prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
-MAX_ORDER_NR_PAGES);
+   prev_end = ALIGN(end, MAX_ORDER_NR_PAGES);
}
 
 #ifdef CONFIG_SPARSEMEM
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index 2446a61e3c25..fdbafe417139 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -1216,14 +1216,15 @@ static void fadump_free_reserved_memory(unsigned long start_pfn,
  */
 static void fadump_release_reserved_area(u64 start, u64 end)
 {
-   u64 tstart, tend, spfn, epfn;
-   struct memblock_region *reg;
+   u64 tstart, tend, spfn, epfn, reg_spfn, reg_epfn, i;
 
spfn = PHYS_PFN(start);
epfn = PHYS_PFN(end);
-   for_each_memblock(memory, reg) {
-   tstart = max_t(u64, spfn, memblock_region_memory_base_pfn(reg));
-   tend   = min_t(u64, epfn, memblock_region_memory_end_pfn(reg));
+
+   for_each_mem_pfn_range(i, NUMA_NO_NODE, &reg_spfn, &reg_epfn, NULL) {
+   tstart = max_t(u64, spfn, reg_spfn);
+   tend   = min_t(u64, epfn, reg_epfn);
+
if (tstart < tend) {
fadump_free_reserved_memory(tstart, tend);
 
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index c2c11eb8dcfc..38d1acd7c8ef 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -192,15 +192,16 @@ void __init initmem_init(void)
 /* mark pages that don't exist as nosave */
 static int __init mark_nonram_nosave(void)
 {
-   struct memblock_region *reg, *prev = NULL;
+   unsigned long spfn, epfn, prev = 0;
+   int i;
 
-   for_each_memblock(memory, reg) {
-   if (prev &&
-   memblock_region_memory_end_pfn(prev) < 

[PATCH 11/15] memblock: reduce number of parameters in for_each_mem_range()

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

Currently the for_each_mem_range() iterator is the most generic way to
traverse memblock regions. As such, it has 8 parameters and is hardly
convenient for users. Most users choose to utilize one of its wrappers,
and the only user that actually needs most of the parameters outside
memblock is the s390 crash dump implementation.

To avoid yet another naming for memblock iterators, rename the existing
for_each_mem_range() to __for_each_mem_range() and add a new
for_each_mem_range() wrapper with only index, start and end parameters.

The new wrapper nicely fits into init_unavailable_mem() and will be used in
upcoming changes to simplify memblock traversals.

Signed-off-by: Mike Rapoport 
---
 .clang-format  |  1 +
 arch/arm64/kernel/machine_kexec_file.c |  6 ++
 arch/s390/kernel/crash_dump.c  |  8 
 include/linux/memblock.h   | 18 ++
 mm/page_alloc.c|  3 +--
 5 files changed, 22 insertions(+), 14 deletions(-)

diff --git a/.clang-format b/.clang-format
index a0a96088c74f..52ededab25ce 100644
--- a/.clang-format
+++ b/.clang-format
@@ -205,6 +205,7 @@ ForEachMacros:
   - 'for_each_memblock_type'
   - 'for_each_memcg_cache_index'
   - 'for_each_mem_pfn_range'
+  - '__for_each_mem_range'
   - 'for_each_mem_range'
   - 'for_each_mem_range_rev'
   - 'for_each_migratetype_order'
diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
index 361a1143e09e..5b0e67b93cdc 100644
--- a/arch/arm64/kernel/machine_kexec_file.c
+++ b/arch/arm64/kernel/machine_kexec_file.c
@@ -215,8 +215,7 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
phys_addr_t start, end;
 
nr_ranges = 1; /* for exclusion of crashkernel region */
-   for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
-   MEMBLOCK_NONE, &start, &end, NULL)
+   for_each_mem_range(i, &start, &end)
nr_ranges++;
 
cmem = kmalloc(struct_size(cmem, ranges, nr_ranges), GFP_KERNEL);
@@ -225,8 +224,7 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
 
cmem->max_nr_ranges = nr_ranges;
cmem->nr_ranges = 0;
-   for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
-   MEMBLOCK_NONE, &start, &end, NULL) {
+   for_each_mem_range(i, &start, &end) {
cmem->ranges[cmem->nr_ranges].start = start;
cmem->ranges[cmem->nr_ranges].end = end - 1;
cmem->nr_ranges++;
diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
index f96a5857bbfd..e28085c725ff 100644
--- a/arch/s390/kernel/crash_dump.c
+++ b/arch/s390/kernel/crash_dump.c
@@ -549,8 +549,8 @@ static int get_mem_chunk_cnt(void)
int cnt = 0;
u64 idx;
 
-   for_each_mem_range(idx, &memblock.physmem, &oldmem_type, NUMA_NO_NODE,
-  MEMBLOCK_NONE, NULL, NULL, NULL)
+   __for_each_mem_range(idx, &memblock.physmem, &oldmem_type, NUMA_NO_NODE,
+MEMBLOCK_NONE, NULL, NULL, NULL)
cnt++;
return cnt;
 }
@@ -563,8 +563,8 @@ static void loads_init(Elf64_Phdr *phdr, u64 loads_offset)
phys_addr_t start, end;
u64 idx;
 
-   for_each_mem_range(idx, &memblock.physmem, &oldmem_type, NUMA_NO_NODE,
-  MEMBLOCK_NONE, &start, &end, NULL) {
+   __for_each_mem_range(idx, &memblock.physmem, &oldmem_type, NUMA_NO_NODE,
+MEMBLOCK_NONE, &start, &end, NULL) {
phdr->p_filesz = end - start;
phdr->p_type = PT_LOAD;
phdr->p_offset = start;
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index e6a23b3db696..d70c2835e913 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -142,7 +142,7 @@ void __next_reserved_mem_region(u64 *idx, phys_addr_t *out_start,
 void __memblock_free_late(phys_addr_t base, phys_addr_t size);
 
 /**
- * for_each_mem_range - iterate through memblock areas from type_a and not
+ * __for_each_mem_range - iterate through memblock areas from type_a and not
  * included in type_b. Or just type_a if type_b is NULL.
  * @i: u64 used as loop variable
  * @type_a: ptr to memblock_type to iterate
@@ -153,7 +153,7 @@ void __memblock_free_late(phys_addr_t base, phys_addr_t size);
  * @p_end: ptr to phys_addr_t for end address of the range, can be %NULL
  * @p_nid: ptr to int for nid of the range, can be %NULL
  */
-#define for_each_mem_range(i, type_a, type_b, nid, flags,  \
+#define __for_each_mem_range(i, type_a, type_b, nid, flags,\
   p_start, p_end, p_nid)   \
	for (i = 0, __next_mem_range(&i, nid, flags, type_a, type_b,\
 p_start, p_end, p_nid);\
@@ -182,6 +182,16 @@ void __memblock_free_late(phys_addr_t base, phys_addr_t size);
 __next_mem_range_rev(&i, nid, flags, type_a, type_b,   \
  p_start, p_end, p_nid))
 

[PATCH 07/15] riscv: drop unneeded node initialization

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

RISC-V does not (yet) support NUMA, and for UMA architectures node 0 is
used implicitly during early memory initialization.

There is no need to call memblock_set_node(), remove this call and the
surrounding code.

Signed-off-by: Mike Rapoport 
---
 arch/riscv/mm/init.c | 9 -
 1 file changed, 9 deletions(-)

diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 79e9d55bdf1a..7440ba2cdaaa 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -191,15 +191,6 @@ void __init setup_bootmem(void)
early_init_fdt_scan_reserved_mem();
memblock_allow_resize();
memblock_dump_all();
-
-   for_each_memblock(memory, reg) {
-   unsigned long start_pfn = memblock_region_memory_base_pfn(reg);
-   unsigned long end_pfn = memblock_region_memory_end_pfn(reg);
-
-   memblock_set_node(PFN_PHYS(start_pfn),
- PFN_PHYS(end_pfn - start_pfn),
- &memblock.memory, 0);
-   }
 }
 
 #ifdef CONFIG_MMU
-- 
2.26.2



[PATCH 09/15] memblock: make for_each_memblock_type() iterator private

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

for_each_memblock_type() is not used outside mm/memblock.c, so move it
there from include/linux/memblock.h.

Signed-off-by: Mike Rapoport 
---
 include/linux/memblock.h | 5 -
 mm/memblock.c| 5 +
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 017fae833d4a..220b5f0dad42 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -532,11 +532,6 @@ static inline unsigned long memblock_region_reserved_end_pfn(const struct memblo
 region < (memblock.memblock_type.regions + memblock.memblock_type.cnt);\
 region++)
 
-#define for_each_memblock_type(i, memblock_type, rgn)  \
-   for (i = 0, rgn = &memblock_type->regions[0];   \
-i < memblock_type->cnt;\
-i++, rgn = &memblock_type->regions[i])
-
 extern void *alloc_large_system_hash(const char *tablename,
 unsigned long bucketsize,
 unsigned long numentries,
diff --git a/mm/memblock.c b/mm/memblock.c
index 39aceafc57f6..a5b9b3df81fc 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -129,6 +129,11 @@ struct memblock memblock __initdata_memblock = {
.current_limit  = MEMBLOCK_ALLOC_ANYWHERE,
 };
 
+#define for_each_memblock_type(i, memblock_type, rgn)  \
+   for (i = 0, rgn = &memblock_type->regions[0];   \
+i < memblock_type->cnt;\
+i++, rgn = &memblock_type->regions[i])
+
 int memblock_debug __initdata_memblock;
 static bool system_has_some_mirror __initdata_memblock = false;
 static int memblock_can_resize __initdata_memblock;
-- 
2.26.2



[PATCH 10/15] memblock: make memblock_debug and related functionality private

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

The only user of memblock_dbg() outside memblock was the s390 setup code,
which is converted to use pr_debug() instead.
This allows us to stop exposing memblock_debug and memblock_dbg() to the
rest of the kernel.

Signed-off-by: Mike Rapoport 
---
 arch/s390/kernel/setup.c |  4 ++--
 include/linux/memblock.h | 12 +---
 mm/memblock.c| 13 +++--
 3 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
index 07aa15ba43b3..8b284cf6e199 100644
--- a/arch/s390/kernel/setup.c
+++ b/arch/s390/kernel/setup.c
@@ -776,8 +776,8 @@ static void __init memblock_add_mem_detect_info(void)
unsigned long start, end;
int i;
 
-   memblock_dbg("physmem info source: %s (%hhd)\n",
-get_mem_info_source(), mem_detect.info_source);
+   pr_debug("physmem info source: %s (%hhd)\n",
+get_mem_info_source(), mem_detect.info_source);
/* keep memblock lists close to the kernel */
memblock_set_bottom_up(true);
	for_each_mem_detect_block(i, &start, &end) {
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 220b5f0dad42..e6a23b3db696 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -90,7 +90,6 @@ struct memblock {
 };
 
 extern struct memblock memblock;
-extern int memblock_debug;
 
 #ifndef CONFIG_ARCH_KEEP_MEMBLOCK
 #define __init_memblock __meminit
@@ -102,9 +101,6 @@ void memblock_discard(void);
 static inline void memblock_discard(void) {}
 #endif
 
-#define memblock_dbg(fmt, ...) \
-   if (memblock_debug) printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
-
 phys_addr_t memblock_find_in_range(phys_addr_t start, phys_addr_t end,
   phys_addr_t size, phys_addr_t align);
 void memblock_allow_resize(void);
@@ -456,13 +452,7 @@ bool memblock_is_region_memory(phys_addr_t base, phys_addr_t size);
 bool memblock_is_reserved(phys_addr_t addr);
 bool memblock_is_region_reserved(phys_addr_t base, phys_addr_t size);
 
-extern void __memblock_dump_all(void);
-
-static inline void memblock_dump_all(void)
-{
-   if (memblock_debug)
-   __memblock_dump_all();
-}
+void memblock_dump_all(void);
 
 /**
  * memblock_set_current_limit - Set the current allocation limit to allow
diff --git a/mm/memblock.c b/mm/memblock.c
index a5b9b3df81fc..824938849f6d 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -134,7 +134,10 @@ struct memblock memblock __initdata_memblock = {
 i < memblock_type->cnt;\
 i++, rgn = &memblock_type->regions[i])
 
-int memblock_debug __initdata_memblock;
+#define memblock_dbg(fmt, ...) \
+   if (memblock_debug) printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
+
+static int memblock_debug __initdata_memblock;
 static bool system_has_some_mirror __initdata_memblock = false;
 static int memblock_can_resize __initdata_memblock;
 static int memblock_memory_in_slab __initdata_memblock = 0;
@@ -1919,7 +1922,7 @@ static void __init_memblock memblock_dump(struct memblock_type *type)
}
 }
 
-void __init_memblock __memblock_dump_all(void)
+static void __init_memblock __memblock_dump_all(void)
 {
pr_info("MEMBLOCK configuration:\n");
pr_info(" memory size = %pa reserved size = %pa\n",
@@ -1933,6 +1936,12 @@ void __init_memblock __memblock_dump_all(void)
 #endif
 }
 
+void __init_memblock memblock_dump_all(void)
+{
+   if (memblock_debug)
+   __memblock_dump_all();
+}
+
 void __init memblock_allow_resize(void)
 {
memblock_can_resize = 1;
-- 
2.26.2



[PATCH 05/15] h8300, nds32, openrisc: simplify detection of memory extents

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

Instead of traversing memblock.memory regions to find memory_start and
memory_end, simply query memblock_{start,end}_of_DRAM().

Signed-off-by: Mike Rapoport 
---
 arch/h8300/kernel/setup.c| 8 +++-
 arch/nds32/kernel/setup.c| 8 ++--
 arch/openrisc/kernel/setup.c | 9 ++---
 3 files changed, 7 insertions(+), 18 deletions(-)

diff --git a/arch/h8300/kernel/setup.c b/arch/h8300/kernel/setup.c
index 28ac88358a89..0281f92eea3d 100644
--- a/arch/h8300/kernel/setup.c
+++ b/arch/h8300/kernel/setup.c
@@ -74,17 +74,15 @@ static void __init bootmem_init(void)
memory_end = memory_start = 0;
 
/* Find main memory where is the kernel */
-   for_each_memblock(memory, region) {
-   memory_start = region->base;
-   memory_end = region->base + region->size;
-   }
+   memory_start = memblock_start_of_DRAM();
+   memory_end = memblock_end_of_DRAM();
 
if (!memory_end)
panic("No memory!");
 
	/* setup bootmem globals (we use no_bootmem, but mm still depends on this) */
min_low_pfn = PFN_UP(memory_start);
-   max_low_pfn = PFN_DOWN(memblock_end_of_DRAM());
+   max_low_pfn = PFN_DOWN(memory_end);
max_pfn = max_low_pfn;
 
memblock_reserve(__pa(_stext), _end - _stext);
diff --git a/arch/nds32/kernel/setup.c b/arch/nds32/kernel/setup.c
index a066efbe53c0..c356e484dcab 100644
--- a/arch/nds32/kernel/setup.c
+++ b/arch/nds32/kernel/setup.c
@@ -249,12 +249,8 @@ static void __init setup_memory(void)
memory_end = memory_start = 0;
 
/* Find main memory where is the kernel */
-   for_each_memblock(memory, region) {
-   memory_start = region->base;
-   memory_end = region->base + region->size;
-   pr_info("%s: Memory: 0x%x-0x%x\n", __func__,
-   memory_start, memory_end);
-   }
+   memory_start = memblock_start_of_DRAM();
+   memory_end = memblock_end_of_DRAM();
 
if (!memory_end) {
panic("No memory!");
diff --git a/arch/openrisc/kernel/setup.c b/arch/openrisc/kernel/setup.c
index 8aa438e1f51f..c5706153d3b6 100644
--- a/arch/openrisc/kernel/setup.c
+++ b/arch/openrisc/kernel/setup.c
@@ -48,17 +48,12 @@ static void __init setup_memory(void)
unsigned long ram_start_pfn;
unsigned long ram_end_pfn;
phys_addr_t memory_start, memory_end;
-   struct memblock_region *region;
 
memory_end = memory_start = 0;
 
/* Find main memory where is the kernel, we assume its the only one */
-   for_each_memblock(memory, region) {
-   memory_start = region->base;
-   memory_end = region->base + region->size;
-   printk(KERN_INFO "%s: Memory: 0x%x-0x%x\n", __func__,
-  memory_start, memory_end);
-   }
+   memory_start = memblock_start_of_DRAM();
+   memory_end = memblock_end_of_DRAM();
 
if (!memory_end) {
panic("No memory!");
-- 
2.26.2



[PATCH 06/15] powerpc: fadump: simplify fadump_reserve_crash_area()

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

fadump_reserve_crash_area() reserves memory from a specified base address
to the end of RAM.

Replace the iteration through memblock.memory with a single call to
memblock_reserve() with an appropriate base and size that will take care
of proper memory reservation.

Signed-off-by: Mike Rapoport 
---
 arch/powerpc/kernel/fadump.c | 20 +---
 1 file changed, 1 insertion(+), 19 deletions(-)

diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index 78ab9a6ee6ac..2446a61e3c25 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -1658,25 +1658,7 @@ int __init fadump_reserve_mem(void)
 /* Preserve everything above the base address */
 static void __init fadump_reserve_crash_area(u64 base)
 {
-   struct memblock_region *reg;
-   u64 mstart, msize;
-
-   for_each_memblock(memory, reg) {
-   mstart = reg->base;
-   msize  = reg->size;
-
-   if ((mstart + msize) < base)
-   continue;
-
-   if (mstart < base) {
-   msize -= (base - mstart);
-   mstart = base;
-   }
-
-   pr_info("Reserving %lluMB of memory at %#016llx for preserving crash data",
-   (msize >> 20), mstart);
-   memblock_reserve(mstart, msize);
-   }
+   memblock_reserve(base, memblock_end_of_DRAM() - base);
 }
 
 unsigned long __init arch_reserved_kernel_pages(void)
-- 
2.26.2



Re: [PATCH] MAINTAINERS: adjust kprobes.rst entry to new location

2020-07-27 Thread Masami Hiramatsu
On Sun, 26 Jul 2020 07:58:43 +0200
Lukas Bulwahn  wrote:

> Commit 2165b82fde82 ("docs: Move kprobes.rst from staging/ to trace/")
> moved kprobes.rst, but missed to adjust the MAINTAINERS entry.
> 
> Hence, ./scripts/get_maintainer.pl --self-test=patterns complains:
> 
>   warning: no file matches  F:  Documentation/staging/kprobes.rst
> 
> Adjust the entry to the new file location.
> 

Good catch!

Acked-by: Masami Hiramatsu 

Thanks!

> Signed-off-by: Lukas Bulwahn 
> ---
> Naveen, Masami-san, please ack.
> Jonathan, please pick this minor non-urgent patch into docs-next.
> 
> applies cleanly on next-20200724
> 
>  MAINTAINERS | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 960f7d43f9d7..416fc4555834 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -9676,7 +9676,7 @@ M:  Anil S Keshavamurthy 
> 
>  M:   "David S. Miller" 
>  M:   Masami Hiramatsu 
>  S:   Maintained
> -F:   Documentation/staging/kprobes.rst
> +F:   Documentation/trace/kprobes.rst
>  F:   include/asm-generic/kprobes.h
>  F:   include/linux/kprobes.h
>  F:   kernel/kprobes.c
> -- 
> 2.17.1
> 


-- 
Masami Hiramatsu 


[PATCH 01/15] KVM: PPC: Book3S HV: simplify kvm_cma_reserve()

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

The memory size calculation in kvm_cma_reserve() traverses memblock.memory
rather than simply calling memblock_phys_mem_size(). The comment in that
function suggests that at some point there should have been a call to
memblock_analyze() before memblock_phys_mem_size() could be used.
As of now, there is no memblock_analyze() at all and
memblock_phys_mem_size() can be used as soon as cold-plug memory is
registered with memblock.

Replace the loop over memblock.memory with a call to memblock_phys_mem_size().

Signed-off-by: Mike Rapoport 
---
 arch/powerpc/kvm/book3s_hv_builtin.c | 11 ++-
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
index 7cd3cf3d366b..56ab0d28de2a 100644
--- a/arch/powerpc/kvm/book3s_hv_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_builtin.c
@@ -95,22 +95,15 @@ EXPORT_SYMBOL_GPL(kvm_free_hpt_cma);
 void __init kvm_cma_reserve(void)
 {
unsigned long align_size;
-   struct memblock_region *reg;
-   phys_addr_t selected_size = 0;
+   phys_addr_t selected_size;
 
/*
 * We need CMA reservation only when we are in HV mode
 */
if (!cpu_has_feature(CPU_FTR_HVMODE))
return;
-   /*
-* We cannot use memblock_phys_mem_size() here, because
-* memblock_analyze() has not been called yet.
-*/
-   for_each_memblock(memory, reg)
-   selected_size += memblock_region_memory_end_pfn(reg) -
-memblock_region_memory_base_pfn(reg);
 
+   selected_size = PHYS_PFN(memblock_phys_mem_size());
	selected_size = (selected_size * kvm_cma_resv_ratio / 100) << PAGE_SHIFT;
if (selected_size) {
pr_debug("%s: reserving %ld MiB for global area\n", __func__,
-- 
2.26.2



[PATCH 04/15] arm64: numa: simplify dummy_numa_init()

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

dummy_numa_init() loops over memblock.memory and passes nid=0 to
numa_add_memblk() which essentially wraps memblock_set_node(). However,
memblock_set_node() can cope with the entire memory span itself, so the
loop over memblock.memory regions is redundant.

Replace the loop with a single call to memblock_set_node() to the entire
memory.

Signed-off-by: Mike Rapoport 
---
 arch/arm64/mm/numa.c | 13 +
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
index aafcee3e3f7e..0cbdbcc885fb 100644
--- a/arch/arm64/mm/numa.c
+++ b/arch/arm64/mm/numa.c
@@ -423,19 +423,16 @@ static int __init numa_init(int (*init_func)(void))
  */
 static int __init dummy_numa_init(void)
 {
+   phys_addr_t start = memblock_start_of_DRAM();
+   phys_addr_t end = memblock_end_of_DRAM();
int ret;
-   struct memblock_region *mblk;
 
if (numa_off)
pr_info("NUMA disabled\n"); /* Forced off on command line. */
-   pr_info("Faking a node at [mem %#018Lx-%#018Lx]\n",
-   memblock_start_of_DRAM(), memblock_end_of_DRAM() - 1);
-
-   for_each_memblock(memory, mblk) {
-   ret = numa_add_memblk(0, mblk->base, mblk->base + mblk->size);
-   if (!ret)
-   continue;
+   pr_info("Faking a node at [mem %#018Lx-%#018Lx]\n", start, end - 1);
 
+   ret = numa_add_memblk(0, start, end);
+   if (ret) {
pr_err("NUMA init failed\n");
return ret;
}
-- 
2.26.2



[PATCH 03/15] arm, xtensa: simplify initialization of high memory pages

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

The function free_highpages() in both arm and xtensa essentially open-codes
a for_each_free_mem_range() loop to detect high memory pages that were not
reserved and that should be initialized and passed to the buddy allocator.

Replace the open-coded implementation of for_each_free_mem_range() with
the memblock API to simplify the code.

Signed-off-by: Mike Rapoport 
---
 arch/arm/mm/init.c| 48 +++--
 arch/xtensa/mm/init.c | 55 ---
 2 files changed, 18 insertions(+), 85 deletions(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 01e18e43b174..626af348eb8f 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -352,61 +352,29 @@ static void __init free_unused_memmap(void)
 #endif
 }
 
-#ifdef CONFIG_HIGHMEM
-static inline void free_area_high(unsigned long pfn, unsigned long end)
-{
-   for (; pfn < end; pfn++)
-   free_highmem_page(pfn_to_page(pfn));
-}
-#endif
-
 static void __init free_highpages(void)
 {
 #ifdef CONFIG_HIGHMEM
unsigned long max_low = max_low_pfn;
-   struct memblock_region *mem, *res;
+   phys_addr_t range_start, range_end;
+   u64 i;
 
/* set highmem page free */
-   for_each_memblock(memory, mem) {
-   unsigned long start = memblock_region_memory_base_pfn(mem);
-   unsigned long end = memblock_region_memory_end_pfn(mem);
+   for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
+   &range_start, &range_end, NULL) {
+   unsigned long start = PHYS_PFN(range_start);
+   unsigned long end = PHYS_PFN(range_end);
 
/* Ignore complete lowmem entries */
if (end <= max_low)
continue;
 
-   if (memblock_is_nomap(mem))
-   continue;
-
/* Truncate partial highmem entries */
if (start < max_low)
start = max_low;
 
-   /* Find and exclude any reserved regions */
-   for_each_memblock(reserved, res) {
-   unsigned long res_start, res_end;
-
-   res_start = memblock_region_reserved_base_pfn(res);
-   res_end = memblock_region_reserved_end_pfn(res);
-
-   if (res_end < start)
-   continue;
-   if (res_start < start)
-   res_start = start;
-   if (res_start > end)
-   res_start = end;
-   if (res_end > end)
-   res_end = end;
-   if (res_start != start)
-   free_area_high(start, res_start);
-   start = res_end;
-   if (start == end)
-   break;
-   }
-
-   /* And now free anything which remains */
-   if (start < end)
-   free_area_high(start, end);
+   for (; start < end; start++)
+   free_highmem_page(pfn_to_page(start));
}
 #endif
 }
diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index a05b306cf371..ad9d59d93f39 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -79,67 +79,32 @@ void __init zones_init(void)
free_area_init(max_zone_pfn);
 }
 
-#ifdef CONFIG_HIGHMEM
-static void __init free_area_high(unsigned long pfn, unsigned long end)
-{
-   for (; pfn < end; pfn++)
-   free_highmem_page(pfn_to_page(pfn));
-}
-
 static void __init free_highpages(void)
 {
+#ifdef CONFIG_HIGHMEM
unsigned long max_low = max_low_pfn;
-   struct memblock_region *mem, *res;
+   phys_addr_t range_start, range_end;
+   u64 i;
 
-   reset_all_zones_managed_pages();
/* set highmem page free */
-   for_each_memblock(memory, mem) {
-   unsigned long start = memblock_region_memory_base_pfn(mem);
-   unsigned long end = memblock_region_memory_end_pfn(mem);
+   for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
+   &range_start, &range_end, NULL) {
+   unsigned long start = PHYS_PFN(range_start);
+   unsigned long end = PHYS_PFN(range_end);
 
/* Ignore complete lowmem entries */
if (end <= max_low)
continue;
 
-   if (memblock_is_nomap(mem))
-   continue;
-
/* Truncate partial highmem entries */
if (start < max_low)
start = max_low;
 
-   /* Find and exclude any reserved regions */
-   for_each_memblock(reserved, res) {
-   unsigned long res_start, res_end;
-
-   res_start = memblock_region_reserved_base_pfn(res);
-  

[PATCH 02/15] dma-contiguous: simplify cma_early_percent_memory()

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

The memory size calculation in cma_early_percent_memory() traverses
memblock.memory rather than simply calling memblock_phys_mem_size(). The
comment in that function suggests that at some point there should have been
a call to memblock_analyze() before memblock_phys_mem_size() could be used.
As of now, there is no memblock_analyze() at all and
memblock_phys_mem_size() can be used as soon as cold-plug memory is
registered with memblock.

Replace the loop over memblock.memory with a call to memblock_phys_mem_size().

Signed-off-by: Mike Rapoport 
---
 kernel/dma/contiguous.c | 11 +--
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index 15bc5026c485..1992afd8ca7b 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -73,16 +73,7 @@ early_param("cma", early_cma);
 
 static phys_addr_t __init __maybe_unused cma_early_percent_memory(void)
 {
-   struct memblock_region *reg;
-   unsigned long total_pages = 0;
-
-   /*
-* We cannot use memblock_phys_mem_size() here, because
-* memblock_analyze() has not been called yet.
-*/
-   for_each_memblock(memory, reg)
-   total_pages += memblock_region_memory_end_pfn(reg) -
-  memblock_region_memory_base_pfn(reg);
+   unsigned long total_pages = PHYS_PFN(memblock_phys_mem_size());
 
return (total_pages * CONFIG_CMA_SIZE_PERCENTAGE / 100) << PAGE_SHIFT;
 }
-- 
2.26.2



[PATCH 00/15] memblock: seasonal cleaning^w cleanup

2020-07-27 Thread Mike Rapoport
From: Mike Rapoport 

Hi,

These patches simplify several uses of memblock iterators and hide some of
the memblock implementation details from the rest of the system.

The patches are on top of v5.8-rc7 + cherry-pick of "mm/sparse: cleanup the
code surrounding memory_present()" [1] from mmotm tree.

[1] http://lkml.kernel.org/r/20200712083130.22919-1-r...@kernel.org 

Mike Rapoport (15):
  KVM: PPC: Book3S HV: simplify kvm_cma_reserve()
  dma-contiguous: simplify cma_early_percent_memory()
  arm, xtensa: simplify initialization of high memory pages
  arm64: numa: simplify dummy_numa_init()
  h8300, nds32, openrisc: simplify detection of memory extents
  powerpc: fadump: simplify fadump_reserve_crash_area()
  riscv: drop unneeded node initialization
  microblaze: drop unneeded NUMA and sparsemem initializations
  memblock: make for_each_memblock_type() iterator private
  memblock: make memblock_debug and related functionality private
  memblock: reduce number of parameters in for_each_mem_range()
  arch, mm: replace for_each_memblock() with for_each_mem_pfn_range()
  arch, drivers: replace for_each_membock() with for_each_mem_range()
  x86/numa: remove redundant iteration over memblock.reserved
  memblock: remove 'type' parameter from for_each_memblock()

 .clang-format|  1 +
 arch/arm/kernel/setup.c  | 18 +---
 arch/arm/mm/init.c   | 59 +---
 arch/arm/mm/mmu.c| 39 ++--
 arch/arm/mm/pmsa-v7.c| 20 
 arch/arm/mm/pmsa-v8.c| 17 ---
 arch/arm/xen/mm.c|  7 +--
 arch/arm64/kernel/machine_kexec_file.c   |  6 +--
 arch/arm64/kernel/setup.c|  2 +-
 arch/arm64/mm/init.c | 11 ++---
 arch/arm64/mm/kasan_init.c   |  8 ++--
 arch/arm64/mm/mmu.c  | 11 ++---
 arch/arm64/mm/numa.c | 15 +++---
 arch/c6x/kernel/setup.c  |  9 ++--
 arch/h8300/kernel/setup.c|  8 ++--
 arch/microblaze/mm/init.c| 24 ++
 arch/mips/cavium-octeon/dma-octeon.c | 12 ++---
 arch/mips/kernel/setup.c | 31 ++---
 arch/mips/netlogic/xlp/setup.c   |  2 +-
 arch/nds32/kernel/setup.c|  8 +---
 arch/openrisc/kernel/setup.c |  9 +---
 arch/openrisc/mm/init.c  |  8 ++--
 arch/powerpc/kernel/fadump.c | 58 ---
 arch/powerpc/kvm/book3s_hv_builtin.c | 11 +
 arch/powerpc/mm/book3s64/hash_utils.c| 16 +++
 arch/powerpc/mm/book3s64/radix_pgtable.c | 11 ++---
 arch/powerpc/mm/kasan/kasan_init_32.c|  8 ++--
 arch/powerpc/mm/mem.c| 33 +++--
 arch/powerpc/mm/numa.c   |  7 +--
 arch/powerpc/mm/pgtable_32.c |  8 ++--
 arch/riscv/mm/init.c | 33 -
 arch/riscv/mm/kasan_init.c   | 10 ++--
 arch/s390/kernel/crash_dump.c|  8 ++--
 arch/s390/kernel/setup.c | 31 -
 arch/s390/mm/page-states.c   |  6 +--
 arch/s390/mm/vmem.c  | 16 ---
 arch/sh/mm/init.c|  9 ++--
 arch/sparc/mm/init_64.c  | 12 ++---
 arch/x86/mm/numa.c   | 26 ---
 arch/xtensa/mm/init.c| 55 --
 drivers/bus/mvebu-mbus.c | 12 ++---
 drivers/s390/char/zcore.c|  9 ++--
 include/linux/memblock.h | 45 +-
 kernel/dma/contiguous.c  | 11 +
 mm/memblock.c| 28 +++
 mm/page_alloc.c  | 11 ++---
 mm/sparse.c  | 10 ++--
 47 files changed, 324 insertions(+), 485 deletions(-)

-- 
2.26.2



Re: [PATCH v6 5/5] remoteproc: Add initial zynqmp R5 remoteproc driver

2020-07-27 Thread Michal Simek



On 28. 07. 20 0:59, Mathieu Poirier wrote:
> On Wed, Jul 15, 2020 at 08:33:17AM -0700, Ben Levinsky wrote:
>> R5 is included in Xilinx Zynq UltraScale MPSoC so by adding this
>> remoteproc driver, we can boot the R5 sub-system in different
>> configurations.
>>
>> Acked-by: Stefano Stabellini 
>> Acked-by: Ben Levinsky 
>> Reviewed-by: Radhey Shyam Pandey 
>> Signed-off-by: Ben Levinsky 
>> Signed-off-by: Wendy Liang 
>> Signed-off-by: Michal Simek 
>> Signed-off-by: Ed Mooring 
>> Signed-off-by: Jason Wu 
>> Tested-by: Ben Levinsky 
>> ---
>> v2:
>> - remove domain struct as per review from Mathieu
>> v3:
>> - add xilinx-related platform mgmt fn's instead of wrapping around
>>   function pointer in xilinx eemi ops struct
>> v4:
>> - add default values for enums
>> - fix formatting as per checkpatch.pl --strict. Note that 1 warning and
>>   1 check are still raised, as fixing each would result in that
>>   particular line going over 80 characters.
>> v5:
>> - parse_fw change from use of rproc_of_resm_mem_entry_init to 
>> rproc_mem_entry_init and use of alloc/release
>> - var's of type zynqmp_r5_pdata all have same local variable name
>> - use dev_dbg instead of dev_info
>> v6:
>> - adding memory carveouts is handled much more similarly. All mem
>>   carveouts are now described in reserved memory as needed. That is,
>>   TCM nodes are not coupled to remoteproc anymore. This is reflected in
>>   the remoteproc R5 driver and the device tree binding.
>> - remove mailbox from device tree binding as it is not necessary for elf
>>   loading
>> - use lockstep-mode property for configuring RPU
>> ---
>>  drivers/remoteproc/Kconfig|  10 +
>>  drivers/remoteproc/Makefile   |   1 +
>>  drivers/remoteproc/zynqmp_r5_remoteproc.c | 911 ++
>>  3 files changed, 922 insertions(+)
>>  create mode 100644 drivers/remoteproc/zynqmp_r5_remoteproc.c
>>
>> diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
>> index c4d1731295eb..342a7e668636 100644
>> --- a/drivers/remoteproc/Kconfig
>> +++ b/drivers/remoteproc/Kconfig
>> @@ -249,6 +249,16 @@ config STM32_RPROC
>>  
>>This can be either built-in or a loadable module.
>>  
>> +config ZYNQMP_R5_REMOTEPROC
>> +tristate "ZynqMP_R5 remoteproc support"
>> +depends on ARM64 && PM && ARCH_ZYNQMP
>> +select RPMSG_VIRTIO
>> +select MAILBOX
>> +select ZYNQMP_IPI_MBOX
>> +help
>> +  Say y here to support ZynqMP R5 remote processors via the remote
>> +  processor framework.
>> +
>>  endif # REMOTEPROC
>>  
>>  endmenu
>> diff --git a/drivers/remoteproc/Makefile b/drivers/remoteproc/Makefile
>> index e8b886e511f0..04d1c95d06d7 100644
>> --- a/drivers/remoteproc/Makefile
>> +++ b/drivers/remoteproc/Makefile
>> @@ -28,5 +28,6 @@ obj-$(CONFIG_QCOM_WCNSS_PIL)   += 
>> qcom_wcnss_pil.o
>>  qcom_wcnss_pil-y+= qcom_wcnss.o
>>  qcom_wcnss_pil-y+= qcom_wcnss_iris.o
>>  obj-$(CONFIG_ST_REMOTEPROC) += st_remoteproc.o
>> +obj-$(CONFIG_ZYNQMP_R5_REMOTEPROC)  += zynqmp_r5_remoteproc.o
>>  obj-$(CONFIG_ST_SLIM_REMOTEPROC)+= st_slim_rproc.o
>>  obj-$(CONFIG_STM32_RPROC)   += stm32_rproc.o
>> diff --git a/drivers/remoteproc/zynqmp_r5_remoteproc.c 
>> b/drivers/remoteproc/zynqmp_r5_remoteproc.c
>> new file mode 100644
>> index ..b600759e257e
>> --- /dev/null
>> +++ b/drivers/remoteproc/zynqmp_r5_remoteproc.c
>> @@ -0,0 +1,911 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Zynq R5 Remote Processor driver
>> + *
>> + * Copyright (C) 2019, 2020 Xilinx Inc. Ben Levinsky 
>> 
>> + * Copyright (C) 2015 - 2018 Xilinx Inc.
>> + * Copyright (C) 2015 Jason Wu 
>> + *
>> + * Based on the original OMAP and Zynq Remote Processor driver
>> + *
>> + * Copyright (C) 2012 Michal Simek 
>> + * Copyright (C) 2012 PetaLogix
>> + * Copyright (C) 2011 Texas Instruments, Inc.
>> + * Copyright (C) 2011 Google, Inc.
>> + */
>> +
>> +#include 
> 
> Unused
> 
>> +#include 
>> +#include 
>> +#include 
> 
> Unused
> 
>> +#include 
>> +#include 
>> +#include 
> 
> Unused
> 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
> 
> Unused
> 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +
>> +#include "remoteproc_internal.h"
>> +
>> +#define MAX_RPROCS  2 /* Support up to 2 RPU */
>> +#define MAX_MEM_PNODES  4 /* Max power nodes for one RPU memory 
>> instance */
>> +
>> +#define DEFAULT_FIRMWARE_NAME   "rproc-rpu-fw"
>> +
>> +/* PM proc states */
>> +#define PM_PROC_STATE_ACTIVE 1U
> 
> Unused
> 
>> +
>> +/* IPI buffer MAX length */
>> +#define IPI_BUF_LEN_MAX 32U
>> +/* RX mailbox client buffer max length */
>> +#define RX_MBOX_CLIENT_BUF_MAX  (IPI_BUF_LEN_MAX + \
>> + sizeof(struct zynqmp_ipi_message))
>> +
>> +#define 

Re: [Linux-kernel-mentees] [PATCH net] xdp: Prevent kernel-infoleak in xsk_getsockopt()

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 7:30 PM Peilin Ye  wrote:
>
> xsk_getsockopt() is copying uninitialized stack memory to userspace when
> `extra_stats` is `false`. Fix it by initializing `stats` with memset().
>
> Cc: sta...@vger.kernel.org

8aa5a33578e9 is not in stable branches yet, so we don't need to Cc stable.

> Fixes: 8aa5a33578e9 ("xsk: Add new statistics")
> Suggested-by: Dan Carpenter 
> Signed-off-by: Peilin Ye 
> ---
>  net/xdp/xsk.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 26e3bba8c204..acf001908a0d 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -844,6 +844,8 @@ static int xsk_getsockopt(struct socket *sock, int level, 
> int optname,
> bool extra_stats = true;
> size_t stats_size;
>
> +   memset(, 0, sizeof(stats));
> +

xsk.c doesn't include linux/string.h directly, so using memset may break
build for some config combinations. We can probably just use

struct xdp_statistics stats = {};

Thanks,
Song


> if (len < sizeof(struct xdp_statistics_v1)) {
> return -EINVAL;
> } else if (len < sizeof(stats)) {
> --
> 2.25.1
>


Re: [PATCH 1/4] dma-mapping: Add bounced DMA ops

2020-07-27 Thread Claire Chang
v2 that reuses SWIOTLB here: https://lore.kernel.org/patchwork/cover/1280705/

Thanks,
Claire


BUG: unable to handle kernel paging request in x86_pmu_event_init

2020-07-27 Thread syzbot
Hello,

syzbot found the following issue on:

HEAD commit:d15be546 Merge tag 'media/v5.8-3' of git://git.kernel.org/..
git tree:   upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=1638031710
kernel config:  https://syzkaller.appspot.com/x/.config?x=f3bc31881f1ae8a7
dashboard link: https://syzkaller.appspot.com/bug?extid=c01c3ea720877d228202
compiler:   clang version 10.0.0 (https://github.com/llvm/llvm-project/ 
c2443155a0fb245c8f17f2c1c72b6ea391e86e81)
syz repro:  https://syzkaller.appspot.com/x/repro.syz?x=125be09c90
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=115ce01f10

Bisection is inconclusive: the issue happens on the oldest tested release.

bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=122b822890
final oops: https://syzkaller.appspot.com/x/report.txt?x=112b822890
console output: https://syzkaller.appspot.com/x/log.txt?x=162b822890

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c01c3ea720877d228...@syzkaller.appspotmail.com

BUG: unable to handle page fault for address: ffe8
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 947c067 P4D 947c067 PUD 947e067 PMD 0 
Oops: 0002 [#1] PREEMPT SMP KASAN
CPU: 1 PID: 3889 Comm: systemd-udevd Not tainted 5.8.0-rc6-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 
01/01/2011
RIP: 0010:x86_pmu_initialized arch/x86/events/core.c:297 [inline]
RIP: 0010:__x86_pmu_event_init arch/x86/events/core.c:594 [inline]
RIP: 0010:x86_pmu_event_init+0x95/0xac0 arch/x86/events/core.c:2140
Code: e8 30 5a 73 00 48 83 3d d8 cc 8b 08 00 75 1e e8 21 5a 73 00 bd ed ff ff 
ff e9 2a 07 00 00 e8 12 5a 73 00 48 83 3d ba cc 8b 08 <00> 74 e2 e8 43 7b ff ff 
89 c5 31 ff 89 c6 e8 08 5e 73 00 85 ed 74
RSP: 0018:c90001577f10 EFLAGS: 00010097
RAX: 0286 RBX: 4100 RCX: 2920ecf39a0bcf00
RDX:  RSI: 0003 RDI: c90001577f58
RBP:  R08: 814dec5e R09: ed1012f42c69
R10: ed1012f42c69 R11:  R12: 
R13:  R14: c90001577f58 R15: 8880a1c40080
FS:  7f1b1fd358c0() GS:8880ae90() knlGS:
CS:  0010 DS:  ES:  CR0: 80050033
CR2: ffe8 CR3: a26b6000 CR4: 001406e0
DR0:  DR1:  DR2: 
DR3:  DR6: fffe0ff0 DR7: 0400
Call Trace:
Modules linked in:
CR2: ffe8
---[ end trace b1f79b2301ebd5ee ]---
RIP: 0010:x86_pmu_initialized arch/x86/events/core.c:297 [inline]
RIP: 0010:__x86_pmu_event_init arch/x86/events/core.c:594 [inline]
RIP: 0010:x86_pmu_event_init+0x95/0xac0 arch/x86/events/core.c:2140
Code: e8 30 5a 73 00 48 83 3d d8 cc 8b 08 00 75 1e e8 21 5a 73 00 bd ed ff ff 
ff e9 2a 07 00 00 e8 12 5a 73 00 48 83 3d ba cc 8b 08 <00> 74 e2 e8 43 7b ff ff 
89 c5 31 ff 89 c6 e8 08 5e 73 00 85 ed 74
RSP: 0018:c90001577f10 EFLAGS: 00010097
RAX: 0286 RBX: 4100 RCX: 2920ecf39a0bcf00
RDX:  RSI: 0003 RDI: c90001577f58
RBP:  R08: 814dec5e R09: ed1012f42c69
R10: ed1012f42c69 R11:  R12: 
R13:  R14: c90001577f58 R15: 8880a1c40080
FS:  7f1b1fd358c0() GS:8880ae90() knlGS:
CS:  0010 DS:  ES:  CR0: 80050033
CR2: ffe8 CR3: a26b6000 CR4: 001406e0
DR0:  DR1:  DR2: 
DR3:  DR6: fffe0ff0 DR7: 0400


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkal...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
For information about bisection process see: https://goo.gl/tpsmEJ#bisection
syzbot can test patches for this issue, for details see:
https://goo.gl/tpsmEJ#testing-patches


RE: [PATCH v1] platform/mellanox: mlxbf-pmc: Add Mellanox BlueField PMC driver

2020-07-27 Thread Shravan Ramani


> -Original Message-
> From: Andy Shevchenko 
> Sent: Monday, July 27, 2020 4:24 PM
> To: Shravan Ramani 
> Cc: Andy Shevchenko ; Darren Hart
> ; Vadim Pasternak ; Jiri Pirko
> ; Platform Driver ;
> Linux Kernel Mailing List 
> Subject: Re: [PATCH v1] platform/mellanox: mlxbf-pmc: Add Mellanox BlueField
> PMC driver
> 
> On Mon, Jul 27, 2020 at 12:02 PM Shravan Kumar Ramani
>  wrote:
> >
> > The performance modules in BlueField are present in several hardware
> > blocks and each block provides access to these stats either through
> > counters that can be programmed to monitor supported events or through
> > memory-mapped registers that hold the relevant information.
> > The hardware blocks that include a performance module are:
> >  * Tile (block containing 2 cores and a shared L2 cache)
> >  * TRIO (PCIe root complex)
> >  * MSS (Memory Sub-system containing the Memory Controller and L3
> > cache)
> >  * GIC (Interrupt controller)
> >  * SMMU (System Memory Management Unit) The mlx_pmc driver provides
> > access to all of these performance modules through a hwmon sysfs
> > interface.
> 
> Just brief comments:
> - consider to revisit header block to see what is really necessary and what 
> can
> be dropped
> - add comma to the arrays where last line is not a termination
> - look at match_string() / sysfs_match_string() API, I think they can be 
> utilised
> here
> - UUID manipulations (esp. with that GUID_INIT() against non-constant) seems
> too much, consider refactoring and cleaning up these pieces

Could you please elaborate on what approach you'd like me to take with the UUID 
manipulation?
I used the same approach as in drivers/platform/mellanox/mlxbf-bootctl.c which 
seemed like an appropriate example.
Any other pointers would be helpful.

Thanks for the feedback. Will address all the other comments in v2.

Regards,
Shravan

> - use kstrto*() API instead of sscanf. It has a range check
> 
> 
> --
> With Best Regards,
> Andy Shevchenko


[PATCH v7 6/8] scsi: ufs: Recover hba runtime PM error in error handler

2020-07-27 Thread Can Guo
The current error handler cannot work well or recover hba runtime PM
errors if ufshcd_suspend/resume has failed due to UFS errors, e.g. a
hibern8 enter/exit error or an SSU cmd error. When this happens, the error
handler may fail to do a full reset and restore, because it always assumes
that powers, IRQs and clocks are ready after pm_runtime_get_sync() returns,
but actually they are not if ufshcd_resume fails [1]. Besides, if
ufshcd_suspend/resume fails due to a UFS error, the runtime PM framework
saves the error value to dev.power.runtime_error. After that, hba dev
runtime suspend/resume would not be invoked anymore unless runtime_error
is cleared [2].

When ufshcd_suspend/resume fails due to UFS errors, for scenario [1] the
error handler cannot assume anything about pm_runtime_get_sync(), meaning
it should explicitly turn ON powers, IRQs and clocks again. To get hba
runtime PM working again for scenario [2], the error handler can clear
runtime_error by calling pm_runtime_set_active() if the full reset and
restore succeeds. More importantly, if pm_runtime_set_active() returns no
error, which means runtime_error has been cleared, we also need to resume
the scsi devices under the hba in case any of them failed to be resumed
due to an hba runtime resume failure. This is to unblock blk_queue_enter()
in case there are bios waiting inside it.

In addition, if ufshcd_resume errors out, ufshcd_release() in
ufshcd_resume would be skipped. After the hba runtime PM error is
recovered in the error handler, we should call ufshcd_release() once to
get clock gating back to work.

Signed-off-by: Can Guo 
---
 drivers/scsi/ufs/ufshcd.c | 91 +++
 drivers/scsi/ufs/ufshcd.h |  1 +
 2 files changed, 86 insertions(+), 6 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index c2d7a90..c480823 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include 
 #include "ufshcd.h"
 #include "ufs_quirks.h"
 #include "unipro.h"
@@ -229,6 +230,10 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba);
 static int ufshcd_change_power_mode(struct ufs_hba *hba,
 struct ufs_pa_layer_attr *pwr_mode);
 static void ufshcd_schedule_eh_work(struct ufs_hba *hba);
+static int ufshcd_setup_hba_vreg(struct ufs_hba *hba, bool on);
+static int ufshcd_setup_vreg(struct ufs_hba *hba, bool on);
+static inline int ufshcd_config_vreg_hpm(struct ufs_hba *hba,
+struct ufs_vreg *vreg);
 static int ufshcd_wb_buf_flush_enable(struct ufs_hba *hba);
 static int ufshcd_wb_buf_flush_disable(struct ufs_hba *hba);
 static int ufshcd_wb_ctrl(struct ufs_hba *hba, bool enable);
@@ -5553,6 +5558,78 @@ static inline void ufshcd_schedule_eh_work(struct 
ufs_hba *hba)
}
 }
 
+static void ufshcd_err_handling_prepare(struct ufs_hba *hba)
+{
+   pm_runtime_get_sync(hba->dev);
+   if (pm_runtime_status_suspended(hba->dev)) {
+   /*
+* Don't assume anything of pm_runtime_get_sync(), if
+* resume fails, irq and clocks can be OFF, and powers
+* can be OFF or in LPM.
+*/
+   ufshcd_setup_hba_vreg(hba, true);
+   ufshcd_setup_clocks(hba, true);
+   ufshcd_enable_irq(hba);
+   ufshcd_setup_vreg(hba, true);
+   ufshcd_config_vreg_hpm(hba, hba->vreg_info.vccq);
+   ufshcd_config_vreg_hpm(hba, hba->vreg_info.vccq2);
+   ufshcd_vops_resume(hba, UFS_RUNTIME_PM);
+   }
+   ufshcd_hold(hba, false);
+   if (ufshcd_is_clkscaling_supported(hba)) {
+   cancel_work_sync(>clk_scaling.suspend_work);
+   cancel_work_sync(>clk_scaling.resume_work);
+   ufshcd_suspend_clkscaling(hba);
+   }
+}
+
+static void ufshcd_err_handling_unprepare(struct ufs_hba *hba)
+{
+   /* If clk_gating is held by pm ops, release it */
+   if (pm_runtime_active(hba->dev) && hba->clk_gating.held_by_pm) {
+   hba->clk_gating.held_by_pm = false;
+   ufshcd_release(hba);
+   }
+   ufshcd_release(hba);
+   if (ufshcd_is_clkscaling_supported(hba))
+   ufshcd_resume_clkscaling(hba);
+   pm_runtime_put(hba->dev);
+}
+
+#ifdef CONFIG_PM
+static void ufshcd_recover_pm_error(struct ufs_hba *hba)
+{
+   struct Scsi_Host *shost = hba->host;
+   struct scsi_device *sdev;
+   struct request_queue *q;
+   int ret;
+
+   /*
+* Set RPM status of hba device to RPM_ACTIVE,
+* this also clears its runtime error.
+*/
+   ret = pm_runtime_set_active(hba->dev);
+   /*
+* If hba device had runtime error, we also need to resume those
+* scsi devices under hba in case any of them has failed to be
+* resumed due to hba runtime resume failure. This is to unblock
+* blk_queue_enter in case 

[RFC v2 5/5] of: Add plumbing for restricted DMA pool

2020-07-27 Thread Claire Chang
If a device is not behind an IOMMU, we look up the device node and set
up the restricted DMA when the restricted-dma property is present.
One can specify two reserved-memory nodes in the device tree. One with
shared-dma-pool to handle the coherent DMA buffer allocation, and
another one with device-swiotlb-pool for regular DMA to/from system
memory, which would be subject to bouncing.

Signed-off-by: Claire Chang 
---
 drivers/of/address.c| 39 +++
 drivers/of/device.c |  3 +++
 drivers/of/of_private.h |  6 ++
 3 files changed, 48 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index 381dc9be7b22..1285f914481f 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1009,6 +1010,44 @@ int of_dma_get_range(struct device_node *np, u64 
*dma_addr, u64 *paddr, u64 *siz
return ret;
 }
 
+int of_dma_set_restricted_buffer(struct device *dev)
+{
+   int length, size, ret, i;
+   u32 idx[2];
+
+   if (!dev || !dev->of_node)
+   return -EINVAL;
+
+   if (!of_get_property(dev->of_node, "restricted-dma", ))
+   return 0;
+
+   size = length / sizeof(idx[0]);
+   if (size > ARRAY_SIZE(idx)) {
+   dev_err(dev,
+   "restricted-dma expected less than or equal to %d 
indexes, but got %d\n",
+   ARRAY_SIZE(idx), size);
+   return -EINVAL;
+   }
+
+   ret = of_property_read_u32_array(dev->of_node, "restricted-dma", idx,
+size);
+   if (ret)
+   return ret;
+
+   for (i = 0; i < size; i++) {
+   ret = of_reserved_mem_device_init_by_idx(dev, dev->of_node,
+idx[i]);
+   if (ret) {
+   dev_err(dev,
+   "of_reserved_mem_device_init_by_idx() failed 
with %d\n",
+   ret);
+   return ret;
+   }
+   }
+
+   return 0;
+}
+
 /**
  * of_dma_is_coherent - Check if device is coherent
  * @np:device node
diff --git a/drivers/of/device.c b/drivers/of/device.c
index 27203bfd0b22..83d6cf8a8256 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -169,6 +169,9 @@ int of_dma_configure(struct device *dev, struct device_node 
*np, bool force_dma)
 
arch_setup_dma_ops(dev, dma_addr, size, iommu, coherent);
 
+   if (!iommu)
+   return of_dma_set_restricted_buffer(dev);
+
return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index edc682249c00..f2e3adfb7d85 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -160,12 +160,18 @@ extern int of_bus_n_size_cells(struct device_node *np);
 #ifdef CONFIG_OF_ADDRESS
 extern int of_dma_get_range(struct device_node *np, u64 *dma_addr,
u64 *paddr, u64 *size);
+extern int of_dma_set_restricted_buffer(struct device *dev);
 #else
 static inline int of_dma_get_range(struct device_node *np, u64 *dma_addr,
   u64 *paddr, u64 *size)
 {
return -ENODEV;
 }
+
+static inline int of_dma_get_restricted_buffer(struct device *dev)
+{
+   return -ENODEV;
+}
 #endif
 
 #endif /* _LINUX_OF_PRIVATE_H */
-- 
2.28.0.rc0.142.g3c755180ce-goog



[PATCH v7 2/8] ufs: ufs-qcom: Fix race conditions caused by func ufs_qcom_testbus_config

2020-07-27 Thread Can Guo
If ufs_qcom_dump_dbg_regs() calls ufs_qcom_testbus_config() from
ufshcd_suspend/resume and/or clk gate/ungate context, pm_runtime_get_sync()
and ufshcd_hold() will cause racing problems. Fix this by removing the
unnecessary calls to pm_runtime_get_sync() and ufshcd_hold().

Signed-off-by: Can Guo 
Reviewed-by: Hongwu Su 
Reviewed-by: Avri Altman 
---
 drivers/scsi/ufs/ufs-qcom.c | 5 -
 1 file changed, 5 deletions(-)

diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
index 2e6ddb5..7da27ee 100644
--- a/drivers/scsi/ufs/ufs-qcom.c
+++ b/drivers/scsi/ufs/ufs-qcom.c
@@ -1604,9 +1604,6 @@ int ufs_qcom_testbus_config(struct ufs_qcom_host *host)
 */
}
mask <<= offset;
-
-   pm_runtime_get_sync(host->hba->dev);
-   ufshcd_hold(host->hba, false);
ufshcd_rmwl(host->hba, TEST_BUS_SEL,
(u32)host->testbus.select_major << 19,
REG_UFS_CFG1);
@@ -1619,8 +1616,6 @@ int ufs_qcom_testbus_config(struct ufs_qcom_host *host)
 * committed before returning.
 */
mb();
-   ufshcd_release(host->hba);
-   pm_runtime_put_sync(host->hba->dev);
 
return 0;
 }
-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project.



[RFC v2 4/5] dt-bindings: of: Add plumbing for restricted DMA pool

2020-07-27 Thread Claire Chang
Introduce the new compatible string, device-swiotlb-pool, for restricted
DMA. One can specify the address and length of the device swiotlb memory
region via a device-swiotlb-pool node in the device tree.

Signed-off-by: Claire Chang 
---
 .../reserved-memory/reserved-memory.txt   | 35 +++
 1 file changed, 35 insertions(+)

diff --git 
a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt 
b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index 4dd20de6977f..78850896e1d0 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,24 @@ compatible (optional) - standard definition
   used as a shared pool of DMA buffers for a set of devices. It can
   be used by an operating system to instantiate the necessary pool
   management subsystem if necessary.
+- device-swiotlb-pool: This indicates a region of memory meant to be
+  used as a pool of device swiotlb buffers for a given device. When
+  using this, the no-map and reusable properties must not be set, so 
the
+  operating system can create a virtual mapping that will be used for
+  synchronization. Also, there must be a restricted-dma property in the
+  device node to specify the indexes of reserved-memory nodes. One can
+  specify two reserved-memory nodes in the device tree. One with
+  shared-dma-pool to handle the coherent DMA buffer allocation, and
+  another one with device-swiotlb-pool for regular DMA to/from system
+  memory, which would be subject to bouncing. The main purpose for
+  restricted DMA is to mitigate the lack of DMA access control on
+  systems without an IOMMU, which could result in the DMA accessing the
+  system memory at unexpected times and/or unexpected addresses,
+  possibly leading to data leakage or corruption. The feature on its 
own
+  provides a basic level of protection against the DMA overwriting 
buffer
+  contents at unexpected times. However, to protect against general 
data
+  leakage and system memory corruption, the system needs to provide a
+  way to restrict the DMA to a predefined memory region.
 - vendor specific string in the form ,[-]
 no-map (optional) - empty property
 - Indicates the operating system must not create a virtual mapping
@@ -117,6 +135,16 @@ one for multimedia processing (named 
multimedia-memory@7700, 64MiB).
compatible = "acme,multimedia-memory";
reg = <0x7700 0x400>;
};
+
+   wifi_coherent_mem_region: wifi_coherent_mem_region {
+   compatible = "shared-dma-pool";
+   reg = <0x5000 0x40>;
+   };
+
+   wifi_device_swiotlb_region: wifi_device_swiotlb_region {
+   compatible = "device-swiotlb-pool";
+   reg = <0x5040 0x400>;
+   };
};
 
/* ... */
@@ -135,4 +163,11 @@ one for multimedia processing (named 
multimedia-memory@7700, 64MiB).
memory-region = <_reserved>;
/* ... */
};
+
+   pcie_wifi: pcie_wifi@0,0 {
+   memory-region = <_coherent_mem_region>,
+<_device_swiotlb_region>;
+   restricted-dma = <0>, <1>;
+   /* ... */
+   };
 };
-- 
2.28.0.rc0.142.g3c755180ce-goog



[PATCH v7 3/8] scsi: ufs-qcom: Remove testbus dump in ufs_qcom_dump_dbg_regs

2020-07-27 Thread Can Guo
Dumping testbus registers is heavy enough to cause stability issues
sometimes, so just remove them for now.

Signed-off-by: Can Guo 
---
 drivers/scsi/ufs/ufs-qcom.c | 32 
 1 file changed, 32 deletions(-)

diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
index 7da27ee..96e0999 100644
--- a/drivers/scsi/ufs/ufs-qcom.c
+++ b/drivers/scsi/ufs/ufs-qcom.c
@@ -1620,44 +1620,12 @@ int ufs_qcom_testbus_config(struct ufs_qcom_host *host)
return 0;
 }
 
-static void ufs_qcom_testbus_read(struct ufs_hba *hba)
-{
-   ufshcd_dump_regs(hba, UFS_TEST_BUS, 4, "UFS_TEST_BUS ");
-}
-
-static void ufs_qcom_print_unipro_testbus(struct ufs_hba *hba)
-{
-   struct ufs_qcom_host *host = ufshcd_get_variant(hba);
-   u32 *testbus = NULL;
-   int i, nminor = 256, testbus_len = nminor * sizeof(u32);
-
-   testbus = kmalloc(testbus_len, GFP_KERNEL);
-   if (!testbus)
-   return;
-
-   host->testbus.select_major = TSTBUS_UNIPRO;
-   for (i = 0; i < nminor; i++) {
-   host->testbus.select_minor = i;
-   ufs_qcom_testbus_config(host);
-   testbus[i] = ufshcd_readl(hba, UFS_TEST_BUS);
-   }
-   print_hex_dump(KERN_ERR, "UNIPRO_TEST_BUS ", DUMP_PREFIX_OFFSET,
-   16, 4, testbus, testbus_len, false);
-   kfree(testbus);
-}
-
 static void ufs_qcom_dump_dbg_regs(struct ufs_hba *hba)
 {
ufshcd_dump_regs(hba, REG_UFS_SYS1CLK_1US, 16 * 4,
 "HCI Vendor Specific Registers ");
 
-   /* sleep a bit intermittently as we are dumping too much data */
ufs_qcom_print_hw_debug_reg_all(hba, NULL, ufs_qcom_dump_regs_wrapper);
-   udelay(1000);
-   ufs_qcom_testbus_read(hba);
-   udelay(1000);
-   ufs_qcom_print_unipro_testbus(hba);
-   udelay(1000);
 }
 
 /**
-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project.



[PATCH v7 8/8] scsi: ufs: Fix a racing problem btw error handler and runtime PM ops

2020-07-27 Thread Can Guo
The current IRQ handler blocks scsi requests before scheduling eh_work. If
ufshcd_suspend/resume sends a scsi cmd, most likely the SSU cmd, while the
error handler calls pm_runtime_get_sync(), pm_runtime_get_sync() will
never return: ufshcd_suspend/resume is blocked by the scsi cmd, which
cannot complete because scsi requests are blocked. Some changes and code
re-arrangement can be made to resolve this.

o In the queuecommand path, the hba->ufshcd_state check and
  ufshcd_send_command should stay inside the same spin lock. This makes
  sure that no more commands leak into the doorbell after
  hba->ufshcd_state is changed.
o Don't block scsi requests before scheduling eh_work; let the error
  handler block scsi requests when it is ready to start error recovery.
o Don't let the scsi layer keep requeuing the scsi cmds sent from hba
  runtime PM ops; either let them pass or fail them. Let them pass if
  eh_work is scheduled due to non-fatal errors. Fail them if eh_work is
  scheduled due to fatal errors, otherwise the cmds may eventually time
  out since UFS is in a bad state, which blocks the error handler for too
  long. If we fail the scsi cmds sent from hba runtime PM ops, the hba
  runtime PM ops fail too, but that does not hurt since the error handler
  can recover the hba runtime PM error.

Signed-off-by: Can Guo 
---
 drivers/scsi/ufs/ufshcd.c | 84 +++
 1 file changed, 49 insertions(+), 35 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index b2bafa3..9c8c43f 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -126,7 +126,8 @@ enum {
UFSHCD_STATE_RESET,
UFSHCD_STATE_ERROR,
UFSHCD_STATE_OPERATIONAL,
-   UFSHCD_STATE_EH_SCHEDULED,
+   UFSHCD_STATE_EH_SCHEDULED_FATAL,
+   UFSHCD_STATE_EH_SCHEDULED_NON_FATAL,
 };
 
 /* UFSHCD error handling flags */
@@ -2515,34 +2516,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, 
struct scsi_cmnd *cmd)
if (!down_read_trylock(>clk_scaling_lock))
return SCSI_MLQUEUE_HOST_BUSY;
 
-   spin_lock_irqsave(hba->host->host_lock, flags);
-   switch (hba->ufshcd_state) {
-   case UFSHCD_STATE_OPERATIONAL:
-   break;
-   case UFSHCD_STATE_EH_SCHEDULED:
-   case UFSHCD_STATE_RESET:
-   err = SCSI_MLQUEUE_HOST_BUSY;
-   goto out_unlock;
-   case UFSHCD_STATE_ERROR:
-   set_host_byte(cmd, DID_ERROR);
-   cmd->scsi_done(cmd);
-   goto out_unlock;
-   default:
-   dev_WARN_ONCE(hba->dev, 1, "%s: invalid state %d\n",
-   __func__, hba->ufshcd_state);
-   set_host_byte(cmd, DID_BAD_TARGET);
-   cmd->scsi_done(cmd);
-   goto out_unlock;
-   }
-
-   /* if error handling is in progress, don't issue commands */
-   if (ufshcd_eh_in_progress(hba)) {
-   set_host_byte(cmd, DID_ERROR);
-   cmd->scsi_done(cmd);
-   goto out_unlock;
-   }
-   spin_unlock_irqrestore(hba->host->host_lock, flags);
-
hba->req_abort_count = 0;
 
err = ufshcd_hold(hba, true);
@@ -2578,11 +2551,50 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, 
struct scsi_cmnd *cmd)
/* Make sure descriptors are ready before ringing the doorbell */
wmb();
 
-   /* issue command to the controller */
spin_lock_irqsave(hba->host->host_lock, flags);
+   switch (hba->ufshcd_state) {
+   case UFSHCD_STATE_OPERATIONAL:
+   case UFSHCD_STATE_EH_SCHEDULED_NON_FATAL:
+   break;
+   case UFSHCD_STATE_EH_SCHEDULED_FATAL:
+   /*
+* If we are here, eh_work is either scheduled or running.
+* Before eh_work sets ufshcd_state to STATE_RESET, it flushes
+* runtime PM ops by calling pm_runtime_get_sync(). If a scsi
+* cmd, e.g. the SSU cmd, is sent by PM ops, it can never be
+* finished if we let SCSI layer keep retrying it, which gets
+* eh_work stuck forever. Neither can we let it pass, because
+* ufs now is not in good status, so the SSU cmd may eventually
+* time out, blocking eh_work for too long. So just let it fail.
+*/
+   if (hba->pm_op_in_progress) {
+   hba->force_reset = true;
+   set_host_byte(cmd, DID_BAD_TARGET);
+   goto out_compl_cmd;
+   }
+   case UFSHCD_STATE_RESET:
+   err = SCSI_MLQUEUE_HOST_BUSY;
+   goto out_compl_cmd;
+   case UFSHCD_STATE_ERROR:
+   set_host_byte(cmd, DID_ERROR);
+   goto out_compl_cmd;
+   default:
+   dev_WARN_ONCE(hba->dev, 1, "%s: invalid state %d\n",
+   __func__, hba->ufshcd_state);
+   set_host_byte(cmd, DID_BAD_TARGET);
+   goto 

[RFC v2 3/5] swiotlb: Use device swiotlb pool if available

2020-07-27 Thread Claire Chang
Regardless of the swiotlb setting, the device swiotlb pool is preferred if
available.

The device swiotlb pools provide a basic level of protection against
the DMA overwriting buffer contents at unexpected times. However, to
protect against general data leakage and system memory corruption, the
system needs to provide a way to restrict the DMA to a predefined memory
region.

Signed-off-by: Claire Chang 
---
 drivers/iommu/intel/iommu.c |  6 +++---
 include/linux/dma-direct.h  |  8 
 include/linux/swiotlb.h | 13 -
 kernel/dma/direct.c |  8 
 kernel/dma/swiotlb.c| 18 +++---
 5 files changed, 30 insertions(+), 23 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 44c9230251eb..37d6583cf628 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -3684,7 +3684,7 @@ bounce_sync_single(struct device *dev, dma_addr_t addr, 
size_t size,
return;
 
tlb_addr = intel_iommu_iova_to_phys(>domain, addr);
-   if (is_swiotlb_buffer(tlb_addr))
+   if (is_swiotlb_buffer(dev, tlb_addr))
swiotlb_tbl_sync_single(dev, tlb_addr, size, dir, target);
 }
 
@@ -3768,7 +3768,7 @@ bounce_map_single(struct device *dev, phys_addr_t paddr, 
size_t size,
return (phys_addr_t)iova_pfn << PAGE_SHIFT;
 
 mapping_error:
-   if (is_swiotlb_buffer(tlb_addr))
+   if (is_swiotlb_buffer(dev, tlb_addr))
swiotlb_tbl_unmap_single(dev, tlb_addr, size,
 aligned_size, dir, attrs);
 swiotlb_error:
@@ -3796,7 +3796,7 @@ bounce_unmap_single(struct device *dev, dma_addr_t 
dev_addr, size_t size,
return;
 
intel_unmap(dev, dev_addr, size);
-   if (is_swiotlb_buffer(tlb_addr))
+   if (is_swiotlb_buffer(dev, tlb_addr))
swiotlb_tbl_unmap_single(dev, tlb_addr, size,
 aligned_size, dir, attrs);
 
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 5a3ce2a24794..1cf920ddb2f6 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -134,7 +134,7 @@ static inline void dma_direct_sync_single_for_device(struct 
device *dev,
 {
phys_addr_t paddr = dma_to_phys(dev, addr);
 
-   if (unlikely(is_swiotlb_buffer(paddr)))
+   if (unlikely(is_swiotlb_buffer(dev, paddr)))
swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE);
 
if (!dev_is_dma_coherent(dev))
@@ -151,7 +151,7 @@ static inline void dma_direct_sync_single_for_cpu(struct 
device *dev,
arch_sync_dma_for_cpu_all();
}
 
-   if (unlikely(is_swiotlb_buffer(paddr)))
+   if (unlikely(is_swiotlb_buffer(dev, paddr)))
swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_CPU);
 }
 
@@ -162,7 +162,7 @@ static inline dma_addr_t dma_direct_map_page(struct device 
*dev,
phys_addr_t phys = page_to_phys(page) + offset;
dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-   if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+   if (unlikely(swiotlb_force == SWIOTLB_FORCE || dev->dma_io_tlb_mem))
return swiotlb_map(dev, phys, size, dir, attrs);
 
if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
@@ -188,7 +188,7 @@ static inline void dma_direct_unmap_page(struct device 
*dev, dma_addr_t addr,
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-   if (unlikely(is_swiotlb_buffer(phys)))
+   if (unlikely(is_swiotlb_buffer(dev, phys)))
swiotlb_tbl_unmap_single(dev, phys, size, size, dir, attrs);
 }
 #endif /* _LINUX_DMA_DIRECT_H */
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index ab0d571d0826..8a50b3af7c3f 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -105,18 +105,21 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem io_tlb_default_mem;
 
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-   return paddr >= io_tlb_mem.start && paddr < io_tlb_mem.end;
+   struct io_tlb_mem *mem =
+   dev->dma_io_tlb_mem ? dev->dma_io_tlb_mem : &io_tlb_default_mem;
+
+   return paddr >= mem->start && paddr < mem->end;
 }
 
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
-bool is_swiotlb_active(void);
+bool is_swiotlb_active(struct device *dev);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
return false;
 }
@@ -132,7 +135,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
return SIZE_MAX;
 }
 
-static inline bool is_swiotlb_active(void)
+static inline bool is_swiotlb_active(struct device *dev)
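Read on its own, the fallback in the is_swiotlb_buffer() hunk above is easy to exercise in user space. A minimal sketch, with the structures reduced to the fields involved and the pool addresses made up:

```c
#include <assert.h>
#include <stdbool.h>

/* Reduced stand-ins for the kernel structures the hunk touches. */
struct io_tlb_mem {
	unsigned long start;
	unsigned long end;
};

struct device {
	struct io_tlb_mem *dma_io_tlb_mem;	/* per-device pool, may be NULL */
};

/* The global pool, as io_tlb_default_mem in the patch. */
static struct io_tlb_mem io_tlb_default_mem = { 0x1000, 0x2000 };

/* Per-device pool wins when present; otherwise fall back to the default. */
static bool is_swiotlb_buffer(struct device *dev, unsigned long paddr)
{
	struct io_tlb_mem *mem =
		dev->dma_io_tlb_mem ? dev->dma_io_tlb_mem : &io_tlb_default_mem;

	return paddr >= mem->start && paddr < mem->end;
}
```

A device with a private pool no longer matches addresses in the default pool and vice versa, which is why the helper now needs the dev argument.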

[PATCH v7 1/8] scsi: ufs: Add checks before setting clk-gating states

2020-07-27 Thread Can Guo
Clock gating can be turned on/off selectively, which means its state
information is only meaningful while it is enabled. This change makes
sure that we only look at the clk-gating state if it is enabled.

Signed-off-by: Can Guo 
Reviewed-by: Avri Altman 
Reviewed-by: Hongwu Su 
---
 drivers/scsi/ufs/ufshcd.c | 17 ++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index cdff7e5..99bd3e4 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -1839,6 +1839,8 @@ static void ufshcd_init_clk_gating(struct ufs_hba *hba)
if (!ufshcd_is_clkgating_allowed(hba))
return;
 
+   hba->clk_gating.state = CLKS_ON;
+
hba->clk_gating.delay_ms = 150;
	INIT_DELAYED_WORK(&hba->clk_gating.gate_work, ufshcd_gate_work);
	INIT_WORK(&hba->clk_gating.ungate_work, ufshcd_ungate_work);
@@ -2541,7 +2543,8 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
err = SCSI_MLQUEUE_HOST_BUSY;
goto out;
}
-   WARN_ON(hba->clk_gating.state != CLKS_ON);
+   WARN_ON(ufshcd_is_clkgating_allowed(hba) &&
+   (hba->clk_gating.state != CLKS_ON));
 
	lrbp = &hba->lrb[tag];
 
@@ -8315,8 +8318,11 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
/* If link is active, device ref_clk can't be switched off */
__ufshcd_setup_clocks(hba, false, true);
 
-   hba->clk_gating.state = CLKS_OFF;
-   trace_ufshcd_clk_gating(dev_name(hba->dev), hba->clk_gating.state);
+   if (ufshcd_is_clkgating_allowed(hba)) {
+   hba->clk_gating.state = CLKS_OFF;
+   trace_ufshcd_clk_gating(dev_name(hba->dev),
+   hba->clk_gating.state);
+   }
 
/* Put the host controller in low power mode if possible */
ufshcd_hba_vreg_set_lpm(hba);
@@ -8456,6 +8462,11 @@ static int ufshcd_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
if (hba->clk_scaling.is_allowed)
ufshcd_suspend_clkscaling(hba);
ufshcd_setup_clocks(hba, false);
+   if (ufshcd_is_clkgating_allowed(hba)) {
+   hba->clk_gating.state = CLKS_OFF;
+   trace_ufshcd_clk_gating(dev_name(hba->dev),
+   hba->clk_gating.state);
+   }
 out:
hba->pm_op_in_progress = 0;
if (ret)
-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project.



[PATCH v7 4/8] scsi: ufs: Add some debug infos to ufshcd_print_host_state

2020-07-27 Thread Can Guo
The last interrupt status and its timestamp are very helpful when
debugging system stability issues, e.g. IRQ starvation, so add them to
ufshcd_print_host_state. Meanwhile, UFS device information such as the
model name and FW version also comes in handy during debugging. In
addition, this change cleans up some prints in ufshcd_print_host_regs,
as similar prints are already available in ufshcd_print_host_state.

Signed-off-by: Can Guo 
Reviewed-by: Avri Altman 
---
 drivers/scsi/ufs/ufshcd.c | 31 ++-
 drivers/scsi/ufs/ufshcd.h |  5 +
 2 files changed, 23 insertions(+), 13 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 99bd3e4..eda4dc6 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -411,15 +411,6 @@ static void ufshcd_print_err_hist(struct ufs_hba *hba,
 static void ufshcd_print_host_regs(struct ufs_hba *hba)
 {
ufshcd_dump_regs(hba, 0, UFSHCI_REG_SPACE_SIZE, "host_regs: ");
-   dev_err(hba->dev, "hba->ufs_version = 0x%x, hba->capabilities = 0x%x\n",
-   hba->ufs_version, hba->capabilities);
-   dev_err(hba->dev,
-   "hba->outstanding_reqs = 0x%x, hba->outstanding_tasks = 0x%x\n",
-   (u32)hba->outstanding_reqs, (u32)hba->outstanding_tasks);
-   dev_err(hba->dev,
-   "last_hibern8_exit_tstamp at %lld us, hibern8_exit_cnt = %d\n",
-   ktime_to_us(hba->ufs_stats.last_hibern8_exit_tstamp),
-   hba->ufs_stats.hibern8_exit_cnt);
 
	ufshcd_print_err_hist(hba, &hba->ufs_stats.pa_err, "pa_err");
	ufshcd_print_err_hist(hba, &hba->ufs_stats.dl_err, "dl_err");
@@ -438,8 +429,6 @@ static void ufshcd_print_host_regs(struct ufs_hba *hba)
	ufshcd_print_err_hist(hba, &hba->ufs_stats.host_reset, "host_reset");
	ufshcd_print_err_hist(hba, &hba->ufs_stats.task_abort, "task_abort");
 
-   ufshcd_print_clk_freqs(hba);
-
ufshcd_vops_dbg_register_dump(hba);
 }
 
@@ -499,6 +488,8 @@ static void ufshcd_print_tmrs(struct ufs_hba *hba, unsigned long bitmap)
 
 static void ufshcd_print_host_state(struct ufs_hba *hba)
 {
+   struct scsi_device *sdev_ufs = hba->sdev_ufs_device;
+
dev_err(hba->dev, "UFS Host state=%d\n", hba->ufshcd_state);
dev_err(hba->dev, "outstanding reqs=0x%lx tasks=0x%lx\n",
hba->outstanding_reqs, hba->outstanding_tasks);
@@ -511,12 +502,24 @@ static void ufshcd_print_host_state(struct ufs_hba *hba)
dev_err(hba->dev, "Auto BKOPS=%d, Host self-block=%d\n",
hba->auto_bkops_enabled, hba->host->host_self_blocked);
dev_err(hba->dev, "Clk gate=%d\n", hba->clk_gating.state);
+   dev_err(hba->dev,
+   "last_hibern8_exit_tstamp at %lld us, hibern8_exit_cnt=%d\n",
+   ktime_to_us(hba->ufs_stats.last_hibern8_exit_tstamp),
+   hba->ufs_stats.hibern8_exit_cnt);
+   dev_err(hba->dev, "last intr at %lld us, last intr status=0x%x\n",
+   ktime_to_us(hba->ufs_stats.last_intr_ts),
+   hba->ufs_stats.last_intr_status);
dev_err(hba->dev, "error handling flags=0x%x, req. abort count=%d\n",
hba->eh_flags, hba->req_abort_count);
-   dev_err(hba->dev, "Host capabilities=0x%x, caps=0x%x\n",
-   hba->capabilities, hba->caps);
+   dev_err(hba->dev, "hba->ufs_version=0x%x, Host capabilities=0x%x, caps=0x%x\n",
+   hba->ufs_version, hba->capabilities, hba->caps);
dev_err(hba->dev, "quirks=0x%x, dev. quirks=0x%x\n", hba->quirks,
hba->dev_quirks);
+   if (sdev_ufs)
+   dev_err(hba->dev, "UFS dev info: %.8s %.16s rev %.4s\n",
+   sdev_ufs->vendor, sdev_ufs->model, sdev_ufs->rev);
+
+   ufshcd_print_clk_freqs(hba);
 }
 
 /**
@@ -5951,6 +5954,8 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba)
 
spin_lock(hba->host->host_lock);
intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
+   hba->ufs_stats.last_intr_status = intr_status;
+   hba->ufs_stats.last_intr_ts = ktime_get();
 
/*
 * There could be max of hba->nutrs reqs in flight and in worst case
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index 656c069..5b2cdaf 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -406,6 +406,8 @@ struct ufs_err_reg_hist {
 
 /**
  * struct ufs_stats - keeps usage/err statistics
+ * @last_intr_status: record the last interrupt status.
+ * @last_intr_ts: record the last interrupt timestamp.
  * @hibern8_exit_cnt: Counter to keep track of number of exits,
  * reset this after link-startup.
  * @last_hibern8_exit_tstamp: Set time after the hibern8 exit.
@@ -425,6 +427,9 @@ struct ufs_err_reg_hist {
  * @tsk_abort: tracks task abort events
  */
 struct ufs_stats {
+   u32 last_intr_status;
+   ktime_t last_intr_ts;
+
u32 hibern8_exit_cnt;
ktime_t last_hibern8_exit_tstamp;
 
-- 

[RFC v2 0/5] Restricted DMA

2020-07-27 Thread Claire Chang
This series implements mitigations for lack of DMA access control on
systems without an IOMMU, which could result in the DMA accessing the
system memory at unexpected times and/or unexpected addresses, possibly
leading to data leakage or corruption.

For example, we plan to use the PCI-e bus for Wi-Fi on one MTK platform and
that PCI-e bus is not behind an IOMMU. As PCI-e, by design, gives the
device full access to system memory, a vulnerability in the Wi-Fi firmware
could easily escalate to a full system exploit (remote wifi exploits: [1a],
[1b] that shows a full chain of exploits; [2], [3]).

To mitigate the security concerns, we introduce restricted DMA. The
restricted DMA is implemented by per-device swiotlb and coherent memory
pools. The feature on its own provides a basic level of protection against
the DMA overwriting buffer contents at unexpected times. However, to
protect against general data leakage and system memory corruption, the
system needs to provide a way to restrict the DMA to a predefined memory
region (this is usually done at firmware level, e.g. in ATF on some ARM
platforms).
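
As a sketch, the reserved-memory plumbing described above might look like the following for the Wi-Fi example (addresses, sizes, labels and the compatible string are illustrative only; the binding added in patch 4/5 of this series is authoritative):

```dts
reserved-memory {
	#address-cells = <1>;
	#size-cells = <1>;
	ranges;

	wifi_pool: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x50000000 0x400000>;	/* 4 MiB bounce/coherent pool */
	};
};

&pcie_wifi {
	/* All DMA from this device is bounced through the pool above */
	memory-region = <&wifi_pool>;
};
```

With firmware restricting the device's bus access to the same range, a compromised device can only reach the bounce pool rather than arbitrary system memory.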

[1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
[1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
[2] https://blade.tencent.com/en/advisories/qualpwn/
[3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/


Claire Chang (5):
  swiotlb: Add io_tlb_mem struct
  swiotlb: Add device swiotlb pool
  swiotlb: Use device swiotlb pool if available
  dt-bindings: of: Add plumbing for restricted DMA pool
  of: Add plumbing for restricted DMA pool

 .../reserved-memory/reserved-memory.txt   |  35 ++
 drivers/iommu/intel/iommu.c   |   8 +-
 drivers/of/address.c  |  39 ++
 drivers/of/device.c   |   3 +
 drivers/of/of_private.h   |   6 +
 drivers/xen/swiotlb-xen.c |   4 +-
 include/linux/device.h|   4 +
 include/linux/dma-direct.h|   8 +-
 include/linux/swiotlb.h   |  49 +-
 kernel/dma/direct.c   |   8 +-
 kernel/dma/swiotlb.c  | 418 +++---
 11 files changed, 393 insertions(+), 189 deletions(-)

--
v1: https://lore.kernel.org/patchwork/cover/1271660/
Changes in v2:
- build on top of swiotlb
 
2.28.0.rc0.142.g3c755180ce-goog



[PATCH v7 5/8] scsi: ufs: Fix concurrency of error handler and other error recovery paths

2020-07-27 Thread Can Guo
Error recovery can be invoked from multiple paths, including hibern8
enter/exit (from ufshcd_link_recovery), ufshcd_eh_host_reset_handler and
eh_work scheduled from IRQ context. Ultimately, these paths are trying to
invoke ufshcd_reset_and_restore, in either sync or async manner.

Having both sync and async manners at the same time has some problems

- If link recovery happens during ungate work, ufshcd_hold() would be
  called recursively. Although commit 53c12d0ef6fcb
  ("scsi: ufs: fix error recovery after the hibern8 exit failure") [1]
  fixed a deadlock due to recursive calls of ufshcd_hold() by adding a
  check of eh_in_progress into ufshcd_hold, this check allows eh_work to
  run in parallel while link recovery is running.

- Similar concurrency can also happen when error recovery is invoked from
  ufshcd_eh_host_reset_handler and ufshcd_link_recovery.

- Concurrency can even happen between eh_works. eh_work, currently queued
  on system_wq, is allowed to have multiple instances running in parallel,
  but we don't have proper protection for that.

If any of the above concurrency happens, error recovery would fail and
leave the UFS device and host in bad states. To fix the concurrency
problem, this change queues eh_work on a single-threaded workqueue and
removes the link recovery calls from the hibern8 enter/exit path.
Meanwhile, eh_host_reset_handler now makes use of eh_work instead of
calling ufshcd_reset_and_restore. This unifies the UFS error recovery
mechanism.

In addition, according to the UFSHCI JEDEC spec, hibern8 enter/exit error
occurs when the link is broken. This essentially applies to any power mode
change operations (since they all use PACP_PWR cmds in UniPro layer). So,
in this change, if a power mode change operation (including AH8 enter/exit)
fails, mark link state as UIC_LINK_BROKEN_STATE and schedule the eh_work.
In this case, error handler needs to do a full reset and restore to recover
the link back to active. Before the link state is recovered to active,
ufshcd_uic_pwr_ctrl simply returns -ENOLINK to avoid more errors.
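
The effect of moving eh_work to a single-threaded workqueue can be illustrated with a small user-space analogue (pthreads instead of the kernel workqueue API; every name below is invented for the sketch): however many paths request recovery concurrently, they only enqueue, and one worker drains the queue, so at most one recovery instance is ever running.

```c
#include <assert.h>
#include <pthread.h>

#define NWORKS 16

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int queued, done, active, max_active;

/* The "eh_work" body: record how many instances run concurrently. */
static void eh_work(void)
{
	pthread_mutex_lock(&lock);
	active++;
	if (active > max_active)
		max_active = active;
	pthread_mutex_unlock(&lock);

	/* ...reset and restore would happen here... */

	pthread_mutex_lock(&lock);
	active--;
	done++;
	pthread_mutex_unlock(&lock);
}

/* Requesters (IRQ path, host reset handler, link recovery) only enqueue. */
static void *schedule_eh_work(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	queued++;
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
	return NULL;
}

/* One worker thread: the analogue of the single-threaded workqueue. */
static void *worker(void *arg)
{
	int served = 0;

	(void)arg;
	while (served < NWORKS) {
		pthread_mutex_lock(&lock);
		while (queued == served)
			pthread_cond_wait(&cond, &lock);
		pthread_mutex_unlock(&lock);
		eh_work();
		served++;
	}
	return NULL;
}

/* Returns the maximum number of eh_work instances seen running at once. */
static int run_demo(void)
{
	pthread_t w, req[NWORKS];
	int i;

	pthread_create(&w, NULL, worker, NULL);
	for (i = 0; i < NWORKS; i++)
		pthread_create(&req[i], NULL, schedule_eh_work, NULL);
	for (i = 0; i < NWORKS; i++)
		pthread_join(req[i], NULL);
	pthread_join(w, NULL);
	return max_active;
}
```

The patch gets the same guarantee from the workqueue itself rather than hand-rolled locking; the sketch only shows why serializing the work instances removes the concurrency described above.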

Signed-off-by: Can Guo 
---
 drivers/scsi/ufs/ufs-sysfs.c |   1 +
 drivers/scsi/ufs/ufshcd.c| 268 +++
 drivers/scsi/ufs/ufshcd.h|   9 ++
 3 files changed, 151 insertions(+), 127 deletions(-)

diff --git a/drivers/scsi/ufs/ufs-sysfs.c b/drivers/scsi/ufs/ufs-sysfs.c
index 2d71d23..02d379f00 100644
--- a/drivers/scsi/ufs/ufs-sysfs.c
+++ b/drivers/scsi/ufs/ufs-sysfs.c
@@ -16,6 +16,7 @@ static const char *ufschd_uic_link_state_to_string(
case UIC_LINK_OFF_STATE:return "OFF";
case UIC_LINK_ACTIVE_STATE: return "ACTIVE";
case UIC_LINK_HIBERN8_STATE:return "HIBERN8";
+   case UIC_LINK_BROKEN_STATE: return "BROKEN";
default:return "UNKNOWN";
}
 }
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index eda4dc6..c2d7a90 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -228,6 +228,7 @@ static int ufshcd_scale_clks(struct ufs_hba *hba, bool scale_up);
 static irqreturn_t ufshcd_intr(int irq, void *__hba);
 static int ufshcd_change_power_mode(struct ufs_hba *hba,
 struct ufs_pa_layer_attr *pwr_mode);
+static void ufshcd_schedule_eh_work(struct ufs_hba *hba);
 static int ufshcd_wb_buf_flush_enable(struct ufs_hba *hba);
 static int ufshcd_wb_buf_flush_disable(struct ufs_hba *hba);
 static int ufshcd_wb_ctrl(struct ufs_hba *hba, bool enable);
@@ -1571,11 +1572,6 @@ int ufshcd_hold(struct ufs_hba *hba, bool async)
spin_lock_irqsave(hba->host->host_lock, flags);
hba->clk_gating.active_reqs++;
 
-   if (ufshcd_eh_in_progress(hba)) {
-   spin_unlock_irqrestore(hba->host->host_lock, flags);
-   return 0;
-   }
-
 start:
switch (hba->clk_gating.state) {
case CLKS_ON:
@@ -1653,6 +1649,7 @@ static void ufshcd_gate_work(struct work_struct *work)
struct ufs_hba *hba = container_of(work, struct ufs_hba,
clk_gating.gate_work.work);
unsigned long flags;
+   int ret;
 
spin_lock_irqsave(hba->host->host_lock, flags);
/*
@@ -1679,8 +1676,11 @@ static void ufshcd_gate_work(struct work_struct *work)
 
/* put the link into hibern8 mode before turning off clocks */
if (ufshcd_can_hibern8_during_gating(hba)) {
-   if (ufshcd_uic_hibern8_enter(hba)) {
+   ret = ufshcd_uic_hibern8_enter(hba);
+   if (ret) {
hba->clk_gating.state = CLKS_ON;
+   dev_err(hba->dev, "%s: hibern8 enter failed %d\n",
+   __func__, ret);
trace_ufshcd_clk_gating(dev_name(hba->dev),
hba->clk_gating.state);
goto out;
@@ -1725,11 +1725,10 @@ static void __ufshcd_release(struct ufs_hba *hba)
 

[RFC v2 1/5] swiotlb: Add io_tlb_mem struct

2020-07-27 Thread Claire Chang
Add a new struct, io_tlb_mem, as the IO TLB memory pool descriptor and
move the relevant global variables into it.
This will be useful later to allow for per-device swiotlb regions.

Signed-off-by: Claire Chang 
---
 drivers/iommu/intel/iommu.c |   2 +-
 drivers/xen/swiotlb-xen.c   |   4 +-
 include/linux/swiotlb.h |  38 -
 kernel/dma/swiotlb.c| 286 +---
 4 files changed, 172 insertions(+), 158 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 3f7c04cf89b3..44c9230251eb 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -3736,7 +3736,7 @@ bounce_map_single(struct device *dev, phys_addr_t paddr, size_t size,
 */
if (!IS_ALIGNED(paddr | size, VTD_PAGE_SIZE)) {
tlb_addr = swiotlb_tbl_map_single(dev,
-   __phys_to_dma(dev, io_tlb_start),
+   __phys_to_dma(dev, io_tlb_default_mem.start),
paddr, size, aligned_size, dir, attrs);
if (tlb_addr == DMA_MAPPING_ERROR) {
goto swiotlb_error;
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index b6d27762c6f8..62452424ec8a 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -190,8 +190,8 @@ int __ref xen_swiotlb_init(int verbose, bool early)
/*
 * IO TLB memory already allocated. Just use it.
 */
-   if (io_tlb_start != 0) {
-   xen_io_tlb_start = phys_to_virt(io_tlb_start);
+   if (io_tlb_default_mem.start != 0) {
+   xen_io_tlb_start = phys_to_virt(io_tlb_default_mem.start);
goto end;
}
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 046bb94bd4d6..ab0d571d0826 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -69,11 +69,45 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
 
 #ifdef CONFIG_SWIOTLB
 extern enum swiotlb_force swiotlb_force;
-extern phys_addr_t io_tlb_start, io_tlb_end;
+
+/**
+ * struct io_tlb_mem - IO TLB Memory Pool Descriptor
+ *
+ * @start: The start address of the swiotlb memory pool. Used to do a quick
+ * range check to see if the memory was in fact allocated by this
+ * API. For device private swiotlb, this is device tree adjustable.
+ * @end:   The end address of the swiotlb memory pool. Used to do a quick
+ * range check to see if the memory was in fact allocated by this
+ * API. For device private swiotlb, this is device tree adjustable.
+ * @nslabs:The number of IO TLB blocks (in groups of 64) between @start and
+ * @end. For system swiotlb, this is command line adjustable via
+ * setup_io_tlb_npages.
+ * @used:  The number of used IO TLB block.
+ * @list:  The free list describing the number of free entries available
+ * from each index.
+ * @index: The index to start searching in the next round.
+ * @orig_addr: The original address corresponding to a mapped entry for the
+ * sync operations.
+ * @lock:  The lock to protect the above data structures in the map and
+ * unmap calls.
+ * @debugfs:   The dentry to debugfs.
+ */
+struct io_tlb_mem {
+   phys_addr_t start;
+   phys_addr_t end;
+   unsigned long nslabs;
+   unsigned long used;
+   unsigned int *list;
+   unsigned int index;
+   phys_addr_t *orig_addr;
+   spinlock_t lock;
+   struct dentry *debugfs;
+};
+extern struct io_tlb_mem io_tlb_default_mem;
 
 static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 {
-   return paddr >= io_tlb_start && paddr < io_tlb_end;
+   return paddr >= io_tlb_mem.start && paddr < io_tlb_mem.end;
 }
 
 void __init swiotlb_exit(void);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index c19379fabd20..f83911fa14ce 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -61,33 +61,11 @@
  * allocate a contiguous 1MB, we're probably in trouble anyway.
  */
 #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
+#define INVALID_PHYS_ADDR (~(phys_addr_t)0)
 
 enum swiotlb_force swiotlb_force;
 
-/*
- * Used to do a quick range check in swiotlb_tbl_unmap_single and
- * swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this
- * API.
- */
-phys_addr_t io_tlb_start, io_tlb_end;
-
-/*
- * The number of IO TLB blocks (in groups of 64) between io_tlb_start and
- * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
- */
-static unsigned long io_tlb_nslabs;
-
-/*
- * The number of used IO TLB block
- */
-static unsigned long io_tlb_used;
-
-/*
- * This is a free list describing the number of free entries available from
- * each index
- */
-static unsigned int *io_tlb_list;
-static unsigned int io_tlb_index;
+struct io_tlb_mem io_tlb_default_mem;
 
 /*
  * Max segment that we can 

[RFC v2 2/5] swiotlb: Add device swiotlb pool

2020-07-27 Thread Claire Chang
Add the initialization function to create device swiotlb pools from
matching reserved-memory nodes in the device tree.

Signed-off-by: Claire Chang 
---
 include/linux/device.h |   4 ++
 kernel/dma/swiotlb.c   | 148 +
 2 files changed, 126 insertions(+), 26 deletions(-)

diff --git a/include/linux/device.h b/include/linux/device.h
index 79ce404619e6..f40f711e43e9 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -575,6 +575,10 @@ struct device {
struct cma *cma_area;   /* contiguous memory area for dma
   allocations */
 #endif
+#ifdef CONFIG_SWIOTLB
+   struct io_tlb_mem   *dma_io_tlb_mem;
+#endif
+
/* arch specific additions */
struct dev_archdata archdata;
 
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index f83911fa14ce..eaa101b3e75b 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -36,6 +36,10 @@
 #include 
 #include 
 #include 
+#include 
+#include 
+#include 
+#include 
 #ifdef CONFIG_DEBUG_FS
 #include 
 #endif
@@ -298,20 +302,14 @@ static void swiotlb_cleanup(void)
max_segment = 0;
 }
 
-int
-swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
+static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
+   size_t size)
 {
-   struct io_tlb_mem *mem = &io_tlb_default_mem;
-   unsigned long i, bytes;
-
-   bytes = nslabs << IO_TLB_SHIFT;
+   unsigned long i;
 
-   mem->nslabs = nslabs;
-   mem->start = virt_to_phys(tlb);
-   mem->end = mem->start + bytes;
-
-   set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-   memset(tlb, 0, bytes);
+   mem->nslabs = size >> IO_TLB_SHIFT;
+   mem->start = start;
+   mem->end = mem->start + size;
 
/*
 * Allocate and initialize the free list array.  This array is used
@@ -336,11 +334,6 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
}
mem->index = 0;
 
-   swiotlb_print_info();
-
-   late_alloc = 1;
-
-   swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT);
	spin_lock_init(&mem->lock);
 
return 0;
@@ -354,6 +347,38 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
return -ENOMEM;
 }
 
+int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
+{
+   struct io_tlb_mem *mem = &io_tlb_default_mem;
+   unsigned long bytes;
+   int ret;
+
+   bytes = nslabs << IO_TLB_SHIFT;
+
+   set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
+   memset(tlb, 0, bytes);
+
+   ret = swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), bytes);
+   if (ret)
+   return ret;
+
+   swiotlb_print_info();
+
+   late_alloc = 1;
+
+   swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT);
+
+   return 0;
+}
+
+static void swiotlb_free_pages(struct io_tlb_mem *mem)
+{
+   free_pages((unsigned long)mem->orig_addr,
+  get_order(mem->nslabs * sizeof(phys_addr_t)));
+   free_pages((unsigned long)mem->list,
+  get_order(mem->nslabs * sizeof(int)));
+}
+
 void __init swiotlb_exit(void)
 {
	struct io_tlb_mem *mem = &io_tlb_default_mem;
@@ -362,10 +387,7 @@ void __init swiotlb_exit(void)
return;
 
if (late_alloc) {
-   free_pages((unsigned long)mem->orig_addr,
-  get_order(mem->nslabs * sizeof(phys_addr_t)));
-   free_pages((unsigned long)mem->list, get_order(mem->nslabs *
-  sizeof(int)));
+   swiotlb_free_pages(mem);
free_pages((unsigned long)phys_to_virt(mem->start),
   get_order(mem->nslabs << IO_TLB_SHIFT));
} else {
@@ -687,16 +709,90 @@ bool is_swiotlb_active(void)
 
 #ifdef CONFIG_DEBUG_FS
 
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs(struct io_tlb_mem *mem, const char *name,
+  struct dentry *node)
 {
-   struct io_tlb_mem *mem = &io_tlb_default_mem;
-
-   mem->debugfs = debugfs_create_dir("swiotlb", NULL);
+   mem->debugfs = debugfs_create_dir(name, node);
	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
+}
+
+static int __init swiotlb_create_default_debugfs(void)
+{
+   swiotlb_create_debugfs(&io_tlb_default_mem, "swiotlb", NULL);
+
return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif
+
+static int device_swiotlb_init(struct reserved_mem *rmem,
+  struct device *dev)
+{
+   struct io_tlb_mem *mem;
+   int ret;
+
+   if (dev->dma_io_tlb_mem)
+   return 0;
+
+   mem = kzalloc(sizeof(*mem), GFP_KERNEL);
+   

[PATCH v7 7/8] scsi: ufs: Move dumps in IRQ handler to error handler

2020-07-27 Thread Can Guo
Sometimes the dumps in the IRQ handler are heavy enough to cause system
stability issues, so move them to the error handler.

Signed-off-by: Can Guo 
---
 drivers/scsi/ufs/ufshcd.c | 31 +++
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index c480823..b2bafa3 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -5682,6 +5682,21 @@ static void ufshcd_err_handler(struct work_struct *work)
UFSHCD_UIC_DL_TCx_REPLAY_ERROR
needs_reset = true;
 
+   if (hba->saved_err & (INT_FATAL_ERRORS | UIC_ERROR |
+ UFSHCD_UIC_HIBERN8_MASK)) {
+   bool pr_prdt = !!(hba->saved_err & SYSTEM_BUS_FATAL_ERROR);
+
+   dev_err(hba->dev, "%s: saved_err 0x%x saved_uic_err 0x%x\n",
+   __func__, hba->saved_err, hba->saved_uic_err);
+   spin_unlock_irqrestore(hba->host->host_lock, flags);
+   ufshcd_print_host_state(hba);
+   ufshcd_print_pwr_info(hba);
+   ufshcd_print_host_regs(hba);
+   ufshcd_print_tmrs(hba, hba->outstanding_tasks);
+   ufshcd_print_trs(hba, hba->outstanding_reqs, pr_prdt);
+   spin_lock_irqsave(hba->host->host_lock, flags);
+   }
+
/*
 * if host reset is required then skip clearing the pending
 * transfers forcefully because they will get cleared during
@@ -5900,22 +5915,6 @@ static irqreturn_t ufshcd_check_errors(struct ufs_hba *hba)
 
/* block commands from scsi mid-layer */
ufshcd_scsi_block_requests(hba);
-
-   /* dump controller state before resetting */
-   if (hba->saved_err & (INT_FATAL_ERRORS | UIC_ERROR)) {
-   bool pr_prdt = !!(hba->saved_err &
-   SYSTEM_BUS_FATAL_ERROR);
-
-   dev_err(hba->dev, "%s: saved_err 0x%x saved_uic_err 0x%x\n",
-   __func__, hba->saved_err,
-   hba->saved_uic_err);
-
-   ufshcd_print_host_regs(hba);
-   ufshcd_print_pwr_info(hba);
-   ufshcd_print_tmrs(hba, hba->outstanding_tasks);
-   ufshcd_print_trs(hba, hba->outstanding_reqs,
-   pr_prdt);
-   }
ufshcd_schedule_eh_work(hba);
retval |= IRQ_HANDLED;
}
-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project.



[PATCH] MAINTAINERS: update entry to thermal governors file name prefixing

2020-07-27 Thread Lukas Bulwahn
Commit 0015d9a2a727 ("thermal/governors: Prefix all source files with
gov_") renamed power_allocator.c to gov_power_allocator.c in
./drivers/thermal amongst some other file renames, but missed adjusting
the MAINTAINERS entry.

Hence, ./scripts/get_maintainer.pl --self-test=patterns complains:

  warning: no file matches    F:    drivers/thermal/power_allocator.c

Update the file entry in MAINTAINERS to the new file name.

Signed-off-by: Lukas Bulwahn 
---
Amit, please ack.

Daniel, please pick this non-urgent minor patch for your -next tree.

applies cleanly on next-20200727

 MAINTAINERS | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index aad65cc8f35d..aa5a11d71f71 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17164,7 +17164,7 @@ M:  Lukasz Luba 
 L: linux...@vger.kernel.org
 S: Maintained
 F: Documentation/driver-api/thermal/power_allocator.rst
-F: drivers/thermal/power_allocator.c
+F: drivers/thermal/gov_power_allocator.c
 F: include/trace/events/thermal_power_allocator.h
 
 THINKPAD ACPI EXTRAS DRIVER
-- 
2.17.1



pinctrl: kirkwood: gpio mode not being selected

2020-07-27 Thread Chris Packham
Hi,

I'm in the process of updating our platforms from a v4.4.x based kernel
to a v5.7 based one.

On one of our Marvell Kirkwood based boards I'm seeing a problem where a 
gpio isn't being driven (the gpio happens to be a reset to a PHY chip 
that our userspace switching code is attempting to talk to).

Our bootloader is inadvertently configuring MPP15 into uart0 RTS mode 
(probably a copy and paste from the reference board).

Under the v4.4 kernel by the time userspace gets started the MPP15 pin 
has been put into GPIO mode. With the latest v5.7 kernel the incorrect 
mode is retained.

I haven't gone bisecting but I'm guessing something somewhere has 
decided not to put the pin into GPIO mode (because that is the hardware 
default).

I probably need to define an explicit pinctrl node in my dts, but I 
wanted to make sure that this was an intentional change in behaviour.
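
For what it's worth, such a node might look roughly like this with the standard Kirkwood pinctrl binding (label and group names invented; worth double-checking that "gpio" is a valid marvell,function for mpp15 on your SoC variant):

```dts
&pinctrl {
	pmx_phy_reset: pmx-phy-reset {
		marvell,pins = "mpp15";
		marvell,function = "gpio";
	};
};

&phy_reset_consumer {	/* whichever node drives the PHY reset GPIO */
	pinctrl-0 = <&pmx_phy_reset>;
	pinctrl-names = "default";
};
```

Claiming the group makes the kernel program the mux at probe time, so whatever mode the bootloader left MPP15 in would no longer matter.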

Thanks,
Chris


Re: [PATCH 2/2] iio: light: as73211: New driver

2020-07-27 Thread kernel test robot
Hi Christian,

I love your patch! Perhaps something to improve:

[auto build test WARNING on iio/togreg]
[also build test WARNING on robh/for-next linux/master linus/master v5.8-rc7 next-20200727]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Christian-Eggers/dt-bindings-iio-light-add-AMS-AS73211-support/20200727-234842
base:   https://git.kernel.org/pub/scm/linux/kernel/git/jic23/iio.git togreg
config: sparc64-randconfig-s031-20200728 (attached as .config)
compiler: sparc64-linux-gcc (GCC) 9.3.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# apt-get install sparse
# sparse version: v0.6.2-94-geb6779f6-dirty
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=sparc64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 


sparse warnings: (new ones prefixed by >>)

>> drivers/iio/light/as73211.c:473:35: sparse: sparse: cast to restricted __le16
>> drivers/iio/light/as73211.c:473:35: sparse: sparse: cast to restricted __le16
>> drivers/iio/light/as73211.c:473:35: sparse: sparse: cast to restricted __le16
>> drivers/iio/light/as73211.c:473:35: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:477:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:477:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:477:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:477:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:478:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:478:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:478:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:478:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:479:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:479:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:479:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:479:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:498:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:498:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:498:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:498:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:499:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:499:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:499:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:499:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:500:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:500:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:500:43: sparse: sparse: cast to restricted __le16
   drivers/iio/light/as73211.c:500:43: sparse: sparse: cast to restricted __le16
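
The usual remedy for this class of warning is to keep the raw buffer annotated as __le16 and convert it explicitly with le16_to_cpu() instead of a plain cast. What that conversion amounts to can be sketched in user space (an illustration only, not the driver's actual fix):

```c
#include <assert.h>
#include <stdint.h>

/* Decode a 16-bit little-endian value from raw bytes: an explicit
 * byte-order conversion, which is what sparse's __le16 annotation is
 * there to enforce in place of a bare cast. */
static uint16_t le16_decode(const uint8_t *p)
{
	return (uint16_t)(p[0] | ((uint16_t)p[1] << 8));
}
```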

vim +473 drivers/iio/light/as73211.c

   439  
   440  static irqreturn_t as73211_trigger_handler(int irq __always_unused, void *p)
   441  {
   442  struct iio_poll_func *pf = p;
   443  struct iio_dev *indio_dev = pf->indio_dev;
   444  struct as73211_data *data = iio_priv(indio_dev);
   445  int data_result, ret;
   446  
   447  mutex_lock(&data->mutex);
   448  
   449  data_result = as73211_req_data(data);
   450  
   451  /* Optimization for reading all (color + temperature) channels */
   452  if (*indio_dev->active_scan_mask == 0xf) {
   453  u8 addr = as73211_channels[0].address;
   454  struct i2c_msg msgs[] = {
   455  {
   456  .addr = data->client->addr,
   457  .flags = 0,
   458  .len = 1,
   459  .buf = &addr
   460  },
   461  {
   462  .addr = data->client->addr,
   463  .flags = I2C_M_RD,
   464  .len = 4 * sizeof(*data->buffer),
   465  .buf = (u8 *)&data->buffer[0]
   466  },
   467  };
   468  ret = i

Re: [PATCH] checkpatch: disable commit log length check warning for signature tag

2020-07-27 Thread Nachiket Naganure
On Mon, Jul 27, 2020 at 02:17:06PM -0700, Joe Perches wrote:
> On Mon, 2020-07-27 at 22:34 +0200, Lukas Bulwahn wrote:
> > On Mon, 27 Jul 2020, Nachiket Naganure wrote:
> > > On Sun, Jul 26, 2020 at 11:14:42PM -0700, Joe Perches wrote:
> > > > On Mon, 2020-07-27 at 11:24 +0530, Nachiket Naganure wrote:
> []
> > > > OK, but the test should be:
> > > > 
> > > >   $line =~ /^\s*$signature_tags/ ||
> > > > 
> > > > so the line has to start with a signature and
> > > > it won't match on signature tags in the middle
> > > > of other content on the same line.
> > > > 
> > > > 
> > > But the suggested won't work in case of merged signatures.
> > > Such as "Reported-and-tested-by: u...@email.com"
> > > 
> > But Joe's remark is valid; we do not want to match on signature tags in 
> > the middle. These cases are probably mentioning signature tags as part of 
> > a sentence or some explanation.
> > 
> > Nachiket, think about a proper solution for this issue.
> 
> Mostly the problem doesn't occur very often and likely
> isn't worth much effort.
> 
> Combinations aren't common in git logs and arbitrary
> combinatorial logic isn't trivial.
> 
> For the last million commits, only ~3000 have any combination
> and almost all of those are Reported-and-tested-by:
> 
> Likely that could be added to $signature_tags and the problem
> just ignored.
> 
> I'd still exempt signature lines from line length limits.
> 
> $ git log -100 --no-merges --format=email | \
>   grep -P "^[\w_-]+-by:" | cut -f1 -d":" | \
>   sort | uniq -c | sort -rn | cat -n | grep -i and
>  7   2159 Reported-and-tested-by
> 11   255 Reported-and-Tested-by
> 12   203 Reviewed-and-tested-by
> 13   132 Reviewed-and-Tested-by
> 22 68 Reported-and-bisected-by
> 31 44 Acked-and-tested-by
> 40 21 Tested-and-Acked-by
> 41 20 Tested-and-acked-by
> 42 20 Reported-bisected-and-tested-by
> 49 17 Suggested-and-Acked-by
> 50 16 Tested-and-reported-by
> 51 16 Acked-and-Tested-by
> 52 15 Suggested-and-Tested-by
> 53 15 Suggested-and-acked-by
> 56 14 Tested-and-reviewed-by
> 58 13 Tested-and-Reviewed-by
> 61 12 Reported-and-acked-by
> 62 11 Reported-and-debugged-by
> 65 10 Reported-and-Acked-by
> 73  8 Suggested-and-reviewed-by
> 76  8 Reported-and-suggested-by
> 77  8 Reported-and-analyzed-by
> 79  8 Bisected-and-tested-by
> 81  7 Requested-and-tested-by
> 82  7 Reported-and-reviewed-by
> 91  6 Bisected-and-reported-by
>104  4 Tested-and-Reported-by
>111  4 Requested-and-Tested-by
>125  3 Reported-by-and-Tested-by
>127  3 Reported-And-Tested-by
>128  3 Reported-and-requested-by
>155  2 Suggested-and-tested-by
>166  2 Reported-tested-and-acked-by
>169  2 Reported-and-Suggested-by
>170  2 Reported-and-by
>201  2 Debugged-and-tested-by
>232  1 Tested-by-and-Reviewed-by
>234  1 Tested-And-Reviewed-by
>235  1 Tested-and-requested-by
>236  1 Tested-and-bugfixed-by
>245  1 Suggested-and-Reviewed-by
>265  1 Signed-off-and-morning-tea-spilled-by
>284  1 Reviewed-and-wanted-by
>285  1 Reviewed-and-discussed-by
>286  1 Reviewed-and-Acked-by
>287  1 Reviewed-and-acked-by
>294  1 Requested-and-debugged-by
>295  1 Requested-and-Acked-by
>296  1 Requested-and-acked-by
>301  1 Reportedy-and-tested-by
>303  1 Reported-tested-and-fixed-by
>304  1 Reported-tested-and-bisected-by
>305  1 Reported-Reviewed-and-Acked-by
>306  1 Reported-requested-and-tested-by
>312  1 Reported-by-and-tested-by
>313  1 Reported-Bistected-and-Tested-by
>316  1 Reported-and_tested-by
>317  1 Reported-and-Tested-and-Reviewed-by
>318  1 Reported-and-tested-and-reviewed-by
>319  1 Reported-and-test-by
>320  1 Reported-and-root-caused-by
>321  1 Reported-and-Reviwed-by
>322  1 Reported-and-reviwed-by
>323  1 Reported-and-Reviewed-by
>324  1 Reported-and-Reviewed-and-Tested-by
>325  1 Reported-and-isolated-by
>326  1 Reported-and-introduced-by
>327  1 Reported-and-Inspired-by
>328  1 Reported-and-helped-by
>329 

Re: [PATCH bpf-next v2 01/35] bpf: memcg-based memory accounting for bpf progs

2020-07-27 Thread Song Liu
On Mon, Jul 27, 2020 at 5:08 PM Roman Gushchin  wrote:
>
> On Mon, Jul 27, 2020 at 03:11:42PM -0700, Song Liu wrote:
> > On Mon, Jul 27, 2020 at 12:20 PM Roman Gushchin  wrote:
> > >
> > > Include memory used by bpf programs into the memcg-based accounting.
> > > This includes the memory used by programs itself, auxiliary data
> > > and statistics.
> > >
> > > Signed-off-by: Roman Gushchin 
> > > ---
> > >  kernel/bpf/core.c | 8 
> > >  1 file changed, 4 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > > index bde93344164d..daab8dcafbd4 100644
> > > --- a/kernel/bpf/core.c
> > > +++ b/kernel/bpf/core.c
> > > @@ -77,7 +77,7 @@ void *bpf_internal_load_pointer_neg_helper(const struct 
> > > sk_buff *skb, int k, uns
> > >
> > >  struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t 
> > > gfp_extra_flags)
> > >  {
> > > -   gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | gfp_extra_flags;
> > > +   gfp_t gfp_flags = GFP_KERNEL_ACCOUNT | __GFP_ZERO | 
> > > gfp_extra_flags;
> > > struct bpf_prog_aux *aux;
> > > struct bpf_prog *fp;
> > >
> > > @@ -86,7 +86,7 @@ struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int 
> > > size, gfp_t gfp_extra_flag
> > > if (fp == NULL)
> > > return NULL;
> > >
> > > -   aux = kzalloc(sizeof(*aux), GFP_KERNEL | gfp_extra_flags);
> > > +   aux = kzalloc(sizeof(*aux), GFP_KERNEL_ACCOUNT | gfp_extra_flags);
> > > if (aux == NULL) {
> > > vfree(fp);
> > > return NULL;
> > > @@ -104,7 +104,7 @@ struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int 
> > > size, gfp_t gfp_extra_flag
> > >
> > >  struct bpf_prog *bpf_prog_alloc(unsigned int size, gfp_t gfp_extra_flags)
> > >  {
> > > -   gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | gfp_extra_flags;
> > > +   gfp_t gfp_flags = GFP_KERNEL_ACCOUNT | __GFP_ZERO | 
> > > gfp_extra_flags;
> > > struct bpf_prog *prog;
> > > int cpu;
> > >
> > > @@ -217,7 +217,7 @@ void bpf_prog_free_linfo(struct bpf_prog *prog)
> > >  struct bpf_prog *bpf_prog_realloc(struct bpf_prog *fp_old, unsigned int 
> > > size,
> > >   gfp_t gfp_extra_flags)
> > >  {
> > > -   gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | gfp_extra_flags;
> > > +   gfp_t gfp_flags = GFP_KERNEL_ACCOUNT | __GFP_ZERO | 
> > > gfp_extra_flags;
> > > struct bpf_prog *fp;
> > > u32 pages, delta;
> > > int ret;
> > > --
>
> Hi Song!
>
> Thank you for looking into the patchset!
>
> >
> > Do we need similar changes in
> >
> > bpf_prog_array_copy()
> > bpf_prog_alloc_jited_linfo()
> > bpf_prog_clone_create()
> >
> > and maybe a few more?
>
> I've tried to follow the rlimit-based accounting, so those objects which were
> skipped are mostly skipped now and vice versa. The main reason for that is
> simple: I don't know many parts of bpf code well enough to decide whether
> we need accounting or not.
>
> In general with memcg-based accounting we can easily cover places which were
> not covered previously: e.g. the memory used by the verifier. But I guess it's
> better to do it case-by-case.
>
> But if you're aware of any big objects which should be accounted for sure,
> please, let me know.

Thanks for the explanation. I think we can do one-to-one migration to
memcg-based accounting for now.

Song


Re: [PATCH] drm/amd/display: Clear dm_state for fast updates

2020-07-27 Thread Mazin Rezk
On Monday, July 27, 2020 7:42 PM, Mazin Rezk  wrote:

> On Monday, July 27, 2020 5:32 PM, Daniel Vetter  wrote:
>
> > On Mon, Jul 27, 2020 at 11:11 PM Mazin Rezk  wrote:
> > >
> > > On Monday, July 27, 2020 4:29 PM, Daniel Vetter  wrote:
> > >
> > > > On Mon, Jul 27, 2020 at 9:28 PM Christian König
> > > >  wrote:
> > > > >
> > > > > On 27.07.20 at 16:05, Kazlauskas, Nicholas wrote:
> > > > > > On 2020-07-27 9:39 a.m., Christian König wrote:
> > > > > >> On 27.07.20 at 07:40, Mazin Rezk wrote:
> > > > > >>> This patch fixes a race condition that causes a use-after-free 
> > > > > >>> during
> > > > > >>> amdgpu_dm_atomic_commit_tail. This can occur when 2 non-blocking
> > > > > >>> commits
> > > > > >>> are requested and the second one finishes before the first.
> > > > > >>> Essentially,
> > > > > >>> this bug occurs when the following sequence of events happens:
> > > > > >>>
> > > > > >>> 1. Non-blocking commit #1 is requested w/ a new dm_state #1 and is
> > > > > >>> deferred to the workqueue.
> > > > > >>>
> > > > > >>> 2. Non-blocking commit #2 is requested w/ a new dm_state #2 and is
> > > > > >>> deferred to the workqueue.
> > > > > >>>
> > > > > >>> 3. Commit #2 starts before commit #1, dm_state #1 is used in the
> > > > > >>> commit_tail and commit #2 completes, freeing dm_state #1.
> > > > > >>>
> > > > > >>> 4. Commit #1 starts after commit #2 completes, uses the freed 
> > > > > >>> dm_state
> > > > > >>> 1 and dereferences a freelist pointer while setting the context.
> > > > > >>
> > > > > >> Well I only have a one mile high view on this, but why don't you 
> > > > > >> let
> > > > > >> the work items execute in order?
> > > > > >>
> > > > > >> That would be better anyway cause this way we don't trigger a cache
> > > > > >> line ping pong between CPUs.
> > > > > >>
> > > > > >> Christian.
> > > > > >
> > > > > > We use the DRM helpers for managing drm_atomic_commit_state and 
> > > > > > those
> > > > > > helpers internally push non-blocking commit work into the system
> > > > > > unbound work queue.
> > > > >
> > > > > Mhm, well if you send those helper atomic commits in the order A,B and
> > > > > they execute it in the order B,A I would call that a bug :)
> > > >
> > > > The way it works is it pushes all commits into unbound work queue, but
> > > > then forces serialization as needed. We do _not_ want e.g. updates on
> > > > different CRTC to be serialized, that would result in lots of judder.
> > > > And hw is funny enough that there's all kinds of dependencies.
> > > >
> > > > The way you force synchronization is by adding other CRTC state
> > > > objects. So if DC is busted and can only handle a single update per
> > > > work item, then I guess you always need all CRTC states and everything
> > > > will be run in order. But that also totally kills modern multi-screen
> > > > compositors. Xorg isn't modern, just in case that's not clear :-)
> > > >
> > > > Looking at the code it seems like you indeed have only a single dm
> > > > state, so yeah global sync is what you'll need as immediate fix, and
> > > > then maybe fix up DM to not be quite so silly ... or at least only do
> > > > the dm state stuff when really needed.
> > > >
> > > > We could also sprinkle the drm_crtc_commit structure around a bit
> > > > (it's the glue that provides the synchronization across commits), but
> > > > since your dm state is global just grabbing all crtc states
> > > > unconditionally as part of that is probably best.
> > > >
> > > > > > While we could duplicate a copy of that code with nothing but the
> > > > > > workqueue changed that isn't something I'd really like to maintain
> > > > > > going forward.
> > > > >
> > > > > I'm not talking about duplicating the code, I'm talking about fixing 
> > > > > the
> > > > > helpers. I don't know that code well, but from the outside it sounds
> > > > > like a bug there.
> > > > >
> > > > > And executing work items in the order they are submitted is trivial.
> > > > >
> > > > > Had anybody pinged Daniel or other people familiar with the helper 
> > > > > code
> > > > > about it?
> > > >
> > > > Yeah something is wrong here, and the fix looks horrible :-)
> > > >
> > > > Aside, I've also seen some recent discussion flare up about
> > > > drm_atomic_state_get/put used to paper over some other use-after-free,
> > > > but this time related to interrupt handlers. Maybe a few rules about
> > > > that:
> > > > - dont
> > > > - especially not when it's interrupt handlers, because you can't call
> > > > drm_atomic_state_put from interrupt handlers.
> > > >
> > > > Instead have a spin_lock_irq to protect the shared data with your
> > > > interrupt handler, and _copy_ the data over. This is e.g. what
> > > > drm_crtc_arm_vblank_event does.
> > >
> > > Nicholas wrote a patch that attempted to resolve the issue by adding every
> > > CRTC into the commit to use the stall checks. [1] While this forces
> > > synchronisation on commits, it's kind of a hacky method that 

Re: [PATCH 4/4] x86/cpu: Use SERIALIZE in sync_core() when available

2020-07-27 Thread Ricardo Neri
On Mon, Jul 27, 2020 at 03:30:20PM +0200, pet...@infradead.org wrote:
> On Mon, Jul 27, 2020 at 03:05:36PM +0200, pet...@infradead.org wrote:
> > Yeah, I'm not sure.. the 'funny' thing is that typically call
> > sync_core() from an IPI anyway. And the synchronous broadcast IPI is by
> > far the most expensive part of that.
> > 
> > Something like this...
> > 
> > diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
> > index 20e07feb4064..528e049ee1d9 100644
> > --- a/arch/x86/kernel/alternative.c
> > +++ b/arch/x86/kernel/alternative.c
> > @@ -989,12 +989,13 @@ void *text_poke_kgdb(void *addr, const void *opcode, 
> > size_t len)
> >  
> >  static void do_sync_core(void *info)
> >  {
> > -   sync_core();
> > +   /* IRET implies sync_core() */
> >  }
> >  
> >  void text_poke_sync(void)
> >  {
> > on_each_cpu(do_sync_core, NULL, 1);
> > +   sync_core();
> >  }
> >  
> >  struct text_poke_loc {
> 
> So 'people' have wanted to optimize this for NOHZ_FULL and I suppose
> virt as well.
> 
> IFF VMENTER is serializing, I suppose we can simply do something like:
> 
> bool text_poke_cond(int cpu, void *info)
> {
>   /*
>* If we observe the vCPU is preempted, it will do VMENTER
>* no point in sending an IPI to SERIALIZE.
>*/
>   return !vcpu_is_preempted(cpu);
> }
> 
> void text_poke_sync(void)
> {
>   smp_call_function_many_cond(cpu_possible_mask,
>   do_sync_core, NULL, 1, text_poke_cond);
>   sync_core();
> }
> 
> The 'same' for NOHZ_FULL, except we need to cmpxchg a value such that
> if the cmpxchg() succeeds we know the CPU is in userspace and will
> SERIALIZE on the next entry. Much like kvm_flush_tlb_others().
> 
> 
> Anyway, that's all hand-wavey.. I'll let someone that cares about those
> things write actual patches :-)

I think I got a little lost here. If I understand correctly, there are
two alternatives to implement support for serialize better:

  a) alternative(IRET_TO_SELF, SERIALIZE, X86_FEATURE_SERIALIZE); or
  b) asm volatile("1: .byte 0xf, 0x1, 0xe8; 2:" _ASM_EXTABLE(1b, 2b))

a) would be the traditional and simpler solution. b) would rely on
causing an #UD and getting an IRET on existing hardware. b) would need some
more optimization work when handling the exception and a few reworks on
the poke patching code.

Which option should I focus on? Which option would be more desirable/better?

Thanks and BR,
Ricardo


Re: [PATCH 2/4] arm64: dts: qcom: sc7180: Add iommus property to ETR

2020-07-27 Thread Sai Prakash Ranjan

On 2020-07-28 02:28, Bjorn Andersson wrote:

On Tue 23 Jun 23:56 PDT 2020, Sai Prakash Ranjan wrote:


Hi Bjorn,

On 2020-06-21 13:39, Sai Prakash Ranjan wrote:
> Hi Bjorn,
>
> On 2020-06-21 12:52, Bjorn Andersson wrote:
> > On Tue 09 Jun 06:30 PDT 2020, Sai Prakash Ranjan wrote:
> >
> > > Define iommus property for Coresight ETR component in
> > > SC7180 SoC with the SID and mask to enable SMMU
> > > translation for this master.
> > >
> >
> > We don't have _smmu in linux-next, as we've yet to figure out how
> > to disable the boot splash or support the stream mapping handover.
> >
> > So I'm not able to apply this.
> >
>
> This is for SC7180 which has apps_smmu not SM8150.
>

Please let me know if this needs further explanation.



I must have commented on the wrong patch, sorry about that. The SM8150
patch in this series does not compile due to the lack of _smmu.

I've picked the other 3 patches.



Thanks Bjorn, I can resend the SM8150 coresight change when SMMU support
lands for it, since coresight ETR won't work without it on android
bootloaders.

As for the other 3 patches, Patch 1 and Patch 2 will apply cleanly to the
right coresight nodes, but due to the missing unique context in Patch 3, it
could be applied to some other node. We had to upload this change 3 times in
the chromium tree to get it applied to the right replicator node :) and this
property in Patch 3 is important to fix a hard lockup. I'm not sure why this
patch is missing the proper context :/

I couldn't find the changes yet in qcom/for-next or other branches to see if
it is applied to the right replicator node. In case you haven't applied it
yet, the Patch 3 change should be applied to the "replicator@6b06000" node.

Thanks,
Sai

--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a 
member

of Code Aurora Forum, hosted by The Linux Foundation


[PATCH v1] farsync: use generic power management

2020-07-27 Thread Vaibhav Gupta
The .suspend() and .resume() callbacks are not defined for this driver.
Still, their power management structure follows the legacy framework. To
bring it under the generic framework, simply remove the binding of
callbacks from "struct pci_driver".

Change code indentation from space to tab in "struct pci_driver".

Signed-off-by: Vaibhav Gupta 
---
 drivers/net/wan/farsync.c | 10 --
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/net/wan/farsync.c b/drivers/net/wan/farsync.c
index 7916efce7188..15dacfde6b83 100644
--- a/drivers/net/wan/farsync.c
+++ b/drivers/net/wan/farsync.c
@@ -2636,12 +2636,10 @@ fst_remove_one(struct pci_dev *pdev)
 }
 
 static struct pci_driver fst_driver = {
-.name  = FST_NAME,
-.id_table  = fst_pci_dev_id,
-.probe = fst_add_one,
-.remove= fst_remove_one,
-.suspend   = NULL,
-.resume= NULL,
+   .name   = FST_NAME,
+   .id_table   = fst_pci_dev_id,
+   .probe  = fst_add_one,
+   .remove = fst_remove_one,
 };
 
 static int __init
-- 
2.27.0



[PATCH v2 2/2] usb: xhci: Fix ASMedia ASM1142 DMA addressing

2020-07-27 Thread Forest Crossman
I've confirmed that the ASMedia ASM1142 has the same problem as the
ASM2142/ASM3142, in that it too reports that it supports 64-bit DMA
addresses when in fact it does not. As with the ASM2142/ASM3142, this
can cause problems on systems where the upper bits matter, and adding
the XHCI_NO_64BIT_SUPPORT quirk completely fixes the issue.

Signed-off-by: Forest Crossman 
---
 drivers/usb/host/xhci-pci.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index baa5af88ca67..3feaafebfe58 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -59,6 +59,7 @@
 #define PCI_DEVICE_ID_AMD_PROMONTORYA_1		0x43bc
 #define PCI_DEVICE_ID_ASMEDIA_1042_XHCI		0x1042
 #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI	0x1142
+#define PCI_DEVICE_ID_ASMEDIA_1142_XHCI		0x1242
 #define PCI_DEVICE_ID_ASMEDIA_2142_XHCI		0x2142
 
 static const char hcd_name[] = "xhci_hcd";
@@ -268,7 +269,8 @@ static void xhci_pci_quirks(struct device *dev, struct 
xhci_hcd *xhci)
pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI)
xhci->quirks |= XHCI_TRUST_TX_LENGTH;
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
-   pdev->device == PCI_DEVICE_ID_ASMEDIA_2142_XHCI)
+   (pdev->device == PCI_DEVICE_ID_ASMEDIA_1142_XHCI ||
+	    pdev->device == PCI_DEVICE_ID_ASMEDIA_2142_XHCI))
xhci->quirks |= XHCI_NO_64BIT_SUPPORT;
 
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
-- 
2.20.1



[PATCH v2 0/2] Small fixes for ASMedia host controllers

2020-07-27 Thread Forest Crossman
The first patch just defines some host controller device IDs to make the
code a bit easier to read (since the controller part number is not
always the same as the DID) and to prepare for the next patch.

The second patch defines a new device ID for the ASM1142 and enables the
XHCI_NO_64BIT_SUPPORT quirk for that device, since it has the same
problem with truncating the higher bits as the ASM2142/ASM3142.


Changes since v1:
 - Added changelog text to the first patch.


Forest Crossman (2):
  usb: xhci: define IDs for various ASMedia host controllers
  usb: xhci: Fix ASMedia ASM1142 DMA addressing

 drivers/usb/host/xhci-pci.c | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

-- 
2.20.1



[PATCH v2 1/2] usb: xhci: define IDs for various ASMedia host controllers

2020-07-27 Thread Forest Crossman
Not all ASMedia host controllers have a device ID that matches its part
number. #define some of these IDs to make it clearer at a glance which
chips require what quirks.

Signed-off-by: Forest Crossman 
---
 drivers/usb/host/xhci-pci.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 9234c82e70e4..baa5af88ca67 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -57,7 +57,9 @@
 #define PCI_DEVICE_ID_AMD_PROMONTORYA_3		0x43ba
 #define PCI_DEVICE_ID_AMD_PROMONTORYA_2		0x43bb
 #define PCI_DEVICE_ID_AMD_PROMONTORYA_1		0x43bc
+#define PCI_DEVICE_ID_ASMEDIA_1042_XHCI		0x1042
 #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI	0x1142
+#define PCI_DEVICE_ID_ASMEDIA_2142_XHCI		0x2142
 
 static const char hcd_name[] = "xhci_hcd";
 
@@ -260,13 +262,13 @@ static void xhci_pci_quirks(struct device *dev, struct 
xhci_hcd *xhci)
xhci->quirks |= XHCI_LPM_SUPPORT;
 
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
-   pdev->device == 0x1042)
+   pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI)
xhci->quirks |= XHCI_BROKEN_STREAMS;
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
-   pdev->device == 0x1142)
+   pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI)
xhci->quirks |= XHCI_TRUST_TX_LENGTH;
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
-   pdev->device == 0x2142)
+   pdev->device == PCI_DEVICE_ID_ASMEDIA_2142_XHCI)
xhci->quirks |= XHCI_NO_64BIT_SUPPORT;
 
if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
-- 
2.20.1



drivers/net/hamradio/6pack.c:706:23: sparse: sparse: incorrect type in initializer (different address spaces)

2020-07-27 Thread kernel test robot
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 
master
head:   92ed301919932f13b9172e525674157e983d
commit: 670d0a4b10704667765f7d18f7592993d02783aa sparse: use identifiers to 
define address spaces
date:   6 weeks ago
config: openrisc-randconfig-s031-20200728 (attached as .config)
compiler: or1k-linux-gcc (GCC) 9.3.0
reproduce:
wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
chmod +x ~/bin/make.cross
# apt-get install sparse
# sparse version: v0.6.2-94-geb6779f6-dirty
git checkout 670d0a4b10704667765f7d18f7592993d02783aa
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross C=1 
CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=openrisc 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 


sparse warnings: (new ones prefixed by >>)

>> drivers/net/hamradio/6pack.c:706:23: sparse: sparse: incorrect type in 
>> initializer (different address spaces) @@ expected int *__pu_addr @@ 
>> got int [noderef] __user * @@
   drivers/net/hamradio/6pack.c:706:23: sparse: expected int *__pu_addr
>> drivers/net/hamradio/6pack.c:706:23: sparse: got int [noderef] __user *
>> drivers/net/hamradio/6pack.c:710:21: sparse: sparse: incorrect type in 
>> initializer (different address spaces) @@ expected int const *__gu_addr 
>> @@ got int [noderef] __user * @@
   drivers/net/hamradio/6pack.c:710:21: sparse: expected int const 
*__gu_addr
   drivers/net/hamradio/6pack.c:710:21: sparse: got int [noderef] __user *
   drivers/net/hamradio/6pack.c: note: in included file:
   include/linux/uaccess.h:131:38: sparse: sparse: incorrect type in argument 1 
(different address spaces) @@ expected void *to @@ got void [noderef] 
__user *to @@
   include/linux/uaccess.h:131:38: sparse: expected void *to
   include/linux/uaccess.h:131:38: sparse: got void [noderef] __user *to
   include/linux/uaccess.h:131:42: sparse: sparse: incorrect type in argument 2 
(different address spaces) @@ expected void const [noderef] __user *from @@ 
got void const *from @@
   include/linux/uaccess.h:131:42: sparse: expected void const [noderef] 
__user *from
   include/linux/uaccess.h:131:42: sparse: got void const *from
   drivers/net/hamradio/6pack.c: note: in included file (through 
include/linux/uaccess.h):
   arch/openrisc/include/asm/uaccess.h:246:55: sparse: sparse: incorrect type 
in argument 2 (different address spaces) @@ expected void const *from @@
 got void const [noderef] __user *from @@
   arch/openrisc/include/asm/uaccess.h:246:55: sparse: expected void const 
*from
   arch/openrisc/include/asm/uaccess.h:246:55: sparse: got void const 
[noderef] __user *from
--
>> net/bluetooth/rfcomm/sock.c:659:21: sparse: sparse: incorrect type in 
>> initializer (different address spaces) @@ expected unsigned int const 
>> *__gu_addr @@ got unsigned int [noderef] [usertype] __user * @@
   net/bluetooth/rfcomm/sock.c:659:21: sparse: expected unsigned int const 
*__gu_addr
   net/bluetooth/rfcomm/sock.c:659:21: sparse: got unsigned int [noderef] 
[usertype] __user *
   net/bluetooth/rfcomm/sock.c:735:21: sparse: sparse: incorrect type in 
initializer (different address spaces) @@ expected unsigned int const 
*__gu_addr @@ got unsigned int [noderef] [usertype] __user * @@
   net/bluetooth/rfcomm/sock.c:735:21: sparse: expected unsigned int const 
*__gu_addr
   net/bluetooth/rfcomm/sock.c:735:21: sparse: got unsigned int [noderef] 
[usertype] __user *
   net/bluetooth/rfcomm/sock.c:767:13: sparse: sparse: incorrect type in 
initializer (different address spaces) @@ expected int const *__gu_addr @@  
   got int [noderef] __user *optlen @@
   net/bluetooth/rfcomm/sock.c:767:13: sparse: expected int const *__gu_addr
   net/bluetooth/rfcomm/sock.c:767:13: sparse: got int [noderef] __user 
*optlen
   net/bluetooth/rfcomm/sock.c:797:21: sparse: sparse: incorrect type in 
initializer (different address spaces) @@ expected unsigned int *__pu_addr 
@@ got unsigned int [noderef] [usertype] __user * @@
   net/bluetooth/rfcomm/sock.c:797:21: sparse: expected unsigned int 
*__pu_addr
   net/bluetooth/rfcomm/sock.c:797:21: sparse: got unsigned int [noderef] 
[usertype] __user *
   net/bluetooth/rfcomm/sock.c:845:13: sparse: sparse: incorrect type in 
initializer (different address spaces) @@ expected int const *__gu_addr @@  
   got int [noderef] __user *optlen @@
   net/bluetooth/rfcomm/sock.c:845:13: sparse: expected int const *__gu_addr
   net/bluetooth/rfcomm/sock.c:845:13: sparse: got int [noderef] __user 
*optlen
   net/bluetooth/rfcomm/sock.c:872:21: sparse: sparse: incorrect type in 
initializer (different address spaces) @@ expected unsigned int *__pu_addr 
@@ got unsigned 

Re: [PATCH v4 4/5] arm64: dts: sdm845: Add OPP tables and power-domains for venus

2020-07-27 Thread Rajendra Nayak



On 7/28/2020 6:22 AM, Stephen Boyd wrote:

Quoting Viresh Kumar (2020-07-27 08:38:06)

On 27-07-20, 17:38, Rajendra Nayak wrote:

On 7/27/2020 11:23 AM, Rajendra Nayak wrote:

On 7/24/2020 7:39 PM, Stanimir Varbanov wrote:

+
+    opp-53300 {
+    opp-hz = /bits/ 64 <53300>;


Is this the highest OPP in table ?


Actually it comes from videocc, where ftbl_video_cc_venus_clk_src
defines 53300 but the real calculated freq is 53397.


I still don't quite understand why the videocc driver returns this
frequency despite this not being in the freq table.


Ok, so I see the same issue on sc7180 also. clk_round_rate() does seem to
return whats in the freq table, but clk_set_rate() goes ahead and sets it


I'm happy to see clk_round_rate() return the actual rate that would be
achieved and not just the rate that is in the frequency tables. Would
that fix the problem? 


It would, but only if I also update the OPP table to have 53397
instead of 53300 (which I guess is needed anyway)
If this is the actual frequency that's achievable, then perhaps even the clock
freq table should have this? 53397 and not 53300?
That way clk_round_rate() would return the actual rate that's achieved and
we don't need any extra math. Isn't that the reason these freq tables exist
anyway.


It may be that we need to make clk_round_rate() do
some more math on qcom platforms and actually figure out what the rate
is going to be instead of blindly trust the frequency that has been set
in the tables.


to 53397. Subsequently when we try to set a different OPP, it fails to
find the 'current' OPP entry for 53397. This sounds like an issue with the 
OPP
framework? Should we not fall back to the highest OPP as the current OPP?

Stephen/Viresh, any thoughts?


I think we (in all frameworks generally) try to set a frequency <=
target frequency and so there may be a problem if the frequency is
larger than highest supported. IOW, you need to fix tables a bit.



Rounding is annoying for sure.



--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


Re: [PATCH 04/24] arm: use asm-generic/mmu_context.h for no-op implementations

2020-07-27 Thread Vineet Gupta
On 7/27/20 8:33 PM, Nicholas Piggin wrote:
> Cc: Russell King 
> Cc: linux-arm-ker...@lists.infradead.org
> Signed-off-by: Nicholas Piggin 
> ---
>  arch/arm/include/asm/mmu_context.h | 26 +++---
>  1 file changed, 3 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/arm/include/asm/mmu_context.h 
> b/arch/arm/include/asm/mmu_context.h
> index f99ed524fe41..84e58956fcab 100644
> --- a/arch/arm/include/asm/mmu_context.h
> +++ b/arch/arm/include/asm/mmu_context.h
> @@ -26,6 +26,8 @@ void __check_vmalloc_seq(struct mm_struct *mm);
>  #ifdef CONFIG_CPU_HAS_ASID
>  
>  void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk);
> +
> +#define init_new_context init_new_context
>  static inline int
>  init_new_context(struct task_struct *tsk, struct mm_struct *mm)
>  {
> @@ -92,32 +94,10 @@ static inline void finish_arch_post_lock_switch(void)
>  
>  #endif   /* CONFIG_MMU */
>  
> -static inline int
> -init_new_context(struct task_struct *tsk, struct mm_struct *mm)
> -{
> - return 0;
> -}
> -
> -
>  #endif   /* CONFIG_CPU_HAS_ASID */
>  
> -#define destroy_context(mm)  do { } while(0)
>  #define activate_mm(prev,next)   switch_mm(prev, next, NULL)

Actually this can also go away too.

ARM switch_mm(prev, next, tsk) -> check_and_switch_context(next, tsk), but the latter
doesn't use @tsk at all. With the patch below, you can remove the above as well...

>
From 672e0f78a94892794057a5a7542d85b71c1369c4 Mon Sep 17 00:00:00 2001
From: Vineet Gupta 
Date: Mon, 27 Jul 2020 21:12:42 -0700
Subject: [PATCH] ARM: mm: check_and_switch_context() doesn't use @tsk arg

Signed-off-by: Vineet Gupta 
---
 arch/arm/include/asm/efi.h | 2 +-
 arch/arm/include/asm/mmu_context.h | 5 ++---
 arch/arm/mm/context.c  | 2 +-
 3 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/arm/include/asm/efi.h b/arch/arm/include/asm/efi.h
index 5dcf3c6011b7..0995b308149d 100644
--- a/arch/arm/include/asm/efi.h
+++ b/arch/arm/include/asm/efi.h
@@ -37,7 +37,7 @@ int efi_set_mapping_permissions(struct mm_struct *mm,
efi_memory_desc_t *md);

 static inline void efi_set_pgd(struct mm_struct *mm)
 {
-   check_and_switch_context(mm, NULL);
+   check_and_switch_context(mm);
 }

 void efi_virtmap_load(void);
diff --git a/arch/arm/include/asm/mmu_context.h 
b/arch/arm/include/asm/mmu_context.h
index f99ed524fe41..c96360fa3466 100644
--- a/arch/arm/include/asm/mmu_context.h
+++ b/arch/arm/include/asm/mmu_context.h
@@ -25,7 +25,7 @@ void __check_vmalloc_seq(struct mm_struct *mm);

 #ifdef CONFIG_CPU_HAS_ASID

-void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk);
+void check_and_switch_context(struct mm_struct *mm);
 static inline int
 init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 {
@@ -47,8 +47,7 @@ static inline void a15_erratum_get_cpumask(int this_cpu, 
struct
mm_struct *mm,

 #ifdef CONFIG_MMU

-static inline void check_and_switch_context(struct mm_struct *mm,
-   struct task_struct *tsk)
+static inline void check_and_switch_context(struct mm_struct *mm)
 {
if (unlikely(mm->context.vmalloc_seq != init_mm.context.vmalloc_seq))
__check_vmalloc_seq(mm);
diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index b7525b433f3e..86c411e1d7cb 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -234,7 +234,7 @@ static u64 new_context(struct mm_struct *mm, unsigned int 
cpu)
return asid | generation;
 }

-void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
+void check_and_switch_context(struct mm_struct *mm)
 {
unsigned long flags;
unsigned int cpu = smp_processor_id();
-- 
2.20.1



Re: [PATCH 06/24] csky: use asm-generic/mmu_context.h for no-op implementations

2020-07-27 Thread Guo Ren
Acked-by: Guo Ren 

On Tue, Jul 28, 2020 at 11:34 AM Nicholas Piggin  wrote:
>
> Cc: Guo Ren 
> Cc: linux-c...@vger.kernel.org
> Signed-off-by: Nicholas Piggin 
> ---
>  arch/csky/include/asm/mmu_context.h | 8 +++-
>  1 file changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/arch/csky/include/asm/mmu_context.h 
> b/arch/csky/include/asm/mmu_context.h
> index abdf1f1cb6ec..b227d29393a8 100644
> --- a/arch/csky/include/asm/mmu_context.h
> +++ b/arch/csky/include/asm/mmu_context.h
> @@ -24,11 +24,6 @@
>  #define cpu_asid(mm)   (atomic64_read(&mm->context.asid) & ASID_MASK)
>
>  #define init_new_context(tsk,mm)   ({ atomic64_set(&(mm)->context.asid, 
> 0); 0; })
> -#define activate_mm(prev,next) switch_mm(prev, next, current)
> -
> -#define destroy_context(mm)do {} while (0)
> -#define enter_lazy_tlb(mm, tsk)do {} while (0)
> -#define deactivate_mm(tsk, mm) do {} while (0)
>
>  void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
>
> @@ -46,4 +41,7 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
>
> flush_icache_deferred(next);
>  }
> +
> +#include 
> +
>  #endif /* __ASM_CSKY_MMU_CONTEXT_H */
> --
> 2.23.0
>


-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/


Re: [PATCH v3 2/5] MIPS: Loongson64: Process ISA Node in DeviceTree

2020-07-27 Thread WANG Xuerui
Hi Jiaxun,


On 2020/7/25 09:45, Jiaxun Yang wrote:
> Previously, we're hardcoding resserved ISA I/O Space in code, now
"reserved"; also "in code" seems redundant (we're "hard-coding", aren't we?)
> we're processing reverved I/O via DeviceTree directly. Using the ranges
another "reserved" typo, but better restructure the whole clause.
> property to determine the size and address of reserved I/O space.
This sentence has no verb. Maybe you mean "Use"?
> Signed-off-by: Jiaxun Yang 
> --
> v2: Use range_parser instead of pci_range_parser
> ---
>  arch/mips/loongson64/init.c | 87 ++---
>  1 file changed, 62 insertions(+), 25 deletions(-)
>
> diff --git a/arch/mips/loongson64/init.c b/arch/mips/loongson64/init.c
> index 59ddadace83f..8ba22c30f312 100644
> --- a/arch/mips/loongson64/init.c
> +++ b/arch/mips/loongson64/init.c
> @@ -7,6 +7,8 @@
>  #include 
>  #include 
>  #include 
> +#include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -63,41 +65,76 @@ void __init prom_free_prom_memory(void)
>  {
>  }
>  
> -static __init void reserve_pio_range(void)
> +static int __init add_legacy_isa_io(struct fwnode_handle *fwnode, 
> resource_size_t hw_start,
> + resource_size_t size)
>  {
> + int ret = 0;
>   struct logic_pio_hwaddr *range;
> + unsigned long vaddr;
>  
>   range = kzalloc(sizeof(*range), GFP_ATOMIC);
>   if (!range)
> - return;
> + return -ENOMEM;
>  
> - range->fwnode = &of_root->fwnode;
> - range->size = MMIO_LOWER_RESERVED;
> - range->hw_start = LOONGSON_PCIIO_BASE;
> + range->fwnode = fwnode;
> + range->size = size;
> + range->hw_start = hw_start;
>   range->flags = LOGIC_PIO_CPU_MMIO;
>  
> - if (logic_pio_register_range(range)) {
> - pr_err("Failed to reserve PIO range for legacy ISA\n");
> - goto free_range;
> + ret = logic_pio_register_range(range);
> + if (ret) {
> + kfree(range);
> + return ret;
> + }
> +
> + /* Legacy ISA must placed at the start of PCI_IOBASE */
> + if (range->io_start != 0) {
> + logic_pio_unregister_range(range);
> + kfree(range);
> + return -EINVAL;
>   }
>  
> - if (WARN(range->io_start != 0,
> - "Reserved PIO range does not start from 0\n"))
> - goto unregister;
> -
> - /*
> -  * i8259 would access I/O space, so mapping must be done here.
> -  * Please remove it when all drivers can be managed by logic_pio.
> -  */
> - ioremap_page_range(PCI_IOBASE, PCI_IOBASE + MMIO_LOWER_RESERVED,
> - LOONGSON_PCIIO_BASE,
> - pgprot_device(PAGE_KERNEL));
> -
> - return;
> -unregister:
> - logic_pio_unregister_range(range);
> -free_range:
> - kfree(range);
> + vaddr = PCI_IOBASE + range->io_start;
> +
> + ioremap_page_range(vaddr, vaddr + size, hw_start, 
> pgprot_device(PAGE_KERNEL));
> +
> + return 0;
> +}
> +
> +static __init void reserve_pio_range(void)
> +{
> + struct device_node *np;
> +
> + for_each_node_by_name(np, "isa") {
> + struct of_range range;
> + struct of_range_parser parser;
> +
> + pr_info("ISA Bridge: %pOF\n", np);
> +
> + if (of_range_parser_init(&parser, np)) {
> + pr_info("Failed to parse resources.\n");
> + break;
> + }
> +
> + for_each_of_range(&parser, &range) {
> + switch (range.flags & IORESOURCE_TYPE_BITS) {
> + case IORESOURCE_IO:
> + pr_info(" IO 0x%016llx..0x%016llx  ->  
> 0x%016llx\n",
> + range.cpu_addr,
> + range.cpu_addr + range.size - 1,
> + range.bus_addr);
> + if (add_legacy_isa_io(&np->fwnode, 
> range.cpu_addr, range.size))
> + pr_warn("Failed to reserve legacy IO in 
> Logic PIO\n");
> + break;
> + case IORESOURCE_MEM:
> + pr_info(" MEM 0x%016llx..0x%016llx  ->  
> 0x%016llx\n",
> + range.cpu_addr,
> + range.cpu_addr + range.size - 1,
> + range.bus_addr);
> + break;
> + }
> + }
> + }
>  }
>  
>  void __init arch_init_irq(void)
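For reference, the "isa" nodes walked by reserve_pio_range() are assumed to look roughly like this hypothetical fragment (addresses, sizes, and exact cell layout here are made up and board-specific, not taken from a real Loongson dts):

```dts
/* Hypothetical ISA bridge node: the first child cell selects the space
 * (0 = memory, 1 = I/O); the ranges entry below maps 0x1000 bytes of
 * legacy I/O at bus offset 0 to CPU address 0x18000000. */
isa@18000000 {
	compatible = "isa";
	#address-cells = <2>;
	#size-cells = <1>;
	ranges = <1 0x0  0x0 0x18000000  0x1000>;
};
```

With a node of this shape, the IORESOURCE_IO branch above registers the window in logic_pio and maps it at the start of PCI_IOBASE.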



Re: [PATCH 01/24] asm-generic: add generic versions of mmu context functions

2020-07-27 Thread Vineet Gupta
On 7/27/20 8:33 PM, Nicholas Piggin wrote:
> Many of these are no-ops on many architectures, so extend mmu_context.h
> to cover MMU and NOMMU, and split the NOMMU bits out to nommu_context.h
> 

> -static inline void switch_mm(struct mm_struct *prev,
> - struct mm_struct *next,
> - struct task_struct *tsk)
> +/**
> + * activate_mm - called after exec switches the current task to a new mm, to 
> switch to it
> + * @prev_mm: previous mm of this task
> + * @next_mm: new mm
> + */
> +#ifndef activate_mm
> +static inline void activate_mm(struct mm_struct *prev_mm,
> +struct mm_struct *next_mm)
>  {
> + switch_mm(prev_mm, next_mm, current);
>  }
> +#endif

Is activate_mm() really needed now? It seems most arches have
   activate_mm(p, n) -> switch_mm(p, n, NULL)

And if we are passing current, that can be pushed inside switch_mm()

>  
> -static inline void activate_mm(struct mm_struct *prev_mm,
> -struct mm_struct *next_mm)


Re: [PATCH 03/24] arc: use asm-generic/mmu_context.h for no-op implementations

2020-07-27 Thread Vineet Gupta
On 7/27/20 8:33 PM, Nicholas Piggin wrote:

>  /*
> - * Called at the time of execve() to get a new ASID
> - * Note the subtlety here: get_new_mmu_context() behaves differently here
> - * vs. in switch_mm(). Here it always returns a new ASID, because mm has
> - * an unallocated "initial" value, while in latter, it moves to a new ASID,
> - * only if it was unallocated
> + * activate_mm defaults to switch_mm and is called at the time of execve() to

With the activate_mm() definition actually gone, perhaps add "activate_mm() comes
from generic code..." to give the next reader context for the "spurious-looking comment".

> + * get a new ASID Note the subtlety here: get_new_mmu_context() behaves
> + * differently here vs. in switch_mm(). Here it always returns a new ASID,
> + * because mm has an unallocated "initial" value, while in latter, it moves 
> to
> + * a new ASID, only if it was unallocated
>   */
> -#define activate_mm(prev, next)  switch_mm(prev, next, NULL)
>  
>  /* it seemed that deactivate_mm( ) is a reasonable place to do book-keeping
>   * for retiring-mm. However destroy_context( ) still needs to do that because
> @@ -168,8 +169,7 @@ static inline void switch_mm(struct mm_struct *prev, 
> struct mm_struct *next,
>   * there is a good chance that task gets sched-out/in, making it's ASID valid
>   * again (this teased me for a whole day).
>   */
> -#define deactivate_mm(tsk, mm)   do { } while (0)

same for deactivate_mm()


Re: kunit compile failed on um

2020-07-27 Thread Cixi Geng
Here I found the root cause of my error:

My Makefile adds a -Werror flag, so it turns the following warnings
into errors:

../arch/um/os-Linux/signal.c: In function 'sig_handler_common':
../arch/um/os-Linux/signal.c:51:1: error: the frame size of 2960 bytes
is larger than 2048 bytes [-Werror=frame-larger-than=]
 }
 ^
../arch/um/os-Linux/signal.c: In function 'timer_real_alarm_handler':
../arch/um/os-Linux/signal.c:95:1: error: the frame size of 2960 bytes
is larger than 2048 bytes [-Werror=frame-larger-than=]
 }

I had added CONFIG_FRAME_WARN=4096 to arch/um/x86_64_defconfig, so
there was no error when I used make ARCH=um defconfig && make ARCH=um.
But commit ff7b437f36b026dcd7351f86a90a0424c891dc06 changed kunit.py
to use make ARCH=um kunit_defconfig instead, so the error appeared.

Furthermore, I compared the configs produced by x86_64_defconfig and
kunit_defconfig, and there is a big difference.
Can we build kunit_defconfig on the basis of x86_64_defconfig?
If so, I would like to help add this function.
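As a stopgap, the frame-size override could be carried in the .kunitconfig itself rather than in the defconfig — a sketch, assuming kunit.py merges arbitrary extra options from .kunitconfig into the generated config:

```
# .kunitconfig (sketch): the options from this thread, plus the
# frame-size override that x86_64_defconfig had been providing
CONFIG_KUNIT=y
CONFIG_KUNIT_TEST=y
CONFIG_KUNIT_EXAMPLE_TEST=y
CONFIG_FRAME_WARN=4096
```

That keeps the -Werror build from tripping over the um signal-frame warnings without touching the in-tree defconfigs.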

Brendan Higgins wrote on Tue, Jul 28, 2020 at 5:29 AM:
>
> On Mon, Jul 27, 2020 at 3:01 AM Cixi Geng  wrote:
> >
> > Hi Brendan:
> > When I run kunit test in um , it failed on kernel 5.8-rc* while
> > succeeded  in v5.7 with same configuration. is this a bug?
> >
> > Here is my operation:
> >  gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04)
> >
> > the kunitconfig:
> > Cixi.Geng:~/git-projects/torvals-linux$ cat .kunitconfig
> > CONFIG_KUNIT=y
> > CONFIG_KUNIT_TEST=y
> > CONFIG_KUNIT_EXAMPLE_TEST=y
> >
> > command:
> > Cixi.Geng:~/git-projects/torvals-linux$ ./tools/testing/kunit/kunit.py run
> >
> > the Error log:
> > [17:51:14] Configuring KUnit Kernel ...
> > [17:51:14] Building KUnit Kernel ...
> > ERROR:root:b"make[1]:
> > \xe8\xbf\x9b\xe5\x85\xa5\xe7\x9b\xae\xe5\xbd\x95\xe2\x80\x9c/home/cixi.geng1/git-projects/torvals-linux/.kunit\xe2\x80\x9d\n/home/cixi.geng1/git-projects/torvals-linux/Makefile:551:
> > recipe for target 'outputmakefile' failed\nmake[1]:
> > \xe7\xa6\xbb\xe5\xbc\x80\xe7\x9b\xae\xe5\xbd\x95\xe2\x80\x9c/home/cixi.geng1/git-projects/torvals-linux/.kunit\xe2\x80\x9d\nMakefile:185:
> > recipe for target '__sub-make' failed\n"
>
> So we have a fix out for the cryptic error messages:
>
> https://patchwork.kernel.org/patch/11652711/
>
> But I believe it has not been picked up yet.
>
> In the meantime, you should get more information by running
>
> ls .kunit
> make ARCH=um O=.kunit
>
> Let us know if you have any additional questions.


Re: [PATCH v3 2/2] soc: mediatek: add mtk-devapc driver

2020-07-27 Thread Neal Liu
Hi Chun-Kuang,

On Mon, 2020-07-27 at 22:47 +0800, Chun-Kuang Hu wrote:
> Hi, Neal:
> 
> Neal Liu wrote on Mon, Jul 27, 2020 at 11:06 AM:
> >
> > Hi Chun-Kuang,
> >
> > On Fri, 2020-07-24 at 23:55 +0800, Chun-Kuang Hu wrote:
> > > Hi, Neal:
> > >
> > > Neal Liu wrote on Fri, Jul 24, 2020 at 2:55 PM:
> > > >
> > > > Hi Chun-Kuang,
> > > >
> > > > On Fri, 2020-07-24 at 00:32 +0800, Chun-Kuang Hu wrote:
> > > > > Hi, Neal:
> > > > >
> > > > > Neal Liu wrote on Thu, Jul 23, 2020 at 2:11 PM:
> > > > > >
> > > > > > Hi Chun-Kuang,
> > > > > >
> > > > > > On Wed, 2020-07-22 at 22:25 +0800, Chun-Kuang Hu wrote:
> > > > > > > Hi, Neal:
> > > > > > >
> > > > > > > Neal Liu wrote on Wed, Jul 22, 2020 at 11:49 AM:
> > > > > > > >
> > > > > > > > Hi Chun-Kuang,
> > > > > > > >
> > > > > > > > On Wed, 2020-07-22 at 07:21 +0800, Chun-Kuang Hu wrote:
> > > > > > > > > Hi, Neal:
> > > > > > > > >
> > > > > > > > > Neal Liu wrote on Tue, Jul 21, 2020 at 12:00 PM:
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > > +
> > > > > > > > > > +/*
> > > > > > > > > > + * mtk_devapc_dump_vio_dbg - get the violation index and 
> > > > > > > > > > dump the full violation
> > > > > > > > > > + *   debug information.
> > > > > > > > > > + */
> > > > > > > > > > +static bool mtk_devapc_dump_vio_dbg(struct 
> > > > > > > > > > mtk_devapc_context *ctx, u32 vio_idx)
> > > > > > > > > > +{
> > > > > > > > > > +   u32 shift_bit;
> > > > > > > > > > +
> > > > > > > > > > +   if (check_vio_mask(ctx, vio_idx))
> > > > > > > > > > +   return false;
> > > > > > > > > > +
> > > > > > > > > > +   if (!check_vio_status(ctx, vio_idx))
> > > > > > > > > > +   return false;
> > > > > > > > > > +
> > > > > > > > > > +   shift_bit = get_shift_group(ctx, vio_idx);
> > > > > > > > > > +
> > > > > > > > > > +   if (sync_vio_dbg(ctx, shift_bit))
> > > > > > > > > > +   return false;
> > > > > > > > > > +
> > > > > > > > > > +   devapc_extract_vio_dbg(ctx);
> > > > > > > > >
> > > > > > > > > I think get_shift_group(), sync_vio_dbg(), and
> > > > > > > > > devapc_extract_vio_dbg() should be moved out of vio_idx 
> > > > > > > > > for-loop (the
> > > > > > > > > loop in devapc_violation_irq()) because these three function 
> > > > > > > > > is not
> > > > > > > > > related to vio_idx.
> > > > > > > > > Another question: when multiple vio_idx violation occur, 
> > > > > > > > > vio_addr is
> > > > > > > > > related to which one vio_idx? The latest happened one?
> > > > > > > > >
> > > > > > > >
> > > > > > > > Actually, it's related to vio_idx. But we don't use it directly 
> > > > > > > > on these
> > > > > > > > function. I think below snip code might be better way to 
> > > > > > > > understand it.
> > > > > > > >
> > > > > > > > for (...)
> > > > > > > > {
> > > > > > > > check_vio_mask()
> > > > > > > > check_vio_status()
> > > > > > > >
> > > > > > > > // if get vio_idx, mask it temporarily
> > > > > > > > mask_module_irq(true)
> > > > > > > > clear_vio_status()
> > > > > > > >
> > > > > > > > // dump violation info
> > > > > > > > get_shift_group()
> > > > > > > > sync_vio_dbg()
> > > > > > > > devapc_extract_vio_dbg()
> > > > > > > >
> > > > > > > > // unmask
> > > > > > > > mask_module_irq(false)
> > > > > > > > }
> > > > > > >
> > > > > > > This snip code does not explain any thing. I could rewrite this 
> > > > > > > code as:
> > > > > > >
> > > > > > > for (...)
> > > > > > > {
> > > > > > > check_vio_mask()
> > > > > > > check_vio_status()
> > > > > > >
> > > > > > > // if get vio_idx, mask it temporarily
> > > > > > > mask_module_irq(true)
> > > > > > > clear_vio_status()
> > > > > > > // unmask
> > > > > > > mask_module_irq(false)
> > > > > > > }
> > > > > > >
> > > > > > > // dump violation info
> > > > > > > get_shift_group()
> > > > > > > sync_vio_dbg()
> > > > > > > devapc_extract_vio_dbg()
> > > > > > >
> > > > > > > And my version is identical with your version, isn't it?
> > > > > >
> > > > > > Sorry, I did not explain it clearly. Let's me try again.
> > > > > > The reason why I put "dump violation info" between mask & unmask 
> > > > > > context
> > > > > > is because it has to stop interrupt first before dump violation 
> > > > > > info,
> > > > > > and then unmask it to prepare next violation.
> > > > > > These sequence guarantee that if multiple violation is triggered, we
> > > > > > still have information to debug.
> > > > > > If the code sequence in your version and multiple violation is
> > > > > > triggered, there might be no any information but keeps entering ISR.
> > > > > > Finally, system might be abnormal and watchdog timeout.
> > > > > > In this case, we still don't have any information to debug.
> > > > >
> > > > > I still don't understand why no information to debug. For example when
> > > > > vio_idx 5, 10, 15 has violation,
> > > > > You would mask vio_idx 5 to get information, 

[PATCH v4] mm/hugetlb: add mempolicy check in the reservation routine

2020-07-27 Thread Muchun Song
In the reservation routine, we only check whether the cpuset meets
the memory allocation requirements, but we ignore the mempolicy of
the MPOL_BIND case. If an mmap of hugetlb memory succeeds, the
subsequent memory allocation may still fail due to mempolicy
restrictions, and the process receives the SIGBUS signal. This can
be reproduced by the following steps.

 1) Compile the test case.
cd tools/testing/selftests/vm/
gcc map_hugetlb.c -o map_hugetlb

 2) Pre-allocate huge pages. Suppose there are 2 numa nodes in the
system. Each node will pre-allocate one huge page.
echo 2 > /proc/sys/vm/nr_hugepages

 3) Run test case(mmap 4MB). We receive the SIGBUS signal.
numactl --membind=0 ./map_hugetlb 4

With this patch applied, the mmap will fail in the step 3) and throw
"mmap: Cannot allocate memory".

Signed-off-by: Muchun Song 
Reported-by: Jianchao Guo 
Suggested-by: Michal Hocko 
Reviewed-by: Mike Kravetz 
---
changelog in v4:
 1) Fix compilation errors with !CONFIG_NUMA.

changelog in v3:
 1) Do not allocate nodemask on the stack.
 2) Update comment.

changelog in v2:
 1) Reuse policy_nodemask().

 include/linux/mempolicy.h | 14 ++
 mm/hugetlb.c  | 22 ++
 mm/mempolicy.c|  2 +-
 3 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index ea9c15b60a96..0656ece1ccf1 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -152,6 +152,15 @@ extern int huge_node(struct vm_area_struct *vma,
 extern bool init_nodemask_of_mempolicy(nodemask_t *mask);
 extern bool mempolicy_nodemask_intersects(struct task_struct *tsk,
const nodemask_t *mask);
+extern nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy);
+
+static inline nodemask_t *policy_nodemask_current(gfp_t gfp)
+{
+   struct mempolicy *mpol = get_task_policy(current);
+
+   return policy_nodemask(gfp, mpol);
+}
+
 extern unsigned int mempolicy_slab_node(void);
 
 extern enum zone_type policy_zone;
@@ -281,5 +290,10 @@ static inline int mpol_misplaced(struct page *page, struct 
vm_area_struct *vma,
 static inline void mpol_put_task_policy(struct task_struct *task)
 {
 }
+
+static inline nodemask_t *policy_nodemask_current(gfp_t gfp)
+{
+   return NULL;
+}
 #endif /* CONFIG_NUMA */
 #endif
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 589c330df4db..a34458f6a475 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3463,13 +3463,21 @@ static int __init default_hugepagesz_setup(char *s)
 }
 __setup("default_hugepagesz=", default_hugepagesz_setup);
 
-static unsigned int cpuset_mems_nr(unsigned int *array)
+static unsigned int allowed_mems_nr(struct hstate *h)
 {
int node;
unsigned int nr = 0;
+   nodemask_t *mpol_allowed;
+   unsigned int *array = h->free_huge_pages_node;
+   gfp_t gfp_mask = htlb_alloc_mask(h);
+
+   mpol_allowed = policy_nodemask_current(gfp_mask);
 
-   for_each_node_mask(node, cpuset_current_mems_allowed)
-   nr += array[node];
+   for_each_node_mask(node, cpuset_current_mems_allowed) {
+   if (!mpol_allowed ||
+   (mpol_allowed && node_isset(node, *mpol_allowed)))
+   nr += array[node];
+   }
 
return nr;
 }
@@ -3648,12 +3656,18 @@ static int hugetlb_acct_memory(struct hstate *h, long 
delta)
 * we fall back to check against current free page availability as
 * a best attempt and hopefully to minimize the impact of changing
 * semantics that cpuset has.
+*
+* Apart from cpuset, we also have memory policy mechanism that
+* also determines from which node the kernel will allocate memory
+* in a NUMA system. So similar to cpuset, we also should consider
+* the memory policy of the current task. Similar to the description
+* above.
 */
if (delta > 0) {
if (gather_surplus_pages(h, delta) < 0)
goto out;
 
-   if (delta > cpuset_mems_nr(h->free_huge_pages_node)) {
+   if (delta > allowed_mems_nr(h)) {
return_unused_surplus_pages(h, delta);
goto out;
}
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 93fcfc1f2fa2..fce14c3f4f38 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1873,7 +1873,7 @@ static int apply_policy_zone(struct mempolicy *policy, 
enum zone_type zone)
  * Return a nodemask representing a mempolicy for filtering nodes for
  * page allocation
  */
-static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
+nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
 {
/* Lower zones don't get a nodemask applied for MPOL_BIND */
if (unlikely(policy->mode == MPOL_BIND) &&
-- 
2.11.0



Re: [PATCH] clk: ti: clkctrl: add the missed kfree() for _ti_omap4_clkctrl_setup()

2020-07-27 Thread Jing Xiangfeng




On 2020/7/28 9:24, Stephen Boyd wrote:

Quoting Jing Xiangfeng (2020-07-20 05:23:43)

_ti_omap4_clkctrl_setup() misses to call kfree() in an error path. Add
the missed function call to fix it.

Fixes: 6c3090520554 ("clk: ti: clkctrl: Fix hidden dependency to node name")
Signed-off-by: Jing Xiangfeng 
---
  drivers/clk/ti/clkctrl.c | 4 +++-
  1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
index 864c484bde1b..868e50132c21 100644
--- a/drivers/clk/ti/clkctrl.c
+++ b/drivers/clk/ti/clkctrl.c
@@ -655,8 +655,10 @@ static void __init _ti_omap4_clkctrl_setup(struct 
device_node *node)
 }

 hw = kzalloc(sizeof(*hw), GFP_KERNEL);
-   if (!hw)
+   if (!hw) {
+   kfree(clkctrl_name);
 return;
+   }


Why not goto cleanup?


Thanks, I will change it as you suggested.





 hw->enable_reg.ptr = provider->base + reg_data->offset;

--
2.17.1


.



linux-next: manual merge of the drm tree with the drm-misc-fixes tree

2020-07-27 Thread Stephen Rothwell
Hi all,

Today's linux-next merge of the drm tree got a conflict in:

  drivers/gpu/drm/drm_gem.c

between commit:

  8490d6a7e0a0 ("drm: hold gem reference until object is no longer accessed")

from the drm-misc-fixes tree and commit:

  be6ee102341b ("drm: remove _unlocked suffix in drm_gem_object_put_unlocked")

from the drm tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell

diff --cc drivers/gpu/drm/drm_gem.c
index ee2058ad482c,a57f5379fc08..
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@@ -901,9 -913,7 +909,9 @@@ drm_gem_open_ioctl(struct drm_device *d
args->handle = handle;
args->size = obj->size;
  
 -  return 0;
 +err:
-   drm_gem_object_put_unlocked(obj);
++  drm_gem_object_put(obj);
 +  return ret;
  }
  
  /**



