On Tuesday, 2017-01-03 17:56:10 +0100, Rainer Hochecker wrote:
> On Mon, Jan 2, 2017 at 3:31 PM, Rainer Hochecker wrote:
> >
> > I chose GR16 because that matches with Mesa texture formats. Unfortunately
> > RG16 is already taken by DRM_FORMAT_RGB565
> > So GR32 / RG32 might be better. All other
On 12/28/2016 12:46 AM, Yuriy Kolerov wrote:
> It is necessary to call entry/exit functions for parent interrupt
> controllers for proper masking/unmasking of interrupt lines.
>
> Signed-off-by: Yuriy Kolerov
Applied to for-curr.
Thx,
-vineet
This patch fixes a bug in the freelist randomization code. When a high
random number is used, the freelist will contain duplicate entries. It
will result in different allocations sharing the same chunk.
Fixes: c7ce4f60ac19 ("mm: SLAB freelist randomization")
Signed-off-by: John Sperbeck
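The property the fix restores can be illustrated with a minimal user-space sketch (function name and PRNG are illustrative stand-ins, not the kernel's code): a Fisher-Yates shuffle only ever swaps entries that are already present, so every index appears exactly once and no two allocations can end up sharing a chunk.

```c
#include <stdlib.h>

/* Illustrative sketch, not the kernel code: fill the freelist with
 * each index exactly once, then shuffle in place. Swapping existing
 * entries can never create a duplicate. */
static void freelist_init_and_shuffle(unsigned int *list, unsigned int count)
{
	unsigned int i, j, tmp;

	for (i = 0; i < count; i++)
		list[i] = i;

	/* Fisher-Yates: pick j uniformly from [0, i] */
	for (i = count - 1; i > 0; i--) {
		j = (unsigned int)rand() % (i + 1);	/* stand-in PRNG */
		tmp = list[i];
		list[i] = list[j];
		list[j] = tmp;
	}
}
```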
The HWMON support in DSA is currently embedded in the legacy DSA code.
Move it out into its own file so that it can be reused by the newer DSA code.
---
net/dsa/Makefile | 1 +
net/dsa/dsa.c | 131 +-
net/dsa/dsa_priv.h | 9
net/dsa/hwmon.c
The "out" label in dsa_switch_setup_one() is useless, thus remove it.
---
net/dsa/dsa.c | 40 +---
1 file changed, 13 insertions(+), 27 deletions(-)
diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
index 7899919cd9f0..89e66b623d73 100644
--- a/net/dsa/dsa.c
+++
The current HWMON support in DSA is embedded in the legacy code.
Extract it to its own file and register it in the newer DSA code.
Tested on ZII Rev B boards.
Vivien Didelot (3):
net: dsa: remove out label in dsa_switch_setup_one
net: dsa: move HWMON support to its own file
net: dsa:
The HWMON support was only registered in the legacy DSA code. Register
it in the newer DSA code (dsa2) as well.
---
net/dsa/dsa2.c | 4
1 file changed, 4 insertions(+)
diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
index 5fff951a0a49..668aa2974d01 100644
--- a/net/dsa/dsa2.c
+++
If NO_DMA=y:
ERROR: "bad_dma_ops" [drivers/ata/libata.ko] undefined!
To fix this, protect the DMA code by #ifdef CONFIG_HAS_DMA, and provide
dummies of ata_sg_clean() and ata_sg_setup() for the !CONFIG_HAS_DMA
case.
Signed-off-by: Geert Uytterhoeven
---
drivers/ata/libata-core.c | 61
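The pattern described above looks roughly like the following (a compilable user-space sketch with made-up function names, not the actual libata code): the real mapping helpers are built only when DMA exists, and the !CONFIG_HAS_DMA build gets trivial dummies so callers still compile and link.

```c
/* Sketch of the #ifdef CONFIG_HAS_DMA pattern with stand-in names. */
#ifdef CONFIG_HAS_DMA
static int ata_sg_setup_sketch(void)
{
	/* would map the scatter-gather table for DMA here */
	return 0;
}

static void ata_sg_clean_sketch(void)
{
	/* would unmap it here */
}
#else
static int ata_sg_setup_sketch(void)
{
	return 0;	/* nothing to map without DMA */
}

static void ata_sg_clean_sketch(void)
{
}
#endif
```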
On Tue, Jan 03, 2017 at 10:06:58AM -0800, Davidlohr Bueso wrote:
> On Tue, 03 Jan 2017, Mark Rutland wrote:
>
> >Does the below help?
>
> It does, yes. Performance is pretty much the same with either function
> without sysreg.
Great!
> With arm no longer in the picture, I'll send up another
If NO_DMA=y:
ERROR: "bad_dma_ops" [drivers/ata/libahci_platform.ko] undefined!
ERROR: "dmam_alloc_coherent" [drivers/ata/libahci.ko] undefined!
Add a block dependency on HAS_DMA to fix this.
Signed-off-by: Geert Uytterhoeven
---
drivers/ata/Kconfig | 4
1 file changed, 4
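A block dependency of this kind might look roughly like the following Kconfig fragment (sketched from the description above; the exact symbols and surrounding structure in the real patch may differ):

```kconfig
if HAS_DMA

config SATA_AHCI
	tristate "AHCI SATA support"

config SATA_AHCI_PLATFORM
	tristate "Platform AHCI SATA support"

endif # HAS_DMA
```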
This patch documents the devicetree binding in use for ARM SPE.
Cc: Mark Rutland
Cc: Rob Herring
Signed-off-by: Will Deacon
---
Documentation/devicetree/bindings/arm/spe-pmu.txt | 20
1 file changed, 20 insertions(+)
create mode 100644
The ARMv8.2 architecture introduces the Statistical Profiling Extension
(SPE). SPE provides a way to configure and collect profiling samples
from the CPU in the form of a trace buffer, which can be mapped directly
into userspace using the perf AUX buffer infrastructure.
This patch adds support
The SPE buffer is virtually addressed, using the page tables of the CPU
MMU. Unusually, this means that the EL0/1 page table may be live whilst
we're executing at EL2 on non-VHE configurations. When VHE is in use,
we can use the same property to profile the guest behind its back.
This patch adds
Perf PMU drivers using AUX buffers cannot be built as modules unless
the AUX helpers are exported.
This patch exports perf_aux_output_{begin,end,skip} and perf_get_aux to
modules.
Cc: Peter Zijlstra
Signed-off-by: Will Deacon
---
kernel/events/ring_buffer.c | 4
1 file changed, 4
Any modular driver using cluster-affine PPIs needs to be able to call
irq_get_percpu_devid_partition so that it can enable the IRQ on the
correct subset of CPUs.
This patch exports the symbol so that it can be called from within a
module.
Cc: Marc Zyngier
Cc: Thomas Gleixner
Signed-off-by:
Hi all,
Bartlomiej's "[PATCH 1/3] ata: allow subsystem to be used on m68k arch"
exposed a few missing dependencies on HAS_DMA. This series makes
"allmodconfig" and "allyesconfig" kernels tailored for Sun-3 (which
sets NO_DMA=y) buildable again.
Geert Uytterhoeven (6):
ata: SATA_MV
If NO_DMA=y:
ERROR: "bad_dma_ops" [drivers/ata/sata_highbank.ko] undefined!
Add a dependency on HAS_DMA to fix this.
Signed-off-by: Geert Uytterhoeven
---
drivers/ata/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig
index
In preparation for adding additional flags to perf AUX records, allow
the flags for a session to be passed directly to perf_aux_output_end,
rather than extend the function to take a bool for each one.
Signed-off-by: Will Deacon
---
arch/x86/events/intel/bts.c | 11
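The shape of the change can be sketched in user space as follows (names are illustrative stand-ins for the perf internals): per-condition bools collapse into one flags word, so a new condition only adds a bit, not a parameter.

```c
/* Before (sketch): aux_output_end(size, truncated, overwrite);
 * After  (sketch): aux_output_end(size, flags); */
#define AUX_FLAG_TRUNCATED	(1UL << 0)
#define AUX_FLAG_OVERWRITE	(1UL << 1)

static unsigned long last_aux_flags;

static void aux_output_end_sketch(unsigned long size, unsigned long flags)
{
	(void)size;
	last_aux_flags = flags;	/* the real code records these in the event */
}
```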
The ARM SPE architecture permits an implementation to ignore a sample
if the sample is due to be taken whilst another sample is already being
produced. In this case, it is desirable to report the collision to
userspace, as they may want to lower the sample period.
This patch adds a
On 12/28/2016 12:47 AM, Yuriy Kolerov wrote:
> Ignore value of interrupt distribution mode for common interrupts in
> IDU since setting of affinity using value from Device Tree is deprecated
> in ARC. Originally it is done in idu_irq_xlate() function and it is
> semantically wrong and does not
The SPE architecture requires each exception level to enable access
to the SPE controls for the exception level below it, since additional
context-switch logic may be required to handle the buffer safely.
This patch allows EL1 (host) access to the SPE controls when entered at
EL2.
Cc: Marc
The statistical profiling extension (SPE) is an optional feature of
ARMv8.2 and is unlikely to be supported by all of the CPUs in a
heterogeneous system.
This patch updates the cpufeature checks so that such systems are not
tainted as unsupported.
Reviewed-by: Suzuki Poulose
Signed-off-by: Will
If NO_DMA=y:
ERROR: "dma_pool_alloc" [drivers/ata/sata_mv.ko] undefined!
ERROR: "dmam_pool_create" [drivers/ata/sata_mv.ko] undefined!
ERROR: "dma_pool_free" [drivers/ata/sata_mv.ko] undefined!
Add a dependency on HAS_DMA to fix this.
Signed-off-by: Geert Uytterhoeven
---
Commit 70e6ad0c6d1e6cb9 ("[PATCH] libata: prepare ata_sg_clean() for
invocation from EH") made ata_sg_clean() global, but no user outside
libata-core.c has ever materialized.
Signed-off-by: Geert Uytterhoeven
---
drivers/ata/libata-core.c | 2 +-
drivers/ata/libata.h | 1 -
2 files
Hi all,
This RFC series adds support for the ARMv8.2 Statistical Profiling
Extension (SPE) to Linux in the form of a perf PMU driver. There aren't
any userspace patches for perf tool yet, but Kim (on CC) is working on
those and I thought posting the kernel side as an RFC was still worth it
in the
If NO_DMA=y:
ERROR: "dmam_alloc_coherent" [drivers/ata/libata.ko] undefined!
Add a dependency on HAS_DMA to fix this.
Signed-off-by: Geert Uytterhoeven
---
drivers/ata/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig
index
Perf already supports multiple PMU instances for heterogeneous systems,
so there's no need to be strict in the cpufeature checking, particularly
as the PMU extension is optional in the architecture.
Reviewed-by: Suzuki K Poulose
Signed-off-by: Will Deacon
---
arch/arm64/kernel/cpufeature.c | 6
On Tue, 03 Jan 2017, Mark Rutland wrote:
Does the below help?
It does, yes. Performance is pretty much the same with either function
without sysreg. With arm no longer in the picture, I'll send up another
patchset with this change as well as Peter's cleanup remarks.
Thanks,
Davidlohr
On Wed, Dec 28, 2016 at 07:34:52PM +0900, Jaehoon Chung wrote:
> Adds the exynos-pcie-phy binding for Exynos PCIe PHY.
> This is for using generic PHY framework.
>
> Signed-off-by: Jaehoon Chung
> ---
> .../devicetree/bindings/phy/samsung-phy.txt| 23
> ++
> 1 file
On Mon, 2 Jan 2017, Juergen Gross wrote:
> On 28/12/16 01:47, Jiandi An wrote:
> > Ensure all reserved fields of xatp are zero before making
> > hypervisor call to XEN in xen_map_device_mmio().
> > xenmem_add_to_physmap_one() in XEN fails the mapping request if
> > extra.res reserved field in xatp
This patch enables the collection of event counts in the slowpath of the
realtime queued spinlocks. The following events are being tracked if
CONFIG_QUEUED_LOCK_STAT is defined:
- # of interrupt context RT spinnings
- # of task context RT spinnings
- # of nested spinlock RT spinnings
- # of
When in interrupt context, the priority of the interrupted task is
meaningless, so a static RT priority is assigned in this case to make
sure that the lock can be acquired ASAP, reducing the latency seen by
the interrupted task.
Signed-off-by: Waiman Long
---
kernel/locking/qspinlock_rt.h | 15 +--
1
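The rule described above reduces to a small decision, sketched here in user space (the priority value and function name are illustrative, not the actual qspinlock_rt code):

```c
/* Sketch: in interrupt context the interrupted task's priority is
 * meaningless, so compete for the lock at a fixed RT priority. */
#define RT_SPIN_PRIO_SKETCH	99	/* illustrative value */

static int spin_prio_sketch(int in_interrupt, int task_prio)
{
	return in_interrupt ? RT_SPIN_PRIO_SKETCH : task_prio;
}
```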
Realtime queued spinlock is a variant of the queued spinlock that is
suitable for use in a realtime running environment where the highest
priority task should always get its work done as soon as possible. That
means a minimal wait for spinlocks. To make that happen, RT tasks
will not wait in the
On Tue, Dec 27, 2016 at 01:59:02PM +, Ramiro Oliveira wrote:
> Create device tree bindings documentation.
>
> Signed-off-by: Ramiro Oliveira
> ---
> .../devicetree/bindings/media/i2c/ov5647.txt | 35
> ++
> 1 file changed, 35 insertions(+)
> create mode 100644
As the priority of a task may get boosted due to an acquired rtmutex,
we will need to periodically check the task priority to see if it
gets boosted. For an originally non-RT task, that means unqueuing from
the MCS wait queue before doing an RT spinning. So the unqueuing code
from osq_lock is
In general, the spinlock critical section is typically small enough
that once a CPU acquires a spinlock, it should be able to release the
lock in a reasonably short time. Waiting for the lock, however, can
take a while depending on how many CPUs are contending for the lock.
The exception here is
Ideally we want the CPU to be preemptible even when inside or waiting
for a lock. We cannot make it preemptible when inside a lock critical
section, but we can try to make the task voluntarily yield the CPU
when waiting for a lock.
This patch checks the need_resched() flag and yields the CPU when
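The idea can be sketched in user space like this (the helpers are stand-ins for the kernel's need_resched() check and lock internals, not the actual implementation):

```c
#include <sched.h>
#include <stdatomic.h>

static atomic_int lock_word_sketch;

static int need_resched_sketch(void)
{
	return 0;	/* stand-in: the kernel checks TIF_NEED_RESCHED */
}

/* Sketch: while waiting for the lock, give the CPU up voluntarily
 * whenever the scheduler has somewhere better for it to be. */
static void spin_lock_sketch(void)
{
	while (atomic_exchange(&lock_word_sketch, 1)) {
		if (need_resched_sketch())
			sched_yield();
	}
}

static void spin_unlock_sketch(void)
{
	atomic_store(&lock_word_sketch, 0);
}
```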
This patchset introduces a new variant of queued spinlocks - the
realtime queued spinlocks. The purpose of this new variant is to
support real spinlocks in a realtime environment where high priority
RT tasks should be allowed to complete their work ASAP. This means as
little waiting time for
The spin_lock_bh_nested() API is defined but is not used anywhere
in the kernel. So all spin_lock_bh_nested() and related APIs are
now removed.
Signed-off-by: Waiman Long
---
include/linux/spinlock.h | 8
include/linux/spinlock_api_smp.h | 2 --
include/linux/spinlock_api_up.h
On Mon, Dec 26, 2016 at 05:35:59PM +0100, Pavel Machek wrote:
>
> Right question is "should we solve it without user-space help"?
>
> Answer is no, too. Way too complex. Yes, it would be nice if hardware
> was designed in such a way that getting calibration data from kernel
> is easy, and if you
x86 has an option CONFIG_DEBUG_VIRTUAL to do additional checks
on virt_to_phys calls. The goal is to catch users who are calling
virt_to_phys on non-linear addresses immediately. This includes callers
using virt_to_phys on image addresses instead of __pa_symbol. As features
such as
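The kind of check being described can be sketched in user space as follows (the bounds are made up for illustration; the real code compares against the kernel's actual linear mapping):

```c
#include <stdint.h>

/* Illustrative bounds only; not a real memory layout. */
#define LINEAR_MAP_START_SKETCH	0x100000UL
#define LINEAR_MAP_END_SKETCH	0x200000UL

/* Sketch: a DEBUG_VIRTUAL-style check that rejects addresses
 * outside the linear map, catching callers that should have used
 * __pa_symbol() for image addresses. */
static int virt_addr_valid_sketch(uintptr_t va)
{
	return va >= LINEAR_MAP_START_SKETCH && va < LINEAR_MAP_END_SKETCH;
}
```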
pr_cont(...) and printk(KERN_CONT ...) uses should be discouraged
as their output can be interleaved by multiple logging processes.
Signed-off-by: Joe Perches
---
scripts/checkpatch.pl | 6 ++
1 file changed, 6 insertions(+)
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index
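The alternative the check nudges authors toward (shown here as a plain C sketch rather than the kernel's printk) is to assemble the whole line in a buffer and emit it once, so concurrent loggers cannot interleave the pieces:

```c
#include <stdio.h>

/* Sketch: build one complete line instead of emitting fragments. */
static void format_line_sketch(char *buf, size_t len, const int *v, int n)
{
	size_t pos = 0;
	int i;

	for (i = 0; i < n && pos < len; i++)
		pos += (size_t)snprintf(buf + pos, len - pos, "%d ", v[i]);
	/* a single printf/printk of buf cannot be interleaved */
}
```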
+++ Jean Delvare [16/12/16 18:45 +0100]:
Struct module is already declared at the beginning of the file, no
need to declare it again.
Signed-off-by: Jean Delvare
Fixes: 93c2e105f6bc ("module: Optimize __module_address() using a latched
RB-tree")
Cc: Peter Zijlstra (Intel)
Cc: Jessica Yu
Cc:
mm/filemap.c: In function ‘clear_bit_unlock_is_negative_byte’:
mm/filemap.c:933: warning: passing argument 2 of ‘test_bit’ discards qualifiers
from pointer target type
Make the bitmask pointed to by the "vaddr" parameter volatile to fix
this, like is done on other architectures.
Signed-off-by:
mm/filemap.c: In function 'clear_bit_unlock_is_negative_byte':
mm/filemap.c:933:9: warning: passing argument 2 of 'test_bit' discards
'volatile' qualifier from pointer target type
return test_bit(PG_waiters, mem);
^
In file included from include/linux/bitops.h:36:0,
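The fix described boils down to qualifying the parameter, roughly like this user-space sketch of an m68k-style bit accessor (the real kernel version differs in detail):

```c
/* Sketch: accept a volatile pointer so callers passing volatile
 * bitmask storage no longer discard qualifiers. */
static inline int test_bit_sketch(int nr, const volatile unsigned long *vaddr)
{
	unsigned int bits = 8 * sizeof(unsigned long);

	return (int)((vaddr[nr / bits] >> (nr % bits)) & 1UL);
}
```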
On Tue, 3 Jan 2017 11:45:30 -0500
Steven Rostedt wrote:
> On Tue, 3 Jan 2017 11:32:58 -0500
> Luiz Capitulino wrote:
>
> > On Tue, 22 Nov 2016 15:20:50 -0500
> > Luiz Capitulino wrote:
> >
> > > This series adds support for a --cpu-list option, which is
> > > much more human friendly than
On Tue, 2017-01-03 at 13:34 +0900, Minchan Kim wrote:
> Hi Jan,
>
> On Mon, Jan 02, 2017 at 04:48:41PM +0100, Jan Kara wrote:
> >
> > Hi,
> >
> > On Tue 27-12-16 16:45:03, Minchan Kim wrote:
> > >
> > > >
> > > > Patch 3 splits the swap cache radix tree into 64MB chunks, reducing
> > > >
On Tue, 06 Dec 2016, Andrew Jeffery wrote:
> Signed-off-by: Andrew Jeffery
Applied with Acks, thanks.
> ---
> Documentation/devicetree/bindings/mfd/mfd.txt | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/Documentation/devicetree/bindings/mfd/mfd.txt
>
On Tue, 06 Dec 2016, Andrew Jeffery wrote:
> Signed-off-by: Andrew Jeffery
Applied with Acks, thanks.
> ---
> .../devicetree/bindings/mfd/aspeed-lpc.txt | 111
> +
> 1 file changed, 111 insertions(+)
> create mode 100644
On Tue, Dec 27, 2016 at 05:13:04PM +0800, Minghuan Lian wrote:
> A MSI controller of LS1043a v1.0 only includes one MSIR and
> is assigned one GIC interrupt. In order to support affinity,
> LS1043a v1.1 MSI is assigned 4 MSIRs and 4 GIC interrupts.
> But the MSIR has the different offset and only
On Tue, Dec 27, 2016 at 05:13:03PM +0800, Minghuan Lian wrote:
> LS1046a includes 4 MSIRs, each MSIR is assigned a dedicate GIC
> SPI interrupt and provides 32 MSI interrupts. Compared to previous
> MSI, LS1046a's IBS(interrupt bit select) shift is changed to 2 and
> total MSI interrupt number is
SF Markus Elfring writes:
> From: Markus Elfring
> Date: Sat, 31 Dec 2016 22:42:34 +0100
>
> Some update suggestions were taken into account
> from static source code analysis.
This series is:
Reviewed-by: Eric Anholt