On 5/6/20 5:24 PM, Jared Rossi wrote:
> Remove the explicit prefetch check when using vfio-ccw devices.
> This check does not trigger in practice as all Linux channel programs
> are intended to use prefetch.
>
> It is expected that all ORBs issued by Linux will request prefetch.
> Although
On 5/5/20 9:53 AM, Thomas Gleixner wrote:
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -20,6 +20,7 @@
> #include
> #include
> #include
> +#include
> #include
> #include
>
> @@ -993,7 +994,8 @@ static void __init xen_pvmmu_arch_setup(
>
Intel CPUs have a new alternative MSR range (starting from MSR_IA32_PMC0)
for GP counters that allows writing the full counter width. Enable this
range from a new capability bit (IA32_PERF_CAPABILITIES.FW_WRITE[bit 13]).
The guest would query CPUID to get the counter width, and sign extends
the
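The sign-extension step described above reduces to a shift pair; a minimal userspace sketch, assuming the guest has read the counter width from CPUID (the 48-bit width in the checks below is illustrative, not taken from the patch):

```c
#include <stdint.h>

/* Sign-extend a counter value of the given bit width to 64 bits. */
static int64_t sign_extend(uint64_t value, unsigned int width)
{
	unsigned int shift = 64 - width;

	return (int64_t)(value << shift) >> shift;
}
```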
allyesconfig
powerpc alldefconfig
powerpc rhel-kconfig
powerpc allmodconfig
m68k randconfig-a001-20200506
mips randconfig-a001-20200506
nds32 randconfig-a001-20200506
parisc
On Fri, 2020-05-01 at 10:16 -0400, Nayna Jain wrote:
> To prevent verifying the kernel module appended signature twice
> (finit_module), once by the module_sig_check() and again by IMA, powerpc
> secure boot rules define an IMA architecture specific policy rule
> only if CONFIG_MODULE_SIG_FORCE is
Applied to drm-misc-next.
On 03/24, Rodrigo Siqueira wrote:
> Hi Melissa,
>
> First of all, thanks for your patch.
>
> I agree with you, it makes more sense to me if we enable cursors by
> default. I don't remember why we decided to add it as disabled by
> default.
>
> Reviewed-by: Rodrigo
On 5/6/20 3:23 AM, Ard Biesheuvel wrote:
On Tue, 5 May 2020 at 21:00, Lenny Szubowicz wrote:
In allocate_e820(), free the EFI map buffer that has been returned
by efi_get_memory_map(). The returned size of the EFI map buffer
is used to allocate an adequately sized e820ext buffer, if it's
allmodconfig
powerpc allnoconfig
m68k randconfig-a001-20200506
mips randconfig-a001-20200506
nds32 randconfig-a001-20200506
parisc randconfig-a001-20200506
alpha randconfig-a001
On Wed, May 6, 2020 at 9:11 PM Brian Geffon wrote:
>
> > > - mremap_userfaultfd_complete(, addr, new_addr, old_len);
> > > + mremap_userfaultfd_complete(, addr, ret, old_len);
> >
> > Not super familiar with this code, but thought I'd ask, does ret need
> > to be checked for -ENOMEM before
Hi, Christoph,
On Wed, May 6, 2020 at 10:44 PM Christoph Hellwig wrote:
>
> On Wed, May 06, 2020 at 04:47:30PM +0800, Huacai Chen wrote:
> > > For the above reasons, I think what you are concerned is not a
> > > big deal.
> > I don't think so, this is obviously a regression. If we can accept a
>
powerpc allmodconfig
m68k randconfig-a001-20200506
mips randconfig-a001-20200506
nds32 randconfig-a001-20200506
parisc randconfig-a001-20200506
alpha randconfig-a001-20200506
riscv
On 5/6/20 11:29 PM, Paolo Bonzini wrote:
On 06/05/20 15:17, Suravee Suthikulpanit wrote:
*/
-void kvm_request_apicv_update(struct kvm *kvm, bool activate, ulong bit)
+void kvm_request_apicv_update(struct kvm *kvm, bool activate, ulong bit,
+ struct kvm_vcpu
On 5/6/20 6:20 PM, Edwin Török wrote:
>> (Obviously, a full metadump would be useful for confirming the shape
>> of
>> the refcount btree, but...first things first let's look at the
>> filefrag
>> output.)
> I'll try to gather one, and find a place to store/share it.
>
> Best regards,
> --Edwin
Hi Xu,
On Tue, May 5, 2020 at 10:13 PM Xu Yilun wrote:
>
> Hi Moritz:
>
> Hao and I did several rounds of review and fix in the mailing list. Now
> the patches are all acked by Hao.
>
> Could you please help review it when you have time?
I'll get to it this weekend. Sorry for the delay,
Moritz
> > - mremap_userfaultfd_complete(, addr, new_addr, old_len);
> > + mremap_userfaultfd_complete(, addr, ret, old_len);
>
> Not super familiar with this code, but thought I'd ask, does ret need
> to be checked for -ENOMEM before calling mremap_userfaultfd_complete?
> Sorry if I missed
Nelson D'Souza writes:
> There are now several commercially available processors that have h/w
> fixes for the TSX Async Abort (TAA) issue as indicated by enumerating
> the ARCH_CAPABILITIES "TAA_NO" bit.
>
> Change the default setting to "auto" so that these CPUs will leave
> TSX enabled by
Hi, Will
Please help to review the patch.
Thanks,
Jiping
On 04/29/2020 12:51 PM, Jiping Ma wrote:
We tested it with the following steps.
# gcc -g -mthumb -gdwarf -o test test.c
# export CALLGRAPH=dwarf
#(./perftest ./test profiling 1; cd ./profiling/; perf script)
Thanks,
Jiping
On 04/29/2020
On Wed, May 6, 2020 at 1:22 PM Brian Geffon wrote:
>
> A user is not required to set a new address when using
> MREMAP_DONTUNMAP as it can be used without MREMAP_FIXED.
> When doing so the remap event will use new_addr which may not
> have been set and we didn't propagate it back other than
> in
The current code for BPF_{ADD,SUB} BPF_K loads the BPF immediate to a
temporary register before performing the addition/subtraction. Similarly,
BPF_JMP BPF_K cases load the immediate to a temporary register before
comparison.
This patch introduces optimizations that use arm64 immediate add, sub,
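The encodability test behind this optimization can be sketched in userspace; an arm64 ADD/SUB immediate is a 12-bit value, optionally shifted left by 12 bits (the helper name below is illustrative, not the JIT's actual code):

```c
#include <stdbool.h>
#include <stdint.h>

/* True if imm fits an arm64 ADD/SUB immediate: 12 bits, optionally LSL #12. */
static bool is_addsub_imm(uint32_t imm)
{
	return !(imm & ~0xfffU) || !(imm & ~0xfff000U);
}
```

When this returns false, the JIT falls back to loading the immediate into a temporary register as before.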
The current code for BPF_{AND,OR,XOR,JSET} BPF_K loads the immediate to
a temporary register before use.
This patch changes the code to avoid using a temporary register
when the BPF immediate is encodable using an arm64 logical immediate
instruction. If the encoding fails (due to the immediate
Record the PC value from regs[15]; it should be regs[32] in REGS_ABI_32
mode. This caused the perf parser to fail on the backtrace.
Signed-off-by: Jiping Ma
---
arch/arm64/kernel/perf_regs.c | 4
1 file changed, 4 insertions(+)
diff --git a/arch/arm64/kernel/perf_regs.c
This patch fixes two issues present in the current function for encoding
arm64 logical immediates when using the 32-bit variants of instructions.
First, the code does not correctly reject an all-ones 32-bit immediate
and returns an undefined instruction encoding, which can crash the kernel.
The
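One property behind the fix: neither 0 nor all-ones is representable as an arm64 logical (bitmask) immediate, so a correct 32-bit encoder must reject 0xffffffff up front. A sketch of just that guard, not the full encoder (the function name is hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

/* Necessary precondition for a 32-bit arm64 logical immediate:
 * 0 and all-ones have no bitmask-immediate encoding, so reject them
 * before searching for a valid (immr, imms) pattern. */
static bool logic_imm32_possible(uint32_t imm)
{
	return imm != 0 && imm != 0xffffffffU;
}
```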
This patch series introduces several optimizations to the arm64 BPF JIT.
The optimizations make use of arm64 immediate instructions to avoid
loading BPF immediates to temporary registers, when possible.
In the process, we discovered two bugs in the logical immediate encoding
function in
On Thu, 2020-05-07 at 09:50 +0900, Tetsuo Handa wrote:
> On 2020/05/07 0:26, Joe Perches wrote:
> > On Wed, 2020-05-06 at 18:45 +0900, Tetsuo Handa wrote:
> > > On 2020/04/28 20:33, Tetsuo Handa wrote:
> > > > On 2020/04/27 15:21, Sergey Senozhatsky wrote:
> > > > > > KERN_NO_CONSOLES is for type
07.05.2020 03:02, Chanwoo Choi wrote:
> Hi Dmitry,
>
> On 4/17/20 11:04 PM, Dmitry Osipenko wrote:
>> 27.02.2020 20:08, Dmitry Osipenko wrote:
>>> GCC produces this warning when kernel compiled using `make W=1`:
>>>
>>> warning: ‘strncpy’ specified bound 16 equals destination size
>>>
The IOTLB flush is already included in the PASID tear-down and page
request drain process, so there is no need to flush again.
Signed-off-by: Jacob Pan
Signed-off-by: Lu Baolu
---
drivers/iommu/intel-svm.c | 6 +-
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git
Export invalidation queue internals of each iommu device through the
debugfs.
Example of such dump on a Skylake machine:
$ sudo cat /sys/kernel/debug/iommu/intel/invalidation_queue
Invalidation queue on IOMMU: dmar1
Base: 0x1672c9000 Head: 80 Tail: 80
Index qw0
When a PASID is used for SVA by the device, it's possible that the PASID
entry is cleared before the device flushes all ongoing DMA requests. The
IOMMU should ignore the non-recoverable faults caused by these requests.
Intel VT-d provides such function through the FPD bit of the PASID entry.
This
When a PASID is stopped or terminated, there can be pending PRQs
(requests that haven't received responses) in remapping hardware.
This adds the interface to drain page requests and call it when a
PASID is terminated.
Signed-off-by: Jacob Pan
Signed-off-by: Liu Yi L
Signed-off-by: Lu Baolu
---
When a PASID is stopped or terminated, there can be pending PRQs
(requests that haven't received responses) in the software and
remapping hardware. The pending page requests must be drained
so that the PASID can be reused. Chapter 7.10 of the VT-d
specification specifies the software steps
Currently qi_submit_sync() supports only a single invalidation
descriptor per submission and appends a wait descriptor after each
submission to poll for hardware completion. This extends the
qi_submit_sync() helper to support multiple descriptors, and adds an
option so that the caller can specify the
Hi Brian,
On Wed, May 6, 2020 at 1:32 PM Brian Geffon wrote:
>
> It hasn't landed in a stable kernel yet, 5.7 is rc4 so I don't think
> it needs to cc stable, right?
I think the criterion is: if it has been merged into Linus's tree in a
kernel release, then CC'ing stable means any future stable
On Wed, 6 May 2020 17:42:40 -0700 "Paul E. McKenney" wrote:
> This commit adds a shrinker so as to inform RCU when memory is scarce.
> RCU responds by shifting into the same fast and inefficient mode that is
> used in the presence of excessive numbers of RCU callbacks. RCU remains
> in this
From: Grygorii Strashko
Date: Tue, 5 May 2020 19:31:26 +0300
> The K3 INTA driver, which is source TX/RX IRQs for CPSW NUSS, defines IRQs
> triggering type as EDGE by default, but triggering type for CPSW NUSS TX/RX
> IRQs has to be LEVEL as the EDGE triggering type may cause unnecessary IRQs
>
> -Original Message-
> From: David Miller
> Sent: Wednesday, May 6, 2020 14:03
> To: yanai...@huawei.com
> Cc: Kirsher, Jeffrey T ; Azarewicz, Piotr
> ; intel-wired-...@lists.osuosl.org;
> net...@vger.kernel.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH net-next] i40e: Make
On 2020/05/07 0:26, Joe Perches wrote:
> On Wed, 2020-05-06 at 18:45 +0900, Tetsuo Handa wrote:
>> On 2020/04/28 20:33, Tetsuo Handa wrote:
>>> On 2020/04/27 15:21, Sergey Senozhatsky wrote:
> KERN_NO_CONSOLES is for the type of messages where "saved for later
> analysis" is important
From: Bryan O'Donoghue
Currently we check to make sure there is no error state on the extcon
handle for VBUS when writing to the HS_PHY_GENCONFIG_2 register. When using
the USB role-switch API we still need to write to this register absent an
extcon handle.
This patch makes the appropriate
> I thought we have established that checking device MTU (m*T*u)
> at ingress makes a very limited amount of sense, no?
>
> Shooting from the hip here, but won't something like:
>
> if (!skb->dev || skb->tc_at_ingress)
> return SKB_MAX_ALLOC;
> return skb->dev->mtu +
Improve performance by multithreading the work to preserve and restore
shmem pages.
Add 'pkram_max_threads=' kernel option to specify the maximum number
of threads to use to preserve or restore the pages of a shmem file.
The default is 16.
When preserving pages each thread saves chunks of a file
Provide a way for a caller external to numa to ensure memblocks in the
memblock reserved list do not cross node boundaries and have a node id
assigned to them. This will be used by PKRAM to ensure initialization of
page structs for preserved pages can be deferred and multithreaded
efficiently.
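The splitting can be sketched as clamping a reserved range against a table of node start addresses; the boundary values in the checks are made up for illustration, and the helper is not the memblock code itself:

```c
#include <stddef.h>
#include <stdint.h>

struct subrange { uint64_t start, end; int nid; };

/* Split [start, end) at the given ascending node start addresses,
 * writing one (start, end, nid) triple per non-empty sub-range.
 * Returns the number of sub-ranges produced. */
static size_t split_by_node(uint64_t start, uint64_t end,
			    const uint64_t *node_start, size_t nr_nodes,
			    struct subrange *out)
{
	size_t n = 0;

	for (size_t nid = 0; nid < nr_nodes; nid++) {
		uint64_t nstart = node_start[nid];
		uint64_t nend = (nid + 1 < nr_nodes) ? node_start[nid + 1]
						     : UINT64_MAX;
		uint64_t s = start > nstart ? start : nstart;
		uint64_t e = end < nend ? end : nend;

		if (s < e) {
			out[n].start = s;
			out[n].end = e;
			out[n].nid = (int)nid;
			n++;
		}
	}
	return n;
}
```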
To support deferred initialization of page structs for preserved pages,
separate memblocks containing preserved pages by setting a new flag
when adding them to the memblock reserved list.
Signed-off-by: Anthony Yznaga
---
include/linux/memblock.h | 7 +++
mm/memblock.c | 8
This patch illustrates how the PKRAM API can be used for preserving tmpfs.
Two options are added to tmpfs:
The 'pkram=' option specifies the PKRAM node to load/save the
filesystem tree from/to.
The 'preserve' option initiates preservation of a read-only
filesystem tree.
If the
From: Geert Uytterhoeven
Date: Tue, 5 May 2020 15:28:09 +0200
> Currently bool ionic_cq.done_color is exported using
> debugfs_create_u8(), which requires a cast, preventing further compiler
> checks.
>
> Fix this by switching to debugfs_create_bool(), and dropping the cast.
>
>
Add and remove pkram_link pages from a pkram_obj atomically to prepare
for multithreading.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 27 ++-
1 file changed, 18 insertions(+), 9 deletions(-)
diff --git a/mm/pkram.c b/mm/pkram.c
index 5f4e4d12865f..042c14dedc25
When a kernel is loaded for kexec the address ranges where the kexec
segments will be copied to may conflict with pages already set to be
preserved. Provide a way to determine if preserved pages exist in a
specified range.
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 2 ++
Contention on the xarray lock when multiple threads are adding to the
same xarray can be mitigated by providing a way to add entries in
bulk.
Allow a caller to allocate and populate an xarray node outside of
the target xarray and then only take the xarray lock long enough to
import the node into
From: Arnd Bergmann
Date: Tue, 5 May 2020 17:38:19 +0200
> The addition of sja1105_port_status_ether structure into the
> statistics causes the frame size to go over the warning limit:
>
> drivers/net/dsa/sja1105/sja1105_ethtool.c:421:6: error: stack frame size of
> 1104 bytes in function
Considerable contention on the LRU lock occurs when multiple threads
are used to insert pages into a shmem file in parallel. To alleviate
this, provide a way for pages destined for the same LRU to be staged
so that they can be added by splicing lists and updating stats once
with the lock held.
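The staging pattern itself is generic: build a private chain, then take the contended lock once to splice it in. A userspace sketch under that assumption (names are illustrative, not the shmem code; the lock is represented by a counter for clarity):

```c
#include <stddef.h>

struct node { struct node *next; };

struct lru {
	struct node *head;
	unsigned long nr;		/* stats updated once per splice */
	unsigned long lock_acquisitions;
};

/* Splice an entire privately built chain onto the LRU with one
 * lock acquisition instead of one per page. */
static void lru_splice(struct lru *lru, struct node *chain, unsigned long nr)
{
	/* lock(lru) would go here */
	lru->lock_acquisitions++;

	struct node *tail = chain;
	while (tail->next)
		tail = tail->next;
	tail->next = lru->head;
	lru->head = chain;
	lru->nr += nr;
	/* unlock(lru) */
}
```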
If shmem_insert_page() is called to insert a page that was preserved
using PKRAM on the current boot (i.e. preserved page is restored without
an intervening kexec boot), the page will still be charged to a memory
cgroup because it is never freed. Don't try to charge it again.
Signed-off-by:
To take advantage of optimizations when adding pages to the page cache
via shmem_insert_pages(), improve the likelihood that the pages array
passed to shmem_insert_pages() starts on an aligned index. Do this
when preserving pages by starting a new pkram_link page when the current
page is aligned
This patch adds three functions:
pkram_prepare_load_pages()
Called after calling pkram_prepare_load_obj()
pkram_load_pages()
Loads some number of pages that are contiguous by their original
file index values. The index of the first page, an array of the
page pointers, and the number of
Using the pkram_save_page() function, one can populate PKRAM objects with
memory pages which can later be loaded using the pkram_load_page()
function. Saving a memory page to PKRAM is accomplished by recording
its pfn and incrementing its refcount so that it will not be freed after
the last user
Support preserving a transparent hugepage by recording the page order and
a flag indicating it is a THP. Use these values when the page is
restored to reconstruct the THP.
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 2 ++
mm/pkram.c | 20
2 files
Rather than adding one page at a time to the page cache and taking the
page cache xarray lock each time, where possible add pages in bulk by
first populating an xarray node outside of the page cache before taking
the lock to insert it.
When a group of pages to be inserted will fill an xarray node,
Reduce LRU lock contention when inserting shmem pages by staging pages
to be added to the same LRU and adding them en masse.
Signed-off-by: Anthony Yznaga
---
mm/shmem.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index
Add the ability to walk the pkram pagetable from high to low addresses
and execute a callback for each contiguous range of preserved or not
preserved memory found. The reason for walking high to low is to align
with high to low memblock allocation when finding holes that memblocks
can safely be
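The traversal can be sketched over a simple page-presence bitmap, scanning from the top and emitting one record per maximal run; this illustrates the high-to-low order only, not the pkram pagetable format (names are made up):

```c
#include <stdbool.h>
#include <stddef.h>

struct run { size_t lo, hi; bool preserved; };

/* Walk present[] from the highest index down, recording one entry per
 * maximal run of pages sharing the same preserved/not-preserved state.
 * Runs are emitted in high-to-low order; returns the number of runs. */
static size_t walk_high_to_low(const bool *present, size_t n,
			       struct run *out)
{
	size_t nr = 0, hi = n;

	while (hi > 0) {
		bool state = present[hi - 1];
		size_t lo = hi - 1;

		while (lo > 0 && present[lo - 1] == state)
			lo--;
		out[nr].lo = lo;
		out[nr].hi = hi;
		out[nr].preserved = state;
		nr++;
		hi = lo;
	}
	return nr;
}
```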
Not all memory ranges can be used for saving preserved over-kexec data.
For example, a kexec kernel may be loaded before pages are preserved.
The memory regions where the kexec segments will be copied to on kexec
must not contain preserved pages or else they will be clobbered.
Originally-by:
From: Oleksij Rempel
Date: Tue, 5 May 2020 08:35:04 +0200
> changes v6:
> - use NL_SET_ERR_MSG_ATTR in ethnl_update_linkmodes
> - add sanity checks in the ioctl interface
> - use bool for ethnl_validate_master_slave_cfg()
...
Series applied, thank you.
shmem_insert_pages() currently loops over the array of pages passed
to it and calls shmem_add_to_page_cache() for each one. Prepare
for adding pages to the pagecache in bulk by adding and using a
shmem_add_pages_to_cache() call. For now it just iterates over
an array and adds pages individually,
Keep preserved pages from being recycled during boot by adding them
to the memblock reserved list during early boot. If memory reservation
fails (e.g. a region has already been reserved), all preserved pages
are dropped.
For efficiency the preserved pages pagetable is used to identify and
reserve
Calling shmem_insert_page() to insert one page at a time does
not scale well when multiple threads are inserting pages into
the same shmem segment. This is primarily due to the locking needed
when adding to the pagecache and LRU but also due to contention
on the shmem_inode_info lock. To address
Make use of new interfaces for loading and inserting preserved pages
into a shmem file in bulk.
Signed-off-by: Anthony Yznaga
---
mm/shmem_pkram.c | 23 +--
1 file changed, 17 insertions(+), 6 deletions(-)
diff --git a/mm/shmem_pkram.c b/mm/shmem_pkram.c
index
PKRAM nodes are further divided into a list of objects. After a save
operation has been initiated for a node, a save operation for an object
associated with the node is initiated by calling pkram_prepare_save_obj().
A new object is created and linked to the node. The save operation for
the object
Preserved pages are represented in the memblock reserved list, but page
structs for pages in the reserved list are initialized early while boot
is single threaded which means that a large number of preserved pages
can impact boot time. To mitigate, defer initialization of preserved
pages by
To prepare for multithreading the work done to preserve a file,
divide the work into subranges of the total index range of the file.
The chunk size is a rather arbitrary 256k indices.
A new API call, pkram_prepare_save_chunk(), is added. It is called
after calling pkram_prepare_save_obj(), and
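The chunking arithmetic above reduces to fixed-size index windows; a sketch using the 256k-index chunk size from the description (helper names are illustrative):

```c
#define PKRAM_CHUNK_INDICES (256UL * 1024)

/* Number of chunks needed to cover nr_indices file indices. */
static unsigned long nr_chunks(unsigned long nr_indices)
{
	return (nr_indices + PKRAM_CHUNK_INDICES - 1) / PKRAM_CHUNK_INDICES;
}

/* Starting file index of chunk i. */
static unsigned long chunk_start(unsigned long i)
{
	return i * PKRAM_CHUNK_INDICES;
}
```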
In order to facilitate fast initialization of page structs for
preserved pages, memblocks with preserved pages must not cross
numa node boundaries and must have a node id assigned to them.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 10 ++
1 file changed, 10 insertions(+)
diff --git
To support deferred initialization of page structs for preserved
pages, add an iterator of the memblock reserved list that can select or
exclude ranges based on memblock flags.
Signed-off-by: Anthony Yznaga
---
include/linux/memblock.h | 10 ++
mm/memblock.c | 51
Add a pointer to the pagetable to the pkram_super_block page.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 20 +---
1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/mm/pkram.c b/mm/pkram.c
index 5a7b8f61a55d..54b2779d0813 100644
--- a/mm/pkram.c
+++ b/mm/pkram.c
Future patches will need a way to efficiently identify physically
contiguous ranges of preserved pages regardless of their virtual
addresses as well as a way to identify ranges that do not contain
preserved pages. To facilitate this all pages to be preserved across
kexec are added to an identity
Avoid regions of memory that contain preserved pages when computing
slots used to select where to put the decompressed kernel.
Signed-off-by: Anthony Yznaga
---
arch/x86/boot/compressed/Makefile | 3 +
arch/x86/boot/compressed/kaslr.c | 67 ++
arch/x86/boot/compressed/misc.h | 19
Ensure destination ranges of the kexec segments do not overlap
with any kernel pages marked to be preserved across kexec.
For kexec_load, return EADDRNOTAVAIL if overlap is detected.
For kexec_file_load, skip ranges containing preserved pages when
searching for available ranges to use.
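The overlap test at the heart of the kexec_load check is the standard half-open interval comparison; a sketch (the helper name is illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* True if [a_start, a_end) and [b_start, b_end) intersect. */
static bool ranges_overlap(uint64_t a_start, uint64_t a_end,
			   uint64_t b_start, uint64_t b_end)
{
	return a_start < b_end && b_start < a_end;
}
```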
Preserved-across-kexec memory or PKRAM is a method for saving memory
pages of the currently executing kernel and restoring them after kexec
boot into a new one. This can be utilized for preserving guest VM state,
large in-memory databases, process memory, etc. across reboot. While
DRAM-as-PMEM or
The function inserts a page into a shmem file at a specified offset.
The page can be a regular PAGE_SIZE page or a transparent huge page.
If there is something at the offset (page or swap), the function fails.
The function will be used by the next patch.
Originally-by: Vladimir Davydov
Workaround the limitation that shmem pages must be in memory in order
to be preserved by preventing them from being swapped out in the first
place. Do this by marking shmem pages associated with a PKRAM node
as unevictable.
Signed-off-by: Anthony Yznaga
---
mm/shmem.c | 2 ++
1 file changed, 2
The PKRAM super block is the starting point for restoring preserved
memory. By providing the super block to the new kernel at boot time,
preserved memory can be reserved and made available to be restored.
To point the kernel to the location of the super block, one passes
its pfn via the 'pkram'
Explicitly specify the mm to pass to shmem_insert_page() when
the pkram_stream is initialized rather than use the mm of the
current thread. This will allow for multiple kernel threads to
target the same mm when inserting pages in parallel.
Signed-off-by: Anthony Yznaga
---
The kdump kernel should not preserve or restore pages.
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/pkram.c b/mm/pkram.c
index 95e691382721..4d4d836fea53 100644
--- a/mm/pkram.c
+++ b/mm/pkram.c
@@ -1,4 +1,5 @@
//
To free all space utilized for preserved memory, one can write 0 to
/sys/kernel/pkram. This will destroy all PKRAM nodes that are not
currently being read or written.
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 39 ++-
1
Preserved memory is divided into nodes which can be saved and loaded
independently of each other. PKRAM nodes are kept on a list and
identified by unique names. Whenever a save operation is initiated by
calling pkram_prepare_save(), a new node is created and linked to the
list. When the save
The size of the memblock reserved array may be increased while preserved
pages are being reserved. When this happens, preserved pages that have
not yet been reserved are at risk for being clobbered when space for a
larger array is allocated.
When called from memblock_double_array(), a wrapper
This patchset implements preserved-over-kexec memory storage or PKRAM as a
method for saving memory pages of the currently executing kernel so that
they may be restored after kexec into a new kernel. The patches are adapted
from an RFC patchset sent out in 2013 by Vladimir Davydov [1]. They
Since page structs are used for linking PKRAM nodes and cleared
on boot, organize all PKRAM nodes into a list singly-linked by pfns
before reboot to facilitate the node list restore in the new kernel.
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
mm/pkram.c | 50
When loading a kernel for kexec, dynamically update the list of physical
ranges that are not to be used for storing preserved pages with the ranges
where kexec segments will be copied to on reboot. This ensures no pages
preserved after the new kernel has been loaded will reside in these ranges
on
This patch adds the ability to save arbitrary byte streams up to a
total length of one page to a PKRAM object using pkram_write() to be
restored later using pkram_read().
Originally-by: Vladimir Davydov
Signed-off-by: Anthony Yznaga
---
include/linux/pkram.h | 4 +++
mm/pkram.c |
After the page ranges in the pagetable have been reserved the pagetable
is no longer needed. Rather than free it during early boot by unreserving
page-sized blocks which can be inefficient when dealing with a large number
of blocks, wait until the page structs have been initialized and free them
This commit adds a shrinker so as to inform RCU when memory is scarce.
RCU responds by shifting into the same fast and inefficient mode that is
used in the presence of excessive numbers of RCU callbacks. RCU remains
in this state for one-tenth of a second, though this time window can be
extended
From: Alex Elder
Date: Mon, 4 May 2020 18:53:40 -0500
> It turns out that a workaround that performs a small DMA operation
> between retried attempts to stop a GSI channel is not needed for any
> supported hardware. The hardware quirk that required the extra DMA
> operation was fixed after IPA
Greg,
"Dr. Greg" writes:
> As an aside, for those who haven't spent the last 5+ years of their
> life working with this technology. SGX2 hardware platforms have the
> ability to allow unrestricted code execution in enclave context.
Unrestricted code execution even before loaded? Unrestricted
Hi all,
After merging the vfs tree, today's linux-next build (arm
multi_v7_defconfig) failed like this:
fs/eventfd.c: In function 'eventfd_read':
fs/eventfd.c:226:6: error: implicit declaration of function 'iov_iter_count'
[-Werror=implicit-function-declaration]
226 | if (iov_iter_count(to)
From: Alex Elder
Date: Mon, 4 May 2020 18:37:10 -0500
> A "delay mode" feature was put in place to work around a problem
> where packets could passed to the modem before it was ready to
> handle them. That problem no longer exists, and we don't need the
> workaround any more so get rid of it.
From: Alex Elder
Date: Mon, 4 May 2020 18:30:01 -0500
> Some special handling done during channel reset should only be done
> for IPA hardware version 3.5.1. This series generalizes the meaning
> of a flag passed to indicate special behavior, then has the special
> handling be used only when
From: Florian Fainelli
Date: Mon, 4 May 2020 13:18:06 -0700
> When ndo_get_phys_port_name() for the CPU port was added we introduced
> an early check for when the DSA master network device in
> dsa_master_ndo_setup() already implements ndo_get_phys_port_name(). When
> we perform the teardown
From: Alex Elder
Date: Mon, 4 May 2020 13:13:50 -0500
> Add an "iommus" property to the IPA node in "sdm845.dtsi". It is
> required because there are two regions of memory the IPA accesses
> through an SMMU. The next few patches define and map those regions.
>
> Signed-off-by: Alex Elder
On Wed, May 6, 2020 at 2:51 PM Thomas Gleixner wrote:
>
> Alexei,
>
> Thomas Gleixner writes:
> > Alexei Starovoitov writes:
> >>
> >> I'd like to pull
> >> commit 980737282232 ("capabilities: Introduce CAP_PERFMON to kernel
> >> and user space")
> >> into bpf-next to base my CAP_BPF work on
Hi Randy,
On Wed, 6 May 2020 17:13:40 -0700 Randy Dunlap wrote:
>
> Yes. It's here:
> https://lore.kernel.org/lkml/02b719674b031800b61e33c30b2e823183627c19.1587842122.git.jpoim...@redhat.com/
Thanks.
--
Cheers,
Stephen Rothwell
From: Ahmed Abdelsalam
Date: Mon, 4 May 2020 14:42:11 +
> The Segment Routing Header (SRH) which defines the SRv6 dataplane is defined
> in RFC8754.
>
> RFC8754 (section 4.1) defines the SR source node behavior which encapsulates
> packets into an outer IPv6 header and SRH. The SR source
On 5/6/20 5:15 PM, Ramuthevar,Vadivel MuruganX wrote:
> diff --git a/drivers/mtd/nand/raw/Kconfig b/drivers/mtd/nand/raw/Kconfig
> index a80a46bb5b8b..a026bec28f39 100644
> --- a/drivers/mtd/nand/raw/Kconfig
> +++ b/drivers/mtd/nand/raw/Kconfig
> @@ -457,6 +457,14 @@ config MTD_NAND_CADENCE
>
On 5/6/20 3:28 PM, Rafael Aquini wrote:
> diff --git a/Documentation/admin-guide/kernel-parameters.txt
> b/Documentation/admin-guide/kernel-parameters.txt
> index 7bc83f3d9bdf..75c02c1841b2 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++
From: Ramuthevar Vadivel Murugan
This patch adds support for the new NAND Flash Controller (NFC) IP
on Intel's Lightning Mountain (LGM) SoC.
DMA is used for burst data transfer operations; the DMA hardware
supports aligned 32-bit memory addresses and aligned data access by
default. DMA burst of 8
Hi Li,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on kvm/linux-next]
[also build test WARNING on next-20200505]
[cannot apply to tip/auto-latest linus/master linux/master v5.7-rc4]
[if your patch is applied to the wrong git tree, please drop us a note to help