Re: [PATCH v4 2/3] mm: introduce put_user_page*(), placeholder versions

2018-10-10 Thread Andrew Morton
On Tue, 9 Oct 2018 17:32:16 -0700 John Hubbard wrote: > > I'm not really understanding. Patch 3/3 changes just one infiniband > > driver to use put_user_page(). But the changelogs here imply (to me) > > that every user of get_user_pages() needs to be converted to > > s/put_page/put_user_page/.
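
The conversion pattern under discussion is a call-site substitution for pages pinned with get_user_pages(); a minimal before/after sketch with hypothetical local names, not taken from the infiniband patch itself:

    /* before: pages obtained from get_user_pages() released with put_page() */
    for (i = 0; i < npages; i++)
            put_page(pages[i]);

    /* after: the same release path goes through the new wrapper */
    for (i = 0; i < npages; i++)
            put_user_page(pages[i]);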

Re: [Ksummit-discuss] [PATCH v2 0/3] code of conduct fixes

2018-10-10 Thread Al Viro
On Wed, Oct 10, 2018 at 06:23:24PM -0500, Eric W. Biederman wrote: > Which is very problematic. > a) In append-only logs like git we cannot edit history. > Making it a maintainer's responsibility to edit the history, to do the > impossible is a problem. > > b) There are no responsibilities of

[PATCH] misc: mic: fix a DMA pool free failure

2018-10-10 Thread Wenwen Wang
In _scif_prog_signal(), the boolean variable 'x100' is used to indicate whether the MIC Coprocessor is X100. If 'x100' is true, the status descriptor will be used to write the value to the destination. Otherwise, a DMA pool will be allocated for this purpose. Specifically, if the DMA pool is
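
For background on the failure mode named in the subject: dma_pool_free() must be passed the same CPU address and DMA handle that dma_pool_alloc() returned. A generic sketch of the correct pairing, illustrative only and not the driver's actual code:

    dma_addr_t dma_handle;
    void *vaddr;

    vaddr = dma_pool_alloc(pool, GFP_KERNEL, &dma_handle);
    if (!vaddr)
            return -ENOMEM;
    /* ... write the signal value through vaddr ... */
    dma_pool_free(pool, vaddr, dma_handle);  /* free with the original pair */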

Re: [PATCH AUTOSEL 4.18 24/58] Input: atakbd - fix Atari CapsLock behaviour

2018-10-10 Thread Michael Schmitz
Hi Geert, On 10/10/18 19:59, Geert Uytterhoeven wrote: Hi Michael, On Wed, Oct 10, 2018 at 12:07 AM Michael Schmitz wrote: I agree the bug is neither subtle nor recent, not security relevant and will affect only a handful of users at best. If you're worried about weakening the rules around

Re: [Ksummit-discuss] [PATCH] code-of-conduct: Remove explicit list of discrimination factors

2018-10-10 Thread Frank Rowand
On 10/10/18 15:33, Eric S. Raymond wrote: > Josh Triplett : >>> The words removed by this patch are a political statement. >> >> Choosing not to say those words is a political statement. > > The situation is not symmetrical. Choosing the protected classes > in the CoC is a *change* in its

Re: [PATCH v3 1/3] mm: zero remaining unavailable struct pages

2018-10-10 Thread Andrew Morton
On Tue, 2 Oct 2018 10:38:19 -0400 Masayoshi Mizuma wrote: > From: Naoya Horiguchi > > ... > > Fixes: f7f99100d8d9 ("mm: stop zeroing memory during allocation in vmemmap") > Signed-off-by: Naoya Horiguchi > Tested-by: Oscar Salvador > Tested-by: Masayoshi Mizuma This patch and [2/3]

[PATCH 3/4] ASoC: max98373: Sort max98373_bde_gain_tlv in reverse order

2018-10-10 Thread Ryan Lee
Signed-off-by: Ryan Lee --- Changes: Sort 'max98373_bde_gain_tlv' in reverse order. Fixed a wrong use of 'TLV_DB_SCALE_ITEM' - TLV_DB_SCALE_ITEM(min, step, mute): the max value was set in the min field and a negative value was set in the step field before.

[PATCH 4/4] ASoC: max98373: Sort DHT Rot Pnt Volume in reverse order

2018-10-10 Thread Ryan Lee
Signed-off-by: Ryan Lee --- Changes: Sort 'max98373_dht_spkgain_min_tlv' in reverse order. Fixed a wrong use of 'TLV_DB_SCALE_ITEM' - TLV_DB_SCALE_ITEM(min, step, mute): the max value was set in the min field and a negative value was set in the step field before.

[PATCH 2/4] ASoC: max98373: Sort BDE Limiter Thresh Volume in reverse order

2018-10-10 Thread Ryan Lee
Signed-off-by: Ryan Lee --- Changes: Sort 'max98373_limiter_thresh_tlv' in reverse order. Fixed a wrong use of 'TLV_DB_SCALE_ITEM' - TLV_DB_SCALE_ITEM(min, step, mute): the max value was set in the min field and a negative value was set in the step field before.

[PATCH 1/4] ASoC: max98373: Sort Digital Volume in reverse order

2018-10-10 Thread Ryan Lee
Signed-off-by: Ryan Lee --- Changes: Sort 'max98373_digital_tlv' in reverse order. Fixed a wrong use of 'TLV_DB_SCALE_ITEM' - TLV_DB_SCALE_ITEM(min, step, mute): the max value was set in the min field and a negative value was set in the step field before.
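
For reference, TLV_DB_SCALE_ITEM() and DECLARE_TLV_DB_SCALE() take (min, step, mute), where min is the lowest gain in 0.01 dB units and step is positive. A sketch of the corrected usage with made-up values, not the driver's actual numbers:

    #include <sound/tlv.h>

    /* min = -63.50 dB, step = 0.50 dB, mute bit set; values are illustrative */
    static const DECLARE_TLV_DB_SCALE(example_digital_tlv, -6350, 50, 1);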

Re: [PATCH v4 2/3] mm: introduce put_user_page*(), placeholder versions

2018-10-10 Thread John Hubbard
On 10/10/18 1:59 AM, Jan Kara wrote: > On Tue 09-10-18 17:42:09, John Hubbard wrote: >> On 10/8/18 5:14 PM, Andrew Morton wrote: >>> Also, maintainability. What happens if someone now uses put_page() by >>> mistake? Kernel fails in some mysterious fashion? How can we prevent >>> this from

Re: [Ksummit-discuss] [PATCH v2 0/3] code of conduct fixes

2018-10-10 Thread Eric W. Biederman
James Bottomley writes: > Resend to show accumulated tags and also to add a third patch listing > the TAB as the reporting point as a few people seem to want. If it > gets the same level of support, I'll send it in with the other two. There is also: > Our Responsibilities >

[RFC PATCH 00/30] softirq: Make softirqs soft-interruptible (+ per vector disablement)

2018-10-10 Thread Frederic Weisbecker
There have been complaints around the fact that some softirq vectors consume a lot of CPU at the expense of others' latency. A few solutions have been proposed, mostly working around a fundamental design of softirqs in Linux: a vector cannot interrupt another while softirqs are executing. Also

[RFC PATCH 19/30] diva: Prepare diva_os_enter_spin_lock() for handling softirq mask

2018-10-10 Thread Frederic Weisbecker
This pair of functions is implemented on top of spin_lock_bh(), which is going to handle a softirq mask in order to apply fine-grained vector disablement. The lock function is going to return the vector enabled mask as it was prior to the last call to local_bh_disable(), following a similar model to

[RFC PATCH 22/30] seqlock: Prepare write_seq[un]lock_bh() for handling softirq mask

2018-10-10 Thread Frederic Weisbecker
From: Frederic Weisbecker This pair of functions is implemented on top of spin_[un]lock_bh(), which is going to handle a softirq mask in order to apply fine-grained vector disablement. The lock function is going to return the vector enabled mask as it was prior to the last call to

[RFC PATCH 25/30] softirq: Push down softirq mask to __local_bh_disable_ip()

2018-10-10 Thread Frederic Weisbecker
Now that all callers are ready, we can push down the softirq enabled mask to the core from callers such as spin_lock_bh(), local_bh_disable(), rcu_read_lock_bh(), etc. It is applied to the CPU vector enabled mask in __local_bh_disable_ip(), which then returns the old value to be restored on

[RFC PATCH 26/30] softirq: Increment the softirq offset on top of enabled bits

2018-10-10 Thread Frederic Weisbecker
Now we can rely on the vector enabled bits to know if some vector is disabled. Hence we can also now drive the softirq offset on top of it. As a result, the softirq offset doesn't need to nest anymore as the vector enable mask does it on the stack on its behalf: // Start with

[RFC PATCH 23/30] rwlock: Prepare write_[un]lock_bh() for handling softirq mask

2018-10-10 Thread Frederic Weisbecker
From: Frederic Weisbecker This pair of functions is implemented on top of __local_bh_disable_ip(), which is going to handle a softirq mask in order to apply fine-grained vector disablement. The lock function is going to return the vector enabled mask as it was prior to the last call to

[RFC PATCH 30/30] softirq: Tasklet/net-rx fixup

2018-10-10 Thread Frederic Weisbecker
From: Frederic Weisbecker Tasklets and net-rx vectors don't quite get along. If one is interrupted by another, we may run into a nasty spin_lock recursion: [ 135.427198] Call Trace: [ 135.429650] [ 135.431690] dump_stack+0x67/0x95 [ 135.435024]

Re: [PATCH v2] PCI: Fix Switchtec DMA aliasing quirk dmesg noise

2018-10-10 Thread Logan Gunthorpe
On 2018-10-10 3:08 p.m., Bjorn Helgaas wrote: > Applied to pci/misc for v4.20, thanks! > > I split this into two patches so the important change doesn't get lost in > the noise of the SWITCHTEC_QUIRK() addition: > > - Add the SWITCHTEC_QUIRK() macro, but don't change anything else > -

[RFC PATCH 29/30] softirq: Make softirq processing softinterruptible

2018-10-10 Thread Frederic Weisbecker
From: Frederic Weisbecker Make do_softirq() re-entrant and allow a vector, whether being processed or disabled, to be interrupted by another vector. This way a vector won't be able to monopolize the CPU for a long while at the expense of the others that may rely on some predictable latency,

[RFC PATCH 28/30] softirq: Disable vector on execution

2018-10-10 Thread Frederic Weisbecker
From: Frederic Weisbecker Disable a vector while it is being processed. This prepares for softirq re-entrancy with an obvious single constraint: a vector can't be interrupted by itself. Signed-off-by: Frederic Weisbecker Cc: Ingo Molnar Cc: Sebastian Andrzej Siewior Cc: Thomas Gleixner Cc:

[RFC PATCH 24/30] softirq: Introduce Local_bh_enter/exit()

2018-10-10 Thread Frederic Weisbecker
From: Frederic Weisbecker So far, softirq disablement and the processing of softirq callbacks have been handled the same way: increment the softirq offset, trace softirqs off, preempt off, etc. The only difference remains in the way the preempt count is incremented: by 1 for

[RFC PATCH 27/30] softirq: Swap softirq serving VS disable on preempt mask layout

2018-10-10 Thread Frederic Weisbecker
The current softirq_count() layout is designed as follows: * Serving the softirq is done under SOFTIRQ_OFFSET. It makes the softirq_count() odd, and since it can't nest, due to softirq serving not being re-entrant, it's fine to differentiate it from softirq disablement, which uses even values.

[RFC PATCH 20/30] tg3: Prepare tg3_full_[un]lock() for handling softirq mask

2018-10-10 Thread Frederic Weisbecker
This pair of functions is implemented on top of spin_[un]lock_bh(), which is going to handle a softirq mask in order to apply fine-grained vector disablement. The lock function is going to return the vector enabled mask as it was prior to the last call to local_bh_disable(), following a similar model

[RFC PATCH 18/30] softirq: Prepare local_bh_disable() for handling softirq mask

2018-10-10 Thread Frederic Weisbecker
In order to be able to disable softirqs at the vector level, we'll need to be able to: 1) Pass as a parameter the vector mask we want to disable. By default it's going to be all of them (SOFTIRQ_ALL_MASK) to keep the current behaviour. Each call site will later need to be audited in the long
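
A sketch of what a converted call site might look like under the interface described here, assuming local_bh_disable() takes the vector mask and hands back the previously enabled mask for the matching enable call; names and exact signatures are assumptions, not the patch itself:

    unsigned int prev;

    prev = local_bh_disable(SOFTIRQ_ALL_MASK);  /* disable every vector, as today */
    /* ... bh-protected section ... */
    local_bh_enable(prev);                      /* restore the saved vector mask */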

[RFC PATCH 21/30] locking: Prepare spin_lock_bh() for handling softirq mask

2018-10-10 Thread Frederic Weisbecker
This pair of functions is implemented on top of __local_bh_disable_ip(), which is going to handle a softirq mask in order to apply fine-grained vector disablement. The lock function is going to return the vector enabled mask as it was prior to the last call to local_bh_disable(), following a similar

[RFC PATCH 05/30] softirq: Convert softirq_pending_set() to softirq_pending_nand()

2018-10-10 Thread Frederic Weisbecker
The vector pending bits will soon need to be opposed to the vector enabled bits. As such, a plain reset of the pending mask is not going to be needed anymore. Instead we'll need to be able to clear specific bits. Update the relevant API to allow that. Signed-off-by: Frederic Weisbecker Cc: Ingo

[RFC PATCH 12/30] rcu: Prepare rcu_read_[un]lock_bh() for handling softirq mask

2018-10-10 Thread Frederic Weisbecker
This pair of functions is implemented on top of local_bh_disable(), which is going to handle a softirq mask in order to apply fine-grained vector disablement. The lock function is going to return the vector enabled mask as it was prior to the last call to local_bh_disable(), following a similar model

[RFC PATCH 11/30] net: Prepare netif_tx_lock_bh/netif_tx_unlock_bh() for handling softirq mask

2018-10-10 Thread Frederic Weisbecker
This pair of functions is implemented on top of spin_lock_bh(), which is going to handle a softirq mask in order to apply fine-grained vector disablement. The lock function is going to return the vector enabled mask as it was prior to the last call to local_bh_disable(), following a similar model to

[RFC PATCH 02/30] arch/softirq: Rename softirq_pending fields to softirq_data

2018-10-10 Thread Frederic Weisbecker
We are going to extend the softirq bits with an enabled vector mask. Provide the field with a more generic name to later lay out the pending states on the lower bits and the enabled states on the higher bits. Signed-off-by: Frederic Weisbecker Cc: Ingo Molnar Cc: Sebastian Andrzej Siewior Cc:
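
The layout sketched here, pending states on the lower bits and enabled states on the higher bits, could be pictured roughly as below; the macro and helper names are hypothetical, for illustration only:

    #define SOFTIRQ_PENDING_MASK   0x0000ffffU  /* low 16 bits: pending vectors */
    #define SOFTIRQ_ENABLED_SHIFT  16           /* high 16 bits: enabled vectors */

    static inline unsigned int softirq_pending_bits(unsigned int data)
    {
            return data & SOFTIRQ_PENDING_MASK;
    }

    static inline unsigned int softirq_enabled_bits(unsigned int data)
    {
            return data >> SOFTIRQ_ENABLED_SHIFT;
    }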

[RFC PATCH 08/30] softirq: Move vectors bits to bottom_half.h

2018-10-10 Thread Frederic Weisbecker
From: Frederic Weisbecker Using the bottom-half masking APIs defined in linux/bottom_half.h won't be possible without passing the relevant softirq vectors that are currently defined in linux/interrupt.h. Yet we can't include linux/interrupt.h from linux/bottom_half.h due to circular

[RFC PATCH 10/30] softirq: Check enabled bits on the softirq loop

2018-10-10 Thread Frederic Weisbecker
From: Frederic Weisbecker Check the enabled vector bits on softirq processing. Those that are pending but disabled will be ignored and handled by the interrupted code that disabled those vectors. No effective change yet as the core isn't yet ready for softirq re-entrancy. All softirqs should be

[RFC PATCH 13/30] net: Prepare tcp_get_md5sig_pool() for handling softirq mask

2018-10-10 Thread Frederic Weisbecker
This pair of functions is implemented on top of local_bh_disable(), which is going to handle a softirq mask in order to apply fine-grained vector disablement. The lock function is going to return the vector enabled mask as it was prior to the last call to local_bh_disable(), following a similar model

[RFC PATCH 07/30] softirq: Rename _local_bh_enable() to local_bh_enable_no_softirq()

2018-10-10 Thread Frederic Weisbecker
The bottom half masking APIs have become interestingly confusing with all these flavours: local_bh_enable(), _local_bh_enable(), local_bh_enable_ip(), __local_bh_enable_ip(). _local_bh_enable() is an exception here because it's the only

[RFC PATCH 06/30] softirq: Introduce disabled softirq vectors bits

2018-10-10 Thread Frederic Weisbecker
Disabling the softirqs is currently an all-or-nothing operation: either all softirqs are enabled or none of them. However we plan to introduce per-vector granularity of this ability to improve latency response and make each softirq vector interruptible by the others. The first step carried here

[RFC PATCH 14/30] softirq: Introduce local_bh_disable_all()

2018-10-10 Thread Frederic Weisbecker
As we plan to narrow down local_bh_disable() to a per-vector disablement granularity, a shortcut can be handy for code that wants to disable all of them and doesn't care about carrying the bh enabled mask state prior to the call. (TODO: check that it is called while bh are ALL enabled because

[RFC PATCH 16/30] net: Prepare nf_log_buf_open() for handling softirq mask

2018-10-10 Thread Frederic Weisbecker
This pair of functions is implemented on top of local_bh_disable(), which is going to handle a softirq mask in order to apply fine-grained vector disablement. The lock function is going to return the vector enabled mask as it was prior to the last call to local_bh_disable(), following a similar model

[RFC PATCH 17/30] isdn: Prepare isdn_net_get_locked_lp() for handling softirq mask

2018-10-10 Thread Frederic Weisbecker
This function is implemented on top of spin_lock_bh(), which is going to handle a softirq mask in order to apply fine-grained vector disablement. The lock function is going to return the vector enabled mask as it was prior to the last call to local_bh_disable(), following a similar model to that of

[RFC PATCH 15/30] net: Prepare [un]lock_sock_fast() for handling softirq mask

2018-10-10 Thread Frederic Weisbecker
This pair of functions is implemented on top of spin_lock_bh(), which is going to handle a softirq mask in order to apply fine-grained vector disablement. The lock function is going to return the vector enabled mask as it was prior to the last call to local_bh_disable(), following a similar model to

[RFC PATCH 09/30] x86: Init softirq enabled field

2018-10-10 Thread Frederic Weisbecker
From: Frederic Weisbecker All softirqs must be set enabled on boot. Signed-off-by: Frederic Weisbecker Cc: Ingo Molnar Cc: Sebastian Andrzej Siewior Cc: Thomas Gleixner Cc: Peter Zijlstra Cc: Linus Torvalds Cc: David S. Miller Cc: Mauro Carvalho Chehab Cc: Paul E. McKenney ---

[RFC PATCH 04/30] softirq: Normalize softirq_pending naming scheme

2018-10-10 Thread Frederic Weisbecker
From: Frederic Weisbecker Use the subsystem as the prefix to name the __softirq_data accessors. They are going to be extended and want a more greppable and standard naming scheme. Signed-off-by: Frederic Weisbecker Cc: Ingo Molnar Cc: Sebastian Andrzej Siewior Cc: Thomas Gleixner Cc: Peter

[RFC PATCH 03/30] softirq: Implement local_softirq_pending() below softirq vector definition

2018-10-10 Thread Frederic Weisbecker
The future extensions of this API are going to depend on the vector definitions. So order the code accordingly. Signed-off-by: Frederic Weisbecker Cc: Ingo Molnar Cc: Sebastian Andrzej Siewior Cc: Thomas Gleixner Cc: Peter Zijlstra Cc: Linus Torvalds Cc: David S. Miller Cc: Mauro Carvalho

[RFC PATCH 01/30] x86: Revert "x86/irq: Demote irq_cpustat_t::__softirq_pending to u16"

2018-10-10 Thread Frederic Weisbecker
This reverts commit 9aee5f8a7e30330d0a8f4c626dc924ca5590aba5. We are going to need the 16 high bits above in order to implement a softirq enable mask. x86 is the only architecture that doesn't use unsigned int to implement softirq_pending. Signed-off-by: Frederic Weisbecker Cc: Ingo Molnar Cc:

[workqueue/driver-core PATCH v2 4/5] driver core: Attach devices on CPU local to device node

2018-10-10 Thread Alexander Duyck
This change makes it so that we call the asynchronous probe routines on a CPU local to the device node. By doing this we should be able to improve our initialization time significantly as we can avoid having to access the device from a remote node which may introduce higher latency.

[workqueue/driver-core PATCH v2 5/5] PM core: Use new async_schedule_dev command

2018-10-10 Thread Alexander Duyck
This change makes it so that we use the device specific version of the async_schedule commands to defer various tasks related to power management. By doing this we should see a slight improvement in performance as any device that is sensitive to latency/locality in the setup will now be

[workqueue/driver-core PATCH v2 2/5] async: Add support for queueing on specific NUMA node

2018-10-10 Thread Alexander Duyck
This patch introduces four new variants of the async_schedule_ functions that allow scheduling on a specific NUMA node. The first two functions are async_schedule_near and async_schedule_near_domain, which end up mapping to async_schedule and async_schedule_domain but provide NUMA node specific
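
A brief usage sketch of the NUMA-aware variant described above, assuming it keeps the cookie-returning shape of async_schedule() with an extra node argument; the callback and helper here are made up for illustration:

    #include <linux/async.h>
    #include <linux/device.h>

    static void example_probe_work(void *data, async_cookie_t cookie)
    {
            struct device *dev = data;

            /* device-local initialization runs on a CPU close to dev's node */
            dev_dbg(dev, "async probe near node %d\n", dev_to_node(dev));
    }

    static void example_kick_probe(struct device *dev)
    {
            async_schedule_near(example_probe_work, dev, dev_to_node(dev));
            async_synchronize_full();   /* wait for outstanding async work */
    }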

[workqueue/driver-core PATCH v2 3/5] driver core: Probe devices asynchronously instead of the driver

2018-10-10 Thread Alexander Duyck
This change makes it so that we probe devices asynchronously instead of the driver. This results in us seeing the same behavior if the device is registered before the driver or after. This way we can avoid serializing the initialization should the driver not be loaded until after the devices have

[workqueue/driver-core PATCH v2 1/5] workqueue: Provide queue_work_node to queue work near a given NUMA node

2018-10-10 Thread Alexander Duyck
This patch provides a new function, queue_work_node, which is meant to schedule work on a "random" CPU of the requested NUMA node. The main motivation for this is to help asynchronous init improve boot times for devices that are local to a specific node. For now we just default to
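
A minimal usage sketch, assuming a (node, workqueue, work) argument order for the new helper; the work item and handler are hypothetical:

    #include <linux/workqueue.h>

    static void example_handler(struct work_struct *work)
    {
            /* runs on some CPU of the requested node, or elsewhere as a fallback */
    }

    static DECLARE_WORK(example_work, example_handler);

    static void example_queue_near(int node)
    {
            queue_work_node(node, system_unbound_wq, &example_work);
    }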

[workqueue/driver-core PATCH v2 0/5] Add NUMA aware async_schedule calls

2018-10-10 Thread Alexander Duyck
This patch set provides functionality that will help to improve the locality of the async_schedule calls used to provide deferred initialization. This patch set originally started out with me focused on just the one call to async_schedule_domain in the nvdimm tree that was being used to defer the
