On Tue, 9 Oct 2018 17:32:16 -0700 John Hubbard wrote:
> > I'm not really understanding. Patch 3/3 changes just one infiniband
> > driver to use put_user_page(). But the changelogs here imply (to me)
> > that every user of get_user_pages() needs to be converted to
> > s/put_page/put_user_page/.
On Wed, Oct 10, 2018 at 06:23:24PM -0500, Eric W. Biederman wrote:
> Which is very problematic.
> a) In append-only logs like git we cannot edit history.
> Making it a maintainer's responsibility to edit the history, to do the
> impossible, is a problem.
>
> b) There are no responsibilities of
In _scif_prog_signal(), the boolean variable 'x100' is used to indicate
whether the MIC Coprocessor is X100. If 'x100' is true, the status
descriptor will be used to write the value to the destination. Otherwise, a
DMA pool will be allocated for this purpose. Specifically, if the DMA pool
is
Hi Geert,
On 10/10/18 19:59, Geert Uytterhoeven wrote:
Hi Michael,
On Wed, Oct 10, 2018 at 12:07 AM Michael Schmitz wrote:
I agree the bug is neither subtle nor recent, nor security relevant, and
will affect only a handful of users at best.
If you're worried about weakening the rules around
On 10/10/18 15:33, Eric S. Raymond wrote:
> Josh Triplett :
>>> The words removed by this patch are a political statement.
>>
>> Choosing not to say those words is a political statement.
>
> The situation is not symmetrical. Choosing the protected classes
> in the CoC is a *change* in its
On Tue, 2 Oct 2018 10:38:19 -0400 Masayoshi Mizuma
wrote:
> From: Naoya Horiguchi
>
> ...
>
> Fixes: f7f99100d8d9 ("mm: stop zeroing memory during allocation in vmemmap")
> Signed-off-by: Naoya Horiguchi
> Tested-by: Oscar Salvador
> Tested-by: Masayoshi Mizuma
This patch and [2/3]
Signed-off-by: Ryan Lee
---
Changes : Sort 'max98373_bde_gain_tlv' in reverse order.
Fixed wrong use of 'TLV_DB_SCALE_ITEM' - TLV_DB_SCALE_ITEM(min,
step, mute):
the max value was set in the min field and a negative value in the
step field before.
Signed-off-by: Ryan Lee
---
Changes : Sort 'max98373_dht_spkgain_min_tlv' in reverse order.
Fixed wrong use of 'TLV_DB_SCALE_ITEM' - TLV_DB_SCALE_ITEM(min,
step, mute):
the max value was set in the min field and a negative value in the
step field before.
Signed-off-by: Ryan Lee
---
Changes : Sort 'max98373_limiter_thresh_tlv' in reverse order.
Fixed wrong use of 'TLV_DB_SCALE_ITEM' - TLV_DB_SCALE_ITEM(min,
step, mute):
the max value was set in the min field and a negative value in the
step field before.
Signed-off-by: Ryan Lee
---
Changes : Sort 'max98373_digital_tlv' in reverse order.
Fixed wrong use of 'TLV_DB_SCALE_ITEM' - TLV_DB_SCALE_ITEM(min,
step, mute):
the max value was set in the min field and a negative value in the
step field before.
On 10/10/18 1:59 AM, Jan Kara wrote:
> On Tue 09-10-18 17:42:09, John Hubbard wrote:
>> On 10/8/18 5:14 PM, Andrew Morton wrote:
>>> Also, maintainability. What happens if someone now uses put_page() by
>>> mistake? Kernel fails in some mysterious fashion? How can we prevent
>>> this from
James Bottomley writes:
> Resend to show accumulated tags and also to add a third patch listing
> the TAB as the reporting point as a few people seem to want. If it
> gets the same level of support, I'll send it in with the other two.
There is also:
> Our Responsibilities
>
There have been complaints about the fact that some softirq vectors
consume a lot of CPU at the expense of the others' latency. A few
solutions have been proposed, mostly working around a fundamental design
of softirqs in Linux: a vector cannot interrupt another while softirqs
are executing.
Also
This pair of functions is implemented on top of spin_lock_bh(), which
is going to handle a softirq mask in order to apply fine-grained vector
disablement. The lock function is going to return the previously enabled
vector mask prior to the last call to local_bh_disable(), following a
similar model to
From: Frederic Weisbecker
This pair of functions is implemented on top of spin_[un]lock_bh(), which
is going to handle a softirq mask in order to apply fine-grained vector
disablement. The lock function is going to return the previously enabled
vector mask prior to the last call to
Now that all callers are ready, we can push down the softirq enabled
mask to the core from callers such as spin_lock_bh(), local_bh_disable(),
rcu_read_lock_bh(), etc...
It is applied to the CPU vector enabled mask on __local_bh_disable_ip()
which then returns the old value to be restored on
Now we can rely on the vector enabled bits to know if some vector is
disabled. Hence we can also now drive the softirq offset on top of it.
As a result, the softirq offset doesn't need to nest anymore, as the vector
enable mask does it on the stack on its behalf:
// Start with
From: Frederic Weisbecker
This pair of functions is implemented on top of __local_bh_disable_ip(),
which is going to handle a softirq mask in order to apply fine-grained
vector disablement. The lock function is going to return the previously
enabled vector mask prior to the last call to
From: Frederic Weisbecker
Tasklets and net-rx vectors don't quite get along. If one is interrupted
by another, we may run into a nasty spin_lock recursion:
[ 135.427198] Call Trace:
[ 135.429650]
[ 135.431690] dump_stack+0x67/0x95
[ 135.435024]
On 2018-10-10 3:08 p.m., Bjorn Helgaas wrote:
> Applied to pci/misc for v4.20, thanks!
>
> I split this into two patches so the important change doesn't get lost in
> the noise of the SWITCHTEC_QUIRK() addition:
>
> - Add the SWITCHTEC_QUIRK() macro, but don't change anything else
> -
From: Frederic Weisbecker
Make do_softirq() re-entrant and allow a vector, being either processed
or disabled, to be interrupted by another vector. This way a vector
won't be able to monopolize the CPU for a long while at the expense of
the others that may rely on some predictable latency,
From: Frederic Weisbecker
Disable a vector while it is being processed. This prepares for softirq
re-entrancy with an obvious single constraint: a vector can't be
interrupted by itself.
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Cc:
From: Frederic Weisbecker
So far, disabling softirqs and processing their callbacks have been
handled the same way: increment the softirq offset,
trace softirqs off, preempt off, etc...
The only difference remains in the way the preempt count is incremented:
by 1 for
The current softirq_count() layout is designed as follows:
* Serving the softirq is done under SOFTIRQ_OFFSET. It makes the
softirq_count() odd and, since it can't nest, due to softirq serving
not being re-entrant, it's fine to differentiate it from softirq
disablement, which uses even values.
This pair of functions is implemented on top of spin_[un]lock_bh(), which
is going to handle a softirq mask in order to apply fine-grained vector
disablement. The lock function is going to return the previously enabled
vector mask prior to the last call to local_bh_disable(), following a
similar model
In order to be able to disable softirqs at the vector level, we'll need
to be able to:
1) Pass as a parameter the vector mask we want to disable. By default it's
going to be all of them (SOFTIRQ_ALL_MASK) to keep the current
behaviour. Each callsite will later need to be audited in the long
This pair of functions is implemented on top of __local_bh_disable_ip(),
which is going to handle a softirq mask in order to apply fine-grained
vector disablement. The lock function is going to return the previously
enabled vector mask prior to the last call to local_bh_disable(),
following a similar
The vector pending bits will soon need to be opposed to the vector
enabled bits. As such, a plain reset of the whole pending mask is not
going to be suitable anymore. Instead we'll need to be able to clear
specific bits. Update the relevant API to allow that.
Signed-off-by: Frederic Weisbecker
Cc: Ingo
This pair of functions is implemented on top of local_bh_disable(), which
is going to handle a softirq mask in order to apply fine-grained vector
disablement. The lock function is going to return the previously enabled
vector mask prior to the last call to local_bh_disable(), following a
similar model
This pair of functions is implemented on top of spin_lock_bh(), which
is going to handle a softirq mask in order to apply fine-grained vector
disablement. The lock function is going to return the previously enabled
vector mask prior to the last call to local_bh_disable(), following a
similar model to
We are going to extend the softirq bits with an enabled vector mask.
Provide the field with a more generic name to later lay out the pending
states on the lower bits and the enabled states on the higher bits.
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Sebastian Andrzej Siewior
Cc:
From: Frederic Weisbecker
Using the bottom-half masking APIs defined in linux/bottom-half.h won't
be possible without passing the relevant softirq vectors that are
currently defined in linux/interrupt.h
Yet we can't include linux/interrupt.h from linux/bottom-half.h due to
circular
From: Frederic Weisbecker
Check the enabled vector bits on softirq processing. Those that are
pending but disabled will be ignored and handled by the interrupted code
that disabled those vectors.
No effective change yet as the core isn't yet ready for softirq
re-entrancy. All softirqs should be
This pair of functions is implemented on top of local_bh_disable(), which
is going to handle a softirq mask in order to apply fine-grained vector
disablement. The lock function is going to return the previously enabled
vector mask prior to the last call to local_bh_disable(), following a
similar model
The bottom half masking APIs have become interestingly confusing with all
these flavours:
local_bh_enable()
_local_bh_enable()
local_bh_enable_ip()
__local_bh_enable_ip()
_local_bh_enable() is an exception here because it's the only
Disabling the softirqs is currently an all-or-nothing operation: either
all softirqs are enabled or none of them. However we plan to introduce
per-vector granularity of this ability, to improve latency response and
make each softirq vector interruptible by the others.
The first step carried here
As we plan to narrow down local_bh_disable() to a per-vector disablement
granularity, a shortcut can be handy for code that wants to disable all
of them and doesn't care about carrying the bh enabled mask state prior
to the call.
(TODO: check that it is called while bh are ALL enabled because
This pair of functions is implemented on top of local_bh_disable(), which
is going to handle a softirq mask in order to apply fine-grained vector
disablement. The lock function is going to return the previously enabled
vector mask prior to the last call to local_bh_disable(), following a
similar model
This function is implemented on top of spin_lock_bh(), which is going to
handle a softirq mask in order to apply fine-grained vector disablement.
The lock function is going to return the previously enabled vector mask
prior to the last call to local_bh_disable(), following a similar model
to that of
This pair of functions is implemented on top of spin_lock_bh(), which
is going to handle a softirq mask in order to apply fine-grained vector
disablement. The lock function is going to return the previously enabled
vector mask prior to the last call to local_bh_disable(), following a
similar model to
From: Frederic Weisbecker
All softirqs must be set enabled on boot.
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Linus Torvalds
Cc: David S. Miller
Cc: Mauro Carvalho Chehab
Cc: Paul E. McKenney
---
From: Frederic Weisbecker
Use the subsystem as the prefix to name the __softirq_data accessors.
They are going to be extended and want a more greppable and standard
naming scheme.
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Cc: Peter
The future extensions of this API are going to depend on the vector
definitions. So order the code accordingly.
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Linus Torvalds
Cc: David S. Miller
Cc: Mauro Carvalho
This reverts commit 9aee5f8a7e30330d0a8f4c626dc924ca5590aba5.
We are going to need the 16 high bits above in order to implement
a softirq enable mask. x86 is the only architecture that doesn't use
unsigned int to implement softirq_pending.
Signed-off-by: Frederic Weisbecker
Cc: Ingo Molnar
Cc:
This change makes it so that we call the asynchronous probe routines on a
CPU local to the device node. By doing this we should be able to improve
our initialization time significantly as we can avoid having to access the
device from a remote node which may introduce higher latency.
This change makes it so that we use the device specific version of the
async_schedule commands to defer various tasks related to power management.
By doing this we should see a slight improvement in performance as any
device that is sensitive to latency/locality in the setup will now be
This patch introduces four new variants of the async_schedule_ functions
that allow scheduling on a specific NUMA node.
The first two functions are async_schedule_near and
async_schedule_near_domain which end up mapping to async_schedule and
async_schedule_domain but provide NUMA node specific
This change makes it so that we probe devices asynchronously instead of the
driver. This results in us seeing the same behavior if the device is
registered before the driver or after. This way we can avoid serializing
the initialization should the driver not be loaded until after the devices
have
This patch provides a new function queue_work_node which is meant to
schedule work on a "random" CPU of the requested NUMA node. The main
motivation for this is to help assist asynchronous init to better improve
boot times for devices that are local to a specific node.
For now we just default to
This patch set provides functionality that will help to improve the
locality of the async_schedule calls used to provide deferred
initialization.
This patch set originally started out with me focused on just the one call
to async_schedule_domain in the nvdimm tree that was being used to
defer the