On 20.05.2019 18:56, Christian Brauner wrote:
This adds the pidfd_open() syscall. It allows a caller to retrieve pollable
pidfds for a process which did not get created via CLONE_PIDFD, i.e. for a
process that was created via traditional fork()/clone() calls and is only
referenced by a PID:

int pidfd_open(pid_t pid, unsigned int flags);
On 22.05.2019 18:52, Christian Brauner wrote:
> This adds the close_range() syscall. It allows a task to efficiently close
> a range of file descriptors, up to all file descriptors of the calling task.
>
> The syscall came up in a recent discussion around the new mount API and
> making new file descriptor
The function should_resched() is equivalent to (!preempt_count() && need_resched()).
In a preemptible kernel, preempt_count is non-zero here because of the held vc->lock.
Signed-off-by: Konstantin Khlebnikov khlebni...@yandex-team.ru
---
 arch/powerpc/kvm/book3s_hv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
for that:

PREEMPT_DISABLE_OFFSET - offset after preempt_disable()
PREEMPT_LOCK_OFFSET    - offset after spin_lock()
SOFTIRQ_DISABLE_OFFSET - offset after local_bh_disable()
SOFTIRQ_LOCK_OFFSET    - offset after spin_lock_bh()
Signed-off-by: Konstantin Khlebnikov khlebni...@yandex-team.ru
This code is used only when CONFIG_PREEMPT=n and only in non-atomic context:
xen_in_preemptible_hcall is set only in privcmd_ioctl_hypercall().
Thus preempt_count is zero and should_resched() is equal to need_resched().
Signed-off-by: Konstantin Khlebnikov khlebni...@yandex-team.ru
---
drivers
On 15.07.2015 15:16, Eric Dumazet wrote:
On Wed, 2015-07-15 at 12:52 +0300, Konstantin Khlebnikov wrote:
These functions check should_resched() before unlocking the spinlock / enabling bh:
preempt_count is always non-zero there, so should_resched() always returns false.
cond_resched_lock() worked iff spin_needbreak is set.
On 13.04.2015 16:22, Rob Herring wrote:
On Wed, Apr 8, 2015 at 11:59 AM, Konstantin Khlebnikov
khlebni...@yandex-team.ru wrote:
Node 0 might be offline, as well as any other NUMA node;
in that case the kernel cannot handle memory allocation and crashes.
Signed-off-by: Konstantin Khlebnikov khlebni
On Thu, Apr 9, 2015 at 2:12 AM, Julian Calaby julian.cal...@gmail.com wrote:
Hi Konstantin,
On Thu, Apr 9, 2015 at 3:04 AM, Konstantin Khlebnikov
khlebni...@yandex-team.ru wrote:
On 08.04.2015 19:59, Konstantin Khlebnikov wrote:
Node 0 might be offline, as well as any other NUMA node;
in that case the kernel cannot handle memory allocation and crashes.
Example:
[    0.027133] ------------[ cut here ]------------
[    0.027938] kernel BUG at include/linux/gfp.h:322!
Node 0 might be offline, as well as any other NUMA node;
in that case the kernel cannot handle memory allocation and crashes.
Signed-off-by: Konstantin Khlebnikov khlebni...@yandex-team.ru
Fixes: 0c3f061c195c ("of: implement of_node_to_nid as a weak function")
---
 drivers/of/base.c | 2 +-
include
On Thu, Apr 9, 2015 at 2:07 AM, Nishanth Aravamudan
n...@linux.vnet.ibm.com wrote:
On 08.04.2015 [20:04:04 +0300], Konstantin Khlebnikov wrote:
On 08.04.2015 19:59, Konstantin Khlebnikov wrote:
Node 0 might be offline as well as any other numa node,
in this case kernel cannot handle memory