Hi,
Along the same lines as the previous work. Details are in patch 1.
Patch 2 is an add-on found while eyeballing the code. As with the previous
patches, this has survived LTP testcases and various workloads.
Thanks,
Davidlohr
Davidlohr Bueso (2):
fs/epoll: loosen irq safety in ep_poll()
fs
was gone.
v3: Revise the commit log and comment again.
v2: Add customer testing results and remove wording that may cause
confusion.
Signed-off-by: Waiman Long
Reviewed-by: Davidlohr Bueso
On Wed, 25 Jul 2018, Andrew Morton wrote:
On Wed, 25 Jul 2018 11:56:20 -0700 Davidlohr Bueso wrote:
... 'tis easier on the eye.
true, but.
+#else
+#define ep_busy_loop(ep, nonblock) do { } while (0)
+#define ep_reset_busy_poll_napi_id(ep) do { } while (0)
+#define
... 'tis easier on the eye.
Signed-off-by: Davidlohr Bueso
---
fs/eventpoll.c | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 1b1abc461fc0..0ab82d0d4e02 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -391,7 +391,6
On Wed, 18 Jul 2018, Waiman Long wrote:
The key here is that we don't want other incoming readers to observe
that there are waiters in the wait queue, and hence be forced into the
slowpath, until the single waiter in the queue is sure that it will
probably need to go to sleep if there is a writer.
On Sat, 21 Jul 2018, Peter Zijlstra wrote:
On Sat, Jul 21, 2018 at 10:21:20AM -0700, Davidlohr Bueso wrote:
On Fri, 20 Jul 2018, Andrew Morton wrote:
> We could open-code it locally. Add a couple of
> WARN_ON_ONCE(irqs_disabled())? That might need re-benchmarking with
> Xen but su
On Fri, 20 Jul 2018, Andrew Morton wrote:
We could open-code it locally. Add a couple of
WARN_ON_ONCE(irqs_disabled())? That might need re-benchmarking with
Xen but surely just reading the thing isn't too expensive?
We could also pass on the responsibility to lockdep and just use
On Fri, 20 Jul 2018, Andrew Morton wrote:
Did you try measuring it on bare hardware?
I did and wasn't expecting much difference.
For a 2-socket 40-core (HT) IvyBridge on a few workloads. Unfortunately
I don't have a Xen environment, and the results for Xen I do have (which numbers
are in
On Fri, 20 Jul 2018, Andrew Morton wrote:
On Fri, 20 Jul 2018 10:29:54 -0700 Davidlohr Bueso wrote:
Hi,
Both patches replace saving+restoring interrupts when taking the
ep->lock (now the waitqueue lock), with just disabling local irqs.
This shows immediate performance benefits in patc
On Wed, 18 Jul 2018, Manfred Spraul wrote:
Hello Davidlohr,
On 07/17/2018 07:26 AM, Davidlohr Bueso wrote:
In order for load/store tearing prevention to work, _all_ accesses to
the variable in question need to be done with the READ_ONCE() and
WRITE_ONCE() macros. Ensure everyone does so for the q->status
variable for semtimedop().
when releasing the file, but this also complies
with the above.
Signed-off-by: Davidlohr Bueso
---
fs/eventpoll.c | 14 ++
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 2247769eb941..1b1abc461fc0 100644
--- a/fs/eventpoll.c
+++ b
that is epoll intensive
running on a single Xen DomU.
1 Job:   7500 -->  8800 enq/s (+17%)
2 Jobs: 14000 --> 15200 enq/s (+8%)
3 Jobs: 20500 --> 22300 enq/s (+8%)
4 Jobs: 25000 --> 28000 enq/s (+8-12%)
Signed-off-by: Davidlohr Bueso
---
fs/eventpoll.c | 9 -
der the hood: nginx and libevent benchmarks.
Details are in the individual patches.
Applies on top of mmotd.
Thanks!
Davidlohr Bueso (2):
fs/epoll: loosen irq safety in ep_scan_ready_list()
fs/epoll: loosen irq safety in epoll_insert() and epoll_remove()
fs/eventpoll.c |
On Mon, 16 Jul 2018, Bueso wrote:
In order for load/store tearing to work, _all_ accesses to
^ prevention
In order for load/store tearing prevention to work, _all_ accesses to
the variable in question need to be done with the READ_ONCE() and
WRITE_ONCE() macros. Ensure everyone does so for the q->status
variable for semtimedop().
Signed-off-by: Davidlohr Bueso
---
ipc/sem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
No changes in semantics -- key init is false; replace
static_key_slow_inc with static_branch_inc
static_key_false with static_branch_unlikely
Added a '_key' suffix to crc10dif_fallback for better
self-documentation.
Signed-off-by: Davidlohr Bueso
---
lib/crc-t10dif.c | 6
lly calling into rounded_hashtable_size() and handling
things accordingly.
Signed-off-by: Davidlohr Bueso
---
lib/rhashtable.c | 17 +++--
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 411c4041ce83..89c3cfc8334d 100644
--- a/
On Thu, 14 Dec 2017, Christoph Hellwig wrote:
Hi all,
this series adds a strategic lockdep_assert_held to __wake_up_common
to ensure callers really do hold the wait_queue_head lock when calling
the unlocked wake_up variants. It turns out epoll did not do this
for a fairly common path (hit all
On Wed, 11 Jul 2018, Mike Kravetz wrote:
This reverts commit ee8f248d266e ("hugetlb: add phys addr to struct
huge_bootmem_page")
At one time powerpc used this field and supporting code. However that
was removed with commit 79cc38ded1e1 ("powerpc/mm/hugetlb: Add support
for reserving gigantic
On Mon, 09 Jul 2018, Manfred Spraul wrote:
From: Dmitry Vyukov
ipc_idr_alloc refactoring
ENOCHANGELOG
ock() and get rid of the
function altogether.
[changelog mostly by manfred]
Signed-off-by: Davidlohr Bueso
---
ipc/shm.c | 29 +++--
ipc/util.c | 36
ipc/util.h | 1 -
3 files changed, 23 insertions(+), 43 deletions(-)
diff --git a/ipc/shm.
On Tue, 10 Jul 2018, Manfred Spraul wrote:
Which patch do you prefer?
I have seen two versions, and if I have picked up the wrong one, then
I can change it.
Nah, you picked up the right one.
I was only pointing at the alternative patch so that it didn't come
out of nowhere for the
ees Cook
Cc: Davidlohr Bueso
Reviewed-by: Davidlohr Bueso
On Mon, 09 Jul 2018, Andrew Morton wrote:
On Mon, 9 Jul 2018 17:10:18 +0200 Manfred Spraul
wrote:
From: Davidlohr Bueso
...
Signed-off-by: Davidlohr Bueso
Should these be From: dbu...@suse.de?
Not really, I've been doing this for years now -- makes backports
easier.
Thanks
On Mon, 09 Jul 2018, Manfred Spraul wrote:
@Davidlohr:
Please double check that I have taken the correct patches, and
that I didn't break anything.
Everything seems ok.
Patch 8 had an alternative patch that didn't change nowarn semantics for
the rhashtable resizing operations
:
- "obtain" functions look up a pointer in the idr, without
acquiring the object lock.
- The caller is responsible for locking.
- _check means that the sequence number is checked.
Signed-off-by: Manfred Spraul
Cc: Davidlohr Bueso
Reviewed-by: Davidlohr Bueso
,
thus an object with kern_ipc_perm.deleted=true may disappear at
the end of the next rcu grace period.
Signed-off-by: Manfred Spraul
Cc: Davidlohr Bueso
Reviewed-by: Davidlohr Bueso
On Thu, 05 Jul 2018, Andrew Morton wrote:
On Thu, 5 Jul 2018 17:12:36 +0200 Manfred Spraul
wrote:
Hi Dmitry,
On 07/05/2018 10:36 AM, Dmitry Vyukov wrote:
> [...]
> Hi Manfred,
>
> The series looks like a significant improvement to me. Thanks!
>
> I feel that this code can be further
the whole structure, something that
sysvsems also do; this is safe as it's a nop, having no secondary
effect afaict.
Reported-by: syzbot
Signed-off-by: Davidlohr Bueso
---
ipc/msg.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/ipc/msg.c b/ipc/msg.c
index 62545ce19173..da81b374f9fd
On Fri, 22 Jun 2018, Davidlohr Bueso wrote:
This slightly changes the gfp flags passed on to nested_table_alloc(),
as it will now also use GFP_ATOMIC | __GFP_NOWARN. However, I consider
this a positive consequence, as for the same reasons we want nowarn
semantics in bucket_table_alloc
Cc'ing Neil.
On Fri, 22 Jun 2018, Davidlohr Bueso wrote:
As of ce91f6ee5b3 (mm: kvmalloc does not fallback to vmalloc for
incompatible gfp flag), we can simplify the caller and trust kvzalloc()
to just do the right thing. For the case of the GFP_ATOMIC context, we
can drop the __GFP_NORETRY
reasons we want nowarn semantics in bucket_table_alloc().
Signed-off-by: Davidlohr Bueso
---
v2:
- Changes based on Neil's concerns about keeping nowarn flag.
- Better changelog.
lib/rhashtable.c | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/lib/rhashtable.c b/lib
On Fri, 22 Jun 2018, NeilBrown wrote:
On Thu, Jun 21 2018, Davidlohr Bueso wrote:
As of ce91f6ee5 (mm: kvmalloc does not fallback to vmalloc for
incompatible gfp flag), we can simplify the caller and trust kvzalloc()
to just do the right thing.
Hi,
it isn't clear to me that this is true
Now that we know that rhashtable_init() will not fail, we
can get rid of a lot of the unnecessary cleanup paths when
the call errored out.
Signed-off-by: Davidlohr Bueso
---
ipc/msg.c | 9 -
ipc/namespace.c | 20
ipc/sem.c | 10 --
ipc/shm.c
/23/758
Thanks!
Davidlohr Bueso (4):
lib/rhashtable: simplify bucket_table_alloc()
lib/rhashtable: guarantee initial hashtable allocation
ipc: get rid of ids->tables_initialized hack
ipc: simplify ipc initialization
include/linux/ipc_namespace.h | 1 -
ipc/msg.c |
becomes available) is the least
of our problems.
Signed-off-by: Davidlohr Bueso
---
lib/rhashtable.c | 14 +++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 26c9cd8a985a..411c4041ce83 100644
--- a/lib/rhashtable.c
+++ b/lib
all into ipcget() callbacks.
Now that rhashtable initialization cannot fail, we can properly
get rid of the hack altogether.
Signed-off-by: Davidlohr Bueso
---
include/linux/ipc_namespace.h | 1 -
ipc/util.c| 23 ---
2 files changed, 8 insertions(+), 16 deleti
As of ce91f6ee5 (mm: kvmalloc does not fallback to vmalloc for
incompatible gfp flag), we can simplify the caller and trust kvzalloc()
to just do the right thing.
Signed-off-by: Davidlohr Bueso
---
lib/rhashtable.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/lib
On Thu, 14 Jun 2018, Steven Rostedt wrote:
Although the change log is a bit ambiguous as to whether it is fixing
an actual missed update, or just quieting a false positive.
Davidlohr?
It fixes an update, not a false positive.
Thanks,
Davidlohr
If there are no objections, now that the merge window closed, could this
be considered for v4.19?
Thanks,
Davidlohr
On Tue, 10 Apr 2018, Davidlohr Bueso wrote:
By applying well known spin-on-lock-owner techniques, we can avoid the
blocking overhead during the process of when the task
On Sat, 02 Jun 2018, Herbert Xu wrote:
tbl = bucket_table_alloc(ht, size, GFP_KERNEL);
- if (tbl == NULL)
- return -ENOMEM;
+ if (unlikely(tbl == NULL)) {
+ size = min_t(u16, ht->p.min_size, HASH_MIN_SIZE);
You mean max_t?
Not really. I
On Sat, 02 Jun 2018, Herbert Xu wrote:
On Fri, Jun 01, 2018 at 09:53:47AM -0700, Davidlohr Bueso wrote:
Curious, are these users setting up the param structure dynamically
or something that they can pass along bogus values?
If that's the case then yes, I definitely agree.
It's just
On Sat, 02 Jun 2018, Herbert Xu wrote:
On Fri, Jun 01, 2018 at 09:01:21AM -0700, Davidlohr Bueso wrote:
For the purpose of making rhashtable_init() unable to fail,
we can replace the returning -EINVAL with WARN_ONs whenever
the caller passes bogus parameters during initialization.
Signed-off
becomes available) is the last
of our problems.
Suggested-by: Linus Torvalds
Signed-off-by: Davidlohr Bueso
---
lib/rhashtable.c | 13 ++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 05a4b1b8b8ce..ae17da6f0c75 100644
--- a/lib
To make rhashtable_init() unable to fail, we can replace the
-EINVAL returns with WARN_ONs whenever the caller passes bogus
parameters during initialization.
Signed-off-by: Davidlohr Bueso
---
lib/rhashtable.c | 9 -
1 file changed, 4 insertions(+), 5 deletions
Update the test module as such.
Signed-off-by: Davidlohr Bueso
---
lib/test_rhashtable.c | 6 +-
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/lib/test_rhashtable.c b/lib/test_rhashtable.c
index f4000c137dbe..a894eb0407f0 100644
--- a/lib/test_rhashtable.c
+++ b/lib
the rhashtable test module. Trivial.
Please consider for v4.18.
Thanks!
[0] https://lkml.org/lkml/2018/5/23/758
Davidlohr Bueso (5):
lib/rhashtable: convert param sanitations to WARN_ON
lib/rhashtable: guarantee initial hashtable allocation
ipc: get rid of ids->tables_initialized hack
Commit-ID: 595058b6675e4d2a70dcd867c84d922975f9d22b
Gitweb: https://git.kernel.org/tip/595058b6675e4d2a70dcd867c84d922975f9d22b
Author: Davidlohr Bueso
AuthorDate: Wed, 30 May 2018 15:49:40 -0700
Committer: Ingo Molnar
CommitDate: Thu, 31 May 2018 12:27:13 +0200
sched/headers: Fix