From: Edward Cree
Generally the check should be very cheap, as the sk_buff_head is in cache.
Signed-off-by: Edward Cree
Signed-off-by: David S. Miller
https://jira.sw.ru/browse/PSBM-88420
(cherry picked from commit b9f463d6c9849230043123a6335d59ac7fea4d5a)
Signed-off-by: Andrey Ryabinin
rted code, but the change makes
it more robust.
Link: http://lkml.kernel.org/r/20170405074700.29871-3-vba...@suse.cz
Signed-off-by: Vlastimil Babka
Suggested-by: Michal Hocko
Acked-by: Michal Hocko
Acked-by: Hillf Danton
Cc: Mel Gorman
Cc: Johannes Weiner
Cc: Andrey Ryabinin
Cc: Boris Brezillo
. Miller
https://jira.sw.ru/browse/PSBM-88420
(cherry picked from commit 4ce0017a373afaaa9ef17614d8fa4f6fde261d18)
Signed-off-by: Andrey Ryabinin
---
include/linux/list.h | 30 ++
net/core/dev.c | 33 +++--
2 files changed, 61 insertions
From: Edward Cree
Improves packet rate of 1-byte UDP receives by up to 10%.
Signed-off-by: Edward Cree
Signed-off-by: David S. Miller
https://jira.sw.ru/browse/PSBM-88420
(cherry picked from commit e090bfb9f19259b958387d2bd4938d66b324cd09)
Signed-off-by: Andrey Ryabinin
---
drivers/net
d pass
back the one ptype found in ptype_base[hash of skb->protocol].
Signed-off-by: Edward Cree
Signed-off-by: David S. Miller
https://jira.sw.ru/browse/PSBM-88420
(cherry picked from commit 88eb1944e18c1ba61da538ae9d1732832eb79b9d)
Signed-off-by: Andrey Ryabinin
-
or an
asynchronous accept to cause out-of-order receives, so presumably this is
considered OK.
Signed-off-by: Edward Cree
Signed-off-by: David S. Miller
https://jira.sw.ru/browse/PSBM-88420
(cherry picked from commit 17266ee939849cb095ed7dd9edbec4162172226b)
Signed-off-by: Andrey Rya
From: Edward Cree
Just calls netif_receive_skb() in a loop.
Signed-off-by: Edward Cree
Signed-off-by: David S. Miller
https://jira.sw.ru/browse/PSBM-88420
(cherry picked from commit f6ad8c1bcdf014272d08c55b9469536952a0a771)
Signed-off-by: Andrey Ryabinin
---
include/linux/netdevice.h | 1
e() to allow splitting on protocol changes).
Signed-off-by: Edward Cree
Signed-off-by: David S. Miller
https://jira.sw.ru/browse/PSBM-88420
(cherry picked from commit 5fa12739a53d0780265ed9d44d9ec9ba5f9ad00a)
Signed-off-by: Andrey Ryabinin
---
net/ipv4/ip_input.
(static_branch_unlikely(&generic_xdp_needed_key)) { }
block, we don't have generic XDP thing yet]
Signed-off-by: Andrey Ryabinin
---
net/core/dev.c | 63 +-
1 file changed, 58 insertions(+), 5 deletions(-)
diff --git a/net/core/de
From: Edward Cree
Signed-off-by: Edward Cree
Signed-off-by: David S. Miller
https://jira.sw.ru/browse/PSBM-88420
(cherry picked from commit 920572b73280a29e3a9f58807a8b90051b19ee60)
Signed-off-by: Andrey Ryabinin
---
include/trace/events/net.h | 7 +++
net/core/dev.c | 4
backport of
2d1b138505dc ("Handle multiple received packets at each stage")
series
https://jira.sw.ru/browse/PSBM-88420
Signed-off-by: Andrey Ryabinin
---
include/linux/skbuff.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 14d47
have generic XDP, so the hunk related to this feature is ifdef'ed out in
patch "net: core: Another step of skb receive list processing".
Andrey Ryabinin (1):
net/skbuff: Add ->list to struct sk_buff;
Edward Cree (9):
net: core: trivial netif_receive_skb_list() entry point
sfc: b
ion => endless loop in next calc_load_ve() call
>
> https://jira.sw.ru/browse/PSBM-88251
>
> Signed-off-by: Konstantin Khorenko
>
> v2 changes:
> - change locking scheme: drop rcu, use "load_ve_lock" everywhere
> - drop tg->linked field, check if linked using list_empty()
> ---
Reviewed-by: Andrey Ryabinin
___
Devel mailing list
Devel@openvz.org
https://lists.openvz.org/mailman/listinfo/devel
cpu cgroup) to "ve_root_list" list =>
> // list corruption => endless loop in next calc_load_ve() call
>
> https://jira.sw.ru/browse/PSBM-88251
>
> Signed-off-by: Konstantin Khorenko
> ---
Reviewed-by: Andrey Ryabinin
Historically the "Lat" column in /proc/vz/latency showed the max latency
over the last 5 seconds. Change it to the max latency over the last 2 minutes,
the same as in /proc//vz_latency
Signed-off-by: Andrey Ryabinin
---
kernel/ve/vzstat.c | 2 +-
kernel/ve/vzstat_core.c | 17 +++-
Signed-off-by: Andrey Ryabinin
---
kernel/ve/vzwdog.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/ve/vzwdog.c b/kernel/ve/vzwdog.c
index def8eee062b5..6ef671d9a25b 100644
--- a/kernel/ve/vzwdog.c
+++ b/kernel/ve/vzwdog.c
@@ -78,12 +78,12 @@ static void
le line by line, reads the 'Total_lat' and 'Calls' fields and skips to the
next line. Thus adding a new field shouldn't break it.
https://jira.sw.ru/browse/PSBM-87797
Signed-off-by: Andrey Ryabinin
Cc: Pavel Borzenkov
Reviewed-by: Denis V. Lunev
---
fs/proc/base.c |
in_task() returns true if we are executing in the task context.
Implementation has been stolen from upstream.
https://jira.sw.ru/browse/PSBM-87797
Signed-off-by: Andrey Ryabinin
Reviewed-by: Denis V. Lunev
---
include/linux/preempt_mask.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a
000
cpu3 100 8700 316298
https://jira.sw.ru/browse/PSBM-87797
Signed-off-by: Andrey Ryabinin
Cc: Pavel Borzenkov
Reviewed-by: Denis V. Lunev
---
include/linux/kstat.h | 1 +
kernel/ve/vz
-off-by: Andrey Ryabinin
Cc: Pavel Borzenkov
Reviewed-by: Denis V. Lunev
---
fs/proc/base.c| 18 --
include/linux/kstat.h | 5 +
include/linux/sched.h | 4 ++--
kernel/exit.c | 6 ++
kernel/sched/fair.c | 3 +++
5 files changed, 28 insertions(+), 8
When we are in an interrupt, 'current' is just some random task. We
shouldn't account per-task atomic allocation latency to random tasks.
Use the in_task() macro to identify task context, and account per-task
latency only if we are in a task.
https://jira.sw.ru/browse/PSBM-87797
Signed-off-by:
ast 2 minutes and show the max of these
maxes.
Changes since v1:
- change period from 1sec to 2min
Andrey Ryabinin (5):
linux/preempt_mask.h: Add in_task() macro.
vz_latency: don't account allocations in interrupts to random tasks
/proc/vz/latency: distinguish atomic allocations in i
000
cpu3 100 8700 316298
https://jira.sw.ru/browse/PSBM-87797
Signed-off-by: Andrey Ryabinin
Cc: Pavel Borzenkov
---
include/linux/kstat.h | 1 +
kernel/ve/vzstat.c| 21 +
in_task() returns true if we are executing in the task context.
Implementation has been stolen from upstream.
https://jira.sw.ru/browse/PSBM-87797
Signed-off-by: Andrey Ryabinin
---
include/linux/preempt_mask.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/linux/preempt_mask.h b
le line by line, reads 'Total_lat' and 'Calls' fields and skips to the next
line. Thus adding new field shouldn't break it.
https://jira.sw.ru/browse/PSBM-87797
Signed-off-by: Andrey Ryabinin
Cc: Pavel Borzenkov
---
Changes since v2:
- fix header text.
- Change the way ma
On 08/20/2018 04:48 PM, Pavel Borzenkov wrote:
>> -seq_printf(m, "%-12s %20s %20s\n",
>> -"Type", "Total_lat", "Calls");
>> +seq_printf(m, "%-12s %20s %20s %20s\n",
>> +"Type", "Total_lat", "Calls", "Max (1sec)");
>
> Wrong header? Should be 2min
the max latency in last 2 minutes
(+/-10sec).
But the same logic can be implemented in user space at a lower performance
cost: in the kernel we would need to put that complex logic into every
allocation path, while in user space it can be done only for the tasks we
want to monitor.
https://jira.
On 08/20/2018 10:57 AM, Pavel Borzenkov wrote:
>
>
>> On 17 Aug 2018, at 19:40, Andrey Ryabinin wrote:
>>
>> Add to '/proc//vz_latency' column with maximal latency task have seen
>> in the last second.
>>
>> E.g.:
>>
>
On 08/20/2018 11:01 AM, Pavel Borzenkov wrote:
>
>
>> On 17 Aug 2018, at 19:40, Andrey Ryabinin wrote:
>>
>> Add to /proc/vz/latency 'alocirq' allocation type which shows allocation
>> latencies
>> done in irq contexts. 'alocatomic' no
by line, reads the 'Total_lat' and 'Calls' fields and skips to the next
line. Thus adding a new field shouldn't break it.
https://jira.sw.ru/browse/PSBM-87797
Signed-off-by: Andrey Ryabinin
Cc: Pavel Borzenkov
---
fs/proc/base.c | 28
incl
to release used id.
https://jira.sw.ru/browse/PSBM-87670
Signed-off-by: Andrey Ryabinin
---
kernel/bc/beancounter.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/bc/beancounter.c b/kernel/bc/beancounter.c
index 2cc0bca5b353..d8078eea727f 100644
--- a/kernel/bc/beancounter.c
exec_ub in struct request as well.
>
> We are safe to check for NULL request:req_ub to detect set it or not
> because request is zeroed on creation:
>
>get_request
> __get_request
> blk_rq_init
> memset(rq, 0, size
iteback_single_inode which
>sets proper exec_ub.
>
> __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
> {
> ...
> ub = rcu_dereference(inode->i_mapping->dirtied_ub);
> ...
> ub = set_exec_ub(ub);
> ret = __do_writeback_single_inode(inode, wbc);
> put_beancounter(set_exec_ub(ub));
> ...
>
> https://jira.sw.ru/browse/PSBM-86910
>
> Signed-off-by: Konstantin Khorenko
> ---
Reviewed-by: Andrey Ryabinin
/PSBM-87338
Signed-off-by: Andrey Ryabinin
---
net/netfilter/ipset/ip_set_core.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/net/netfilter/ipset/ip_set_core.c
b/net/netfilter/ipset/ip_set_core.c
index c94094b6f500..916ef23e5d90 100644
--- a/net/netfilter/
is just bad as they might
be actively in use. So just reclaim without swapping in the offline
callback.
https://jira.sw.ru/browse/PSBM-87281
Signed-off-by: Andrey Ryabinin
---
mm/memcontrol.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index
On a machine with a lot of CPUs, sd_alloc_entry() can
trigger a high-order allocation, which is slow and may fail
if memory fragmentation is high. Use kvzalloc() to fall back
to 0-order allocations if a high-order one isn't available.
Signed-off-by: Andrey Ryabinin
---
kernel/sched/core.c | 4 ++
x27;bufs' array.
Move the bufs allocations inside pipe_lock()/pipe_unlock() to fix this.
Fixes: dd3bb14f44a6 ("fuse: support splice() writing to fuse device")
Signed-off-by: Andrey Ryabinin
Cc: # v2.6.35
Signed-off-by: Miklos Szeredi
Signed-off-by: Andrey Ryabinin
---
fs/fuse/dev.c |
The 'bufs' array contains 'pipe->buffers' elements, but the
fuse_dev_splice_write() uses only 'pipe->nrbufs' elements.
So reduce the allocation size to 'pipe->nrbufs' elements.
Signed-off-by: Andrey Ryabinin
Signed-off-by: Miklos Szeredi
Signe
...
Use percpu_ref_tryget_live() in css_tryget() to fix this.
https://jira.sw.ru/browse/PSBM-75892
Signed-off-by: Andrey Ryabinin
---
include/linux/cgroup.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index e18fcd930eb1..9f43eb35
> debug such cases.
>
> https://jira.sw.ru/browse/PSBM-80869
>
> Signed-off-by: Konstantin Khorenko
>
> Changes:
> v2: move sem lock/unlock inside free_mnt_ns() to fix calling copy_mnt_ns()
> without semaphore taken.
> ---
Reviewed-by: Andrey Ryabinin
On 07/13/2018 03:34 PM, Konstantin Khorenko wrote:
> Do we need it to RK as well for older kernels?
>
Yes.
> --
> Best regards,
>
> Konstantin Khorenko,
> Virtuozzo Linux Kernel Team
>
> On 07/13/2018 02:28 PM, Andrey Ryabinin wrote:
>> Charges to offline
eases ->memory counter of its parent A,
so mem_cgroup_reparent_charges(A) will never satisfy the condition
'A->memory - A->kmem > 0'
which is required to break the loop.
https://jira.sw.ru/browse/PSBM-86092
Signed-off-by: Andrey Ryabinin
---
mm/memcontrol.c | 22 +++
ULL
and the second load to get the 'pid->level'.
Replacing ACCESS_ONCE() with READ_ONCE() in __rcu_access_pointer() magically
fixes things for me. So let's do that.
Note: our release kernel seems unaffected by this, because of a different gcc
version.
Signed-off-by: Andrey Rya
Add #include . Needed for swap_slot_cache_enabled
variable.
Signed-off-by: Andrey Ryabinin
---
mm/tswap.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/tswap.c b/mm/tswap.c
index 7b02c1142db2..551a6949b96a 100644
--- a/mm/tswap.c
+++ b/mm/tswap.c
@@ -16,6 +16,7 @@
#include
#include
Add #include . Needed for swap_slot_cache_enabled
variable.
Signed-off-by: Andrey Ryabinin
---
mm/tswap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/tswap.c b/mm/tswap.c
index 7b02c1142db2..e6804dcba6e2 100644
--- a/mm/tswap.c
+++ b/mm/tswap.c
@@ -16,6 +16,7
ks for __read_swap_cache_async(), so it should work for tswap.
https://jira.sw.ru/browse/PSBM-86344
Signed-off-by: Andrey Ryabinin
---
mm/tswap.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/mm/tswap.c b/mm/tswap.c
index b7a990e8cd8d..7b02c1142db2 100644
--- a/mm/tswap.c
+++ b/mm/
Signed-off-by: Andrey Ryabinin
---
mm/memcontrol.c | 8
1 file changed, 8 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 755c09e050a7..59cf47972f9e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4010,7 +4010,6 @@ static void mem_cgroup_force_empty_list(struct
Rientjes
Cc: Hugh Dickins
Cc: [4.12+]
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
(cherry picked from commit 8606a1a94da5c4e49c0fb28af62d2e75c6747716)
Signed-off-by: Andrey Ryabinin
---
mm/swapfile.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/sw
off-by: Linus Torvalds
https://jira.sw.ru/browse/PSBM-86091
(cherry picked from commit 2628bd6fc052bd85e9864dae4de494d8a6313391)
Signed-off-by: Andrey Ryabinin
---
include/linux/swap.h | 4
mm/swapfile.c| 23 +--
2 files changed, 21 insertions(+), 6 deletions(-)
d
ystem. Instead of an endless loop, make several retries, WARN(), and
break the loop if reparenting was unsuccessful.
https://jira.sw.ru/browse/PSBM-86092
Signed-off-by: Andrey Ryabinin
---
mm/memcontrol.c | 8
1 file changed, 8 insertions(+)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
erry picked from commit 6c4687cc17a788a6dd8de3e27dbeabb7cbd3e066)
Signed-off-by: Andrey Ryabinin
---
kernel/events/uprobes.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 64d8e350db56..a04ed7f94d8d 100644
--- a/kernel/events/uprobes.c
+++ b/
__add_to_page_cache_locked() sometimes uses
mem_cgroup_cancel_charge() instead of mem_cgroup_cancel_cache_charge()
which leads to leaking ->cache charge.
Signed-off-by: Andrey Ryabinin
---
mm/filemap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/filemap.c b
A CoWed page on a private mapping is considered an anonymous page.
Charging it as a cache page is wrong and leads to leaking ->cache
increments.
Signed-off-by: Andrey Ryabinin
---
mm/memory.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
in
| 38 ++
> kernel/exit.c| 1 -
> kernel/fork.c | 7 +--
> 8 files changed, 92 insertions(+), 21 deletions(-)
>
Reviewed-by: Andrey Ryabinin
n non-init user namespaces.
>
> Fixes: 371904f01f05 ("fuse: virtualize file system")
> https://jira.sw.ru/browse/PSBM-85886
>
> Signed-off-by: Konstantin Khorenko
Acked-by: Andrey Ryabinin
> ---
> fs/fuse/inode.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deleti
On 06/18/2018 05:54 PM, Vasily Averin wrote:
> Andrey,
> could you please take look at vz6,
> it seems it should be affected too, it isn't?
>
Yes, vz6 is also affected.
should be zero on next module loading.
https://pmc.acronis.com/browse/VSTOR-11099
Signed-off-by: Andrey Ryabinin
---
net/netfilter/nf_conntrack_core.c | 2 +-
net/netfilter/nf_conntrack_standalone.c | 6 ++
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/net
y picked from commit e23ed762db7ed1950a6408c3be80bc56909ab3d4)
Signed-off-by: Andrey Ryabinin
---
net/netfilter/ipset/ip_set_core.c | 23 ++-
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/net/netfilter/ipset/ip_set_core.c
b/net/netfilter/ipset/ip_set_core.c
index 02b8ff
ables: introduce and use
xt_copy_counters_from_user")
Signed-off-by: Eric Dumazet
Cc: Willem de Bruijn
Acked-by: Florian Westphal
Signed-off-by: Pablo Neira Ayuso
(cherry picked from commit e466af75c074e76107ae1cd5a2823e9c61894ffb)
Signed-off-by: Andrey Ryabinin
---
net/netfilter/x_tables.c |
uot;[NETFILTER]: ctnetlink: fix deadlock in table dumping")
Signed-off-by: Liping Zhang
Signed-off-by: Pablo Neira Ayuso
(cherry picked from commit fefa92679dbe0c613e62b6c27235dcfbe9640ad1)
Signed-off-by: Andrey Ryabinin
---
net/netfilter/nf_conntrack_netlink.c | 7 ++-
1 file changed,
add_taint() callers usually call dump_stack(). The second
dump_stack() only pollutes the log.
Signed-off-by: Andrey Ryabinin
---
kernel/panic.c | 6 --
1 file changed, 6 deletions(-)
diff --git a/kernel/panic.c b/kernel/panic.c
index 333e36cd1175..6d33011078d2 100644
--- a/kernel/panic.c
Remove a debugging printk that accidentally sneaked
into the patch "mm/memcg: Don't charge anon pages as cache"
Signed-off-by: Andrey Ryabinin
---
mm/memcontrol.c | 6 +-
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
ind
; // <- will cause rcu_read_unlock() without lock
Take rcu lock before the last 'goto repeat;' in tcache_detach_page().
https://jira.sw.ru/browse/PSBM-81731
Signed-off-by: Andrey Ryabinin
---
mm/tcache.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/tcache.c b/mm/t
calloc() can involve retries
> and OOM killer, which is undesirable.
>
> kvzalloc() handles all these aspects properly.
>
> https://jira.sw.ru/browse/PSBM-84234
>
> Signed-off-by: Oleg Babin
Reviewed-by: Andrey Ryabinin
order page is not available at the moment.
>
> https://jira.sw.ru/browse/HCI-53
> Signed-off-by: Oleg Babin
Reviewed-by: Andrey Ryabinin
In addition to this we will take 32b2921e6a7461fe63b71217067a6cf4bddb132f as
well, otherwise it becomes too easy for userspace
On 05/08/2018 03:30 PM, Oleg Babin wrote:
> Size of rchan structure depends on NR_CPUS definition which can be
> configured by the user and can become quite large.
>
> E.g. if NR_CPUS equals to 5120 (real world scenario) then it makes
> sizeof(struct rchan) == 41320 meaning the 4th memory order.
,
>
> Konstantin Khorenko,
> Virtuozzo Linux Kernel Team
>
> On 04/23/2018 04:00 PM, Andrey Ryabinin wrote:
>> Currently shrink_slab() skips offlined cgroups during per-memcg reclaim.
>> So only global reclaim can shrink slabs from offlined cgroups.
>> This doesn
destroy the cgroup almost immediately and reuse its id.
https://jira.sw.ru/browse/PSBM-83628
Signed-off-by: Andrey Ryabinin
---
mm/memcontrol.c | 14 ++
1 file changed, 14 insertions(+)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d68650ad7a53..590851f6c1d3 100644
--- a/mm/memcon
PSBM-83628
Signed-off-by: Andrey Ryabinin
---
mm/vmscan.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4922f734cdb4..aefa4bc33062 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -395,9 +395,6 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
0
[] ext4_fill_super+0x267/0x2cc0 [ext4]
[] mount_bdev+0x1f2/0x240
[] ext4_mount+0x44/0x60 [ext4]
[] mount_fs+0x39/0x1b0
[] vfs_kern_mount+0x67/0x110
[] do_mount+0x24c/0xb30
[] SyS_mount+0x96/0xf0
[] system_call_fastpath+0x16/0x1b
---[ end trace 8b5c76e01d611a1e ]---
https://jira.sw.ru
still can't understand how this
mess should work.
Use seqlock to protect iter updates.
https://jira.sw.ru/browse/PSBM-83369
Signed-off-by: Andrey Ryabinin
---
mm/memcontrol.c | 18 +++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memc
become
> different, since there is exclusive sighand lock in do_send_sig_info().
>
> The patch converts fa_lock into rwlock_t, and this fixes two above
> deadlocks, as rwlock is allowed to be taken from interrupt handler
> by qrwlock design.
>
> https://jira.sw.ru/browse/PSBM-83102
>
> Signed-off-by: Kirill Tkhai
>
Reviewed-by: Andrey Ryabinin
On 03/21/2018 04:10 PM, Konstantin Khorenko wrote:
> On 03/21/2018 03:59 PM, Andrey Ryabinin wrote:
>>
>>
>> On 03/21/2018 02:11 PM, Konstantin Khorenko wrote:
>>> sock_setsockopt()
>>> sk_attach_filter()
>>> sock_kmalloc()
>>>
>&
s
> provided.
>
> https://jira.sw.ru/browse/PSBM-82593
>
> Signed-off-by: Konstantin Khorenko
Reviewed-by: Andrey Ryabinin
t; https://jira.sw.ru/browse/PSBM-82549
> Signed-off-by: Oleg Babin
Reviewed-by: Andrey Ryabinin
On 03/21/2018 02:11 PM, Konstantin Khorenko wrote:
> sock_setsockopt()
> sk_attach_filter()
> sock_kmalloc()
>
> Memory size to be allocated depends on the number of rules provided by
> userspace, but not more than net.core.optmem_max (20480 by default),
> which still allows to allocate 3rd o
On 03/21/2018 02:53 PM, Oleg Babin wrote:
> In the implementation of ext4_kvmalloc() and ext4_kvzalloc() functions
> a first attempt to allocate memory with kmalloc() can legitimately fail
> in which case we will try to allocate memory with __vmalloc() instead.
> Given this we do not want kmalloc
801_smbus won't work
because it expects interrupts that it may not receive.
Signed-off-by: Chen Fan
Acked-by: Thomas Gleixner
Acked-by: Bjorn Helgaas
Signed-off-by: Rafael J. Wysocki
https://jira.sw.ru/browse/PSBM-82172
Signed-off-by: Andrey Ryabinin
---
drivers/acpi/pci_irq.c
than zero). Modify the procedure of checking the counter.
>
> https://jira.sw.ru/browse/PSBM-82202
> Signed-off-by: Oleg Babin
> ---
Acked-by: Andrey Ryabinin
s://jira.sw.ru/browse/PSBM-82021
Signed-off-by: Andrey Ryabinin
---
Patch is only for kernels that don't have memory.cache.limit_in_bytes,
e.g. 3.10.0-693.11.6.vz7.40.4
Basically, this patch is for RK only.
mm/memcontrol.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/memcontrol.
It's pointless to re-initialize the 'repeat' variable before the
invalidation retry since it's used only in the first invalidate
attempt (when synchronize_sched_once == true).
Signed-off-by: Andrey Ryabinin
---
mm/tcache.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions
validation retry.
https://jira.sw.ru/browse/PSBM-81760
Signed-off-by: Andrey Ryabinin
---
mm/tcache.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/tcache.c b/mm/tcache.c
index 6e0c10dd3339..31a9eb3d1e12 100644
--- a/mm/tcache.c
+++ b/mm/tcache.c
@@ -91
Relax kmem limit and bypass allocation anyway even if reclaim failed.
https://jira.sw.ru/browse/PSBM-81818
Signed-off-by: Andrey Ryabinin
---
mm/memcontrol.c | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d68650ad7a53
If kmem usage is above kmem.limit, mem_cgroup_margin() returns a
non-zero margin, possibly leading to an endless loop in try_charge().
https://jira.sw.ru/browse/PSBM-81818
Signed-off-by: Andrey Ryabinin
---
mm/memcontrol.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/memcontrol.c b/mm
https://jira.sw.ru/browse/PSBM-81803
> Signed-off-by: Oleg Babin
Reviewed-by: Andrey Ryabinin
On 02/27/2018 01:27 AM, Andrei Vagin wrote:
> On Mon, Feb 26, 2018 at 03:29:51PM +0300, Oleg Babin wrote:
>> Currently we allocate more than eight pages of memory in
>> vhost_net_set_ubuf_info() function and we do not need
>> them to be physically contiguous, so it is feasible to
>> replace a cal
.sw.ru/browse/PSBM-81750
Signed-off-by: Andrey Ryabinin
---
mm/memcontrol.c | 13 ++---
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 50cbcdc38f60..b4d61d72ccbf 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2769,13 +276
as cache.
Introduce mem_cgroup_try_charge_cache()/mem_cgroup_cancel_cache_charge()
and use them to charge cache pages.
https://jira.sw.ru/browse/PSBM-81750
Signed-off-by: Andrey Ryabinin
---
include/linux/memcontrol.h | 16 +
mm/filemap.c | 4 +--
mm/memcontrol.c
>
>
> include/linux/sched.h |5 +
> kernel/cgroup.c |3 +++
> kernel/sched/core.c | 38 --
> kernel/sched/sched.h |4
> kernel/ve/ve.c | 15 ++++++-
> 5 files changed, 62 insertions(+), 3
ad latency. When thread dies,
it submits its own latencies into shared task->signal.alloc_lat struct.
/proc//vz_latency - sums allocation latencies over all live threads
plus latencies of already dead tasks from task->signal.alloc_lat.
https://jira.sw.ru/browse/PSBM-81395
Signed-off-by: And
It seems that 'struct task_struct' is not initialized to zero after
allocation. Thus we need to initialize alloc_lat explicitly.
https://jira.sw.ru/browse/PSBM-81395
Signed-off-by: Andrey Ryabinin
---
kernel/fork.c | 4
1 file changed, 4 insertions(+)
diff --git a/kernel/fork.