e after
> the rcu_dereference, because it doesn't take into account the address
> dependency from the intermediate plain read. Hopefully we will add
> such things to the memory model later on. Concentrating on data races
> seems like enough for now.
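A minimal C sketch of the pattern in question (names invented for illustration,
and assume the pointers are non-NULL): the address dependency from
rcu_dereference() is carried through an intermediate plain read before the
final marked access, which is the dependency the model did not yet track:

	#include <linux/rcupdate.h>

	struct foo { struct foo *next; int a; };
	struct foo __rcu *gp;

	static int reader(void)
	{
		struct foo *p, *q;
		int a;

		rcu_read_lock();
		p = rcu_dereference(gp);	/* marked, dependency-heading read */
		q = p->next;			/* intermediate *plain* read */
		a = READ_ONCE(q->a);		/* address-depends on both loads above */
		rcu_read_unlock();
		return a;
	}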
>
> Some of the ideas
the
requested data.
BugLink: https://bugs.launchpad.net/bugs/1813244
Signed-off-by: Andrea Righi
---
Changes in v2:
- correctly resize to current_size+req_size (thanks to Pravin)
net/openvswitch/flow_netlink.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/openvswitch
breaks one of my tests (which I probe on do_IRQ).
>
> OK, it seems this patch is a bit redundant, because
> I found that this interrupt handler issue had already been fixed
> by Andrea's commit before this patch was merged.
>
> commit a50480cb6d61d5c5fc13308479407b628b6bc1c5
> Author: And
Hello,
On Thu, Mar 21, 2019 at 01:43:35PM +, Luis Chamberlain wrote:
> On Wed, Mar 20, 2019 at 03:01:12PM -0400, Andrea Arcangeli wrote:
> > but
> > that would be better be achieved through SECCOMP and not globally.'.
>
> That begs the question why not use seccomp for t
Hello,
On Tue, Mar 19, 2019 at 06:28:23PM +, Dr. David Alan Gilbert wrote:
> ---
> Userfaultfd can be misused to make it easier to exploit existing use-after-free
> (and similar) bugs that might otherwise only make a short window
> or race condition available. By using userfaultfd to stall a
ed users. When this is
> > set to zero, only privileged users (root user, or users with the
> > CAP_SYS_PTRACE capability) will be able to use the userfaultfd
> > syscalls.
> >
> > Suggested-by: Andrea Arcangeli
> > Suggested-by: Mike Rapoport
> > Signed-of
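For context, using such a knob from userspace would look like this (a sketch,
assuming the vm.unprivileged_userfaultfd sysctl name under which this knob was
eventually merged):

	# restrict the userfaultfd syscalls to privileged users
	echo 0 > /proc/sys/vm/unprivileged_userfaultfd
	# or, equivalently
	sysctl -w vm.unprivileged_userfaultfd=0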
On Sat, Mar 16, 2019 at 05:38:54PM +0800, zhong jiang wrote:
> On 2019/3/16 5:39, Andrea Arcangeli wrote:
> > On Fri, Mar 08, 2019 at 03:10:08PM +0800, zhong jiang wrote:
> >> I can reproduce the issue in arm64 qemu machine. The issue will leave
> >> af
On Fri, Mar 08, 2019 at 03:10:08PM +0800, zhong jiang wrote:
> I can reproduce the issue in an arm64 qemu machine. The issue goes away
> after applying the patch.
>
> Tested-by: zhong jiang
Thanks a lot for the quick testing!
> Meanwhile, I just have a little doubt whether it is necessary to
On Thu, Mar 14, 2019 at 11:58:15AM +0100, Paolo Bonzini wrote:
> On 14/03/19 00:44, Andrea Arcangeli wrote:
> > Then I thought we can add a tristate so an open of /dev/kvm would also
> > allow the syscall to make things more user friendly because
> > unprivileged container
dev/sde
- mount it:
# mount /dev/sdb /mnt
- run btrfs scrub in a loop:
# while :; do btrfs scrub start -BR /mnt; done
BugLink: https://bugs.launchpad.net/bugs/1812845
Reviewed-by: Johannes Thumshirn
Signed-off-by: Andrea Righi
---
Changes in v2:
- added a better description about this
On Wed, Mar 13, 2019 at 01:01:40PM -0700, Mike Kravetz wrote:
> On 3/13/19 11:52 AM, Andrea Arcangeli wrote:
> >
> > hugetlbfs is more complicated to detect, because even if you inherit
> > it from fork(), the services that mounts the fs may be in a different
>
h CRIU and KVM), I think Oracle
> > will need a one liner change in the Oracle setup to echo into that
> > file in addition of running the hugetlbfs mount.
>
> Hi Andrea, can you explain more in detail the risks of enabling
> userfaultfd for unprivileged users?
There's no more risk th
CRIU and KVM), I think Oracle
will need a one liner change in the Oracle setup to echo into that
file in addition of running the hugetlbfs mount.
Note that DPDK host bridge process will also need a one liner change
to do a dummy open/close of /dev/kvm to unblock the syscall.
Thanks,
Andrea
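A sketch of the "dummy open/close of /dev/kvm" mentioned above (the tristate
policy it relies on was only a proposal in this thread):

	#include <fcntl.h>
	#include <unistd.h>

	/* Under the proposed tristate, a single open of /dev/kvm would mark
	 * the process as a KVM user and unblock the userfaultfd syscall. */
	static int unblock_userfaultfd_for_kvm(void)
	{
		int fd = open("/dev/kvm", O_RDWR);

		if (fd < 0)
			return -1;	/* no KVM: nothing to unblock */
		close(fd);		/* the open itself is what matters */
		return 0;
	}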
archs to add the cache flushes on kunmap too,
and then remove the cache flushes from the other places like copy_page
or we'd waste CPU. Then you'd have the best of both worlds, no double
flush and kunmap would be enough.
Thanks,
Andrea
/640
Signed-off-by: Andrea Righi
---
Changes in v4:
- fix a build bug when CONFIG_BLOCK is unset
block/blk-cgroup.c | 130 +++
block/blk-throttle.c | 11 ++-
fs/fs-writeback.c | 5 ++
fs/sync.c | 8
/640
Signed-off-by: Andrea Righi
---
Changes in v3:
- drop sync(2) isolation patches (this will be addressed by another
patch, potentially operating at the fs namespace level)
- use a per-bdi lock and a per-bdi list instead of a global lock and a
global list to save the list of sync(2
no need to set
up any pagetables or to do any TLB flushes (except on 32-bit archs if
the page is above the direct mapping, but that never happens on 64-bit
archs).
Thanks,
Andrea
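A sketch of the point about the direct mapping: on 64-bit, mapping a page
(e.g. what kmap() does for lowmem) reduces to an address computation, so there
is nothing to set up or flush (simplified; the real kmap() also has a highmem
path that 64-bit never takes):

	#include <linux/mm.h>

	/* On 64-bit every page sits in the direct mapping. */
	static void *map_page_64bit(struct page *page)
	{
		return page_address(page);	/* direct-map virtual address */
	}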
On Fri, Mar 08, 2019 at 04:58:44PM +0800, Jason Wang wrote:
> Can I simply call set_page_dirty() before vunmap() in the mmu notifier
> callback, or is there any reason that it must be called within vunmap()?
I also don't see any problem in doing it before vunmap. As far as the
mmu notifier and
tart at the latest in such case.
My preference is generally to call gup_fast() followed by an immediate
put_page() because I always want to drop FOLL_GET from gup_fast as a
whole to avoid 2 useless atomic ops per gup_fast.
I'll write more about vmap in answer to the other email.
Thanks,
Andrea
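A sketch of the gup_fast()-then-immediate-put_page() pattern described above
(using the get_user_pages_fast() API; the exact signature varies across kernel
versions, so treat this as illustrative):

	#include <linux/mm.h>

	static int probe_user_page(unsigned long addr)
	{
		struct page *page;

		/* Fast GUP takes a reference on the page... */
		if (get_user_pages_fast(addr, 1, FOLL_WRITE, &page) != 1)
			return -EFAULT;

		/* ...which is dropped immediately: that reference (the
		 * FOLL_GET behavior) is the overhead to be avoided. */
		put_page(page);
		return 0;
	}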
On Fri, Mar 08, 2019 at 12:22:20PM -0500, Josef Bacik wrote:
> On Thu, Mar 07, 2019 at 07:08:31PM +0100, Andrea Righi wrote:
> > = Problem =
> >
> > When sync() is executed from a high-priority cgroup, the process is
> > forced to wait for the completion of the
On Thu, Mar 07, 2019 at 05:07:01PM -0500, Josef Bacik wrote:
> On Thu, Mar 07, 2019 at 07:08:34PM +0100, Andrea Righi wrote:
> > Keep track of the inodes that have been dirtied by each blkcg cgroup and
> > make sure that a blkcg issuing a sync() can trigger the writeback + wait
>
On Thu, Mar 07, 2019 at 05:10:53PM -0500, Josef Bacik wrote:
> On Thu, Mar 07, 2019 at 07:08:32PM +0100, Andrea Righi wrote:
> > Prevent priority inversion problem when a high-priority blkcg issues a
> > sync() and it is forced to wait for the completion of all the writeback I/O
>
ap that call set_page_dirty
> on the page from the mmu notifier.
Agreed, that will solve all issues in vhost context with regard to
set_page_dirty, including the case the memory is backed by VM_SHARED ext4.
Thanks!
Andrea
long term GUP pins, so I'm
asking...
Thanks!
Andrea
ide the page table lock.
> > implying it's called just later.
>
> OK I missed the fact that _end actually calls
> mmu_notifier_invalidate_range internally. So that part is fine but the
> fact that you are trying to take page lock under VQ mutex and take same
> mutex within notif
only dirty pages that
belong to the cgroup itself (except for the root cgroup that would still
be able to write out all pages globally).
Signed-off-by: Andrea Righi
---
Documentation/admin-guide/cgroup-v2.rst | 9 ++
block/blk-throttle.c | 37
behavior is applied: sync() triggers the
writeback of any dirty page.
Signed-off-by: Andrea Righi
---
block/blk-cgroup.c | 47 ++
fs/fs-writeback.c | 52 +++---
fs/inode.c | 1 +
include/linux/blk
policy could be to
adjust the throttling I/O rate using the blkcg with the highest speed
from the list of waiters - priority inheritance, kinda).
Signed-off-by: Andrea Righi
---
block/blk-cgroup.c | 131 +++
block/blk-throttle.c | 11 ++-
fs/fs
user 0m0,001s
sys	0m0,008s
[ Time range goes from 0.7s to 1.6s ]
Changes in v2:
- fix: properly keep track of sync waiters when a blkcg is writing to
many block devices at the same time
Andrea Righi (3):
blkcg: prevent priority inversion problem during sync()
blkcg: introduce io.
case when it gets a -ESRCH
retval.
Note that this fork feature is only ever needed in the non-cooperative
case, these things never need to happen when userfaultfd is used by an
app (or a lib) that is aware that it is using userfaultfd.
Thanks,
Andrea
However, that mm is on its way to exit_mmap as soon as the
ioctl returns, and this only ever happens during race conditions, so
the way the CRIU monitor works there wasn't anything fundamentally
concerning about this detail, even though it's remarkably "strange". Our
priority was to keep the fork code
rdered after up_read(mmap_sem) either.
Other than the above detail:
Reviewed-by: Andrea Arcangeli
Thanks,
Andrea
warning or as a reference to those developers
who are quivering to use (dep ; rfi): enjoy it, be careful.
Andrea
From: Andrea Greco
The SPUR DOWN field returned spurup instead of spurdown.
Signed-off-by: Andrea Greco
---
drivers/net/wireless/ath/ath9k/debug.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/wireless/ath/ath9k/debug.c
b/drivers/net/wireless/ath/ath9k/debug.c
index
From: Andrea Greco
Add netlink support for changing CW in Ad-Hoc networks
Signed-off-by: Andrea Greco
---
net/wireless/nl80211.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
index d91a408db113..4fcc63fa4380 100644
> imply that the string is dumped
> in the panic path, and you never really know when you're going to panic.
> Even if you only write to the string before doing SMP bringup you might
> still have another CPU go rogue and panic before then.
>
> But I probably should have just not added the barrier, it's over
> paranoid and will almost certainly never matter in practice.
Oh, well, I can only echo you: if you don't care about the stores being
_observed_ out of order, you could simply remove the barrier; if you do
care, then you need "more paranoid" on the readers side. ;-)
Andrea
>
> cheers
On Wed, Feb 20, 2019 at 09:57:00AM +, Will Deacon wrote:
> On Wed, Feb 20, 2019 at 10:26:04AM +0100, Peter Zijlstra wrote:
> > On Tue, Feb 19, 2019 at 06:01:17PM -0800, Paul E. McKenney wrote:
> > > On Tue, Feb 19, 2019 at 11:57:37PM +0100, Andrea Parri wrote:
> >
On Wed, Feb 20, 2019 at 10:26:04AM +0100, Peter Zijlstra wrote:
> On Tue, Feb 19, 2019 at 06:01:17PM -0800, Paul E. McKenney wrote:
> > On Tue, Feb 19, 2019 at 11:57:37PM +0100, Andrea Parri wrote:
> > > Remove this subtle (and, AFAICT, unused) ordering: we can add it back,
On Mon, Feb 11, 2019 at 03:38:59PM +0100, Petr Mladek wrote:
> On Mon 2019-02-11 13:50:35, Andrea Parri wrote:
> > Hi Michael,
> >
> >
> > On Thu, Feb 07, 2019 at 11:46:29PM +1100, Michael Ellerman wrote:
> > > Arch code can set a "dump stack arch des
	WRITE_ONCE(*x, 1);
	smp_store_release(y, 1);
}

P1(int *x, int *y, int *z)
{
	int r0;
	int r1;
	int r2;

	r0 = READ_ONCE(*y);
	WRITE_ONCE(*z, r0);
	r1 = smp_load_acquire(z);
	r2 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r2=0)
Signed-off-by: Andrea Parri
Cc: Alan
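For reference, litmus tests like the one above are run with herd7 against the
LKMM from tools/memory-model, e.g. (the file name here is hypothetical):

	$ herd7 -conf linux-kernel.cfg rel-acq.litmus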
Use "herd7" in each such reference.
Signed-off-by: Andrea Parri
Cc: Alan Stern
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Nicholas Piggin
Cc: David Howells
Cc: Jade Alglave
Cc: Luc Maranget
Cc: "Paul E. McKenney"
Cc: Akira Yokosawa
Cc: Daniel Lustig
---
The comment should say "Sometimes" for the result.
Signed-off-by: Andrea Parri
Cc: Alan Stern
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Nicholas Piggin
Cc: David Howells
Cc: Jade Alglave
Cc: Luc Maranget
Cc: "Paul E. McKenney"
Cc: Akira Yokosawa
Fixes to inline comments, documentation, script usage.
Cc: Alan Stern
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Boqun Feng
Cc: Nicholas Piggin
Cc: David Howells
Cc: Jade Alglave
Cc: Luc Maranget
Cc: "Paul E. McKenney"
Cc: Akira Yokosawa
Cc: Daniel Lustig
Andrea Parri (2):
to
> Fixes: 235b62176712 ("mm/swap: add cluster lock")
> Signed-off-by: "Huang, Ying"
> Not-Nacked-by: Hugh Dickins
> Cc: Paul E. McKenney
> Cc: Minchan Kim
> Cc: Johannes Weiner
> Cc: Tim Chen
> Cc: Mel Gorman
> Cc: Jérôme Glisse
> Cc: M
user 0m0,001s
sys	0m0,008s
[ Time range goes from 0.7s to 1.6s ]
Andrea Righi (3):
blkcg: prevent priority inversion problem during sync()
blkcg: introduce io.sync_isolation
blkcg: implement sync() isolation
Documentation/admin-guide/cgroup-v2.rst | 9 +++
block/blk-cg
behavior is applied: sync() triggers the
writeback of any dirty page.
Signed-off-by: Andrea Righi
---
block/blk-cgroup.c | 47 ++
fs/fs-writeback.c | 52 +++---
fs/inode.c | 1 +
include/linux/blk
policy could be to
adjust the throttling I/O rate using the blkcg with the highest speed
from the list of waiters - priority inheritance, kinda).
Signed-off-by: Andrea Righi
---
block/blk-cgroup.c | 73
block/blk-throttle.c | 11 +++--
fs/fs
only dirty pages that
belong to the cgroup itself (except for the root cgroup that would still
be able to write out all pages globally).
Signed-off-by: Andrea Righi
---
Documentation/admin-guide/cgroup-v2.rst | 9 ++
block/blk-throttle.c | 37
that forcefully invokes MMU
notifiers and forces host allocations and KVM page faults in order to
reallocate the same RAM in the same guest.
When there's memory pressure it's up to the host Linux VM to notice
there's plenty of MADV_FREE material to free at zero I/O cost before
starting swapping.
Thanks,
Andrea
> benchmark I could
> run?
We could also try a microbenchmark based on
ltp/testcases/kernel/mem/ksm/ksm02.c that already should trigger a
merge flood and a COW flood during its internal processing.
Thanks,
Andrea
On Thu, Feb 14, 2019 at 04:07:37PM +0800, Huang, Ying wrote:
> Before, we choose to use stop_machine() to reduce the overhead of hot
> path (page fault handler) as much as possible. But now, I found
> rcu_read_lock_sched() is just a wrapper of preempt_disable(). So maybe
> we can switch to RCU
Still, the
idea of stop_machine would be that those p->swap_map = NULL assignments,
and everything protected by the swap_lock, should be executed inside the
callback, which runs as if on a UP system, to speed up the fast path
further.
Thanks,
Andrea
synchronize_kernel or
whatever it is called right now, but still RCU) solution isn't
preferable.
Thanks,
Andrea
Commit-ID: 02106f883cd745523f7766d90a739f983f19e650
Gitweb: https://git.kernel.org/tip/02106f883cd745523f7766d90a739f983f19e650
Author: Andrea Righi
AuthorDate: Wed, 13 Feb 2019 01:15:34 +0900
Committer: Ingo Molnar
CommitDate: Wed, 13 Feb 2019 08:16:41 +0100
kprobes: Prohibit probing
de API), so that here you could instead use preempt-disable +
synchronize_rcu{,expedited}(). This LWN article gives an overview of
the latest RCU API/semantics changes: https://lwn.net/Articles/777036/.
Andrea
the following LB-like pattern:

	CPU0				CPU1

	reads p->swap_map		lock(completion)
	lock(completion)		read completion->done
	completion->done++		unlock(completion)
	unlock(completion)		p->swap_map = NULL

where CPU0 must see a non-NULL p->swap_map if CPU1 sees the
completion from CPU0.
Does this make sense?
Andrea
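A litmus sketch of the pattern above (the variable names and the
int-for-pointer encoding are invented here); one could feed this to herd7 to
check which outcomes the LKMM allows:

	C swap-map-lb

	{
		swap_map=1;
	}

	P0(int *swap_map, int *done, spinlock_t *s)
	{
		int r0;

		r0 = READ_ONCE(*swap_map);	/* reads p->swap_map */
		spin_lock(s);
		WRITE_ONCE(*done, 1);		/* completion->done++ */
		spin_unlock(s);
	}

	P1(int *swap_map, int *done, spinlock_t *s)
	{
		int r1;

		spin_lock(s);
		r1 = READ_ONCE(*done);		/* read completion->done */
		spin_unlock(s);
		WRITE_ONCE(*swap_map, 0);	/* p->swap_map = NULL */
	}

	exists (0:r0=0 /\ 1:r1=1)	/* CPU1 saw the completion, CPU0 missed swap_map */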
On Mon, Feb 11, 2019 at 10:39:34AM -0500, Josef Bacik wrote:
> On Sat, Feb 09, 2019 at 03:07:49PM +0100, Andrea Righi wrote:
> > This is an attempt to mitigate the priority inversion problem of a
> > high-priority blkcg issuing a sync() and being forced to wait for the
> >
On Mon, Feb 11, 2019 at 02:09:31PM -0500, Jerome Glisse wrote:
> Yeah, between do you have any good workload for me to test this ? I
> was thinking of running few same VM and having KSM work on them. Is
> there some way to trigger KVM to fork ? As the other case is breaking
> COW after fork.
KVM
previously), and so print the
> + * uninitialised tail. But the whole string lives in BSS so in
> + * practice it should just see NULLs.
The comment doesn't say _why_ we need to order these stores: IOW, what
will or can go wrong without this order? T
with any definitive solution.
This patch is not a definitive solution either, but it's an attempt to
continue addressing this issue and handling the priority inversion
problem with sync() in a better way.
Signed-off-by: Andrea Righi
---
Changes in v2:
- fix: use the proper current blkcg
On Sat, Feb 09, 2019 at 01:06:33PM +0100, Andrea Righi wrote:
...
> +/**
> + * blkcg_wb_waiters_on_bdi - check for writeback waiters on a block device
> + * @bdi: block device to check
> + *
> + * Return true if any other blkcg is waiting for writeback on the target
> block
>
with any definitive solution.
This patch is not a definitive solution either, but it's an attempt to
continue addressing the issue and, hopefully, handle the priority
inversion problem with sync() in a better way.
Signed-off-by: Andrea Righi
---
block/blk-cgroup.c | 69
Commit-ID: c546951d9c9300065bad253ecdf1ac59ce9d06c8
Gitweb: https://git.kernel.org/tip/c546951d9c9300065bad253ecdf1ac59ce9d06c8
Author: Andrea Parri
AuthorDate: Mon, 21 Jan 2019 16:52:40 +0100
Committer: Ingo Molnar
CommitDate: Mon, 4 Feb 2019 09:13:21 +0100
sched/core: Use READ_ONCE
On Thu, Jan 31, 2019 at 01:37:04PM -0500, Jerome Glisse wrote:
> From: Jérôme Glisse
>
> Use unsigned for the event field in the range struct so that we can also set
> flags with the event. This patch changes the field and introduces the
> helper.
>
> Signed-off-by: Jérôme Glisse
>
On Thu, Jan 31, 2019 at 01:37:03PM -0500, Jerome Glisse wrote:
> @@ -207,8 +207,7 @@ static int __replace_page(struct vm_area_struct *vma,
> unsigned long addr,
>
> flush_cache_page(vma, addr, pte_pfn(*pvmw.pte));
> ptep_clear_flush_notify(vma, addr, pvmw.pte);
> -
On Fri, Feb 01, 2019 at 06:57:38PM -0500, Andrea Arcangeli wrote:
> If it's cleared with ptep_clear_flush_notify, change_pte still won't
> work. The above text needs updating with
> "ptep_clear_flush". set_pte_at_notify is all about having
> ptep_clear_flush only befo
What can happen is that the CPU could write to the page
through a TLB fill without page fault while the secondary MMUs still
read the old memory in the old readonly page.
Thanks,
Andrea
On Mon, Jan 21, 2019 at 04:52:40PM +0100, Andrea Parri wrote:
> move_queued_task() synchronizes with task_rq_lock() as follows:
>
> move_queued_task() task_rq_lock()
>
> [S] ->on_rq = MIGRATING [L] rq = task_rq()
> WMB (__set_task_cp
On Thu, Jan 17, 2019 at 04:31:32PM +0100, Andrea Parri wrote:
> On Mon, Nov 26, 2018 at 05:44:12PM +0100, Andrea Parri wrote:
> > As the comments for wake_up_bit() and waitqueue_active() point out,
> > the barriers are needed to order the clearing of the _FL_NO
> invalidate_range() already.
>
> CC: Benjamin Herrenschmidt
> CC: Paul Mackerras
> CC: Michael Ellerman
> CC: Alistair Popple
> CC: Alexey Kardashevskiy
> CC: Mark Hairgrove
> CC: Balbir Singh
> CC: David Gibson
> CC: Andrea Arcangeli
> CC: Jerome Glisse
>
Fix wildcard patterns and add cgroup-v2 documentation.
Signed-off-by: Andrea Parri
---
MAINTAINERS | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 9f64f8d3740ed..a96054c1d870a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3906,9 +3906,10
d-gigabytes/terabytes
regions, async uffd-wp should perform much better.
Thanks,
Andrea
> prior versions of these functions.
>
> Co-developed-by: Peter Zijlstra (Intel)
> Signed-off-by: Elena Reshetova
Reviewed-by: Andrea Parri
Andrea
> ---
> Documentation/core-api/refcount-vs-atomic.rst | 24 +---
> arch/x86/include/asm
and,
> and this was my conclusion that it should provide this, but I can easily be
> wrong
> on this.
>
> Andrea, Peter, could you please comment?
Short version: I am not convinced by the above sentence, and I suggest
to remove it (as done in
http://lkml.kernel.org/r/201901281429
MAP was
rightfully nacked early on and quickly replaced by UFFDIO_COPY, which
is more optimal for adding memory to a mapping in small chunks, but we
can't remove memory with UFFDIO_COPY and UFFDIO_REMAP should be as
efficient as it gets when it comes to removing memory from a
mapping.
Thank you,
Andrea
On Mon, Jan 28, 2019 at 02:26:20PM -0500, Vivek Goyal wrote:
> On Mon, Jan 28, 2019 at 06:41:29PM +0100, Andrea Righi wrote:
> > Hi Vivek,
> >
> > sorry for the late reply.
> >
> > On Mon, Jan 21, 2019 at 04:47:15PM -0500, Vivek Goyal wrote:
> > > On Sat
Hi Vivek,
sorry for the late reply.
On Mon, Jan 21, 2019 at 04:47:15PM -0500, Vivek Goyal wrote:
> On Sat, Jan 19, 2019 at 11:08:27AM +0100, Andrea Righi wrote:
>
> [..]
> > Alright, let's skip the root cgroup for now. I think the point here is
> > if we want to provide s
after_ctrl_dep();
> + return true;
> +}
> +
> +return false;
There appears to be some white-space damage (here and in other places);
checkpatch.pl should point these and other style problems out.
Andrea
> }
>
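For reference, checkpatch is run from the kernel tree on the patch file, e.g.
(the file name is hypothetical; --strict enables the extra style checks):

	$ ./scripts/checkpatch.pl --strict 0001-example.patch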
o suggest s/marked/Marked, s/plain/Plain
and similarly for the other sets to be introduced.
Andrea
C sync-rcu-is-not-idempotent

{ }

P0(int *x, int *y)
{
	int r0;

	WRITE_ONCE(*x, 1);
	synchronize_rcu();
	synchronize_rcu();
	r0 = READ_ONCE(*y);
}
pu()) to honor
this address dependency. Also, mark the accesses to ->cpu and ->on_rq
with READ_ONCE()/WRITE_ONCE() to comply with the LKMM.
Signed-off-by: Andrea Parri
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: "Paul E. McKenney"
Cc: Alan Stern
Cc: Will Deacon
---
Changes in v
On Mon, Jan 21, 2019 at 01:25:26PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 21, 2019 at 11:51:21AM +0100, Andrea Parri wrote:
> > On Wed, Jan 16, 2019 at 07:42:18PM +0100, Andrea Parri wrote:
> > > The smp_wmb() in move_queued_task() (c.f., __set_task_cpu()) pairs with
>
On Mon, Jan 21, 2019 at 01:29:11PM +0100, Dmitry Vyukov wrote:
> On Mon, Jan 21, 2019 at 12:45 PM Andrea Parri
> wrote:
> >
> > On Mon, Jan 21, 2019 at 10:52:37AM +0100, Dmitry Vyukov wrote:
> >
> > [...]
> >
> > > > Am I missing som
rt
>
> Suggested-by: Kees Cook
> Reviewed-by: David Windsor
> Reviewed-by: Hans Liljestrand
> Signed-off-by: Elena Reshetova
Reviewed-by: Andrea Parri
(Same remark about the reference in the commit message. ;-) )
Andrea
> ---
> kernel/kcov.c | 9 +
> 1 file cha
dependency? Loads
> can hoist across control dependency, no?
As you remarked, the doc. says CTRL+RELEASE (so yes, loads can hoist);
AFAICR, implementations do comply with this requirement.
(FWIW, I sometimes think of this "weird" ordering as a weak "acq_rel",
the
Commit-ID: 5b735eb1ce481b2f1674a47c0995944b1cb6f5d5
Gitweb: https://git.kernel.org/tip/5b735eb1ce481b2f1674a47c0995944b1cb6f5d5
Author: Andrea Parri
AuthorDate: Mon, 3 Dec 2018 15:04:49 -0800
Committer: Ingo Molnar
CommitDate: Mon, 21 Jan 2019 11:06:55 +0100
tools/memory-model: Model
On Wed, Jan 16, 2019 at 07:42:18PM +0100, Andrea Parri wrote:
> The smp_wmb() in move_queued_task() (c.f., __set_task_cpu()) pairs with
> the composition of the dependency and the ACQUIRE in task_rq_lock():
>
> move_queued_task() task_rq_lock()
>
>
On Fri, Jan 18, 2019 at 02:46:53PM -0500, Josef Bacik wrote:
> On Fri, Jan 18, 2019 at 07:44:03PM +0100, Andrea Righi wrote:
> > On Fri, Jan 18, 2019 at 11:35:31AM -0500, Josef Bacik wrote:
> > > On Fri, Jan 18, 2019 at 11:31:24AM +0100, Andrea Righi wrote:
> > > >
On Fri, Jan 18, 2019 at 06:07:45PM +0100, Paolo Valente wrote:
>
>
> > On 18 Jan 2019, at 17:35, Josef Bacik
> > wrote:
> >
> > On Fri, Jan 18, 2019 at 11:31:24AM +0100, Andrea Righi wrote:
> >> This is a redesign of my old cgroup-io-th
On Fri, Jan 18, 2019 at 11:35:31AM -0500, Josef Bacik wrote:
> On Fri, Jan 18, 2019 at 11:31:24AM +0100, Andrea Righi wrote:
> > This is a redesign of my old cgroup-io-throttle controller:
> > https://lwn.net/Articles/330531/
> >
> > I'm resuming this old patch to point
On Fri, Jan 18, 2019 at 10:10:22AM -0500, Alan Stern wrote:
> On Thu, 17 Jan 2019, Andrea Parri wrote:
>
> > > Can the compiler (maybe, it does?) transform, at the C or at the "asm"
> > > level, LB1's P0 in LB2's P0 (LB1 and LB2 are reported below)?
> >
struct.stack_refcount to refcount_t
For the series, please feel free to add:
Reviewed-by: Andrea Parri
(You may still want to update the references to the 'refcount-vs-atomic'
doc. in the commit messages.)
Andrea
>
> fs/exec.c| 4 ++--
> fs/proc/task_nommu.c
it is hopefully soon
> in state to be merged to the documentation tree.
Just a remark to point out that that document got merged, even though
in a different location/format: c.f.,
b6e859f6cdd1 ("docs: refcount_t documentation")
Andrea
On Fri, Jan 18, 2019 at 12:04:17PM +0100, Paolo Valente wrote:
>
>
> > On 18 Jan 2019, at 11:31, Andrea Righi
> > wrote:
> >
> > This is a redesign of my old cgroup-io-throttle controller:
> > https://lwn.net/Articles/330531/
> >
>
Document the filesystem I/O controller: description, usage, design,
etc.
Signed-off-by: Andrea Righi
---
Documentation/cgroup-v1/fsio-throttle.txt | 142 ++
1 file changed, 142 insertions(+)
create mode 100644 Documentation/cgroup-v1/fsio-throttle.txt
diff --git
A: Correct, the tradeoff here is to tolerate I/O bursts during writeback to
avoid priority inversion problems in the system.
Andrea Righi (3):
fsio-throttle: documentation
fsio-throttle: controller infrastructure
fsio-throttle: instrumentation
Documentation/cgroup-v1/fsio-throt
Apply the fsio controller to the appropriate kernel functions to evaluate
and throttle filesystem I/O.
Signed-off-by: Andrea Righi
---
block/blk-core.c | 10 ++
include/linux/writeback.h | 7 ++-
mm/filemap.c | 20 +++-
mm/page-writeback.c
This is the core of the fsio-throttle controller: it defines the
interface to the cgroup subsystem and implements the I/O measurement and
throttling logic.
Signed-off-by: Andrea Righi
---
include/linux/cgroup_subsys.h | 4 +
include/linux/fsio-throttle.h | 43 +++
init/Kconfig
On Mon, Nov 26, 2018 at 05:44:12PM +0100, Andrea Parri wrote:
> As the comments for wake_up_bit() and waitqueue_active() point out,
> the barriers are needed to order the clearing of the _FL_NOT_READY
> bit and the waitqueue_active() load; match the implicit barrier in
> pre
On Wed, Jan 16, 2019 at 10:36:58PM +0100, Andrea Parri wrote:
> [...]
>
> > The difficulty with incorporating plain accesses in the memory model
> > is that the compiler has very few constraints on how it treats plain
> > accesses. It can eliminate them, duplicate them, r
**x, int *y, int *b)
{
	int r0;

	r0 = READ_ONCE(*y);
	rcu_assign_pointer(*x, b);
}

exists (0:r0=b /\ 1:r0=1)
LB1 and LB2 are data-race free, according to the patch; LB1's "exists"
clause is not satisfiable, while LB2's "exists" clause is satisfiable.