On 2/18/19 8:56 AM, Peter Zijlstra wrote:
> A much 'demanded' feature: core-scheduling :-(
>
> I still hate it with a passion, and that is part of why it took a little
> longer than 'promised'.
>
> While this one doesn't have all the 'features' of the previous (never
> published) version and isn't
The original patch seems to be missing the following change for 32-bit:
Thanks,
-Aubrey
diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
index 9fbb10383434..78de28ebc45d 100644
--- a/kernel/sched/cpuacct.c
+++ b/kernel/sched/cpuacct.c
@@ -111,7 +111,7 @@ static u64
On Thu, Mar 14, 2019 at 8:35 AM Tim Chen wrote:
> >>
> >> One more NULL pointer dereference:
> >>
> >> Mar 12 02:24:46 aubrey-ivb kernel: [ 201.916741] core sched enabled
> >> [ 201.950203] BUG: unable to handle kernel NULL pointer dereference
> >> at 0008
> >> [ 201.950254] [ cut here ]
> >> [ 201.959045] #PF error:
Hi,
With core scheduling, LTP reports 2 new failures related to
cgroups (memcg_stat_rss and memcg_move_charge_at_immigrate). I will try to
debug them.
Also, "perf sched map" indicates there might be a small window when 2
processes in different cgroups run together on one core.
In the below case B0 and
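For anyone wanting to check their own machine for the same overlap, the observation above comes from perf's scheduling-map view. A rough sketch of the invocation (options here are the generic ones; adjust the workload and duration as needed):

```shell
# Record scheduler events system-wide for a short window, then render the
# per-CPU map. Each column in the map is a CPU; comparing the columns of two
# SMT siblings of one core shows whether tasks from differently-tagged
# cgroups ever run there at the same time.
perf sched record -a -- sleep 10
perf sched map
```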
On Sat, Mar 9, 2019 at 3:50 AM Subhra Mazumdar wrote:
>
> expected. Most of the performance recovery happens in patch 15 which,
> unfortunately, is also the one that introduces the hard lockup.
>
After applying Subhra's patch, the following is triggered by enabling
core sched when a cgroup is
On 22/02/19 15:10, Peter Zijlstra wrote:
>> I agree on not bike shedding about the API, but can we agree on some of
>> the high level properties? For example, who generates the core
>> scheduling ids, what properties about them are enforced, etc.?
> It's an opaque cookie; the scheduler really
On Wed, Feb 20, 2019 at 10:33:55AM -0800, Greg Kerr wrote:
> > On Tue, Feb 19, 2019 at 02:07:01PM -0800, Greg Kerr wrote:
> Using cgroups could imply that a privileged user is meant to create and
> track all the core scheduling groups. It sounds like you picked cgroups
> out of ease of
On Wed, Feb 20, 2019 at 10:42:55AM +0100, Peter Zijlstra wrote:
>
> A: Because it messes up the order in which people normally read text.
> Q: Why is top-posting such a bad thing?
> A: Top-posting.
> Q: What is the most annoying thing in e-mail?
>
I am relieved to know that when my mail client
Thanks for posting this patchset Peter. Based on the patch titled, "sched: A
quick and dirty cgroup tagging interface," I believe cgroups are used to
define co-scheduling groups in this implementation.
Chrome OS engineers (kerr...@google.com, mpden...@google.com, and
pal...@google.com) are
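Since the message above refers to the "quick and dirty cgroup tagging interface," a rough sketch of how tagging might look from userspace may help. Treat the file name (`cpu.tag`), mount point, and PIDs below as assumptions drawn from the patch title, not a verified interface:

```shell
# Sketch under assumptions: the series is described as adding a tagging
# knob to the cpu cgroup controller; exact names may differ.
cd /sys/fs/cgroup/cpu

mkdir vm1 vm2

# Tag each group. Tasks in differently-tagged groups get distinct
# cookies and will not be co-scheduled on SMT siblings of one core.
echo 1 > vm1/cpu.tag
echo 1 > vm2/cpu.tag

# Move the workloads into their groups ($VM1_PID/$VM2_PID are placeholders).
echo "$VM1_PID" > vm1/tasks
echo "$VM2_PID" > vm2/tasks
```

This is also the crux of the objection quoted earlier in the thread: with cgroups as the interface, a privileged user ends up creating and tracking all co-scheduling groups.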
On Mon, Feb 18, 2019 at 12:40 PM Peter Zijlstra wrote:
>
> If there were close to no VMEXITs, it beat smt=off, if there were lots
> of VMEXITs it was far far worse. Supposedly hosting people try their
> very bestest to have no VMEXITs so it mostly works for them (with the
> obvious exception of
On Mon, Feb 18, 2019 at 9:40 AM Peter Zijlstra wrote:
>
> However; whichever way around you turn this cookie; it is expensive and nasty.
Do you (or anybody else) have numbers for real loads?
Because performance is all that matters. If performance is bad, then
it's pointless, since just turning
A much 'demanded' feature: core-scheduling :-(
I still hate it with a passion, and that is part of why it took a little
longer than 'promised'.
While this one doesn't have all the 'features' of the previous (never
published) version and isn't L1TF 'complete', I tend to like the structure