On 07/06/16 12:26, Daniel Bristot de Oliveira wrote:
> Ciao Juri,
>
> On 06/07/2016 10:30 AM, Juri Lelli wrote:
> > So, this and the partitioned one could actually overlap, since we don't
> > set cpu_exclusive. Is that right?
> >
> > I guess the affinity mask of both m processes gets set correctly, b
Ciao Juri,
On 06/07/2016 10:30 AM, Juri Lelli wrote:
> So, this and the partitioned one could actually overlap, since we don't
> set cpu_exclusive. Is that right?
>
> I guess the affinity mask of both m processes gets set correctly, but I'm
> not sure if we are missing one check in the admission cont
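For readers following the admission-control question above: a minimal sketch, not taken from this thread, of where that check actually runs, namely sched_setattr(2). The kernel compares the requested bandwidth (runtime over period) against what is free in the root domain the task lives in and fails the call, typically with EBUSY, if the reservation does not fit. The 10 ms / 100 ms parameters below are arbitrary.

/* Sketch only: ask for a SCHED_DEADLINE reservation; the kernel's
 * admission control runs inside sched_setattr(). */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

/* No glibc wrapper for sched_setattr(), so define the attr layout here. */
struct sched_attr {
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;
        uint32_t sched_priority;
        /* SCHED_DEADLINE parameters, in nanoseconds */
        uint64_t sched_runtime;
        uint64_t sched_deadline;
        uint64_t sched_period;
};

int main(void)
{
        struct sched_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.sched_policy   = SCHED_DEADLINE;
        attr.sched_runtime  =  10 * 1000 * 1000;  /*  10 ms */
        attr.sched_deadline = 100 * 1000 * 1000;  /* 100 ms */
        attr.sched_period   = 100 * 1000 * 1000;  /* 100 ms */

        /* Admission control happens inside this call. */
        if (syscall(__NR_sched_setattr, 0, &attr, 0)) {
                perror("sched_setattr");  /* e.g. EBUSY: no bandwidth left */
                return 1;
        }

        pause();  /* keep the task, and its reservation, alive */
        return 0;
}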
Oops.
While doing further tests on my patch I found a problem:
[ 82.390739] =================================
[ 82.390749] [ INFO: inconsistent lock state ]
[ 82.390759] 4.7.0-rc2+ #5 Not tainted
[ 82.390768] ---------------------------------
[ 82.390777] inconsistent {HARDIRQ-ON-W} ->
On 07/06/16 09:39, Daniel Bristot de Oliveira wrote:
> Ciao Juri,
>
Ciao, :-)
> On 06/07/2016 07:14 AM, Juri Lelli wrote:
> > Interesting. And your test is using the cpuset controller to partition
> > DEADLINE tasks and then modify groups concurrently?
>
> Yes. I was studying the partitioning/admissi
Ciao Juri,
On 06/07/2016 07:14 AM, Juri Lelli wrote:
> Interesting. And your test is using the cpuset controller to partition
> DEADLINE tasks and then modify groups concurrently?
Yes. I was studying the partitioning/admission control of the
deadline scheduler in order to document it.
I was using the minimal
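The exact steps are cut off above; purely as an illustration, a partitioned setup along the lines of Documentation/scheduler/sched-deadline.txt usually looks like the sketch below: root load balancing is switched off, an exclusive cpuset owning CPU 0 is created, and the calling task is moved into it before any SCHED_DEADLINE reservation is requested. The /sys/fs/cgroup/cpuset mount point, the "cpu0" name and the CPU/node numbers are assumptions.

/* Illustrative sketch only -- not the reporter's actual steps. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define CPUSET_ROOT "/sys/fs/cgroup/cpuset"

/* Write a short string into a file below CPUSET_ROOT, bail out on error. */
static void write_file(const char *file, const char *val)
{
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path), CPUSET_ROOT "/%s", file);
        f = fopen(path, "w");
        if (!f || fprintf(f, "%s", val) < 0 || fclose(f) == EOF) {
                fprintf(stderr, "%s: %s\n", path, strerror(errno));
                exit(1);
        }
}

int main(void)
{
        char pid[32];

        if (mkdir(CPUSET_ROOT "/cpu0", 0755) && errno != EEXIST) {
                perror("mkdir");
                return 1;
        }

        /* Give the new cpuset CPU 0 and memory node 0. */
        write_file("cpu0/cpuset.cpus", "0");
        write_file("cpu0/cpuset.mems", "0");

        /* Carve CPU 0 out of the root domain: mark the hierarchy exclusive
         * and stop the root cpuset from balancing over all CPUs. */
        write_file("cpuset.cpu_exclusive", "1");
        write_file("cpuset.sched_load_balance", "0");
        write_file("cpu0/cpuset.cpu_exclusive", "1");
        write_file("cpu0/cpuset.mem_exclusive", "1");

        /* Move the calling task into the partition; SCHED_DEADLINE tasks
         * started from here are admitted against CPU 0 only. */
        snprintf(pid, sizeof(pid), "%d", getpid());
        write_file("cpu0/tasks", pid);

        return 0;
}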
Hi,
On 06/06/16 19:24, Daniel Bristot de Oliveira wrote:
> While testing the deadline scheduler + cgroup setup I hit this
> warning.
>
> [ 132.612935] ------------[ cut here ]------------
> [ 132.612951] WARNING: CPU: 5 PID: 0 at kernel/softirq.c:150
> __local_bh_enable_ip+0x6b/0x80
> [ 132.6