On Fri, Jul 22, 2016 at 08:50:44AM -0400, Chris Metcalf wrote:
> On 7/21/2016 10:20 PM, Christoph Lameter wrote:
> >On Thu, 21 Jul 2016, Chris Metcalf wrote:
> >>On 7/20/2016 10:04 PM, Christoph Lameter wrote:
> >>unstable, and then scheduling work to safely remove that timer.
> >>I haven't looked
We tested this with 4.7-rc7 and aside from the issue with
clocksource_watchdog() this is working fine.
Tested-by: Christoph Lameter
On Fri, 22 Jul 2016, Chris Metcalf wrote:
> > It already has a stable clocksource. Sorry but that was one of the criteria
> > for the server when we ordered them. Could this be clock adjustments?
>
> We probably need to get clock folks to jump in on this thread!
Guess so. I will have a look at this.
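
As a quick sanity check (not from the thread, just standard sysfs), the
clocksource actually in use can be read back from userspace before blaming
clock adjustments:

/*
 * Sketch: print the clocksource the kernel is currently using, plus the
 * alternatives it has registered, via the standard sysfs files.
 */
#include <stdio.h>

static void dump(const char *path)
{
	char buf[128];
	FILE *f = fopen(path, "r");

	if (f && fgets(buf, sizeof(buf), f))
		printf("%s: %s", path, buf);
	if (f)
		fclose(f);
}

int main(void)
{
	dump("/sys/devices/system/clocksource/clocksource0/current_clocksource");
	dump("/sys/devices/system/clocksource/clocksource0/available_clocksource");
	return 0;
}

If this reports tsc on the machine in question, the clocksource_watchdog()
behaviour discussed below is a more plausible source of the disturbance than
the clocksource itself.
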
On 7/21/2016 10:20 PM, Christoph Lameter wrote:
> On Thu, 21 Jul 2016, Chris Metcalf wrote:
>> On 7/20/2016 10:04 PM, Christoph Lameter wrote:
>> unstable, and then scheduling work to safely remove that timer.
>> I haven't looked at this code before (in kernel/time/clocksource.c
>> under CONFIG_CLOCKSOURCE_WATCHDOG)
On Thu, 21 Jul 2016, Chris Metcalf wrote:
> On 7/20/2016 10:04 PM, Christoph Lameter wrote:
> unstable, and then scheduling work to safely remove that timer.
> I haven't looked at this code before (in kernel/time/clocksource.c
> under CONFIG_CLOCKSOURCE_WATCHDOG) since the timers on
> arm64 and t
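
For reference, a rough paraphrase (simplified, not the exact code) of what the
4.7-era watchdog in kernel/time/clocksource.c does with its timer; the point
is that it re-arms itself on the next online CPU every interval, so the timer
interrupt visits every CPU in turn, including nominally isolated ones:

#include <linux/timer.h>
#include <linux/jiffies.h>
#include <linux/cpumask.h>
#include <linux/smp.h>

#define WATCHDOG_INTERVAL	(HZ >> 1)

static struct timer_list watchdog_timer;

/*
 * Sketch of the watchdog callback. The real timer is first armed with
 * add_timer_on() when a clocksource that needs verification is registered;
 * after that it keeps migrating itself as shown here.
 */
static void clocksource_watchdog_sketch(unsigned long data)
{
	int next_cpu;

	/* ... compare the monitored clocksources against the watchdog
	 * clocksource; anything that has drifted too far is marked
	 * unstable and work is scheduled to switch away from it ... */

	/* Cycle through the online CPUs: re-arm on the next one, so the
	 * periodic interrupt lands on every CPU in turn. */
	next_cpu = cpumask_next(raw_smp_processor_id(), cpu_online_mask);
	if (next_cpu >= nr_cpu_ids)
		next_cpu = cpumask_first(cpu_online_mask);
	watchdog_timer.expires += WATCHDOG_INTERVAL;
	add_timer_on(&watchdog_timer, next_cpu);
}
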
On 7/20/2016 10:04 PM, Christoph Lameter wrote:
> We are trying to test the patchset on x86 and are getting strange
> backtraces and aborts. It seems that the cpu before the cpu we are running
> on creates an irq_work event that causes a latency event on the next cpu.
> This is weird. Is there a new round robin IPI feature in the kernel that I
> am not aware of?
We are trying to test the patchset on x86 and are getting strange
backtraces and aborts. It seems that the cpu before the cpu we are running
on creates an irq_work event that causes a latency event on the next cpu.
This is weird. Is there a new round robin IPI feature in the kernel that I
am not aware of?
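
One way activity on a neighbouring CPU turns into an interrupt like that: an
irq_work queued for a remote CPU is delivered by IPI and its callback runs
there in hard-IRQ context. A hypothetical test module (not from the patch
set; all names here are made up) sketching that mechanism:

/*
 * Queue an irq_work on another CPU. irq_work_queue_on() sends that CPU an
 * IPI, and the callback then runs there in hard-IRQ context, which is the
 * kind of cross-CPU disturbance being reported above.
 */
#include <linux/module.h>
#include <linux/irq_work.h>
#include <linux/smp.h>

static int target_cpu = 1;	/* pick a CPU you expect to be isolated */
module_param(target_cpu, int, 0444);

static struct irq_work demo_work;

static void demo_irq_work_fn(struct irq_work *work)
{
	pr_info("irq_work ran on cpu %d\n", smp_processor_id());
}

static int __init demo_init(void)
{
	init_irq_work(&demo_work, demo_irq_work_fn);
	if (cpu_online(target_cpu))
		irq_work_queue_on(&demo_work, target_cpu);
	return 0;
}

static void __exit demo_exit(void)
{
	irq_work_sync(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
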
On 7/18/2016 6:11 PM, Andy Lutomirski wrote:
> As an example, enough vmalloc/vfree activity will eventually cause
> flush_tlb_kernel_range to be called and *boom*, there goes your shiny
> production dataplane application.

Well, that's actually a refinement that I did not inflict on this patch
series.
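
A hypothetical stress module (not from the series; the name is made up) that
provokes the path Andy describes: vfree()d areas are reclaimed lazily, and
once enough accumulate the purge step calls flush_tlb_kernel_range(), which
on x86 interrupts every online CPU, isolated or not:

/*
 * Churn vmalloc/vfree until the lazy vmap purge kicks in and the resulting
 * kernel-range TLB flush IPIs every CPU, the "*boom*" quoted above.
 */
#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>
#include <linux/sched.h>

static int __init vmalloc_churn_init(void)
{
	int i;

	for (i = 0; i < 200000; i++) {
		void *p = vmalloc(16 * PAGE_SIZE);

		if (!p)
			return -ENOMEM;
		vfree(p);	/* freed areas accumulate until purged */
		if (!(i % 1024))
			cond_resched();
	}
	return 0;
}

static void __exit vmalloc_churn_exit(void)
{
}

module_init(vmalloc_churn_init);
module_exit(vmalloc_churn_exit);
MODULE_LICENSE("GPL");
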
On Thu, Jul 14, 2016 at 2:22 PM, Chris Metcalf wrote:
> On 7/14/2016 5:03 PM, Andy Lutomirski wrote:
>>
>> On Thu, Jul 14, 2016 at 1:48 PM, Chris Metcalf
>> wrote:
>>>
>>> Here is a respin of the task-isolation patch set. This primarily
>>> reflects feedback from Frederic and Peter Z.
>>
>> I still think this is the wrong approach, at least at this point.
On Thu, 14 Jul 2016, Andy Lutomirski wrote:
> As an example, enough vmalloc/vfree activity will eventually cause
> flush_tlb_kernel_range to be called and *boom*, there goes your shiny
> production dataplane application. Once virtually mapped kernel stacks
> happen, the frequency with which this
On 7/14/2016 5:03 PM, Andy Lutomirski wrote:
> On Thu, Jul 14, 2016 at 1:48 PM, Chris Metcalf wrote:
>> Here is a respin of the task-isolation patch set. This primarily
>> reflects feedback from Frederic and Peter Z.
> I still think this is the wrong approach, at least at this point. The
> first step should be to instrument things if necessary
On Thu, Jul 14, 2016 at 1:48 PM, Chris Metcalf wrote:
> Here is a respin of the task-isolation patch set. This primarily
> reflects feedback from Frederic and Peter Z.
I still think this is the wrong approach, at least at this point. The
first step should be to instrument things if necessary an
Here is a respin of the task-isolation patch set. This primarily
reflects feedback from Frederic and Peter Z.
Changes since v12:
- Rebased on v4.7-rc7.
- New default "strict" model for task isolation - tasks exit the
kernel from the initial prctl() to userspace, and can only legally
exit by
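
For context, a sketch of the intended usage from the application side. The
PR_SET_TASK_ISOLATION and PR_TASK_ISOLATION_ENABLE names and the value 48
below are taken from this patch series rather than mainline, so treat them as
provisional; the CPU number is arbitrary:

/*
 * Minimal dataplane-style task: pin to an isolated core, enable task
 * isolation via the proposed prctl(), then stay in pure userspace.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_TASK_ISOLATION
#define PR_SET_TASK_ISOLATION		48	/* from the series, not mainline */
#define PR_TASK_ISOLATION_ENABLE	(1 << 0)
#endif

int main(void)
{
	cpu_set_t set;

	/* Pin to a nohz_full core first (CPU 3 is just an example). */
	CPU_ZERO(&set);
	CPU_SET(3, &set);
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return 1;
	}

	/* With the "strict" default, the prctl() returns to userspace and
	 * any later kernel entry (syscall, page fault, etc.) is treated as
	 * a violation until isolation is turned off again. */
	if (prctl(PR_SET_TASK_ISOLATION, PR_TASK_ISOLATION_ENABLE, 0, 0, 0)) {
		perror("prctl(PR_SET_TASK_ISOLATION)");
		return 1;
	}

	for (;;)
		;	/* busy-poll in pure userspace; no syscalls */
}

Under the strict model described above, the polling loop has to stay out of
the kernel entirely; even an occasional write() for logging would count as a
violation.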