Re: Enable arm_global_timer for Zynq brakes boot

2013-08-20 Thread Daniel Lezcano
On 08/20/2013 02:57 AM, Stephen Boyd wrote:
> On 08/19, Sören Brinkmann wrote:
>> Hi Stephen,
>>
>> On Mon, Aug 19, 2013 at 04:00:36PM -0700, Stephen Boyd wrote:
>>> On 08/16/13 10:28, Sören Brinkmann wrote:
 On Mon, Aug 12, 2013 at 07:02:39PM +0200, Daniel Lezcano wrote:
> On 08/12/2013 06:53 PM, Sören Brinkmann wrote:
>> It's actually present. I have a clean 3.11-rc3 and the only changes are
>> my patch to enable the GT and Stephen's fix.
>> The cpuidle stats show both idle states being used.
> Ah, right. The tick_broadcast_mask is not set because the arm global
> timer does not have the CLOCK_EVT_FEAT_C3STOP feature flag set.
 Just to check in. Do you want any additional testing done? Or can I
 expect Stephen's fix to get merged, so Zynq can use the GT?

>>>
>>> I was curious, can you use just the first hunk of the patch (the one
>>> applying to tick-broadcast.c) to fix the problem? I think the answer is yes.
>>
>> Yes, that seems to be enough.
>>
> 
> Great, thank you. I will split the patch into two pieces. That way
> we can discuss the merit of always using a timer that doesn't
> suffer from FEAT_C3_STOP over a timer that does.

Yes, that sounds like a good idea.

  -- Daniel



Re: Enable arm_global_timer for Zynq brakes boot

2013-08-19 Thread Stephen Boyd
On 08/19, Sören Brinkmann wrote:
> Hi Stephen,
> 
> On Mon, Aug 19, 2013 at 04:00:36PM -0700, Stephen Boyd wrote:
> > On 08/16/13 10:28, Sören Brinkmann wrote:
> > > On Mon, Aug 12, 2013 at 07:02:39PM +0200, Daniel Lezcano wrote:
> > >> On 08/12/2013 06:53 PM, Sören Brinkmann wrote:
> > >>> It's actually present. I have a clean 3.11-rc3 and the only changes are
> > >>> my patch to enable the GT and Stephen's fix.
> > >>> The cpuidle stats show both idle states being used.
> > >> Ah, right. The tick_broadcast_mask is not set because the arm global
> > >> timer does not have the CLOCK_EVT_FEAT_C3STOP feature flag set.
> > > Just to check in. Do you want any additional testing done? Or can I
> > > expect Stephen's fix to get merged, so Zynq can use the GT?
> > >
> > 
> > I was curious, can you use just the first hunk of the patch (the one
> > applying to tick-broadcast.c) to fix the problem? I think the answer is yes.
> 
> Yes, that seems to be enough.
> 

Great, thank you. I will split the patch into two pieces. That way
we can discuss the merit of always using a timer that doesn't
suffer from FEAT_C3_STOP over a timer that does.

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation


Re: Enable arm_global_timer for Zynq brakes boot

2013-08-19 Thread Sören Brinkmann
Hi Stephen,

On Mon, Aug 19, 2013 at 04:00:36PM -0700, Stephen Boyd wrote:
> On 08/16/13 10:28, Sören Brinkmann wrote:
> > On Mon, Aug 12, 2013 at 07:02:39PM +0200, Daniel Lezcano wrote:
> >> On 08/12/2013 06:53 PM, Sören Brinkmann wrote:
> >>> It's actually present. I have a clean 3.11-rc3 and the only changes are
> >>> my patch to enable the GT and Stephen's fix.
> >>> The cpuidle stats show both idle states being used.
> >> Ah, right. The tick_broadcast_mask is not set because the arm global
> >> timer does not have the CLOCK_EVT_FEAT_C3STOP feature flag set.
> > Just to check in. Do you want any additional testing done? Or can I
> > expect Stephen's fix to get merged, so Zynq can use the GT?
> >
> 
> I was curious, can you use just the first hunk of the patch (the one
> applying to tick-broadcast.c) to fix the problem? I think the answer is yes.

Yes, that seems to be enough.

# cat /proc/interrupts 
   CPU0   CPU1   
 27: 14  1   GIC  27  gt
 29:664759   GIC  29  twd
 43:725  0   GIC  43  ttc_clockevent
 82:214  0   GIC  82  xuartps
IPI0:  0  0  CPU wakeup interrupts
IPI1:  0 58  Timer broadcast interrupts
IPI2:   1224   1120  Rescheduling interrupts
IPI3:  0  0  Function call interrupts
IPI4: 44 50  Single function call interrupts
IPI5:  0  0  CPU stop interrupts
Err:  0

Timer list:
Tick Device: mode: 1
Broadcast device
Clock Event Device: ttc_clockevent
 max_delta_ns:   1207932479
 min_delta_ns:   18432
 mult:   233015
 shift:  32
 mode:   3
 next_event: 6008000 nsecs
 set_next_event: ttc_set_next_event
 set_mode:   ttc_set_mode
 event_handler:  tick_handle_oneshot_broadcast
 retries:0

tick_broadcast_mask: 0003
tick_broadcast_oneshot_mask: 

Tick Device: mode: 1
Per CPU device: 0
Clock Event Device: local_timer
 max_delta_ns:   12884902005
 min_delta_ns:   1000
 mult:   715827876
 shift:  31
 mode:   3
 next_event: 59075238755 nsecs
 set_next_event: twd_set_next_event
 set_mode:   twd_set_mode
 event_handler:  hrtimer_interrupt
 retries:0

Tick Device: mode: 1
Per CPU device: 1
Clock Event Device: local_timer
 max_delta_ns:   12884902005
 min_delta_ns:   1000
 mult:   715827876
 shift:  31
 mode:   3
 next_event: 5908000 nsecs
 set_next_event: twd_set_next_event
 set_mode:   twd_set_mode
 event_handler:  hrtimer_interrupt
 retries:0


Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-19 Thread Stephen Boyd
On 08/16/13 10:28, Sören Brinkmann wrote:
> On Mon, Aug 12, 2013 at 07:02:39PM +0200, Daniel Lezcano wrote:
>> On 08/12/2013 06:53 PM, Sören Brinkmann wrote:
>>> It's actually present. I have a clean 3.11-rc3 and the only changes are
>>> my patch to enable the GT and Stephen's fix.
>>> The cpuidle stats show both idle states being used.
>> Ah, right. The tick_broadcast_mask is not set because the arm global
>> timer does not have the CLOCK_EVT_FEAT_C3STOP feature flag set.
> Just to check in. Do you want any additional testing done? Or can I
> expect Stephen's fix to get merged, so Zynq can use the GT?
>

I was curious, can you use just the first hunk of the patch (the one
applying to tick-broadcast.c) to fix the problem? I think the answer is yes.
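
For reference, the first hunk being discussed is the tick-broadcast.c
change from Stephen's patch quoted elsewhere in the thread (indentation
normalized here):

diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -77,6 +77,9 @@ static bool tick_check_broadcast_device(struct clock_event_device *curdev,
 	    !(newdev->features & CLOCK_EVT_FEAT_ONESHOT))
 		return false;
 
+	if (cpumask_equal(newdev->cpumask, cpumask_of(smp_processor_id())))
+		return false;
+
 	return !curdev || newdev->rating > curdev->rating;
 }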

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation


Re: Enable arm_global_timer for Zynq brakes boot

2013-08-16 Thread Sören Brinkmann
On Mon, Aug 12, 2013 at 07:02:39PM +0200, Daniel Lezcano wrote:
> On 08/12/2013 06:53 PM, Sören Brinkmann wrote:
> > On Mon, Aug 12, 2013 at 06:49:17PM +0200, Daniel Lezcano wrote:
> >> On 08/12/2013 06:32 PM, Sören Brinkmann wrote:
> >>> On Mon, Aug 12, 2013 at 09:20:19AM -0700, Stephen Boyd wrote:
>  On 08/12/13 09:03, Sören Brinkmann wrote:
> > On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> >> On 08/09, Daniel Lezcano wrote:
> >>> yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> >>> wake it up, no? As Stephen stated, this kind of configuration has never
> >>> been tested before, so the tick broadcast code is not handling this case
> >>> properly IMHO.
> >>>
> >> If you have a per-cpu tick device that isn't suffering from
> >> FEAT_C3_STOP, why wouldn't you use that for the tick versus a
> >> per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> >> is a bug in the preference logic, or you should boost the rating
> >> of the arm global timer above the twd. Does this patch help? It
> >> should make the arm global timer the tick device and whatever
> >> cadence timer you have the broadcast device.
> > I finally got to test your patch. Unfortunately, it makes the system
> > hang even earlier:
> 
>  Sorry it had a bug depending on the registration order. Can you try this
>  one (tabs are probably spaces, sorry)? I will go read through this
>  thread to see if we already covered the registration order.
> >>>
> >>> That did it! Booted straight into the system. 
> >>
> >> Good news :)
> >>
> >>> The broadcast device is
> >>> the TTC instead of GT, now.
> >>>
> >>>   Tick Device: mode: 1
> >>>   Broadcast device
> >>>   Clock Event Device: ttc_clockevent
> >>>max_delta_ns:   1207932479
> >>>min_delta_ns:   18432
> >>>mult:   233015
> >>>shift:  32
> >>>mode:   1
> >>>next_event: 9223372036854775807 nsecs
> >>>set_next_event: ttc_set_next_event
> >>>set_mode:   ttc_set_mode
> >>>event_handler:  tick_handle_oneshot_broadcast
> >>>retries:0
> >>>   
> >>>   tick_broadcast_mask: 
> >>>   tick_broadcast_oneshot_mask: 
> >>
> >> At first glance, the timer broadcast usage is not set, right? Can
> >> you try with the cpuidle flag even if it is not needed?
> > 
> > It's actually present. I have a clean 3.11-rc3 and the only changes are
> > my patch to enable the GT and Stephen's fix.
> > The cpuidle stats show both idle states being used.
> 
> Ah, right. The tick_broadcast_mask is not set because the arm global
> timer does not have the CLOCK_EVT_FEAT_C3STOP feature flag set.

Just to check in. Do you want any additional testing done? Or can I
expect Stephen's fix to get merged, so Zynq can use the GT?

Thanks,
Sören



Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Daniel Lezcano
On 08/12/2013 06:53 PM, Sören Brinkmann wrote:
> On Mon, Aug 12, 2013 at 06:49:17PM +0200, Daniel Lezcano wrote:
>> On 08/12/2013 06:32 PM, Sören Brinkmann wrote:
>>> On Mon, Aug 12, 2013 at 09:20:19AM -0700, Stephen Boyd wrote:
 On 08/12/13 09:03, Sören Brinkmann wrote:
> On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
>> On 08/09, Daniel Lezcano wrote:
>>> yes, but at least the broadcast mechanism should send an IPI to cpu0 to
>>> wake it up, no? As Stephen stated, this kind of configuration has never
>>> been tested before, so the tick broadcast code is not handling this case
>>> properly IMHO.
>>>
>> If you have a per-cpu tick device that isn't suffering from
>> FEAT_C3_STOP, why wouldn't you use that for the tick versus a
>> per-cpu tick device that has FEAT_C3_STOP? It sounds like there
>> is a bug in the preference logic, or you should boost the rating
>> of the arm global timer above the twd. Does this patch help? It
>> should make the arm global timer the tick device and whatever
>> cadence timer you have the broadcast device.
> I finally got to test your patch. Unfortunately, it makes the system
> hang even earlier:

 Sorry it had a bug depending on the registration order. Can you try this
 one (tabs are probably spaces, sorry)? I will go read through this
 thread to see if we already covered the registration order.
>>>
>>> That did it! Booted straight into the system. 
>>
>> Good news :)
>>
>>> The broadcast device is
>>> the TTC instead of GT, now.
>>>
>>> Tick Device: mode: 1
>>> Broadcast device
>>> Clock Event Device: ttc_clockevent
>>>  max_delta_ns:   1207932479
>>>  min_delta_ns:   18432
>>>  mult:   233015
>>>  shift:  32
>>>  mode:   1
>>>  next_event: 9223372036854775807 nsecs
>>>  set_next_event: ttc_set_next_event
>>>  set_mode:   ttc_set_mode
>>>  event_handler:  tick_handle_oneshot_broadcast
>>>  retries:0
>>> 
>>> tick_broadcast_mask: 
>>> tick_broadcast_oneshot_mask: 
>>
>> At first glance, the timer broadcast usage is not set, right? Can
>> you try with the cpuidle flag even if it is not needed?
> 
> It's actually present. I have a clean 3.11-rc3 and the only changes are
> my patch to enable the GT and Stephen's fix.
> The cpuidle stats show both idle states being used.

Ah, right. The tick_broadcast_mask is not set because the arm global
timer does not have the CLOCK_EVT_FEAT_C3STOP feature flag set.
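
For readers following the thread: a condensed, paraphrased sketch of the
check Daniel is describing, from kernel/time/tick-broadcast.c around v3.11.
The names match the kernel, but the body is abridged, so treat it as a
sketch rather than the exact source:

/*
 * Decide whether this CPU's tick device needs the broadcast mechanism.
 * A functional device without CLOCK_EVT_FEAT_C3STOP keeps ticking in deep
 * idle, so its CPU is cleared from tick_broadcast_mask -- which is why the
 * mask stays empty when the arm global timer is the tick device.
 */
int tick_device_uses_broadcast(struct clock_event_device *dev, int cpu)
{
	unsigned long flags;
	int ret = 0;

	raw_spin_lock_irqsave(&tick_broadcast_lock, flags);
	if (!tick_device_is_functional(dev)) {
		/* Dummy device: this CPU must be woken by broadcast IPIs. */
		cpumask_set_cpu(cpu, tick_broadcast_mask);
		ret = 1;
	} else if (!(dev->features & CLOCK_EVT_FEAT_C3STOP)) {
		/* The timer survives deep idle: no broadcast wakeup needed. */
		cpumask_clear_cpu(cpu, tick_broadcast_mask);
	}
	raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
	return ret;
}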

Thanks
  -- Daniel




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Sören Brinkmann
On Mon, Aug 12, 2013 at 06:49:17PM +0200, Daniel Lezcano wrote:
> On 08/12/2013 06:32 PM, Sören Brinkmann wrote:
> > On Mon, Aug 12, 2013 at 09:20:19AM -0700, Stephen Boyd wrote:
> >> On 08/12/13 09:03, Sören Brinkmann wrote:
> >>> On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
>  On 08/09, Daniel Lezcano wrote:
> > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > wake it up, no? As Stephen stated, this kind of configuration has never
> > been tested before, so the tick broadcast code is not handling this case
> > properly IMHO.
> >
>  If you have a per-cpu tick device that isn't suffering from
>  FEAT_C3_STOP, why wouldn't you use that for the tick versus a
>  per-cpu tick device that has FEAT_C3_STOP? It sounds like there
>  is a bug in the preference logic, or you should boost the rating
>  of the arm global timer above the twd. Does this patch help? It
>  should make the arm global timer the tick device and whatever
>  cadence timer you have the broadcast device.
> >>> I finally got to test your patch. Unfortunately, it makes the system
> >>> hang even earlier:
> >>
> >> Sorry it had a bug depending on the registration order. Can you try this
> >> one (tabs are probably spaces, sorry)? I will go read through this
> >> thread to see if we already covered the registration order.
> > 
> > That did it! Booted straight into the system. 
> 
> Good news :)
> 
> > The broadcast device is
> > the TTC instead of GT, now.
> > 
> > Tick Device: mode: 1
> > Broadcast device
> > Clock Event Device: ttc_clockevent
> >  max_delta_ns:   1207932479
> >  min_delta_ns:   18432
> >  mult:   233015
> >  shift:  32
> >  mode:   1
> >  next_event: 9223372036854775807 nsecs
> >  set_next_event: ttc_set_next_event
> >  set_mode:   ttc_set_mode
> >  event_handler:  tick_handle_oneshot_broadcast
> >  retries:0
> > 
> > tick_broadcast_mask: 
> > tick_broadcast_oneshot_mask: 
> 
> At first glance, the timer broadcast usage is not set, right? Can
> you try with the cpuidle flag even if it is not needed?

It's actually present. I have a clean 3.11-rc3 and the only changes are
my patch to enable the GT and Stephen's fix.
The cpuidle stats show both idle states being used.
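
For context, a paraphrased sketch of how the then-new
drivers/clocksource/arm_global_timer.c registers its per-CPU clockevent.
The point to notice is that the features mask deliberately omits
CLOCK_EVT_FEAT_C3STOP; the rating and limit values are quoted from memory
of the v3.11-era driver and may differ slightly:

/* Paraphrased from drivers/clocksource/arm_global_timer.c (circa v3.11). */
static int gt_clockevents_init(struct clock_event_device *clk)
{
	int cpu = smp_processor_id();

	clk->name = "arm_global_timer";
	/* No CLOCK_EVT_FEAT_C3STOP: the global timer keeps counting while
	 * CPUs sit in deep idle states. */
	clk->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT;
	clk->rating = 300;	/* the twd local timer registers at 350 */
	clk->set_mode = gt_clockevent_set_mode;
	clk->set_next_event = gt_clockevent_set_next_event;
	clk->cpumask = cpumask_of(cpu);
	clockevents_config_and_register(clk, gt_clk_rate, 1, 0xffffffff);
	return 0;
}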

Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Daniel Lezcano
On 08/12/2013 06:23 PM, Stephen Boyd wrote:
> On 08/12/13 03:53, Daniel Lezcano wrote:
>> On 08/09/2013 07:27 PM, Stephen Boyd wrote:
>>> On 08/09, Daniel Lezcano wrote:
 yes, but at least the broadcast mechanism should send an IPI to cpu0 to
 wake it up, no? As Stephen stated, this kind of configuration has never
 been tested before, so the tick broadcast code is not handling this case
 properly IMHO.

>>> If you have a per-cpu tick device that isn't suffering from
>>> FEAT_C3_STOP, why wouldn't you use that for the tick versus a
>>> per-cpu tick device that has FEAT_C3_STOP? It sounds like there
>>> is a bug in the preference logic, or you should boost the rating
>>> of the arm global timer above the twd. Does this patch help? It
>>> should make the arm global timer the tick device and whatever
>>> cadence timer you have the broadcast device.
>>>
>>> ---8<
>>> diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
>>> index 218bcb5..d3539e5 100644
>>> --- a/kernel/time/tick-broadcast.c
>>> +++ b/kernel/time/tick-broadcast.c
>>> @@ -77,6 +77,9 @@ static bool tick_check_broadcast_device(struct clock_event_device *curdev,
>>> !(newdev->features & CLOCK_EVT_FEAT_ONESHOT))
>>> return false;
>>>  
>>> +   if (cpumask_equal(newdev->cpumask, cpumask_of(smp_processor_id())))
>>> +   return false;
>> Yes, that makes sense to prevent local timer devices from becoming the broadcast one.
>>
>> In the case of the arm global timer, each cpu will register its own timer,
>> so the test will be OK, but is it possible that cpu0 registers the timers
>> for the other cpus?
> 
> As far as I know, every tick device has to be registered on the CPU that
> it will be used on. See tick_check_percpu().

Ah, OK, I see. Thanks for the pointer.

>>> return !curdev || newdev->rating > curdev->rating;
>>>  }
>>>  
>>> diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
>>> index 64522ec..1628b9f 100644
>>> --- a/kernel/time/tick-common.c
>>> +++ b/kernel/time/tick-common.c
>>> @@ -245,6 +245,15 @@ static bool tick_check_preferred(struct clock_event_device *curdev,
>>> }
>>>  
>>> /*
>>> +* Prefer tick devices that don't suffer from FEAT_C3_STOP
>>> +* regardless of their rating
>>> +*/
>>> +   if (curdev && cpumask_equal(curdev->cpumask, newdev->cpumask) &&
>>> +   !(newdev->features & CLOCK_EVT_FEAT_C3STOP) &&
>>> +   (curdev->features & CLOCK_EVT_FEAT_C3STOP))
>>> +   return true;
>> That sounds reasonable, but what is the acceptable gap between the
>> ratings? I am wondering if there isn't too much heuristic in the tick
>> code...
> 
> Yes, I wonder if we should just change the ratings of the clockevents.
> But it feels to me like the rating should only matter if the two are
> equal in features. Otherwise we can use the features to determine what
> we want.

Is it desirable for a real-time system? (I am not an expert in this area, so
maybe this question makes no sense :)




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Daniel Lezcano
On 08/12/2013 06:32 PM, Sören Brinkmann wrote:
> On Mon, Aug 12, 2013 at 09:20:19AM -0700, Stephen Boyd wrote:
>> On 08/12/13 09:03, Sören Brinkmann wrote:
>>> On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
 On 08/09, Daniel Lezcano wrote:
> yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> wake it up, no? As Stephen stated, this kind of configuration has never
> been tested before, so the tick broadcast code is not handling this case
> properly IMHO.
>
 If you have a per-cpu tick device that isn't suffering from
 FEAT_C3_STOP, why wouldn't you use that for the tick versus a
 per-cpu tick device that has FEAT_C3_STOP? It sounds like there
 is a bug in the preference logic, or you should boost the rating
 of the arm global timer above the twd. Does this patch help? It
 should make the arm global timer the tick device and whatever
 cadence timer you have the broadcast device.
>>> I finally got to test your patch. Unfortunately, it makes the system
>>> hang even earlier:
>>
>> Sorry it had a bug depending on the registration order. Can you try this
>> one (tabs are probably spaces, sorry)? I will go read through this
>> thread to see if we already covered the registration order.
> 
> That did it! Booted straight into the system. 

Good news :)

> The broadcast device is
> the TTC instead of GT, now.
> 
>   Tick Device: mode: 1
>   Broadcast device
>   Clock Event Device: ttc_clockevent
>max_delta_ns:   1207932479
>min_delta_ns:   18432
>mult:   233015
>shift:  32
>mode:   1
>next_event: 9223372036854775807 nsecs
>set_next_event: ttc_set_next_event
>set_mode:   ttc_set_mode
>event_handler:  tick_handle_oneshot_broadcast
>retries:0
>   
>   tick_broadcast_mask: 
>   tick_broadcast_oneshot_mask: 

At first glance, the timer broadcast usage is not set, right? Can
you try with the cpuidle flag even if it is not needed?
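
The cpuidle flag in question is presumably CPUIDLE_FLAG_TIMER_STOP, which
tells the cpuidle core that the local tick stops in the given state, so the
CPU must be handed over to the broadcast device first. A minimal
illustrative state definition follows; the names, callback, and latency
numbers are invented for the example and are not Zynq's actual driver:

#include <linux/cpuidle.h>

/* Hypothetical entry function standing in for a real power-down path. */
static int example_enter_deep(struct cpuidle_device *dev,
			      struct cpuidle_driver *drv, int index)
{
	cpu_do_idle();
	return index;
}

/* CPUIDLE_FLAG_TIMER_STOP makes the core switch this CPU to broadcast
 * ticks before entering the state. */
static struct cpuidle_state example_deep_state = {
	.name			= "deep-idle",
	.desc			= "example state that stops the local timer",
	.exit_latency		= 100,		/* us, illustrative */
	.target_residency	= 1000,		/* us, illustrative */
	.flags			= CPUIDLE_FLAG_TIME_VALID |
				  CPUIDLE_FLAG_TIMER_STOP,
	.enter			= example_enter_deep,
};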

Thanks
  -- Daniel




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Sören Brinkmann
On Mon, Aug 12, 2013 at 09:20:19AM -0700, Stephen Boyd wrote:
> On 08/12/13 09:03, Sören Brinkmann wrote:
> > On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> >> On 08/09, Daniel Lezcano wrote:
> >>> yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> >>> wake it up, no? As Stephen stated, this kind of configuration has never
> >>> been tested before, so the tick broadcast code is not handling this case
> >>> properly IMHO.
> >>>
> >> If you have a per-cpu tick device that isn't suffering from
> >> FEAT_C3_STOP, why wouldn't you use that for the tick versus a
> >> per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> >> is a bug in the preference logic, or you should boost the rating
> >> of the arm global timer above the twd. Does this patch help? It
> >> should make the arm global timer the tick device and whatever
> >> cadence timer you have the broadcast device.
> > I finally got to test your patch. Unfortunately, it makes the system
> > hang even earlier:
> 
> Sorry it had a bug depending on the registration order. Can you try this
> one (tabs are probably spaces, sorry)? I will go read through this
> thread to see if we already covered the registration order.

That did it! Booted straight into the system. The broadcast device is
the TTC instead of GT, now.

Tick Device: mode: 1
Broadcast device
Clock Event Device: ttc_clockevent
 max_delta_ns:   1207932479
 min_delta_ns:   18432
 mult:   233015
 shift:  32
 mode:   1
 next_event: 9223372036854775807 nsecs
 set_next_event: ttc_set_next_event
 set_mode:   ttc_set_mode
 event_handler:  tick_handle_oneshot_broadcast
 retries:0

tick_broadcast_mask: 
tick_broadcast_oneshot_mask: 

Tick Device: mode: 1
Per CPU device: 0
Clock Event Device: arm_global_timer
 max_delta_ns:   12884902005
 min_delta_ns:   1000
 mult:   715827876
 shift:  31
 mode:   3
 next_event: 24216749370 nsecs
 set_next_event: gt_clockevent_set_next_event
 set_mode:   gt_clockevent_set_mode
 event_handler:  hrtimer_interrupt
 retries:0

Tick Device: mode: 1
Per CPU device: 1
Clock Event Device: arm_global_timer
 max_delta_ns:   12884902005
 min_delta_ns:   1000
 mult:   715827876
 shift:  31
 mode:   3
 next_event: 2422000 nsecs
 set_next_event: gt_clockevent_set_next_event
 set_mode:   gt_clockevent_set_mode
 event_handler:  hrtimer_interrupt
 retries:0


# cat /proc/interrupts 
   CPU0   CPU1   
 27:   1668   1640   GIC  27  gt
 29:  0  0   GIC  29  twd
 43:  0  0   GIC  43  ttc_clockevent
 82:536  0   GIC  82  xuartps
IPI0:  0  0  CPU wakeup interrupts
IPI1:  0  0  Timer broadcast interrupts
IPI2:   1264   1322  Rescheduling interrupts
IPI3:  0  0  Function call interrupts
IPI4: 24 70  Single function call interrupts
IPI5:  0  0  CPU stop interrupts
Err:  0


Thanks,
Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Sören Brinkmann
On Mon, Aug 12, 2013 at 09:40:52AM -0700, Stephen Boyd wrote:
> On 08/12/13 09:24, Sören Brinkmann wrote:
> > On Mon, Aug 12, 2013 at 09:20:19AM -0700, Stephen Boyd wrote:
> >> On 08/12/13 09:03, Sören Brinkmann wrote:
> >>> On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
>  On 08/09, Daniel Lezcano wrote:
> > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > wake it up, no? As Stephen stated, this kind of configuration has never
> > been tested before, so the tick broadcast code is not handling this case
> > properly IMHO.
> >
>  If you have a per-cpu tick device that isn't suffering from
>  FEAT_C3_STOP, why wouldn't you use that for the tick versus a
>  per-cpu tick device that has FEAT_C3_STOP? It sounds like there
>  is a bug in the preference logic, or you should boost the rating
>  of the arm global timer above the twd. Does this patch help? It
>  should make the arm global timer the tick device and whatever
>  cadence timer you have the broadcast device.
> >>> I finally got to test your patch. Unfortunately, it makes the system
> >>> hang even earlier:
> >> Sorry it had a bug depending on the registration order. Can you try this
> >> one (tabs are probably spaces, sorry)? I will go read through this
> >> thread to see if we already covered the registration order.
> > What is the base for your patch? I based my GT enable patch on 3.11-rc3
> > and for consistency in our debugging, I didn't move it elsewhere since.
> > Your patch doesn't apply cleanly on it. I'll see if I can work it out; just
> > let me know if it depends on something not available in 3.11-rc3.
> 
> I applied this on 3.11-rc4. I don't think anything has changed there
> between rc3 and rc4, so you're probably running into the whitespace problem.

That, or the scissors line is my problem. I had never worked with scissors
markers before (the "---8<" line tells "git am --scissors" to discard
everything above it), and the result looked odd when I applied the patch
with git am. Anyway, I just merged it in manually.

Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Stephen Boyd
On 08/12/13 09:24, Sören Brinkmann wrote:
> On Mon, Aug 12, 2013 at 09:20:19AM -0700, Stephen Boyd wrote:
>> On 08/12/13 09:03, Sören Brinkmann wrote:
>>> On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
 On 08/09, Daniel Lezcano wrote:
> yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> wake it up, no? As Stephen stated, this kind of configuration has never
> been tested before, so the tick broadcast code is not handling this case
> properly IMHO.
>
 If you have a per-cpu tick device that isn't suffering from
 FEAT_C3_STOP, why wouldn't you use that for the tick versus a
 per-cpu tick device that has FEAT_C3_STOP? It sounds like there
 is a bug in the preference logic, or you should boost the rating
 of the arm global timer above the twd. Does this patch help? It
 should make the arm global timer the tick device and whatever
 cadence timer you have the broadcast device.
>>> I finally got to test your patch. Unfortunately, it makes the system
>>> hang even earlier:
>> Sorry it had a bug depending on the registration order. Can you try this
>> one (tabs are probably spaces, sorry)? I will go read through this
>> thread to see if we already covered the registration order.
> What is the base for your patch? I based my GT enable patch on 3.11-rc3
> and for consistency in our debugging, I didn't move it elsewhere since.
> Your patch doesn't apply cleanly on it. I'll see if I can work it out; just
> let me know if it depends on something not available in 3.11-rc3.

I applied this on 3.11-rc4. I don't think anything has changed there
between rc3 and rc4, so you're probably running into the whitespace problem.

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation



Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Sören Brinkmann
On Mon, Aug 12, 2013 at 09:20:19AM -0700, Stephen Boyd wrote:
> On 08/12/13 09:03, Sören Brinkmann wrote:
> > On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> >> On 08/09, Daniel Lezcano wrote:
> >>> yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> >>> wake it up, no? As Stephen stated, this kind of configuration has never
> >>> been tested before, so the tick broadcast code is not handling this case
> >>> properly IMHO.
> >>>
> >> If you have a per-cpu tick device that isn't suffering from
> >> FEAT_C3_STOP, why wouldn't you use that for the tick versus a
> >> per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> >> is a bug in the preference logic, or you should boost the rating
> >> of the arm global timer above the twd. Does this patch help? It
> >> should make the arm global timer the tick device and whatever
> >> cadence timer you have the broadcast device.
> > I finally got to test your patch. Unfortunately, it makes the system
> > hang even earlier:
> 
> Sorry it had a bug depending on the registration order. Can you try this
> one (tabs are probably spaces, sorry)? I will go read through this
> thread to see if we already covered the registration order.

What is the base for your patch? I based my GT enable patch on 3.11-rc3
and for consistency in our debugging, I didn't move it elsewhere since.
Your patch doesn't apply cleanly on it. I'll see if I can work it out; just
let me know if it depends on something not available in 3.11-rc3.

Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Stephen Boyd
On 08/12/13 03:53, Daniel Lezcano wrote:
> On 08/09/2013 07:27 PM, Stephen Boyd wrote:
>> On 08/09, Daniel Lezcano wrote:
>>> yes, but at least the broadcast mechanism should send an IPI to cpu0 to
>>> wake it up, no? As Stephen stated, this kind of configuration has never
>>> been tested before, so the tick broadcast code is not handling this case
>>> properly IMHO.
>>>
>> If you have a per-cpu tick device that isn't suffering from
>> FEAT_C3_STOP, why wouldn't you use that for the tick versus a
>> per-cpu tick device that has FEAT_C3_STOP? It sounds like there
>> is a bug in the preference logic, or you should boost the rating
>> of the arm global timer above the twd. Does this patch help? It
>> should make the arm global timer the tick device and whatever
>> cadence timer you have the broadcast device.
>>
>> ---8<
>> diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
>> index 218bcb5..d3539e5 100644
>> --- a/kernel/time/tick-broadcast.c
>> +++ b/kernel/time/tick-broadcast.c
>> @@ -77,6 +77,9 @@ static bool tick_check_broadcast_device(struct clock_event_device *curdev,
>>  !(newdev->features & CLOCK_EVT_FEAT_ONESHOT))
>>  return false;
>>  
>> +if (cpumask_equal(newdev->cpumask, cpumask_of(smp_processor_id())))
>> +return false;
> Yes, that makes sense to prevent local timer devices from becoming the broadcast one.
>
> In the case of the arm global timer, each cpu will register its own timer,
> so the test will be OK, but is it possible that cpu0 registers the timers
> for the other cpus?

As far as I know, every tick device has to be registered on the CPU that
it will be used on. See tick_check_percpu().
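
For reference, a paraphrase of that helper from kernel/time/tick-common.c
around v3.11; it is abridged, so treat it as a sketch rather than the exact
source:

static bool tick_check_percpu(struct clock_event_device *curdev,
			      struct clock_event_device *newdev, int cpu)
{
	/* The new device must be usable on this CPU at all. */
	if (!cpumask_test_cpu(cpu, newdev->cpumask))
		return false;
	/* A strictly per-CPU device is always acceptable. */
	if (cpumask_equal(newdev->cpumask, cpumask_of(cpu)))
		return true;
	/* Otherwise its interrupt must be steerable to this CPU... */
	if (newdev->irq >= 0 && !irq_can_set_affinity(newdev->irq))
		return false;
	/* ...and it must not displace an existing CPU-local device. */
	if (curdev && cpumask_equal(curdev->cpumask, cpumask_of(cpu)))
		return false;
	return true;
}
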

>
>>  return !curdev || newdev->rating > curdev->rating;
>>  }
>>  
>> diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
>> index 64522ec..1628b9f 100644
>> --- a/kernel/time/tick-common.c
>> +++ b/kernel/time/tick-common.c
>> @@ -245,6 +245,15 @@ static bool tick_check_preferred(struct clock_event_device *curdev,
>>  }
>>  
>>  /*
>> + * Prefer tick devices that don't suffer from FEAT_C3_STOP
>> + * regardless of their rating
>> + */
>> +if (curdev && cpumask_equal(curdev->cpumask, newdev->cpumask) &&
>> +!(newdev->features & CLOCK_EVT_FEAT_C3STOP) &&
>> +(curdev->features & CLOCK_EVT_FEAT_C3STOP))
>> +return true;
> That sounds reasonable, but what is the acceptable gap between the
> ratings? I am wondering if there isn't too much heuristic in the tick
> code...

Yes, I wonder if we should just change the ratings of the clockevents.
But it feels to me like the rating should only matter if the two are
equal in features. Otherwise we can use the features to determine what
we want.
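
As a concrete illustration of the rating-only alternative: assuming the
v3.11-era ratings (smp_twd.c registers its clockevent at 350,
arm_global_timer.c at 300), preferring the global timer without touching
the tick core would mean raising its rating in the driver. The function
name and value below are illustrative, not a merged change:

/* Raise the global timer's rating above the twd's 350 so it wins the
 * per-CPU tick-device comparison on rating alone. */
static void gt_boost_rating(struct clock_event_device *clk)
{
	clk->rating = 360;	/* illustrative value */
}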

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation



Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Stephen Boyd
On 08/12/13 09:03, Sören Brinkmann wrote:
> On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
>> On 08/09, Daniel Lezcano wrote:
> >>> yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> >>> wake it up, no? As Stephen stated, this kind of configuration has never
> >>> been tested before, so the tick broadcast code is not handling this case
> >>> properly IMHO.
> >>>
> >> If you have a per-cpu tick device that isn't suffering from
> >> FEAT_C3_STOP, why wouldn't you use that for the tick versus a
> >> per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> >> is a bug in the preference logic, or you should boost the rating
> >> of the arm global timer above the twd. Does this patch help? It
> >> should make the arm global timer the tick device and whatever
> >> cadence timer you have the broadcast device.
> I finally got to test your patch. Unfortunately, it makes the system
> hang even earlier:

Sorry it had a bug depending on the registration order. Can you try this
one (tabs are probably spaces, sorry)? I will go read through this
thread to see if we already covered the registration order.

---8<
diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
index 218bcb5..d3539e5 100644
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -77,6 +77,9 @@ static bool tick_check_broadcast_device(struct clock_event_device *curdev,
!(newdev->features & CLOCK_EVT_FEAT_ONESHOT))
return false;
 
+   if (cpumask_equal(newdev->cpumask, cpumask_of(smp_processor_id())))
+   return false;
+
return !curdev || newdev->rating > curdev->rating;
 }
 
diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
index 64522ec..dd08f3b 100644
--- a/kernel/time/tick-common.c
+++ b/kernel/time/tick-common.c
@@ -244,12 +244,22 @@ static bool tick_check_preferred(struct clock_event_device *curdev,
return false;
}
 
+   if (!curdev)
+   return true;
+
+   /* Always prefer a tick device that doesn't suffer from FEAT_C3STOP */
+   if (!(newdev->features & CLOCK_EVT_FEAT_C3STOP) &&
+(curdev->features & CLOCK_EVT_FEAT_C3STOP))
+   return true;
+   if ((newdev->features & CLOCK_EVT_FEAT_C3STOP) &&
+   !(curdev->features & CLOCK_EVT_FEAT_C3STOP))
+   return false;
+
/*
 * Use the higher rated one, but prefer a CPU local device with a lower
 * rating than a non-CPU local device
 */
-   return !curdev ||
-   newdev->rating > curdev->rating ||
+   return newdev->rating > curdev->rating ||
   !cpumask_equal(curdev->cpumask, newdev->cpumask);
 }
 


-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation



Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Sören Brinkmann
On Mon, Aug 12, 2013 at 06:08:46PM +0200, Daniel Lezcano wrote:
> On 08/12/2013 06:03 PM, Sören Brinkmann wrote:
> > On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> >> On 08/09, Daniel Lezcano wrote:
> >>>
> >>> yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> >>> wake it up, no? As Stephen stated, this kind of configuration has never
> >>> been tested before, so the tick broadcast code is not handling this case
> >>> properly IMHO.
> >>>
> >>
> >> If you have a per-cpu tick device that isn't suffering from
> >> FEAT_C3_STOP, why wouldn't you use that for the tick versus a
> >> per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> >> is a bug in the preference logic, or you should boost the rating
> >> of the arm global timer above the twd. Does this patch help? It
> >> should make the arm global timer the tick device and whatever
> >> cadence timer you have the broadcast device.
> > 
> > I finally got to test your patch. Unfortunately, it makes the system
> > hang even earlier:
> 
> [ ... ]
> 
> Can you boot with maxcpus=1 and then give the result of timer_list ?

That doesn't seem to help anymore. It hangs too. Same picture w/ and w/o
'maxcpus', except for
[0.00] Kernel command line: console=ttyPS0,115200 earlyprintk maxcpus=1


Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Daniel Lezcano
On 08/12/2013 06:03 PM, Sören Brinkmann wrote:
> On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
>> On 08/09, Daniel Lezcano wrote:
>>>
>>> yes, but at least the broadcast mechanism should send an IPI to cpu0 to
>>> wake it up, no? As Stephen stated, this kind of configuration has never
>>> been tested before, so the tick broadcast code is not handling this case
>>> properly IMHO.
>>>
>>
>> If you have a per-cpu tick device that isn't suffering from
>> FEAT_C3_STOP, why wouldn't you use that for the tick versus a
>> per-cpu tick device that has FEAT_C3_STOP? It sounds like there
>> is a bug in the preference logic, or you should boost the rating
>> of the arm global timer above the twd. Does this patch help? It
>> should make the arm global timer the tick device and whatever
>> cadence timer you have the broadcast device.
> 
> I finally got to test your patch. Unfortunately, it makes the system
> hang even earlier:

[ ... ]

Can you boot with maxcpus=1 and then give the content of /proc/timer_list?




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Sören Brinkmann
On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> On 08/09, Daniel Lezcano wrote:
> > 
> > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > wake it up, no? As Stephen stated, this kind of configuration has never
> > been tested before, so the tick broadcast code is not handling this case
> > properly IMHO.
> > 
> 
> If you have a per-cpu tick device that isn't suffering from
> FEAT_C3_STOP, why wouldn't you use that for the tick versus a
> per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> is a bug in the preference logic, or you should boost the rating
> of the arm global timer above the twd. Does this patch help? It
> should make the arm global timer the tick device and whatever
> cadence timer you have the broadcast device.

I finally got to test your patch. Unfortunately, it makes the system
hang even earlier:

Starting kernel ...

Uncompressing Linux... done, booting the kernel.
[0.00] Booting Linux on physical CPU 0x0
[0.00] Linux version 3.11.0-rc3-2-g391ac9b (sorenb@xsjandreislx) (gcc version 4.7.2 (Sourcery CodeBench Lite 2012.09-104) ) #98 SMP PREEMPT Mon Aug 12 08:59:34 PDT 2013
[0.00] CPU: ARMv7 Processor [413fc090] revision 0 (ARMv7), cr=18c5387d
[0.00] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache
[0.00] Machine: Xilinx Zynq Platform, model: Zynq ZC706 Development Board
[0.00] bootconsole [earlycon0] enabled
[0.00] cma: CMA: reserved 16 MiB at 2e80
[0.00] Memory policy: ECC disabled, Data cache writealloc
[0.00] PERCPU: Embedded 9 pages/cpu @c149c000 s14720 r8192 d13952 u36864
[0.00] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 260624
[0.00] Kernel command line: console=ttyPS0,115200 earlyprintk
[0.00] PID hash table entries: 4096 (order: 2, 16384 bytes)
[0.00] Dentry cache hash table entries: 131072 (order: 7, 524288 bytes)
[0.00] Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
[0.00] Memory: 1004928K/1048576K available (4891K kernel code, 307K rwdata, 1564K rodata, 338K init, 5699K bss, 43648K reserved, 270336K highmem)
[0.00] Virtual kernel memory layout:
[0.00] vector  : 0x - 0x1000   (   4 kB)
[0.00] fixmap  : 0xfff0 - 0xfffe   ( 896 kB)
[0.00] vmalloc : 0xf000 - 0xff00   ( 240 MB)
[0.00] lowmem  : 0xc000 - 0xef80   ( 760 MB)
[0.00] pkmap   : 0xbfe0 - 0xc000   (   2 MB)
[0.00] modules : 0xbf00 - 0xbfe0   (  14 MB)
[0.00]   .text : 0xc0008000 - 0xc0656128   (6457 kB)
[0.00]   .init : 0xc0657000 - 0xc06ab980   ( 339 kB)
[0.00]   .data : 0xc06ac000 - 0xc06f8c20   ( 308 kB)
[0.00].bss : 0xc06f8c20 - 0xc0c89aa4   (5700 kB)
[0.00] Preemptible hierarchical RCU implementation.
[0.00]  RCU lockdep checking is enabled.
[0.00]  Additional per-CPU info printed with stalls.
[0.00]  RCU restricting CPUs from NR_CPUS=4 to nr_cpu_ids=2.
[0.00] NR_IRQS:16 nr_irqs:16 16
[0.00] slcr mapped to f0004000
[0.00] Zynq clock init
[0.00] sched_clock: 32 bits at 333MHz, resolution 3ns, wraps every 12884ms
[0.00] ttc0 #0 at f0006000, irq=43
[0.00] Console: colour dummy device 80x30
[0.00] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
[0.00] ... MAX_LOCKDEP_SUBCLASSES:  8
[0.00] ... MAX_LOCK_DEPTH:  48
[0.00] ... MAX_LOCKDEP_KEYS:8191
[0.00] ... CLASSHASH_SIZE:  4096
[0.00] ... MAX_LOCKDEP_ENTRIES: 16384
[0.00] ... MAX_LOCKDEP_CHAINS:  32768
[0.00] ... CHAINHASH_SIZE:  16384
[0.00]  memory used by lock dependency info: 3695 kB
[0.00]  per task-struct memory footprint: 1152 bytes
[0.057541] Calibrating delay loop... 1325.46 BogoMIPS (lpj=6627328)
[0.100248] pid_max: default: 32768 minimum: 301
[0.103294] Mount-cache hash table entries: 512
[0.114364] CPU: Testing write buffer coherency: ok
[0.114513] ftrace: allocating 16143 entries in 48 pages
[0.155012] CPU0: thread -1, cpu 0, socket 0, mpidr 8000


Sören



Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Daniel Lezcano
On 08/09/2013 07:27 PM, Stephen Boyd wrote:
> On 08/09, Daniel Lezcano wrote:
>>
>> yes, but at least the broadcast mechanism should send an IPI to cpu0 to
>> wake it up, no? As Stephen stated, this kind of configuration has never
>> been tested before, so the tick broadcast code is not handling this case
>> properly IMHO.
>>
> 
> If you have a per-cpu tick device that isn't suffering from
> FEAT_C3_STOP, why wouldn't you use that for the tick versus a
> per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> is a bug in the preference logic, or you should boost the rating
> of the arm global timer above the twd. Does this patch help? It
> should make the arm global timer the tick device and whatever
> cadence timer you have the broadcast device.
> 
> ---8<
> diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
> index 218bcb5..d3539e5 100644
> --- a/kernel/time/tick-broadcast.c
> +++ b/kernel/time/tick-broadcast.c
> @@ -77,6 +77,9 @@ static bool tick_check_broadcast_device(struct clock_event_device *curdev,
>   !(newdev->features & CLOCK_EVT_FEAT_ONESHOT))
>   return false;
>  
> + if (cpumask_equal(newdev->cpumask, cpumask_of(smp_processor_id())))
> + return false;

Yes, that makes sense to prevent local timer devices from becoming the broadcast one.

In the case of the arm global timer, each cpu will register its own timer,
so the test will be OK, but is it possible that cpu0 registers the timers
for the other cpus?

>   return !curdev || newdev->rating > curdev->rating;
>  }
>  
> diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
> index 64522ec..1628b9f 100644
> --- a/kernel/time/tick-common.c
> +++ b/kernel/time/tick-common.c
> @@ -245,6 +245,15 @@ static bool tick_check_preferred(struct clock_event_device *curdev,
>   }
>  
>   /*
> +  * Prefer tick devices that don't suffer from FEAT_C3_STOP
> +  * regardless of their rating
> +  */
> + if (curdev && cpumask_equal(curdev->cpumask, newdev->cpumask) &&
> + !(newdev->features & CLOCK_EVT_FEAT_C3STOP) &&
> + (curdev->features & CLOCK_EVT_FEAT_C3STOP))
> + return true;

That sounds reasonable, but what is the acceptable gap between the
ratings? I am wondering if there isn't too much heuristic in the tick
code...


> +
> + /*
>* Use the higher rated one, but prefer a CPU local device with a lower
>* rating than a non-CPU local device
>*/
> 


-- 
  Linaro.org │ Open source software for ARM SoCs

Follow Linaro:   Facebook |
 Twitter |
 Blog

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Sören Brinkmann
On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> On 08/09, Daniel Lezcano wrote:
> >
> > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > wake it up, no ? As Stephen stated this kind of configuration should has
> > never been tested before so the tick broadcast code is not handling this
> > case properly IMHO.
> >
>
> If you have a per-cpu tick device that isn't suffering from
> FEAT_C3_STOP why wouldn't you use that for the tick versus a
> per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> is a bug in the preference logic or you should boost the rating
> of the arm global timer above the twd. Does this patch help? It
> should make the arm global timer the tick device and whatever the
> cadence timer you have into the broadcast device.

I finally got to test your patch. Unfortunately, it makes the system
hang even earlier:

Starting kernel ...

Uncompressing Linux... done, booting the kernel.
[0.00] Booting Linux on physical CPU 0x0
[0.00] Linux version 3.11.0-rc3-2-g391ac9b 
(sorenb@xsjandreislx) (gcc version 4.7.2 (Sourcery CodeBench Lite 2012.09-104) 
) #98 SMP PREEMPT Mon Aug 12 08:59:34 PDT 2013
[0.00] CPU: ARMv7 Processor [413fc090] revision 0 (ARMv7), 
cr=18c5387d
[0.00] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing 
instruction cache
[0.00] Machine: Xilinx Zynq Platform, model: Zynq ZC706 
Development Board
[0.00] bootconsole [earlycon0] enabled
[0.00] cma: CMA: reserved 16 MiB at 2e80
[0.00] Memory policy: ECC disabled, Data cache writealloc
[0.00] PERCPU: Embedded 9 pages/cpu @c149c000 s14720 r8192 
d13952 u36864
[0.00] Built 1 zonelists in Zone order, mobility grouping on.  
Total pages: 260624
[0.00] Kernel command line: console=ttyPS0,115200 earlyprintk
[0.00] PID hash table entries: 4096 (order: 2, 16384 bytes)
[0.00] Dentry cache hash table entries: 131072 (order: 7, 
524288 bytes)
[0.00] Inode-cache hash table entries: 65536 (order: 6, 262144 
bytes)
[0.00] Memory: 1004928K/1048576K available (4891K kernel code, 
307K rwdata, 1564K rodata, 338K init, 5699K bss, 43648K reserved, 270336K 
highmem)
[0.00] Virtual kernel memory layout:
[0.00] vector  : 0x - 0x1000   (   4 kB)
[0.00] fixmap  : 0xfff0 - 0xfffe   ( 896 kB)
[0.00] vmalloc : 0xf000 - 0xff00   ( 240 MB)
[0.00] lowmem  : 0xc000 - 0xef80   ( 760 MB)
[0.00] pkmap   : 0xbfe0 - 0xc000   (   2 MB)
[0.00] modules : 0xbf00 - 0xbfe0   (  14 MB)
[0.00]   .text : 0xc0008000 - 0xc0656128   (6457 kB)
[0.00]   .init : 0xc0657000 - 0xc06ab980   ( 339 kB)
[0.00]   .data : 0xc06ac000 - 0xc06f8c20   ( 308 kB)
[0.00].bss : 0xc06f8c20 - 0xc0c89aa4   (5700 kB)
[0.00] Preemptible hierarchical RCU implementation.
[0.00]  RCU lockdep checking is enabled.
[0.00]  Additional per-CPU info printed with stalls.
[0.00]  RCU restricting CPUs from NR_CPUS=4 to nr_cpu_ids=2.
[0.00] NR_IRQS:16 nr_irqs:16 16
[0.00] slcr mapped to f0004000
[0.00] Zynq clock init
[0.00] sched_clock: 32 bits at 333MHz, resolution 3ns, wraps 
every 12884ms
[0.00] ttc0 #0 at f0006000, irq=43
[0.00] Console: colour dummy device 80x30
[0.00] Lock dependency validator: Copyright (c) 2006 Red Hat, 
Inc., Ingo Molnar
[0.00] ... MAX_LOCKDEP_SUBCLASSES:  8
[0.00] ... MAX_LOCK_DEPTH:  48
[0.00] ... MAX_LOCKDEP_KEYS:8191
[0.00] ... CLASSHASH_SIZE:  4096
[0.00] ... MAX_LOCKDEP_ENTRIES: 16384
[0.00] ... MAX_LOCKDEP_CHAINS:  32768
[0.00] ... CHAINHASH_SIZE:  16384
[0.00]  memory used by lock dependency info: 3695 kB
[0.00]  per task-struct memory footprint: 1152 bytes
[0.057541] Calibrating delay loop... 1325.46 BogoMIPS (lpj=6627328)
[0.100248] pid_max: default: 32768 minimum: 301
[0.103294] Mount-cache hash table entries: 512
[0.114364] CPU: Testing write buffer coherency: ok
[0.114513] ftrace: allocating 16143 entries in 48 pages
[0.155012] CPU0: thread -1, cpu 0, socket 0, mpidr 8000


Sören



Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Daniel Lezcano
On 08/12/2013 06:03 PM, Sören Brinkmann wrote:
> On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> > On 08/09, Daniel Lezcano wrote:
> > >
> > > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > > wake it up, no ? As Stephen stated this kind of configuration should has
> > > never been tested before so the tick broadcast code is not handling this
> > > case properly IMHO.
> > >
> >
> > If you have a per-cpu tick device that isn't suffering from
> > FEAT_C3_STOP why wouldn't you use that for the tick versus a
> > per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> > is a bug in the preference logic or you should boost the rating
> > of the arm global timer above the twd. Does this patch help? It
> > should make the arm global timer the tick device and whatever the
> > cadence timer you have into the broadcast device.
>
> I finally got to test your patch. Unfortunately, it makes the system
> hang even earlier:

[ ... ]

Can you boot with maxcpus=1 and then give the result of timer_list ?




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Sören Brinkmann
On Mon, Aug 12, 2013 at 06:08:46PM +0200, Daniel Lezcano wrote:
> On 08/12/2013 06:03 PM, Sören Brinkmann wrote:
> > On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> > > On 08/09, Daniel Lezcano wrote:
> > > >
> > > > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > > > wake it up, no ? As Stephen stated this kind of configuration should has
> > > > never been tested before so the tick broadcast code is not handling this
> > > > case properly IMHO.
> > > >
> > >
> > > If you have a per-cpu tick device that isn't suffering from
> > > FEAT_C3_STOP why wouldn't you use that for the tick versus a
> > > per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> > > is a bug in the preference logic or you should boost the rating
> > > of the arm global timer above the twd. Does this patch help? It
> > > should make the arm global timer the tick device and whatever the
> > > cadence timer you have into the broadcast device.
> >
> > I finally got to test your patch. Unfortunately, it makes the system
> > hang even earlier:
>
> [ ... ]
>
> Can you boot with maxcpus=1 and then give the result of timer_list ?

That doesn't seem to help anymore. It hangs too. Same picture w/ and w/o
'maxcpus', except for
[0.00] Kernel command line: console=ttyPS0,115200 earlyprintk 
maxcpus=1


Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Stephen Boyd
On 08/12/13 09:03, Sören Brinkmann wrote:
> On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> > On 08/09, Daniel Lezcano wrote:
> > > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > > wake it up, no ? As Stephen stated this kind of configuration should has
> > > never been tested before so the tick broadcast code is not handling this
> > > case properly IMHO.
> >
> > If you have a per-cpu tick device that isn't suffering from
> > FEAT_C3_STOP why wouldn't you use that for the tick versus a
> > per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> > is a bug in the preference logic or you should boost the rating
> > of the arm global timer above the twd. Does this patch help? It
> > should make the arm global timer the tick device and whatever the
> > cadence timer you have into the broadcast device.
> I finally got to test your patch. Unfortunately, it makes the system
> hang even earlier:

Sorry it had a bug depending on the registration order. Can you try this
one (tabs are probably spaces, sorry)? I will go read through this
thread to see if we already covered the registration order.

---8<
diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
index 218bcb5..d3539e5 100644
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -77,6 +77,9 @@ static bool tick_check_broadcast_device(struct clock_event_device *curdev,
 	    !(newdev->features & CLOCK_EVT_FEAT_ONESHOT))
 		return false;
 
+	if (cpumask_equal(newdev->cpumask, cpumask_of(smp_processor_id())))
+		return false;
+
 	return !curdev || newdev->rating > curdev->rating;
 }
 
diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
index 64522ec..dd08f3b 100644
--- a/kernel/time/tick-common.c
+++ b/kernel/time/tick-common.c
@@ -244,12 +244,22 @@ static bool tick_check_preferred(struct clock_event_device *curdev,
 		return false;
 	}
 
+	if (!curdev)
+		return true;
+
+	/* Always prefer a tick device that doesn't suffer from FEAT_C3STOP */
+	if (!(newdev->features & CLOCK_EVT_FEAT_C3STOP) &&
+	     (curdev->features & CLOCK_EVT_FEAT_C3STOP))
+		return true;
+	if ((newdev->features & CLOCK_EVT_FEAT_C3STOP) &&
+	    !(curdev->features & CLOCK_EVT_FEAT_C3STOP))
+		return false;
+
 	/*
 	 * Use the higher rated one, but prefer a CPU local device with a lower
 	 * rating than a non-CPU local device
 	 */
-	return !curdev ||
-		newdev->rating > curdev->rating ||
+	return newdev->rating > curdev->rating ||
 		!cpumask_equal(curdev->cpumask, newdev->cpumask);
 }
 




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Stephen Boyd
On 08/12/13 03:53, Daniel Lezcano wrote:
> On 08/09/2013 07:27 PM, Stephen Boyd wrote:
> > On 08/09, Daniel Lezcano wrote:
> > > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > > wake it up, no ? As Stephen stated this kind of configuration should has
> > > never been tested before so the tick broadcast code is not handling this
> > > case properly IMHO.
> >
> > If you have a per-cpu tick device that isn't suffering from
> > FEAT_C3_STOP why wouldn't you use that for the tick versus a
> > per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> > is a bug in the preference logic or you should boost the rating
> > of the arm global timer above the twd. Does this patch help? It
> > should make the arm global timer the tick device and whatever the
> > cadence timer you have into the broadcast device.
> >
> > ---8<
> > diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
> > index 218bcb5..d3539e5 100644
> > --- a/kernel/time/tick-broadcast.c
> > +++ b/kernel/time/tick-broadcast.c
> > @@ -77,6 +77,9 @@ static bool tick_check_broadcast_device(struct clock_event_device *curdev,
> >  	    !(newdev->features & CLOCK_EVT_FEAT_ONESHOT))
> >  		return false;
> >  
> > +	if (cpumask_equal(newdev->cpumask, cpumask_of(smp_processor_id())))
> > +		return false;
> Yes, that makes sense to prevent local timer devices to be a broadcast one.
>
> In the case of the arm global timer, each cpu will register their timer,
> so the test will be ok but is it possible the cpu0 registers the timers
> for the other cpus ?

As far as I know every tick device has to be registered on the CPU that
it will be used on. See tick_check_percpu().

> >  	return !curdev || newdev->rating > curdev->rating;
> >  }
> >  
> > diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
> > index 64522ec..1628b9f 100644
> > --- a/kernel/time/tick-common.c
> > +++ b/kernel/time/tick-common.c
> > @@ -245,6 +245,15 @@ static bool tick_check_preferred(struct clock_event_device *curdev,
> >  	}
> >  
> >  	/*
> > +	 * Prefer tick devices that don't suffer from FEAT_C3_STOP
> > +	 * regardless of their rating
> > +	 */
> > +	if (curdev && cpumask_equal(curdev->cpumask, newdev->cpumask) &&
> > +	    !(newdev->features & CLOCK_EVT_FEAT_C3STOP) &&
> > +	    (curdev->features & CLOCK_EVT_FEAT_C3STOP))
> > +		return true;
> That sounds reasonable, but what is the acceptable gap between the
> ratings ? I am wondering if there isn't too much heuristic in the tick
> code...

Yes I wonder if we should just change the ratings of the clockevents.
But it feels to me like the rating should only matter if the two are
equal in features. Otherwise we can use the features to determine what
we want.
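
For reference, a sketch of the tick_check_percpu() check referred to
above, paraphrased from the 3.11-era kernel/time/tick-common.c; this is
illustrative rather than the verbatim source, but it captures the rule
that a per-cpu tick device is only accepted on the CPU it serves:

static bool tick_check_percpu(struct clock_event_device *curdev,
			      struct clock_event_device *newdev, int cpu)
{
	/* The new device must be able to fire on this CPU at all. */
	if (!cpumask_test_cpu(cpu, newdev->cpumask))
		return false;
	/* A strictly per-cpu device (twd, or the arm global timer's
	 * per-cpu clockevents) is always a candidate on its own CPU. */
	if (cpumask_equal(newdev->cpumask, cpumask_of(cpu)))
		return true;
	/* A shared device only qualifies if its irq can be pinned here. */
	if (newdev->irq >= 0 && !irq_can_set_affinity(newdev->irq))
		return false;
	/* Never replace an existing cpu-local device with a shared one. */
	if (curdev && cpumask_equal(curdev->cpumask, cpumask_of(cpu)))
		return false;
	return true;
}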



Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Sören Brinkmann
On Mon, Aug 12, 2013 at 09:20:19AM -0700, Stephen Boyd wrote:
> On 08/12/13 09:03, Sören Brinkmann wrote:
> > On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> > > On 08/09, Daniel Lezcano wrote:
> > > > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > > > wake it up, no ? As Stephen stated this kind of configuration should has
> > > > never been tested before so the tick broadcast code is not handling this
> > > > case properly IMHO.
> > >
> > > If you have a per-cpu tick device that isn't suffering from
> > > FEAT_C3_STOP why wouldn't you use that for the tick versus a
> > > per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> > > is a bug in the preference logic or you should boost the rating
> > > of the arm global timer above the twd. Does this patch help? It
> > > should make the arm global timer the tick device and whatever the
> > > cadence timer you have into the broadcast device.
> > I finally got to test your patch. Unfortunately, it makes the system
> > hang even earlier:
>
> Sorry it had a bug depending on the registration order. Can you try this
> one (tabs are probably spaces, sorry)? I will go read through this
> thread to see if we already covered the registration order.

What is the base for your patch? I based my GT enable patch on 3.11-rc3
and, for consistency in our debugging, I didn't move it elsewhere since.
Your patch doesn't apply cleanly on it. I'll see if I can work it out;
just let me know if it depends on something not available in 3.11-rc3.

Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Stephen Boyd
On 08/12/13 09:24, Sören Brinkmann wrote:
> On Mon, Aug 12, 2013 at 09:20:19AM -0700, Stephen Boyd wrote:
> > On 08/12/13 09:03, Sören Brinkmann wrote:
> > > On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> > > > On 08/09, Daniel Lezcano wrote:
> > > > > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > > > > wake it up, no ? As Stephen stated this kind of configuration should has
> > > > > never been tested before so the tick broadcast code is not handling this
> > > > > case properly IMHO.
> > > >
> > > > If you have a per-cpu tick device that isn't suffering from
> > > > FEAT_C3_STOP why wouldn't you use that for the tick versus a
> > > > per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> > > > is a bug in the preference logic or you should boost the rating
> > > > of the arm global timer above the twd. Does this patch help? It
> > > > should make the arm global timer the tick device and whatever the
> > > > cadence timer you have into the broadcast device.
> > > I finally got to test your patch. Unfortunately, it makes the system
> > > hang even earlier:
> > Sorry it had a bug depending on the registration order. Can you try this
> > one (tabs are probably spaces, sorry)? I will go read through this
> > thread to see if we already covered the registration order.
> What is the base for your patch? I based my GT enable patch on 3.11-rc3
> and, for consistency in our debugging, I didn't move it elsewhere since.
> Your patch doesn't apply cleanly on it. I'll see if I can work it out;
> just let me know if it depends on something not available in 3.11-rc3.

I applied this on 3.11-rc4. I don't think anything has changed there
between rc3 and rc4, so you're probably running into the whitespace problem.



Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Sören Brinkmann
On Mon, Aug 12, 2013 at 09:40:52AM -0700, Stephen Boyd wrote:
> On 08/12/13 09:24, Sören Brinkmann wrote:
> > On Mon, Aug 12, 2013 at 09:20:19AM -0700, Stephen Boyd wrote:
> > > On 08/12/13 09:03, Sören Brinkmann wrote:
> > > > On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> > > > > On 08/09, Daniel Lezcano wrote:
> > > > > > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > > > > > wake it up, no ? As Stephen stated this kind of configuration should has
> > > > > > never been tested before so the tick broadcast code is not handling this
> > > > > > case properly IMHO.
> > > > >
> > > > > If you have a per-cpu tick device that isn't suffering from
> > > > > FEAT_C3_STOP why wouldn't you use that for the tick versus a
> > > > > per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> > > > > is a bug in the preference logic or you should boost the rating
> > > > > of the arm global timer above the twd. Does this patch help? It
> > > > > should make the arm global timer the tick device and whatever the
> > > > > cadence timer you have into the broadcast device.
> > > > I finally got to test your patch. Unfortunately, it makes the system
> > > > hang even earlier:
> > > Sorry it had a bug depending on the registration order. Can you try this
> > > one (tabs are probably spaces, sorry)? I will go read through this
> > > thread to see if we already covered the registration order.
> > What is the base for your patch? I based my GT enable patch on 3.11-rc3
> > and, for consistency in our debugging, I didn't move it elsewhere since.
> > Your patch doesn't apply cleanly on it. I'll see if I can work it out;
> > just let me know if it depends on something not available in 3.11-rc3.
>
> I applied this on 3.11-rc4. I don't think anything has changed there
> between rc3 and rc4, so you're probably running into the whitespace problem.

That, or the scissors line is my problem. I had never worked with
scissors patches before, and the result of git-am'ing yours looked a bit
odd. Anyway, I just merged it in manually.

Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Sören Brinkmann
On Mon, Aug 12, 2013 at 09:20:19AM -0700, Stephen Boyd wrote:
> On 08/12/13 09:03, Sören Brinkmann wrote:
> > On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> > > On 08/09, Daniel Lezcano wrote:
> > > > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > > > wake it up, no ? As Stephen stated this kind of configuration should has
> > > > never been tested before so the tick broadcast code is not handling this
> > > > case properly IMHO.
> > >
> > > If you have a per-cpu tick device that isn't suffering from
> > > FEAT_C3_STOP why wouldn't you use that for the tick versus a
> > > per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> > > is a bug in the preference logic or you should boost the rating
> > > of the arm global timer above the twd. Does this patch help? It
> > > should make the arm global timer the tick device and whatever the
> > > cadence timer you have into the broadcast device.
> > I finally got to test your patch. Unfortunately, it makes the system
> > hang even earlier:
>
> Sorry it had a bug depending on the registration order. Can you try this
> one (tabs are probably spaces, sorry)? I will go read through this
> thread to see if we already covered the registration order.

That did it! Booted straight into the system. The broadcast device is
the TTC instead of GT, now.

Tick Device: mode: 1
Broadcast device
Clock Event Device: ttc_clockevent
 max_delta_ns:   1207932479
 min_delta_ns:   18432
 mult:   233015
 shift:  32
 mode:   1
 next_event: 9223372036854775807 nsecs
 set_next_event: ttc_set_next_event
 set_mode:   ttc_set_mode
 event_handler:  tick_handle_oneshot_broadcast
 retries:0

tick_broadcast_mask: 
tick_broadcast_oneshot_mask: 

Tick Device: mode: 1
Per CPU device: 0
Clock Event Device: arm_global_timer
 max_delta_ns:   12884902005
 min_delta_ns:   1000
 mult:   715827876
 shift:  31
 mode:   3
 next_event: 24216749370 nsecs
 set_next_event: gt_clockevent_set_next_event
 set_mode:   gt_clockevent_set_mode
 event_handler:  hrtimer_interrupt
 retries:0

Tick Device: mode: 1
Per CPU device: 1
Clock Event Device: arm_global_timer
 max_delta_ns:   12884902005
 min_delta_ns:   1000
 mult:   715827876
 shift:  31
 mode:   3
 next_event: 2422000 nsecs
 set_next_event: gt_clockevent_set_next_event
 set_mode:   gt_clockevent_set_mode
 event_handler:  hrtimer_interrupt
 retries:0


# cat /proc/interrupts 
   CPU0   CPU1   
 27:   1668   1640   GIC  27  gt
 29:  0  0   GIC  29  twd
 43:  0  0   GIC  43  ttc_clockevent
 82:536  0   GIC  82  xuartps
IPI0:  0  0  CPU wakeup interrupts
IPI1:  0  0  Timer broadcast interrupts
IPI2:   1264   1322  Rescheduling interrupts
IPI3:  0  0  Function call interrupts
IPI4: 24 70  Single function call interrupts
IPI5:  0  0  CPU stop interrupts
Err:  0


Thanks,
Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Daniel Lezcano
On 08/12/2013 06:32 PM, Sören Brinkmann wrote:
> On Mon, Aug 12, 2013 at 09:20:19AM -0700, Stephen Boyd wrote:
> > On 08/12/13 09:03, Sören Brinkmann wrote:
> > > On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> > > > On 08/09, Daniel Lezcano wrote:
> > > > > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > > > > wake it up, no ? As Stephen stated this kind of configuration should has
> > > > > never been tested before so the tick broadcast code is not handling this
> > > > > case properly IMHO.
> > > >
> > > > If you have a per-cpu tick device that isn't suffering from
> > > > FEAT_C3_STOP why wouldn't you use that for the tick versus a
> > > > per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> > > > is a bug in the preference logic or you should boost the rating
> > > > of the arm global timer above the twd. Does this patch help? It
> > > > should make the arm global timer the tick device and whatever the
> > > > cadence timer you have into the broadcast device.
> > > I finally got to test your patch. Unfortunately, it makes the system
> > > hang even earlier:
> >
> > Sorry it had a bug depending on the registration order. Can you try this
> > one (tabs are probably spaces, sorry)? I will go read through this
> > thread to see if we already covered the registration order.
>
> That did it! Booted straight into the system.

Good news :)

> The broadcast device is
> the TTC instead of GT, now.
>
> Tick Device: mode: 1
> Broadcast device
> Clock Event Device: ttc_clockevent
>  max_delta_ns:   1207932479
>  min_delta_ns:   18432
>  mult:   233015
>  shift:  32
>  mode:   1
>  next_event: 9223372036854775807 nsecs
>  set_next_event: ttc_set_next_event
>  set_mode:   ttc_set_mode
>  event_handler:  tick_handle_oneshot_broadcast
>  retries:0
>
> tick_broadcast_mask: 
> tick_broadcast_oneshot_mask: 

At first glance, the timer broadcast usage is not set, right? Can
you try with the cpuidle flag even if it is not needed?

Thanks
  -- Daniel




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Daniel Lezcano
On 08/12/2013 06:23 PM, Stephen Boyd wrote:
> On 08/12/13 03:53, Daniel Lezcano wrote:
> > On 08/09/2013 07:27 PM, Stephen Boyd wrote:
> > > On 08/09, Daniel Lezcano wrote:
> > > > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > > > wake it up, no ? As Stephen stated this kind of configuration should has
> > > > never been tested before so the tick broadcast code is not handling this
> > > > case properly IMHO.
> > >
> > > If you have a per-cpu tick device that isn't suffering from
> > > FEAT_C3_STOP why wouldn't you use that for the tick versus a
> > > per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> > > is a bug in the preference logic or you should boost the rating
> > > of the arm global timer above the twd. Does this patch help? It
> > > should make the arm global timer the tick device and whatever the
> > > cadence timer you have into the broadcast device.
> > >
> > > ---8<
> > > diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
> > > index 218bcb5..d3539e5 100644
> > > --- a/kernel/time/tick-broadcast.c
> > > +++ b/kernel/time/tick-broadcast.c
> > > @@ -77,6 +77,9 @@ static bool tick_check_broadcast_device(struct clock_event_device *curdev,
> > >  	    !(newdev->features & CLOCK_EVT_FEAT_ONESHOT))
> > >  		return false;
> > >  
> > > +	if (cpumask_equal(newdev->cpumask, cpumask_of(smp_processor_id())))
> > > +		return false;
> > Yes, that makes sense to prevent local timer devices to be a broadcast one.
> >
> > In the case of the arm global timer, each cpu will register their timer,
> > so the test will be ok but is it possible the cpu0 registers the timers
> > for the other cpus ?
>
> As far as I know every tick device has to be registered on the CPU that
> it will be used on. See tick_check_percpu().

Ah, ok I see. Thx for the pointer.

> > >  	return !curdev || newdev->rating > curdev->rating;
> > >  }
> > >  
> > > diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
> > > index 64522ec..1628b9f 100644
> > > --- a/kernel/time/tick-common.c
> > > +++ b/kernel/time/tick-common.c
> > > @@ -245,6 +245,15 @@ static bool tick_check_preferred(struct clock_event_device *curdev,
> > >  	}
> > >  
> > >  	/*
> > > +	 * Prefer tick devices that don't suffer from FEAT_C3_STOP
> > > +	 * regardless of their rating
> > > +	 */
> > > +	if (curdev && cpumask_equal(curdev->cpumask, newdev->cpumask) &&
> > > +	    !(newdev->features & CLOCK_EVT_FEAT_C3STOP) &&
> > > +	    (curdev->features & CLOCK_EVT_FEAT_C3STOP))
> > > +		return true;
> > That sounds reasonable, but what is the acceptable gap between the
> > ratings ? I am wondering if there isn't too much heuristic in the tick
> > code...
>
> Yes I wonder if we should just change the ratings of the clockevents.
> But it feels to me like the rating should only matter if the two are
> equal in features. Otherwise we can use the features to determine what
> we want.

Is it desirable for a real-time system? (I am no expert in this area, so
maybe this question makes no sense :)




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Sören Brinkmann
On Mon, Aug 12, 2013 at 06:49:17PM +0200, Daniel Lezcano wrote:
> On 08/12/2013 06:32 PM, Sören Brinkmann wrote:
> > On Mon, Aug 12, 2013 at 09:20:19AM -0700, Stephen Boyd wrote:
> > > On 08/12/13 09:03, Sören Brinkmann wrote:
> > > > On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> > > > > On 08/09, Daniel Lezcano wrote:
> > > > > > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > > > > > wake it up, no ? As Stephen stated this kind of configuration should has
> > > > > > never been tested before so the tick broadcast code is not handling this
> > > > > > case properly IMHO.
> > > > >
> > > > > If you have a per-cpu tick device that isn't suffering from
> > > > > FEAT_C3_STOP why wouldn't you use that for the tick versus a
> > > > > per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> > > > > is a bug in the preference logic or you should boost the rating
> > > > > of the arm global timer above the twd. Does this patch help? It
> > > > > should make the arm global timer the tick device and whatever the
> > > > > cadence timer you have into the broadcast device.
> > > > I finally got to test your patch. Unfortunately, it makes the system
> > > > hang even earlier:
> > > Sorry it had a bug depending on the registration order. Can you try this
> > > one (tabs are probably spaces, sorry)? I will go read through this
> > > thread to see if we already covered the registration order.
> >
> > That did it! Booted straight into the system.
>
> Good news :)
>
> > The broadcast device is
> > the TTC instead of GT, now.
> >
> > Tick Device: mode: 1
> > Broadcast device
> > Clock Event Device: ttc_clockevent
> >  max_delta_ns:   1207932479
> >  min_delta_ns:   18432
> >  mult:   233015
> >  shift:  32
> >  mode:   1
> >  next_event: 9223372036854775807 nsecs
> >  set_next_event: ttc_set_next_event
> >  set_mode:   ttc_set_mode
> >  event_handler:  tick_handle_oneshot_broadcast
> >  retries:0
> >
> > tick_broadcast_mask: 
> > tick_broadcast_oneshot_mask: 
>
> At first glance, the timer broadcast usage is not set, right? Can
> you try with the cpuidle flag even if it is not needed?

It's actually present. I have a clean 3.11-rc3 and the only changes are
my patch to enable the GT and Stephen's fix.
The cpuidle stats show both idle states being used.

Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-12 Thread Daniel Lezcano
On 08/12/2013 06:53 PM, Sören Brinkmann wrote:
> On Mon, Aug 12, 2013 at 06:49:17PM +0200, Daniel Lezcano wrote:
> > On 08/12/2013 06:32 PM, Sören Brinkmann wrote:
> > > On Mon, Aug 12, 2013 at 09:20:19AM -0700, Stephen Boyd wrote:
> > > > On 08/12/13 09:03, Sören Brinkmann wrote:
> > > > > On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> > > > > > On 08/09, Daniel Lezcano wrote:
> > > > > > > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > > > > > > wake it up, no ? As Stephen stated this kind of configuration should has
> > > > > > > never been tested before so the tick broadcast code is not handling this
> > > > > > > case properly IMHO.
> > > > > >
> > > > > > If you have a per-cpu tick device that isn't suffering from
> > > > > > FEAT_C3_STOP why wouldn't you use that for the tick versus a
> > > > > > per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> > > > > > is a bug in the preference logic or you should boost the rating
> > > > > > of the arm global timer above the twd. Does this patch help? It
> > > > > > should make the arm global timer the tick device and whatever the
> > > > > > cadence timer you have into the broadcast device.
> > > > > I finally got to test your patch. Unfortunately, it makes the system
> > > > > hang even earlier:
> > > > Sorry it had a bug depending on the registration order. Can you try this
> > > > one (tabs are probably spaces, sorry)? I will go read through this
> > > > thread to see if we already covered the registration order.
> > >
> > > That did it! Booted straight into the system.
> >
> > Good news :)
> >
> > > The broadcast device is
> > > the TTC instead of GT, now.
> > >
> > > Tick Device: mode: 1
> > > Broadcast device
> > > Clock Event Device: ttc_clockevent
> > >  max_delta_ns:   1207932479
> > >  min_delta_ns:   18432
> > >  mult:   233015
> > >  shift:  32
> > >  mode:   1
> > >  next_event: 9223372036854775807 nsecs
> > >  set_next_event: ttc_set_next_event
> > >  set_mode:   ttc_set_mode
> > >  event_handler:  tick_handle_oneshot_broadcast
> > >  retries:0
> > >
> > > tick_broadcast_mask: 
> > > tick_broadcast_oneshot_mask: 
> >
> > At first glance, the timer broadcast usage is not set, right? Can
> > you try with the cpuidle flag even if it is not needed?
>
> It's actually present. I have a clean 3.11-rc3 and the only changes are
> my patch to enable the GT and Stephen's fix.
> The cpuidle stats show both idle states being used.

Ah, right. The tick_broadcast_mask is not set because the arm global
timer does not have the CLOCK_EVT_FEAT_C3STOP feature flag set.
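
For context, a condensed, paraphrased sketch of how the 3.11-era
tick_device_uses_broadcast() in kernel/time/tick-broadcast.c decides
this; the real function also handles oneshot setup and locking, but the
core rule is that a functional tick device only keeps its bit in
tick_broadcast_mask if it stops in deep idle:

static int tick_device_uses_broadcast(struct clock_event_device *dev, int cpu)
{
	if (!tick_device_is_functional(dev)) {
		/* A dummy/non-functional device needs broadcast help. */
		cpumask_set_cpu(cpu, tick_broadcast_mask);
		return 1;
	}

	/*
	 * A functional device that keeps running in deep idle
	 * (no CLOCK_EVT_FEAT_C3STOP) never needs the broadcast
	 * device, so its cpu bit is cleared from the mask -- which
	 * is why timer_list shows an empty tick_broadcast_mask here.
	 */
	if (!(dev->features & CLOCK_EVT_FEAT_C3STOP))
		cpumask_clear_cpu(cpu, tick_broadcast_mask);
	else
		cpumask_set_cpu(cpu, tick_broadcast_mask);

	return 0;
}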

Thanks
  -- Daniel




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-09 Thread Stephen Boyd
On 08/09/13 10:48, Sören Brinkmann wrote:
> On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
>> On 08/09, Daniel Lezcano wrote:
>>> yes, but at least the broadcast mechanism should send an IPI to cpu0 to
>>> wake it up, no ? As Stephen stated this kind of configuration should has
>>> never been tested before so the tick broadcast code is not handling this
>>> case properly IMHO.
>>>
>> If you have a per-cpu tick device that isn't suffering from
>> FEAT_C3_STOP why wouldn't you use that for the tick versus a
>> per-cpu tick device that has FEAT_C3_STOP? It sounds like there
>> is a bug in the preference logic or you should boost the rating
>> of the arm global timer above the twd. Does this patch help? It
>> should make the arm global timer the tick device and whatever the
>> cadence timer you have into the broadcast device.
> I'm not sure I'm getting this right. But neither the cadence_ttc nor the
> arm_global_timer have the FEAT_C3_STOP flag set. So, shouldn't they be
> treated equally even with your change?

The cadence_ttc is a global clockevent, i.e. the irq can interrupt any
CPU, and it has a rating of 200. The arm global timer is a per-cpu
clockevent with a rating of 300. The TWD is a per-cpu clockevent with a
rating of 350. Because the arm global timer is rated higher than the
cadence_ttc but less than the TWD the arm global timer will fill in the
broadcast spot and the TWD will fill in the tick position. We really
want the arm global timer to fill in the tick position and the
cadence_ttc to fill in the broadcast spot (although the cadence_ttc
should never be needed because the arm global timer doesn't need help in
deep idle states).

Unless I got lost in all the combinations of tests you've done so far?
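
For reference, the stock preference test being described boils down to
the comparison below; it is the pre-patch tail of tick_check_preferred()
(visible in the removed lines of the second patch in this thread), with
illustrative comments added:

static bool tick_check_preferred(struct clock_event_device *curdev,
				 struct clock_event_device *newdev)
{
	/*
	 * Highest rating wins, and a cpu-local device beats a
	 * non-local one. On Zynq: twd (350, per-cpu) beats the arm
	 * global timer (300, per-cpu), which beats cadence_ttc (200,
	 * global) -- so twd takes the tick and the global timer is
	 * left over to become the broadcast device.
	 */
	return !curdev ||
		newdev->rating > curdev->rating ||
		!cpumask_equal(curdev->cpumask, newdev->cpumask);
}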



Re: Enable arm_global_timer for Zynq brakes boot

2013-08-09 Thread Sören Brinkmann
On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
> On 08/09, Daniel Lezcano wrote:
> > 
> > yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> > wake it up, no ? As Stephen stated this kind of configuration should has
> > never been tested before so the tick broadcast code is not handling this
> > case properly IMHO.
> > 
> 
> If you have a per-cpu tick device that isn't suffering from
> FEAT_C3_STOP why wouldn't you use that for the tick versus a
> per-cpu tick device that has FEAT_C3_STOP? It sounds like there
> is a bug in the preference logic or you should boost the rating
> of the arm global timer above the twd. Does this patch help? It
> should make the arm global timer the tick device and whatever the
> cadence timer you have into the broadcast device.

I'm not sure I'm getting this right. But neither the cadence_ttc nor the
arm_global_timer have the FEAT_C3_STOP flag set. So, shouldn't they be
treated equally even with your change?

Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-09 Thread Stephen Boyd
On 08/09, Daniel Lezcano wrote:
> 
> yes, but at least the broadcast mechanism should send an IPI to cpu0 to
> wake it up, no ? As Stephen stated this kind of configuration should has
> never been tested before so the tick broadcast code is not handling this
> case properly IMHO.
> 

If you have a per-cpu tick device that isn't suffering from
FEAT_C3_STOP why wouldn't you use that for the tick versus a
per-cpu tick device that has FEAT_C3_STOP? It sounds like there
is a bug in the preference logic or you should boost the rating
of the arm global timer above the twd. Does this patch help? It
should make the arm global timer the tick device and whatever the
cadence timer you have into the broadcast device.

---8<
diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
index 218bcb5..d3539e5 100644
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -77,6 +77,9 @@ static bool tick_check_broadcast_device(struct 
clock_event_device *curdev,
!(newdev->features & CLOCK_EVT_FEAT_ONESHOT))
return false;
 
+   if (cpumask_equal(newdev->cpumask, cpumask_of(smp_processor_id())))
+   return false;
+
return !curdev || newdev->rating > curdev->rating;
 }
 
diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
index 64522ec..1628b9f 100644
--- a/kernel/time/tick-common.c
+++ b/kernel/time/tick-common.c
@@ -245,6 +245,15 @@ static bool tick_check_preferred(struct clock_event_device 
*curdev,
}
 
 	/*
+	 * Prefer tick devices that don't suffer from FEAT_C3_STOP
+	 * regardless of their rating
+	 */
+   if (curdev && cpumask_equal(curdev->cpumask, newdev->cpumask) &&
+   !(newdev->features & CLOCK_EVT_FEAT_C3STOP) &&
+   (curdev->features & CLOCK_EVT_FEAT_C3STOP))
+   return true;
+
+   /*
 * Use the higher rated one, but prefer a CPU local device with a lower
 * rating than a non-CPU local device
 */


Re: Enable arm_global_timer for Zynq brakes boot

2013-08-09 Thread Sören Brinkmann
On Fri, Aug 09, 2013 at 11:32:42AM +0100, Srinivas KANDAGATLA wrote:
> On 08/08/13 18:11, Sören Brinkmann wrote:
> > Hi Daniel,
> > 
> > On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
> >> On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
> >>> On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
>  On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
> > On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
> >> On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
> >>> On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
>  On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
> > On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
> >> On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
> >>> On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
>  On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
> > Hi Daniel,
> >
> > On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
> > (snip)
> >>
> >> the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework 
> >> the local
> >> timer will be stopped when entering to the idle state. In this 
> >> case, the
> >> cpuidle framework will call clockevents_notify(ENTER) and 
> >> switches to a
> >> broadcast timer and will call clockevents_notify(EXIT) when 
> >> exiting the
> >> idle state, switching the local timer back in use.
> >
> > I've been thinking about this, trying to understand how this 
> > makes my
> > boot attempts on Zynq hang. IIUC, the wrongly provided 
> > TIMER_STOP flag
> > would make the timer core switch to a broadcast device even 
> > though it
> > wouldn't be necessary. But shouldn't it still work? It sounds 
> > like we do
> > something useless, but nothing wrong in a sense that it should 
> > result in
> > breakage. I guess I'm missing something obvious. This timer 
> > system will
> > always remain a mystery to me.
> >
> > Actually this more or less leads to the question: What is this
> > 'broadcast timer'. I guess that is some clockevent device which 
> > is
> > common to all cores? (that would be the cadence_ttc for Zynq). 
> > Is the
> > hang pointing to some issue with that driver?
> 
>  If you look at the /proc/timer_list, which timer is used for 
>  broadcasting ?
> >>>
> >>> So, the correct run results (full output attached).
> >>>
> >>> The vanilla kernel uses the twd timers as local timers and the 
> >>> TTC as
> >>> broadcast device:
> >>>   Tick Device: mode: 1
> >>>  
> >>>   Broadcast device  
> >>>   Clock Event Device: ttc_clockevent
> >>>
> >>> When I remove the offending CPUIDLE flag and add the DT fragment 
> >>> to
> >>> enable the global timer, the twd timers are still used as local 
> >>> timers
> >>> and the broadcast device is the global timer:
> >>>   Tick Device: mode: 1
> >>>  
> >>>   Broadcast device
> >>>  
> >>>   Clock Event Device: arm_global_timer
> >>>
> >>> Again, since boot hangs in the actually broken case, I don't see 
> >>> way to
> >>> obtain this information for that case.
> >>
> >> Can't you use the maxcpus=1 option to ensure the system to boot up 
> >> ?
> >
> > Right, that works. I forgot about that option after you mentioned, 
> > that
> > it is most likely not that useful.
> >
> > Anyway, this are those sysfs files with an unmodified cpuidle 
> > driver and
> > the gt enabled and having maxcpus=1 set.
> >
> > /proc/timer_list:
> > Tick Device: mode: 1
> > Broadcast device
> > Clock Event Device: arm_global_timer
> >  max_delta_ns:   12884902005
> >  min_delta_ns:   1000
> >  mult:   715827876
> >  shift:  31
> >  mode:   3
> 
>  Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)
> 
>  The previous timer_list output you gave me when removing the 
>  offending
>  cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).
> 
>  Is it possible you try to get this output again right after 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-09 Thread Daniel Lezcano
On 08/09/2013 12:32 PM, Srinivas KANDAGATLA wrote:
> On 08/08/13 18:11, Sören Brinkmann wrote:
>> Hi Daniel,
>>
>> On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
>>> On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
 On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
> On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
>> On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
>>> On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
 On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
> On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
>> On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
>>> On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
 On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
> On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
>> Hi Daniel,
>>
>> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
>> (snip)
>>>
>>> the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework 
>>> the local
>>> timer will be stopped when entering to the idle state. In this 
>>> case, the
>>> cpuidle framework will call clockevents_notify(ENTER) and 
>>> switches to a
>>> broadcast timer and will call clockevents_notify(EXIT) when 
>>> exiting the
>>> idle state, switching the local timer back in use.
>>
>> I've been thinking about this, trying to understand how this 
>> makes my
>> boot attempts on Zynq hang. IIUC, the wrongly provided 
>> TIMER_STOP flag
>> would make the timer core switch to a broadcast device even 
>> though it
>> wouldn't be necessary. But shouldn't it still work? It sounds 
>> like we do
>> something useless, but nothing wrong in a sense that it should 
>> result in
>> breakage. I guess I'm missing something obvious. This timer 
>> system will
>> always remain a mystery to me.
>>
>> Actually this more or less leads to the question: What is this
>> 'broadcast timer'. I guess that is some clockevent device which 
>> is
>> common to all cores? (that would be the cadence_ttc for Zynq). 
>> Is the
>> hang pointing to some issue with that driver?
>
> If you look at the /proc/timer_list, which timer is used for 
> broadcasting ?

 So, the correct run results (full output attached).

 The vanilla kernel uses the twd timers as local timers and the TTC 
 as
 broadcast device:
Tick Device: mode: 1
  
Broadcast device  
Clock Event Device: ttc_clockevent

 When I remove the offending CPUIDLE flag and add the DT fragment to
 enable the global timer, the twd timers are still used as local 
 timers
 and the broadcast device is the global timer:
Tick Device: mode: 1
  
Broadcast device
  
Clock Event Device: arm_global_timer

 Again, since boot hangs in the actually broken case, I don't see 
 way to
 obtain this information for that case.
>>>
>>> Can't you use the maxcpus=1 option to ensure the system to boot up ?
>>
>> Right, that works. I forgot about that option after you mentioned, 
>> that
>> it is most likely not that useful.
>>
>> Anyway, this are those sysfs files with an unmodified cpuidle driver 
>> and
>> the gt enabled and having maxcpus=1 set.
>>
>> /proc/timer_list:
>>  Tick Device: mode: 1
>>  Broadcast device
>>  Clock Event Device: arm_global_timer
>>   max_delta_ns:   12884902005
>>   min_delta_ns:   1000
>>   mult:   715827876
>>   shift:  31
>>   mode:   3
>
> Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)
>
> The previous timer_list output you gave me when removing the offending
> cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).
>
> Is it possible you try to get this output again right after onlining 
> the
> cpu1 in order to check if the broadcast device switches to SHUTDOWN ?

 How do I do that? I tried to 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-09 Thread Mark Rutland
On Thu, Aug 08, 2013 at 06:22:36PM +0100, Stephen Boyd wrote:
> On 08/08/13 10:16, Mark Rutland wrote:
> > On Thu, Aug 08, 2013 at 06:11:26PM +0100, Sören Brinkmann wrote:
> >> Hi Daniel,
> >>
> >> On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
> >>> On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
>  On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
> > On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
> >> On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
> >>> On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
>  On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
> > On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
> >> On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
> >>> On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
>  On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
> > On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
> >> Hi Daniel,
> >>
> >> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
> >> (snip)
> >>> the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework 
> >>> the local
> >>> timer will be stopped when entering to the idle state. In 
> >>> this case, the
> >>> cpuidle framework will call clockevents_notify(ENTER) and 
> >>> switches to a
> >>> broadcast timer and will call clockevents_notify(EXIT) when 
> >>> exiting the
> >>> idle state, switching the local timer back in use.
> >> I've been thinking about this, trying to understand how this 
> >> makes my
> >> boot attempts on Zynq hang. IIUC, the wrongly provided 
> >> TIMER_STOP flag
> >> would make the timer core switch to a broadcast device even 
> >> though it
> >> wouldn't be necessary. But shouldn't it still work? It sounds 
> >> like we do
> >> something useless, but nothing wrong in a sense that it should 
> >> result in
> >> breakage. I guess I'm missing something obvious. This timer 
> >> system will
> >> always remain a mystery to me.
> >>
> >> Actually this more or less leads to the question: What is this
> >> 'broadcast timer'. I guess that is some clockevent device 
> >> which is
> >> common to all cores? (that would be the cadence_ttc for Zynq). 
> >> Is the
> >> hang pointing to some issue with that driver?
> > If you look at the /proc/timer_list, which timer is used for 
> > broadcasting ?
>  So, the correct run results (full output attached).
> 
>  The vanilla kernel uses the twd timers as local timers and the 
>  TTC as
>  broadcast device:
>  Tick Device: mode: 1
>  Broadcast device
>  Clock Event Device: ttc_clockevent
> 
>  When I remove the offending CPUIDLE flag and add the DT fragment 
>  to
>  enable the global timer, the twd timers are still used as local 
>  timers
>  and the broadcast device is the global timer:
>  Tick Device: mode: 1
>  Broadcast device
>  Clock Event Device: arm_global_timer
> 
>  Again, since boot hangs in the actually broken case, I don't see 
>  way to
>  obtain this information for that case.
> >>> Can't you use the maxcpus=1 option to ensure the system to boot 
> >>> up ?
> >> Right, that works. I forgot about that option after you mentioned, 
> >> that
> >> it is most likely not that useful.
> >>
> >> Anyway, this are those sysfs files with an unmodified cpuidle 
> >> driver and
> >> the gt enabled and having maxcpus=1 set.
> >>
> >> /proc/timer_list:
> >>   Tick Device: mode: 1
> >>   Broadcast device
> >>   Clock Event Device: arm_global_timer
> >>max_delta_ns:   12884902005
> >>min_delta_ns:   1000
> >>mult:   715827876
> >>shift:  31
> >>mode:   3
> > Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)
> >
> > The previous timer_list output you gave me when removing the 
> > offending
> > cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).
> >
> > Is it possible you try to get this output again right after 
> > onlining the
> > cpu1 in order to check if the broadcast device switches to SHUTDOWN 
> > ?
>  How do I do 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-09 Thread Srinivas KANDAGATLA
On 08/08/13 18:11, Sören Brinkmann wrote:
> Hi Daniel,
> 
> On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
>> On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
>>> On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
 On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
> On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
>> On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
>>> On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
 On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
> On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
>> On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
>>> On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
 On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
> Hi Daniel,
>
> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
> (snip)
>>
>> the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework the 
>> local
>> timer will be stopped when entering to the idle state. In this 
>> case, the
>> cpuidle framework will call clockevents_notify(ENTER) and 
>> switches to a
>> broadcast timer and will call clockevents_notify(EXIT) when 
>> exiting the
>> idle state, switching the local timer back in use.
>
> I've been thinking about this, trying to understand how this 
> makes my
> boot attempts on Zynq hang. IIUC, the wrongly provided TIMER_STOP 
> flag
> would make the timer core switch to a broadcast device even 
> though it
> wouldn't be necessary. But shouldn't it still work? It sounds 
> like we do
> something useless, but nothing wrong in a sense that it should 
> result in
> breakage. I guess I'm missing something obvious. This timer 
> system will
> always remain a mystery to me.
>
> Actually this more or less leads to the question: What is this
> 'broadcast timer'. I guess that is some clockevent device which is
> common to all cores? (that would be the cadence_ttc for Zynq). Is 
> the
> hang pointing to some issue with that driver?

 If you look at the /proc/timer_list, which timer is used for 
 broadcasting ?
>>>
>>> So, the correct run results (full output attached).
>>>
>>> The vanilla kernel uses the twd timers as local timers and the TTC 
>>> as
>>> broadcast device:
>>> Tick Device: mode: 1
>>>  
>>> Broadcast device  
>>> Clock Event Device: ttc_clockevent
>>>
>>> When I remove the offending CPUIDLE flag and add the DT fragment to
>>> enable the global timer, the twd timers are still used as local 
>>> timers
>>> and the broadcast device is the global timer:
>>> Tick Device: mode: 1
>>>  
>>> Broadcast device
>>>  
>>> Clock Event Device: arm_global_timer
>>>
>>> Again, since boot hangs in the actually broken case, I don't see 
>>> way to
>>> obtain this information for that case.
>>
>> Can't you use the maxcpus=1 option to ensure the system to boot up ?
>
> Right, that works. I forgot about that option after you mentioned, 
> that
> it is most likely not that useful.
>
> Anyway, this are those sysfs files with an unmodified cpuidle driver 
> and
> the gt enabled and having maxcpus=1 set.
>
> /proc/timer_list:
>   Tick Device: mode: 1
>   Broadcast device
>   Clock Event Device: arm_global_timer
>max_delta_ns:   12884902005
>min_delta_ns:   1000
>mult:   715827876
>shift:  31
>mode:   3

 Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)

 The previous timer_list output you gave me when removing the offending
 cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).

 Is it possible you try to get this output again right after onlining 
 the
 cpu1 in order to check if the broadcast device switches to SHUTDOWN ?
>>>
>>> How do I do that? I tried to online CPU1 after booting with maxcpus=1
>>> and that didn't end well:
>>> # echo 1 > online && cat /proc/timer_list 
>>
>> Hmm, I was hoping to have a small delay before the kernel hangs but
>> apparently this is not the case... :(
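
For background, CPUIDLE_FLAG_TIMER_STOP is declared per idle state in a
cpuidle driver. A minimal sketch of a hypothetical driver (not the actual
Zynq code) showing where the flag lives and what the core does with it:

  #include <linux/cpuidle.h>
  #include <linux/module.h>
  #include <asm/proc-fns.h>

  static int sketch_enter_idle(struct cpuidle_device *dev,
                               struct cpuidle_driver *drv, int index)
  {
          cpu_do_idle();          /* ARM wait-for-interrupt */
          return index;
  }

  static struct cpuidle_driver sketch_idle_driver = {
          .name = "sketch_idle",
          .owner = THIS_MODULE,
          .states = {
                  [0] = {
                          .name = "C1",
                          .desc = "state whose local timer stops",
                          .exit_latency = 10,             /* us */
                          .target_residency = 100,        /* us */
                          /*
                           * TIMER_STOP makes the cpuidle core call
                           * clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER)
                           * before .enter() and ..._EXIT after it, moving
                           * the tick to the broadcast device around this
                           * state.
                           */
                          .flags = CPUIDLE_FLAG_TIME_VALID |
                                   CPUIDLE_FLAG_TIMER_STOP,
                          .enter = sketch_enter_idle,
                  },
          },
          .state_count = 1,
  };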

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-09 Thread Srinivas KANDAGATLA
On 08/08/13 18:11, Sören Brinkmann wrote:
 (snip)
 Hmm, I was hoping to have a small delay before the kernel hangs but
 apparently this is not the case... :(

 I suspect the global timer is shutdown at one moment but I don't
 understand why and when.

 Can you add a stack trace in the clockevents_shutdown function with
 the clockevent device name ? Perhaps, we may see at boot time an
 interesting trace when it hangs.

 I did this change:
   diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c
   index 38959c8..3ab11c1 100644
   --- a/kernel/time/clockevents.c
   +++ b/kernel/time/clockevents.c
   @@ -92,6 +92,8 @@ void clockevents_set_mode(struct clock_event_device 
 *dev,
 */
void clockevents_shutdown(struct clock_event_device *dev)
{
   +   pr_info("ce->name:%s\n", dev->name);
   +   dump_stack();
   clockevents_set_mode(dev, CLOCK_EVT_MODE_SHUTDOWN);
   dev->next_event.tv64 = KTIME_MAX;
}

 It is hit a few times during boot, so I attach a full boot log. I really
 don't know what to look for, but I hope you can spot something in it. I
 really appreciate you taking the time.

 Thanks for the traces.

 Sure.


 If you try without the ttc_clockevent configured in the kernel (but with
 twd and gt), does it boot ?
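
As an aside for readers decoding the timer_list dumps above: the
clockevents core converts a nanosecond delta into timer cycles as
cycles = (ns * mult) >> shift, and the mode values come from enum
clock_event_mode (1 = CLOCK_EVT_MODE_SHUTDOWN, 3 = CLOCK_EVT_MODE_ONESHOT).
A small standalone check of the arm_global_timer numbers quoted above,
assuming nothing beyond that formula:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t mult = 715827876;      /* from /proc/timer_list */
          unsigned int shift = 31;

          /* ns -> cycles, as clockevents_program_event() does */
          uint64_t hz = (1000000000ULL * mult) >> shift;
          printf("timer frequency: ~%llu Hz\n",   /* ~333 MHz */
                 (unsigned long long)hz);

          /* max_delta_ns maps back to the 32-bit comparator limit */
          uint64_t max_ns = 12884902005ULL;
          printf("max delta: ~%#llx cycles\n",    /* ~0xffffffff */
                 (unsigned long long)((max_ns * mult) >> shift));
          return 0;
  }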

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-09 Thread Mark Rutland
On Thu, Aug 08, 2013 at 06:22:36PM +0100, Stephen Boyd wrote:
 On 08/08/13 10:16, Mark Rutland wrote:
  On Thu, Aug 08, 2013 at 06:11:26PM +0100, Sören Brinkmann wrote:
  (snip)

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-09 Thread Daniel Lezcano
On 08/09/2013 12:32 PM, Srinivas KANDAGATLA wrote:
 On 08/08/13 18:11, Sören Brinkmann wrote:
 (snip)

 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-09 Thread Sören Brinkmann
On Fri, Aug 09, 2013 at 11:32:42AM +0100, Srinivas KANDAGATLA wrote:
 On 08/08/13 18:11, Sören Brinkmann wrote:
  (snip)

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-09 Thread Stephen Boyd
On 08/09, Daniel Lezcano wrote:
 
 yes, but at least the broadcast mechanism should send an IPI to cpu0 to
 wake it up, no ? As Stephen stated, this kind of configuration has
 never been tested before, so the tick broadcast code is not handling this
 case properly IMHO.
 

If you have a per-cpu tick device that isn't suffering from
FEAT_C3_STOP why wouldn't you use that for the tick versus a
per-cpu tick device that has FEAT_C3_STOP? It sounds like there
is a bug in the preference logic or you should boost the rating
of the arm global timer above the twd. Does this patch help? It
should make the arm global timer the tick device and whatever the
cadence timer you have into the broadcast device.

---8<---
diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
index 218bcb5..d3539e5 100644
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -77,6 +77,9 @@ static bool tick_check_broadcast_device(struct clock_event_device *curdev,
 	if (tick_broadcast_device.mode == TICKDEV_MODE_ONESHOT &&
 	    !(newdev->features & CLOCK_EVT_FEAT_ONESHOT))
 		return false;
 
+	if (cpumask_equal(newdev->cpumask, cpumask_of(smp_processor_id())))
+		return false;
+
 	return !curdev || newdev->rating > curdev->rating;
 }
 
diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
index 64522ec..1628b9f 100644
--- a/kernel/time/tick-common.c
+++ b/kernel/time/tick-common.c
@@ -245,6 +245,15 @@ static bool tick_check_preferred(struct clock_event_device *curdev,
 	}
 
 	/*
+	 * Prefer tick devices that don't suffer from FEAT_C3_STOP
+	 * regardless of their rating
+	 */
+	if (curdev && cpumask_equal(curdev->cpumask, newdev->cpumask) &&
+	    !(newdev->features & CLOCK_EVT_FEAT_C3STOP) &&
+	    (curdev->features & CLOCK_EVT_FEAT_C3STOP))
+		return true;
+
+	/*
 	 * Use the higher rated one, but prefer a CPU local device with a lower
 	 * rating than a non-CPU local device
 	 */
-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
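
To make the effect of the two hunks concrete, here is a small standalone
model (plain C with made-up structures, not the kernel code) of the added
preference rule, fed with the ratings discussed in this thread; the
cpumask_equal() test is elided by assuming both devices belong to the
same CPU:

  #include <stdbool.h>
  #include <stdio.h>

  #define FEAT_C3STOP 0x1 /* stands in for CLOCK_EVT_FEAT_C3STOP */

  struct cedev {
          const char *name;
          int rating;
          unsigned int features;
  };

  /* models the patched tick_check_preferred() for two same-CPU devices */
  static bool prefer_new(const struct cedev *cur, const struct cedev *cand)
  {
          /* new rule: a timer that keeps ticking in idle beats any rating */
          if (cur && !(cand->features & FEAT_C3STOP) &&
              (cur->features & FEAT_C3STOP))
                  return true;
          /* old rule: higher rating wins */
          return !cur || cand->rating > cur->rating;
  }

  int main(void)
  {
          struct cedev twd = { "twd", 350, FEAT_C3STOP };
          struct cedev gt  = { "arm_global_timer", 300, 0 };

          /* on rating alone the twd stays; the C3STOP rule hands it to gt */
          printf("gt replaces twd: %s\n",
                 prefer_new(&twd, &gt) ? "yes" : "no");
          return 0;
  }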


Re: Enable arm_global_timer for Zynq brakes boot

2013-08-09 Thread Sören Brinkmann
On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
 On 08/09, Daniel Lezcano wrote:
  
  yes, but at least the broadcast mechanism should send an IPI to cpu0 to
  wake it up, no ? As Stephen stated, this kind of configuration has
  never been tested before, so the tick broadcast code is not handling this
  case properly IMHO.
  
 
 If you have a per-cpu tick device that isn't suffering from
 FEAT_C3_STOP why wouldn't you use that for the tick versus a
 per-cpu tick device that has FEAT_C3_STOP? It sounds like there
 is a bug in the preference logic or you should boost the rating
 of the arm global timer above the twd. Does this patch help? It
 should make the arm global timer the tick device and whatever the
 cadence timer you have into the broadcast device.

I'm not sure I'm getting this right. But neither the cadence_ttc nor the
arm_global_timer has the FEAT_C3_STOP flag set. So, shouldn't they be
treated equally even with your change?

Sören


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Enable arm_global_timer for Zynq brakes boot

2013-08-09 Thread Stephen Boyd
On 08/09/13 10:48, Sören Brinkmann wrote:
 On Fri, Aug 09, 2013 at 10:27:57AM -0700, Stephen Boyd wrote:
 On 08/09, Daniel Lezcano wrote:
 yes, but at least the broadcast mechanism should send an IPI to cpu0 to
 wake it up, no ? As Stephen stated, this kind of configuration has
 never been tested before, so the tick broadcast code is not handling this
 case properly IMHO.

 If you have a per-cpu tick device that isn't suffering from
 FEAT_C3_STOP why wouldn't you use that for the tick versus a
 per-cpu tick device that has FEAT_C3_STOP? It sounds like there
 is a bug in the preference logic or you should boost the rating
 of the arm global timer above the twd. Does this patch help? It
 should make the arm global timer the tick device and whatever the
 cadence timer you have into the broadcast device.
 I'm not sure I'm getting this right. But neither the cadence_ttc nor the
 arm_global_timer has the FEAT_C3_STOP flag set. So, shouldn't they be
 treated equally even with your change?

The cadence_ttc is a global clockevent, i.e. the irq can interrupt any
CPU, and it has a rating of 200. The arm global timer is a per-cpu
clockevent with a rating of 300. The TWD is a per-cpu clockevent with a
rating of 350. Because the arm global timer is rated higher than the
cadence_ttc but less than the TWD the arm global timer will fill in the
broadcast spot and the TWD will fill in the tick position. We really
want the arm global timer to fill in the tick position and the
cadence_ttc to fill in the broadcast spot (although the cadence_ttc
should never be needed because the arm global timer doesn't need help in
deep idle states).

Unless I got lost in all the combinations of tests you've done so far?

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
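
The line-up Stephen describes, sketched as clockevent declarations
(ratings as given in this thread; the other field values are illustrative,
not copied from the drivers):

  #include <linux/clockchips.h>

  /* per-cpu local timer: highest rating, but stops in deep idle */
  static struct clock_event_device twd_evt = {
          .name           = "local_timer",
          .rating         = 350,
          .features       = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT |
                            CLOCK_EVT_FEAT_C3STOP,
  };

  /* per-cpu global timer comparator: keeps ticking in idle */
  static struct clock_event_device gt_evt = {
          .name           = "arm_global_timer",
          .rating         = 300,
          .features       = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT,
  };

  /* chip-wide TTC: lowest rating, eligible as broadcast device */
  static struct clock_event_device ttc_evt = {
          .name           = "ttc_clockevent",
          .rating         = 200,
          .features       = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT,
  };

With these numbers the stock kernel gives each CPU the twd as tick device
and the arm_global_timer as broadcast device; the patch above instead lets
the gt take the tick slot because it lacks FEAT_C3_STOP, leaving the
ttc_clockevent free for broadcast duty.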


Re: Enable arm_global_timer for Zynq brakes boot

2013-08-08 Thread Sören Brinkmann
On Thu, Aug 08, 2013 at 06:16:50PM +0100, Mark Rutland wrote:
> On Thu, Aug 08, 2013 at 06:11:26PM +0100, Sören Brinkmann wrote:
> > (snip)


Re: Enable arm_global_timer for Zynq brakes boot

2013-08-08 Thread Mark Rutland
On Thu, Aug 08, 2013 at 06:11:26PM +0100, Sören Brinkmann wrote:
> Hi Daniel,
> (snip)

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-08 Thread Sören Brinkmann
Hi Daniel,

On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
> (snip)



Re: Enable arm_global_timer for Zynq brakes boot

2013-08-08 Thread Stephen Boyd
On 08/08/13 10:16, Mark Rutland wrote:
 On Thu, Aug 08, 2013 at 06:11:26PM +0100, Sören Brinkmann wrote:
 (snip)
 Thanks for the traces.
 Sure.

 If you try without the ttc_clockevent configured in the kernel (but with
 twd and gt), does it boot ?
 Absence of the TTC doesn't seem to make any difference.

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-08 Thread Sören Brinkmann
On Thu, Aug 08, 2013 at 06:16:50PM +0100, Mark Rutland wrote:
 On Thu, Aug 08, 2013 at 06:11:26PM +0100, Sören Brinkmann wrote:
  Hi Daniel,
  
  On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
   On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
Hi Daniel,
   
On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano 
wrote:
(snip)
   
the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework 
the local
timer will be stopped when entering to the idle state. In 
this case, the
cpuidle framework will call clockevents_notify(ENTER) and 
switches to a
broadcast timer and will call clockevents_notify(EXIT) when 
exiting the
idle state, switching the local timer back in use.
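 (To illustrate the mechanism: a minimal state table on the 3.11 cpuidle
 API, sketched below with hypothetical names, latencies and enter hook; the
 actual Zynq driver, drivers/cpuidle/cpuidle-zynq.c, is structured along
 these lines:

	#include <linux/module.h>
	#include <linux/cpuidle.h>
	#include <asm/cpuidle.h>
	#include <asm/proc-fns.h>

	/* Stand-in for a real power-down sequence. */
	static int example_deep_idle_enter(struct cpuidle_device *dev,
					   struct cpuidle_driver *drv, int index)
	{
		cpu_do_idle();
		return index;
	}

	static struct cpuidle_driver example_idle_driver = {
		.name = "example_idle",		/* hypothetical */
		.owner = THIS_MODULE,
		.states = {
			ARM_CPUIDLE_WFI_STATE,	/* state 0: local timer keeps ticking */
			{			/* state 1: local timer presumed to stop */
				.enter = example_deep_idle_enter,
				.exit_latency = 300,		/* made-up numbers */
				.target_residency = 10000,
				/* This flag is what makes the cpuidle framework
				 * issue the clockevents_notify(ENTER)/(EXIT)
				 * calls described above. */
				.flags = CPUIDLE_FLAG_TIME_VALID |
					 CPUIDLE_FLAG_TIMER_STOP,
				.name = "DEEP",
				.desc = "hypothetical deep idle state",
			},
		},
		.state_count = 2,
	};

 Registering it with cpuidle_register(&example_idle_driver, NULL) would then
 exercise exactly the broadcast hand-over being debugged in this thread.)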
   
I've been thinking about this, trying to understand how this 
makes my
boot attempts on Zynq hang. IIUC, the wrongly provided 
TIMER_STOP flag
would make the timer core switch to a broadcast device even 
though it
wouldn't be necessary. But shouldn't it still work? It sounds 
like we do
something useless, but nothing wrong in a sense that it 
should result in
breakage. I guess I'm missing something obvious. This timer 
system will
always remain a mystery to me.
   
Actually this more or less leads to the question: What is this
'broadcast timer'. I guess that is some clockevent device 
which is
common to all cores? (that would be the cadence_ttc for 
Zynq). Is the
hang pointing to some issue with that driver?
   
If you look at the /proc/timer_list, which timer is used for 
broadcasting ?
   
So, the correct run results (full output attached).
   
The vanilla kernel uses the twd timers as local timers and the 
TTC as
broadcast device:
Tick Device: mode: 1
Broadcast device
Clock Event Device: ttc_clockevent
   
When I remove the offending CPUIDLE flag and add the DT 
fragment to
enable the global timer, the twd timers are still used as local 
timers
and the broadcast device is the global timer:
Tick Device: mode: 1
Broadcast device
Clock Event Device: arm_global_timer
   
Again, since boot hangs in the actually broken case, I don't 
see a way to
obtain this information for that case.
   
Can't you use the maxcpus=1 option to ensure the system to boot 
up ?
   
Right, that works. I forgot about that option after you 
mentioned, that
it is most likely not that useful.
   
Anyway, these are the sysfs files with an unmodified cpuidle 
driver and
the gt enabled and having maxcpus=1 set.
   
/proc/timer_list:
  Tick Device: mode: 1
  Broadcast device
  Clock Event Device: arm_global_timer
   max_delta_ns:   12884902005
   min_delta_ns:   1000
   mult:   715827876
   shift:  31
   mode:   3
   
Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)
   
The previous timer_list output you gave me when removing the 
offending
cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).
   
Is it possible you try to get this output again right after 
onlining the
cpu1 in order to check if the broadcast device switches to 
SHUTDOWN ?
   
How do I do that? I tried to online CPU1 after booting with 
maxcpus=1
and that didn't end well:
# echo 1 > online && cat /proc/timer_list
   
Hmm, I was hoping to have a small delay before the kernel hangs but
apparently this is not the case... :(
   
I suspect the global timer is shutdown at one moment but I don't
understand why and when.
   
Can you add a stack trace in the clockevents_shutdown function with
the clockevent device name ? Perhaps, we may see at boot time an
interesting trace when it hangs.
   
I did this change:
  diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c
  index 38959c8..3ab11c1 100644
  --- a/kernel/time/clockevents.c
  +++ b/kernel/time/clockevents.c
  @@ -92,6 +92,8 @@ void clockevents_set_mode(struct 
clock_event_device *dev,
*/
   void clockevents_shutdown(struct clock_event_device *dev)
   {
  +   pr_info("ce->name:%s\n", dev->name);
  +   dump_stack();
  clockevents_set_mode(dev, 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Sören Brinkmann
On Tue, Aug 06, 2013 at 06:09:09PM +0200, Daniel Lezcano wrote:
> On 08/06/2013 03:18 PM, Michal Simek wrote:
> 
> [ ... ]
> 
> > Soren: Are you able to replicate this issue on QEMU?
> > If yes, it should be the best if you can provide Qemu, kernel .config/
> > rootfs and simple manual to Daniel how to reach that fault.
> 
>  I tried to download qemu for zynq but it fails:
> 
>  git clone git://git.xilinx.com/qemu-xarm.git
>  Cloning into 'qemu-xarm'...
>  fatal: The remote end hung up unexpectedly
> >>>
> >>> Not sure which site you have found, but
> >>> it should be just qemu.git
> >>> https://github.com/Xilinx/qemu
> >>>
> >>> or github clone.
> >>
> >> Ok, cool I was able to clone it.
> >>
>  I am also looking for the option specified for the kernel:
> 
>  "The kernel needs to be built with this feature turned on (in
>  menuconfig, System Type->Xilinx Specific Features -> Device Tree At
>  Fixed Address)."
> >>
> >> Ok.
> >>
> >>> This also sounds like a very ancient tree.
> >>> This is the latest kernel tree - master-next is the latest devel branch.
> >>> https://github.com/Xilinx/linux-xlnx
> >>
> >> Ok, cool. I have the right one.
> 
> Following the documentation, I was able to boot a kernel with qemu for
> the linux-xlnx and qemu-xilinx.
> 
> But this kernel is outdated regarding the upstream one, so I tried to
> boot a 3.11-rc4 kernel without success, I did the following:
> 
> I used the default config file from linux-xlnx for the upstream kernel.
> 
> I compiled the kernel with:
> 
> make -j 5 ARCH=arm CROSS_COMPILE=arm-linux-gnueabi-
> UIMAGE_LOADADDR=0x8000 uImage
> 
> I generated the dtb with:
> 
> make -j 5 ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- dtbs
> 
> For qemu, I started qemu with:
> 
> ./arm-softmmu/qemu-system-arm -M arm-generic-fdt -nographic -smp 2
> -machine linux=on -serial mon:stdio -dtb zynq-zed.dtb -kernel
> kernel/zImage -initrd filesystem/ramdisk.img
> 
> I tried with the dtb available for the upstream kernel:
> 
> zynq-zc706.dtb, zynq-zc702.dtb and zynq-zed.dtb
> 
> Did I miss something ?

Some debugging hints in case you wanna go through this.
Add this additional option to configure:
 --extra-cflags="-DFDT_GENERIC_UTIL_ERR_DEBUG=1"

That'll print out a lot of messages when the dtb is parsed. It's likely
that QEMU invalidates some vital node due to its compatible string being
unknown. In that case you can simply add it to the list of known devices
in 
hw/core/fdt_generic_devices.c
The list is pretty much at the end of that file. I try to get it running
here and might be able to send you a patch.

Sören


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Sören Brinkmann
On Tue, Aug 06, 2013 at 10:46:54AM +0200, Daniel Lezcano wrote:
> On 08/06/2013 03:28 AM, Sören Brinkmann wrote:
> > Hi Daniel,
> > 
> > On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
> >> On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
> >>> On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
>  On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
> > On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
> >> On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
> >>> On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
>  On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
> > On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
> >> On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
> >>> On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
>  On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
> > Hi Daniel,
> >
> > On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
> > (snip)
> >>
> >> the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework 
> >> the local
> >> timer will be stopped when entering to the idle state. In this 
> >> case, the
> >> cpuidle framework will call clockevents_notify(ENTER) and 
> >> switches to a
> >> broadcast timer and will call clockevents_notify(EXIT) when 
> >> exiting the
> >> idle state, switching the local timer back in use.
> >
> > I've been thinking about this, trying to understand how this 
> > makes my
> > boot attempts on Zynq hang. IIUC, the wrongly provided 
> > TIMER_STOP flag
> > would make the timer core switch to a broadcast device even 
> > though it
> > wouldn't be necessary. But shouldn't it still work? It sounds 
> > like we do
> > something useless, but nothing wrong in a sense that it should 
> > result in
> > breakage. I guess I'm missing something obvious. This timer 
> > system will
> > always remain a mystery to me.
> >
> > Actually this more or less leads to the question: What is this
> > 'broadcast timer'. I guess that is some clockevent device which 
> > is
> > common to all cores? (that would be the cadence_ttc for Zynq). 
> > Is the
> > hang pointing to some issue with that driver?
> 
>  If you look at the /proc/timer_list, which timer is used for 
>  broadcasting ?
> >>>
> >>> So, the correct run results (full output attached).
> >>>
> >>> The vanilla kernel uses the twd timers as local timers and the 
> >>> TTC as
> >>> broadcast device:
> >>>   Tick Device: mode: 1
> >>>  
> >>>   Broadcast device  
> >>>   Clock Event Device: ttc_clockevent
> >>>
> >>> When I remove the offending CPUIDLE flag and add the DT fragment 
> >>> to
> >>> enable the global timer, the twd timers are still used as local 
> >>> timers
> >>> and the broadcast device is the global timer:
> >>>   Tick Device: mode: 1
> >>>  
> >>>   Broadcast device
> >>>  
> >>>   Clock Event Device: arm_global_timer
> >>>
> >>> Again, since boot hangs in the actually broken case, I don't see 
> >>> a way to
> >>> obtain this information for that case.
> >>
> >> Can't you use the maxcpus=1 option to ensure the system to boot up 
> >> ?
> >
> > Right, that works. I forgot about that option after you mentioned, 
> > that
> > it is most likely not that useful.
> >
> > Anyway, these are the sysfs files with an unmodified cpuidle 
> > driver and
> > the gt enabled and having maxcpus=1 set.
> >
> > /proc/timer_list:
> > Tick Device: mode: 1
> > Broadcast device
> > Clock Event Device: arm_global_timer
> >  max_delta_ns:   12884902005
> >  min_delta_ns:   1000
> >  mult:   715827876
> >  shift:  31
> >  mode:   3
> 
>  Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)
> 
>  The previous timer_list output you gave me when removing the 
>  offending
>  cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).
> 
>  Is it possible you try to get this output again right after 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Sören Brinkmann
On Tue, Aug 06, 2013 at 06:09:09PM +0200, Daniel Lezcano wrote:
> On 08/06/2013 03:18 PM, Michal Simek wrote:
> 
> [ ... ]
> 
> > Soren: Are you able to replicate this issue on QEMU?
> > If yes, it should be the best if you can provide Qemu, kernel .config/
> > rootfs and simple manual to Daniel how to reach that fault.
> 
>  I tried to download qemu for zynq but it fails:
> 
>  git clone git://git.xilinx.com/qemu-xarm.git
>  Cloning into 'qemu-xarm'...
>  fatal: The remote end hung up unexpectedly
> >>>
> >>> Not sure which site you have found, but
> >>> it should be just qemu.git
> >>> https://github.com/Xilinx/qemu
> >>>
> >>> or github clone.
> >>
> >> Ok, cool I was able to clone it.
> >>
>  I am also looking for the option specified for the kernel:
> 
>  "The kernel needs to be built with this feature turned on (in
>  menuconfig, System Type->Xilinx Specific Features -> Device Tree At
>  Fixed Address)."
> >>
> >> Ok.
> >>
> >>> This also sounds like a very ancient tree.
> >>> This is the latest kernel tree - master-next is the latest devel branch.
> >>> https://github.com/Xilinx/linux-xlnx
> >>
> >> Ok, cool. I have the right one.
> 
> Following the documentation, I was able to boot a kernel with qemu for
> the linux-xlnx and qemu-xilinx.
> 
> But this kernel is outdated regarding the upstream one, so I tried to
> boot a 3.11-rc4 kernel without success, I did the following:
> 
> I used the default config file from linux-xlnx for the upstream kernel.
> 
> I compiled the kernel with:
> 
> make -j 5 ARCH=arm CROSS_COMPILE=arm-linux-gnueabi-
> UIMAGE_LOADADDR=0x8000 uImage
> 
> I generated the dtb with:
> 
> make -j 5 ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- dtbs
> 
> For qemu, I started qemu with:
> 
> ./arm-softmmu/qemu-system-arm -M arm-generic-fdt -nographic -smp 2
> -machine linux=on -serial mon:stdio -dtb zynq-zed.dtb -kernel
> kernel/zImage -initrd filesystem/ramdisk.img
> 
> I tried with the dtb available for the upstream kernel:
> 
> zynq-zc706.dtb, zynq-zc702.dtb and zynq-zed.dtb
> 
> Did I miss something ?

Unfortunately the public github QEMU is behind our internal development.
IIRC, there were a couple of DT compatible strings QEMU didn't know.
Those are sometimes removed making boot fail/hang.

Are you on IRC? It's probably easier to resolve this in a more direct
way (I'm 'sorenb' on freenode). Otherwise I could give you a few more
instructions for debugging this per mail?

Sören


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Daniel Lezcano
On 08/06/2013 03:18 PM, Michal Simek wrote:

[ ... ]

> Soren: Are you able to replicate this issue on QEMU?
> If yes, it should be the best if you can provide Qemu, kernel .config/
> rootfs and simple manual to Daniel how to reach that fault.

 I tried to download qemu for zynq but it fails:

 git clone git://git.xilinx.com/qemu-xarm.git
 Cloning into 'qemu-xarm'...
 fatal: The remote end hung up unexpectedly
>>>
>>> Not sure which site you have found, but
>>> it should be just qemu.git
>>> https://github.com/Xilinx/qemu
>>>
>>> or github clone.
>>
>> Ok, cool I was able to clone it.
>>
 I am also looking for the option specified for the kernel:

 "The kernel needs to be built with this feature turned on (in
 menuconfig, System Type->Xilinx Specific Features -> Device Tree At
 Fixed Address)."
>>
>> Ok.
>>
>>> This also sounds like a very ancient tree.
>>> This is the latest kernel tree - master-next is the latest devel branch.
>>> https://github.com/Xilinx/linux-xlnx
>>
>> Ok, cool. I have the right one.

Following the documentation, I was able to boot a kernel with qemu for
the linux-xlnx and qemu-xilinx.

But this kernel is outdated regarding the upstream one, so I tried to
boot a 3.11-rc4 kernel without success, I did the following:

I used the default config file from linux-xlnx for the upstream kernel.

I compiled the kernel with:

make -j 5 ARCH=arm CROSS_COMPILE=arm-linux-gnueabi-
UIMAGE_LOADADDR=0x8000 uImage

I generated the dtb with:

make -j 5 ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- dtbs

For qemu, I started qemu with:

./arm-softmmu/qemu-system-arm -M arm-generic-fdt -nographic -smp 2
-machine linux=on -serial mon:stdio -dtb zynq-zed.dtb -kernel
kernel/zImage -initrd filesystem/ramdisk.img

I tried with the dtb available for the upstream kernel:

zynq-zc706.dtb, zynq-zc702.dtb and zynq-zed.dtb

Did I miss something ?

Thanks
  -- Daniel



>>
>>> Or there should be an option to use the latest kernel from kernel.org.
>>> (I think Soren is using it)
>>>
>>> Zynq is the part of multiplatfrom kernel and cadence ttc is there,
>>> dts is also in the mainline kernel.
>>>
 ps : apart that, well documented website !
>>>
>>> Can you send me the link to it?
>>
>> http://xilinx.wikidot.com/zynq-qemu
>> http://xilinx.wikidot.com/zynq-linux
> 
> I will find out information why it is still there.
> I think it was moved to the new location.
> 
>>
>>> This should be the main page for it.
>>> http://www.wiki.xilinx.com/


-- 
  Linaro.org │ Open source software for ARM SoCs

Follow Linaro:   Facebook |
 Twitter |
 Blog

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Michal Simek
On 08/06/2013 03:08 PM, Daniel Lezcano wrote:
> On 08/06/2013 02:41 PM, Michal Simek wrote:
>> On 08/06/2013 02:30 PM, Daniel Lezcano wrote:
>>> On 08/06/2013 11:18 AM, Michal Simek wrote:
 On 08/06/2013 10:46 AM, Daniel Lezcano wrote:
> On 08/06/2013 03:28 AM, Sören Brinkmann wrote:
>> Hi Daniel,
>>
>> On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
>>> On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
 On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
> On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
>> On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
>>> On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
 On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
> On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
>> On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
>>> On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
 On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
> On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
>> Hi Daniel,
>>
>> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano 
>> wrote:
>> (snip)
>>>
>>> the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle 
>>> framework the local
>>> timer will be stopped when entering to the idle state. In 
>>> this case, the
>>> cpuidle framework will call clockevents_notify(ENTER) and 
>>> switches to a
>>> broadcast timer and will call clockevents_notify(EXIT) when 
>>> exiting the
>>> idle state, switching the local timer back in use.
>>
>> I've been thinking about this, trying to understand how this 
>> makes my
>> boot attempts on Zynq hang. IIUC, the wrongly provided 
>> TIMER_STOP flag
>> would make the timer core switch to a broadcast device even 
>> though it
>> wouldn't be necessary. But shouldn't it still work? It 
>> sounds like we do
>> something useless, but nothing wrong in a sense that it 
>> should result in
>> breakage. I guess I'm missing something obvious. This timer 
>> system will
>> always remain a mystery to me.
>>
>> Actually this more or less leads to the question: What is 
>> this
>> 'broadcast timer'. I guess that is some clockevent device 
>> which is
>> common to all cores? (that would be the cadence_ttc for 
>> Zynq). Is the
>> hang pointing to some issue with that driver?
>
> If you look at the /proc/timer_list, which timer is used for 
> broadcasting ?

 So, the correct run results (full output attached).

 The vanilla kernel uses the twd timers as local timers and the 
 TTC as
 broadcast device:
Tick Device: mode: 1
  
Broadcast device  
Clock Event Device: ttc_clockevent

 When I remove the offending CPUIDLE flag and add the DT 
 fragment to
 enable the global timer, the twd timers are still used as 
 local timers
 and the broadcast device is the global timer:
Tick Device: mode: 1
  
Broadcast device
  
Clock Event Device: arm_global_timer

 Again, since boot hangs in the actually broken case, I don't 
 see a way to
 obtain this information for that case.
>>>
>>> Can't you use the maxcpus=1 option to ensure the system to boot 
>>> up ?
>>
>> Right, that works. I forgot about that option after you 
>> mentioned, that
>> it is most likely not that useful.
>>
>> Anyway, these are the sysfs files with an unmodified cpuidle 
>> driver and
>> the gt enabled and having maxcpus=1 set.
>>
>> /proc/timer_list:
>>  Tick Device: mode: 1
>>  Broadcast device
>>  Clock Event Device: 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Daniel Lezcano
On 08/06/2013 02:41 PM, Michal Simek wrote:
> On 08/06/2013 02:30 PM, Daniel Lezcano wrote:
>> On 08/06/2013 11:18 AM, Michal Simek wrote:
>>> On 08/06/2013 10:46 AM, Daniel Lezcano wrote:
 On 08/06/2013 03:28 AM, Sören Brinkmann wrote:
> Hi Daniel,
>
> On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
>> On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
>>> On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
 On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
> On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
>> On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
>>> On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
 On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
> On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
>> On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
>>> On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
 On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
> Hi Daniel,
>
> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano 
> wrote:
> (snip)
>>
>> the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework 
>> the local
>> timer will be stopped when entering to the idle state. In 
>> this case, the
>> cpuidle framework will call clockevents_notify(ENTER) and 
>> switches to a
>> broadcast timer and will call clockevents_notify(EXIT) when 
>> exiting the
>> idle state, switching the local timer back in use.
>
> I've been thinking about this, trying to understand how this 
> makes my
> boot attempts on Zynq hang. IIUC, the wrongly provided 
> TIMER_STOP flag
> would make the timer core switch to a broadcast device even 
> though it
> wouldn't be necessary. But shouldn't it still work? It sounds 
> like we do
> something useless, but nothing wrong in a sense that it 
> should result in
> breakage. I guess I'm missing something obvious. This timer 
> system will
> always remain a mystery to me.
>
> Actually this more or less leads to the question: What is this
> 'broadcast timer'. I guess that is some clockevent device 
> which is
> common to all cores? (that would be the cadence_ttc for 
> Zynq). Is the
> hang pointing to some issue with that driver?

 If you look at the /proc/timer_list, which timer is used for 
 broadcasting ?
>>>
>>> So, the correct run results (full output attached).
>>>
>>> The vanilla kernel uses the twd timers as local timers and the 
>>> TTC as
>>> broadcast device:
>>> Tick Device: mode: 1
>>>  
>>> Broadcast device  
>>> Clock Event Device: ttc_clockevent
>>>
>>> When I remove the offending CPUIDLE flag and add the DT 
>>> fragment to
>>> enable the global timer, the twd timers are still used as local 
>>> timers
>>> and the broadcast device is the global timer:
>>> Tick Device: mode: 1
>>>  
>>> Broadcast device
>>>  
>>> Clock Event Device: arm_global_timer
>>>
>>> Again, since boot hangs in the actually broken case, I don't 
>>> see a way to
>>> obtain this information for that case.
>>
>> Can't you use the maxcpus=1 option to ensure the system to boot 
>> up ?
>
> Right, that works. I forgot about that option after you 
> mentioned, that
> it is most likely not that useful.
>
> Anyway, these are the sysfs files with an unmodified cpuidle 
> driver and
> the gt enabled and having maxcpus=1 set.
>
> /proc/timer_list:
>   Tick Device: mode: 1
>   Broadcast device
>   Clock Event Device: arm_global_timer
>max_delta_ns:   12884902005
>min_delta_ns:   1000
>mult:   715827876
>shift: 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Michal Simek
On 08/06/2013 02:30 PM, Daniel Lezcano wrote:
> On 08/06/2013 11:18 AM, Michal Simek wrote:
>> On 08/06/2013 10:46 AM, Daniel Lezcano wrote:
>>> On 08/06/2013 03:28 AM, Sören Brinkmann wrote:
 Hi Daniel,

 On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
> On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
>> On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
>>> On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
 On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
> On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
>> On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
>>> On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
 On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
> On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
>> On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
>>> On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
 Hi Daniel,

 On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
 (snip)
>
> the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework 
> the local
> timer will be stopped when entering to the idle state. In 
> this case, the
> cpuidle framework will call clockevents_notify(ENTER) and 
> switches to a
> broadcast timer and will call clockevents_notify(EXIT) when 
> exiting the
> idle state, switching the local timer back in use.

 I've been thinking about this, trying to understand how this 
 makes my
 boot attempts on Zynq hang. IIUC, the wrongly provided 
 TIMER_STOP flag
 would make the timer core switch to a broadcast device even 
 though it
 wouldn't be necessary. But shouldn't it still work? It sounds 
 like we do
 something useless, but nothing wrong in a sense that it should 
 result in
 breakage. I guess I'm missing something obvious. This timer 
 system will
 always remain a mystery to me.

 Actually this more or less leads to the question: What is this
 'broadcast timer'. I guess that is some clockevent device 
 which is
 common to all cores? (that would be the cadence_ttc for Zynq). 
 Is the
 hang pointing to some issue with that driver?
>>>
>>> If you look at the /proc/timer_list, which timer is used for 
>>> broadcasting ?
>>
>> So, the correct run results (full output attached).
>>
>> The vanilla kernel uses the twd timers as local timers and the 
>> TTC as
>> broadcast device:
>>  Tick Device: mode: 1
>>  
>>  Broadcast device  
>>  Clock Event Device: ttc_clockevent
>>
>> When I remove the offending CPUIDLE flag and add the DT fragment 
>> to
>> enable the global timer, the twd timers are still used as local 
>> timers
>> and the broadcast device is the global timer:
>>  Tick Device: mode: 1
>>  
>>  Broadcast device
>>  
>>  Clock Event Device: arm_global_timer
>>
>> Again, since boot hangs in the actually broken case, I don't see 
>> a way to
>> obtain this information for that case.
>
> Can't you use the maxcpus=1 option to ensure the system to boot 
> up ?

 Right, that works. I forgot about that option after you mentioned, 
 that
 it is most likely not that useful.

 Anyway, these are the sysfs files with an unmodified cpuidle 
 driver and
 the gt enabled and having maxcpus=1 set.

 /proc/timer_list:
Tick Device: mode: 1
Broadcast device
Clock Event Device: arm_global_timer
 max_delta_ns:   12884902005
 min_delta_ns:   1000
 mult:   715827876
 shift:  31
 mode:   3
>>>
>>> Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)
>>>
>>> The previous timer_list output you gave me when removing the 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Daniel Lezcano
On 08/06/2013 11:18 AM, Michal Simek wrote:
> On 08/06/2013 10:46 AM, Daniel Lezcano wrote:
>> On 08/06/2013 03:28 AM, Sören Brinkmann wrote:
>>> Hi Daniel,
>>>
>>> On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
 On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
> On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
>> On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
>>> On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
 On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
> On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
>> On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
>>> On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
 On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
> On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
>> On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
>>> Hi Daniel,
>>>
>>> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
>>> (snip)

 the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework 
 the local
 timer will be stopped when entering to the idle state. In this 
 case, the
 cpuidle framework will call clockevents_notify(ENTER) and 
 switches to a
 broadcast timer and will call clockevents_notify(EXIT) when 
 exiting the
 idle state, switching the local timer back in use.
>>>
>>> I've been thinking about this, trying to understand how this 
>>> makes my
>>> boot attempts on Zynq hang. IIUC, the wrongly provided 
>>> TIMER_STOP flag
>>> would make the timer core switch to a broadcast device even 
>>> though it
>>> wouldn't be necessary. But shouldn't it still work? It sounds 
>>> like we do
>>> something useless, but nothing wrong in a sense that it should 
>>> result in
>>> breakage. I guess I'm missing something obvious. This timer 
>>> system will
>>> always remain a mystery to me.
>>>
>>> Actually this more or less leads to the question: What is this
>>> 'broadcast timer'. I guess that is some clockevent device which 
>>> is
>>> common to all cores? (that would be the cadence_ttc for Zynq). 
>>> Is the
>>> hang pointing to some issue with that driver?
>>
>> If you look at the /proc/timer_list, which timer is used for 
>> broadcasting ?
>
> So, the correct run results (full output attached).
>
> The vanilla kernel uses the twd timers as local timers and the 
> TTC as
> broadcast device:
>   Tick Device: mode: 1
>  
>   Broadcast device  
>   Clock Event Device: ttc_clockevent
>
> When I remove the offending CPUIDLE flag and add the DT fragment 
> to
> enable the global timer, the twd timers are still used as local 
> timers
> and the broadcast device is the global timer:
>   Tick Device: mode: 1
>  
>   Broadcast device
>  
>   Clock Event Device: arm_global_timer
>
> Again, since boot hangs in the actually broken case, I don't see 
> a way to
> obtain this information for that case.

 Can't you use the maxcpus=1 option to ensure the system to boot up 
 ?
>>>
>>> Right, that works. I forgot about that option after you mentioned, 
>>> that
>>> it is most likely not that useful.
>>>
>>> Anyway, these are the sysfs files with an unmodified cpuidle 
>>> driver and
>>> the gt enabled and having maxcpus=1 set.
>>>
>>> /proc/timer_list:
>>> Tick Device: mode: 1
>>> Broadcast device
>>> Clock Event Device: arm_global_timer
>>>  max_delta_ns:   12884902005
>>>  min_delta_ns:   1000
>>>  mult:   715827876
>>>  shift:  31
>>>  mode:   3
>>
>> Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)
>>
>> The previous timer_list output you gave me when removing the 
>> offending
>> cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).
>>
>> Is it possible you try to get this 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Michal Simek
On 08/06/2013 10:46 AM, Daniel Lezcano wrote:
> On 08/06/2013 03:28 AM, Sören Brinkmann wrote:
>> Hi Daniel,
>>
>> On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
>>> On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
 On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
> On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
>> On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
>>> On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
 On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
> On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
>> On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
>>> On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
 On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
> On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
>> Hi Daniel,
>>
>> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
>> (snip)
>>>
>>> the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework 
>>> the local
>>> timer will be stopped when entering to the idle state. In this 
>>> case, the
>>> cpuidle framework will call clockevents_notify(ENTER) and 
>>> switches to a
>>> broadcast timer and will call clockevents_notify(EXIT) when 
>>> exiting the
>>> idle state, switching the local timer back in use.
>>
>> I've been thinking about this, trying to understand how this 
>> makes my
>> boot attempts on Zynq hang. IIUC, the wrongly provided 
>> TIMER_STOP flag
>> would make the timer core switch to a broadcast device even 
>> though it
>> wouldn't be necessary. But shouldn't it still work? It sounds 
>> like we do
>> something useless, but nothing wrong in a sense that it should 
>> result in
>> breakage. I guess I'm missing something obvious. This timer 
>> system will
>> always remain a mystery to me.
>>
>> Actually this more or less leads to the question: What is this
>> 'broadcast timer'. I guess that is some clockevent device which 
>> is
>> common to all cores? (that would be the cadence_ttc for Zynq). 
>> Is the
>> hang pointing to some issue with that driver?
>
> If you look at the /proc/timer_list, which timer is used for 
> broadcasting ?

 So, the correct run results (full output attached).

 The vanilla kernel uses the twd timers as local timers and the TTC 
 as
 broadcast device:
Tick Device: mode: 1
  
Broadcast device  
Clock Event Device: ttc_clockevent

 When I remove the offending CPUIDLE flag and add the DT fragment to
 enable the global timer, the twd timers are still used as local 
 timers
 and the broadcast device is the global timer:
Tick Device: mode: 1
  
Broadcast device
  
Clock Event Device: arm_global_timer

 Again, since boot hangs in the actually broken case, I don't see 
 a way to
 obtain this information for that case.
>>>
>>> Can't you use the maxcpus=1 option to ensure the system to boot up ?
>>
>> Right, that works. I forgot about that option after you mentioned, 
>> that
>> it is most likely not that useful.
>>
>> Anyway, these are the sysfs files with an unmodified cpuidle driver 
>> and
>> the gt enabled and having maxcpus=1 set.
>>
>> /proc/timer_list:
>>  Tick Device: mode: 1
>>  Broadcast device
>>  Clock Event Device: arm_global_timer
>>   max_delta_ns:   12884902005
>>   min_delta_ns:   1000
>>   mult:   715827876
>>   shift:  31
>>   mode:   3
>
> Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)
>
> The previous timer_list output you gave me when removing the offending
> cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).
>
> Is it possible you try to get this output again right after onlining 
> the
> cpu1 in order to check if the broadcast device switches to SHUTDOWN ?

 How do I do that? I tried to 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Daniel Lezcano
On 08/06/2013 03:28 AM, Sören Brinkmann wrote:
> Hi Daniel,
> 
> On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
>> On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
>>> On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
 On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
> On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
>> On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
>>> On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
 On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
> On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
>> On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
>>> On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
 On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
> Hi Daniel,
>
> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
> (snip)
>>
>> the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework the 
>> local
>> timer will be stopped when entering to the idle state. In this 
>> case, the
>> cpuidle framework will call clockevents_notify(ENTER) and 
>> switches to a
>> broadcast timer and will call clockevents_notify(EXIT) when 
>> exiting the
>> idle state, switching the local timer back in use.
>
> I've been thinking about this, trying to understand how this 
> makes my
> boot attempts on Zynq hang. IIUC, the wrongly provided TIMER_STOP 
> flag
> would make the timer core switch to a broadcast device even 
> though it
> wouldn't be necessary. But shouldn't it still work? It sounds 
> like we do
> something useless, but nothing wrong in a sense that it should 
> result in
> breakage. I guess I'm missing something obvious. This timer 
> system will
> always remain a mystery to me.
>
> Actually this more or less leads to the question: What is this
> 'broadcast timer'. I guess that is some clockevent device which is
> common to all cores? (that would be the cadence_ttc for Zynq). Is 
> the
> hang pointing to some issue with that driver?

 If you look at the /proc/timer_list, which timer is used for 
 broadcasting ?
>>>
>>> So, the correct run results (full output attached).
>>>
>>> The vanilla kernel uses the twd timers as local timers and the TTC 
>>> as
>>> broadcast device:
>>> Tick Device: mode: 1
>>>  
>>> Broadcast device  
>>> Clock Event Device: ttc_clockevent
>>>
>>> When I remove the offending CPUIDLE flag and add the DT fragment to
>>> enable the global timer, the twd timers are still used as local 
>>> timers
>>> and the broadcast device is the global timer:
>>> Tick Device: mode: 1
>>>  
>>> Broadcast device
>>>  
>>> Clock Event Device: arm_global_timer
>>>
>>> Again, since boot hangs in the actually broken case, I don't see 
>>> a way to
>>> obtain this information for that case.
>>
>> Can't you use the maxcpus=1 option to ensure the system to boot up ?
>
> Right, that works. I forgot about that option after you mentioned, 
> that
> it is most likely not that useful.
>
> Anyway, these are the sysfs files with an unmodified cpuidle driver 
> and
> the gt enabled and having maxcpus=1 set.
>
> /proc/timer_list:
>   Tick Device: mode: 1
>   Broadcast device
>   Clock Event Device: arm_global_timer
>max_delta_ns:   12884902005
>min_delta_ns:   1000
>mult:   715827876
>shift:  31
>mode:   3

 Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)

 The previous timer_list output you gave me when removing the offending
 cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).

 Is it possible you try to get this output again right after onlining 
 the
 cpu1 in order to check if the broadcast device switches to SHUTDOWN ?
>>>
>>> How do I do that? I tried to online CPU1 after booting with maxcpus=1
>>> and that didn't end well:
>>> # echo 1 > online && cat /proc/timer_list 
>>
>> Hmm, I 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Daniel Lezcano
On 08/06/2013 03:28 AM, Sören Brinkmann wrote:
 Hi Daniel,
 
 On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
 On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
 On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
 On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
 On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
 On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
 On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
 On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
 On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
 On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
 On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
 On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
 Hi Daniel,

 On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
 (snip)

 the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework the 
 local
 timer will be stopped when entering to the idle state. In this 
 case, the
 cpuidle framework will call clockevents_notify(ENTER) and 
 switches to a
 broadcast timer and will call clockevents_notify(EXIT) when 
 exiting the
 idle state, switching the local timer back in use.

 I've been thinking about this, trying to understand how this 
 makes my
 boot attempts on Zynq hang. IIUC, the wrongly provided TIMER_STOP 
 flag
 would make the timer core switch to a broadcast device even 
 though it
 wouldn't be necessary. But shouldn't it still work? It sounds 
 like we do
 something useless, but nothing wrong in a sense that it should 
 result in
 breakage. I guess I'm missing something obvious. This timer 
 system will
 always remain a mystery to me.

 Actually this more or less leads to the question: What is this
 'broadcast timer'. I guess that is some clockevent device which is
 common to all cores? (that would be the cadence_ttc for Zynq). Is 
 the
 hang pointing to some issue with that driver?

 If you look at the /proc/timer_list, which timer is used for 
 broadcasting ?

 So, the correct run results (full output attached).

 The vanilla kernel uses the twd timers as local timers and the TTC 
 as
 broadcast device:
 Tick Device: mode: 1
  
 Broadcast device  
 Clock Event Device: ttc_clockevent

 When I remove the offending CPUIDLE flag and add the DT fragment to
 enable the global timer, the twd timers are still used as local 
 timers
 and the broadcast device is the global timer:
 Tick Device: mode: 1
  
 Broadcast device
  
 Clock Event Device: arm_global_timer

 Again, since boot hangs in the actually broken case, I don't see 
 a way to
 obtain this information for that case.

 Can't you use the maxcpus=1 option to ensure the system to boot up ?

 Right, that works. I forgot about that option after you mentioned, 
 that
 it is most likely not that useful.

 Anyway, these are the sysfs files with an unmodified cpuidle driver 
 and
 the gt enabled and having maxcpus=1 set.

 /proc/timer_list:
   Tick Device: mode: 1
   Broadcast device
   Clock Event Device: arm_global_timer
max_delta_ns:   12884902005
min_delta_ns:   1000
mult:   715827876
shift:  31
mode:   3

 Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)

 The previous timer_list output you gave me when removing the offending
 cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).

 Is it possible you try to get this output again right after onlining 
 the
 cpu1 in order to check if the broadcast device switches to SHUTDOWN ?

 How do I do that? I tried to online CPU1 after booting with maxcpus=1
 and that didn't end well:
 # echo 1 > online && cat /proc/timer_list

 Hmm, I was hoping to have a small delay before the kernel hangs but
 apparently this is not the case... :(

 I suspect the global timer is shutdown at one moment but I don't
 understand why and when.

 Can you add a stack trace in the clockevents_shutdown function with
 the clockevent device name ? Perhaps, we may see at boot time an
 interesting trace when it hangs.

 I did this change:
   diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c
   index 38959c8..3ab11c1 100644
   --- a/kernel/time/clockevents.c
   +++ b/kernel/time/clockevents.c
   @@ -92,6 +92,8 @@ void clockevents_set_mode(struct clock_event_device 
 *dev,
 */
void clockevents_shutdown(struct clock_event_device *dev)
{
   +   pr_info("ce->name:%s\n", dev->name);
   +   dump_stack();
   clockevents_set_mode(dev, CLOCK_EVT_MODE_SHUTDOWN);
   dev->next_event.tv64 = KTIME_MAX;
}

 It is hit a few times during boot, so I attach a full boot log. I really
 don't know what to look for, but I hope you can spot something in it. I
 really appreciate you taking the time.

 Thanks for the traces.

 Sure.


 If you try 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Michal Simek
On 08/06/2013 10:46 AM, Daniel Lezcano wrote:
 On 08/06/2013 03:28 AM, Sören Brinkmann wrote:
 Hi Daniel,

 On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
 On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
 On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
 On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
 On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
 On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
 On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
 On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
 On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
 On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
 On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
 On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
 Hi Daniel,

 On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
 (snip)

 the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework 
 the local
 timer will be stopped when entering to the idle state. In this 
 case, the
 cpuidle framework will call clockevents_notify(ENTER) and 
 switches to a
 broadcast timer and will call clockevents_notify(EXIT) when 
 exiting the
 idle state, switching the local timer back in use.

 I've been thinking about this, trying to understand how this 
 makes my
 boot attempts on Zynq hang. IIUC, the wrongly provided 
 TIMER_STOP flag
 would make the timer core switch to a broadcast device even 
 though it
 wouldn't be necessary. But shouldn't it still work? It sounds 
 like we do
 something useless, but nothing wrong in a sense that it should 
 result in
 breakage. I guess I'm missing something obvious. This timer 
 system will
 always remain a mystery to me.

 Actually this more or less leads to the question: What is this
 'broadcast timer'. I guess that is some clockevent device which 
 is
 common to all cores? (that would be the cadence_ttc for Zynq). 
 Is the
 hang pointing to some issue with that driver?

 If you look at the /proc/timer_list, which timer is used for 
 broadcasting ?

 So, the correct run results (full output attached).

 The vanilla kernel uses the twd timers as local timers and the TTC 
 as
 broadcast device:
Tick Device: mode: 1
  
Broadcast device  
Clock Event Device: ttc_clockevent

 When I remove the offending CPUIDLE flag and add the DT fragment to
 enable the global timer, the twd timers are still used as local 
 timers
 and the broadcast device is the global timer:
Tick Device: mode: 1
  
Broadcast device
  
Clock Event Device: arm_global_timer

 Again, since boot hangs in the actually broken case, I don't see 
 a way to
 obtain this information for that case.

 Can't you use the maxcpus=1 option to ensure the system to boot up ?

 Right, that works. I forgot about that option after you mentioned, 
 that
 it is most likely not that useful.

 Anyway, these are the sysfs files with an unmodified cpuidle driver 
 and
 the gt enabled and having maxcpus=1 set.

 /proc/timer_list:
  Tick Device: mode: 1
  Broadcast device
  Clock Event Device: arm_global_timer
   max_delta_ns:   12884902005
   min_delta_ns:   1000
   mult:   715827876
   shift:  31
   mode:   3

 Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)

 The previous timer_list output you gave me when removing the offending
 cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).

 Is it possible you try to get this output again right after onlining 
 the
 cpu1 in order to check if the broadcast device switches to SHUTDOWN ?

 How do I do that? I tried to online CPU1 after booting with maxcpus=1
 and that didn't end well:
# echo 1 > online && cat /proc/timer_list

 Hmm, I was hoping to have a small delay before the kernel hangs but
 apparently this is not the case... :(

 I suspect the global timer is shutdown at one moment but I don't
 understand why and when.

 Can you add a stack trace in the clockevents_shutdown function with
 the clockevent device name ? Perhaps, we may see at boot time an
 interesting trace when it hangs.

 I did this change:
  diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c
  index 38959c8..3ab11c1 100644
  --- a/kernel/time/clockevents.c
  +++ b/kernel/time/clockevents.c
  @@ -92,6 +92,8 @@ void clockevents_set_mode(struct clock_event_device 
 *dev,
*/
   void clockevents_shutdown(struct clock_event_device *dev)
   {
  +   pr_info("ce->name:%s\n", dev->name);
  +   dump_stack();
  clockevents_set_mode(dev, CLOCK_EVT_MODE_SHUTDOWN);
  dev->next_event.tv64 = KTIME_MAX;
   }

 It is hit a few times during boot, so I attach a full boot log. I really
 don't know what to look for, but I hope you can spot something in it. I
 really appreciate you taking the time.

 Thanks for the traces.

 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Daniel Lezcano
On 08/06/2013 11:18 AM, Michal Simek wrote:
 On 08/06/2013 10:46 AM, Daniel Lezcano wrote:
 On 08/06/2013 03:28 AM, Sören Brinkmann wrote:
 Hi Daniel,

 On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
 On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
 On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
 On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
 On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
 On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
 On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
 On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
 On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
 On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
 On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
 On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
 Hi Daniel,

 On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
 (snip)

 the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework 
 the local
 timer will be stopped when entering to the idle state. In this 
 case, the
 cpuidle framework will call clockevents_notify(ENTER) and 
 switches to a
 broadcast timer and will call clockevents_notify(EXIT) when 
 exiting the
 idle state, switching the local timer back in use.

 I've been thinking about this, trying to understand how this 
 makes my
 boot attempts on Zynq hang. IIUC, the wrongly provided 
 TIMER_STOP flag
 would make the timer core switch to a broadcast device even 
 though it
 wouldn't be necessary. But shouldn't it still work? It sounds 
 like we do
 something useless, but nothing wrong in a sense that it should 
 result in
 breakage. I guess I'm missing something obvious. This timer 
 system will
 always remain a mystery to me.

 Actually this more or less leads to the question: What is this
 'broadcast timer'. I guess that is some clockevent device which 
 is
 common to all cores? (that would be the cadence_ttc for Zynq). 
 Is the
 hang pointing to some issue with that driver?

 If you look at the /proc/timer_list, which timer is used for 
 broadcasting ?

 So, the correct run results (full output attached).

 The vanilla kernel uses the twd timers as local timers and the 
 TTC as
 broadcast device:
   Tick Device: mode: 1
  
   Broadcast device  
   Clock Event Device: ttc_clockevent

 When I remove the offending CPUIDLE flag and add the DT fragment 
 to
 enable the global timer, the twd timers are still used as local 
 timers
 and the broadcast device is the global timer:
   Tick Device: mode: 1
  
   Broadcast device
  
   Clock Event Device: arm_global_timer

 Again, since boot hangs in the actually broken case, I don't see 
 a way to
 obtain this information for that case.

 Can't you use the maxcpus=1 option to ensure the system to boot up 
 ?

 Right, that works. I forgot about that option after you mentioned, 
 that
 it is most likely not that useful.

 Anyway, these are the sysfs files with an unmodified cpuidle 
 driver and
 the gt enabled and having maxcpus=1 set.

 /proc/timer_list:
 Tick Device: mode: 1
 Broadcast device
 Clock Event Device: arm_global_timer
  max_delta_ns:   12884902005
  min_delta_ns:   1000
  mult:   715827876
  shift:  31
  mode:   3

 Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)

 The previous timer_list output you gave me when removing the 
 offending
 cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).

 Is it possible you try to get this output again right after onlining 
 the
 cpu1 in order to check if the broadcast device switches to SHUTDOWN ?

 How do I do that? I tried to online CPU1 after booting with maxcpus=1
 and that didn't end well:
  # echo 1 > online && cat /proc/timer_list

 Hmm, I was hoping to have a small delay before the kernel hangs but
 apparently this is not the case... :(

 I suspect the global timer is shutdown at one moment but I don't
 understand why and when.

 Can you add a stack trace in the clockevents_shutdown function with
 the clockevent device name ? Perhaps, we may see at boot time an
 interesting trace when it hangs.

 I did this change:
 diff --git a/kernel/time/clockevents.c 
 b/kernel/time/clockevents.c
 index 38959c8..3ab11c1 100644
 --- a/kernel/time/clockevents.c
 +++ b/kernel/time/clockevents.c
 @@ -92,6 +92,8 @@ void clockevents_set_mode(struct 
 clock_event_device *dev,
   */
  void clockevents_shutdown(struct clock_event_device *dev)
  {
 +   pr_info("ce->name:%s\n", dev->name);
 +   dump_stack();
 clockevents_set_mode(dev, CLOCK_EVT_MODE_SHUTDOWN);
 dev->next_event.tv64 = KTIME_MAX;
  }

 It is hit a few times during boot, so I attach a full boot log. I really
 don't know 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Michal Simek
On 08/06/2013 02:30 PM, Daniel Lezcano wrote:
 On 08/06/2013 11:18 AM, Michal Simek wrote:
 On 08/06/2013 10:46 AM, Daniel Lezcano wrote:
 On 08/06/2013 03:28 AM, Sören Brinkmann wrote:
 Hi Daniel,

 On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:
 On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
 On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
 On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
 On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
 On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
 On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
 On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
 On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
 On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
 On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
 On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
 Hi Daniel,

 On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
 (snip)

 the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework 
 the local
 timer will be stopped when entering to the idle state. In 
 this case, the
 cpuidle framework will call clockevents_notify(ENTER) and 
 switches to a
 broadcast timer and will call clockevents_notify(EXIT) when 
 exiting the
 idle state, switching the local timer back in use.

 I've been thinking about this, trying to understand how this 
 makes my
 boot attempts on Zynq hang. IIUC, the wrongly provided 
 TIMER_STOP flag
 would make the timer core switch to a broadcast device even 
 though it
 wouldn't be necessary. But shouldn't it still work? It sounds 
 like we do
 something useless, but nothing wrong in a sense that it should 
 result in
 breakage. I guess I'm missing something obvious. This timer 
 system will
 always remain a mystery to me.

 Actually this more or less leads to the question: What is this
 'broadcast timer'. I guess that is some clockevent device 
 which is
 common to all cores? (that would be the cadence_ttc for Zynq). 
 Is the
 hang pointing to some issue with that driver?

 If you look at the /proc/timer_list, which timer is used for 
 broadcasting ?

 So, the correct run results (full output attached).

 The vanilla kernel uses the twd timers as local timers and the 
 TTC as
 broadcast device:
  Tick Device: mode: 1
  
  Broadcast device  
  Clock Event Device: ttc_clockevent

 When I remove the offending CPUIDLE flag and add the DT fragment 
 to
 enable the global timer, the twd timers are still used as local 
 timers
 and the broadcast device is the global timer:
  Tick Device: mode: 1
  
  Broadcast device
  
  Clock Event Device: arm_global_timer

 Again, since boot hangs in the actually broken case, I don't see 
 a way to
 obtain this information for that case.

 Can't you use the maxcpus=1 option to ensure the system to boot 
 up ?

 Right, that works. I forgot about that option after you mentioned, 
 that
 it is most likely not that useful.

 Anyway, these are the sysfs files with an unmodified cpuidle 
 driver and
 the gt enabled and having maxcpus=1 set.

 /proc/timer_list:
Tick Device: mode: 1
Broadcast device
Clock Event Device: arm_global_timer
 max_delta_ns:   12884902005
 min_delta_ns:   1000
 mult:   715827876
 shift:  31
 mode:   3

 Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)

 The previous timer_list output you gave me when removing the 
 offending
 cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).

 Is it possible you try to get this output again right after 
 onlining the
 cpu1 in order to check if the broadcast device switches to SHUTDOWN 
 ?

 How do I do that? I tried to online CPU1 after booting with maxcpus=1
 and that didn't end well:
 # echo 1 > online && cat /proc/timer_list

 Hmm, I was hoping to have a small delay before the kernel hangs but
 apparently this is not the case... :(

 I suspect the global timer is shutdown at one moment but I don't
 understand why and when.

 Can you add a stack trace in the clockevents_shutdown function with
 the clockevent device name ? Perhaps, we may see at boot time an
 interesting trace when it hangs.

 I did this change:
diff --git a/kernel/time/clockevents.c 
 b/kernel/time/clockevents.c
index 38959c8..3ab11c1 100644
--- a/kernel/time/clockevents.c
+++ b/kernel/time/clockevents.c
@@ -92,6 +92,8 @@ void clockevents_set_mode(struct 
 clock_event_device *dev,
  */
 void clockevents_shutdown(struct clock_event_device *dev)
 {
+   pr_info("ce->name:%s\n", dev->name);
+   dump_stack();
clockevents_set_mode(dev, CLOCK_EVT_MODE_SHUTDOWN);
dev->next_event.tv64 = KTIME_MAX;
 }

 It is hit a few times during boot, so I attach a full boot log. 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Daniel Lezcano
On 08/06/2013 02:41 PM, Michal Simek wrote:

[ ... ]

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Michal Simek
On 08/06/2013 03:08 PM, Daniel Lezcano wrote:

[ ... ]

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Daniel Lezcano
On 08/06/2013 03:18 PM, Michal Simek wrote:

[ ... ]

 Soren: Are you able to replicate this issue on QEMU?
 If yes, it should be the best if you can provide Qemu, kernel .config/
 rootfs and simple manual to Daniel how to reach that fault.

 I tried to download qemu for zynq but it fails:

 git clone git://git.xilinx.com/qemu-xarm.git
 Cloning into 'qemu-xarm'...
 fatal: The remote end hung up unexpectedly

 Not sure which site have you found but
 it should be just qemu.git
 https://github.com/Xilinx/qemu

 or github clone.

 Ok, cool I was able to clone it.

 I am also looking for the option specified for the kernel:

 The kernel needs to be built with this feature turned on (in
 menuconfig, System Type -> Xilinx Specific Features -> Device Tree At
 Fixed Address).

 Ok.

 This also sounds like a very ancient tree.
 This is the latest kernel tree - master-next is the latest devel branch.
 https://github.com/Xilinx/linux-xlnx

 Ok, cool. I have the right one.

Following the documentation, I was able to boot a kernel with qemu for
the linux-xlnx and qemu-xilinx.

But this kernel is outdated regarding the upstream one, so I tried to
boot a 3.11-rc4 kernel without success, I did the following:

I used the default config file from linux-xlnx for the upstream kernel.

I compiled the kernel with:

make -j 5 ARCH=arm CROSS_COMPILE=arm-linux-gnueabi-
UIMAGE_LOADADDR=0x8000 uImage

I generated the dtb with:

make -j 5 ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- dtbs

For qemu, I started qemu with:

./arm-softmmu/qemu-system-arm -M arm-generic-fdt -nographic -smp 2
-machine linux=on -serial mon:stdio -dtb zynq-zed.dtb -kernel
kernel/zImage -initrd filesystem/ramdisk.img

I tried with the dtb available for the upstream kernel:

zynq-zc706.dtb, zynq-zc702.dtb and zynq-zed.dtb

Did I miss something ?

Thanks
  -- Daniel




 Or there should be an option to use the latest kernel from kernel.org.
 (I think Soren is using it)

 Zynq is part of the multiplatform kernel and cadence ttc is there,
 dts is also in the mainline kernel.

 ps : apart that, well documented website !

 Can you send me the link to it?

 http://xilinx.wikidot.com/zynq-qemu
 http://xilinx.wikidot.com/zynq-linux
 
 I will find out information why it is still there.
 I think it was moved to the new location.
 

 This should be the main page for it.
 http://www.wiki.xilinx.com/




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Sören Brinkmann
On Tue, Aug 06, 2013 at 06:09:09PM +0200, Daniel Lezcano wrote:
 On 08/06/2013 03:18 PM, Michal Simek wrote:
 
 [ ... ]
 
 Did I miss something ?

Unfortunately the public github QEMU is behind our internal development.
IIRC, there were a couple of DT compatible strings QEMU didn't know.
Those are sometimes removed making boot fail/hang.

Are you on IRC? It's probably easier to resolve this in a more direct
way (I'm 'sorenb' on freenode). Otherwise I could give you a few more
instructions for debugging this per mail?

Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Sören Brinkmann
On Tue, Aug 06, 2013 at 10:46:54AM +0200, Daniel Lezcano wrote:

[ ... ]
Re: Enable arm_global_timer for Zynq brakes boot

2013-08-06 Thread Sören Brinkmann
On Tue, Aug 06, 2013 at 06:09:09PM +0200, Daniel Lezcano wrote:
 On 08/06/2013 03:18 PM, Michal Simek wrote:
 
 [ ... ]
 
 Did I miss something ?

Some debugging hints in case you wanna go through this.
Add this additional option to configure:
 --extra-cflags=-DFDT_GENERIC_UTIL_ERR_DEBUG=1

That'll print out a lot of messages when the dtb is parsed. It's likely
that QEMU invalidates some vital node due to its compatible string being
unknown. In that case you can simply add it to the list of known devices
in 
hw/core/fdt_generic_devices.c
The list is pretty much at the end of that file. I try to get it running
here and might be able to send you a patch.
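
For instance (assuming qemu's stock configure script and the usual ARM
softmmu target; the -j value is illustrative):

	./configure --target-list=arm-softmmu \
		--extra-cflags="-DFDT_GENERIC_UTIL_ERR_DEBUG=1"
	make -j4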

Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-08-05 Thread Sören Brinkmann
Hi Daniel,

On Thu, Aug 01, 2013 at 07:48:04PM +0200, Daniel Lezcano wrote:

[ ... ]


Re: Enable arm_global_timer for Zynq brakes boot

2013-08-01 Thread Daniel Lezcano
On 08/01/2013 07:43 PM, Sören Brinkmann wrote:

[ ... ]

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-01 Thread Daniel Lezcano
On 08/01/2013 01:38 AM, Sören Brinkmann wrote:

[ ... ]

 It is hit a few times during boot, so I attach a full boot log. I really
 don't know what to look for, but I hope you can spot something in it. I
 really appreciate you taking the time.

Thanks for the traces.

If you try without the ttc_clockevent configured in the kernel (but with
twd and gt), does it boot ?
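
(I.e. with the cadence_ttc clockevent left out, for instance by removing its
DT node, so the global timer is the only remaining broadcast candidate,
assuming nothing else on the platform needs the TTC.)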



Re: Enable arm_global_timer for Zynq brakes boot

2013-08-01 Thread Sören Brinkmann
On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:

[ ... ]
 
 Thanks for the traces.

Sure.

 
 If you try without 

Re: Enable arm_global_timer for Zynq brakes boot

2013-08-01 Thread Daniel Lezcano
On 08/01/2013 07:43 PM, Sören Brinkmann wrote:
 On Thu, Aug 01, 2013 at 07:29:12PM +0200, Daniel Lezcano wrote:
 On 08/01/2013 01:38 AM, Sören Brinkmann wrote:
 On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
 On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
 On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
 On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
 On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
 On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
 On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
 On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
 Hi Daniel,

 On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
 (snip)

 the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework the 
 local
 timer will be stopped when entering to the idle state. In this 
 case, the
 cpuidle framework will call clockevents_notify(ENTER) and switches 
 to a
 broadcast timer and will call clockevents_notify(EXIT) when 
 exiting the
 idle state, switching the local timer back in use.

 I've been thinking about this, trying to understand how this makes 
 my
 boot attempts on Zynq hang. IIUC, the wrongly provided TIMER_STOP 
 flag
 would make the timer core switch to a broadcast device even though 
 it
 wouldn't be necessary. But shouldn't it still work? It sounds like 
 we do
 something useless, but nothing wrong in a sense that it should 
 result in
 breakage. I guess I'm missing something obvious. This timer system 
 will
 always remain a mystery to me.

 Actually this more or less leads to the question: What is this
 'broadcast timer'. I guess that is some clockevent device which is
 common to all cores? (that would be the cadence_ttc for Zynq). Is 
 the
 hang pointing to some issue with that driver?

 If you look at the /proc/timer_list, which timer is used for 
 broadcasting ?

 So, the correct run results (full output attached).

 The vanilla kernel uses the twd timers as local timers and the TTC as
 broadcast device:
   Tick Device: mode: 1
  
   Broadcast device  
   Clock Event Device: ttc_clockevent

 When I remove the offending CPUIDLE flag and add the DT fragment to
 enable the global timer, the twd timers are still used as local timers
 and the broadcast device is the global timer:
   Tick Device: mode: 1
  
   Broadcast device
  
   Clock Event Device: arm_global_timer

 Again, since boot hangs in the actually broken case, I don't see way 
 to
 obtain this information for that case.

 Can't you use the maxcpus=1 option to ensure the system to boot up ?

 Right, that works. I forgot about that option after you mentioned, that
 it is most likely not that useful.

 Anyway, this are those sysfs files with an unmodified cpuidle driver and
 the gt enabled and having maxcpus=1 set.

 /proc/timer_list:
 Tick Device: mode: 1
 Broadcast device
 Clock Event Device: arm_global_timer
  max_delta_ns:   12884902005
  min_delta_ns:   1000
  mult:   715827876
  shift:  31
  mode:   3

 Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)

 The previous timer_list output you gave me when removing the offending
 cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).

 Is it possible you try to get this output again right after onlining the
 cpu1 in order to check if the broadcast device switches to SHUTDOWN ?

 How do I do that? I tried to online CPU1 after booting with maxcpus=1
 and that didn't end well:
   # echo 1  online  cat /proc/timer_list 

 Hmm, I was hoping to have a small delay before the kernel hangs but
 apparently this is not the case... :(

 I suspect the global timer is shutdown at one moment but I don't
 understand why and when.

 Can you add a stack trace in the clockevents_shutdown function with
 the clockevent device name ? Perhaps, we may see at boot time an
 interesting trace when it hangs.

 I did this change:
 diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c
 index 38959c8..3ab11c1 100644
 --- a/kernel/time/clockevents.c
 +++ b/kernel/time/clockevents.c
@@ -92,6 +92,8 @@ void clockevents_set_mode(struct clock_event_device *dev,
  */
 void clockevents_shutdown(struct clock_event_device *dev)
 {
+	pr_info("ce->name:%s\n", dev->name);
+	dump_stack();
 	clockevents_set_mode(dev, CLOCK_EVT_MODE_SHUTDOWN);
 	dev->next_event.tv64 = KTIME_MAX;
 }

 It is hit a few times during boot, so I attach a full boot log. I really
 don't know what to look for, but I hope you can spot something in it. I
 really appreciate you taking the time.

 Thanks for the traces.
 
 Sure.
 

 If you try without the ttc_clockevent configured in the kernel (but with
 twd and gt), does it 

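The "offending CPUIDLE flag" discussed throughout the thread is a single
line in the Zynq cpuidle driver's state table. Below is a minimal sketch
of where it sits, loosely modeled on the 3.11-era
arch/arm/mach-zynq/cpuidle.c; the enter hook, latency numbers, and state
names are illustrative assumptions, not the verbatim driver:

#include <linux/cpuidle.h>
#include <linux/module.h>
#include <asm/cpuidle.h>
#include <asm/proc-fns.h>

/* Illustrative enter hook; the real driver does platform-specific work. */
static int zynq_enter_idle(struct cpuidle_device *dev,
			   struct cpuidle_driver *drv, int index)
{
	cpu_do_idle();			/* WFI */
	return index;
}

static struct cpuidle_driver zynq_idle_driver = {
	.name = "zynq_idle",
	.owner = THIS_MODULE,
	.states = {
		ARM_CPUIDLE_WFI_STATE,	/* state 0: plain WFI, twd keeps ticking */
		{
			.enter			= zynq_enter_idle,
			.exit_latency		= 10,
			.target_residency	= 10000,
			/* TIMER_STOP is the offending flag: it makes the
			 * core hand this CPU over to the broadcast device
			 * around this state, whether or not the local
			 * timer really stops. */
			.flags			= CPUIDLE_FLAG_TIME_VALID |
						  CPUIDLE_FLAG_TIMER_STOP,
			.name			= "RAM_SR",
			.desc			= "WFI and RAM self refresh",
		},
	},
	.safe_state_index = 0,
	.state_count = 2,
};

Dropping CPUIDLE_FLAG_TIMER_STOP from that second state is what the
thread refers to as removing the offending flag.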
Re: Enable arm_global_timer for Zynq brakes boot

2013-07-31 Thread Sören Brinkmann
On Thu, Aug 01, 2013 at 01:01:27AM +0200, Daniel Lezcano wrote:
> On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
> > On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
> >> On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
> >>> On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
>  On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
> > On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
> >> On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
> >>> Hi Daniel,
> >>>
> >>> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
> >>> (snip)
> 
>  the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework the local
>  timer will be stopped when entering the idle state. In this case, the
>  cpuidle framework will call clockevents_notify(ENTER) and switch to a
>  broadcast timer and will call clockevents_notify(EXIT) when exiting the
>  idle state, switching the local timer back into use.
> >>>
> >>> I've been thinking about this, trying to understand how this makes my
> >>> boot attempts on Zynq hang. IIUC, the wrongly provided TIMER_STOP flag
> >>> would make the timer core switch to a broadcast device even though it
> >>> wouldn't be necessary. But shouldn't it still work? It sounds like we do
> >>> something useless, but nothing wrong in the sense that it should result
> >>> in breakage. I guess I'm missing something obvious. This timer system
> >>> will always remain a mystery to me.
> >>>
> >>> Actually this more or less leads to the question: What is this
> >>> 'broadcast timer'. I guess that is some clockevent device which is
> >>> common to all cores? (that would be the cadence_ttc for Zynq). Is the
> >>> hang pointing to some issue with that driver?
> >>
> >> If you look at the /proc/timer_list, which timer is used for 
> >> broadcasting ?
> >
> > So, the correct run results (full output attached).
> >
> > The vanilla kernel uses the twd timers as local timers and the TTC as
> > broadcast device:
> > Tick Device: mode: 1
> >  
> > Broadcast device  
> > Clock Event Device: ttc_clockevent
> >
> > When I remove the offending CPUIDLE flag and add the DT fragment to
> > enable the global timer, the twd timers are still used as local timers
> > and the broadcast device is the global timer:
> > Tick Device: mode: 1
> >  
> > Broadcast device
> >  
> > Clock Event Device: arm_global_timer
> >
> > Again, since boot hangs in the actually broken case, I don't see a way to
> > obtain this information for that case.
> 
>  Can't you use the maxcpus=1 option to ensure the system boots up?
> >>>
> >>> Right, that works. I forgot about that option after you mentioned that
> >>> it is most likely not that useful.
> >>>
> >>> Anyway, these are the sysfs files with an unmodified cpuidle driver and
> >>> the gt enabled, with maxcpus=1 set.
> >>>
> >>> /proc/timer_list:
> >>>   Tick Device: mode: 1
> >>>   Broadcast device
> >>>   Clock Event Device: arm_global_timer
> >>>max_delta_ns:   12884902005
> >>>min_delta_ns:   1000
> >>>mult:   715827876
> >>>shift:  31
> >>>mode:   3
> >>
> >> Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)
> >>
> >> The previous timer_list output you gave me when removing the offending
> >> cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).
> >>
> >> Is it possible you try to get this output again right after onlining the
> >> cpu1 in order to check if the broadcast device switches to SHUTDOWN ?
> > 
> > How do I do that? I tried to online CPU1 after booting with maxcpus=1
> > and that didn't end well:
> > # echo 1 > online && cat /proc/timer_list 
> 
> Hmm, I was hoping to have a small delay before the kernel hangs but
> apparently this is not the case... :(
> 
> I suspect the global timer is shutdown at one moment but I don't
> understand why and when.
> 
> Can you add a stack trace in the "clockevents_shutdown" function with
> the clockevent device name ? Perhaps, we may see at boot time an
> interesting trace when it hangs.

I did this change:
diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c
index 38959c8..3ab11c1 100644
--- a/kernel/time/clockevents.c
+++ b/kernel/time/clockevents.c
@@ -92,6 +92,8 @@ void clockevents_set_mode(struct clock_event_device *dev,
  */
 void clockevents_shutdown(struct clock_event_device *dev)
 {
+   pr_info("ce->name:%s\n", dev->name);
   

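What that flag triggers on the core side is compact. The following is
condensed from the 3.11-era drivers/cpuidle/cpuidle.c, inside
cpuidle_idle_call(); locking, tracing and error handling are dropped, so
treat it as a sketch of the mechanism rather than verbatim source:

/* Inside cpuidle_idle_call(), around entry to the chosen state: */
bool broadcast = !!(drv->states[next_state].flags & CPUIDLE_FLAG_TIMER_STOP);
int entered_state;

/* Hand this CPU's ticks to the broadcast device before idling... */
if (broadcast)
	clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &dev->cpu);

entered_state = cpuidle_enter_state(dev, drv, next_state);

/* ...and take them back on wakeup, switching the local timer back in. */
if (broadcast)
	clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &dev->cpu);

With the flag set, every idle entry detours through the broadcast
machinery, which is why a broadcast-side problem can bite as soon as the
second CPU starts idling.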
Re: Enable arm_global_timer for Zynq brakes boot

2013-07-31 Thread Daniel Lezcano
On 08/01/2013 12:18 AM, Sören Brinkmann wrote:
> On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
>> On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
>>> On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
 On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
> On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
>> On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
>>> Hi Daniel,
>>>
>>> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
>>> (snip)

 the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework the local
 timer will be stopped when entering the idle state. In this case, the
 cpuidle framework will call clockevents_notify(ENTER) and switch to a
 broadcast timer and will call clockevents_notify(EXIT) when exiting the
 idle state, switching the local timer back into use.
>>>
>>> I've been thinking about this, trying to understand how this makes my
>>> boot attempts on Zynq hang. IIUC, the wrongly provided TIMER_STOP flag
>>> would make the timer core switch to a broadcast device even though it
>>> wouldn't be necessary. But shouldn't it still work? It sounds like we do
>>> something useless, but nothing wrong in the sense that it should result in
>>> breakage. I guess I'm missing something obvious. This timer system will
>>> always remain a mystery to me.
>>>
>>> Actually this more or less leads to the question: What is this
>>> 'broadcast timer'. I guess that is some clockevent device which is
>>> common to all cores? (that would be the cadence_ttc for Zynq). Is the
>>> hang pointing to some issue with that driver?
>>
>> If you look at the /proc/timer_list, which timer is used for 
>> broadcasting ?
>
> So, the correct run results (full output attached).
>
> The vanilla kernel uses the twd timers as local timers and the TTC as
> broadcast device:
>   Tick Device: mode: 1
>  
>   Broadcast device  
>   Clock Event Device: ttc_clockevent
>
> When I remove the offending CPUIDLE flag and add the DT fragment to
> enable the global timer, the twd timers are still used as local timers
> and the broadcast device is the global timer:
>   Tick Device: mode: 1
>  
>   Broadcast device
>  
>   Clock Event Device: arm_global_timer
>
> > Again, since boot hangs in the actually broken case, I don't see a way to
> obtain this information for that case.

 Can't you use the maxcpus=1 option to ensure the system to boot up ?
>>>
>>> Right, that works. I forgot about that option after you mentioned, that
>>> it is most likely not that useful.
>>>
>>> Anyway, this are those sysfs files with an unmodified cpuidle driver and
>>> the gt enabled and having maxcpus=1 set.
>>>
>>> /proc/timer_list:
>>> Tick Device: mode: 1
>>> Broadcast device
>>> Clock Event Device: arm_global_timer
>>>  max_delta_ns:   12884902005
>>>  min_delta_ns:   1000
>>>  mult:   715827876
>>>  shift:  31
>>>  mode:   3
>>
>> Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)
>>
>> The previous timer_list output you gave me when removing the offending
>> cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).
>>
>> Is it possible you try to get this output again right after onlining the
>> cpu1 in order to check if the broadcast device switches to SHUTDOWN ?
> 
> How do I do that? I tried to online CPU1 after booting with maxcpus=1
> and that didn't end well:
>   # echo 1 > online && cat /proc/timer_list 

Hmm, I was hoping to have a small delay before the kernel hangs but
apparently this is not the case... :(

I suspect the global timer is shutdown at one moment but I don't
understand why and when.

Can you add a stack trace in the "clockevents_shutdown" function with
the clockevent device name ? Perhaps, we may see at boot time an
interesting trace when it hangs.



>   [ 4689.992658] CPU1: Booted secondary processor
>   [ 4690.986295] CPU1: failed to come online
>   sh: write error: Input/output error
>   # [ 4691.045945] CPU1: thread -1, cpu 1, socket 0, mpidr 8001
>   [ 4691.045986] 
>   [ 4691.052972] ===
>   [ 4691.057349] [ INFO: suspicious RCU usage. ]
>   [ 4691.061413] 3.11.0-rc3-1-gc14f576-dirty #139 Not tainted
>   [ 4691.067026] ---
>   [ 4691.071129] kernel/sched/fair.c:5477 suspicious 
> rcu_dereference_check() usage!
>   [ 4691.078292] 
>   [ 4691.078292] other info that might help us debug this:
>   [ 4691.078292] 
>   [ 4691.086209] 
>   [ 4691.086209] RCU used illegally from offline CPU!
>   [ 

Re: Enable arm_global_timer for Zynq brakes boot

2013-07-31 Thread Sören Brinkmann
On Wed, Jul 31, 2013 at 11:08:51PM +0200, Daniel Lezcano wrote:
> On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
> > On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
> >> On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
> >>> On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
>  On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
> > Hi Daniel,
> >
> > On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
> > (snip)
> >>
> >> the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework the local
> >> timer will be stopped when entering the idle state. In this case, the
> >> cpuidle framework will call clockevents_notify(ENTER) and switch to a
> >> broadcast timer and will call clockevents_notify(EXIT) when exiting the
> >> idle state, switching the local timer back into use.
> >
> > I've been thinking about this, trying to understand how this makes my
> > boot attempts on Zynq hang. IIUC, the wrongly provided TIMER_STOP flag
> > would make the timer core switch to a broadcast device even though it
> > wouldn't be necessary. But shouldn't it still work? It sounds like we do
> > something useless, but nothing wrong in the sense that it should result in
> > breakage. I guess I'm missing something obvious. This timer system will
> > always remain a mystery to me.
> >
> > Actually this more or less leads to the question: What is this
> > 'broadcast timer'. I guess that is some clockevent device which is
> > common to all cores? (that would be the cadence_ttc for Zynq). Is the
> > hang pointing to some issue with that driver?
> 
>  If you look at the /proc/timer_list, which timer is used for 
>  broadcasting ?
> >>>
> >>> So, the correct run results (full output attached).
> >>>
> >>> The vanilla kernel uses the twd timers as local timers and the TTC as
> >>> broadcast device:
> >>>   Tick Device: mode: 1
> >>>  
> >>>   Broadcast device  
> >>>   Clock Event Device: ttc_clockevent
> >>>
> >>> When I remove the offending CPUIDLE flag and add the DT fragment to
> >>> enable the global timer, the twd timers are still used as local timers
> >>> and the broadcast device is the global timer:
> >>>   Tick Device: mode: 1
> >>>  
> >>>   Broadcast device
> >>>  
> >>>   Clock Event Device: arm_global_timer
> >>>
> >>> Again, since boot hangs in the actually broken case, I don't see a way to
> >>> obtain this information for that case.
> >>
> >> Can't you use the maxcpus=1 option to ensure the system boots up?
> > 
> > Right, that works. I forgot about that option after you mentioned that
> > it is most likely not that useful.
> > 
> > Anyway, these are the sysfs files with an unmodified cpuidle driver and
> > the gt enabled, with maxcpus=1 set.
> > 
> > /proc/timer_list:
> > Tick Device: mode: 1
> > Broadcast device
> > Clock Event Device: arm_global_timer
> >  max_delta_ns:   12884902005
> >  min_delta_ns:   1000
> >  mult:   715827876
> >  shift:  31
> >  mode:   3
> 
> Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)
> 
> The previous timer_list output you gave me when removing the offending
> cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).
> 
> Is it possible you try to get this output again right after onlining the
> cpu1 in order to check if the broadcast device switches to SHUTDOWN ?

How do I do that? I tried to online CPU1 after booting with maxcpus=1
and that didn't end well:
# echo 1 > online && cat /proc/timer_list 
[ 4689.992658] CPU1: Booted secondary processor
[ 4690.986295] CPU1: failed to come online
sh: write error: Input/output error
# [ 4691.045945] CPU1: thread -1, cpu 1, socket 0, mpidr 8001
[ 4691.045986] 
[ 4691.052972] ===
[ 4691.057349] [ INFO: suspicious RCU usage. ]
[ 4691.061413] 3.11.0-rc3-1-gc14f576-dirty #139 Not tainted
[ 4691.067026] ---
[ 4691.071129] kernel/sched/fair.c:5477 suspicious 
rcu_dereference_check() usage!
[ 4691.078292] 
[ 4691.078292] other info that might help us debug this:
[ 4691.078292] 
[ 4691.086209] 
[ 4691.086209] RCU used illegally from offline CPU!
[ 4691.086209] rcu_scheduler_active = 1, debug_locks = 0
[ 4691.097216] 1 lock held by swapper/1/0:
[ 4691.100968]  #0:  (rcu_read_lock){.+.+..}, at: [] 
set_cpu_sd_state_idle+0x0/0x1e4
[ 4691.109250] 
[ 4691.109250] stack backtrace:
[ 4691.113531] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 
3.11.0-rc3-1-gc14f576-dirty #139
[ 4691.121755] [] (unwind_backtrace+0x0/0x128) from 
[] 

Re: Enable arm_global_timer for Zynq brakes boot

2013-07-31 Thread Daniel Lezcano
On 07/31/2013 10:58 PM, Sören Brinkmann wrote:
> On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
>> On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
>>> On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
 On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
> Hi Daniel,
>
> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
> (snip)
>>
>> the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework the local
>> timer will be stopped when entering the idle state. In this case, the
>> cpuidle framework will call clockevents_notify(ENTER) and switch to a
>> broadcast timer and will call clockevents_notify(EXIT) when exiting the
>> idle state, switching the local timer back into use.
>
> I've been thinking about this, trying to understand how this makes my
> boot attempts on Zynq hang. IIUC, the wrongly provided TIMER_STOP flag
> would make the timer core switch to a broadcast device even though it
> wouldn't be necessary. But shouldn't it still work? It sounds like we do
> something useless, but nothing wrong in the sense that it should result in
> breakage. I guess I'm missing something obvious. This timer system will
> always remain a mystery to me.
>
> Actually this more or less leads to the question: What is this
> 'broadcast timer'. I guess that is some clockevent device which is
> common to all cores? (that would be the cadence_ttc for Zynq). Is the
> hang pointing to some issue with that driver?

 If you look at the /proc/timer_list, which timer is used for broadcasting ?
>>>
>>> So, the correct run results (full output attached).
>>>
>>> The vanilla kernel uses the twd timers as local timers and the TTC as
>>> broadcast device:
>>> Tick Device: mode: 1
>>>  
>>> Broadcast device  
>>> Clock Event Device: ttc_clockevent
>>>
>>> When I remove the offending CPUIDLE flag and add the DT fragment to
>>> enable the global timer, the twd timers are still used as local timers
>>> and the broadcast device is the global timer:
>>> Tick Device: mode: 1
>>>  
>>> Broadcast device
>>>  
>>> Clock Event Device: arm_global_timer
>>>
> Again, since boot hangs in the actually broken case, I don't see a way to
>>> obtain this information for that case.
>>
>> Can't you use the maxcpus=1 option to ensure the system boots up?
> 
> Right, that works. I forgot about that option after you mentioned that
> it is most likely not that useful.
> 
> Anyway, these are the sysfs files with an unmodified cpuidle driver and
> the gt enabled, with maxcpus=1 set.
> 
> /proc/timer_list:
>   Tick Device: mode: 1
>   Broadcast device
>   Clock Event Device: arm_global_timer
>max_delta_ns:   12884902005
>min_delta_ns:   1000
>mult:   715827876
>shift:  31
>mode:   3

Here the mode is 3 (CLOCK_EVT_MODE_ONESHOT)

The previous timer_list output you gave me when removing the offending
cpuidle flag, it was 1 (CLOCK_EVT_MODE_SHUTDOWN).

Is it possible you try to get this output again right after onlining the
cpu1 in order to check if the broadcast device switches to SHUTDOWN ?

>next_event: 10808000 nsecs
>set_next_event: gt_clockevent_set_next_event
>set_mode:   gt_clockevent_set_mode
>event_handler:  tick_handle_oneshot_broadcast
>retries:0
>   
>   tick_broadcast_mask: 0001
>   tick_broadcast_oneshot_mask: 
>   
>   Tick Device: mode: 1
>   Per CPU device: 0
>   Clock Event Device: local_timer
>max_delta_ns:   12884902005
>min_delta_ns:   1000
>mult:   715827876
>shift:  31
>mode:   3
>next_event: 1069 nsecs
>set_next_event: twd_set_next_event
>set_mode:   twd_set_mode
>event_handler:  hrtimer_interrupt
>retries:0
> 
> # cat /proc/interrupts 
>  CPU0   
>27:252   GIC  27  gt
>29:626   GIC  29  twd
>43:  0   GIC  43  ttc_clockevent
>82:410   GIC  82  xuartps
>   IPI0:  0  CPU wakeup interrupts
>   IPI1:  0  Timer broadcast interrupts
>   IPI2:  0  Rescheduling interrupts
>   IPI3:  0  Function call interrupts
>   IPI4:  0  Single function call interrupts
>   IPI5:  0  CPU stop interrupts
>   Err:  0
> 
> 
>   Sören
> 
> 

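A decoding aid for the mode numbers being compared above: the "mode:"
field in /proc/timer_list is the raw enum clock_event_mode value, which
in 3.11-era include/linux/clockchips.h reads (comments paraphrased here):

enum clock_event_mode {
	CLOCK_EVT_MODE_UNUSED = 0,
	CLOCK_EVT_MODE_SHUTDOWN,	/* 1: device stopped */
	CLOCK_EVT_MODE_PERIODIC,	/* 2: periodic tick */
	CLOCK_EVT_MODE_ONESHOT,		/* 3: reprogrammed per event */
	CLOCK_EVT_MODE_RESUME,		/* 4: transient, during resume */
};

So mode 3 means the broadcast device is live in one-shot mode, while
mode 1 means it has been shut down; that SHUTDOWN transition is what
Daniel is trying to catch here.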


Re: Enable arm_global_timer for Zynq brakes boot

2013-07-31 Thread Sören Brinkmann
On Wed, Jul 31, 2013 at 10:49:06PM +0200, Daniel Lezcano wrote:
> On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
> > On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
> >> On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
> >>> Hi Daniel,
> >>>
> >>> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
> >>> (snip)
> 
>  the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework the local
>  timer will be stopped when entering the idle state. In this case, the
>  cpuidle framework will call clockevents_notify(ENTER) and switch to a
>  broadcast timer and will call clockevents_notify(EXIT) when exiting the
>  idle state, switching the local timer back into use.
> >>>
> >>> I've been thinking about this, trying to understand how this makes my
> >>> boot attempts on Zynq hang. IIUC, the wrongly provided TIMER_STOP flag
> >>> would make the timer core switch to a broadcast device even though it
> >>> wouldn't be necessary. But shouldn't it still work? It sounds like we do
> >>> something useless, but nothing wrong in the sense that it should result in
> >>> breakage. I guess I'm missing something obvious. This timer system will
> >>> always remain a mystery to me.
> >>>
> >>> Actually this more or less leads to the question: What is this
> >>> 'broadcast timer'. I guess that is some clockevent device which is
> >>> common to all cores? (that would be the cadence_ttc for Zynq). Is the
> >>> hang pointing to some issue with that driver?
> >>
> >> If you look at the /proc/timer_list, which timer is used for broadcasting ?
> > 
> > So, the correct run results (full output attached).
> > 
> > The vanilla kernel uses the twd timers as local timers and the TTC as
> > broadcast device:
> > Tick Device: mode: 1
> >  
> > Broadcast device  
> > Clock Event Device: ttc_clockevent
> > 
> > When I remove the offending CPUIDLE flag and add the DT fragment to
> > enable the global timer, the twd timers are still used as local timers
> > and the broadcast device is the global timer:
> > Tick Device: mode: 1
> >  
> > Broadcast device
> >  
> > Clock Event Device: arm_global_timer
> > 
> > Again, since boot hangs in the actually broken case, I don't see a way to
> > obtain this information for that case.
> 
> Can't you use the maxcpus=1 option to ensure the system boots up?

Right, that works. I forgot about that option after you mentioned that
it is most likely not that useful.

Anyway, these are the sysfs files with an unmodified cpuidle driver and
the gt enabled, with maxcpus=1 set.

/proc/timer_list:
Tick Device: mode: 1
Broadcast device
Clock Event Device: arm_global_timer
 max_delta_ns:   12884902005
 min_delta_ns:   1000
 mult:   715827876
 shift:  31
 mode:   3
 next_event: 10808000 nsecs
 set_next_event: gt_clockevent_set_next_event
 set_mode:   gt_clockevent_set_mode
 event_handler:  tick_handle_oneshot_broadcast
 retries:0

tick_broadcast_mask: 0001
tick_broadcast_oneshot_mask: 

Tick Device: mode: 1
Per CPU device: 0
Clock Event Device: local_timer
 max_delta_ns:   12884902005
 min_delta_ns:   1000
 mult:   715827876
 shift:  31
 mode:   3
 next_event: 1069 nsecs
 set_next_event: twd_set_next_event
 set_mode:   twd_set_mode
 event_handler:  hrtimer_interrupt
 retries:0

# cat /proc/interrupts 
   CPU0   
 27:252   GIC  27  gt
 29:626   GIC  29  twd
 43:  0   GIC  43  ttc_clockevent
 82:410   GIC  82  xuartps
IPI0:  0  CPU wakeup interrupts
IPI1:  0  Timer broadcast interrupts
IPI2:  0  Rescheduling interrupts
IPI3:  0  Function call interrupts
IPI4:  0  Single function call interrupts
IPI5:  0  CPU stop interrupts
Err:  0


Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-07-31 Thread Daniel Lezcano
On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
> On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
>> On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
>>> Hi Daniel,
>>>
>>> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
>>> (snip)

 the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework the local
 timer will be stopped when entering the idle state. In this case, the
 cpuidle framework will call clockevents_notify(ENTER) and switch to a
 broadcast timer and will call clockevents_notify(EXIT) when exiting the
 idle state, switching the local timer back into use.
>>>
>>> I've been thinking about this, trying to understand how this makes my
>>> boot attempts on Zynq hang. IIUC, the wrongly provided TIMER_STOP flag
>>> would make the timer core switch to a broadcast device even though it
>>> wouldn't be necessary. But shouldn't it still work? It sounds like we do
>>> something useless, but nothing wrong in the sense that it should result in
>>> breakage. I guess I'm missing something obvious. This timer system will
>>> always remain a mystery to me.
>>>
>>> Actually this more or less leads to the question: What is this
>>> 'broadcast timer'. I guess that is some clockevent device which is
>>> common to all cores? (that would be the cadence_ttc for Zynq). Is the
>>> hang pointing to some issue with that driver?
>>
>> If you look at the /proc/timer_list, which timer is used for broadcasting ?
> 
> So, the correct run results (full output attached).
> 
> The vanilla kernel uses the twd timers as local timers and the TTC as
> broadcast device:
>   Tick Device: mode: 1
>  
>   Broadcast device  
>   Clock Event Device: ttc_clockevent
> 
> When I remove the offending CPUIDLE flag and add the DT fragment to
> enable the global timer, the twd timers are still used as local timers
> and the broadcast device is the global timer:
>   Tick Device: mode: 1
>  
>   Broadcast device
>  
>   Clock Event Device: arm_global_timer
> 
> Again, since boot hangs in the actually broken case, I don't see a way to
> obtain this information for that case.

Can't you use the maxcpus=1 option to ensure the system boots up?




Re: Enable arm_global_timer for Zynq brakes boot

2013-07-31 Thread Sören Brinkmann
On Wed, Jul 31, 2013 at 11:34:48AM +0200, Daniel Lezcano wrote:
> On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
> > On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
> >> On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
> >>> Hi Daniel,
> >>>
> >>> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
> >>> (snip)
> 
>  the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework the local
>  timer will be stopped when entering the idle state. In this case, the
>  cpuidle framework will call clockevents_notify(ENTER) and switch to a
>  broadcast timer and will call clockevents_notify(EXIT) when exiting the
>  idle state, switching the local timer back into use.
> >>>
> >>> I've been thinking about this, trying to understand how this makes my
> >>> boot attempts on Zynq hang. IIUC, the wrongly provided TIMER_STOP flag
> >>> would make the timer core switch to a broadcast device even though it
> >>> wouldn't be necessary. But shouldn't it still work? It sounds like we do
> >>> something useless, but nothing wrong in the sense that it should result in
> >>> breakage. I guess I'm missing something obvious. This timer system will
> >>> always remain a mystery to me.
> >>>
> >>> Actually this more or less leads to the question: What is this
> >>> 'broadcast timer'. I guess that is some clockevent device which is
> >>> common to all cores? (that would be the cadence_ttc for Zynq). Is the
> >>> hang pointing to some issue with that driver?
> >>
> >> If you look at the /proc/timer_list, which timer is used for broadcasting ?
> > 
> > So, the correct run results (full output attached).
> > 
> > The vanilla kernel uses the twd timers as local timers and the TTC as
> > broadcast device:
> > Tick Device: mode: 1
> >  
> > Broadcast device  
> > Clock Event Device: ttc_clockevent
> > 
> > When I remove the offending CPUIDLE flag and add the DT fragment to
> > enable the global timer, the twd timers are still used as local timers
> > and the broadcast device is the global timer:
> > Tick Device: mode: 1
> >  
> > Broadcast device
> >  
> > Clock Event Device: arm_global_timer
> > 
> > Again, since boot hangs in the actually broken case, I don't see a way to
> > obtain this information for that case.
> 
> Is it possible the global timer driver simply does not work ? So when
> the cpuidle driver switches to it, the system stays stuck with no interrupt.
> 
> Removing the CPUIDLE_FLAG_TIMER_STOP prevents using the broadcast timer
> (aka arm global timer), so the problem does not appear.
> 
> And when the C3STOP feature flag is added to the global timer, this one
> can't be a broadcast timer, so another clock is selected for that (I
> guess cadence_ttc). So again the problem does not appear.
> 
> I am more and more convinced the problem is not coming from the cpuidle
> driver. The cpuidle flag has just spotted a problem somewhere else and I
> suspect the arm_global_timer is not working for zynq.

I made the following experiment:
I removed all TTC and twd nodes from my dts, leaving the gt as the only
timer source, and the system boots. Interestingly, in this case no
broadcast device is available.

/proc/timer_list (shortened):
Tick Device: mode: 1
Broadcast device
Clock Event Device: 
tick_broadcast_mask: 
tick_broadcast_oneshot_mask: 

Tick Device: mode: 1
Per CPU device: 0
Clock Event Device: arm_global_timer
 max_delta_ns:   12884902005
 min_delta_ns:   1000
 mult:   715827876
 shift:  31
 mode:   3
 next_event: 25654916518 nsecs
 set_next_event: gt_clockevent_set_next_event
 set_mode:   gt_clockevent_set_mode
 event_handler:  hrtimer_interrupt
 retries:0

Tick Device: mode: 1
Per CPU device: 1
Clock Event Device: arm_global_timer
 max_delta_ns:   12884902005
 min_delta_ns:   1000
 mult:   715827876
 shift:  31
 mode:   3
 next_event: 2566000 nsecs
 set_next_event: gt_clockevent_set_next_event
 set_mode:   gt_clockevent_set_mode
 event_handler:  hrtimer_interrupt
 retries:0

/proc/interrupts:
# cat /proc/interrupts 
   CPU0   CPU1   
 27:   1483   1623   GIC  27  gt
 82:507  0   GIC  82  xuartps
IPI0:  0  0  CPU wakeup interrupts
IPI1:  0  0  Timer broadcast interrupts
IPI2:   1211   1238  Rescheduling interrupts
IPI3:  0  0  Function call 

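The per-CPU arm_global_timer entries above line up with how the driver
registers its clockevents. Below is a reconstructed sketch of that
registration; the callback names match the timer_list output in this
thread, while gt_clk_rate, the stub bodies, and the exact registration
call are assumptions about the 3.11-era
drivers/clocksource/arm_global_timer.c rather than verbatim source:

#include <linux/clockchips.h>
#include <linux/cpumask.h>
#include <linux/smp.h>

static unsigned long gt_clk_rate;	/* assumed: filled in from the clock framework */

static void gt_clockevent_set_mode(enum clock_event_mode mode,
				   struct clock_event_device *clk)
{
	/* start/stop the per-CPU comparator; hardware details elided */
}

static int gt_clockevent_set_next_event(unsigned long delta,
					struct clock_event_device *clk)
{
	/* arm the comparator 'delta' ticks ahead; hardware details elided */
	return 0;
}

static int gt_clockevents_init(struct clock_event_device *clk)
{
	int cpu = smp_processor_id();

	clk->name = "arm_global_timer";
	/* Note what is absent: CLOCK_EVT_FEAT_C3STOP. Without it the core
	 * treats the timer as always-on, so it is eligible both as the
	 * per-CPU tick device (seen above) and as the broadcast device. */
	clk->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT;
	clk->set_mode = gt_clockevent_set_mode;
	clk->set_next_event = gt_clockevent_set_next_event;
	clk->cpumask = cpumask_of(cpu);
	clockevents_config_and_register(clk, gt_clk_rate, 1, 0xffffffff);
	return 0;
}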
Re: Enable arm_global_timer for Zynq brakes boot

2013-07-31 Thread Sören Brinkmann
On Wed, Jul 31, 2013 at 09:27:25AM +0200, Daniel Lezcano wrote:
> On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
> > On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
> >> On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
> >>> Hi Daniel,
> >>>
> >>> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
> >>> (snip)
> 
>  the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework the local
>  timer will be stopped when entering the idle state. In this case, the
>  cpuidle framework will call clockevents_notify(ENTER) and switch to a
>  broadcast timer and will call clockevents_notify(EXIT) when exiting the
>  idle state, switching the local timer back into use.
> >>>
> >>> I've been thinking about this, trying to understand how this makes my
> >>> boot attempts on Zynq hang. IIUC, the wrongly provided TIMER_STOP flag
> >>> would make the timer core switch to a broadcast device even though it
> >>> wouldn't be necessary. But shouldn't it still work? It sounds like we do
> >>> something useless, but nothing wrong in the sense that it should result in
> >>> breakage. I guess I'm missing something obvious. This timer system will
> >>> always remain a mystery to me.
> >>>
> >>> Actually this more or less leads to the question: What is this
> >>> 'broadcast timer'. I guess that is some clockevent device which is
> >>> common to all cores? (that would be the cadence_ttc for Zynq). Is the
> >>> hang pointing to some issue with that driver?
> >>
> >> If you look at the /proc/timer_list, which timer is used for broadcasting ?
> > 
> > So, the correct run results (full output attached).
> > 
> > The vanilla kernel uses the twd timers as local timers and the TTC as
> > broadcast device:
> > Tick Device: mode: 1
> >  
> > Broadcast device  
> > Clock Event Device: ttc_clockevent
> > 
> > When I remove the offending CPUIDLE flag and add the DT fragment to
> > enable the global timer, the twd timers are still used as local timers
> > and the broadcast device is the global timer:
> > Tick Device: mode: 1
> >  
> > Broadcast device
> >  
> > Clock Event Device: arm_global_timer
> > 
> > Again, since boot hangs in the actually broken case, I don't see a way to
> > obtain this information for that case.
> 
> Hmm, interesting. Can you give the output of /proc/interrupts also with
> the global timer ?
Sure:

# cat /proc/interrupts 
   CPU0   CPU1   
 27: 14  1   GIC  27  gt
 29:841843   GIC  29  twd
 43:  0  0   GIC  43  ttc_clockevent
 82:563  0   GIC  82  xuartps
IPI0:  0  0  CPU wakeup interrupts
IPI1:  0  0  Timer broadcast interrupts
IPI2:   1266   1330  Rescheduling interrupts
IPI3:  0  0  Function call interrupts
IPI4: 34 59  Single function call interrupts
IPI5:  0  0  CPU stop interrupts
Err:  0

Sören




Re: Enable arm_global_timer for Zynq brakes boot

2013-07-31 Thread Daniel Lezcano
On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
> On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
>> On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
>>> Hi Daniel,
>>>
>>> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
>>> (snip)

 the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework the local
 timer will be stopped when entering the idle state. In this case, the
 cpuidle framework will call clockevents_notify(ENTER) and switch to a
 broadcast timer and will call clockevents_notify(EXIT) when exiting the
 idle state, switching the local timer back into use.
>>>
>>> I've been thinking about this, trying to understand how this makes my
>>> boot attempts on Zynq hang. IIUC, the wrongly provided TIMER_STOP flag
>>> would make the timer core switch to a broadcast device even though it
>>> wouldn't be necessary. But shouldn't it still work? It sounds like we do
>>> something useless, but nothing wrong in the sense that it should result in
>>> breakage. I guess I'm missing something obvious. This timer system will
>>> always remain a mystery to me.
>>>
>>> Actually this more or less leads to the question: What is this
>>> 'broadcast timer'. I guess that is some clockevent device which is
>>> common to all cores? (that would be the cadence_ttc for Zynq). Is the
>>> hang pointing to some issue with that driver?
>>
>> If you look at the /proc/timer_list, which timer is used for broadcasting ?
> 
> So, the correct run results (full output attached).
> 
> The vanilla kernel uses the twd timers as local timers and the TTC as
> broadcast device:
>   Tick Device: mode: 1
>  
>   Broadcast device  
>   Clock Event Device: ttc_clockevent
> 
> When I remove the offending CPUIDLE flag and add the DT fragment to
> enable the global timer, the twd timers are still used as local timers
> and the broadcast device is the global timer:
>   Tick Device: mode: 1
>  
>   Broadcast device
>  
>   Clock Event Device: arm_global_timer
> 
> Again, since boot hangs in the actually broken case, I don't see a way to
> obtain this information for that case.

Is it possible the global timer driver simply does not work ? So when
the cpuidle driver switches to it, the system stays stuck with no interrupt.

Removing the CPUIDLE_FLAG_TIMER_STOP prevents using the broadcast timer
(aka arm global timer), so the problem does not appear.

And when the C3STOP feature flag is added to the global timer, this one
can't be a broadcast timer, so another clock is selected for that (I
guess cadence_ttc). So again the problem does not appear.

I am more and more convinced the problem is not coming from the cpuidle
driver. The cpuidle flag has just spotted a problem somewhere else and I
suspect the arm_global_timer is not working for zynq.
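The eligibility rule described here is enforced when a clockevent device
is offered for broadcast duty. A simplified sketch, paraphrased from
3.11-era kernel/time/tick-broadcast.c (the real check also considers
one-shot capability and a few more corner cases):

#include <linux/clockchips.h>

/* Return true if 'newdev' may replace 'curdev' as the broadcast device.
 * A device flagged C3STOP stops in deep idle, which is exactly when the
 * broadcast device must keep running, so such devices are rejected. */
static bool tick_check_broadcast_device(struct clock_event_device *curdev,
					struct clock_event_device *newdev)
{
	if ((newdev->features & CLOCK_EVT_FEAT_DUMMY) ||
	    (newdev->features & CLOCK_EVT_FEAT_C3STOP))
		return false;

	return !curdev || newdev->rating > curdev->rating;
}

That is why adding C3STOP to the gt pushes broadcast duty onto the
cadence_ttc, and why without the flag the gt takes it over, which matches
the timer_list output earlier in the thread.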




Re: Enable arm_global_timer for Zynq brakes boot

2013-07-31 Thread Daniel Lezcano
On 07/31/2013 12:34 AM, Sören Brinkmann wrote:
> On Tue, Jul 30, 2013 at 10:47:15AM +0200, Daniel Lezcano wrote:
>> On 07/30/2013 02:03 AM, Sören Brinkmann wrote:
>>> Hi Daniel,
>>>
>>> On Mon, Jul 29, 2013 at 02:51:49PM +0200, Daniel Lezcano wrote:
>>> (snip)

 the CPUIDLE_FLAG_TIMER_STOP flag tells the cpuidle framework the local
 timer will be stopped when entering the idle state. In this case, the
 cpuidle framework will call clockevents_notify(ENTER) and switch to a
 broadcast timer and will call clockevents_notify(EXIT) when exiting the
 idle state, switching the local timer back into use.
>>>
>>> I've been thinking about this, trying to understand how this makes my
>>> boot attempts on Zynq hang. IIUC, the wrongly provided TIMER_STOP flag
>>> would make the timer core switch to a broadcast device even though it
>>> wouldn't be necessary. But shouldn't it still work? It sounds like we do
>>> something useless, but nothing wrong in the sense that it should result in
>>> breakage. I guess I'm missing something obvious. This timer system will
>>> always remain a mystery to me.
>>>
>>> Actually this more or less leads to the question: What is this
>>> 'broadcast timer'. I guess that is some clockevent device which is
>>> common to all cores? (that would be the cadence_ttc for Zynq). Is the
>>> hang pointing to some issue with that driver?
>>
>> If you look at the /proc/timer_list, which timer is used for broadcasting ?
> 
> So, the correct run results (full output attached).
> 
> The vanilla kernel uses the twd timers as local timers and the TTC as
> broadcast device:
>   Tick Device: mode: 1
>  
>   Broadcast device  
>   Clock Event Device: ttc_clockevent
> 
> When I remove the offending CPUIDLE flag and add the DT fragment to
> enable the global timer, the twd timers are still used as local timers
> and the broadcast device is the global timer:
>   Tick Device: mode: 1
>  
>   Broadcast device
>  
>   Clock Event Device: arm_global_timer
> 
> Again, since boot hangs in the actually broken case, I don't see a way to
> obtain this information for that case.

Hmm, interesting. Can you give the output of /proc/interrupts also with
the global timer ?

