On 06/24/2013 05:58 PM, Sören Brinkmann wrote:
> ping?
>
> On Mon, Jun 17, 2013 at 03:47:40PM -0700, Soren Brinkmann wrote:
>> Zynq's Ethernet clocks are created by the following hierarchy:
>> mux0 ---> div0 ---> div1 ---> mux1 ---> gate
>> Rate change requests on the gate have to propagate all the way up to
>> div0 to properly leverage all dividers. Mux1 was missing the
>> CLK_SET_RATE_PARENT flag, which is required to achieve