> -----Original Message-----
> From: Intel-wired-lan <[email protected]> On Behalf
> Of Grzegorz Nitka
> Sent: Thursday, March 26, 2026 5:29 PM
> To: [email protected]
> Cc: Vecera, Ivan <[email protected]>; [email protected];
> [email protected]; [email protected]; [email protected]; Kitszel,
> Przemyslaw <[email protected]>; [email protected];
> [email protected]; [email protected]; Kubalewski,
> Arkadiusz <[email protected]>; [email protected];
> [email protected]; [email protected];
> [email protected]; Nguyen, Anthony L
> <[email protected]>; [email protected]; [email protected]
> Subject: [Intel-wired-lan] [PATCH v4 net-next 8/8] ice: add TX
> reference clock (tx_clk) control for E825 devices
> 
> Add full support for selecting and controlling the TX SERDES
> reference clock on E825C hardware. E825C devices support selecting
> among multiple SERDES transmit reference clock sources (ENET, SyncE,
> EREF0), but impose several routing constraints: on some paths a
> reference must be enabled on both PHY complexes, and ports sharing a
> PHY must coordinate usage so that a reference is not disabled while
> still in active use. Until now the driver did not expose this domain
> through the DPLL API, nor did it provide a coherent control layer
> for enabling, switching, or tracking TX reference clocks.
> 
> This patch implements full TX reference clock management for E825
> devices. Compared to previous iterations, the logic is now separated
> into a dedicated module (ice_txclk.c) which encapsulates all clock-
> selection rules, cross‑PHY dependencies, and the bookkeeping needed
> to ensure safe transitions. This allows the DPLL layer and the PTP
> code to remain focused on their respective roles.
> 
> Key additions:
> 
>   * A new txclk control module (`ice_txclk.c`) implementing:
>       - software usage tracking for each reference clock per PHY,
>       - peer-PHY enable rules (SyncE required on both PHYs when used
>         on PHY0, EREF0 required on both when used on PHY1),
>       - safe disabling of unused reference clocks after switching,
>       - a single, driver‑internal entry point for clock changes.
> 
>   * Integration with the DPLL pin ops:
>       - pin‑set now calls into `ice_txclk_set_clk()` to request a
>         hardware switch,
>       - pin-get reports the current SERDES reference by reading back
>         the active selector (`ice_get_serdes_ref_sel_e825c()`).
> 
>   * Wiring the requested reference clock into Auto-Negotiation restart
>     through the already-extended `ice_aq_set_link_restart_an()`.
> 
>   * After each link-up the driver verifies the effective hardware
>     state (`ice_txclk_verify()`) and updates its per-PHY usage
>     bitmaps, correcting the requested/active state if the FW or AN
>     flow applied a different reference.
> 
>   * PTP PF initialization now seeds the ENET reference clock as
>     enabled by default for its port.
> 
> All reference clock transitions are serialized through the DPLL
> lock, and usage information is shared across all PFs belonging to
> the same E825C controller PF. This ensures that concurrent changes
> are coordinated and that shared PHYs never see an unexpected
> disable.
> 
> With this patch, E825 devices gain full userspace-driven TX
> reference clock selection via the DPLL subsystem, enabling complete
> SyncE support, precise multi-clock setups, and predictable clock
> routing behavior.
> 
> Reviewed-by: Arkadiusz Kubalewski <[email protected]>
> Signed-off-by: Grzegorz Nitka <[email protected]>
> ---
>  drivers/net/ethernet/intel/ice/Makefile     |   2 +-
>  drivers/net/ethernet/intel/ice/ice.h        |  12 +
>  drivers/net/ethernet/intel/ice/ice_dpll.c   |  53 +++-
>  drivers/net/ethernet/intel/ice/ice_ptp.c    |  27 ++-
>  drivers/net/ethernet/intel/ice/ice_ptp.h    |   7 +
>  drivers/net/ethernet/intel/ice/ice_ptp_hw.c |  37 +++
>  drivers/net/ethernet/intel/ice/ice_ptp_hw.h |  27 +++
>  drivers/net/ethernet/intel/ice/ice_txclk.c  | 256 ++++++++++++++++++++
>  drivers/net/ethernet/intel/ice/ice_txclk.h  |  41 ++++
>  9 files changed, 445 insertions(+), 17 deletions(-)
>  create mode 100644 drivers/net/ethernet/intel/ice/ice_txclk.c
>  create mode 100644 drivers/net/ethernet/intel/ice/ice_txclk.h
> 
> diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile
> index 38db476ab2ec..95fd0c49800f 100644
> --- a/drivers/net/ethernet/intel/ice/Makefile
> +++ b/drivers/net/ethernet/intel/ice/Makefile
> @@ -54,7 +54,7 @@ ice-$(CONFIG_PCI_IOV) +=    \
>       ice_vf_mbx.o            \
>       ice_vf_vsi_vlan_ops.o   \
>       ice_vf_lib.o
> -ice-$(CONFIG_PTP_1588_CLOCK) += ice_ptp.o ice_ptp_hw.o ice_dpll.o ice_tspll.o ice_cpi.o

...

>  static const struct dpll_pin_ops ice_dpll_rclk_ops = {
> diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
> index 094e96219f45..a75a1380097b 100644
> --- a/drivers/net/ethernet/intel/ice/ice_ptp.c
> +++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
> @@ -4,6 +4,7 @@
>  #include "ice.h"
>  #include "ice_lib.h"
>  #include "ice_trace.h"
> +#include "ice_txclk.h"
> 
>  static const char ice_pin_names[][64] = {
>       "SDP0",
> @@ -54,11 +55,6 @@ static const struct ice_ptp_pin_desc ice_pin_desc_dpll[] = {
>       {  SDP3, {  3, -1 }, { 0, 0 }},
>  };
> 
> -static struct ice_pf *ice_get_ctrl_pf(struct ice_pf *pf)
> -{
> -     return !pf->adapter ? NULL : pf->adapter->ctrl_pf;
> -}
> -
>  static struct ice_ptp *ice_get_ctrl_ptp(struct ice_pf *pf)
>  {
>       struct ice_pf *ctrl_pf = ice_get_ctrl_pf(pf);
> @@ -1325,6 +1321,10 @@ void ice_ptp_link_change(struct ice_pf *pf, bool linkup)
>                               return;
>                       }
>               }
> +
> +             if (linkup)
> +                     ice_txclk_verify(pf);
msleep() under a mutex smells bad... ice_cpi_wait_req0_ack0() and
ice_cpi_wait_ack() can hold pf->dplls.lock for ~2 seconds, blocking
ice_dpll_periodic_work(). Can the CPI handshake happen outside the
lock?
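For example, something along the lines of the pattern below: take the
lock only to claim the transition and later to commit the result, and
run the slow polling handshake unlocked. This is a rough userspace
sketch of the idea (pthread mutex standing in for pf->dplls.lock, all
struct/function names invented, slow_cpi_handshake() a stub for the
CPI wait helpers), not a drop-in suggestion:

```c
#include <pthread.h>
#include <stdbool.h>
#include <errno.h>

struct txclk_state {
	pthread_mutex_t lock;	/* stands in for pf->dplls.lock */
	bool busy;		/* a clock switch is in flight */
	int active_ref;		/* last committed reference */
};

/* Stub for ice_cpi_wait_req0_ack0()/ice_cpi_wait_ack(), which poll
 * with msleep() for up to ~2 s in the real driver.
 */
static int slow_cpi_handshake(int ref)
{
	(void)ref;
	return 0;
}

static int txclk_apply(struct txclk_state *st, int ref)
{
	int err;

	pthread_mutex_lock(&st->lock);
	if (st->busy) {			/* another switch in flight */
		pthread_mutex_unlock(&st->lock);
		return -EBUSY;
	}
	st->busy = true;		/* claim the transition... */
	pthread_mutex_unlock(&st->lock);	/* ...then drop the lock */

	/* Slow part runs unlocked, so periodic work is not blocked. */
	err = slow_cpi_handshake(ref);

	pthread_mutex_lock(&st->lock);
	st->busy = false;
	if (!err)
		st->active_ref = ref;	/* commit under the lock */
	pthread_mutex_unlock(&st->lock);

	return err;
}
```

The cost is that concurrent callers see -EBUSY instead of blocking,
so the DPLL pin-set path would need to surface that to userspace, but
that seems preferable to stalling the periodic worker.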

> +
>               mutex_unlock(&pf->dplls.lock);
>       }
> 

...

> */
> --
> 2.39.3
