On Mon, Jun 07, 2021 at 05:02:52PM +0000, Eric Decker wrote:
> 1. Only forward PTP management response messages to the UDS port if they
> were requested by the UDS port. When ptp4l is configured as a boundary
> clock and a host on Ethernet is issuing excessive management requests (a
> PTP monitoring tool), the requests get forwarded across the boundary
> clock and the subsequent responses get forwarded to the UDS port. The
> phc2sys and pmc daemons are not expecting these responses, which causes
> their receive buffers to fill up; the ptp4l send buffers then fill up as
> well, which causes ptp4l to lock up until phc2sys or pmc reads the
> messages from its UDS port. pmc executes a command and exits, so
> forwarding these messages to pmc does not present a problem, but ptp4l
> will lock up until phc2sys reads messages from the UDS port, which
> happens every 60 seconds. The UDS code sends messages to the UDS address
> of the last process it received a message from.
>
> 2. PTP management messages originating from UDS (pmc) do not have the
> correct source clock ID and always carry the same sequence count, which
> prevents end nodes on Ethernet from responding.
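For context on point 2: the source identity and sequence number the end
nodes check are carried in the PTP common message header. Below is a
minimal sketch of the relevant fields, using IEEE 1588 field names rather
than the actual layout in linuxptp's msg.h; stamp_outgoing_mgmt() is a
hypothetical helper, not an existing linuxptp function, only meant to
illustrate what would need to be set per egress port.

#include <stdint.h>

struct PortIdentity {
	uint8_t  clockIdentity[8];	/* EUI-64 based clock identity */
	uint16_t portNumber;
};

/* Illustrative only: roughly follows the IEEE 1588 common header. */
struct ptp_common_header {
	uint8_t  messageType;		/* MANAGEMENT is 0xD in the low nibble */
	uint8_t  versionPTP;
	uint16_t messageLength;
	uint8_t  domainNumber;
	uint8_t  reserved1;
	uint8_t  flagField[2];
	int64_t  correctionField;
	uint8_t  reserved2[4];
	struct PortIdentity sourcePortIdentity;	/* must identify the actual sender */
	uint16_t sequenceId;			/* must advance between requests */
	uint8_t  controlField;
	int8_t   logMessageInterval;
};

/*
 * Hypothetical helper: before a management message that arrived over UDS
 * is sent out of a PTP port, stamp it with that port's identity and a
 * per-port sequence number instead of whatever the UDS client put there.
 */
static void stamp_outgoing_mgmt(struct ptp_common_header *hdr,
				const struct PortIdentity *egress_port,
				uint16_t *next_seq)
{
	hdr->sourcePortIdentity = *egress_port;
	hdr->sequenceId = (*next_seq)++;
}

If the sequenceId never changes, a responder that treats repeated sequence
numbers as duplicates could silently drop everything after the first
request, which would match the symptom described above.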
That would be nice to have fixed. Please post the patches.

> 3. Do not treat zero-length packets received on the PTP ports as an
> error; just ignore them. The nmap port scanner sends zero-length packets
> to probe ports. If we return an error, the port will report that it is
> closed and the PTP port will be reset, which is not desirable and could
> be a security threat. Returning a fault causes the sockets associated
> with the PTP ports to be closed, which is why the port reports it is
> closed.

I believe this is already fixed in git.

> Another significant change I had to make is regarding pmc. We are
> required to report quite a few PTP attributes via our product's web page
> and other tools. In order to get this data from a process in our product
> we must use pmc. pmc takes about 150 ms to respond, which is a long time
> for an embedded product. When a large number of attributes must be
> obtained, it can take an excessive amount of time to get all that data.
> Our target is to complete each pmc transaction in less than 5 ms. To
> solve this I changed some of the timeouts hardcoded into pmc and, to
> avoid the time it takes to execute a system command, integrated pmc into
> our process.

Instead of changing the hardcoded values, maybe a new option could be
considered?

Thanks,

--
Miroslav Lichvar

_______________________________________________
Linuxptp-users mailing list
Linuxptp-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linuxptp-users