On 3 August 2021 6:07 pm, quoth Fontana Nicola:
> This, although a bit mind twisting, is functionally equivalent to the current
> one, i.e. both `ecrt_slave_config_pdos` and `autoregister_pdos` are called
> for all modules but the EK1100.

You're correct of course; I guess my brain was skidding over the return value 
comparison.

> Yes, I'm not interested in storing byte or bit offsets. With "hoping"
> you mean that libethercat can (at least theoretically) leave byte holes inside
> the process image? That would certainly create issues here, as I am implying
> everything is packed.
> 
> > (Which, to be fair, it always should -- until you move devices around
> > in your network or upgrade a device to a later revision with a different 
> > data
> > model.
> > What you have works for now, but it's brittle.  But if you're happy
> > with that caveat, then fine.)
> 
> Can you expand a bit here? If the devices can be moved around I would use
> aliases and if a new device cannot behave like the old one, that would
> require some code rewrite. But I think those concerns are not related to the
> fact I am "autoregistering" the PDO entries.

API-wise, the library only guarantees that the PDOs you register appear 
*somewhere* in the domain memory (specifically: at the offset/pointer 
returned).  (Technically, you only need to register one PDO from each SM to 
guarantee that the entire SM will appear -- registering every PDO merely 
confirms that the values also appeared in your ecrt_slave_config_pdos call; 
since you're iterating the same table in both cases it's a little pointless, 
especially if you're not saving the offsets.)
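For concreteness, a minimal sketch of the "one entry per SM" idea (the 
0x3101:01 entry index and the el3102_* name are placeholders, not taken from 
your code):

#include <ecrt.h>

static unsigned int el3102_input_offset;  /* byte offset into the domain */

int register_first_entry(ec_slave_config_t *sc, ec_domain_t *domain)
{
    /* Register only the first entry of the SM's PDO mapping; the return
     * value is the byte offset within the domain image (or a negative
     * error code). */
    int off = ecrt_slave_config_reg_pdo_entry(sc, 0x3101, 0x01, domain, NULL);
    if (off < 0)
        return off;
    el3102_input_offset = off;  /* the remaining entries of this SM follow
                                 * consecutively from here */
    return 0;
}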

EtherCAT itself requires that all PDOs within a single SM always appear 
"packed" in consecutive bytes, matching the declared SM layout.  It imposes 
no particular constraints between different SMs (some slaves may misbehave 
if their own SMs are split between different packets, though that is 
supposed to be legal in general).  But a particular master is entirely free 
to order the SMs from different slaves however it likes, to (for example) 
insert padding between SMs so that each one aligns on a convenient 
word/dword boundary, to group RxPDOs and TxPDOs differently (or overlap 
them), or to include additional SMs you didn't ask for, although usually 
the preference is to minimize packet size.
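If you'd rather detect those cases than assume them away, you can compare the 
master's idea of the domain size with what your packed layout expects.  A 
sketch; expected_packed_bytes is whatever your own code computes by summing 
the sizes of the registered PDO entries:

#include <assert.h>
#include <stddef.h>
#include <ecrt.h>

void check_domain_packing(ec_domain_t *domain, size_t expected_packed_bytes)
{
    /* ecrt_domain_size() reports the process data size of the domain; if
     * it is larger than the sum of the registered entries, the master has
     * inserted padding or extra data and the "packed" assumption is off. */
    assert(ecrt_domain_size(domain) == expected_packed_bytes);
}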

Practically speaking, because ecrt_slave_config_reg_pdo_entry returns an 
offset (which cannot later be changed without restarting the configuration 
from scratch), and because Etherlab doesn't do any alignment or add extra 
background data (and doesn't overlap unless specifically requested), *with 
the current implementation* you'll always end up with the domain memory 
ordered with each slave's SM placed according to when it was first referred 
to by ecrt_slave_config_reg_pdo_entry, and with no padding in between.  
Doing something different is only a theoretical possibility.
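Making that implicit assumption explicit costs very little: check each offset 
returned by ecrt_slave_config_reg_pdo_entry against a running counter of the 
expected packed position, so the code fails loudly if a future implementation 
ever lays things out differently.  A sketch assuming byte-aligned entries; 
the entry_bytes parameter is hypothetical:

#include <stddef.h>
#include <stdint.h>
#include <ecrt.h>

int register_packed_entry(ec_slave_config_t *sc, ec_domain_t *domain,
                          uint16_t index, uint8_t subindex,
                          unsigned int *expected_offset, size_t entry_bytes)
{
    int off = ecrt_slave_config_reg_pdo_entry(sc, index, subindex,
                                              domain, NULL);
    if (off < 0)
        return off;
    if ((unsigned int)off != *expected_offset)
        return -1;  /* layout no longer matches the packed assumption */
    *expected_offset += entry_bytes;  /* where the next entry should land
                                       * if packing holds */
    return 0;
}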

But it does mean that your process_image_t is tied to having exactly those 
slaves in exactly that order (registration order, not necessarily network 
order), making it brittle to changes in your design or to re-using the 
implementation in another project.  My point was that for more flexibility 
you could make a per-slave "process_image_t" rather than a global one -- 
combined with actually using the offset/pointer for the first PDO in each 
SM, that makes each slave a reusable component that you can repeat as many 
times as needed in each project, rather than a one-off.  It would also make 
you immune to SMs being ordered differently than you originally expected in 
some theoretical future change.
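As a sketch of what such a per-slave component could look like (the 
vendor/product IDs and the 0x7000:01 / 0x6000:01 entry indices are 
placeholders for whatever the actual device uses):

#include <stdint.h>
#include <ecrt.h>

typedef struct {
    ec_slave_config_t *sc;
    unsigned int in_off;   /* byte offset of the first input PDO entry */
    unsigned int out_off;  /* byte offset of the first output PDO entry */
} my_slave_t;

int my_slave_setup(my_slave_t *s, ec_master_t *master, ec_domain_t *domain,
                   uint16_t alias, uint16_t position,
                   uint32_t vendor_id, uint32_t product_code)
{
    int off;

    s->sc = ecrt_master_slave_config(master, alias, position,
                                     vendor_id, product_code);
    if (!s->sc)
        return -1;

    /* One registration per SM is enough; remember where each SM landed. */
    off = ecrt_slave_config_reg_pdo_entry(s->sc, 0x7000, 0x01, domain, NULL);
    if (off < 0)
        return off;
    s->out_off = off;

    off = ecrt_slave_config_reg_pdo_entry(s->sc, 0x6000, 0x01, domain, NULL);
    if (off < 0)
        return off;
    s->in_off = off;

    return 0;
}

/* In the cyclic task, address the data relative to the stored offsets: */
static inline uint16_t my_slave_read_u16(const my_slave_t *s,
                                         ec_domain_t *domain)
{
    return EC_READ_U16(ecrt_domain_data(domain) + s->in_off);
}

Several instances of the same slave type then just become several my_slave_t 
variables, each carrying its own offsets.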


Gavin Lambert
Senior Software Developer | TOMRA Fresh Food

COMPAC SORTING EQUIPMENT LTD | 4 Henderson Pl | Onehunga | Auckland 1061 | New 
Zealand
Switchboard: +49 2630 96520 | https://www.tomra.com
