> With my now modified cycle, I get a difference of about 4.45 cycles. As I 
> now receive the current position before I send out the next target position 
> (while it was the other way around before), it makes sense that it is one 
> cycle more. But I compare the current position I receive at the beginning 
> of the cycle with the target position I send out right afterwards (I do not 
> compare with the one generated later in the cycle and sent out the next 
> cycle!)

>

> Therefore, I think it is still not 100% clear:

> - I would assume that in my now modified cycle (when following your 
> example) I have 3 cycles of delay, not 4.



Yeah, working out how many cycles a round trip takes always gives me a 
headache, and I usually forget a step, so don't take what I said as gospel.





> - SYNC0 offset does not affect the 0.x cycle at all. This, however, makes 
> sense to me, as SYNC0 affects BOTH the moment the target position is 
> activated AND the moment the current position is captured. So modifying it 
> will just move BOTH events together, making the shift invisible when 
> comparing like this (of course, the real axis position will move relative 
> to real time).



Correct.





> - The reason for the 0.45 cycles is probably some internal delay in the drive 
> between activating target position and capturing current position?



The drive has its own internal cycle rate.  The Yaskawas I use run at 125us 
cycles (or was that 62.5us, I forget).  So your sync event will fall somewhere 
between these cycles, with the target position being picked up for the drive's 
next cycle, but the actual position returned is the one from the previous 
drive cycle.  You can usually find your drive's cycle rate in the 
documentation, or infer it from its stated minimum supported cycle times.  
(I've seen some that look like they run at 250us cycles.)  So that's some of 
the delay; the rest is possibly due to timings on the master side.
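
To put rough numbers on that (both cycle times below are assumptions for 
illustration, not your actual values):

    /* Back-of-the-envelope contribution of the drive's internal cycle to
     * the observed delay.  Both cycle times are assumed for illustration. */
    static double drive_delay_in_master_cycles(void)
    {
        const double master_cycle_us = 1000.0; /* assumed 1 kHz EtherCAT cycle */
        const double drive_cycle_us  = 125.0;  /* assumed internal drive cycle */

        /* The target position waits up to one drive cycle before being
         * picked up, and the returned actual position is up to one drive
         * cycle stale. */
        const double worst_case_us = 2.0 * drive_cycle_us;  /* 250 us */

        return worst_case_us / master_cycle_us;             /* 0.25 cycles */
    }

Under those made-up numbers the drive alone accounts for up to about 0.25 of 
a cycle; the remainder of the observed 0.45 would then sit on the master 
side, as above.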





>

> > What you need to do is find the Following Error PDO index, add that to 
> > your drive PDO data, and use it.

>

> I read this now (also 0x60f4:0). It is however not giving me the delay I am 
> observing, but the servo error due to axis regulation tuning. If I subtract 
> the following error from the current position and then compare the target 
> position with the modified current position, I get a very stable delay of 
> 4.456x cycles during constant-speed movement and only slightly changing 
> values during strong acceleration. So I can measure the delay much more 
> precisely.



I thought you were comparing target position to actual position to calculate 
your own following error.  If not, then is it just out of curiosity / to 
understand what the delay is made up of?  Or is it so that you have some idea 
how long after you transmit a target position it will take to physically be 
there?  If it's the latter, then the total round-trip delay doesn't matter; 
all that matters is when the hardware is where you expect it to be.  This 
time will be something like:

t = t at start of cycle  +  sync time of slave  +  (up to) slave cycle time  
    +  master cycle time
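
Expressed as a rough helper, with all three timing constants as placeholders 
rather than measured values:

    #include <stdint.h>

    /* When a target position sent this cycle should physically be reached.
     * All three constants are illustrative placeholders. */
    #define MASTER_CYCLE_NS  1000000ULL  /* assumed 1 ms master cycle    */
    #define SLAVE_SYNC_NS     500000ULL  /* SYNC0 shift within the cycle */
    #define SLAVE_CYCLE_NS    125000ULL  /* assumed internal drive cycle */

    static uint64_t expected_arrival_ns(uint64_t cycle_start_ns)
    {
        /* t = t at start of cycle + sync time of slave
         *     + (up to) slave cycle time + master cycle time */
        return cycle_start_ns + SLAVE_SYNC_NS + SLAVE_CYCLE_NS
             + MASTER_CYCLE_NS;
    }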





From your previous email:



> Is there anything else I need to keep an eye on when doing all the work after 
> "master send"? Can I safely assume that all TxPDOs from the drive are still 
> valid after the master send and I can read them after the master send using 
> EC_READ_*? Or should I save them when I write the cached RxPDOs? In my test 
> program, it works without caching though.



As Gavin said, "overlapping PDOs" are off by default, so the TxPDOs (read 
data) are safe to read after writing the cached RxPDOs (write data).  Just 
make sure you don't overwrite any TxPDO read data.
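
As a sketch of that ordering (the offset names are hypothetical, registered 
beforehand with ecrt_slave_config_reg_pdo_entry()):

    #include <ecrt.h>
    #include <stdint.h>

    /* Hypothetical globals, set up during configuration. */
    extern ec_master_t *master;
    extern ec_domain_t *domain;
    extern uint8_t *domain_pd;           /* from ecrt_domain_data()     */
    extern unsigned int off_target_pos;  /* RxPDO 0x607A:00 byte offset */
    extern unsigned int off_actual_pos;  /* TxPDO 0x6064:00 byte offset */

    void cycle_tail(int32_t cached_target)
    {
        /* Write the cached RxPDO value, then queue and send. */
        EC_WRITE_S32(domain_pd + off_target_pos, cached_target);
        ecrt_domain_queue(domain);
        ecrt_master_send(master);

        /* The TxPDO data read during this cycle's receive/process is
         * still valid here, because overlapping PDOs are off. */
        int32_t actual_pos = EC_READ_S32(domain_pd + off_actual_pos);
        (void)actual_pos;  /* use it in the application calcs */
    }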





As a bonus, just another little hacky trick I do...



I also call ecrt_domain_state() every cycle and ecrt_master_state() once every 
10 cycles, outputting changes to the system log.  I don't actually perform any 
diagnostic functions beyond that with these (e.g. sending commands to figure 
out what is not communicating and doing something about it), as that would 
take too long.  But I do need to know if any of my active drives have stopped 
communicating so I can deal with it immediately (abort the coordinated motion, 
logically disable the drive, raise an EStop, etc.).
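
A minimal version of that monitoring, assuming the usual IgH state structs; 
the change-detection and logging details are illustrative, not my exact code:

    #include <ecrt.h>
    #include <stdio.h>

    extern ec_master_t *master;   /* hypothetical globals */
    extern ec_domain_t *domain;

    void check_states(unsigned long cycle_count)
    {
        static ec_domain_state_t prev_ds;
        static ec_master_state_t prev_ms;
        ec_domain_state_t ds;

        ecrt_domain_state(domain, &ds);  /* every cycle */
        if (ds.working_counter != prev_ds.working_counter
                || ds.wc_state != prev_ds.wc_state) {
            fprintf(stderr, "domain: WC %u, wc_state %u\n",
                    ds.working_counter, (unsigned int)ds.wc_state);
            prev_ds = ds;
        }

        if (cycle_count % 10 == 0) {     /* master state every 10 cycles */
            ec_master_state_t ms;
            ecrt_master_state(master, &ms);
            if (ms.slaves_responding != prev_ms.slaves_responding
                    || ms.al_states != prev_ms.al_states
                    || ms.link_up != prev_ms.link_up) {
                fprintf(stderr, "master: %u slave(s), AL 0x%02X, link %s\n",
                        ms.slaves_responding, (unsigned int)ms.al_states,
                        ms.link_up ? "up" : "down");
                prev_ms = ms;
            }
        }
    }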



How I do this is (sketched in code after the list):

- master receive

- domain process

- ecrt_domain_state()

- ecrt_master_state()

- call preSendPDOData() on each of my slaves

- write cached PDO values

- domain queue

- dc sync

- master send

- call postSendPDOData() on each of my slaves

- perform application calcs (writes to PDO data are cached)
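
A condensed sketch of that cycle; only the ecrt_* calls are real IgH API, 
all helper names (check_states, write_cached_pdo_values, the per-slave hook 
table, application_calcs) are illustrative:

    #include <ecrt.h>
    #include <stdint.h>

    extern ec_master_t *master;      /* hypothetical globals */
    extern ec_domain_t *domain;
    extern int num_slaves;
    extern struct { void (*preSendPDOData)(void);
                    void (*postSendPDOData)(void); } slaves[];
    void check_states(unsigned long cycle_count);
    void write_cached_pdo_values(void);
    void application_calcs(void);

    void cyclic_task(uint64_t app_time_ns, unsigned long cycle_count)
    {
        ecrt_master_receive(master);               /* master receive */
        ecrt_domain_process(domain);               /* domain process */

        check_states(cycle_count);                 /* domain/master state */

        for (int i = 0; i < num_slaves; i++)
            slaves[i].preSendPDOData();            /* pre-send hooks */

        write_cached_pdo_values();                 /* cached PDO writes */

        ecrt_domain_queue(domain);                 /* domain queue */

        ecrt_master_application_time(master, app_time_ns);  /* dc sync */
        ecrt_master_sync_reference_clock(master);
        ecrt_master_sync_slave_clocks(master);

        ecrt_master_send(master);                  /* master send */

        for (int i = 0; i < num_slaves; i++)
            slaves[i].postSendPDOData();           /* post-send hooks */

        application_calcs();                       /* writes are cached */
    }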



preSendPDOData() for my drives caches the just-read LastErrorCode (0x603f:00) 
value and overwrites it with the value 0xFFFF.

postSendPDOData() for my drives replaces the LastErrorCode value with the 
cached value (from above) to return it to its just-read state.



What this does is send out a 0xFFFF value in the TxPDO (read) data area every 
cycle.  If the drive is communicating, the PDO value will be replaced with the 
drive's current error code.  If it is not communicating, the value will come 
back as 0xFFFF, as it has not been replaced.  If my "application calc" code 
sees this value on a drive (as with any other non-zero error code), it will 
halt any coordinated motion the drive is involved in and perform any other 
cleanup operations my application requires.
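
A minimal sketch of those two hooks, assuming domain_pd comes from 
ecrt_domain_data() and off_error_code is the registered byte offset of 
0x603f:00 (both names hypothetical):

    #include <ecrt.h>
    #include <stdint.h>

    extern uint8_t *domain_pd;           /* from ecrt_domain_data()     */
    extern unsigned int off_error_code;  /* TxPDO 0x603f:00 byte offset */

    static uint16_t cached_error_code;

    /* Before master send: stash the just-read error code and poison the
     * process image with 0xFFFF. */
    void preSendPDOData(void)
    {
        cached_error_code = EC_READ_U16(domain_pd + off_error_code);
        EC_WRITE_U16(domain_pd + off_error_code, 0xFFFF);
    }

    /* After master send: put the real value back for the application
     * calcs.  If the drive has stopped responding, the next receive
     * leaves 0xFFFF in place and the application treats it as an error. */
    void postSendPDOData(void)
    {
        EC_WRITE_U16(domain_pd + off_error_code, cached_error_code);
    }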



From the drive's point of view, your drive should have its own watchdog to 
stop what it is doing if it loses communications.  But even if that is not 
configured / enabled, there will be no new target position (assuming cyclic 
synchronous position mode) reaching the drive, so it will stop moving anyway.  
We also raise an electrical EStop condition if an active drive loses comms, 
so a drive in velocity mode will also be stopped when the power is dropped via 
the EStop.





Regards,

Graeme.



