I'd like to take the discussion on Manual-while-paused a step further, viewing 
it from a higher point of view.

Jmk's post WhyManualWhilePausedIsHard is very instructive in outlining what 
happens, but it rests on several assumptions which warrant questioning. Some 
of these assumptions hinge on the current state of EMC2 and can likely be 
remedied; others presuppose a particular strategy for dealing with the issue, 
which might not be warranted because there may be other ways to approach it. 
These are:

1. interpreter needs to be involved in an offsetting step
2. Manual-while-paused requires 'throwing away the motion queue' 
3. 'Backing up the interpreter' might be needed
4. there might be 'two channels talking to the motion controller' which need 
synchronizing.
5. Using MDI means 'throwing away the motion queue'.

Before I begin, let me recap my suggestions about a rewrite/cleanup of the 
world model and interpreter state 
(http://wiki.linuxcnc.org/cgi-bin/emcinfo.pl?RemappingStatus). I suggested 
'useful Interpreter instantiation' and, to do so, classifying overall state 
into instantiable classes along the following lines (a rough class sketch in 
code follows the list):

- shared world model/machine state
- potentially shared config/UI parameter information
- potentially shared interpreter state (modal state)
- per-instance unshared execution state (oword, call stack etc; probably the 
NML queue too)
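
To make that concrete, here is a rough C++ sketch of the split. All type 
names are made up for illustration; none of this is existing EMC2 code:

struct WorldModel {                 // shared machine state
    double axis_position[9];
    double spindle_speed;
    bool   coolant_on;
};

struct ConfigParams {               // potentially shared config/UI parameters
    double max_velocity;
    double max_acceleration;
};

struct ModalState {                 // potentially shared interpreter modal state
    int    motion_mode;             // last of G0/G1/G2/...
    double feed_rate;
    int    active_plane;            // G17/G18/G19
};

struct ExecState {                  // per-instance, never shared:
    // oword variables, call stack, probably the NML queue too
};

class InterpInstance {
public:
    WorldModel   *world;            // shared by pointer
    ConfigParams *config;           // potentially shared
    ModalState    modal;            // saved/restored on pause/continue
    ExecState     exec;             // strictly per-instance
};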

Let me add a state class here, and let's call it 'motion context' (roughly 
the motion queue plus the TP queue plus other state needed, like spindle, 
coolant etc.). As for operations on motion contexts in task and motion, 
assume we have 'save motion context' and 'resume motion context' operations.
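
As a sketch of what I mean, building on the illustrative types above (again, 
nothing here is existing code, just the shape of the two operations):

#include <deque>

struct QueuedSegment {              // one queued move: target, feed, ...
    double target[9];
    double feed;
};

struct MotionContext {
    std::deque<QueuedSegment> motion_queue;  // not yet handed to the TP
    std::deque<QueuedSegment> tp_queue;      // already inside the TP
    double spindle_speed;
    bool   spindle_on;
    bool   coolant_on;
};

// 'save motion context': snapshot the queues and auxiliary state, then clear
MotionContext save_motion_context(MotionContext &active) {
    MotionContext saved = active;
    active.motion_queue.clear();
    active.tp_queue.clear();
    return saved;
}

// 'resume motion context': motion must be blocked while this runs
void resume_motion_context(MotionContext &active, const MotionContext &saved) {
    active = saved;
}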

Also assume that task has been made into a class and is instantiable just 
like the interpreter. At any time, there would be an active task instance, 
with the other instance(s) sleeping. Each task instance could refer to one or 
several interpreter instances, but at any time a given task instance would 
have one active interpreter instance, with the other instance(s) sleeping. 
This model would lend itself to multithreading: one thread per task instance, 
one thread per interpreter instance; all threads not belonging to an active 
task or interpreter instance are blocked.
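
A minimal threading sketch of that model, reusing the made-up types from 
above; the synchronization details are of course open:

#include <vector>
#include <mutex>
#include <condition_variable>

class TaskInstance {
public:
    std::vector<InterpInstance*> interps;   // one or several interpreters
    InterpInstance *active_interp = nullptr; // exactly one active at a time
    MotionContext   saved_context;          // filled in on 'Pause'
    ModalState      saved_modal;            // saved like M70 on 'Pause'

    // body of this task instance's thread: blocked unless this is the
    // active task instance
    void run() {
        std::unique_lock<std::mutex> lock(mtx);
        for (;;) {
            cond.wait(lock, [this] { return active; });
            // ... execute one task cycle: read NML queue, feed interpreter,
            //     hand canon commands down to motion ...
        }
    }

    void sleep() {                          // 'put task to sleep'
        std::lock_guard<std::mutex> guard(mtx);
        active = false;
    }

    void wakeup() {                         // 'wakeup task'
        { std::lock_guard<std::mutex> guard(mtx); active = true; }
        cond.notify_one();
    }

private:
    std::mutex mtx;
    std::condition_variable cond;
    bool active = false;
};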

Now assume a task instance is running, and the user hits 'Pause'. What would 
happen now is roughly the following (sketched in code after the list):
- the motion context is stored with the current task instance.
- the motion context is cleared.
- the interpreter's modal state is saved (very similar to M70).
- a second task instance becomes the active task instance, and the original 
task instance and its interpreter are put to sleep.
- the second task instance synchs with the world model, and MDI mode etc. is 
now enabled.
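
In code, the pause path could look roughly like this (still purely 
illustrative; 'paused_mode_task' is the second task instance):

// user hits 'Pause': park the running task instance, activate a second one
void on_pause(TaskInstance &running, TaskInstance &paused_mode_task,
              MotionContext &motion) {
    // store the motion context with the current task instance, clearing it
    running.saved_context = save_motion_context(motion);

    // save the interpreter's modal state (very similar to M70)
    running.saved_modal = running.active_interp->modal;

    // the original task instance and its interpreter go to sleep
    running.sleep();

    // the second task instance becomes active, synchs with the world model,
    // and MDI mode etc. is enabled from here on
    paused_mode_task.wakeup();
}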

When done (the user hits 'Continue'), the original task instance is restored, 
and with it its motion context and modal state (similar to M72). Motion needs 
to be blocked during the context change until the restore is finished.
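
And the corresponding continue path as a sketch; block_motion() and 
unblock_motion() are placeholders for whatever mechanism motion offers to 
hold off execution:

void block_motion();      // placeholder: hold off motion execution
void unblock_motion();    // placeholder: release it again

// user hits 'Continue': restore the original task instance and its context
void on_continue(TaskInstance &running, TaskInstance &paused_mode_task,
                 MotionContext &motion) {
    paused_mode_task.sleep();

    // motion is blocked during the context change until restore is finished
    block_motion();
    resume_motion_context(motion, running.saved_context);
    running.active_interp->modal = running.saved_modal;   // similar to M72
    unblock_motion();

    running.wakeup();
    // what happens next is the reentry step discussed below
}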

Now comes the hard part, reentering the original motion context, and that is 
the part I'm most fuzzy about. If the machine pose matches the pose when 
'Pause' was hit, then restoring modal state (spindle, coolant..) and waiting 
for spin-up might suffice. If the poses don't match, a move might be 
required, and I am not sure whether this can be done fully automatically; in 
fact, in the presence of cycles it could be quite complex. Also, offsets 
changed by touchoff aren't covered yet, and I have no idea which role 
acceleration and blending play here.
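
The one piece of this that is simple, checking whether the machine is back at 
the pose saved on 'Pause', could look like the following; everything beyond 
that check (generating the move back, cycles, touchoff offsets, blending) is 
the open question:

#include <cmath>

// true if the machine is (within tolerance) back at the pose saved on 'Pause'
bool pose_matches(const double paused[9], const double current[9],
                  double tolerance = 1e-6) {
    for (int i = 0; i < 9; i++)
        if (std::fabs(paused[i] - current[i]) > tolerance)
            return false;
    return true;
}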

A start would be to classify motion operations as 'restartable' as they 
trickle down from the interpreter. The motions originating from a G0, G1 or 
G2 might be tagged as 'restartable', and motion would react to 'Pause' only 
during such a motion command. This is not a complete solution, but maybe a 
useful start, and it leaves the way open to make more operations restartable. 
It might be necessary to tag motion operations with their G-code origin, or 
with a restart strategy for that matter, but that could be a second step. An 
operation like 'feed move to the pose where the saved motion context stopped' 
might help.
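
A sketch of such tagging, reusing the QueuedSegment type from above; the enum 
values and the restartability rule are just meant to illustrate the idea:

// G-code origin of a queued motion, attached as it trickles down from the
// interpreter through canon to motion
enum MotionOrigin {
    ORIGIN_G0,
    ORIGIN_G1,
    ORIGIN_G2,
    ORIGIN_CYCLE,       // canned cycles etc.
    ORIGIN_OTHER
};

struct TaggedSegment {
    QueuedSegment seg;  // from the motion context sketch above
    MotionOrigin  origin;
};

// motion would honor 'Pause' only during segments tagged restartable
bool restartable(const TaggedSegment &s) {
    switch (s.origin) {
    case ORIGIN_G0:
    case ORIGIN_G1:
    case ORIGIN_G2:
        return true;
    default:
        return false;   // cycles etc. would need a restart strategy first
    }
}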

Coming back to Jmk's post on the matter, note that the assumptions made there 
do not hold when proceeding as outlined above.

This is no small endeavour, so let me try to break this down into requirements:

1. clean up world model and interpreter instantiation as outlined, to have a 
truly instantiable interpreter and a contained world model.
2. make task an instantiable class
3. make task, as well as the interpreter, execute in a thread, and make the 
interp/task NML queue a task-local variable, with locking (see the sketch 
after this list)
4. implement 'put task to sleep' and 'wakeup task' operations
5. tag motions with origin (G...), maybe just an enum to start with
6. implement motion context snapshot and restore operations
7. do the 'reentry part' - this warrants more thought.
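
Requirement 3, the task-local NML queue with locking, could be as simple as 
this sketch ('NmlMessage' is just a stand-in for the actual NML command type):

#include <deque>
#include <mutex>

struct NmlMessage { /* stand-in for the actual NML command type */ };

// per-task-instance command queue, protected against concurrent access by
// the task thread and whoever feeds it
class TaskLocalQueue {
public:
    void push(const NmlMessage &msg) {
        std::lock_guard<std::mutex> guard(mtx);
        queue.push_back(msg);
    }

    bool pop(NmlMessage &msg) {
        std::lock_guard<std::mutex> guard(mtx);
        if (queue.empty())
            return false;
        msg = queue.front();
        queue.pop_front();
        return true;
    }

private:
    std::deque<NmlMessage> queue;
    std::mutex mtx;
};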

Note that several of these steps can be done independently before integrating 
them; for instance, step 6 is mostly independent of steps 1-3. There is also 
some useful fallout: this would bring about most of the support needed for 
multiple-spindle machines (barring motion and HAL dealing with that, which 
I'm unsure about).

It is debatable whether such an effort makes sense for the yield, and also 
whether this is actually doable within the current EMC2 project structure and 
process. I'd be curious what people think about it.

- Michael