Michael Haberler wrote:
> Jon,
>
> Am 07.12.2012 um 04:37 schrieb Jon Elson:
>
>   
>> Michael Haberler wrote:
>>     
>>> sorry for what maybe sounds like a dumb question, but having read the 
>>> Proctor/Shackleford paper on the influence of jitter on steppers, which 
>>> basically says: "all it causes is a loss of torque on the order of 10%" 
>>> (given the figures at the time the paper was written),
>>>
>>>       
>> That's a bit dismissive of Fred and/or Will; a major RT stutter will 
>> cause more than a 10% loss of torque.
>>     
>
>
> this is the way I understood the gist of the paper, and I found it quite 
> an interesting summary.
>
> not being a native speaker: can you fill me in on what you consider 
> "dismissive" about that?
>   
OK, I did NOT read the paper, just the excerpt quoted yesterday.  My 
guess is that Fred/Will were talking about a bounded stutter in the RT 
system, and made their 10% estimate based on the magnitude of the timing 
jitter.  I was expanding that to an UNbounded jitter, without checking 
the paper to make sure they had excluded that possibility.
>   
>>  I did an experiment a LONG time ago when I
>> developed the first hardware step generator for EMC(1) in 2001.  I was able
>> to improve the top speed of a Sherline mill by a factor of about 5 by
>> creating much smoother step pulse trains than the software step generator.
>> I don't know the latency that machine had, but it was not terrible.
>>     
>
> what I'd be interested in: was that a consequence of the capability to 
> generate more continuous frequencies or a consequence of less jitter (which 
> was the point in the original question)?
>   
Well, to be most strict, the jitter and the granularity of frequencies 
are separate issues, but in practice both are tied to the interrupt rate 
of the step generation task.  I can't separate out the two effects, 
although a special version of the ppmc driver could be hacked up to 
force the frequencies to be more granular.  (It could be done with Mesa 
hardware as well, of course.)  This might be an interesting academic 
exercise; I'd like to know which effect is dominant!  The comparison 
from 2001 was done with a Pentium Classic of about 100 MHz (maybe even 
a 60 MHz), and I no longer recall what sort of jitter one got with 
those.  I'm pretty sure the base thread needed to run a LOT slower than 
on modern processors, but I don't remember.  Maybe we could dig a sample 
Sherline .ini file out of the archives and see what they did then.  (I 
run software step generation so rarely that I don't have records or 
know much about it.)
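To make the "granularity of frequencies" point concrete, here is a minimal sketch (not LinuxCNC code; the base-thread period and the model are assumptions for illustration).  It models the simplest possible software step generator, where a step edge can only occur on a base-thread tick, so the achievable step rates are f_base / n for integer n.  At rates approaching the tick rate, the nearest achievable frequency can be off by 10% or more, independent of any jitter:

```python
# Hedged sketch: frequency granularity of a naive software step generator.
# Assumption: step edges can only occur on base-thread ticks, so achievable
# step rates are f_base / n for integer n >= 2 (at least one tick high,
# one tick low per step).  Real stepgens (DDS-style accumulators) hit the
# average rate but spread the error as period jitter instead.

BASE_PERIOD_US = 50.0          # hypothetical base-thread period (50 us)
f_base = 1e6 / BASE_PERIOD_US  # tick rate in Hz (20 kHz here)

def nearest_step_rate(requested_hz):
    """Return the closest achievable step rate and its relative error."""
    n = max(2, round(f_base / requested_hz))
    actual = f_base / n
    return actual, (actual - requested_hz) / requested_hz

for req in (1000.0, 5000.0, 9000.0):
    actual, err = nearest_step_rate(req)
    print(f"requested {req:7.1f} Hz -> achievable {actual:7.1f} Hz ({err:+.1%})")
```

With these assumed numbers, 1 kHz and 5 kHz are exact, but a requested 9 kHz snaps to 10 kHz (+11%) — which is why a hardware step generator with a much finer timebase can produce a smoother pulse train even before jitter enters the picture.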

Jon

_______________________________________________
Emc-developers mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/emc-developers
