Gene Heskett wrote:

>[snip]
>
>You are no doubt correct.  If the processor can keep up with the math, which 
>might imply an fpu in there too, there should be enough pins to drive the 
>hdwe.
>  
>
Heh - I could be wrong, it's correct to doubt ;)  CPU speed hasn't been 
the problem for about 10 years, even on a PC.  It's the interaction with 
the real world that's been screwed up so badly lately.

>>Linux and RTAI support isolated CPUsets.  You can tell Linux to not
>>schedule process on certain cores, and then tell RTAI to bind processes
>>to those cores.  EMC2 already does this by default when compiled on SMP
>>machines, though you do need to supply the correct boot parameters to
>>the kernel.
>>    
>>
>It's not been discussed on lkml (in a thread that got my attention), so I 
>wasn't aware of how well developed that already was.  Thanks.
>  
>
You might see something about isolcpus or cpusets if you search lkml.  I 
think one of the original reasons for adding it was so something like 
Oracle could run on its own processors, with its own disks etc.  It's 
been years since it was added.
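
For reference, here's a minimal sketch of that setup on a two-core box.  The 
isolcpus parameter is a real kernel boot option; the core number is just 
illustrative - pick whichever core(s) you want to reserve:

```
# Appended to the kernel command line in the bootloader (GRUB etc.):
#   tell the Linux scheduler to leave CPU 1 alone
isolcpus=1
```

RTAI/EMC2 can then bind its realtime tasks to the isolated core while 
ordinary processes stay on core 0.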

>>In practice, I haven't seen a great improvement from simply going to a
>>multi-core CPU.  I haven't tested every machine out there though, so who
>>knows.
>>    
>>
>Which would imply a bus bandwidth bottleneck maybe?  
>
I don't think so.  I have a machine that I pared down to nothing - no X, 
ext2, most services like sound disabled (it's a bare HAL application, so 
I don't need the GUIs and all).  This runs with a core2 duo chip (dual 
core around 2 GHz).  Latencies were in the 16000 ns range before my 
runscript optimizations; they went down to about 5000-6000 ns after (unless 
I connect with ssh and actually use the network or remote X software - then 
they shoot back up to 15-16k).  When I use an isolated CPU, I get 
roughly the same numbers; I don't recall exactly.  Strangely, if I put a 
do-nothing load on the Linux core (literally the bash line 'while true ; 
do echo "nothing" > /dev/null ; done'), latencies drop to around 200 ns or 
less, with some spikes into the 1000 ns range.  Yes, I mean under 1 
microsecond, and usually under 200 nanoseconds.  Note that this is with 
an Intel CPU which shares cache and memory controllers between processor 
cores.  I haven't tried the same setup on an AMD multicore CPU, I may do 
that some time just for the heck of it.

>I don't have a very good 
>mental estimate of how many actual gates are between the cpu and a pin of the 
>parport.
>
Many, but it's not gate delays that cause issues.

>With the parport having fallen from favor, I can easily see that 
>path getting slower and slower while others become more optimized to speed up 
>the popular stuff.  That is just how things are.
>  
>
It's only when you get to many-axis systems with loads of encoders and 
other I/O that the parport becomes "too slow".  Remember, it's about 
interrupt latency first (which is independent of the interface you use), 
data turnaround second, and throughput a distant last.

>Someone, maybe you, once made the comment that an accessory card for the 
>parport was actually faster than the onboard.  Have any tests been made to 
>somewhat define what the possible speedups might be? 10% isn't much, 400% 
>would solve real problems I'd think.
>  
>
Jon Elson has a lot of experience with this, as well as Peter Wallace.  I 
don't think it was me :)
A plain old I/O card with 8255s or similar, connected via PCI bus, 
should be faster than a parport.  Parallel ports have extra delay logic 
to make sure they don't exceed the parallel port specs, which may be the 
main reason why they all take around a microsecond (+/- a few hundred 
ns) per I/O operation, regardless of bus or CPU speed.
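
To put that ~1 microsecond figure in context, here's a quick 
back-of-the-envelope budget.  The base period and op counts are illustrative 
assumptions, not measurements from this thread:

```python
# Fraction of a realtime-thread period spent on parallel-port I/O,
# assuming ~1 us per register access (the figure above) and an
# illustrative 25 us base-thread period.
US_PER_IO_OP = 1.0      # approx. cost of one parport register access (us)
BASE_PERIOD_US = 25.0   # assumed base-thread period (us)

def io_budget(ops_per_cycle):
    """Fraction of the base period consumed by port I/O."""
    return ops_per_cycle * US_PER_IO_OP / BASE_PERIOD_US

# A parport exposes its pins through 3 registers, so one cycle
# might touch a handful of them:
for ops in (3, 6, 12):
    print(f"{ops} I/O ops -> {io_budget(ops):.0%} of the period")
```

Which is one way to see why interrupt latency, not raw throughput, dominates 
until the I/O count gets large.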

>I have a Rosewill dual port card I am going to put into my shop box to drive 
>the lathe with when and if I get the lathe controllable, and I have the 3 
>axis xylotex card I took from the mill when I made the rotary table drive for 
>it & put a 4 axis xylotex on it.
>  
>
Take a look at the Mesa 7i43 card.  It's an FPGA card that connects to 
the parallel port and provides 48 I/Os which can be configured as PWM, 
step generator, encoder input, etc.  It's like $89 in single quantity.

- Steve


------------------------------------------------------------------------------
_______________________________________________
Emc-users mailing list
Emc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/emc-users
