On 11 September 2012 05:42, EBo wrote:
> thread #1                  thread #2
> wait until flag1 unset     some instruction...
> some instruction...        wait until flag1 unset
> set flag1                  some instruction...
> some other instruction...  set flag1
Ah. I had
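The quoted interleaving can be replayed deterministically. A minimal Python sketch (illustrative only, no LinuxCNC code; all names made up) showing that because each thread tests the flag before either sets it, both get into the critical section:

```python
# Deterministic replay of the interleaving above: both threads test
# flag1 before either sets it, so both enter the critical section.
flag1 = False
entered = []

def step_test(name):
    # "wait until flag1 unset" succeeds because flag1 is still False
    if not flag1:
        entered.append(name)

def step_set():
    global flag1
    flag1 = True            # "set flag1" happens too late

step_test("thread1")        # thread #1 tests flag1 -> unset
step_test("thread2")        # thread #2 tests flag1 -> still unset
step_set()                  # thread #1 sets flag1
step_set()                  # thread #2 sets flag1
print(entered)              # both threads got into the critical section
```

The window between the test and the set is exactly what an atomic test-and-set closes.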
On Monday 10 September 2012 16:45:59 Michael Haberler wrote:
> Reps  Time(s)   DGEFA   DGESL  OVERHEAD     KFLOPS
>
>   16     0.66  90.91%   3.03%     6.06%  35440.860
>   32     1.33  88.72%   3.01%     8.27%  36021.858
>   64     2.64  88.26
Michael,
Not sure where to start or finish here. It will be interesting to see
how HAL and threads interact to deal with critical regions. I can see it
working; the question is how elegantly we can get this to work. I
think it might be as simple as a semaphore (stretched between two user
defi
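As a sketch of the semaphore idea, here is the two-threads-one-counter version in Python (purely illustrative; the real thing would live in HAL/C, and all names here are made up):

```python
import threading

counter = 0
sem = threading.Semaphore(1)   # binary semaphore, i.e. a mutex

def worker():
    global counter
    for _ in range(50_000):
        with sem:              # wait/acquire ... signal/release
            counter += 1       # critical section: read-modify-write

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                 # 100000: no increments were lost
```

Without the semaphore the two read-modify-write sequences could interleave and lose updates; with it, each increment is a critical section.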
You are welcome...
I'm tired enough that I cannot think through this, but if the following
example honors full concurrency then I agree. Can you set up an
experimental test bench that fully exercises a number of critical
sections across threads? I am not sure if it would work in Glade or
not
Artec has built a 6-DOF NURBS in an FPGA, and hacked EMC2 (sim) to feed it.
An open source solution for FPGA would be fascinating to me, but the
problem might have already been solved. What do you want to do that is
different than their solution?
On Tue, 11 Sep 2012 00:23:44 +0300, Topi Rinkinen w
On Tue, 11 Sep 2012 01:32:59 +0200, andy pugh wrote:
> On 10 September 2012 20:29, EBo wrote:
>> I do not think this will work. Your blocking flag needs to be read
>> and set with an atomic operation (i.e., a single instruction).
>
> Why isn't a block atomic in this context?
Did you intend your read/s
>
> > Clearly nobody is using it lately :)
>
>
> I use it for my lathe and haven't noticed any problems.
>
>
> > I was thinking that if this was embeddable in the screen and could
> > access the python script style wizards that it would be a very handy
> > winner.
>
>
> I was thinking the op
On 10 September 2012 20:29, EBo wrote:
> I do not think this will work. Your blocking flag needs to be read and set
> with an atomic operation (i.e., a single instruction).
Why isn't a block atomic in this context?
--
atp
If you can't fix it, you don't own it.
http://www.ifixit.com/Manifesto
--
On Mon, 2012-09-10 at 16:41 -0400, Kent A. Reed wrote:
> On 9/10/2012 10:45 AM, Michael Haberler wrote:
> > here's linpack figures for the Rpi and an Intel D525.
> >
> > the D525 is almost a factor of 6 faster than the Rpi for this benchmark
> >
> > (pull test code with wget http://www.netlib.org/
Hi,
I have RPi playing Internet radio, and another one waiting for some fun
activities.
I have been thinking about integrating a small FPGA or CPLD with the RPi,
especially targeted to CNC or motor control applications.
For starters one could use Lattice's MachXO2 based evaluation kits (USD
30), and connect
On 9/10/2012 10:45 AM, Michael Haberler wrote:
> here's linpack figures for the Rpi and an Intel D525.
>
> the D525 is almost a factor of 6 faster than the Rpi for this benchmark
>
> (pull test code with wget http://www.netlib.org/benchmark/linpackc.new)
>
> pi@raspberrypi ~/tests $ gcc -O linpack
On Mon, Sep 10, 2012 at 10:58:28AM -0500, Daniel Rogge wrote:
> +def report_gcode_error(self, result, seq, filename):
> + error_str = gcode.strerror(result)
> + sys.stderr.write("G-Code error in " + os.path.basename(filename) + "\n"
> + "Near line "
> + + st
On 9/10/2012 1:13 PM, Michael Haberler wrote:
> Am 10.09.2012 um 18:48 schrieb Jon Elson:
>
>> Michael Haberler wrote:
>>> here's linpack figures for the Rpi and an Intel D525.
>>>
>>> the D525 is almost a factor of 6 faster than the Rpi for this benchmark
>>>
>> Thanks much for doing this! Not a
I do not think this will work. Your blocking flag needs to be read and set
with an atomic operation (i.e., a single instruction). See:
Mutual Exclusion: http://en.wikipedia.org/wiki/Mutual_exclusion
Atomicity: http://en.wikipedia.org/wiki/Atomicity_%28programming%29
and Concurrency: http://en.wikipedi
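For illustration, here is a test-and-set flag built from a lock in Python (a sketch, not HAL code; names are made up). The point is that the read and the write happen as one indivisible step, so exactly one contender wins:

```python
import threading

class AtomicFlag:
    """Test-and-set built from a lock: read-and-set is one indivisible step."""
    def __init__(self):
        self._lock = threading.Lock()
        self._set = False

    def test_and_set(self):
        with self._lock:        # nobody can get between the read and the write
            was = self._set
            self._set = True
            return was

    def clear(self):
        with self._lock:
            self._set = False

flag = AtomicFlag()
winners = []

def try_enter(name):
    # Only the first caller observes False and enters the critical section.
    if not flag.test_and_set():
        winners.append(name)

threads = [threading.Thread(target=try_enter, args=(f"t{i}",)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(winners))             # exactly one thread won
```

On real hardware the lock would be replaced by a single instruction (e.g. a compare-and-swap), which is what makes the naive check-then-set flag unsafe.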
I took a moment and looked at that thread. Those numbers are worst
case. Another poster reported ~2 us under light load. It would be nice to set
up a latency tester which artificially mucked with some threads to get
realistic numbers.
Hmmm... is there any instrumentation in the simulation code to
cal
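A crude sketch of such a latency tester in Python (purely illustrative; a real RT measurement would use cyclictest or the HAL latency tools): sleep on a fixed period and record how late each wakeup is relative to the ideal schedule:

```python
import time

def measure_jitter(period_s=0.001, n=200):
    """Crude timer-latency probe: sleep for a fixed period and record
    the worst-case lateness of each wakeup versus the ideal schedule."""
    deadline = time.monotonic()
    worst = 0.0
    for _ in range(n):
        deadline += period_s
        # Sleep until the next deadline (or not at all if already late).
        time.sleep(max(0.0, deadline - time.monotonic()))
        late = time.monotonic() - deadline
        worst = max(worst, late)
    return worst

print(f"worst-case wakeup lateness: {measure_jitter() * 1e6:.0f} us")
```

Running this while other processes "muck with" the CPU (builds, disk I/O, network load) gives a rough picture of how latency degrades under load.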
Alex dug this out on Xenomai on the Rpi and it is an interesting read:
http://www.raspberrypi.org/phpBB3/viewtopic.php?t=12368&p=154691
> A quick (very quick) test shows typical latencies in the order of 20-30 us and
> peaking at ~70 us for user space tasks. The kernel space latencies are even
>
Point of view is everything.
You said:
the D525 is almost a factor of 6 faster than the Rpi for this benchmark
I would have said
the Rpi is only a factor of 6 slower for this benchmark
:-)
It's also a factor of 20 smaller (a guess) and uses a factor of 20 less power
(also a guess).
Regards
Am 10.09.2012 um 18:48 schrieb Jon Elson:
> Michael Haberler wrote:
>> here's linpack figures for the Rpi and an Intel D525.
>>
>> the D525 is almost a factor of 6 faster than the Rpi for this benchmark
>>
> Thanks much for doing this! Not a great result, though. My feeling is
> that the
>
Am 10.09.2012 um 18:07 schrieb Kent A. Reed:
> On 9/10/2012 10:45 AM, Michael Haberler wrote:
>> here's linpack figures for the Rpi and an Intel D525.
>>>
>>> <...>
>
> Thanks, Michael.
>
> I'm *still* waiting for my RPi so I spent some time looking up
> benchmarking info regarding ARM vs the
Michael Haberler wrote:
> here's linpack figures for the Rpi and an Intel D525.
>
> the D525 is almost a factor of 6 faster than the Rpi for this benchmark
>
Thanks much for doing this! Not a great result, though. My feeling is
that the
Atom processors are marginal for LinuxCNC, especially
On 9/10/2012 10:45 AM, Michael Haberler wrote:
> here's linpack figures for the Rpi and an Intel D525.
>
> the D525 is almost a factor of 6 faster than the Rpi for this benchmark
>
> (pull test code with wget http://www.netlib.org/benchmark/linpackc.new)
>
> pi@raspberrypi ~/tests $ gcc -O linpack
> File "/home/chris/emc2-dev/lib/python/gremlin.py", line 351, in
> report_gcode_error
> + str(seq) + " of\n" + filename + "\n" + error_str + "\n")
> IOError: [Errno 9] Bad file descriptor
>
> It actually does print the gcode error and file name properly...
> Is this because hal_gremlin need
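One plausible guard for that failure mode, assuming the root cause is a write to an unusable stderr in the embedding GUI; this is a sketch, not necessarily the fix hal_gremlin needs:

```python
import sys

def safe_stderr_write(msg):
    """Write to stderr, swallowing 'Bad file descriptor' style errors.

    When embedded in a GUI the process may have no usable stderr, so a
    plain sys.stderr.write() raises IOError (Errno 9).  Returns True if
    the write succeeded, False if it was swallowed.
    """
    try:
        sys.stderr.write(msg)
        return True
    except (IOError, OSError):
        return False
```

The error message itself would then still be shown through whatever GUI channel is available, with the stderr copy as best-effort only.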
here's linpack figures for the Rpi and an Intel D525.
the D5252 is almost a factor of 6 faster than the Rpi for this benchmark
(pull test code with wget http://www.netlib.org/benchmark/linpackc.new)
pi@raspberrypi ~/tests $ gcc -O linpackc.c -lm -olinpackc
pi@raspberrypi ~/tests $ ./linpackc
E
Am 10.09.2012 um 15:11 schrieb andy pugh:
> Ignoring (for a moment) the means of communication, could the
> handshaking be as simple as a few more M-codes?
>
> M90 clear flags
> M91 P1 wait for blocking flag 1 to clear
> M92 P2 set blocking flag 2
> M93 P2 release blocking flag 2
eventually I
The only info I can find on gwiz is on the wiki and seems to be quite
short. It does seem to be aimed at non-programmers but lacks the usual
screenshots that non-programmers might need in order to understand what
it can do and what to do to use it. In all honesty I thought it was a
dead project
could be we're reinventing some wheel here
how do manufacturers of controls deal with synchronization?
-m
Am 10.09.2012 um 14:30 schrieb Kenneth Lerman:
> The notion of statically created HAL pins being used to represent sync
> points seems to cause a problem if sync points are within loops.
>
Ignoring (for a moment) the means of communication, could the
handshaking be as simple as a few more M-codes?
M90 clear flags
M91 P1 wait for blocking flag 1 to clear
M92 P2 set blocking flag 2
M93 P2 release blocking flag 2
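A toy model of the proposed M-code semantics (hypothetical names and behavior; real handlers would presumably be remapped codes in the interpreter), modelled as operations on a shared flag table where a set flag means "blocking":

```python
flags = {}

def m90():                       # M90: clear all flags
    flags.clear()

def m91_would_block(p):          # M91 Pn: wait until flag n is clear
    return flags.get(p, False)   # True means the program would wait here

def m92(p):                      # M92 Pn: set blocking flag n
    flags[p] = True

def m93(p):                      # M93 Pn: release blocking flag n
    flags[p] = False

m90()
m92(2)
print(m91_would_block(2))        # True  -> M91 P2 would wait here
m93(2)
print(m91_would_block(2))        # False -> M91 P2 proceeds
```

The open question from the rest of the thread is how M91's wait and M92's set interact across concurrently executing programs, i.e. whether the test-and-set is atomic.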
Am 10.09.2012 um 14:30 schrieb Kenneth Lerman:
> The notion of statically created HAL pins being used to represent sync
> points seems to cause a problem if sync points are within loops.
>
> They could no longer be reached monotonically.
I wrote the below, but 'undecidable' might have to be re
On Mon, Sep 10, 2012 at 1:30 PM, Kenneth Lerman
wrote:
> The notion of statically created HAL pins being used to represent sync
> points seems to cause a problem if sync points are within loops.
>
> They could no longer be reached monotonically.
When slaved, master and slave spindles would be dri
Frank has been the only source of significant positive feedback on this
effort.
I've obviously failed to communicate how powerful, useful, and easy to
use and expand this tool could be. (I guess all fathers think their
children are beautiful.)
I should probably do some more work on communicat
The notion of statically created HAL pins being used to represent sync
points seems to cause a problem if sync points are within loops.
They could no longer be reached monotonically.
Regards,
Ken
On 9/10/2012 2:05 AM, Michael Haberler wrote:
> Actually I can think of a much simpler solution wh
> Clearly nobody is using it lately :)
I use it for my lathe and haven't noticed any problems.
> I was thinking that if this was embeddable in the screen and could
> access the python script style wizards that it would be a very handy
> winner.
I was thinking the opposite. It would be nice if