Please keep replies on-list.

For pick-and-place, there are a few open-source projects working on
solving the problem, such as OpenPNP:

http://openpnp.org/

Often, the "back-end" performing the motion does use gcode.

On 2/26/2019 2:01 PM, Chris Albertson wrote:
> I just read TI's paper on this.  They describe the workflow for using
> the machine learning and vision subsystem, and the workflow is not as
> horrible as I feared.  In fact it seems straightforward if you are
> already familiar with machine learning and vision.  For others, the
> 15-second summary is this:
> 
> You develop your machine learning or vision system on high-end PC hardware
> (at least an NVIDIA GTX 10xx GPU) under Linux.  You use familiar tools like
> TensorFlow and OpenCV.  Then there is TI software that takes what you
> have running on the PC and translates it to run on the much smaller device
> on the BeagleBone.  The Beagle is about 100x slower, but that speed is
> really only needed for training a network, not for running it.
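
For the curious, the PC-side step might look roughly like this.  The
data and model here are toy placeholders, not anything from TI's paper:

# Rough sketch of the PC-side step: train a small classifier with TensorFlow,
# then save it for a downstream converter.  Dataset and model are placeholders.
import numpy as np
import tensorflow as tf

# Toy data standing in for labeled camera frames (32x32 grayscale, 2 classes).
x_train = np.random.rand(256, 32, 32, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(256,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=3)   # the GPU-heavy part

model.save("part_classifier")           # saved model for the conversion step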
> 
> 
> For those not in the field: "machine learning" is almost a misnomer.  The
> training and learning happen on the big PC; then we "freeze" a snapshot and
> move it to the tiny chip, where it never learns or changes behavior.  The
> learning happens only in the lab.
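
TI's importer is its own tool, but the "freeze" step is the same idea
you see elsewhere.  Sketched with TensorFlow Lite as a stand-in:

# The "freeze" step, using TensorFlow Lite as a stand-in for TI's converter:
# turn the trained model into a fixed, inference-only artifact that never
# learns on-device.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("part_classifier")
tflite_model = converter.convert()      # weights are now baked in

with open("part_classifier.tflite", "wb") as f:
    f.write(tflite_model)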
> 
> 
> Now here is my question:  The simplest use case of this that relates to
> Machinekit is a pick and place machine.  This is a 2-axis machine that
> picks up a tiny part from a surface and drops it somewhere else on the
> surface.  It does not even have a true Z axis.  But the catch is finding
> the part and finding the *exact* place to drop the part.  For that we need
> a camera.  No one uses g-code for these machines, because we can't know in
> advance how the machine is to move.  So what we do is tell the machine
> where the parts are in general, and where the part should go relative to
> the final assembly.
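
That "finding the part" step is the classic machine-vision piece.  A
minimal sketch with OpenCV template matching; the file names and the
pixels-to-millimeters scale are made-up placeholders, and a real machine
would use a calibrated camera and more robust detection:

# Find one part in a camera frame by template matching and report its
# center in machine coordinates.
import cv2

MM_PER_PIXEL = 0.05                     # assumed camera calibration

frame = cv2.imread("tray.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("part_0402.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)

if score > 0.8:                         # confidence threshold
    th, tw = template.shape
    cx = (top_left[0] + tw / 2) * MM_PER_PIXEL
    cy = (top_left[1] + th / 2) * MM_PER_PIXEL
    print(f"part center at X{cx:.3f} Y{cy:.3f} (score {score:.2f})")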
> 
> It seems to me this machine would replace the g-code interpreter with
> different logic but otherwise could work exactly the same.  Or perhaps the
> g-code is not read from a file, but there is a process that generates it in
> real time based on the camera.
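
That second idea maps onto the Python interface LinuxCNC (and the
Machinekit fork) already ship: a small process generates moves from
camera fixes and feeds them as MDI blocks, no file involved.  A hedged
sketch; find_part() stands in for the vision step above, and M10 is the
same hypothetical vacuum code as before:

# Stream generated g-code blocks through the LinuxCNC/Machinekit Python API
# instead of reading a program from a file.
import linuxcnc

def find_part():
    # Placeholder for the camera step sketched earlier; returns machine X, Y.
    return (12.5, 40.0)

cmd = linuxcnc.command()
cmd.mode(linuxcnc.MODE_MDI)
cmd.wait_complete()

x, y = find_part()
for block in (f"G0 X{x} Y{y}", "G1 Z0 F200", "M10", "G0 Z5"):
    cmd.mdi(block)                      # one MDI block at a time
    cmd.wait_complete()                 # let the move finish before the next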
> 
> What I'd like is to use my Harbor Freight mini mill as a pick and place
> machine.  This would be REALLY popular: simply place a little vacuum
> picker on the spindle, and for very little cost you have a slow pick
> and place machine suitable for hobby-level PCB assembly.  The HF mill
> should be accurate enough.
> 
> In any case, how were you planning to connect this vision and machine
> learning hardware to MK?
> 
> 
> On Tue, Feb 26, 2019 at 8:15 AM Charles Steinkuehler <
> char...@steinkuehler.net> wrote:
> 
>> FYI:
>> The BeagleBone-AI may be a good fit for your project:
>>
>>
>> https://www.facebook.com/photo.php?fbid=10218976824519992&set=a.2907631578284&type=3&theater
>>
>> It should do machine vision _much_ better than the BBB.  It's
>> basically the SoC from the X15 in a BeagleBone form factor.  I'm
>> working on getting Machinekit working on this board and verifying
>> capes work as expected.
>>


-- 
Charles Steinkuehler
char...@steinkuehler.net
