To train a backprop neural net to recognize a playing card viewed through an
iPad camera, I want the pixel dimensions of the training-set card images and
the actual card images to correspond as closely as practical, by controlling
how far the real playing cards are from the iPad when real recognition is
attempted. I am training the network on artificially generated images
because such generation is much easier, I think.

I don't quite understand how to translate the camera lens's quality into the
pixel count of the training set. My iPad produces .jpg pictures with a
"Dimension" of 720 x 960 (the example photo I used for this message is 227 KB
and is not a playing card). When I open that picture in Photoshop Elements,
it says that at 100% (RGB/8) the image is 10 x 13.333 inches at 72 ppi.
A playing card is 2.5 x 3.5 inches.
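In case it helps frame the question: here is one way the geometry might be worked out, as a minimal sketch assuming a simple pinhole-camera model and a guessed field of view. The 44-degree horizontal FOV below is purely an assumption for illustration; the actual iPad camera's FOV would need to be looked up or measured. (The 72 ppi figure is just file metadata: 720 px / 72 ppi = 10 inches, so it says nothing about the real scene.)

```python
import math

def object_pixels(object_inches, distance_inches, image_px, fov_degrees):
    """Approximate how many pixels an object spans along one axis,
    using a pinhole-camera model.

    object_inches:   physical size of the object along that axis
    distance_inches: camera-to-object distance
    image_px:        image resolution along the same axis
    fov_degrees:     field of view along that axis (assumed, not measured)
    """
    # Width of the whole visible scene at the object's distance.
    scene_inches = 2.0 * distance_inches * math.tan(math.radians(fov_degrees) / 2.0)
    # The object occupies a proportional share of the image pixels.
    return image_px * object_inches / scene_inches

# A 2.5-inch-wide card, 12 inches from the camera, 720 px across,
# assumed 44-degree horizontal FOV:
print(round(object_pixels(2.5, 12.0, 720, 44.0)))
```

If something like this is right, the distance could then be chosen so the card's on-screen pixel size matches the size used in the generated training images.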

So, can I get some guidance on whether that is enough information and, if
so, how to use it?

Thanks,

-- 
(B=)
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
