On Wednesday, December 16, 2015 at 8:41:23 PM UTC+4, Chris Angelico wrote:
> On Thu, Dec 17, 2015 at 3:33 AM,  <fsn761...@gmail.com> wrote:
> > On Wednesday, December 16, 2015 at 6:33:56 PM UTC+4, Chris Angelico wrote:
> >> On Thu, Dec 17, 2015 at 1:21 AM,  <fsn761...@gmail.com> wrote:
> >> > I tried also another code (see below) and without scaling by 20 quality 
> >> > of recognition was very bad.
> >> >
> >> > from pytesseract import image_to_string
> >> > from PIL import Image
> >> >
> >> > im = Image.open("screen.png")
> >> > print(im)
> >> > im = im.resize((214*20,26*20), Image.ANTIALIAS)
> >> > print(image_to_string(im))
> >>
> >> If you need to scale by 20x20 to get the text recognition to work, I
> >> would recommend using something other than an anti-alias filter. Omit
> >> the second argument to use a simpler algorithm; you'll get a blocky
> >> result, which might parse more cleanly for you.
> >>
> >> ChrisA
> >
> > It didn't help to recognize words. The main problem is that the image is
> > slanted to the left, like a backslash.
> 
> Interesting. I'm not sure what exactly is going on, as I can't see
> your image or the result, but is it possible that the text is getting
> wrapped? Try printing it to a file, then pulling the file up in a text
> editor with word wrap disabled. Maybe it'll look different.
> 
> ChrisA
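
Chris's earlier nearest-neighbour suggestion can be sketched as follows. This is a hypothetical example (not code from the thread), assuming Pillow is installed; a blank stand-in image is used in place of the original "screen.png":

```python
from PIL import Image

# Minimal sketch: upscale 20x with nearest-neighbour instead of the
# anti-alias filter. NEAREST keeps hard pixel edges rather than
# smoothing them, which can sometimes OCR more cleanly for small text.
im = Image.new("L", (214, 26), color=255)   # stand-in for screen.png
big = im.resize((214 * 20, 26 * 20), Image.NEAREST)
print(big.size)   # (4280, 520)
```

In real use you would pass `big` to `image_to_string()` and compare the result against the ANTIALIAS version.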
When I saved the image to a directory, the quality was fine and recognition was 
good. (There is a commented-out line that saves it to a directory: 
#image.save("saved.png") ) It seems I did something wrong with the image buffer.
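
One way to isolate a buffer problem like this is to round-trip the image through an in-memory buffer and inspect what comes out. The sketch below is hypothetical (not the poster's actual code) and assumes Pillow; a lossless PNG round-trip should preserve the pixels exactly, so if OCR quality drops only when going through a buffer, the buffer handling (format, rewind, mode) is the likely culprit:

```python
from io import BytesIO
from PIL import Image

# Hypothetical sketch: write the image to an in-memory PNG buffer,
# rewind, and reopen it. Forgetting seek(0) before Image.open() is a
# common source of corrupted or unreadable buffer data.
im = Image.new("RGB", (214, 26), "white")   # stand-in for the real image
buf = BytesIO()
im.save(buf, format="PNG")   # lossless encode into memory
buf.seek(0)                  # rewind before reopening
restored = Image.open(buf)
print(restored.size, restored.mode)   # (214, 26) RGB
```

Saving a debug copy with `restored.save("saved.png")`, as the commented-out line in the thread does, then lets you view exactly what the OCR engine receives.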
-- 
https://mail.python.org/mailman/listinfo/python-list