At present, the language processor for the input field is not very flexible: 
the user more or less needs to know the basic terms beforehand. I would like 
to work on that aspect and make it comparable to, if not better than, what 
Wolfram Alpha offers at the moment. A foolproof system would take a 
considerable amount of time to build, but since this is an open-source 
project, I think it is a viable proposition.
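
For reference, SymPy already ships a parser whose optional transformations 
relax the strict syntax a little; a more flexible language processor would 
presumably build on or replace this layer. A minimal sketch of what 
parse_expr can tolerate today (the expression string is just an illustrative 
example):

    from sympy import Symbol, integrate
    from sympy.parsing.sympy_parser import (
        parse_expr, standard_transformations,
        implicit_multiplication_application, convert_xor,
    )

    # Transformations that let parse_expr accept more "human" input,
    # e.g. "2x" instead of "2*x" and "x^2" instead of "x**2".
    transformations = standard_transformations + (
        implicit_multiplication_application, convert_xor,
    )

    x = Symbol('x')
    expr = parse_expr("2x + x^2", transformations=transformations)
    print(integrate(expr, x))   # x**3/3 + x**2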

Once that is done, we could use Google Cloud Vision (or build a custom 
neural network) to solve equations, integrals and other common mathematical 
problems supported by SymPy just by taking a photograph of the problem. That 
could further be integrated into a mobile application for better outreach.
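
A rough sketch of that pipeline, assuming the google-cloud-vision Python 
client (version 2 or later) with credentials already configured, and a 
photograph that OCRs cleanly to a single equation string; the function name, 
sample file name and example equation are purely illustrative:

    import io

    from google.cloud import vision
    from sympy import Eq, Symbol, solve
    from sympy.parsing.sympy_parser import (
        parse_expr, standard_transformations, convert_xor,
    )

    def solve_from_photo(path):
        """OCR a photographed equation with Cloud Vision, then solve it
        with SymPy. Assumes the image holds one equation in x,
        e.g. 'x^2 - 4 = 0'."""
        client = vision.ImageAnnotatorClient()
        with io.open(path, 'rb') as f:
            image = vision.Image(content=f.read())
        response = client.text_detection(image=image)
        text = response.text_annotations[0].description.strip()

        # Split the OCR output into the two sides of the equation and
        # parse each with SymPy ('^' is converted to '**').
        lhs, rhs = text.split('=')
        transformations = standard_transformations + (convert_xor,)
        x = Symbol('x')
        equation = Eq(parse_expr(lhs, transformations=transformations),
                      parse_expr(rhs, transformations=transformations))
        return solve(equation, x)

    # print(solve_from_photo('problem.jpg'))   # e.g. [-2, 2]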

Do these two proposals align with the community's objectives? If so, they 
might take an entire summer of coding, so would it be feasible to start 
right now and continue until the end of the summer?
