Hi all,
  I am an engineering graduate working on an NLP-related project. I am new to
natural language processing and have finally started to understand it. The
aim of my project is to design a lab with various activities for each grammar
topic, for example:


*Examples of grammar topics:*
subject-verb agreement
tense
verb forms
reported speech, etc.

*Examples of activities for the tenses topic:*
Fill in the blank with the correct tense: I am ______ (to play) football
Change the tense of a sentence
Recognize the tense of a sentence
etc.


I will elaborate on one activity: *Recognize the tense of a sentence.*
 *Objective of the activity:*
 We give the students some sentences, and they identify the tense of each
sentence.


*Our current procedure*
We determine the tense of a sentence using NLP at the backend. We have a
corpus of English textbooks that we will use for similar grammar topics.

For now, I know of two approaches to this: rule-based NLP and statistical
NLP.

So I can either write rules to identify the data related to a specific
activity, or use statistical NLP.
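To illustrate the rule-based option: for tense recognition, even a handful of hand-written rules over auxiliaries and verb endings can serve as a starting point. The sketch below uses my own toy rules (not from any library) and will obviously misfire on many real sentences; a production system would use a POS tagger instead.

```python
def recognize_tense(sentence):
    """Toy rule-based tense recognizer: checks for future auxiliaries,
    then past auxiliaries / -ed endings, and defaults to present."""
    words = sentence.lower().rstrip(".?!").split()
    if any(w in ("will", "shall") for w in words):
        return "future"
    if any(w in ("was", "were", "did", "had") for w in words) \
            or any(w.endswith("ed") for w in words):
        return "past"
    return "present"

print(recognize_tense("I will play football."))   # future
print(recognize_tense("She played football."))    # past
print(recognize_tense("I am playing football."))  # present
```

The appeal of rules is that they are transparent and need no training data; the drawback is that irregular verbs ("went", "ate") and complex constructions quickly break them.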

Which should I choose?

I know there are various NLP toolkits like Stanford CoreNLP, OpenNLP, etc.
They have models for POS tagging, chunking, and so on.

*So, do I need to build a model for each grammar topic if I use the
statistical approach?*

I wonder *how I can build a model* for tense or any other topic and extract
the data I need for the activities.
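On extracting activity data from the corpus: once some component (a tagger, or rules) has identified a verb form and its base form in a corpus sentence, turning that sentence into a fill-in-the-blank item can be plain string manipulation. A minimal sketch, assuming the verb form and base form are already known:

```python
def make_fill_in_blank(sentence, verb_form, base_form):
    """Blank out a known verb form in a corpus sentence and show
    the infinitive as a hint, producing a fill-in-the-blank item."""
    blank = "______ (to %s)" % base_form
    return sentence.replace(verb_form, blank, 1)

item = make_fill_in_blank("I am playing football", "playing", "play")
print(item)  # I am ______ (to play) football
```

The hard part, of course, is reliably finding `verb_form` and `base_form` in the first place, which is where a POS tagger and a lemmatizer from one of the toolkits above would come in.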

Would such a model integrate with the other NLP tools like Stanford CoreNLP?


*Is there any other approach? Please tell me if I am going wrong somewhere.*
