Hi

I am new to Spark and am working on document classification. Before model
fitting I need to do feature generation: each document has to be converted
into a feature vector, but I am not sure how to do that in Spark. When
testing locally I use a static list of tokens, and as I parse a file I look
up each token and increment its counter.
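
Roughly, the local version looks like this (plain Scala; the token list and
file path are just placeholders for my real ones):

    import scala.io.Source
    import scala.collection.mutable

    // Static vocabulary; the real list is much longer.
    val tokens = Seq("spark", "hadoop", "classification")

    // Count how often each known token appears in one document.
    def featureVector(path: String): Array[Double] = {
      val counts = mutable.Map(tokens.map(_ -> 0.0): _*)
      for (word <- Source.fromFile(path).getLines().flatMap(_.split("\\s+")))
        if (counts.contains(word)) counts(word) += 1.0
      tokens.map(counts).toArray
    }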

In Spark I can create an RDD that loads all the documents, but I am not sure
whether each file goes to a single executor or gets split across several. If
a file is split, the partial feature vectors need to be merged per document,
and I am not able to figure out how to do that.
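
Concretely, my attempt so far looks roughly like this (sc is the
SparkContext, the input path is a placeholder, and tokens is the list from
the local sketch above):

    // One record per line, so a single document may end up spread over
    // several records/partitions -- this is the part I am unsure about.
    val lines = sc.textFile("hdfs:///data/docs")

    // Per-line partial counts over the static token list.
    val partialVectors = lines.map { line =>
      val words = line.split("\\s+")
      tokens.map(t => words.count(_ == t).toDouble).toArray
    }

    // This is where I am stuck: merging the per-line partial vectors
    // back into a single feature vector per document.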

Thanks
Rishi


