(Books on Spark are not produced by the Spark project, and this is not
the right place to ask about them. This question was already answered
offline, too.)

On Thu, Feb 26, 2015 at 6:38 PM, Deepak Vohra
<dvohr...@yahoo.com.invalid> wrote:
>   The Ch 6 listing from Advanced Analytics with Spark generates an error.
> The listing is
>
> def plainTextToLemmas(text: String, stopWords: Set[String],
>     pipeline: StanfordCoreNLP): Seq[String] = {
>   val doc = new Annotation(text)
>   pipeline.annotate(doc)
>   val lemmas = new ArrayBuffer[String]()
>   val sentences = doc.get(classOf[SentencesAnnotation])
>   for (sentence <- sentences;
>        token <- sentence.get(classOf[TokensAnnotation])) {
>     val lemma = token.get(classOf[LemmaAnnotation])
>     if (lemma.length > 2 && !stopWords.contains(lemma) &&
>         isOnlyLetters(lemma)) {
>       lemmas += lemma.toLowerCase
>     }
>   }
>   lemmas
> }
>
> The error is
>
> <console>:37: error: value foreach is not a member of
> java.util.List[edu.stanford.nlp.util.CoreMap]
>        for (sentence <- sentences; token <- sentence.get(classOf[TokensAnnotation])) {
>                         ^

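(For reference: this error means a `java.util.List` is being iterated with a Scala `for` comprehension, which requires a Scala collection. The usual fix is bringing the Java-to-Scala conversions into scope; the book's listings rely on an import such as `scala.collection.JavaConversions._` for this. A minimal self-contained sketch, assuming the `JavaConverters` API and a made-up list in place of the CoreNLP annotations:)

```scala
// Sketch of the usual fix: java.util.List has no `foreach`/`<-` support,
// so convert it to a Scala collection first. Uses JavaConverters; the
// book instead imports scala.collection.JavaConversions._ for implicit
// conversion. The sample list stands in for doc.get(...) results.
import scala.collection.JavaConverters._

val javaList: java.util.List[String] =
  java.util.Arrays.asList("alpha", "beta", "gamma")

// .asScala wraps the Java list in a Scala Buffer, so `for` works
val upper = for (s <- javaList.asScala) yield s.toUpperCase

println(upper.mkString(","))  // ALPHA,BETA,GAMMA
```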
---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org