Hi Jacek,

Thanks for your response. This is the code I am trying to execute:

import org.apache.spark.sql.functions._
import com.databricks.spark.corenlp.functions._

val inputd = Seq(
  (1, "<xml>Stanford University is located in California. </xml>")
).toDF("id", "text")

val output = inputd
  .select(cleanxml(col("text")))
  .withColumnRenamed("UDF(text)", "text")

val out = output
  .select(lemma(col("text")))
  .withColumnRenamed("UDF(text)", "text")

output.show() works.

The error happens when I execute *out.show()*
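For reference, my current guess (not verified) is that this is a classpath issue: in my build.sbt the CoreNLP models jar is pulled in with the "test" scope, so it is only on the test classpath, and lemmatization needs the models at runtime. A sketch of the change I am going to try, keeping the same versions:

```scala
// build.sbt -- sketch, assuming the same dependency versions as before.
// Drop the "test" scope on the "models" classifier so the models jar is
// on the main (compile/runtime) classpath, not only the test classpath.
libraryDependencies ++= Seq(
  "edu.stanford.nlp" % "stanford-corenlp" % "3.6.0",
  "edu.stanford.nlp" % "stanford-corenlp" % "3.6.0" classifier "models",
  "com.google.protobuf" % "protobuf-java" % "2.6.1",
  "org.scalatest" %% "scalatest" % "2.2.6" % "test"
)
```

If running via spark-submit, the CoreNLP jars may also need to be shipped to the executors (e.g. with --jars or by building a fat jar), but I have not confirmed that is the cause here.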



On Sun, Sep 18, 2016 at 11:58 AM, Jacek Laskowski <ja...@japila.pl> wrote:

> Hi Janardhan,
>
> Can you share the code that you execute? What's the command? Mind
> sharing the complete project on github?
>
> Pozdrawiam,
> Jacek Laskowski
> ----
> https://medium.com/@jaceklaskowski/
> Mastering Apache Spark 2.0 http://bit.ly/mastering-apache-spark
> Follow me at https://twitter.com/jaceklaskowski
>
>
> On Sun, Sep 18, 2016 at 8:01 PM, janardhan shetty
> <janardhan...@gmail.com> wrote:
> > Hi,
> >
> > I am trying to use lemmatization as a transformer and added the below to
> > the build.sbt
> >
> >   "edu.stanford.nlp" % "stanford-corenlp" % "3.6.0",
> >   "com.google.protobuf" % "protobuf-java" % "2.6.1",
> >   "edu.stanford.nlp" % "stanford-corenlp" % "3.6.0" % "test" classifier "models",
> >   "org.scalatest" %% "scalatest" % "2.2.6" % "test"
> >
> >
> > Error:
> > Exception in thread "main" java.lang.NoClassDefFoundError:
> > edu/stanford/nlp/pipeline/StanfordCoreNLP
> >
> > I have tried other versions of this spark package.
> >
> > Any help is appreciated.
>
