Is there an 'initial cause' listed under that exception you gave?
NoClassDefFoundError is not exactly the same as ClassNotFoundException:
it means that ColumnMapper's static initializer could not complete. That
could be because some other class couldn't be found, or because of some
other error unrelated to the class loader.
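
For illustration, here is a minimal, self-contained sketch (hypothetical
names, nothing taken from your build) of how a failing static initializer
shows up: the first use of the class throws ExceptionInInitializerError
carrying the real cause, and every later use reports NoClassDefFoundError
with no cause attached.

// Hypothetical example: a top-level object whose body (its static
// initializer on the JVM) throws during class initialization.
object Unlucky {
  throw new RuntimeException("static init failed")
}

object InitDemo {
  def main(args: Array[String]): Unit = {
    // First access: java.lang.ExceptionInInitializerError,
    // whose getCause is the underlying problem to chase.
    try println(Unlucky)
    catch { case t: Throwable => println(t); println(t.getCause) }

    // Any later access: java.lang.NoClassDefFoundError:
    // Could not initialize class Unlucky$
    try println(Unlucky)
    catch { case t: Throwable => println(t) }
  }
}

So if your log shows a cause (or an earlier ExceptionInInitializerError),
that is the thing to investigate; if there is none, the class is probably
simply not on the runtime classpath.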

On 2015-03-31 10:42, Tiwari, Tarun wrote: 

> Hi Experts, 
> 
> I am getting java.lang.NoClassDefFoundError: 
> com/datastax/spark/connector/mapper/ColumnMapper while running a app to load 
> data to Cassandra table using the datastax spark connector 
> 
> Is there something else I need to import in the program or dependencies? 
> 
> RUNTIME ERROR:
> Exception in thread "main" java.lang.NoClassDefFoundError:
> com/datastax/spark/connector/mapper/ColumnMapper
>         at ldCassandraTable.main(ld_Cassandra_tbl_Job.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 
> BELOW IS MY SCALA PROGRAM 
> 
> /*** ld_Cassandra_Table.scala ***/
> 
> import org.apache.spark.SparkContext
> import org.apache.spark.SparkContext._
> import org.apache.spark.SparkConf
> import com.datastax.spark.connector
> import com.datastax.spark.connector._
> 
> object ldCassandraTable {
> 
>   def main(args: Array[String]) {
>     val fileName = args(0)
>     val tblName = args(1)
> 
>     val conf = new SparkConf(true)
>       .set("spark.cassandra.connection.host", "<MASTER HOST>")
>       .setMaster("<MASTER URL>")
>       .setAppName("LoadCassandraTableApp")
>     val sc = new SparkContext(conf)
>     sc.addJar("/home/analytics/Installers/spark-cassandra-connector-1.1.1/spark-cassandra-connector/target/scala-2.10/spark-cassandra-connector-assembly-1.1.1.jar")
> 
>     val normalfill = sc.textFile(fileName).map(line => line.split('|'))
>     normalfill.map(line => (line(0), line(1), line(2), line(3), line(4), line(5),
>       line(6), line(7), line(8), line(9), line(10), line(11), line(12), line(13),
>       line(14), line(15), line(16), line(17), line(18), line(19), line(20),
>       line(21))).saveToCassandra(keyspace, tblName, SomeColumns("wfctotalid",
>       "timesheetitemid", "employeeid", "durationsecsqty", "wageamt", "moneyamt",
>       "applydtm", "laboracctid", "paycodeid", "startdtm", "stimezoneid",
>       "adjstartdtm", "adjapplydtm", "enddtm", "homeaccountsw", "notpaidsw",
>       "wfcjoborgid", "unapprovedsw", "durationdaysqty", "updatedtm",
>       "totaledversion", "acctapprovalnum"))
> 
>     println("Records Loaded to %s".format(tblName))
>     Thread.sleep(500)
>     sc.stop()
>   }
> }
> 
> BELOW IS THE SBT FILE:
> 
> name := "POC"
> 
> version := "0.0.1"
> 
> scalaVersion := "2.10.4"
> 
> // additional libraries
> libraryDependencies ++= Seq(
>   "org.apache.spark" %% "spark-core" % "1.1.1" % "provided",
>   "org.apache.spark" %% "spark-sql" % "1.1.1" % "provided",
>   "com.datastax.spark" %% "spark-cassandra-connector" % "1.1.1" % "provided"
> )
> 
> Regards, 
> 
> TARUN TIWARI | Workforce Analytics-ETL | KRONOS INDIA 
> 
> M: +91 9540 28 27 77 | Tel: +91 120 4015200 
> 
 

