You can try pulling the jar with wget and passing it to spark-shell with --jars. 
I used 1.0.3 with Spark 1.3.0, though with a different version of Scala. From the 
stack trace it looks like the Spark shell is simply not seeing the spark-csv jar...
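
For example, something along these lines (a sketch only; the jar URL assumes the standard Maven Central layout for com.databricks:spark-csv_2.11:1.0.3, and you may also need its dependencies, e.g. commons-csv, on the classpath):

`wget https://repo1.maven.org/maven2/com/databricks/spark-csv_2.11/1.0.3/spark-csv_2.11-1.0.3.jar`

`spark-shell --jars spark-csv_2.11-1.0.3.jar`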



-------- Original message --------
From: Mohammed Omer <beancinemat...@gmail.com>
Date: 04/22/2015 2:01 PM (GMT-05:00)
To: user@spark.apache.org
Subject: Trouble working with Spark-CSV package (error: object databricks is not a member of package com)

Afternoon all,

I'm working with Scala 2.11.6 and Spark 1.3.1, built from source via:

`mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package`

The error is encountered when running spark-shell via:

`spark-shell --packages com.databricks:spark-csv_2.11:1.0.3`

The full trace of the commands can be found at 
https://gist.github.com/momer/9d1ca583f9978ec9739d

Not sure if I've done something wrong, or if the documentation is outdated, 
or...? 

Would appreciate any input or push in the right direction!

Thank you,

Mo
