Hi Dan,

If you use Spark <= 1.6, you can also quickly link the spark-csv jars
into the Spark shell with:

$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-csv_2.10:1.5.0
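
Once the shell is up, the package is available right away. As a minimal
sketch (cars.csv here is a hypothetical input file with a header row):

  val df = sqlContext.read
    .format("com.databricks.spark.csv")
    .option("header", "true")      // first line holds the column names
    .option("inferSchema", "true") // let the reader guess column types
    .load("cars.csv")              // hypothetical CSV file
  df.printSchema()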

Otherwise, as Holden suggested, you link it in your Maven/sbt
dependencies. The Spark folks assume that their users have a good
working knowledge of Maven/sbt; you might need to read up on these
before jumping into Spark.
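
If you go the sbt route, the coordinates in the spark-csv README map to
a single line in your build.sbt, something like this (pick the artifact
that matches your Scala version):

  libraryDependencies += "com.databricks" % "spark-csv_2.10" % "1.5.0"

or, letting sbt append the Scala suffix for you:

  libraryDependencies += "com.databricks" %% "spark-csv" % "1.5.0"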

Best,
Anastasios

On Fri, Sep 23, 2016 at 10:26 PM, Dan Bikle <bikle...@gmail.com> wrote:

> hello world-of-spark,
>
> I am learning spark today.
>
> I want to understand the spark code in this repo:
>
> https://github.com/databricks/spark-csv
>
> In the README.md I see this info:
>
> Linking
>
> You can link against this library in your program at the following
> coordinates:
> Scala 2.10
>
> groupId: com.databricks
> artifactId: spark-csv_2.10
> version: 1.5.0
>
> Scala 2.11
>
> groupId: com.databricks
> artifactId: spark-csv_2.11
> version: 1.5.0
>
> I want to know how I can use the above info.
>
> The people who wrote spark-csv should give some kind of example, demo, or
> context.
>
> My understanding of Linking is limited.
>
> I have some experience operating sbt which I learned from this URL:
>
> http://spark.apache.org/docs/latest/quick-start.html#self-contained-applications
>
> The above URL does not give me enough information so that I can link
> spark-csv with spark.
>
> Question:
> How do I learn how to use the info in the Linking section of the
> README.md of
> https://github.com/databricks/spark-csv
> ??
>

-- 
-- Anastasios Zouzias
