So the good news is that CSV support has been integrated into Spark 2.0, so
you don't need to use that package there. On the other hand, if you're on an
older version, you can include it using the standard sbt or Maven dependency
configuration.
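
For example, with sbt you would add the coordinates from the README to your
build.sbt; a minimal sketch (the %% operator appends your Scala version to
the artifact name automatically, so it resolves to spark-csv_2.10 or
spark-csv_2.11 depending on your scalaVersion):

  // spark-csv dependency for Spark < 2.0, coordinates from the project README
  libraryDependencies += "com.databricks" %% "spark-csv" % "1.5.0"

Once it's on the classpath you can read CSV through the data source API,
something like (the input path here is hypothetical; sqlContext is the
Spark 1.x entry point):

  val df = sqlContext.read
    .format("com.databricks.spark.csv")  // spark-csv data source name
    .option("header", "true")            // treat the first line as column names
    .load("path/to/file.csv")

You can also skip the build step entirely and pull the package in at launch
time with spark-shell --packages com.databricks:spark-csv_2.11:1.5.0.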

On Friday, September 23, 2016, Dan Bikle <bikle...@gmail.com> wrote:

> hello world-of-spark,
>
> I am learning spark today.
>
> I want to understand the spark code in this repo:
>
> https://github.com/databricks/spark-csv
>
> In the README.md I see this info:
>
> Linking
>
> You can link against this library in your program at the following
> coordinates:
> Scala 2.10
>
> groupId: com.databricks
> artifactId: spark-csv_2.10
> version: 1.5.0
>
> Scala 2.11
>
> groupId: com.databricks
> artifactId: spark-csv_2.11
> version: 1.5.0
>
> I want to know how I can use the above info.
>
> The people who wrote spark-csv should give some kind of example, demo, or
> context.
>
> My understanding of Linking is limited.
>
> I have some experience operating sbt which I learned from this URL:
>
> http://spark.apache.org/docs/latest/quick-start.html#self-contained-applications
>
> The above URL does not give me enough information so that I can link
> spark-csv with spark.
>
> Question:
> How do I learn how to use the info in the Linking section of the README.md
> of
> https://github.com/databricks/spark-csv
> ??
>
>

-- 
Cell: 425-233-8271
Twitter: https://twitter.com/holdenkarau
