I know I can arrive at the same result with this code,
val range100 = spark.range(1, 101).agg(sum('id) as "sum").first.get(0)
println(f"sum of range100 = $range100")
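(As a quick sanity check of that result without Spark at all, the same sum can be computed in plain Scala; Gauss's closed form n(n+1)/2 gives the same value for 1..100.)

```scala
// Plain-Scala sanity check (no Spark needed): sum of 1..100
val expected = (1 to 100).map(_.toLong).sum
// Gauss closed form n*(n+1)/2 should agree
val closedForm = 100L * 101L / 2
assert(expected == closedForm)
println(s"sum of 1..100 = $expected")
```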
so I am not stuck; I was just curious 😯 why the code breaks with the
currently linked libraries.
spark.range(1,101).r
I am using the method described on this page for Scala development in
Eclipse:
https://data-flair.training/blogs/create-spark-scala-project/
In the middle of the page you will find:
“you will see lots of error due to missing libraries.
viii. Add Spark Libraries”
Now that I have my own buil
If I'm understanding this correctly, you are building Spark from source and
using the built artifacts (jars) in some other project. Correct? If so,
then why are you concerning yourself with the directory structure that
Spark, internally, uses when building its artifacts? It should be a black
box.
THANKS
It appears the directory containing the jars has been moved between the
download version and the source version.
In the download version it is a directory called jars just below the
parent directory (one level down).
In the git source version it is four levels down, at
/spark/assembly/target/scala-2.12/jars.
If you are using Maven to manage your jar dependencies, the jar files
are located in the Maven repository in your home directory, usually
under the .m2 directory.
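For example, with a Maven project you can declare Spark as a managed dependency and let Maven download the jars into ~/.m2 for you, so you never need to point at Spark's internal build tree. (A sketch; the version number below is only illustrative, use whichever release you actually build against.)

```xml
<!-- pom.xml fragment (sketch; version is illustrative).
     Maven fetches these jars into ~/.m2/repository automatically. -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.12</artifactId>
  <version>3.0.0</version>
  <scope>provided</scope>
</dependency>
```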
Hope this helps.
-ND
On 6/23/20 3:21 PM, Anwar AliKhan wrote:
Hi,
I prefer to do most of my projects in Python and for that I