I am facing the same issue as the one described here: http://apache-spark-user-list.1001560.n3.nabble.com/Packaging-a-spark-job-using-maven-td5615.html
The solution mentioned there is this gist: https://gist.github.com/prb/d776a47bd164f704eecb. However, there are a few things I don't understand:

1. Why are the jars split into a worker jar and a driver jar?
2. Does this mean I now need to build two separate jars?
3. Am I right that I still need both jars on the classpath when I run the job?

I am simply trying to execute a basic word count example.
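For context, this is roughly the job I am trying to package and run; it is a minimal sketch using the Spark Scala API, and the object name and the input/output paths taken from `args` are just placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Minimal word count: read a text file, split lines into words, count occurrences.
object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount")
    val sc = new SparkContext(conf)

    val counts = sc.textFile(args(0))   // input path passed as the first argument
      .flatMap(_.split("\\s+"))         // split each line on whitespace
      .map(word => (word, 1))           // pair each word with a count of 1
      .reduceByKey(_ + _)               // sum the counts per word

    counts.saveAsTextFile(args(1))      // output path passed as the second argument
    sc.stop()
  }
}
```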