Maybe you can try:

spark-submit --class "sparkwithscala.SqlApp" --jars
/home/lib/mysql-connector-java-5.1.34.jar --master spark://hadoop1:7077
/home/myjar.jar

--driver-class-path only puts the connector on the driver's classpath.
With --master local[4] the driver and the executors share one JVM, so
that was enough; on the standalone cluster the executors run in
separate JVMs on the worker nodes and never see the jar, which is why
DriverManager reports "No suitable driver found" there. --jars ships
the connector to the executors as well.
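
If --jars alone still doesn't fix it (DriverManager only picks up
drivers that have been registered in the JVM that opens the
connection), you can force the driver class to load inside the
connection factory. Here is a minimal sketch of what SqlApp might look
like using JdbcRDD; I don't know your actual code, so the table and
column names are made up for illustration, and I'm assuming the JDBC
URL from your message below:

import java.sql.DriverManager
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.JdbcRDD

object SqlApp {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SqlApp"))
    val url =
      "jdbc:mysql://hadoop1:3306/sparkMysqlDB?user=root&password=root"

    val rows = new JdbcRDD(
      sc,
      () => {
        // Runs on each executor: force-load the MySQL driver so it is
        // registered with DriverManager before we ask for a connection.
        Class.forName("com.mysql.jdbc.Driver")
        DriverManager.getConnection(url)
      },
      // JdbcRDD requires two '?' placeholders for the partition bounds.
      "SELECT id, name FROM some_table WHERE id >= ? AND id <= ?",
      1, 100, 2, // lower bound, upper bound, number of partitions
      rs => (rs.getInt(1), rs.getString(2)))

    rows.collect().foreach(println)
    sc.stop()
  }
}

Because Class.forName is called inside the connection factory, it
executes on the executors, i.e. in the JVM that actually opens the
connection.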

Thanks!
-Terry

>  Hi there,
>
> I would like to use Spark to access data in MySQL. First I tried to
> run the program using:
>
> spark-submit --class "sparkwithscala.SqlApp" --driver-class-path
> /home/lib/mysql-connector-java-5.1.34.jar --master local[4] /home/myjar.jar
>
> That returns the correct results. Then I tried the standalone
> version using:
>
> spark-submit --class "sparkwithscala.SqlApp" --driver-class-path
> /home/lib/mysql-connector-java-5.1.34.jar --master spark://hadoop1:7077
> /home/myjar.jar
>
> (I have mysql-connector-java-5.1.34.jar on all the worker nodes.)
>
> and the error is:
>
> Exception in thread "main" org.apache.spark.SparkException: Job aborted
> due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent
> failure: Lost task 0.3 in stage 0.0 (TID 3, 192.168.157.129):
> java.sql.SQLException: No suitable driver found for
> jdbc:mysql://hadoop1:3306/sparkMysqlDB?user=root&password=root
>
> I also found a similar problem reported earlier in
> https://jira.talendforge.org/browse/TBD-2244.
>
> Is this a bug that will be fixed later, or am I missing something?
>
> Best regards,
>
> Jack
