OK, I found the problem: it doesn't work with mysql-connector-5.0.8.
I updated the connector to version 5.1.34 and it worked.
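For anyone pinning the dependency in a build file, the working version might be declared like this (a minimal sbt sketch; the coordinates assume the standard MySQL Connector/J artifact on Maven Central):

```scala
// build.sbt — sketch only; adjust to your own build.
// 5.1.34 is the Connector/J version that worked here; 5.0.8 did not.
libraryDependencies += "mysql" % "mysql-connector-java" % "5.1.34"
```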
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-use-DataFrame-with-MySQL-tp22178p22182.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
OK, I know I can use the JDBC connector to create a DataFrame with this
command:
val jdbcDF = sqlContext.load("jdbc", Map(
  "url" -> "jdbc:mysql://localhost:3306/video_rcmd?user=root&password=123456",
  "dbtable" -> "video"))
But I got this error:
java.sql.SQLException: No suitable driver found for ...
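In case it helps others hitting the same SQLException: a common workaround is to make sure the Connector/J jar is on the classpath of both the driver and the executors (e.g. by launching with --jars), and to register the driver class explicitly before calling load. A hedged sketch, assuming the standard Connector/J driver class name:

```scala
// Sketch only: assumes the MySQL Connector/J jar is on the classpath,
// e.g. started as: spark-shell --jars mysql-connector-java-5.1.34.jar
Class.forName("com.mysql.jdbc.Driver") // force-register the JDBC driver

val jdbcDF = sqlContext.load("jdbc", Map(
  "url" -> "jdbc:mysql://localhost:3306/video_rcmd?user=root&password=123456",
  "dbtable" -> "video"))
```

If the driver still isn't found on the executors, distributing the jar with --jars (rather than only putting it on the driver's classpath) is usually the missing piece.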
I have a cluster running CDH 5.1.0 with the Spark component.
Because the default Spark version in CDH 5.1.0 is 1.0.0 and I want to use
some features of Spark 1.2.0, I compiled another Spark with Maven.
But when I ran spark-shell and created a new SparkContext, I hit the
error below:
I finished a distributed project in Hadoop Streaming, and it worked fine
using memcached storage during mapping. It's a Python project.
Now I want to move it to Spark, but when I call the memcached library, two
errors occur during computation (both below):
1. File memcache.py, line 414,
2. Trying to connect to memcached in a map with the xmemcached lib failed:
net.rubyeye.xmemcached.exception.MemcachedException: There is no available
connection at this moment
Has anybody succeeded in using memcached from Spark?
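"There is no available connection at this moment" often shows up when a client is created (or its connections exhausted) per record. One pattern that may help is building a single client per partition with mapPartitions and reusing it. A sketch, assuming xmemcached's builder API; the RDD name and memcached address are illustrative:

```scala
// Sketch: one xmemcached client per partition instead of per record.
// Assumes the xmemcached jar is on the executor classpath; "rdd" is an
// RDD[String] of keys and "memcached-host:11211" is a placeholder address.
import net.rubyeye.xmemcached.XMemcachedClientBuilder
import net.rubyeye.xmemcached.utils.AddrUtil

val results = rdd.mapPartitions { keys =>
  // Build the client once for the whole partition...
  val client = new XMemcachedClientBuilder(
    AddrUtil.getAddresses("memcached-host:11211")).build()
  try {
    // ...and reuse it for every record; materialize before shutdown so
    // the lazy iterator isn't consumed after the client is closed.
    keys.map(k => (k, client.get[String](k))).toList.iterator
  } finally {
    client.shutdown() // release connections when the partition is done
  }
}
```

The same per-partition idea applies to the Python memcache client if you stay with PySpark.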