Re: How to use DataFrame with MySQL

2015-03-23 Thread gavin zhang
OK, I found what the problem was: it doesn't work with mysql-connector-5.0.8.
I updated the connector to version 5.1.34 and it worked.
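
For anyone who hits the same error: assuming your build supports the
--packages flag (added in Spark 1.3.0) and the standard Maven coordinates for
the connector, one way to pull the newer driver in is:

spark-shell --packages mysql:mysql-connector-java:5.1.34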



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/How-to-use-DataFrame-with-MySQL-tp22178p22182.html



How to use DataFrame with MySQL

2015-03-22 Thread gavin zhang
OK, I know that I can use the JDBC connector to create a DataFrame with this
command:

val jdbcDF = sqlContext.load("jdbc", Map(
  "url" -> "jdbc:mysql://localhost:3306/video_rcmd?user=root&password=123456",
  "dbtable" -> "video"))

But I got this error: 

java.sql.SQLException: No suitable driver found for ...

I have tried to add the JDBC jar to the Spark classpath with both of the
commands below, but both failed:

- spark-shell --jars mysql-connector-java-5.0.8-bin.jar
- SPARK_CLASSPATH=mysql-connector-java-5.0.8-bin.jar spark-shell

My Spark version is 1.3.0, and
`Class.forName("com.mysql.jdbc.Driver").newInstance` works.
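
For reference, a workaround often suggested for Spark 1.3.0 (a sketch using
the jar name above, not verified here) is to put the driver jar on both the
driver and executor classpaths explicitly, since the JDBC DriverManager must
be able to see the driver from the root classloader and --jars alone may not
be enough:

spark-shell --driver-class-path mysql-connector-java-5.0.8-bin.jar \
  --conf spark.executor.extraClassPath=mysql-connector-java-5.0.8-bin.jar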



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/How-to-use-DataFrame-with-MySQL-tp22178.html



Multiple running SparkContexts detected in the same JVM!

2015-02-03 Thread gavin zhang
I have a cluster running CDH 5.1.0 with the Spark component.
Because the default Spark shipped with CDH 5.1.0 is 1.0.0 and I want to use
some features of Spark 1.2.0, I compiled another Spark with Maven.
But when I ran spark-shell and created a new SparkContext, I got the error
below:

15/02/04 14:08:19 WARN SparkContext: Multiple running SparkContexts detected
in the same JVM!
org.apache.spark.SparkException: Only one SparkContext may be running in
this JVM (see SPARK-2243). To ignore this error, set
spark.driver.allowMultipleContexts = true. The currently running
SparkContext was created at
...

I also tried deleting the default Spark and setting the
`set("spark.driver.allowMultipleContexts", "true")` option, but it didn't
work.

How could I fix it? 
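
For completeness, a minimal sketch of how that flag would be set when
constructing the context (assuming a standalone driver program; in
spark-shell the usual fix is to reuse the existing sc rather than creating a
second context):

import org.apache.spark.{SparkConf, SparkContext}

// allowMultipleContexts only silences the safety check from SPARK-2243;
// it does not make two contexts in one JVM work together reliably
val conf = new SparkConf()
  .setAppName("second-context-demo") // hypothetical app name
  .set("spark.driver.allowMultipleContexts", "true")
val sc2 = new SparkContext(conf)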





--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Multiple-running-SparkContexts-detected-in-the-same-JVM-tp21492.html



Memcached error when using during map

2014-09-03 Thread gavin zhang
I finished a distributed project in Hadoop Streaming, and it worked fine
using memcached storage during mapping. It is a Python project.
Now I want to move it to Spark, but when I call the memcached library, both
of the following errors come up during computation:

1. File memcache.py, line 414, in get
   rkey, rlen = self._expectvalue(server)
   ValueError: too many values to unpack
2. File memcache.py, line 714, in check_key
   return key.translate(ill_map)
   TypeError: character mapping must return integer, None or unicode

After adding exception handling, no cache reads succeeded at all. The same
code works in Hadoop Streaming without any error. Why?
My code is attached:
code.zip
http://apache-spark-user-list.1001560.n3.nabble.com/file/n13341/code.zip



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Memcached-error-when-using-during-map-tp13341.html



How to use memcached with spark

2014-09-03 Thread gavin zhang
I tried to connect to memcached inside a map with the xmemcached library, but
it failed with:

net.rubyeye.xmemcached.exception.MemcachedException: There is no available
connection at this moment

Has anybody succeeded in using memcached with Spark?
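
A frequent cause of this error is building the client on the driver and
letting it get serialized into the map closure; below is a minimal sketch
(the server address and keys are made up) that creates one client per
partition on the executors instead:

import net.rubyeye.xmemcached.XMemcachedClientBuilder
import net.rubyeye.xmemcached.utils.AddrUtil

// assumes spark-shell's sc and a memcached server at localhost:11211 (hypothetical)
val keys = sc.parallelize(Seq("k1", "k2", "k3"))
val values = keys.mapPartitions { iter =>
  // build the client on the executor; xmemcached clients are not serializable
  val client = new XMemcachedClientBuilder(AddrUtil.getAddresses("localhost:11211")).build()
  val fetched = iter.map(k => (k, client.get[String](k))).toList // materialize before shutdown
  client.shutdown()
  fetched.iterator
}
values.collect().foreach(println)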



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/How-to-use-memcached-with-spark-tp13409.html