I finished a distributed project in Hadoop Streaming, and it worked fine using
memcached storage during the map phase. It's a Python project.
Now I want to move it to Spark, but when I call the memcached library, two
errors come up during the computation (both shown below):
1. File "memcache.py", line 414, in get
       rkey, rlen = self._expectvalue(server)
   ValueError: too many values to unpack
2. File "memcache.py", line 714, in check_key
       return key.translate(ill_map)
   TypeError: character mapping must return integer, None or unicode
After adding exception handling, no cache gets succeeded at all.
However, the same code works in Hadoop Streaming without any error. Why?
My code is attached.
code.zip
<http://apache-spark-user-list.1001560.n3.nabble.com/file/n13341/code.zip>  
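Here is a minimal, simplified sketch of the pattern I'm trying to use (not the
attached code itself): the client is created inside the per-partition function
rather than on the driver, so it is never pickled with an open socket. The
`FakeClient` class below just stands in for `memcache.Client` so the sketch
runs without a memcached server; the server address and keys are made up.

```python
class FakeClient(object):
    """Stand-in for memcache.Client; real code would instead do
    `import memcache; client = memcache.Client(['host:11211'])`."""

    def __init__(self, servers):
        # Pretend these values already live in the cache.
        self._store = {"a": 1, "b": 2}

    def get(self, key):
        return self._store.get(key)


def process_partition(records):
    # One client per partition, constructed on the executor itself,
    # so no socket-backed object is serialized from the driver.
    client = FakeClient(["127.0.0.1:11211"])
    for key in records:
        # python-memcached expects plain byte-string keys, so coerce
        # anything else to str before the lookup.
        if not isinstance(key, str):
            key = str(key)
        yield (key, client.get(key))


# In Spark this would be: rdd.mapPartitions(process_partition)
results = list(process_partition(["a", "b", "c"]))
```

With the fake client, `results` is `[("a", 1), ("b", 2), ("c", None)]`; in the
real job each partition would open its own connection to memcached.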



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Memcached-error-when-using-during-map-tp13341.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
