[ https://issues.apache.org/jira/browse/SPARK-14751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15281700#comment-15281700 ]
Shivaram Venkataraman commented on SPARK-14751:
-----------------------------------------------

[~sunrui] That sounds fine to me. Can we make it a whitelist of certain SQLTypes that we do this for? The number of SQL types is limited, so we can be sure which ones will work. Printing a warning would also be good.

> SparkR fails on Cassandra map with numeric key
> ----------------------------------------------
>
>                 Key: SPARK-14751
>                 URL: https://issues.apache.org/jira/browse/SPARK-14751
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>    Affects Versions: 1.6.1
>            Reporter: Michał Matłoka
>
> Hi,
> I have created an issue for the Spark Cassandra connector
> ( https://datastax-oss.atlassian.net/projects/SPARKC/issues/SPARKC-366 ), but
> after a bit of digging it seems this is a better place for it:
> {code}
> CREATE TABLE test.map (
>     id text,
>     somemap map<tinyint, decimal>,
>     PRIMARY KEY (id)
> );
> insert into test.map(id, somemap) values ('a', { 0 : 12 });
> {code}
> {code}
> sqlContext <- sparkRSQL.init(sc)
> test <- read.df(sqlContext, source = "org.apache.spark.sql.cassandra",
>                 keyspace = "test", table = "map")
> head(test)
> {code}
> Results in:
> {code}
> 16/04/19 14:47:02 ERROR RBackendHandler: dfToCols on
> org.apache.spark.sql.api.r.SQLUtils failed
> Error in readBin(con, raw(), stringLen, endian = "big") :
>   invalid 'n' argument
> {code}
> The problem occurs even for an int key. For a text key it works. Every
> scenario works under Scala and Python.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
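The whitelist-plus-warning approach proposed in the comment could be sketched as below. This is a minimal illustration only, not Spark's actual SerDe code: the function name, the specific set of whitelisted key types, and the warning text are all assumptions.

```python
import sys
import decimal

# Assumed whitelist of map key types that can be safely coerced to strings
# for SparkR; the real set of supported SQLTypes is not settled in the source.
KEY_WHITELIST = (int, float, decimal.Decimal, str)

def to_string_keys(mapping):
    """Return a copy of `mapping` with every key coerced to str.

    Non-string keys outside the whitelist are still coerced, but a warning
    is printed, per the suggestion in the comment.
    """
    out = {}
    for key, value in mapping.items():
        if not isinstance(key, str):
            if not isinstance(key, KEY_WHITELIST):
                print("warning: unsupported map key type "
                      f"{type(key).__name__}; coercing to string for SparkR",
                      file=sys.stderr)
            key = str(key)
        out[key] = value
    return out
```

Applied to the reported case, a map like `{0: 12}` (tinyint key) would reach R as `{"0": 12}` instead of failing inside `readBin`.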