Hi,

I'm new to both Cassandra and Spark and am experimenting with what Spark
SQL can do, since it will affect my Cassandra data model.

What I need is a model that can accept arbitrary fields, similar to
Postgres's hstore. Right now I'm trying out Cassandra's map type, but I'm
getting the exception below when running my Spark SQL query:

java.lang.RuntimeException: Can't access nested field in type
MapType(StringType,StringType,true)
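
For reference, the table looks roughly like this (a sketch; the exact
primary key layout is illustrative):

CREATE TABLE raw_device_data (
    device_id int,
    event_date text,
    fields map<text, text>,
    PRIMARY KEY (device_id, event_date)
);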

The schema I have now is:
root
 |-- device_id: integer (nullable = true)
 |-- event_date: string (nullable = true)
 |-- fields: map (nullable = true)
 |    |-- key: string
 |    |-- value: string (valueContainsNull = true)
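
For context, this is roughly how I expose the table to Spark SQL (a sketch
assuming the DataStax spark-cassandra-connector; the keyspace name
"mykeyspace" is illustrative):

import org.apache.spark.sql.cassandra.CassandraSQLContext

val cc = new CassandraSQLContext(sc)  // sc is the existing SparkContext
cc.setKeyspace("mykeyspace")
cc.sql("SELECT * FROM raw_device_data").printSchema()  // prints the schema above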

And my Spark SQL is:
SELECT fields from raw_device_data where fields.driver = 'driver1'
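
As far as I can tell from the docs, dot notation addresses StructType
fields, so for a MapType I would have expected the element-lookup syntax
instead, something like:

SELECT fields FROM raw_device_data WHERE fields['driver'] = 'driver1'

but I'm not sure whether that is the intended approach here.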

From what I gather, the dot-notation query should work for a JSON-based RDD
(https://databricks.com/blog/2015/02/02/an-introduction-to-json-support-in-spark-sql.html).
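
For comparison, a minimal standalone sketch of the JSON case (hypothetical
data), where the dot notation presumably works because jsonRDD infers the
fields column as a StructType rather than a MapType:

val jsonRDD = sc.parallelize(Seq(
  """{"device_id": 1, "event_date": "2015-04-15", "fields": {"driver": "driver1"}}"""))
val jsonData = sqlContext.jsonRDD(jsonRDD)  // fields is inferred as a struct
jsonData.registerTempTable("raw_device_data_json")
sqlContext.sql(
  "SELECT fields FROM raw_device_data_json WHERE fields.driver = 'driver1'")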
 

Is this not supported for a Cassandra map type?


