Thanks Todd, this was helpful! I also got some help from the other forum,
and for anyone who runs into this problem in the future, the solution
that worked for me was:

foreachRDD { r =>
  r.map(x => data(x.getString(0), x.getInt(1)))
   .saveToCassandra("demo", "sqltest")
}
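For anyone piecing this together from scratch, here is roughly what the
whole pipeline looks like end to end. This is a sketch rather than tested
code: it assumes a case class named data whose fields line up with a
Cassandra table demo.sqltest, the spark-cassandra-connector on the
classpath, and the streamSqlContext from the KafkaDDL example linked
below; the getString/getInt accessors just mirror the snippet above.

import com.datastax.spark.connector._  // adds saveToCassandra to RDDs

// Case class whose fields match the columns of demo.sqltest.
case class data(word: String, count: Int)

streamSqlContext.sql(
  """
    |SELECT t.word, COUNT(t.word)
    |FROM (SELECT * FROM t_kafka) OVER (WINDOW '9' SECONDS, SLIDE '3' SECONDS) AS t
    |GROUP BY t.word
  """.stripMargin)
  .foreachRDD { r =>
    // foreachRDD returns Unit, so the map and the save both happen
    // inside the closure rather than being chained after it.
    r.map(x => data(x.getString(0), x.getInt(1)))
     .saveToCassandra("demo", "sqltest")
  }

One thing to watch: if the COUNT column comes back as a bigint,
getInt(1) may fail and x.getLong(1).toInt would be needed instead.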

On Thu, Jul 9, 2015 at 4:37 PM, Todd Nist <tsind...@gmail.com> wrote:

> foreachRDD returns a unit:
>
> def foreachRDD(foreachFunc: (RDD[T]) ⇒ Unit): Unit
> <https://spark.apache.org/docs/latest/api/scala/org/apache/spark/rdd/RDD.html>
>
> Apply a function to each RDD in this DStream. This is an output operator,
> so 'this' DStream will be registered as an output stream and therefore
> materialized.
>
> Change it to a map, foreach or some other form of transform.
>
> HTH
>
> -Todd
>
>
> On Thu, Jul 9, 2015 at 5:24 PM, Su She <suhsheka...@gmail.com> wrote:
>
>> Hello All,
>>
>> I also posted this on the Spark/Datastax thread, but thought it was
>> mostly a Spark question, so I'm asking here as well.
>>
>> I was wondering: what is the best practice for saving streaming Spark SQL
>> (https://github.com/Intel-bigdata/spark-streamingsql/blob/master/src/main/scala/org/apache/spark/sql/streaming/examples/KafkaDDL.scala)
>> results to Cassandra?
>>
>> The query looks like this:
>>
>>  streamSqlContext.sql(
>>       """
>>         |SELECT t.word, COUNT(t.word)
>>         |FROM (SELECT * FROM t_kafka) OVER (WINDOW '9' SECONDS, SLIDE '3'
>> SECONDS) AS t
>>         |GROUP BY t.word
>>       """.stripMargin)
>>       .foreachRDD { r => r.toString()}.map(x =>
>> x.split(",")).map(x=>data(x(0),x(1))).saveToCassandra("demo", "sqltest")
>>
>> I’m getting a compile error saying map is not a member of Unit.
>>
>> I thought that since I'm converting it to a string I could call map and
>> then save to Cassandra from there, but it seems I can't call map after
>> r.toString()?
>>
>> Please let me know if this is possible and what is the best way of doing
>> this. Thank you for the help!
>>
>> -Su
>>
>
>
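P.S. To make Todd's point concrete: because foreachRDD returns Unit, my
original version was calling .map on Unit, not on an RDD. Schematically,
with dstream standing in for the SchemaDStream that
streamSqlContext.sql(...) returns (names as above, illustrative only):

// Broken: the chain continues after foreachRDD, i.e. on Unit.
// dstream.foreachRDD { r => r.toString() }.map(...)  // map is not a member of Unit

// Fixed: transform and save inside the closure instead.
dstream.foreachRDD { r =>
  r.map(x => data(x.getString(0), x.getInt(1)))
   .saveToCassandra("demo", "sqltest")
}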
