Hi list,
val dfs = spark
  .read
  .format("org.apache.spark.sql.cassandra")
  .options(Map(
    "cluster" -> "helloCassandra",
    "spark.cassandra.connection.host" -> "127.0.0.1",
    "spark.cassandra.input.fetch.size_in_rows" -> "10", // full connector key name
    "spark.cassandra.input.consistency.level" -> "ONE",
    "table" -> "<your_table>",        // placeholder, substitute your table
    "keyspace" -> "<your_keyspace>"   // placeholder, substitute your keyspace
  ))
  .load()
You have to register the Cassandra table in Spark as a DataFrame:
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/14_data_frames.md
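Putting it together for your case, a minimal sketch: read the table as a DataFrame, select only the kafka column, and let Spark infer the JSON schema from it. The keyspace and table names ("my_keyspace", "my_table") are hypothetical, substitute your own; it also assumes a Cassandra node reachable on 127.0.0.1.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("CassandraJsonSchema")
  .config("spark.cassandra.connection.host", "127.0.0.1")
  .getOrCreate()

// Register the Cassandra table as a DataFrame
// (keyspace/table names are placeholders).
val df = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map(
    "keyspace" -> "my_keyspace",
    "table"    -> "my_table"
  ))
  .load()

import spark.implicits._

// Keep only the kafka column as a Dataset[String],
// then have spark.read.json infer the schema of the payload.
val jsonDs = df.select("kafka").as[String]
val jsonDf = spark.read.json(jsonDs)
jsonDf.printSchema()
```

Note this needs a running Cassandra cluster and the spark-cassandra-connector on the classpath, and spark.read.json over a Dataset[String] requires Spark 2.2+.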
Thanks
Sathish
On Mon, Jan 22, 2018 at 7:43 AM Conconscious wrote:
Hi list,
I have a Cassandra table with two fields: id bigint, kafka text.
My goal is to read only the kafka field (which is JSON) and infer its
schema.
I have this skeleton code (not working):
sc.stop  // stop the default SparkContext before building a new session
import org.apache.spark._
import com.datastax.spark._
import com.datastax.spark.connector._