import java.io.File

import org.apache.avro.Schema
import org.apache.spark.sql.SparkSession

// Parse the Avro schema from a local .avsc file
val schema = new Schema.Parser().parse(new File("user.avsc"))

val spark = SparkSession.builder().master("local").getOrCreate()

spark
  .read
  .format("com.databricks.spark.avro")
  .option("avroSchema", schema.toString)
  .load("src/test/resources/episodes.avro")
  .show()
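For the streaming case in the quoted question, one sketch (untested, and avoiding both the Confluent Schema Registry and Bijection) is to read the raw Kafka bytes with Structured Streaming and deserialize each value with Avro's own GenericDatumReader inside a map. The topic name, bootstrap servers, "user.avsc" path, and the "name" field below are all placeholders you would swap for your own:

```scala
import java.io.File

import org.apache.avro.Schema
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}
import org.apache.avro.io.DecoderFactory
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local").getOrCreate()
import spark.implicits._

// Schema is shipped to executors as a JSON string, since Schema itself
// is not serializable across tasks.
val schemaJson = new Schema.Parser().parse(new File("user.avsc")).toString

val kafkaDf = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // placeholder
  .option("subscribe", "avro-topic")                   // placeholder
  .load()

// Each Kafka value arrives as Array[Byte]; decode it with a plain
// GenericDatumReader. The reader is built inside the closure so each
// task constructs its own.
val parsed = kafkaDf
  .select($"value".as[Array[Byte]])
  .map { bytes =>
    val schema  = new Schema.Parser().parse(schemaJson)
    val reader  = new GenericDatumReader[GenericRecord](schema)
    val decoder = DecoderFactory.get().binaryDecoder(bytes, null)
    val record  = reader.read(null, decoder)
    record.get("name").toString // hypothetical field; adjust to your schema
  }
```

This assumes the producer wrote bare Avro binary (no Confluent 5-byte header); if the bytes came from the Confluent serializer you would need to skip that prefix first.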


On Thu, Jun 29, 2017 at 1:59 AM, kant kodali <kanth...@gmail.com> wrote:

> Forgot to mention: I am getting a stream of Avro records and I want to do
> Structured Streaming on these Avro records, but first I want to be able to
> parse them and put them in a DataSet<Row> or something like that.
>
> On Thu, Jun 29, 2017 at 12:56 AM, kant kodali <kanth...@gmail.com> wrote:
>
>> Hi All,
>>
>> What's the simplest way to read Avro records from Kafka and put them into a
>> Spark DataSet/DataFrame without using the Confluent Schema Registry or Twitter
>> Bijection API?
>>
>> Thanks!
>>
>>
>>
>>
>
>
