Re: Spark Kafka adapter questions

2018-08-20 Thread Ted Yu
> at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:189)
> at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:172)
> at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:172)
> at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:379)
> at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:60)
> at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:172)
> at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
> at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:166)
> at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:293)
> at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:203)
>
> 18/08/20 22:29:33 INFO AbstractCoordinator: Marking the coordinator <FQDN>:9093 (id: 2147483647 rack: null) dead for group spark-kafka-source-1aa50598-99d1-4c53-a73c-fa6637a219b2--1338794993-driver-0
> 18/08/20 22:29:34 INFO AbstractCoordinator: Discovered coordinator <FQDN>:9093 (id: 2147483647 rack: null) for group spark-kafka-source-1aa50598-99d1-4c53-a73c-fa6637a219b2--1338794993-driver-0.
>
>
> Also, I’m not sure if it’s relevant, but I am running on Databricks
> (currently working on running it on a local cluster to verify that it isn’t
> a Databricks issue). The only jars I’m using are the Spark-Kafka connector
> built from GitHub master on 8/8/18 and Kafka v2.0. Thanks so much for your
> help; let me know if there’s anything else I can provide.
>
>
>
> Sincerely,
>
> Basil


RE: Spark Kafka adapter questions

2018-08-20 Thread Basil Hariri
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:293)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:203)
18/08/20 22:29:33 INFO AbstractCoordinator: Marking the coordinator <FQDN>:9093 (id: 2147483647 rack: null) dead for group spark-kafka-source-1aa50598-99d1-4c53-a73c-fa6637a219b2--1338794993-driver-0
18/08/20 22:29:34 INFO AbstractCoordinator: Discovered coordinator <FQDN>:9093 (id: 2147483647 rack: null) for group spark-kafka-source-1aa50598-99d1-4c53-a73c-fa6637a219b2--1338794993-driver-0.


Also, I’m not sure if it’s relevant, but I am running on Databricks (currently 
working on running it on a local cluster to verify that it isn’t a Databricks 
issue). The only jars I’m using are the Spark-Kafka connector built from GitHub 
master on 8/8/18 and Kafka v2.0. Thanks so much for your help; let me know if 
there’s anything else I can provide.
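
For context, the connection setup in question looks roughly like the sketch 
below (namespace, topic, and connection string are placeholders; Event Hubs’ 
Kafka endpoint uses SASL_SSL with the PLAIN mechanism):

// Sketch only: reading from an Event Hubs Kafka endpoint with the
// Structured Streaming Kafka source. Placeholder values throughout.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("eventhubs-kafka-test").getOrCreate()

val df = spark.readStream
  .format("kafka")
  // Event Hubs' Kafka endpoint listens on port 9093, as in the logs above
  .option("kafka.bootstrap.servers", "<NAMESPACE>.servicebus.windows.net:9093")
  // options prefixed with "kafka." are passed through to the Kafka consumer
  .option("kafka.security.protocol", "SASL_SSL")
  .option("kafka.sasl.mechanism", "PLAIN")
  .option("kafka.sasl.jaas.config",
    "org.apache.kafka.common.security.plain.PlainLoginModule required " +
      """username="$ConnectionString" password="<CONNECTION STRING>";""")
  .option("subscribe", "<TOPIC>")
  .load()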

Sincerely,
Basil




Re: Spark Kafka adapter questions

2018-08-17 Thread Ted Yu
If you have picked up all the changes for SPARK-18057, the Kafka “broker”
supporting v1.0+ should be compatible with Spark's Kafka adapter.

Can you post more details about the “failed to send SSL close message”
errors?

(The default Kafka version is 2.0.0 in the Spark Kafka adapter after SPARK-18057.)

Thanks
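
As a rough sketch, picking up those changes currently means building the 
connector from master yourself; the sbt wiring would look something like this 
(the snapshot version shown is a hypothetical local build, not a published 
release):

// build.sbt sketch -- versions are assumptions; publish master locally first
// (e.g. with `sbt publishLocal`) so the snapshot artifacts can be resolved
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql" % "2.4.0-SNAPSHOT" % Provided,
  // the Structured Streaming Kafka source (built from external/kafka-0-10-sql)
  "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.4.0-SNAPSHOT"
)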



Spark Kafka adapter questions

2018-08-17 Thread Basil Hariri
Hi all,

I work on Azure Event Hubs (Microsoft's PaaS offering similar to Apache Kafka) 
and am trying to get our new Kafka head 
<https://azure.microsoft.com/en-us/blog/azure-event-hubs-for-kafka-ecosystems-in-public-preview/> 
to play nice with Spark's Kafka adapter. The goal is for our Kafka endpoint to 
be completely compatible with Spark's Kafka adapter, but I'm running into some 
issues that I think are related to versioning. I've been trying to tinker with 
the kafka-0-10-sql 
<https://github.com/apache/spark/tree/master/external/kafka-0-10-sql> 
and kafka-0-10 
<https://github.com/apache/spark/tree/master/external/kafka-0-10> 
adapters on GitHub and was wondering if someone could take a second to point me 
in the right direction with:


  1.  What is the difference between those two adapters? My hunch is that 
kafka-0-10-sql supports Structured Streaming while kafka-0-10 still uses Spark 
Streaming, but I haven't found anything to verify that (see the sketch after 
this list).
  2.  Event Hubs' Kafka endpoint only supports Kafka 1.0 and later, and the 
errors I get when trying to connect to Spark ("failed to send SSL close 
message" / broken pipe errors) have usually shown up when using Kafka v0.10 
applications with our endpoint. I built from source after I saw that both 
libraries were updated for Kafka 2.0 support (late last week), but I'm still 
running into the same issues. Do Spark's Kafka adapters generally downgrade to 
Kafka v0.10 protocols? If not, is there any other reason to believe that a 
Kafka "broker" that doesn't support v0.10 protocols but supports v1.0+ would be 
incompatible with Spark's Kafka adapter?
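
For reference, here is a minimal sketch of how the two adapters are consumed 
(broker and topic names are placeholders), which is what makes me think they 
target Structured Streaming and DStreams respectively:

// Sketch: the two adapters expose different APIs; placeholder broker/topic.
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.kafka.common.serialization.StringDeserializer

val spark = SparkSession.builder().appName("adapter-comparison").getOrCreate()

// kafka-0-10-sql: Structured Streaming source, yields a streaming DataFrame
val structured = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "<BROKER>:9093")
  .option("subscribe", "<TOPIC>")
  .load()

// kafka-0-10: classic Spark Streaming (DStream) connector via KafkaUtils
val ssc = new StreamingContext(spark.sparkContext, Seconds(5))
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "<BROKER>:9093",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "example-group"
)
val dstream = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](Seq("<TOPIC>"), kafkaParams)
)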

Thanks in advance; please let me know if there's a different place I should be 
posting this.

Sincerely,
Basil