Re: [Spark streaming] No assigned partition error during seek

2017-12-01 Thread Cody Koeninger
…> Date: Thursday, November 30, 2017 at 8:16 PM > To: Cody Koeninger <c...@koeninger.org> > Cc: "user@spark.apache.org" <user@spark.apache.org> > Subject: Re: [Spark streaming] No assigned partition error during seek > I notice that 'Do…

Re: [Spark streaming] No assigned partition error during seek

2017-12-01 Thread Qiao, Richard
…Best Regards, Richard. From: venkat <meven...@gmail.com> Date: Thursday, November 30, 2017 at 8:16 PM To: Cody Koeninger <c...@koeninger.org> Cc: "user@spark.apache.org" <user@spark.apache.org> Subject: Re: [Spark streaming] No assigned partition error during seek I noti…

Re: [Spark streaming] No assigned partition error during seek

2017-11-30 Thread venkat
I notice that *'Do not* manually add dependencies on org.apache.kafka artifacts (e.g. kafka-clients). The spark-streaming-kafka-0-10 artifact has the appropriate transitive dependencies already, and different versions may be incompatible in hard to diagnose ways' after your query. Does this imply…
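The warning venkat quotes comes from the Spark Kafka integration docs: the streaming artifact should be the only Kafka dependency you declare. A minimal build.sbt sketch (artifact coordinates are real; the version number is illustrative for Spark 2.2):

```scala
// build.sbt -- declare only the Spark Kafka integration artifact.
libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.2.0"

// Do NOT add an explicit org.apache.kafka kafka-clients dependency here;
// the artifact above already pulls in a compatible version transitively,
// and a mismatched version can fail in hard-to-diagnose ways at runtime.
```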

Re: [Spark streaming] No assigned partition error during seek

2017-11-30 Thread venkat
Yes, I use the latest Kafka clients (0.11) to determine beginning offsets without seek, and I also commit Kafka offsets externally. I don't find the Spark async commit useful for our needs. Thanks, Venkat. On Fri, 1 Dec 2017 at 02:39 Cody Koeninger wrote: > You mentioned 0.11…
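Determining beginning offsets without seek, as venkat describes, can be done with the `KafkaConsumer.beginningOffsets` API (available in kafka-clients 0.10.1+), which needs no assignment, subscription, or poll and therefore avoids the "no assigned partition" error entirely. A sketch under assumed broker address, group id, and topic name:

```scala
import java.util.Properties
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer

// Hypothetical broker, group, and topic; adjust for your environment.
val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("group.id", "offset-probe")
props.put("key.deserializer", classOf[StringDeserializer].getName)
props.put("value.deserializer", classOf[StringDeserializer].getName)

val consumer = new KafkaConsumer[String, String](props)
try {
  val partitions = consumer
    .partitionsFor("my-topic").asScala
    .map(pi => new TopicPartition(pi.topic, pi.partition))
    .asJava
  // beginningOffsets requires no assign/subscribe and no seek, so it
  // cannot trigger a "No current assignment for partition" error.
  val offsets = consumer.beginningOffsets(partitions)
  offsets.asScala.foreach { case (tp, off) => println(s"$tp -> $off") }
} finally {
  consumer.close()
}
```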

Re: [Spark streaming] No assigned partition error during seek

2017-11-30 Thread Cody Koeninger
You mentioned the 0.11 version; the latest version of the org.apache.kafka kafka-clients artifact supported by DStreams is 0.10.0.1, for which it has an appropriate dependency. Are you manually depending on a different version of the kafka-clients artifact? On Fri, Nov 24, 2017 at 7:39 PM, venks61176…
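One quick way to answer Cody's question is to check which kafka-clients version actually won dependency resolution on the driver's classpath. A small sketch using `AppInfoParser` from kafka-clients (run inside the Spark application; the expected value is an assumption based on the thread):

```scala
import org.apache.kafka.common.utils.AppInfoParser

// Prints the kafka-clients version resolved on the classpath. For
// spark-streaming-kafka-0-10 on Spark 2.2 this should report 0.10.0.1;
// anything else suggests another dependency has overridden it.
println(s"kafka-clients on classpath: ${AppInfoParser.getVersion}")
```

Build-tool dependency reports (e.g. `mvn dependency:tree` or sbt's dependency graph) can confirm which artifact introduced the conflicting version.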

[Spark streaming] No assigned partition error during seek

2017-11-24 Thread venks61176
Version: 2.2 with Kafka010. Hi, we are running Spark Streaming on AWS, trying to process incoming messages on Kafka topics. All was well. Recently we wanted to migrate from the 0.8 to the 0.11 version of the Spark library and the Kafka 0.11 version of the server. With this new version of the software we are facing…
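For context on the migration being discussed: the 0.10 integration replaces the 0.8-style receiver/direct APIs with `LocationStrategies` and `ConsumerStrategies`. A minimal sketch of the Spark 2.2 / spark-streaming-kafka-0-10 direct stream (broker address, group id, and topic name are placeholders):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._
import org.apache.kafka.common.serialization.StringDeserializer

// Hypothetical app name, broker, group, and topic.
val conf = new SparkConf().setAppName("kafka-0-10-direct-stream")
val ssc = new StreamingContext(conf, Seconds(5))

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "my-consumer-group",
  "auto.offset.reset" -> "earliest",
  // Disable auto-commit when managing offsets externally, as in this thread.
  "enable.auto.commit" -> (false: java.lang.Boolean)
)

val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](Seq("my-topic"), kafkaParams)
)

stream.map(record => record.value).print()
ssc.start()
ssc.awaitTermination()
```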