> On Thu, Nov 17, 2016 at 10:48 AM, Hoang Bao Thien <hbthien0...@gmail.com>
> wrote:
> > Hi,
> >
> > Thanks for your comments. But in fact, I don't want to limit the size of the
> > batches; they can be as large as they happen to be.
> >
> > Thien
> >
> If you want to limit the size of batches, use
> spark.streaming.kafka.maxRatePerPartition (assuming you're using
> createDirectStream)
>
> http://spark.apache.org/docs/latest/configuration.html#spark-streaming
>
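[Editor's note: a minimal sketch of the suggestion above, using the Spark 1.x `createDirectStream` API against Kafka 0.8. The app name, broker address, topic name, and the rate value are placeholders, not from the thread.]

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

// maxRatePerPartition caps how many records per second each Kafka
// partition contributes to a batch of a direct stream.
val conf = new SparkConf()
  .setAppName("rate-limited-stream") // hypothetical name
  .set("spark.streaming.kafka.maxRatePerPartition", "1000")

val ssc = new StreamingContext(conf, Seconds(10))

val kafkaParams = Map("metadata.broker.list" -> "broker1:9092") // placeholder broker
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, Set("my-topic")) // placeholder topic

// With a 10-second batch interval and a cap of 1000 records/sec/partition,
// each partition adds at most 10 * 1000 = 10000 records to any one batch.
```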
> On Thu, Nov 17, 2016 at 12:52 AM, Hoang Bao Thien <hbthien0...@gmail.com>
> wrote:
> > Hi,
> >
> >
> >> …instead of reading it directly with spark?)
> >>
> >> auto.offset.reset=largest just means that when starting the job
> >> without any defined offsets, it will start at the highest (most
> >> recent) available offsets. That's probably not what you want if
> >
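[Editor's note: a sketch of the `auto.offset.reset` setting being discussed, using Kafka 0.8-style parameter values; the broker address is a placeholder. Setting it to `smallest` makes a job with no stored offsets start from the earliest available data rather than the most recent.]

```scala
// "largest" (the value discussed above) starts a job with no defined
// offsets at the most recent data; "smallest" starts at the earliest
// offsets still retained by Kafka.
val kafkaParams = Map(
  "metadata.broker.list" -> "broker1:9092", // placeholder broker
  "auto.offset.reset" -> "smallest"         // replay the topic from the start
)
```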