Re: Kafka on windows

2017-04-11 Thread David Luu
I'm curious as well. That doc blurb doesn't give specifics. How is Kafka
run (or tested) on Windows? Natively via the command-line shell, via Cygwin,
within a *nix VM on Windows, or via Windows 10's Ubuntu Bash shell? It would
be interesting to see how each of these methods performs; perhaps the
Windows 10 Bash shell would perform best of the list.
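For reference, the Kafka distribution does ship native Windows batch scripts
under bin\windows\, so a broker can at least be started from a plain Windows
command prompt. A minimal sketch, assuming the stock config files:

bin\windows\zookeeper-server-start.bat config\zookeeper.properties
bin\windows\kafka-server-start.bat config\server.properties

How each of the environments listed above affects performance is the open
question.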

On Tue, Apr 11, 2017 at 9:41 PM, David Garcia  wrote:

> One issue is that Kafka leverages some very specific features of the Linux
> kernel (such as the page cache and sendfile-based zero-copy) that behave
> quite differently on Windows, so I imagine the performance profile is
> likewise much different.
>
> On 4/11/17, 8:52 AM, "Tomasz Rojek"  wrote:
>
> Hi All,
>
> We want to choose a messaging system provider for our company, and one of
> the possible choices is Apache Kafka. One of the operating systems that
> will host the brokers is Windows. According to the documentation:
>
> https://kafka.apache.org/documentation.html#os
> "*We have seen a few issues running on Windows and Windows is not
> currently
> a well supported platform though we would be happy to change that.*"
>
> Can you please elaborate on this? Exactly what potential issues are we
> talking about? Which Kafka functionalities are affected? Or do the issues
> occur only on specific versions of Windows?
>
> Thank you in advance for any information.
>
> With Regards
> Tomasz Rojek
> Java Engineer
>
>
>


-- 
David Luu
Member of Technical Staff
Mist Systems, Inc.
1601 S. De Anza Blvd. #248
Cupertino, CA 95014


Re: Kafka on windows

2017-04-11 Thread David Garcia
One issue is that Kafka leverages some very specific features of the Linux
kernel (such as the page cache and sendfile-based zero-copy) that behave
quite differently on Windows, so I imagine the performance profile is
likewise much different.

On 4/11/17, 8:52 AM, "Tomasz Rojek"  wrote:

Hi All,

We want to choose a messaging system provider for our company, and one of the
possible choices is Apache Kafka. One of the operating systems that will host
the brokers is Windows. According to the documentation:

https://kafka.apache.org/documentation.html#os
"*We have seen a few issues running on Windows and Windows is not currently
a well supported platform though we would be happy to change that.*"

Can you please elaborate on this? Exactly what potential issues are we
talking about? Which Kafka functionalities are affected? Or do the issues
occur only on specific versions of Windows?

Thank you in advance for any information.

With Regards
Tomasz Rojek
Java Engineer




Kafka on windows

2017-04-11 Thread Tomasz Rojek
Hi All,

We want to choose a messaging system provider for our company, and one of the
possible choices is Apache Kafka. One of the operating systems that will host
the brokers is Windows. According to the documentation:

https://kafka.apache.org/documentation.html#os
"*We have seen a few issues running on Windows and Windows is not currently
a well supported platform though we would be happy to change that.*"

Can you please elaborate on this? Exactly what potential issues are we
talking about? Which Kafka functionalities are affected? Or do the issues
occur only on specific versions of Windows?

Thank you in advance for any information.

With Regards
Tomasz Rojek
Java Engineer


Re: [VOTE] 0.10.2.1 RC0

2017-04-11 Thread Gwen Shapira
Wrong link :)
http://kafka.apache.org/documentation/#upgrade
and
http://kafka.apache.org/documentation/streams#streams_api_changes_0102

On Tue, Apr 11, 2017 at 5:57 PM, Gwen Shapira  wrote:
> FYI: I just updated the upgrade notes with Streams changes:
> http://kafka.apache.org/documentation/#gettingStarted
>
> On Fri, Apr 7, 2017 at 5:12 PM, Gwen Shapira  wrote:
>> Hello Kafka users, developers and client-developers,
>>
>> This is the first candidate for the release of Apache Kafka 0.10.2.1. This
>> is a bug fix release and it includes fixes and improvements from 24 JIRAs
>> (including a few critical bugs). See the release notes for more details:
>>
>> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc0/RELEASE_NOTES.html
>>
>> *** Please download, test and vote by Thursday, 13 April, 8am PT ***
>>
>> Your help in validating this bugfix release is super valuable, so
>> please take the time to test and vote!
>>
>> A few notes:
>> 1. The "Notable Changes" section is still missing from the docs:
>> https://github.com/apache/kafka/pull/2824
>> I will review, merge, and update the docs by Monday.
>> 2. The last commit (KAFKA-4943 cherry-pick) has not passed system tests
>> yet. We may need another RC if system tests fail tonight.
>>
>> Suggested tests:
>>  * Grab the source archive and make sure it compiles
>>  * Grab one of the binary distros and run the quickstarts against them
>>  * Extract and verify one of the site docs jars
>>  * Build a sample against jars in the staging repo
>>  * Validate GPG signatures on at least one file
>>  * Validate the javadocs look ok
>>
>> *
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> http://kafka.apache.org/KEYS
>>
>> * Release artifacts to be voted upon (source and binary):
>> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc0/
>>
>> * Maven artifacts to be voted upon:
>> https://repository.apache.org/content/groups/staging
>>
>> * Javadoc:
>> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc0/javadoc/
>>
>> * Tag to be voted upon (off the 0.10.2 branch) is the 0.10.2.1-rc0 tag:
>> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=d08115f05da0e39c7f75b45e05d6d14ad5baf71d
>>
>> * Documentation:
>> http://kafka.apache.org/0102/documentation.html
>>
>> * Protocol:
>> http://kafka.apache.org/0102/protocol.html
>>
>> Thanks,
>> Gwen Shapira
>
>
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Re: [VOTE] 0.10.2.1 RC0

2017-04-11 Thread Gwen Shapira
FYI: I just updated the upgrade notes with Streams changes:
http://kafka.apache.org/documentation/#gettingStarted

On Fri, Apr 7, 2017 at 5:12 PM, Gwen Shapira  wrote:
> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for the release of Apache Kafka 0.10.2.1. This
> is a bug fix release and it includes fixes and improvements from 24 JIRAs
> (including a few critical bugs). See the release notes for more details:
>
> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote by Thursday, 13 April, 8am PT ***
>
> Your help in validating this bugfix release is super valuable, so
> please take the time to test and vote!
>
> A few notes:
> 1. The "Notable Changes" section is still missing from the docs:
> https://github.com/apache/kafka/pull/2824
> I will review, merge, and update the docs by Monday.
> 2. The last commit (KAFKA-4943 cherry-pick) has not passed system tests
> yet. We may need another RC if system tests fail tonight.
>
> Suggested tests:
>  * Grab the source archive and make sure it compiles
>  * Grab one of the binary distros and run the quickstarts against them
>  * Extract and verify one of the site docs jars
>  * Build a sample against jars in the staging repo
>  * Validate GPG signatures on at least one file
>  * Validate the javadocs look ok
>
> *
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging
>
> * Javadoc:
> http://home.apache.org/~gwenshap/kafka-0.10.2.1-rc0/javadoc/
>
> * Tag to be voted upon (off the 0.10.2 branch) is the 0.10.2.1-rc0 tag:
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=d08115f05da0e39c7f75b45e05d6d14ad5baf71d
>
> * Documentation:
> http://kafka.apache.org/0102/documentation.html
>
> * Protocol:
> http://kafka.apache.org/0102/protocol.html
>
> Thanks,
> Gwen Shapira
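For the "Validate GPG signatures" step suggested above, a typical sequence
looks like this (the artifact filename is only an example):

gpg --import KEYS
gpg --verify kafka_2.11-0.10.2.1.tgz.asc kafka_2.11-0.10.2.1.tgz

Seeing "Good signature" from a key listed in the KEYS file is the expected
result.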



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Re: [VOTE] 0.10.2.1 RC0

2017-04-11 Thread Gwen Shapira
Thanks for the feedback.

I'm not super familiar with the inner workings of Apache's Maven
repos, so I can't explain why we do things the way we do. I followed
the same process on all the Apache projects I've been on (Kafka, Sqoop,
Flume). Do you know of projects that do things the way you suggested?

Either way, it may be worthwhile to start a separate discussion thread
about RC releases in Maven. Perhaps more knowledgeable people will see
it and jump in.

Gwen

On Tue, Apr 11, 2017 at 4:31 PM, Steven Schlansker
 wrote:
>
>> On Apr 7, 2017, at 5:12 PM, Gwen Shapira  wrote:
>>
>> Hello Kafka users, developers and client-developers,
>>
>> This is the first candidate for the release of Apache Kafka 0.10.2.1. This
>> is a bug fix release and it includes fixes and improvements from 24 JIRAs
>> (including a few critical bugs). See the release notes for more details:
>
> Hi Gwen,
>
> I downloaded and tested the RC with a small Kafka Streams app and the upgrade
> seems to have gone smoothly.  (I did not upgrade any brokers though).
>
> One question about the RC process -- currently it seems that the RC is
> uploaded to a staging repo with the final release version.
>
> Would it not be easier for the community if instead the RC is uploaded to the
> main repo with a "-rc" version?
>
>
> Currently, you have to convince Maven to fetch "0.10.2.1" from the staging repo,
> and then once the final version hits Maven Central, the cached artifact would
> never update even if there were any post-RC changes.
>
> Additionally, if there are further RCs, it is quite easy to confuse yourself
> and not be sure exactly which RC jar you are running at any given time, and
> the problem compounds itself when multiple developers or build boxes are involved.
>
> Many other projects instead would create a "0.10.2.1-rc0" version and publish
> that to the normal Maven Central -- that way it is publicly downloadable and
> strongly tagged / versioned as the RC.
>
> Has the Kafka project given any thought to this sort of a proposal?
> As a tester / outside user it would make the process a little easier.
>
> Either way, excited for the 0.10.2.1 release, and thanks for all the work!
>



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Re: [VOTE] 0.10.2.1 RC0

2017-04-11 Thread Steven Schlansker

> On Apr 7, 2017, at 5:12 PM, Gwen Shapira  wrote:
> 
> Hello Kafka users, developers and client-developers,
> 
> This is the first candidate for the release of Apache Kafka 0.10.2.1. This
> is a bug fix release and it includes fixes and improvements from 24 JIRAs
> (including a few critical bugs). See the release notes for more details:

Hi Gwen,

I downloaded and tested the RC with a small Kafka Streams app and the upgrade
seems to have gone smoothly.  (I did not upgrade any brokers though).

One question about the RC process -- currently it seems that the RC is uploaded
to a staging repo with the final release version.

Would it not be easier for the community if instead the RC is uploaded to the
main repo with a "-rc" version?


Currently, you have to convince Maven to fetch "0.10.2.1" from the staging repo,
and then once the final version hits Maven Central, the cached artifact would
never update even if there were any post-RC changes.
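(For concreteness, "convincing Maven" today means adding the staging group to
your pom.xml, roughly like this -- the repository id is arbitrary:

<repositories>
  <repository>
    <id>apache-staging</id>
    <url>https://repository.apache.org/content/groups/staging/</url>
  </repository>
</repositories>

and then depending on version 0.10.2.1 as usual, remembering to remove the
entry once the final release reaches Maven Central.)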

Additionally, if there are further RCs, it is quite easy to confuse yourself
and not be sure exactly which RC jar you are running at any given time, and the
problem compounds itself when multiple developers or build boxes are involved.

Many other projects instead would create a "0.10.2.1-rc0" version and publish
that to the normal Maven Central -- that way it is publicly downloadable and
strongly tagged / versioned as the RC.

Has the Kafka project given any thought to this sort of a proposal?
As a tester / outside user it would make the process a little easier.

Either way, excited for the 0.10.2.1 release, and thanks for all the work!





Kafka Streams Application does not start after 10.1 to 10.2 update if topics need to be auto-created

2017-04-11 Thread Dmitry Minkovsky
I updated from 0.10.1 to 0.10.2, both the broker and the Maven dependency.

I am using topic auto-create. With 0.10.1, starting the application before its
topics existed would sometimes result in an error like:

> Exception in thread "StreamThread-1"
org.apache.kafka.streams.errors.TopologyBuilderException: Invalid topology
building: stream-thread [StreamThread-1] Topic not found: $topic

But this would only happen once: on the second attempt the topics were already
created and everything worked fine.

With 0.10.2, however, the error does not go away. I have confirmed and tested
that auto topic creation is enabled.

Here is the error/trace:


Exception in thread "StreamThread-1"
org.apache.kafka.streams.errors.TopologyBuilderException: Invalid topology
building: stream-thread [StreamThread-1] Topic not found: session-updates
at
org.apache.kafka.streams.processor.internals.StreamPartitionAssignor$CopartitionedTopicsValidator.validate(StreamPartitionAssignor.java:734)
at
org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.ensureCopartitioning(StreamPartitionAssignor.java:648)
at
org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign(StreamPartitionAssignor.java:368)
at
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:339)
at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:488)
at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1100(AbstractCoordinator.java:89)
at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:438)
at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:420)
at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:764)
at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:745)
at
org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:186)
at
org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:149)
at
org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:116)
at
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:493)
at
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:322)
at
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:253)
at
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:172)
at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:334)
at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
at
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
at
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
at
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
at
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:582)
at
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)


The error does not occur if my topology only defines streams and tables.
However, when I attempt to join a stream with a table, it is thrown:

// No error if this is in topology
// (key/value type parameters inferred from the serde names)
KTable<ByteString, Session> sessions = topology.table(byteStringSerde,
    sessionSerde, "sessions", "sessions");

// No error if this is in topology
KStream<ByteString, SessionUpdate> sessionUpdates =
    topology.stream(byteStringSerde, sessionUpdateSerde, "session-updates");

// Error if this is in topology
sessionUpdates
  .leftJoin(sessions, (update, value) -> {
      // do update and return the joined Session, omitted
  })
  .filter((k, v) -> v != null)
  .to(byteStringSerde, sessionSerde, "sessions");
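A workaround that avoids depending on broker-side auto-creation is to
pre-create the joined topics with matching partition counts before starting
the application, since the copartitioning validator requires both sides of a
join to exist with the same number of partitions. A sketch, with placeholder
partition/replication values and ZooKeeper address:

bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --partitions 4 --replication-factor 1 --topic sessions
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --partitions 4 --replication-factor 1 --topic session-updates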


Re: Kafka producer drops large messages

2017-04-11 Thread Akhilesh Pathodia
Hi Smriti,

You will have to change some of the broker configuration, such as
message.max.bytes, to a larger value. The default is about 1 MB.

Please check the configs below:

Broker Configuration

- message.max.bytes
  Maximum message size the broker will accept. Must be smaller than the
  consumer's fetch.message.max.bytes, or the consumer cannot consume the
  message.
  Default value: 1000000 (1 MB)

- log.segment.bytes
  Size of a Kafka data file. Must be larger than any single message.
  Default value: 1073741824 (1 GiB)

- replica.fetch.max.bytes
  Maximum message size a broker can replicate. Must be larger than
  message.max.bytes, or a broker can accept messages it cannot replicate,
  potentially resulting in data loss.
  Default value: 1048576 (1 MiB)
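
As an illustration, settings for messages up to roughly 2 MB might look like
this (the values are placeholders; size them for your actual payloads):

# server.properties (broker side)
message.max.bytes=2097152
replica.fetch.max.bytes=4194304

# old consumer configuration -- must be >= message.max.bytes
fetch.message.max.bytes=2097152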

Thanks,
Akhilesh

On Wed, Apr 12, 2017 at 12:23 AM, Smriti Jha  wrote:

> Hello all,
>
> Can somebody shed light on the Kafka producer's behavior when the total size of
> all messages in the buffer (bounded by queue.buffering.max.ms) exceeds the
> socket buffer size (send.buffer.bytes)?
>
> I'm using Kafka v0.8.2 with the old Producer API and have noticed that our
> systems are dropping a few messages that are close to 1 MB in size. A few
> messages that are only a few KB in size and happen to be sent around the same
> time as the >1 MB messages also get dropped. The official documentation does
> talk about never dropping a "send" once the buffer has reached
> queue.buffering.max.messages, but I don't think that applies to the size of
> the messages.
>
> Thanks!
>


Re: Kafka security

2017-04-11 Thread Christian Csar
Don't hard code it. Martin's suggestion allows it to be read from a
configuration file or injected from another source such as an environment
variable at runtime.

If neither of these is acceptable under corporate policy, I suggest
asking how it has been handled before at your company.
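
For instance, a minimal sketch combining the two suggestions -- reading the
password from an environment variable at runtime and setting it before any
client is created (the variable name KAFKA_KEYSTORE_PASSWORD is an
assumption; use whatever your deployment tooling injects):

String password = System.getenv("KAFKA_KEYSTORE_PASSWORD"); // assumed name
if (password == null) {
    throw new IllegalStateException("KAFKA_KEYSTORE_PASSWORD is not set");
}
// Set the system property before creating any consumer or producer:
System.setProperty("zookeeper.ssl.keyStore.password", password);
// The same value can also go into the client Properties instead of
// being hard-coded:
Properties props = new Properties();
props.put("ssl.keystore.password", password);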

Christian


On Apr 11, 2017 11:10, "IT Consultant" <0binarybudd...@gmail.com> wrote:

Thanks for your response.

We aren't allowed to hard-code passwords in any of our programs.

On Apr 11, 2017 23:39, "Mar Ian"  wrote:

> Since it is a Java property, you could set it (the keystore password)
> programmatically,
>
> before you connect to Kafka (i.e., before creating a consumer or producer)
>
> System.setProperty("zookeeper.ssl.keyStore.password", password);
>
> martin
>
> 
> From: IT Consultant <0binarybudd...@gmail.com>
> Sent: April 11, 2017 2:01 PM
> To: users@kafka.apache.org
> Subject: Kafka security
>
> Hi All
>
> How can I avoid using a password for keystore creation?
>
> Our corporate policies don't allow us to hard-code passwords. We are
> currently passing the keystore password while accessing a TLS-enabled Kafka
> instance.
>
> I would like to use either a passwordless keystore or avoid the password for
> clients accessing Kafka.
>
>
> Please help
>


Re: Kafka security

2017-04-11 Thread IT Consultant
Thanks for your response.

We aren't allowed to hard-code passwords in any of our programs.

On Apr 11, 2017 23:39, "Mar Ian"  wrote:

> Since it is a Java property, you could set it (the keystore password)
> programmatically,
>
> before you connect to Kafka (i.e., before creating a consumer or producer)
>
> System.setProperty("zookeeper.ssl.keyStore.password", password);
>
> martin
>
> 
> From: IT Consultant <0binarybudd...@gmail.com>
> Sent: April 11, 2017 2:01 PM
> To: users@kafka.apache.org
> Subject: Kafka security
>
> Hi All
>
> How can I avoid using a password for keystore creation?
>
> Our corporate policies don't allow us to hard-code passwords. We are
> currently passing the keystore password while accessing a TLS-enabled Kafka
> instance.
>
> I would like to use either a passwordless keystore or avoid the password for
> clients accessing Kafka.
>
>
> Please help
>


Re: Kafka security

2017-04-11 Thread Mar Ian
Since it is a Java property, you could set it (the keystore password)
programmatically,

before you connect to Kafka (i.e., before creating a consumer or producer)

System.setProperty("zookeeper.ssl.keyStore.password", password);

martin


From: IT Consultant <0binarybudd...@gmail.com>
Sent: April 11, 2017 2:01 PM
To: users@kafka.apache.org
Subject: Kafka security

Hi All

How can I avoid using a password for keystore creation?

Our corporate policies don't allow us to hard-code passwords. We are
currently passing the keystore password while accessing a TLS-enabled Kafka
instance.

I would like to use either a passwordless keystore or avoid the password for
clients accessing Kafka.


Please help


Kafka security

2017-04-11 Thread IT Consultant
Hi All

How can I avoid using a password for keystore creation?

Our corporate policies don't allow us to hard-code passwords. We are
currently passing the keystore password while accessing a TLS-enabled Kafka
instance.

I would like to use either a passwordless keystore or avoid the password for
clients accessing Kafka.


Please help


Re: auto.offset.reset for Kafka streams 0.10.2.0

2017-04-11 Thread Mahendra Kariya
Thanks for the clarification, Matthias / Michael!

+1 to clearer documentation around this, because as far as I remember the
default for normal consumers is "latest", and since Streams internally uses
normal consumers, the first intuition is that it will be "latest" for Streams
as well.
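
For anyone who hits this later, a minimal sketch of overriding the Streams
default back to "latest" (the application id and bootstrap address are
placeholders):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
// Streams defaults auto.offset.reset to "earliest"; override it explicitly:
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");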


Best,
Mahendra



On Tue, Apr 11, 2017 at 12:31 PM, Michael Noll  wrote:

> It's also documented at
> http://docs.confluent.io/current/streams/developer-guide.html#non-streams-configuration-parameters
> .
>
> FYI: We have already begun syncing the Confluent docs for Streams into the
> Apache Kafka docs for Streams, but there's still quite some work left
> (volunteers are welcome :-P).
>
> -Michael
>
>
> On Tue, Apr 11, 2017 at 8:37 AM, Matthias J. Sax 
> wrote:
>
> > Default for Streams is "earliest"
> >
> > cf.
> > https://github.com/apache/kafka/blob/0.10.2.0/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java#L405
> >
> >
> > -Matthias
> >
> > On 4/10/17 9:41 PM, Mahendra Kariya wrote:
> > > This was my assumption as well. But I had to explicitly specify
> > > auto.offset.reset=latest. Without this config, it started from
> > > "earliest"!
> > >
> > > On Tue, Apr 11, 2017 at 10:07 AM, Sachin Mittal 
> > wrote:
> > >
> > >> As far as I know, the default is "latest" if no offsets are found;
> > >> otherwise it starts from the committed offset.
> > >>
> > >>
> > >> On Tue, Apr 11, 2017 at 8:51 AM, Mahendra Kariya <
> > >> mahendra.kar...@go-jek.com
> > >>> wrote:
> > >>
> > >>> Hey All,
> > >>>
> > >>> Is the auto offset reset set to "earliest" by default in Kafka
> streams
> > >>> 0.10.2.0? I thought default was "latest".
> > >>>
> > >>> I started a new Kafka streams application with a fresh application id
> > and
> > >>> it started consuming messages from the beginning.
> > >>>
> > >>
> > >
> >
> >
>


Re: auto.offset.reset for Kafka streams 0.10.2.0

2017-04-11 Thread Michael Noll
It's also documented at
http://docs.confluent.io/current/streams/developer-guide.html#non-streams-configuration-parameters
.

FYI: We have already begun syncing the Confluent docs for Streams into the
Apache Kafka docs for Streams, but there's still quite some work left
(volunteers are welcome :-P).

-Michael


On Tue, Apr 11, 2017 at 8:37 AM, Matthias J. Sax 
wrote:

> Default for Streams is "earliest"
>
> cf.
> https://github.com/apache/kafka/blob/0.10.2.0/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java#L405
>
>
> -Matthias
>
> On 4/10/17 9:41 PM, Mahendra Kariya wrote:
> > This was even my assumption. But I had to explicitly specify
> > auto.offset.reset=latest. Without this config, it started from
> "earliest"!
> >
> > On Tue, Apr 11, 2017 at 10:07 AM, Sachin Mittal 
> wrote:
> >
> >> As far as I know, the default is "latest" if no offsets are found;
> >> otherwise it starts from the committed offset.
> >>
> >>
> >> On Tue, Apr 11, 2017 at 8:51 AM, Mahendra Kariya <
> >> mahendra.kar...@go-jek.com
> >>> wrote:
> >>
> >>> Hey All,
> >>>
> >>> Is the auto offset reset set to "earliest" by default in Kafka streams
> >>> 0.10.2.0? I thought default was "latest".
> >>>
> >>> I started a new Kafka streams application with a fresh application id
> and
> >>> it started consuming messages from the beginning.
> >>>
> >>
> >
>
>


Re: auto.offset.reset for Kafka streams 0.10.2.0

2017-04-11 Thread Matthias J. Sax
Default for Streams is "earliest"

cf.
https://github.com/apache/kafka/blob/0.10.2.0/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java#L405


-Matthias

On 4/10/17 9:41 PM, Mahendra Kariya wrote:
> This was my assumption as well. But I had to explicitly specify
> auto.offset.reset=latest. Without this config, it started from "earliest"!
> 
> On Tue, Apr 11, 2017 at 10:07 AM, Sachin Mittal  wrote:
> 
>> As far as I know, the default is "latest" if no offsets are found;
>> otherwise it starts from the committed offset.
>>
>>
>> On Tue, Apr 11, 2017 at 8:51 AM, Mahendra Kariya <
>> mahendra.kar...@go-jek.com
>>> wrote:
>>
>>> Hey All,
>>>
>>> Is the auto offset reset set to "earliest" by default in Kafka streams
>>> 0.10.2.0? I thought default was "latest".
>>>
>>> I started a new Kafka streams application with a fresh application id and
>>> it started consuming messages from the beginning.
>>>
>>
> 


