I have added an attachment containing the complete trace in my initial mail.
On Mon, Aug 14, 2017 at 9:47 PM, Damian Guy wrote:
> Do you have the logs leading up to the exception?
>
> On Mon, 14 Aug 2017 at 06:52 Sameer Kumar wrote:
>
> > Exception while doing the join, can't decipher more on this.
Got it. Thanks, Guozhang.
On Tue, Aug 15, 2017 at 1:55 AM, Guozhang Wang wrote:
> Sameer,
>
> It is mainly to guard for concurrent access for interactive queries:
>
> https://kafka.apache.org/0110/documentation/streams/developer-guide#streams_interactive_queries
>
> In Kafka Streams, we allow
First question: we know that Kafka Streams commits offsets at intervals.
But which offsets are committed? Are they the offsets of messages that have
just arrived at the source node, or of messages that have made it through
the entire pipeline? If the latter, how do we avoid data loss?
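For reference, the commit cadence itself is controlled by commit.interval.ms.
A minimal sketch of setting it, assuming a 0.10.x/0.11.x Streams app; the
application id and broker address are placeholders:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class CommitIntervalExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Streams commits consumer offsets on this interval (default 30000 ms)
        props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 10000);
    }
}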
Hi,
The connector-plugins endpoint does not currently list transformation
classes. However, if you are using the latest Kafka version (>= 0.11.0),
one way to see whether your transform is discovered on the given classpath
during startup is to check whether a log message such as the one below is
printed
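As a hedged illustration of what such a transform looks like — the class
name is hypothetical — here is a minimal no-op Transformation that plugin
scanning should pick up if it is on the classpath:

import java.util.Map;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;
import org.apache.kafka.connect.transforms.Transformation;

public class NoopTransform<R extends ConnectRecord<R>> implements Transformation<R> {
    @Override
    public R apply(R record) {
        return record; // pass the record through unchanged
    }

    @Override
    public ConfigDef config() {
        return new ConfigDef(); // no configuration options
    }

    @Override
    public void configure(Map<String, ?> configs) {}

    @Override
    public void close() {}
}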
Guozhang,
Will do. If it gets stuck in this loop again, I'll inspect the broker log
dirs. I'm running the 0.11 release right now.
On Mon, Aug 14, 2017 at 4:22 PM, Guozhang Wang wrote:
> Garrett,
>
> What confuses me is that you mentioned it starts spamming the logs,
> meaning that it falls into this
Sameer,
It is mainly to guard for concurrent access for interactive queries:
https://kafka.apache.org/0110/documentation/streams/developer-guide#streams_interactive_queries
In Kafka Streams, we allow users to independently query the running state
stores in real-time in their own caller thread wh
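For concreteness, a minimal sketch of such an interactive query against a
running KafkaStreams instance; the store name "counts" is a placeholder:

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class InteractiveQueryExample {
    // Query a key from the caller's own thread; "counts" is a placeholder store name
    static Long lookup(KafkaStreams streams, String key) {
        ReadOnlyKeyValueStore<String, Long> store =
                streams.store("counts", QueryableStoreTypes.<String, Long>keyValueStore());
        return store.get(key); // read-only: concurrent access is guarded by the library
    }
}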
Garrett,
What confuses me is that you mentioned it starts spamming the logs, meaning
that it falls into this endless loop of:
1) getting an out-of-range exception,
2) resetting the offset by querying the broker for the offset,
3) getting offset 0 back from the broker,
4) sending a fetch request starting at offset 0, g
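One way to sanity-check step 3 is to ask the broker directly for the
partition's earliest and latest offsets. A sketch assuming a 0.10.1+
consumer; the topic and broker address are placeholders:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class OffsetProbe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder topic
            Map<TopicPartition, Long> begin = consumer.beginningOffsets(Collections.singleton(tp));
            Map<TopicPartition, Long> end = consumer.endOffsets(Collections.singleton(tp));
            System.out.println("earliest=" + begin.get(tp) + " latest=" + end.get(tp));
        }
    }
}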
Hello Bart,
Thanks for your detailed explanation. I see your motivation now, and it
indeed validates itself as a single application that dynamically changes
subscriptions.
As I mentioned, Streams today does not have native support for dynamically
changing subscriptions. That being said, if you would
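One partial workaround worth noting — my own hedged suggestion, not
something the thread confirms — is that Streams can subscribe by regex
pattern, which also picks up newly created matching topics. A sketch
against the 0.11-era KStreamBuilder API; the pattern is illustrative:

import java.util.regex.Pattern;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class PatternSubscription {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        // Subscribes to every topic matching the regex, including ones created later
        KStream<byte[], byte[]> stream = builder.stream(Pattern.compile("input-.*"));
    }
}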
Hi Srikanth,
0.11.0.0 is looking pretty good so far. One concern is:
https://issues.apache.org/jira/browse/KAFKA-5600
There is no concrete plan for 0.11.0.1, but I'd expect a RC within a few
weeks (2 to 4, probably).
Ismael
On Mon, Aug 14, 2017 at 5:15 AM, Srikanth Sampath wrote:
> Hi,
> We
Do you have the logs leading up to the exception?
On Mon, 14 Aug 2017 at 06:52 Sameer Kumar wrote:
> > Exception while doing the join, can't decipher more on this. Has anyone
> > faced it? Complete exception trace attached.
>
> 2017-08-14 11:15:55 ERROR ConsumerCoordinator:269 - User provided listene
0.11.0.0 is a brand new release, with a very large number of changes
compared to the previous stable release (0.10.2.1). As a thing that stores
data, I would not recommend you switch to it without a very large amount of
testing and validation, probably involving running a shadow setup of your
production
Hi friends,
Anyone noticed that calling:
./bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe
--group group1
Is way slower than calling:
./bin/kafka-consumer-groups.sh --zookeeper zk.host:2181 --describe --group
group2
In my case the first command takes about 4 min, the second
Hi,
Would like to subscribe.
-Srikanth
Guozhang,
Thanks for the reply! Based on what you said, I am going to increase
log.retention.hours a bunch and see what happens. Things typically break
long before 48 hours, but you're right, the data could have expired by then
too. I'll pay attention to that as well.
As far as messing with
Hi Guozhang,
For the use-cases I have in mind, the offset of the source topics is
irrelevant to the state stored by the streams application.
So, when topic 'A' gets dropped and topic 'B' is added, I would prefer the
application to start reading from 'latest', but that is actually not *that*
important
Thank you for your help, Eno and Guozhang.
Indeed, I missed the obvious; I made a bad assumption about the defaults and
should have checked the source code. I thought Kafka Streams was setting
AUTO_OFFSET_RESET_CONFIG to "earliest", and it is, but not for the version
I'm using! I'm using version 0.10.0.1, which
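To avoid depending on the version-specific default, the reset policy can be
set explicitly in the Streams configuration. A minimal sketch; the
application id and broker address are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class OffsetResetConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Set explicitly rather than relying on the version-dependent default
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    }
}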
I think your application (where the producer resides) is facing GC issues.
The time taken by GC might be longer than `request.timeout.ms`. Check your
`jvm.log` and update `request.timeout.ms`. The same property applies to the
producer, the consumer, and the broker. Increase the config only f
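For illustration, a hedged sketch of raising the timeout on the producer
side; the broker address and the chosen value are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class TimeoutConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Raise only after confirming long GC pauses in the JVM logs
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "60000"); // value is illustrative
    }
}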