nning threads; if that doesn't elucidate the cause,
> you could move on to sampling or profiling via JMX to see what's taking up
> all that CPU.
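Before reaching for full JMX profiling, a minimal sketch of one way to narrow down a CPU-pinned broker on Linux (assumes a single broker JVM per host and that `top` and `jstack` are on the path; `<TID>` is a placeholder for whichever thread ID `top` reports as busiest):

```
# Find the broker JVM (assumes the standard kafka.Kafka main class).
PID=$(pgrep -f kafka.Kafka)

# Per-thread CPU usage; the TID column identifies the hot thread.
top -b -H -n 1 -p "$PID" | head -n 20

# jstack prints native thread IDs in hex (nid=0x...), so convert the
# decimal TID from top and grep the stack dump for it.
jstack "$PID" | grep -A 20 "nid=0x$(printf '%x' <TID>)"
```

This maps the busiest OS thread back to a Java stack trace, which often points directly at replication, request handling, or format-conversion work.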
>
> - Jordan Pilat
>
> On 2017-09-21 07:58, Elliot Crosby-McCullough freeagent.com> wrote:
> > Hello,
> >
rokers (i.e.,
> 0.9.0.1 client, 0.10.0.1 broker), then I think there could be message
> format conversions both for incoming messages as well as for replication.
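The conversion overhead John mentions can be avoided, at the cost of keeping the older on-disk format, by pinning the message format version; a hedged `server.properties` sketch for the 0.10.x line (the value shown is illustrative and should match the oldest client version actually in use):

```
# server.properties (illustrative value)
# Keep the on-disk format at the old version while 0.9.x clients remain,
# avoiding per-message down-conversion on every fetch.
log.message.format.version=0.9.0.1
```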
>
> --John
>
> On Thu, Sep 21, 2017 at 10:42 AM, Elliot Crosby-McCullough <
> elliot.crosby-mccullo...@freeagent.c
Nothing; that value (that group of values) was at its default when we started
debugging.
On 21 September 2017 at 15:08, Ismael Juma wrote:
> Thanks. What happens if you reduce num.replica.fetchers?
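For reference, a sketch of the setting Ismael is asking about (the value shown is the stock default; raising it multiplies fetcher threads, which can account for replication CPU):

```
# server.properties
# Stock default is 1; each increment adds one replica fetcher thread per
# source broker, adding replication CPU and network traffic.
num.replica.fetchers=1
```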
>
> On Thu, Sep 21, 2017 at 3:02 PM, Elliot Crosby-McCullough <
> elliot
g one 50-partition topic in an otherwise empty cluster.
On 21 September 2017 at 14:20, Ismael Juma wrote:
> A couple of questions: how many partitions in the cluster and what are your
> broker configs?
>
> On Thu, Sep 21, 2017 at 1:58 PM, Elliot Crosby-McCullough <
>
Hello,
We've been trying to debug an issue with our Kafka cluster for several days
now and we're nearly out of options.
We have 3 Kafka brokers associated with 3 ZooKeeper nodes and 3 registry
nodes, plus a few streams clients and a Ruby producer.
Two of the three brokers are pinning a core an
ed to schedule a call every N minutes of wall-clock time you'd
> need to use your own scheduler.
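The "use your own scheduler" advice above can be sketched outside the Streams API entirely; a minimal wall-clock repeater in illustrative Python (the name `schedule_wall_clock` is made up for this sketch and is not a Kafka API):

```python
import threading
import time

def schedule_wall_clock(interval_s, fn):
    """Invoke fn every interval_s seconds of wall-clock time,
    independent of any record arriving (unlike stream-time punctuate)."""
    stop = threading.Event()

    def loop():
        # Event.wait doubles as an interruptible sleep; it returns True
        # once stop.set() is called, ending the loop.
        while not stop.wait(interval_s):
            fn()

    threading.Thread(target=loop, daemon=True).start()
    return stop  # call stop.set() to cancel

calls = []
cancel = schedule_wall_clock(0.05, lambda: calls.append(time.monotonic()))
time.sleep(0.22)
cancel.set()
print(len(calls))  # a handful of ticks fired, even with no "records" arriving
```

In the JVM the analogous tool would be a `ScheduledExecutorService` driving whatever work the punctuate callback was meant to do.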
>
> Does that help?
> Michael
>
>
>
> On Tue, Mar 28, 2017 at 10:58 AM, Elliot Crosby-McCullough <
> elliot.crosby-mccullo...@freeagent.com> wrote:
>
> > Hi
Hi there,
I've written a simple processor which expects to have #process called on it
for each message and configures regular punctuate calls via
`context.schedule`.
Regardless of what configuration I try for timestamp extraction, I cannot
get #punctuate to be called, despite #process being called
; Eno
>
> > On 11 Feb 2017, at 17:56, Elliot Crosby-McCullough
> wrote:
> >
> > For my own clarity, is there any actual distinction between
> > `stream.to('topic')`
> > where `topic` is set to compact and the upcoming
> `stream.toTable('topic')
For my own clarity, is there any actual distinction between
`stream.to('topic')`
where `topic` is set to compact and the upcoming `stream.toTable('topic')`
if you're not going to immediately use the table in this topology, i.e. if
you want to use it as a table in some other processor application?
another message that is referring
> to the same dimension. So you'd need to handle this somehow yourself.
>
> On Thu, 2 Feb 2017 at 08:26 Elliot Crosby-McCullough
> wrote:
>
> > Sorry I left out too much context there.
> >
> > The current plan is to take a ra
g them into DBs like Redshift.
On 1 February 2017 at 22:40, Matthias J. Sax wrote:
> I am not sure if I can follow... what do you mean by "find or create"
> semantics?
>
> What do you mean by "my first pass processor"?
>
>
> -Matthias
>
> On
ain topology") and thus the current
> punctuate() issues should be resolved for this case.
>
> cf.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-99%3A+Add+Global+Tables+to+Kafka+Streams
>
>
>
> -Matthias
>
> On 2/1/17 10:31 AM, Elliot Crosby-McCulloug
cess because no new data gets appended that is longer than your
> > punctuation interval, some calls to punctuate might not fire.
> >
> > Let's say the KTable does not get an update for 5 minutes; then you
> > would miss 9 calls to punctuate(), and get only a single ca
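The stream-time behaviour described above can be simulated with a toy model (my own sketch, not the Kafka Streams implementation): punctuations fire only as record timestamps advance, so a quiet topic produces no punctuations at all.

```python
def punctuations(record_timestamps_ms, interval_ms):
    """Return the stream-time instants at which a punctuate scheduled
    every interval_ms would fire, given the arriving record timestamps."""
    fired = []
    next_fire = interval_ms
    for ts in record_timestamps_ms:
        # Stream time only advances when a record arrives; fire every
        # punctuation the new stream time has passed.
        while ts >= next_fire:
            fired.append(next_fire)
            next_fire += interval_ms
    return fired

# Records keep arriving: punctuate fires each simulated minute.
print(punctuations([30_000, 60_000, 90_000, 120_000], 60_000))  # [60000, 120000]
# Then the topic goes quiet: no further punctuations fire, however
# much wall-clock time passes.
print(punctuations([60_000], 60_000))  # [60000]
```

This is exactly why a wall-clock requirement needs an external scheduler rather than `context.schedule` in the pre-KIP-138 API.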
Hi there,
I've been reading through the Kafka Streams documentation and there seems
to be a tricky limitation that I'd like to make sure I've understood
correctly.
The docs[1] talk about the `punctuate` callback being based on stream time
and that all incoming partitions of all incoming topics mu