Hi List,
TL;DR
Looking for a Go library that can decode binary data written with a
different schema than the reader's.
We have a system in place that allows for multiple Avro schemas to be live
for data streams. There can be producers producing an earlier version of
the schema (version 1) onto the
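For reference, this is the mechanism such a library would need, shown as a hedged sketch with the Apache Avro *Java* API (a Go library would have to offer the same reader/writer schema resolution). `writerSchemaJson`, `readerSchemaJson`, and `payload` are placeholders, not names from this thread.

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;

// The reader passes BOTH the writer's schema (e.g. version 1) and its own
// schema; Avro resolves between them (renamed/added fields, defaults, etc.).
Schema writerSchema = new Schema.Parser().parse(writerSchemaJson);
Schema readerSchema = new Schema.Parser().parse(readerSchemaJson);

// Decode bytes written with the old schema, projected onto the new one.
GenericDatumReader<GenericRecord> datumReader =
    new GenericDatumReader<>(writerSchema, readerSchema);
Decoder decoder = DecoderFactory.get().binaryDecoder(payload, null);
GenericRecord record = datumReader.read(null, decoder);
```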
Baki,
You can get the message "o.a.k.s.state.internals.WindowKeySchema :
Warning: window end time was truncated to Long.MAX" when your
TimeWindowedDeserializer is created without a windowSize. There are two
constructors for TimeWindowedDeserializer; are you using the one that
takes a windowSize?
https://
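As a hedged sketch (Kafka Streams API; the 5-minute size is a placeholder, not from this thread): the inner-only constructor leaves the window size unset, so the deserializer falls back to Long.MAX_VALUE for the window end and logs that truncation warning. Passing the size explicitly avoids it.

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.streams.kstream.TimeWindowedDeserializer;

// Window size must match the size used when the windowed data was produced.
long windowSizeMs = Duration.ofMinutes(5).toMillis();
TimeWindowedDeserializer<String> deserializer =
    new TimeWindowedDeserializer<>(new StringDeserializer(), windowSizeMs);
```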
We have been giving this a bunch of thought lately. We attempted to
replace PARTITION_ASSIGNMENT_STRATEGY_CONFIG
with our own implementation that hooks into our deployment service. The idea is
simple: the new deployment gets *standby tasks assigned to them until they
are caught up*. Once they are caugh
> The commit interval is small to keep end-to-end processing latency
> small. For example, if data is repartitioned, a downstream task can only
> read the data after the upstream task commits its transaction.
>
> -Matthias
>
> On 4/16/19 9:56 AM, Scott Reynolds wrote:
> > Hi,
Hi,
I have been unable to determine why the default commit interval for an
exactly-once Streams application is 100L (100 ms). This seems really
aggressive and produces a large number of offset commits to our brokers. I
have changed this in our application, but I am worried I have now introduced
a bug in the appl
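For anyone following along, a hedged sketch of the override being discussed (the 1000 ms value is illustrative, not a recommendation from this thread):

```properties
# With exactly-once enabled, commit.interval.ms defaults to 100 ms;
# raising it trades end-to-end latency for fewer transactional commits.
processing.guarantee=exactly_once
commit.interval.ms=1000
```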
y private data and
> forgetting
> > the key when data has to be deleted.
> > Sadly, our legal department, after some check-ins, has concluded that this
> > approach only "blocks" the data rather than deleting it, and as a
> > consequence it can cause us problems. If my guess about y
ned in the messages. So not deleting the message,
> but editing it.
> For doing that, my intention is to replicate the topic and apply a
> transformation over it.
> I think frameworks like Kafka Streams or Apache Storm could help here.
>
> Has anybody had to solve this problem?
>
> Thanks i
so the
> >>data pushed using this older version leader, will not be synced with
> other
> >>replicas and if this leader(older version) goes down for an
> upgrade, other
> >>updated replicas will be shown in the in-sync column and become leader,
>
--
Scott Reynolds
Principal Engineer
You cannot use a 0.10 client with a 0.8 cluster; the protocol changed.
You CAN use a 0.8 client with a 0.10 cluster.
On Thu, Oct 6, 2016 at 12:00 PM Craig Swift
wrote:
> We're doing some fairly intensive data transformations in the current
> workers, so it's not as straightforward as just reading/pr
List,
Documentation about Group Coordinator election seems to reference
ZooKeeper, but I am unable to find anything in ZooKeeper. Is it using Kafka
to store this information, like consumer offsets?
Can someone explain how Group Coordinator election works?
We had an incident during a reassignmen
election
happens nor where the heartbeat response is generated. Anyone have any
guidance on where to look or how to debug? Grasping at straws at this
moment.
On Fri, Apr 15, 2016 at 10:36 AM Scott Reynolds
wrote:
> Awesome that is what I thought. Answer seems simple, speed up flush :-D,
> whic
eout.ms or
> have some timeout mechanism in the implementation of the flush method to
> return the control back to the framework so that it can send heartbeat to
> the coordinator.
>
> Thanks,
> Liquan
>
> On Fri, Apr 15, 2016 at 9:56 AM, Scott Reynolds
> wrote:
>
> >
List,
We are struggling with Kafka Connect settings. The process starts up,
handles a bunch of messages, and flushes. Then slowly the Group Coordinator
removes the tasks.
This has to be an interplay between Connect's flush interval and the call
to poll for each of these tasks. Here is my current setti
In a test in our staging environment, we kill -9'd the broker. It was started
back up by runit and began recovering. We are seeing errors like this:
WARN Found an corrupted index file,
/mnt/services/kafka/data/TOPIC-17/16763460.index, deleting and
rebuilding index... (kafka.log.Log)
The f
Ah, I have been down this path. It is the ZooKeeper client. It resolves and
caches the IP addresses:
https://github.com/apache/zookeeper/blob/bd9a1448f9b29859092e6bdca93da121ec166b7a/src/java/main/org/apache/zookeeper/client/StaticHostProvider.java#L108
I believe they are cached forever.
We have h
Yep that is it. Thanks. I will watch the issue.
On Mon, Mar 14, 2016 at 1:13 PM Stevo Slavić wrote:
> I've recently created related ticket
> https://issues.apache.org/jira/browse/KAFKA-3390
>
> On Mon, Mar 14, 2016, 20:54 Scott Reynolds wrote:
>
> Conditional update of path
> /brokers/topics/messages.events/partitions/0/state with data
> {"controller_epoch":2,"leader":492687262,"version":1,"leader_epoch":4,"isr":[492687262]}
> and expected version 10 failed due to
> org.apache.zookeeper.KeeperException$NoN
I believe this is caused by deleting the
<luke.steen...@braintreepayments.com> wrote:
> Ah, that's a good idea. Do you know if kafka-manager works with Kafka 0.9
> by chance? That would be a nice improvement over the CLI tools.
>
> Thanks,
> Luke
>
>
> On Tue, Jan 12, 2016 at 4:53 PM, Scott Reynolds
> wrot
Luke,
We practice the same immutable pattern on AWS. To decommission a broker, we
use partition reassignment to move the partitions off of the node first, then
preferred leader election. To do this with a web UI, so that you can handle
it on lizard brain at 3 am, we have the Yahoo Kafka Manager
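For reference, a hedged sketch of the equivalent stock CLI steps (hostname, topic name, and broker ids are placeholders; the Kafka Manager UI drives the same mechanism):

```shell
# Describe where partition replicas should move, then execute the plan.
cat > move.json <<'EOF'
{"version":1,"partitions":[{"topic":"events","partition":0,"replicas":[2,3]}]}
EOF
kafka-reassign-partitions.sh --zookeeper zk1:2181 \
  --reassignment-json-file move.json --execute

# Once replicas have moved, put leadership back on the preferred replicas.
kafka-preferred-replica-election.sh --zookeeper zk1:2181
```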
On Mon, Nov 16, 2015 at 8:27 AM, Abu-Obeid, Osama <
osama.abu-ob...@morganstanley.com> wrote:
> I can observe the same thing:
>
> - Lag values read through the Kafka consumer JMX are 0
>
This metric includes *uncommitted* offsets
- Lag values read through kafka-run-class.sh
> kafka.tools.ConsumerO
On Thu, Jun 4, 2015 at 1:55 PM, Otis Gospodnetić wrote:
> Hi,
>
> On Thu, Jun 4, 2015 at 4:26 PM, Scott Reynolds
> wrote:
>
> > I believe the JMX metrics reflect the consumer PRIOR to committing
> offsets
> > to Kafka / Zookeeper. But when you query from the co
I believe the JMX metrics reflect the consumer PRIOR to committing offsets
to Kafka / Zookeeper. But when you query from the command line using the
kafka tools, you are just getting the committed offsets.
On Thu, Jun 4, 2015 at 1:23 PM, Otis Gospodnetic wrote:
> Hi,
>
> Here's something potentia
On my brokers I am seeing this error log message:
Closing socket for /X because of error (X is the IP address of a consumer)
> 2015-01-27_17:32:58.29890 java.io.IOException: Connection reset by peer
> 2015-01-27_17:32:58.29890 at
> sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
> 2015
A question about 0.8.1.1 and acks. I was under the impression that setting
acks to 2 would not throw an exception when there is only one node in the
ISR. Am I incorrect? Hence the need for min_isr.
On Tue, Oct 14, 2014 at 11:50 AM, Kyle Banker wrote:
> It's quite difficult to infer from the docs the
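As a hedged footnote on where this landed: later Java clients only accept acks of 0, 1, or all/-1, and the durability contract moved to a producer/broker pair of settings (values below are illustrative):

```properties
# Producer: wait for all in-sync replicas to acknowledge.
acks=all
# Broker or topic config: fail the produce (NotEnoughReplicas) when the
# ISR shrinks below this size, instead of silently accepting the write.
min.insync.replicas=2
```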