Hi,
I am one of the maintainers of prometheus-kafka-consumer-group-exporter[1],
which exports consumer group offsets and lag to Prometheus. The way we
currently scrape this information is by periodically executing
`kafka-consumer-groups.sh --describe` for each group and parsing the output.
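The describe-and-parse step can be sketched roughly like this (a minimal sketch only; the column layout of `kafka-consumer-groups.sh --describe` varies across Kafka versions, and this is not the exporter's actual code):

```python
# Sketch: parse the whitespace-separated table that
# `kafka-consumer-groups.sh --describe` prints. Column names below are
# an assumption; real output differs between Kafka versions.

def parse_describe_output(text):
    lines = [l for l in text.splitlines() if l.strip()]
    header = lines[0].split()
    return [dict(zip(header, line.split())) for line in lines[1:]]

sample = """\
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID
events 0 100 102 2 consumer-1
events 1 50 50 0 consumer-2
"""
rows = parse_describe_output(sample)
print(rows[0]["LAG"])  # "2"
```

Each row can then be turned into a Prometheus gauge sample keyed by group, topic and partition.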
...the goal shouldn't be moving towards Consul. It should just
> be
> > flexible enough for users to pick any distributed coordination system.
> > On Mon, Sep 19, 2016 2:23 AM, Jens Rantil <jens.ran...@tink.se> wrote:
> >
> ...from the exponentially growing Consul
> community.
> Jennifer Fountain
> DevOPS
>
--
Jens Rantil
Backend Developer @ Tink
Tink AB, Wallingatan 5, 111 60 Stockholm, Sweden
For urgent matters you can reach me at +46-708-84 18 32.
> >> want to be spinning up additional brokers just so we can add more event
> >> streams, especially if the load for each is reasonable.
> >>
> >> Another option we were looking into was to not isolate at the
> >> topic/partition level but to keep a set of pendi
temp uneven
> balancing for better throughput), so if the test publishes only 40k messages,
> only two partitions will actually get the data.
>
> Kind regards,
> Stevo Slavic.
>
> On Sun, Sep 11, 2016, 22:49 Jens Rantil <jens.ran...@tink.se> wrote:
>
> > Hi,
  { partition: 0, consumers: "…", lag: 2 },
  { partition: 1, consumers: "192.168.1.2", lag: 2 },
  { partition: 2, consumers: "192.168.1.3", lag: 0 },
]
Clearly, it would be more optimal if "192.168.1.3" also took care of
partition 1.
Cheers,
Jens
--
Jens Rantil
Backend engineer
Tink AB
of our logs. If I reduce
topic retention to 3 days and brokers purge old logs, will consumer groups
automagically start consuming from the "new beginning" (that is, new
smallest offset)? This would save us some processing time...
Thanks,
Jens
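For what it's worth, whether this happens automatically should depend on the consumer's offset-reset setting: if the committed offset has been purged by retention, the consumer falls out of range and applies `auto.offset.reset`. A consumer config sketch (illustrative; new-consumer value names shown, the old consumer uses smallest/largest instead of earliest/latest):

```
# Applied only when no committed offset exists or the committed offset
# is out of range (e.g. purged by retention):
auto.offset.reset=earliest
```

With `earliest`, consumers whose offsets were purged should resume from the new smallest offset.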
r" <mudit.ku...@askme.in>
> > > > To: users@kafka.apache.org
> > > > Sent: Tuesday, May 24, 2016 3:53:26 PM
> > > > Subject: Re: Kafka encryption
> > > >
> > > > Yes, it does that. What specifically are you looking for?
> > > >
> > > > On 5/24/16, 3:52 PM, "Snehalata Nagaje" <
> > > > snehalata.nag...@harbingergroup.com> wrote:
> > > >
> > > > >Hi All,
> > > > >
> > > > >
> > > > >We have a requirement for encryption in Kafka.
> > > > >
> > > > >As per the docs, we can configure Kafka with SSL for secured
> > communication.
> > > > >
> > > > >But does Kafka also store data in encrypted format?
> > > > >
> > > > >
> > > > >Thanks,
> > > > >Snehalata
> > > >
> > >
> >
>
...which I might be missing. Please let
> me know if anyone knows.
>
>
> --
> *Regards,*
> *Ravi*
>
...messages it is polling, the inner process is sometimes
> not able to finish the processing within the heartbeat interval limit, which
> makes consumer rebalancing kick in again and again. This only happens
> when the consumer is way behind in offset e.g there are 10 messages to
> be processed in the topic.
>
> Thanks
>
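A common mitigation for this (a sketch; these consumer settings exist in Kafka 0.10.x and later, and defaults differ by version) is to shrink the amount of work done between polls and give the group more slack:

```
# Fewer records per poll() means less processing between heartbeats:
max.poll.records=100
# Allow more time between poll() calls before the group evicts the consumer:
max.poll.interval.ms=600000
session.timeout.ms=30000
```

The alternative is to hand records to a worker thread and keep calling poll() (or pause()/resume()) while it runs.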
explains how one could write a custom partitioner.
> I'd like to know how it was used to solve such data skew.
> We can compute some statistics on key distribution offline and use it in
> the partitioner.
> Is that a good idea? Or is it way too much logic for a partitioner?
> Any
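One way the offline statistics could be used is a skew-aware partitioner: pin the known hot keys to dedicated partitions and hash the long tail over the rest. A sketch under assumed names (the stats dict, key names, and partition counts are all hypothetical, not from the thread):

```python
import zlib

# Hot keys discovered by offline analysis, pinned to fixed partitions.
HOT_KEYS = {"big-customer": 0, "other-big-customer": 1}  # hypothetical
NUM_PARTITIONS = 8
RESERVED = len(set(HOT_KEYS.values()))

def partition(key: str) -> int:
    if key in HOT_KEYS:
        return HOT_KEYS[key]
    # Spread everything else over the non-reserved partitions.
    h = zlib.crc32(key.encode("utf-8"))
    return RESERVED + h % (NUM_PARTITIONS - RESERVED)

print(partition("big-customer"))    # 0
print(partition("some-small-key"))  # somewhere in 2..7, stable per key
```

Whether this is "too much logic" mostly comes down to how often the statistics go stale; the lookup itself is cheap.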
Hi,
When I add a replica broker to a cluster, will it first stream
historical logs from the leader? Or will it simply start storing new
messages from producers?
Thanks,
Jens
- Ensure only one consumer for each topic is running to avoid
>re-balancing
>
>
>
> --
> -Richard L. Burton III
> @rburton
>
> > This is a pretty basic question, but I don't think it is explained in the
> > JavaDoc
> >
> >
> >
> http://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html
> >
> > Thanks
> >
>
...Jay Kreps and Martin Kleppmann I would expect that someone
> had actually implemented some of the ideas they've been pushing. I'd also
> like to know what sort of problems Kafka would pose for long-term storage –
> would I need special storage nodes, or would replication be sufficient to
> ensure
Just making it more explicit: AFAIK, all Kafka consumers I've seen load
the incoming messages into memory. Unless you make it possible to stream them
to disk or something, you need to make sure your consumers have the available
memory.
Cheers,
Jens
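A back-of-envelope bound for that memory, assuming the new Java consumer's model of buffering up to `max.partition.fetch.bytes` per assigned partition (numbers illustrative, defaults vary by version):

```python
# Worst-case fetch buffering for one consumer: partitions assigned to it
# times the per-partition fetch cap (1 MiB is a common default).

def worst_case_fetch_bytes(assigned_partitions: int,
                           max_partition_fetch_bytes: int = 1 << 20) -> int:
    return assigned_partitions * max_partition_fetch_bytes

# A consumer assigned 50 partitions with the 1 MiB default:
print(worst_case_fetch_bytes(50) / 2**20, "MiB")  # 50.0 MiB
```

Actual decompressed record batches can exceed this, so treat it as a floor, not a ceiling.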
On Fri, Mar 4, 2016 at 6:07 PM Cees de Groot
a message from a producer must be
> consumed by those kinds of consumers.
>
> Any advice and help would be really appreciated.
>
> Thanks in advance!
>
> Best regards
>
> Kim
>
; file modification stamps). This to me would indicate the above comment
> assertion is incorrect; we have encountered a non-ISR leader elected even
> though it is configured not to do so.
>
> Any ideas on how to work around this?
>
> Thank you,
>
> Tony Sparks
>
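For reference, the behavior being discussed is presumably unclean leader election; assuming a broker version that supports it, it is controlled broker-wide (and per topic as an override) by:

```
# When false, a partition stays offline rather than electing a
# non-in-sync replica as leader:
unclean.leader.election.enable=false
```

If a non-ISR leader was elected despite this, checking the per-topic override and the exact broker version would be the first step.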
Kafka so although I know it has something to do with the
> brokers, I would like to know what has happened and what is the best way to
> fix it?
>
> TIA
>
roker using consumer.partitionsFor. If it can
> return partitioninfo it is considered live. Is this a good approach?
>
Hi,
I suggest you run a micro benchmark and test it for your use case. It should
be pretty straightforward.
Cheers,
Jens
–
Sent from Mailbox
On Thu, Feb 11, 2016 at 4:24 PM, yazgoo wrote:
> Hi everyone,
> I have multiple disks on my broker.
> Do you know if
sponsible for consumer/ producer side, I'm responsible for
> message broker and after 2 years I will have to prove that message exists
> on MessageBroker and I can prove that using e.g. logs from this time. ( log
> should look like this : Message ID (from message key ) , timestamp )
>
Hi again,
A somewhat related question is also how the heartbeat interval and session
timeout relates to the poll timeout. Must the poll timeout always be lower
than the heartbeat interval?
Cheers,
Jens
On Monday, February 8, 2016, Jens Rantil <jens.ran...@tink.se> wrote:
> Hi,
>
&
second heartbeat.
- Why can't session timeout simply be based on heartbeat interval?
Could anyone clarify this a bit? Also, if you are writing a new consumer,
what is your reasoning when setting these two values?
Thanks,
Jens
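For context on how these two settings are usually related (a rule of thumb, not a hard requirement enforced by every version): the heartbeat interval is kept well below the session timeout, often around a third of it, so several heartbeats can be missed before the broker declares the consumer dead. In the 0.9-era consumer, heartbeats are only sent from poll(), so the poll loop must also come back within the session timeout; later versions moved heartbeating to a background thread.

```
# Illustrative values following the ~1/3 rule of thumb:
session.timeout.ms=30000
heartbeat.interval.ms=10000
```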
be ideal.
>
> Any suggestions?
>
> Thanks and Regards,
> Joe
>
and a topic, when using the new Java consumer, how
can I figure out which partition the key will be written to? If not
possible, I will file a JIRA.
Thanks,
Jens
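The principle of the default partitioner can be sketched as a stable hash of the key modulo the partition count. Note this is an illustration only: Kafka's Java producer actually uses murmur2 (see `org.apache.kafka.common.utils.Utils`), so the crc32 stand-in below will NOT reproduce a real cluster's placement.

```python
import zlib

# Illustration of "stable hash of key, modulo partition count".
# Real Kafka uses murmur2, so numbers here won't match a live cluster.

def partition_for(key: bytes, num_partitions: int) -> int:
    return zlib.crc32(key) % num_partitions

p = partition_for(b"user-42", 6)
print(p)  # deterministic, somewhere in 0..5
```

The partition count itself can be fetched at runtime via `partitionsFor(topic)`.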
> >...you were talking about. Just some local network traffic, with the amount
> >depending on the size of the topic and your replication factor.
> >
> >-- Jim
> >
> >On 1/20/16, 10:37 AM, "Josh Wo" <z...@lendingclub.com> wrote:
> >
> >
keys
occasionally make the Kafka cluster unbalanced etc.
On a larger perspective, maybe it would be nice if a consumer group would
occasionally rebalance consumers based on lag.
Cheers,
Jens
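The lag-based rebalancing idea could be sketched as a greedy bin-packing of partitions onto consumers, largest lag first onto the least-loaded consumer. Purely illustrative; no Kafka assignor works this way out of the box:

```python
import heapq

# Greedy sketch: assign partitions so total lag per consumer is balanced.

def assign_by_lag(partition_lags: dict, consumers: list) -> dict:
    heap = [(0, c) for c in consumers]  # (total assigned lag, consumer)
    heapq.heapify(heap)
    assignment = {c: [] for c in consumers}
    for p, lag in sorted(partition_lags.items(), key=lambda kv: -kv[1]):
        load, c = heapq.heappop(heap)
        assignment[c].append(p)
        heapq.heappush(heap, (load + lag, c))
    return assignment

print(assign_by_lag({0: 2, 1: 2, 2: 0}, ["192.168.1.2", "192.168.1.3"]))
```

Applied to the partition/lag listing earlier in the thread, this would indeed hand partition 1 to the idle-ish consumer.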
one instance of zookeeper and kafka on each node?
> that I have say 10 devices to start with, later scaling it up to 10
> such devices.
>
> How should I model my topic? Should I create one topic per device?
>
> Thanks and Regards,
> Joe
>
> On Tue, Jan 19, 2016 at 4:58 PM, Jens Rantil <jens.ran...@tink.
Hi Josh,
Kafka will/can expire message logs after a certain TTL. Can't you simply rely
on expiration for key rotation? That is, you start producing messages with a
different key while your consumer temporarily handles the overlap of keys for
the duration of the TTL.
Just an idea,
Jens
Hi,
You are correct. The others will remain idle. This is why you generally want to
have at least the same number of partitions as consumers.
Cheers,
Jens
–
Skickat från Mailbox
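The partition-assignment arithmetic behind that answer can be sketched with a toy round-robin assignor (consumer names hypothetical):

```python
# With fewer partitions than consumers in a group, the extra consumers
# receive no assignment and sit idle.

def round_robin_assign(partitions, consumers):
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

a = round_robin_assign([0, 1, 2], ["c1", "c2", "c3", "c4"])
print(a)  # {'c1': [0], 'c2': [1], 'c3': [2], 'c4': []}
```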
On Sat, Jan 16, 2016 at 2:34 AM, Jason J. W. Williams
wrote:
> Hi,
> I'm
k: 11 x 4TB, CPU: 48 Core, RAM: 252 GB. We chose this configuration
> because our Hadoop cluster has that config and can easily handle that
> amount of data.
> 2. Having a bigger number of brokers but smaller broker config.
>
> I was hoping that somebody with more experience in using Kafka can advise
in zookeeper and that's why flume will not
> read messages from first offset.
>
> Is there any way to reset kafka offset in zookeeper?
>
> Thanks,
> Akhilesh
>
Hi,
Why don't your consumers instead subscribe to a single topic used to broadcast
to all of them? That way your consumers and producer will be much simpler.
Cheers,
Jens
–
Sent from Mailbox
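The broadcast pattern works because consumer groups are independent: every group gets its own copy of each message, so giving each consuming process its own `group.id` on the shared topic fans the data out to all of them. A config sketch (names illustrative):

```
# consumer-a.properties — distinct group.id per process means each one
# receives every message on the broadcast topic
group.id=broadcast-consumer-a

# consumer-b.properties
group.id=broadcast-consumer-b
```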
On Fri, Dec 18, 2015 at 4:16 PM, Abel . wrote:
> Hi,
> I have this
Hi,
In which part of the world?
Cheers,
Jens
–
Sent from Mailbox
On Thu, Dec 17, 2015 at 8:23 AM, prabhu v
wrote:
> Hi,
> Can anyone provide me the link for the KAFKA USER Group meetings which
> happened on Jun. 14, 2012 and June 3, 2014??
> Link
eturn any fetches. That would
> simplify
> > things a little more:
> >
> > while (running) {
> > ConsumerRecords<K, V> records = consumer.poll(1000);
> > Future future = executor.submit(new Processor(records));
> > while (!complete(future, heartbeatIn
ght fetch, and commit, messages that I then collect on my first
`consumer.poll(0);` call? Since `consumer.poll(0);` then would return a
non-empty list, I would essentially ignoring messages? Or is the pause()
call both 1) making sure consumer#poll never returns anything _and_ 2)
pauses the backgrou
...high-level consumer API? I mean, it sounds
like it should gracefully handle slow consumption of varying size. I might
be wrong.
Thanks,
Jens
Hi again,
For the record I filed an issue about this here:
https://issues.apache.org/jira/browse/KAFKA-2986
Cheers,
Jens
–
Sent from Mailbox
On Fri, Dec 11, 2015 at 7:56 PM, Jens Rantil <jens.ran...@tink.se> wrote:
> Hi,
> We've been experimenting a little with r
poll` method?
- Is Kafka a bad tool for our usecase?
Thanks and have a nice weekend,
Jens