Hi
In my java Kafka client I am seeing the below logs and I am not seeing any
data being written to Kafka 0.10. Can someone let me know what is going
wrong?
kafka "WARN [2018-03-22 20:31:27]
Thanks Ewen. Will take a look at the config and if there are any findings, will
come back here later.
--
Sent from my iPhone
On Mar 22, 2018, at 7:39 PM, Ewen Cheslack-Postava wrote:
The log is showing that the Connect worker is trying to make sure it has
read the entire log and gets to offset 119, but some other worker says it
has read to offset 169. The two are in inconsistent states, so the one that
seems to be behind will not start work with potentially outdated
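The check described above boils down to comparing the offset this worker has read in the internal topic against the highest offset another worker reports. A minimal sketch of that arithmetic (class and method names here are hypothetical, not Connect's actual API):

```java
// Hypothetical sketch of the consistency check described above: a worker
// that has only read the config topic to offset 119 while another worker
// reports offset 169 is behind and should not start work.
public class CatchUpCheck {
    // True when this worker has read at least as far as the highest
    // offset any other worker has reported.
    static boolean isCaughtUp(long localReadOffset, long highestReportedOffset) {
        return localReadOffset >= highestReportedOffset;
    }

    public static void main(String[] args) {
        System.out.println(isCaughtUp(119, 169)); // the case in the log: prints false
        System.out.println(isCaughtUp(169, 169)); // fully caught up: prints true
    }
}
```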
Hi Clayton,
I have forwarded the message to the Confluent mailing list and bcc'd the
kafka-users mailing list since this is a question about the Confluent
Docker images.
Ismael
On Thu, Mar 22, 2018 at 4:14 PM, Clayton Wohl wrote:
I see the official upcoming Confluent 4.1 Docker images are still using
Debian 8 (Jessie) and the July 2016 release of Azul JDK 8, 8u102-8.17.0.3.
https://github.com/confluentinc/cp-docker-images/blob/4.1.x/debian/base/Dockerfile
Wouldn't it make more sense to use the recent supported/LTS
Hi,
the WallclockTimestampExtractor is only applied to source topics (this
is a correctness fix included in the 1.0 release:
https://issues.apache.org/jira/browse/KAFKA-4785). When writing records
into the repartition topics, they get the timestamp of the input record
(i.e., whatever
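For reference, the extractor is set via the Streams `default.timestamp.extractor` config. A minimal sketch (string keys are used so the snippet stands alone without the kafka-streams jar; the application id and broker address are placeholders):

```java
import java.util.Properties;

public class StreamsTimestampConfig {
    public static Properties baseConfig() {
        Properties props = new Properties();
        props.put("application.id", "timestamp-patch-app"); // placeholder app id
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        // Only applied when reading source topics (the KAFKA-4785 fix);
        // repartition topics carry the input record's timestamp instead.
        props.put("default.timestamp.extractor",
                "org.apache.kafka.streams.processor.WallclockTimestampExtractor");
        return props;
    }
}
```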
Hi, we're having an outage in our production Kafka and getting desperate, any
help would be appreciated.
On 3/14 our consumer (a Storm spout) started getting messages from only 20 out
of 40 partitions on a topic. We only noticed yesterday. Restarting the consumer
with a new consumer group does
What container orchestration package do you all use?
Some context - my company is just starting its Kafka journey, and we are
trying to understand the operational concerns of maintaining a cluster.
On Thu, Mar 22, 2018 at 2:50 PM, Ted Yu wrote:
Hi Ben,
This sounds very similar to KAFKA-1194, an issue we encountered on our
Windows servers at work. Basically, Kafka is unable to rename log files as
they're locked by the OS. The bug is here and is still open:
https://issues.apache.org/jira/browse/KAFKA-1194. I am not aware of any
Hi, Can you give a little more detail on the type and size of the containers?
Thanks
Original message From: Thomas Crayford
Date: 3/22/18 11:19 AM (GMT-08:00) To: Users
Subject: Re: Running kafka in containers
We (heroku)
We (heroku) have run databases in containers since 2012, and kafka works
just as well as everything else. So: yes
Is running a containerized kafka cluster a viable strategy for production?
All
Kindly disregard last message
Thank You
Martin
__
From: Martin Gainty
Sent: Thursday, March 22, 2018 1:13 PM
To: users@kafka.apache.org
Subject: Re: kafka offset replication factor - Any
Zach said the toilet shutoff valve below leaks ever so slightly
Zach said we can probably get another week from the toilet shutoff valve
To repair the toilet shutoff valve he has to drain the whole house down.. we're
gonna wait until Friday, March 30 pm when I'm there
Meanwhile: make sure that
Are you sure which Kafka cluster & ZooKeeper cluster you launched the
"describe" command against?
Because your first log trace is "ERROR [KafkaApi-0] ..."
and your second is "[kfk01] ..."
From: Anand, Uttam
Sent: Thursday, March 22, 2018
I already checked. The topic has a replication factor of 1.
[kfk01]$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Topic:test PartitionCount:52 ReplicationFactor:1 Configs:
-Original Message-
From: adrien ruffie
Hi,
We are trying to run a Kafka Streams application against a Kafka cluster,
and some of the incoming messages have negative timestamps because some of
the producers are using an older version of Kafka.
Therefore we used WallclockTimestampExtractor to patch those timestamps.
But also read in the
Hi Johnny,
If committing offsets puts this much load on the cluster, you might want to
consider committing them elsewhere. Maybe a key value store. Or if you send
the data you read from Kafka to a transactional store, you can write the
offsets there.
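That idea — keeping offsets next to the data instead of committing to Kafka — can be sketched with a simple topic-partition to offset map. This is an in-memory stand-in with hypothetical names; a real deployment would back it with the key-value or transactional store mentioned above, and on restart you would seek() the consumer to the stored position:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory stand-in for an external offset store. In a real
// application the map would be replaced by writes to a key-value store or
// the same transactional store that receives the consumed data.
public class ExternalOffsetStore {
    private final Map<String, Long> offsets = new ConcurrentHashMap<>();

    private static String key(String topic, int partition) {
        return topic + "-" + partition;
    }

    // Record the next offset to read for a topic-partition,
    // ideally in the same transaction as the processed data.
    public void commit(String topic, int partition, long nextOffset) {
        offsets.put(key(topic, partition), nextOffset);
    }

    // Where to seek() on restart; 0 if nothing has been stored yet.
    public long position(String topic, int partition) {
        return offsets.getOrDefault(key(topic, partition), 0L);
    }
}
```

On startup the consumer would then call `consumer.seek(tp, store.position(topic, partition))` for each assigned partition instead of relying on Kafka-committed offsets.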
I hope this helps,
Andras
On Wed, Mar 21,
Hi,
Is it correct that out of all the consumer api calls, only poll does any real
activity?
Is it safe to assume that poll(0) will only fetch metadata, but will not have
time to get any response on select, returning an empty list of consumerRecords?
Excepting of course the case of thread being
Hi,
Any thoughts on the below?
Thanks!
-Original Message-
Sorry, I forgot to mention we're using version: kafka_2.12-1.0.0
-Original Message-
Hi,
We have an issue in our dev environments when running on Windows. To provide a
clean environment for testing we tear down the
Hi,
have you checked whether you already created an old topic with this
replication factor of '3'? You can check it by listing with the command
kafka-topics.sh --zookeeper ip:port --describe/list
Best regards,
Adrien
From: Anand, Uttam