Just to point out to you all: I also get a similar exception in my Streams
application when the producer is trying to commit something to the changelog topic.
Error sending record to topic test-stream-key-table-changelog
org.apache.kafka.common.errors.TimeoutException: Batch containing 2 record(s)
expired due to …
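A possible mitigation, sketched under the assumption that the Streams client is
new enough to have StreamsConfig.producerPrefix (the app id and broker address
below are placeholders), is to give the internal changelog producer more retries
and a longer request timeout:

import java.util.Properties
import org.apache.kafka.clients.producer.ProducerConfig
import org.apache.kafka.streams.StreamsConfig

val props = new Properties()
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test-stream")       // placeholder app id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder broker
// Forward producer-level settings to the internal producer that writes
// to the changelog topic, so batches survive a slow or busy broker longer.
props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRIES_CONFIG), "10")
props.put(StreamsConfig.producerPrefix(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG), "60000")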
@Robert Quinlivan: the producer is just the kafka-console-producer
shell script that ships in the kafka/bin directory (kafka/bin/windows in
my case). Nothing special.
I'll try messing with acks, because this problem is somewhat incidental
to what I'm trying to do, which is to see how big the log directory grows.
The “NewShinyProducer” is also deprecated.
On 4/18/17, 5:41 PM, "David Garcia" wrote:
The console producer in the 0.10.0.0 release uses the old producer which
doesn’t have “backoff”…it’s really just for testing simple producing:
object ConsoleProducer {
  def main(args: Array[String]) {
    try {
      val config = new ProducerConfig(args)
      val reader =
        Class.forName(config.readerClass).newInstance().asInstanceOf[MessageReader]
      ...
I am curious how your producer is configured. The producer maintains an
internal buffer of messages to be sent over to the broker. Is it possible
you are terminating the producer code in your test before this buffer has
been drained?
On Tue, Apr 18, 2017 at 5:29 PM, jan wrote:
> Thanks to both of you
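To make the buffering point concrete, here is a minimal sketch (broker address
and topic name are placeholders): send() only enqueues records in memory, so a
test that exits without flushing can lose everything still sitting in the buffer.

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092") // placeholder broker address
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)
try {
  producer.send(new ProducerRecord[String, String]("test-topic", "hello")) // placeholder topic
  // send() only appends to the producer's in-memory buffer; flush()
  // blocks until every buffered record is acknowledged or has failed.
  producer.flush()
} finally {
  producer.close() // close() also drains the buffer before shutting down
}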
Thanks to both of you. Some quick points:
I'd expect there to be backpressure on the producer if the broker is
busy, i.e. the broker would not respond to the console producer if it
was too busy to accept more messages, and the producer would hang
on the socket. Alternatively I'd hope the cons…
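For what it's worth, the new Java producer does provide a form of the hoped-for
backpressure; a hedged sketch of the two settings that govern it (the values
shown are the library defaults):

import java.util.Properties

val props = new Properties()
// Once buffer.memory is exhausted, send() blocks for up to max.block.ms
// and then throws a TimeoutException rather than silently dropping data.
props.put("buffer.memory", "33554432") // total bytes the producer may buffer (default 32 MB)
props.put("max.block.ms", "60000")     // how long send() may block when the buffer is full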
Kafka can be tuned for greater delivery guarantees, but such guarantees
come at the cost of latency and throughput (as they do in many other such
systems). If you are doing a simple end-to-end test you may want to look at
tuning the "acks" configuration setting to ensure you aren't dropping any
messages…
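A minimal sketch of that tuning, with illustrative values:

import java.util.Properties

val props = new Properties()
props.put("acks", "all")   // wait for the full in-sync replica set to acknowledge each write
props.put("retries", "10") // retry transient broker errors instead of failing the first time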
> err, isn't it supposed to? Isn't the loss of data a very serious error?
Kafka can't fix networking issues like latency, flapping ("blinking")
connections, unavailability, or any other weird stuff. Kafka promises to
persist data once that data reaches Kafka. Responsibility for delivering
data to Kafka is on your side. You fail to …
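In that spirit, a sketch of taking delivery responsibility on the producer side
(broker address and topic name are placeholders): check each send's outcome via
a callback instead of firing and forgetting.

import java.util.Properties
import org.apache.kafka.clients.producer.{Callback, KafkaProducer, ProducerRecord, RecordMetadata}

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092") // placeholder broker address
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)
// The callback is how the application learns whether a record actually
// reached Kafka; fire-and-forget send() hides exactly this kind of loss.
producer.send(new ProducerRecord[String, String]("test-topic", "hello"), // placeholder topic
  new Callback {
    override def onCompletion(metadata: RecordMetadata, exception: Exception): Unit =
      if (exception != null)
        System.err.println(s"Send failed, retry or record the loss: $exception")
  })
producer.flush()
producer.close()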
Kafka is very reliable when the broker actually gets the message and replies
back to the producer that it got the message (i.e. it won’t “lie”). Basically,
your producer tried to put too many bananas into the Bananer’s basket. And
yes, Windows is not supported. You will get much better performance …
Hi Serega,
> data didn't reach producer. So why should data appear in consumer?
err, isn't it supposed to? Isn't the loss of data a very serious error?
> loss rate is more or less similar [...] Not so bad.
That made me laugh at least. Is Kafka intended to be a reliable
message delivery system, …
Hi,
[2017-04-17 18:14:05,868] ERROR Error when sending message to topic
big_ptns1_repl1_nozip with key: null, value: 55 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 8
record(s) expired due to …
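For context: in the 0.10.x producer, a batch expires when it has sat in the
accumulator longer than request.timeout.ms without being successfully sent.
A hedged sketch of buying more headroom (values are illustrative):

import java.util.Properties

val props = new Properties()
props.put("request.timeout.ms", "120000") // default 30000; more time before batches expire
props.put("retries", "10")                // retry rather than surface the first failure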
Hi all, I'm something of a Kafka n00b.
I posted the following in the Google newsgroup and haven't had a reply,
or even a single read, so I'll try here. My original msg, slightly
edited, was:
(Windows 2K8R2 fully patched, 16 GB RAM, fairly modern dual-core Xeon
server, latest version of Java)
I've …