Hi,

I ran a JUnit test that delivers a ~5 MB file payload to a Kafka
cluster (3 brokers, 3 ZooKeeper nodes), all set up on my laptop. My test
config for the producer is the following:

> max.request.size=5242880
> batch.size=8192
> key.serializer=org.apache.kafka.common.serialization.StringSerializer
> value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
> compression.type=gzip
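
In case it helps, this is roughly what the JUnit test does per file (a
simplified sketch, not the actual test class; the bootstrap.servers list,
file path, record key, and topic name below are placeholders):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LargeFilePublisher {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // same settings as the producer config above, plus the broker list
        props.put("bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094");
        props.put("max.request.size", "5242880");
        props.put("batch.size", "8192");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("compression.type", "gzip");

        // the ~5 MB file is read into a single byte[] and sent as one record
        byte[] payload = Files.readAllBytes(Paths.get("C:/data/sample-5mb.bin"));

        try (Producer<String, byte[]> producer = new KafkaProducer<>(props)) {
            // send one record and block until the broker acknowledges it
            producer.send(new ProducerRecord<>("testtopic11", "file-1", payload)).get();
        }
    }
}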


Before I started the JUnit test I started all my Kafka and ZooKeeper nodes,
so the leader and follower assignments are all in place. After that I
started a Kafka console consumer from the command line with the following
consumer.properties:

> zookeeper.connect=localhost:2181,localhost:2182,localhost:2183
> group.id=test-consumer-group
> zookeeper.connection.timeout.ms=6000
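
The console consumer itself was started along these lines (a sketch from
memory; the topic name and config path here are just placeholders):

kafka-console-consumer.bat --zookeeper localhost:2181,localhost:2182,localhost:2183 --topic testtopic11 --consumer.config C:/kafka/config/consumer.properties --from-beginning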


For each broker I have the following config (broker.id, port, and log.dirs
changed accordingly for each):

> #port numbers will be 9093 and 9094 for the other 2 servers
> listeners=PLAINTEXT://localhost:9092
> broker.id=0
> listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL
> num.network.threads=3
> num.io.threads=8
> socket.send.buffer.bytes=102400
> socket.receive.buffer.bytes=102400
> socket.request.max.bytes=999999999
> log.dirs=/tmp/kafka-logs
> #the below partition info will be overwritten when I create the topic with
> #partitions and replication factor
> num.partitions=1
> compression.type=gzip
> delete.topic.enable=true
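
The topic was created up front with explicit partitions and replication
factor, roughly like this (the counts here are just an example):

kafka-topics.bat --create --zookeeper localhost:2181 --topic testtopic11 --partitions 3 --replication-factor 3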


For each ZooKeeper node in my ensemble, I have the following (and the
respective myid files are set up in each node's dataDir):

> dataDir=/tmp/zookeeper
> clientPort=2181
> autopurge.purgeInterval=120
> tickTime=5000
> initLimit=5
> syncLimit=2
> server.1=localhost:2666:3666
> server.2=localhost:2667:3667
> server.3=localhost:2668:3668
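
The myid files mentioned above were written by hand, one per node, along
these lines (this assumes each node has its own dataDir such as
/tmp/zookeeper1 to /tmp/zookeeper3):

echo 1 > /tmp/zookeeper1/myid
echo 2 > /tmp/zookeeper2/myid
echo 3 > /tmp/zookeeper3/myid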


When I run my JUnit test, I can see that the kafka-console-consumer
displays the file contents correctly, no issues. But when I run
kafka-producer-perf-test using the same producer config, I get the
following error:

> kafka-producer-perf-test.bat --topic testtopic11 --num-records 10
> --record-size 5000000 --throughput -1 --producer.config
> C:/kafka/config/producer_metrics.properties
>
> org.apache.kafka.common.errors.RecordTooLargeException: The request
> included a message larger than the max message size the server will accept.


Yes, I can see from the Kafka documentation that message.max.bytes (default
1000000) is defined as:


The maximum size of a message that the server can receive. It is important
that this property be in sync with the maximum fetch size your consumers
use or else an unruly producer will be able to publish messages too large
for consumers to consume.
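
My reading of that sentence is that the broker-side and (ZooKeeper-based)
consumer-side limits are meant to be kept in sync, something like this
(values purely illustrative, sized for my 5 MB case):

# broker (server.properties)
message.max.bytes=5242880
# zookeeper-based consumer (consumer.properties)
fetch.message.max.bytes=5242880
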
If that's the case, how was it possible that I managed to receive 5 files
(transferred via the JUnit test) on the kafka-console-consumer, but cannot
do the same when I specify the record count and record size? Is there some
configuration that is applied differently for this performance test?
