I don't understand how log compaction works. I have created and configured a topic, and consumed from it:
```shell
kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic COMPACTION10

kafka-topics --alter --zookeeper localhost:2181 --config min.cleanable.dirty.ratio=0.01 --config cleanup.policy=compact --config segment.ms=100 --config delete.retention.ms=100 --config segment.bytes=1000 --topic COMPACTION10
```

Then I wrote a Scala program that inserts (K, V) pairs from 1 to 100, using the same number as both key and value:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val data_amount = 100
val topic = "COMPACTION10"

val kafkaConfiguration = new Properties
kafkaConfiguration.put("bootstrap.servers", "localhost:9092")
kafkaConfiguration.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
kafkaConfiguration.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](kafkaConfiguration)
for (id <- 1 to data_amount) {
  println(id)
  val record = new ProducerRecord[String, String](topic, id.toString, id.toString)
  producer.send(record)
}
producer.close()
```

After executing this and consuming the topic, I got something like this:

```
1-1 2-2 3-3 4-4 5-5 6-6 7-7 8-8 9-9 10-10 11-11 12-12 13-13 14-14 15-15 16-16 17-17 18-18 19-19
20-20 21-21 22-22 23-23 24-24 25-25 26-26 27-27 57-57 58-58 59-59 60-60 61-61 62-62 63-63 64-64
65-65 66-66 67-67 68-68
Processed a total of 39 messages
```

Why do I get only 39 messages and not 100? All of them have different keys, so compaction shouldn't remove any of them. If I send the 100 values (1 to 100) again, I get about 50 messages, including some of the new values.

Later, I tried to send records with an empty value to trigger a delete compaction, but it seems it's not possible to send a None or null value from the producer.
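To show why this result surprises me, here is my mental model of compaction in plain Scala (no Kafka involved; the object and method names are my own): compaction keeps only the most recent value per key, so with 100 distinct keys nothing should be dropped.

```scala
// Illustrative model of log compaction semantics, NOT the Kafka implementation:
// compaction keeps only the most recent record for each key.
object CompactionModel {
  def compact(log: Seq[(String, String)]): Map[String, String] =
    log.foldLeft(Map.empty[String, String]) {
      case (acc, (k, v)) => acc + (k -> v) // latest value wins per key
    }

  def main(args: Array[String]): Unit = {
    // 100 records with 100 distinct keys, as in the producer above
    val log = (1 to 100).map(i => (i.toString, i.toString))
    val compacted = compact(log)
    // Every key is unique, so compaction should retain all 100 records
    println(compacted.size)
  }
}
```

By this model, consuming the topic should yield all 100 records, which is why only getting 39 confuses me.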
If I send "" value, they are recognized like new values: 1-1 2-2 3-3 4-4 5-5 6-6 7-7 8-8 9-9 10-10 11-11 12-12 13-13 14-14 15-15 16-16 17-17 18-18 19-19 20-20 21-21 22-22 23-23 24-24 25-25 26-26 27-27 57-57 58-58 59-59 60-60 61-61 62-62 63-63 64-64 65-65 66-66 67-67 68-68 1- 2- 3- 4- 5- 6- 7- 8- 9- 10- 11- 12- 13- 14- 15- 16- 17- 18- 19- 20- 21- 22- 23- 24- 25- 26- 27- 28- 29- 30- 31- 32- 33- 34- 35- 36- 37- Processed a total of 76 messages How could I send empty values to get a delete compaction of some keys? I have executed with these values with similar results: I executed with these new parameters with similar results. kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic COMPACTION11 kafka-topics --alter --zookeeper localhost:2181 --config min.cleanable.dirty.ratio=0.01 --config cleanup.policy=compact --config segment.ms=1000 --config delete.retention.ms=10000 --config segment.bytes=1000 --topic COMPACTION11