Question about the Log Compaction
Hi all, I am confused about the log compaction logic, which uses an OffsetMap to deduplicate the log. In my opinion, when there is a hash collision, data may be lost. For example:

Record1(key1, offset1), Record2(key2, offset2)
Condition: hash(key1) == hash(key2) && offset1 < offset2
Result: Record1 will be removed by mistake.

Did I misunderstand the implementation logic? Please give me some guidance, thank you very much.

1: The OffsetMap put logic does not handle hash collisions; if hash(key1) == hash(key2), key1 will be overwritten.
2: The logic for retaining a record
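The concern above can be made concrete with a toy sketch. Note this is a hypothetical HashOnlyOffsetMap with a deliberately weak hash, written only to illustrate the question; it is not Kafka's actual SkimpyOffsetMap, which stores a cryptographic digest of the key so that collisions are astronomically unlikely in practice:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the concern: a map keyed only by hash(key), not by the key
// itself. Hypothetical code for illustration, not Kafka's implementation.
class HashOnlyOffsetMap {
    private final Map<Integer, Long> latestOffsetByHash = new HashMap<>();

    // Deliberately weak hash so that "key1" and "key2" collide.
    static int weakHash(String key) {
        return key.length();
    }

    // Remember the highest offset seen for this key's hash.
    void put(String key, long offset) {
        latestOffsetByHash.merge(weakHash(key), offset, Math::max);
    }

    // During cleaning, a record is retained only if its offset is at least
    // the latest offset recorded for its hash.
    boolean shouldRetain(String key, long offset) {
        Long latest = latestOffsetByHash.get(weakHash(key));
        return latest == null || offset >= latest;
    }

    public static void main(String[] args) {
        HashOnlyOffsetMap map = new HashOnlyOffsetMap();
        map.put("key1", 1L); // Record1
        map.put("key2", 2L); // Record2: colliding hash, larger offset
        // Record1 now looks stale even though key1 was never rewritten:
        System.out.println(map.shouldRetain("key1", 1L)); // prints false
    }
}
```

This reproduces exactly the scenario in the question: because only the hash is stored, Record1 is judged stale by Record2's offset.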
Question for KafkaRequestHandler
Hi folks, I am confused about the code below. Why is the IO thread set as a daemon? In my view, a daemon thread is not suitable for important work.

def createHandler(id: Int): Unit = synchronized {
  runnables += new KafkaRequestHandler(id, brokerId, aggregateIdleMeter, threadPoolSize, requestChannel, apis, time)
  KafkaThread.daemon("kafka-request-handler-" + id, runnables(id)).start()
}
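For context, the property in question is standard JVM daemon-thread semantics: the JVM exits once all non-daemon threads finish, without waiting for daemon threads. A minimal sketch using plain java.lang.Thread (not Kafka's KafkaThread helper):

```java
// Minimal sketch of daemon-thread semantics, for illustration only.
class DaemonDemo {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(10_000); // simulate long-running handler work
                System.out.println("never printed: the JVM exits first");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true); // analogous to KafkaThread.daemon(...)
        worker.start();
        // main returns immediately; since only the daemon thread remains,
        // the JVM shuts down without waiting the 10 seconds.
    }
}
```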
Granting permission for Create KIP
Please grant Create KIP permission to wiki ID: ruanliang_hualun