[ https://issues.apache.org/jira/browse/KAFKA-3410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15450532#comment-15450532 ]

James Cheng commented on KAFKA-3410:
------------------------------------

KAFKA-3924 has not fixed this issue.

I followed steps 1-10 from my original bug report, using Kafka 0.10.0.1 (which 
contains the fix for KAFKA-3924). At step 10, instead of exiting, broker2 does a 
controlled shutdown.

{noformat}
[2016-08-30 16:51:03,374] FATAL [ReplicaFetcherThread-0-1], Exiting because log 
truncation is not allowed for topic test, Current leader 1's latest offset 0 is 
less than replica 2's latest offset 1 (kafka.server.ReplicaFetcherThread)
[2016-08-30 16:51:03,374] INFO [Kafka Server 2], shutting down 
(kafka.server.KafkaServer)
[2016-08-30 16:51:03,375] INFO [Kafka Server 2], Starting controlled shutdown 
(kafka.server.KafkaServer)
[2016-08-30 16:51:03,397] INFO [Kafka Server 2], Controlled shutdown succeeded 
(kafka.server.KafkaServer)
[2016-08-30 16:51:03,399] INFO [Socket Server on Broker 2], Shutting down 
(kafka.network.SocketServer)
[2016-08-30 16:51:03,403] INFO [Socket Server on Broker 2], Shutdown completed 
(kafka.network.SocketServer)
[2016-08-30 16:51:03,404] INFO [Kafka Request Handler on Broker 2], shutting 
down (kafka.server.KafkaRequestHandlerPool)
{noformat}

So the broker still takes itself completely offline in response to a problem 
with a single partition.
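
For context, the decision implied by the FATAL line above seems to boil down to a 
single comparison: the new leader's log end offset is behind this replica's, and 
with unclean leader election disabled the follower refuses to truncate. A 
simplified sketch of that check (class and method names are mine, not Kafka's 
actual source):

{code:java}
// Simplified sketch of the check implied by the FATAL message above.
// Names are illustrative only; this is not Kafka's actual code.
public class TruncationCheckSketch {

    static void checkTruncation(long leaderEndOffset,
                                long replicaEndOffset,
                                boolean uncleanLeaderElectionEnabled) {
        if (leaderEndOffset < replicaEndOffset && !uncleanLeaderElectionEnabled) {
            // The new leader has less data than this replica. Truncating would
            // silently discard messages, so the fetcher gives up, and that
            // currently takes down the whole broker, not just this partition.
            throw new IllegalStateException(
                    "Exiting because log truncation is not allowed");
        }
        // Otherwise the replica truncates its local log to leaderEndOffset
        // and resumes fetching from the leader.
    }

    public static void main(String[] args) {
        // Values from the log above: leader 1's latest offset is 0,
        // replica 2's latest offset is 1.
        checkTruncation(0L, 1L, false);
    }
}
{code}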

One thing I noticed: the broker logged all of those lines and went through a 
controlled shutdown, but the Java process did not exit. It is still alive, and 
there is still an entry for that broker in ZooKeeper at /brokers/ids/2.

So the controlled shutdown didn't successfully complete.
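
The lingering registration can be confirmed with the ZooKeeper Java client; 
something like the following (assuming the quickstart default connect string 
localhost:2181; adjust to your zookeeper.connect) should show that 
/brokers/ids/2 still exists:

{code:java}
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Quick check for broker 2's registration znode. Assumes ZooKeeper is at
// localhost:2181 (the quickstart default).
public class CheckBrokerRegistration {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        try {
            Stat stat = zk.exists("/brokers/ids/2", false);
            System.out.println(stat == null
                    ? "/brokers/ids/2 is gone (broker 2 deregistered)"
                    : "/brokers/ids/2 still exists (broker 2 still registered)");
        } finally {
            zk.close();
        }
    }
}
{code}

Since /brokers/ids/2 is an ephemeral znode, it should disappear as soon as the 
broker's ZooKeeper session closes, so its persistence lines up with the Java 
process never exiting.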

> Unclean leader election and "Halting because log truncation is not allowed"
> ---------------------------------------------------------------------------
>
>                 Key: KAFKA-3410
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3410
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: James Cheng
>
> I ran into a scenario where one of my brokers would continually shut down 
> with the error message:
> [2016-02-25 00:29:39,236] FATAL [ReplicaFetcherThread-0-1], Halting because 
> log truncation is not allowed for topic test, Current leader 1's latest 
> offset 0 is less than replica 2's latest offset 151 
> (kafka.server.ReplicaFetcherThread)
> I managed to reproduce it with the following scenario:
> 1. Start broker1, with unclean.leader.election.enable=false
> 2. Start broker2, with unclean.leader.election.enable=false
> 3. Create topic, single partition, with replication-factor 2.
> 4. Write data to the topic.
> 5. At this point, both brokers are in the ISR. Broker1 is the partition 
> leader.
> 6. Ctrl-Z on broker2. (This simulates a GC pause or a slow network.) Broker2 
> gets dropped out of the ISR. Broker1 is still the leader, and I can still 
> write data to the partition.
> 7. Shutdown Broker1. Hard or controlled, doesn't matter.
> 8. rm -rf the log directory of broker1. (This simulates a disk replacement or 
> a full hardware replacement.)
> 9. Resume broker2. It attempts to connect to broker1, but doesn't succeed 
> because broker1 is down. At this point, the partition is offline. Can't write 
> to it.
> 10. Restart broker1. Broker1 resumes leadership of the topic. Broker2 attempts 
> to rejoin the ISR, and immediately halts with the error message:
> [2016-02-25 00:29:39,236] FATAL [ReplicaFetcherThread-0-1], Halting because 
> log truncation is not allowed for topic test, Current leader 1's latest 
> offset 0 is less than replica 2's latest offset 151 
> (kafka.server.ReplicaFetcherThread)
> I am able to recover by setting unclean.leader.election.enable=true on my 
> brokers.
> I'm trying to understand a couple things:
> * In step 10, why is broker1 allowed to resume leadership even though it has 
> no data?
> * In step 10, why is it necessary to stop the entire broker due to one 
> partition that is in this state? Wouldn't it be possible for the broker to 
> continue to serve traffic for all the other topics, and just mark this one as 
> unavailable?
> * Would it make sense to allow an operator to manually specify which broker 
> they want to become the new master? This would give me more control over how 
> much data loss I am willing to handle. In this case, I would want broker2 to 
> become the new master. Or, is that possible and I just don't know how to do 
> it?


