[jira] [Commented] (KAFKA-6188) Broker fails with FATAL Shutdown - log dirs have failed

2019-09-08 Thread prehistoricpenguin (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16925309#comment-16925309
 ] 

prehistoricpenguin commented on KAFKA-6188:
---

[~jhandey]

We are running Kafka on Windows with my
[patch|https://github.com/apache/kafka/pull/6403] in our company. If you
know how to build Kafka, you may give it a try.

> Broker fails with FATAL Shutdown - log dirs have failed
> ---
>
> Key: KAFKA-6188
> URL: https://issues.apache.org/jira/browse/KAFKA-6188
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, log
>Affects Versions: 1.0.0, 1.0.1
> Environment: Windows 10
>Reporter: Valentina Baljak
>Assignee: Dong Lin
>Priority: Blocker
>  Labels: windows
> Attachments: Segments are opened before deletion, 
> kafka_2.10-0.10.2.1.zip, output.txt
>
>
> Just started with version 1.0.0 after 4-5 months of using 0.10.2.1. The
> test environment is very simple, with only one producer and one consumer.
> Initially everything started fine, and standalone tests worked as expected.
> However, running my code, the Kafka clients fail after approximately 10
> minutes. Kafka won't start after that; it fails with the same error.
> Deleting the logs allows it to start again, but then the same problem occurs.
> Here is the error traceback:
> [2017-11-08 08:21:57,532] INFO Starting log cleanup with a period of 30 
> ms. (kafka.log.LogManager)
> [2017-11-08 08:21:57,548] INFO Starting log flusher with a default period of 
> 9223372036854775807 ms. (kafka.log.LogManager)
> [2017-11-08 08:21:57,798] INFO Awaiting socket connections on 0.0.0.0:9092. 
> (kafka.network.Acceptor)
> [2017-11-08 08:21:57,813] INFO [SocketServer brokerId=0] Started 1 acceptor 
> threads (kafka.network.SocketServer)
> [2017-11-08 08:21:57,829] INFO [ExpirationReaper-0-Produce]: Starting 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> [2017-11-08 08:21:57,845] INFO [ExpirationReaper-0-DeleteRecords]: Starting 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> [2017-11-08 08:21:57,845] INFO [ExpirationReaper-0-Fetch]: Starting 
> (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> [2017-11-08 08:21:57,845] INFO [LogDirFailureHandler]: Starting 
> (kafka.server.ReplicaManager$LogDirFailureHandler)
> [2017-11-08 08:21:57,860] INFO [ReplicaManager broker=0] Stopping serving 
> replicas in dir C:\Kafka\kafka_2.12-1.0.0\kafka-logs 
> (kafka.server.ReplicaManager)
> [2017-11-08 08:21:57,860] INFO [ReplicaManager broker=0] Partitions  are 
> offline due to failure on log directory C:\Kafka\kafka_2.12-1.0.0\kafka-logs 
> (kafka.server.ReplicaManager)
> [2017-11-08 08:21:57,860] INFO [ReplicaFetcherManager on broker 0] Removed 
> fetcher for partitions  (kafka.server.ReplicaFetcherManager)
> [2017-11-08 08:21:57,892] INFO [ReplicaManager broker=0] Broker 0 stopped 
> fetcher for partitions  because they are in the failed log dir 
> C:\Kafka\kafka_2.12-1.0.0\kafka-logs (kafka.server.ReplicaManager)
> [2017-11-08 08:21:57,892] INFO Stopping serving logs in dir 
> C:\Kafka\kafka_2.12-1.0.0\kafka-logs (kafka.log.LogManager)
> [2017-11-08 08:21:57,892] FATAL Shutdown broker because all log dirs in 
> C:\Kafka\kafka_2.12-1.0.0\kafka-logs have failed (kafka.log.LogManager)



[jira] [Commented] (KAFKA-6188) Broker fails with FATAL Shutdown - log dirs have failed

2019-03-12 Thread prehistoricpenguin (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790418#comment-16790418
 ] 

prehistoricpenguin commented on KAFKA-6188:
---

Hi, [~lindong], [~TeilaRei]

I have made a fix for this issue, and our manual test passed; see this
[PR|https://github.com/apache/kafka/pull/6403]. Could you please help review
my code? Thank you so much!




[jira] [Created] (KAFKA-8145) Broker fails with FATAL Shutdown on Windows, log or index renaming fail

2019-03-22 Thread prehistoricpenguin (JIRA)
prehistoricpenguin created KAFKA-8145:
-

 Summary: Broker fails with FATAL Shutdown on Windows, log or index 
renaming fail
 Key: KAFKA-8145
 URL: https://issues.apache.org/jira/browse/KAFKA-8145
 Project: Kafka
  Issue Type: Bug
  Components: core, log
Affects Versions: 2.1.1, 2.0.1, 1.1.1, 1.1.0
Reporter: prehistoricpenguin


When consumer offset log cleaning or Kafka log rolling is triggered on Windows
(the default rolling time is 168 hours, i.e. one week), the Kafka broker will
shut down because file renaming fails: on Windows it is invalid to rename a
file while it is open, so this issue is Windows-specific.

A typical error log during log rolling:
{quote}at 
kafka.log.Log$$anonfun$deleteSegments$1$$anonfun$apply$mcI$sp$1.apply(Log.scala:1170)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.log.Log$$anonfun$deleteSegments$1.apply$mcI$sp(Log.scala:1170)
at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:1161)
at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:1161)
at kafka.log.Log.maybeHandleIOException(Log.scala:1678)
at kafka.log.Log.deleteSegments(Log.scala:1161)
at kafka.log.Log.deleteOldSegments(Log.scala:1156)
at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1228)
at kafka.log.Log.deleteOldSegments(Log.scala:1222)
at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:854)
at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:852)
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.immutable.List.foreach(List.scala:392)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at kafka.log.LogManager.cleanupLogs(LogManager.scala:852)
at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:385)
at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.runAndReset(Unknown Source)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown
 Source)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
 Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Suppressed: java.nio.file.FileSystemException: 
Z:\tmp\kafka-logs\test-0\.log -> 
Z:\tmp\kafka-logs\test-0\.log.deleted: The process cannot 
access the file because it is being used by another process.
at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
at sun.nio.fs.WindowsFileCopy.move(Unknown Source)
at sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source)
at java.nio.file.Files.move(Unknown Source)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
... 32 more{quote}
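
The failing frame above is Utils.atomicMoveWithFallback, i.e. the segment
being renamed to its .deleted name with Files.move. A minimal sketch of the
same Windows behavior outside Kafka (the path and the 20-digit segment name
are illustrative; assumes a Windows JVM):

{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class MappedRenameDemo {
    public static void main(String[] args) throws IOException {
        Path log = Paths.get("Z:\\tmp\\kafka-logs\\test-0\\00000000000000000000.log");
        Files.createDirectories(log.getParent());
        Files.write(log, new byte[4096]);
        try (FileChannel ch = FileChannel.open(log, StandardOpenOption.READ,
                                               StandardOpenOption.WRITE)) {
            // Index-style mapping still held while the rename is attempted.
            ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            // On Windows this typically throws FileSystemException: "The process
            // cannot access the file because it is being used by another process."
            Files.move(log, log.resolveSibling(log.getFileName() + ".deleted"),
                       StandardCopyOption.ATOMIC_MOVE);
        }
    }
}
{code}

The ATOMIC_MOVE option mirrors what atomicMoveWithFallback tries first; its
non-atomic fallback fails the same way while the mapping is held, so the
IOException reaches LogDirFailureChannel and the broker shuts down.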
 
And the log from the consumer offset log cleaner:
{quote}[2019-02-26 02:16:05,151] ERROR Failed to clean up log for 
__consumer_offsets-29 in dir C:\Program Files (x86)\...\tmp\kafka-logs due to 
IOException (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: C:\Program Files 
(x86)\...\tmp\kafka-logs\__consumer_offsets-29\.log.cleaned 
-> C:\Program Files 
(x86)...\tmp\kafka-logs\__consumer_offsets-29\.log.swap: 
The process cannot access the file because it is being used by another process.
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
at java.nio.file.Files.move(Files.java:1395)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:415)
at kafka.log.Log.replaceSegments(Log.scala:1644)
at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:535)
at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:462)
at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:461)
at scala.collection.immutable.List.foreach(List.scala:392)
at kafka.log.Cleaner.doClean(LogCleaner.scala:461)
at kafka.log.Cleaner.clean(LogCleaner.scala:438)
at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:305)
at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:291)
at kafka.utils.ShutdownableT


[jira] [Commented] (KAFKA-8145) Broker fails with FATAL Shutdown on Windows, log or index renaming fail

2019-03-22 Thread prehistoricpenguin (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798791#comment-16798791
 ] 

prehistoricpenguin commented on KAFKA-8145:
---

I have tried to fix it via this
[PR|https://github.com/apache/kafka/pull/6403], and our manual tests have
passed. Please review my patch.

> Broker fails with FATAL Shutdown on Windows, log or index renaming fail
> ---
>
> Key: KAFKA-8145
> URL: https://issues.apache.org/jira/browse/KAFKA-8145
> Project: Kafka
>  Issue Type: Bug
>  Components: core, log
>Affects Versions: 1.1.0, 1.1.1, 2.0.1, 2.1.1
>Reporter: prehistoricpenguin
>Priority: Major
>  Labels: windows
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>

[jira] [Created] (KAFKA-8549) Kafka Windows start up failed due to topic name conflict

2019-06-17 Thread prehistoricpenguin (JIRA)
prehistoricpenguin created KAFKA-8549:
-

 Summary: Kafka Windows start up failed due to topic name conflict 
 Key: KAFKA-8549
 URL: https://issues.apache.org/jira/browse/KAFKA-8549
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.2.1
Reporter: prehistoricpenguin


We are running the Kafka server on Windows, and we got this exception during
Kafka server start-up:
{code:java}
[2019-06-11 14:50:48,537] ERROR Error while creating log for 
this_is_a_topic_name in dir C:\Program Files (x86)\dummy_path\tmp\kafka-logs 
(kafka.server.LogDirFailureChannel)
java.io.IOException: The requested operation cannot be performed on a file with 
a user-mapped section open
at java.io.RandomAccessFile.setLength(Native Method)
at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:188)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at kafka.log.AbstractIndex.resize(AbstractIndex.scala:175)
at kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:238)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:238)
at kafka.log.LogSegment.recover(LogSegment.scala:377)
at kafka.log.Log.recoverSegment(Log.scala:500)
at kafka.log.Log.$anonfun$loadSegmentFiles$3(Log.scala:482)
at 
scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:792)
at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:791)
at kafka.log.Log.loadSegmentFiles(Log.scala:454)
at kafka.log.Log.$anonfun$loadSegments$1(Log.scala:565)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at kafka.log.Log.retryOnOffsetOverflow(Log.scala:2034)
at kafka.log.Log.loadSegments(Log.scala:559)
at kafka.log.Log.<init>(Log.scala:292)
at kafka.log.Log$.apply(Log.scala:2168)
at kafka.log.LogManager.$anonfun$getOrCreateLog$1(LogManager.scala:716)
at scala.Option.getOrElse(Option.scala:138)
at kafka.log.LogManager.getOrCreateLog(LogManager.scala:674)
at kafka.cluster.Partition.$anonfun$getOrCreateReplica$1(Partition.scala:202)
at kafka.utils.Pool$$anon$1.apply(Pool.scala:61)
at 
java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
at kafka.utils.Pool.getAndMaybePut(Pool.scala:60)
at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:198)
at kafka.cluster.Partition.$anonfun$makeLeader$3(Partition.scala:376)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at scala.collection.TraversableLike.map(TraversableLike.scala:237)
at scala.collection.TraversableLike.map$(TraversableLike.scala:230)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at kafka.cluster.Partition.$anonfun$makeLeader$1(Partition.scala:376)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:259)
at kafka.cluster.Partition.makeLeader(Partition.scala:370)
at kafka.server.ReplicaManager.$anonfun$makeLeaders$5(ReplicaManager.scala:1188)
at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:1186)
at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:1098)
at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:195)
at kafka.server.KafkaApis.handle(KafkaApis.scala:112)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
at java.lang.Thread.run(Thread.java:748)
[2019-06-11 14:50:48,542] INFO [ReplicaManager broker=0] Stopping serving 
replicas in dir C:\Program Files (x86)\dummy_path\tmp\kafka-logs 
(kafka.server.ReplicaManager)
[2019-06-11 14:50:48,543] ERROR [ReplicaManager broker=0] Error while making 
broker the leader for partition Topic: this_is_a_topic_name; Partition: 0; 
Leader: None; AllReplicas: ; InSyncReplicas: in dir None 
(kafka.server
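
// The setLength frame at the top of this trace is Kafka resizing an index
// file that it still has memory-mapped. A minimal sketch of the same Windows
// behavior outside Kafka (illustrative path; assumes a Windows JVM):
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

class MappedResizeDemo {
    public static void main(String[] args) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile("C:\\tmp\\demo.index", "rw")) {
            raf.setLength(4096);
            // Map the file, as kafka.log.AbstractIndex does for offset indexes.
            raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            // On Windows the next call typically throws java.io.IOException:
            // "The requested operation cannot be performed on a file with a
            // user-mapped section open" -- the mapping must be unmapped first.
            raf.setLength(1024);
        }
    }
}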

[jira] [Updated] (KAFKA-8549) Kafka Windows start up failed due to cannot be performed on a file with a user-mapped section open

2019-06-17 Thread prehistoricpenguin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

prehistoricpenguin updated KAFKA-8549:
--
Summary: Kafka Windows start up failed due to cannot be performed on a file 
with a user-mapped section open  (was: Kafka Windows start up failed due to 
topic name conflict )

> Kafka Windows start up failed due to cannot be performed on a file with a 
> user-mapped section open
> --
>
> Key: KAFKA-8549
> URL: https://issues.apache.org/jira/browse/KAFKA-8549
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.2.1
>Reporter: prehistoricpenguin
>Priority: Major
>  Labels: crash, windows
>

[jira] [Updated] (KAFKA-8549) Kafka Windows start up fail due to cannot be performed on a file with a user-mapped section open

2019-06-17 Thread prehistoricpenguin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

prehistoricpenguin updated KAFKA-8549:
--
Summary: Kafka Windows start up fail due to cannot be performed on a file 
with a user-mapped section open  (was: Kafka Windows start up failed due to 
cannot be performed on a file with a user-mapped section open)
