[jira] [Created] (KAFKA-8149) ERROR Disk error while writing to recovery point file

2019-03-22 Thread wade wu (JIRA)
wade wu created KAFKA-8149:
--

 Summary: ERROR Disk error while writing to recovery point file
 Key: KAFKA-8149
 URL: https://issues.apache.org/jira/browse/KAFKA-8149
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 1.1.1
 Environment: Windows
Reporter: wade wu


[2019-03-17 02:55:14,458] ERROR Disk error while writing to recovery point file in directory I:\data\Kafka\kafka-datalogs (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: H:\data\Kafka\kafka-datalogs\AzPubSubCompactTestNew1-0\01170892.snapshot
 at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
 at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
 at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
 at java.nio.file.Files.deleteIfExists(Files.java:1165)
 at kafka.log.ProducerStateManager$$anonfun$kafka$log$ProducerStateManager$$deleteSnapshotFiles$2.apply(ProducerStateManager.scala:458)
 at kafka.log.ProducerStateManager$$anonfun$kafka$log$ProducerStateManager$$deleteSnapshotFiles$2.apply(ProducerStateManager.scala:457)
 at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
 at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
 at kafka.log.ProducerStateManager$.kafka$log$ProducerStateManager$$deleteSnapshotFiles(ProducerStateManager.scala:457)
 at kafka.log.ProducerStateManager$.deleteSnapshotsBefore(ProducerStateManager.scala:454)
 at kafka.log.ProducerStateManager.deleteSnapshotsBefore(ProducerStateManager.scala:763)
 at kafka.log.Log.deleteSnapshotsAfterRecoveryPointCheckpoint(Log.scala:1461)
 at kafka.log.LogManager$$anonfun$kafka$log$LogManager$$checkpointLogRecoveryOffsetsInDir$1$$anonfun$apply$29$$anonfun$apply$31.apply(LogManager.scala:577)
 at kafka.log.LogManager$$anonfun$kafka$log$LogManager$$checkpointLogRecoveryOffsetsInDir$1$$anonfun$apply$29$$anonfun$apply$31.apply(LogManager.scala:577)
 at scala.collection.immutable.List.foreach(List.scala:392)
 at kafka.log.LogManager$$anonfun$kafka$log$LogManager$$checkpointLogRecoveryOffsetsInDir$1$$anonfun$apply$29.apply(LogManager.scala:577)
 at kafka.log.LogManager$$anonfun$kafka$log$LogManager$$checkpointLogRecoveryOffsetsInDir$1$$anonfun$apply$29.apply(LogManager.scala:573)
 at scala.Option.foreach(Option.scala:257)
 at kafka.log.LogManager$$anonfun$kafka$log$LogManager$$checkpointLogRecoveryOffsetsInDir$1.apply(LogManager.scala:573)
 at kafka.log.LogManager$$anonfun$kafka$log$LogManager$$checkpointLogRecoveryOffsetsInDir$1.apply(LogManager.scala:572)
 at scala.Option.foreach(Option.scala:257)
 at kafka.log.LogManager.kafka$log$LogManager$$checkpointLogRecoveryOffsetsInDir(LogManager.scala:572)
 at kafka.log.LogManager$$anonfun$checkpointLogRecoveryOffsets$1.apply(LogManager.scala:556)
 at kafka.log.LogManager$$anonfun$checkpointLogRecoveryOffsets$1.apply(LogManager.scala:556)
 at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
 at kafka.log.LogManager.checkpointLogRecoveryOffsets(LogManager.scala:556)
 at kafka.log.LogManager.truncateTo(LogManager.scala:520)
 at kafka.cluster.Partition$$anonfun$truncateTo$1.apply$mcV$sp(Partition.scala:665)
 at kafka.cluster.Partition$$anonfun$truncateTo$1.apply(Partition.scala:665)
 at kafka.cluster.Partition$$anonfun$truncateTo$1.apply(Partition.scala:665)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:256)
 at kafka.cluster.Partition.truncateTo(Partition.scala:664)
 at kafka.server.ReplicaFetcherThread$$anonfun$maybeTruncate$1.apply(ReplicaFetcherThread.scala:320)
 at kafka.server.ReplicaFetcherThread$$anonfun$maybeTruncate$1.apply(ReplicaFetcherThread.scala:301)
 at scala.collection.immutable.Map$Map2.foreach(Map.scala:137)
 at kafka.server.ReplicaFetcherThread.maybeTruncate(ReplicaFetcherThread.scala:301)
 at kafka.server.AbstractFetcherThread$$anonfun$maybeTruncate$1.apply$mcV$sp(AbstractFetcherThread.scala:133)
 at kafka.server.AbstractFetcherThread$$anonfun$maybeTruncate$1.apply(AbstractFetcherThread.scala:130)
 at kafka.server.AbstractFetcherThread$$anonfun$maybeTruncate$1.apply(AbstractFetcherThread.scala:130)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
 at kafka.server.AbstractFetcherThread.maybeTruncate(AbstractFetcherThread.scala:130)
 at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:100)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8099) java.nio.file.AccessDeniedException: .swap renamed to .log failed

2019-03-12 Thread wade wu (JIRA)
wade wu created KAFKA-8099:
--

 Summary: java.nio.file.AccessDeniedException: .swap renamed to .log failed
 Key: KAFKA-8099
 URL: https://issues.apache.org/jira/browse/KAFKA-8099
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 1.1.1
Reporter: wade wu


[2019-03-11 16:02:00,021] ERROR Error while loading log dir D:\data\Kafka\kafka-datalogs (kafka.log.LogManager)
java.nio.file.FileSystemException: D:\data\Kafka\kafka-datalogs\kattesttopic-23\03505795.log: The process cannot access the file because it is being used by another process.
 at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:376)
 at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
 at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:223)
 at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:415)
 at kafka.log.Log.replaceSegments(Log.scala:1697)
 at kafka.log.Log$$anonfun$completeSwapOperations$1.apply(Log.scala:391)
 at kafka.log.Log$$anonfun$completeSwapOperations$1.apply(Log.scala:380)
 at scala.collection.immutable.Set$Set1.foreach(Set.scala:94)
 at kafka.log.Log.completeSwapOperations(Log.scala:380)
 at kafka.log.Log.loadSegments(Log.scala:408)
 at kafka.log.Log.<init>(Log.scala:216)
 at kafka.log.Log$.apply(Log.scala:1788)
 at kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:260)
 at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:340)
 at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
 Suppressed: java.nio.file.AccessDeniedException: D:\data\Kafka\kafka-datalogs\kattesttopic-23\03505795.log.swap -> D:\data\Kafka\kafka-datalogs\kattesttopic-23\03505795.log
 at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
 at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
 ... 18 more
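The "Suppressed: AccessDeniedException" attached to the primary FileSystemException reflects the try-atomic-then-fallback rename pattern that the trace attributes to Utils.atomicMoveWithFallback: attempt an atomic move, retry with a plain replacing move, and attach the first failure as a suppressed exception if both fail. A minimal sketch of that pattern (a simplified illustration under stated assumptions, not Kafka's actual implementation):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicMoveFallbackDemo {
    // Simplified sketch of a try-atomic-then-fallback move; the real
    // org.apache.kafka.common.utils.Utils.atomicMoveWithFallback differs in detail.
    static void atomicMoveWithFallback(Path source, Path target) throws IOException {
        try {
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException atomicFailure) {
            try {
                Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
            } catch (IOException fallbackFailure) {
                // Both attempts failed: throw the fallback failure and keep
                // the atomic failure as a suppressed exception. This is why
                // the log above shows a "Suppressed:" AccessDeniedException
                // under the primary exception.
                fallbackFailure.addSuppressed(atomicFailure);
                throw fallbackFailure;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("move-demo");
        Path src = dir.resolve("segment.swap"); // hypothetical file names
        Path dst = dir.resolve("segment.log");
        Files.write(src, new byte[]{42});
        atomicMoveWithFallback(src, dst);
        System.out.println(Files.exists(dst) && !Files.exists(src));
    }
}
```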





[jira] [Commented] (KAFKA-7020) Error when deleting topic with access denied exception

2018-06-20 Thread wade wu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518342#comment-16518342
 ] 

wade wu commented on KAFKA-7020:


[~vahid], this issue is similar to another one, which also has a PR: 
https://issues.apache.org/jira/browse/KAFKA-6983

If you're merging the PR, please merge both of them. 

 

Thank you! 

 

Wade WU

> Error when deleting topic with access denied exception
> --
>
> Key: KAFKA-7020
> URL: https://issues.apache.org/jira/browse/KAFKA-7020
> Project: Kafka
>  Issue Type: Bug
>  Components: log
> Environment: Windows 10
>Reporter: wade wu
>Priority: Major
>
> Error collected from server.log of Kafka broker: 
> 2018-06-07 15:05:17,805] ERROR Error while renaming dir for test5-1 in log 
> dir D:\data\Kafka\kafka-logs (kafka.server.LogDirFailureChannel)
> java.nio.file.AccessDeniedException: D:\data\Kafka\kafka-logs\test5-1 -> 
> D:\data\Kafka\kafka-logs\test5-1.87985bad40e441e1a4d08af4541db7ce-delete
>  at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
>  at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>  at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
>  at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>  at java.nio.file.Files.move(Files.java:1395)
>  at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
>  at kafka.log.Log$$anonfun$renameDir$1.apply$mcV$sp(Log.scala:579)
>  at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:577)
>  at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:577)
>  at kafka.log.Log.maybeHandleIOException(Log.scala:1678)
>  at kafka.log.Log.renameDir(Log.scala:577)
>  at kafka.log.LogManager.asyncDelete(LogManager.scala:813)
>  at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:240)
> at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:235)
>  at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
>  at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:258)
>  at kafka.cluster.Partition.delete(Partition.scala:235)
>  at kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:347)
>  at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:377)
>  at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:375)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:375)
>  at kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:198)
>  at kafka.server.KafkaApis.handle(KafkaApis.scala:109)
>  at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
>  at java.lang.Thread.run(Thread.java:748)
>  Suppressed: java.nio.file.AccessDeniedException: 
> D:\data\Kafka\kafka-logs\test5-1 -> 
> D:\data\Kafka\kafka-logs\test5-1.87985bad40e441e1a4d08af4541db7ce-delete
>  at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
>  at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>  at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
>  at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>  at java.nio.file.Files.move(Files.java:1395)
>  at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
>  ... 23 more
> [2018-06-07 15:05:17,805] ERROR Error while renaming dir for test5-1 in log 
> dir D:\data\Kafka\kafka-logs (kafka.server.LogDirFailureChannel)
> java.nio.file.AccessDeniedException: D:\data\Kafka\kafka-logs\test5-1 -> 
> D:\data\Kafka\kafka-logs\test5-1.87985bad40e441e1a4d08af4541db7ce-delete
>  at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
>  at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>  at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
>  at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>  at java.nio.file.Files.move(Files.java:1395)
>  at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
>  at kafka.log.Log$$anonfun$renameDir$1.apply$mcV$sp(Log.scala:579)
>  at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:577)
>  at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:577)
>  at kafka.log.Log.maybeHandleIOException(Log.scala:1678)
>  at kafka.log.Log.renameDir(Log.scala:577)
>  at kafka.log.LogManager.asyncDelete(LogManager.scala:813)
>  at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:240)
> 

[jira] [Commented] (KAFKA-7020) Error when deleting topic with access denied exception

2018-06-19 Thread wade wu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517388#comment-16517388
 ] 

wade wu commented on KAFKA-7020:


[~vahid], let me provide some repro steps: 

Environment: Windows 10, though any version of Windows should reproduce it. 
 # Create a Kafka cluster; in my case I am using 6 brokers, but any number should be fine.
 # Install the third-party Kafka Tool 2.0: [http://www.kafkatool.com/] 
 # Connect Kafka Tool to the cluster, create a topic, and send a message to the topic.
 # From Kafka Tool, manually delete the topic.
 # Within a few minutes the Kafka servers hit this error and all the brokers go down.

The error occurs because the topic's log file cannot be renamed: its file handles are still open, or the memory-mapped file has not been unmapped. 
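The open-handle explanation above can be sketched in plain NIO (a minimal illustration, not Kafka's code; the file names and the `renameAfterRelease` helper are hypothetical). On Windows, `Files.move` on a file with an open channel or a live memory mapping fails with `AccessDeniedException`; releasing the handle first lets the rename proceed:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class MmapRenameDemo {
    // Map a log file, release the handle, then rename it. While the channel
    // (or an unreleased memory mapping) still holds the file, Windows rejects
    // the rename with AccessDeniedException -- the failure mode in this issue.
    static boolean renameAfterRelease() throws IOException {
        Path dir = Files.createTempDirectory("mmap-demo");
        Path log = dir.resolve("00000000000000000000.log"); // hypothetical segment name
        Files.write(log, new byte[]{1, 2, 3, 4});

        try (FileChannel ch = FileChannel.open(log, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, 4);
            buf.get(); // touch the mapping, as the broker does with index files
        } // try-with-resources closes the channel, releasing the file handle

        // With the handle released, the rename goes through on every platform;
        // with it still held, Windows would deny it.
        Path renamed = dir.resolve("00000000000000000000.log.deleted");
        Files.move(log, renamed, StandardCopyOption.ATOMIC_MOVE);
        return Files.exists(renamed) && !Files.exists(log);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(renameAfterRelease());
    }
}
```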

> Error when deleting topic with access denied exception
> --
>
> Key: KAFKA-7020
> URL: https://issues.apache.org/jira/browse/KAFKA-7020
> Project: Kafka
>  Issue Type: Bug
>  Components: log
> Environment: Windows 10
>Reporter: wade wu
>Priority: Major
>

[jira] [Commented] (KAFKA-6982) java.lang.ArithmeticException: / by zero

2018-06-14 Thread wade wu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16512826#comment-16512826
 ] 

wade wu commented on KAFKA-6982:


You can manually add that line of code and build Kafka.


-- 
Best Regards
 吴清俊|Wade Wu


> java.lang.ArithmeticException: / by zero
> 
>
> Key: KAFKA-6982
> URL: https://issues.apache.org/jira/browse/KAFKA-6982
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 1.1.0
> Environment: Environment: Windows 10. 
>Reporter: wade wu
>Priority: Major
> Fix For: 1.1.1
>
>
> A producer keeps sending messages to Kafka, and Kafka goes down. 
> Server.log shows: 
> ..
> [2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
> dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to 
> log file 
> D:\data\Kafka\kafka-logs__consumer_offsets-6\.log due to 
> Corrupt index found, index file 
> (D:\data\Kafka\kafka-logs__consumer_offsets-6\.index) has 
> non-zero size but the last offset is 0 which is no greater than the base 
> offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
>  [2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
> dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to 
> log file 
> D:\data\Kafka\kafka-logs__consumer_offsets-6\.log due to 
> Corrupt index found, index file 
> (D:\data\Kafka\kafka-logs__consumer_offsets-6\.index) has 
> non-zero size but the last offset is 0 which is no greater than the base 
> offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
>  [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
>  java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:354)
>  at java.lang.Thread.run(Thread.java:748)
>  [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
>  java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:354)
>  at java.lang.Thread.run(Thread.java:748)
>  [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
>  java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:354)
>  at java.lang.Thread.run(Thread.java:748)
> ..
>  
> This line of code in SocketServer.scala is causing the error: 
> {{currentProcessor = currentProcessor % processors.size}}
>  
>  
>  
>  
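The crash above follows directly from that line: in Java and Scala, the integer remainder operator throws ArithmeticException ("/ by zero") when its right operand is zero, which happens here the moment the processor list is empty. A minimal sketch of the failure and of a zero-divisor guard (the `nextProcessor` helper is hypothetical, not the actual Kafka patch):

```java
public class RoundRobinDemo {
    // Hypothetical guard: pick the next processor index round-robin,
    // refusing to take a remainder when the processor list is empty.
    static int nextProcessor(int current, int processorCount) {
        if (processorCount == 0) {
            return -1; // would otherwise throw ArithmeticException: / by zero
        }
        return current % processorCount;
    }

    public static void main(String[] args) {
        // Integer '%' with a zero divisor throws at runtime, which is the
        // "/ by zero" the Acceptor logged when processors.size was 0.
        boolean threw = false;
        int zero = 0;
        try {
            int unused = 5 % zero;
            System.out.println(unused);
        } catch (ArithmeticException e) {
            threw = true;
        }
        System.out.println(threw + " " + nextProcessor(5, 3) + " " + nextProcessor(5, 0));
    }
}
```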





[jira] [Updated] (KAFKA-7020) Error when deleting topic with access denied exception

2018-06-07 Thread wade wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wade wu updated KAFKA-7020:
---
Summary: Error when deleting topic with access denied exception  (was: Error while renaming dir for topic_XXX in log dir YYY)

> Error when deleting topic with access denied exception
> --
>
> Key: KAFKA-7020
> URL: https://issues.apache.org/jira/browse/KAFKA-7020
> Project: Kafka
>  Issue Type: Bug
>  Components: log
> Environment: Windows 10
>Reporter: wade wu
>Priority: Major
>

[jira] [Commented] (KAFKA-7020) Error while renaming dir for topic_XXX in log dir YYY

2018-06-07 Thread wade wu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505411#comment-16505411
 ] 

wade wu commented on KAFKA-7020:


This is very similar to another JIRA: 
https://issues.apache.org/jira/browse/KAFKA-6983; the fix is also similar. 

> Error while renaming dir for topic_XXX in log dir YYY
> -
>
> Key: KAFKA-7020
> URL: https://issues.apache.org/jira/browse/KAFKA-7020
> Project: Kafka
>  Issue Type: Bug
>  Components: log
> Environment: Windows 10
>Reporter: wade wu
>Priority: Major
>

[jira] [Created] (KAFKA-7020) Error while renaming dir for topic_XXX in log dir YYY

2018-06-07 Thread wade wu (JIRA)
wade wu created KAFKA-7020:
--

 Summary: Error while renaming dir for topic_XXX in log dir YYY
 Key: KAFKA-7020
 URL: https://issues.apache.org/jira/browse/KAFKA-7020
 Project: Kafka
  Issue Type: Bug
  Components: log
 Environment: Windows 10
Reporter: wade wu


Error collected from server.log of Kafka broker: 

[2018-06-07 15:05:17,805] ERROR Error while renaming dir for test5-1 in log dir D:\data\Kafka\kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: D:\data\Kafka\kafka-logs\test5-1 -> D:\data\Kafka\kafka-logs\test5-1.87985bad40e441e1a4d08af4541db7ce-delete
 at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
 at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
 at kafka.log.Log$$anonfun$renameDir$1.apply$mcV$sp(Log.scala:579)
 at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:577)
 at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:577)
 at kafka.log.Log.maybeHandleIOException(Log.scala:1678)
 at kafka.log.Log.renameDir(Log.scala:577)
 at kafka.log.LogManager.asyncDelete(LogManager.scala:813)
 at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:240)
 at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:235)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
 at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:258)
 at kafka.cluster.Partition.delete(Partition.scala:235)
 at kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:347)
 at kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:377)
 at kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:375)
 at scala.collection.Iterator$class.foreach(Iterator.scala:891)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
 at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
 at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
 at kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:375)
 at kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:198)
 at kafka.server.KafkaApis.handle(KafkaApis.scala:109)
 at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
 at java.lang.Thread.run(Thread.java:748)
 Suppressed: java.nio.file.AccessDeniedException: D:\data\Kafka\kafka-logs\test5-1 -> D:\data\Kafka\kafka-logs\test5-1.87985bad40e441e1a4d08af4541db7ce-delete
 at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
 at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
 ... 23 more
[2018-06-07 15:05:17,805] ERROR Error while renaming dir for test5-1 in log dir D:\data\Kafka\kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: D:\data\Kafka\kafka-logs\test5-1 -> D:\data\Kafka\kafka-logs\test5-1.87985bad40e441e1a4d08af4541db7ce-delete
 at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
 at 
sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
 at kafka.log.Log$$anonfun$renameDir$1.apply$mcV$sp(Log.scala:579)
 at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:577)
 at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:577)
 at kafka.log.Log.maybeHandleIOException(Log.scala:1678)
 at kafka.log.Log.renameDir(Log.scala:577)
 at kafka.log.LogManager.asyncDelete(LogManager.scala:813)
 at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:240)
 at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:235)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
 at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:258)
 at kafka.cluster.Partition.delete(Partition.scala:235)
 at kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:347)
 at 
kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:377)
 at 
kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:375)
 at scala.collection.Iterator$class.foreach(Iterator.scala:891)
 at scala.collection.AbstractIterator.foreach(Iterator.sc
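The AccessDeniedException in these reports is classic Windows file-locking behavior: a file (and hence a directory being renamed around it) cannot be moved while any handle or memory map on it is still open. A minimal, hypothetical sketch in plain NIO (not Kafka code) showing the ordering that avoids the failure, closing the handle before the move:

```scala
import java.nio.ByteBuffer
import java.nio.channels.FileChannel
import java.nio.file.{Files, StandardCopyOption}
import java.nio.file.StandardOpenOption.{CREATE, WRITE}

object RenameSketch extends App {
  val dir = Files.createTempDirectory("rename-sketch")
  val src = dir.resolve("segment.log")
  val dst = dir.resolve("segment.log.deleted")

  val ch = FileChannel.open(src, CREATE, WRITE)
  ch.write(ByteBuffer.wrap("data".getBytes("UTF-8")))

  // On Windows, moving `src` while `ch` (or a memory map of the file) is
  // still open fails with AccessDeniedException; on Linux the rename
  // succeeds, which is why the bug only shows up on Windows. Closing the
  // handle first is safe on every platform.
  ch.close()
  Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE)

  assert(Files.exists(dst) && !Files.exists(src))
}
```

This is why the broker's async-delete rename fails only on Windows: the log segments are still mapped/open when renameDir runs.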

[jira] [Created] (KAFKA-6983) Error while deleting segments - The process cannot access the file because it is being used by another process

2018-06-01 Thread wade wu (JIRA)
wade wu created KAFKA-6983:
--

 Summary: Error while deleting segments - The process cannot access 
the file because it is being used by another process
 Key: KAFKA-6983
 URL: https://issues.apache.org/jira/browse/KAFKA-6983
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 1.1.0
 Environment: Windows 10
Reporter: wade wu


..

[2018-06-01 17:00:07,566] ERROR Error while deleting segments for test4-1 in 
dir D:\data\Kafka\kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: 
D:\data\Kafka\kafka-logs\test4-1\.log -> 
D:\data\Kafka\kafka-logs\test4-1\.log.deleted: The process 
cannot access the file because it is being used by another process.

at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
 at 
sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
 at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
 at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:415)
 at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:1601)
 at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:1588)
 at 
kafka.log.Log$$anonfun$deleteSegments$1$$anonfun$apply$mcI$sp$1.apply(Log.scala:1170)
 at 
kafka.log.Log$$anonfun$deleteSegments$1$$anonfun$apply$mcI$sp$1.apply(Log.scala:1170)
 at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
 at kafka.log.Log$$anonfun$deleteSegments$1.apply$mcI$sp(Log.scala:1170)
 at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:1161)
 at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:1161)
 at kafka.log.Log.maybeHandleIOException(Log.scala:1678)
 at kafka.log.Log.deleteSegments(Log.scala:1161)
 at kafka.log.Log.deleteOldSegments(Log.scala:1156)
 at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1228)
 at kafka.log.Log.deleteOldSegments(Log.scala:1222)
 at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:854)
 at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:852)
 at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
 at scala.collection.immutable.List.foreach(List.scala:392)
 at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
 at kafka.log.LogManager.cleanupLogs(LogManager.scala:852)
 at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:385)
 at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
 at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
 Suppressed: java.nio.file.FileSystemException: 
D:\data\Kafka\kafka-logs\test4-1\.log -> 
D:\data\Kafka\kafka-logs\test4-1\.log.deleted: The process 
cannot access the file because it is being used by another process.

at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
 at 
sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
 ... 32 more

 

..



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6982) java.lang.ArithmeticException: / by zero

2018-06-01 Thread wade wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wade wu updated KAFKA-6982:
---
Description: 
The producer keeps sending messages to Kafka while Kafka is down. 

Server.log shows: 

..

[2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to log 
file D:\data\Kafka\kafka-logs\__consumer_offsets-6\.log due 
to Corrupt index found, index file 
(D:\data\Kafka\kafka-logs\__consumer_offsets-6\.index) has 
non-zero size but the last offset is 0 which is no greater than the base offset 
0.}, recovering segment and rebuilding index files... (kafka.log.Log)
 [2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to log 
file D:\data\Kafka\kafka-logs\__consumer_offsets-6\.log due 
to Corrupt index found, index file 
(D:\data\Kafka\kafka-logs\__consumer_offsets-6\.index) has 
non-zero size but the last offset is 0 which is no greater than the base offset 
0.}, recovering segment and rebuilding index files... (kafka.log.Log)
 [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
 java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)
 [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
 java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)
 [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
 java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)

..

 

This line of code in SocketServer.scala causes the error: 

                  currentProcessor = currentProcessor % processors.size
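The failure can be reproduced without any Kafka classes: once the broker's only log directory goes offline, its Processor threads are removed, processors.size becomes 0, and the round-robin step above divides by zero. A hypothetical sketch of the faulty step and a guarded alternative (names are illustrative, not the actual SocketServer fields):

```scala
object AcceptorModuloSketch extends App {
  var currentProcessor = 3
  val processors = Vector.empty[Int] // all processors removed

  // Unguarded round-robin step: `x % 0` throws ArithmeticException.
  val threw =
    try { currentProcessor = currentProcessor % processors.size; false }
    catch { case _: ArithmeticException => true }
  assert(threw)

  // Guarded step: only advance when a processor exists to hand off to.
  val next =
    if (processors.isEmpty) None
    else Some(processors(currentProcessor % processors.size))
  assert(next.isEmpty)
}
```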

 

 

 

 

  was:
The producer keeps sending messages to Kafka while Kafka is down. 

Server.log shows: 

..

[2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to log 
file D:\data\Kafka\kafka-logs\__consumer_offsets-6\.log due 
to Corrupt index found, index file 
(D:\data\Kafka\kafka-logs\__consumer_offsets-6\.index) has 
non-zero size but the last offset is 0 which is no greater than the base offset 
0.}, recovering segment and rebuilding index files... (kafka.log.Log)
[2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to log 
file D:\data\Kafka\kafka-logs\__consumer_offsets-6\.log due 
to Corrupt index found, index file 
(D:\data\Kafka\kafka-logs\__consumer_offsets-6\.index) has 
non-zero size but the last offset is 0 which is no greater than the base offset 
0.}, recovering segment and rebuilding index files... (kafka.log.Log)
[2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)
[2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)
[2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)

..


> java.lang.ArithmeticException: / by zero
> 
>
> Key: KAFKA-6982
> URL: https://issues.apache.org/jira/browse/KAFKA-6982
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 1.1.0
> Environment: Windows 10. 
>Reporter: wade wu
>Priority: Major
>
> The producer keeps sending messages to Kafka while Kafka is down. 
> Server.log shows: 
> ..
> [2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
> dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to 
> log file 
> D:\data\Kafka\kafka-logs\__consumer_offsets-6\.log due to 
> Corrupt index found, index file 
> (D:\data\Kafka\kafka-logs\__consumer_offsets-6\.index) has 
> non-zero size but the last offset is 0 which is no greater than the base 
> offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
>  [2018

[jira] [Created] (KAFKA-6982) java.lang.ArithmeticException: / by zero

2018-06-01 Thread wade wu (JIRA)
wade wu created KAFKA-6982:
--

 Summary: java.lang.ArithmeticException: / by zero
 Key: KAFKA-6982
 URL: https://issues.apache.org/jira/browse/KAFKA-6982
 Project: Kafka
  Issue Type: Bug
  Components: network
Affects Versions: 1.1.0
 Environment: Windows 10. 

Reporter: wade wu


The producer keeps sending messages to Kafka while Kafka is down. 

Server.log shows: 

..

[2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to log 
file D:\data\Kafka\kafka-logs\__consumer_offsets-6\.log due 
to Corrupt index found, index file 
(D:\data\Kafka\kafka-logs\__consumer_offsets-6\.index) has 
non-zero size but the last offset is 0 which is no greater than the base offset 
0.}, recovering segment and rebuilding index files... (kafka.log.Log)
[2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to log 
file D:\data\Kafka\kafka-logs\__consumer_offsets-6\.log due 
to Corrupt index found, index file 
(D:\data\Kafka\kafka-logs\__consumer_offsets-6\.index) has 
non-zero size but the last offset is 0 which is no greater than the base offset 
0.}, recovering segment and rebuilding index files... (kafka.log.Log)
[2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)
[2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)
[2018-06-01 17:01:34,664] ERROR Error while accepting connection 
(kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:354)
 at java.lang.Thread.run(Thread.java:748)

..



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6722) SensorAccess.getOrCreate should be more efficient

2018-05-31 Thread wade wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wade wu updated KAFKA-6722:
---
External issue URL: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-NEXT%3A+Get+rid+of+unnecessary+read+lock

> SensorAccess.getOrCreate should be more efficient
> -
>
> Key: KAFKA-6722
> URL: https://issues.apache.org/jira/browse/KAFKA-6722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: wade wu
>Priority: Major
>
> In this source code: 
> [https://github.com/Microsoft/kafka/blob/azpubsub-release-0-10-2/core/src/main/scala/kafka/server/SensorAccess.scala]
> The read-lock acquire/release in getOrCreate() is unnecessary, or should 
> be refactored. For each request from a Producer this read lock is acquired 
> and released, which costs time. 
> By using a temp variable, we can completely get rid of the read lock: 
> var sensor: Sensor = metrics.getSensor(sensorName)
> if (sensor == null) {
>   lock.writeLock().lock()
>   try {
>     val temp = metrics.sensor(sensorName) // create via a temp variable
>     sensor = temp
>   } finally {
>     lock.writeLock().unlock()             // always release the write lock
>   }
> }
>  
>  
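What the report is proposing is essentially double-checked locking: probe without any lock, and only when the sensor is missing take the write lock, re-check, and publish a fully built sensor through a local variable. A self-contained sketch under assumed names (SensorRegistry and the Sensor case class are stand-ins, not Kafka's actual SensorAccess/Metrics API):

```scala
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.locks.ReentrantReadWriteLock

// Stand-in for org.apache.kafka.common.metrics.Sensor.
final case class Sensor(name: String)

final class SensorRegistry {
  // A concurrent map makes the lock-free probe safe to begin with; the
  // write lock then only serializes creation, as the report proposes.
  private val sensors = new ConcurrentHashMap[String, Sensor]()
  private val lock    = new ReentrantReadWriteLock()

  def getOrCreate(name: String): Sensor = {
    var sensor = sensors.get(name)        // fast path: no read lock taken
    if (sensor == null) {
      lock.writeLock().lock()
      try {
        sensor = sensors.get(name)        // re-check: another thread may have won
        if (sensor == null) {
          val temp = Sensor(name)         // build fully before publishing
          sensors.put(name, temp)
          sensor = temp
        }
      } finally lock.writeLock().unlock() // always release, even on failure
    }
    sensor
  }
}

object SensorRegistrySketch extends App {
  val reg = new SensorRegistry
  val a = reg.getOrCreate("produce-throttle")
  val b = reg.getOrCreate("produce-throttle")
  assert(a eq b) // the second call returns the instance created by the first
}
```

The temp variable matters: other threads must never observe a half-initialized sensor, so it is assigned to the shared map only after construction completes.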



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6722) SensorAccess.getOrCreate should be more efficient

2018-04-23 Thread wade wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wade wu updated KAFKA-6722:
---
Description: 
In this source code: 
[https://github.com/Microsoft/kafka/blob/azpubsub-release-0-10-2/core/src/main/scala/kafka/server/SensorAccess.scala]

The read-lock acquire/release in getOrCreate() is unnecessary, or should be 
refactored. For each request from a Producer this read lock is acquired and 
released, which costs time. 

By using a temp variable, we can completely get rid of the read lock: 

var sensor: Sensor = metrics.getSensor(sensorName)
if (sensor == null) {
  lock.writeLock().lock()
  try {
    val temp = metrics.sensor(sensorName) // create via a temp variable
    sensor = temp
  } finally {
    lock.writeLock().unlock()             // always release the write lock
  }
}

 

 

  was:
The read-lock acquire/release in getOrCreate() is unnecessary, or should be 
refactored. For each request from a Producer this read lock is acquired and 
released, which costs time. 

By using a temp variable, we can completely get rid of the read lock: 

var sensor: Sensor = metrics.getSensor(sensorName)
if (sensor == null) {
  lock.writeLock().lock()
  try {
    val temp = metrics.sensor(sensorName) // create via a temp variable
    sensor = temp
  } finally {
    lock.writeLock().unlock()             // always release the write lock
  }
}

 

 


> SensorAccess.getOrCreate should be more efficient
> -
>
> Key: KAFKA-6722
> URL: https://issues.apache.org/jira/browse/KAFKA-6722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: wade wu
>Priority: Major
>
> In this source code: 
> [https://github.com/Microsoft/kafka/blob/azpubsub-release-0-10-2/core/src/main/scala/kafka/server/SensorAccess.scala]
> The read-lock acquire/release in getOrCreate() is unnecessary, or should 
> be refactored. For each request from a Producer this read lock is acquired 
> and released, which costs time. 
> By using a temp variable, we can completely get rid of the read lock: 
> var sensor: Sensor = metrics.getSensor(sensorName)
> if (sensor == null) {
>   lock.writeLock().lock()
>   try {
>     val temp = metrics.sensor(sensorName) // create via a temp variable
>     sensor = temp
>   } finally {
>     lock.writeLock().unlock()             // always release the write lock
>   }
> }
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6722) SensorAccess.getOrCreate should be more efficient

2018-04-23 Thread wade wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wade wu updated KAFKA-6722:
---
Description: 
The read-lock acquire/release in getOrCreate() is unnecessary, or should be 
refactored. For each request from a Producer this read lock is acquired and 
released, which costs time. 

By using a temp variable, we can completely get rid of the read lock: 

var sensor: Sensor = metrics.getSensor(sensorName)
if (sensor == null) {
  lock.writeLock().lock()
  try {
    val temp = metrics.sensor(sensorName) // create via a temp variable
    sensor = temp
  } finally {
    lock.writeLock().unlock()             // always release the write lock
  }
}

 

 

  was:
The read-lock acquire/release in getOrCreate() is unnecessary, or should be 
refactored. For each request from a Producer this read lock is acquired and 
released, which costs time. 

The existing code does this in order to wait until sensor initialization has 
finished, but that can be handled when the sensor is created under the write 
lock, by having the thread sleep briefly (a few milliseconds); that cost is 
amortized, since creating a sensor is a one-time event.

It can easily be fixed using the code below, and it is still thread safe: 

 

var sensor: Sensor = metrics.getSensor(sensorName)

if (sensor == null) {

lock.writeLock().lock()

try{




> SensorAccess.getOrCreate should be more efficient
> -
>
> Key: KAFKA-6722
> URL: https://issues.apache.org/jira/browse/KAFKA-6722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: wade wu
>Priority: Major
>
> The read-lock acquire/release in getOrCreate() is unnecessary, or should 
> be refactored. For each request from a Producer this read lock is acquired 
> and released, which costs time. 
> By using a temp variable, we can completely get rid of the read lock: 
> var sensor: Sensor = metrics.getSensor(sensorName)
> if (sensor == null) {
>   lock.writeLock().lock()
>   try {
>     val temp = metrics.sensor(sensorName) // create via a temp variable
>     sensor = temp
>   } finally {
>     lock.writeLock().unlock()             // always release the write lock
>   }
> }
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6722) SensorAccess.getOrCreate should be more efficient

2018-03-27 Thread wade wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wade wu updated KAFKA-6722:
---
Description: 
The read-lock acquire/release in getOrCreate() is unnecessary, or should be 
refactored. For each request from a Producer this read lock is acquired and 
released, which costs time. 

The existing code does this in order to wait until sensor initialization has 
finished, but that can be handled when the sensor is created under the write 
lock, by having the thread sleep briefly (a few milliseconds); that cost is 
amortized, since creating a sensor is a one-time event.

It can easily be fixed using the code below, and it is still thread safe: 

 

var sensor: Sensor = metrics.getSensor(sensorName)

if (sensor == null) {

lock.writeLock().lock()

try{



  was:
The read-lock acquire/release in getOrCreate() is unnecessary, or should be 
refactored. For each request from a Producer this read lock is acquired and 
released, which costs time. It can easily be fixed using the code below, and 
it is still thread safe: 

 

var sensor: Sensor = metrics.getSensor(sensorName)

if (sensor == null) {

lock.writeLock().lock()

try{




> SensorAccess.getOrCreate should be more efficient
> -
>
> Key: KAFKA-6722
> URL: https://issues.apache.org/jira/browse/KAFKA-6722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: wade wu
>Priority: Major
>
> The read-lock acquire/release in getOrCreate() is unnecessary, or should 
> be refactored. For each request from a Producer this read lock is acquired 
> and released, which costs time. 
> The existing code does this in order to wait until sensor initialization 
> has finished, but that can be handled when the sensor is created under the 
> write lock, by having the thread sleep briefly (a few milliseconds); that 
> cost is amortized, since creating a sensor is a one-time event.
> It can easily be fixed using the code below, and it is still thread safe: 
>  
> var sensor: Sensor = metrics.getSensor(sensorName)
> if (sensor == null) {
> lock.writeLock().lock()
> try{
> 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6722) SensorAccess.getOrCreate should be more efficient

2018-03-27 Thread wade wu (JIRA)
wade wu created KAFKA-6722:
--

 Summary: SensorAccess.getOrCreate should be more efficient
 Key: KAFKA-6722
 URL: https://issues.apache.org/jira/browse/KAFKA-6722
 Project: Kafka
  Issue Type: Improvement
Reporter: wade wu


The read-lock acquire/release in getOrCreate() is unnecessary, or should be 
refactored. For each request from a Producer this read lock is acquired and 
released, which costs time. It can easily be fixed using the code below, and 
it is still thread safe: 

 

var sensor: Sensor = metrics.getSensor(sensorName)

if (sensor == null) {

lock.writeLock().lock()

try{





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)