[jira] [Created] (KAFKA-7950) Kafka tools GetOffsetShell -time description

2019-02-19 Thread Kartik (JIRA)
Kartik created KAFKA-7950:
-

 Summary: Kafka tools GetOffsetShell -time description 
 Key: KAFKA-7950
 URL: https://issues.apache.org/jira/browse/KAFKA-7950
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.1.0
Reporter: Kartik
Assignee: Kartik


In the Kafka GetOffsetShell tool, the --time option's description should explain 
what happens when the provided timestamp is greater than the timestamp of the 
most recently committed record.

 

Expected: "If the timestamp provided is greater than the most recently committed 
timestamp, then no offset is returned."
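The behavior being documented can be sketched with a toy model of the timestamp-to-offset lookup (an illustration of the semantics only, not Kafka's actual implementation; the record data and function name below are hypothetical):

```python
def offset_for_timestamp(records, target_ts):
    """Return the offset of the first record whose timestamp is >= target_ts,
    or None when target_ts is newer than every record (no offset to return)."""
    for ts, offset in sorted(records):
        if ts >= target_ts:
            return offset
    return None

# Hypothetical partition contents: (timestamp, offset) pairs.
records = [(1000, 0), (2000, 1), (3000, 2)]

print(offset_for_timestamp(records, 1500))  # 1 -> earliest record at/after ts 1500
print(offset_for_timestamp(records, 9999))  # None -> timestamp beyond the latest record
```

This is the "no offset is returned" case the description should call out: a timestamp past the newest record yields nothing rather than the latest offset.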

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7950) Kafka tools GetOffsetShell -time description

2019-03-01 Thread Kartik (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kartik updated KAFKA-7950:
--
Issue Type: Wish  (was: Improvement)

> Kafka tools GetOffsetShell -time description 
> -
>
> Key: KAFKA-7950
> URL: https://issues.apache.org/jira/browse/KAFKA-7950
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 2.1.0
>Reporter: Kartik
>Assignee: Kartik
>Priority: Trivial
>
> In Kafka GetOffsetShell tool, The --time description should contain 
> information regarding what happens when the timestamp value  > recently 
> committed timestamp is given.
>  
> Expected: "If timestamp value provided is greater than recently committed 
> timestamp then no offset is returned. "
>  
>  





[jira] [Commented] (KAFKA-7950) Kafka tools GetOffsetShell -time description

2019-03-02 Thread Kartik (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782384#comment-16782384
 ] 

Kartik commented on KAFKA-7950:
---

Added a note to the description telling the user that no offset is returned if 
the timestamp provided is greater than the most recently committed record's timestamp.

> Kafka tools GetOffsetShell -time description 
> -
>
> Key: KAFKA-7950
> URL: https://issues.apache.org/jira/browse/KAFKA-7950
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 2.1.0
>Reporter: Kartik
>Assignee: Kartik
>Priority: Trivial
>
> In Kafka GetOffsetShell tool, The --time description should contain 
> information regarding what happens when the timestamp value  > recently 
> committed timestamp is given.
>  
> Expected: "If timestamp value provided is greater than recently committed 
> timestamp then no offset is returned. "
>  
>  





[jira] [Commented] (KAFKA-7950) Kafka tools GetOffsetShell -time description

2019-03-02 Thread Kartik (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782385#comment-16782385
 ] 

Kartik commented on KAFKA-7950:
---

Ref: https://issues.apache.org/jira/browse/KAFKA-7794

> Kafka tools GetOffsetShell -time description 
> -
>
> Key: KAFKA-7950
> URL: https://issues.apache.org/jira/browse/KAFKA-7950
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 2.1.0
>Reporter: Kartik
>Assignee: Kartik
>Priority: Trivial
>
> In Kafka GetOffsetShell tool, The --time description should contain 
> information regarding what happens when the timestamp value  > recently 
> committed timestamp is given.
>  
> Expected: "If timestamp value provided is greater than recently committed 
> timestamp then no offset is returned. "
>  
>  





[jira] [Commented] (KAFKA-8010) kafka-configs.sh does not allow setting config with an equal in the value

2019-03-02 Thread Kartik (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782622#comment-16782622
 ] 

Kartik commented on KAFKA-8010:
---

Hi [~mimaison] ,

 

If you check the *ConfigCommand.scala* code under 
*core/src/main/scala/kafka/admin*, it expects --add-config to be provided in 
single quotes. Since you are providing --add-config 
"sasl.jaas.config=KafkaServer" in double quotes, it fails.

 

 

Command:

./kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
--entity-name 59 --alter --add-config 'sasl.jaas.config=KafkaServer'

 

Can you try the same?

 

Thanks.
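The "key=val" requirement the tool enforces can be illustrated with a small parser sketch: splitting each entry on the first '=' only keeps any embedded equals signs (as in a JAAS value) inside the value. This is a hedged toy model, not the actual ConfigCommand parsing code:

```python
def parse_add_config(entry):
    """Split one --add-config entry into (key, value).
    Splitting on the FIRST '=' only preserves any '=' inside the value."""
    key, sep, value = entry.partition("=")
    if not sep or not key:
        raise ValueError('all configs to be added must be in the format "key=val"')
    return key, value

# An '=' inside the value survives because only the first '=' delimits key/val.
k, v = parse_add_config('sasl.jaas.config=KafkaServer { username="myuser"; };')
print(k)  # sasl.jaas.config
print(v)  # KafkaServer { username="myuser"; };
```

The shell-quoting advice above matters for the same reason: single quotes hand the value to the tool as one untouched string, so the parser sees exactly one entry with its inner '=' characters intact.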

> kafka-configs.sh does not allow setting config with an equal in the value
> -
>
> Key: KAFKA-8010
> URL: https://issues.apache.org/jira/browse/KAFKA-8010
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Mickael Maison
>Priority: Major
>
> The sasl.jaas.config typically includes equals in its value. Unfortunately 
> the kafka-configs tool does not parse such values correctly and hits an error:
> ./kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 59 --alter --add-config "sasl.jaas.config=KafkaServer \{\n  
> org.apache.kafka.common.security.plain.PlainLoginModule required\n  
> username=\"myuser\"\n  password=\"mypassword\";\n};\nClient \{\n  
> org.apache.zookeeper.server.auth.DigestLoginModule required\n  
> username=\"myuser2\"\n  password=\"mypassword2\;\n};"
> requirement failed: Invalid entity config: all configs to be added must be in 
> the format "key=val"





[jira] [Updated] (KAFKA-8010) kafka-configs.sh does not allow setting config with an equal in the value

2019-03-05 Thread Kartik (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kartik updated KAFKA-8010:
--
Attachment: image-2019-03-05-19-41-44-461.png

> kafka-configs.sh does not allow setting config with an equal in the value
> -
>
> Key: KAFKA-8010
> URL: https://issues.apache.org/jira/browse/KAFKA-8010
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Mickael Maison
>Priority: Major
> Attachments: image-2019-03-05-19-41-44-461.png
>
>
> The sasl.jaas.config typically includes equals in its value. Unfortunately 
> the kafka-configs tool does not parse such values correctly and hits an error:
> ./kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 59 --alter --add-config "sasl.jaas.config=KafkaServer \{\n  
> org.apache.kafka.common.security.plain.PlainLoginModule required\n  
> username=\"myuser\"\n  password=\"mypassword\";\n};\nClient \{\n  
> org.apache.zookeeper.server.auth.DigestLoginModule required\n  
> username=\"myuser2\"\n  password=\"mypassword2\;\n};"
> requirement failed: Invalid entity config: all configs to be added must be in 
> the format "key=val"





[jira] [Commented] (KAFKA-8010) kafka-configs.sh does not allow setting config with an equal in the value

2019-03-05 Thread Kartik (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784480#comment-16784480
 ] 

Kartik commented on KAFKA-8010:
---

[~mimaison] It works for me. Attaching an image; ignore the warning message. 
The value is parsed correctly when provided in single quotes.

 

!image-2019-03-05-19-41-44-461.png!

 

Are you still getting the "Invalid entity config: all configs to be added must 
be in the format \"key=val\"" error even after providing the value in single 
quotes? Can you share a screenshot?

 

> kafka-configs.sh does not allow setting config with an equal in the value
> -
>
> Key: KAFKA-8010
> URL: https://issues.apache.org/jira/browse/KAFKA-8010
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Mickael Maison
>Priority: Major
> Attachments: image-2019-03-05-19-41-44-461.png
>
>
> The sasl.jaas.config typically includes equals in its value. Unfortunately 
> the kafka-configs tool does not parse such values correctly and hits an error:
> ./kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 59 --alter --add-config "sasl.jaas.config=KafkaServer \{\n  
> org.apache.kafka.common.security.plain.PlainLoginModule required\n  
> username=\"myuser\"\n  password=\"mypassword\";\n};\nClient \{\n  
> org.apache.zookeeper.server.auth.DigestLoginModule required\n  
> username=\"myuser2\"\n  password=\"mypassword2\;\n};"
> requirement failed: Invalid entity config: all configs to be added must be in 
> the format "key=val"





[jira] [Comment Edited] (KAFKA-8010) kafka-configs.sh does not allow setting config with an equal in the value

2019-03-05 Thread Kartik (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16784480#comment-16784480
 ] 

Kartik edited comment on KAFKA-8010 at 3/5/19 2:16 PM:
---

[~mimaison] It works for me. Attaching an image; ignore the warning message. 
The value is parsed correctly when provided in single quotes.

!image-2019-03-05-19-45-47-168.png!

Are you still getting the "Invalid entity config: all configs to be added must 
be in the format \"key=val\"" error even after providing the value in single 
quotes? Can you share a screenshot?


was (Author: kartikvk1996):
[~mimaison] For me it's working.  Attaching the image, Ignore the warning 
message. The value is getting parsed when provided in a single quote.

 

!image-2019-03-05-19-41-44-461.png!

 

Are you still getting " Invalid entity config: all configs to be added must be 
in the format "key=val" error message even after proving the data in the single 
quote. Can you share the image?

 

> kafka-configs.sh does not allow setting config with an equal in the value
> -
>
> Key: KAFKA-8010
> URL: https://issues.apache.org/jira/browse/KAFKA-8010
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Mickael Maison
>Priority: Major
> Attachments: image-2019-03-05-19-45-47-168.png
>
>
> The sasl.jaas.config typically includes equals in its value. Unfortunately 
> the kafka-configs tool does not parse such values correctly and hits an error:
> ./kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 59 --alter --add-config "sasl.jaas.config=KafkaServer \{\n  
> org.apache.kafka.common.security.plain.PlainLoginModule required\n  
> username=\"myuser\"\n  password=\"mypassword\";\n};\nClient \{\n  
> org.apache.zookeeper.server.auth.DigestLoginModule required\n  
> username=\"myuser2\"\n  password=\"mypassword2\;\n};"
> requirement failed: Invalid entity config: all configs to be added must be in 
> the format "key=val"





[jira] [Updated] (KAFKA-8010) kafka-configs.sh does not allow setting config with an equal in the value

2019-03-05 Thread Kartik (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kartik updated KAFKA-8010:
--
Attachment: (was: image-2019-03-05-19-41-44-461.png)

> kafka-configs.sh does not allow setting config with an equal in the value
> -
>
> Key: KAFKA-8010
> URL: https://issues.apache.org/jira/browse/KAFKA-8010
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Mickael Maison
>Priority: Major
> Attachments: image-2019-03-05-19-45-47-168.png
>
>
> The sasl.jaas.config typically includes equals in its value. Unfortunately 
> the kafka-configs tool does not parse such values correctly and hits an error:
> ./kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
> --entity-name 59 --alter --add-config "sasl.jaas.config=KafkaServer \{\n  
> org.apache.kafka.common.security.plain.PlainLoginModule required\n  
> username=\"myuser\"\n  password=\"mypassword\";\n};\nClient \{\n  
> org.apache.zookeeper.server.auth.DigestLoginModule required\n  
> username=\"myuser2\"\n  password=\"mypassword2\;\n};"
> requirement failed: Invalid entity config: all configs to be added must be in 
> the format "key=val"





[jira] [Created] (KAFKA-8097) Kafka broker crashes with java.nio.file.FileSystemException Exception

2019-03-12 Thread Kartik (JIRA)
Kartik created KAFKA-8097:
-

 Summary: Kafka broker crashes with 
java.nio.file.FileSystemException Exception
 Key: KAFKA-8097
 URL: https://issues.apache.org/jira/browse/KAFKA-8097
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.1.1
Reporter: Kartik
Assignee: Kartik


The Kafka broker crashes with the exception below while deleting segments.

 

Exception thrown:

The process cannot access the file because it is being used by another process.

at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
 at 
sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:809)
 at kafka.log.AbstractIndex.renameTo(AbstractIndex.scala:205)
 at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:489)
 at kafka.log.Log.asyncDeleteSegment(Log.scala:1907)
 at kafka.log.Log.deleteSegment(Log.scala:1892)
 at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1438)
 at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1438)
 at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
 at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
 at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1438)
 at scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
 at kafka.log.Log.maybeHandleIOException(Log.scala:1996)
 at kafka.log.Log.deleteSegments(Log.scala:1429)
 at kafka.log.Log.deleteOldSegments(Log.scala:1424)
 at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1502)
 at kafka.log.Log.deleteOldSegments(Log.scala:1492)
 at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:898)
 at kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:895)
 at scala.collection.immutable.List.foreach(List.scala:388)
 at kafka.log.LogManager.cleanupLogs(LogManager.scala:895)
 at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:395)
 at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
 at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:63)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
 Suppressed: java.nio.file.FileSystemException: 
C:\Users\Documents\Kafka-runner\kafka\bin\windows\UsersDocumentsKafka-runnerkafkakafka-logs\test-9\.index
 -> 
C:\Users\Documents\Kafka-runner\kafka\bin\windows\UsersDocumentsKafka-runnerkafkakafka-logs\test-9\.index.deleted:
 The process cannot access the file because it is being used by another process.

at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
 at 
sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
 at java.nio.file.Files.move(Files.java:1395)
 at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:806)
 ... 30 more
[2019-03-12 12:14:12,830] INFO [ReplicaManager broker=0] Stopping serving 
replicas in dir

 





[jira] [Updated] (KAFKA-8097) Kafka broker crashes with java.nio.file.FileSystemException Exception

2019-04-03 Thread Kartik (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kartik updated KAFKA-8097:
--
Attachment: image-2019-04-03-21-29-23-187.png

> Kafka broker crashes with java.nio.file.FileSystemException Exception
> -
>
> Key: KAFKA-8097
> URL: https://issues.apache.org/jira/browse/KAFKA-8097
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.1
>Reporter: Kartik
>Assignee: Kartik
>Priority: Minor
> Attachments: image-2019-04-03-21-29-23-187.png
>
>
> Kafka broker crashes with below exception while deleting the segments
>  
> Exception thrown:
> The process cannot access the file because it is being used by another 
> process.
> at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
>  at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>  at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
>  at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>  at java.nio.file.Files.move(Files.java:1395)
>  at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:809)
>  at kafka.log.AbstractIndex.renameTo(AbstractIndex.scala:205)
>  at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:489)
>  at kafka.log.Log.asyncDeleteSegment(Log.scala:1907)
>  at kafka.log.Log.deleteSegment(Log.scala:1892)
>  at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1438)
>  at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1438)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>  at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>  at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1438)
>  at scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
>  at kafka.log.Log.maybeHandleIOException(Log.scala:1996)
>  at kafka.log.Log.deleteSegments(Log.scala:1429)
>  at kafka.log.Log.deleteOldSegments(Log.scala:1424)
>  at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1502)
>  at kafka.log.Log.deleteOldSegments(Log.scala:1492)
>  at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:898)
>  at kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:895)
>  at scala.collection.immutable.List.foreach(List.scala:388)
>  at kafka.log.LogManager.cleanupLogs(LogManager.scala:895)
>  at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:395)
>  at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
>  at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:63)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  Suppressed: java.nio.file.FileSystemException: 
> C:\Users\Documents\Kafka-runner\kafka\bin\windows\UsersDocumentsKafka-runnerkafkakafka-logs\test-9\.index
>  -> 
> C:\Users\Documents\Kafka-runner\kafka\bin\windows\UsersDocumentsKafka-runnerkafkakafka-logs\test-9\.index.deleted:
>  The process cannot access the file because it is being used by another 
> process.
> at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
>  at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>  at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
>  at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>  at java.nio.file.Files.move(Files.java:1395)
>  at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:806)
>  ... 30 more
> [2019-03-12 12:14:12,830] INFO [ReplicaManager broker=0] Stopping serving 
> replicas in dir
>  





[jira] [Commented] (KAFKA-8097) Kafka broker crashes with java.nio.file.FileSystemException Exception

2019-04-03 Thread Kartik (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16808874#comment-16808874
 ] 

Kartik commented on KAFKA-8097:
---

We need to close the stream opened on the file before renaming.

In AbstractIndex.scala under core/src/main/scala/kafka/log, the renameTo() 
function renames the file while the stream on the old file is still open. We 
need to close the stream first by calling the closeHandler() function.

!image-2019-04-03-21-29-23-187.png!

[~huxi_2b] Can you comment on my observation? Am I correct?
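The failure mode in the stack trace is Windows-specific: renaming a file that still has an open handle fails there, while closing the handle first makes the rename succeed. A minimal sketch of the idea behind the suggested fix (plain Python, not the actual Scala code; on Linux the rename would succeed even with the handle open, which is why this bug only shows on Windows):

```python
import os
import tempfile

# Create a temporary "index" file and keep an open handle on it.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "00000000.index")
handle = open(path, "w")
handle.write("index data")

# The suggested fix: close the open handle BEFORE renaming.
# With the handle still open, os.rename would raise on Windows
# ("The process cannot access the file because it is being used
# by another process").
handle.close()
os.rename(path, path + ".deleted")  # safe now on all platforms

print(os.path.exists(path + ".deleted"))  # True
```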

> Kafka broker crashes with java.nio.file.FileSystemException Exception
> -
>
> Key: KAFKA-8097
> URL: https://issues.apache.org/jira/browse/KAFKA-8097
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.1
>Reporter: Kartik
>Assignee: Kartik
>Priority: Minor
> Attachments: image-2019-04-03-21-29-23-187.png
>
>
> Kafka broker crashes with below exception while deleting the segments
>  
> Exception thrown:
> The process cannot access the file because it is being used by another 
> process.
> at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
>  at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>  at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
>  at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>  at java.nio.file.Files.move(Files.java:1395)
>  at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:809)
>  at kafka.log.AbstractIndex.renameTo(AbstractIndex.scala:205)
>  at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:489)
>  at kafka.log.Log.asyncDeleteSegment(Log.scala:1907)
>  at kafka.log.Log.deleteSegment(Log.scala:1892)
>  at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1438)
>  at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1438)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>  at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>  at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1438)
>  at scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
>  at kafka.log.Log.maybeHandleIOException(Log.scala:1996)
>  at kafka.log.Log.deleteSegments(Log.scala:1429)
>  at kafka.log.Log.deleteOldSegments(Log.scala:1424)
>  at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1502)
>  at kafka.log.Log.deleteOldSegments(Log.scala:1492)
>  at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:898)
>  at kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:895)
>  at scala.collection.immutable.List.foreach(List.scala:388)
>  at kafka.log.LogManager.cleanupLogs(LogManager.scala:895)
>  at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:395)
>  at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
>  at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:63)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  Suppressed: java.nio.file.FileSystemException: 
> C:\Users\Documents\Kafka-runner\kafka\bin\windows\UsersDocumentsKafka-runnerkafkakafka-logs\test-9\.index
>  -> 
> C:\Users\Documents\Kafka-runner\kafka\bin\windows\UsersDocumentsKafka-runnerkafkakafka-logs\test-9\.index.deleted:
>  The process cannot access the file because it is being used by another 
> process.
> at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
>  at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>  at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
>  at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>  at java.nio.file.Files.move(Files.java:1395)
>  at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:806)
>  ... 30 more
> [2019-03-12 12:14:12,830] INFO [ReplicaManager broker=0] Stopping serving 
> replicas in dir
>  





[jira] [Commented] (KAFKA-8188) Zookeeper Connection Issue Take Down the Whole Kafka Cluster

2019-04-05 Thread Kartik (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16811415#comment-16811415
 ] 

Kartik commented on KAFKA-8188:
---

[~candicewan]

I see the error below: the JAAS configuration file is missing a 'Client' section.

2019-04-03 08:25:19.611 
[zk-session-expiry-handler0-SendThread(host1:36100)]WARN 
org.apache.zookeeper.ClientCnxn - SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration section named 
'Client' was found in specified JAAS configuration file: 
'file:/app0/common/config/ldap-auth.config'. Will 
continue connection to Zookeeper server without SASL authentication, if 
Zookeeper server allows it.
2019-04-03 08:25:19.611 [zk-session-expiry-handler0-SendThread(host1:36100)] 
INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 
host1/169.30.47.206:36100
2019-04-03 08:25:19.611 [zk-session-expiry-handler0-EventThread] ERROR 
kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Auth failed.

 

Because of this, the broker failed to authenticate to ZooKeeper: since it could 
not find a 'Client' section in the JAAS file, it attempted to connect without 
SASL authentication, and ZooKeeper rejected the connection.
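For reference, a JAAS file used by ZooKeeper clients must contain a section literally named Client. A fragment like the one below would satisfy that requirement (the login module and credentials here are illustrative placeholders, not taken from the reporter's setup):

```
Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="zkclient"
  password="zkclient-secret";
};
```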

 

> Zookeeper Connection Issue Take Down the Whole Kafka Cluster
> 
>
> Key: KAFKA-8188
> URL: https://issues.apache.org/jira/browse/KAFKA-8188
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.1
>Reporter: Candice Wan
>Priority: Critical
> Attachments: thread_dump.log
>
>
> We recently upgraded to 2.1.1 and saw below zookeeper connection issues which 
> took down the whole cluster. We've got 3 nodes in the cluster, 2 of which 
> threw below exceptions at the same second.
> 2019-04-03 08:25:19.603 [main-SendThread(host2:36100)] WARN 
> org.apache.zookeeper.ClientCnxn - Unable to reconnect to ZooKeeper service, 
> session 0x10071ff9baf0001 has expired
>  2019-04-03 08:25:19.603 [main-SendThread(host2:36100)] INFO 
> org.apache.zookeeper.ClientCnxn - Unable to reconnect to ZooKeeper service, 
> session 0x10071ff9baf0001 has expired, closing socket connection
>  2019-04-03 08:25:19.605 [main-EventThread] INFO 
> org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 
> 0x10071ff9baf0001
>  2019-04-03 08:25:19.605 [zk-session-expiry-handler0] INFO 
> kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Session expired.
>  2019-04-03 08:25:19.609 [zk-session-expiry-handler0] INFO 
> kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Initializing a new 
> session to host1:36100,host2:36100,host3:36100.
>  2019-04-03 08:25:19.610 [zk-session-expiry-handler0] INFO 
> org.apache.zookeeper.ZooKeeper - Initiating client connection, 
> connectString=host1:36100,host2:36100,host3:36100 sessionTimeout=6000 
> watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@12f8b1d8
>  2019-04-03 08:25:19.610 [zk-session-expiry-handler0] INFO 
> o.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
>  2019-04-03 08:25:19.611 [zk-session-expiry-handler0-SendThread(host1:36100)] 
> WARN org.apache.zookeeper.ClientCnxn - SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> 'file:/app0/common/config/ldap-auth.config'. Will continue connection to 
> Zookeeper server without SASL authentication, if Zookeeper server allows it.
>  2019-04-03 08:25:19.611 [zk-session-expiry-handler0-SendThread(host1:36100)] 
> INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 
> host1/169.30.47.206:36100
>  2019-04-03 08:25:19.611 [zk-session-expiry-handler0-EventThread] ERROR 
> kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Auth failed.
>  2019-04-03 08:25:19.611 [zk-session-expiry-handler0-SendThread(host1:36100)] 
> INFO org.apache.zookeeper.ClientCnxn - Socket connection established, 
> initiating session, client: /169.20.222.18:56876, server: 
> host1/169.30.47.206:36100
>  2019-04-03 08:25:19.612 [controller-event-thread] INFO 
> k.controller.PartitionStateMachine - [PartitionStateMachine controllerId=3] 
> Stopped partition state machine
>  2019-04-03 08:25:19.613 [controller-event-thread] INFO 
> kafka.controller.ReplicaStateMachine - [ReplicaStateMachine controllerId=3] 
> Stopped replica state machine
>  2019-04-03 08:25:19.614 [controller-event-thread] INFO 
> kafka.controller.KafkaController - [Controller id=3] Resigned
>  2019-04-03 08:25:19.615 [controller-event-thread] INFO 
> kafka.zk.KafkaZkClient - Creating /brokers/ids/3 (is it secure? false)
>  2019-04-03 08:25:19.628 [zk-session-expiry-handler0-SendThread(host1:36100)] 
> INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on 
> server host1/169.30.47.206:36100, sessionid 

[jira] [Created] (KAFKA-15171) Kafka client poll never notifies when broker is down.

2023-07-10 Thread Kartik (Jira)
Kartik created KAFKA-15171:
--

 Summary: Kafka client poll never notifies when broker is down.
 Key: KAFKA-15171
 URL: https://issues.apache.org/jira/browse/KAFKA-15171
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 3.2.0
Reporter: Kartik
 Fix For: 3.2.1


Hi All, 

We are using Apache Camel to connect to the Kafka endpoint, which internally 
uses the Kafka client to connect to the Kafka broker.

When the broker is down, the client keeps logging the warning "node -1 
disconnected, Broker may not be available"; it never gives up or throws an 
exception, so the application never stops polling.

Is there any config parameter in the client that allows the poll function to 
throw a RuntimeException, or is this a bug?
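At the application level, one workaround is to wrap polling with a give-up threshold so the loop raises after too many consecutive empty polls instead of looping forever. A sketch of that idea (plain Python, not the Camel or kafka-clients API; the names and threshold are made up):

```python
class BrokerUnavailableError(RuntimeError):
    """Raised when too many consecutive polls return no records."""

def poll_with_give_up(poll_fn, max_empty_polls=3):
    """Yield batches from poll_fn; raise after max_empty_polls
    consecutive empty results instead of polling forever."""
    empty = 0
    while True:
        records = poll_fn()
        if records:
            empty = 0
            yield records
        else:
            empty += 1
            if empty >= max_empty_polls:
                raise BrokerUnavailableError(
                    f"no records after {empty} consecutive polls")

# Simulated consumer whose broker is down: every poll returns nothing.
def dead_broker_poll():
    return []

try:
    for batch in poll_with_give_up(dead_broker_poll, max_empty_polls=3):
        pass
except BrokerUnavailableError as e:
    print("gave up:", e)
```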


--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-7794) kafka.tools.GetOffsetShell does not return the offset in some cases

2019-02-11 Thread Kartik (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kartik updated KAFKA-7794:
--
Attachment: image-2019-02-11-20-51-07-805.png

> kafka.tools.GetOffsetShell does not return the offset in some cases
> ---
>
> Key: KAFKA-7794
> URL: https://issues.apache.org/jira/browse/KAFKA-7794
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: Daniele Ascione
>Priority: Critical
>  Labels: Kafka, ShellCommands, kafka-0.10, offset, shell, 
> shell-script, shellscript, tools, usability
> Attachments: image-2019-02-11-20-51-07-805.png
>
>
> For some timestamp inputs (other than -1 or -2), GetOffsetShell is not 
> able to retrieve the offset.
> For example, if _x_ is the timestamp in that "not working range", and you 
> execute:
> {code:java}
> bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 
> $KAFKA_ADDRESS --topic $MY_TOPIC --time x
> {code}
> The output is:
> {code:java}
> MY_TOPIC:8:
> MY_TOPIC:2:
> MY_TOPIC:5:
> MY_TOPIC:4:
> MY_TOPIC:7:
> MY_TOPIC:1:
> MY_TOPIC:9:{code}
> while after the last ":" an integer representing the offset is expected.
> 
> Steps to reproduce it:
>  # Consume all the messages from the beginning and print the timestamp:
> {code:java}
> bin/kafka-simple-consumer-shell.sh --no-wait-at-logend --broker-list 
> $KAFKA_ADDRESS --topic $MY_TOPIC --property print.timestamp=true  > 
> messages{code}
>  # Sort the messages by timestamp and get some of the oldest messages:
> {code:java}
>  awk -F "CreateTime:" '{ print $2}' messages | sort -n > msg_sorted{code}
>  # Take (for example) the timestamp of the 10th oldest message, and see if 
> GetOffsetShell is not able to print the offset:
> {code:java}
> timestamp="$(sed '10q;d' msg_sorted | cut -f1)"
> bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 
> $KAFKA_ADDRESS --topic $MY_TOPIC --time $timestamp
> # The output should be something like:
> # MY_TOPIC:1:
> # MY_TOPIC:2:
> (repeated for every partition){code}
>  # Verify that the message with that timestamp is still in Kafka:
> {code:java}
> bin/kafka-simple-consumer-shell.sh --no-wait-at-logend --broker-list 
> $KAFKA_ADDRESS --topic $MY_TOPIC --property print.timestamp=true | grep 
> "CreateTime:$timestamp" {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7794) kafka.tools.GetOffsetShell does not return the offset in some cases

2019-02-11 Thread Kartik (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16765075#comment-16765075
 ] 

Kartik commented on KAFKA-7794:
---

[~audhumla] I tried your steps and I was able to retrieve the offset correctly. 
No offset is returned if the timestamp you provide is greater than the 
timestamp of the most recently added record.

!image-2019-02-11-20-51-07-805.png!

If I provide a timestamp greater than 1549897929598, such as 1549897929599, no 
offset is returned.

!image-2019-02-11-20-57-03-579.png!

When I provide a valid timestamp, I get the correct offset:

!image-2019-02-11-20-56-13-362.png!

Let me know if I am wrong.
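The lookup semantics described above can be modelled in a few lines. This is an 
illustrative Python sketch of the ListOffsets behaviour behind --time (the 
earliest offset whose timestamp is >= the target, or nothing when the target is 
newer than every record), not the actual broker implementation:

```python
import bisect


def lookup_offset(records, target_ts):
    """Model of the --time lookup: records is a list of
    (timestamp, offset) pairs sorted by timestamp. Return the offset
    of the earliest record with timestamp >= target_ts, or None when
    target_ts is newer than every record (the "no offset" case)."""
    timestamps = [ts for ts, _ in records]
    i = bisect.bisect_left(timestamps, target_ts)
    if i == len(records):
        return None  # timestamp is newer than the latest record
    return records[i][1]
```

For example, with records at timestamps 100, 200, 300, a target of 301 yields 
None, which matches the empty output after the last ":" in the tool.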

> kafka.tools.GetOffsetShell does not return the offset in some cases
> ---
>
> Key: KAFKA-7794
> URL: https://issues.apache.org/jira/browse/KAFKA-7794
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: Daniele Ascione
>Priority: Critical
>  Labels: Kafka, ShellCommands, kafka-0.10, offset, shell, 
> shell-script, shellscript, tools, usability
> Attachments: image-2019-02-11-20-51-07-805.png, 
> image-2019-02-11-20-56-13-362.png, image-2019-02-11-20-57-03-579.png
>
>


[jira] [Updated] (KAFKA-7794) kafka.tools.GetOffsetShell does not return the offset in some cases

2019-02-11 Thread Kartik (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kartik updated KAFKA-7794:
--
Attachment: image-2019-02-11-20-57-03-579.png

> kafka.tools.GetOffsetShell does not return the offset in some cases
> ---
>
> Key: KAFKA-7794
> URL: https://issues.apache.org/jira/browse/KAFKA-7794
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: Daniele Ascione
>Priority: Critical
>  Labels: Kafka, ShellCommands, kafka-0.10, offset, shell, 
> shell-script, shellscript, tools, usability
> Attachments: image-2019-02-11-20-51-07-805.png, 
> image-2019-02-11-20-56-13-362.png, image-2019-02-11-20-57-03-579.png
>
>


[jira] [Updated] (KAFKA-7794) kafka.tools.GetOffsetShell does not return the offset in some cases

2019-02-11 Thread Kartik (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kartik updated KAFKA-7794:
--
Attachment: image-2019-02-11-20-56-13-362.png

> kafka.tools.GetOffsetShell does not return the offset in some cases
> ---
>
> Key: KAFKA-7794
> URL: https://issues.apache.org/jira/browse/KAFKA-7794
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: Daniele Ascione
>Priority: Critical
>  Labels: Kafka, ShellCommands, kafka-0.10, offset, shell, 
> shell-script, shellscript, tools, usability
> Attachments: image-2019-02-11-20-51-07-805.png, 
> image-2019-02-11-20-56-13-362.png
>
>


[jira] [Commented] (KAFKA-7794) kafka.tools.GetOffsetShell does not return the offset in some cases

2019-02-11 Thread Kartik (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16765684#comment-16765684
 ] 

Kartik commented on KAFKA-7794:
---

Hi [~huxi_2b], tagging you because you might know this.

Ideally, when the provided timestamp is greater than the latest committed 
record's timestamp, should the tool return the latest offset, or should an 
error message be thrown? Based on your comment, I can work on this.

> kafka.tools.GetOffsetShell does not return the offset in some cases
> ---
>
> Key: KAFKA-7794
> URL: https://issues.apache.org/jira/browse/KAFKA-7794
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: Daniele Ascione
>Priority: Critical
>  Labels: Kafka, ShellCommands, kafka-0.10, offset, shell, 
> shell-script, shellscript, tools, usability
> Attachments: image-2019-02-11-20-51-07-805.png, 
> image-2019-02-11-20-56-13-362.png, image-2019-02-11-20-57-03-579.png
>
>


[jira] [Assigned] (KAFKA-7794) kafka.tools.GetOffsetShell does not return the offset in some cases

2019-02-11 Thread Kartik (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kartik reassigned KAFKA-7794:
-

Assignee: Kartik

> kafka.tools.GetOffsetShell does not return the offset in some cases
> ---
>
> Key: KAFKA-7794
> URL: https://issues.apache.org/jira/browse/KAFKA-7794
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: Daniele Ascione
>Assignee: Kartik
>Priority: Critical
>  Labels: Kafka, ShellCommands, kafka-0.10, offset, shell, 
> shell-script, shellscript, tools, usability
> Attachments: image-2019-02-11-20-51-07-805.png, 
> image-2019-02-11-20-56-13-362.png, image-2019-02-11-20-57-03-579.png
>
>


[jira] [Commented] (KAFKA-7794) kafka.tools.GetOffsetShell does not return the offset in some cases

2019-02-12 Thread Kartik (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16766100#comment-16766100
 ] 

Kartik commented on KAFKA-7794:
---

[~ijuma] Can you help me here? For a timestamp greater than the latest 
committed record's timestamp, should we return the latest offset or throw an 
error message? I can then work on this issue accordingly.

> kafka.tools.GetOffsetShell does not return the offset in some cases
> ---
>
> Key: KAFKA-7794
> URL: https://issues.apache.org/jira/browse/KAFKA-7794
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: Daniele Ascione
>Assignee: Kartik
>Priority: Critical
>  Labels: Kafka, ShellCommands, kafka-0.10, offset, shell, 
> shell-script, shellscript, tools, usability
> Attachments: image-2019-02-11-20-51-07-805.png, 
> image-2019-02-11-20-56-13-362.png, image-2019-02-11-20-57-03-579.png
>
>


[jira] [Updated] (KAFKA-7794) kafka.tools.GetOffsetShell does not return the offset in some cases

2019-02-12 Thread Kartik (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kartik updated KAFKA-7794:
--
Attachment: image-2019-02-13-11-43-28-873.png

> kafka.tools.GetOffsetShell does not return the offset in some cases
> ---
>
> Key: KAFKA-7794
> URL: https://issues.apache.org/jira/browse/KAFKA-7794
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: Daniele Ascione
>Assignee: Kartik
>Priority: Critical
>  Labels: Kafka, ShellCommands, kafka-0.10, offset, shell, 
> shell-script, shellscript, tools, usability
> Attachments: image-2019-02-11-20-51-07-805.png, 
> image-2019-02-11-20-56-13-362.png, image-2019-02-11-20-57-03-579.png, 
> image-2019-02-12-16-19-25-170.png, image-2019-02-12-16-21-13-126.png, 
> image-2019-02-12-16-23-38-399.png, image-2019-02-13-11-43-24-128.png, 
> image-2019-02-13-11-43-28-873.png
>
>


[jira] [Commented] (KAFKA-7794) kafka.tools.GetOffsetShell does not return the offset in some cases

2019-02-12 Thread Kartik (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16766828#comment-16766828
 ] 

Kartik commented on KAFKA-7794:
---

[~audhumla]

I tried your steps and it works as expected for me:
 # Created a new topic with 10 partitions and replication factor = 1
 # Inserted 5000 rows !image-2019-02-13-11-43-28-873.png!
 # Consumed all the messages !image-2019-02-13-11-44-18-736.png!
 # Provided the timestamp of the 10th message and got the offset correctly. 
!image-2019-02-13-11-45-21-459.png!

Can you tell me which version you are testing with? It looks like the issue is 
fixed in newer versions.

> kafka.tools.GetOffsetShell does not return the offset in some cases
> ---
>
> Key: KAFKA-7794
> URL: https://issues.apache.org/jira/browse/KAFKA-7794
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: Daniele Ascione
>Assignee: Kartik
>Priority: Critical
>  Labels: Kafka, ShellCommands, kafka-0.10, offset, shell, 
> shell-script, shellscript, tools, usability
> Attachments: image-2019-02-11-20-51-07-805.png, 
> image-2019-02-11-20-56-13-362.png, image-2019-02-11-20-57-03-579.png, 
> image-2019-02-12-16-19-25-170.png, image-2019-02-12-16-21-13-126.png, 
> image-2019-02-12-16-23-38-399.png, image-2019-02-13-11-43-24-128.png, 
> image-2019-02-13-11-43-28-873.png, image-2019-02-13-11-44-18-736.png, 
> image-2019-02-13-11-45-21-459.png
>
>


[jira] [Updated] (KAFKA-7794) kafka.tools.GetOffsetShell does not return the offset in some cases

2019-02-12 Thread Kartik (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kartik updated KAFKA-7794:
--
Attachment: image-2019-02-13-11-45-21-459.png

> kafka.tools.GetOffsetShell does not return the offset in some cases
> ---
>
> Key: KAFKA-7794
> URL: https://issues.apache.org/jira/browse/KAFKA-7794
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: Daniele Ascione
>Assignee: Kartik
>Priority: Critical
>  Labels: Kafka, ShellCommands, kafka-0.10, offset, shell, 
> shell-script, shellscript, tools, usability
> Attachments: image-2019-02-11-20-51-07-805.png, 
> image-2019-02-11-20-56-13-362.png, image-2019-02-11-20-57-03-579.png, 
> image-2019-02-12-16-19-25-170.png, image-2019-02-12-16-21-13-126.png, 
> image-2019-02-12-16-23-38-399.png, image-2019-02-13-11-43-24-128.png, 
> image-2019-02-13-11-43-28-873.png, image-2019-02-13-11-44-18-736.png, 
> image-2019-02-13-11-45-21-459.png
>
>


[jira] [Updated] (KAFKA-7794) kafka.tools.GetOffsetShell does not return the offset in some cases

2019-02-12 Thread Kartik (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kartik updated KAFKA-7794:
--
Attachment: image-2019-02-13-11-43-24-128.png

> kafka.tools.GetOffsetShell does not return the offset in some cases
> ---
>
> Key: KAFKA-7794
> URL: https://issues.apache.org/jira/browse/KAFKA-7794
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: Daniele Ascione
>Assignee: Kartik
>Priority: Critical
>  Labels: Kafka, ShellCommands, kafka-0.10, offset, shell, 
> shell-script, shellscript, tools, usability
> Attachments: image-2019-02-11-20-51-07-805.png, 
> image-2019-02-11-20-56-13-362.png, image-2019-02-11-20-57-03-579.png, 
> image-2019-02-12-16-19-25-170.png, image-2019-02-12-16-21-13-126.png, 
> image-2019-02-12-16-23-38-399.png, image-2019-02-13-11-43-24-128.png, 
> image-2019-02-13-11-43-28-873.png
>
>


[jira] [Updated] (KAFKA-7794) kafka.tools.GetOffsetShell does not return the offset in some cases

2019-02-12 Thread Kartik (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kartik updated KAFKA-7794:
--
Attachment: image-2019-02-13-11-44-18-736.png

> kafka.tools.GetOffsetShell does not return the offset in some cases
> ---
>
> Key: KAFKA-7794
> URL: https://issues.apache.org/jira/browse/KAFKA-7794
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: Daniele Ascione
>Assignee: Kartik
>Priority: Critical
>  Labels: Kafka, ShellCommands, kafka-0.10, offset, shell, 
> shell-script, shellscript, tools, usability
> Attachments: image-2019-02-11-20-51-07-805.png, 
> image-2019-02-11-20-56-13-362.png, image-2019-02-11-20-57-03-579.png, 
> image-2019-02-12-16-19-25-170.png, image-2019-02-12-16-21-13-126.png, 
> image-2019-02-12-16-23-38-399.png, image-2019-02-13-11-43-24-128.png, 
> image-2019-02-13-11-43-28-873.png, image-2019-02-13-11-44-18-736.png
>
>
> For some input for the timestamps (different from -1 or -2) the GetOffset is 
> not able to retrieve the offset.
> For example, if _x_ is the timestamp in that "not working range", and you 
> execute:
> {code:java}
> bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 
> $KAFKA_ADDRESS --topic $MY_TOPIC --time x
> {code}
> The output is:
> {code:java}
> MY_TOPIC:8:
> MY_TOPIC:2:
> MY_TOPIC:5:
> MY_TOPIC:4:
> MY_TOPIC:7:
> MY_TOPIC:1:
> MY_TOPIC:9:{code}
> whereas an integer offset is expected after the last ":".
> 
> Steps to reproduce it:
>  # Consume all the messages from the beginning and print the timestamp:
> {code:java}
> bin/kafka-simple-consumer-shell.sh --no-wait-at-logend --broker-list 
> $KAFKA_ADDRESS --topic $MY_TOPIC --property print.timestamp=true  > 
> messages{code}
>  # Sort the messages by timestamp and get some of the oldest messages:
> {code:java}
>  awk -F "CreateTime:" '{ print $2}' messages | sort -n > msg_sorted{code}
>  # Take, for example, the timestamp of the 10th-oldest message, and observe 
> that GetOffsetShell fails to print an offset:
> {code:java}
> timestamp="$(sed '10q;d' msg_sorted | cut -f1)"
> bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 
> $KAFKA_ADDRESS --topic $MY_TOPIC --time $timestamp
> # The output should be something like:
> # MY_TOPIC:1:
> # MY_TOPIC:2:
> (repeated for every partition){code}
>  # Verify that the message with that timestamp is still in Kafka:
> {code:java}
> bin/kafka-simple-consumer-shell.sh --no-wait-at-logend --broker-list 
> $KAFKA_ADDRESS --topic $MY_TOPIC --property print.timestamp=true | grep 
> "CreateTime:$timestamp" {code}
>  
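The symptom described above can also be detected mechanically by scanning the GetOffsetShell reply for empty offset fields. A small sketch; the `out` variable here is synthetic sample output mimicking the broken reply, not taken from a real run:

```shell
# Count partitions whose offset field came back empty.
# Each GetOffsetShell output line is "<topic>:<partition>:<offset>";
# in the buggy case the third colon-separated field is missing.
out='MY_TOPIC:8:
MY_TOPIC:2:1042
MY_TOPIC:5:'

empty="$(printf '%s\n' "$out" | awk -F: '$3 == "" { n++ } END { print n+0 }')"
echo "partitions with no offset: $empty"   # -> 2
```

A non-zero count on a timestamp known to exist in the topic (per the `grep "CreateTime:$timestamp"` check above) indicates the bug.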





[jira] [Updated] (KAFKA-7794) kafka.tools.GetOffsetShell does not return the offset in some cases

2019-02-16 Thread Kartik (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kartik updated KAFKA-7794:
--
Attachment: image-2019-02-16-22-24-11-799.png

> kafka.tools.GetOffsetShell does not return the offset in some cases
> ---
>
> Key: KAFKA-7794
> URL: https://issues.apache.org/jira/browse/KAFKA-7794
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.10.2.2
>Reporter: Daniele Ascione
>Assignee: Kartik
>Priority: Critical
>  Labels: Kafka, ShellCommands, kafka-0.10, offset, shell, 
> shell-script, shellscript, tools, usability
> Attachments: image-2019-02-11-20-51-07-805.png, 
> image-2019-02-11-20-56-13-362.png, image-2019-02-11-20-57-03-579.png, 
> image-2019-02-12-16-19-25-170.png, image-2019-02-12-16-21-13-126.png, 
> image-2019-02-12-16-23-38-399.png, image-2019-02-13-11-43-24-128.png, 
> image-2019-02-13-11-43-28-873.png, image-2019-02-13-11-44-18-736.png, 
> image-2019-02-13-11-45-21-459.png, image-2019-02-16-22-24-11-799.png


