[jira] [Updated] (KAFKA-10641) ACL Command hangs with SSL as not existing with proper error code

2020-10-24 Thread Senthilnathan Muthusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Senthilnathan Muthusamy updated KAFKA-10641:

Summary: ACL Command hangs with SSL as not existing with proper error code  
(was: ACL Command hands with SSL as not existing with proper error code)

> ACL Command hangs with SSL as not existing with proper error code
> ---
>
> Key: KAFKA-10641
> URL: https://issues.apache.org/jira/browse/KAFKA-10641
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 
> 2.4.0, 2.3.1, 2.5.0, 2.4.1, 2.6.0, 2.5.1
>Reporter: Senthilnathan Muthusamy
>Assignee: Senthilnathan Muthusamy
>Priority: Minor
> Fix For: 2.7.0
>
>
> When using the ACL command in SSL mode, the process does not terminate after 
> a successful ACL operation.
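
The symptom above is consistent with a CLI whose Admin client is never closed: the client's non-daemon network thread keeps the JVM alive even after the command's work is done. The sketch below is plain Java with a hypothetical AdminClientLike stand-in (not Kafka's actual AdminClient); it shows the pattern and why closing the client in a finally block lets the process exit.

```java
// Sketch: a CLI backed by a non-daemon I/O thread hangs at exit
// unless the client is closed. AdminClientLike is a hypothetical
// stand-in for a Kafka-style admin client.
class AdminClientLike implements AutoCloseable {
    private final Thread ioThread;
    private volatile boolean running = true;

    AdminClientLike() {
        ioThread = new Thread(() -> {
            while (running) {
                try { Thread.sleep(10); } catch (InterruptedException e) { return; }
            }
        }, "admin-io");
        ioThread.setDaemon(false); // a non-daemon thread keeps the JVM alive
        ioThread.start();
    }

    @Override
    public void close() {
        running = false;
        try { ioThread.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    boolean ioThreadAlive() { return ioThread.isAlive(); }
}

public class AclCommandSketch {
    // Returns whether the I/O thread is still alive after the command runs.
    public static boolean runWithClose() {
        AdminClientLike client = new AdminClientLike();
        try {
            // ... perform the ACL add/list/remove operation here ...
        } finally {
            client.close(); // without this, the JVM never exits
        }
        return client.ioThreadAlive(); // false once closed
    }
}
```

A tool that never reaches its normal shutdown path also cannot report a proper exit code, which would explain the second half of the ticket title.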



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10642) Expose the real stack trace if any exception occurred during SSL Client Trust Verification in extension

2020-10-24 Thread Senthilnathan Muthusamy (Jira)
Senthilnathan Muthusamy created KAFKA-10642:
---

 Summary: Expose the real stack trace if any exception occurred 
during SSL Client Trust Verification in extension
 Key: KAFKA-10642
 URL: https://issues.apache.org/jira/browse/KAFKA-10642
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.5.1, 2.6.0, 2.4.1, 2.5.0, 2.3.1, 2.4.0, 2.3.0
Reporter: Senthilnathan Muthusamy
Assignee: Senthilnathan Muthusamy
 Fix For: 2.7.0


If an exception occurs in a custom implementation of client trust verification 
(i.e. one plugged in via security.provider), the inner exception is suppressed 
and never written to the log file.

 

Below is an example stack trace that does not show the actual exception from 
the extension/custom implementation.

 

[2020-05-13 14:30:26,892] ERROR [KafkaServer id=423810470] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: org.apache.kafka.common.config.ConfigException: Invalid value java.lang.RuntimeException: Delegated task threw Exception/Error for configuration A client SSLEngine created with the provided settings can't connect to a server SSLEngine created with those settings.
	at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:71)
	at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
	at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
	at kafka.network.Processor.<init>(SocketServer.scala:753)
	at kafka.network.SocketServer.newProcessor(SocketServer.scala:394)
	at kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:279)
	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
	at kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:278)
	at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:241)
	at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:238)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:238)
	at kafka.network.SocketServer.startup(SocketServer.scala:121)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:265)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
	at kafka.Kafka$.main(Kafka.scala:84)
	at kafka.Kafka.main(Kafka.scala)
Caused by: org.apache.kafka.common.config.ConfigException: Invalid value java.lang.RuntimeException: Delegated task threw Exception/Error for configuration A client SSLEngine created with the provided settings can't connect to a server SSLEngine created with those settings.
	at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:100)
	at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:69)
	... 18 more
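
The trace above stops at SslFactory.configure with no "Caused by:" frame from the security.provider extension itself. That is the classic symptom of wrapping an exception without passing the original as the cause. A minimal self-contained sketch (plain Java, not Kafka's actual code; method names are illustrative):

```java
// Demonstrates how an inner exception's stack trace is lost or preserved
// depending on whether it is attached as the cause of the wrapper.
public class CauseChaining {
    static RuntimeException verifyTrustSwallowing() {
        try {
            throw new IllegalStateException("real failure inside the security.provider extension");
        } catch (IllegalStateException inner) {
            // Bad: the inner exception is dropped; the log shows only the generic message.
            return new RuntimeException("Delegated task threw Exception/Error");
        }
    }

    static RuntimeException verifyTrustChaining() {
        try {
            throw new IllegalStateException("real failure inside the security.provider extension");
        } catch (IllegalStateException inner) {
            // Good: the inner exception survives as the cause and is printed
            // as a "Caused by:" section when the stack trace is logged.
            return new RuntimeException("Delegated task threw Exception/Error", inner);
        }
    }
}
```

With chaining in place, printStackTrace() (or the broker's logger) would show the extension's real failure as an additional "Caused by:" block.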





[jira] [Created] (KAFKA-10641) ACL Command hands with SSL as not existing with proper error code

2020-10-24 Thread Senthilnathan Muthusamy (Jira)
Senthilnathan Muthusamy created KAFKA-10641:
---

 Summary: ACL Command hands with SSL as not existing with proper 
error code
 Key: KAFKA-10641
 URL: https://issues.apache.org/jira/browse/KAFKA-10641
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.5.1, 2.6.0, 2.4.1, 2.5.0, 2.3.1, 2.4.0, 2.2.2, 2.2.1, 
2.3.0, 2.1.1, 2.2.0, 2.1.0, 2.0.1, 2.0.0
Reporter: Senthilnathan Muthusamy
Assignee: Senthilnathan Muthusamy
 Fix For: 2.7.0


When using the ACL command in SSL mode, the process does not terminate after a 
successful ACL operation.





[jira] [Commented] (KAFKA-7061) Enhanced log compaction

2020-02-12 Thread Senthilnathan Muthusamy (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17035829#comment-17035829
 ] 

Senthilnathan Muthusamy commented on KAFKA-7061:


Opened PR - rebased with the latest apache/trunk

[https://github.com/apache/kafka/pull/8103]

> Enhanced log compaction
> ---
>
> Key: KAFKA-7061
> URL: https://issues.apache.org/jira/browse/KAFKA-7061
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.5.0
>Reporter: Luis Cabral
>Assignee: Senthilnathan Muthusamy
>Priority: Major
>  Labels: kip
>
> Enhance log compaction to support more than just offset comparison, so the 
> insertion order isn't dictating which records to keep.
> Default behavior is kept as it was, with the enhanced approach having to be 
> purposely activated.
>  The enhanced compaction is done either via the record timestamp, by setting 
> the new configuration to "timestamp", or via the record headers by setting 
> this configuration to anything other than the default "offset" or the 
> reserved "timestamp".
> See 
> [KIP-280|https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction]
>  for more details.
> +From Guozhang:+ We should emphasize on the WIKI that the newly introduced 
> config yields to the existing "log.cleanup.policy", i.e. if the latter's 
> value is `delete` not `compact`, then the previous config would be ignored.
> +From Jun Rao:+ With the timestamp/header strategy, the behavior of the 
> application may need to change. In particular, the application can't just 
> blindly take the record with a larger offset and assume that it's the value 
> to keep. It needs to check the timestamp or the header now. So, it would be 
> useful to at least document this. 
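
The strategy selection described above can be sketched as a comparator over duplicate records sharing a key. This is a simplified illustration of the KIP-280 idea, not Kafka's actual cleaner code; the class, record, and field names are hypothetical:

```java
import java.util.Optional;

// Sketch of KIP-280 compaction strategies: the record to keep is chosen by
// a configurable strategy instead of always by offset.
//  - "offset": keep the record with the larger offset (current behavior)
//  - "timestamp": keep the larger record timestamp, offset breaks ties
//  - anything else: the value names a record header whose (long) value decides
public class CompactionStrategySketch {
    record CandidateRecord(long offset, long timestamp, Optional<Long> headerVersion) {}

    static CandidateRecord keep(String strategy, CandidateRecord a, CandidateRecord b) {
        switch (strategy) {
            case "offset":
                return a.offset() >= b.offset() ? a : b;
            case "timestamp":
                if (a.timestamp() != b.timestamp())
                    return a.timestamp() > b.timestamp() ? a : b;
                return a.offset() >= b.offset() ? a : b; // deterministic tie-break
            default:
                // strategy names a record header; a record missing the header loses
                long va = a.headerVersion().orElse(Long.MIN_VALUE);
                long vb = b.headerVersion().orElse(Long.MIN_VALUE);
                if (va != vb) return va > vb ? a : b;
                return a.offset() >= b.offset() ? a : b;
        }
    }
}
```

Per Guozhang's note above, such a comparator would only take effect at all when log.cleanup.policy includes compact.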





[jira] [Commented] (KAFKA-7061) Enhanced log compaction

2020-02-10 Thread Senthilnathan Muthusamy (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17034004#comment-17034004
 ] 

Senthilnathan Muthusamy commented on KAFKA-7061:


[~guozhang] [~junrao] [~mjsax] Looking to get this moving; it has been waiting 
on code review for a long time. Can you please help move it along? The 12th is 
the code freeze date, right? Do you think we can't make the 2.5 release?

> Enhanced log compaction
> ---
>
> Key: KAFKA-7061
> URL: https://issues.apache.org/jira/browse/KAFKA-7061
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.5.0
>Reporter: Luis Cabral
>Assignee: Senthilnathan Muthusamy
>Priority: Major
>  Labels: kip
>
> Enhance log compaction to support more than just offset comparison, so the 
> insertion order isn't dictating which records to keep.
> Default behavior is kept as it was, with the enhanced approach having to be 
> purposely activated.
>  The enhanced compaction is done either via the record timestamp, by setting 
> the new configuration to "timestamp", or via the record headers by setting 
> this configuration to anything other than the default "offset" or the 
> reserved "timestamp".
> See 
> [KIP-280|https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction]
>  for more details.
> +From Guozhang:+ We should emphasize on the WIKI that the newly introduced 
> config yields to the existing "log.cleanup.policy", i.e. if the latter's 
> value is `delete` not `compact`, then the previous config would be ignored.
> +From Jun Rao:+ With the timestamp/header strategy, the behavior of the 
> application may need to change. In particular, the application can't just 
> blindly take the record with a larger offset and assume that it's the value 
> to keep. It needs to check the timestamp or the header now. So, it would be 
> useful to at least document this. 





[jira] [Updated] (KAFKA-7061) Enhanced log compaction

2019-11-11 Thread Senthilnathan Muthusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Senthilnathan Muthusamy updated KAFKA-7061:
---
Description: 
Enhance log compaction to support more than just offset comparison, so the 
insertion order isn't dictating which records to keep.

Default behavior is kept as it was, with the enhanced approach having to be 
purposely activated.
 The enhanced compaction is done either via the record timestamp, by setting 
the new configuration to "timestamp", or via the record headers by setting this 
configuration to anything other than the default "offset" or the reserved 
"timestamp".

See 
[KIP-280|https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction]
 for more details.

+From Guozhang:+ We should emphasize on the WIKI that the newly introduced 
config yields to the existing "log.cleanup.policy", i.e. if the latter's value 
is `delete` not `compact`, then the previous config would be ignored.

+From Jun Rao:+ With the timestamp/header strategy, the behavior of the 
application may need to change. In particular, the application can't just 
blindly take the record with a larger offset and assume that it's the value 
to keep. It needs to check the timestamp or the header now. So, it would be 
useful to at least document this. 

  was:
Enhance log compaction to support more than just offset comparison, so the 
insertion order isn't dictating which records to keep.

Default behavior is kept as it was, with the enhanced approach having to be 
purposely activated.
 The enhanced compaction is done either via the record timestamp, by setting 
the new configuration to "timestamp", or via the record headers by setting this 
configuration to anything other than the default "offset" or the reserved 
"timestamp".

See 
[KIP-280|https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction]
 for more details.

+From Guozhang:+ We should emphasize on the WIKI that the newly introduced 
config yields to the existing "log.cleanup.policy", i.e. if the latter's value 
is `delete` not `compact`, then the previous config would be ignored.

 


> Enhanced log compaction
> ---
>
> Key: KAFKA-7061
> URL: https://issues.apache.org/jira/browse/KAFKA-7061
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.5.0
>Reporter: Luis Cabral
>Assignee: Senthilnathan Muthusamy
>Priority: Major
>  Labels: kip
>
> Enhance log compaction to support more than just offset comparison, so the 
> insertion order isn't dictating which records to keep.
> Default behavior is kept as it was, with the enhanced approach having to be 
> purposely activated.
>  The enhanced compaction is done either via the record timestamp, by setting 
> the new configuration to "timestamp", or via the record headers by setting 
> this configuration to anything other than the default "offset" or the 
> reserved "timestamp".
> See 
> [KIP-280|https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction]
>  for more details.
> +From Guozhang:+ We should emphasize on the WIKI that the newly introduced 
> config yields to the existing "log.cleanup.policy", i.e. if the latter's 
> value is `delete` not `compact`, then the previous config would be ignored.
> +From Jun Rao:+ With the timestamp/header strategy, the behavior of the 
> application may need to change. In particular, the application can't just 
> blindly take the record with a larger offset and assume that it's the value 
> to keep. It needs to check the timestamp or the header now. So, it would be 
> useful to at least document this. 





[jira] [Updated] (KAFKA-7061) Enhanced log compaction

2019-11-05 Thread Senthilnathan Muthusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Senthilnathan Muthusamy updated KAFKA-7061:
---
Description: 
Enhance log compaction to support more than just offset comparison, so the 
insertion order isn't dictating which records to keep.

Default behavior is kept as it was, with the enhanced approach having to be 
purposely activated.
 The enhanced compaction is done either via the record timestamp, by setting 
the new configuration to "timestamp", or via the record headers by setting this 
configuration to anything other than the default "offset" or the reserved 
"timestamp".

See 
[KIP-280|https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction]
 for more details.

+From Guozhang:+ We should emphasize on the WIKI that the newly introduced 
config yields to the existing "log.cleanup.policy", i.e. if the latter's value 
is `delete` not `compact`, then the previous config would be ignored.

 

  was:
Enhance log compaction to support more than just offset comparison, so the 
insertion order isn't dictating which records to keep.

Default behavior is kept as it was, with the enhanced approach having to be 
purposely activated.
 The enhanced compaction is done either via the record timestamp, by setting 
the new configuration to "timestamp", or via the record headers by setting this 
configuration to anything other than the default "offset" or the reserved 
"timestamp".

See 
[KIP-280|https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction]
 for more details.

+From Guozhang:+ We should emphasize that the newly introduced config yields to 
the existing "log.cleanup.policy", i.e. if the latter's value is `delete` not 
`compact`, then the previous config would be ignored.

 


> Enhanced log compaction
> ---
>
> Key: KAFKA-7061
> URL: https://issues.apache.org/jira/browse/KAFKA-7061
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.5.0
>Reporter: Luis Cabral
>Assignee: Senthilnathan Muthusamy
>Priority: Major
>  Labels: kip
>
> Enhance log compaction to support more than just offset comparison, so the 
> insertion order isn't dictating which records to keep.
> Default behavior is kept as it was, with the enhanced approach having to be 
> purposely activated.
>  The enhanced compaction is done either via the record timestamp, by setting 
> the new configuration to "timestamp", or via the record headers by setting 
> this configuration to anything other than the default "offset" or the 
> reserved "timestamp".
> See 
> [KIP-280|https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction]
>  for more details.
> +From Guozhang:+ We should emphasize on the WIKI that the newly introduced 
> config yields to the existing "log.cleanup.policy", i.e. if the latter's 
> value is `delete` not `compact`, then the previous config would be ignored.
>  





[jira] [Updated] (KAFKA-7061) Enhanced log compaction

2019-11-05 Thread Senthilnathan Muthusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Senthilnathan Muthusamy updated KAFKA-7061:
---
Description: 
Enhance log compaction to support more than just offset comparison, so the 
insertion order isn't dictating which records to keep.

Default behavior is kept as it was, with the enhanced approach having to be 
purposely activated.
 The enhanced compaction is done either via the record timestamp, by setting 
the new configuration to "timestamp", or via the record headers by setting this 
configuration to anything other than the default "offset" or the reserved 
"timestamp".

See 
[KIP-280|https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction]
 for more details.

+From Guozhang:+ We should emphasize that the newly introduced config yields to 
the existing "log.cleanup.policy", i.e. if the latter's value is `delete` not 
`compact`, then the previous config would be ignored.

 

  was:
Enhance log compaction to support more than just offset comparison, so the 
insertion order isn't dictating which records to keep.

Default behavior is kept as it was, with the enhanced approach having to be 
purposely activated.
The enhanced compaction is done either via the record timestamp, by setting 
the new configuration to "timestamp", or via the record headers by setting this 
configuration to anything other than the default "offset" or the reserved 
"timestamp".

See 
[KIP-280|https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction]
 for more details.


> Enhanced log compaction
> ---
>
> Key: KAFKA-7061
> URL: https://issues.apache.org/jira/browse/KAFKA-7061
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.5.0
>Reporter: Luis Cabral
>Assignee: Senthilnathan Muthusamy
>Priority: Major
>  Labels: kip
>
> Enhance log compaction to support more than just offset comparison, so the 
> insertion order isn't dictating which records to keep.
> Default behavior is kept as it was, with the enhanced approach having to be 
> purposely activated.
>  The enhanced compaction is done either via the record timestamp, by setting 
> the new configuration to "timestamp", or via the record headers by setting 
> this configuration to anything other than the default "offset" or the 
> reserved "timestamp".
> See 
> [KIP-280|https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction]
>  for more details.
> +From Guozhang:+ We should emphasize that the newly introduced config yields 
> to the existing "log.cleanup.policy", i.e. if the latter's value is `delete` 
> not `compact`, then the previous config would be ignored.
>  





[jira] [Updated] (KAFKA-7061) Enhanced log compaction

2019-11-04 Thread Senthilnathan Muthusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Senthilnathan Muthusamy updated KAFKA-7061:
---
Affects Version/s: (was: 2.4.0)
   2.5.0

> Enhanced log compaction
> ---
>
> Key: KAFKA-7061
> URL: https://issues.apache.org/jira/browse/KAFKA-7061
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.5.0
>Reporter: Luis Cabral
>Assignee: Senthilnathan Muthusamy
>Priority: Major
>  Labels: kip
>
> Enhance log compaction to support more than just offset comparison, so the 
> insertion order isn't dictating which records to keep.
> Default behavior is kept as it was, with the enhanced approach having to be 
> purposely activated.
> The enhanced compaction is done either via the record timestamp, by setting 
> the new configuration to "timestamp", or via the record headers by setting 
> this configuration to anything other than the default "offset" or the 
> reserved "timestamp".
> See 
> [KIP-280|https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction]
>  for more details.





[jira] [Assigned] (KAFKA-7061) Enhanced log compaction

2019-10-15 Thread Senthilnathan Muthusamy (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Senthilnathan Muthusamy reassigned KAFKA-7061:
--

Assignee: Senthilnathan Muthusamy  (was: Ning Liu)

> Enhanced log compaction
> ---
>
> Key: KAFKA-7061
> URL: https://issues.apache.org/jira/browse/KAFKA-7061
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.4.0
>Reporter: Luis Cabral
>Assignee: Senthilnathan Muthusamy
>Priority: Major
>  Labels: kip
>
> Enhance log compaction to support more than just offset comparison, so the 
> insertion order isn't dictating which records to keep.
> Default behavior is kept as it was, with the enhanced approach having to be 
> purposely activated.
> The enhanced compaction is done either via the record timestamp, by setting 
> the new configuration to "timestamp", or via the record headers by setting 
> this configuration to anything other than the default "offset" or the 
> reserved "timestamp".
> See 
> [KIP-280|https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction]
>  for more details.


