[jira] [Updated] (KAFKA-15067) kafka SSL support with different ssl providers

2023-06-07 Thread Aldan Brito (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldan Brito updated KAFKA-15067:

Summary: kafka SSL support with different ssl providers  (was: kafka SSL 
support with differnt ssl providers)

> kafka SSL support with different ssl providers
> --
>
> Key: KAFKA-15067
> URL: https://issues.apache.org/jira/browse/KAFKA-15067
> Project: Kafka
>  Issue Type: Test
>  Components: security
>Reporter: Aldan Brito
>Priority: Major
>
> Kafka SSL support with different SSL providers.
> Configuring different SSL providers, e.g. the Netty SSL provider.
> There is no documentation and no example tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15067) kafka SSL support with differnt ssl providers

2023-06-07 Thread Aldan Brito (Jira)
Aldan Brito created KAFKA-15067:
---

 Summary: kafka SSL support with differnt ssl providers
 Key: KAFKA-15067
 URL: https://issues.apache.org/jira/browse/KAFKA-15067
 Project: Kafka
  Issue Type: Test
  Components: security
Reporter: Aldan Brito


Kafka SSL support with different SSL providers.

Configuring different SSL providers, e.g. the Netty SSL provider.

There is no documentation and no example tests.
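For reference, a broker-side sketch of what such configuration might look like, using Kafka's standard `ssl.provider` and `ssl.engine.factory.class` (KIP-519) settings; the provider name and the factory class below are illustrative assumptions, not tested values:

```properties
# server.properties sketch: selecting a non-default SSL implementation.
# ssl.provider expects the name of a registered JCA security provider;
# ssl.engine.factory.class (KIP-519, Kafka 2.6+) plugs in a custom SSLEngine
# factory. The provider name and factory class below are illustrative only.
listeners=SSL://broker1.example.com:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/etc/kafka/ssl/kafka.server.keystore.jks
ssl.provider=Conscrypt
# Or, for a fully custom engine (e.g. one backed by Netty's SslContext):
#ssl.engine.factory.class=com.example.kafka.NettySslEngineFactory
```

Documenting and testing combinations like these is presumably what this issue asks for.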





[jira] [Commented] (KAFKA-13953) kafka Console consumer fails with CorruptRecordException

2022-06-03 Thread Aldan Brito (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17545837#comment-17545837
 ] 

Aldan Brito commented on KAFKA-13953:
-

Hi [~junrao],

There is no evidence of data corruption in the Kafka broker logs.

Is this expected?

Is deleting the log segment the only way to recover the system? That would 
mean data loss, right?
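For context, the exception comes from a basic sanity check on the serialized record size. A minimal illustrative sketch of that check (this is not Kafka's actual parser; the constant 14 is taken from the error message):

```python
import struct

# Minimum overhead, in bytes, that any serialized record must occupy;
# the value 14 is taken from the error message above.
MIN_RECORD_OVERHEAD = 14

class CorruptRecordError(Exception):
    """Raised when a record's size field fails the sanity check."""

def check_record_size(size_field: bytes) -> int:
    """Read a 4-byte big-endian record size and reject sizes below the overhead.

    A size of 0 (e.g. from a zeroed-out region of a log segment) triggers
    exactly the kind of error reported in this issue.
    """
    (size,) = struct.unpack(">i", size_field)
    if size < MIN_RECORD_OVERHEAD:
        raise CorruptRecordError(
            f"Record size {size} is less than the minimum record overhead "
            f"({MIN_RECORD_OVERHEAD})"
        )
    return size
```

A zero size typically means the consumer hit a region of the segment file that was never written (or was zero-filled), rather than a corrupted record body.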

> kafka Console consumer fails with CorruptRecordException 
> -
>
> Key: KAFKA-13953
> URL: https://issues.apache.org/jira/browse/KAFKA-13953
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, controller, core
>Affects Versions: 2.7.0
>Reporter: Aldan Brito
>Priority: Blocker
>
> Kafka consumer fails with corrupt record exception. 
> {code:java}
> opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server *.*.*.*: 
> --topic BQR-PULL-DEFAULT --from-beginning > 
> /opt/nokia/kafka-zookeeper-clustering/kafka/topic-data/tmpsdh/dumptest
> [2022-05-15 18:34:15,146] ERROR Error processing message, terminating 
> consumer process:  (kafka.tools.ConsoleConsumer$)
> org.apache.kafka.common.KafkaException: Received exception when fetching the 
> next record from BQR-PULL-DEFAULT-30. If needed, please seek past the record 
> to continue consumption.
>         at 
> org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.fetchRecords(Fetcher.java:1577)
>         at 
> org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.access$1700(Fetcher.java:1432)
>         at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:684)
>         at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:635)
>         at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1276)
>         at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1237)
>         at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
>         at 
> kafka.tools.ConsoleConsumer$ConsumerWrapper.receive(ConsoleConsumer.scala:438)
>         at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:104)
>         at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:78)
>         at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:55)
>         at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> Caused by: org.apache.kafka.common.errors.CorruptRecordException: Record size 
> 0 is less than the minimum record overhead (14)
> Processed a total of 15765197 messages {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (KAFKA-13953) kafka Console consumer fails with CorruptRecordException

2022-06-01 Thread Aldan Brito (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aldan Brito updated KAFKA-13953:

Description: 
Kafka consumer fails with corrupt record exception. 
{code:java}
opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server *.*.*.*: 
--topic BQR-PULL-DEFAULT --from-beginning > 
/opt/nokia/kafka-zookeeper-clustering/kafka/topic-data/tmpsdh/dumptest
[2022-05-15 18:34:15,146] ERROR Error processing message, terminating 
consumer process:  (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.KafkaException: Received exception when fetching the 
next record from BQR-PULL-DEFAULT-30. If needed, please seek past the record to 
continue consumption.
        at 
org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.fetchRecords(Fetcher.java:1577)
        at 
org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.access$1700(Fetcher.java:1432)
        at 
org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:684)
        at 
org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:635)
        at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1276)
        at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1237)
        at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
        at 
kafka.tools.ConsoleConsumer$ConsumerWrapper.receive(ConsoleConsumer.scala:438)
        at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:104)
        at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:78)
        at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:55)
        at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Caused by: org.apache.kafka.common.errors.CorruptRecordException: Record size 0 
is less than the minimum record overhead (14)
Processed a total of 15765197 messages {code}

  was:
Kafka consumer fails with corrupt record exception. 
{code:java}
opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.7.93:28104 
--topic BQR-PULL-DEFAULT --from-beginning > 
/opt/nokia/kafka-zookeeper-clustering/kafka/topic-data/tmpsdh/dumptest
[2022-05-15 18:34:15,146] ERROR Error processing message, terminating 
consumer process:  (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.KafkaException: Received exception when fetching the 
next record from BQR-PULL-DEFAULT-30. If needed, please seek past the record to 
continue consumption.
        at 
org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.fetchRecords(Fetcher.java:1577)
        at 
org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.access$1700(Fetcher.java:1432)
        at 
org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:684)
        at 
org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:635)
        at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1276)
        at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1237)
        at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
        at 
kafka.tools.ConsoleConsumer$ConsumerWrapper.receive(ConsoleConsumer.scala:438)
        at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:104)
        at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:78)
        at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:55)
        at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Caused by: org.apache.kafka.common.errors.CorruptRecordException: Record size 0 
is less than the minimum record overhead (14)
Processed a total of 15765197 messages {code}


> kafka Console consumer fails with CorruptRecordException 
> -
>
> Key: KAFKA-13953
> URL: https://issues.apache.org/jira/browse/KAFKA-13953
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, controller, core
>Affects Versions: 2.7.0
>Reporter: Aldan Brito
>Priority: Blocker
>
> Kafka consumer fails with corrupt record exception. 
> {code:java}
> opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server *.*.*.*: 
> --topic BQR-PULL-DEFAULT --from-beginning > 
> /opt/nokia/kafka-zookeeper-clustering/kafka/topic-data/tmpsdh/dumptest
> [2022-05-15 18:34:15,146] ERROR Error processing message, terminating 
> consumer process:  (kafka.tools.ConsoleConsumer$)
> org.apache.kafka.common.KafkaException: Received exception when fetching the 
> next record from BQR-PULL-DEFAULT-30. If needed, please seek past the record 
> to continue consumption.
>         at 
> org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.fetchRecords(Fetcher.java:1577)

[jira] [Created] (KAFKA-13953) kafka Console consumer fails with CorruptRecordException

2022-06-01 Thread Aldan Brito (Jira)
Aldan Brito created KAFKA-13953:
---

 Summary: kafka Console consumer fails with CorruptRecordException 
 Key: KAFKA-13953
 URL: https://issues.apache.org/jira/browse/KAFKA-13953
 Project: Kafka
  Issue Type: Bug
  Components: consumer, controller, core
Affects Versions: 2.7.0
Reporter: Aldan Brito


Kafka consumer fails with corrupt record exception. 
{code:java}
opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.7.93:28104 
--topic BQR-PULL-DEFAULT --from-beginning > 
/opt/nokia/kafka-zookeeper-clustering/kafka/topic-data/tmpsdh/dumptest
[2022-05-15 18:34:15,146] ERROR Error processing message, terminating 
consumer process:  (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.KafkaException: Received exception when fetching the 
next record from BQR-PULL-DEFAULT-30. If needed, please seek past the record to 
continue consumption.
        at 
org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.fetchRecords(Fetcher.java:1577)
        at 
org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.access$1700(Fetcher.java:1432)
        at 
org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:684)
        at 
org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:635)
        at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1276)
        at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1237)
        at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
        at 
kafka.tools.ConsoleConsumer$ConsumerWrapper.receive(ConsoleConsumer.scala:438)
        at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:104)
        at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:78)
        at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:55)
        at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Caused by: org.apache.kafka.common.errors.CorruptRecordException: Record size 0 
is less than the minimum record overhead (14)
Processed a total of 15765197 messages {code}





[jira] [Commented] (KAFKA-13953) kafka Console consumer fails with CorruptRecordException

2022-06-01 Thread Aldan Brito (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17545271#comment-17545271
 ] 

Aldan Brito commented on KAFKA-13953:
-

Hi [~junrao], hi [~ijuma],

Could you please take a look at this issue?

> kafka Console consumer fails with CorruptRecordException 
> -
>
> Key: KAFKA-13953
> URL: https://issues.apache.org/jira/browse/KAFKA-13953
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, controller, core
>Affects Versions: 2.7.0
>Reporter: Aldan Brito
>Priority: Blocker
>
> Kafka consumer fails with corrupt record exception. 
> {code:java}
> opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.7.93:28104 
> --topic BQR-PULL-DEFAULT --from-beginning > 
> /opt/nokia/kafka-zookeeper-clustering/kafka/topic-data/tmpsdh/dumptest
> [2022-05-15 18:34:15,146] ERROR Error processing message, terminating 
> consumer process:  (kafka.tools.ConsoleConsumer$)
> org.apache.kafka.common.KafkaException: Received exception when fetching the 
> next record from BQR-PULL-DEFAULT-30. If needed, please seek past the record 
> to continue consumption.
>         at 
> org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.fetchRecords(Fetcher.java:1577)
>         at 
> org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.access$1700(Fetcher.java:1432)
>         at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:684)
>         at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:635)
>         at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1276)
>         at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1237)
>         at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
>         at 
> kafka.tools.ConsoleConsumer$ConsumerWrapper.receive(ConsoleConsumer.scala:438)
>         at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:104)
>         at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:78)
>         at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:55)
>         at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> Caused by: org.apache.kafka.common.errors.CorruptRecordException: Record size 
> 0 is less than the minimum record overhead (14)
> Processed a total of 15765197 messages {code}





[jira] [Commented] (KAFKA-7376) After Kafka upgrade to v2.0.0 , Controller unable to communicate with brokers on SASL_SSL

2019-07-30 Thread Aldan Brito (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895844#comment-16895844
 ] 

Aldan Brito commented on KAFKA-7376:


Hi [~ijuma]

We are facing a similar issue with SSL hostname verification.

Scenario:
We have two Kafka listeners, internal and external.
The internal listener is mapped to the FQDN of the broker,
   e.g. internal://FQDN:9092
The external listener is mapped to a user-defined name,
   e.g. external://testkafka:8109

While generating the SSL certificates, we used the FQDN of the broker as the 
CN, and both listener names are included in the SAN entries.

When a client performs a handshake with the external listener, i.e. the 
producer's broker-list config is set to external://testkafka:8109, we get the 
exception below.
{code:java}
Caused by: java.security.cert.CertificateException: No name matching testkafka 
found
at sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:231)
at sun.security.util.HostnameChecker.match(HostnameChecker.java:96)
at 
sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455)
at 
sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436)
at 
sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252)
at 
sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136)
at 
sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1626)
{code}
If we disable ssl.endpoint.identification.algorithm for the external listener, 
the handshake goes through fine.

If both the internal and external listeners use the FQDN, and the certificates 
are generated with the FQDN as the CN, e.g.:
  internal://FQDN:9092
  external://FQDN:8109
then a client request with the producer's broker-list set to 
external://FQDN:8109 works fine.

It looks like the broker-list DNS name is verified against the CN only, and 
the SAN entries are not considered.
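For anyone trying to reproduce this, a self-signed certificate with SAN entries like ours can be generated and inspected with openssl (a sketch; file paths and validity period are illustrative, and -addext/-ext require OpenSSL 1.1.1+):

```shell
# Generate a self-signed cert whose CN is the broker FQDN and whose SAN
# carries both listener names, mirroring the setup described above.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/kafka-test.key -out /tmp/kafka-test.crt \
  -subj "/CN=kf-mykaf-0.kf-mykaf-headless.default.svc.cluster.local" \
  -addext "subjectAltName=DNS:kf-mykaf-0.kf-mykaf-headless.default.svc.cluster.local,DNS:testkafka"

# Print the SAN entries of the generated certificate.
openssl x509 -in /tmp/kafka-test.crt -noout -ext subjectAltName
```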

decrypted server keystore snapshot:
{code:java}
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3

Extensions:

#1: ObjectId: 2.5.29.19 Criticality=false
BasicConstraints:[
CA:true
PathLen:2147483647
]

#2: ObjectId: 2.5.29.37 Criticality=false
ExtendedKeyUsages [
serverAuth
clientAuth
]

#3: ObjectId: 2.5.29.15 Criticality=false
KeyUsage [
DigitalSignature
Non_repudiation
Key_Encipherment
]

#4: ObjectId: 2.5.29.17 Criticality=false
SubjectAlternativeName [
DNSName: kf-mykaf-0.kf-mykaf-headless.default.svc.cluster.local
DNSName: testkafka
]

{code}
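For what it's worth, standard hostname verification (RFC 6125 / RFC 2818) should consult the dNSName SAN entries and ignore the CN whenever any SANs are present, so a SAN containing "testkafka" ought to match. A minimal sketch of that matching rule (illustrative only, not the JDK's actual HostnameChecker):

```python
def matches_certificate(hostname: str, san_dns_names: list[str],
                        common_name: str) -> bool:
    """Return True if hostname is acceptable for a cert with these names.

    Per RFC 6125 (and HTTPS, RFC 2818): when any dNSName SAN entries are
    present, the CN must be ignored and only the SANs consulted.
    """
    if san_dns_names:
        return any(_dns_match(p, hostname) for p in san_dns_names)
    return _dns_match(common_name, hostname)

def _dns_match(pattern: str, hostname: str) -> bool:
    # Case-insensitive exact match, plus a single left-most wildcard label.
    pattern, hostname = pattern.lower(), hostname.lower()
    if pattern.startswith("*."):
        return "." in hostname and hostname.split(".", 1)[1] == pattern[2:]
    return pattern == hostname
```

Under this rule, "testkafka" matches the SAN list shown in the keystore snapshot above, so a verifier that rejects it appears to be falling back to the CN incorrectly.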

> After Kafka upgrade to v2.0.0 , Controller unable to communicate with brokers 
> on SASL_SSL
> -
>
> Key: KAFKA-7376
> URL: https://issues.apache.org/jira/browse/KAFKA-7376
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 2.0.0
>Reporter: Sridhar
>Priority: Major
>
> Hi,
> We upgraded our Kafka cluster (3 nodes running on AWS) to version 2.0.0 and 
> enabled security with SASL_SSL (PLAIN) for inter-broker and client 
> connections.
> But there are a lot of errors in the controller log for the inter-broker 
> communication. I have followed exactly the same steps as mentioned in the 
> documentation and set all Kafka brokers' FQDN hostnames in the SAN 
> (SubjectAlternativeName) of my self-signed server certificate.
> [http://kafka.apache.org/documentation.html#security]
>  
> openssl s_client -connect kafka-3:9093
>  CONNECTED(0003)
>  depth=1
> Someone else also seems to be facing a similar problem:
> [https://github.com/confluentinc/common/issues/158]
>  
>  
> {noformat}
> Server Configuration : 
> listeners=PLAINTEXT://kafka-3:9092,SASL_SSL://kafka-3:9093
> advertised.listeners=PLAINTEXT://kafka-3:9092,SASL_SSL://kafka-3:9093
> #Security
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> allow.everyone.if.no.acl.found=false
> security.inter.broker.protocol=SASL_SSL
> sasl.mechanism.inter.broker.protocol=PLAIN
> sasl.enabled.mechanisms=PLAIN
> super.users=User:admin
> ssl.client.auth=required
> ssl.endpoint.identification.algorithm=
> ssl.truststore.location=/etc/kafka/ssl/kafka.server.truststore.jks
> ssl.truststore.password=
> ssl.keystore.location=/etc/kafka/ssl/kafka.server.keystore.jks
> ssl.keystore.password=
> ssl.key.password=
> #Zookeeper
> zookeeper.connect=zk-1:2181,zk-2:2181,zk-3:2181
> zookeeper.connection.timeout.ms=6000
> {noformat}
>  
>  
> {code:java}
>  
> [2018-09-04 12:02:57,289] WARN [RequestSendThread controllerId=2] Controller 
> 2's connection to broker kafka-3:9093 (id: 3 rack: eu-central-1c) was 
> unsuccessful (kafka.controller.RequestSendThread)
> org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake 
> failed
> Caused by: