[jira] (KAFKA-3700) CRL support

2023-11-08 Thread Igor Shipenkov (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-3700 ]


Igor Shipenkov deleted comment on KAFKA-3700:
---

was (Author: JIRAUSER280700):
For people looking for OCSP support: see the Oracle Java documentation 
"[Security Developer’s Guide - OCSP Stapling Configuration Properties - Setting 
Up a Java Server to Use OCSP 
Stapling|https://docs.oracle.com/en/java/javase/11/security/java-secure-socket-extension-jsse-reference-guide.html#GUID-423716FB-DA34-4C73-B3A1-EB4CE120BB62]"
 to configure OCSP stapling at the JVM level.
Basically, it's just
{quote}
Online Certificate Status Protocol (OCSP) stapling is enabled on the server by 
setting the system property {{jdk.tls.server.enableStatusRequestExtension}} to 
{{true}}. (It is set to {{false}} by default.) 
{quote}

I can confirm that a broker with the additional command-line option
{code}
-Djdk.tls.server.enableStatusRequestExtension=true
{code}
runs just fine, and in a traffic dump I see proper OCSP requests and responses.

The link is for Java 11, but this system property has existed since Java 1.8.
Also, it's the end of 2023 now and people mostly run at least Kafka 3.1, but since 
it's a JVM property, I think it's independent of the Kafka version.

> CRL support
> ---
>
> Key: KAFKA-3700
> URL: https://issues.apache.org/jira/browse/KAFKA-3700
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.1
>Reporter: Vincent Bernat
>Priority: Major
>
> Hey!
> Currently, there is no way to specify a CRL to be checked when a client 
> presents its TLS certificate. Therefore, a revoked certificate is accepted. A 
> CRL can be provided as a URL in a certificate, but with a private 
> authority it is more common to have one as a separate file. An 
> `ssl.crl.location` setting would come in handy to specify a CRL.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-3700) CRL support

2023-11-07 Thread Igor Shipenkov (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-3700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17783882#comment-17783882
 ] 

Igor Shipenkov commented on KAFKA-3700:
---

For people looking for OCSP support: see the Oracle Java documentation 
"[Security Developer’s Guide - OCSP Stapling Configuration Properties - Setting 
Up a Java Server to Use OCSP 
Stapling|https://docs.oracle.com/en/java/javase/11/security/java-secure-socket-extension-jsse-reference-guide.html#GUID-423716FB-DA34-4C73-B3A1-EB4CE120BB62]"
 to configure OCSP stapling at the JVM level.
Basically, it's just
{quote}
Online Certificate Status Protocol (OCSP) stapling is enabled on the server by 
setting the system property {{jdk.tls.server.enableStatusRequestExtension}} to 
{{true}}. (It is set to {{false}} by default.) 
{quote}

I can confirm that a broker with the additional command-line option
{code}
-Djdk.tls.server.enableStatusRequestExtension=true
{code}
runs just fine, and in a traffic dump I see proper OCSP requests and responses.

The link is for Java 11, but this system property has existed since Java 1.8.
Also, it's the end of 2023 now and people mostly run at least Kafka 3.1, but since 
it's a JVM property, I think it's independent of the Kafka version.
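
As a minimal sketch (the KAFKA_OPTS route and the addresses are my assumptions, not 
something from this ticket), enabling and then checking stapling could look like:
{code:bash}
# kafka-server-start.sh forwards KAFKA_OPTS to the JVM, so this is equivalent
# to the -D command line option above.
export KAFKA_OPTS="-Djdk.tls.server.enableStatusRequestExtension=true"
./bin/kafka-server-start.sh config/server.properties

# Request a stapled OCSP response during the handshake; with stapling working,
# the output should contain an "OCSP response:" section instead of
# "OCSP response: no response sent".
openssl s_client -connect localhost:9092 -status </dev/null
{code}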

> CRL support
> ---
>
> Key: KAFKA-3700
> URL: https://issues.apache.org/jira/browse/KAFKA-3700
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.1
>Reporter: Vincent Bernat
>Priority: Major
>
> Hey!
> Currently, there is no way to specify a CRL to be checked when a client 
> presents its TLS certificate. Therefore, a revoked certificate is accepted. A 
> CRL can be provided as a URL in a certificate, but with a private 
> authority it is more common to have one as a separate file. An 
> `ssl.crl.location` setting would come in handy to specify a CRL.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (KAFKA-13474) Regression in dynamic update of broker certificate

2021-12-02 Thread Igor Shipenkov (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17452158#comment-17452158
 ] 

Igor Shipenkov edited comment on KAFKA-13474 at 12/2/21, 8:36 AM:
--

Tried to reproduce the problem on Kafka 2.8.1, and the problem is still there: I've 
updated the certificate and I can see that the listener uses the new certificate, but 
the broker still uses the old certificate for client connections, and when the 
certificate expires, this broker can't connect to others; for example, connections to 
the controller get errors like
{code:none}
INFO [broker-2-to-controller] Failed authentication with localhost/127.0.0.1 
(SSL handshake failed) (org.apache.kafka.common.network.Selector)

ERROR [broker-2-to-controller] Connection to node 1 (localhost/127.0.0.1:9092) 
failed authentication due to: SSL handshake failed 
(org.apache.kafka.clients.NetworkClient)

ERROR [broker-2-to-controller-send-thread]: Failed to send the following 
request due to authentication error: ClientRequest(expectResponse=true, 
callback=kafka.server.BrokerToControllerRequestThread$$Lambda$1289/0x0008017d8440@28eb9d3c,
 destination=1, correlationId=7078, clientId=2, createdTimeMs=1638414992079, 
requestBuilder=AlterIsrRequestData(brokerId=2, brokerEpoch=1025, 
topics=[TopicData(name='amadeus-pnr', partitions=[long list of partitions])]) 
failed due to authentication error with controller 
(kafka.server.BrokerToControllerRequestThread)
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLProtocolException: Unexpected handshake message: 
server_hello
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:129)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:307)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:263)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:254)
at 
java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:437)
at 
java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1074)
at 
java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1061)
at 
java.base/java.security.AccessController.doPrivileged(AccessController.java:689)
at 
java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1008)
at 
org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:430)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:514)
at 
org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:368)
at 
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:291)
at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:178)
at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:543)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:561)
at 
kafka.common.InterBrokerSendThread.pollOnce(InterBrokerSendThread.scala:74)
at 
kafka.server.BrokerToControllerRequestThread.doWork(BrokerToControllerChannelManager.scala:368)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
{code}
and a packet capture shows the old client certificate in the TLS handshake.
I will add 2.8.1 to the list of affected versions.
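
For reference, the dynamic certificate update itself is typically done with 
kafka-configs.sh (per-broker dynamic config from KIP-226); a rough sketch, assuming 
broker id 2, a listener named SSL, and an admin client config that can still 
authenticate:
{code:bash}
# Point broker 2 at the new keystore via a dynamic per-broker config update.
# Paths, passwords and the client-ssl.properties file are illustrative only.
./bin/kafka-configs.sh --bootstrap-server localhost:9092 \
    --command-config client-ssl.properties \
    --entity-type brokers --entity-name 2 --alter \
    --add-config 'listener.name.ssl.ssl.keystore.location=/tmp/broker2/secrets/broker2-new.keystore.p12,listener.name.ssl.ssl.keystore.password=changeme1,listener.name.ssl.ssl.key.password=changeme1'
{code}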


was (Author: JIRAUSER280700):
Tried to reproduce the problem on Kafka 2.8.1, and the problem is still there: I've 
updated the certificate and I can see that the listener uses the new certificate, but 
the broker still uses the old certificate for client connections, and when the 
certificate expires, this broker can't connect to others; for example, connections to 
the controller get errors like
{code:none}
INFO [broker-2-to-controller] Failed authentication with localhost/127.0.0.1 
(SSL handshake failed) (org.apache.kafka.common.network.Selector)

ERROR [broker-2-to-controller] Connection to node 1 (localhost/127.0.0.1:9092) 
failed authentication due to: SSL handshake failed 
(org.apache.kafka.clients.NetworkClient)

ERROR [broker-2-to-controller-send-thread]: Failed to send the following 
request due to authentication error: ClientRequest(expectResponse=true, 
callback=kafka.server.BrokerToControllerRequestThread$$Lambda$1289/0x0008017d8440@28eb9d3c,
 destination=1, correlationId=7078, clientId=2, createdTimeMs=1638414992079, 
requestBuilder=AlterIsrRequestData(brokerId=2, brokerEpoch=1025, 
topics=[TopicData(name='amadeus-pnr', partitions=[long list of partitions])]) 
failed due to authentication error with controller

[jira] [Updated] (KAFKA-13474) Regression in dynamic update of broker certificate

2021-12-02 Thread Igor Shipenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Shipenkov updated KAFKA-13474:
---
Summary: Regression in dynamic update of broker certificate  (was: 
Regression in dynamic update of broker client-side SSL factory)

> Regression in dynamic update of broker certificate
> --
>
> Key: KAFKA-13474
> URL: https://issues.apache.org/jira/browse/KAFKA-13474
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.7.0, 2.7.2, 2.8.1, 3.0.0
>    Reporter: Igor Shipenkov
>Priority: Major
> Attachments: failed-controller-single-session-2029.pcap.gz
>
>
> h1. Problem
> It seems that after updating the listener SSL certificate with a dynamic broker 
> configuration update, the old certificate is somehow still used for the broker's 
> client-side SSL factory. Because of this, the broker fails to create new 
> connections to the controller after the old certificate expires.
> h1. History
> Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
> updating the certificate when it was changed with dynamic configuration. That 
> bug was fixed in version 2.3, and I can confirm that dynamic update worked 
> for us with Kafka 2.4. But now we have updated our clusters to 2.7 and see 
> this (or at least a similar) problem again.
> h1. Affected versions
> We first saw this on Confluent 6.1.2, which (I think) is based on Kafka 
> 2.7.0. Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce the 
> problem on them just fine.
> h1. How to reproduce
>  * Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
>  * Get vanilla version 2.7.2 (or 2.7.0) from 
> [https://kafka.apache.org/downloads] .
>  * Make basic broker config like this (don't forget to actually create 
> log.dirs):
> {code:none}
> broker.id=1
> listeners=SSL://:9092
> advertised.listeners=SSL://localhost:9092
> log.dirs=/tmp/broker1/data
> zookeeper.connect=10.88.0.21:2181
> security.inter.broker.protocol=SSL
> ssl.protocol=TLSv1.2
> ssl.client.auth=required
> ssl.endpoint.identification.algorithm=
> ssl.keystore.type=PKCS12
> ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
> ssl.keystore.password=changeme1
> ssl.key.password=changeme1
> ssl.truststore.type=PKCS12
> ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
> ssl.truststore.password=changeme
> {code}
> (I use TLS 1.2 here just so I can see the client certificate in the TLS handshake 
> in a traffic dump; you will get the same error with the default TLS 1.3 too)
>  ** Repeat this config for another 2 brokers, changing id, listener port and 
> certificate accordingly.
>  * Make basic client config (I use one of the brokers' certificates for it):
> {code:none}
> security.protocol=SSL
> ssl.key.password=changeme1
> ssl.keystore.type=PKCS12
> ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
> ssl.keystore.password=changeme1
> ssl.truststore.type=PKCS12
> ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
> ssl.truststore.password=changeme
> ssl.endpoint.identification.algorithm=
> {code}
>  * Create usual local self-signed PKI for test
>  ** generate self-signed CA certificate and private key. Place certificate in 
> truststore.
>  ** create keys for broker certificates and create requests from them as 
> usual (I'll use here same subject for all brokers)
>  ** create 2 certificates as usual
> {code:bash}
> openssl x509 \
>-req -CAcreateserial -days 1 \
>-CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
>-in broker1.csr -out broker1.crt
> {code}
>  ** Use "faketime" utility to make third certificate expire soon:
> {code:bash}
> # date here is some point yesterday, so certificate will expire like 10-15 
> minutes from now
> faketime "2021-11-23 10:15" openssl x509 \
>-req -CAcreateserial -days 1 \
>-CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
>-in broker2.csr -out broker2.crt
> {code}
>  ** create keystores from certificates and place them according to broker 
> configs from earlier
>  * Run 3 brokers with your configs like
> {code:bash}
> ./bin/kafka-server-start.sh server2.properties
> {code}
> (I start it here without daemon mode to see logs right on terminal - just use 
> "tmux" or something to run 3 brokers simultaneously)
>  ** you can check that one broker certificate will expire soon with
> {code:bash}
> openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
> {code}
>  * 

[jira] [Commented] (KAFKA-13474) Regression in dynamic update of broker client-side SSL factory

2021-12-01 Thread Igor Shipenkov (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17452200#comment-17452200
 ] 

Igor Shipenkov commented on KAFKA-13474:


Well, I tried version 3.0.0 and I can reproduce this problem on it just fine: the 
same "SSL handshake failed" error, and I can still see the old client certificate 
in the traffic dump.
Guess I'll just add this to the affected versions too.
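
With TLS 1.2 the client certificate travels unencrypted, so the capture can be 
checked for which certificate the client side actually presented; a small sketch 
(the capture file name here is hypothetical):
{code:bash}
# Show TLS Certificate handshake messages (handshake type 11) in full detail;
# compare the certificate sent by the connecting side against the new one.
tshark -r failed-controller.pcap -Y 'tls.handshake.type == 11' -V
{code}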

> Regression in dynamic update of broker client-side SSL factory
> --
>
> Key: KAFKA-13474
> URL: https://issues.apache.org/jira/browse/KAFKA-13474
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.7.0, 2.7.2, 2.8.1
>Reporter: Igor Shipenkov
>Priority: Major
> Attachments: failed-controller-single-session-2029.pcap.gz
>
>
> h1. Problem
> It seems that after updating the listener SSL certificate with a dynamic broker 
> configuration update, the old certificate is somehow still used for the broker's 
> client-side SSL factory. Because of this, the broker fails to create new 
> connections to the controller after the old certificate expires.
> h1. History
> Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
> updating the certificate when it was changed with dynamic configuration. That 
> bug was fixed in version 2.3, and I can confirm that dynamic update worked 
> for us with Kafka 2.4. But now we have updated our clusters to 2.7 and see 
> this (or at least a similar) problem again.
> h1. Affected versions
> We first saw this on Confluent 6.1.2, which (I think) is based on Kafka 
> 2.7.0. Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce the 
> problem on them just fine.
> h1. How to reproduce
>  * Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
>  * Get vanilla version 2.7.2 (or 2.7.0) from 
> [https://kafka.apache.org/downloads] .
>  * Make basic broker config like this (don't forget to actually create 
> log.dirs):
> {code:none}
> broker.id=1
> listeners=SSL://:9092
> advertised.listeners=SSL://localhost:9092
> log.dirs=/tmp/broker1/data
> zookeeper.connect=10.88.0.21:2181
> security.inter.broker.protocol=SSL
> ssl.protocol=TLSv1.2
> ssl.client.auth=required
> ssl.endpoint.identification.algorithm=
> ssl.keystore.type=PKCS12
> ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
> ssl.keystore.password=changeme1
> ssl.key.password=changeme1
> ssl.truststore.type=PKCS12
> ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
> ssl.truststore.password=changeme
> {code}
> (I use TLS 1.2 here just so I can see the client certificate in the TLS handshake 
> in a traffic dump; you will get the same error with the default TLS 1.3 too)
>  ** Repeat this config for another 2 brokers, changing id, listener port and 
> certificate accordingly.
>  * Make basic client config (I use one of the brokers' certificates for it):
> {code:none}
> security.protocol=SSL
> ssl.key.password=changeme1
> ssl.keystore.type=PKCS12
> ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
> ssl.keystore.password=changeme1
> ssl.truststore.type=PKCS12
> ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
> ssl.truststore.password=changeme
> ssl.endpoint.identification.algorithm=
> {code}
>  * Create usual local self-signed PKI for test
>  ** generate self-signed CA certificate and private key. Place certificate in 
> truststore.
>  ** create keys for broker certificates and create requests from them as 
> usual (I'll use here same subject for all brokers)
>  ** create 2 certificates as usual
> {code:bash}
> openssl x509 \
>-req -CAcreateserial -days 1 \
>-CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
>-in broker1.csr -out broker1.crt
> {code}
>  ** Use "faketime" utility to make third certificate expire soon:
> {code:bash}
> # date here is some point yesterday, so certificate will expire like 10-15 
> minutes from now
> faketime "2021-11-23 10:15" openssl x509 \
>-req -CAcreateserial -days 1 \
>-CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
>-in broker2.csr -out broker2.crt
> {code}
>  ** create keystores from certificates and place them according to broker 
> configs from earlier
>  * Run 3 brokers with your configs like
> {code:bash}
> ./bin/kafka-server-start.sh server2.properties
> {code}
> (I start it here without daemon mode to see logs right on terminal - just use 
> "tmux" or something to run 3 brokers simultaneously)
>  ** you can check that one broker certificate will e

[jira] [Updated] (KAFKA-13474) Regression in dynamic update of broker client-side SSL factory

2021-12-01 Thread Igor Shipenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Shipenkov updated KAFKA-13474:
---
Affects Version/s: 3.0.0

> Regression in dynamic update of broker client-side SSL factory
> --
>
> Key: KAFKA-13474
> URL: https://issues.apache.org/jira/browse/KAFKA-13474
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.7.0, 2.7.2, 2.8.1, 3.0.0
>    Reporter: Igor Shipenkov
>Priority: Major
> Attachments: failed-controller-single-session-2029.pcap.gz
>
>
> h1. Problem
> It seems that after updating the listener SSL certificate with a dynamic broker 
> configuration update, the old certificate is somehow still used for the broker's 
> client-side SSL factory. Because of this, the broker fails to create new 
> connections to the controller after the old certificate expires.
> h1. History
> Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
> updating the certificate when it was changed with dynamic configuration. That 
> bug was fixed in version 2.3, and I can confirm that dynamic update worked 
> for us with Kafka 2.4. But now we have updated our clusters to 2.7 and see 
> this (or at least a similar) problem again.
> h1. Affected versions
> We first saw this on Confluent 6.1.2, which (I think) is based on Kafka 
> 2.7.0. Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce the 
> problem on them just fine.
> h1. How to reproduce
>  * Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
>  * Get vanilla version 2.7.2 (or 2.7.0) from 
> [https://kafka.apache.org/downloads] .
>  * Make basic broker config like this (don't forget to actually create 
> log.dirs):
> {code:none}
> broker.id=1
> listeners=SSL://:9092
> advertised.listeners=SSL://localhost:9092
> log.dirs=/tmp/broker1/data
> zookeeper.connect=10.88.0.21:2181
> security.inter.broker.protocol=SSL
> ssl.protocol=TLSv1.2
> ssl.client.auth=required
> ssl.endpoint.identification.algorithm=
> ssl.keystore.type=PKCS12
> ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
> ssl.keystore.password=changeme1
> ssl.key.password=changeme1
> ssl.truststore.type=PKCS12
> ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
> ssl.truststore.password=changeme
> {code}
> (I use TLS 1.2 here just so I can see the client certificate in the TLS handshake 
> in a traffic dump; you will get the same error with the default TLS 1.3 too)
>  ** Repeat this config for another 2 brokers, changing id, listener port and 
> certificate accordingly.
>  * Make basic client config (I use one of the brokers' certificates for it):
> {code:none}
> security.protocol=SSL
> ssl.key.password=changeme1
> ssl.keystore.type=PKCS12
> ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
> ssl.keystore.password=changeme1
> ssl.truststore.type=PKCS12
> ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
> ssl.truststore.password=changeme
> ssl.endpoint.identification.algorithm=
> {code}
>  * Create usual local self-signed PKI for test
>  ** generate self-signed CA certificate and private key. Place certificate in 
> truststore.
>  ** create keys for broker certificates and create requests from them as 
> usual (I'll use here same subject for all brokers)
>  ** create 2 certificates as usual
> {code:bash}
> openssl x509 \
>-req -CAcreateserial -days 1 \
>-CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
>-in broker1.csr -out broker1.crt
> {code}
>  ** Use "faketime" utility to make third certificate expire soon:
> {code:bash}
> # date here is some point yesterday, so certificate will expire like 10-15 
> minutes from now
> faketime "2021-11-23 10:15" openssl x509 \
>-req -CAcreateserial -days 1 \
>-CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
>-in broker2.csr -out broker2.crt
> {code}
>  ** create keystores from certificates and place them according to broker 
> configs from earlier
>  * Run 3 brokers with your configs like
> {code:bash}
> ./bin/kafka-server-start.sh server2.properties
> {code}
> (I start it here without daemon mode to see logs right on terminal - just use 
> "tmux" or something to run 3 brokers simultaneously)
>  ** you can check that one broker certificate will expire soon with
> {code:bash}
> openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
> {code}
>  * Issue new certificate to replace one, which will expire soon, place it in 
> keystore and replace old keystore wi

[jira] [Updated] (KAFKA-13474) Regression in dynamic update of broker client-side SSL factory

2021-12-01 Thread Igor Shipenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Shipenkov updated KAFKA-13474:
---
Affects Version/s: 2.8.1

> Regression in dynamic update of broker client-side SSL factory
> --
>
> Key: KAFKA-13474
> URL: https://issues.apache.org/jira/browse/KAFKA-13474
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.7.0, 2.7.2, 2.8.1
>    Reporter: Igor Shipenkov
>Priority: Major
> Attachments: failed-controller-single-session-2029.pcap.gz
>
>
> h1. Problem
> It seems that after updating the listener SSL certificate with a dynamic broker 
> configuration update, the old certificate is somehow still used for the broker's 
> client-side SSL factory. Because of this, the broker fails to create new 
> connections to the controller after the old certificate expires.
> h1. History
> Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
> updating the certificate when it was changed with dynamic configuration. That 
> bug was fixed in version 2.3, and I can confirm that dynamic update worked 
> for us with Kafka 2.4. But now we have updated our clusters to 2.7 and see 
> this (or at least a similar) problem again.
> h1. Affected versions
> We first saw this on Confluent 6.1.2, which (I think) is based on Kafka 
> 2.7.0. Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce the 
> problem on them just fine.
> h1. How to reproduce
>  * Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
>  * Get vanilla version 2.7.2 (or 2.7.0) from 
> [https://kafka.apache.org/downloads] .
>  * Make basic broker config like this (don't forget to actually create 
> log.dirs):
> {code:none}
> broker.id=1
> listeners=SSL://:9092
> advertised.listeners=SSL://localhost:9092
> log.dirs=/tmp/broker1/data
> zookeeper.connect=10.88.0.21:2181
> security.inter.broker.protocol=SSL
> ssl.protocol=TLSv1.2
> ssl.client.auth=required
> ssl.endpoint.identification.algorithm=
> ssl.keystore.type=PKCS12
> ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
> ssl.keystore.password=changeme1
> ssl.key.password=changeme1
> ssl.truststore.type=PKCS12
> ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
> ssl.truststore.password=changeme
> {code}
> (I use TLS 1.2 here just so I can see the client certificate in the TLS handshake 
> in a traffic dump; you will get the same error with the default TLS 1.3 too)
>  ** Repeat this config for another 2 brokers, changing id, listener port and 
> certificate accordingly.
>  * Make basic client config (I use one of the brokers' certificates for it):
> {code:none}
> security.protocol=SSL
> ssl.key.password=changeme1
> ssl.keystore.type=PKCS12
> ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
> ssl.keystore.password=changeme1
> ssl.truststore.type=PKCS12
> ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
> ssl.truststore.password=changeme
> ssl.endpoint.identification.algorithm=
> {code}
>  * Create usual local self-signed PKI for test
>  ** generate self-signed CA certificate and private key. Place certificate in 
> truststore.
>  ** create keys for broker certificates and create requests from them as 
> usual (I'll use here same subject for all brokers)
>  ** create 2 certificates as usual
> {code:bash}
> openssl x509 \
>-req -CAcreateserial -days 1 \
>-CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
>-in broker1.csr -out broker1.crt
> {code}
>  ** Use "faketime" utility to make third certificate expire soon:
> {code:bash}
> # date here is some point yesterday, so certificate will expire like 10-15 
> minutes from now
> faketime "2021-11-23 10:15" openssl x509 \
>-req -CAcreateserial -days 1 \
>-CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
>-in broker2.csr -out broker2.crt
> {code}
>  ** create keystores from certificates and place them according to broker 
> configs from earlier
>  * Run 3 brokers with your configs like
> {code:bash}
> ./bin/kafka-server-start.sh server2.properties
> {code}
> (I start it here without daemon mode to see logs right on terminal - just use 
> "tmux" or something to run 3 brokers simultaneously)
>  ** you can check that one broker certificate will expire soon with
> {code:bash}
> openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
> {code}
>  * Issue a new certificate to replace the one that will expire soon, place it in 
> a keystore, and replace the old keystore with it.
> 

[jira] [Commented] (KAFKA-13474) Regression in dynamic update of broker client-side SSL factory

2021-12-01 Thread Igor Shipenkov (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17452158#comment-17452158
 ] 

Igor Shipenkov commented on KAFKA-13474:


Tried to reproduce the problem on Kafka 2.8.1, and the problem is still there: I've 
updated the certificate and I can see that the listener uses the new certificate, but 
the broker still uses the old certificate for client connections, and when the 
certificate expires, this broker can't connect to others; for example, connections to 
the controller get errors like
{code:none}
INFO [broker-2-to-controller] Failed authentication with localhost/127.0.0.1 
(SSL handshake failed) (org.apache.kafka.common.network.Selector)

ERROR [broker-2-to-controller] Connection to node 1 (localhost/127.0.0.1:9092) 
failed authentication due to: SSL handshake failed 
(org.apache.kafka.clients.NetworkClient)

ERROR [broker-2-to-controller-send-thread]: Failed to send the following 
request due to authentication error: ClientRequest(expectResponse=true, 
callback=kafka.server.BrokerToControllerRequestThread$$Lambda$1289/0x0008017d8440@28eb9d3c,
 destination=1, correlationId=7078, clientId=2, createdTimeMs=1638414992079, 
requestBuilder=AlterIsrRequestData(brokerId=2, brokerEpoch=1025, 
topics=[TopicData(name='amadeus-pnr', partitions=[long list of partitions])]) 
failed due to authentication error with controller 
(kafka.server.BrokerToControllerRequestThread)
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLProtocolException: Unexpected handshake message: 
server_hello
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:129)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:307)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:263)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:254)
at 
java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:437)
at 
java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1074)
at 
java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1061)
at 
java.base/java.security.AccessController.doPrivileged(AccessController.java:689)
at 
java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1008)
at 
org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:430)
at 
org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:514)
at 
org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:368)
at 
org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:291)
at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:178)
at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:543)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:561)
at 
kafka.common.InterBrokerSendThread.pollOnce(InterBrokerSendThread.scala:74)
at 
kafka.server.BrokerToControllerRequestThread.doWork(BrokerToControllerChannelManager.scala:368)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
{code}
and a packet capture shows the old client certificate in the TLS handshake.
I will add 2.8.1 to the list of affected versions.

> Regression in dynamic update of broker client-side SSL factory
> --
>
> Key: KAFKA-13474
> URL: https://issues.apache.org/jira/browse/KAFKA-13474
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.7.0, 2.7.2
>    Reporter: Igor Shipenkov
>Priority: Major
> Attachments: failed-controller-single-session-2029.pcap.gz
>
>
> h1. Problem
> It seems that after updating the listener SSL certificate with a dynamic broker 
> configuration update, the old certificate is somehow still used for the broker's 
> client-side SSL factory. Because of this, the broker fails to create new 
> connections to the controller after the old certificate expires.
> h1. History
> Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
> updating the certificate when it was changed with dynamic configuration. That 
> bug was fixed in version 2.3, and I can confirm that dynamic update worked 
> for us with Kafka 2.4. But now we have updated our clusters to 2.7 and see 
> this (or at least a similar) problem again.
> h1. Affected versions
> We first saw this 

[jira] [Updated] (KAFKA-13474) Regression in dynamic update of broker client-side SSL factory

2021-11-23 Thread Igor Shipenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Shipenkov updated KAFKA-13474:
---
Description: 
h1. Problem

It seems that after updating the listener SSL certificate with a dynamic broker 
configuration update, the old certificate is somehow still used for the broker's 
client-side SSL factory. Because of this, the broker fails to create new 
connections to the controller after the old certificate expires.
h1. History

Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was changed with dynamic configuration. That bug 
was fixed in version 2.3, and I can confirm that dynamic update worked for us 
with Kafka 2.4. But now we have updated our clusters to 2.7 and see this (or 
at least a similar) problem again.
h1. Affected versions

We first saw this on Confluent 6.1.2, which (I think) is based on Kafka 2.7.0. 
Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce the problem on 
them just fine.
h1. How to reproduce
 * Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
 * Get vanilla version 2.7.2 (or 2.7.0) from 
[https://kafka.apache.org/downloads] .
 * Make basic broker config like this (don't forget to actually create 
log.dirs):
{code:none}
broker.id=1

listeners=SSL://:9092
advertised.listeners=SSL://localhost:9092

log.dirs=/tmp/broker1/data

zookeeper.connect=10.88.0.21:2181

security.inter.broker.protocol=SSL
ssl.protocol=TLSv1.2
ssl.client.auth=required
ssl.endpoint.identification.algorithm=
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.key.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
{code}
(I use TLS 1.2 here just so I can see the client certificate in the TLS handshake in 
a traffic dump; you will get the same error with the default TLS 1.3 too)
 ** Repeat this config for another 2 brokers, changing id, listener port and 
certificate accordingly.
 * Make basic client config (I use one of the brokers' certificates for it):
{code:none}
security.protocol=SSL
ssl.key.password=changeme1
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
ssl.endpoint.identification.algorithm=
{code}
 * Create usual local self-signed PKI for test
 ** generate self-signed CA certificate and private key. Place certificate in 
truststore.
 ** create keys for broker certificates and create requests from them as usual 
(I'll use here same subject for all brokers)
 ** create 2 certificates as usual
{code:bash}
openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker1.csr -out broker1.crt
{code}
 ** Use "faketime" utility to make third certificate expire soon:
{code:bash}
# date here is some point yesterday, so certificate will expire like 10-15 
minutes from now
faketime "2021-11-23 10:15" openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker2.csr -out broker2.crt
{code}
 ** create keystores from the certificates and place them according to the broker 
configs from earlier (a sketch of this step follows at the end of this description)
 * Run 3 brokers with your configs like
{code:bash}
./bin/kafka-server-start.sh server2.properties
{code}
(I start it here without daemon mode to see logs right on terminal - just use 
"tmux" or something to run 3 brokers simultaneously)
 ** you can check that one broker certificate will expire soon with
{code:bash}
openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
{code}
 * When the old certificate expires, the broker's connections fail with "SSL 
handshake failed" errors from kafka.server.BrokerToControllerRequestThread, and the 
controller log will show something like
{code:none}
INFO [SocketServer brokerId=1] Failed authentication with /127.0.0.1 (SSL 
handshake failed) (org.apache.kafka.common.network.Selector)
{code}
and if broker with expired and changed certificate was controller itself, then 
it even could not connect to itself.
 * If you make a traffic dump (and use TLS 1.2 or lower) then you will see that 
the broker's client connection tries to use the old certificate in the TLS handshake.

Here is an example traffic dump where the broker with the expired and dynamically 
changed certificate is the current controller, so it can't connect to itself: 
[^failed-controller-single-session-2029.pcap.gz] 
In this example you will see that the "Server" uses the new certificate and the 
"Client" uses the old certificate, but it's the same broker!
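
The "create keystores" step above is not spelled out in the ticket; a minimal sketch 
of it, assuming a broker1.key private key file and the paths and passwords from the 
example configs:
{code:bash}
# Bundle the broker key, certificate and CA certificate into the PKCS12 keystore
# referenced by ssl.keystore.location in the broker config above.
openssl pkcs12 -export \
   -inkey broker1.key -in broker1.crt -certfile ca/ca-cert.pem \
   -name broker1 -passout pass:changeme1 \
   -out /tmp/broker1/secrets/broker1.keystore.p12

# The truststore holds just the CA certificate.
keytool -importcert -noprompt -trustcacerts -alias ca \
   -file ca/ca-cert.pem \
   -keystore /tmp/broker1/secrets/truststore.p12 \
   -storetype PKCS12 -storepass changeme
{code}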

  was:
h1. Problem

It seems that after updating the listener SSL certificate with a dynamic broker 
configuration update, the old certificate is somehow still used for the broker's 
client-side SSL factory. Because of this, the broker fails to create new 
connections to the controller after the old certificate expires.
h1. History

[jira] [Updated] (KAFKA-13474) Regression in dynamic update of broker client-side SSL factory

2021-11-23 Thread Igor Shipenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Shipenkov updated KAFKA-13474:
---
Description: 
h1. Problem

It seems that after updating the listener SSL certificate with a dynamic broker 
configuration update, the old certificate is somehow still used for the broker's 
client-side SSL factory. Because of this, the broker fails to create new 
connections to the controller after the old certificate expires.
h1. History

Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was changed with dynamic configuration. That bug 
was fixed in version 2.3, and I can confirm that dynamic update worked for us 
with Kafka 2.4. But now we have updated our clusters to 2.7 and see this (or 
at least a similar) problem again.
h1. Affected versions

We first saw this on Confluent 6.1.2, which (I think) is based on Kafka 2.7.0. 
Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce the problem on 
them just fine.
h1. How to reproduce
 * Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
 * Get vanilla version 2.7.2 (or 2.7.0) from 
[https://kafka.apache.org/downloads] .
 * Make basic broker config like this (don't forget to actually create 
log.dirs):
{code:none}
broker.id=1

listeners=SSL://:9092
advertised.listeners=SSL://localhost:9092

log.dirs=/tmp/broker1/data

zookeeper.connect=10.88.0.21:2181

security.inter.broker.protocol=SSL
ssl.protocol=TLSv1.2
ssl.client.auth=required
ssl.endpoint.identification.algorithm=
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.key.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
{code}
(I use TLS 1.2 here just so I can see the client certificate in the TLS handshake in 
a traffic dump; you will get the same error with the default TLS 1.3 too)
 ** Repeat this config for another 2 brokers, changing id, listener port and 
certificate accordingly.
 * Make basic client config (I use one of the brokers' certificates for it):
{code:none}
security.protocol=SSL
ssl.key.password=changeme1
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
ssl.endpoint.identification.algorithm=
{code}
 * Create usual local self-signed PKI for test
 ** generate self-signed CA certificate and private key. Place certificate in 
truststore.
 ** create keys for broker certificates and create requests from them as usual 
(I'll use here same subject for all brokers)
 ** create 2 certificates as usual
{code:bash}
openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker1.csr -out broker1.crt
{code}
 ** Use "faketime" utility to make third certificate expire soon:
{code:bash}
# date here is some point yesterday, so certificate will expire like 10-15 
minutes from now
faketime "2021-11-23 10:15" openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker2.csr -out broker2.crt
{code}
 ** create keystores from certificates and place them according to broker 
configs from earlier
 * Run 3 brokers with your configs like
{code:bash}
./bin/kafka-server-start.sh server2.properties
{code}
(I start it here without daemon mode to see logs right on terminal - just use 
"tmux" or something to run 3 brokers simultaneously)
 ** you can check that one broker certificate will expire soon with
{code:bash}
openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
{code}
 * When the old certificate expires, the broker's connections fail with "SSL 
handshake failed" errors from kafka.server.BrokerToControllerRequestThread, and the 
controller log will show something like
{code:none}
INFO [SocketServer brokerId=1] Failed authentication with /127.0.0.1 (SSL 
handshake failed) (org.apache.kafka.common.network.Selector)
{code}
and if broker with expired and changed certificate was controller itself, then 
it even could not connect to itself.
 * If you make a traffic dump (and use TLS 1.2 or lower) then you will see that 
the client tries to use the old certificate.

Here is an example traffic dump where the broker with the expired and dynamically 
changed certificate is the current controller, so it can't connect to itself: 
[^failed-controller-single-session-2029.pcap.gz] 
In this example you will see that the "Server" uses the new certificate and the 
"Client" uses the old certificate, but it's the same broker!

  was:
h1. Problem

It seems that after updating the listener SSL certificate with a dynamic broker 
configuration update, the old certificate is somehow still used for the broker's 
client-side SSL factory. Because of this, the broker fails to create new 
connections to the controller after the old certificate expires.
h1. History

Back in KAFKA-8336 there was an iss

[jira] [Updated] (KAFKA-13474) Regression in dynamic update of broker client-side SSL factory

2021-11-23 Thread Igor Shipenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Shipenkov updated KAFKA-13474:
---
Description: 
h1. Problem

It seems that after updating the listener SSL certificate with a dynamic broker 
configuration update, the old certificate is somehow still used for the broker's 
client-side SSL factory. Because of this, the broker fails to create new 
connections to the controller after the old certificate expires.
h1. History

Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was changed with dynamic configuration. That bug 
was fixed in version 2.3, and I can confirm that dynamic update worked for us 
with Kafka 2.4. But now we have updated our clusters to 2.7 and see this (or 
at least a similar) problem again.
h1. Affected versions

We first saw this on Confluent 6.1.2, which (I think) is based on Kafka 2.7.0. 
Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce the problem on 
them just fine.
h1. How to reproduce
 * Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
 * Get vanilla version 2.7.2 (or 2.7.0) from 
[https://kafka.apache.org/downloads] .
 * Make basic broker config like this (don't forget to actually create 
log.dirs):
{code:none}
broker.id=1

listeners=SSL://:9092
advertised.listeners=SSL://localhost:9092

log.dirs=/tmp/broker1/data

zookeeper.connect=10.88.0.21:2181

security.inter.broker.protocol=SSL
ssl.protocol=TLSv1.2
ssl.client.auth=required
ssl.endpoint.identification.algorithm=
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.key.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
{code}
(I use TLS 1.2 here just so I can see the client certificate in the TLS handshake in 
a traffic dump; you will get the same error with the default TLS 1.3 too)
 ** Repeat this config for another 2 brokers, changing id, listener port and 
certificate accordingly.
 * Make basic client config (I use one of the brokers' certificates for it):
{code:none}
security.protocol=SSL
ssl.key.password=changeme1
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
ssl.endpoint.identification.algorithm=
{code}
 * Create usual local self-signed PKI for test
 ** generate self-signed CA certificate and private key. Place certificate in 
truststore.
 ** create keys for broker certificates and create requests from them as usual 
(I'll use here same subject for all brokers)
 ** create 2 certificates as usual
{code:bash}
openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker1.csr -out broker1.crt
{code}
 ** Use "faketime" utility to make third certificate expire soon:
{code:bash}
# date here is some point yesterday, so certificate will expire like 10-15 
minutes from now
faketime "2021-11-23 10:15" openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker2.csr -out broker2.crt
{code}
 ** create keystores from certificates and place them according to broker 
configs from earlier
 * Run 3 brokers with your configs like
{code:bash}
./bin/kafka-server-start.sh server2.properties
{code}
(I start it here without daemon mode to see logs right on terminal - just use 
"tmux" or something to run 3 brokers simultaneously)
 ** you can check that one broker certificate will expire soon with
{code:bash}
openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
{code}
 * When the old certificate expires, the broker's connections fail with "SSL 
handshake failed" errors from kafka.server.BrokerToControllerRequestThread, and the 
controller log will show something like
{code:none}
INFO [SocketServer brokerId=1] Failed authentication with /127.0.0.1 (SSL 
handshake failed) (org.apache.kafka.common.network.Selector)
{code}
and if broker with expired and changed certificate was controller itself, then 
it even could not connect to itself.
 * If you make a traffic dump (and use TLS 1.2 or lower) then you will see that 
the client tries to use the old certificate.

Here is an example traffic dump where the broker with the expired and dynamically 
changed certificate is the current controller, so it can't connect to itself: 
[^failed-controller-single-session-2029.pcap.gz] 
In this example you will see that the "Server" uses the new certificate and the 
"Client" uses the old certificate, but it's the same broker!

  was:
h1. Problem

It seems that after updating the listener SSL certificate with a dynamic broker 
configuration update, the old certificate is somehow still used for the broker's 
client-side SSL factory. Because of this, the broker fails to create new 
connections to the controller after the old certificate expires.
h1. History

Back in KAFKA-8336 there was an iss

[jira] [Updated] (KAFKA-13474) Regression in dynamic update of broker client-side SSL factory

2021-11-23 Thread Igor Shipenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Shipenkov updated KAFKA-13474:
---
Description: 
h1. Problem

It seems that after updating the listener SSL certificate with a dynamic broker 
configuration update, the old certificate is somehow still used for the broker's 
client-side SSL factory. Because of this, the broker fails to create new 
connections to the controller after the old certificate expires.
h1. History

Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was changed with dynamic configuration. That bug 
was fixed in version 2.3, and I can confirm that dynamic update worked for us 
with Kafka 2.4. But now we have updated our clusters to 2.7 and see this (or 
at least a similar) problem again.
h1. Affected versions

We first saw this on Confluent 6.1.2, which (I think) is based on Kafka 2.7.0. 
Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce the problem on 
them just fine.
h1. How to reproduce
 * Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
 * Get vanilla version 2.7.2 (or 2.7.0) from 
[https://kafka.apache.org/downloads] .
 * Make basic broker config like this (don't forget to actually create 
log.dirs):
{code:none}
broker.id=1

listeners=SSL://:9092
advertised.listeners=SSL://localhost:9092

log.dirs=/tmp/broker1/data

zookeeper.connect=10.88.0.21:2181

security.inter.broker.protocol=SSL
ssl.protocol=TLSv1.2
ssl.client.auth=required
ssl.endpoint.identification.algorithm=
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.key.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
{code}
(I use TLS 1.2 here just so I can see the client certificate in the TLS handshake in 
a traffic dump; you will get the same error with the default TLS 1.3 too)

 ** Repeat this config for another 2 brokers, changing id, listener port and 
certificate accordingly.
 * Make basic client config (I use one of the brokers' certificates for it):
{code:none}
security.protocol=SSL
ssl.key.password=changeme1
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
ssl.endpoint.identification.algorithm=
{code}

 * Create usual local self-signed PKI for test
 ** generate self-signed CA certificate and private key. Place certificate in 
truststore.
 ** create keys for broker certificates and create requests from them as usual 
(I'll use here same subject for all brokers)
 ** create 2 certificates as usual
{code:bash}
openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker1.csr -out broker1.crt
{code}

 ** Use "faketime" utility to make third certificate expire soon:
{code:bash}
# date here is some point yesterday, so certificate will expire like 10-15 
minutes from now
faketime "2021-11-23 10:15" openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker2.csr -out broker2.crt
{code}

 ** create keystores from certificates and place them according to broker 
configs from earlier
 * Run 3 brokers with your configs like
{code:bash}
./bin/kafka-server-start.sh server2.properties
{code}
(I start it here without daemon mode to see logs right on terminal - just use 
"tmux" or something to run 3 brokers simultaneously)

 ** you can check that one broker certificate will expire soon with
{code:bash}
openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
{code}
 * When the old certificate expires, the broker's connections fail with "SSL 
handshake failed" errors from kafka.server.BrokerToControllerRequestThread, and the 
controller log will show something like
{code:none}
INFO [SocketServer brokerId=1] Failed authentication with /127.0.0.1 (SSL 
handshake failed) (org.apache.kafka.common.network.Selector)
{code}
and if broker with expired and changed certificate was controller itself, then 
it even could not connect to itself.

 * If you make a traffic dump (and use TLS 1.2 or lower) then you will see that 
the client tries to use the old certificate.

Here is an example traffic dump where the broker with the expired and dynamically 
changed certificate is the current controller, so it can't connect to itself: 
[^failed-controller-single-session-2029.pcap.gz] 
In this example you will see that the "Server" uses the new certificate and the 
"Client" uses the old certificate, but it's the same broker!

  was:
h1. Problem
It seems that after updating the listener SSL certificate with a dynamic 
configuration update, the old certificate is somehow still used for the client SSL 
factory. Because of this, the broker fails to create new connections to the 
controller after the old certificate expires.

h1. History
Back in KAFKA-8336 there was an issue, wh

[jira] [Updated] (KAFKA-13474) Regression in dynamic update of broker client-side SSL factory

2021-11-23 Thread Igor Shipenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Shipenkov updated KAFKA-13474:
---
Description: 
h1. Problem
It seems that after updating the listener SSL certificate with a dynamic 
configuration update, the old certificate is somehow still used for the client SSL 
factory. Because of this, the broker fails to create new connections to the 
controller after the old certificate expires.

h1. History
Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was changed with dynamic configuration. That bug 
was fixed in version 2.3, and I can confirm that dynamic update worked for us 
with Kafka 2.4. But now we have updated our clusters to 2.7 and see this (or 
at least a similar) problem again.

h1. Affected versions
We first saw this on Confluent 6.1.2, which (I think) is based on Kafka 2.7.0. 
Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce the problem on 
them just fine.

h1. How to reproduce
* Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
* Get vanilla version 2.7.2 (or 2.7.0) from https://kafka.apache.org/downloads .
* Make basic broker config like this (don't forget to actually create log.dirs):
{code:none}
broker.id=1

listeners=SSL://:9092
advertised.listeners=SSL://localhost:9092

log.dirs=/tmp/broker1/data

zookeeper.connect=10.88.0.21:2181

security.inter.broker.protocol=SSL
ssl.protocol=TLSv1.2
ssl.client.auth=required
ssl.endpoint.identification.algorithm=
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.key.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
{code}
(I use TLS 1.2 here just so I can see the client certificate in the TLS handshake in 
a traffic dump; you will get the same error with the default TLS 1.3 too)
** Repeat this config for another 2 brokers, changing id, listener port and 
certificate accordingly.
* Make basic client config (I use one of the brokers' certificates for it):
{code:none}
security.protocol=SSL
ssl.key.password=changeme1
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
ssl.endpoint.identification.algorithm=
{code}
* Create usual local self-signed PKI for test
** generate self-signed CA certificate and private key. Place certificate in 
truststore.
** create keys for broker certificates and create requests from them as usual 
(I'll use here same subject for all brokers)
** create 2 certificates as usual
{code:bash}
openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker1.csr -out broker1.crt
{code}
** Use "faketime" utility to make third certificate expire soon:
{code:bash}
# date here is some point yesterday, so certificate will expire like 10-15 
minutes from now
faketime "2021-11-23 10:15" openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker2.csr -out broker2.crt
{code}
** create keystores from certificates and place them according to broker 
configs from earlier
* Run 3 brokers with your configs like
{code:bash}
./bin/kafka-server-start.sh server2.properties
{code}
(I start it here without daemon mode to see logs right on terminal - just use 
"tmux" or something to run 3 brokers simultaneously)
** you can check that one broker certificate will expire soon with
{code:bash}
openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
{code}
* When the old certificate expires, the broker's connections fail with "SSL 
handshake failed" errors from kafka.server.BrokerToControllerRequestThread, and the 
controller log will show something like
{code:none}
INFO [SocketServer brokerId=1] Failed authentication with /127.0.0.1 (SSL 
handshake failed) (org.apache.kafka.common.network.Selector)
{code}
and if broker with expired and changed certificate was controller itself, then 
it even could not connect to itself.
* If you make a traffic dump (and use TLS 1.2 or lower) then you will see that the 
client tries to use the old certificate.

Here is an example traffic dump where the broker with the expired and dynamically 
changed certificate is the current controller, so it can't connect to itself: 
[^failed-controller-single-session-2029.pcap.gz] 
In this example you will see that the "Server" uses the new certificate and the 
"Client" uses the old certificate, but it's the same broker!

  was:
h1. Problem
It seems that after updating the listener SSL certificate with a dynamic 
configuration update, the old certificate is somehow still used for the client SSL 
factory. Because of this, the broker fails to create new connections to the 
controller after the old certificate expires.

h1. History
Back in KAFKA-8336 there was an issue, when client-side SSL factory wasn't 
updatin

[jira] [Updated] (KAFKA-13474) Regression in dynamic update of broker client-side SSL factory

2021-11-23 Thread Igor Shipenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Shipenkov updated KAFKA-13474:
---
Description: 
h1. Problem
It seems that after updating the listener SSL certificate with a dynamic 
configuration update, the old certificate is somehow still used for the client SSL 
factory. Because of this, the broker fails to create new connections to the 
controller after the old certificate expires.

h1. History
Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was changed with dynamic configuration. That bug 
was fixed in version 2.3, and I can confirm that dynamic update worked for us 
with Kafka 2.4. But now we have updated our clusters to 2.7 and see this (or 
at least a similar) problem again.

h1. Affected versions
We first saw this on Confluent 6.1.2, which (I think) is based on Kafka 2.7.0. 
Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce the problem on 
them just fine.

h1. How to reproduce
* Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
* Get vanilla version 2.7.2 (or 2.7.0) from https://kafka.apache.org/downloads .
* Make basic broker config like this (don't forget to actually create log.dirs):
{code}
broker.id=1

listeners=SSL://:9092
advertised.listeners=SSL://localhost:9092

log.dirs=/tmp/broker1/data

zookeeper.connect=10.88.0.21:2181

security.inter.broker.protocol=SSL
ssl.protocol=TLSv1.2
ssl.client.auth=required
ssl.endpoint.identification.algorithm=
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.key.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
{code}
(I use here TLS 1.2 just so I can see client certificate in TLS handshake in 
traffic dump, you will get same error with default TLS 1.3 too)
** Repeat this config for another 2 brokers, changing id, listener port and 
certificate accordingly.
* Make basic client config (I use for it one of brokers' certificate):
{code}
security.protocol=SSL
ssl.key.password=changeme1
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
ssl.endpoint.identification.algorithm=
{code}
* Create usual local self-signed PKI for test
** generate self-signed CA certificate and private key. Place certificate in 
truststore.
** create keys for broker certificates and create requests from them as usual 
(I'll use here same subject for all brokers)
** create 2 certificates as usual
{code}
openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker1.csr -out broker1.crt
{code}
** Use "faketime" utility to make third certificate expire soon:
{code}
# the date here is some point yesterday, so the certificate will expire 10-15 minutes from now
faketime "2021-11-23 10:15" openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker2.csr -out broker2.crt
{code}
** create keystores from certificates and place them according to broker 
configs from earlier
* Run 3 brokers with your configs like
{code}
./bin/kafka-server-start.sh server2.properties
{code}
(I start it here without daemon mode to see logs right on terminal - just use 
"tmux" or something to run 3 brokers simultaneously)
** you can check that one broker certificate will expire soon with
{code}
openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
{code}
* Issue new certificate to replace the one which will expire soon, place it in a 
keystore and replace the old keystore with it.
* Use dynamic configuration to make the broker re-read the keystore.
* Wait until the old certificate expires. After that the broker fails to create new 
connections to the controller - the broker log will show errors ending with
{code}
... (kafka.server.BrokerToControllerRequestThread)
{code}
and controller log will show something like
{code}
INFO [SocketServer brokerId=1] Failed authentication with /127.0.0.1 (SSL 
handshake failed) (org.apache.kafka.common.network.Selector)
{code}
and if broker with expired and changed certificate was controller itself, then 
it even could not connect to itself.
* If you make traffic dump (and you use TLS 1.2 or less) then you will see that 
client tries to use old certificate.

Here is example of traffic dump, when broker with expired and dynamically 
changed certificate is current controller, so it can't connect to itself:  
[^failed-controller-single-session-2029.pcap.gz] 
In this example you will see that the "Server" uses the new certificate and the "Client" 
uses the old certificate - but it's the same broker!

  was:
h1. Problem
It seems, after updating a listener SSL certificate with a dynamic configuration 
update, the old certificate is somehow still used for the client SSL factory. Because 
of this, the broker fails to create a new connection to the controller after the old 
certificate expires.

h1. History
Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was c

[jira] [Updated] (KAFKA-13474) Regression in dynamic update of broker client-side SSL factory

2021-11-23 Thread Igor Shipenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Shipenkov updated KAFKA-13474:
---
Description: 
h1. Problem
It seems, after updating a listener SSL certificate with a dynamic configuration 
update, the old certificate is somehow still used for the client SSL factory. Because 
of this, the broker fails to create a new connection to the controller after the old 
certificate expires.

h1. History
Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was changed with dynamic configuration. That bug 
has been fixed in version 2.3 and I can confirm that dynamic update worked for us 
with kafka 2.4. But now we have updated clusters to 2.7 and see this (or at least 
a similar) problem again.

h1. Affected versions
First we've seen this on confluent 6.1.2, which (I think) is based on kafka 2.7.0. 
Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce the problem on 
them just fine.

h1. How to reproduce
* Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
* Get vanilla version 2.7.2 (or 2.7.0) from https://kafka.apache.org/downloads .
* Make basic broker config like this (don't forget to actually create log.dirs):
{code}
broker.id=1

listeners=SSL://:9092
advertised.listeners=SSL://localhost:9092

log.dirs=/tmp/broker1/data

zookeeper.connect=10.88.0.21:2181

security.inter.broker.protocol=SSL
ssl.protocol=TLSv1.2
ssl.client.auth=required
ssl.endpoint.identification.algorithm=
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.key.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
{code}
(I use here TLS 1.2 just so I can see client certificate in TLS handshake in 
traffic dump, you will get same error with default TLS 1.3 too)
** Repeat this config for another 2 brokers, changing id, listener port and 
certificate accordingly.
* Make basic client config (I use for it one of brokers' certificate):
{code}
security.protocol=SSL
ssl.key.password=changeme1
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
ssl.endpoint.identification.algorithm=
{code}
* Create usual local self-signed PKI for test
** generate self-signed CA certificate and private key. Place certificate in 
truststore.
** create keys for broker certificates and create requests from them as usual 
(I'll use here same subject for all brokers)
** create 2 certificates as usual
{code}
openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker1.csr -out broker1.crt
{code}
** Use "faketime" utility to make third certificate expire soon:
{code}
# the date here is some point yesterday, so the certificate will expire 10-15 minutes from now
faketime "2021-11-23 10:15" openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker2.csr -out broker2.crt
{code}
** create keystores from certificates and place them according to broker 
configs from earlier
* Run 3 brokers with your configs like
{code}
./bin/kafka-server-start.sh server2.properties
{code}
(I start it here without daemon mode to see logs right on terminal - just use 
"tmux" or something to run 3 brokers simultaneously)
** you can check that one broker certificate will expire soon with
{code}
openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
{code}
* Issue new certificate to replace the one which will expire soon, place it in a 
keystore and replace the old keystore with it.
* Use dynamic configuration to make the broker re-read the keystore.
* Wait until the old certificate expires. After that the broker fails to create new 
connections to the controller - the broker log will show errors ending with
{code}
... (kafka.server.BrokerToControllerRequestThread)
{code}
and controller log will show something like
{code}
INFO [SocketServer brokerId=1] Failed authentication with /127.0.0.1 (SSL 
handshake failed) (org.apache.kafka.common.network.Selector)
{code}
and if the broker with the expired and changed certificate was the controller itself, then 
it even could not connect to itself.
* If you make traffic dump (and you use TLS 1.2 or less) then you will see that 
client tries to use old certificate.

Here is example of traffic dump, when broker with expired and dynamically 
changed certificate is current controller, so it can't connect to itself:  
[^failed-controller-single-session-2029.pcap.gz] 
In this example you will see that the "Server" uses the new certificate and the "Client" 
uses the old certificate - but it's the same broker!

  was:
h1. Problem
It seems, after updating a listener SSL certificate with a dynamic configuration 
update, the old certificate is somehow still used for the client SSL factory. Because 
of this, the broker fails to create a new connection to the controller after the old 
certificate expires.

h1. History
Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was changed with dyn

[jira] [Updated] (KAFKA-13474) Regression in dynamic update of broker client-side SSL factory

2021-11-23 Thread Igor Shipenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Shipenkov updated KAFKA-13474:
---
Description: 
h1. Problem
It seems, after updating a listener SSL certificate with a dynamic configuration 
update, the old certificate is somehow still used for the client SSL factory. Because 
of this, the broker fails to create a new connection to the controller after the old 
certificate expires.

h1. History
Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was changed with dynamic configuration. That bug 
has been fixed in version 2.3 and I can confirm that dynamic update worked for us 
with kafka 2.4. But now we have updated clusters to 2.7 and see this (or at least 
a similar) problem again.

h1. Affected versions
First we've seen this on confluent 6.1.2, which (I think) is based on kafka 2.7.0. 
Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce the problem on 
them just fine.

h1. How to reproduce
* Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
* Get vanilla version 2.7.2 (or 2.7.0) from https://kafka.apache.org/downloads .
* Make basic broker config like this (don't forget to actually create log.dirs):
{code}
broker.id=1

listeners=SSL://:9092
advertised.listeners=SSL://localhost:9092

log.dirs=/tmp/broker1/data

zookeeper.connect=10.88.0.21:2181

security.inter.broker.protocol=SSL
ssl.protocol=TLSv1.2
ssl.client.auth=required
ssl.endpoint.identification.algorithm=
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.key.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
{code}
(I use here TLS 1.2 just so I can see client certificate in TLS handshake in 
traffic dump, you will get same error with default TLS 1.3 too)
** Repeat this config for another 2 brokers, changing id, listener port and 
certificate accordingly.
* Make basic client config (I use for it one of brokers' certificate):
{code}
security.protocol=SSL
ssl.key.password=changeme1
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
ssl.endpoint.identification.algorithm=
{code}
* Create usual local self-signed PKI for test
** generate self-signed CA certificate and private key. Place certificate in 
truststore.
** create keys for broker certificates and create requests from them as usual 
(I'll use here same subject for all brokers)
** create 2 certificates as usual
{code}
openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker1.csr -out broker1.crt
{code}
** Use "faketime" utility to make third certificate expire soon:
{code}
# the date here is some point yesterday, so the certificate will expire 10-15 minutes from now
faketime "2021-11-23 10:15" openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker2.csr -out broker2.crt
{code}
** create keystores from certificates and place them according to broker 
configs from earlier
* Run 3 brokers with your configs like
{code}
./bin/kafka-server-start.sh server2.properties
{code}
(I start it here without daemon mode to see the logs right in the terminal - just use 
"tmux" or something to run 3 brokers simultaneously)
** you can check that one broker certificate will expire soon with
{code}
openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
{code}
* Issue new certificate to replace the one which will expire soon, place it in a 
keystore and replace the old keystore with it.
* Use dynamic configuration to make the broker re-read the keystore.
* Wait until the old certificate expires. After that the broker fails to create new 
connections to the controller - the broker log will show errors ending with
{code}
... (kafka.server.BrokerToControllerRequestThread)
{code}
and controller log will show something like
{code}
INFO [SocketServer brokerId=1] Failed authentication with /127.0.0.1 (SSL 
handshake failed) (org.apache.kafka.common.network.Selector)
{code}
and if the broker with the expired and changed certificate was the controller itself, it 
even could not connect to itself.
* If you make traffic dump (and you use TLS 1.2 or less) then you will see that 
client tries to use old certificate.

Here is example of traffic dump, when broker with expired and dynamically 
changed certificate is current controller, so it can't connect to itself:  
[^failed-controller-single-session-2029.pcap.gz] 
In this example you will see that the "Server" uses the new certificate and the "Client" 
uses the old certificate - but it's the same broker!

  was:
h1. Problem
It seems, after updating a listener SSL certificate with a dynamic configuration 
update, the old certificate is somehow still used for the client SSL factory. Because 
of this, the broker fails to create a new connection to the controller after the old 
certificate expires.

h1. History
Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was changed with dyn

[jira] [Updated] (KAFKA-13474) Regression in dynamic update of broker client-side SSL factory

2021-11-23 Thread Igor Shipenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Shipenkov updated KAFKA-13474:
---
Description: 
h1. Problem
It seems, after updating a listener SSL certificate with a dynamic configuration 
update, the old certificate is somehow still used for the client SSL factory. Because 
of this, the broker fails to create a new connection to the controller after the old 
certificate expires.

h1. History
Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was changed with dynamic configuration. That bug 
has been fixed in version 2.3 and I can confirm that dynamic update worked for us 
with kafka 2.4. But now we have updated clusters to 2.7 and see this (or at least 
a similar) problem again.

h1. Affected versions
First we've seen this on confluent 6.1.2, which (I think) is based on kafka 2.7.0. 
Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce the problem on 
them just fine.

h1. How to reproduce
* Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
* Get vanilla version 2.7.2 (or 2.7.0) from https://kafka.apache.org/downloads .
* Make basic broker config like this (don't forget to actually create log.dirs):
{code}
broker.id=1

listeners=SSL://:9092
advertised.listeners=SSL://localhost:9092

log.dirs=/tmp/broker1/data

zookeeper.connect=10.88.0.21:2181

security.inter.broker.protocol=SSL
ssl.protocol=TLSv1.2
ssl.client.auth=required
ssl.endpoint.identification.algorithm=
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.key.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
{code}
(I use here TLS 1.2 just so I can see client certificate in TLS handshake in 
traffic dump, you will get same error with default TLS 1.3 too)
** Repeat this config for another 2 brokers, changing id, listener port and 
certificate accordingly.
* Make basic client config (I use for it one of brokers' certificate):
{code}
security.protocol=SSL
ssl.key.password=changeme1
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
ssl.endpoint.identification.algorithm=
{code}
* Create usual local self-signed PKI for test
** generate self-signed CA certificate and private key. Place certificate in 
truststore.
** create keys for broker certificates and create requests from them as usual 
(I'll use here same subject for all brokers)
** create 2 certificates as usual
{code}
openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker1.csr -out broker1.crt
{code}
** Use "faketime" utility to make third certificate expire soon:
{code}
# the date here is some point yesterday, so the certificate will expire 10-15 minutes from now
faketime "2021-11-23 10:15" openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker2.csr -out broker2.crt
{code}
** create keystores from certificates and place them according to broker 
configs from earlier
* Run 3 brokers with your configs like
{code}
./bin/kafka-server-start.sh server2.properties
{code}
(I start it here without daemon mode to see logs right on terminal - just use 
"tmux" or something to run 3 brokers simultaneously)
** you can check that one broker certificate will expire soon with
{code}
openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
{code}
* Issue new certificate to replace the one which will expire soon, place it in a 
keystore and replace the old keystore with it.
* Use dynamic configuration to make the broker re-read the keystore.
* Wait until the old certificate expires. After that the broker fails to create new 
connections to the controller - the broker log will show errors ending with
{code}
... (kafka.server.BrokerToControllerRequestThread)
{code}
and controller log will show something like
{code}
INFO [SocketServer brokerId=1] Failed authentication with /127.0.0.1 (SSL 
handshake failed) (org.apache.kafka.common.network.Selector)
{code}
and if the broker with the expired and changed certificate was the controller itself, it 
even could not connect to itself.
* If you make traffic dump (and you use TLS 1.2 or less) then you will see that 
client tries to use old certificate.

Here is example of traffic dump, when broker with expired and dynamically 
changed certificate is current controller, so it can't connect to itself:  
[^failed-controller-single-session-2029.pcap.gz] 
In this example you will see that the "Server" uses the new certificate and the "Client" 
uses the old certificate - but it's the same broker!

  was:
h1. Problem
It seems, after updating a listener SSL certificate with a dynamic configuration 
update, the old certificate is somehow still used for the client SSL factory. Because 
of this, the broker fails to create a new connection to the controller after the old 
certificate expires.

h1. History
Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was changed with dyn

[jira] [Updated] (KAFKA-13474) Regression in dynamic update of broker client-side SSL factory

2021-11-23 Thread Igor Shipenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Shipenkov updated KAFKA-13474:
---
Summary: Regression in dynamic update of broker client-side SSL factory  
(was: Regression in dynamic update of client-side SSL factory)

> Regression in dynamic update of broker client-side SSL factory
> --
>
> Key: KAFKA-13474
> URL: https://issues.apache.org/jira/browse/KAFKA-13474
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.7.0, 2.7.2
>    Reporter: Igor Shipenkov
>Priority: Major
> Attachments: failed-controller-single-session-2029.pcap.gz
>
>
> h1. Problem
> It seems, after updating a listener SSL certificate with a dynamic configuration 
> update, the old certificate is somehow still used for the client SSL factory. Because 
> of this, the broker fails to create a new connection to the controller after the old 
> certificate expires.
> h1. History
> Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
> updating the certificate when it was changed with dynamic configuration. That bug 
> has been fixed in version 2.3 and I can confirm that dynamic update worked for us 
> with kafka 2.4. But now we have updated clusters to 2.7 and see this 
> (or at least a similar) problem again.
> h1. Affected versions
> First we've seen this on confluent 6.1.2, which (I think) is based on kafka 
> 2.7.0. Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce 
> the problem on them just fine.
> h1. How to reproduce
> * Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
> * Get vanilla version 2.7.2 (or 2.7.0) from 
> https://kafka.apache.org/downloads .
> * Make basic broker config like this (don't forget to actually create 
> log.dirs):
> {code}
> broker.id=1
> listeners=SSL://:9092
> advertised.listeners=SSL://localhost:9092
> log.dirs=/tmp/broker1/data
> zookeeper.connect=10.88.0.21:2181
> security.inter.broker.protocol=SSL
> ssl.protocol=TLSv1.2
> ssl.client.auth=required
> ssl.endpoint.identification.algorithm=
> ssl.keystore.type=PKCS12
> ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
> ssl.keystore.password=changeme1
> ssl.key.password=changeme1
> ssl.truststore.type=PKCS12
> ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
> ssl.truststore.password=changeme
> {code}
> (I use here TLS 1.2 just so I can see client certificate in TLS handshake, 
> you will get same error with default TLS 1.3 too)
> ** Repeat this config for another 2 brokers, changing id, listener port and 
> certificate accordingly.
> * Make basic client config (I use for it one of brokers' certificate):
> {code}
> security.protocol=SSL
> ssl.key.password=changeme1
> ssl.keystore.type=PKCS12
> ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
> ssl.keystore.password=changeme1
> ssl.truststore.type=PKCS12
> ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
> ssl.truststore.password=changeme
> ssl.endpoint.identification.algorithm=
> {code}
> * Create usual local self-signed PKI for test
> ** generate self-signed CA certificate and private key. Place certificate in 
> truststore.
> ** create keys for broker certificates and create requests from them as usual 
> (I'll use here same subject for all brokers)
> ** create 2 certificates as usual
> {code}
> openssl x509 \
>-req -CAcreateserial -days 1 \
>-CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
>-in broker1.csr -out broker1.crt
> {code}
> ** Use "faketime" utility to make third certificate expire soon:
> {code}
> # the date here is some point yesterday, so the certificate will expire 10-15 minutes from now
> faketime "2021-11-23 10:15" openssl x509 \
>-req -CAcreateserial -days 1 \
>-CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
>-in broker2.csr -out broker2.crt
> {code}
> ** create keystores from certificates and place them according to broker 
> configs from earlier
> * Run 3 brokers with your configs like
> {code}
> ./bin/kafka-server-start.sh server2.properties
> {code}
> (I start it here without daemon mode to see the logs right in the terminal - just use 
> "tmux" or something to run 3 brokers simultaneously)
> ** you can check that one broker certificate will expire soon with
> {code}
> openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
> {code}
> * Issue new certificate to replace one, which will expire soon, place it in 
> keystore and replac

[jira] [Updated] (KAFKA-13474) Regression in dynamic update of client-side SSL factory

2021-11-23 Thread Igor Shipenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Shipenkov updated KAFKA-13474:
---
Description: 
h1. Problem
It seems, after updating a listener SSL certificate with a dynamic configuration 
update, the old certificate is somehow still used for the client SSL factory. Because 
of this, the broker fails to create a new connection to the controller after the old 
certificate expires.

h1. History
Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was changed with dynamic configuration. That bug 
has been fixed in version 2.3 and I can confirm that dynamic update worked for us 
with kafka 2.4. But now we have updated clusters to 2.7 and see this (or at least 
a similar) problem again.

h1. Affected versions
First we've seen this on confluent 6.1.2, which (I think) is based on kafka 2.7.0. 
Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce the problem on 
them just fine.

h1. How to reproduce
* Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
* Get vanilla version 2.7.2 (or 2.7.0) from https://kafka.apache.org/downloads .
* Make basic broker config like this (don't forget to actually create log.dirs):
{code}
broker.id=1

listeners=SSL://:9092
advertised.listeners=SSL://localhost:9092

log.dirs=/tmp/broker1/data

zookeeper.connect=10.88.0.21:2181

security.inter.broker.protocol=SSL
ssl.protocol=TLSv1.2
ssl.client.auth=required
ssl.endpoint.identification.algorithm=
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.key.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
{code}
(I use here TLS 1.2 just so I can see client certificate in TLS handshake, you 
will get same error with default TLS 1.3 too)
** Repeat this config for another 2 brokers, changing id, listener port and 
certificate accordingly.
* Make basic client config (I use for it one of brokers' certificate):
{code}
security.protocol=SSL
ssl.key.password=changeme1
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
ssl.endpoint.identification.algorithm=
{code}
* Create usual local self-signed PKI for test
** generate self-signed CA certificate and private key. Place certificate in 
truststore.
** create keys for broker certificates and create requests from them as usual 
(I'll use here same subject for all brokers)
** create 2 certificates as usual
{code}
openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker1.csr -out broker1.crt
{code}
** Use "faketime" utility to make third certificate expire soon:
{code}
# the date here is some point yesterday, so the certificate will expire 10-15 minutes from now
faketime "2021-11-23 10:15" openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker2.csr -out broker2.crt
{code}
** create keystores from certificates and place them according to broker 
configs from earlier
* Run 3 brokers with your configs like
{code}
./bin/kafka-server-start.sh server2.properties
{code}
(I start it here without daemon mode to see the logs right in the terminal - just use 
"tmux" or something to run 3 brokers simultaneously)
** you can check that one broker certificate will expire soon with
{code}
openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
{code}
* Issue new certificate to replace the one which will expire soon, place it in a 
keystore and replace the old keystore with it.
* Use dynamic configuration to make the broker re-read the keystore.
* Wait until the old certificate expires. After that the broker fails to create new 
connections to the controller - the broker log will show errors ending with
{code}
... (kafka.server.BrokerToControllerRequestThread)
{code}
and controller log will show something like
{code}
INFO [SocketServer brokerId=1] Failed authentication with /127.0.0.1 (SSL 
handshake failed) (org.apache.kafka.common.network.Selector)
{code}
and if the broker with the expired and changed certificate was the controller itself, it 
even could not connect to itself.
* If you make traffic dump (and you use TLS 1.2 or less) then you will see that 
client tries to use old certificate.

Here is example of traffic dump, when broker with expired and dynamically 
changed certificate is current controller, so it can't connect to itself:  
[^failed-controller-single-session-2029.pcap.gz] 
In this example you will see that the "Server" uses the new certificate and the "Client" 
uses the old certificate - but it's the same broker!

  was:
h1. Problem
It seems, after updating a listener SSL certificate with a dynamic configuration 
update, the old certificate is somehow still used for the client SSL factory. Because 
of this, the broker fails to create a new connection to the controller after the old 
certificate expires.

h1. History
Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was changed with dynamic confi

[jira] [Updated] (KAFKA-13474) Regression in dynamic update of client-side SSL factory

2021-11-23 Thread Igor Shipenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Shipenkov updated KAFKA-13474:
---
Attachment: failed-controller-single-session-2029.pcap.gz

> Regression in dynamic update of client-side SSL factory
> ---
>
> Key: KAFKA-13474
> URL: https://issues.apache.org/jira/browse/KAFKA-13474
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.7.0, 2.7.2
>    Reporter: Igor Shipenkov
>Priority: Major
> Attachments: failed-controller-single-session-2029.pcap.gz
>
>
> h1. Problem
> It seems, after updating a listener SSL certificate with a dynamic configuration 
> update, the old certificate is somehow still used for the client SSL factory. Because 
> of this, the broker fails to create a new connection to the controller after the old 
> certificate expires.
> h1. History
> Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
> updating the certificate when it was changed with dynamic configuration. That bug 
> has been fixed in version 2.3 and I can confirm that dynamic update worked for us 
> with kafka 2.4. But now we have updated clusters to 2.7 and see this 
> (or at least a similar) problem again.
> h1. Affected versions
> First we've seen this on confluent 6.1.2, which (I think) is based on kafka 
> 2.7.0. Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce 
> the problem on them just fine.
> h1. How to reproduce
> * Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
> * Get vanilla version 2.7.2 (or 2.7.0) from 
> https://kafka.apache.org/downloads .
> * Make basic broker config like this (don't forget to actually create 
> log.dirs):
> {code}
> broker.id=1
> listeners=SSL://:9092
> advertised.listeners=SSL://localhost:9092
> log.dirs=/tmp/broker1/data
> zookeeper.connect=10.88.0.21:2181
> security.inter.broker.protocol=SSL
> ssl.protocol=TLSv1.2
> ssl.client.auth=required
> ssl.endpoint.identification.algorithm=
> ssl.keystore.type=PKCS12
> ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
> ssl.keystore.password=changeme1
> ssl.key.password=changeme1
> ssl.truststore.type=PKCS12
> ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
> ssl.truststore.password=changeme
> {code}
> (I use here TLS 1.2 just so I can see client certificate in TLS handshake, 
> you will get same error with default TLS 1.3 too)
> ** Repeat this config for another 2 brokers, changing id, listener port and 
> certificate accordingly.
> * Make basic client config (I use for it one of brokers' certificate):
> {code}
> security.protocol=SSL
> ssl.key.password=changeme1
> ssl.keystore.type=PKCS12
> ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
> ssl.keystore.password=changeme1
> ssl.truststore.type=PKCS12
> ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
> ssl.truststore.password=changeme
> ssl.endpoint.identification.algorithm=
> {code}
> * Create usual local self-signed PKI for test
> ** generate self-signed CA certificate and private key. Place certificate in 
> truststore.
> ** create keys for broker certificates and create requests from them as usual 
> (I'll use here same subject for all brokers)
> ** create 2 certificates as usual
> {code}
> openssl x509 \
>-req -CAcreateserial -days 1 \
>-CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
>-in broker1.csr -out broker1.crt
> {code}
> ** Use "faketime" utility to make third certificate expire soon:
> {code}
> # the date here is some point yesterday, so the certificate will expire 10-15 minutes from now
> faketime "2021-11-23 10:15" openssl x509 \
>-req -CAcreateserial -days 1 \
>-CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
>-in broker2.csr -out broker2.crt
> {code}
> ** create keystores from certificates and place them according to broker 
> configs from earlier
> * Run 3 brokers with your configs like
> {code}
> ./bin/kafka-server-start.sh server2.properties
> {code}
> (I start it here without daemon mode to see the logs right in the terminal - just use 
> "tmux" or something to run 3 brokers simultaneously)
> ** you can check that one broker certificate will expire soon with
> {code}
> openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
> {code}
> * Issue new certificate to replace one, which will expire soon, place it in 
> keystore and replace old keystore with it.
> * Use dynamic configuration to make broker re-read keystore:
&

[jira] [Created] (KAFKA-13474) Regression in dynamic update of client-side SSL factory

2021-11-23 Thread Igor Shipenkov (Jira)
Igor Shipenkov created KAFKA-13474:
--

 Summary: Regression in dynamic update of client-side SSL factory
 Key: KAFKA-13474
 URL: https://issues.apache.org/jira/browse/KAFKA-13474
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.7.2, 2.7.0
Reporter: Igor Shipenkov
 Attachments: failed-controller-single-session-2029.pcap.gz

h1. Problem
It seems, after updating a listener SSL certificate with a dynamic configuration 
update, the old certificate is somehow still used for the client SSL factory. Because 
of this, the broker fails to create a new connection to the controller after the old 
certificate expires.

h1. History
Back in KAFKA-8336 there was an issue where the client-side SSL factory wasn't 
updating the certificate when it was changed with dynamic configuration. That bug 
has been fixed in version 2.3 and I can confirm that dynamic update worked for us 
with kafka 2.4. But now we have updated clusters to 2.7 and see this (or at least 
a similar) problem again.

h1. Affected versions
First we've seen this on confluent 6.1.2, which (I think) is based on kafka 2.7.0. 
Then I tried vanilla versions 2.7.0 and 2.7.2 and can reproduce the problem on 
them just fine.

h1. How to reproduce
* Have zookeeper somewhere (in my example it will be "10.88.0.21:2181").
* Get vanilla version 2.7.2 (or 2.7.0) from https://kafka.apache.org/downloads .
* Make basic broker config like this (don't forget to actually create log.dirs):
{code}
broker.id=1

listeners=SSL://:9092
advertised.listeners=SSL://localhost:9092

log.dirs=/tmp/broker1/data

zookeeper.connect=10.88.0.21:2181

security.inter.broker.protocol=SSL
ssl.protocol=TLSv1.2
ssl.client.auth=required
ssl.endpoint.identification.algorithm=
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.key.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
{code}
(I use here TLS 1.2 just so I can see client certificate in TLS handshake, you 
will get same error with default TLS 1.3 too)
** Repeat this config for another 2 brokers, changing id, listener port and 
certificate accordingly.
* Make basic client config (I use for it one of brokers' certificate):
{code}
security.protocol=SSL
ssl.key.password=changeme1
ssl.keystore.type=PKCS12
ssl.keystore.location=/tmp/broker1/secrets/broker1.keystore.p12
ssl.keystore.password=changeme1
ssl.truststore.type=PKCS12
ssl.truststore.location=/tmp/broker1/secrets/truststore.p12
ssl.truststore.password=changeme
ssl.endpoint.identification.algorithm=
{code}
* Create usual local self-signed PKI for test
** generate self-signed CA certificate and private key. Place certificate in 
truststore.
** create keys for broker certificates and create requests from them as usual 
(I'll use here same subject for all brokers)
** create 2 certificates as usual
{code}
openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker1.csr -out broker1.crt
{code}
** Use "faketime" utility to make third certificate expire soon:
{code}
# the date here is some point yesterday, so the certificate will expire 10-15 minutes from now
faketime "2021-11-23 10:15" openssl x509 \
   -req -CAcreateserial -days 1 \
   -CA ca/ca-cert.pem -CAkey ca/ca-key.pem \
   -in broker2.csr -out broker2.crt
{code}
** create keystores from certificates and place them according to broker 
configs from earlier
* Run 3 brokers with your configs like
{code}
./bin/kafka-server-start.sh server2.properties
{code}
(I start it here without daemon mode to see the logs right in the terminal - just use 
"tmux" or something to run 3 brokers simultaneously)
** you can check that one broker certificate will expire soon with
{code}
openssl s_client -connect localhost:9093 </dev/null | openssl x509 -noout -text | grep -A2 Valid
{code}
* Issue new certificate to replace the one which will expire soon, place it in a 
keystore and replace the old keystore with it.
* Use dynamic configuration to make the broker re-read the keystore.
* Wait until the old certificate expires. After that the broker fails to create new 
connections to the controller - the broker log will show errors ending with
{code}
... (kafka.server.BrokerToControllerRequestThread)
{code}
and controller log will show something like
{code}
INFO [SocketServer brokerId=1] Failed authentication with /127.0.0.1 (SSL 
handshake failed) (org.apache.kafka.common.network.Selector)
{code}
and if the broker with the expired and changed certificate was the controller itself, it 
even could not connect to itself.
* If you make traffic dump (and you use TLS 1.2 or less) then you will see that 
client tries to use old certificate.

Here is example of traffic dump, when broker with expired and dynamically 
changed certificate is current controller, so it can't connect to itself:
In this example you will see that the "Server" uses the new certificate and the "Client" 
uses the old certificate - but it's the same broker!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)



Re: Ubuntu and 802x.1 on a switch

2011-02-25 Thread Igor Shipenkov
it's simply assumed that only those who know what 802.1x is will be able to help

though I do actually know what it is, the only way I can help is with the fact
that wpa_supplicant works with 802.1x (the WPA standard uses exactly
802.1x, just without wires).

On 25 February 2011 at 16:20, Илья Таскаев
ansusti...@gmail.com wrote:
 What kind of standard is that, anyway?
 Do you think everyone here is a know-it-all?

 On 25 February 2011 at 20:18, Тарас Перебейносов
 taras.perebeyno...@gmail.com wrote:

 802x.1


 --
 Best regards, Илья Таскаев

 --
 ubuntu-ru mailing list
 ubuntu-ru@lists.ubuntu.com
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


-- 
ubuntu-ru mailing list
ubuntu-ru@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


Re: Ubuntu and 802x.1 on a switch

2011-02-25 Thread Igor Shipenkov
By the way, I just looked into network-manager. There is a whole separate
tab there for configuring 802.1x, where you can specify the authentication
type, the username, the password, the certificate, and the certificate password.
I suspect there are no problems with 802.1x in Ubuntu.

On 25 February 2011 at 16:18, Тарас Перебейносов
taras.perebeyno...@gmail.com wrote:
 With the upcoming setup of 802x.1 authorization on the switch my Ubuntu
 machine is connected to, I'd like to know whether anyone has had experience
 with this standard.
 For Windows, certificate-based authorization is used; can this be wired up
 in Ubuntu?

 --
 ubuntu-ru mailing list
 ubuntu-ru@lists.ubuntu.com
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


-- 
ubuntu-ru mailing list
ubuntu-ru@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


Re: Recommend an OS for a weak PC with an RDP terminal client

2011-02-13 Thread Igor Shipenkov
If you install from the alternate disc, you can install a minimal set. And
from the server disc too.

On 13 February 2011 at 14:11, Сергей Сулимов
mail...@sulimovsv.mail.narod.ru wrote:
 By the way, can anyone tell me whether there is a console-only Ubuntu
 distribution, without X, desktops and other GUIs?

 On 13.02.2011 17:22, wrt writes:

 On 12.02.2011 13:01, locke314 writes:

 Minimalism in everything except the labor involved.

 Who would argue :)

 Best regards,
 Сергей Сулимов

 --
 ubuntu-ru mailing list
 ubuntu-ru@lists.ubuntu.com
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru

-- 
ubuntu-ru mailing list
ubuntu-ru@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


Re: DNS server responses are not cached

2010-11-18 Thread Igor Shipenkov
By default Ubuntu has no daemon that caches DNS responses. If you really
want one, you can install it - the package is called nscd
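For example, on a stock Ubuntu install that is just:

sudo apt-get install nscd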

On Thu, 18/11/2010 at 17:59 +0300, Nikola Krasnoyarsky wrote:
 Run 'tcpdump -ni eth0 port 53' in one console and 'dig +short 
 www.ru @8.8.8.8' in another.
 In the first one we see:
 17:57:08.363802 IP x.x.x.x.37036 > 8.8.8.8.53: 12747+ A? www.ru. (24)
 17:57:08.448849 IP 8.8.8.8.53 > x.x.x.x.37036: 12747 1/0/0 A 194.87.0.50 (40)
 that is, we sent a query to Google and got an answer. Now, if you run the 
 dig command again,
 you see exactly the same thing in the dump. Why aren't the results cached?
 
 
 



-- 
ubuntu-ru mailing list
ubuntu-ru@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


Re: Linux Exchange 2010

2010-11-15 Thread Igor Shipenkov
The MAPI plugin doesn't work with Russian (at the very least, folder names
turn into question marks), and the OWA plugin simply doesn't work with the
newer Exchange versions. You can patch them up yourself or wait until others do.
If all you need from Exchange is mail, the simplest option is exactly
that - use imap+smtp

On Sun, 14/11/2010 at 22:33 +0300, Taras Perebeynosov wrote:
 Is there any way to connect a Linux mail client to Exchange? IMAP and POP
 won't do. With 2007 it worked from Evolution - crooked, but it worked.
 With 2010 it doesn't work out... 
 
 
 



-- 
ubuntu-ru mailing list
ubuntu-ru@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


Re: An out-of-the-box video surveillance system with a web camera in Ubuntu 10.04

2010-08-23 Thread Igor Shipenkov
zoneminder, of course. It is practically professional video surveillance
software and satisfies exactly all of your requirements.

On Tue, 24/08/2010 at 00:25 +1000, Munko O. Bazarzhapov wrote:
 I'm looking for out-of-the-box video surveillance software for a web
 camera in Ubuntu 10.04,
 recording to disk only when the picture changes,
 with the ability to watch and/or control it over http.
 
 Maybe something like that exists?
 I can't come up with keywords to search the repository for it.
 
 LLC Master-Byte
 Munko O. Bazarzhapov
 JabberID: v...@aginskoe.ru
 ICQ:169245258
 mail: vec...@gmail.com



signature.asc
Description: This part of the message is digitally signed
-- 
ubuntu-ru mailing list
ubuntu-ru@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


Re: Gateway.

2010-08-21 Thread Igor Shipenkov
Does the gateway work only as NAT, or is there also a proxy?
Practice shows that bank-client applications work very poorly through a proxy;
we even have to make a separate rule for the computers running them to go
to the internet directly.

On Sun, 22/08/2010 at 01:21 +0300, Андрей Новосёлов wrote:
 Good day/night.
 The situation is this. A computer runs Ubuntu 10.04 and shares internet to
 the internal network. Mail flows, and so on. It turned out that the
 bank-client cannot connect from the internal network. The support staff
 claim that only port 443 is needed. Apparently the IP address issued over
 PPPoE is masqueraded, but no ports are deliberately blocked. The program
 is written entirely in Java. Has anyone run into this? Any hints on what
 else has not been done?
 Thanks in advance.
 



signature.asc
Description: This part of the message is digitally signed
-- 
ubuntu-ru mailing list
ubuntu-ru@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


Re: calendar

2010-08-12 Thread Igor Shipenkov
No, no - Google Calendar is better. It can also notify you about events via
SMS, and you can add entries to it from the console with googlecl.

On Thu, 12/08/2010 at 20:17 +0400, Сергей Блохин wrote:
 Google.Docs?
 
   I think OpenOffice has a calendar creation wizard; if your shifts don't
   change often, maybe that would be the simplest way?
  thanks, but I don't have OpenOffice, and installing it just for a calendar
  seems quite
  silly
  
 



signature.asc
Description: This part of the message is digitally signed
-- 
ubuntu-ru mailing list
ubuntu-ru@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


Re: calendar

2010-08-12 Thread Igor Shipenkov
Easily. Create an event 2 days long, choose daily repetition of the event
and specify every 4 days. Something like 5 mouse clicks. How many letters
the googlecl command would take - I haven't counted.

On Thu, 12/08/2010 at 20:22 +0400, Сергей Блохин wrote:
 And can you set an event there for every 4 days with a duration of 2 days?
 As I understand it, that is exactly and only what the topic starter needs?
 
 On 12.08.2010, 20:20, Igor Shipenkov ishipen...@gmail.com:
  No, no - Google Calendar is better. It can also notify you about events
  via SMS, and you can add entries to it from the console with googlecl.
 
  On Thu, 12/08/2010 at 20:17 +0400, Сергей Блохин wrote:
 
   Google.Docs?
    I think OpenOffice has a calendar creation wizard; if your shifts don't
    change often, maybe that would be the simplest way?
   thanks, but I don't have OpenOffice, and installing it just for a
  calendar seems quite
   silly
 
  --
  ubuntu-ru mailing list
  ubuntu-ru@lists.ubuntu.com
  https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru
 



signature.asc
Description: This part of the message is digitally signed
-- 
ubuntu-ru mailing list
ubuntu-ru@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


Re: Sleeping process in top

2010-07-07 Thread Igor Shipenkov
That is a normal process status.

On Wed, 07/07/2010 at 13:51 +0400, Людмила Бандурина wrote:
 Hello,
 
 
 I started a text search in files: grep -rl 'текст' /
 It has been running for several hours now and hasn't found anything yet.
 I look at top and see that the grep process is marked as S, its execution
 time sits at 0:00.06, and its other numbers do not change.
 Has it already hung, and should I kill it? Or is it still searching?
 
 Best regards, Людмила
 



signature.asc
Description: This part of the message is digitally signed
-- 
ubuntu-ru mailing list
ubuntu-ru@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


Re: The Dictionary program

2009-12-18 Thread Igor Shipenkov
It still works over the dict protocol, doesn't it? Here, for example, is a list
of dict protocol servers. Among them are a couple with English-Russian dictionaries.

http://www.luetzschena-stahmeln.de/dictd/index.php

On 18 December 2009 at 16:13, Владимир Бажанов
a...@dominion.dn.ua wrote:

 Good day. Does anyone use Dictionary, the program that ships out of the
 box? Today I got interested in how to add Russian dictionaries to it.
 Please share if anyone knows where such dictionaries can be found.


 --
 ubuntu-ru mailing list
 ubuntu-ru@lists.ubuntu.com
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru

-- 
ubuntu-ru mailing list
ubuntu-ru@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


Re: No drivers - no power?

2009-12-15 Thread Igor Shipenkov
Specifically for USB, you can add usb.autosuspend=1 to the boot parameters -
then power to USB will be switched off when it is not in use.

On 16 December 2009 at 4:57, Юрий Аполлов apoll...@gmail.com wrote:

 Then the question: how do I switch that power off?

 On 15 December 2009 at 23:55, locke314 locke...@gmail.com wrote:

 At the very least, USB is powered in any case. That is how things are on
 the computers (PCs) I have seen.
 --
 ubuntu-ru mailing list
 ubuntu-ru@lists.ubuntu.com
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru



 --
 ubuntu-ru mailing list
 ubuntu-ru@lists.ubuntu.com
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


-- 
ubuntu-ru mailing list
ubuntu-ru@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


Re: USB-COM adapter

2009-11-23 Thread Igor Shipenkov
Most such adapters are based on the pl2303 and ark3116 chips. Both of them
work fine in Linux, since kernel support has been there for a long time. I
myself used a data cable based on the ark3116 chip - it worked great. And
the fact that the Windows drivers for that chip came only on the bundled
disc and I could not find them on the internet - well, that is a Windows problem.

On 24 November 2009 at 12:07, Валерий Евгеньевич volers...@gmail.com
 wrote:

 I recommend using PCI Express - I have had fewer problems with those. Of
 the USB-to-COM ones, not all work under Windows. And a lot depends on the
 device you are going to connect.

 On 24 November 2009 at 8:48, Дмитрий Семенов 7no...@gmail.com wrote:

 Good day!

 I need a COM port. Naturally, the laptop does not have one.
 I googled - USB-COM adapters exist.
 So, has anyone actually used this wonder in Ubuntu?

 --
 ubuntu-ru mailing list
 ubuntu-ru@lists.ubuntu.com
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru




 --
 Best regards
 Sinyaev Valera
 vsiny...@voler.ru

 tel. +74963131047
 tel. +74957783073
 mob. +79265504470
 icq 238860819

 Project participant: www.traffpro.ru
 My site http://voler.ru/

 Installation and configuration of Linux servers. (Fedora, Ubuntu)
 Setting up the FTP, WWW, BILLING, SAMBA, SQUID+SQUIDGUARD+SARG, DNS,
 DHCP, MAIL, ATSLOG services


 Which distro do you use?


 --
 ubuntu-ru mailing list
 ubuntu-ru@lists.ubuntu.com
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


-- 
ubuntu-ru mailing list
ubuntu-ru@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


Re: terminal window size

2009-11-15 Thread Igor Shipenkov
You need to launch gnome-terminal with the --geometry=200x50 option (the man
page says "X geometry specification (see X man page), can be specified once
per window to be opened.", so you can also set the window position).
As far as I know, you cannot set the window geometry separately in the config.
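For example (the +0+0 position offsets are added just as an illustration):

gnome-terminal --geometry=200x50+0+0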

On 16 November 2009 at 12:02, Vladimir Smagin be...@ms.tusur.ru wrote:

 How do I make the terminal open by default not at 80x25 but at, say,
 200x50? gnome-terminal is used.











 --
 ubuntu-ru mailing list
 ubuntu-ru@lists.ubuntu.com
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru

-- 
ubuntu-ru mailing list
ubuntu-ru@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


Re: terminal window size

2009-11-15 Thread Igor Shipenkov
That's just the UI description - it also has <object class="GtkTable" id="table24">
and <object class="GtkVBox" id="vbox83"> (by the way, gnome-terminal opens at 80x24
by default, not 80x25, which is really annoying if you play ADoM).

On 16 November 2009 at 12:20, Блохин Серегей sblo...@yandex.ru wrote:

  I think this is what you need.

 $ cat /usr/share/gnome-terminal/profile-preferences.ui | grep 25
   <object class="GtkTable" id="table25">
 $ cat /usr/share/gnome-terminal/profile-preferences.ui | grep 80
   <object class="GtkVBox" id="vbox80">


  Original message 
 From: Vladimir Smagin be...@ms.tusur.ru
 Reply-to: ubuntu-ru@lists.ubuntu.com
 To: ubuntu-ru@lists.ubuntu.com
 Subject: terminal window size
 Date: Mon, 16 Nov 2009 12:02:51 +0600


 How do I make the terminal open by default not at 80x25 but at, say,
 200x50? gnome-terminal is used.













 --
 ubuntu-ru mailing list
 ubuntu-ru@lists.ubuntu.com
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


-- 
ubuntu-ru mailing list
ubuntu-ru@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-ru


Re: ADSL router on Linux

2009-11-10 Thread Igor Shipenkov
dd-wrt is a ready-made solution with a web interface (and even a paid pro
version). openwrt is more of a platform into which you install extra software
to taste; see the sketch below. In any case, they share the same code base.
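
A minimal sketch of adding software on an OpenWrt router (the package name
here is just an illustration):

  # refresh the package lists, then install something extra
  $ opkg update
  $ opkg install tcpdump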

On 11 November 2009 at 13:44, Кирилл Феоктистов fea...@mail.ru wrote:

 Hello, andrey i. mavlyanov! You wrote on 11.11.2009 9:35:

  Look at the list of devices that support the dd-wrt firmware, and pick
  from there.

 Folks, who is in the know: what are the essential differences between
 OpenWRT and DD-WRT?

 Best regards, Кирилл Феоктистов.
 (926) 46-958-46
 ICQ: 344652942




Re: Leftover *nix-ish questions)

2009-11-01 Thread Igor Shipenkov
No, it is not. It says i386 right there.
The 64-bit build is exactly the one called ubuntu-9.10-desktop-amd64.
That does not mean it is only for AMD: amd64 is the generic name of the
architecture (my own such system runs on a VIA processor, of all things).
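
A quick sketch for checking what a running system actually is:

  $ uname -m
  # x86_64 means a 64-bit (amd64) kernel; i686 or similar means 32-bit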

On Sun, 01/11/2009 at 18:59 +0300, Krosheninnikov Artem wrote:
 And yes, one last question. I downloaded ubuntu-9.10-desktop-i386; is this
 ISO 64-bit? I downloaded it via torrent; if I try downloading without the
 torrent, the only 64-bit version offered is amd64.
 





Re: The 9.10 release

2009-10-06 Thread Igor Shipenkov
For those who do not click links: 22 days until the release.

On 7 October 2009 at 12:04, Дмитрий Семенов 7no...@gmail.com wrote:

 Coming soon :)

 http://www.ubuntu.com/

 On 7 October 2009 at 8:51, Yuriy Vlasov m...@mail.ru wrote:

 Serge Matveenko writes:

  I cannot get video to work. I tried both on 8.04 and with the old
  Skype version from the medibuntu repository. The webcam itself works
  fine; verified with the cheese program.
 
  Has anyone run into this problem? Are there any known cures?
 
  I suggest waiting just a little for 9.10: a whole lot has changed there,
 wow, a whole lot!

 And when is the release expected?

 --
 [Team] Kalabaha
 The Ubuntu Counter Project - user number # 17409
 ICQ: 170701066  Skype: yura257

 All the best, Юра.






Re: Proxy server

2009-10-05 Thread Igor Shipenkov
squid
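
squid covers the HTTP side (it does not speak SOCKS). As a minimal squid.conf
sketch for the ACL and logging requirements; the network range and domain
below are invented examples:

  # allow the office network, but block a list of domains for it
  acl office src 192.168.1.0/24
  acl badsites dstdomain .example.com
  http_access deny office badsites
  http_access allow office
  http_access deny all
  # usage statistics go to the access log
  access_log /var/log/squid/access.log squid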

On 6 October 2009 at 10:42, Boris Schyolokov
metal-heart-alchem...@mail.kz wrote:

 Hello.

 Please suggest a proxy server with the following features:
 1. HTTP and SOCKS proxying, plus port mapping.
 2. Rules (denying access to particular resources) for individual
 users/user groups.
 3. Dumping logs (internet usage statistics) to a file.
 That is about it. Thanks in advance.

 --
 Best regards,
  Boris  mailto:metal-heart-alchem...@mail.kz





Re: The ubuntu-minimal package

2009-10-05 Thread Igor Shipenkov
It is a metapackage that pulls in a minimal set of software through its
dependencies.
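
To see exactly which dependency chain keeps a package around, a quick sketch
(aptitude is what the question below already uses):

  $ aptitude why ntpdate
  # prints the package(s) that depend on ntpdate, here ubuntu-minimal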

On 6 October 2009 at 11:17, James Brown jbrownfi...@gmail.com wrote:

 What is this package for?
 I tried to remove ntpdate, since I decided to install ntp in its place.
 aptitude demands removing ubuntu-minimal along with it.
 aptitude show ubuntu-minimal
 Package: ubuntu-minimal
 State: not installed
 Version: 1.140
 Priority: important
 Section: metapackages
 Maintainer: Matt Zimmerman m...@ubuntu.com
 Uncompressed size: 57.3k
 Depends: adduser, apt, apt-utils, bzip2, console-setup, debconf,
   dhcp3-client, eject, gnupg, ifupdown, initramfs-tools, iproute,
   iputils-ping, kbd, less, libc6-i686, locales, lsb-release, makedev,
   mawk, module-init-tools, net-tools, netbase, netcat, ntpdate, passwd,
   procps, python, startup-tasks, sudo, sysklogd, system-services,
   tasksel, tzdata, ubuntu-keyring, udev, upstart, upstart-compat-sysv,
   upstart-logd, vim-tiny, whiptail
 Description: Minimal core of Ubuntu
  This package depends on all of the packages in the Ubuntu minimal
 system, that
  is a functional command-line system with the following capabilities:

  * Boot
  * Detect hardware
  * Connect to a network
  * Install packages
  * Perform basic diagnostics

  It is also used to help ensure proper upgrades, so it is recommended
 that it
  not be removed.


 Judging by that, it is an important package and removing it is not
 recommended. ntpdate, however, refuses to be removed without it.
 I tried installing it separately from ntpdate after removing both of them,
 and it pulls ntpdate back in.
 What is this package, and what should I do in this situation?




Re: What can I use to benchmark a filesystem?

2009-09-29 Thread Igor Shipenkov
bonnie++

Judging by its description, it can test both raw hard-disk performance
and filesystem performance. A sketch of an invocation is below.
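
A sketch of a typical run (the mount point and user are examples; -s is the
test file size in megabytes and should be at least twice your RAM so the
page cache does not skew the results):

  $ bonnie++ -d /mnt/test -s 4096 -u nobody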

 On 30 September 2009 at 3:39, Юрий Аполлов apoll...@gmail.com wrote:
 I want to test different filesystems, different kernels, different
 settings, different everything. I am interested specifically in filesystem
 performance, not the disk as such (that I test with dd). Any suggestions?



Re: Mail server migration

2009-09-15 Thread Igor Shipenkov
Right, they never answered, because exim and postfix are MTAs; IMAP is
served by entirely different daemons.

On 16 September 2009 at 11:09, Холин Сергей
zergiu...@yandex.ru wrote:


   Listen, gentlemen, has even one of the posters here EVER ONCE read
   what postfix and exim actually do?
  
   Neither of them has EVER answered on IMAP!!
   Never!!!
   So it makes absolutely no difference... :-))
  
  
 What do you mean? Never answered on IMAP, as in they never could work
 with it? Or is this something beyond my knowledge of *nix?




Re: Mail server migration

2009-09-15 Thread Igor Shipenkov
Since a mail server splits into the daemon that lets clients fetch their
mail and the MTA, you can swap the MTA while keeping the very same POP3 or
IMAP daemon; in that case the mailboxes can most likely be migrated by
plain copying (after fixing the configs in the right places), as sketched
below.
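
A sketch of such a copy, assuming Maildir-format mailboxes under /var/mail
(both paths and the host name are made-up examples); -a preserves ownership
and timestamps, which the POP3/IMAP daemon relies on:

  $ rsync -a /var/mail/ newhost:/var/mail/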

On 16 September 2009 at 11:56, Сергей Холин
zergiu...@yandex.ru wrote:

 Igor Shipenkov writes:
  Right, they never answered, because exim and postfix are MTAs; IMAP is
  served by entirely different daemons.

 Okay, off to google to read up, and thanks for pointing me the right way;
 but seriously, is such a migration possible?

 --
 Best regards,.--.
 Холин Сергей Александрович,|@_@ |
 IT specialist, ООО коди-Маркет|!_/ |
 Tel. 8-8142-672-000 ext. 220  //   \ \
 (| | )
/'\_   _/`\
\___)=(___/




Re: a radmin client for Ubuntu

2009-08-28 Thread Igor Shipenkov
No.

On 28 August 2009 at 14:48, SpeedFreak speedfreak2...@ya.ru wrote:

  Does such a thing even exist? :-!





Re: Where to find gmodule-2.0 (Ubuntu 8.10)?

2009-08-03 Thread Igor Shipenkov
GModule is part of GLib; as far as I understand, you need libglib2.0-dev to
build against it.
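
A sketch of fixing the build on Ubuntu (the pkg-config module name
gmodule-2.0 comes straight from the error in the question):

  $ sudo apt-get install libglib2.0-dev
  # verify that the build system can now find it
  $ pkg-config --modversion gmodule-2.0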

On 3 August 2009 at 15:12, Евсюков Денис denis.evsyu...@gmail.com
wrote:

 The eternal Ubuntu problem? Which repo to add and which package to install?
 So many problems because of this architecture...

 On 3 August 2009 at 13:09, Denis Yurashkou
 dayfu...@gmail.com wrote:
 
  The question is exactly that.
 
  I tried to build mc-4.7.0 and got an error saying that the package
 gmodule-2.0 is missing.
  Where do I look for it?
  I have already dug everywhere... :(

 --
 Евсюков Денис Анатольевич
 ICQ: 168 043 475, JID: juev(at)jabber.ru
 Registered Linux User #442 821



Re: A Firefox question.

2009-06-29 Thread Igor Shipenkov
For spell checking, Opera has been able to use aspell since about version 8.
Extending its functionality is another matter: something can be done, of
course, but it is hard.

On 30 June 2009 at 11:07, Kirill Shatalaev
kir...@samaranet.info wrote:


  Sorry, off-topic of course, but it works :)
  If it does not strictly have to be Firefox, try opera.
  I have been testing three editions of the 10 series for a week now. No
  complaints so far, apart from an odd RSS problem in opera's turbo edition.

 Is there spell checking in the 10 series, and something like firebug?






Re: Using Firefox without a mouse

2009-06-28 Thread Igor Shipenkov
First, there is the vimperator extension: controlling firefox vim-style.
Second, there is the conkeror extension (though it seems to have grown into
a separate browser by now; do not confuse it with KDE's konqueror):
controlling the browser emacs-style.

On 29 June 2009 at 9:32, pingwi...@mail.ru wrote:

 Is it possible to use FF without a mouse? Maybe you could suggest an
 add-on that, for example, highlights links with digits or letters when you
 press a key combination (I think I saw something like that in konqueror).



Re: Hardy. Rsync and ifconfig traffic readings

2009-06-18 Thread Igor Shipenkov
Yes, it really is an overflow; see the sketch below.
I ran into this myself back in the day, and my counter wrapped right around
the 4-gigabyte mark.
I do not know the details, though; perhaps there is some kernel parameter
that sets the counter width before it wraps.
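
The 4 GB mark fits a 32-bit counter exactly: 2^32 bytes = 4294967296 bytes
= 4 GiB. As a sketch of a workaround (eth0 is an example interface name),
the sysfs statistics can be read directly; on many kernels they are wider
than the 32-bit values ifconfig displays:

  $ cat /sys/class/net/eth0/statistics/rx_bytes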

On 19 June 2009 at 12:34, San_Sanych ssan...@gmail.com wrote:

 Too lazy to dig into the code, but could it simply be a long int overflow?

 --
 Александр Вайтехович
 www: http://sanych.nnov.ru
 jabber: sanych{a}sanych.nnov.ru


