Re: a little problem in quickstart

2022-06-26 Thread Men Lim
You don't need to put the jar file name in the plugin.path variable.
Use something like plugin.path=/kafka/plugin, then put the jar file in that
plugin folder. Restart the worker and it will pick it up.

On Sun, Jun 26, 2022 at 8:04 AM mason lee  wrote:

>  Hi, I’m new to Kafka and I could not pass step 6 in
> https://kafka.apache.org/quickstart. Finally I found that the word ‘lib’
> in 'echo "plugin.path=lib/connect-file-3.2.0.jar’ should be ‘libs’.
> It bothered me for a while; I think a change would be better.
>
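
For reference, the corrected step 6 line then becomes the following; a
sketch, assuming the Kafka 3.2.0 tarball layout (where the file connector
jar ships in libs/, not lib/) and the quickstart's
connect-standalone.properties as the target file:

echo "plugin.path=libs/connect-file-3.2.0.jar" >> config/connect-standalone.properties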


Re: securing sasl/scram username and password in kafka connect

2022-03-08 Thread Men Lim
(*#&(*&#($&(Q#Q #EQ$#!@#

I got it figured out.  I really have to read the error message more
carefully!  The error is:

Unable to connect: Access denied for user '${file:/app/data/cred/
*connector_credentials.prop*'@'172.x.x.x' (using password: YES)

*The file name was changed from connector_credentials.prop to
connector_credentials.properties!*  When I ran ps -aux | grep java, I
saw 2 PIDs running the distributor; not sure how, but there they were.  I
killed both, checked all the files to make sure they all say
connector_credentials.properties, restarted the distributor and connector,
and it is working now.

:bang head on table:
Thanks for your help Chris and Martin.
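
A quick way to spot the duplicate workers described above; a sketch, which
assumes the stock distributed-mode launcher so that each worker's Java
command line contains the ConnectDistributed class and its properties file:

# list running distributed workers (either grep works as a check)
ps aux | grep ConnectDistributed
ps aux | grep connect-distributed.properties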


On Tue, Mar 8, 2022 at 8:01 AM Men Lim  wrote:

> Hi Martin,
>
> the owner of the file is 'adm'.  I have switched to the user 'adm' and am
> executing everything under that credential.  Which portion of Chris'
> instructions are you referring to?
>
> thanks,
>
> On Tue, Mar 8, 2022 at 4:13 AM Martin Gainty  wrote:
>
>> Hi Men
>>
>> UNIX / Linux Find File Owner Name - nixCraft (cyberciti.biz)<
>> https://www.cyberciti.biz/faq/unix-linux-find-file-owner-name/>
>> once you know who created your file
>> file:/app/data/cred/connector_credentials.prop
>> you will need to change credentials as the owner of the file
>>
>> then follow chris' instructions
>>
>> 
>> From: Chris Egerton 
>> Sent: Monday, March 7, 2022 4:48 PM
>> To: users@kafka.apache.org 
>> Subject: Re: securing sasl/scram username and password in kafka connect
>>
>> It looks like the file config provider isn't actually set up on the
>> Connect
>> worker. What does your Connect worker config look like (usually a file
>> called something like connect-distributed.properties)? Feel free to change
>> any sensitive values to a string like "", but please don't
>> remove
>> them entirely (they may be necessary for debugging).
>>
>> On Mon, Mar 7, 2022 at 4:39 PM Men Lim  wrote:
>>
>> > Thanks for the response Chris.  I went through the setup again and it
>> > appeared I might have had a typo somewhere last Friday.  Currently, I'm
>> > running into a file permission issue.
>> >
>> > the file has the following permissions:
>> >
>> > -rw-r--r-- 1 adm admn 88 Mar  7 21:23 connector_credentials.properties
>> >
>> > I have tried changing the permissions to 700 but still get the same error:
>> >
>> > Unable to connect: Access denied for user
>> > '${file:/app/data/cred/connector_credentials.prop'@'172.x.x.x' (using
>> > password: YES)
>> >
>> > On Mon, Mar 7, 2022 at 1:55 PM Chris Egerton 
>> > wrote:
>> >
>> > > Hi Men,
>> > >
>> > > That config snippet has a small syntax error: all double quotes
>> should be
>> > > escaped. Assuming you tried something like this:
>> > >
>> > > "database.history.producer.sasl.jaas.config":
>> > > "org.apache.kafka.common.security.scram.ScramLoginModule required
>> > > username=\"${file:/path/file.pro:user\"} password=\"${file:/path/
>> > file.pro
>> > > :password}\";"
>> > >
>> > > and still ran into issues, we'd probably need to see log files or, at
>> the
>> > > very least, the stack trace for the task from the REST API (if it
>> failed
>> > at
>> > > all) in order to follow up and provide more help.
>> > >
>> > > Cheers,
>> > >
>> > > Chris
>> > >
>> > > On Mon, Mar 7, 2022 at 3:26 PM Men Lim  wrote:
>> > >
>> > > > Hi Chris,
>> > > > I was getting an unauthorized/authentication error message when I
>> was
>> > > > trying it out last Friday.  I tried looking for the exact message in
>> > the
>> > > > connect.log.* files but was not very successful.  In my connector
>> > file, I
>> > > > have
>> > > >
>> > > > {
>> > > >  "name":"blah",
>> > > >  "config": {
>> > > >  ...
>> > > >  ...
>> > > >  "database.history.producer.sasl.jaas.config":
>> > > > "org.apache.kafka.common.security.scram.ScramLoginModule required
>> > > > username=\"000\" password=\"00\

Re: securing sasl/scram username and password in kafka connect

2022-03-08 Thread Men Lim
Hi Martin,

the owner of the file is 'adm'.  I have switched to the user 'adm' and am
executing everything under that credential.  Which portion of Chris'
instructions are you referring to?

thanks,

On Tue, Mar 8, 2022 at 4:13 AM Martin Gainty  wrote:

> Hi Men
>
> UNIX / Linux Find File Owner Name - nixCraft (cyberciti.biz)<
> https://www.cyberciti.biz/faq/unix-linux-find-file-owner-name/>
> once you know who created your file
> file:/app/data/cred/connector_credentials.prop
> you will need to change credentials as the owner of the file
>
> then follow chris' instructions
>
> 
> From: Chris Egerton 
> Sent: Monday, March 7, 2022 4:48 PM
> To: users@kafka.apache.org 
> Subject: Re: securing sasl/scram username and password in kafka connect
>
> It looks like the file config provider isn't actually set up on the Connect
> worker. What does your Connect worker config look like (usually a file
> called something like connect-distributed.properties)? Feel free to change
> any sensitive values to a string like "", but please don't remove
> them entirely (they may be necessary for debugging).
>
> On Mon, Mar 7, 2022 at 4:39 PM Men Lim  wrote:
>
> > Thanks for the response Chris.  I went through the setup again and it
> > appeared I might have had a typo somewhere last Friday.  Currently, I'm
> > running into a file permission issue.
> >
> > the file has the following permissions:
> >
> > -rw-r--r-- 1 adm admn 88 Mar  7 21:23 connector_credentials.properties
> >
> > I have tried changing the permissions to 700 but still get the same error:
> >
> > Unable to connect: Access denied for user
> > '${file:/app/data/cred/connector_credentials.prop'@'172.x.x.x' (using
> > password: YES)
> >
> > On Mon, Mar 7, 2022 at 1:55 PM Chris Egerton 
> > wrote:
> >
> > > Hi Men,
> > >
> > > That config snippet has a small syntax error: all double quotes should
> be
> > > escaped. Assuming you tried something like this:
> > >
> > > "database.history.producer.sasl.jaas.config":
> > > "org.apache.kafka.common.security.scram.ScramLoginModule required
> > > username=\"${file:/path/file.pro:user\"} password=\"${file:/path/
> > file.pro
> > > :password}\";"
> > >
> > > and still ran into issues, we'd probably need to see log files or, at
> the
> > > very least, the stack trace for the task from the REST API (if it
> failed
> > at
> > > all) in order to follow up and provide more help.
> > >
> > > Cheers,
> > >
> > > Chris
> > >
> > > On Mon, Mar 7, 2022 at 3:26 PM Men Lim  wrote:
> > >
> > > > Hi Chris,
> > > > I was getting an unauthorized/authentication error message when I was
> > > > trying it out last Friday.  I tried looking for the exact message in
> > the
> > > > connect.log.* files but was not very successful.  In my connector
> > file, I
> > > > have
> > > >
> > > > {
> > > >  "name":"blah",
> > > >  "config": {
> > > >  ...
> > > >  ...
> > > >  "database.history.producer.sasl.jaas.config":
> > > > "org.apache.kafka.common.security.scram.ScramLoginModule required
> > > > username=\"000\" password=\"00\";",
> > > >  ...
> > > >   }
> > > > }
> > > >
> > > > I changed the database.history.producer.sasl.jaas.config to:
> > > >
> > > > "database.history.producer.sasl.jaas.config":
> > > > "org.apache.kafka.common.security.scram.ScramLoginModule required
> > > > username="${file:/path/file.pro:user"} password="${file:/path/
> file.pro
> > :
> > > > password}";",
> > > >
> > > > On Mon, Mar 7, 2022 at 9:46 AM Chris Egerton <
> fearthecel...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi Men,
> > > > >
> > > > > The config provider mechanism should work for every property in a
> > > > connector
> > > > > config, and every property in a worker config except for the
> > > plugin.path
> > > > > property (see KAFKA-9845 [1]). You can also use it for only part
> of a
> > > > > single property, or even multiple parts, like 

Re: securing sasl/scram username and password in kafka connect

2022-03-07 Thread Men Lim
Chris, here's the content of the files

## distributor file:

bootstrap.servers=broker:9096
group.id=dbz-dev

key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false

offset.storage.topic=connect-offsets
offset.storage.replication.factor=3
offset.storage.partitions=3

config.storage.topic=connect-configs
config.storage.replication.factor=3

status.storage.topic=connect-status
status.storage.replication.factor=3

# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=1
rest.host.name=fqdn
rest.port=8083
rest.advertised.host.name=fqdn
rest.advertised.port=8083

sasl.mechanism=SCRAM-SHA-512
request.timeout.ms=2
retry.backoff.ms=500

config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule
required \
username="${file:/app/data/cred/connector_credentials.properties:kuser}"
password="${file:/app/data/cred/connector_credentials.properties:kpassword}";
security.protocol=SASL_SSL

consumer.sasl.mechanism=SCRAM-SHA-512
consumer.request.timeout.ms=30
consumer.retry.backoff.ms=500
consumer.buffer.memory=2097152
consumer.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule
required \
username="${file:/app/data/cred/connector_credentials.properties:kuser}"
password="${file:/app/data/cred/connector_credentials.properties:kpassword}";
consumer.security.protocol=SASL_SSL

producer.sasl.mechanism=SCRAM-SHA-512
producer.request.timeout.ms=30
producer.retry.backoff.ms=500
producer.buffer.memory=2097152
producer.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule
required \
username="${file:/app/data/cred/connector_credentials.properties:kuser}"
password="${file:/app/data/cred/connector_credentials.properties:kpassword}";
producer.security.protocol=SASL_SSL

plugin.path=/app/kafka/plugins
## eof
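
For reference, the credentials file referenced by the placeholders above is
plain key=value pairs; a sketch only, with the key names (kuser, kpassword,
user, password) taken from the configs in this message and the values
site-specific:

# /app/data/cred/connector_credentials.properties
kuser=broker_sasl_username
kpassword=broker_sasl_password
user=database_username
password=database_password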

## connector file
{
  "name": "dbz-panamax-list-domain-general-01",
  "config": {
  "auto.create.topics": "false",
  "binlog.buffer.size": "4048",
  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
  "database.history.consumer.sasl.jaas.config":
"org.apache.kafka.common.security.scram.ScramLoginModule required
username=\"${file:/app/data/cred/connector_credentials.properties:kuser}\"
password=\"${file:/app/data/cred/connector_credentials.properties:kpassword}\";",
  "database.history.consumer.sasl.mechanism": "SCRAM-SHA-512",
  "database.history.consumer.security.protocol": "SASL_SSL",
  "database.history.kafka.bootstrap.servers": "broker:9096",
  "database.history.kafka.topic": "dbhistory.db",
  "database.history.producer.sasl.jaas.config":
"org.apache.kafka.common.security.scram.ScramLoginModule required
username=\"${file:/app/data/cred/connector_credentials.properties:kuser}\"
password=\"${file:/app/data/cred/connector_credentials.properties:kpassword}\";",
  "database.history.producer.sasl.mechanism": "SCRAM-SHA-512",
  "database.history.producer.security.protocol": "SASL_SSL",
  "database.hostname": "host",
  "database.include.list": "db_name",
  "database.password":
"${file:/app/data/cred/connector_credentials.properties:password}",
  "database.port": "9908",
  "database.server.name": "server_name",
  "database.user":
"${file:/app/data/cred/connector_credentials.properties:user}",
  "errors.log.enable": "true",
  "errors.log.include.messages": "true",
  "errors.tolerance": "all",
  "include.schema.changes": "false",
  "signal.data.collection": "dbz.debezium_signal",
  "snapshot.locking.mode": "minimal",
  "snapshot.mode": "initial",
  "table.include.list":
"list.lr_cust_extrnl_prod,list.lr_cust_vndr_info",
  "tasks.max": "1",
  "timestampConverter.format.datetime": "-MM-dd'T'HH:mm:ss.SSS'Z'",
  "timestampConverter.type":
"oryanmoshe.kafka.connect.util.TimestampConverter",
  "transforms.Reroute.key.enforce.uniqueness": "false",
  "transforms.Reroute.topic.regex": "(.*)",
  "transforms.Rerout

Re: securing sasl/scram username and password in kafka connect

2022-03-07 Thread Men Lim
Thanks for the response Chris.  I went through the setup again and it appeared
I might have had a typo somewhere last Friday.  Currently, I'm running into
a file permission issue.

the file has the following permissions:

-rw-r--r-- 1 adm admn 88 Mar  7 21:23 connector_credentials.properties

I have tried changing the permissions to 700 but still get the same error:

Unable to connect: Access denied for user
'${file:/app/data/cred/connector_credentials.prop'@'172.x.x.x' (using
password: YES)

On Mon, Mar 7, 2022 at 1:55 PM Chris Egerton 
wrote:

> Hi Men,
>
> That config snippet has a small syntax error: all double quotes should be
> escaped. Assuming you tried something like this:
>
> "database.history.producer.sasl.jaas.config":
> "org.apache.kafka.common.security.scram.ScramLoginModule required
> username=\"${file:/path/file.pro:user\"} password=\"${file:/path/file.pro
> :password}\";"
>
> and still ran into issues, we'd probably need to see log files or, at the
> very least, the stack trace for the task from the REST API (if it failed at
> all) in order to follow up and provide more help.
>
> Cheers,
>
> Chris
>
> On Mon, Mar 7, 2022 at 3:26 PM Men Lim  wrote:
>
> > Hi Chris,
> > I was getting an unauthorized/authentication error message when I was
> > trying it out last Friday.  I tried looking for the exact message in the
> > connect.log.* files but was not very successful.  In my connector file, I
> > have
> >
> > {
> >  "name":"blah",
> >  "config": {
> >  ...
> >  ...
> >  "database.history.producer.sasl.jaas.config":
> > "org.apache.kafka.common.security.scram.ScramLoginModule required
> > username=\"000\" password=\"00\";",
> >  ...
> >   }
> > }
> >
> > I changed the database.history.producer.sasl.jaas.config to:
> >
> > "database.history.producer.sasl.jaas.config":
> > "org.apache.kafka.common.security.scram.ScramLoginModule required
> > username="${file:/path/file.pro:user"} password="${file:/path/file.pro:
> > password}";",
> >
> > On Mon, Mar 7, 2022 at 9:46 AM Chris Egerton 
> > wrote:
> >
> > > Hi Men,
> > >
> > > The config provider mechanism should work for every property in a
> > connector
> > > config, and every property in a worker config except for the
> plugin.path
> > > property (see KAFKA-9845 [1]). You can also use it for only part of a
> > > single property, or even multiple parts, like in this example
> (assuming a
> > > config provider named "file"):
> > >
> > >
> sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule
> > > required username="${file:/some/file.properties:username}"
> > > password="${file:/some/file.properties:password}"
> > >
> > > What sorts of errors are you seeing when trying to use a config
> provider
> > > with sasl/scram credentials?
> > >
> > > [1] - https://issues.apache.org/jira/browse/KAFKA-9845
> > >
> > > Cheers,
> > >
> > > Chris
> > >
> > > On Mon, Mar 7, 2022 at 10:35 AM Men Lim  wrote:
> > >
> > > > Hi all,
> > > >
> > > > recently, I found out about
> > > >
> > > > config.providers=file
> > > >
> > > >
> > > >
> > >
> >
> config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
> > > >
> > > > This works great for moving our embedded database password into an
> > > > external file.  However, it does not work when I try to do the same
> > > > thing with the SASL/SCRAM username and password found in the
> > > > distributor or connector file for Kafka Connect:
> > > >
> > > >
> > sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule
> > > > required \
> > > > username="000" password="some_password";
> > > >
> > > > I was wondering if there's a way to secure these passwords as well?
> > > >
> > > > Thanks,
> > > >
> > >
> >
>


Re: securing sasl/scram username and password in kafka connect

2022-03-07 Thread Men Lim
Hi Chris,
I was getting an unauthorized/authentication error message when I was
trying it out last Friday.  I tried looking for the exact message in the
connect.log.* files but was not very successful.  In my connector file, I
have

{
 "name":"blah",
 "config": {
 ...
 ...
 "database.history.producer.sasl.jaas.config":
"org.apache.kafka.common.security.scram.ScramLoginModule required
username=\"000\" password=\"00\";",
 ...
  }
}

I changed the database.history.producer.sasl.jaas.config to:

"database.history.producer.sasl.jaas.config":
"org.apache.kafka.common.security.scram.ScramLoginModule required
username="${file:/path/file.pro:user"} password="${file:/path/file.pro:
password}";",

On Mon, Mar 7, 2022 at 9:46 AM Chris Egerton 
wrote:

> Hi Men,
>
> The config provider mechanism should work for every property in a connector
> config, and every property in a worker config except for the plugin.path
> property (see KAFKA-9845 [1]). You can also use it for only part of a
> single property, or even multiple parts, like in this example (assuming a
> config provider named "file"):
>
> sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule
> required username="${file:/some/file.properties:username}"
> password="${file:/some/file.properties:password}"
>
> What sorts of errors are you seeing when trying to use a config provider
> with sasl/scram credentials?
>
> [1] - https://issues.apache.org/jira/browse/KAFKA-9845
>
> Cheers,
>
> Chris
>
> On Mon, Mar 7, 2022 at 10:35 AM Men Lim  wrote:
>
> > Hi all,
> >
> > recently, I found out about
> >
> > config.providers=file
> >
> >
> >
> config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
> >
> > This works great for moving our embedded database password into an
> > external file.  However, it does not work when I try to do the same thing
> > with the SASL/SCRAM username and password found in the distributor or
> > connector file for Kafka Connect:
> >
> > sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule
> > required \
> > username="000" password="some_password";
> >
> > I was wondering if there's a way to secure these passwords as well?
> >
> > Thanks,
> >
>


securing sasl/scram username and password in kafka connect

2022-03-07 Thread Men Lim
Hi all,

recently, I found out about

config.providers=file

config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

This works great for moving our embedded database password into an external
file.  However, it does not work when I try to do the same thing with the
SASL/SCRAM username and password found in the distributor or connector file
for Kafka Connect:

sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule
required \
username="000" password="some_password";

I was wondering if there's a way to secure these passwords as well?

Thanks,
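
For reference, the externalized form this thread eventually converges on
(shown in full in the distributor file earlier in this digest) looks like
the following; a sketch, assuming a config provider named "file" and a
credentials file holding kuser/kpassword keys:

sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="${file:/app/data/cred/connector_credentials.properties:kuser}" \
password="${file:/app/data/cred/connector_credentials.properties:kpassword}";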


Re: [ANNOUNCE] Apache Kafka 3.1.0

2022-02-02 Thread Men Lim
Yes, thank you Israel.  rmoff also clarified things for me over on Reddit.

On Wed, Jan 26, 2022 at 6:41 PM Israel Ekpo  wrote:

> Kafka 3.x is production ready if you are running it with Zookeeper
>
> If you are running it in KRaft mode without Zookeeper, that setup is not
> yet recommended for production scenarios.
>
> I hope this clarifies your concerns
>
>
>
> On Wed, Jan 26, 2022 at 9:50 AM Men Lim  wrote:
>
> > I'm curious: at what point will 3.x be production ready?
> >
> > On Mon, Jan 24, 2022 at 10:04 AM David Jacot  wrote:
> >
> > > The Apache Kafka community is pleased to announce the release for
> > > Apache Kafka 3.1.0.
> > >
> > > It is a major release that includes many new features, including:
> > >
> > > * Apache Kafka supports Java 17
> > > * The FetchRequest supports Topic IDs (KIP-516)
> > > * Extend SASL/OAUTHBEARER with support for OIDC (KIP-768)
> > > * Add broker count metrics (KIP-748)
> > > * Differentiate consistently metric latency measured in millis and
> > > nanos (KIP-773)
> > > * The eager rebalance protocol is deprecated (KAFKA-13439)
> > > * Add TaskId field to StreamsException (KIP-783)
> > > * Custom partitioners in foreign-key joins (KIP-775)
> > > * Fetch/findSessions queries with open endpoints for
> > > SessionStore/WindowStore (KIP-766)
> > > * Range queries with open endpoints (KIP-763)
> > > * Add total blocked time metric to Streams (KIP-761)
> > > * Add additional configuration to control MirrorMaker2 internal topics
> > > naming convention (KIP-690)
> > >
> > > You may read a more detailed list of features in the 3.1.0 blog post:
> > > https://blogs.apache.org/kafka/
> > >
> > > All of the changes in this release can be found in the release notes:
> > > https://www.apache.org/dist/kafka/3.1.0/RELEASE_NOTES.html
> > >
> > > You can download the source and binary release (Scala 2.12 and 2.13)
> > from:
> > > https://kafka.apache.org/downloads#3.1.0
> > >
> > >
> > >
> >
> ---
> > >
> > >
> > > Apache Kafka is a distributed streaming platform with four core APIs:
> > >
> > > ** The Producer API allows an application to publish a stream of
> records
> > to
> > > one or more Kafka topics.
> > >
> > > ** The Consumer API allows an application to subscribe to one or more
> > > topics and process the stream of records produced to them.
> > >
> > > ** The Streams API allows an application to act as a stream processor,
> > > consuming an input stream from one or more topics and producing an
> > > output stream to one or more output topics, effectively transforming
> the
> > > input streams to output streams.
> > >
> > > ** The Connector API allows building and running reusable producers or
> > > consumers that connect Kafka topics to existing applications or data
> > > systems. For example, a connector to a relational database might
> > > capture every change to a table.
> > >
> > >
> > > With these APIs, Kafka can be used for two broad classes of
> application:
> > >
> > > ** Building real-time streaming data pipelines that reliably get data
> > > between systems or applications.
> > >
> > > ** Building real-time streaming applications that transform or react
> > > to the streams of data.
> > >
> > >
> > > Apache Kafka is in use at large and small companies worldwide,
> including
> > > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest,
> Rabobank,
> > > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > >
> > > A big thank you for the following 114 contributors to this release!
> > >
> > > A. Sophie Blee-Goldman, Alexander Iskuskov, Alexander Stohr, Almog
> > > Gavra, Andras Katona, Andrew Patterson, Andy Chambers, Andy Lapidas,
> > > Anna Sophie Blee-Goldman, Antony Stubbs, Arjun Satish, Bill Bejeck,
> > > Boyang Chen, Bruno Cadonna, CHUN-HAO TANG, Cheng Tan, Chia-Ping Tsai,
> > > Chris Egerton, Christo Lolov, Colin P. McCabe, Cong Ding, Daniel
> > > Urban, David Arthur, David Jacot, David Mao, Dmitriy Fishman, Edoardo
> > > Comar, Ewen Cheslack-Postava, Greg Harris, Guozhang Wang, Igor Soarez,
> > > Ismael Juma, Israel Ekpo, Ivan Ponomarev, Jakub Scholz, James Galasyn,

Re: Log4j 1.2

2022-01-26 Thread Men Lim
Thanks Ed.

On Mon, Jan 24, 2022 at 2:21 PM Edward Capriolo 
wrote:

> In general you can delete the log4j 1.2 jar and
> replace it with log4j-core-2.17.1.jar
> and log4j-api-2.17.1.jar.
>
> Ed
>
> On Monday, January 24, 2022, Men Lim  wrote:
>
> > Is there a write out of the steps that need to be taken?
> >
> > On Mon, Jan 24, 2022 at 10:36 AM Edward Capriolo 
> > wrote:
> >
> > > Explained in another thread: the log4j API is separate from the
> > > implementation.  It's possible to remove the log4j 1.2 jars from the
> > > classpath and upgrade to log4j 2.17.1 without changing a line of code
> > > in Kafka.
> > >
> > >
> > > On Monday, January 10, 2022, Tauzell, Dave <
> dave.tauz...@surescripts.com
> > >
> > > wrote:
> > >
> > > > Thanks.  Those KIPs show that there is a fair amount of work for
> this.
> > > >
> > > > From: Israel Ekpo 
> > > > Date: Monday, January 10, 2022 at 9:32 AM
> > > > To: users@kafka.apache.org 
> > > > Subject: [EXTERNAL] Re: Log4j 1.2
> > > > There are two KIPs already related to this effort
> > > >
> > > > KIP-653
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2
> > > >
> > > > KIP-676
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy
> > > >
> > > > I believe the work is in progress, feel free to reach out to the
> > > > contributors if you are able to contribute to the effort by coding,
> > > > reviewing PRs, submitting documentation etc
> > > >
> > > >
> > > > Israel Ekpo
> > > > Lead Instructor, IzzyAcademy.com
> > > > https://www.youtube.com/c/izzyacademy
> > > > https://izzyacademy.com/
> > > >
> > > >
> > > > On Mon, Jan 10, 2022 at 10:12 AM Brosy, Franziska <
> > > > franziska.br...@wido.bv.aok.de> wrote:
> > > >
> > > > > Well. Hopefully there is someone who is able and willing to do
> that
> > > > > work.
> > > > > I'm so sorry that I can't help.
> > > > >
> > > > > Best regards
> > > > > Franziska
> > > > >
> > > > > -Original Message-
> > > > > From: Tauzell, Dave 
> > > > > Sent: Monday, 10 January 2022 14:30
> > > > > To: users@kafka.apache.org
> > > > > Subject: Re: Log4j 1.2
> > > > >
> > > > > Log4j 2.x isn't a drop-in replacement for 1.x.   It isn't a
> difficult
> > > > > change but somebody does need to go through all the source code and
> > do
> > > > the
> > > > > work.
> > > > >
> > > > >
> > > > > -Dave
> > > > >
> > > > > From: Brosy, Franziska 
> > > > > Date: Monday,

Re: [ANNOUNCE] Apache Kafka 3.1.0

2022-01-26 Thread Men Lim
I'm curious: at what point will 3.x be production ready?

On Mon, Jan 24, 2022 at 10:04 AM David Jacot  wrote:

> The Apache Kafka community is pleased to announce the release for
> Apache Kafka 3.1.0.
>
> It is a major release that includes many new features, including:
>
> * Apache Kafka supports Java 17
> * The FetchRequest supports Topic IDs (KIP-516)
> * Extend SASL/OAUTHBEARER with support for OIDC (KIP-768)
> * Add broker count metrics (KIP-748)
> * Differentiate consistently metric latency measured in millis and
> nanos (KIP-773)
> * The eager rebalance protocol is deprecated (KAFKA-13439)
> * Add TaskId field to StreamsException (KIP-783)
> * Custom partitioners in foreign-key joins (KIP-775)
> * Fetch/findSessions queries with open endpoints for
> SessionStore/WindowStore (KIP-766)
> * Range queries with open endpoints (KIP-763)
> * Add total blocked time metric to Streams (KIP-761)
> * Add additional configuration to control MirrorMaker2 internal topics
> naming convention (KIP-690)
>
> You may read a more detailed list of features in the 3.1.0 blog post:
> https://blogs.apache.org/kafka/
>
> All of the changes in this release can be found in the release notes:
> https://www.apache.org/dist/kafka/3.1.0/RELEASE_NOTES.html
>
> You can download the source and binary release (Scala 2.12 and 2.13) from:
> https://kafka.apache.org/downloads#3.1.0
>
>
> ---
>
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
> ** The Producer API allows an application to publish a stream of records to
> one or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an
> output stream to one or more output topics, effectively transforming the
> input streams to output streams.
>
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might
> capture every change to a table.
>
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
>
> ** Building real-time streaming applications that transform or react
> to the streams of data.
>
>
> Apache Kafka is in use at large and small companies worldwide, including
> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> Target, The New York Times, Uber, Yelp, and Zalando, among others.
>
> A big thank you for the following 114 contributors to this release!
>
> A. Sophie Blee-Goldman, Alexander Iskuskov, Alexander Stohr, Almog
> Gavra, Andras Katona, Andrew Patterson, Andy Chambers, Andy Lapidas,
> Anna Sophie Blee-Goldman, Antony Stubbs, Arjun Satish, Bill Bejeck,
> Boyang Chen, Bruno Cadonna, CHUN-HAO TANG, Cheng Tan, Chia-Ping Tsai,
> Chris Egerton, Christo Lolov, Colin P. McCabe, Cong Ding, Daniel
> Urban, David Arthur, David Jacot, David Mao, Dmitriy Fishman, Edoardo
> Comar, Ewen Cheslack-Postava, Greg Harris, Guozhang Wang, Igor Soarez,
> Ismael Juma, Israel Ekpo, Ivan Ponomarev, Jakub Scholz, James Galasyn,
> Jason Gustafson, Jeff Kim, Jim Galasyn, JoeCqupt, Joel Hamill, John
> Gray, John Roesler, Jongho Jeon, Jorge Esteban Quilcate Otoya, Jose
> Sancio, Josep Prat, José Armando García Sancio, Jun Rao, Justine
> Olshan, Kalpesh Patel, Kamal Chandraprakash, Kevin Zhang, Kirk True,
> Konstantine Karantasis, Kowshik Prakasam, Leah Thomas, Lee Dongjin,
> Lucas Bradstreet, Luke Chen, Manikumar Reddy, Matthew Wong, Matthias
> J. Sax, Michael Carter, Mickael Maison, Nigel Liang, Niket, Niket
> Goel, Oliver Hutchison, Omnia G H Ibrahim, Patrick Stuedi, Phil
> Hardwick, Prateek Agarwal, Rajini Sivaram, Randall Hauch, René Kerner,
> Richard Yu, Rohan, Ron Dagostino, Ryan Dielhenn, Sanjana Kaundinya,
> Satish Duggana, Sergio Peña, Sherzod Mamadaliev, Stanislav Vodetskyi,
> Ted Yu, Tom Bentley, Tomas Forsman, Tomer Wizman, Uwe Eisele, Victoria
> Xia, Viktor Somogyi-Vass, Vincent Jiang, Walker Carlson, Weisheng
> Yang, Xavier Léauté, Yanwen(Jason) Lin, Yi Ding, Zara Lim, andy0x01,
> dengziming, feyman2016, ik, ik.lim, jem, jiangyuan, kpatelatwork,
> leah, loboya~, lujiefsi, sebbASF, singingMan, vamossagar12,
> wenbingshen
>
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> https://kafka.apache.org/
>
> Thank you!
>
>
> Regards,
>
> David
>


Re: Log4j 1.2

2022-01-24 Thread Men Lim
Is there a write out of the steps that need to be taken?

On Mon, Jan 24, 2022 at 10:36 AM Edward Capriolo 
wrote:

> Explained in another thread: the log4j API is separate from the implementation.
> It's possible to remove the log4j 1.2 jars from the classpath and upgrade to
> log4j 2.17.1 without changing a line of code in Kafka.
>
>
> On Monday, January 10, 2022, Tauzell, Dave 
> wrote:
>
> > Thanks.  Those KIPs show that there is a fair amount of work for this.
> >
> > From: Israel Ekpo 
> > Date: Monday, January 10, 2022 at 9:32 AM
> > To: users@kafka.apache.org 
> > Subject: [EXTERNAL] Re: Log4j 1.2
> > There are two KIPs already related to this effort
> >
> > KIP-653
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2
> >
> > KIP-676
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy
> >
> > I believe the work is in progress, feel free to reach out to the
> > contributors if you are able to contribute to the effort by coding,
> > reviewing PRs, submitting documentation etc
> >
> >
> > Israel Ekpo
> > Lead Instructor, IzzyAcademy.com
> > https://www.youtube.com/c/izzyacademy
> > https://izzyacademy.com/
> >
> >
> > On Mon, Jan 10, 2022 at 10:12 AM Brosy, Franziska <
> > franziska.br...@wido.bv.aok.de> wrote:
> >
> > > Well. Hopefully there is someone who is able and willing to do that
> > > work.
> > > I'm so sorry that I can't help.
> > >
> > > Best regards
> > > Franziska
> > >
> > > -Original Message-
> > > From: Tauzell, Dave 
> > > Sent: Monday, 10 January 2022 14:30
> > > To: users@kafka.apache.org
> > > Subject: Re: Log4j 1.2
> > >
> > > Log4j 2.x isn't a drop-in replacement for 1.x.   It isn't a difficult
> > > change but somebody does need to go through all the source code and do
> > the
> > > work.
> > >
> > >
> > > -Dave
> > >
> > > From: Brosy, Franziska 
> > > Date: Monday, January 10, 2022 at 3:16 AM
> > > To: users@kafka.apache.org 
> > > Subject: [EXTERNAL] Re: Log4j 1.2
> > > Hi Roger,
> > >
> > > Maybe I wasn't clear enough. I'm not using Kafka myself; I'm a
> > > customer of the MicroStrategy platform. MicroStrategy uses Kafka. Here
> > > is the problem: an old Log4j 1.2 is delivered with Kafka.
> > >
> > >
> > > https://www.apache.org/dyn/closer.cgi?path=/kafka/3.0.0/kafka_2.13-3.0.0.tgz
> > > kafka_2.13-3.0.0\libs\log4j-1.2.17.jar
> > >
> > > Your advice regarding CVE-2021-44228 is outdated. It is solved in Log4j 2.17!
> > > So why is Kafka delivered with Log4j 1.2 instead of Log4j 2.17??
> > >
> > > Sticking to a very old version is definitely not secure! Yes, you can
> > > use a smartphone with Android 4.2, but you wouldn't unless there were
> > > an emergency forcing you to - would you?
> > >
> > > Can you please tell me when Kafka will be upgraded to at least Log4j
> > > 2.17? Otherwise, can you please tell me the reason to stick with such
> > > an old Log4j version and accept the security risks?
> > >
> > > Best regards
> > > Franziska
> > >
> > >
> > > -Original Message-
> > > From: Murilo Tavares 
> > > Sent: Friday, 7 January 2022 20:23
> > > To

question: kafka connect mm2 curl command

2021-11-15 Thread Men Lim
I am running AWS MSK v2.7.0 with SASL/SCRAM. I have an EC2 instance running
the Kafka connector and Mirror Maker 2 (MM2). I was able to start up the
distributor w/o any issue, but when I run the curl command to start the
worker, the output looks fine but no data is replicating.

curl -s -X POST -H 'Content-Type: application/json' --data
@/app/kafka/config/worker.json
http://host.cloud.domain.com:8083/connectors -v

output of the above command:

*   Trying 172.x.x.x:8083...
* Connected to host.cloud2.domain.com (172.x.x.x) port 8083 (#0)
> POST /connectors HTTP/1.1
> Host: host.cloud2.domain.com:8083
> User-Agent: curl/7.76.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 1882
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 201 Created
< Date: Mon, 15 Nov 2021 21:06:52 GMT
< Location: 
http://host.cloud2.domain.com:8083/connectors/worker-prod-01-mm2-worker
< Content-Type: application/json
< Content-Length: 1735
< Server: Jetty(9.4.33.v20201020)
<
{"name":"worker-prod-01-mm2-worker","config":{"connector.class":"org.apache.kafka.connect.mirror.MirrorSourceConnector","name":"worker-prod-01-mm2-worker","topics":"data.*","tasks.max":"2","source.cluster.alias":"source","target.cluster.alias":"target","source.cluster.bootstrap.servers":"b-1.kafka.uswest-2.amazonaws.com:9096","target.cluster.bootstrap.servers":"b-1.kafka.us-west-2.amazonaws.com:9096","source->target.enabled":"true","target->source.enabled":"false","offset-syncs.topic.replication.factor":"3","topics.exclude":".*[\\-\\.]internal,
.*\\.replica, __consumer_offsets","groups.blacklist":"console-consumer-.*,
connect-.*, 
__.*","topic.creation.default.replication.factor":"3","topic.creatio*
Connection #0 to host host.cloud2.domain.com left intact
n.default.partitions":"24","key.converter":"org.apache.kafka.connect.converters.ByteArrayConverter","value.converter":"org.apache.kafka.connect.converters.ByteArrayConverter","source.cluster.sasl.mechanism":"SCRAM-SHA-512","source.cluster.security.protocol":"SASL_SSL","source.cluster.sasl.jaas.config":"org.apache.kafka.common.security.scram.ScramLoginModule
required username=\"svc_1\"
password=\"xxx290MM0L\";","target.cluster.security.protocol":"SASL_SSL","target.cluster.sasl.mechanism":"SCRAM-SHA-512","target.cluster.sasl.jaas.config":"org.apache.kafka.common.security.scram.ScramLoginModule
required username=\"svc_1\"
password=\"xxx4WWVuA\";"},"tasks":[],"type":"source"}

Additionally, when I run:

curl -s http://host.cloud2.domain.com:8083/connectors

all I get is [].  I'm not seeing any errors in the log either.  Can someone
let me know what I'm missing or doing wrong?

Thanks,
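
One way to dig further when the POST returns 201 but nothing replicates; a
sketch using the standard Connect REST status endpoints, with the connector
name taken from the response above. Note also that the commands in the
message target both host.cloud and host.cloud2; if those are genuinely
different workers, the empty [] from the second curl would be expected:

curl -s http://host.cloud2.domain.com:8083/connectors/worker-prod-01-mm2-worker/status
curl -s http://host.cloud2.domain.com:8083/connectors/worker-prod-01-mm2-worker/tasks/0/status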


Re: Automation Script : Kafka topic creation

2021-11-06 Thread Men Lim
I'm currently using Kafka-gitops.

On Sat, Nov 6, 2021 at 3:35 AM Kafka Life  wrote:

> Dear Kafka experts
>
> Does anyone have a ready/automated script to create/delete/alter topics in
> different environments, taking configuration parameters as input?
>
> If yes, I request you to kindly share it with me, please.
>
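
For a scripted baseline without extra tooling, the stock CLI can be driven
from any shell script; a sketch, with the broker address and settings
purely illustrative:

bin/kafka-topics.sh --bootstrap-server broker:9092 --create \
  --topic my-topic --partitions 3 --replication-factor 3 \
  --config retention.ms=604800000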


Re: Mirror Maker 2: use prefix-less topic names?

2021-10-07 Thread Men Lim
Nope, MM2 adds that prefix. I read that in 3.0 that prefix is going away.

On Thu, Oct 7, 2021 at 11:31 AM Jake Mayward 
wrote:

> Hi,
>
> I have read a bit on MirrorMaker 2 via
> https://kafka.apache.org/documentation/#georeplication-overview, and I am
> curious whether I can enable replication with MirrorMaker 2 without having
> to prefix the topic with the cluster name. Reasoning behind this is that I
> would like the client to always use the same topic name, regardless of the
> cluster it is connected to.
>
> Note: I have seen the
> https://github.com/apache/kafka/blob/trunk/connect/mirror-client/src/main/java/org/apache/kafka/connect/mirror/DefaultReplicationPolicy.java
> but it isn't obvious to me yet whether MM2 takes care of preventing a
> replication loop.
>
> Thanks for any hints!
> Jake
>
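
For reference, the 3.0 change mentioned above is the identity replication
policy, enabled with a single MM2 property; a sketch, worth double-checking
against your version, and note that without the prefix, loop prevention
relies on the heartbeats mechanism:

replication.policy.class=org.apache.kafka.connect.mirror.IdentityReplicationPolicy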


Re: mm2: size of target vs source topic

2021-10-04 Thread Men Lim
bump

On Wed, Sep 29, 2021 at 4:09 PM Men Lim  wrote:

> Hi all,
> I'm setting up two new clusters today and using mm2 to replicate data from
> clusterA --> clusterB.  I noticed that the target topic has the same number
> of records, but its size is smaller by 5x.
>
> source topic is 6.975 MB
> target topic is: 1.136 MB
>
> It has the same number of records.  Both clusters are using gzip.  Any idea
> why this is?
>
> thanks,
>


mm2: size of target vs source topic

2021-09-29 Thread Men Lim
Hi all,
I'm setting up two new clusters today and using mm2 to replicate data from
clusterA --> clusterB.  I noticed that the target topic has the same number of
records, but its size is smaller by 5x.

source topic is 6.975 MB
target topic is: 1.136 MB

It has the same number of records.  Both clusters are using gzip.  Any idea why
this is?

thanks,
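
One way to compare the on-disk sizes directly; a sketch using the stock
tooling, with the topic and broker names illustrative. Since the record
counts match, a smaller mirrored topic often just reflects different
batching and recompression by MM2's producer rather than missing data:

bin/kafka-log-dirs.sh --bootstrap-server broker:9092 --describe --topic-list my-topic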


Re: MM2 - Overriding MirrorSourceConnector Consumer & Producer values

2021-09-20 Thread Men Lim
Did you try setting these values in the distributor.properties file?

On Mon, Sep 20, 2021 at 12:43 AM Jamie  wrote:

> Hi All,
> Has anyone been able to override the producer and consumer values of the
> MM2 MirrorSourceConnector as described below?
> Many Thanks,
> Jamie
>
>
> -Original Message-
> From: Jamie 
> To: users@kafka.apache.org 
> Sent: Thu, 16 Sep 2021 15:52
> Subject: MM2 - Overriding MirrorSourceConnector Consumer & Producer values
>
> Hi All,
> I've been trying to override the properties of the consumer and producer in MM2
> to tune them for high throughput. For example, I'm trying to set the
> consumer's fetch.min.bytes to 10.
> I'm running MM2 in a dedicated mirror maker cluster (
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0#KIP382:MirrorMaker2.0-RunningadedicatedMirrorMakercluster)
> and using version 2.7.1 of Kafka.
> I have the equivalent of the following in my mirror maker properties file:
> clusters = A, B
> A.bootstrap.servers = A_host1:9092, A_host2:9092, A_host3:9092
>  B.bootstrap.servers = B_host1:9092, B_host2:9092, B_host3:9092
> # enable and configure individual replication flows
> A->B.enabled = true
> # regex which defines which topics gets replicated. For eg "foo-.*"
> A->B.topics = .test-topic
>
> I'm trying to override the properties of the consumer which fetches
> records from cluster "A" and the producer that sends records to cluster
> "B".
> I've tried the following in the config file:
> A.consumer.fetch.min.bytes = 10
> A->B.consumer.fetch.min.bytes = 10
> A.fetch.min.bytes = 10
> B.consumer.fetch.min.bytes = 10
> B.fetch.min.bytes = 10
> None of which seem to work; when I start MM2 and go into the logs and look
> at the values in the MirrorSourceConnector tasks' consumer and
> producer config, I still see the default value for fetch.min.bytes (1) being
> used.
> Am I trying to override the values of the consumer incorrectly or do I
> need to set these in a different place?
> Many Thanks,
> Jamie
>
>
>
>


Re: Kafka Mirror Maker 2 - source topic keep getting created

2021-08-13 Thread Men Lim
Have you tried running MM2 in distributed mode instead of
standalone/dedicated?  Try using the file below, which I have working on an
AWS EC2 instance:

name=mm2
clusters = source, target
source.bootstrap.servers = source:9092
target.bootstrap.servers = target:9092
group.id = mm2

## All replication factors have to be greater than min.insync.replicas = 2,
## which is the default MSK configuration.
source.config.storage.replication.factor = 4
target.config.storage.replication.factor = 4
source.offset.storage.replication.factor = 4
target.offset.storage.replication.factor = 4

source.status.storage.replication.factor = 4
target.status.storage.replication.factor = 4

source->target.enabled = true
target->source.enabled = false

offset-syncs.topic.replication.factor = 4
heartbeats.topic.replication.factor = 4
checkpoints.topic.replication.factor = 4

topics = .*

tasks.max = 2
replication.factor = 4
refresh.topics.enabled = true
sync.topic.configs.enabled = true
refresh.topics.interval.seconds = 30

topics.blacklist = .*[\-\.]internal, .*\.replica, __consumer_offsets
groups.blacklist = console-consumer-.*, connect-.*, __.*

source->target.emit.heartbeats.enabled=true
source->target.emit.checkpoints.enabled=true

On Tue, Aug 10, 2021 at 7:54 PM AlR  wrote:

> I have 2 Kafka setups, A and B. A is a cluster with 3 instances running on
> the same machine, while B is a standalone on another machine. I tried to
> use Mirror Maker to replicate from A to B. The config file is as follows:
>
> -
>
> clusters = A, B
> A.bootstrap.servers = host1:9091, host1:9092, host1:9093
> B.bootstrap.servers = host2:9092
> A->B.enabled = true
> B->A.enabled = false
> replication.factor = 1
> checkpoints.topic.replication.factor=1
> heartbeats.topic.replication.factor=1
> offset-syncs.topic.replication.factor=1
>
> offset.storage.replication.factor=1
> status.storage.replication.factor=1
> config.storage.replication.factor=1
> -
>
> I execute mirror maker as follows:
>
> bin/connect-mirror-maker.sh config/mirror.maker.properties
>
> But when I run the mirror maker on the ‘B’ side I can see that it keeps
> creating ‘heartbeats’ topics. So at some point I will end up having so many
> ‘A.heartbeats’ topics like below. And it will just keep growing until I
> kill the mirror maker.
>
> A.A.A.A.heartbeats
> A.A.A.heartbeats
> A.A.heartbeats
> A.heartbeats
>
> Anyone know what’s the problem and how to fix this?
>
> Thanks.
>
> Kafka Version: 2.13-2.8
>


Re: What's so special about 2,8,9,15,56,72 error codes?

2021-04-28 Thread Men Lim
That article links to the Apache protocol error-code table, which gives each
code's meaning; broadly, those codes are broker-side availability and storage
errors rather than client mistakes, which is presumably why AWS counts them
against the SLA.

https://kafka.apache.org/protocol.html#protocol_error_codes

On Wed, Apr 28, 2021 at 6:44 AM Nikita Kretov  wrote:

> I'm doing a little research about what metrics and formulas are used to
> calculate SLAs for Kafka clusters. I found that some major cloud
> providers offer managed Kafka solutions, for example AWS MSK (Amazon
> Managed Streaming for Apache Kafka).
>
> Interestingly, the AWS MSK SLA document defines "Error" as "...any
> Apache Kafka API Request that returns the 2, 8, 9, 15, 56, 72 error
> codes, or an Apache Kafka API Request that upon retry returns the 19 and
> 20 error codes as described in the Apache Kafka site..." (from
> https://aws.amazon.com/msk/sla/ )
> So my question is: does someone know what's so special about these
> specific error codes?
>
> Thank you.
>


Re: mirrormaker 2.0

2021-04-26 Thread Men Lim
Hi Madhan,
Try this article I found a while back, in case this also becomes my use case:

https://stackoverflow.com/questions/59390555/is-it-possible-to-replicate-kafka-topics-without-alias-prefix-with-mirrormaker2



On Thu, Apr 22, 2021 at 9:40 PM Dhanikachalam, Madhan (CORP)
 wrote:

> I am testing MM2. I got the connector working but it is creating topics in
> the downstream cluster like this: mm-poc-src.grp1-top1, which is the
> alias plus the topic name. How can I create the downstream topic with the
> exact same name as the source?
>
>
>
> Also, do you provide commercial support for open source Kafka? Or can you
> direct me to any firms that offer open source Kafka support?
>
>
>
> I am also testing kafkadrop. I'm looking for an open source tool that can
> browse messages and also maintain history information. Do you recommend
> any? Please let me know. Thanks.
>
>
>
>
>
>
>
>
>
>
> Thanks
>
> Madhan Dhanikachalam
>
> *GETS Transformers*
>
>
>
> To open tickets to my team follow below links
>
> *MFT:*
> http://servicecatalog:8080/usm/wpf?Node=icguinode.catalogitemdetails&Args=10933&ObjectID=10933&NspPath
> =
>
> *MQ:*
> http://servicecatalog:8080/usm/wpf?Node=icguinode.catalogitemdetails&Args=10688&ObjectID=10688&NspPath
> =
>
>
>
>
>
>
>


Re: MirrorMaker 2 with SSL

2021-04-19 Thread Men Lim
I got this working late last week.  I did the shotgun approach with all the
values in both the distributor and connector.json, and once it worked, I
started to remove things to see when it stopped working.

On Sun, Apr 18, 2021 at 10:00 PM Ning Zhang  wrote:

> if the source kafka cluster is SSL-enabled, then the consumer of mm2
> should be configured to read from SSL-enabled cluster
> if the target kafka cluster is SSL-enabled, then the producer of mm2
> should be configured to write to SSL-enabled cluster.
>
> On 2021/04/16 03:23:21, Men Lim  wrote:
> > Well, the 405 is due to a syntax error in the connector.json.  After fixing
> > that and passing the -k switch, it started.  But when looking at the
> > connect.log, MM2 is only talking plaintext rather than SSL.  After a
> > while it timed out because port 9094 is SSL while MM2 is trying to use
> > plaintext.  At least I got past the curl problem.
> >
> > On Wed, Apr 14, 2021 at 12:11 PM Men Lim  wrote:
> >
> > > I read through the security_ssl page when I started this; it doesn't
> > > apply much to me because I'm running this in AWS MSK, where I can't
> > > access the broker, so my hands are tied there when it comes to
> > > certificates.
> > >
> > > However, this morning, I decided to work on creating a self-signed cert
> > > for the curl command.  I was able to get past the SSL handshake error
> > > and am now running into another issue.
> > >
> > > *   Trying 127.0.0.1...
> > > * TCP_NODELAY set
> > > * Connected to localhost (127.0.0.1) port 8443 (#0)
> > > * ALPN, offering h2
> > > * ALPN, offering http/1.1
> > > * Cipher selection:
> > > ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
> > > * successfully set certificate verify locations:
> > > *   CAfile: /etc/pki/tls/certs/ca-bundle.crt
> > >   CApath: none
> > > * TLSv1.2 (OUT), TLS header, Certificate Status (22):
> > > * TLSv1.2 (OUT), TLS handshake, Client hello (1):
> > > * TLSv1.2 (IN), TLS handshake, Server hello (2):
> > > * TLSv1.2 (IN), TLS handshake, Certificate (11):
> > > * TLSv1.2 (OUT), TLS alert, unknown CA (560):
> > >
> > > ** SSL certificate problem: self signed certificate* Closing
> connection 0*
> > >
> > > so I tried passing the -k flag and got the following message
> > >
> > > *   Trying 127.0.0.1...
> > > * TCP_NODELAY set
> > > * Connected to localhost (127.0.0.1) port 8443 (#0)
> > > * ALPN, offering h2
> > > * ALPN, offering http/1.1
> > > * Cipher selection:
> > > ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
> > > * successfully set certificate verify locations:
> > > *   CAfile: /etc/pki/tls/certs/ca-bundle.crt
> > >   CApath: none
> > > * TLSv1.2 (OUT), TLS header, Certificate Status (22):
> > > * TLSv1.2 (OUT), TLS handshake, Client hello (1):
> > > * TLSv1.2 (IN), TLS handshake, Server hello (2):
> > > * TLSv1.2 (IN), TLS handshake, Certificate (11):
> > > * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
> > > * TLSv1.2 (IN), TLS handshake, Server finished (14):
> > > * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
> > > * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
> > > * TLSv1.2 (OUT), TLS handshake, Finished (20):
> > > * TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
> > > * TLSv1.2 (IN), TLS handshake, Finished (20):
> > > * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
> > > * ALPN, server did not agree to a protocol
> > > * Server certificate:
> > > *  subject: C=us; ST=il; L=Unknown; O=usfoods; OU=Unknown; CN=Unknown
> > > *  start date: Apr 14 16:52:07 2021 GMT
> > > *  expire date: Apr 12 16:52:07 2031 GMT
> > > *  issuer: C=US; ST=MI; L=Unknown; O=FUD; OU=Unknown; CN=Unknown
> > > *  SSL certificate verify result: self signed certificate (18),
> continuing
> > > anyway.
> > > > PUT /connectors HTTP/1.1
> > > > Host: localhost:8443
> > > > User-Agent: curl/7.61.1
> > > > Accept: */*
> > > > Content-Type: application/json
> > > > Content-Length: 1251
> > > > Expect: 100-continue
> > > >
> > > < HTTP/1.1 100 Continue
> > > * We are completely uploaded and fine
> > > < HTTP/1.1 405 Method Not Allowed
> > > < Date: Wed, 14 Apr 2021 19:07:01 GMT
> > > < Content-Length: 58
> > > < Server: Jetty(9.4.33.v20201020)

Re: MirrorMaker 2 with SSL

2021-04-15 Thread Men Lim
Well, the 405 is due to a syntax error in the connector.json.  After fixing
that and passing the -k switch, it started.  But when looking at the
connect.log, MM2 is only talking plaintext rather than SSL.  After a
while it timed out because port 9094 is SSL while MM2 is trying to use
plaintext.  At least I got past the curl problem.

On Wed, Apr 14, 2021 at 12:11 PM Men Lim  wrote:

> I read through the security_ssl page when I started this; it doesn't apply
> much to me because I'm running this in AWS MSK, where I can't access the
> broker, so my hands are tied there when it comes to certificates.
>
> However, this morning, I decided to work on creating a self-signed cert for
> the curl command.  I was able to get past the SSL handshake error and am now
> running into another issue.
>
> *   Trying 127.0.0.1...
> * TCP_NODELAY set
> * Connected to localhost (127.0.0.1) port 8443 (#0)
> * ALPN, offering h2
> * ALPN, offering http/1.1
> * Cipher selection:
> ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
> * successfully set certificate verify locations:
> *   CAfile: /etc/pki/tls/certs/ca-bundle.crt
>   CApath: none
> * TLSv1.2 (OUT), TLS header, Certificate Status (22):
> * TLSv1.2 (OUT), TLS handshake, Client hello (1):
> * TLSv1.2 (IN), TLS handshake, Server hello (2):
> * TLSv1.2 (IN), TLS handshake, Certificate (11):
> * TLSv1.2 (OUT), TLS alert, unknown CA (560):
>
> ** SSL certificate problem: self signed certificate* Closing connection 0*
>
> so I tried passing the -k flag and got the following message
>
> *   Trying 127.0.0.1...
> * TCP_NODELAY set
> * Connected to localhost (127.0.0.1) port 8443 (#0)
> * ALPN, offering h2
> * ALPN, offering http/1.1
> * Cipher selection:
> ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
> * successfully set certificate verify locations:
> *   CAfile: /etc/pki/tls/certs/ca-bundle.crt
>   CApath: none
> * TLSv1.2 (OUT), TLS header, Certificate Status (22):
> * TLSv1.2 (OUT), TLS handshake, Client hello (1):
> * TLSv1.2 (IN), TLS handshake, Server hello (2):
> * TLSv1.2 (IN), TLS handshake, Certificate (11):
> * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
> * TLSv1.2 (IN), TLS handshake, Server finished (14):
> * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
> * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
> * TLSv1.2 (OUT), TLS handshake, Finished (20):
> * TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
> * TLSv1.2 (IN), TLS handshake, Finished (20):
> * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
> * ALPN, server did not agree to a protocol
> * Server certificate:
> *  subject: C=us; ST=il; L=Unknown; O=usfoods; OU=Unknown; CN=Unknown
> *  start date: Apr 14 16:52:07 2021 GMT
> *  expire date: Apr 12 16:52:07 2031 GMT
> *  issuer: C=US; ST=MI; L=Unknown; O=FUD; OU=Unknown; CN=Unknown
> *  SSL certificate verify result: self signed certificate (18), continuing
> anyway.
> > PUT /connectors HTTP/1.1
> > Host: localhost:8443
> > User-Agent: curl/7.61.1
> > Accept: */*
> > Content-Type: application/json
> > Content-Length: 1251
> > Expect: 100-continue
> >
> < HTTP/1.1 100 Continue
> * We are completely uploaded and fine
> < HTTP/1.1 405 Method Not Allowed
> < Date: Wed, 14 Apr 2021 19:07:01 GMT
> < Content-Length: 58
> < Server: Jetty(9.4.33.v20201020)
> <
> * Connection #0 to host localhost left intact
> *{"error_code":405,"message":"HTTP 405 Method Not Allowed"}*
>
> With the error, the MM2 process still isn't running.  I'm not sure how to
> go about troubleshooting this HTTP 405 message.  Maybe the cert can't be
> self-signed and needs an official one.
>
>
> On Tue, Apr 13, 2021 at 8:41 PM Ning Zhang  wrote:
>
>> Assuming your target / destination Kafka cluster is SSL enabled: if your
>> MM2 wants to write to such a cluster, you may have the following config in
>> your MM2:
>>
>>
>> https://github.com/ning2008wisc/minikube-mm2-demo/blob/master/kafka-mm/values.yaml#L79-L80
>>
>> on the broker (even client) side, you may refer to:
>>
>> http://kafka.apache.org/090/documentation.html#security_ssl
>>
>> On 2021/04/09 15:01:26, Men Lim  wrote:
>> > Hi Ning,
>> >
>> > thanks for the response.  This self-signed cert stays on the EC2 instance,
>> > specifically for the curl command, and I don't have to share it with the
>> > brokers, correct?
>> >
>> > thanks,
>> >
>> >
>> >
>> > On Fri, Apr 9, 2021 at 7:55 AM Ning Zhang 
>> wrote:
>> >
>> > > Hi Men,
>> > 

Re: MirrorMaker 2 with SSL

2021-04-14 Thread Men Lim
I read through the security_ssl page when I started this; it doesn't apply
much to me because I'm running this in AWS MSK, where I can't access the
brokers, so my hands are tied there when it comes to certificates.

However, this morning I decided to work on creating a self-signed cert for
the curl command.  I was able to get past the SSL handshake error and am now
running into another issue.

*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection:
ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, unknown CA (560):

* SSL certificate problem: self signed certificate
* Closing connection 0

so I tried passing the -k flag and got the following message

*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection:
ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: C=us; ST=il; L=Unknown; O=usfoods; OU=Unknown; CN=Unknown
*  start date: Apr 14 16:52:07 2021 GMT
*  expire date: Apr 12 16:52:07 2031 GMT
*  issuer: C=US; ST=MI; L=Unknown; O=FUD; OU=Unknown; CN=Unknown
*  SSL certificate verify result: self signed certificate (18), continuing
anyway.
> PUT /connectors HTTP/1.1
> Host: localhost:8443
> User-Agent: curl/7.61.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 1251
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 405 Method Not Allowed
< Date: Wed, 14 Apr 2021 19:07:01 GMT
< Content-Length: 58
< Server: Jetty(9.4.33.v20201020)
<
* Connection #0 to host localhost left intact
*{"error_code":405,"message":"HTTP 405 Method Not Allowed"}*

With the error, the MM2 process still isn't running.  I'm not sure how to
go about troubleshooting this HTTP 405 message.  Maybe the cert can't be
self-signed and needs an official one.
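
Though looking at the trace again, the request went out as PUT /connectors,
and the Connect REST API only accepts POST on that path; PUT is for
/connectors/<name>/config.  A sketch of the two valid forms (the connector
name is just the one from my connector.json; config-only.json is a
hypothetical file):

curl -k -s -X POST -H 'Content-Type: application/json' \
  --data @connector.json https://localhost:8443/connectors

curl -k -s -X PUT -H 'Content-Type: application/json' \
  --data @config-only.json \
  https://localhost:8443/connectors/mm2-connect-cluster/config

Here connector.json wraps the config in {"name": ..., "config": {...}} while
config-only.json is just the inner config object.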


On Tue, Apr 13, 2021 at 8:41 PM Ning Zhang  wrote:

> Assuming your target / destination Kafka cluster is SSL enabled: if your MM2
> wants to write to such a cluster, you may have the following config in your
> MM2:
>
>
> https://github.com/ning2008wisc/minikube-mm2-demo/blob/master/kafka-mm/values.yaml#L79-L80
>
> on the broker (even client) side, you may refer to:
>
> http://kafka.apache.org/090/documentation.html#security_ssl
>
> On 2021/04/09 15:01:26, Men Lim  wrote:
> > Hi Ning,
> >
> > thanks for the response.  This self-signed cert stays on the EC2 instance,
> > specifically for the curl command, and I don't have to share it with the
> > brokers, correct?
> >
> > thanks,
> >
> >
> >
> > On Fri, Apr 9, 2021 at 7:55 AM Ning Zhang 
> wrote:
> >
> > > Hi Men,
> > >
> > > I used to deploy MM2 on EC2 with SSL and, IIRC, you can give
> > > self-signed certs and keys a try for testing purposes:
> > > https://linuxize.com/post/creating-a-self-signed-ssl-certificate/
> > >
> > > On 2021/04/09 03:14:30, Men Lim  wrote:
> > > > Hi Ryanne,
> > > >
> > > > thanks for the reply.  My Kafka clusters are on AWS's managed
> > > > platform, MSK.  I'm stuck with using the default Java cacerts unless
> > > > I use their AWS PCA, which is pretty pricey.
> > > >
> > > > I ran the curl command yesterday with the -v and --tlsv1.2 flags and
> > > > got the following verbose message:
> > > >
> > > > curl -s -X POST -H 'Content-Type: application/json'

Re: MirrorMaker 2 with SSL

2021-04-09 Thread Men Lim
Hi Ning,

thanks for the response.  This self-signed cert stays on the EC2 instance,
specifically for the curl command, and I don't have to share it with the
brokers, correct?
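
For reference, the kind of self-signed setup I have in mind for the worker's
HTTPS listener is roughly this sketch (alias, file names, and passwords are
all placeholders):

# key pair plus self-signed cert in a JKS keystore for the REST listener;
# newer curl builds match the SAN, so CN alone may not be enough
keytool -genkeypair -alias connect-rest -keyalg RSA -validity 3650 \
  -dname "CN=localhost" -ext san=dns:localhost \
  -keystore rest.keystore.jks -storepass changeit -keypass changeit

# export the cert as PEM so curl can trust it without -k
keytool -exportcert -alias connect-rest -keystore rest.keystore.jks \
  -storepass changeit -rfc -file rest-cert.pem

curl --cacert rest-cert.pem https://localhost:8443/connectors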

thanks,



On Fri, Apr 9, 2021 at 7:55 AM Ning Zhang  wrote:

> Hi Men,
>
> I used to deploy MM2 on EC2 with SSL and, IIRC, you can give
> self-signed certs and keys a try for testing purposes:
> https://linuxize.com/post/creating-a-self-signed-ssl-certificate/
>
> On 2021/04/09 03:14:30, Men Lim  wrote:
> > Hi Ryanne,
> >
> > thanks for the reply.  My Kafka clusters are on AWS's managed platform,
> > MSK.  I'm stuck with using the default Java cacerts unless I use
> > their AWS PCA, which is pretty pricey.
> >
> > I ran the curl command yesterday with the -v and --tlsv1.2 flags and got
> > the following verbose message:
> >
> > curl -s -X POST -H 'Content-Type: application/json' --data @connector.json
> > https://localhost:8443/connectors -v --tlsv1.2
> > *   Trying 127.0.0.1...
> > * TCP_NODELAY set
> > * Connected to localhost (127.0.0.1) port 8443 (#0)
> > * ALPN, offering h2
> > * ALPN, offering http/1.1
> > * Cipher selection:
> > ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
> > * successfully set certificate verify locations:
> > *   CAfile: /etc/pki/tls/certs/ca-bundle.crt
> >   CApath: none
> > * TLSv1.2 (OUT), TLS header, Certificate Status (22):
> > * TLSv1.2 (OUT), TLS handshake, Client hello (1):
> > * TLSv1.2 (IN), TLS header, Unknown (21):
> > * TLSv1.2 (IN), TLS alert, handshake failure (552):
> > * error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert
> handshake
> > failure
> >
> > Thanks
> >
> > On Mon, Apr 5, 2021 at 7:26 AM Ryanne Dolan 
> wrote:
> >
> > > Yes it's possible. The most common issue in my experience is the
> location
> > > of the trust store and key store being different or absent on some
> hosts.
> > > You need to make sure that these locations are consistent across all
> hosts
> > > in your Connect cluster, or use a ConfigProvider to provide the
> location
> > > dynamically. Otherwise, a task will get scheduled on some host and
> fail to
> > > find these files.
> > >
> > > Ryanne
> > >
> > >
> > > On Wed, Mar 31, 2021, 8:22 PM Men Lim  wrote:
> > >
> > > > Hello.  I was wondering if someone can help answer my question.  I'm
> > > > trying to run MirrorMaker 2 in distributed mode using SSL.  I have the
> > > > distributor running in SSL, but I can't get the curl REST API to do so.
> > > > I saw that KIP-208 added this, but I can't seem to implement it.
> > > >
> > > > in my mm2-dist.prop file I have set:
> > > > 
> > > > listeners=https://localhost:8443
> > > > security.protocol=SSL
> > > >
> > > >
> > >
> ssl.truststore.location=/home/ec2-user/kafka_2.13-2.7.0/cert/kafka.client.truststore.jks
> > > > 
> > > > my connector.json file looks like this:
> > > >
> > > > 
> > > > {
> > > > "name": "mm2-connect-cluster",
> > > > "config":{
> > > > "connector.class":
> > > "org.apache.kafka.connect.mirror.MirrorSourceConnector",
> > > > "connector.client.config.override.policy": "All",
> > > > "name": "mm2-connect-cluster",
> > > > "topics": "test.*",
> > > > "tasks.max": "1",
> > > > "source.cluster.alias": "source",
> > > > "target.cluster.alias": "target",
> > > > "source.cluster.bootstrap.servers": "source:9094",
> > > > "target.cluster.bootstrap.servers": "target:9094",
> > > > "source->target.enabled": "true",
> > > > "target->source.enabled": "false",
> > > > "offset-syncs.topic.replication.factor": "4",
> > > > "topics.exclude": ".*[\\-\\.]internal, .*\\.replica,
> > > > __consumer_offsets",
> > > > "groups.blacklist": "console-consumer-.*, connect-.*, __.*",
> > > > "topic.creation.enabled": "true",
> > > > "topic.creation.default.replication.factor": "4",
> > > > "topic.creation.default.partitions": "1"
> > > > "key.converter":
> "org.apache.kafka.connect.json.JsonConverter",
> > > > "value.converter":
> "org.apache.kafka.connect.json.JsonConverter",
> > > > "security.protocol": "SSL",
> > > > "ssl.truststore.password":
> > > > "/home/ec2-user/kafka_2.13-2.7.0/cert/kafka.client.truststore.jks"
> > > > }
> > > > }
> > > > 
> > > >
> > > > I would then start up the distributor and it launched fine.  So I
> > > > try to run the curl command
> > > >
> > > > 
> > > > curl -s -X POST -H 'Content-Type: application/json' --data
> > > @connector.json
> > > > https://localhost:8443/connectors
> > > > 
> > > > nada.  nothing.  no error.  no reasons for not starting.
> > > >
> > > > Is it possible to run MM2 with SSL?  If so, can someone point me to a
> > > > working example?
> > > >
> > > > thanks.
> > > >
> > >
> >
>


Re: MirrorMaker 2 with SSL

2021-04-08 Thread Men Lim
Hi Ryanne,

thanks for the reply.  My Kafka clusters are on AWS's managed platform,
MSK.  I'm stuck with using the default Java cacerts unless I use
their AWS PCA, which is pretty pricey.

I ran the curl command yesterday with the -v and --tlsv1.2 flags and got the
following verbose message:

curl -s -X POST -H 'Content-Type: application/json' --data @connector.json
https://localhost:8443/connectors -v --tlsv1.2
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection:
ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Unknown (21):
* TLSv1.2 (IN), TLS alert, handshake failure (552):
* error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake
failure

Thanks
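
That "sslv3 alert handshake failure" right after the Client Hello usually
means the server could not complete the TLS handshake at all, for example
because the HTTPS listener has no keystore and therefore no certificate to
serve.  A quick probe, assuming openssl is installed:

openssl s_client -connect localhost:8443 -tls1_2 </dev/null

If no certificate comes back, the listener is missing its keystore.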

On Mon, Apr 5, 2021 at 7:26 AM Ryanne Dolan  wrote:

> Yes it's possible. The most common issue in my experience is the location
> of the trust store and key store being different or absent on some hosts.
> You need to make sure that these locations are consistent across all hosts
> in your Connect cluster, or use a ConfigProvider to provide the location
> dynamically. Otherwise, a task will get scheduled on some host and fail to
> find these files.
>
> Ryanne
>
>
> On Wed, Mar 31, 2021, 8:22 PM Men Lim  wrote:
>
> > Hello.  I was wondering if someone can help answer my question.  I'm
> > trying to run MirrorMaker 2 in distributed mode using SSL.  I have the
> > distributor running in SSL, but I can't get the curl REST API to do so.
> > I saw that KIP-208 added this, but I can't seem to implement it.
> >
> > in my mm2-dist.prop file I have set:
> > 
> > listeners=https://localhost:8443
> > security.protocol=SSL
> >
> >
> ssl.truststore.location=/home/ec2-user/kafka_2.13-2.7.0/cert/kafka.client.truststore.jks
> > 
> > my connector.json file looks like this:
> >
> > 
> > {
> > "name": "mm2-connect-cluster",
> > "config":{
> > "connector.class":
> "org.apache.kafka.connect.mirror.MirrorSourceConnector",
> > "connector.client.config.override.policy": "All",
> > "name": "mm2-connect-cluster",
> > "topics": "test.*",
> > "tasks.max": "1",
> > "source.cluster.alias": "source",
> > "target.cluster.alias": "target",
> > "source.cluster.bootstrap.servers": "source:9094",
> > "target.cluster.bootstrap.servers": "target:9094",
> > "source->target.enabled": "true",
> > "target->source.enabled": "false",
> > "offset-syncs.topic.replication.factor": "4",
> > "topics.exclude": ".*[\\-\\.]internal, .*\\.replica,
> > __consumer_offsets",
> > "groups.blacklist": "console-consumer-.*, connect-.*, __.*",
> > "topic.creation.enabled": "true",
> > "topic.creation.default.replication.factor": "4",
> > "topic.creation.default.partitions": "1"
> > "key.converter": "org.apache.kafka.connect.json.JsonConverter",
> > "value.converter": "org.apache.kafka.connect.json.JsonConverter",
> > "security.protocol": "SSL",
> > "ssl.truststore.password":
> > "/home/ec2-user/kafka_2.13-2.7.0/cert/kafka.client.truststore.jks"
> > }
> > }
> > 
> >
> > I would then start up the distributor and it launched fine.  So I try to
> > run the curl command
> >
> > 
> > curl -s -X POST -H 'Content-Type: application/json' --data
> @connector.json
> > https://localhost:8443/connectors
> > 
> > nada.  nothing.  no error.  no reasons for not starting.
> >
> > Is it possible to run MM2 with SSL?  If so, can someone point me to a
> > working example?
> >
> > thanks.
> >
>


MirrorMaker 2 with SSL

2021-03-31 Thread Men Lim
Hello.  I was wondering if someone can help answer my question.  I'm trying
to run MirrorMaker 2 in distributed mode using SSL.  I have the distributor
running in SSL, but I can't get the curl REST API to do so. I saw that
KIP-208 added this, but I can't seem to implement it.

in my mm2-dist.prop file I have set:

listeners=https://localhost:8443
security.protocol=SSL
ssl.truststore.location=/home/ec2-user/kafka_2.13-2.7.0/cert/kafka.client.truststore.jks
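
As the replies above in this thread show, the HTTPS listener also needs a
keystore to serve a certificate from; the handshake failure went away once
one existed.  Per KIP-208 that would look roughly like this sketch (path and
passwords are placeholders; the same keys can also be scoped to the listener
with a listeners.https. prefix):

ssl.keystore.location=/home/ec2-user/kafka_2.13-2.7.0/cert/rest.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit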

my connector.json file looks like this:


{
"name": "mm2-connect-cluster",
"config":{
"connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
"connector.client.config.override.policy": "All",
"name": "mm2-connect-cluster",
"topics": "test.*",
"tasks.max": "1",
"source.cluster.alias": "source",
"target.cluster.alias": "target",
"source.cluster.bootstrap.servers": "source:9094",
"target.cluster.bootstrap.servers": "target:9094",
"source->target.enabled": "true",
"target->source.enabled": "false",
"offset-syncs.topic.replication.factor": "4",
"topics.exclude": ".*[\\-\\.]internal, .*\\.replica,
__consumer_offsets",
"groups.blacklist": "console-consumer-.*, connect-.*, __.*",
"topic.creation.enabled": "true",
"topic.creation.default.replication.factor": "4",
"topic.creation.default.partitions": "1"
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"security.protocol": "SSL",
"ssl.truststore.password":
"/home/ec2-user/kafka_2.13-2.7.0/cert/kafka.client.truststore.jks"
}
}
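
Two details in that JSON turned out to matter, as the replies above show: a
comma is missing after the "topic.creation.default.partitions" line, and the
truststore path is assigned to ssl.truststore.password where
ssl.truststore.location was presumably intended.  A corrected fragment
(a sketch; the password value is a placeholder):

"topic.creation.default.partitions": "1",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"security.protocol": "SSL",
"ssl.truststore.location": "/home/ec2-user/kafka_2.13-2.7.0/cert/kafka.client.truststore.jks",
"ssl.truststore.password": "changeit"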


I would then start up the distributor and it launched fine.  So I try to
run the curl command


curl -s -X POST -H 'Content-Type: application/json' --data @connector.json
https://localhost:8443/connectors

nada.  nothing.  no error.  no reasons for not starting.
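
A quick sanity check that at least tells you whether the REST listener is
answering (the -k is only needed while the cert is self-signed):

curl -k https://localhost:8443/
curl -k https://localhost:8443/connectors

The first should return the worker's version info and the second the list of
running connectors; if curl exits silently with nothing at all, adding -v
shows where the TLS handshake stops.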

Is it possible to run MM2 with SSL?  If so, can someone point me to a
working example?

thanks.