Hi
Use it with --command-config client_security.properties and put the
following configuration in the properties file:
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule
required \
username="*" \
password="*";
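That properties file can then be passed to the CLI tools via --command-config; a minimal sketch (the broker address and the SASL_SSL port 9093 are assumptions, not from this thread):

```shell
# client_security.properties holds the SASL_SSL settings shown above
bin/kafka-topics.sh --bootstrap-server broker1.example.com:9093 --list \
  --command-config client_security.properties
```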
Congratulations
On Mon, 19 Oct, 2020, 11:02 pm Bill Bejeck, wrote:
> Congratulations Chia-Ping!
>
> -Bill
>
> On Mon, Oct 19, 2020 at 1:26 PM Matthias J. Sax wrote:
>
> > Congrats Chia-Ping!
> >
> > On 10/19/20 10:24 AM, Guozhang Wang wrote:
> > > Hello all,
> > >
> > > I'm happy to announce
Hi John
Please find my inline response below
Regards and Thanks
Deepak Raghav
On Tue, Sep 1, 2020 at 8:22 PM John Roesler wrote:
> Hi Deepak,
>
> It sounds like you're saying that the exception handler is
> correctly indicating that Streams should "Continue", and
>
Hi Team
Just a reminder.
Can you please help me with this?
Regards and Thanks
Deepak Raghav
On Tue, Sep 1, 2020 at 1:44 PM Deepak Raghav
wrote:
> Hi Team
>
> I have created a CustomExceptionHandler class by
> implementing DeserializationExceptionHandler interface to handle the
if I missed anything.
Regards and Thanks
Deepak Raghav
Hi Tom
Can you please reply to this.
Regards and Thanks
Deepak Raghav
On Mon, Jul 27, 2020 at 10:05 PM Deepak Raghav
wrote:
> Hi Tom
>
> I have to change the log level at runtime, i.e. without restarting the
> worker process.
>
> Can you please share any suggestion
Hi Tom
I have to change the log level at runtime, i.e. without restarting the worker
process.
Can you please share any suggestion on how to do this with log4j?
Regards and Thanks
Deepak Raghav
On Mon, Jul 27, 2020 at 7:09 PM Tom Bentley wrote:
> Hi Deepak,
>
> https://issues.apache.org/ji
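Worth noting for this thread: since Apache Kafka 2.4 (KIP-495), Connect exposes a REST endpoint for exactly this, changing log levels at runtime without a worker restart. A sketch (the worker host/port and the logger name are assumptions; adjust to your deployment):

```shell
# List current logger levels on a Connect worker (assumed at localhost:8083)
curl -s http://localhost:8083/admin/loggers

# Raise one logger to DEBUG at runtime; takes effect without a restart
curl -s -X PUT -H "Content-Type: application/json" \
  -d '{"level": "DEBUG"}' \
  http://localhost:8083/admin/loggers/org.apache.kafka.connect.runtime.WorkerSourceTask
```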
Hi Team
Request you to please have a look.
Regards and Thanks
Deepak Raghav
On Thu, Jul 23, 2020 at 6:42 PM Deepak Raghav
wrote:
> Hi Team
>
> I have some source connector, which is using the logging provided by
> kafka-connect framework.
>
> Now I need to change the log
to log4j2, could you please help me with this.
Regards and Thanks
Deepak Raghav
Hi Robin
Request you to please reply.
Regards and Thanks
Deepak Raghav
On Wed, Jun 10, 2020 at 11:57 AM Deepak Raghav
wrote:
> Hi Robin
>
> Can you please reply.
>
> I just want to add one more thing: yesterday I tried with
> connect.protocol=eager. Task distrib
Hi Robin
Can you please reply.
I just want to add one more thing: yesterday I tried with
connect.protocol=eager. Task distribution was balanced after that.
Regards and Thanks
Deepak Raghav
On Tue, Jun 9, 2020 at 2:37 PM Deepak Raghav
wrote:
> Hi Robin
>
> Thanks for y
understanding is correct or not.
Regards and Thanks
Deepak Raghav
On Tue, May 26, 2020 at 8:20 PM Robin Moffatt wrote:
> The KIP for the current rebalancing protocol is probably a good reference:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-415:+Incremental+Cooperative+Re
Hi Team
Just a Gentle Reminder.
Regards and Thanks
Deepak Raghav
On Fri, May 29, 2020 at 1:15 PM Deepak Raghav
wrote:
> Hi Team
>
> Recently, I saw strange behavior in kafka-connect. We have a source
> connector with a single task only, which reads from an S3 bucket and cop
the connector, a task can be
left assigned in both the worker node.
Note : I have seen this only one time, after that it was never reproduced.
Regards and Thanks
Deepak Raghav
and I cannot show this mail as a
reference.
It would be great if you could share any link/discussion reference
regarding this.
Regards and Thanks
Deepak Raghav
On Thu, May 21, 2020 at 2:12 PM Robin Moffatt wrote:
> I don't think you're right to assert that this is "expected b
chargeableevent",
"connector": {
"state": "RUNNING",
"worker_id": "10.0.0.5:8080"
},
"tasks": [
{
"id": 0,
"state": "RUNNING",
"worker_id": "10.0.0.5:8080"
* *W2*
C1T1C1T2
C2T2C2T2
I hope that answers your question.
Regards and Thanks
Deepak Raghav
On Wed, May 20, 2020 at 4:42 PM Robin Moffatt wrote:
> OK, I understand better now.
>
> You can read more about the guts of the re
and Thanks
Deepak Raghav
On Wed, May 20, 2020 at 1:41 PM Robin Moffatt wrote:
> So you're running two workers on the same machine (10.0.0.4), is
> that correct? Normally you'd run one worker per machine unless there was a
> particular reason otherwise.
> What version of Apache Kafka a
Hi
Please, can anybody help me with this?
Regards and Thanks
Deepak Raghav
On Tue, May 19, 2020 at 1:37 PM Deepak Raghav
wrote:
> Hi Team
>
> We have two worker nodes in a cluster and 2 connectors with 10 tasks
> each.
>
> Now, suppose if we have two kafka connect pr
Hi
Is there any study that shows why smaller messages are optimal for
Kafka, and why throughput decreases as message size grows to 1MB and more ?
What design choices in Kafka lead to this behavior ?
Can any developer or committer share any insight and point to the relevant
piece
overcome our broken
graphs.
Thanks. Regards.
--
Raghav
;
}
}
Thanks.
R
On Fri, Mar 8, 2019 at 10:41 PM Manikumar wrote:
> Hi Raghav,
>
> As you know, KIP-372 added "version" tag to RequestsPerSec metric to
> monitor requests for each version.
> As mentioned in the KIP, to get total count per request (across all
&g
please help us figure out the answer for the email below ? It will
be greatly appreciated. We just want to know how to find the version number
?
Many thanks.
R
On Fri, Dec 14, 2018 at 5:16 PM Raghav wrote:
> I got it to work. I fired up a console and then saw what beans are
> regi
, and wildcards don't work. See the screenshot below,
apiVersion is 7. Where did this come from ? Can someone please help to
understand.
[image: jmx.png]
On Fri, Dec 14, 2018 at 4:29 PM Raghav wrote:
> Is this a test case for this commit:
> https://github.com/apache/kafka/pull/4506 ? I have
:34 AM Raghav wrote:
> Thanks Ismael. How to query it in 2.1 ? I tried all possible ways
> including using version, but I am still getting the same exception message.
>
> Thanks for your help.
>
> On Thu, Dec 13, 2018 at 7:19 PM Ismael Juma wrote:
>
>> The metric was
eve this was in
> the upgrade notes for 2.0.0.
>
> Ismael
>
> On Thu, Dec 13, 2018, 3:35 PM Raghav
> > Hi
> >
> > We are trying to move from Kafka 1.1.0 to Kafka 2.1.0. We used to monitor
> > our 3 node Kafka using JMX. Upon moving to 2.1.0, we have observed that
> the
Hi
We are trying to move from Kafka 1.1.0 to Kafka 2.1.0. We used to monitor
our 3 node Kafka using JMX. Upon moving to 2.1.0, we have observed that the
*below* mentioned metric can't be retrie
and we get the below exception:
Hi
We have a 3 node Kafka broker setup.
Our current value of default.replication.factor is 2.
What should be the recommended value of offsets.topic.replication.factor ?
Please advise, as we are not completely sure.
Thanks for your help.
R
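For reference, the broker default for offsets.topic.replication.factor is 3 (since Kafka 0.11), and for a 3-broker cluster 3 is the usual choice so committed consumer offsets survive a broker failure. A server.properties sketch (the values are illustrative, not from the thread):

```properties
# Internal __consumer_offsets topic; only takes effect when that topic is
# first created, so set it before the cluster serves its first consumer group.
offsets.topic.replication.factor=3
# Topics created without an explicit replication factor
default.replication.factor=2
```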
Anyone ? We have really hit a wall deciphering this error log, and we don't
know how to fix it.
On Wed, Oct 10, 2018 at 12:52 PM Raghav wrote:
> Hi
>
> We are on Kafka 1.1 and have 3 Kafka brokers, and need your help to
> understand the error message, and what it would take to fix
s here*
*controlled.shutdown.enable=true
auto.leader.rebalance.enable=true*
*unclean.leader.election.enable=true*
--
Raghav
Hi
Our 3 node Zookeeper ensemble got powered down, and upon powering up, the
zookeeper could not get quorum and kept throwing these errors. As a result our
Kafka cluster was unusable. What is the best way to revive a ZK cluster in
such situations ? Please suggest.
2018-08-17_00:59:18.87009 2018-08-17
On Wed, Aug 8, 2018 at 6:46 PM, Raghav wrote:
> Hi
>
> Is there any Java API available so that I can enable our Kafka cluster's
> JMX port, and consume metrics via JMX api, and dump to a time series
> database.
>
> I checked out jmxtrans, but currently it does not dump to TSDB (t
his and dumping to InfluxDB
>
>
>
> Boris Lublinsky
> FDP Architect
> boris.lublin...@lightbend.com
> https://www.lightbend.com/
>
> On Aug 8, 2018, at 8:46 PM, Raghav wrote:
>
> Hi
>
> Is there any Java API available so that I can enable our Kafka cluster's
Hi
Is there any Java API available so that I can enable our Kafka cluster's
JMX port, consume metrics via the JMX API, and dump them to a time series
database ?
I checked out jmxtrans, but currently it does not dump to TSDB (time series
database).
Thanks.
R
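Besides jmxtrans, the broker tarball bundles kafka.tools.JmxTool, which polls MBeans over a JMX URL and prints values that a small script can forward to a time-series database. A sketch (the broker host, port 9999, and the metric name are assumptions; verify the flags with --help on your version):

```shell
# The broker must be started with JMX enabled, e.g. JMX_PORT=9999
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi \
  --object-name 'kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec' \
  --reporting-interval 5000
```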
o a rolling restart
> > manually, you should shut down one broker at a time.
> >
> > In this way, you leave time to the broker controller service to balance
> > the active replicas into the healthy nodes.
> >
> > The same procedure when you start up your nodes.
>
Hi
We have a 3 broker Kafka setup on 0.10.2.1. We have a requirement in our
company environment that we have to first stop our 3 Kafka brokers,
then do some operations work that takes about 1 hour, and then bring up the
Kafka (version 1.1) brokers again.
In order to achieve this, we issue:
around that could help
us overcome this issue ? We can repro it every single time.
Many thanks.
> On Fri, May 11, 2018 at 3:16 PM, Raghav <raghavas...@gmail.com> wrote:
>
> > Hi
> >
> > We have a 3 node zk ensemble as well as 3 node Kafka Cluster. They both
> ar
Hi
We have a 3 node zk ensemble as well as 3 node Kafka Cluster. They both are
hosted on the same 3 VMs.
Before Restart
1. We were on Kafka 0.10.2.1
After Restart
1. We moved to Kafka 1.1
We observe that Kafka reports leadership issues, and for a lot of partitions
Leader is -1. I see some logs
Hi
Is there anything that needs to be taken care of if we want to move from
0.10.2.x to the latest 1.1 release ?
Is this stable release and is it recommended for production use ?
Thanks
Raghav
Anyone ?
On Thu, Mar 29, 2018 at 6:11 PM, Raghav <raghavas...@gmail.com> wrote:
> Hi
>
> We have a 3 node Kafka cluster running. Time to time, we have some changes
> in trust store and we restart Kafka to take new changes into account. We
> are on Kafka 0.10.x.
>
>
Hi
We have a 3 node Kafka cluster running. From time to time, we have changes
in the trust store, and we restart Kafka to take the new changes into
account. We are on Kafka 0.10.x.
If we move to 1.1, would we still need to restart Kafka upon trust store
changes ?
Thanks.
--
Raghav
Is it recommended to move to the 1.0 release if we want to overcome this issue
? Please advise, Ted.
Thanks.
R
On Thu, Mar 15, 2018 at 7:43 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> Looking at KAFKA-3702, it is still Open.
>
> FYI
>
> On Thu, Mar 15, 2018 at 5:51 PM, Raghav &
gt; > at
> > org.apache.kafka.common.network.SslTransportLayer.
> > close(SslTransportLayer.java:148)
> > at
> > org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:45)
> > at
> > org.apache.kafka.common.network.Selector.close(Selector.java:442)
> > at org.apache.kafka.common.network.Selector.poll(
> > Selector.java:310)
> > at
> > org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
> > at
> > org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
> > at
> > org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> > Thanks a lot.
> >
>
--
Raghav
Hi
We have a 3 node secure Kafka Cluster (
https://kafka.apache.org/documentation/#security_ssl)
Recently, my producer client is receiving the below message. Can someone
please help me understand the message, and share a few pointers to help
debug and maybe fix this issue.
18/03/15 14:37:23
ue, Feb 6, 2018 at 1:36 PM, Raghav <raghavas...@gmail.com> wrote:
>
> > Ted
> >
> > Sorry, I did not understand your point here.
> >
> > On Tue, Feb 6, 2018 at 1:09 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> >
> > >
o
> be deleted before actually deleting it.
>
> if (cleaner != null && !isFuture) {
>
> cleaner.abortCleaning(topicPartition)
>
> FYI
>
> On Tue, Feb 6, 2018 at 12:56 PM, Raghav <raghavas...@gmail.com> wrote:
>
> > From the log-cleaner
From the log-cleaner.log, I see the following. It seems like it resumes but
is aborted. Not sure how to read this:
[2018-02-06 18:06:22,178] INFO Compaction for partition topic043-27 is
resumed (kafka.log.LogCleaner)
[2018-02-06 18:06:22,178] INFO The cleaning for partition topic043-27 is
Linux. CentOS.
On Tue, Feb 6, 2018 at 12:26 PM, M. Manna <manme...@gmail.com> wrote:
> Is this Windows or Linux?
>
> On 6 Feb 2018 8:24 pm, "Raghav" <raghavas...@gmail.com> wrote:
>
> > Hi
> >
> > While configuring a topic, we are specifying
> >
> > > > > > log.segment.bytes=536870912
> > > > > >
> > > > > > topic configuration (30GB):
> > > > > >
> > > > > > [tstumpges@kafka-02 kafka]$ bin/kafka-topics.sh --zookeeper
> > > > > > zk-01:2181/kafka --describe --topic stg_logtopic
> > > > > > Topic:stg_logtopic PartitionCount:12 ReplicationFactor:3
> > > > > > Configs:retention.bytes=300
> > > > > > Topic: stg_logtopic Partition: 0 Leader: 4
> > > > > > Replicas: 4,5,6 Isr: 4,5,6
> > > > > > Topic: stg_logtopic Partition: 1 Leader: 5
> > > > > > Replicas: 5,6,1 Isr: 5,1,6
> > > > > > ...
> > > > > >
> > > > > > And, disk usage showing 910GB usage for one partition!
> > > > > >
> > > > > > [tstumpges@kafka-02 kafka]$ sudo du -s -h /data1/kafka-data/*
> > > > > > 82G /data1/kafka-data/stg_logother3-2
> > > > > > 155G /data1/kafka-data/stg_logother2-9
> > > > > > 169G /data1/kafka-data/stg_logother1-6
> > > > > > 910G /data1/kafka-data/stg_logtopic-4
> > > > > >
> > > > > > I can see there are plenty of segment log files (512MB each) in
> the
> > > > > > partition directory... what is going on?!
> > > > > >
> > > > > > Thanks in advance, Thunder
> > > > > >
> > > > >
> > > >
> > >
> >
>
--
Raghav
Hi
While configuring a topic, we are specifying the retention bytes per topic
as follows. Our retention time in hours is 48.
bin/kafka-topics.sh --zookeeper zk-1:2181,zk-2:2181,zk-3:2181 --create
--topic AmazingTopic --replication-factor 2 --partitions 64 --config
retention.bytes=16106127360
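One caveat worth making explicit here (it is the same gotcha as in the 910GB thread quoted earlier): retention.bytes is enforced per partition, not per topic, and every replica stores a full copy. So the worst-case disk footprint of the topic created above is partitions × replicas × retention.bytes:

```shell
# 64 partitions x 2 replicas x 16106127360 bytes (15 GiB) per partition
echo "$((64 * 2 * 16106127360)) bytes worst case across the cluster"
```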
Can someone please help here ?
On Thu, Nov 23, 2017 at 10:42 AM, Raghav <raghavas...@gmail.com> wrote:
> Anyone here ?
>
> On Wed, Nov 22, 2017 at 4:04 PM, Raghav <raghavas...@gmail.com> wrote:
>
>> Hi
>>
>> If I give several locations with smaller c
Anyone here ?
On Wed, Nov 22, 2017 at 4:04 PM, Raghav <raghavas...@gmail.com> wrote:
> Hi
>
> If I give several locations with smaller capacity for log.dirs vs one
> large drive for log.dirs, are there any PROS or CONS between the two
> (assuming total storage is same in bot
that there are no issues.
Thanks.
--
Raghav
nme...@gmail.com> wrote:
> There's a video where Jay Kreps talks about how Kafka works - YouTube has
> it as the top 5 under "How Kafka Works".
>
>
> On 20 Sep 2017 5:49 pm, "Raghav" <raghavas...@gmail.com> wrote:
>
> > Hi
> >
> > Jus
.
--
Raghav
Thanks, Guozhang.
On Mon, Sep 18, 2017 at 5:23 PM, Guozhang Wang <wangg...@gmail.com> wrote:
> It is available online now:
> https://www.confluent.io/kafka-summit-sf17/resource/
>
>
> Guozhang
>
> On Tue, Sep 19, 2017 at 8:13 AM, Raghav <raghavas...@gmail.com
Hi
Just wondering if the videos are available anywhere from Kafka Summit 2017
to watch ?
--
Raghav
Thanks, Kamal.
On Fri, Sep 8, 2017 at 4:10 AM, Kamal Chandraprakash <
kamal.chandraprak...@gmail.com> wrote:
> add this lines at the end of your log4j.properties,
>
> log4j.logger.org.apache.kafka.clients.producer=WARN
>
> On Thu, Sep 7, 2017 at 5:27 PM, Raghav <ragh
log4j.appender.file.layout.ConversionPattern=%d{dd-MM-yyyy HH:mm:ss} %-5p
%c{1}:%L - %m%n
On Thu, Sep 7, 2017 at 2:34 AM, Viktor Somogyi <viktorsomo...@gmail.com>
wrote:
> Hi Raghav,
>
> I think it is enough to raise the logging level
> of org.apache.kafka.clients.producer.ProducerConfig to WARN in log4
Hi
My Java code prints the Kafka config every time it does a send, which makes
the log very verbose.
How can I reduce the Kafka client (producer) logging in my Java code ?
Thanks for your help.
--
Raghav
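The suggestions elsewhere in this thread boil down to a log4j.properties fragment like the following (the logger names come from the replies; the WARN levels are a choice, not a requirement). Separately, if the config banner really prints on every send, the application may be constructing a new KafkaProducer per send, which is worth fixing on its own:

```properties
# Silence the per-instance config dump and routine producer chatter
log4j.logger.org.apache.kafka.clients.producer=WARN
log4j.logger.org.apache.kafka.clients.producer.ProducerConfig=WARN
```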
Kafka Brokers only. Clients were Java client that used the same client
version as the broker.
On Thu, Aug 31, 2017 at 5:43 AM, Saravanan Tirugnanum <vtsarv...@gmail.com>
wrote:
> Thank you Raghav. Was it like you upgraded Kafka Broker or Clients or both.
>
> Regards
> Saravana
e ?
>
> Regards
> Saravanan
>
>
> On Wednesday, August 9, 2017 at 11:51:19 PM UTC-5, Raghav wrote:
>>
>> Hi
>>
>> I am sending very small 32 byte message to Kafka broker in a tight loop
>> with 250ms sleep. I have one broker, 1 partition, and replication f
:] - INFO [main:NIOServerCnxnFactory@89] -
binding to port 0.0.0.0/0.0.0.0:2181
Killed
--
Raghav
Broker is 100% running. ZK path shows /brokers/ids/1
On Fri, Aug 18, 2017 at 1:02 AM, Yang Cui <y...@freewheel.tv> wrote:
> please use zk client to check the path:/brokers/ids in ZK
>
> Sent from my iPhone
> >
> > On Aug 18, 2017, at 3:14 PM, Raghav <raghavas...@gmail.com> wrote:
> &
5:47,813] ERROR
org.apache.kafka.common.errors.InvalidReplicationFactorException:
replication factor: 1 larger than available brokers: 0
(kafka.admin.TopicCommand$)
Thanks.
--
Raghav
wrote:
>
>
> ____
> From: Raghav <raghavas...@gmail.com>
> Sent: Thursday, August 10, 2017 12:51 AM
> To: Users; confluent-platf...@googlegroups.com
> Subject: How to debug - NETWORK_EXCEPTION
>
> Hi
>
> I am sending very small 32 byte message to Kafka broker i
=1502479584961,
sendTimeMs=1502479584961),
responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
17/08/11 19:26:24 DEBUG internals.AbstractCoordinator:563 Group coordinator
lookup for group ConsumerGroup05 failed: The group coordinator is not
available.
Thanks.
--
Raghav
)
at
org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:52)
at
org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25)
--
Raghav
Thanks.
On Thu, Jul 13, 2017 at 2:41 AM, Rajini Sivaram <rajinisiva...@gmail.com>
wrote:
> Hi Raghav,
>
> You could take a look at https://github.com/apache/
> kafka/blob/trunk/clients/src/test/java/org/apache/kafka/
> test/TestSslUtils.java
>
> Regards,
>
> Raji
Guys, Would anyone know about it ?
On Tue, Jul 11, 2017 at 6:20 AM, Raghav <raghavas...@gmail.com> wrote:
> Hi
>
> I followed https://kafka.apache.org/documentation/#security to create
> keystore and trust store using Java Keytool. Now, I am looking to do the
> same stuff p
.
--
Raghav
Hi Rajini
Now that 0.11.0 is out, can we use the Admin client ? Are there some
example code for these ?
Thanks.
On Wed, May 24, 2017 at 9:06 PM, Rajini Sivaram <rajinisiva...@gmail.com>
wrote:
> Hi Raghav,
>
> Yes, you can create ACLs programmatically. Take a
Thanks for the update, Guozhang.
On Thu, Jun 22, 2017 at 9:52 PM, Guozhang Wang <wangg...@gmail.com> wrote:
> Raghav,
>
> We are going through the voting process now, expecting to have another RC
> and release in a few more days.
>
>
> Guozhang
>
> On Thu
Hi
Would anyone know when is the Kafka 0.11.0 scheduled to be released ?
Thanks.
--
Raghav
> producer the message, but Consumer cannot receive the message.
>
> What value should we use for advertised.listeners so that Producer can
> write and Consumers can read ?
>
> Thanks.
>
--
Raghav
ound using ACL for Kafka yet (as I am still doing PoC
> > myself) - so probably some other power user can chime in?
> >
> > KR,
> >
> > On 30 May 2017 at 23:35, Raghav <raghavas...@gmail.com> wrote:
> >
> > > Hi
> > >
> > > I wan
replication and partition.
2. Push ACLs into Kafka Cluster
3. Get existing ACL info from Kafka Cluster
Thanks.
Raghav
Hi Alex
In fact I copied the same configuration that Rajini pasted above and it
worked for me. You can try the same. Let me know if it doesn't work.
Thanks.
On Fri, May 26, 2017 at 4:19 AM, Kamalov, Alex <alex.kama...@bnymellon.com>
wrote:
> Hey Raghav,
>
>
>
> Yes, I
rzo
> 908 209-4484
>
> On May 24, 2017 9:29 PM, "Raghav" <raghavas...@gmail.com> wrote:
>
>> Mike
>>
>> I am not using jaas file. I literally took the config Rajini gave in the
>> previous email and it worked for me. I am
configure. Any assistance would be greatly appreciated. Thanks in advance
>
> kafka: { version: 0.10.1.1 }
>
> zkper: { version: 3.4.9 }
>
> Conrad Bennett Jr.
>
>
--
Raghav
9-4484>
>
> On May 24, 2017 9:29 PM, "Raghav" <raghavas...@gmail.com> wrote:
>
>> Mike
>>
>> I am not using jaas file. I literally took the config Rajini gave in the
>> previous email and it worked for me. I am using ssl Kafka with ACLs. I am
>>
ng issues getting acls to work. Out of interest, are you
> starting your brokers with a jaas file? If so, do you mind sharing the client
> and server side jaas entries so I can validate what I'm doing.
>
> mike marzo
> 908 209-4484
>
> On May 24, 2017 10:54 AM, "Raghav" <ra
with a CA to sign certificates. Hopefully that would
work too.
Thanks a lot again.
Raghav
On Wed, May 24, 2017 at 7:04 AM, Rajini Sivaram <rajinisiva...@gmail.com>
wrote:
> Raghav/Darshan,
>
> Can you try these steps on a clean installation of Kafka? It works for me,
Rajini
I will try and report to you shortly. Many thanks.
Raghav
On Wed, May 24, 2017 at 7:04 AM, Rajini Sivaram <rajinisiva...@gmail.com>
wrote:
> Raghav/Darshan,
>
> Can you try these steps on a clean installation of Kafka? It works for me,
> so hopefully it will work for
wrote:
> Raghav
>
> I saw few posts of yours around Kafka ACLs and the problems. I have seen
> similar issues where Writer has not been able to write to any topic. I have
> seen "leader not available" and sometimes "unknown topic or partition", and
> "t
Hello Kafka Users
I am a new Kafka user and trying to make Kafka SSL work with Authorization
and ACLs. I followed posts from Kafka and Confluent docs exactly to the
point but my producer cannot write to kafka broker. I get
"LEADER_NOT_AVAILABLE" errors. And even the Consumer throws the same errors.
Can
: Create from
hosts: *
[root@kafka1 KAFKA]#
Thanks.
On Mon, May 22, 2017 at 8:02 AM, Rajini Sivaram <rajinisiva...@gmail.com>
wrote:
> If you are using auto-create of topics, you also need to grant Create
> access to kafka-cluster.
>
> On Mon, May 22, 2017 at 9:51 AM, Raghav <
}
(org.apache.kafka.clients.NetworkClient)
On Mon, May 22, 2017 at 8:02 AM, Rajini Sivaram <rajinisiva...@gmail.com>
wrote:
> If you are using auto-create of topics, you also need to grant Create
> access to kafka-cluster.
>
> On Mon, May 22, 2017 at 9:51 AM, Raghav <raghavas...@gmail.com>
= 10.10.0.23 on resource =
Cluster:kafka-cluster (kafka.authorizer.logger)
On Mon, May 22, 2017 at 6:34 AM, Rajini Sivaram <rajinisiva...@gmail.com>
wrote:
> Raghav,
>
> I don't believe we do reverse DNS lookup for matching ACL hosts. Have you
> tried defining ACLs wi
(kafka.authorizer.logger)
[2017-05-22 06:10:16,942] DEBUG Principal = User:CN=kafka2 is Denied
Operation = Describe from host = 10.10.0.23 on resource =
Topic:kafka-testtopic (kafka.authorizer.logger)
Thanks.
On Sun, May 21, 2017 at 10:52 PM, Raghav <raghavas...@gmail.com> wrote:
> I tried all poss
2017 at 10:32 AM, Raghav <raghavas...@gmail.com> wrote:
> Hi
>
> I have a SSL setup with Kafka Broker, Producer and Consumer, and it works
> fine. I tried to setup ACLs as given on the website. When I start my
> producer, I am getting this error:
>
>
> [root@kafka
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:Bob
When the certificate was being generated for the Producer (Bob was used in
the CN).
Am I missing something here ? Please help
Thanks.
Raghav
etching metadata with
> > >> > > correlation id 1 : {ssltopic=UNKNOWN_TOPIC_OR_PARTITION}
> > >> > > (org.apache.kafka.clients.NetworkClient)
> > >> > >
> > >> > >
> > >> > > XXXWMXXX-7:kafka_2.11-0.10.1.0 rbaddam$ cat
> client-ssl.properties
> > >> > >
> > >> > > #group.id=sslgroup
> > >> > >
> > >> > > security.protocol=SSL
> > >> > >
> > >> > > ssl.truststore.location=/Users/rbaddam/Desktop/Dev/
> > >> > > kafka_2.11-0.10.1.0/ssl/client.truststore.jks
> > >> > >
> > >> > > ssl.truststore.password=123456
> > >> > >
> > >> > > #Configure Below if you use Client Auth
> > >> > >
> > >> > >
> > >> > > ssl.keystore.location=/Users/rbaddam/Desktop/Dev/kafka_2.
> > >> > > 11-0.10.1.0/ssl/client.keystore.jks
> > >> > >
> > >> > > ssl.keystore.password=123456
> > >> > >
> > >> > > ssl.key.password=123456
> > >> > >
> > >> > >
> > >> > > XXXWMXXX-7:kafka_2.11-0.10.1.0 rbaddam$
> > >> bin/kafka-console-consumer.sh
> > >> > > --bootstrap-server 10.247.195.122:9093
> > >> > > --new-consumer --consumer.config client-ssl.properties --topic
> > >> ssltopic
> > >> > > --from-beginning
> > >> > >
> > >> > > [2016-12-13 14:53:28,817] WARN Error while fetching metadata with
> > >> > > correlation id 1 : {ssltopic=UNKNOWN_TOPIC_OR_PARTITION}
> > >> > > (org.apache.kafka.clients.NetworkClient)
> > >> > >
> > >> > > [2016-12-13 14:53:28,819] ERROR Unknown error when running
> > consumer:
> > >> > > (kafka.tools.ConsoleConsumer$)
> > >> > >
> > >> > > org.apache.kafka.common.errors.GroupAuthorizationException: Not
> > >> > > authorized to access group: console-consumer-52826
> > >> > >
> > >> > >
> > >> > > Thanks in advance,
> > >> > >
> > >> > > Raghu - raghu98...@gmail.com
> > >> > >
> > >> >
> > >>
> > >
> > >
> > >
> > > --
> > > G.Kiran Kumar
> > >
> >
> >
> >
> > --
> > G.Kiran Kumar
> >
>
--
Raghav
and key store. In this
test, I did not add the CA cert in either the keystore or the trust store.
Thanks for all your help.
On Thu, May 18, 2017 at 8:26 AM, Rajini Sivaram <rajinisiva...@gmail.com>
wrote:
> Raghav,
>
> Perhaps what you want to do is:
>
> *You do (for the broker
at 6:26 AM, Rajini Sivaram <rajinisiva...@gmail.com>
wrote:
> Raghav,
>
> Yes, you can create a truststore with your customers' certificates and
> vice-versa. It will be best to give your CA certificate to your customers
> and get the CA certificate from each of your customers
wrote:
> Raghav,
>
> 1. Yes, your customers can use certificates signed by a trusted authority.
> You can simply omit the truststore configuration for your broker in
> server.properties, and Kafka would use the default, which will trust the
> client certificates. If your brokers
://kafka.apache.org/documentation/#security I have to manually give
password. It would be great if we can automate this process either through
script or Java code. Any suggestions ...
Many thanks.
On Tue, May 16, 2017 at 10:58 AM, Raghav <raghavas...@gmail.com> wrote:
> Many thanks, Rajini.
>
>
Many thanks, Rajini.
On Tue, May 16, 2017 at 8:43 AM, Rajini Sivaram <rajinisiva...@gmail.com>
wrote:
> Hi Raghav,
>
> If your Kafka broker is configured with *ssl.client.auth=required,* your
> customer's clients need to provide a keystore. In any case, they need a
> trustst
atory if ssl.client.auth=required, optional for
> requested and not used for none. The truststore configured on the client is
> used to authenticate the server. So you have to provide it unless your
> broker is using certificates signed by a trusted authority.
>
> Hope that helps.
>
&g
Hi
I read the documentation here:
https://kafka.apache.org/documentation/#security_ssl
I have few questions about trust store and keystore based on this scenario:
We have 5 Kafka Brokers in our cluster. We want our clients to write to our
Kafka brokers in a secure way. Suppose, we also host a