Congratulations on this amazing release! Lots of cool new features :)
I've also released a YouTube video that will hopefully help the community
get up to speed: https://www.youtube.com/watch?v=kaWbp1Cnfo4&t=5s
Happy watching!
On Tue, Mar 26, 2019 at 7:02 PM Matthias J. Sax wrote:
> The Apache
One could refactor MirrorMaker to commit the source cluster's offsets in the
target cluster instead (in a special topic).
This would technically allow achieving exactly-once semantics using the
transactional API.
But there's work associated with that
Let me know if I’m missing something
On 12/1/18,
I believe these are defaults you can set at the broker level, so that if a
topic doesn’t have the setting explicitly configured, it inherits the broker
default. But you can definitely override the configuration at the topic level.
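For illustration, the interplay might look like this (the topic name and values are made up, and the era-appropriate kafka-configs tool talks to ZooKeeper):

```
# Broker-wide default in server.properties:
log.retention.hours=168

# Per-topic override, which takes precedence over the broker default:
kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config retention.ms=86400000
```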
On 9 March 2017 at 7:42:14 am, Nicolas Motte (lingusi...@gmail.com) wrote:
Hi everyone
https://mvnrepository.com/artifact/org.apache.kafka/kafka_2.11
Am I missing something?
On 23 February 2017 at 9:21:08 am, Gwen Shapira (g...@confluent.io) wrote:
I saw them in Maven yesterday?
On Wed, Feb 22, 2017 at 2:15 PM, Stephane Maarek
wrote:
> Awesome thanks a lot! When should
Awesome, thanks a lot! When should we expect the dependencies to be released
in Maven? (including Scala 2.12)
On 23 February 2017 at 8:27:10 am, Jun Rao (j...@confluent.io) wrote:
Thanks for driving the release, Ewen.
Jun
On Wed, Feb 22, 2017 at 12:33 AM, Ewen Cheslack-Postava
wrote:
> The Apa
Zookeeper in the Client section.
Am I missing something?
Regards,
Stephane
On 21 February 2017 at 12:20:58 am, Martin Gainty (mgai...@hotmail.com)
wrote:
MG>confusion between JAAS-security terminology and Kafka-SASL terminology?
____
From: Stephane Maarek
Sent:
Hi,
I’m wondering if the official Kafka documentation is misleading. Here (
https://kafka.apache.org/documentation/#security_sasl_brokernotes) you can
read:
1. Client section is used to authenticate a SASL connection with
zookeeper. It also allows the brokers to set SASL ACL on zookeeper no
Hi
It’d be great to document what the JAAS file may look like at:
http://docs.confluent.io/3.1.2/schema-registry/docs/security.html
I need to ask my IT team for principals, which takes a while, so is this a
correct JAAS?
KafkaClient{
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=
Hi
It seems that your keytab doesn't have the principal you configured your
"client" section to use. Post your jaas here if you want further help but
basically you should be able to do
kinit -V -k -t <keytab> <principal>
On 18 Feb. 2017 3:56 am, "Raghav" wrote:
Hi
I am trying to setup a simple setup with one K
because it is updated with the
> latest 0.10.2 client.
>
> -hans
>
> Sent from my iPhone
>
> > On Feb 16, 2017, at 2:55 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
> >
> > Hi,
> >
> > What is Zookeeper used for in the Kafka Res
7 Feb. 2017 4:54 pm, "Manikumar" wrote:
Please enable the authorizer logs (config/log4j.properties) and check if
operations are getting denied.
Also, enable producer debug logs (config/tools-log4j.properties) to check
for any errors.
On Fri, Feb 17, 2017 at 10:34 AM, Stephane
Hi,
I secured my cluster and everything was working fine. Brokers are up and
don’t complain, my topics are all synchronized.
Here’s my config (excerpt):
listeners=
PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093,SASL_PLAINTEXT://0.0.0.0:9094,SASL_SSL://0.0.0.0:9095
super.users=User:kafka;User:ANONYMO
So the issue is that you need your kafka/f...@realm.com principal in the
KafkaServer JAAS section, and the zkcli...@realm.com principal in the Client
JAAS section. That should solve your issues.
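A sketch of what the broker's JAAS file could then look like (keytab paths, hostnames, and the realm here are placeholders, not your actual values):

```
KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/kafka.keytab"
  principal="kafka/kafka-1.hostname@REALM.COM";
};

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/zkclient.keytab"
  principal="zkclient@REALM.COM";
};
```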
On 9 February 2017 at 7:42:54 pm, Ashish Bhushan (ashish6...@gmail.com)
wrote:
Any help?
On 09-Feb-2017 1:13 PM,
Hi,
What is Zookeeper used for in the Kafka Rest Proxy?
Is the dependency on Zookeeper why I can’t integrate Kafka Rest proxy with
a secure cluster?
http://docs.confluent.io/3.1.1/kafka-rest/docs/security.html
Regards,
Stephane
Hi,
If I authorise applications to create their own topics (with auto create
setting), will the topics automatically inherit some SASL ACLs?
I’m wondering this because of, for example, Kafka Streams applications. I
know they can create their own temp topics and such.
Just wondering what a good ACL
(sorry many questions on security)
I have a kafka cluster with 3 principals
kafka/kafka-1.hostname@realm.com kafka/kafka-2.hostname@realm.com
kafka/kafka-3.hostname@realm.com
I’m trying to enable ACL and I was reading on the confluent website that I
should setup my brokers to be supe
Hi,
Is it possible to assign Kerberos users to groups and then set ACLs for
these groups?
The problem is that it’s painful to add every user’s ACL when their
principal is created, so we’re thinking of creating a “public” and a
“confidential” group. Topics would be assigned to either and then if th
Hi,
We have a Kafka cluster in dev, and ideally I’d like the following ports to
be opened:
9092 -> PLAINTEXT
9093 -> SSL
9094 -> SASL_PLAINTEXT
9095 -> SASL_SSL
The goal is to allow applications to slowly evolve toward 9095 and then
migrate to prod where 9095 is the only port opened.
*Is it poss
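In server.properties terms, the port layout above would be a sketch like this (assuming all four protocols stay enabled during the transition):

```
listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093,SASL_PLAINTEXT://0.0.0.0:9094,SASL_SSL://0.0.0.0:9095
# Inter-broker traffic can already use the target protocol:
security.inter.broker.protocol=SASL_SSL
```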
for you?
Ismael
On Mon, Feb 6, 2017 at 10:34 PM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:
> Hi,
>
> As written here:
> http://docs.confluent.io/3.1.2/connect/security.html#acl-considerations
> "Note that if you are using SASL for authentication, you mus
Hi,
As written here:
http://docs.confluent.io/3.1.2/connect/security.html#acl-considerations
"Note that if you are using SASL for authentication, you must use the same
principal for workers and connectors as only a single JAAS is currently
supported on the client side at this time as described her
Hi,
I’m using the confluent kafka images and whenever I turn off a broker I get
the following messages:
[2017-02-01 02:44:48,310] WARN Failed to send SSL Close message
(org.apache.kafka.common.network.SslTransportLayer)
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Nat
So a bit of feedback as well.
I wish Kafka Connect worked the following way (just a proposal):
You send a configuration which points to a class, but also a version for
that class (the connector). Kafka Connect then has some sort of capability
to pull that class from a dependency repository and is
Hi Ewen
If the trend is to hide zookeeper entirely (and most likely restricting its
network connection to Kafka only ) would it make sense to update the Kafka
topics tool ? Currently it is
> *bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor
> 1 --partitions 1 --topic
y pause consumption), then it
> sounds like you found a way to trigger a bug. If you have a set of steps to
> reproduce the issue, a bug report would be appreciated.
>
> -Ewen
>
> On Mon, Jan 16, 2017 at 11:01 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote
am, Stephane Maarek (
steph...@simplemachines.com.au) wrote:
Hi Konstantine,
I appreciate you taking the time to respond
So I have set CONNECT_LOG4J_ROOT_LEVEL=INFO and that’s the output I got
below
Now I understand I need to set CONNECT_LOG4J_LOGGERS also. Can I please
have an example of how to
nfigurable with the current templates because this allows you to view the
logs for each container directly through docker via the command "docker
logs ", which is the preferred way.
Hope this helps,
Konstantine
On Mon, Jan 16, 2017 at 9:51 PM, Stephane Maarek <
steph...@simplemachine
Hi,
I have paused my connector, yet it’s still very much active and processing
data. I can tell because the offset lag keeps decreasing (it still has ~100M
messages to read).
Is such a bug known? When I get the status it says PAUSED
(It is a custom connector, but I don’t think I’ve implemented anyt
rootLogger and it doesn’t seem to be
taken into account
On 16 January 2017 at 7:01:50 pm, Stephane Maarek (
steph...@simplemachines.com.au) wrote:
Hi,
I created my own connector and I’m launching it in cluster mode, but every
DEBUG statement is still going to the console.
How can I control the
Hi,
I created my own connector and I’m launching it in cluster mode, but every
DEBUG statement is still going to the console.
How can I control the log level of Kafka Connect and its associated
connectors? I’m using the confluent docker image btw
Thanks
Stephane
Hi Stephen
Out of curiosity, why did you pick ZFS over XFS or ext4 and what options
are you using when formatting and mounting?
Regards,
Stephane
On 13 January 2017 at 6:40:18 am, Stephen Powis (spo...@salesforce.com)
wrote:
Running Centos 6.7 3.10.95-1.el6.elrepo.x86_64. 4 SATA disks in RAID10
Hi,
I’m wondering if the following is feasible…
I have a json document with pretty much 0 schema. The only thing I know for
sure is that it’s a json document.
My goal is to pipe that json document in a postgres table that has two
columns: id and json. The id column is basically topic+partition+off
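As a sketch of the key scheme described above (the names and table layout are hypothetical), the id could be derived from the record coordinates, which are unique per Kafka message:

```python
import json

def record_id(topic: str, partition: int, offset: int) -> str:
    # topic+partition+offset uniquely identifies a Kafka record,
    # so it makes a natural primary key for the target table
    return f"{topic}+{partition}+{offset}"

# Hypothetical row for an INSERT into a table with columns (id, json)
row = (record_id("events", 3, 42), json.dumps({"any": "document"}))
```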
Thanks!
So I just override the conf while doing the API call? It’d be great to have
this documented somewhere on the confluent website. I couldn’t find it.
On 6 January 2017 at 3:42:45 pm, Ewen Cheslack-Postava (e...@confluent.io)
wrote:
On Thu, Jan 5, 2017 at 7:19 PM, Stephane Maarek <
st
:12 PM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:
> Hi,
>
> We like to operate in micro-services (dockerize and ship everything on
ecs)
> and I was wondering which approach was preferred.
> We have one kafka cluster, one zookeeper cluster, etc, but when it comes
Hi,
We like to operate in micro-services (dockerize and ship everything on ecs)
and I was wondering which approach was preferred.
We have one kafka cluster, one zookeeper cluster, etc, but when it comes to
kafka connect I have some doubts.
Is it better to have one big kafka connect with multiple
Hi,
My company has an Active Directory but I’m not exactly sure what to ask for
from them.
My current setup and goal is a fully automated kafka cluster, where during
each kafka broker boot a DNS name is created
(kafka-broker-10.example.com for example).
I’m looking into enabling security wit
Hi
What would be an ideal block size for a disk that has kafka logs and why?
I’m tempted to use a large value like 64k to enhance sequential reads,
but I have no idea if it will actually help.
(I’m also using XFS as my disk format.)
Thanks for the help!
Stephane
Hi,
Using the reassign partition tool, moving partitions 30 at a time, I’m
getting the following errors (en masse):
kafka.common.NotAssignedReplicaException: Leader 12 failed to record
follower 9's position 924392 since the replica is not recognized to be one
of the assigned replicas 10,12,13 for p
Hi,
I’m shutting down Kafka properly (see the bottom of this message).
But on reboot, I sometimes get the following:
[2016-12-23 07:45:26,544] INFO Loading logs. (kafka.log.LogManager)
[2016-12-23 07:45:26,609] WARN Found a corrupted index file due to
requirement failed: Corrupt index found, index file
(/mn
or
<https://github.com/pinterest/secor>. Secor will write all messages to an
S3 bucket from which you could replay the data if you need to. Sadly, it
doesn't come with a producer to replay the data, so you would have to write
your own.
Let me know if that helps!
Thanks much,
Andrew Clar
Hi,
I’m doing a repartitioning from broker 4 5 6 to broker 7 8 9. I’m getting a
LOT of the following errors (for all topics):
[2016-12-22 04:47:21,957] ERROR [ReplicaFetcherThread-0-9], Error for
partition [__consumer_offsets,29] to broker
9:org.apache.kafka.common.errors.NotLeaderForPartitionExc
Hi,
I have Kafka running on EC2 in AWS.
I would like to backup my data volumes daily in order to recover to a point
in time in case of a disaster.
One thing I’m worried about is that if I do an EBS snapshot while Kafka is
running, it seems a Kafka that recovers on it will have to deal with
corrup
Please read the following:
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-HowdoIgetexactly-oncemessagingfromKafka?
https://kafka.apache.org/0101/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html
(usage Examples).
It describes all the use cases.
If you store offs
The answer is that I had a wildcard certificate and my advertised hostname
had an extra dot “.” in it, which made the wildcard invalid. I changed my
naming convention to use hyphens and now it works smoothly.
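A minimal sketch of why the extra dot broke things (assuming standard RFC 6125-style matching, where `*` covers exactly one DNS label; hostnames here are made up):

```python
import re

def wildcard_matches(pattern: str, hostname: str) -> bool:
    # A certificate wildcard matches a single DNS label only:
    # "*.example.com" covers "kafka-1.example.com"
    # but NOT "kafka.dc1.example.com" (the extra dot adds a label)
    regex = re.escape(pattern).replace(r"\*", r"[^.]+")
    return re.fullmatch(regex, hostname) is not None
```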
On 21 December 2016 at 3:17:24 pm, Stephane Maarek (
steph
Hi,
I have set up SSL (port 9093) using a keystore / truststore on each broker
and, as you can see, it works if I specify the truststore and doesn’t work
if I don’t:
root@8681fd9da149:/test# kafka-console-producer --broker-list
localhost:9093 --topic test_ssl
hi
[2016-12-21 04:09:16,527] WARN Bootstr
CNAME and
advertised hostnames
We basically have the CNAME in order to cover all the brokers using only 3
DNS records, but the bootstrap CNAME is never advertised by any of the
brokers. Is that an issue?
Kind regards,
Stephane
brokers?
Kind regards,
Stephane
*Stephane Maarek* | Developer
+61 416 575 980
steph...@simplemachines.com.au
simplemachines.com.au
Level 2, 145 William Street, Sydney NSW 2010
On 20 December 2016 at 4:27:28 am, Rajini Sivaram (rajinisiva...@gmail.com)
wrote:
Stephane
Hi,
I have read the docs extensively, yet there are a few answers I can’t find.
It has to do with external CAs.
Please confirm my understanding if possible:
I can create my own CA to sign all the brokers and clients certificates.
Pros:
- cheap, easy, automated. I need to find a way to access tha