Hi,
A. What is the usage of the `kafka-replica-verification.sh` script? I can't
find any documentation about it in [1] and [2].
I have a topic `test` with 10 partitions. I ran the above script, and it
continuously prints the results below.
[kamal@tcltest1 bin]$ sh kafka-replica-verification.sh --time
> 1) Can the ACLs be specified statically in a config file of sorts? Or is
> bin/kafka-acl.sh or a similar kafka client API the only way to specify the
> ACLs?
kafka-acls.sh uses SimpleAclAuthorizer, and the only way it accepts
ACLs is via command-line parameters.
> 2) I notice that bin/kafka-acl.sh
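For what it's worth, the authorizer itself is enabled via server.properties; a minimal sketch (0.9/0.10 property names, with a placeholder super user — the individual ACLs still can't go in this file):

```properties
# Enable the built-in authorizer that ships with Kafka 0.9+
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# Principals listed here bypass ACL checks entirely
super.users=User:admin
```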
Thanks Ismael,
I don't have permissions; my username is dbahir.
On Fri, Jun 3, 2016 at 4:49 AM, Ismael Juma wrote:
> There are instructions here:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
>
> Let me know your user id in the wiki if you
Doing some more investigation into this, the KTable-KTable inner join is
indeed emitting records on every update of either KTable. If there is no
match found, the record that's emitted is null. This may be a conscious
design decision due to the continuous nature of the join, although I'd love
to
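Just to picture the behaviour described above, here is a plain-Java sketch (not the Streams API; `onLeftUpdate` and the two maps are made up for illustration — each map stands in for one KTable's state):

```java
import java.util.HashMap;
import java.util.Map;

public class JoinSketch {
    static final Map<String, String> left = new HashMap<>();
    static final Map<String, String> right = new HashMap<>();

    // On an update to one side, an inner join emits the joined pair when the
    // other side has the key, and null otherwise (the behaviour observed above).
    static String onLeftUpdate(String key, String value) {
        left.put(key, value);
        String other = right.get(key);
        return other == null ? null : value + "," + other;
    }

    public static void main(String[] args) {
        System.out.println(onLeftUpdate("a", "1"));  // no match yet -> null
        right.put("a", "x");
        System.out.println(onLeftUpdate("a", "2"));  // match -> 2,x
    }
}
```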
Hello,
A few questions on Kafka security.
1) Can the ACLs be specified statically in a config file of sorts? Or is
bin/kafka-acl.sh or a similar kafka client API the only way to specify the
ACLs?
2) I notice that bin/kafka-acl.sh takes an argument to specify zookeeper,
but doesn't seem to have a
No. These versions and all versions 0.8 onwards rely on Zookeeper.
On Wednesday, 8 June 2016, Subhash Agrawal wrote:
> Hi,
> I am currently using Kafka 0.7.1 without zookeeper. We have single node
> kafka server.
> To enhance security, we have decided to support SSL. As
Hi,
I am currently using Kafka 0.7.1 without ZooKeeper. We have a single-node
Kafka server.
To enhance security, we have decided to support SSL. As version 0.7.1 does
not support SSL, we are upgrading to the latest version, 0.10.0.0. We
noticed that with the latest version, it is
mandatory to use
Will have to get back to you on that, Ismael. This code is owned by
another team and is in a different code repository.
Chris
From: Ismael Juma
To: users@kafka.apache.org
Date: 06/08/2016 03:22 PM
Subject:Re: Invalid Version for API key
Great Chris. Out of curiosity, which Kafka client was being used by this
microservice?
Ismael
On 8 Jun 2016 18:54, "Chris Barlock" wrote:
> Thanks, Ismael. We tracked it down to one of the 11 microservices running
> on this system.
>
> Thank you, all, who replied to me!
>
>
Is there any way in the current API to achieve this?
I am trying to set up some screen to be able to promote an observer to
voting member in case of a loss of quorum for instance.
Appreciate your help.
thanks,
Nomar
Thanks, Ismael. We tracked it down to one of the 11 microservices running
on this system.
Thank you, all, who replied to me!
Chris
From: Ismael Juma
To: users@kafka.apache.org
Date: 06/08/2016 11:29 AM
Subject:Re: Invalid Version for API key
Hi Chris,
The error includes the IP address of the client:
172.20.3.0:53901
Would that help identify the client? We are not aware of issues with the
built-in clients.
Ismael
On Wed, Jun 8, 2016 at 4:13 PM, Chris Barlock wrote:
> I think everyone is using the Maven
I think everyone is using the Maven kafka-clients 0.8.2.1 but I can't be
sure since my team does not have control over all the code. I cannot
identify what is triggering these exceptions -- which is why I was asking
about additional tracing that might add some context around the exception
to
Hello! I am now using Kafka 0.9. The group offsets are no longer stored in
ZooKeeper, so how can I get the offsets of a group?
1. When I use the command: "bin/kafka-run-class.sh
kafka.tools.ConsumerOffsetChecker --zookeeper ..."
I get an error like this: "ConsumerOffsetChecker is deprecated and will be dropped
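For what it's worth, with the new consumer the offsets can be queried with the kafka-consumer-groups tool instead; a sketch, assuming a broker on localhost:9092 and a group named my-group:

```
bin/kafka-consumer-groups.sh --new-consumer \
  --bootstrap-server localhost:9092 \
  --describe --group my-group
```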
ApiKey 2 is the Offsets request. There is only a version 0 of that protocol,
since there has been no change in the protocol version for LIST_OFFSETS
from 0.8 to 0.10. So the error that version 1 is invalid is correct.
What has changed in 0.10 is that the validation and errors of incorrect api
Hello everyone,
I'm unable to `import org.apache.kafka.clients` when using gradle. The
error is package not found.
here is my build.gradle file
//build.gradle file
apply plugin: 'java'
repositories {
    mavenCentral()
}
dependencies {
    // version shown as an example; use the release you target
    compile group: 'org.apache.kafka', name: 'kafka-clients', version: '0.10.0.0'
}
I'm new to Java and I thought `import
org.apache.kafka.clients.*` would work.
Thanks anyway.
On Wed, Jun 8, 2016 at 3:16 PM sunil mahendrakar <
sunil.mahendraka...@gmail.com> wrote:
> Hello everyone,
>
> I'm unable to `import org.apache.kafka.clients` when using gradle. The
I have a seemingly simple case where I want to join two KTables to produce
a new table with a different key, but I am getting NPEs. My understanding
is that to change the key of a KTable, I need to do a groupBy and a reduce.
What I believe is going on is that the inner join operation is emitting
I believe the wire format has changed between 0.8 and 0.9. It might be
necessary to update your clients. I'd try that first before doing any further
debugging / tracing.
--
Best regards,
Rad
On Wed, Jun 8, 2016 at 4:40 PM +0200, "Chris Barlock"
wrote:
Hi Elias,
You'll have to do some rolling restarts, but downtime can be limited.
There are two things you have to consider at a high level:
1) How to migrate zookeeper without downtime
- Starting with a quorum of 3, add two of the new servers to the quorum,
bringing it up to 5
- Once everything is in
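A sketch of what the ensemble section of each server's zoo.cfg looks like at the 5-node stage (hostnames are placeholders; every server's config must list all five before its restart):

```properties
server.1=old-zk1:2888:3888
server.2=old-zk2:2888:3888
server.3=old-zk3:2888:3888
server.4=new-zk1:2888:3888
server.5=new-zk2:2888:3888
```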
Anyone? Is there additional tracing that can be turned on to track down
the source of these exceptions?
Chris
From: Chris Barlock/Raleigh/IBM@IBMUS
To: users@kafka.apache.org
Date: 06/07/2016 12:45 PM
Subject:Invalid Version for API key
We are running some tests on
Hi all,
I've finally managed to use the tool with the --command-config flag and a
consumer.properties file containing the line
"security.protocol=SASL_PLAINTEXT" (the security-protocol flag of the
command line tool does not seem to have any effect for this particular
tool).
However, I'm still at
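For reference, a minimal consumer.properties sketch for SASL_PLAINTEXT as described above (the Kerberos service name is an assumption; adjust for your cluster):

```properties
security.protocol=SASL_PLAINTEXT
# assumed service principal name; match your broker's Kerberos setup
sasl.kerberos.service.name=kafka
```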
Hello!
I'm working with Kafka 0.9.1 new consumer API.
The consumer is manually assigned to a partition. For this consumer I would
like to see its progress (meaning the lag).
Since I added the group id consumer-tutorial as a property, I assumed that I
can use the command
The thread on the hortonworks community forum is there ->
https://community.hortonworks.com/questions/38409/not-able-to-monitor-consumer-group-lag-with-new-co.html
2016-06-08 9:59 GMT+02:00 Pierre Labiausse :
> Hello again,
>
> I've tried to use the command line tool
Hi,
Is there a proper way to change the zookeeper cluster?
I want to migrate to a new zookeeper cluster, preferably with little or no
downtime.
Is it possible?
I don't know about 'normal'; we use the MockProducer with autoComplete set
to false, and use a ResponseThread to simulate produce behaviour like this:
private final class ResponseThread extends Thread {
    public void run() {
        try {
            Thread.sleep(responseTime);
        } catch (InterruptedException e) {
            // preserve the interrupt status if we're woken early
            Thread.currentThread().interrupt();
        }
    }
}
Hello again,
I've tried to use the command line tool again without success, but I'm not
seeing the IOException anymore in the error stack, only an EOFException.
Nothing is logged server-side, as if the request is not even received.
Ewen, I've tried using the command-config flag as you suggested,
Liju,
Quotas are not applied to the replica fetch followers.
Regards,
Rajini
On Fri, Jun 3, 2016 at 7:25 PM, Liju John wrote:
> Hi ,
>
> We are exploring the new quotas feature with Kafka 0.9.0.1.
> Could you please let me know if quotas feature works for fetch follower
Anyone got any pointers to simple examples of their use?
Through trial and error I have managed to queue a message to a mock consumer by:
Initialisation:
TopicPartition partition = new TopicPartition(topicName, 0);
consumer.rebalance(singletonList(partition));
Hi Saeed,
Kafka Streams takes care of assigning partitions to consumers automatically for
you. You don't have to write anything explicit to do that. See
WordCountDemo.java as an example. Was there another reason you wanted control
over partition assignment?
Thanks
Eno
> On 7 Jun 2016, at