kafka-consumer-groups.sh fails with SASL enabled 0.9.0.1

2016-08-08 Thread Prabhu V
I am using a Kafka consumer where the partitions are assigned manually
(using code similar to "consumer.assign();") instead of through automatic
group assignment.
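
For reference, a minimal sketch of the manual-assignment pattern in question
(topic name, partitions, and props are placeholders):

import java.util.Arrays;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

// Manual assignment: with assign() the consumer never registers itself with
// the group coordinator, unlike subscribe()-based automatic assignment.
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.assign(Arrays.asList(
    new TopicPartition("my_topic", 0),
    new TopicPartition("my_topic", 1)));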

In this case bin/kafka-consumer-groups fails with the message "Consumer
group `my_group1` does not exist or is rebalancing"


On debugging, I found that AdminClient.scala returns an empty list for the
group summary, with a "GroupSummary(Dead,,,List())" status.

The command works when I use a consumer group with automatic partition
assignment.

Can someone from kafka-dev confirm if this is the expected behaviour?

Thanks,
Prabhu


kafka-connect-hdfs failure due to corrupt WAL file

2016-08-02 Thread Prabhu V
Hi,

I am running kafka-connect-hdfs on 2 nodes, and one of the nodes had to be
rebooted while the process was running.

Upon restart, the process fails with:

16/08/02 21:43:30 ERROR hdfs.TopicPartitionWriter: Recovery failed at state
RECOVERY_PARTITION_PAUSED
org.apache.kafka.connect.errors.ConnectException: java.io.IOException:
Filesystem closed
at io.confluent.connect.hdfs.wal.FSWAL.apply(FSWAL.java:131)

I looked into the code, and it appears that a corrupt/unreadable WAL log
file causes this issue.

Can I delete the WAL files to fix this issue? I don't mind duplicate events.
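
For illustration, a hypothetical recovery sketch using the Hadoop FileSystem
API; the namenode URI and the WAL path layout (<logs.dir>/<topic>/<partition>/log,
based on the connector's default logs.dir) are assumptions, so adjust them to
your configuration:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteWal {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new URI("hdfs://namenode:8020"), new Configuration());
    // Hypothetical WAL file for topic-partition my_topic-0.
    Path wal = new Path("/logs/my_topic/0/log");
    fs.delete(wal, false);  // non-recursive; the connector recreates the WAL on restart
    fs.close();
  }
}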

Thanks,
Prabhu


kafka-connect-hdfs offset out of range behaviour

2016-07-13 Thread Prabhu V
kafka-connect-hdfs just hangs if the offset that it expects is no longer
present (this happens when messages get deleted because of the retention
time).

The process in this case does not write any output and the messages get
ignored.

Is this by design?

The relevant code is

TopicPartitionWriter.java

if (offset == -1) {
  offset = record.kafkaOffset();
} else if (record.kafkaOffset() != expectedOffset) {
  // Currently it's possible to see stale data with the wrong offset after a rebalance
  // when you rewind, which we do since we manage our own offsets. See KAFKA-2894.
  if (!sawInvalidOffset) {
    log.info("Ignoring stale out-of-order record in {}-{}. Has offset {} instead of expected offset {}",
        record.topic(), record.kafkaPartition(), record.kafkaOffset(), expectedOffset);
  }
  sawInvalidOffset = true;
  return;
}

In the "else if" we should not ignore the message if the
record.kafkaOffset() is greater than expectedOffset. Any thoughts ?
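
For illustration, one possible shape of that change (a rough sketch, not a
tested patch against the connector):

} else if (record.kafkaOffset() < expectedOffset) {
  // Only skip records that are truly stale, i.e. behind the expected offset.
  if (!sawInvalidOffset) {
    log.info("Ignoring stale out-of-order record in {}-{}. Has offset {} instead of expected offset {}",
        record.topic(), record.kafkaPartition(), record.kafkaOffset(), expectedOffset);
  }
  sawInvalidOffset = true;
  return;
} else {
  // Offsets jumped forward (e.g. the expected offset was deleted by retention):
  // resync to the record's offset instead of silently dropping everything.
  offset = record.kafkaOffset();
}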

Thanks,
Prabhu


Find partition offsets in a kerberized kafka cluster

2016-07-06 Thread Prabhu V
Hi,

Does the kafka.tools.GetOffsetShell utility work with a kerberized Kafka
cluster?

I suspect that it uses the old consumer, which does not work with Kerberos,
and hence cannot be used here.

Is there a utility that provides this functionality in a kerberized cluster?

I currently do this with the following code on the new consumer. Let me
know if there is a better way.

// partsTopic is an array of TopicPartition
consumer.assign(Arrays.asList(partsTopic));
consumer.seekToEnd(partsTopic);

for (TopicPartition partTopic : partsTopic) {
  log.info(String.format("%s, offset - %s", partTopic.toString(),
      consumer.position(partTopic)));
}
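
For completeness, a self-contained sketch of the same approach (the broker
address, topic, and partitions are placeholders, and the Kerberos client
JAAS/keytab setup is assumed to be supplied externally via
java.security.auth.login.config):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class EndOffsets {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9094");    // placeholder broker
    props.put("security.protocol", "SASL_PLAINTEXT");  // kerberized listener
    props.put("sasl.kerberos.service.name", "kafka");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

    KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
    try {
      TopicPartition[] partsTopic = {
          new TopicPartition("test", 0), new TopicPartition("test", 1)};
      consumer.assign(Arrays.asList(partsTopic));
      consumer.seekToEnd(partsTopic);  // varargs in the 0.9 API
      for (TopicPartition partTopic : partsTopic) {
        System.out.printf("%s, end offset - %d%n", partTopic, consumer.position(partTopic));
      }
    } finally {
      consumer.close();
    }
  }
}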


Thanks,
Prabhu


Re: batching related issue in 0.9.0 producer

2016-05-03 Thread Prabhu V
Hi Mayuresh,

Staying with BufferPool.java, could you tell me why we need the following
piece of code?

if (this.availableMemory > 0 || !this.free.isEmpty()) {
  if (!this.waiters.isEmpty())
    this.waiters.peekFirst().signal();
}

As far as I can see, there are two threads: the Producer and the Sender.
The producer attempts to append the record, and if there is no memory in the
buffer it "awaits" on the "condition". When the Sender has sent some data,
it deallocates the buffer and "signals" the "condition". In this scenario
there will never be more than one element in the "waiters" deque: the one
for the blocked producer thread.

If we have a multithreaded producer, then it makes sense.
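
For context, KafkaProducer is documented as thread-safe, so a setup like the
following (a hypothetical sketch; broker and topic are placeholders) can put
several application threads into BufferPool.allocate() at once, each parking
its own condition in the deque:

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MultiThreadedSend {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9092");  // placeholder
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    // One producer shared by many threads: concurrent send() calls can all
    // block waiting for buffer memory at the same time.
    final KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
    ExecutorService pool = Executors.newFixedThreadPool(8);
    for (int i = 0; i < 8; i++) {
      pool.submit(new Runnable() {
        public void run() {
          for (int n = 0; n < 100000; n++) {
            producer.send(new ProducerRecord<String, String>("my_topic", "payload-" + n));
          }
        }
      });
    }
    pool.shutdown();
  }
}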

Please let me know if this is not the case.

Thanks,
Prabhu

On Tue, May 3, 2016 at 1:56 PM, Prabhu V <vpra...@gmail.com> wrote:

>
> Whenever the BufferPool throws a "Failed to allocate memory within the
> configured max blocking time" exception, it should also remove the condition
> object from the waiters deque. Otherwise the condition object stays
> forever in the deque.
>
> (i.e) "this.waiters.remove(moreMemory);" should happen before the
> exception is thrown.
>
> Otherwise the waiting thread count will never get to 0 after the
> exception and batching will not occur. This is because in the
> RecordAccumulator.ready method the exhausted flag is set as
>
> boolean exhausted = this.free.queued() > 0, where free.queued() returns
> waiters.size().
>
> I reported an issue with the producer on this thread
> http://mail-archives.apache.org/mod_mbox/kafka-users/201605.mbox/browser
>
> and this was because of the above issue.
>
>
> Thanks
>


batching related issue in 0.9.0 producer

2016-05-03 Thread Prabhu V
Whenever the BufferPool throws a "Failed to allocate memory within the
configured max blocking time" exception, it should also remove the condition
object from the waiters deque. Otherwise the condition object stays
forever in the deque.

(i.e) "this.waiters.remove(moreMemory);" should happen before the exception
is thrown.

Otherwise the waiting thread count will never get to 0 after the exception
and batching will not occur. This is because in the RecordAccumulator.ready
method the exhausted flag is set as

boolean exhausted = this.free.queued() > 0, where free.queued() returns
waiters.size().
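
For illustration, a simplified sketch of the suggested ordering inside
BufferPool.allocate() (variable names follow the post; this is not the exact
upstream code):

// On timeout inside the wait loop of BufferPool.allocate():
if (waitingTimeElapsed) {
  // Remove this thread's Condition *before* throwing, so that free.queued()
  // (i.e. waiters.size()) can drop back to 0 and batching can resume.
  this.waiters.remove(moreMemory);
  throw new TimeoutException(
      "Failed to allocate memory within the configured max blocking time");
}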

I reported an issue with the producer on this thread
http://mail-archives.apache.org/mod_mbox/kafka-users/201605.mbox/browser

and this was because of the above issue.


Thanks


kafka producer 0.9.0 - performance degradation

2016-05-02 Thread Prabhu V
We are writing messages at a rate of about 9000 records/sec into our Kafka
cluster. At times we see that producer performance degrades considerably and
then never recovers. When this happens we see the following error: "unable
to allocate buffer within timeout". The "waiting-threads" metric is very
high when the process degrades; any inputs would be appreciated.
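
For reference, a small sketch of how those metrics can be read from the Java
producer itself (the metric names are taken from the 0.9 producer's metrics
map; treat them as assumptions if your version differs):

import java.util.Map;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

static void logBufferMetrics(KafkaProducer<?, ?> producer) {
  for (Map.Entry<MetricName, ? extends Metric> entry : producer.metrics().entrySet()) {
    String name = entry.getKey().name();
    // waiting-threads: user threads blocked waiting for buffer memory
    // buffer-available-bytes: buffer memory not yet allocated to batches
    if (name.equals("waiting-threads") || name.equals("buffer-available-bytes")) {
      System.out.printf("%s = %f%n", name, entry.getValue().value());
    }
  }
}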

The producer parameters are

batch.size=100
linger.ms=3
acks=-1
metadata.fetch.timeout.ms=1000
compression.type=none
max.request.size=1000

Although the buffer is fully available, the errors are
"org.apache.kafka.common.errors.TimeoutException: Failed to allocate memory
within the configured max blocking time".

Below is the JMX screenshot URL for when the producer is running degraded
vs running OK.

http://i.stack.imgur.com/UIKXa.png

The batch size is 1,000,000. The issue is the same when the batch size is
dropped to 500,000.

I have this question on stack overflow
http://stackoverflow.com/questions/36961677/kafka-producer-0-9-0-performance-large-number-of-waiting-threads/36964792#36964792


Thanks much


New to Kafka

2016-03-11 Thread prabhu v
Hi,

Can anyone please help me with the video presentations from Kafka experts?

It seems the link provided on the Kafka home page
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+papers+and+presentations
is outdated.

Thanks in advance..


Re: Mirror maker Configs 0.9.0

2016-03-09 Thread prabhu v
Thanks for the reply.

I will remove the bootstrap.servers property, add zookeeper.connect to the
consumer properties, and let you know.

Also, is there any way we can check how much the target data center is
lagging behind the source DC?


On Wed, Mar 9, 2016 at 3:41 PM, Gerard Klijs <gerard.kl...@dizzit.com>
wrote:

> What do you see in the logs?
> It could be that it goes wrong because you have the bootstrap.servers
> property, which is not supported for the old consumer.
>
> On Wed, Mar 9, 2016 at 11:05 AM Gerard Klijs <gerard.kl...@dizzit.com>
> wrote:
>
> > Don't know the actual question; it matters what you want to do.
> > Just watch out trying to copy every topic using a new consumer, because
> > then internal topics are copied, leading to errors.
> > Here is a template start script we used:
> >
> > #!/usr/bin/env bash
> > export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname= -Dcom.sun.management.jmxremote.rmi.port="
> > export JMX_PORT=
> > /usr/bin/kafka-mirror-maker --consumer.config $HOME_DIR/consumer.properties --producer.config $HOME_DIR/producer.properties --whitelist='' 1>> $LOG_DIR/mirror-maker.log 2>> $LOG_DIR/mirror-maker.log
> >
> > Both the consumer and producer configs have sensible defaults; this is
> > our consumer.properties template:
> >
> > #Consumer template to be used with the mirror maker
> > zookeeper.connect=
> > group.id=mirrormaker
> > auto.offset.reset=smallest
> > #next property is not available in new consumer
> > exclude.internal.topics=true
> >
> > *And a producer.properties template:*
> >
> > #Producer template to be used with the mirror maker
> > bootstrap.servers=
> > client.id=mirrormaker
> >
> > Because the internal topics can't be excluded in the new consumer yet,
> > we use the old consumer.
> >
> > Hope this helps.
> >
> >
> > On Wed, Mar 9, 2016 at 10:57 AM prabhu v <prabhuvrajp...@gmail.com>
> wrote:
> >
> >> Hi Experts,
> >>
> >> I am trying to mirror
> >>
> >>
> >>
> >>
> >> --
> >> Regards,
> >>
> >> Prabhu.V
> >>
> >
>



-- 
Regards,

Prabhu.V


Re: Mirror maker Configs 0.9.0

2016-03-09 Thread prabhu v
Hi Experts,

I am trying to replicate data between different data centers using mirror
maker tool.

kafka-run-class.bat kafka.tools.MirrorMaker --consumer.config
consumer.properties --producer.config producer.properties --whitelist *

Can someone provide the sample consumer.properties and producer.properties
that we should use for the above?

I have tried using the below configs; it seems I am missing something.

Producer configs:

bootstrap.servers=us1s-cspapsv15:9097
producer.type = async
connect.timeout.ms = 1
request.required.acks = 0
zookeeper.connect = us1s-cspapsv15:2181
serializer.class = kafka.serializer.DefaultEncoder

Consumer configs:

bootstrap.servers=c015cjqcmap01:9095
zookeeper.connect=c015cjqcmap01:2181
group.id=test-consumer-group
zookeeper.sync.time.ms = 2000
zookeeper.session.timeout.ms = 2000
zookeeper.connection.timeout.ms = 6000



Also, to check the consumer position, I tried using
kafka.tools.ConsumerOffsetChecker, but it seems it is no longer supported in
the 0.9 release. Can someone tell me how to check the consumer position in
the 0.9 release?

Thanks,
Prabhu


On Wed, Mar 9, 2016 at 3:26 PM, prabhu v <prabhuvrajp...@gmail.com> wrote:

> Hi Experts,
>
> I am trying to mirror
>
>
>
>
> --
> Regards,
>
> Prabhu.V
>
>



-- 
Regards,

Prabhu.V


Mirror maker Configs 0.9.0

2016-03-09 Thread prabhu v
Hi Experts,

I am trying to mirror




-- 
Regards,

Prabhu.V


Re: Consumer - Failed to find leader

2016-01-20 Thread prabhu v
Hi Harsha/Ismael,

Any suggestions or inputs on the above issue?

When I run the producer client, I still get this error:

./kafka-console-producer.sh --broker-list hostname:9094 --topic topic3


[2016-01-05 10:16:20,272] ERROR Error when sending message to topic test
with key: null, value: 5 bytes with error: Failed to update metadata after
6 ms.
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

Also, I can see the below error in controller.log:

[2016-01-20 09:39:33,408] DEBUG [Controller 0]: preferred replicas by
broker Map(1 -> Map([topic3,0] -> List(1, 0)), 0 -> Map([topic3,1] ->
List(0, 1), [topic2,0] -> List(0), [topic1,0] -> List(0)))
(kafka.controller.KafkaController)
[2016-01-20 09:39:33,408] DEBUG [Controller 0]: topics not in preferred
replica Map() (kafka.controller.KafkaController)
[2016-01-20 09:39:33,408] TRACE [Controller 0]: leader imbalance ratio for
broker 1 is 0.00 (kafka.controller.KafkaController)
[2016-01-20 09:39:33,408] DEBUG [Controller 0]: topics not in preferred
replica Map() (kafka.controller.KafkaController)
[2016-01-20 09:39:33,409] TRACE [Controller 0]: leader imbalance ratio for
broker 0 is 0.00 (kafka.controller.KafkaController)


Tried reinstalling Kafka, but no luck. :(


I also checked with telnet; I am able to connect to that port.
[root@blrd-cmgvapp46 logs]# telnet hostname 9094
Trying 172.31.31.186...
Connected to hostname (172.31.31.186).
Escape character is '^]'.

I can see the topic is created properly.

[root@hostname bin]# ./kafka-topics.sh --describe --zookeeper hostname:2181 --topic topic3
Topic:topic3    PartitionCount:2    ReplicationFactor:2    Configs:
    Topic: topic3    Partition: 0    Leader: 1    Replicas: 1,0    Isr: 1,0
    Topic: topic3    Partition: 1    Leader: 0    Replicas: 0,1    Isr: 0,1


Thanks in advance,


On Tue, Jan 5, 2016 at 3:17 PM, prabhu v <prabhuvrajp...@gmail.com> wrote:

> Hi Harsha,
>
> This is my Kafka_server_jaas.config file. This is passed as JVM param to
> the Kafka broker while start up.
>
> =
> KafkaServer {
>     com.sun.security.auth.module.Krb5LoginModule required
>     useKeyTab=true
>     storeKey=true
>     serviceName="kafka"
>     keyTab="/etc/security/keytabs/kafka1.keytab"
>     useTicketCache=true
>     principal="kafka/hostname@realmname";
> };
>
> zkclient {
>     com.sun.security.auth.module.Krb5LoginModule required
>     useKeyTab=true
>     storeKey=true
>     serviceName="zookeeper"
>     keyTab="/etc/security/keytabs/kafka1.keytab"
>     useTicketCache=true
>     principal="kafka@realmname";
> };
> =
>
> Note: For security reasons, changed my original FQDN to hostname and
> original realm name to realm name in the below output.
>
> I am able to view the ticket using klist command as well. Please find
> below output.
>
> [root@localhost config]# kinit -k -t /etc/security/keytabs/kafka1.keytab
> kafka/hostname@realmname
> [root@localhost config]# klist
> Ticket cache: FILE:/tmp/krb5cc_0
> Default principal: kafka/hostname@realmname
>
> Valid starting       Expires              Service principal
> 01/05/16 08:14:28    01/06/16 08:14:28    krbtgt/realm@realm
>         renew until 01/05/16 08:14:28
>
> For(topics,producer and consumer) clients, I am using the below JAAS
> Config:
>
> =
>
> Client {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> keyTab="/etc/security/keytabs/kafka_client.keytab"
> storeKey=true
> useTicketCache=true
> serviceName="kafka"
> principal="kafkaclient/hostname@realmname";
> };
>
> =
>
> I am able to view the ticket using klist command as well. Please find
> below output.
>
> [root@localhost config]# kinit -k -t
> /etc/security/keytabs/kafka_client.keytab kafkaclient/hostname@realmname
> [root@localhost config]# klist
> Ticket cache: FILE:/tmp/krb5cc_0
> Default principal: kafkaclient/hostname@realmname
>
> Valid starting       Expires              Service principal
> 01/05/16 08:14:28    01/06/16 08:14:28    krbtgt/realm@realm
>         renew until 01/05/16 08:14:28
>
> Error when running producer client:
>
> ./kafka-console-producer.sh --broker-list hostname:9095 --topic test
>
>
> [2016-01-05 10:16:20,272] ERROR Error when sending message to topic test
> with key: null, value: 5 bytes with error: Failed to update metadata after
> 6 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>
> Error when running topics.sh:
>
> [root@localhost bin]# ./kafka-topics.sh --list --zookeeper hostname:2181
> [2015-12-28 12:41:32,589] WARN S

Re: Consumer - Failed to find leader

2016-01-05 Thread prabhu v
Hi Harsha,

This is my Kafka_server_jaas.config file. It is passed as a JVM param to
the Kafka broker at startup.

=
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    serviceName="kafka"
    keyTab="/etc/security/keytabs/kafka1.keytab"
    useTicketCache=true
    principal="kafka/hostname@realmname";
};

zkclient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    serviceName="zookeeper"
    keyTab="/etc/security/keytabs/kafka1.keytab"
    useTicketCache=true
    principal="kafka@realmname";
};
=

Note: for security reasons, I changed my original FQDN to "hostname" and my
original realm name to "realmname" in the output below.

I am able to view the ticket using the klist command as well. Please find
the output below.

[root@localhost config]# kinit -k -t /etc/security/keytabs/kafka1.keytab
kafka/hostname@realmname
[root@localhost config]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: kafka/hostname@realmname

Valid starting       Expires              Service principal
01/05/16 08:14:28    01/06/16 08:14:28    krbtgt/realm@realm
        renew until 01/05/16 08:14:28

For the clients (topics, producer, and consumer), I am using the below JAAS config:

=

Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/kafka_client.keytab"
storeKey=true
useTicketCache=true
serviceName="kafka"
principal="kafkaclient/hostname@realmname";
};

=

I am able to view the ticket using the klist command as well. Please find
the output below.

[root@localhost config]# kinit -k -t
/etc/security/keytabs/kafka_client.keytab kafkaclient/hostname@realmname
[root@localhost config]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: kafkaclient/hostname@realmname

Valid starting       Expires              Service principal
01/05/16 08:14:28    01/06/16 08:14:28    krbtgt/realm@realm
        renew until 01/05/16 08:14:28

Error when running producer client:

./kafka-console-producer.sh --broker-list hostname:9095 --topic test


[2016-01-05 10:16:20,272] ERROR Error when sending message to topic test
with key: null, value: 5 bytes with error: Failed to update metadata after
6 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

Error when running topics.sh:

[root@localhost bin]# ./kafka-topics.sh --list --zookeeper hostname:2181
[2015-12-28 12:41:32,589] WARN SASL configuration failed:
javax.security.auth.login.LoginException: No key to store Will continue
connection to Zookeeper server without SASL authentication, if Zookeeper
server allows it. (org.apache.zookeeper.ClientCnxn)
^Z

Please let me know if I am missing anything.




Thanks,
Prabhu




On Wed, Dec 30, 2015 at 9:28 PM, Harsha <ka...@harsha.io> wrote:

> Can you add your jaas file details? Your jaas file might have
> useTicketCache=true and storeKey=true as well.
> Example of a KafkaServer jaas file:
>
> KafkaServer {
>
> com.sun.security.auth.module.Krb5LoginModule required
>
> useKeyTab=true
>
> storeKey=true
>
> serviceName="kafka"
>
> keyTab="/vagrant/keytabs/kafka1.keytab"
>
> principal="kafka/kafka1.witzend@witzend.com";
> };
>
> and KafkaClient
> KafkaClient {
>
> com.sun.security.auth.module.Krb5LoginModule required
>
> useTicketCache=true
>
> serviceName="kafka";
>
> };
>
> On Wed, Dec 30, 2015, at 03:10 AM, prabhu v wrote:
>
> Hi Harsha,
>
> I have used the Fully qualified domain name. Just for security concerns,
> Before sending this mail,i have replaced our FQDN hostname to localhost.
>
> yes, i have tried KINIT and I am able to view the tickets using klist
> command as well.
>
> Thanks,
> Prabhu
>
> On Wed, Dec 30, 2015 at 11:27 AM, Harsha <ka...@harsha.io> wrote:
>
> Prabhu,
>    When using SASL/Kerberos, always make sure you give the FQDN of
>    the hostname. In your command you are using --zookeeper
>    localhost:2181; make sure you change that hostname.
>
> "avax.security.auth.login.LoginException: No key to store Will continue
> > connection to Zookeeper server without SASL authentication, if Zookeeper"
>
> did you try  kinit with that keytab at the command line.
>
> -Harsha
> On Mon, Dec 28, 2015, at 04:07 AM, prabhu v wrote:
> > Thanks for the input Ismael.
> >
> > I will try and let you know.
> >
> > Also need your valuable inputs for the below issue:)
> >
> > i am not able to run kafka-topics.sh(0.9.0.0 version)
> >
> > [root@localhost bin]# ./kafka-topics.sh --list --zookeeper
> localhost:2181
> &

Re: Consumer - Failed to find leader

2015-12-30 Thread prabhu v
Hi Harsha,

I have used the fully qualified domain name. Just for security concerns,
before sending this mail I replaced our FQDN hostname with "localhost".

Yes, I have tried kinit, and I am able to view the tickets using the klist
command as well.

Thanks,
Prabhu

On Wed, Dec 30, 2015 at 11:27 AM, Harsha <ka...@harsha.io> wrote:

> Prabhu,
>    When using SASL/Kerberos, always make sure you give the FQDN of
>    the hostname. In your command you are using --zookeeper
>    localhost:2181; make sure you change that hostname.
>
> "avax.security.auth.login.LoginException: No key to store Will continue
> > connection to Zookeeper server without SASL authentication, if Zookeeper"
>
> did you try  kinit with that keytab at the command line.
>
> -Harsha
> On Mon, Dec 28, 2015, at 04:07 AM, prabhu v wrote:
> > Thanks for the input Ismael.
> >
> > I will try and let you know.
> >
> > Also need your valuable inputs for the below issue:)
> >
> > i am not able to run kafka-topics.sh(0.9.0.0 version)
> >
> > [root@localhost bin]# ./kafka-topics.sh --list --zookeeper
> localhost:2181
> > [2015-12-28 12:41:32,589] WARN SASL configuration failed:
> > javax.security.auth.login.LoginException: No key to store Will continue
> > connection to Zookeeper server without SASL authentication, if Zookeeper
> > server allows it. (org.apache.zookeeper.ClientCnxn)
> > ^Z
> >
> > I am sure the key is present in its keytab file ( I have cross verified
> > using kinit command as well).
> >
> > Am i missing anything while calling the kafka-topics.sh??
> >
> >
> >
> > On Mon, Dec 28, 2015 at 3:53 PM, Ismael Juma <isma...@gmail.com> wrote:
> >
> > > Hi Prabhu,
> > >
> > > kafka-console-consumer.sh uses the old consumer by default, but only
> the
> > > new consumer supports security. Use --new-consumer to change this.
> > >
> > > Hope this helps.
> > >
> > > Ismael
> > > On 28 Dec 2015 05:48, "prabhu v" <prabhuvrajp...@gmail.com> wrote:
> > >
> > > > Hi Experts,
> > > >
> > > > I am getting the below error when running the consumer
> > > > "kafka-console-consumer.sh" .
> > > >
> > > > I am using the new version 0.9.0.1.
> > > > Topic name: test
> > > >
> > > >
> > > > [2015-12-28 06:13:34,409] WARN
> > > >
> > > >
> > >
> [console-consumer-61657_localhost-1451283204993-5512891d-leader-finder-thread],
> > > > Failed to find leader for Set([test,0])
> > > > (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
> > > > kafka.common.BrokerEndPointNotAvailableException: End point
> PLAINTEXT not
> > > > found for broker 0
> > > > at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:136)
> > > >
> > > >
> > > > Please find the current configuration below.
> > > >
> > > > Configuration:
> > > >
> > > >
> > > > [root@localhost config]# grep -v "^#" consumer.properties
> > > > zookeeper.connect=localhost:2181
> > > > zookeeper.connection.timeout.ms=6
> > > > group.id=test-consumer-group
> > > > security.protocol=SASL_PLAINTEXT
> > > > sasl.kerberos.service.name="kafka"
> > > >
> > > >
> > > > [root@localhost config]# grep -v "^#" producer.properties
> > > > metadata.broker.list=localhost:9094,localhost:9095
> > > > producer.type=sync
> > > > compression.codec=none
> > > > serializer.class=kafka.serializer.DefaultEncoder
> > > > security.protocol=SASL_PLAINTEXT
> > > > sasl.kerberos.service.name="kafka"
> > > >
> > > > [root@localhost config]# grep -v "^#" server1.properties
> > > >
> > > > broker.id=0
> > > > listeners=SASL_PLAINTEXT://localhost:9094
> > > > delete.topic.enable=true
> > > > num.network.threads=3
> > > > num.io.threads=8
> > > > socket.send.buffer.bytes=102400
> > > > socket.receive.buffer.bytes=102400
> > > > socket.request.max.bytes=104857600
> > > > log.dirs=/data/kafka_2.11-0.9.0.0/kafka-logs
> > > > num.partitions=1
> > > > num.recovery.threads.per.data.dir=1
> > > > log.retention.hours=168
> > > > log.segment.bytes=1073741824
> > > > log.reten

Re: Consumer - Failed to find leader

2015-12-28 Thread prabhu v
Thanks for the input, Ismael.

I will try and let you know.

Also, I need your valuable inputs on the below issue. :)

I am not able to run kafka-topics.sh (0.9.0.0 version).

[root@localhost bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
[2015-12-28 12:41:32,589] WARN SASL configuration failed:
javax.security.auth.login.LoginException: No key to store Will continue
connection to Zookeeper server without SASL authentication, if Zookeeper
server allows it. (org.apache.zookeeper.ClientCnxn)
^Z

I am sure the key is present in its keytab file (I have cross-verified
using the kinit command as well).

Am I missing anything while calling kafka-topics.sh?



On Mon, Dec 28, 2015 at 3:53 PM, Ismael Juma <isma...@gmail.com> wrote:

> Hi Prabhu,
>
> kafka-console-consumer.sh uses the old consumer by default, but only the
> new consumer supports security. Use --new-consumer to change this.
>
> Hope this helps.
>
> Ismael
> On 28 Dec 2015 05:48, "prabhu v" <prabhuvrajp...@gmail.com> wrote:
>
> > Hi Experts,
> >
> > I am getting the below error when running the consumer
> > "kafka-console-consumer.sh" .
> >
> > I am using the new version 0.9.0.1.
> > Topic name: test
> >
> >
> > [2015-12-28 06:13:34,409] WARN
> >
> >
> [console-consumer-61657_localhost-1451283204993-5512891d-leader-finder-thread],
> > Failed to find leader for Set([test,0])
> > (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
> > kafka.common.BrokerEndPointNotAvailableException: End point PLAINTEXT not
> > found for broker 0
> > at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:136)
> >
> >
> > Please find the current configuration below.
> >
> > Configuration:
> >
> >
> > [root@localhost config]# grep -v "^#" consumer.properties
> > zookeeper.connect=localhost:2181
> > zookeeper.connection.timeout.ms=6
> > group.id=test-consumer-group
> > security.protocol=SASL_PLAINTEXT
> > sasl.kerberos.service.name="kafka"
> >
> >
> > [root@localhost config]# grep -v "^#" producer.properties
> > metadata.broker.list=localhost:9094,localhost:9095
> > producer.type=sync
> > compression.codec=none
> > serializer.class=kafka.serializer.DefaultEncoder
> > security.protocol=SASL_PLAINTEXT
> > sasl.kerberos.service.name="kafka"
> >
> > [root@localhost config]# grep -v "^#" server1.properties
> >
> > broker.id=0
> > listeners=SASL_PLAINTEXT://localhost:9094
> > delete.topic.enable=true
> > num.network.threads=3
> > num.io.threads=8
> > socket.send.buffer.bytes=102400
> > socket.receive.buffer.bytes=102400
> > socket.request.max.bytes=104857600
> > log.dirs=/data/kafka_2.11-0.9.0.0/kafka-logs
> > num.partitions=1
> > num.recovery.threads.per.data.dir=1
> > log.retention.hours=168
> > log.segment.bytes=1073741824
> > log.retention.check.interval.ms=30
> > log.cleaner.enable=false
> > zookeeper.connect=localhost:2181
> > zookeeper.connection.timeout.ms=6
> > inter.broker.protocol.version=0.9.0.0
> > security.inter.broker.protocol=SASL_PLAINTEXT
> > allow.everyone.if.no.acl.found=true
> >
> >
> > [root@localhost config]# grep -v "^#" server4.properties
> > broker.id=1
> > listeners=SASL_PLAINTEXT://localhost:9095
> > delete.topic.enable=true
> > num.network.threads=3
> > num.io.threads=8
> > socket.send.buffer.bytes=102400
> > socket.receive.buffer.bytes=102400
> > socket.request.max.bytes=104857600
> > log.dirs=/data/kafka_2.11-0.9.0.0/kafka-logs-1
> > num.partitions=1
> > num.recovery.threads.per.data.dir=1
> > log.retention.hours=168
> > log.segment.bytes=1073741824
> > log.retention.check.interval.ms=30
> > log.cleaner.enable=false
> > zookeeper.connect=localhost:2181
> > zookeeper.connection.timeout.ms=6
> > inter.broker.protocol.version=0.9.0.0
> > security.inter.broker.protocol=SASL_PLAINTEXT
> > zookeeper.sasl.client=zkclient
> >
> > [root@localhost config]# grep -v "^#" zookeeper.properties
> > dataDir=/data/kafka_2.11-0.9.0.0/zookeeper
> > clientPort=2181
> > maxClientCnxns=0
> > requireClientAuthScheme=sasl
> >
> authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
> > jaasLoginRenew=360
> >
> >
> > Need your valuable inputs on this issue.
> > --
> > Regards,
> >
> > Prabhu.V
> >
>



-- 
Regards,

Prabhu.V


Consumer - Failed to find leader

2015-12-27 Thread prabhu v
Hi Experts,

I am getting the below error when running the consumer
"kafka-console-consumer.sh".

I am using the new version 0.9.0.1.
Topic name: test


[2015-12-28 06:13:34,409] WARN
[console-consumer-61657_localhost-1451283204993-5512891d-leader-finder-thread],
Failed to find leader for Set([test,0])
(kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
kafka.common.BrokerEndPointNotAvailableException: End point PLAINTEXT not
found for broker 0
at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:136)


Please find the current configuration below.

Configuration:


[root@localhost config]# grep -v "^#" consumer.properties
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6
group.id=test-consumer-group
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name="kafka"


[root@localhost config]# grep -v "^#" producer.properties
metadata.broker.list=localhost:9094,localhost:9095
producer.type=sync
compression.codec=none
serializer.class=kafka.serializer.DefaultEncoder
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name="kafka"

[root@localhost config]# grep -v "^#" server1.properties

broker.id=0
listeners=SASL_PLAINTEXT://localhost:9094
delete.topic.enable=true
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka_2.11-0.9.0.0/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=30
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6
inter.broker.protocol.version=0.9.0.0
security.inter.broker.protocol=SASL_PLAINTEXT
allow.everyone.if.no.acl.found=true


[root@localhost config]# grep -v "^#" server4.properties
broker.id=1
listeners=SASL_PLAINTEXT://localhost:9095
delete.topic.enable=true
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka_2.11-0.9.0.0/kafka-logs-1
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=30
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6
inter.broker.protocol.version=0.9.0.0
security.inter.broker.protocol=SASL_PLAINTEXT
zookeeper.sasl.client=zkclient

[root@localhost config]# grep -v "^#" zookeeper.properties
dataDir=/data/kafka_2.11-0.9.0.0/zookeeper
clientPort=2181
maxClientCnxns=0
requireClientAuthScheme=sasl
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=360


Need your valuable inputs on this issue.
-- 
Regards,

Prabhu.V


Kafka User Group meeting Link

2015-12-17 Thread prabhu v
Hi,

Can anyone provide me the links for the Kafka user group meetings which
happened on Jun. 14, 2012 and June 3, 2014?

The link provided in the below wiki page is not a valid one:

https://cwiki.apache.org/confluence/display/KAFKA/Kafka+papers+and+presentations




-- 
Regards,

Prabhu.V


Re: Kafka User Group meeting Link

2015-12-17 Thread prabhu v
I am based in India.

I am looking for the below user group meetings. I am able to access the
2nd Kafka user group meeting, but not the 1st and 3rd.

User group meetings:

   - 1st Kafka user group meeting at LinkedIn, Jun. 14, 2012: video (part 1)
     <http://www.ustream.tv/recorded/23319178>, video (part 2)
     <http://www.ustream.tv/recorded/23321300>
   - 2nd Kafka user group meeting at LinkedIn, Jun 27, 2013: video
     <http://www.youtube.com/watch?v=njuz0zSBvnc>
   - 3rd Kafka user group meeting at LinkedIn, June 3, 2014: video
     <http://www.ustream.tv/recorded/48396701>

<https://cwiki.apache.org/confluence/display/KAFKA/Kafka+papers+and+presentations>

Regards,
Prabhu

On Thu, Dec 17, 2015 at 11:58 PM, Jens Rantil <jens.ran...@tink.se> wrote:

> Hi,
>
> In which part of the world?
>
> Cheers,
>
> Jens
>
> –
> Sent from Mailbox
>
> On Thu, Dec 17, 2015 at 8:23 AM, prabhu v <prabhuvrajp...@gmail.com>
> wrote:
>
> > Hi,
> > Can anyone provide me the link for the KAFKA USER Group meetings which
> > happened on Jun. 14, 2012 and June 3, 2014??
> > Link provided in the below wiki page is not a valid one..
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+papers+and+presentations
> > --
> > Regards,
> > Prabhu.V
> > --
> > Regards,
> > Prabhu.V
>



-- 
Regards,

Prabhu.V