Closing socket for 1.2.4.5 because of error (kafka.network.Processor)

2015-12-21 Thread Arathi Maddula

Hi,

I found this error in my kafka.out log. How can I find out what is causing
it?
Kafka jar: kafka_2.10-0.8.2.2.3.2.0-2950.jar


Closing socket for /1.2.3.4 because of error (kafka.network.Processor)
java.lang.NullPointerException
at kafka.network.Processor.run(SocketServer.scala:404)
at java.lang.Thread.run(Thread.java:745)

Thanks,
Arathi



Change kafka broker ids dynamically

2015-11-06 Thread Arathi Maddula
Hi,
Is it possible to change the broker.id property for a node belonging to a
Kafka cluster? For example, suppose I currently have brokers with ids 1, 2
and 3. If I want to stop broker 1, can I change broker.id from 1 to 0 in
its server.properties and meta.properties files and then restart it? Can I
repeat this for brokers 2 and 3 as well?
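
For clarity, the change I have in mind looks like this on broker 1 (new id
shown; as far as I understand, both files have to agree or the broker
refuses to start with a broker.id mismatch error):

# server.properties on the stopped broker (old broker.id=1)
broker.id=0

# meta.properties in each configured log directory
version=0
broker.id=0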

Thanks,
Arathi



Regarding question Kafka metrics. Issue with unclean leader election rate

2014-01-23 Thread Arathi Maddula
This is with regard to the question "Kafka metrics. Issue with unclean leader election rate"
(http://www.marshut.com/inkitk/kafka-metrics-issue-with-unclean-leader-election-rate.html)

We use Kafka 0.8.0


Kafka metrics. Issue with unclean leader election rate

2014-01-23 Thread Arathi Maddula
Yes, we use the 0.8.0 release.


Default compression codec in Kafka topics

2013-11-08 Thread arathi maddula
Hi,

We have a cluster of Kafka servers. We want the data of all topics on
these servers to be compressed. Is there a configuration setting to achieve
this? I was able to compress data by using the compression.codec property
in the ProducerConfig of my Kafka producer.
But I wanted to know if there is a way of enabling topic compression
without modifying anything in the ProducerConfig properties. That is, can I
add compression.codec=snappy in server.properties and have all topics' data
compressed? Is there anything I can do during topic creation with regard to
compression?
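
From what I can tell, in 0.8-era Kafka compression is chosen by the
producer only, and a broker-side default appeared only in later releases.
A sketch of both places (the server-side setting assumes a newer broker;
all values are illustrative):

# server.properties -- broker-side default, later releases only
compression.type=snappy

# per-topic override at creation time, also a later-release feature
./kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --topic mytopic \
  --partitions 1 --replication-factor 1 --config compression.type=snappy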


Thanks
Arathi


Messages TTL setting

2013-07-22 Thread arathi maddula
Hi,

We have a 3-node Kafka cluster. We want to increase the maximum amount of
time for which messages are kept in the Kafka data logs.
Can we change the configuration on one node, restart it, and then move on
to the next node? Or should we stop all 3 nodes at once, make the
configuration changes, and then restart all 3? Please advise.
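
For reference, the setting I mean is the log retention time in
server.properties; a sketch of the change, assuming a rolling restart (one
broker at a time) is acceptable for the cluster:

# server.properties on each broker, changed one node at a time
# (value illustrative: keep messages for 7 days)
log.retention.hours=168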

Thanks,
Arathi


Kafka compression issues

2013-07-09 Thread arathi maddula
Hi,



I use Kafka 0.8. When I run the Kafka console producer using

 ./kafka-console-producer.sh --topic test.compress.e --compress true
 --broker-list 127.0.0.1:9092

I am able to see compressed messages in the log.

But when I run a Java producer class using the following properties, no
messages get into the data log for that topic.
Please tell me what I need to do to get compressed messages into the log
using the Java producer.
This is the code snippet:





import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

// Producer configuration: string messages, gzip compression enabled for
// the topic test.compress22
Properties props = new Properties();
props.put("broker.list", "127.0.0.1:9092");
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("compress", "true");
props.put("compression.codec", "gzip");
props.put("compressed.topics", "test.compress22");

ProducerConfig config = new ProducerConfig(props);
Producer<String, String> producer = new Producer<String, String>(config);

String KafkaTopic = "test.compress22";
String strLine;

try {
    // Send two test messages to the compressed topic
    for (int i = 1; i <= 2; i++) {
        strLine = "Message " + i;
        KeyedMessage<String, String> s =
            new KeyedMessage<String, String>(KafkaTopic, null, strLine);
        producer.send(s);
    }
    producer.close();
} catch (Exception e) {
    e.printStackTrace();
}


 Thanks
Arathi


High level zookeeper consumer life

2013-05-31 Thread arathi maddula
Hi,
I use a high-level consumer inside a servlet which reads from a Kafka
stream. Every time I send an HTTP request, I use a different group ID. This
results in lots of consumers on the ZooKeeper node (ls /consumers on the
ZooKeeper client shows the list of all consumers).
Please help me clarify these:

1) Is there a time after which ZooKeeper auto-deletes these consumers? If
yes, can that be configured?
2) If I reuse a group ID later on, will an existing consumer be used? (A
minimal sketch of what I mean by reuse follows this list.)
3) Can there be issues if the number of group IDs/consumers (due to
simultaneous or separate hits) for a topic exceeds the total number of
partitions for the topic?
4) Sometimes I get a ConsumerRebalanceException. Could this be due to too
many consumers? Can this exception occur if two servlet users use the same
group ID/consumer for the same topic simultaneously?
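
Here is that sketch, using the 0.8 high-level consumer API; the property
values and group name are illustrative:

import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

// One stable group ID shared by all requests, instead of a fresh group
// per HTTP hit. Connection values are illustrative.
Properties props = new Properties();
props.put("zookeeper.connect", "127.0.0.1:2181");
props.put("group.id", "servlet-consumers");
props.put("auto.commit.enable", "true");
ConsumerConnector consumer =
    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));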

Thanks,
Arathi


Issue during commitOffsets using SimpleConsumer

2013-05-30 Thread arathi maddula
Hi,
I get the following error when running SimpleConsumer.commitOffsets().
Could you tell me what the issue is?



java.io.EOFException: Received -1 when reading from channel, socket has
likely been closed.
    at kafka.utils.Utils$.read(Utils.scala:375)
    at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
    at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
    at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
    at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100)
    at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:83)
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:73)
    at kafka.consumer.SimpleConsumer.commitOffsets(SimpleConsumer.scala:134)
    at kafka.javaapi.consumer.SimpleConsumer.commitOffsets(SimpleConsumer.scala:89)
    at KafkaConsumer2.main(KafkaConsumer2.java:226)
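
In case it helps: the EOF suggests the broker closed an idle connection, so
one workaround I am trying is to rebuild the consumer and retry the commit
once. A sketch, assuming the javaapi commitOffsets(OffsetCommitRequest)
variant; host, port, timeout and client id are illustrative:

import kafka.javaapi.OffsetCommitRequest;
import kafka.javaapi.consumer.SimpleConsumer;

// Retry the commit once on a fresh connection if the broker has closed
// the old socket. All constructor arguments are illustrative.
SimpleConsumer commitWithRetry(SimpleConsumer consumer,
                               OffsetCommitRequest request) {
    try {
        consumer.commitOffsets(request);
    } catch (Exception e) {
        consumer.close();
        consumer = new SimpleConsumer("127.0.0.1", 9092, 100000,
                                      64 * 1024, "commit-retry");
        consumer.commitOffsets(request);
    }
    return consumer;
}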

Thanks
Arathi


SimpleConsumer and message offsets

2013-05-29 Thread arathi maddula
Hi,
I am using a program which reads data from a stream using SimpleConsumer.
Is there a way for SimpleConsumer to remember the last offset read? I want
the program to continue from the last offset read when it is restarted.
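
What I have in mind is persisting the offset myself, for example to a small
local file that is read back at startup. A sketch (the file layout and the
use of MessageAndOffset.nextOffset() are my assumptions):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Save the next offset to consume in a local file so a restarted
// SimpleConsumer can resume where it left off. File naming is illustrative.
public class OffsetStore {
    private final Path path;

    public OffsetStore(String topic, int partition) {
        path = Paths.get(topic + "-" + partition + ".offset");
    }

    // Offset to resume from; 0 if nothing has been saved yet.
    public long load() throws IOException {
        if (!Files.exists(path)) return 0L;
        return Long.parseLong(new String(Files.readAllBytes(path)).trim());
    }

    // Call with messageAndOffset.nextOffset() after processing each message.
    public void save(long nextOffset) throws IOException {
        Files.write(path, Long.toString(nextOffset).getBytes());
    }
}
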
Thanks in advance.

-Arathi


Re: Offset in high level consumer

2013-05-23 Thread arathi maddula
Hi Neha,
Thanks for the quick reply. Could you tell me if there is some way of
determining the offset for a consumer from a high-level Java consumer
class, apart from the ConsumerOffsetChecker tool? That tool can only be run
from the command line. Is it possible to use it in a Java class?

I write streaming data from a website into a Kafka topic and then read the
same data using a servlet and serve it to a Java client. The problem is
that the servlet sometimes does not consume data from the topic as quickly
as it is produced. In order to warn the user that data is being lost, I
need some way of determining the consumer group offset from the servlet.

The servlet is a high-level Kafka consumer and it is sufficient for our
needs, so I don't want to switch to a SimpleConsumer.

Any pointers will be of great help.
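
One approach I am considering is reading the committed offset directly from
the ZooKeeper path that the 0.8 high-level consumer writes to
(/consumers/group/offsets/topic/partition). A sketch using the plain
ZooKeeper client; the connection values are illustrative:

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Read a group's committed offset for one partition straight from
// ZooKeeper, where the 0.8 high-level consumer stores it.
public class ZkOffsetReader {
    public static long readOffset(String zkConnect, String group,
                                  String topic, int partition) throws Exception {
        ZooKeeper zk = new ZooKeeper(zkConnect, 30000, new Watcher() {
            public void process(WatchedEvent event) { /* no-op */ }
        });
        try {
            String path = "/consumers/" + group + "/offsets/"
                          + topic + "/" + partition;
            byte[] data = zk.getData(path, false, null);
            return Long.parseLong(new String(data, "UTF-8"));
        } finally {
            zk.close();
        }
    }
}
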
Thanks
Arathi
On Wed, May 22, 2013 at 7:24 PM, Neha Narkhede neha.narkh...@gmail.com
wrote:

 You can run the ConsumerOffsetChecker tool that ships with Kafka.

 Thanks,
 Neha


 On Wed, May 22, 2013 at 2:02 PM, arathi maddula arathimadd...@gmail.com
 wrote:

  Hi,
 
  Could you tell me how to find the offset in a high-level Java consumer?
 
  Thanks
  Arathi