Re: Offset storage

2015-10-29 Thread pushkar priyadarshi
Storing offsets in Kafka frees ZooKeeper from offset-sync writes, so I
think it's the preferred option to use whenever possible.
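For reference, here is a minimal sketch of what that looks like with the 0.8.2
high-level consumer (property names are the 0.8.2 consumer configs; the
ZooKeeper address and group id are placeholders I made up):

    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class KafkaOffsetStorageExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181");  // placeholder
            props.put("group.id", "my-group");                  // placeholder
            props.put("offsets.storage", "kafka");              // commit offsets to Kafka, not ZooKeeper
            props.put("dual.commit.enabled", "false");          // true only while migrating old ZK offsets
            ConsumerConnector consumer =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            // ... create message streams and consume as usual ...
            consumer.shutdown();
        }
    }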

On Thursday, October 29, 2015, Mayuresh Gharat 
wrote:

> You can use either of them.
> The new kafka consumer (still under development) does not store offsets in
> zookeeper. It stores in kafka.
>
> Thanks,
>
> Mayuresh
>
> On Wed, Oct 28, 2015 at 7:26 AM, Burtsev, Kirill <
> kirill.burt...@cmegroup.com > wrote:
>
> > Which one is considered preferred offset storage: zookeeper or kafka? I
> > see that default for 0.8.2.2 high level consumer is zookeeper, but I saw
> a
> > few references about migrating offset storage to kafka.
> >
> > Thanks
> >
>
>
>
> --
> -Regards,
> Mayuresh R. Gharat
> (862) 250-7125
>


-- 
Sent from my iPhone


Re: Consumer of multiple topic

2015-10-23 Thread pushkar priyadarshi
Currently there is no partition-based subscription within a topic. So when
you subscribe to both topics your consumer will get data from every
partition in these two topics; I don't think you would be missing anything.
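If it helps, a rough sketch of what that looks like with the 0.8.x high-level
consumer (topic names are taken from the question; the ZooKeeper address,
group id and stream counts are illustrative):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class TwoTopicConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181");  // placeholder
            props.put("group.id", "event-sourcing-group");     // placeholder
            ConsumerConnector consumer =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
            topicCountMap.put("UserCreatedEvent", 1);       // one stream (thread) per topic
            topicCountMap.put("UserUpdateEmailEvent", 1);
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                consumer.createMessageStreams(topicCountMap);
            // each stream delivers messages from whichever partitions of that topic
            // the rebalance assigned to this consumer instance
        }
    }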



On Fri, Oct 23, 2015 at 11:35 AM, Fajar Maulana Firdaus 
wrote:

> I am using Kafka to implement event sourcing. Based on a thread in
> http://search-hadoop.com/kafka, I created separate topics for my
> events, e.g. UserCreatedEvent and UserUpdateEmailEvent. Both events
> use userId as the message key. However, assuming each topic has
> several partitions, how do I set up the consumer?
>
> My consumer listens to both events / topics. But I am not sure whether
> my consumer might miss the events for user A from one of the topics,
> because in the UserCreated topic key userA can end up in partition 1
> while in the UserUpdateEmail topic the same key can end up in a
> different partition.
>
> Or does it mean I need to make the consumer listen to all partitions?
> That approach is not scalable.
>
> Thank you,
> Fajar
>


Re: What happens when ISR is behind leader

2015-10-01 Thread pushkar priyadarshi
Hi,

There are two properties that determine when a replica falls out of sync:
look for replica.lag.time.max.ms and replica.lag.max.messages. If a replica
goes out of sync it will not even be considered for leader election.
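For reference, both settings live in the broker's server.properties; the
values below are, to the best of my knowledge, the 0.8.x defaults:

    # a follower drops out of the ISR if it has not sent a fetch request for this long
    replica.lag.time.max.ms=10000
    # ... or if it falls more than this many messages behind the leader
    replica.lag.max.messages=4000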

Regards,
Pushkar

On Wed, Sep 30, 2015 at 9:44 AM, Shushant Arora 
wrote:

> Hi
>
> I have a kafka cluster with 2 brokers and replication factor 2.
> Now say for partition P1 the leader broker B1 has offsets 1-10, and the ISR
> broker is behind the leader and has data only for offsets 1-5. Now broker B1
> goes down and kafka elects B2 as leader for partition P1. New writes for
> partition P1 will now happen on B2 - what will be the offset of the next
> message: will it start from (5+1)=6 or (10+1)=11?
>
> And if it starts from 11, will offsets 6-10 be missing?
>
> Thanks
>


Kafka BrokerTopicMetrics MessageInPerSec rate

2015-07-15 Thread pushkar priyadarshi
Hi,

While benchmarking the new producer, with the consumer syncing offsets in
ZooKeeper, I see that the message-in rate reported in BrokerTopicMetrics is
not the same as the rate at which I am able to publish and consume messages.

Using my own custom reporter I can see the rate at which messages are
published and consumed, and I expected the consume rate to be similar to (or
lower than) the rate reported by
kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec.
But most of the time I am seeing produce/consume rate = 2 *
MessagesInPerSec across the brokers.

I was wondering what exactly MessagesInPerSec on the broker means. When we
publish, we produce ProducerRecords; is MessagesInPerSec something different
from that rate?
I am using 3 brokers and a single topic with multiple partitions and
replication factor set to 3.
All the brokers are on 0.8.2.1.
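In case it is useful for comparing numbers, here is a rough sketch of reading
that meter over JMX. The MBean name is the one quoted above; the attribute
names (Count, OneMinuteRate) follow the usual Yammer Metrics JMX convention,
and the JMX port is an assumption (whatever JMX_PORT the broker was started
with):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class MessagesInRateDump {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi"); // assumes JMX_PORT=9999
            JMXConnector jmxc = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
                ObjectName name = new ObjectName(
                    "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
                System.out.println("count:           " + mbs.getAttribute(name, "Count"));
                System.out.println("one-minute rate: " + mbs.getAttribute(name, "OneMinuteRate"));
            } finally {
                jmxc.close();
            }
        }
    }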

Thanks


Re: Fetching details from Kafka Server

2015-07-13 Thread pushkar priyadarshi
2) You need to implement MetricReporter and provide that implementation's
class name in the producer-side configuration metric.reporters.
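For what it's worth, here is a minimal sketch of such a reporter against the
new-client interface, which as far as I know is
org.apache.kafka.common.metrics.MetricsReporter in 0.8.2 (the class name and
what it does with the metrics are made up for the example):

    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.common.metrics.KafkaMetric;
    import org.apache.kafka.common.metrics.MetricsReporter;

    public class LoggingMetricsReporter implements MetricsReporter {
        @Override
        public void configure(Map<String, ?> configs) { }

        @Override
        public void init(List<KafkaMetric> metrics) {
            // metrics that already exist when the producer is created
            for (KafkaMetric m : metrics)
                System.out.println("registered: " + m.metricName());
        }

        @Override
        public void metricChange(KafkaMetric metric) {
            // called whenever a metric is added or updated
            System.out.println("changed: " + metric.metricName());
        }

        @Override
        public void close() { }
    }

It is wired in on the producer with something like
props.put("metric.reporters", "LoggingMetricsReporter"); using the fully
qualified class name of your implementation.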

On Mon, Jul 13, 2015 at 9:08 PM, Swati Suman swatisuman1...@gmail.com
wrote:

 Hi Team,
 We are using Kafka 0.8.2

 I have two questions:

 1)Is there any Java Api in Kafka that gives me the list of all the consumer
 groups along with the topic/partition from which they are consuming
 Also, is there any way that I can fetch the zookeeper list from the kafka
 server side .
 Note: I am able to fetch the above information from the Zookeeper. But I
 want to fetch it from Kafka Server.

 2). I have implemented a Custom Metrics Reporter which is implementing
 KafkaMetricsReporter and KafkaMetricsMBeanReporter. So it is extracting all
 the Server Metrics as seen in page
 http://docs.confluent.io/1.0/kafka/monitoring.html and not the Producer
 and
 Consumer Metrics. Is there any way I can fetch them from the kafka server
 side or do the Producer/Consumer need to implement something to be able to
 fetch/emit them.

 I will be very thankful if you could share your thoughts on this.

 Thanks In Advance!!

 Best Regards,
 Swati Suman



Kafka New Producer setting acks=2 in 0.8.2.1

2015-05-14 Thread pushkar priyadarshi
Hi,

The documentation for the new producer allows passing acks=2 (or any other
numeric value), but when I actually pass anything other than 0, 1, or -1 I
see the following warning in the broker log:

Client producer-1 from /X.x.x.x:50105 sent a produce request with
request.required.acks of 2, which is now deprecated and will be removed in
next release. Valid values are -1, 0 or 1. Please consult Kafka
documentation for supported and recommended configuration

I have a particular use case where I want replication to be acknowledged by
exactly (replicationFactor - 1) brokers, or the message publish should fail
if that many acks are not possible.

regards


Re: Kafka New Producer setting acks=2 in 0.8.2.1

2015-05-14 Thread pushkar priyadarshi
Thanks Guozhang. It worked.
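For anyone finding this later, here is a rough sketch of the combination
Guozhang describes below, using the new (org.apache.kafka.clients.producer)
producer; the broker address, topic and serializers are placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class AllIsrAcksProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder
            props.put("acks", "-1");  // wait for all in-sync replicas
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
            Producer<byte[], byte[]> producer = new KafkaProducer<byte[], byte[]>(props);
            producer.send(new ProducerRecord<byte[], byte[]>("my-topic", "hello".getBytes()));
            producer.close();
        }
    }

The min.insync.replicas side is a topic/broker-level setting (it can be set
per topic, e.g. with kafka-topics.sh --alter --config min.insync.replicas=2,
if I remember correctly); only together with acks=-1 does the produce fail
when fewer replicas are in sync.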
On Thu, May 14, 2015 at 4:59 PM Guozhang Wang wangg...@gmail.com wrote:

 Hello,

 This behavior has been changed since 0.8.2.0, you can find the details in
 the following KIP discussion:


 https://cwiki.apache.org/confluence/display/KAFKA/KIP-1+-+Remove+support+of+request.required.acks

 And the related ticket is KAFKA-1555.

 For your use case you could set min.insync.replicas to replicationFactor -
 1, see detailed description of this config here:

 http://kafka.apache.org/documentation.html#newproducerconfigs

 Guozhang

 On Thu, May 14, 2015 at 1:40 PM, pushkar priyadarshi 
 priyadarshi.push...@gmail.com wrote:

  Hi,
 
  The documentation for new producer allows passing ack=2(or any other
  numeric value) but when i actually pass anything other than 0,1,-1 in
  broker log i see following warning.
 
  Client producer-1 from /X.x.x.x:50105 sent a produce request with
  request.required.acks of 2, which is now deprecated and will be removed
 in
  next release. Valid values are -1, 0 or 1. Please consult Kafka
  documentation for supported and recommended configuration
 
  I have a particular use case where i want replication to be acknowledged
 by
  exactly (replicationFactor -1 ) broker or message publish should fail if
  that many Acks are not possible.
 
  regards
 



 --
 -- Guozhang



Re: Kafka Zookeeper queries

2015-04-21 Thread pushkar priyadarshi
To my knowledge, if you are using 0.8.2.1 (the latest stable release) you can
sync your consumer offsets in Kafka itself instead of ZooKeeper, which
further brings down the write load on ZooKeeper.

Regards,
Pushkar

On Tue, Apr 21, 2015 at 1:13 PM, Jiangjie Qin j...@linkedin.com.invalid
wrote:

 2 partitions should be OK.

 On 4/21/15, 12:33 AM, Achanta Vamsi Subhash achanta.va...@flipkart.com
 wrote:

 We are planning to have ~2 partitions. Will it be a bottleneck?
 
 On Mon, Apr 20, 2015 at 10:48 PM, Jiangjie Qin j...@linkedin.com.invalid
 
 wrote:
 
   Producers usually do not query zookeeper at all.
   Consumers usually query zookeeper at startup or on rebalance. It is
   supposed to be infrequent if you don't have consumers come and go all the
   time. One exception is that if you are using zookeeper-based consumer
   offset commits, the consumer will commit offsets to zookeeper frequently.
   In Kafka, the most heavily used mechanism for zookeeper is the zookeeper
   listener, and listeners are not fired at a regular frequency.
  
   The limitation of Zookeeper usage for Kafka I am aware of is probably the
   size of each zNode. As long as you don't have so many partitions that the
   zNode cannot handle them, it should be fine.
 
  Thanks.
 
  Jiangjie (Becket) Qin
 
  On 4/20/15, 5:58 AM, Achanta Vamsi Subhash
 achanta.va...@flipkart.com
  wrote:
 
  Hi,
  
  Could anyone help with this?
  
  Thanks.
  
  On Sun, Apr 19, 2015 at 12:58 AM, Achanta Vamsi Subhash 
  achanta.va...@flipkart.com wrote:
  
   Hi,
  
   How often does Kafka query zookeeper while producing and consuming?
  
   Ex:
   If there is a single partition to which we produce and a HighLevel
   consumer running on it, how many read/write queries to zookeeper
 happen.
  
   Extending further, multiple topics with ~100 partitions each, how
 many
   zookeeper calls will be made (read/write).
  
   What is the max limit of no of partitions / kafka cluster that
 zookeeper
   can handle?
  
   --
   Regards
   Vamsi Subhash
  
  
  
  
  --
  Regards
  Vamsi Subhash
 
 
 
 
 --
 Regards
 Vamsi Subhash




Re: Warn No Checkpointed highwatermark is found for partition

2015-04-21 Thread pushkar priyadarshi
I think it's OK for this to appear at the start, when the topic is created,
as there is no high watermark (offset of the last committed message)
checkpointed yet. I got to understand this from this blog:
https://engineering.linkedin.com/kafka/intra-cluster-replication-apache-kafka

Thanks And Regards,
Pushkar

On Tue, Apr 21, 2015 at 3:07 PM, pushkar priyadarshi 
priyadarshi.push...@gmail.com wrote:

 I get warnings saying "No checkpointed highwatermark is found for
 partition" in server.log when trying to create a new topic.

 What does this mean? Though this is a warning, I was curious to know
 whether it implies any potential problem.

 Thanks And Regards,
 Pushkar



Warn No Checkpointed highwatermark is found for partition

2015-04-21 Thread pushkar priyadarshi
I get warnings saying "No checkpointed highwatermark is found for partition"
in server.log when trying to create a new topic.

What does this mean? Though this is a warning, I was curious to know whether
it implies any potential problem.

Thanks And Regards,
Pushkar


Re: Which version works for kafka 0.8.2 as consumer?

2015-04-01 Thread pushkar priyadarshi
So in 0.8.2.0/0.8.2.1 the high-level consumer cannot make use of offset
syncing in Kafka?

On Wed, Apr 1, 2015 at 12:51 PM, Jiangjie Qin j...@linkedin.com.invalid
wrote:

 Yes, KafkaConsumer in 0.8.2 is still in development. You probably still
 want to use ZookeeperConsumerConnector for now.

 On 4/1/15, 9:28 AM, Mark Zang deepnight...@gmail.com wrote:

  I found that 0.8.2.0 and 0.8.2.1 have a KafkaConsumer, but this class seems
  incomplete and not functional. Lots of methods return null or throw NSM.
  Which version of the consumer should I use with a Kafka 0.8.2 broker?
 
 Thanks!
 
 --
 Best regards!
 Mike Zang




using 0.8.2 in production

2015-03-30 Thread pushkar priyadarshi
Hi,

I remember that some time back people were asked not to upgrade to 0.8.2. I
wanted to know whether the issues pertaining to that have been resolved, and
whether it is safe now to migrate to 0.8.2.

Thanks And Regards,
Pushkar


Re: Interested in contributing to Kafka?

2014-07-16 Thread pushkar priyadarshi
I have been using kafka for quite some time now and would really be
interested in contributing to this awesome code base.

Regards,
Pushkar


On Thu, Jul 17, 2014 at 7:17 AM, Joe Stein joe.st...@stealth.ly wrote:

 ./gradlew scaladoc

 Builds the scala doc, perhaps we can start to publish this again with the
 next release and link it on the website.  For more related check out the
 README


 /***
  Joe Stein
  Founder, Principal Consultant
  Big Data Open Source Security LLC
  http://www.stealth.ly
  Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
 /


 On Wed, Jul 16, 2014 at 8:39 PM, hsy...@gmail.com hsy...@gmail.com
 wrote:

  Is there a scala API doc for the entire kafka library?
 
 
  On Wed, Jul 16, 2014 at 5:34 PM, hsy...@gmail.com hsy...@gmail.com
  wrote:
 
   Hi Jay,
  
   I would like to take a look at the code base and maybe start working on
   some jiras.
  
   Best,
   Siyuan
  
  
   On Wed, Jul 16, 2014 at 3:09 PM, Jay Kreps jay.kr...@gmail.com
 wrote:
  
   Hey All,
  
   A number of people have been submitting really nice patches recently.
  
   If you are interested in contributing and are looking for something to
   work on, or if you are contributing and are interested in ramping up
   to be a committer on the project, please let us know--we are happy to
   help you help us :-). It is often hard to know what JIRAs or projects
   would be good to work on, how hard those will be, and where to get
   started. Feel free to reach out to me, Neha, Jun, or any of the other
   committers for help with this.
  
   Cheers,
  
-Jay
  
  
  
 



Re: Help is processing huge data through Kafka-storm cluster

2014-06-15 Thread pushkar priyadarshi
What throughput are you getting from your kafka cluster alone? Storm
throughput can depend on what processing you are actually doing inside it,
so you must look at each component, starting with kafka first.

Regards,
Pushkar


On Sat, Jun 14, 2014 at 8:44 PM, Shaikh Ahmed rnsr.sha...@gmail.com wrote:

 Hi,

 Daily we download 28 million messages, and monthly it goes up to
 800+ million.

 We want to process this amount of data through our kafka and storm cluster
 and would like to store it in an HBase cluster.

 We are targeting to process one month of data in one day. Is it possible?

 We set up our cluster thinking that we could process millions of messages
 per second, as mentioned on the web. Unfortunately, we have ended up
 processing only 1200-1700 messages per second. If we continue at this
 speed it will take at least 10 days to process 30 days of data, which is
 not a workable solution in our case.

 I suspect that we have to change some configuration to achieve this goal.
 Looking for help from experts to support me in achieving this task.

 *Kafka Cluster:*
 Kafka is running on two dedicated machines with 48 GB of RAM and 2TB of
 storage. We have total 11 nodes kafka cluster spread across these two
 servers.

 *Kafka Configuration:*
 producer.type=async
 compression.codec=none
 request.required.acks=-1
 serializer.class=kafka.serializer.StringEncoder
 queue.buffering.max.ms=10
 batch.num.messages=1
 queue.buffering.max.messages=10
 default.replication.factor=3
 controlled.shutdown.enable=true
 auto.leader.rebalance.enable=true
 num.network.threads=2
 num.io.threads=8
 num.partitions=4
 log.retention.hours=12
 log.segment.bytes=536870912
 log.retention.check.interval.ms=6
 log.cleaner.enable=false

 *Storm Cluster:*
 Storm is running with 5 supervisor and 1 nimbus on IBM servers with 48 GB
 of RAM and 8TB of storage. These servers are shared with hbase cluster.

 *Kafka spout configuration*
 kafkaConfig.bufferSizeBytes = 1024*1024*8;
 kafkaConfig.fetchSizeBytes = 1024*1024*4;
 kafkaConfig.forceFromStart = true;

 *Topology: StormTopology*
 Spout   - Partition: 4
 First Bolt -  parallelism hint: 6 and Num tasks: 5
 Second Bolt -  parallelism hint: 5
 Third Bolt -   parallelism hint: 3
 Fourth Bolt   -  parallelism hint: 3 and Num tasks: 4
 Fifth Bolt  -  parallelism hint: 3
 Sixth Bolt -  parallelism hint: 3

 *Supervisor configuration:*

 storm.local.dir: /app/storm
 storm.zookeeper.port: 2181
 storm.cluster.mode: distributed
 storm.local.mode.zmq: false
 supervisor.slots.ports:
 - 6700
 - 6701
 - 6702
 - 6703
 supervisor.worker.start.timeout.secs: 180
 supervisor.worker.timeout.secs: 30
 supervisor.monitor.frequency.secs: 3
 supervisor.heartbeat.frequency.secs: 5
 supervisor.enable: true

 storm.messaging.netty.server_worker_threads: 2
 storm.messaging.netty.client_worker_threads: 2
 storm.messaging.netty.buffer_size: 52428800 #50MB buffer
 storm.messaging.netty.max_retries: 25
 storm.messaging.netty.max_wait_ms: 1000
 storm.messaging.netty.min_wait_ms: 100


 supervisor.childopts: -Xmx1024m -Djava.net.preferIPv4Stack=true
 worker.childopts: -Xmx2048m -Djava.net.preferIPv4Stack=true


 Please let me know if more information needed..

 Thanks in advance.

 Regards,
 Riyaz



Re: Help is processing huge data through Kafka-storm cluster

2014-06-15 Thread pushkar priyadarshi
And one more thing: using the kafka metrics you can easily monitor the rate
at which you are able to publish to kafka and the speed at which your
consumer (in this case your spout) is able to drain messages out of kafka.
It's possible that, due to slow draining, even the publishing rate might be
affected in the worst case, because if the consumer lags behind too much it
will result in disk seeks while consuming the older messages.


On Sun, Jun 15, 2014 at 8:16 PM, pushkar priyadarshi 
priyadarshi.push...@gmail.com wrote:

 what throughput are you getting from your kafka cluster alone?Storm
 throughput can be dependent on what processing you are actually doing from
 inside it.so must look at each component starting from kafka first.

 Regards,
 Pushkar


 On Sat, Jun 14, 2014 at 8:44 PM, Shaikh Ahmed rnsr.sha...@gmail.com
 wrote:

 Hi,

 Daily we are downloaded 28 Million of messages and Monthly it goes up to
 800+ million.

 We want to process this amount of data through our kafka and storm cluster
 and would like to store in HBase cluster.

 We are targeting to process one month of data in one day. Is it possible?

 We have setup our cluster thinking that we can process million of messages
 in one sec as mentioned on web. Unfortunately, we have ended-up with
 processing only 1200-1700 message per second.  if we continue with this
 speed than it will take min 10 days to process 30 days of data, which is
 the relevant solution in our case.

 I suspect that we have to change some configuration to achieve this goal.
 Looking for help from experts to support me in achieving this task.

 *Kafka Cluster:*
 Kafka is running on two dedicated machines with 48 GB of RAM and 2TB of
 storage. We have total 11 nodes kafka cluster spread across these two
 servers.

 *Kafka Configuration:*
 producer.type=async
 compression.codec=none
 request.required.acks=-1
 serializer.class=kafka.serializer.StringEncoder
 queue.buffering.max.ms=10
 batch.num.messages=1
 queue.buffering.max.messages=10
 default.replication.factor=3
 controlled.shutdown.enable=true
 auto.leader.rebalance.enable=true
 num.network.threads=2
 num.io.threads=8
 num.partitions=4
 log.retention.hours=12
 log.segment.bytes=536870912
 log.retention.check.interval.ms=6
 log.cleaner.enable=false

 *Storm Cluster:*
 Storm is running with 5 supervisor and 1 nimbus on IBM servers with 48 GB
 of RAM and 8TB of storage. These servers are shared with hbase cluster.

 *Kafka spout configuration*
 kafkaConfig.bufferSizeBytes = 1024*1024*8;
 kafkaConfig.fetchSizeBytes = 1024*1024*4;
 kafkaConfig.forceFromStart = true;

 *Topology: StormTopology*
 Spout   - Partition: 4
 First Bolt -  parallelism hint: 6 and Num tasks: 5
 Second Bolt -  parallelism hint: 5
 Third Bolt -   parallelism hint: 3
 Fourth Bolt   -  parallelism hint: 3 and Num tasks: 4
 Fifth Bolt  -  parallelism hint: 3
 Sixth Bolt -  parallelism hint: 3

 *Supervisor configuration:*

 storm.local.dir: /app/storm
 storm.zookeeper.port: 2181
 storm.cluster.mode: distributed
 storm.local.mode.zmq: false
 supervisor.slots.ports:
 - 6700
 - 6701
 - 6702
 - 6703
 supervisor.worker.start.timeout.secs: 180
 supervisor.worker.timeout.secs: 30
 supervisor.monitor.frequency.secs: 3
 supervisor.heartbeat.frequency.secs: 5
 supervisor.enable: true

 storm.messaging.netty.server_worker_threads: 2
 storm.messaging.netty.client_worker_threads: 2
 storm.messaging.netty.buffer_size: 52428800 #50MB buffer
 storm.messaging.netty.max_retries: 25
 storm.messaging.netty.max_wait_ms: 1000
 storm.messaging.netty.min_wait_ms: 100


 supervisor.childopts: -Xmx1024m -Djava.net.preferIPv4Stack=true
 worker.childopts: -Xmx2048m -Djava.net.preferIPv4Stack=true


 Please let me know if more information needed..

 Thanks in advance.

 Regards,
 Riyaz





Re: Sync Producer

2014-06-08 Thread pushkar priyadarshi
Setting the config is the way to use async. It throws an exception when it
is unable to send a message.


On Sun, Jun 8, 2014 at 12:46 PM, Achanta Vamsi Subhash 
achanta.va...@flipkart.com wrote:

 - Is setting the producer's type config to sync the way to do it?
 - Is the exception thrown a RuntimeException? My IDE doesn't show that an
 exception is being thrown.



 On Sun, Jun 8, 2014 at 12:24 PM, Achanta Vamsi Subhash 
 achanta.va...@flipkart.com wrote:

  Hi,
 
  How do I use a sync producer with a KeyedMessage<String, String>? The
  example in the documentation points to the async producer. What exceptions
  will be thrown if producer.send() fails?

  Could anyone point to an example of a sync producer?
 
  --
  Regards
  Vamsi Subhash
 



 --
 Regards
 Vamsi Subhash



Re: New Metrics Reporter for Graphite

2014-05-22 Thread pushkar priyadarshi
Hello Damien,
I'm also using the same thing for pushing to graphite (forked from the
ganglia reporter), but I don't see default JVM parameters like OS metrics
being pushed to graphite. Have you checked your version? Are you able to
push these metrics as well?


On Thu, May 22, 2014 at 8:02 PM, Jun Rao jun...@gmail.com wrote:

 Thanks for sharing. Added to the wiki.

 Jun


 On Thu, May 22, 2014 at 12:33 AM, Damien Claveau
 damien.clav...@gmail.comwrote:

  Hi,
 
  I have released an additional MetricsReporter for Graphite.
 
  It is basically a fork from the kafka-ganglia project on Github,
  and it is available here :
 https://github.com/damienclaveau/kafka-graphite
 
  Dear Jun, you could maybe add the link in the wiki here :
  https://cwiki.apache.org/confluence/display/KAFKA/JMX+Reporters
  (I haven't found how to comment on a confluence page :-S )
 
  Hope this will help.
 



Re: Kafka: writing custom Encoder/Serializer

2014-05-20 Thread pushkar priyadarshi
You can send the byte[] that you get by using your own serializer through
kafka. On the receiving side you can deserialize from the byte[] and read
back your object. To use this you will have to supply
serializer.class=kafka.serializer.DefaultEncoder in the properties.
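A rough sketch of that approach with the 0.8 producer (the topic, broker
address and the serialize() helper are placeholders, not from the original
mail):

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class RawBytesProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("metadata.broker.list", "localhost:9092");               // placeholder
            props.put("serializer.class", "kafka.serializer.DefaultEncoder");  // value: byte[]
            props.put("key.serializer.class", "kafka.serializer.StringEncoder");
            Producer<String, byte[]> producer =
                new Producer<String, byte[]>(new ProducerConfig(props));

            byte[] payload = serialize();   // your own serialization (Jackson, Java serialization, ...)
            producer.send(new KeyedMessage<String, byte[]>("my-topic", "some-key", payload));
            producer.close();
        }

        private static byte[] serialize() {  // hypothetical helper standing in for your serializer
            return "example".getBytes();
        }
    }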


On Tue, May 20, 2014 at 4:23 PM, Kumar Pradeep kprad...@novell.com wrote:

 I am trying to build a POC with Kafka 0.8.1. I am using my own java class
 as a Kafka message which has a bunch of String data types. For
 serializer.class property in my producer, I cannot use the default
 serializer class or the String serializer class that comes with Kafka
 library. I guess I need to write my own serializer and feed it to the
 producer properties. If you are aware of writing an example custom
 serializer in Kafka (in java), please do share. Appreciate a lot, thanks
 much.

 I tried to use something like below, but I get the exception: Exception in
 thread "main" java.lang.NoSuchMethodException:
 test.EventsDataSerializer.<init>(kafka.utils.VerifiableProperties)
  at java.lang.Class.getConstructor0(Class.java:2971)


 package test;

 import java.io.IOException;

 import com.fasterxml.jackson.core.JsonFactory;
 import com.fasterxml.jackson.databind.ObjectMapper;

 import kafka.message.Message;
 import kafka.serializer.Decoder;
 import kafka.serializer.Encoder;

 public class EventsDataSerializer implements Encoder<SimulateEvent>, Decoder<SimulateEvent> {

     public Message toMessage(SimulateEvent eventDetails) {
         try {
             ObjectMapper mapper = new ObjectMapper(new JsonFactory());
             byte[] serialized = mapper.writeValueAsBytes(eventDetails);
             return new Message(serialized);
         } catch (IOException e) {
             e.printStackTrace();
             return null;   // TODO
         }
     }

     public SimulateEvent toEvent(Message message) {
         SimulateEvent event = new SimulateEvent();
         ObjectMapper mapper = new ObjectMapper(new JsonFactory());
         try {
             // TODO handle error
             return mapper.readValue(message.payload().array(), SimulateEvent.class);
         } catch (IOException e) {
             e.printStackTrace();
             return null;
         }
     }

     public byte[] toBytes(SimulateEvent arg0) {
         // TODO Auto-generated method stub
         return null;
     }

     public SimulateEvent fromBytes(byte[] arg0) {
         // TODO Auto-generated method stub
         return null;
     }
 }





Re: Kafka: writing custom Encoder/Serializer

2014-05-20 Thread pushkar priyadarshi
Producer<String, byte[]> producer = new Producer<String, byte[]>(config);

Try this.



On Wed, May 21, 2014 at 12:26 AM, Neha Narkhede neha.narkh...@gmail.com wrote:

 Pradeep,

 If you are writing a POC, I'd suggest you do that using the new producer
 APIs
 http://people.apache.org/~nehanarkhede/kafka-0.9-producer-javadoc/doc/org/apache/kafka/clients/producer/Producer.html
 .
 These are much easier to use, exposes more functionality and the new
 producer is faster than the older one. It is currently in beta, slated for
 release in 0.8.2 or 0.9 and we are working on stabilizing it, but it should
 work great for your POC. We'd love to hear feedback on the APIs.

 Thanks,
 Neha


 On Tue, May 20, 2014 at 10:51 AM, Kumar Pradeep kprad...@novell.com
 wrote:

  Thanks Pushkar for your response.
 
  I tried to send my own byte array; however the Kafka Producer Class does
  not take byte [] as input type. Do you have an example of this? Please
  share if you do; really appreciate.
 
  Here is my code:
 
 
  public class TestEventProducer {
      public static void main(String[] args) {

          String topic = "test-topic";
          long eventsNum = 10;

          Properties props = new Properties();
          props.put("metadata.broker.list", "localhost:9092");
          props.put("serializer.class", "kafka.serializer.DefaultEncoder");
          props.put("request.required.acks", "0");
          ProducerConfig config = new ProducerConfig(props);

          byte[] rawData;
          Producer<String, rawData> producer =
              new Producer<String, rawData>(config); // compilation error: rawData cannot be resolved to a type

          long start = System.currentTimeMillis();

          for (long nEvents = 0; nEvents < eventsNum; nEvents++) {
              SimulateEvent event = new SimulateEvent();
              try {
                  rawData = Serializer.serialize(event);
              } catch (IOException e) {
                  e.printStackTrace();
              }
              KeyedMessage<String, rawData> data =
                  new KeyedMessage<String, rawData>(topic, event);
              producer.send(data);
              System.out.println("produced event#: " + nEvents + " " + data);
          }
          System.out.println("Took " + (System.currentTimeMillis() - start)
              + " to produce " + eventsNum + " messages");
          producer.close();
      }
  }
 
  public class Serializer {
  public static byte[] serialize(Object obj) throws IOException {
  ByteArrayOutputStream b = new ByteArrayOutputStream();
  ObjectOutputStream o = new ObjectOutputStream(b);
  o.writeObject(obj);
  return b.toByteArray();
  }
 
  public static Object deserialize(byte[] bytes) throws IOException,
  ClassNotFoundException {
  ByteArrayInputStream b = new ByteArrayInputStream(bytes);
  ObjectInputStream o = new ObjectInputStream(b);
  return o.readObject();
  }
  }
 
  pushkar priyadarshi priyadarshi.push...@gmail.com 5/20/2014 5:11 PM

  you can send byte[] that you get by using your own serializer through
  kafka. On the receiving side you can deserialize from the byte[] and read
  back your object. To use this you will have to supply
  serializer.class=kafka.serializer.DefaultEncoder in the properties.

  On Tue, May 20, 2014 at 4:23 PM, Kumar Pradeep kprad...@novell.com
  wrote:

   I am trying to build a POC with Kafka 0.8.1. I am using my own java class
   as a Kafka message which has a bunch of String data types. For
   serializer.class property in my producer, I cannot use the default
   serializer class or the String serializer class that comes with Kafka
   library. I guess I need to write my own serializer and feed it to the
   producer properties. If you are aware of writing an example custom
   serializer in Kafka (in java), please do share. Appreciate a lot, thanks
   much.

   I tried to use something like below, but I get the exception: Exception in
   thread "main" java.lang.NoSuchMethodException:
   test.EventsDataSerializer.<init>(kafka.utils.VerifiableProperties)
    at java.lang.Class.getConstructor0(Class.java:2971)

   package test;

   import java.io.IOException;

   import com.fasterxml.jackson.core.JsonFactory;
   import com.fasterxml.jackson.databind.ObjectMapper;

   import kafka.message.Message;
   import kafka.serializer.Decoder;
   import kafka.serializer.Encoder;

   public class EventsDataSerializer implements Encoder<SimulateEvent>,
   Decoder<SimulateEvent> {

       public Message toMessage(SimulateEvent eventDetails) {
           try {
               ObjectMapper mapper = new ObjectMapper(new JsonFactory());
               byte[] serialized = mapper.writeValueAsBytes(eventDetails);
               return new Message(serialized);
           } catch (IOException e) {
               e.printStackTrace();
               return null;   // TODO

Re: Kafka Performance Tuning

2014-04-24 Thread pushkar priyadarshi
You can use kafka-list-topic.sh to find out whether the leader for a
particular topic is available; -1 in the leader column might indicate
trouble.


On Fri, Apr 25, 2014 at 6:34 AM, Guozhang Wang wangg...@gmail.com wrote:

 Could you double check if the topic LOGFILE04 is already created on the
 servers?

 Guozhang


 On Thu, Apr 24, 2014 at 10:46 AM, Yashika Gupta 
 yashika.gu...@impetus.co.in
  wrote:

  Jun,
 
  The detailed logs are as follows:
 
  24.04.2014 13:37:31812 INFO main kafka.producer.SyncProducer -
  Disconnecting from localhost:9092
  24.04.2014 13:37:38612 WARN main kafka.producer.BrokerPartitionInfo -
  Error while fetching metadata [{TopicMetadata for topic LOGFILE04 -
  No partition metadata for topic LOGFILE04 due to
  kafka.common.LeaderNotAvailableException}] for topic [LOGFILE04]: class
  kafka.common.LeaderNotAvailableException
  24.04.2014 13:37:40712 INFO main kafka.client.ClientUtils$ - Fetching
  metadata from broker id:0,host:localhost,port:9092 with correlation id 1
  for 1 topic(s) Set(LOGFILE04)
  24.04.2014 13:37:41212 INFO main kafka.producer.SyncProducer - Connected
  to localhost:9092 for producing
  24.04.2014 13:37:48812 INFO main kafka.producer.SyncProducer -
  Disconnecting from localhost:9092
  24.04.2014 13:37:48912 WARN main kafka.producer.BrokerPartitionInfo -
  Error while fetching metadata [{TopicMetadata for topic LOGFILE04 -
  No partition metadata for topic LOGFILE04 due to
  kafka.common.LeaderNotAvailableException}] for topic [LOGFILE04]: class
  kafka.common.LeaderNotAvailableException
  24.04.2014 13:37:49012 ERROR main
 kafka.producer.async.DefaultEventHandler
  - Failed to collate messages by topic, partition due to: Failed to fetch
  topic metadata for topic: LOGFILE04
 
 
  24.04.2014 13:39:96513 WARN
 
 ConsumerFetcherThread-produceLogLine2_vcmd-devanshu-1398361030812-8a0c706e-0-0
  kafka.consumer.ConsumerFetcherThread -
 
 [ConsumerFetcherThread-produceLogLine2_vcmd-devanshu-1398361030812-8a0c706e-0-0],
  Error in fetch Name: FetchRequest; Version: 0; CorrelationId: 4;
 ClientId:
 
 produceLogLine2-ConsumerFetcherThread-produceLogLine2_vcmd-devanshu-1398361030812-8a0c706e-0-0;
  ReplicaId: -1; MaxWait: 6 ms; MinBytes: 1 bytes; RequestInfo:
  [LOGFILE04,0] - PartitionFetchInfo(2,1048576)
  java.net.SocketTimeoutException
  at
  sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:229)
  at
 sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
  at
 
 java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
  at kafka.utils.Utils$.read(Unknown Source)
  at kafka.network.BoundedByteBufferReceive.readFrom(Unknown
 Source)
  at kafka.network.Receive$class.readCompletely(Unknown Source)
  at kafka.network.BoundedByteBufferReceive.readCompletely(Unknown
  Source)
  at kafka.network.BlockingChannel.receive(Unknown Source)
  at kafka.consumer.SimpleConsumer.liftedTree1$1(Unknown Source)
  at
 
 kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(Unknown
  Source)
  at
 
 kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(Unknown
  Source)
  at
 
 kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(Unknown
  Source)
  at
 
 kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(Unknown
  Source)
  at kafka.metrics.KafkaTimer.time(Unknown Source)
  at
  kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(Unknown
 Source)
  at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(Unknown
  Source)
  at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(Unknown
  Source)
  at kafka.metrics.KafkaTimer.time(Unknown Source)
  at kafka.consumer.SimpleConsumer.fetch(Unknown Source)
  at kafka.server.AbstractFetcherThread.processFetchRequest(Unknown
  Source)
  at kafka.server.AbstractFetcherThread.doWork(Unknown Source)
  at kafka.utils.ShutdownableThread.run(Unknown Source)
 
 
  Regards,
  Yashika
  
  From: Jun Rao jun...@gmail.com
  Sent: Thursday, April 24, 2014 10:49 PM
  To: users@kafka.apache.org
  Subject: Re: Kafka Performance Tuning
 
  Before that error message, the log should tell you the cause of the error.
  Could you dig that out?
 
  Thanks,
 
  Jun
 
 
  On Thu, Apr 24, 2014 at 10:12 AM, Yashika Gupta 
  yashika.gu...@impetus.co.in
   wrote:
 
   Hi,
  
   I am working on a POC where I have 1 Zookeeper and 2 Kafka Brokers on
 my
   local machine. I am running 8 sets of Kafka consumers and producers
  running
   in parallel.
  
   Below are my configurations:
   Consumer Configs:
   zookeeper.session.timeout.ms=12
   zookeeper.sync.time.ms=2000
   zookeeper.connection.timeout.ms=12
   auto.commit.interval.ms=6
   rebalance.backoff.ms=2000
   fetch.wait.max.ms=6
   

Re: Review for the new consumer APIs

2014-04-08 Thread pushkar priyadarshi
I was trying to understand: when we have subscribe, why is poll a separate
API? Why can't we pass a callback in subscribe itself?


On Mon, Apr 7, 2014 at 9:51 PM, Neha Narkhede neha.narkh...@gmail.com wrote:

 Hi,

 I'm looking for people to review the new consumers APIs. Patch is posted at
 https://issues.apache.org/jira/browse/KAFKA-1328

 Thanks,
 Neha



Re: Puppet module for deploying Kafka released

2014-02-26 Thread pushkar priyadarshi
I have been using the one from here:

https://github.com/whisklabs/puppet-kafka

but I had to fix a few small problems, e.g. when it starts kafka as an
upstart service it does not provide a log path, so the kafka logs never
appear, since as a service there is no default terminal.

Thanks for sharing. I will start using it. Any plans for adding this to
Puppet Labs?

Regards,
Pushkar


On Wed, Feb 26, 2014 at 2:23 PM, Michael G. Noll
mich...@michael-noll.com wrote:

 Hi everyone,

 I have released a Puppet module to deploy Kafka 0.8 in case anyone is
 interested.

 The module uses Puppet parameterized classes and as such decouples code
 (Puppet manifests) from configuration data -- hence you can use Puppet
 Hiera to configure the way Kafka is deployed without having to write or
 fork/modify Puppet manifests.  The module is available under the Apache
 v2 license.  Any code contributions, bug reports, etc. are of course
 very welcome.

 The module including docs and examples is available at:
 https://github.com/miguno/puppet-kafka

 Enjoy!
 Michael






Re: Kafka High Level Consumer Fetch All Messages From Topic Using Java API (Equivalent to --from-beginning)

2014-02-14 Thread pushkar priyadarshi
I don't think there is any direct high-level API equivalent to this. Every
time you read messages using the high-level API your offset gets synced in
ZooKeeper. auto.offset.reset is for cases where, for example, the last read
offset has been purged, and rather than getting an exception you want to
just fall back to either the most current or the oldest message offset.
But others' more experienced opinions on this would be great.
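A sketch of what that usually means in practice (assuming you are free to
pick a new group.id): auto.offset.reset=smallest only applies when the group
has no stored offset yet, so re-reading a topic from the beginning is
typically done with a fresh group, along the lines of the
createConsumerConfig() quoted below:

    Properties props = new Properties();
    props.put("zookeeper.connect", "localhost:2181");              // placeholder
    props.put("group.id", "replay-" + System.currentTimeMillis()); // fresh group => no stored offset
    props.put("auto.offset.reset", "smallest");                    // so consumption starts at the earliest offset
    ConsumerConfig config = new ConsumerConfig(props);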
Regards,
Pushkar
On Feb 14, 2014 4:40 PM, jpa...@yahoo.com wrote:

 Good Morning,

 I am testing the Kafka High Level Consumer using the ConsumerGroupExample
 code from the Kafka site. I would like to retrieve all the existing
 messages on the topic called test that I have in the Kafka server config.
 Looking at other blogs, auto.offset.reset should be set to smallest to be
 able to get all messages:
 private static ConsumerConfig createConsumerConfig(String a_zookeeper, String a_groupId) {
     Properties props = new Properties();
     props.put("zookeeper.connect", a_zookeeper);
     props.put("group.id", a_groupId);
     props.put("auto.offset.reset", "smallest");
     props.put("zookeeper.session.timeout.ms", "1");
     return new ConsumerConfig(props);
 }
 The question I really have is this: what is the Java API call for the High
 Level Consumer that is the equivalent of:
 bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test
 --from-beginning
 Thx for your help!!


Pattern for using kafka producer API

2014-02-09 Thread pushkar priyadarshi
What is the most appropriate design for using the kafka producer from a
performance viewpoint? I had a few in mind.

1. Since a single kafka producer object has synchronization, using a single
producer object from multiple threads might not be efficient, so one way
would be to use multiple kafka producers from inside the same thread.

2. Have multiple threads, each with its own instance of the producer. This
has thread overheads if kafka is internally using the same semantics.

It would be great if someone could comment on these approaches or suggest
the widely used one.
P.S. I'm using 0.8.0 and am mostly concerned with the async producer.

Thanks And Regards,
Pushkar


Re: which zookeeper version

2014-01-02 Thread pushkar priyadarshi
Thanks Jason.


On Thu, Jan 2, 2014 at 7:04 PM, Jason Rosenberg j...@squareup.com wrote:

 Hi Pushkar,

 We've been using zk 3.4.5 for several months now, without any
 problems, in production.

 Jason

 On Thu, Jan 2, 2014 at 1:15 AM, pushkar priyadarshi
 priyadarshi.push...@gmail.com wrote:
  Hi,
 
  I am starting a fresh deployment of kafka + zookeeper.Looking at
 zookeeper
  releases find 3.4.5 old and stable enough.Has anyone used this before in
  production?
  kafka ops wiki page says at Linkedin deployment still uses 3.3.4.Any
  specific reason for the same.
 
  Thanks And Regards,
  Pushkar



which zookeeper version

2014-01-01 Thread pushkar priyadarshi
Hi,

I am starting a fresh deployment of kafka + zookeeper. Looking at the
zookeeper releases, I find 3.4.5 old and stable enough. Has anyone used it
before in production?
The kafka ops wiki page says the LinkedIn deployment still uses 3.3.4. Is
there any specific reason for that?

Thanks And Regards,
Pushkar


Re: doubt regarding the metadata.brokers.list parameter in producer properties

2013-12-19 Thread pushkar priyadarshi
1. When you start producing: at this point, if any of your supplied brokers
is alive, the system will continue to work.
2. A broker going down and coming up with a new IP: the producer API
refreshes metadata information on failures (configurable), so producers
should be able to detect new brokers.
But I don't think it's possible to ignore the initially supplied parameter.
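As an illustration (broker addresses and values are placeholders), the 0.8
producer only needs a seed list here and then refreshes the full cluster
metadata from whichever of those brokers responds:

    Properties props = new Properties();
    props.put("metadata.broker.list", "broker1:9092,broker2:9092,broker3:9092"); // seed list only
    props.put("topic.metadata.refresh.interval.ms", "600000"); // periodic refresh, default ~10 minutes
    props.put("retry.backoff.ms", "100"); // wait before refreshing metadata after a failed send
    ProducerConfig config = new ProducerConfig(props);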


On Thu, Dec 19, 2013 at 4:57 PM, Arjun ar...@socialtwist.com wrote:

 Hi,

 I am running kafka 0.8 and zoo keeper along with it. The problem i have is
 with the metadata.broker.list which is present in the producer properties.
 As i am using the zookeeper can i just ignore this property and kafka will
 work fine?
 We run kafka on the ec2 nodes, lets say we are running some 3 kafka
 servers, and producers metadata.broker.list populated with the ip's of
 these nodes, tomorrow if we add two more nodes for scaling up the system we
 need to add those nodes to producers.
 Lets say in the above scenario(my nodes ip's are non elastic), then for
 some reason one of my nodes is down, and we brought up then the ip will get
 changed, so the producers should be changed to have new ip? what will
 happen if one after another all these servers are down and being brought up?

 Is there any way I don't give the list but kafka with the help of
 zookeeper gets the list. I know kafka do not want to be dependent on
 zookeeper, but what should we do in such a case.

 Thanks
 Arjun Narasimha Kota



Re: doubt regarding the metadata.brokers.list parameter in producer properties

2013-12-19 Thread pushkar priyadarshi
Is auto broker discovery not possible through zk.connect?

Regards,
Pushkar


On Thu, Dec 19, 2013 at 9:29 PM, Jun Rao jun...@gmail.com wrote:

 metadata.broker.list is also used when there are leader failures. Could you
 get a vip in ec2 and put the brokers' ip behind the vip?

 Thanks,

 Jun


 On Thu, Dec 19, 2013 at 4:07 AM, Arjun ar...@socialtwist.com wrote:

  So from your reply what i understood is this particular property iis used
  only when starting the producers.
 
  is that right? can you please confirm.
 
  Thanks
  Arjun Narasimha Kota
 
 
  On Thursday 19 December 2013 05:33 PM, pushkar priyadarshi wrote:
 
  1.When you start producing : at this time if any of your supplied broker
  is
  alive system will continue to work.
  2.Broker going down and coming up with new IP : producer API refreshes
  metadata information on failures(configurable) so they should be able to
  detect new brokers.
  But i dont think it's possible to ignore the initially supplied
 parameter.
 
 
  On Thu, Dec 19, 2013 at 4:57 PM, Arjun ar...@socialtwist.com wrote:
 
   Hi,
 
  I am running kafka 0.8 and zoo keeper along with it. The problem i have
  is
  with the metadata.broker.list which is present in the producer
  properties.
  As i am using the zookeeper can i just ignore this property and kafka
  will
  work fine?
  We run kafka on the ec2 nodes, lets say we are running some 3 kafka
  servers, and producers metadata.broker.list populated with the ip's of
  these nodes, tomorrow if we add two more nodes for scaling up the
 system
  we
  need to add those nodes to producers.
  Lets say in the above scenario(my nodes ip's are non elastic), then for
  some reason one of my nodes is down, and we brought up then the ip will
  get
  changed, so the producers should be changed to have new ip? what will
  happen if one after another all these servers are down and being
 brought
  up?
 
  Is there any way I don't give the list but kafka with the help of
  zookeeper gets the list. I know kafka do not want to be dependent on
  zookeeper, but what should we do in such a case.
 
  Thanks
  Arjun Narasimha Kota
 
 
 



kafka build error scala 2.10

2013-12-18 Thread pushkar priyadarshi
While doing dev setup as described in
https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup

I'm getting the following build errors.

immutable is already defined as class immutable Annotations_2.9+.scala
/KafkaEclipse/core/src/main/scala/kafka/utils line 38 Scala Problem

threadsafe is already defined as class threadsafe Annotations_2.9+.scala
/KafkaEclipse/core/src/main/scala/kafka/utils line 28 Scala Problem

nonthreadsafe is already defined as class nonthreadsafe
Annotations_2.9+.scala /KafkaEclipse/core/src/main/scala/kafka/utils
line 33 Scala
Problem


These errors are coming from the file
Util /kafka/src/main/scala/kafka/utils/Annotations_2.9+.scala

Please note that I had to install the Scala 2.10 Eclipse plugin as Juno had
some problems with 2.9.


Regards,

Pushkar


Re: regarding run-simulator.sh

2013-12-18 Thread pushkar priyadarshi
I see many tools mentioned for perf testing here:

https://cwiki.apache.org/confluence/display/KAFKA/Performance+testing

Of all these, which ones already exist in the 0.8 release?
E.g. I was not able to find jmx-dump.sh, the R script, etc. anywhere.
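For what it's worth, the producer perf test that does ship with 0.8 can be
run roughly like this (the exact option names vary a little between
releases; running the script without arguments prints the set it accepts):

    bin/kafka-producer-perf-test.sh \
        --broker-list localhost:9092 \
        --topics perf-test \
        --messages 1000000 \
        --message-size 100 \
        --threads 4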


On Wed, Dec 18, 2013 at 11:01 AM, pushkar priyadarshi 
priyadarshi.push...@gmail.com wrote:

 thanks Jun.


 On Wed, Dec 18, 2013 at 10:47 AM, Jun Rao jun...@gmail.com wrote:

 You can run kafka-producer-perf-test.sh and kafka-consumer-perf-test.sh.

 Thanks,

 Jun


 On Tue, Dec 17, 2013 at 8:44 PM, pushkar priyadarshi 
 priyadarshi.push...@gmail.com wrote:

  i am not able to find run-simulator.sh in 0.8 even after building
 perf.if
  this tool has been deprecated what are other alternatives available now
 for
  perf testing?
 
  Regards,
  Pushkar
 





Re: Data loss in case of request.required.acks set to -1

2013-12-18 Thread pushkar priyadarshi
You can try setting a higher value for message.send.max.retries in
producer config.
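For example, a hedged sketch of the relevant 0.8 producer settings (the
broker list and values are illustrative, not recommendations):

    Properties props = new Properties();
    props.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholder
    props.put("request.required.acks", "-1");
    props.put("message.send.max.retries", "10"); // default is 3
    props.put("retry.backoff.ms", "500");        // pause and refresh metadata before each retry
    ProducerConfig config = new ProducerConfig(props);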

Regards,
Pushkar


On Wed, Dec 18, 2013 at 5:34 PM, Hanish Bansal 
hanish.bansal.agar...@gmail.com wrote:

 Hi All,

 We are having kafka cluster of 2 nodes. (using 0.8.0 final release)
 Replication Factor: 2
 Number of partitions: 2


 I have configured request.required.acks in producer configuration to -1.

 As mentioned in documentation
 http://kafka.apache.org/documentation.html#producerconfigs, setting this
 value to -1 provides guarantee that no messages will be lost.

 I am seeing the behaviour below:

 If Kafka is running as a foreground process and I shut down the Kafka
 leader node using Ctrl+C, then no data is lost.

 But if I abnormally terminate Kafka using kill -9 pid, then I am still
 facing data loss even after configuring request.required.acks to -1.

 Any suggestions?
 --
 *Thanks  Regards*
 *Hanish Bansal*



Re: Data loss in case of request.required.acks set to -1

2013-12-18 Thread pushkar priyadarshi
My doubt was that they are dropping off at the producer level only, so I
suggested playing with parameters like retries and backoff.ms, and also with
the refresh interval on the producer side.

Regards,
Pushkar


On Wed, Dec 18, 2013 at 10:01 PM, Guozhang Wang wangg...@gmail.com wrote:

 Hanish,

 Did you kill -9 one of the brokers only or bouncing them iteratively?

 Guozhang


 On Wed, Dec 18, 2013 at 8:02 AM, Joe Stein joe.st...@stealth.ly wrote:

  How many replicas do you have?
 
 
  On Wed, Dec 18, 2013 at 8:57 AM, Hanish Bansal 
  hanish.bansal.agar...@gmail.com wrote:
 
   Hi pushkar,
  
   I tried with configuring  message.send.max.retries to 10. Default
 value
   for this is 3.
  
   But still facing data loss.
  
  
   On Wed, Dec 18, 2013 at 12:44 PM, pushkar priyadarshi 
   priyadarshi.push...@gmail.com wrote:
  
You can try setting a higher value for message.send.max.retries in
producer config.
   
Regards,
Pushkar
   
   
On Wed, Dec 18, 2013 at 5:34 PM, Hanish Bansal 
hanish.bansal.agar...@gmail.com wrote:
   
 Hi All,

 We are having kafka cluster of 2 nodes. (using 0.8.0 final release)
 Replication Factor: 2
 Number of partitions: 2


 I have configured request.required.acks in producer configuration
 to
   -1.

 As mentioned in documentation
 http://kafka.apache.org/documentation.html#producerconfigs,
 setting
   this
 value to -1 provides guarantee that no messages will be lost.

 I am getting below behaviour:

 If kafka is running as foreground process and i am shutting down
 the
kafka
 leader node using ctrl+C then no data is lost.

 But if i abnormally terminate the kafka using kill -9 pid then
   still
 facing data loss even after configuring request.required.acks to
 -1.

 Any suggestions?
 --
 *Thanks  Regards*
 *Hanish Bansal*

   
  
  
  
   --
   *Thanks  Regards*
   *Hanish Bansal*
  
 



 --
 -- Guozhang



Re: kafka build error scala 2.10

2013-12-18 Thread pushkar priyadarshi
I see two files named Annotation_2.8.scala and Annotation_2.9.scala.
Excluding them does not help. Is this what you were referring to?

Regards,
Pushkar


On Wed, Dec 18, 2013 at 9:52 PM, Jun Rao jun...@gmail.com wrote:

 You may have to exclude Annotations.scala.

 Thanks,

 Jun


 On Wed, Dec 18, 2013 at 12:16 AM, pushkar priyadarshi 
 priyadarshi.push...@gmail.com wrote:

  While doing dev setup as described in
  https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup
 
  im getting following build errors.
 
  immutable is already defined as class immutable Annotations_2.9+.scala
  /KafkaEclipse/core/src/main/scala/kafka/utils line 38 Scala Problem
 
  threadsafe is already defined as class threadsafe Annotations_2.9+.scala
  /KafkaEclipse/core/src/main/scala/kafka/utils line 28 Scala Problem
 
  nonthreadsafe is already defined as class nonthreadsafe
  Annotations_2.9+.scala /KafkaEclipse/core/src/main/scala/kafka/utils
  line 33 Scala
  Problem
 
 
  This error is coming from  a file
  Util /kafka/src/main/scala/kafka/utils/Annotations_2.9+.scala
 
  Please note that i had to install scala 2.10 eclipse plugin as Juno had
  some problem with 2.9.
 
 
  Regards,
 
  Pushkar
 



regarding run-simulator.sh

2013-12-17 Thread pushkar priyadarshi
I am not able to find run-simulator.sh in 0.8 even after building perf. If
this tool has been deprecated, what other alternatives are available now for
perf testing?

Regards,
Pushkar


Re: regarding run-simulator.sh

2013-12-17 Thread pushkar priyadarshi
thanks Jun.


On Wed, Dec 18, 2013 at 10:47 AM, Jun Rao jun...@gmail.com wrote:

 You can run kafka-producer-perf-test.sh and kafka-consumer-perf-test.sh.

 Thanks,

 Jun


 On Tue, Dec 17, 2013 at 8:44 PM, pushkar priyadarshi 
 priyadarshi.push...@gmail.com wrote:

  i am not able to find run-simulator.sh in 0.8 even after building perf.if
  this tool has been deprecated what are other alternatives available now
 for
  perf testing?
 
  Regards,
  Pushkar