Re: Kafka Kerberos Ansible

2017-03-06 Thread Mudit Agarwal
Thanks Le. However, my cluster is kerberized.
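
For a kerberized cluster, the standard CLI tools still work once they are
given a SASL/GSSAPI client configuration. A minimal sketch, assuming the
Kerberos setup below (keytab path, principal, and broker are placeholders):

# JAAS file telling the Java client how to obtain Kerberos credentials
cat > /tmp/kafka_client_jaas.conf <<'EOF'
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/client.keytab"
  principal="client@EXAMPLE.COM";
};
EOF

# Client properties enabling SASL/GSSAPI towards the brokers
cat > /tmp/client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
EOF

export KAFKA_OPTS="-Djava.security.auth.login.config=/tmp/kafka_client_jaas.conf"
bin/kafka-consumer-groups.sh --new-consumer --describe --group default \
  --bootstrap-server broker1:9092 --command-config /tmp/client.properties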

  From: Le Cyberian <lecyber...@gmail.com>
 To: Mudit Agarwal <mudit...@yahoo.com> 
 Sent: Monday, 6 March 2017 9:24 PM
 Subject: Re: Kafka Kerberos Ansible
   
Hi Mudit,

I guess this is more related to Ansible than to Kafka itself; however, I
will try to answer.

Ansible uses SSH, and you already have passwordless SSH from the Ansible
host (which executes the playbooks) to the Kafka cluster.

You can simply use the Ansible command or shell module to get the list of
topics in the respective group.

For example: bin/kafka-consumer-groups.sh --new-consumer --describe --group
default --bootstrap-server localhost:9092

You can use the above to get the list of topics along with any lag the
group may have while processing the pipeline.
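
From the Ansible host this can be a one-off ad-hoc call over the existing
passwordless SSH; a sketch, assuming a (hypothetical) inventory group named
"kafka" and Kafka installed under /opt/kafka:

# Run the listing on the broker hosts via Ansible's shell module
ansible kafka -m shell -a \
  "/opt/kafka/bin/kafka-consumer-groups.sh --new-consumer --describe \
   --group default --bootstrap-server localhost:9092"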

I am not sure how listing topics helps in your Ansible role/task; maybe
you are using assert or something similar to check a condition.

BR,

Le

On Mon, Mar 6, 2017 at 9:57 PM, Mudit Agarwal <mudit...@yahoo.com> wrote:

> Let me reframe the question.
>
> How can I list the topics, using an Ansible script, from an Ansible host
> that is outside the Kafka cluster?
> My Kafka cluster is kerberized.
> Kafka and Ansible have passwordless SSH between them.
>
> Thanks,
> Mudit
>
>
> --
> *From:* Le Cyberian <lecyber...@gmail.com>
> *To:* users@kafka.apache.org; Mudit Agarwal <mudit...@yahoo.com>
> *Sent:* Monday, 6 March 2017 6:46 PM
> *Subject:* Re: Kafka Kerberos Ansible
>
> Hi Mudit,
>
> What do you mean by accessing the Kafka cluster from outside the Ansible
> VM? Kafka needs to listen on an interface that is reachable from the
> network outside of the VM.
>
> BR,
>
> Lee
>
> On Mon, Mar 6, 2017 at 7:42 PM, Mudit Agarwal <mudit...@yahoo.com.invalid>
> wrote:
>
> Hi,
> How can we access the Kafka cluster from an outside Ansible VM? The Kafka
> cluster is kerberized. It is an all-Linux environment.
> Thanks,
> Mudit

Re: Kafka Kerberos Ansible

2017-03-06 Thread Mudit Agarwal
Let me reframe the question.
How can I list the topics, using an Ansible script, from an Ansible host
that is outside the Kafka cluster? My Kafka cluster is kerberized. Kafka
and Ansible have passwordless SSH between them.
Thanks,
Mudit

  From: Le Cyberian <lecyber...@gmail.com>
 To: users@kafka.apache.org; Mudit Agarwal <mudit...@yahoo.com> 
 Sent: Monday, 6 March 2017 6:46 PM
 Subject: Re: Kafka Kerberos Ansible
   
Hi Mudit,

What do you mean by accessing the Kafka cluster from outside the Ansible VM?
Kafka needs to listen on an interface that is reachable from the network
outside of the VM.
BR,

Lee
On Mon, Mar 6, 2017 at 7:42 PM, Mudit Agarwal <mudit...@yahoo.com.invalid> 
wrote:

Hi,
How can we access the Kafka cluster from an outside Ansible VM? The Kafka
cluster is kerberized. It is an all-Linux environment.
Thanks,
Mudit



   



   

Kafka Kerberos Ansible

2017-03-06 Thread Mudit Agarwal
Hi,
How can we access the Kafka cluster from an outside Ansible VM? The Kafka
cluster is kerberized. It is an all-Linux environment.
Thanks,
Mudit

 
 
   

Re: Kafka Multi DataCenter HA/Failover

2016-10-28 Thread Mudit Agarwal
I mean option 1: the producers in Datacenter A will start writing to Kafka
in Datacenter B if Kafka in A is failing.

  From: "Tauzell, Dave" <dave.tauz...@surescripts.com>
 To: "users@kafka.apache.org" <users@kafka.apache.org>; Mudit Agarwal 
<mudit...@yahoo.com> 
 Sent: Friday, 28 October 2016 4:22 PM
 Subject: RE: Kafka Multi DataCenter HA/Failover
   
By failover do you mean:

1. The producers in Datacenter A will start writing to Kafka in Datacenter B if 
Kafka in A is failing?
Or
2. Consumers in Datacenter B have access to messages written to Kafka in
Datacenter A?

-Dave

-Original Message-
From: Mudit Agarwal [mailto:mudit...@yahoo.com.INVALID] 
Sent: Friday, October 28, 2016 10:09 AM
To: users@kafka.apache.org
Subject: Re: Kafka Multi DataCenter HA/Failover

Thanks Dave.
Is there any way to achieve HA/failover in Kafka across two DCs?
Thanks,
Mudit

      From: "Tauzell, Dave" <dave.tauz...@surescripts.com>
 To: "users@kafka.apache.org" <users@kafka.apache.org>; Mudit Agarwal 
<mudit...@yahoo.com> 
 Sent: Friday, 28 October 2016 4:02 PM
 Subject: RE: Kafka Multi DataCenter HA/Failover
  
>> without any lag

You are going to have some lag at some point between datacenters.

I haven't used this, but from talking to them, they are working on (or have
created) a replacement for MirrorMaker using the Connect framework, which
will fix a number of MirrorMaker issues. I haven't talked to anybody about
Kafka failover.

-Dave

-Original Message-
From: Mudit Agarwal [mailto:mudit...@yahoo.com.INVALID]
Sent: Friday, October 28, 2016 9:38 AM
To: Users
Subject: Kafka Multi DataCenter HA/Failover

 Hi,
I learned that Confluent Enterprise provides multi-DC failover and HA,
synchronously and without any lag. I'm looking for further information and
more detailed documentation on this. I have gone through the white paper,
and it just talks about Replicator.
Any pointers to more information would be helpful.
Thanks,
Mudit


  


   

Re: Kafka Multi DataCenter HA/Failover

2016-10-28 Thread Mudit Agarwal
Hi Hans,
The latency between my two DCs is 150 ms. And yes, I'm looking for
synchronous replication. Is that possible?
Thanks,
Mudit

  From: Hans Jespersen <h...@confluent.io>
 To: users@kafka.apache.org; Mudit Agarwal <mudit...@yahoo.com> 
 Sent: Friday, 28 October 2016 4:34 PM
 Subject: Re: Kafka Multi DataCenter HA/Failover
   

What is the latency between the two datacenters? I ask because unless they
are very close, you probably don't want to do any form of synchronous
replication. The Confluent Replicator (coming very soon in Confluent
Enterprise 3.1) will do async replication of both messages and
configuration metadata between datacenters.
It's still up to you to monitor for what your app considers acceptable lag
between the two datacenters, but at least now that is possible using
timestamps and the new offsetsForTimes() capability added in 0.10.1.
Accurate timestamp-based offset lookup is necessary because the offset
numbers for a given message will not match in both datacenters.
-hans
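
As a rough CLI illustration of the timestamp-based lookup Hans describes
(the precise API is the Java consumer's offsetsForTimes(); brokers, topic,
and timestamp here are examples):

# Resolve the offsets closest to the same wall-clock time in each DC
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list dc1-broker:9092 --topic mytopic --time 1477665600000
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list dc2-broker:9092 --topic mytopic --time 1477665600000

Comparing the two outputs gives a per-partition view of how far the target
datacenter trails the source, even though the raw offsets differ.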

On Oct 28, 2016, at 8:08 AM, Mudit Agarwal <mudit...@yahoo.com.INVALID> wrote:
Thanks Dave.
Is there any way to achieve HA/failover in Kafka across two DCs?
Thanks,
Mudit

  From: "Tauzell, Dave" <dave.tauz...@surescripts.com>
 To: "users@kafka.apache.org" <users@kafka.apache.org>; Mudit Agarwal 
<mudit...@yahoo.com> 
 Sent: Friday, 28 October 2016 4:02 PM
 Subject: RE: Kafka Multi DataCenter HA/Failover



>> without any lag

You are going to have some lag at some point between datacenters.

I haven't used this, but from talking to them, they are working on (or have
created) a replacement for MirrorMaker using the Connect framework, which
will fix a number of MirrorMaker issues. I haven't talked to anybody about
Kafka failover.

-Dave

-Original Message-
From: Mudit Agarwal [mailto:mudit...@yahoo.com.INVALID]
Sent: Friday, October 28, 2016 9:38 AM
To: Users
Subject: Kafka Multi DataCenter HA/Failover

 Hi,
I learned that Confluent Enterprise provides multi-DC failover and HA,
synchronously and without any lag. I'm looking for further information and
more detailed documentation on this. I have gone through the white paper,
and it just talks about Replicator.
Any pointers to more information would be helpful.
Thanks,
Mudit






   

Re: Kafka Multi DataCenter HA/Failover

2016-10-28 Thread Mudit Agarwal
Thanks Dave.
Is there any way to achieve HA/failover in Kafka across two DCs?
Thanks,
Mudit

  From: "Tauzell, Dave" <dave.tauz...@surescripts.com>
 To: "users@kafka.apache.org" <users@kafka.apache.org>; Mudit Agarwal 
<mudit...@yahoo.com> 
 Sent: Friday, 28 October 2016 4:02 PM
 Subject: RE: Kafka Multi DataCenter HA/Failover
   
>> without any lag

You are going to have some lag at some point between datacenters.

I haven't used this, but from talking to them, they are working on (or have
created) a replacement for MirrorMaker using the Connect framework, which
will fix a number of MirrorMaker issues. I haven't talked to anybody about
Kafka failover.

-Dave

-----Original Message-
From: Mudit Agarwal [mailto:mudit...@yahoo.com.INVALID]
Sent: Friday, October 28, 2016 9:38 AM
To: Users
Subject: Kafka Multi DataCenter HA/Failover

 Hi,
I learned that Confluent Enterprise provides multi-DC failover and HA,
synchronously and without any lag. I'm looking for further information and
more detailed documentation on this. I have gone through the white paper,
and it just talks about Replicator.
Any pointers to more information would be helpful.
Thanks,
Mudit


   

Kafka Multi DataCenter HA/Failover

2016-10-28 Thread Mudit Agarwal
 Hi,
I learned that Confluent Enterprise provides multi-DC failover and HA,
synchronously and without any lag. I'm looking for further information and
more detailed documentation on this. I have gone through the white paper,
and it just talks about Replicator.
Any pointers to more information would be helpful.
Thanks,
Mudit

Kafka Mirrormaker in sync mode

2016-10-27 Thread Mudit Agarwal
Hi,
Can we run Kafka MirrorMaker in sync mode, where both the source producer
and the consumer run synchronously?
Thanks,
Mudit
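
MirrorMaker always hands data off through its own consumer and producer, so
the pipeline as a whole stays asynchronous, but the producer side can be
pushed toward synchronous, durable delivery. A sketch of the relevant
0.9/0.10-era settings (file names and the target broker are examples):

cat > mm-producer.properties <<'EOF'
bootstrap.servers=target-broker:9092
# Wait for all in-sync replicas to acknowledge each send
acks=all
# Preserve ordering by allowing only one in-flight request
max.in.flight.requests.per.connection=1
EOF

bin/kafka-mirror-maker.sh --consumer.config mm-consumer.properties \
  --producer.config mm-producer.properties --whitelist 'mytopic'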

Kafka multitenancy

2016-10-26 Thread Mudit Agarwal
Hi,
How can we achieve multi-tenancy in Kafka efficiently?
Thanks,
Mudit
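
One common approach is per-tenant client quotas plus per-tenant topic ACLs
(both available from 0.9 on); a sketch, with tenant names, rates, and
addresses as examples:

# Throttle a tenant's producer/consumer byte rates
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152' \
  --entity-type clients --entity-name tenant-a

# Restrict the tenant to its own topic namespace
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:tenant-a --operation All --topic tenant-a.orders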

Re: NetFlow metrics to Kafka

2016-07-12 Thread Mudit Agarwal
Hi Ozeray,
Please try specifying the Kafka broker IP and port in the Cisco collector
config where you will be sending your flows.
Thanks,
Mudit

  From: OZERAY MATHIEU 
 To: "users@kafka.apache.org"  
 Sent: Tuesday, 12 July 2016 8:20 PM
 Subject: NetFlow metrics to Kafka
   
Hello,


I have a question about Kafka.

Actually, I produce NetFlow metrics on my Cisco router. I want to know
whether it's possible to send NetFlow metrics to a Kafka broker and then
forward them to a Logstash server?

Thanks for your answer.

Have a nice day.


Mathieu OZERAY

   

Re: kafka unable to send records - scala / spark

2016-07-12 Thread Mudit Agarwal
Sumit, you need to provide the argument values as well; you are passing
empty strings. For example:
props.put("bootstrap.servers", "localhost:9092")
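
A quick way to sanity-check broker reachability before wiring the values
into code; the broker address and topic are examples:

echo "hello" | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test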


  From: Sumit Khanna 
 To: users@kafka.apache.org 
 Sent: Tuesday, 12 July 2016 4:49 PM
 Subject: kafka unable to send records - scala / spark
   
Hello guys,

I have tried a lot, from kafka.javaapi etc. to Producer to KafkaProducer,
and I am working with 0.9.0.0.
This is the error I am getting:

org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 62 ms.
    at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:437)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:352)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:248)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:352)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:248)


Also, here are the Kafka props:

var props = new Properties()
props.put("bootstrap.servers", "")
props.put("metadata.broker.list", "")
props.put("group.id", "")
props.put("producer.type", "")
props.put("key.serializer", "")
props.put("value.serializer", "")
props.put("request.required.acks", "1")
props.put("auto.create.topics.enable", "true")
props.put("block.on.buffer.full", "false")

val producer = new KafkaProducer[String,String](props)
partitionOfRecords.foreach {
  case x: String => {
    val message = new ProducerRecord[String, String]("[TOPIC#1] " + dbname + "_" + dbtable, dbname, x)
    producer.send(message).get()
  }
}
Please help.

Thanks in advance.

Best,
Sumit Khanna


  

Re: Pros and cons of dockerizing kafka brokers?

2016-07-08 Thread Mudit Agarwal
I am running a 5-broker Kafka cluster on Docker, using the Mesos cluster
manager and the Marathon framework.


Sent from Yahoo Mail for iPhone


On Saturday, July 9, 2016, 00:00, Christian  wrote:

We're using AWS ECS for our Kafka cluster of six nodes. We did some
performance testing on a three-node cluster, and the results were as good
as the LinkedIn published results on bare-metal machines.

We are using EBS st1 drives. The bottleneck is the network to the EBS
volumes. So, for about 25% more cost, we doubled our VMs, using twice as
many half-sized EBS volumes.

-Christian

On Fri, Jul 8, 2016 at 12:07 PM Krish  wrote:

> Thanks, Christian.
> I am currently reading about kafka-on-mesos.
> I will hack something this weekend to see if I can bring up a Kafka
> scheduler on Mesos using dockerized brokers.
>
>
>
> --
> κρισhναν
>
> On Thu, Jul 7, 2016 at 7:29 PM, Christian Posta  >
> wrote:
>
> > One thing I can think of is that Kafka likes lots of OS page cache.
> > Dockerizing from the standpoint of packaging configs is a good idea;
> > just make sure that if you're running many brokers together on the same
> > host, they've got enough resources (CPU/mem) so they don't starve each
> > other.
> >
> > On Thu, Jul 7, 2016 at 2:30 AM, Krish  wrote:
> >
> >> Hi,
> >> I am currently testing a custom Docker volume driver plugin for AWS
> >> EFS/EBS access and mounting. So, running the Kafka broker inside a
> >> container will ease a lot of configuration issues w.r.t. storage for me.
> >>
> >> Are there any pros and cons of dockerizing the Kafka broker?
> >> Off the top of my head, since Kafka forms the base of our setup, I can
> >> think of making it use the host networking stack and increasing ulimits
> >> for the container.
> >> I would like to know if and when Kafka becomes greedy and cannibalizes
> >> resources; I can also ensure that it runs on a dedicated machine.
> >>
> >> Thanks.
> >>
> >> Best,
> >> Krish
> >>
> >
> >
> >
> > --
> > *Christian Posta*
> > twitter: @christianposta
> > http://www.christianposta.com/blog
> > http://fabric8.io
> >
> >
>
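
A sketch combining the suggestions above (host networking, raised ulimits,
explicit resource limits); the image name and values are placeholders:

docker run -d --name kafka-broker-1 \
  --net=host \
  --ulimit nofile=128000:128000 \
  --memory 8g \
  your-kafka-image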
 



Re: Delete Message From topic

2016-06-14 Thread Mudit Agarwal
Thanks Tom!

  From: Todd Palino 
 To: "users@kafka.apache.org"  
 Sent: Tuesday, 14 June 2016 10:01 PM
 Subject: Re: Delete Message From topic
   
Well, if you have a log-compacted topic, you can issue a tombstone message
(a key with a null value) to delete it. Outside of that, what Tom said
applies.

-Todd
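
A sketch of producing such a tombstone from the shell, assuming kafkacat is
available (its -Z flag sends an empty value as NULL, which compaction
treats as a delete marker); broker, topic, and key are examples:

echo "mykey:" | kafkacat -P -b localhost:9092 -t my-compacted-topic -K: -Z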


On Tue, Jun 14, 2016 at 9:13 PM, Mudit Kumar  wrote:

> Thanks Tom!
>
>
>
>
> On 6/14/16, 8:55 PM, "Tom Crayford"  wrote:
>
> >Hi Mudit,
> >
> >Sorry this is not possible. The only deletion Kafka offers is retention or
> >whole topic deletion.
> >
> >Thanks
> >
> >Tom Crayford
> >Heroku Kafka
> >
> >On Tuesday, 14 June 2016, Mudit Kumar  wrote:
> >
> >> Hey,
> >>
> >> How can I delete particular messages from a particular topic? Is that
> >> possible?
> >>
> >> Thanks,
> >> Mudit
> >>
> >>
>
>


-- 
*Todd Palino*
Staff Site Reliability Engineer
Data Infrastructure Streaming



linkedin.com/in/toddpalino


   

Re: upgrading Kafka

2016-05-26 Thread Mudit Agarwal
Yes, you can use constraints and the same volumes. That can be trusted.

  From: Radoslaw Gruchalski 
 To: "Karnam, Kiran" ; users@kafka.apache.org 
 Sent: Thursday, 26 May 2016 2:31 AM
 Subject: Re: upgrading Kafka
   
Kiran,

If you're using Docker, you can run Docker on Mesos, use constraints to
force a relaunched Kafka broker to always come back on the same agent, and
use Docker volumes to persist the data.
Not sure if https://github.com/mesos/kafka provides these capabilities.
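
A sketch of the volume part, with the image name and mount path as
placeholders:

# Create a named volume once, then always mount it into the relaunched broker
docker volume create kafka1-data
docker run -d --name kafka-1 -v kafka1-data:/var/lib/kafka your-kafka-image

Combined with a Marathon hostname constraint, the broker comes back on the
same agent with its log directories intact.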
–  
Best regards,

Radek Gruchalski

ra...@gruchalski.com
de.linkedin.com/in/radgruchalski

Confidentiality:
This communication is intended for the above-named person and may be 
confidential and/or legally privileged.
If it has come to you in error you must take no action based on it, nor must 
you copy or show it to anyone; please delete/destroy and inform the sender 
immediately.

On May 25, 2016 at 10:58:06 PM, Karnam, Kiran (kkar...@ea.com) wrote:

Hi All,  

We are using Docker containers to deploy Kafka, and we are planning to use
Mesos for the deployment and maintenance of the containers. Is there a way,
during an upgrade, to persist the data so that it is available to the
upgraded container?

We don't want the clusters to go into chaos, with data replicating around
the network, because a node that was upgraded suddenly has no data.

Thanks,  
Kiran  

  

Re: Get topic level detail from new consumer group command

2016-05-05 Thread Mudit Agarwal
You need to run the topic describe command to get the topic details:
./kafka-topics.sh --zookeeper ":2181" --describe --topic 
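
If the goal is only the rows for one topic in the group description, a
simple grep over the consumer-groups output also works; a sketch reusing
the command from the original question (the topic name is an example):

bin/kafka-consumer-groups.sh --group batchprocessord_zero \
  --bootstrap-server kafka-1-evilcorp.com:9092 --new-consumer --describe \
  | grep 'mytopic'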

  From: ravi singh 
 To: users@kafka.apache.org; d...@kafka.apache.org 
 Sent: Friday, 6 May 2016 1:07 AM
 Subject: Get topic level detail from new consumer group command
   
 ./bin/kafka-consumer-groups.sh --group batchprocessord_zero
 --bootstrap-server kafka-1-evilcorp.com:9092 --new-consumer --describe
Running the above ConsumerGroupCommand will describe the consumer for all
the topics it's listening to.

Is there any workaround to get *only topic level detail*?

​
-- 
*Regards,*
*Ravi*

  

Re: Adding a broker

2016-05-03 Thread Mudit Agarwal
It will store new messages only; however, I think you can migrate your old
replicas onto the new broker.
Thanks,
Mudit
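
The migration itself can be done with the partition reassignment tool; a
sketch, where the topic list, ZooKeeper address, and broker ids are
examples:

# Propose an assignment that spreads the listed topics over brokers 1-4
cat > topics.json <<'EOF'
{"version": 1, "topics": [{"topic": "mytopic"}]}
EOF
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --topics-to-move-json-file topics.json --broker-list "1,2,3,4" --generate

# Save the proposed assignment as reassign.json, then apply and verify it
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassign.json --execute
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassign.json --verify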

  From: Jens Rantil 
 To: "users@kafka.apache.org"  
 Sent: Tuesday, 3 May 2016 6:21 PM
 Subject: Adding a broker
   
Hi,

When I add a replicated broker to a cluster, will it first stream
historical logs from the master? Or will it simply start storing new
messages from producers?

Thanks,
Jens
-- 

Jens Rantil
Backend Developer @ Tink

Tink AB, Wallingatan 5, 111 60 Stockholm, Sweden
For urgent matters you can reach me at +46-708-84 18 32.


   

Best Guide/link for Kafka Ops work

2016-04-21 Thread Mudit Agarwal
Hi,
Any recommendations for an online guide/link on managing/administering a
Kafka cluster?
Thanks,
Mudit

Re: Issue with Kafka Broker

2016-04-12 Thread Mudit Agarwal
Can someone help with the issue below?



> On Apr 11, 2016, at 12:27 AM, Mudit Agarwal <mudit...@yahoo.com.INVALID> 
> wrote:
> 
> Hi Guys,
> I have a 3-node Kafka setup, version 0.9.0.1. I bounced the kafka-server
> and zookeeper services on the broker 3 node. Since then, I am seeing the
> below messages in the logs continuously. Any help will be highly
> appreciated.
> 
> [2016-04-11 05:24:56,074] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [product_differential,0] hasn't been created. (kafka.server.ReplicaManager)
> [2016-04-11 05:24:56,074] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [orderservice.production,0] hasn't been created. (kafka.server.ReplicaManager)
> [2016-04-11 05:24:56,574] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [product_differential,0] hasn't been created. (kafka.server.ReplicaManager)
> [2016-04-11 05:24:56,574] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [orderservice.production,0] hasn't been created. (kafka.server.ReplicaManager)
> [2016-04-11 05:24:57,075] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [product_differential,0] hasn't been created. (kafka.server.ReplicaManager)
> [2016-04-11 05:24:57,075] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [orderservice.production,0] hasn't been created. (kafka.server.ReplicaManager)
> [2016-04-11 05:24:57,575] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [product_differential,0] hasn't been created. (kafka.server.ReplicaManager)
> [2016-04-11 05:24:57,575] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [orderservice.production,0] hasn't been created. (kafka.server.ReplicaManager)
> [2016-04-11 05:24:58,076] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [product_differential,0] hasn't been created. (kafka.server.ReplicaManager)
> [2016-04-11 05:24:58,076] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [orderservice.production,0] hasn't been created. (kafka.server.ReplicaManager)

  

Issue with Kafka Broker

2016-04-10 Thread Mudit Agarwal
Hi Guys,
I have a 3-node Kafka setup, version 0.9.0.1. I bounced the kafka-server
and zookeeper services on the broker 3 node. Since then, I am seeing the
below messages in the logs continuously. Any help will be highly
appreciated.

[2016-04-11 05:24:56,074] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [product_differential,0] hasn't been created. (kafka.server.ReplicaManager)
[2016-04-11 05:24:56,074] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [orderservice.production,0] hasn't been created. (kafka.server.ReplicaManager)
[2016-04-11 05:24:56,574] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [product_differential,0] hasn't been created. (kafka.server.ReplicaManager)
[2016-04-11 05:24:56,574] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [orderservice.production,0] hasn't been created. (kafka.server.ReplicaManager)
[2016-04-11 05:24:57,075] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [product_differential,0] hasn't been created. (kafka.server.ReplicaManager)
[2016-04-11 05:24:57,075] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [orderservice.production,0] hasn't been created. (kafka.server.ReplicaManager)
[2016-04-11 05:24:57,575] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [product_differential,0] hasn't been created. (kafka.server.ReplicaManager)
[2016-04-11 05:24:57,575] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [orderservice.production,0] hasn't been created. (kafka.server.ReplicaManager)
[2016-04-11 05:24:58,076] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [product_differential,0] hasn't been created. (kafka.server.ReplicaManager)
[2016-04-11 05:24:58,076] WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [orderservice.production,0] hasn't been created. (kafka.server.ReplicaManager)

kafka issue

2016-04-06 Thread Mudit Agarwal
Hi,
I am seeing the below exceptions on the broker id 3 node in my three-node
Kafka cluster. It also looks like this broker has long been
under-replicated and out of sync.
[2016-04-06 04:00:00,720] ERROR [Replica Manager on Broker 3]: Error processing fetch operation on partition [subscribed_product_logs,2] offset 586475 (kafka.server.ReplicaManager)
java.lang.IllegalStateException: Failed to read complete buffer for targetOffset 586475 startPosition 824825447 in /kafka/kafka-logs//00494997.log
    at kafka.log.FileMessageSet.searchFor(FileMessageSet.scala:133)
    at kafka.log.LogSegment.translateOffset(LogSegment.scala:105)
    at kafka.log.LogSegment.read(LogSegment.scala:126)
    at kafka.log.Log.read(Log.scala:506)
    at kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:536)
    at kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:507)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
    at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
    at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
    at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:507)
    at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:462)
    at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:431)
    at kafka.server.KafkaApis.handle(KafkaApis.scala:69)
    at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
    at java.lang.Thread.run(Thread.java:745)
Any help is appreciated. The version is 0.9.0.1.
Thanks,
Mudit


