Named Pipe and Kafka Producer

2017-07-20 Thread Milind Vaidya
Hi, I am using a named pipe, reading from it in Java, and sending the events to a Kafka cluster. The stdout of a process is `tee`d to the pipe. But I am observing data loss. I am yet to debug this issue. I was wondering if anybody has already interfaced a named pipe for sending data to Kafka, and what are the…
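
A minimal sketch of such a pipe reader, with the FIFO path, topic name, and broker address as placeholder assumptions. It surfaces the two usual loss points: readLine() hits EOF whenever the last writer closes the FIFO (and anything written while no reader is attached is simply gone), and records can still be buffered in the producer at exit:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PipeToKafka {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder address
        props.put("acks", "all"); // wait for the broker before counting a record as sent
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             BufferedReader pipe = new BufferedReader(new FileReader("/tmp/events.pipe"))) {
            String line;
            // readLine() returns null when the last writer closes the FIFO;
            // lines written while no reader holds the pipe open are lost,
            // which is one classic source of "data loss" with named pipes.
            while ((line = pipe.readLine()) != null) {
                producer.send(new ProducerRecord<>("events", line), (meta, e) -> {
                    if (e != null) {
                        System.err.println("send failed: " + e); // surface drops instead of losing them silently
                    }
                });
            }
            producer.flush(); // drain the accumulator before exiting (close() also does this)
        }
    }
}
```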

Re: Kafka 0.10.1 cluster using 100% disk usage (reads)

2017-06-12 Thread Milind Vaidya
This sounds exactly like what I experienced in a similar scenario. Can you please take a look at the file-system timestamps of the actual log files on one of the broker hosts? For me, when I restarted the new brokers with version 0.10.0, they changed to the current ts. Meaning, if I have set 48 hr…
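
The check can be as quick as `ls -l` on a partition directory; a Java equivalent, with the log directory path as a placeholder:

```java
import java.io.File;
import java.text.SimpleDateFormat;
import java.util.Date;

public class SegmentTimestamps {
    public static void main(String[] args) {
        // Hypothetical partition directory; the layout is
        // <log.dirs>/<topic>-<partition>/<base-offset>.log
        File dir = new File(args.length > 0 ? args[0] : "/var/kafka-logs/mytopic-0");
        File[] segments = dir.listFiles((d, name) -> name.endsWith(".log"));
        if (segments == null) {
            System.err.println("not a directory: " + dir);
            return;
        }
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        for (File f : segments) {
            // If every mtime reads as the broker restart time rather than the
            // write time, time-based retention effectively starts over.
            System.out.println(fmt.format(new Date(f.lastModified())) + "  " + f.getName());
        }
    }
}
```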

Re: 0.10.0.0 cluster : segments getting latest ts

2017-05-30 Thread Milind Vaidya
> …During the migration phase, if the first message in a segment does not have a timestamp, the log rolling will still be based on the (current time - create time of the segment). -hans (Hans Jespersen, Principal Systems Engineer, Confluent Inc., h...@…

Re: 0.10.0.0 cluster : segments getting latest ts

2017-05-25 Thread Milind Vaidya
> …message.time.difference.ms appropriately together with log.roll.ms to avoid frequent log segment rollout. During the migration phase, if the first message in a segment does not have a timestamp, the log rolling will still be based on the (current…
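
For reference, a hedged sketch of the broker settings this advice appears to refer to; "message.time.difference.ms" in the quote presumably means the 0.10 broker setting log.message.timestamp.difference.max.ms, and the values below are purely illustrative:

```properties
# Roll a new segment after at most this long, regardless of size.
log.roll.ms=604800000
# Bound on how far a message's timestamp may differ from the broker's clock;
# the advice above is to tune it together with log.roll.ms so that stray
# timestamps do not force frequent segment rolls.
log.message.timestamp.difference.max.ms=604800000
```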

0.10.0.0 cluster : segments getting latest ts

2017-05-25 Thread Milind Vaidya
I have a 6-broker cluster that I upgraded from 0.8.1.1 to 0.10.0.0. The producer-to-cluster-to-consumer (Apache Storm) upgrade went smoothly, without any errors. Initially the protocol was kept at 0.8, and after the clients were upgraded it was promoted to 0.10. Out of the 6 brokers, 3 are honouring log.retention…

Disks full after upgrading kafka version : 0.8.1.1 to 0.10.0.0

2017-05-24 Thread Milind Vaidya
Within 24 hours the brokers started getting killed due to full disks. The retention period is 48 hrs, and with 0.8 the disks used to fill to ~65%. What is going wrong here? This is a production system. For the time being, I am reducing the retention to 24 hrs.

Re: Question regarding buffer.memory, max.request.size and send.buffer.bytes

2017-05-23 Thread Milind Vaidya
> …umentation/kafka/latest/topics/kafka_performance.html. On 23 May 2017 at 20:09, Milind Vaidya wrote: > I have set the producer properties as follows (0.10.0.0): "linger.ms": "500"…

Question regarding buffer.memory, max.request.size and send.buffer.bytes

2017-05-23 Thread Milind Vaidya
I have set the producer properties as follows (0.10.0.0): "linger.ms": "500", "batch.size": "1000", "buffer.memory": "1", "send.buffer.bytes": "512000", and the default max.request.size = 1048576. If records are sent faster than…
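
For context, in the 0.10.0 producer these settings interact as follows: up to batch.size bytes per partition are held for up to linger.ms before sending, all batches share the buffer.memory pool, and when that pool is exhausted send() blocks for up to max.block.ms and then throws a TimeoutException. A sketch with placeholder broker and topic names (buffer.memory is left at a realistic value rather than the "1" quoted above):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TunedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("linger.ms", "500");            // wait up to 500 ms to fill a batch
        props.put("batch.size", "1000");          // per-partition batch target, in bytes
        props.put("buffer.memory", "33554432");   // total accumulator memory (32 MB default)
        props.put("send.buffer.bytes", "512000"); // TCP send buffer size
        props.put("max.block.ms", "5000");        // how long send() may block when the pool is full
        // max.request.size stays at its default of 1048576 bytes; any single
        // serialized record larger than that is rejected outright.

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100_000; i++) {
                // Blocks once buffer.memory is exhausted; throws TimeoutException
                // if space does not free up within max.block.ms.
                producer.send(new ProducerRecord<>("events", "message-" + i));
            }
        }
    }
}
```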

Advantages of 0.10.0 protocol over 0.8.0

2017-05-14 Thread Milind Vaidya
Hi, we are using 0.8.1.1 for the producer and the broker cluster, as well as for the Storm integration. We are planning to upgrade to 0.10.0, the main reason being the producer API supporting flush(). That said, we have tested it in QA, and it looks like as long as the protocol is not bumped with the newer dependencies, roll ba…

Setting API version while upgrading

2017-05-04 Thread Milind Vaidya
The documentation says "Upgrading from 0.8.x or 0.9.x to 0.10.0.0". I am upgrading from kafka_2.9.2-0.8.1.1, so which one is correct? A. 0.8.1.1: inter.broker.protocol.version=0.8.1.1, log.message.format.version=0.8.1.1. B. 0.8.1: inter.broker.protocol.version=…

Is rollback supported while upgrading?

2017-05-04 Thread Milind Vaidya
Upgrading from kafka_2.9.2-0.8.1.1 to kafka_2.11-0.10.0.0. I am assuming the new-version Kafka will look in the same location for log files as the older one. As per the documentation, the following properties will be set on the new broker: inter.broker.protocol.version=0.8.1.1, log.message.format.versi…
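
As per the upgrade notes, the procedure is two rolling restarts, and rollback to the old binaries is only possible while the protocol version is still pinned to the old value. A hedged sketch of the broker config (whether the exact string should be "0.8.1" or "0.8.1.1" is the subject of the adjacent "Setting API version while upgrading" thread):

```properties
# Phase 1: run the 0.10.0.0 binaries but keep speaking the old protocol and
# writing the old message format. Rolling back to the old jars is still
# possible at this point, since nothing on disk is in the new format yet.
inter.broker.protocol.version=0.8.1
log.message.format.version=0.8.1

# Phase 2, after every broker and client is upgraded: bump both values and
# do another rolling restart. From then on, rollback is no longer supported.
# inter.broker.protocol.version=0.10.0
# log.message.format.version=0.10.0
```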

Upgrading from kafka_2.9.2-0.8.1.1 to kafka_2.11-0.10.0.0

2017-05-02 Thread Milind Vaidya
Hi, we are required to upgrade this in our production system. We have observed some data loss on the producer side and want to try out the new producer with the flush() API. The procedure looks like the following, as per the documents: 1. Upgrade the cluster (brokers) with a rolling upgrade. 2. Upgrade the clients. I…

Java stdin producer losing logs

2017-04-14 Thread Milind Vaidya
Hi. Background: I have the following setup: Apache server >> Apache Kafka producer >> Apache Kafka cluster >> Apache Storm. In the normal scenario, the front-end boxes run the Apache server and populate the log files. The requirement is to read every log and send it to the Kafka cluster. The Java producer r…
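
One concrete failure mode for a stdin-reading producer is a SIGTERM (deploy, log rotation, pipeline restart) while records are still sitting in the accumulator. A minimal sketch, with hypothetical broker address and topic name, that flushes on shutdown and logs failed sends instead of dropping them silently:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class StdinProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("acks", "all");  // don't count a record as sent until the broker has it
        props.put("retries", "3"); // retry transient broker failures instead of dropping
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        final KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // close() flushes buffered records; without this, a SIGTERM during a
        // log burst silently drops whatever is still in the accumulator.
        Runtime.getRuntime().addShutdownHook(new Thread(producer::close));

        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            producer.send(new ProducerRecord<>("access_logs", line),
                    (meta, e) -> { if (e != null) System.err.println("lost record: " + e); });
        }
        producer.close();
    }
}
```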

Version compatibility and flush() in Kafka Producer

2017-04-14 Thread Milind Vaidya
Is the Kafka 0.9.0 producer compatible with 0.8.* brokers? I could not find a conclusive answer, so I tried it out myself. That setup works, in the sense that messages do come through on the consumer side. But with the new producer I was trying to use the flush() call to force sending of messages from the produ…
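
On the flush() part: in the 0.9+ producer, flush() blocks until every previously sent record has completed, i.e. has either been acknowledged or has failed and been reported through its callback. (On the compatibility part: before KIP-97 in 0.10.2, clients newer than the broker were generally unsupported, so a 0.9 producer against 0.8 brokers working at all is not something to rely on.) A minimal sketch with placeholder broker and topic names:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FlushDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 1000; i++) {
                producer.send(new ProducerRecord<>("events", "msg-" + i));
            }
            // Blocks until every record sent above has completed: either
            // acknowledged by the broker or failed (reported via callbacks).
            producer.flush();
        }
    }
}
```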

Failure scenarios for a java kafka producer reading from stdin

2017-04-13 Thread Milind Vaidya
Hi. Background: I have the following setup: Apache server >> Apache Kafka producer >> Apache Kafka cluster >> Apache Storm. In the normal scenario, the front-end boxes run the Apache server and populate the log files. The requirement is to read every log and send it to the Kafka cluster. The Java producer r…

Java stdin producer losing logs

2017-04-12 Thread Milind Vaidya
Hi. Background: I have the following setup: Apache server >> Apache Kafka producer >> Apache Kafka cluster >> Apache Storm. In the normal scenario, the front-end boxes run the Apache server and populate the log files. The requirement is to read every log and send it to the Kafka cluster. The Java producer r…

Re: Fast way to search data in Kafka

2017-03-23 Thread Milind Vaidya
> …S=','. -- Marko Bonaći, Sematext (http://sematext.com/), Solr & Elasticsearch Support. On Thu, Mar 23, 2017 at 9:28 PM, Mili…

Re: Fast way to search data in Kafka

2017-03-23 Thread Milind Vaidya
…wrote: > Try Presto (https://prestodb.io). It may solve your problem. On Sat, 4 Mar 2017, 03:18, Milind Vaidya wrote: > I have a 6-broker Kafka setup. I have a retention period of 48 hrs.…

Fast way to search data in Kafka

2017-03-03 Thread Milind Vaidya
I have a 6-broker Kafka setup with a retention period of 48 hrs. To debug whether certain data has reached Kafka or not, I am using the command-line consumer and piping to grep. But it takes a huge amount of time and may not succeed either. Is there another way to search for something in Kafka without…
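
Without an external index, the options come down to scanning, either via the console consumer piped to grep or the same scan done programmatically. A sketch of the latter using the 0.10 consumer (broker address, topic, group id, and search term are all placeholders); it can at least stop at the first hit:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaGrep {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("group.id", "kafka-grep");        // throwaway group for the scan
        props.put("auto.offset.reset", "earliest"); // start from the oldest retained data
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) { // runs until a match; bound it with end offsets in practice
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> r : records) {
                    if (r.value().contains("needle")) { // the search term
                        System.out.printf("partition=%d offset=%d %s%n",
                                r.partition(), r.offset(), r.value());
                        return; // stop at the first hit
                    }
                }
            }
        }
    }
}
```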

Re: Consumer Group, rebalancing and partition uniqueness

2016-06-29 Thread Milind Vaidya
…ons. > I hope that this gives you an overview of what happens and somehow answers your questions. Regards, florin. On Thu, Jun 30, 2016 at 12:36 AM, Milind Vaidya wrote: > Hi, Background: I am using a Java…

Consumer Group, rebalancing and partition uniqueness

2016-06-29 Thread Milind Vaidya
Hi. Background: I am using a Java-based multithreaded Kafka consumer. Two instances of this consumer are running on two different machines, i.e. one consumer process per box, and they belong to the same consumer group. Internally, each process has two threads. Both consumer processes consume from…
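
For reference, with the 0.8 high-level consumer each KafkaStream ends up owning a disjoint subset of the topic's partitions across the whole group after a rebalance, so two processes with two streams each split the topic four ways (provided the topic has at least four partitions; any streams beyond the partition count sit idle). A sketch of the two-streams-per-process setup, with placeholder ZooKeeper address, group id, and topic:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class TwoThreadConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181"); // placeholder
        props.put("group.id", "my-group");          // same group id on both boxes

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Ask for 2 streams for the topic; after a rebalance, each stream in
        // the group owns a disjoint subset of the topic's partitions.
        Map<String, Integer> topicCount = new HashMap<>();
        topicCount.put("events", 2); // placeholder topic
        List<KafkaStream<byte[], byte[]>> streams =
                connector.createMessageStreams(topicCount).get("events");

        for (final KafkaStream<byte[], byte[]> stream : streams) {
            new Thread(() -> {
                ConsumerIterator<byte[], byte[]> it = stream.iterator();
                while (it.hasNext()) { // blocks until the next message arrives
                    MessageAndMetadata<byte[], byte[]> msg = it.next();
                    System.out.printf("partition=%d offset=%d%n",
                            msg.partition(), msg.offset());
                }
            }).start();
        }
    }
}
```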