Hi All,

I am using Kafka 0.8.

My producer configuration is as follows:

    kafka8.bytearray.producer.type=sync

    kafka8.producer.batch.num.messages=100

    kafka8.producer.topic.metadata.refresh.interval.ms=600000

    kafka8.producer.retry.backoff.ms=100

    kafka8.producer.message.send.max.retries=3
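
Roughly, the producer is wired up like this; a minimal sketch using the 0.8 Java producer API, where the broker address, topic name, and payload are placeholders, not my real values (I have also spelled out request.required.acks, which defaults to 0 in 0.8):

    import java.util.Properties;

    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class SyncProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("metadata.broker.list", "localhost:9092"); // placeholder broker
            props.put("serializer.class", "kafka.serializer.DefaultEncoder");
            props.put("producer.type", "sync");
            props.put("message.send.max.retries", "3");
            props.put("retry.backoff.ms", "100");
            // request.required.acks defaults to 0 in 0.8 (no broker ack);
            // 1 waits for the partition leader to record the message.
            props.put("request.required.acks", "1");

            Producer<String, byte[]> producer =
                    new Producer<String, byte[]>(new ProducerConfig(props));
            producer.send(new KeyedMessage<String, byte[]>(
                    "myTopic", "payload".getBytes())); // placeholder topic
            producer.close();
        }
    }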

My Kafka server.properties settings are:

    # The number of messages to accept before forcing a flush of data to disk
    log.flush.interval.messages=500

    # The maximum amount of time a message can sit in a log before we
    # force a flush
    log.flush.interval.ms=100

    # Per-topic overrides for log.flush.interval.ms
    #log.flush.intervals.ms.per.topic=topic1:1000, topic2:3000


I specified the sync property in the producer.properties file:

    # specifies whether the messages are sent asynchronously (async)
    # or synchronously (sync)
    producer.type=sync



My consumer runs in a separate jar. Its configuration is:

    zookeeper.connect=IP
    group.id=consumerGroup
    fetch.message.max.bytes=1000000000
    zookeeper.session.timeout.ms=60000
    auto.offset.reset=smallest
    zookeeper.sync.time.ms=200
    auto.commit.enable=false
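
Since auto.commit.enable is false, the consumer has to commit offsets itself. A minimal sketch of my consuming loop using the 0.8 high-level consumer (the ZooKeeper address and topic name are placeholders):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class ManualCommitConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // placeholder
            props.put("group.id", "consumerGroup");
            props.put("auto.offset.reset", "smallest");
            props.put("auto.commit.enable", "false");

            ConsumerConnector consumer =
                    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
            topicCountMap.put("myTopic", 1); // placeholder topic, one stream
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                    consumer.createMessageStreams(topicCountMap);

            ConsumerIterator<byte[], byte[]> it =
                    streams.get("myTopic").get(0).iterator();
            while (it.hasNext()) {
                byte[] message = it.next().message();
                // ... process(message) ...
                // With auto-commit off, the offset only advances here.
                consumer.commitOffsets();
            }
        }
    }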

If my data generator and consumer are running in parallel and Kafka is
suddenly restarted, fewer records are consumed than expected.

For example, if I set the number of records to produce to 3000, at some
point the producer throws an exception. My consumer runs in parallel;
meanwhile, if I restart Kafka, the consumer is only able to get
approximately 2400 records, so roughly 600 records are missing even though
I am running Kafka in sync (synchronous) mode.
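
To see whether the missing records correspond to producer-side failures, one check I can run is to count sends that exhaust message.send.max.retries, which in sync mode surface as kafka.common.FailedToSendMessageException. This is a rough sketch: producer is the sync producer from the sketch above, and record(i) is an illustrative helper returning the i-th payload as a byte[].

    int failed = 0;
    for (int i = 0; i < 3000; i++) {
        try {
            producer.send(new KeyedMessage<String, byte[]>("myTopic", record(i)));
        } catch (kafka.common.FailedToSendMessageException e) {
            failed++; // gave up after message.send.max.retries attempts
        }
    }
    System.out.println("failed sends: " + failed);

If the count of failed sends roughly matches the gap, the records are being dropped on the produce side during the restart rather than by the consumer.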

I am not able to figure out why this data loss is happening. If you have
any idea regarding this, please help me understand what I am missing here.

Regards,

Nishant Kumar
