Re: messages lost

2015-01-06 Thread Sa Li
Hi, experts

Again, we are still losing data: we send 5000 records but find only 4500
records on the brokers. We did set required.acks=-1 to make sure all brokers
ack, but that only adds latency; it does not cure the data loss.


thanks


-- 

Alec Li


Re: messages lost

2015-01-06 Thread Joe Stein
You should never store your Kafka log files in /tmp; please change that.
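A minimal example of the fix (the path below is illustrative; any persistent disk works):

```properties
# server.properties: keep Kafka data out of /tmp, which tmp cleaners
# or a reboot can wipe, silently deleting log segments.
log.dirs=/var/kafka-logs-1
```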

acks=-1 is what you should be using if you want to guarantee messages are
saved. You should not be seeing high latencies with it (unless a few
milliseconds is high for you).

Are you using the sync or async producer? What version of Kafka? How are you
counting the data in the topic? How are you verifying that each message you
sent was successfully acked? How are you counting from the topic, and have
you verified the counts summed from each partition?

Can you share some sample code that reproduces this issue?

You can try counting the messages in each partition using
https://github.com/edenhill/kafkacat and piping to wc -l; it makes for a
nice, simple sanity check of where the problem might be.
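For instance, the per-partition counting idea can be sketched in Python. The counts below are invented numbers standing in for the output of a kafkacat count (e.g. `kafkacat -C -p <n> ... | wc -l`) run once per partition:

```python
# Hypothetical per-partition counts for an 8-partition topic (numbers
# invented for illustration); in practice, collect these with kafkacat.
partition_counts = {0: 12500, 1: 12500, 2: 12500, 3: 12073,
                    4: 12500, 5: 12500, 6: 12500, 7: 12500}
sent = 100_000  # how many records the producer believes it sent

# Sum the partitions and compare against what was sent; a nonzero
# difference localizes the loss to the produce path or a partition.
received = sum(partition_counts.values())
missing = sent - received
print(f"received={received}, missing={missing}")
```

If one partition's count is short while the others match, that points at a specific leader/replica rather than the producer.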

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
/




Re: messages lost

2015-01-06 Thread Mayuresh Gharat
Try calling .get() on the future returned by the new producer. It should
guarantee that the message has made it to Kafka.
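The pattern can be mimicked with stdlib futures. This is only an analogy: `fake_send` is a stand-in for the Kafka producer's `send()`, which likewise returns a future whose `.get()` (Java) blocks until the broker acks or re-raises the send error:

```python
from concurrent.futures import ThreadPoolExecutor

def fake_send(record):
    # Stand-in for producer.send(): in Kafka, a failed send surfaces its
    # exception only when the future is inspected; ignore the future and
    # the failure is silent.
    return record

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(fake_send, i) for i in range(5000)]
    # .result() plays the role of the Java future's .get(): block until
    # each send is confirmed (or re-raise its exception).
    acked = [f.result() for f in futures]

print(len(acked))
```

A producer that submits sends and exits, or never checks the futures, can drop records without any visible error; waiting on every future is what turns "sent" into "acked".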

Thanks,

Mayuresh





-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: messages lost

2015-01-05 Thread Xiaoyu Wang
@Sa,

required.acks is a producer-side configuration. Setting it to -1 requires an
ack from all (in-sync) replicas.
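In producer configuration terms (the property name depends on which producer you use):

```properties
# Old (Scala) producer:
request.required.acks=-1

# New (Java) producer, the equivalent setting:
acks=all
```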




Re: messages lost

2015-01-02 Thread Sa Li
Thanks a lot, Tim. Here is the broker config:

--
broker.id=1
port=9092
host.name=10.100.70.128
num.network.threads=4
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
auto.leader.rebalance.enable=true
auto.create.topics.enable=true
default.replication.factor=3

log.dirs=/tmp/kafka-logs-1
num.partitions=8

log.flush.interval.messages=1
log.flush.interval.ms=1000
log.retention.hours=168
log.segment.bytes=536870912
log.cleanup.interval.mins=1

zookeeper.connect=10.100.70.128:2181,10.100.70.28:2181,10.100.70.29:2181
zookeeper.connection.timeout.ms=100

---


We have actually played around with request.required.acks in the producer
config: -1 causes long latency, and 1 is the setting under which messages are
lost. But I am not sure whether this is the reason we lose records.


thanks

AL











-- 

Alec Li


Re: messages lost

2015-01-02 Thread Timothy Chen
What's your configured required.acks? And are you also waiting for all of
your messages to be acknowledged?

The new producer returns futures, but you still need to wait for those
futures to complete.

Tim



messages lost

2015-01-02 Thread Sa Li
Hi, all

We are sending messages from a producer: we sent 10 records but see only
99573 records for that topic. We confirmed this by consuming the topic and
checking the log size in the Kafka web console.

Any ideas about the lost messages? What could cause this?

thanks

-- 

Alec Li