Hi Jun,
We do not use any compression in our test.
We deploy the producer and broker on the same machine. The problem still
exists. We use a sync producer and send one message at a time (no batching now).
We find that when the QPS exceeds 40k, the exception appears. So
I don't think it's the
I restarted the zookeeper server first, then broker. It's the same
instance of kafka 0.8 and I am using the same config file. In
server.properties I have: brokerid=1
Is that sufficient to ensure the broker gets restarted with the same
broker id as before?
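For reference, a minimal server.properties fragment pinning the broker id might look like the following (only `brokerid=1` is taken from the message above; the comment and anything else would be assumptions):

```properties
# Fixed broker id -- keeping this value the same across restarts is what
# lets the broker re-register in ZooKeeper under the same id as before.
brokerid=1
```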
thanks,
Jason
On Wed, Mar 20, 2013 at 8:41 AM, Jason Rosenberg j...@squareup.com wrote:
I think it might be cool to have a feature whereby you can tell a
broker to stop accepting new data produced to it, but still allow consumers
to consume from it.
That way, you can roll out new brokers to a
The latest version of 0.8 can be found in the 0.8 branch, not trunk.
Thanks,
Jun
On Wed, Mar 20, 2013 at 7:47 AM, Jason Huang jason.hu...@icare.com wrote:
The 0.8 version I use was built from trunk last Dec. Since then, this
error has happened 3 times. Each time we had to remove all the ZK and
We are seeing some odd socket timeouts from one of our producers. This
producer fans out data from one topic into dozens or hundreds of potential
output topics. We batch the sends to write 1,000 messages at a time.
The odd thing is that the timeouts are happening in the socket read, so I
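Batching like the 1,000-messages-at-a-time described above is often just a buffer that flushes every N messages. A minimal sketch in plain Java (the class and the send hand-off are hypothetical stand-ins, not the Kafka producer API; the real code would call the producer's batch send where the comment indicates):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical batcher: buffers messages and hands off a full batch of
// BATCH_SIZE, mirroring the 1,000-message batches described above.
public class Batcher {
    static final int BATCH_SIZE = 1000;
    private final List<String> buffer = new ArrayList<>();
    private final List<List<String>> sent = new ArrayList<>(); // stand-in for the broker send

    public void send(String message) {
        buffer.add(message);
        if (buffer.size() >= BATCH_SIZE) flush();
    }

    public void flush() {
        if (buffer.isEmpty()) return;
        sent.add(new ArrayList<>(buffer)); // real code: producer.send(batch)
        buffer.clear();
    }

    public List<List<String>> batches() { return sent; }

    public static void main(String[] args) {
        Batcher b = new Batcher();
        for (int i = 0; i < 2500; i++) b.send("msg-" + i);
        b.flush(); // drain the final partial batch
        System.out.println(b.batches().size()); // 3 batches: 1000, 1000, 500
    }
}
```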
On Wed, Mar 20, 2013 at 10:55 AM, Jason Rosenberg j...@squareup.com wrote:
On Wed, Mar 20, 2013 at 9:06 AM, Philip O'Toole phi...@loggly.com wrote:
On Wed, Mar 20, 2013 at 8:41 AM, Jason Rosenberg j...@squareup.com wrote:
I think it might be cool to have a feature whereby you can
No worries Philip, I'll assume you misspoke at first when talking about
a load-balancer between the consumers and brokers. Kafka, unfortunately,
doesn't allow consumers to connect to Kafka via a load balancer.
Ah yes, I misspoke. I meant an LB between Producers and Kafka Brokers.
On Wed, Mar 20, 2013 at 12:06 PM, Jason Rosenberg j...@squareup.com wrote:
On Wed, Mar 20, 2013 at 12:00 PM, Philip O'Toole phi...@loggly.com wrote:
For producers, also, you can't really use a load-balancer to connect to
brokers (you can use zk, or you can use a broker list, in 0.7.2, and
Ok,
I programmatically configure my Kafka producers, essentially passing
them a set of config properties as if specified in a .properties
file.
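Building the config programmatically "as if specified in a .properties file" can be done with java.util.Properties and a StringReader. A minimal sketch; the two keys shown (broker.list, producer.type) are my recollection of the 0.7/0.8-era producer names and should be treated as assumptions, as should the broker host names:

```java
import java.io.StringReader;
import java.util.Properties;

public class ProducerProps {
    // Build producer config in code, exactly as if it had been loaded
    // from a .properties file on disk. Key names are assumed, not
    // verified against a specific Kafka release.
    public static Properties build() throws Exception {
        String text =
            "broker.list=broker1:9092,broker2:9092\n" +
            "producer.type=sync\n";
        Properties props = new Properties();
        props.load(new StringReader(text)); // same parser as a real file
        return props;
    }

    public static void main(String[] args) throws Exception {
        Properties p = build();
        System.out.println(p.getProperty("producer.type")); // prints "sync"
    }
}
```

The resulting Properties object can then be handed to the producer's config constructor the same way a file-loaded one would be.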
I'll think about trying this, seems like it might just work.
Jason
On Wed, Mar 20, 2013 at 12:16 PM, Philip O'Toole phi...@loggly.com
Okay, how do we do this logistically? I've taken the Producer code that I
wrote for testing purposes and written a description around it. How do I get
it to you guys?
Simple Consumer is going to take a little longer since my test Consumers
are non-trivial and I'll need to simplify them.
Thanks,
There was only one producer running in all our tests. Besides, we also tried
the low-level Java API; the problem still shows up.
Thanks
2013/3/20 Jun Rao jun...@gmail.com
How many producer instances do you have? Can you reproduce the problem with
a single producer?
Thanks,
Jun
On Wed,
How many threads are you using?
Thanks,
Jun
On Wed, Mar 20, 2013 at 7:33 PM, Yang Zhou zhou.yang.h...@gmail.com wrote:
Sorry, I made a mistake; we use many threads producing at the same time.
2013/3/20 Jun Rao jun...@gmail.com
How many producer instances do you have? Can you reproduce the
Hi Jun,
I didn't find any error in the producer log.
I did another test: first I injected data into the kafka server, then stopped
the producer, and started the consumer.
The exception still happened, so the exception is not related to the producer.
From the log below, it seems the consumer exception happened first.
Do you mind filing a bug and attaching the reproducible test case there ?
Thanks,
Neha
On Wednesday, March 20, 2013, 王国栋 wrote:
Hi Jun,
We use one thread with one sync producer to send data to the broker
(QPS: 10k-15k, each log is about 1k bytes). The problem is reproduced.
We have used