roker. Once KAFKA-957
> is resolved, MirrorMaker will be able to partition messages based on the
> same key in the target cluster.
>
> Thanks,
>
> Jun
>
>
> On Tue, Jul 2, 2013 at 7:55 PM, 王国栋 wrote:
>
> > Hi guys,
> >
> > We are using Kafka 0.7.2; in o
Hi guys,
We are using Kafka 0.7.2. In our cluster, we use customized partition
functions in the producer; say, we compute the partition id from the user
id in our log.
But when we use MirrorMaker to pull the log, we find that MirrorMaker uses
random partitioning to push the log into the destination brokers.
In my mind, w
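For reference: in Kafka 0.7 a customized partition function of the kind described above implements the kafka.producer.Partitioner trait and is plugged in through the "partitioner.class" producer property. A minimal sketch, assuming the key is the user id as a String (the class name here is made up):

    import kafka.producer.Partitioner

    // Sketch: route every message with the same user id to the same partition.
    class UserIdPartitioner extends Partitioner[String] {
      def partition(userId: String, numPartitions: Int): Int =
        (userId.hashCode & 0x7fffffff) % numPartitions  // force a non-negative index
    }

MirrorMaker in 0.7 does not carry this key over to the target cluster, which is why the re-published messages land on random partitions until KAFKA-957 is resolved.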
Hi Yonghui,
We see this exception too. We believe it is related to the JDK version.
In our case, the exception only happens under heavy workload, and after we
upgraded the JDK it was gone.
Currently we use jdk1.6_35. I suggest you try that.
Best
Guodong
On Thu, May 30, 2013 at 11:
Hi Neha,
We cannot understand why the partitions would be unbalanced if each thread
gets a different number of messages.
We went through the producer code, and the partition number is generated
by "random.nextInt(numOfPartitions)", so we think even if different threads
get different numbers of mess
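For what it's worth, the balancing argument above can be checked with a standalone sketch (not Kafka's actual code): every send draws a partition uniformly at random, so the per-partition counts converge regardless of how many messages each thread contributes.

    import scala.util.Random

    // Sketch: 100k random draws over 4 partitions land roughly 25k in each,
    // which is the behavior "random.nextInt(numOfPartitions)" gives.
    object RandomPartitionDemo extends App {
      val numPartitions = 4
      val counts = new Array[Int](numPartitions)
      val random = new Random
      for (_ <- 1 to 100000) counts(random.nextInt(numPartitions)) += 1
      println(counts.mkString(", "))
    }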
Thanks Neha.
Yes, I believe we have run into this issue.
We will try this patch. Currently, making the topic-partition directory
manually is OK for us.
Guodong
On Wed, Apr 17, 2013 at 9:24 PM, Neha Narkhede wrote:
> Can you check if you are hitting this
> https://issues.apache.org/jira/browse/KAFKA-27
ls of the bug though? It will be
> helpful for the community to understand.
>
> Thanks,
> Neha
>
> On Sunday, March 24, 2013, 王国栋 wrote:
>
> > Hi Neha & Jun,
> >
> > I think we have found the cause of this bug.
> > It is related to the JDK ve
hede wrote:
> Do you mind filing a bug and attaching the reproducible test case there ?
>
> Thanks,
> Neha
>
> On Wednesday, March 20, 2013, 王国栋 wrote:
>
> > Hi Jun,
> >
> > We use one thread with one sync producer to send data to the broker
> > (QPS: 10k
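For reference, a minimal sketch of the setup described above (one thread, one sync producer) against the 0.7 Scala API; the ZooKeeper address and topic name are placeholders:

    import java.util.Properties
    import kafka.producer.{Producer, ProducerConfig, ProducerData}

    // Sketch: a single synchronous producer; producer.type defaults to
    // "sync" in 0.7, set here only to make the mode explicit.
    object OneSyncProducer extends App {
      val props = new Properties
      props.put("zk.connect", "zkhost:2181")  // placeholder
      props.put("serializer.class", "kafka.serializer.StringEncoder")
      props.put("producer.type", "sync")
      val producer = new Producer[String, String](new ProducerConfig(props))
      producer.send(new ProducerData[String, String]("topic1", "hello"))
      producer.close()
    }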
> > > How many producer instances do you have? Can you reproduce the problem
> > > with a single producer?
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Wed, Mar 20, 2013 at 12:29 AM, 王国栋 wrote:
> > >
> > > > Hi Jun,
> > > > at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$1.apply(KafkaRequestHandlers.scala:38)
> > > > at kafka.network.Processor.handle(SocketServer.scala:296)
> > > > at kafka.network.Processor.read(SocketServer.scala:319)
> > > > at kafka.network.Processor.run(SocketServer.scala:214)
> > > > at java.lang.Thread.run(Thread.java:636)
> > > >
> > > > Or this:
> > > >
> > > > 1406871 [kafka-processor-2] ERROR kafka.network.Processor - Closing socket
> > > > for /10.0.2.140 because of error
> > > > java.nio.BufferUnderflowException
> > > > at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:145)
> > > > at java.nio.ByteBuffer.get(ByteBuffer.java:692)
> > > > at kafka.utils.Utils$.readShortString(Utils.scala:123)
> > > > at kafka.api.ProducerRequest$.readFrom(ProducerRequest.scala:29)
> > > > at kafka.api.MultiProducerRequest$$anonfun$readFrom$1.apply$mcVI$sp(MultiProducerRequest.scala:28)
> > > > at scala.collection.immutable.Range$ByOne$class.foreach$mVc$sp(Range.scala:282)
> > > > at scala.collection.immutable.Range$$anon$2.foreach$mVc$sp(Range.scala:265)
> >
> > --
> > *Best Regards
> >
> > 向河林*
> >
>
--
Guodong Wang
王国栋
> > > > at kafka.api.ProducerRequest$.readFrom(ProducerRequest.scala:33)
> > > > at kafka.api.MultiProducerRequest$$anonfun$readFrom$1.apply$mcVI$sp(MultiProducerRequest.scala:28)
> > > > at scala.collection.immutable.Range$ByOne$class.foreach$mVc$sp(Range.scala:282)
> > > > at scala.collection.immutable.Range$$anon$2.foreach$mVc$sp(Range.scala:265)
> > > > at kafka.api.MultiProducerRequest$.readFrom(MultiProducerRequest.scala:27)
> > > > at kafka.server.KafkaRequestHandlers.handleMultiProducerRequest(KafkaRequestHandlers.scala:59)
> > > > at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$4.apply(KafkaRequestHandlers.scala:41)
> > > > at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$4.apply(KafkaRequestHandlers.scala:41)
> > > > at kafka.network.Processor.handle(SocketServer.scala:296)
> > > > at kafka.network.Processor.read(SocketServer.scala:319)
> > > > at kafka.network.Processor.run(SocketServer.scala:214)
> > > > at java.lang.Thread.run(Thread.java:636)
> > > >
> > > > It has bothered us for a few days. At first we thought it might be some
> > > > wrong configuration settings, and we changed to the wiki's recommended
> > > > configuration, but unfortunately the exceptions still came out.
> > > >
> > > > In what situations can these exceptions be thrown? What can we do to
> > > > avoid them?
> > > >
> > > > Thanks
> > > >
> > > > --
> > > > *Best Regards
> > > >
> > > > Xiang Helin*
> > > >
> > >
> >
> >
> >
> > --
> > *Best Regards
> >
> > 向河林*
> >
>
--
Guodong Wang
王国栋
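For context on the traces above: both end in kafka.utils.Utils$.readShortString, which parses a length-prefixed string out of the request buffer. A paraphrased sketch (not Kafka's exact code) shows why a corrupted or truncated produce request surfaces as BufferUnderflowException rather than a friendlier error:

    import java.nio.ByteBuffer

    // Paraphrase of the failing parse step: read a 2-byte length, then that
    // many bytes. If the buffer holds fewer bytes than the length claims,
    // ByteBuffer.get throws BufferUnderflowException, as in the traces above.
    object ShortStringParse {
      def readShortString(buffer: ByteBuffer): String = {
        val size = buffer.getShort.toInt
        val bytes = new Array[Byte](size)
        buffer.get(bytes)
        new String(bytes, "UTF-8")
      }
    }

This fits the explanation earlier in the thread that the exceptions appeared only under heavy load and went away after a JDK upgrade, which suggests the request bytes, not the configuration, were the problem.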
iav.com:2191/kafka,zk9sh.prod.mediav.com:2191/kafka,zk10sh.prod.mediav.com:2191/kafka
> >
>
> Thanks,
>
> Jun
>
> On Tue, Mar 12, 2013 at 2:08 AM, 王国栋 wrote:
>
> > Hi guys,
> >
> > I am using Kafka 0.7.2 now, and I configure zk.connect as "zk.connect=
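For reference, the 0.7 zk.connect value is a comma-separated ZooKeeper host list, optionally followed by a chroot path under which all Kafka znodes live, as in the /kafka suffix visible above. A sketch with placeholder hosts (the chroot placement here follows the standard ZooKeeper connect-string format):

    zk.connect=zkhost1:2181,zkhost2:2181,zkhost3:2181/kafka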
8.1), you can use a tool to move all
> partitions off a broker first and then decommission it.
>
> Thanks,
>
> Jun
>
>
> On Sun, Feb 17, 2013 at 2:19 AM, 王国栋 wrote:
>
> > Hi Jun,
> >
> > If we use the high-level producer based on ZooKeeper, how can we
>
or myself, could you give me a brief sketch
> > of the implementation?
> >
> > Thank you
> > Best, Jae
> >
>
--
Guodong Wang
王国栋
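For reference, the 0.8.x tool Jun mentions above is bin/kafka-reassign-partitions.sh. A hedged sketch of draining a broker before decommissioning it, assuming the 0.8.1 flag names (the host, file names, and broker ids are placeholders):

    # Generate a plan that moves the listed topics onto brokers 0 and 1 only
    bin/kafka-reassign-partitions.sh --zookeeper zkhost:2181 \
      --topics-to-move-json-file topics.json --broker-list "0,1" --generate
    # Apply the generated assignment
    bin/kafka-reassign-partitions.sh --zookeeper zkhost:2181 \
      --reassignment-json-file reassign.json --execute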
Jun
>
> On Mon, Feb 4, 2013 at 10:30 PM, 王国栋 wrote:
>
> > We kill the broker on purpose, since we are doing some failover tests. If
> > the broker is hard-killed, will the async-producer lose messages for 6
> > seconds?
> >
> > Thanks.
> >
&
ZK
> notification immediately. If the broker is hard-killed, the ZK notification
> won't arrive at the producer until the ZK session times out (default: 6 secs).
>
> Thanks,
>
> Jun
>
> On Mon, Feb 4, 2013 at 7:11 PM, 王国栋 wrote:
>
> > Hi Jun
> >
> >
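For reference, the 6-second window above comes from the producer's ZooKeeper session timeout, which in 0.7 is the zk.sessiontimeout.ms client property (default 6000). A minimal sketch of shortening it; 2000 is an illustrative value and trades faster hard-kill detection for more spurious session expirations:

    import java.util.Properties
    import kafka.producer.ProducerConfig

    // Sketch: shrink the ZK session timeout so a hard-killed broker is
    // noticed sooner; too small a value risks false session expirations.
    object ShortSessionTimeout {
      val props = new Properties
      props.put("zk.connect", "zkhost:2181")     // placeholder
      props.put("zk.sessiontimeout.ms", "2000")  // default is 6000
      val config = new ProducerConfig(props)
    }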
> producer for load balancing?
>
> Thanks,
>
> Jun
>
> On Sun, Feb 3, 2013 at 10:00 PM, 王国栋 wrote:
>
> > Hi,
> >
> > we are doing some failover tests on Kafka 0.7.2. We kill one of the
> > brokers on purpose. But we find that the async-producer los
er to sync-producer, and got the exceptions as soon as
the broker went down.
So I am wondering if I can get the status of the socket connected to the
broker. Is there some callback that can do this?
Thanks.
--
Guodong Wang
王国栋
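On the callback question above: the 0.7 producer does not appear to expose the socket state, but with a sync producer the failure surfaces as an exception on send(), so the caller can react at the call site. A minimal sketch (the logging and boolean return are illustrative choices, not part of the API):

    import kafka.producer.{Producer, ProducerData}

    object SafeSend {
      // Sketch: a dead broker shows up as an exception from a sync send().
      def sendSafely(producer: Producer[String, String],
                     topic: String, msg: String): Boolean =
        try {
          producer.send(new ProducerData[String, String](topic, msg))
          true
        } catch {
          case e: Exception =>
            System.err.println("send failed, broker may be down: " + e)
            false
        }
    }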
Thanks, Yonghui.
That makes sense.
On Mon, Dec 31, 2012 at 1:45 AM, 永辉 赵 wrote:
> The partition name registered in ZK contains the broker id and the
> sequence (partition) id, such as "0-0", "0-1", ...
>
> Thanks,
> Yonghui
>
>
>
>
>
> On 12-12-30 at 11 PM
okers haven't registered the topic in ZK, I think these
> brokers may not actually work for this topic.
>
> Please correct me if I am wrong.
>
> Thanks,
> Yonghui
>
>
>
>
>
> On 12-12-28 at 10:30 AM, "王国栋" wrote:
>
> >Hi ,
> >
> >I am
3 partitions?
I think setting topic.partition.count.map to "topic1:3, topic2:3" should
work. Is there any simpler configuration?
Thanks.
--
Guodong Wang(王国栋)
Email:wangg...@gmail.com
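For reference, a minimal server.properties sketch of the override discussed above (0.7 broker config; num.partitions is the default for topics not listed in the map):

    num.partitions=1
    topic.partition.count.map=topic1:3,topic2:3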
> On Tue, Dec 4, 2012 at 6:23 PM, 王国栋 wrote:
>
> > Hi ,
> >
> > One of my colleagues wants to subscribe to this email group, but he
> > cannot send a subscription email to kafka-users-subscr...@incubator.apache.org <
> > kafka-users-subscr...@incubator.
> > kafka-users-subscr...@incubator.
ot;"""""""""""""""'
Could you please tell us what the subscription email address is now?
Thanks.
--
Guodong Wang(王国栋)
Email:wangg...@gmail.com