num.consumer.fetchers sets the maximum number of fetcher threads that can be
spawned; it doesn't guarantee that you actually get as many fetcher threads as
you specify.

To me the metrics suggest a very slow consumption rate: only 18.21
bytes/minute. Here is the benchmark LinkedIn published:

http://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines

You should check whether 18.21 bytes/minute is really the maximum throughput
you can get on your machine with bin/kafka-consumer-perf-test.sh. If that is
the case, you definitely need to tune your setup.
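
A minimal sketch of such a check, assuming the ZooKeeper-based consumer perf
test that ships with 0.8.x (option names can differ between versions, so
verify with bin/kafka-consumer-perf-test.sh --help; the topic, message count
and thread count below are placeholders):

bin/kafka-consumer-perf-test.sh --zookeeper <zk1hostname>:2181 \
  --topic testtopic --messages 100000 --threads 1

If this reports a throughput far above 18.21 bytes/minute, the bottleneck is
more likely in the mirror maker configuration than in the machine itself.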

On Mon, Apr 13, 2015 at 12:43 PM, nitin sharma <kumarsharma.ni...@gmail.com>
wrote:

> hi Xiao,
>
> I have finally got JMX monitoring enabled for my Kafka nodes in the test
> environment, and here is what I observed. I was monitoring MBeans under the
> "kafka.consumer" domain of the JVM running the "Kafka Mirror Maker" process.
>
> =========================
> AllTopicsBytes ===> 18.21 bytes/minute
> FetchRequestRateAndTimeMs ===> 9.69 Request/min and  99th Percentile is
> 104.13ms.
> ======================
>
> The interesting thing is that I have specified "num.consumer.fetchers=200" in
> my consumer property file but I can see only 8 threads of type:
>
>
> "kafka.consumer":name="KafkaMaker1-ConsumerFetcherThread-KafkaMaker1_<<zkhost>>-1428952277321-5e044226-138-1-host_<<brokerhostname>>-port_9092-FetchRequestRateAndTimeMs",type="FetchRequestAndResponseMetrics"
>
>
> Could this be the issue?
>
> Note: my JVM heap is set to 1 GB and only 30 MB is utilized most of the time.
>
>
> Regards,
> Nitin Kumar Sharma.
>
>
> On Wed, Apr 8, 2015 at 10:48 PM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > Metrics like BytesPerSec and FetchRequestRateAndTimeMs can help you check
> > whether the consumer has problems processing messages.
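> >
> > A minimal sketch of reading those metrics from the command line with the
> > bundled JmxTool, assuming remote JMX is enabled on port 9999 of the mirror
> > maker JVM (the port is an assumption; without --object-name the tool dumps
> > every MBean, and you can narrow it by copying the exact object name shown
> > in jconsole):
> >
> > bin/kafka-run-class.sh kafka.tools.JmxTool \
> >   --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
> >   --reporting-interval 5000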
> >
> > On Thu, Apr 9, 2015 at 2:40 AM, nitin sharma <
> kumarsharma.ni...@gmail.com>
> > wrote:
> >
> > > Thanks, but can you please tell me which metrics could highlight the
> > > factor causing slow data migration by MirrorMaker?
> > >
> > > Regards,
> > > Nitin Kumar Sharma.
> > >
> > >
> > > On Tue, Apr 7, 2015 at 10:10 PM, tao xiao <xiaotao...@gmail.com>
> wrote:
> > >
> > > > You may need to look into the consumer metrics and producer metrics to
> > > > identify the root cause. Metrics in the kafka.consumer and kafka.producer
> > > > categories will help you find the problems.
> > > >
> > > > This link gives instructions on how to read the metrics:
> > > > http://kafka.apache.org/documentation.html#monitoring
> > > >
> > > >
> > > > On Wed, Apr 8, 2015 at 3:39 AM, nitin sharma <
> > > kumarsharma.ni...@gmail.com>
> > > > wrote:
> > > >
> > > > > hi,
> > > > >
> > > > > Sorry for the late response. I have been able to fix the issue; the
> > > > > problem was in my approach. I got confused between my source and target
> > > > > systems while defining the consumer and producer property files. It is
> > > > > fixed now.
> > > > >
> > > > > Now for a new issue: the rate at which data is migrated is very, very
> > > > > slow. It took 5 minutes to copy only 15 KB. :( Here are the properties
> > > > > for the producer and consumer; there is no network latency between the
> > > > > source and destination clusters as such.
> > > > >
> > > > >
> > > > > #### Producer ###########
> > > > > metadata.broker.list=<broker1IP>:9092,<broker2IP>:9092
> > > > > serializer.class=kafka.serializer.DefaultEncoder
> > > > > auto.create.topics.enable=true
> > > > > request.required.acks=1
> > > > > producer.type=async
> > > > > batch.num.messages=3000
> > > > > queue.buffering.max.ms=5000
> > > > > queue.buffering.max.messages=100000
> > > > > queue.enqueue.timeout.ms=-1
> > > > > socket.send.buffer.bytes=5282880
> > > > >
> > > > > ######### Consumer ###########
> > > > >
> > > > > zookeeper.connect=<zk1hostname>:2181,<zk2hostname>:2181,<zk3hostname>:2181
> > > > > group.id=KafkaMaker
> > > > > auto.create.topics.enable=true
> > > > > socket.receive.buffer.bytes=5243880
> > > > > zookeeper.connection.timeout.ms=1000000
> > > > > num.consumer.fetchers=20
> > > > > fetch.message.max.bytes=5243880
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > Regards,
> > > > > Nitin Kumar Sharma.
> > > > >
> > > > >
> > > > > On Tue, Mar 31, 2015 at 12:36 PM, tao xiao <xiaotao...@gmail.com>
> > > wrote:
> > > > >
> > > > > > Can you attach your mirror maker log?
> > > > > >
> > > > > > On Wed, Apr 1, 2015 at 12:28 AM, nitin sharma <
> > > > > kumarsharma.ni...@gmail.com
> > > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > I tried with auto.offset.reset=smallest, but it is still not working.
> > > > > > >
> > > > > > > there is data in my source cluster
> > > > > > >
> > > > > > > Regards,
> > > > > > > Nitin Kumar Sharma.
> > > > > > >
> > > > > > >
> > > > > > > On Mon, Mar 30, 2015 at 10:30 PM, tao xiao <
> xiaotao...@gmail.com
> > >
> > > > > wrote:
> > > > > > >
> > > > > > > > Do you have data being sent to *testtopic*? By default, mirror
> > > > > > > > maker only consumes data sent after it taps into the topic, so you
> > > > > > > > need to keep sending data to the topic after the mirror maker
> > > > > > > > connection is established. If you want to change this behavior, you
> > > > > > > > can set auto.offset.reset=smallest so that any new mirror maker
> > > > > > > > coming to the topic will start from the smallest offset.
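> > > > > > > >
> > > > > > > > For example, a minimal sketch of the relevant line in the consumer
> > > > > > > > properties file you pass to mirror maker (it only takes effect when
> > > > > > > > the consumer group has no committed offsets yet):
> > > > > > > >
> > > > > > > > # consume from the earliest available offset when no offset exists
> > > > > > > > auto.offset.reset=smallest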
> > > > > > > >
> > > > > > > > On Tue, Mar 31, 2015 at 3:53 AM, nitin sharma <
> > > > > > > kumarsharma.ni...@gmail.com
> > > > > > > > >
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > thanks Xiao
> > > > > > > > >
> > > > > > > > > I tried the MirrorMaker option in my test environment but failed. I
> > > > > > > > > am not able to see the log getting copied to the destination cluster.
> > > > > > > > > I see in the MirrorMaker process log that the connection is
> > > > > > > > > successfully established between the source and destination clusters,
> > > > > > > > > but I am still not sure what is causing the problem.
> > > > > > > > >
> > > > > > > > > Env. Setup ==>
> > > > > > > > >
> > > > > > > > > I). Source cluster (Qenv02) -- I have 2 brokers (Qenv02kf01,
> > > > > > > > > Qenv02kf02) and 3 ZK nodes (Qenv02zk01, Qenv02zk02 and Qenv02zk03).
> > > > > > > > > Destination cluster (Qenv05) -- I have 2 brokers (Qenv05kf01,
> > > > > > > > > Qenv05kf02) and 3 ZK nodes (Qenv05zk01, Qenv05zk02 and Qenv05zk03).
> > > > > > > > >
> > > > > > > > > II). I have kept the consumer and producer properties files in the
> > > > > > > > > config folder of one of the source Kafka brokers.
> > > > > > > > >
> > > > > > > > > III). I have executed the following command from the same Kafka
> > > > > > > > > broker to start the process; logs are attached:
> > > > > > > > >
> > > > > > > > > /app/kafka/bin/kafka-run-class.sh kafka.tools.MirrorMaker
> > > > > > > > > --consumer.config /app/kafka/config/consumer1.properties
> > > > > > > > > --num.streams=2 --producer.config /app/kafka/config/producer1.properties
> > > > > > > > > --whitelist testtopic
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > IV). I also tried the consumer offset checker tool while Mirror
> > > > > > > > > Maker was running, by launching a second session on the same broker
> > > > > > > > > where the mirror maker is running. I got the error message "*NoNode
> > > > > > > > > for /consumers/KafkaMaker/offsets/testtopic/0*". Complete log
> > > > > > > > > attached.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Regards,
> > > > > > > > > Nitin Kumar Sharma.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On Thu, Mar 26, 2015 at 11:24 AM, tao xiao <
> > > xiaotao...@gmail.com
> > > > >
> > > > > > > wrote:
> > > > > > > > >
> > > > > > > > >> Both consumer-1 and consumer-2 are properties files of the source
> > > > > > > > >> clusters mirror maker transfers data from. Mirror maker is designed
> > > > > > > > >> to consume data from N sources (N >= 1) and transfer it to one
> > > > > > > > >> destination cluster. You are free to supply as many consumer
> > > > > > > > >> properties files as you want to instruct mirror maker where to
> > > > > > > > >> consume data from.
> > > > > > > > >>
> > > > > > > > >> On Thu, Mar 26, 2015 at 9:50 PM, nitin sharma <
> > > > > > > > >> kumarsharma.ni...@gmail.com>
> > > > > > > > >> wrote:
> > > > > > > > >>
> > > > > > > > >> > thanks Mayuresh and Jiangjie for your response.
> > > > > > > > >> >
> > > > > > > > >> > I have actually not understood Mirror maker clearly and am hence a
> > > > > > > > >> > bit skeptical whether I will be able to execute it effectively.
> > > > > > > > >> >
> > > > > > > > >> > Online I have seen the following command to execute, but I have not
> > > > > > > > >> > understood what consumer-1 and consumer-2.properties are here. Do I
> > > > > > > > >> > need to copy them from my consumer code? Also, is there any reason
> > > > > > > > >> > why I need to provide consumer properties at all?
> > > > > > > > >> >
> > > > > > > > >> > bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config
> > > > > > > > >> > consumer-1.properties --consumer.config consumer-2.properties
> > > > > > > > >> > --producer.config producer.properties --whitelist my-topic
> > > > > > > > >> >
> > > > > > > > >> >
> > > > > > > > >> > Regards,
> > > > > > > > >> > Nitin Kumar Sharma.
> > > > > > > > >> >
> > > > > > > > >> >
> > > > > > > > >> > On Wed, Mar 25, 2015 at 8:57 PM, Mayuresh Gharat <
> > > > > > > > >> > gharatmayures...@gmail.com
> > > > > > > > >> > > wrote:
> > > > > > > > >> >
> > > > > > > > >> > > You can use the Mirror maker to move data from one data center to
> > > > > > > > >> > > the other, and once all the data has been moved you can shut down
> > > > > > > > >> > > the source data center by doing a controlled shutdown.
> > > > > > > > >> > >
> > > > > > > > >> > > Thanks,
> > > > > > > > >> > >
> > > > > > > > >> > > Mayuresh
> > > > > > > > >> > >
> > > > > > > > >> > > On Wed, Mar 25, 2015 at 2:35 PM, Jiangjie Qin
> > > > > > > > >> <j...@linkedin.com.invalid
> > > > > > > > >> > >
> > > > > > > > >> > > wrote:
> > > > > > > > >> > >
> > > > > > > > >> > > > If you want to do a seamless migration, I think a better way is
> > > > > > > > >> > > > to build a cross-datacenter Kafka cluster temporarily. So the
> > > > > > > > >> > > > process is:
> > > > > > > > >> > > > 1. Add several new Kafka brokers in your new datacenter and add
> > > > > > > > >> > > > them to the old cluster.
> > > > > > > > >> > > > 2. Use the replica assignment tool to reassign all the partitions
> > > > > > > > >> > > > to brokers in the new datacenter (a sketch follows below).
> > > > > > > > >> > > > 3. Perform a controlled shutdown on the brokers in the old
> > > > > > > > >> > > > datacenter.
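> > > > > > > > >> > > >
> > > > > > > > >> > > > A minimal sketch of step 2, assuming the stock partition
> > > > > > > > >> > > > reassignment tool and hypothetical new-datacenter broker ids 3
> > > > > > > > >> > > > and 4 (adapt the ZooKeeper string, topic json and broker list):
> > > > > > > > >> > > >
> > > > > > > > >> > > > # topics.json: {"version":1,"topics":[{"topic":"testtopic"}]}
> > > > > > > > >> > > > bin/kafka-reassign-partitions.sh --zookeeper <zk1hostname>:2181 \
> > > > > > > > >> > > >   --topics-to-move-json-file topics.json --broker-list "3,4" --generate
> > > > > > > > >> > > > # save the proposed assignment to reassignment.json, then apply it
> > > > > > > > >> > > > bin/kafka-reassign-partitions.sh --zookeeper <zk1hostname>:2181 \
> > > > > > > > >> > > >   --reassignment-json-file reassignment.json --execute
> > > > > > > > >> > > > # and check progress with --verify
> > > > > > > > >> > > > bin/kafka-reassign-partitions.sh --zookeeper <zk1hostname>:2181 \
> > > > > > > > >> > > >   --reassignment-json-file reassignment.json --verify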
> > > > > > > > >> > > >
> > > > > > > > >> > > > Jiangjie (Becket) Qin
> > > > > > > > >> > > >
> > > > > > > > >> > > > On 3/25/15, 2:01 PM, "nitin sharma" <
> > > > > > > kumarsharma.ni...@gmail.com>
> > > > > > > > >> > wrote:
> > > > > > > > >> > > >
> > > > > > > > >> > > > >Hi Team,
> > > > > > > > >> > > > >
> > > > > > > > >> > > > >In my project, we have built a new datacenter for Kafka brokers
> > > > > > > > >> > > > >and want to migrate from the current datacenter to the new one.
> > > > > > > > >> > > > >
> > > > > > > > >> > > > >Switching producers and consumers won't be a problem provided the
> > > > > > > > >> > > > >new datacenter has all the messages of the existing datacenter.
> > > > > > > > >> > > > >
> > > > > > > > >> > > > >
> > > > > > > > >> > > > >I have only 1 topic with 2 partitions that needs to be migrated,
> > > > > > > > >> > > > >and it is only a one-time activity.
> > > > > > > > >> > > > >
> > > > > > > > >> > > > >Kindly suggest the best way to deal with this situation.
> > > > > > > > >> > > > >
> > > > > > > > >> > > > >
> > > > > > > > >> > > > >Regards,
> > > > > > > > >> > > > >Nitin Kumar Sharma.
> > > > > > > > >> > > >
> > > > > > > > >> > > >
> > > > > > > > >> > >
> > > > > > > > >> > >
> > > > > > > > >> > > --
> > > > > > > > >> > > -Regards,
> > > > > > > > >> > > Mayuresh R. Gharat
> > > > > > > > >> > > (862) 250-7125
> > > > > > > > >> > >
> > > > > > > > >> >
> > > > > > > > >>
> > > > > > > > >>
> > > > > > > > >>
> > > > > > > > >> --
> > > > > > > > >> Regards,
> > > > > > > > >> Tao
> > > > > > > > >>
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > Regards,
> > > > > > > > Tao
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Regards,
> > > > > > Tao
> > > > > >
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Regards,
> > > > Tao
> > > >
> > >
> >
> >
> >
> > --
> > Regards,
> > Tao
> >
>



-- 
Regards,
Tao
