Yes, this is a typical use case for MirrorMaker.
Thanks,
Jun
On Tue, Oct 7, 2014 at 4:33 PM, István wrote:
> Hi all,
>
> I have just a quick question. I was wondering if MirrorMaker is the right
> solution for this task, basically I need to move a subset of the data from
> a production cluster
We use http://quantifind.com/KafkaOffsetMonitor/...
On Tue, Oct 7, 2014 at 8:49 PM, Gwen Shapira wrote:
> I'm using Hue's ZooKeeper app:
> http://gethue.com/new-zookeeper-browser-app/
>
> This UI looks very cute, but I didn't try it yet:
> https://github.com/claudemamo/kafka-web-console
>
> Gwen
I'm using Hue's ZooKeeper app: http://gethue.com/new-zookeeper-browser-app/
This UI looks very cute, but I didn't try it yet:
https://github.com/claudemamo/kafka-web-console
Gwen
On Tue, Oct 7, 2014 at 5:08 PM, Shafaq wrote:
> We are going to deploy Kafka in Production and also monitor it via c
>> we've got a cron which iterates each topic+partition and writes an index
of timestamps->byte offset
What is a byte offset? Is it reliable for gauging the number of tuples being
written to a topic when there are many different producers writing to the
same topic?
On Tue, Oct 7, 2014 at 4:35 PM, N
We are going to deploy Kafka in production and also monitor it via a console
(e.g. state of partitions on the brokers, leader and followers, state of consumers).
Is there an out-of-the-box solution?
What is the best and most efficient way to deploy and monitor it?
Has someone tried this? It looks promising:
http:
> On Tue, Oct 7, 2014 at 3:56 PM, Josh J wrote:
>> Is there a way to monitor the rate of writes to a particular topic? I wish
>> to monitor the frequency of incoming tuples in order to consume from the
>> topic in particular ways depending on the incoming write throughput.
we've got a cron which i
Hi all,
I just have a quick question. I was wondering if MirrorMaker is the right
solution for this task: basically, I need to move a subset of the data from
a production cluster to a test cluster. My original idea was to set
the log.retention.minutes
to a low value like 12 hours so I could keep o
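For reference, a rough sketch of what that MirrorMaker run could look like on
0.8.x, copying only a subset of topics via a whitelist (the property file names
and the topic pattern below are placeholders, not anything from this thread):

bin/kafka-run-class.sh kafka.tools.MirrorMaker \
  --consumer.config source-cluster-consumer.properties \
  --producer.config target-cluster-producer.properties \
  --whitelist 'mytopic.*'

Here source-cluster-consumer.properties would point zookeeper.connect at the
production cluster and target-cluster-producer.properties would point
metadata.broker.list at the test cluster. How long the copied data is kept is
then governed by the test cluster's own log.retention.* settings, independent
of production.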
Interested in the total number of tuples written per millisecond per topic.
On Tue, Oct 7, 2014 at 3:56 PM, Josh J wrote:
> Hi,
>
> Is there a way to monitor the rate of writes to a particular topic? I wish
> to monitor the frequency of incoming tuples in order to consume from the
> topic in part
Hi,
Is there a way to monitor the rate of writes to a particular topic? I wish
to monitor the frequency of incoming tuples in order to consume from the
topic in particular ways depending on the incoming write throughput.
Thanks,
Josh
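One low-effort option, if the per-topic write rate is all you need: the brokers
already expose a messages-in meter over JMX. A rough sketch follows; note the
MBean naming changed between 0.8.1 and 0.8.2 (that is what the KAFKA-1481
discussion further down is about), so verify the exact object name in jconsole
first. The topic name and JMX port here are placeholders.

# start the broker with JMX enabled
JMX_PORT=9999 bin/kafka-server-start.sh config/server.properties

# poll the per-topic messages-in meter with the bundled JmxTool
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi \
  --object-name 'kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=mytopic'

The meter's OneMinuteRate attribute is in messages per second, which you can
scale down to per-millisecond if that is the granularity you want.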
I think the more automated/lazy way right now would be to shut down one
broker, rm -rf all its data, add the new data directories to the config, and
restart to let the broker restore off the replicas. This may actually be
okay, though it is a little slower.
-Jay
On Tue, Oct 7, 2014 at 3:25 PM, Jun Rao wro
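A rough sketch of that sequence, under the assumption that every partition on
the broker has replicas on other brokers (otherwise rm -rf loses data for
real); the paths are the defaults or placeholders:

# on the broker being migrated
bin/kafka-server-stop.sh
rm -rf /tmp/kafka-logs
# edit config/server.properties to list the new volumes, e.g.
#   log.dirs=/data1/kafka-logs,/data2/kafka-logs,/data3/kafka-logs
bin/kafka-server-start.sh config/server.properties
# the broker rejoins empty and re-replicates its partitions from the other brokers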
You can stop the broker and copy some of the log directories to the new
volumes. You have to be a bit careful when you do that. There are two
metadata files, recovery-point-offset-checkpoint and
replication-offset-checkpoint, that you have to manually split and copy over.
Ideally, we should be able
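For anyone attempting the copy-and-split route: both checkpoint files are
plain text, and in 0.8.x the layout is simply a version line, an entry-count
line, and then one "topic partition offset" entry per line (the values below
are made up). Splitting them means keeping, in each log directory's copy, only
the entries for the partitions that live there and adjusting the count to
match:

0
2
mytopic 0 123456
mytopic 3 98765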
Otis,
Yes, if you guys can help provide a patch in a few days, we can probably
get it into the 0.8.2 release.
Thanks,
Jun
On Tue, Oct 7, 2014 at 12:10 PM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:
> Hi Jun,
>
> I think your MBean renaming approach will work. I see
> https://issues.a
Hi Jun,
I think your MBean renaming approach will work. I see
https://issues.apache.org/jira/browse/KAFKA-1481 has Fix Version 0.8.2, but
is not marked as a Blocker. We'd love to get the MBeans fixed so this
makes it into the 0.8.2 release. Do you know if this is on anyone's plate (the
issue is curre
KAFKA-1092 (https://issues.apache.org/jira/browse/KAFKA-1092) provided the
answer to the query. Thanks all.
On Tue, Oct 7, 2014 at 8:26 AM, Biju N wrote:
> Hello There,
>I have the following in my server.properties on a two node kafka test
> cluster
>
> ….
> port=6667
> host.name=f-bcpc-vm3.
Neha,
Is it a single log volume, or can it be volumes, plural?
-Steve
On Tue, Oct 7, 2014 at 6:41 AM, Neha Narkhede
wrote:
> Is it possible to perform this migration without losing the data currently
> stored in the kafka cluster?
>
> Though I haven't tested this, the way this is designed should allow you to
Is it possible to perform this migration without losing the data currently
stored in the kafka cluster?
Though I haven't tested this, the way this is designed should allow you to
shut down a broker, move some partition directories over to the new log
volume and restart the broker. You will have to
Hello There,
I have the following in my server.properties on a two-node Kafka test
cluster:
….
port=6667
host.name=f-bcpc-vm3.bcpc.example.com
advertised.host.name=f-bcpc-vm3.bcpc.example.com
advertised.port=9092
…
When I bring up Kafka, there is no process listening on port 9092, but Kafka
list
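If I'm reading the KAFKA-1092 follow-up above right, this is expected: the
broker binds to host.name:port and only registers advertised.host.name and
advertised.port in ZooKeeper for clients to use, so nothing listens on the
advertised port unless something external (a proxy or NAT rule) forwards it to
the real one. The same config, annotated as a rough reading rather than
anything authoritative:

# the broker binds and listens on host.name:port
port=6667
host.name=f-bcpc-vm3.bcpc.example.com
# advertised.* is only what gets registered in ZooKeeper for clients;
# nothing listens on 9092 unless something external forwards 9092 -> 6667
advertised.host.name=f-bcpc-vm3.bcpc.example.com
advertised.port=9092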
Hi,
I have a Kafka 0.8.1.1 cluster consisting of 4 servers with several topics on
it.
The cluster was initially configured to store Kafka log data in a single
directory on each server (log.dirs=/tmp/kafka-logs).
Now, I have assigned 3 new disks to each server and I would like to use them to
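Putting the advice from the replies above together, the end state is just a
longer log.dirs list; a sketch with placeholder mount points (the partition
moves and checkpoint-file edits described earlier still have to happen while
the broker is down):

# before
log.dirs=/tmp/kafka-logs
# after: the three new disks, mounted e.g. at /data1..3, plus (optionally) the old directory
log.dirs=/data1/kafka-logs,/data2/kafka-logs,/data3/kafka-logs,/tmp/kafka-logs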