New consumer - mirror maker - blacklist

2017-06-26 Thread cs user
Hi All,

The docs currently say the following:

However, --blacklist is not supported when the new consumer has been
enabled (i.e. when bootstrap.servers has been defined in the consumer
configuration)

Is there an alternative to using a blacklist when using the new consumer, so
that it is possible to replicate all topics apart from a specified list?
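
For example, since --whitelist takes a Java-style regular expression, would
a negative lookahead work (untested; the topic names here are placeholders)?

--whitelist '^(?!excludedA$|excludedB$).*'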

Thanks!


Re: Kafka broker crash - broker id then changed

2016-09-09 Thread cs user
Coming back to this issue, it looks like it was a result of the CentOS 7
systemd tmp cleanup task:

/usr/lib/tmpfiles.d/tmp.conf



#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

# See tmpfiles.d(5) for details

# Clear tmp directories separately, to make them easier to override
v /tmp 1777 root root 10d
v /var/tmp 1777 root root 30d

# Exclude namespace mountpoints created with PrivateTmp=yes
x /tmp/systemd-private-%b-*
X /tmp/systemd-private-%b-*/tmp
x /var/tmp/systemd-private-%b-*
X /var/tmp/systemd-private-%b-*/tmp
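
Given that, the fix is presumably either to move the Kafka data directory
out of /tmp via log.dirs in server.properties (the path below is just an
example):

log.dirs=/var/lib/kafka-logs

or to exclude the log directory from the cleanup with a tmpfiles.d drop-in
such as /etc/tmpfiles.d/kafka.conf:

x /tmp/kafka-logs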




Cheers!



On Thu, May 26, 2016 at 9:27 AM, cs user <acldstk...@gmail.com> wrote:

> Hi Ben,
>
> Thanks for responding. I can't imagine what would have cleaned /tmp up at
> that time. I don't think we have anything in place to do that, and it also
> appears to have happened to both machines at the same time.
>
> It also appears that the other topics were not affected; there were still
> other files present in /tmp.
>
> Thanks!
>
> On Thu, May 26, 2016 at 9:19 AM, Ben Davison <ben.davi...@7digital.com>
> wrote:
>
>> Possibly tmp got cleaned up?
>>
>> Seems like one of the log files was deleted while a producer was writing
>> messages to it:
>>
>> On Thursday, 26 May 2016, cs user <acldstk...@gmail.com> wrote:
>>
>> > Hi All,
>> >
>> > We are running Kafka version 0.9.0.1; at the time the brokers crashed
>> > yesterday we were running a 2-node cluster. This has now been increased
>> > to 3.
>> >
>> > We are not specifying a broker id and relying on kafka generating one.
>> >
>> > After the brokers crashed (at exactly the same time) we left kafka
>> > stopped for a while. After kafka was started back up, the broker ids on
>> > both servers were incremented: they were 1001/1002 and they flipped to
>> > 1003/1004. This seemed to cause some problems, as partitions were
>> > assigned to broker ids which the cluster believed had disappeared and
>> > so were not recoverable.
>> >
>> > We noticed that the broker ids are actually stored in:
>> >
>> > /tmp/kafka-logs/meta.properties
>> >
>> > So we set these back to what they were and restarted. Is there a reason
>> > why these would change?
>> >
>> > Below are the error logs from each server:
>> >
>> > Server 1
>> >
>> > [2016-05-25 09:05:52,827] INFO [ReplicaFetcherManager on broker 1002]
>> > Removed fetcher for partitions [Topic1Heartbeat,1]
>> > (kafka.server.ReplicaFetcherManager)
>> > [2016-05-25 09:05:52,831] INFO Completed load of log Topic1Heartbeat-1
>> > with log end offset 0 (kafka.log.Log)
>> > [2016-05-25 09:05:52,831] INFO Created log for partition
>> > [Topic1Heartbeat,1] in /tmp/kafka-logs with properties
>> > {compression.type -> producer, file.delete.delay.ms -> 60000,
>> > max.message.bytes -> 1000012, min.insync.replicas -> 1,
>> > segment.jitter.ms -> 0, preallocate -> false,
>> > min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096,
>> > unclean.leader.election.enable -> true, retention.bytes -> -1,
>> > delete.retention.ms -> 86400000, cleanup.policy -> delete,
>> > flush.ms -> 9223372036854775807, segment.ms -> 604800000,
>> > segment.bytes -> 1073741824, retention.ms -> 604800000,
>> > segment.index.bytes -> 10485760,
>> > flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
>> > [2016-05-25 09:05:52,831] INFO Partition [Topic1Heartbeat,1] on broker
>> > 1002: No checkpointed highwatermark is found for partition
>> > [Topic1Heartbeat,1] (kafka.cluster.Partition)
>> > [2016-05-25 09:14:12,189] INFO [GroupCoordinator 1002]: Preparing to
>> > restabilize group Topic1 with old generation 0
>> > (kafka.coordinator.GroupCoordinator)
>> > [2016-05-25 09:14:12,190] INFO [GroupCoordinator 1002]: Stabilized group
>> > Topic1 generation 1 (kafka.coordinator.GroupCoordinator)
>> > [2016-05-25 09:14:12,195] INFO [GroupCoordinator 1002]: Assignment
>> > received from leader for group Topic1 for generation 1
>> > (kafka.coordinator.GroupCoordinator)
>> > [2016-05-25 09:14:12,749] FATAL [Replica Manager on Broker 1002]:
>> > Halting due to

Re: Question regarding functionality of MirrorMaker

2016-08-26 Thread cs user
Hi Umesh,

I haven't had that problem; it seems to work fine for me. The only issue I
found, which kind of makes sense, is that it doesn't mirror existing topics
immediately, only once messages are first sent to the topic after mirror
maker connects. It doesn't start from the first offset available, only the
current one.

However, once you start sending messages it seems to subscribe to them fine
and they get created on the mirror cluster; the same goes for new topics
created on the source cluster, they seem to come over fine.

The only thing I can think of is that you have disabled auto topic creation
(auto.create.topics.enable=false) on the mirror maker cluster, so that
mirror maker is unable to create the topics automatically. But then it
wouldn't be able to create the existing topics either, so that doesn't make
sense.

Are there any error messages in your mirror maker logs or on the mirror
maker cluster which point to what the issue might be?

Other than the bootstrap servers, my producer settings look as follows:

producer.type=async
compression.codec=0
serializer.class=kafka.serializer.DefaultEncoder
max.message.size=1000
queue.time=1000
queue.enqueueTimeout.ms=-1
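
The consumer config is little more than a bootstrap list and a group id; a
minimal sketch (host names and group name here are placeholders):

bootstrap.servers=source-broker1:9092,source-broker2:9092
group.id=mirrormaker_group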



Cheers!




On Fri, Aug 26, 2016 at 6:08 AM, UMESH CHAUDHARY <umesh9...@gmail.com>
wrote:

> Hello Mate,
> Thanks for your detailed response and it surely helps.
>
> WhiteList is the required config for MM from 0.9.0 onwards. And you are
> correct that --new-consumer requires --bootstrap-servers rather than
> --zookeeper .
>
> However, did you notice that MM picks up the topics which are present at
> the time of its startup and mirrors their data, but when you add new
> topics after startup it doesn't pick them up automatically?
>
> Regards,
> Umesh Chaudhary
>
> On Thu, 25 Aug 2016 at 19:23 cs user <acldstk...@gmail.com> wrote:
>
> > Hi Umesh,
> >
> > I am new to kafka as well, and configuring the MirrorMaker. I got mine
> > working in the following way.
> >
> > I run the mirror maker instance on the mirror cluster, i.e. where you
> > want to mirror the topics to, although I'm not sure it matters.
> >
> > I use the following options when starting my service (systemd file):
> >
> > KAFKA_RUN="/opt/kafka/bin/kafka-run-class.sh"
> > KAFKA_ARGS="kafka.tools.MirrorMaker"
> > KAFKA_CONFIG="--new.consumer --offset.commit.interval.ms=5000
> > --consumer.config /opt/kafka/config/consumer-mirror1.properties
> > --producer.config /opt/kafka/config/producer.properties
> > --whitelist=\".*\""
> >
> > Without the --new.consumer parameter, the --consumer.config and
> > --producer.config files need to contain the zookeeper config for the
> > relevant clusters. When using the --new.consumer switch this is no
> > longer needed (as I understand it).
> >
> > The consumer config points at my source cluster; the producer config
> > points locally, to my mirror cluster. I think it's also important to
> > configure the whitelist to tell it which topics you want to mirror; in
> > my case I mirror all of them with a wildcard.
> >
> > There is not much config in the consumer.config and producer.config
> > files apart from the bootstrap.servers list, pointing at the relevant
> > cluster. I have 3 brokers in my mirror cluster and each one of them runs
> > the same mirror maker service, so one will take over if another one
> > fails.
> >
> > I hope someone will correct me if I am wrong about anything, and
> > hopefully this will help!
> >
> > Cheers!
> >
> >
> >
> >
> >
> >
> >
> > On Thu, Aug 25, 2016 at 9:36 AM, UMESH CHAUDHARY <umesh9...@gmail.com>
> > wrote:
> >
> > > Hey Folks,
> > > I was trying to understand the behavior of MirrorMaker but it looks
> > > like I am missing something here. Please see the steps which I
> > > performed:
> > >
> > > 1) I configured MM on source Kafka cluster
> > > 2) Created a topic and pushed some data in it using console producer.
> > > 3) My understanding is that MM would start mirroring the data (which
> > > is there in the topic) based on "offsetCommitIntervalMs", and it would
> > > then be in the destination cluster.
> > >
> > > https://github.com/apache/kafka/blob/0.9.0/core/src/main/scala/kafka/tools/MirrorMaker.scala#L503
> > >
> > > 4) But when I list the topics on the destination, I can't see the
> > > topic which I recently created on the source.
> > > 5) I tried to check the offset of "mirrormaker_group" for that topic
> > > (on the source cluster) using kafka.admin.ConsumerGroupCommand; I see
> > > the offsets for that topic as "unknown".
> > > 6) But when I start a console consumer for that topic on the source or
> > > destination (auto creation of topics is true), I see that all data is
> > > being mirrored via MM and kafka.admin.ConsumerGroupCommand shows the
> > > right offsets this time.
> > >
> > > Is this expected behavior of MM, or did I mess up some configuration?
> > >
> > > Regards,
> > > Umesh
> > >
> >
>


Re: kafka mirrormaker - new consumer version - message delay

2016-07-21 Thread cs user
Hi All,

Adding this setting does the trick it seems:

--offset.commit.interval.ms 5000

This defaults to 60,000 ms (one minute).

Not sure if lowering it to 5 seconds has any adverse effects.
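
For reference, the option just gets appended to the normal mirror maker
invocation; a sketch along the lines of our setup (paths and config file
names are illustrative):

/opt/kafka/bin/kafka-run-class.sh kafka.tools.MirrorMaker --new.consumer \
  --consumer.config consumer-mirror1.properties \
  --producer.config producer.properties \
  --whitelist=".*" --offset.commit.interval.ms 5000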

Cheers!


On Thu, Jul 21, 2016 at 10:50 AM, cs user <acldstk...@gmail.com> wrote:

> Hi All,
>
> I've recently enabled ssl for our cluster, and as we are using a mirror
> maker I'm now starting our mirror maker processes with the --new.consumer
> switch. I can see that now the mirror maker consumer group type has
> switched from ZK to KF.
>
> However, I've started to notice a delay between messages being sent to the
> main clusters and those messages arriving on the mirror maker cluster.
> Sometimes it can take upwards of 30 seconds for the lag to catch up.
>
> It seems like there is a delay in the mirror maker process detecting that
> new messages are waiting, as it doesn't seem to make a difference how big
> the lag is: 100 messages, 1,000 or 50,000. Once it detects there is a lag
> it downloads all the waiting messages almost instantly. I'd like this
> delay to be as minimal as possible though.
>
> It seems very similar to this:
> https://github.com/Jroland/kafka-net/issues/44
>
> Does anyone know if there is a similar option with the mirrormaker process?
>
> Thanks !
>
>
>


kafka mirrormaker - new consumer version - message delay

2016-07-21 Thread cs user
Hi All,

I've recently enabled ssl for our cluster, and as we are using a mirror
maker I'm now starting our mirror maker processes with the --new.consumer
switch. I can see that now the mirror maker consumer group type has
switched from ZK to KF.

However, I've started to notice a delay between messages being sent to the
main clusters and those messages arriving on the mirror maker cluster.
Sometimes it can take upwards of 30 seconds for the lag to catch up.

It seems like there is a delay in the mirror maker process detecting that
new messages are waiting, as it doesn't seem to make a difference how big
the lag is: 100 messages, 1,000 or 50,000. Once it detects there is a lag
it downloads all the waiting messages almost instantly. I'd like this
delay to be as minimal as possible though.

It seems very similar to this:
https://github.com/Jroland/kafka-net/issues/44

Does anyone know if there is a similar option with the mirrormaker process?

Thanks !


SSL / SASL_SSL questions

2016-07-18 Thread cs user
Hi All,

I have a question about the config I have working, and whether or not all
traffic is being encrypted when sent via the client.

Let's say I have the following settings; I'm only including the relevant
parameters:


Broker config:

listeners=SASL_SSL://:9092,SSL://:9093
log.message.format.version=0.10.0.0
port=9092
sasl.mechanism.inter.broker.protocol=SSL
sasl.enabled.mechanisms=PLAIN,SSL
security.inter.broker.protocol=SSL
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=somepassword
ssl.key.password=somepassword
ssl.truststore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.truststore.password=somepassword


Client config, clients connect to the cluster on port 9092 (SASL_SSL)

PROPS.put("security.protocol","SASL_SSL");
PROPS.put("sasl.mechanism", "PLAIN");
PROPS.put("ssl.truststore.location","/some/location/kafka.client.truststore.jks");
PROPS.put("ssl.truststore.password","somepassword");


In this scenario, I believe that traffic between the brokers is being
encrypted via TLS, with authentication also provided by TLS.

By giving a false password, I can confirm that client->broker connections
are being authenticated using the JAAS method. Once I put in the correct
password the producer is able to connect and send messages.

But what about client->broker communication? Once authentication has
completed, is all subsequent traffic also encrypted with TLS?

Thanks in advance for any responses.

Cheers!


Re: Enabling PLAINTEXT inter broker security

2016-07-15 Thread cs user
Just to follow on from this, what is the difference between these two
broker parameters?

listeners - Comma-separated list of URIs we will listen on and their
protocols. Specify hostname as 0.0.0.0 to bind to all interfaces. Leave
hostname empty to bind to the default interface. Examples of legal listener
lists: PLAINTEXT://myhost:9092,TRACE://:9091 or PLAINTEXT://0.0.0.0:9092,
TRACE://localhost:9093

port - the port to listen and accept connections on

Can I set both of the following? Does that make sense?

listeners=SASL_PLAINTEXT://:9092
port=9092


On Fri, Jul 15, 2016 at 12:26 PM, cs user <acldstk...@gmail.com> wrote:

> Yep, tried 0.10.0.0, all working fine :-)
>
> I was using 0.9.
>
> Apologies for the spam!
>
> On Fri, Jul 15, 2016 at 12:05 PM, Manikumar Reddy <
> manikumar.re...@gmail.com> wrote:
>
>> Hi,
>>
>> Which Kafka version you are using?
>> SASL/PLAIN support is available from Kafka 0.10.0.0 release onwards.
>>
>>
>> Thanks
>> Manikumar
>>
>> On Fri, Jul 15, 2016 at 4:22 PM, cs user <acldstk...@gmail.com> wrote:
>>
>> > Apologies, just to be clear, my broker settings are actually as below,
>> > using PLAINTEXT throughout:
>> >
>> > listeners=SASL_PLAINTEXT://host.name:port
>> > security.inter.broker.protocol=SASL_PLAINTEXT
>> > sasl.mechanism.inter.broker.protocol=PLAIN
>> > sasl.enabled.mechanisms=PLAIN
>> >
>> >
>> > On Fri, Jul 15, 2016 at 11:50 AM, cs user <acldstk...@gmail.com> wrote:
>> >
>> > > Hi All,
>> > >
>> > > I'm dipping my toes into kafka security, I'm following the guide here:
>> > >
>> >
>> http://kafka.apache.org/documentation.html#security_sasl_plain_brokerconfig
>> > >  and
>> > http://kafka.apache.org/documentation.html#security_sasl_brokerconfig
>> > >
>> > > My jaas config file looks like:
>> > >
>> > > KafkaServer {
>> > > org.apache.kafka.common.security.plain.PlainLoginModule
>> required
>> > > username="admin"
>> > > password="admin-secret"
>> > > user_admin="admin-secret"
>> > > user_alice="alice-secret";
>> > > };
>> > >
>> > > I pass the following to kafka on startup to load the above in:
>> > >
>> > > -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
>> > >
>> > >
>> > > I'm using the following for my broker settings, using PLAINTEXT
>> > throughout:
>> > >
>> > > listeners=SASL_PLAINTEXT://host.name:port
>> > > security.inter.broker.protocol=SASL_SSL
>> > > sasl.mechanism.inter.broker.protocol=PLAIN
>> > > sasl.enabled.mechanisms=PLAIN
>> > >
>> > >
>> > >
>> > > However when kafka starts up I get the following error message:
>> > >
>> > > Caused by: javax.security.auth.login.LoginException: unable to find
>> > > LoginModule class:
>> > org.apache.kafka.common.security.plain.PlainLoginModule
>> > >
>> > > Any idea why I would be getting this error?
>> > >
>> > > Thanks!
>> > >
>> >
>>
>
>


Re: Enabling PLAINTEXT inter broker security

2016-07-15 Thread cs user
Yep, tried 0.10.0.0, all working fine :-)

I was using 0.9.

Apologies for the spam!

On Fri, Jul 15, 2016 at 12:05 PM, Manikumar Reddy <manikumar.re...@gmail.com
> wrote:

> Hi,
>
> Which Kafka version you are using?
> SASL/PLAIN support is available from Kafka 0.10.0.0 release onwards.
>
>
> Thanks
> Manikumar
>
> On Fri, Jul 15, 2016 at 4:22 PM, cs user <acldstk...@gmail.com> wrote:
>
> > Apologies, just to be clear, my broker settings are actually as below,
> > using PLAINTEXT throughout:
> >
> > listeners=SASL_PLAINTEXT://host.name:port
> > security.inter.broker.protocol=SASL_PLAINTEXT
> > sasl.mechanism.inter.broker.protocol=PLAIN
> > sasl.enabled.mechanisms=PLAIN
> >
> >
> > On Fri, Jul 15, 2016 at 11:50 AM, cs user <acldstk...@gmail.com> wrote:
> >
> > > Hi All,
> > >
> > > I'm dipping my toes into kafka security, I'm following the guide here:
> > >
> >
> http://kafka.apache.org/documentation.html#security_sasl_plain_brokerconfig
> > >  and
> > http://kafka.apache.org/documentation.html#security_sasl_brokerconfig
> > >
> > > My jaas config file looks like:
> > >
> > > KafkaServer {
> > > org.apache.kafka.common.security.plain.PlainLoginModule
> required
> > > username="admin"
> > > password="admin-secret"
> > > user_admin="admin-secret"
> > > user_alice="alice-secret";
> > > };
> > >
> > > I pass the following to kafka on startup to load the above in:
> > >
> > > -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
> > >
> > >
> > > I'm using the following for my broker settings, using PLAINTEXT
> > throughout:
> > >
> > > listeners=SASL_PLAINTEXT://host.name:port
> > > security.inter.broker.protocol=SASL_SSL
> > > sasl.mechanism.inter.broker.protocol=PLAIN
> > > sasl.enabled.mechanisms=PLAIN
> > >
> > >
> > >
> > > However when kafka starts up I get the following error message:
> > >
> > > Caused by: javax.security.auth.login.LoginException: unable to find
> > > LoginModule class:
> > org.apache.kafka.common.security.plain.PlainLoginModule
> > >
> > > Any idea why I would be getting this error?
> > >
> > > Thanks!
> > >
> >
>


Re: Enabling PLAINTEXT inter broker security

2016-07-15 Thread cs user
Apologies, just to be clear, my broker settings are actually as below,
using PLAINTEXT throughout:

listeners=SASL_PLAINTEXT://host.name:port
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN


On Fri, Jul 15, 2016 at 11:50 AM, cs user <acldstk...@gmail.com> wrote:

> Hi All,
>
> I'm dipping my toes into kafka security, I'm following the guide here:
> http://kafka.apache.org/documentation.html#security_sasl_plain_brokerconfig
>  and http://kafka.apache.org/documentation.html#security_sasl_brokerconfig
>
> My jaas config file looks like:
>
> KafkaServer {
> org.apache.kafka.common.security.plain.PlainLoginModule required
> username="admin"
> password="admin-secret"
> user_admin="admin-secret"
> user_alice="alice-secret";
> };
>
> I pass the following to kafka on startup to load the above in:
>
> -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
>
>
> I'm using the following for my broker settings, using PLAINTEXT throughout:
>
> listeners=SASL_PLAINTEXT://host.name:port
> security.inter.broker.protocol=SASL_SSL
> sasl.mechanism.inter.broker.protocol=PLAIN
> sasl.enabled.mechanisms=PLAIN
>
>
>
> However when kafka starts up I get the following error message:
>
> Caused by: javax.security.auth.login.LoginException: unable to find
> LoginModule class: org.apache.kafka.common.security.plain.PlainLoginModule
>
> Any idea why I would be getting this error?
>
> Thanks!
>


Enabling PLAINTEXT inter broker security

2016-07-15 Thread cs user
Hi All,

I'm dipping my toes into kafka security and following the guides here:
http://kafka.apache.org/documentation.html#security_sasl_plain_brokerconfig
 and http://kafka.apache.org/documentation.html#security_sasl_brokerconfig

My jaas config file looks like:

KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};

I pass the following to kafka on startup to load the above in:

-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
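
For example, one way to pass this in is via the KAFKA_OPTS environment
variable, which kafka-run-class.sh picks up:

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"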


I'm using the following for my broker settings, using PLAINTEXT throughout:

listeners=SASL_PLAINTEXT://host.name:port
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN



However when kafka starts up I get the following error message:

Caused by: javax.security.auth.login.LoginException: unable to find
LoginModule class: org.apache.kafka.common.security.plain.PlainLoginModule

Any idea why I would be getting this error?

Thanks!


Re: Mirror maker setup - multi node

2016-06-28 Thread cs user
Hi there,

I mean 1 cluster with 3 nodes. So I will need to run the mirror maker
process on each of the 3 nodes in the cluster; in case of the loss of a
node, the other 2 will continue to pull messages off the source cluster.
It does seem to work correctly when I tested it. It just warns about topics
with only one partition, when multiple clients are trying to consume from
them:

"No broker partitions consumed by consumer thread"

One problem I have found is that when a topic is created, at first the
mirror is unable to consume messages instantly. It seems that only around
70% of the messages (say 7,000 out of 10,000) that were sent to the newly
created topic make it to the mirror.

After the first batch, however, the messages seem to be mirrored correctly.

I checked the mirror maker process logs and found the following, just as
the topic was created:

[2016-06-28 11:56:46,339] WARN Error while fetching metadata with
correlation id 0 : {topictest4=LEADER_NOT_AVAILABLE}
(org.apache.kafka.clients.NetworkClient)
[2016-06-28 11:56:46,545] WARN Error while fetching metadata with
correlation id 1 : {topictest4=LEADER_NOT_AVAILABLE}
(org.apache.kafka.clients.NetworkClient)
[2016-06-28 11:56:46,649] WARN Error while fetching metadata with
correlation id 2 : {topictest4=LEADER_NOT_AVAILABLE}
(org.apache.kafka.clients.NetworkClient)

Is there any reason why not all of the messages made it through? And is
there a way to reset the offset so it tries to sync the messages from the
beginning?
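
For example, would setting this in the mirror maker's consumer config make
it start from the earliest available offset when there is no committed
offset (the standard new-consumer setting; untested here)?

auto.offset.reset=earliest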

Thanks!

On Tue, Jun 28, 2016 at 12:00 PM, Gerard Klijs <gerard.kl...@dizzit.com>
wrote:

> With 3 nodes, I assume you mean 3 clusters? If I understand correctly, say
> you have 3 clusters, A, B, and C, you simultaneously:
> - want to copy from A and B to C, to get an aggregation in C
> - want to copy from A and C to B, to get a fail-back aggregation in B.
> Now what will happen when a message is produced in cluster A?
> - it will be copied to both C and B.
> - the copy will cause a new copy in C and B,
> etc.
> There are several ways out of this, depending on your use case. It's
> pretty easy to change the behaviour of the mirrormaker, for example to
> copy it to $topic-aggregation instead of $topic, and to not copy it when
> the topic ends with -aggregation.
>
> On Tue, Jun 28, 2016 at 10:15 AM cs user <acldstk...@gmail.com> wrote:
>
> > Hi All,
> >
> > So I understand I can run the following to aggregate topics from two
> > different clusters into a mirror:
> >
> > bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config
> > sourceCluster1Consumer.config --consumer.config
> > sourceCluster2Consumer.config --num.streams 2 --producer.config
> > targetClusterProducer.config --whitelist=".*"
> >
> > Let's say my kafka mirror cluster consists of 3 nodes; can the above
> > process be started on each of the 3 nodes, so that in the event it fails
> > on one, the other 2 will keep going?
> >
> > Or should only one of the nodes attempt to perform the aggregation?
> >
> > Thanks!
> >
>


Mirror maker setup - multi node

2016-06-28 Thread cs user
Hi All,

So I understand I can run the following to aggregate topics from two
different clusters into a mirror:

bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config
sourceCluster1Consumer.config --consumer.config
sourceCluster2Consumer.config --num.streams 2 --producer.config
targetClusterProducer.config --whitelist=".*"

Let's say my kafka mirror cluster consists of 3 nodes; can the above process
be started on each of the 3 nodes, so that in the event it fails on one, the
other 2 will keep going?

Or should only one of the nodes attempt to perform the aggregation?

Thanks!


Re: Kafka behind a load balancer

2016-06-03 Thread cs user
Hi Tom,

That's great, I thought as much, thanks for taking the time to respond,
much appreciated!

Cheers

On Fri, Jun 3, 2016 at 1:18 PM, Tom Crayford <tcrayf...@heroku.com> wrote:

> Hi,
>
> Kafka is designed to distribute traffic between brokers itself. It's
> naturally distributed and does not need, and indeed will not work behind,
> a load balancer. I'd recommend reading the docs for more, but
> http://kafka.apache.org/documentation.html#design_loadbalancing is a good
> start.
>
> Thanks
>
> Tom Crayford
> Heroku Kafka
>
> On Fri, Jun 3, 2016 at 1:15 PM, cs user <acldstk...@gmail.com> wrote:
>
> > Hi All,
> >
> > Does anyone have any experience of using kafka behind a load balancer?
> >
> > Would this work? Are there any reasons why you would not want to do it?
> >
> > Thanks!
> >
>


Kafka behind a load balancer

2016-06-03 Thread cs user
Hi All,

Does anyone have any experience of using kafka behind a load balancer?

Would this work? Are there any reasons why you would not want to do it?

Thanks!


Re: Kafka broker crash - broker id then changed

2016-05-26 Thread cs user
Hi Ben,

Thanks for responding. I can't imagine what would have cleaned /tmp up at
that time. I don't think we have anything in place to do that, and it also
appears to have happened to both machines at the same time.

It also appears that the other topics were not affected; there were still
other files present in /tmp.

Thanks!

On Thu, May 26, 2016 at 9:19 AM, Ben Davison <ben.davi...@7digital.com>
wrote:

> Possibly tmp got cleaned up?
>
> Seems like one of the log files was deleted while a producer was writing
> messages to it:
>
> On Thursday, 26 May 2016, cs user <acldstk...@gmail.com> wrote:
>
> > Hi All,
> >
> > We are running Kafka version 0.9.0.1; at the time the brokers crashed
> > yesterday we were running a 2-node cluster. This has now been increased
> > to 3.
> >
> > We are not specifying a broker id and relying on kafka generating one.
> >
> > After the brokers crashed (at exactly the same time) we left kafka
> > stopped for a while. After kafka was started back up, the broker ids on
> > both servers were incremented: they were 1001/1002 and they flipped to
> > 1003/1004. This seemed to cause some problems, as partitions were
> > assigned to broker ids which the cluster believed had disappeared and
> > so were not recoverable.
> >
> > We noticed that the broker ids are actually stored in:
> >
> > /tmp/kafka-logs/meta.properties
> >
> > So we set these back to what they were and restarted. Is there a reason
> > why these would change?
> >
> > Below are the error logs from each server:
> >
> > Server 1
> >
> > [2016-05-25 09:05:52,827] INFO [ReplicaFetcherManager on broker 1002]
> > Removed fetcher for partitions [Topic1Heartbeat,1]
> > (kafka.server.ReplicaFetcherManager)
> > [2016-05-25 09:05:52,831] INFO Completed load of log Topic1Heartbeat-1
> > with log end offset 0 (kafka.log.Log)
> > [2016-05-25 09:05:52,831] INFO Created log for partition
> > [Topic1Heartbeat,1] in /tmp/kafka-logs with properties
> > {compression.type -> producer, file.delete.delay.ms -> 60000,
> > max.message.bytes -> 1000012, min.insync.replicas -> 1,
> > segment.jitter.ms -> 0, preallocate -> false,
> > min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096,
> > unclean.leader.election.enable -> true, retention.bytes -> -1,
> > delete.retention.ms -> 86400000, cleanup.policy -> delete,
> > flush.ms -> 9223372036854775807, segment.ms -> 604800000,
> > segment.bytes -> 1073741824, retention.ms -> 604800000,
> > segment.index.bytes -> 10485760,
> > flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
> > [2016-05-25 09:05:52,831] INFO Partition [Topic1Heartbeat,1] on broker
> > 1002: No checkpointed highwatermark is found for partition
> > [Topic1Heartbeat,1] (kafka.cluster.Partition)
> > [2016-05-25 09:14:12,189] INFO [GroupCoordinator 1002]: Preparing to
> > restabilize group Topic1 with old generation 0
> > (kafka.coordinator.GroupCoordinator)
> > [2016-05-25 09:14:12,190] INFO [GroupCoordinator 1002]: Stabilized group
> > Topic1 generation 1 (kafka.coordinator.GroupCoordinator)
> > [2016-05-25 09:14:12,195] INFO [GroupCoordinator 1002]: Assignment
> > received from leader for group Topic1 for generation 1
> > (kafka.coordinator.GroupCoordinator)
> > [2016-05-25 09:14:12,749] FATAL [Replica Manager on Broker 1002]: Halting
> > due to unrecoverable I/O error while handling produce request:
> >  (kafka.server.ReplicaManager)
> > kafka.common.KafkaStorageException: I/O exception in append to log
> > '__consumer_offsets-0'
> > at kafka.log.Log.append(Log.scala:318)
> > at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
> > at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
> > at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> > at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
> > at
> > kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
> > at
> > kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
> > at
> > kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
> > at
> > scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
> > at
> > scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
> > at scala.colle

Kafka broker crash - broker id then changed

2016-05-26 Thread cs user
Hi All,

We are running Kafka version 0.9.0.1; at the time the brokers crashed
yesterday we were running a 2-node cluster. This has now been increased
to 3.

We are not specifying a broker id and relying on kafka generating one.

After the brokers crashed (at exactly the same time) we left kafka stopped
for a while. After kafka was started back up, the broker ids on both
servers were incremented: they were 1001/1002 and they flipped to
1003/1004. This seemed to cause some problems, as partitions were assigned
to broker ids which the cluster believed had disappeared and so were not
recoverable.

We noticed that the broker ids are actually stored in:

/tmp/kafka-logs/meta.properties
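
For reference, that file contains little more than the following (the
values shown here are illustrative):

version=0
broker.id=1001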

So we set these back to what they were and restarted. Is there a reason why
these would change?

Below are the error logs from each server:

Server 1

[2016-05-25 09:05:52,827] INFO [ReplicaFetcherManager on broker 1002]
Removed fetcher for partitions [Topic1Heartbeat,1]
(kafka.server.ReplicaFetcherManager)
[2016-05-25 09:05:52,831] INFO Completed load of log Topic1Heartbeat-1 with
log end offset 0 (kafka.log.Log)
[2016-05-25 09:05:52,831] INFO Created log for partition
[Topic1Heartbeat,1] in /tmp/kafka-logs with properties {compression.type ->
producer, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012,
min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false,
min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096,
unclean.leader.election.enable -> true, retention.bytes -> -1,
delete.retention.ms -> 86400000, cleanup.policy -> delete, flush.ms ->
9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824,
retention.ms -> 604800000, segment.index.bytes -> 10485760, flush.messages
-> 9223372036854775807}. (kafka.log.LogManager)
[2016-05-25 09:05:52,831] INFO Partition [Topic1Heartbeat,1] on broker
1002: No checkpointed highwatermark is found for partition
[Topic1Heartbeat,1] (kafka.cluster.Partition)
[2016-05-25 09:14:12,189] INFO [GroupCoordinator 1002]: Preparing to
restabilize group Topic1 with old generation 0
(kafka.coordinator.GroupCoordinator)
[2016-05-25 09:14:12,190] INFO [GroupCoordinator 1002]: Stabilized group
Topic1 generation 1 (kafka.coordinator.GroupCoordinator)
[2016-05-25 09:14:12,195] INFO [GroupCoordinator 1002]: Assignment received
from leader for group Topic1 for generation 1
(kafka.coordinator.GroupCoordinator)
[2016-05-25 09:14:12,749] FATAL [Replica Manager on Broker 1002]: Halting
due to unrecoverable I/O error while handling produce request:
 (kafka.server.ReplicaManager)
kafka.common.KafkaStorageException: I/O exception in append to log
'__consumer_offsets-0'
at kafka.log.Log.append(Log.scala:318)
at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
at
kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
at
kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
at
kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
at
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.immutable.Map$Map1.foreach(Map.scala:116)
at
scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at
kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:386)
at
kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:322)
at
kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:228)
at
kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
at
kafka.coordinator.GroupCoordinator$$anonfun$handleCommitOffsets$9.apply(GroupCoordinator.scala:429)
at scala.Option.foreach(Option.scala:257)
at
kafka.coordinator.GroupCoordinator.handleCommitOffsets(GroupCoordinator.scala:429)
at
kafka.server.KafkaApis.handleOffsetCommitRequest(KafkaApis.scala:280)
at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
at
kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException:
/tmp/kafka-logs/__consumer_offsets-0/00000000000000000000.index (No such
file or directory)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at
kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:277)
at