On Wed, Mar 20, 2013 at 8:41 AM, Jason Rosenberg <j...@squareup.com> wrote:
> I think it might be cool to have a feature whereby you can tell a broker
> to stop accepting new data produced to it, but still allow consumers to
> consume from it.
>
> That way, you can roll out new brokers to a cluster, turn off producing to
> the old nodes, then wait for the log retention period, and then remove the
> old nodes from the cluster.
>
> Does that make sense?  Could it be easily done?

It does to me. One way to do this is to place a load balancer between your
producers and brokers, allowing individual brokers to be taken out of the
produce rotation for maintenance while consumers continue to drain them.
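
For example (a sketch, not something I've tested -- the VIP hostname below
is hypothetical): 0.7 producers can use the static broker.list property
instead of zk.connect, pointed at a VIP on the load balancer; removing a
broker from the LB pool then stops new produce traffic to it, while
consumers, which discover brokers through zookeeper, keep draining it.

  # producer.properties -- all produce traffic goes through the LB VIP
  # broker.list format is brokerid:host:port; the id 0 is a placeholder
  broker.list=0:kafka-vip.example.com:9092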

>
> Or is all this a non-issue anyway in 0.8?
>
> Maybe I should just wait for 0.8 to be ready before doing my migration
> anyway...
>
> Jason
>
> On Wed, Mar 20, 2013 at 6:42 AM, Neha Narkhede <neha.narkh...@gmail.com> wrote:
>
>> The zookeeper connection URL with namespace can be
>> zkhost1:123,zkhost2:123/newnamespace
>>
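>> Note the chroot path goes once at the end of the whole host list, not
>> after each host. In the client properties that looks like (hostnames as
>> above):
>>
>>   # consumer.properties -- the same zk.connect form works for producers
>>   zk.connect=zkhost1:123,zkhost2:123/newnamespace
>>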
>> The wiki is up to date for Kafka 0.7.2. There is no officially supported
>> feature for that sort of migration; I suggested one approach that I could
>> think of :-)
>>
>> Thanks,
>> Neha
>>
>> On Tuesday, March 19, 2013, Jason Rosenberg wrote:
>>
>> > I can do most of that, I presume.
>> >
>> > It looks like, to set up a separate namespace for zk, I can add /path at
>> > the end of each node:port in my zkconnect string, e.g.:
>> >  zkhost1:123/newnamespace,zkhost2:123/newnamespace
>> > right?
>> >
>> > For mirroring, there's some vague documentation here:
>> > https://cwiki.apache.org/KAFKA/kafka-mirroring-mirrormaker.html
>> > Is this the most up-to-date approach for 0.7.2?  Set up a MirrorMaker
>> > intermediate process that consumes from the old and produces to the new?
>> >
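>> > From that page, I'd guess the invocation looks roughly like the
>> > following (a sketch -- the .properties file names are mine, and whether
>> > this exact tool ships with 0.7.2 is part of my question):
>> >
>> >   # consumes from the old cluster, produces to the new one
>> >   bin/kafka-run-class.sh kafka.tools.MirrorMaker \
>> >     --consumer.config old-cluster.consumer.properties \
>> >     --producer.config new-cluster.producer.properties \
>> >     --whitelist=".*"
>> >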
>> > I am not able to restart producers one by one (as there are many, on a
>> > rather asynchronous update/restart cycle).  But I can eventually get them
>> > migrated over, etc.
>> >
>> > Jason
>> >
>> > On Tue, Mar 19, 2013 at 7:07 PM, Neha Narkhede
>> > <neha.narkh...@gmail.com> wrote:
>> >
>> > > Can you do the following -
>> > >
>> > > 1. Start a mirror Kafka cluster with the new version on a separate
>> > > zookeeper namespace. Configure this to mirror data from the existing
>> > > kafka cluster.
>> > > 2. Move your consumers to pull data from the mirror.
>> > > 3. For each producer, one at a time, change the zookeeper namespace to
>> > > point to the mirror and restart the producer (see the sketch below).
>> > > 4. Once the producers have moved to the mirror cluster, shut down
>> > > mirroring and the old cluster.
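>> > >
>> > > For step 3, the change is just the producer's zookeeper string, e.g.
>> > > (hostnames as above; the plain string is the old cluster, the
>> > > chrooted one the mirror):
>> > >
>> > >   # producer.properties, before the move:
>> > >   zk.connect=zkhost1:123,zkhost2:123
>> > >   # after the move -- same ensemble, chrooted to the mirror cluster:
>> > >   zk.connect=zkhost1:123,zkhost2:123/newnamespace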
>> > >
>> > > Thanks,
>> > > Neha
>> > >
>> > > On Tuesday, March 19, 2013, Jason Rosenberg wrote:
>> > >
>> > > > I need to upgrade some kafka broker servers.  So I need to seamlessly
>> > > > migrate traffic from the old brokers to the new ones, without losing
>> > > > data, and without stopping producers.  I can temporarily stop
>> > > > consumers, etc.
>> > > >
>> > > > Is there a strategy for this?
>> > > >
>> > > > Also, because of the way we are embedding kafka in our framework, our
>> > > > brokerIds are auto-generated (based on hostname, etc.), so I can't
>> > > > simply copy over broker log files, etc., by transferring an old
>> > > > brokerId to a new host.
>> > > >
>> > > > Is there a way to change the view of the cluster from the producers'
>> > > > standpoint, without doing so from the consumers' standpoint?  That
>> > > > way, the producers can start writing to the new brokers, while the
>> > > > consumers drain all data from the old brokers before switching to
>> > > > the new brokers.
>> > > >
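>> > > > One thought: maybe I could give the producers a static broker.list
>> > > > (ids and hostnames below are made up) while the consumers keep using
>> > > > zk.connect and so still see, and drain, the old brokers:
>> > > >
>> > > >   # producer.properties -- producers only see the new brokers
>> > > >   broker.list=1:newbroker1:9092,2:newbroker2:9092
>> > > >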
>> > > > I don't actually care about ordering of messages, since the
>> > > > consumers are publishing them to a store that will index them
>> > > > properly based on source timestamp, etc.
>> > > >
>> > > > We are using zk for both producer and consumer connections.
>> > > >
>> > > > This is using 0.7.2.  I assume in 0.8 it will be easier, since with
>> > > > replication, you can phase in the new servers gradually, etc., no?
>> > > >
>> > > > Thanks,
>> > > >
>> > > > Jason
>> > > >
>> > >
>> >
>>
