Re: [External Email] Re: Upgrading Kafka Kraft in Kubernetes

2023-10-20 Thread Jakub Scholz
It is definitely confusing that the --bootstrap-server option works only
when it comes before the upgrade subcommand. Maybe it is worth opening a JIRA for it?

Jakub

On Fri, Oct 20, 2023 at 7:40 AM Soukal, Jiří 
wrote:

> Hello Jakub,
>
> Thank you for your reply. I had tried the --bootstrap-server option before,
> but I had used it incorrectly.
>
> this gives unrecognized arguments: '--bootstrap-server'
> ./bin/kafka-features.sh upgrade --metadata 3.6 --bootstrap-server
> [server:port]
>
> but this works:
> ./bin/kafka-features.sh --bootstrap-server [server:port] upgrade
> --metadata 3.6
>
> This is doable using a declarative Kubernetes Job.
>
> Thank you very much, I appreciate your help.
>
> Jiri
>
> From: Jakub Scholz 
> Sent: Thursday, October 19, 2023 5:18 PM
> To: users@kafka.apache.org
> Subject: [External Email] Re: Upgrading Kafka Kraft in Kubernetes
>
> Hi Jiří,
>
> Why can't you run it from another Pod? You should be able to specify
> --bootstrap-server and point it to the brokers to connect to. You can also
> pass further properties to it using the --command-config option. It should
> also be possible to use it from the Admin API
> <https://kafka.apache.org/36/javadoc/org/apache/kafka/clients/admin/Admin.html#updateFeatures(java.util.Map,org.apache.kafka.clients.admin.UpdateFeaturesOptions)>
> directly from anywhere if needed.
>
> But there is indeed no way to manage this declaratively in the Kafka
> properties file as it was possible with inter.broker.protocol.version. It
> also works a bit differently from how inter.broker.protocol.version worked
> before KRaft:
> * I think it does more checking of whether all nodes in the cluster
> support the version, etc.
> * You can't really downgrade it easily (at least not safely).
>
> So maybe it is better that you cannot just change some environment
> variables, as that might result in crash-looping pods.
>
> Jakub
>
>
> On Thu, Oct 19, 2023 at 2:58 PM Soukal, Jiří <j.sou...@quadient.com.invalid>
> wrote:
>
> > Hello all,
> > The final step of the upgrade procedure is to run a command like:
> >
> > "./bin/kafka-features.sh upgrade --metadata 3.6"
> >
> > In the Kubernetes world, which works with desired-state configuration
> > (YAMLs and their upper-level abstractions), this is quite complicated.
> >
> > The first thing I tried to find out is whether I can call it from another
> > kafka pod while specifying a server to connect to; however, it is not
> > possible. It needs to be run from within the running kafka pod. This leads
> > to exec-ing into the pod or running kubectl exec (e.g. kubectl exec
> > kafka-0 -n cloud -- bin/kafka-configs.sh ... )
> >
> > However, this command is also imperative instead of declarative.
> >
> > Question: Is there another approach? E.g. driving the metadata version
> > via an ENV variable or file configuration?
> >
> > It seems like it was designed without the Kubernetes world in mind.
> >
>


RE: [External Email] Re: Upgrading Kafka Kraft in Kubernetes

2023-10-19 Thread Soukal, Jiří
Hello Jakub,

Thank you for your reply. I had tried the --bootstrap-server option before, but 
I had used it incorrectly.

this gives unrecognized arguments: '--bootstrap-server'
./bin/kafka-features.sh upgrade --metadata 3.6 --bootstrap-server [server:port]

but this works:
./bin/kafka-features.sh --bootstrap-server [server:port] upgrade --metadata 3.6

This is doable using a declarative Kubernetes Job.
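
For illustration, a minimal sketch of such a Job; the image name, tool path, and
broker address below are placeholders, not from this thread (any image that ships
the Kafka CLI tools should work):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kafka-metadata-upgrade              # hypothetical name
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: kafka-features
          image: registry.example.com/kafka:3.6   # hypothetical image with the CLI tools
          command:
            - /opt/kafka/bin/kafka-features.sh    # path depends on the image layout
            - --bootstrap-server                  # note: before the upgrade subcommand
            - kafka-0.kafka.cloud.svc:9092        # hypothetical broker address
            - upgrade
            - --metadata
            - "3.6"
```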

Thank you very much, I appreciate your help.

Jiri

From: Jakub Scholz 
Sent: Thursday, October 19, 2023 5:18 PM
To: users@kafka.apache.org
Subject: [External Email] Re: Upgrading Kafka Kraft in Kubernetes

Hi Jiří,

Why can't you run it from another Pod? You should be able to specify
--bootstrap-server and point it to the brokers to connect to. You can also
pass further properties to it using the --command-config option. It should
also be possible to use it from the Admin API
<https://kafka.apache.org/36/javadoc/org/apache/kafka/clients/admin/Admin.html#updateFeatures(java.util.Map,org.apache.kafka.clients.admin.UpdateFeaturesOptions)>
directly from anywhere if needed.
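
For illustration, a minimal sketch of that call, assuming Java 9+ and
kafka-clients 3.x on the classpath; the bootstrap address is a placeholder,
and the feature level shown is an assumption to verify with
"kafka-features.sh describe" before use:

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.FeatureUpdate;
import org.apache.kafka.clients.admin.UpdateFeaturesOptions;

public class MetadataUpgrade {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Hypothetical bootstrap address; point it at any broker.
        props.put("bootstrap.servers", "kafka-0.kafka.cloud.svc:9092");
        try (Admin admin = Admin.create(props)) {
            // "metadata.version" is the KRaft feature name; level 14 is assumed
            // here to correspond to metadata version 3.6 (3.6-IV2). Verify with
            // "kafka-features.sh describe" for your release before using it.
            FeatureUpdate update =
                new FeatureUpdate((short) 14, FeatureUpdate.UpgradeType.UPGRADE);
            admin.updateFeatures(Map.of("metadata.version", update),
                                 new UpdateFeaturesOptions())
                 .all()
                 .get(); // block until the controller applies the update
        }
    }
}
```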

But there is indeed no way to manage this declaratively in the Kafka
properties file as it was possible with inter.broker.protocol.version. It
also works a bit differently from how inter.broker.protocol.version worked
before KRaft:
* I think it does more checking of whether all nodes in the cluster
support the version, etc.
* You can't really downgrade it easily (at least not safely).

So maybe it is better that you cannot just change some environment
variables, as that might result in crash-looping pods.

Jakub


On Thu, Oct 19, 2023 at 2:58 PM Soukal, Jiří <j.sou...@quadient.com.invalid>
wrote:

> Hello all,
> The final step of the upgrade procedure is to run a command like:
>
> "./bin/kafka-features.sh upgrade --metadata 3.6"
>
> In the Kubernetes world, which works with desired-state configuration
> (YAMLs and their upper-level abstractions), this is quite complicated.
>
> The first thing I tried to find out is whether I can call it from another
> kafka pod while specifying a server to connect to; however, it is not
> possible. It needs to be run from within the running kafka pod. This leads
> to exec-ing into the pod or running kubectl exec (e.g. kubectl exec kafka-0
> -n cloud -- bin/kafka-configs.sh ... )
>
> However, this command is also imperative instead of declarative.
>
> Question: Is there another approach? E.g. driving the metadata version via
> an ENV variable or file configuration?
>
> It seems like it was designed without the Kubernetes world in mind.
>


Re: Upgrading Kafka Kraft in Kubernetes

2023-10-19 Thread Jakub Scholz
Hi Jiří,

Why can't you run it from another Pod? You should be able to specify
--bootstrap-server and point it to the brokers to connect to. You can also
pass further properties to it using the --command-config option. It should
also be possible to use it from the Admin API
<https://kafka.apache.org/36/javadoc/org/apache/kafka/clients/admin/Admin.html#updateFeatures(java.util.Map,org.apache.kafka.clients.admin.UpdateFeaturesOptions)>
directly from anywhere if needed.

But there is indeed no way to manage this declaratively in the Kafka
properties file as it was possible with inter.broker.protocol.version. It
also works a bit differently from how inter.broker.protocol.version worked
before KRaft:
* I think it does more checking of whether all nodes in the cluster
support the version, etc.
* You can't really downgrade it easily (at least not safely).

So maybe it is better that you cannot just change some environment
variables, as that might result in crash-looping pods.

Jakub


On Thu, Oct 19, 2023 at 2:58 PM Soukal, Jiří 
wrote:

> Hello all,
> The final step of the upgrade procedure is to run a command like:
>
> "./bin/kafka-features.sh upgrade --metadata 3.6"
>
> In the Kubernetes world, which works with desired-state configuration
> (YAMLs and their upper-level abstractions), this is quite complicated.
>
> The first thing I tried to find out is whether I can call it from another
> kafka pod while specifying a server to connect to; however, it is not
> possible. It needs to be run from within the running kafka pod. This leads
> to exec-ing into the pod or running kubectl exec (e.g. kubectl exec kafka-0
> -n cloud -- bin/kafka-configs.sh ... )
>
> However, this command is also imperative instead of declarative.
>
> Question: Is there another approach? E.g. driving the metadata version via
> an ENV variable or file configuration?
>
> It seems like it was designed without the Kubernetes world in mind.
>


Upgrading Kafka Kraft in Kubernetes

2023-10-19 Thread Soukal, Jiří
Hello all,
The final step of the upgrade procedure is to run a command like:

"./bin/kafka-features.sh upgrade --metadata 3.6"

In the Kubernetes world, which works with desired-state configuration (YAMLs 
and their upper-level abstractions), this is quite complicated.

The first thing I tried to find out is whether I can call it from another kafka 
pod while specifying a server to connect to; however, it is not possible. It 
needs to be run from within the running kafka pod. This leads to exec-ing into 
the pod or running kubectl exec (e.g. kubectl exec kafka-0 -n cloud -- 
bin/kafka-configs.sh ... )

However, this command is also imperative instead of declarative.

Question: Is there another approach? E.g. driving the metadata version via an ENV 
variable or file configuration?

It seems like it was designed without the Kubernetes world in mind.


Upgrading kafka from 2.5 to 2.8

2022-04-03 Thread vinay kumar
Hi,

Is there any wiki for upgarading kafka from 2.5 to 2.8 in production


Re: `java.lang.NoSuchFieldError: DEFAULT_SASL_ENABLED_MECHANISMS` after upgrading `Kafka-clients` from 2.5.0 to 3.0.0

2021-09-24 Thread Stig Rohde Døssing
I've had something similar on a different embedded kafka project. Most
likely your issue is that you are putting kafka-clients 3.0.0 on the
classpath alongside the Kafka server in version 2.7.1, which is the version
brought in by your spring-kafka-test dependency. Since the Kafka server
itself depends on kafka-clients, if you upgrade kafka-clients but not the
server on the same classpath, you might get code mismatches like this.

I think you need to wait for a new version of spring-kafka-test. You can
try bumping the org.apache.kafka:kafka_2.13 dependency to 3.0.0, but
there's no guarantee it will work.
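
For illustration, a hedged sketch of what that bump might look like in the
pom.xml; the coordinates come from this thread, and the version override is
just the experiment suggested above:

```xml
<!-- Sketch only: pin the embedded broker that spring-kafka-test pulls in to
     match the upgraded kafka-clients. As noted, there is no guarantee that
     this combination works. -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.13</artifactId>
    <version>3.0.0</version>
    <scope>test</scope>
</dependency>
```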

On Fri, Sep 24, 2021 at 09:24, Bruno Cadonna wrote:

> Hi Bruce,
>
> I do not know the specific root cause of your errors but what I found is
> that Spring 2.7.x is compatible with clients 2.7.0 and 2.8.0, not with
> 3.0.0 and 2.8.1:
>
> https://spring.io/projects/spring-kafka
>
> Best.
> Bruno
>
> On 24.09.21 00:25, Chang Liu wrote:
> > Hi Kafka users,
> >
> > I started running into the following error after upgrading `kafka-clients`
> from 2.5.0 to 3.0.0. And I see the same error with 2.8.1. I don’t see a
> working solution by searching on Google:
> https://stackoverflow.com/questions/46914225/kafka-cannot-create-embedded-kafka-server
> >
> > This looks like a backward incompatibility in kafka-clients. Do you happen
> to know a solution for this?
> >
> > ```
> > java.lang.NoSuchFieldError: DEFAULT_SASL_ENABLED_MECHANISMS
> >
> >   at kafka.server.Defaults$.<init>(KafkaConfig.scala:242)
> >   at kafka.server.Defaults$.<clinit>(KafkaConfig.scala)
> >   at kafka.server.KafkaConfig$.<init>(KafkaConfig.scala:961)
> >   at kafka.server.KafkaConfig$.<clinit>(KafkaConfig.scala)
> >   at kafka.server.KafkaConfig.LogDirProp(KafkaConfig.scala)
> >   at
> org.springframework.kafka.test.EmbeddedKafkaBroker.afterPropertiesSet(EmbeddedKafkaBroker.java:298)
> >   at
> org.springframework.kafka.test.rule.EmbeddedKafkaRule.before(EmbeddedKafkaRule.java:113)
> >   at
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:50)
> >   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> >   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> >   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> >   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> >   at
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
> >   at
> com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
> >   at
> com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:221)
> >   at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
> > ```
> >
> > I got a suggestion, which is to upgrade the Spring library.
> >
> > This is the `pom.xml` that defines all my dependencies. I only upgraded
> the `kafka-clients` in production:
> https://github.com/opensearch-project/security/blob/main/pom.xml#L84
> >
> > The test dependency still remains:
> https://github.com/opensearch-project/security/blob/main/pom.xml#L503
> >
> > Is this the Spring library that should be upgraded?
> https://github.com/opensearch-project/security/blob/main/pom.xml#L495
> >
> > But even after I upgraded the Spring library to 2.7.7:
> https://github.com/opensearch-project/security/blob/main/pom.xml#L496 ,
> I got another error:
> >
> > `java.lang.NoClassDefFoundError:
> org/apache/kafka/common/record/BufferSupplier`
> >
> > Any suggestion to help me out with this would be highly appreciated!
> >
> > Thanks,
> > Bruce
> >
>


Re: `java.lang.NoSuchFieldError: DEFAULT_SASL_ENABLED_MECHANISMS` after upgrading `Kafka-clients` from 2.5.0 to 3.0.0

2021-09-24 Thread Bruno Cadonna

Hi Bruce,

I do not know the specific root cause of your errors but what I found is 
that Spring 2.7.x is compatible with clients 2.7.0 and 2.8.0, not with 
3.0.0 and 2.8.1:


https://spring.io/projects/spring-kafka

Best.
Bruno

On 24.09.21 00:25, Chang Liu wrote:

Hi Kafka users,

I started running into the following error after upgrading `kafka-clients` from 2.5.0 
to 3.0.0. And I see the same error with 2.8.1. I don’t see a working solution by 
searching on Google: 
https://stackoverflow.com/questions/46914225/kafka-cannot-create-embedded-kafka-server

This looks like a backward incompatibility in kafka-clients. Do you happen to 
know a solution for this?

```
java.lang.NoSuchFieldError: DEFAULT_SASL_ENABLED_MECHANISMS

at kafka.server.Defaults$.<init>(KafkaConfig.scala:242)
at kafka.server.Defaults$.<clinit>(KafkaConfig.scala)
at kafka.server.KafkaConfig$.<init>(KafkaConfig.scala:961)
at kafka.server.KafkaConfig$.<clinit>(KafkaConfig.scala)
at kafka.server.KafkaConfig.LogDirProp(KafkaConfig.scala)
at 
org.springframework.kafka.test.EmbeddedKafkaBroker.afterPropertiesSet(EmbeddedKafkaBroker.java:298)
at 
org.springframework.kafka.test.rule.EmbeddedKafkaRule.before(EmbeddedKafkaRule.java:113)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:50)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
at 
com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
at 
com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:221)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
```

I got a suggestion, which is to upgrade the Spring library.

This is the `pom.xml` that defines all my dependencies. I only upgraded the 
`kafka-clients` in production: 
https://github.com/opensearch-project/security/blob/main/pom.xml#L84

The test dependency still remains: 
https://github.com/opensearch-project/security/blob/main/pom.xml#L503

Is this the Spring library that should be upgraded? 
https://github.com/opensearch-project/security/blob/main/pom.xml#L495

But even after I upgraded the Spring library to 2.7.7: 
https://github.com/opensearch-project/security/blob/main/pom.xml#L496 , I got 
another error:

`java.lang.NoClassDefFoundError: org/apache/kafka/common/record/BufferSupplier`

Any suggestion to help me out with this would be highly appreciated!

Thanks,
Bruce



`java.lang.NoSuchFieldError: DEFAULT_SASL_ENABLED_MECHANISMS` after upgrading `Kafka-clients` from 2.5.0 to 3.0.0

2021-09-23 Thread Chang Liu
Hi Kafka users,

I started running into the following error after upgrading `kafka-clients` from 
2.5.0 to 3.0.0. And I see the same error with 2.8.1. I don’t see a working 
solution by searching on Google: 
https://stackoverflow.com/questions/46914225/kafka-cannot-create-embedded-kafka-server

This looks like a backward incompatibility in kafka-clients. Do you happen to 
know a solution for this?

```
java.lang.NoSuchFieldError: DEFAULT_SASL_ENABLED_MECHANISMS

at kafka.server.Defaults$.<init>(KafkaConfig.scala:242)
at kafka.server.Defaults$.<clinit>(KafkaConfig.scala)
at kafka.server.KafkaConfig$.<init>(KafkaConfig.scala:961)
at kafka.server.KafkaConfig$.<clinit>(KafkaConfig.scala)
at kafka.server.KafkaConfig.LogDirProp(KafkaConfig.scala)
at 
org.springframework.kafka.test.EmbeddedKafkaBroker.afterPropertiesSet(EmbeddedKafkaBroker.java:298)
at 
org.springframework.kafka.test.rule.EmbeddedKafkaRule.before(EmbeddedKafkaRule.java:113)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:50)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
at 
com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
at 
com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:221)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
```

I got a suggestion, which is to upgrade the Spring library.

This is the `pom.xml` that defines all my dependencies. I only upgraded the 
`kafka-clients` in production: 
https://github.com/opensearch-project/security/blob/main/pom.xml#L84

The test dependency still remains: 
https://github.com/opensearch-project/security/blob/main/pom.xml#L503

Is this the Spring library that should be upgraded? 
https://github.com/opensearch-project/security/blob/main/pom.xml#L495

But even after I upgraded the Spring library to 2.7.7: 
https://github.com/opensearch-project/security/blob/main/pom.xml#L496 , I got 
another error:

`java.lang.NoClassDefFoundError: org/apache/kafka/common/record/BufferSupplier`

Any suggestion to help me out with this would be highly appreciated!

Thanks,
Bruce

Re: BufferOverflowException on rolling new segment after upgrading Kafka from 1.1.0 to 2.3.1

2019-11-27 Thread Daniyar Kulakhmetov
Bumping this up with a new update:

I've investigated another occurrence of this exception.

For the analysis, I used:
1) a memory dump that was taken from the broker
2) kafka log file
3) kafka state-change log
4) log, index and time-index files of a failed segment
5) Kafka source code, version 2.3.1 and 1.1.0

Here's how the exception looks in the Kafka log:

2019/11/19 16:03:00 INFO [ProducerStateManager partition=ad_group_metrics-62] Writing producer snapshot at offset 13886052 (kafka.log.ProducerStateManager)
2019/11/19 16:03:00 INFO [Log partition=ad_group_metrics-62, dir=/mnt/kafka] Rolled new log segment at offset 13886052 in 1 ms. (kafka.log.Log)
2019/11/19 16:03:00 ERROR [ReplicaManager broker=17] Error processing append operation on partition ad_group_metrics-62 (kafka.server.ReplicaManager)
2019/11/19 16:03:00 java.nio.BufferOverflowException
2019/11/19 16:03:00 at java.nio.Buffer.nextPutIndex(Buffer.java:527)
2019/11/19 16:03:00 at java.nio.DirectByteBuffer.putLong(DirectByteBuffer.java:797)
2019/11/19 16:03:00 at kafka.log.TimeIndex.$anonfun$maybeAppend$1(TimeIndex.scala:134)
2019/11/19 16:03:00 at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
2019/11/19 16:03:00 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
2019/11/19 16:03:00 at kafka.log.TimeIndex.maybeAppend(TimeIndex.scala:114)
2019/11/19 16:03:00 at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:520)
2019/11/19 16:03:00 at kafka.log.Log.$anonfun$roll$8(Log.scala:1690)
2019/11/19 16:03:00 at kafka.log.Log.$anonfun$roll$8$adapted(Log.scala:1690)
2019/11/19 16:03:00 at scala.Option.foreach(Option.scala:407)
2019/11/19 16:03:00 at kafka.log.Log.$anonfun$roll$2(Log.scala:1690)
2019/11/19 16:03:00 at kafka.log.Log.maybeHandleIOException(Log.scala:2085)
2019/11/19 16:03:00 at kafka.log.Log.roll(Log.scala:1654)
2019/11/19 16:03:00 at kafka.log.Log.maybeRoll(Log.scala:1639)
2019/11/19 16:03:00 at kafka.log.Log.$anonfun$append$2(Log.scala:966)
2019/11/19 16:03:00 at kafka.log.Log.maybeHandleIOException(Log.scala:2085)
2019/11/19 16:03:00 at kafka.log.Log.append(Log.scala:850)
2019/11/19 16:03:00 at kafka.log.Log.appendAsLeader(Log.scala:819)
2019/11/19 16:03:00 at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:772)
2019/11/19 16:03:00 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
2019/11/19 16:03:00 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:259)
2019/11/19 16:03:00 at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:759)
2019/11/19 16:03:00 at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$2(ReplicaManager.scala:763)
2019/11/19 16:03:00 at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
2019/11/19 16:03:00 at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
2019/11/19 16:03:00 at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
2019/11/19 16:03:00 at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
2019/11/19 16:03:00 at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
2019/11/19 16:03:00 at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
2019/11/19 16:03:00 at scala.collection.TraversableLike.map(TraversableLike.scala:238)
2019/11/19 16:03:00 at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
2019/11/19 16:03:00 at scala.collection.AbstractTraversable.map(Traversable.scala:108)
2019/11/19 16:03:00 at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:751)
2019/11/19 16:03:00 at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:492)
2019/11/19 16:03:00 at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:544)
2019/11/19 16:03:00 at kafka.server.KafkaApis.handle(KafkaApis.scala:113)
2019/11/19 16:03:00 at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
2019/11/19 16:03:00 at java.lang.Thread.run(Thread.java:748) ...


What we see here is that a new segment was rolled out at offset
13886052, and then an exception happened while trying to mark *some* segment
as inactive (`onBecomeInactiveSegment`) while appending new messages to the
Log. The exact timing between rolling out a new segment and appending new
messages doesn't play a role; there are many other similar exceptions where
this occurs a few seconds after a new segment is rolled out.

I managed to find the `LogSegment` object for the offset 13886052 in the
memory dump. I followed the source code logic, checking the LogSegment
state and Kafka logs, and found that the `TimeIndex` object had somehow gone
into a state with 0 entries and 0 max possible entries (and an empty
memory map). Having 0 entries is normal for TimeIndex and OffsetIndex, even
if there are some records in the Log, unless their size passes some
threshold. But having 0 max possible entries along with 0 entries made the
TimeIndex be considered full (0 entries == 0 max entries) and was
triggering the rolling out of a new segment. The Log was 
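
To make the arithmetic above concrete, a minimal sketch in plain Java
(illustration only, not Kafka source code):

```java
public class TimeIndexFullCheck {
    public static void main(String[] args) {
        // Illustration of the state described above: the TimeIndex ended up
        // with 0 entries and 0 max possible entries (an empty memory map).
        int maxEntries = 0; // derived from a zero-length memory map
        int entries = 0;    // nothing has been appended yet

        // An index is treated as full once entries reach maxEntries, so an
        // empty zero-capacity index looks "full" and keeps triggering rolls.
        boolean isFull = entries >= maxEntries;
        System.out.println("isFull = " + isFull); // prints: isFull = true
    }
}
```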

Re: BufferOverflowException on rolling new segment after upgrading Kafka from 1.1.0 to 2.3.1

2019-11-19 Thread Daniyar Kulakhmetov
Filed JIRA bug:
https://issues.apache.org/jira/browse/KAFKA-9213



On Tue, Nov 19, 2019 at 2:58 PM Ismael Juma  wrote:

> Can you please file a JIRA?
>
> Ismael
>
> On Tue, Nov 19, 2019 at 11:52 AM Daniyar Kulakhmetov <
> dkulakhme...@liftoff.io> wrote:
>
> > Hi Kafka users,
> >
> > We updated our Kafka cluster from version 1.1.0 to 2.3.1.
> > The message format and inter-broker protocol versions were left the same:
> >
> > inter.broker.protocol.version=1.1
> > log.message.format.version=1.1
> >
> > After upgrading, we started to get some occasional exceptions:
> >
> > 2019/11/19 05:30:53 INFO [ProducerStateManager
> > partition=matchmaker_retry_clicks_15m-2] Writing producer snapshot at
> > offset 788532 (kafka.log.ProducerStateManager)
> > 2019/11/19 05:30:53 INFO [Log partition=matchmaker_retry_clicks_15m-2,
> > dir=/mnt/kafka] Rolled new log segment at offset 788532 in 1 ms.
> > (kafka.log.Log)
> > 2019/11/19 05:31:01 ERROR [ReplicaManager broker=0] Error processing
> append
> > operation on partition matchmaker_retry_clicks_15m-2
> > (kafka.server.ReplicaManager)
> > 2019/11/19 05:31:01 java.nio.BufferOverflowException
> > 2019/11/19 05:31:01 at java.nio.Buffer.nextPutIndex(Buffer.java:527)
> > 2019/11/19 05:31:01 at
> > java.nio.DirectByteBuffer.putLong(DirectByteBuffer.java:797)
> > 2019/11/19 05:31:01 at
> > kafka.log.TimeIndex.$anonfun$maybeAppend$1(TimeIndex.scala:134)
> > 2019/11/19 05:31:01 at
> > scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> > 2019/11/19 05:31:01 at
> > kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
> > 2019/11/19 05:31:01 at
> > kafka.log.TimeIndex.maybeAppend(TimeIndex.scala:114)
> > 2019/11/19 05:31:01 at
> > kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:520)
> > 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$roll$8(Log.scala:1690)
> > 2019/11/19 05:31:01 at
> > kafka.log.Log.$anonfun$roll$8$adapted(Log.scala:1690)
> > 2019/11/19 05:31:01 at scala.Option.foreach(Option.scala:407)
> > 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$roll$2(Log.scala:1690)
> > 2019/11/19 05:31:01 at
> > kafka.log.Log.maybeHandleIOException(Log.scala:2085)
> > 2019/11/19 05:31:01 at kafka.log.Log.roll(Log.scala:1654)
> > 2019/11/19 05:31:01 at kafka.log.Log.maybeRoll(Log.scala:1639)
> > 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$append$2(Log.scala:966)
> > 2019/11/19 05:31:01 at
> > kafka.log.Log.maybeHandleIOException(Log.scala:2085)
> > 2019/11/19 05:31:01 at kafka.log.Log.append(Log.scala:850)
> > 2019/11/19 05:31:01 at kafka.log.Log.appendAsLeader(Log.scala:819)
> > 2019/11/19 05:31:01 at
> >
> >
> kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:772)
> > 2019/11/19 05:31:01 at
> > kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
> > 2019/11/19 05:31:01 at
> > kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:259)
> > 2019/11/19 05:31:01 at
> > kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:759)
> > 2019/11/19 05:31:01 at
> >
> >
> kafka.server.ReplicaManager.$anonfun$appendToLocalLog$2(ReplicaManager.scala:763)
> > 2019/11/19 05:31:01 at
> >
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
> > 2019/11/19 05:31:01 at
> > scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
> > 2019/11/19 05:31:01 at
> > scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
> > 2019/11/19 05:31:01 at
> > scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
> > 2019/11/19 05:31:01 at
> > scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
> > 2019/11/19 05:31:01 at
> > scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
> > 2019/11/19 05:31:01 at
> > scala.collection.TraversableLike.map(TraversableLike.scala:238)
> > 2019/11/19 05:31:01 at
> > scala.collection.TraversableLike.map$(TraversableLike.scala:231)
> > 2019/11/19 05:31:01 at
> > scala.collection.AbstractTraversable.map(Traversable.scala:108)
> > 2019/11/19 05:31:01 at
> > kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:751)
> > 2019/11/19 05:31:01 at
> > kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:492)
> > 2019/11/19 05:31:01 at
> > kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:544)
> > 2019/11/19 05:31:01 at
> > kafka.server.KafkaApis.handle(KafkaApis.scala:113)
> > 2019/11/19 05:31:01 at
> > kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
> > 2019/11/19 05:31:01 at java.lang.Thread.run(Thread.java:748)
> >
> >
> > This error persists until the broker gets restarted (or leadership gets
> > moved to another broker).
> >
> > What could be the issue and how can we solve it?
> >
> > Thank you!
> >
> > Best regards,
> > Daniyar.
> >
>


Re: BufferOverflowException on rolling new segment after upgrading Kafka from 1.1.0 to 2.3.1

2019-11-19 Thread Ismael Juma
Can you please file a JIRA?

Ismael

On Tue, Nov 19, 2019 at 11:52 AM Daniyar Kulakhmetov <
dkulakhme...@liftoff.io> wrote:

> Hi Kafka users,
>
> We updated our Kafka cluster from version 1.1.0 to 2.3.1.
> The message format and inter-broker protocol versions were left the same:
>
> inter.broker.protocol.version=1.1
> log.message.format.version=1.1
>
> After upgrading, we started to get some occasional exceptions:
>
> 2019/11/19 05:30:53 INFO [ProducerStateManager
> partition=matchmaker_retry_clicks_15m-2] Writing producer snapshot at
> offset 788532 (kafka.log.ProducerStateManager)
> 2019/11/19 05:30:53 INFO [Log partition=matchmaker_retry_clicks_15m-2,
> dir=/mnt/kafka] Rolled new log segment at offset 788532 in 1 ms.
> (kafka.log.Log)
> 2019/11/19 05:31:01 ERROR [ReplicaManager broker=0] Error processing append
> operation on partition matchmaker_retry_clicks_15m-2
> (kafka.server.ReplicaManager)
> 2019/11/19 05:31:01 java.nio.BufferOverflowException
> 2019/11/19 05:31:01 at java.nio.Buffer.nextPutIndex(Buffer.java:527)
> 2019/11/19 05:31:01 at
> java.nio.DirectByteBuffer.putLong(DirectByteBuffer.java:797)
> 2019/11/19 05:31:01 at
> kafka.log.TimeIndex.$anonfun$maybeAppend$1(TimeIndex.scala:134)
> 2019/11/19 05:31:01 at
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> 2019/11/19 05:31:01 at
> kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
> 2019/11/19 05:31:01 at
> kafka.log.TimeIndex.maybeAppend(TimeIndex.scala:114)
> 2019/11/19 05:31:01 at
> kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:520)
> 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$roll$8(Log.scala:1690)
> 2019/11/19 05:31:01 at
> kafka.log.Log.$anonfun$roll$8$adapted(Log.scala:1690)
> 2019/11/19 05:31:01 at scala.Option.foreach(Option.scala:407)
> 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$roll$2(Log.scala:1690)
> 2019/11/19 05:31:01 at
> kafka.log.Log.maybeHandleIOException(Log.scala:2085)
> 2019/11/19 05:31:01 at kafka.log.Log.roll(Log.scala:1654)
> 2019/11/19 05:31:01 at kafka.log.Log.maybeRoll(Log.scala:1639)
> 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$append$2(Log.scala:966)
> 2019/11/19 05:31:01 at
> kafka.log.Log.maybeHandleIOException(Log.scala:2085)
> 2019/11/19 05:31:01 at kafka.log.Log.append(Log.scala:850)
> 2019/11/19 05:31:01 at kafka.log.Log.appendAsLeader(Log.scala:819)
> 2019/11/19 05:31:01 at
>
> kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:772)
> 2019/11/19 05:31:01 at
> kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
> 2019/11/19 05:31:01 at
> kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:259)
> 2019/11/19 05:31:01 at
> kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:759)
> 2019/11/19 05:31:01 at
>
> kafka.server.ReplicaManager.$anonfun$appendToLocalLog$2(ReplicaManager.scala:763)
> 2019/11/19 05:31:01 at
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
> 2019/11/19 05:31:01 at
> scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
> 2019/11/19 05:31:01 at
> scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
> 2019/11/19 05:31:01 at
> scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
> 2019/11/19 05:31:01 at
> scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
> 2019/11/19 05:31:01 at
> scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
> 2019/11/19 05:31:01 at
> scala.collection.TraversableLike.map(TraversableLike.scala:238)
> 2019/11/19 05:31:01 at
> scala.collection.TraversableLike.map$(TraversableLike.scala:231)
> 2019/11/19 05:31:01 at
> scala.collection.AbstractTraversable.map(Traversable.scala:108)
> 2019/11/19 05:31:01 at
> kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:751)
> 2019/11/19 05:31:01 at
> kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:492)
> 2019/11/19 05:31:01 at
> kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:544)
> 2019/11/19 05:31:01 at
> kafka.server.KafkaApis.handle(KafkaApis.scala:113)
> 2019/11/19 05:31:01 at
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
> 2019/11/19 05:31:01 at java.lang.Thread.run(Thread.java:748)
>
>
> This error persists until the broker gets restarted (or leadership gets
> moved to another broker).
>
> What could be the issue and how can we solve it?
>
> Thank you!
>
> Best regards,
> Daniyar.
>


Re: BufferOverflowException on rolling new segment after upgrading Kafka from 1.1.0 to 2.3.1

2019-11-19 Thread Daniyar Kulakhmetov
Hi,

We followed the upgrade instructions (
https://kafka.apache.org/documentation/#upgrade) up to step 2, and, as step 3
says ("Once the cluster's behavior and performance has been verified, bump the
protocol version by editing"), we were verifying the cluster's behavior.

Thanks,

On Tue, Nov 19, 2019 at 12:19 PM M. Manna  wrote:

> Hi,
>
> Is there any reason why you haven’t performed the upgrade based on the
> official docs? Or, is this something you’re planning to do now?
>
> Thanks,
>
> On Tue, 19 Nov 2019 at 19:52, Daniyar Kulakhmetov  >
> wrote:
>
> > Hi Kafka users,
> >
> > We updated our Kafka cluster from version 1.1.0 to 2.3.1.
> > The message format and inter-broker protocol versions were left the same:
> >
> > inter.broker.protocol.version=1.1
> > log.message.format.version=1.1
> >
> > After upgrading, we started to get some occasional exceptions:
> >
> > 2019/11/19 05:30:53 INFO [ProducerStateManager
> > partition=matchmaker_retry_clicks_15m-2] Writing producer snapshot at
> > offset 788532 (kafka.log.ProducerStateManager)
> > 2019/11/19 05:30:53 INFO [Log partition=matchmaker_retry_clicks_15m-2,
> > dir=/mnt/kafka] Rolled new log segment at offset 788532 in 1 ms.
> > (kafka.log.Log)
> > 2019/11/19 05:31:01 ERROR [ReplicaManager broker=0] Error processing
> append
> > operation on partition matchmaker_retry_clicks_15m-2
> > (kafka.server.ReplicaManager)
> > 2019/11/19 05:31:01 java.nio.BufferOverflowException
> > 2019/11/19 05:31:01 at java.nio.Buffer.nextPutIndex(Buffer.java:527)
> > 2019/11/19 05:31:01 at
> > java.nio.DirectByteBuffer.putLong(DirectByteBuffer.java:797)
> > 2019/11/19 05:31:01 at
> > kafka.log.TimeIndex.$anonfun$maybeAppend$1(TimeIndex.scala:134)
> > 2019/11/19 05:31:01 at
> > scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> > 2019/11/19 05:31:01 at
> > kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
> > 2019/11/19 05:31:01 at
> > kafka.log.TimeIndex.maybeAppend(TimeIndex.scala:114)
> > 2019/11/19 05:31:01 at
> > kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:520)
> > 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$roll$8(Log.scala:1690)
> > 2019/11/19 05:31:01 at
> > kafka.log.Log.$anonfun$roll$8$adapted(Log.scala:1690)
> > 2019/11/19 05:31:01 at scala.Option.foreach(Option.scala:407)
> > 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$roll$2(Log.scala:1690)
> > 2019/11/19 05:31:01 at
> > kafka.log.Log.maybeHandleIOException(Log.scala:2085)
> > 2019/11/19 05:31:01 at kafka.log.Log.roll(Log.scala:1654)
> > 2019/11/19 05:31:01 at kafka.log.Log.maybeRoll(Log.scala:1639)
> > 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$append$2(Log.scala:966)
> > 2019/11/19 05:31:01 at
> > kafka.log.Log.maybeHandleIOException(Log.scala:2085)
> > 2019/11/19 05:31:01 at kafka.log.Log.append(Log.scala:850)
> > 2019/11/19 05:31:01 at kafka.log.Log.appendAsLeader(Log.scala:819)
> > 2019/11/19 05:31:01 at
> >
> >
> kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:772)
> > 2019/11/19 05:31:01 at
> > kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
> > 2019/11/19 05:31:01 at
> > kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:259)
> > 2019/11/19 05:31:01 at
> > kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:759)
> > 2019/11/19 05:31:01 at
> >
> >
> kafka.server.ReplicaManager.$anonfun$appendToLocalLog$2(ReplicaManager.scala:763)
> > 2019/11/19 05:31:01 at
> >
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
> > 2019/11/19 05:31:01 at
> > scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
> > 2019/11/19 05:31:01 at
> > scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
> > 2019/11/19 05:31:01 at
> > scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
> > 2019/11/19 05:31:01 at
> > scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
> > 2019/11/19 05:31:01 at
> > scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
> > 2019/11/19 05:31:01 at
> > scala.collection.TraversableLike.map(TraversableLike.scala:238)
> > 2019/11/19 05:31:01 at
> > scala.collection.TraversableLike.map$(TraversableLike.scala:231)
> > 2019/11/19 05:31:01 at
> > scala.collection.AbstractTraversable.map(Traversable.scala:108)
> > 2019/11/19 05:31:01 at
> > kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:751)
> > 2019/11/19 05:31:01 at
> > kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:492)
> > 2019/11/19 05:31:01 at
> > kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:544)
> > 2019/11/19 05:31:01 at
> > kafka.server.KafkaApis.handle(KafkaApis.scala:113)
> > 2019/11/19 05:31:01 at
> > kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
> > 2019/11/19 05:31:01 at java.lang.Thread.run(Thread.java:748)
> >
> >
> > This error persists until broker gets 

Re: BufferOverflowException on rolling new segment after upgrading Kafka from 1.1.0 to 2.3.1

2019-11-19 Thread M. Manna
Hi,

Is there any reason why you haven’t performed the upgrade based on the
official docs? Or, is this something you’re planning to do now?

Thanks,

On Tue, 19 Nov 2019 at 19:52, Daniyar Kulakhmetov 
wrote:

> Hi Kafka users,
>
> We updated our Kafka cluster from version 1.1.0 to 2.3.1.
> The message format and inter-broker protocol versions were left the same:
>
> inter.broker.protocol.version=1.1
> log.message.format.version=1.1
>
> After upgrading, we started to get some occasional exceptions:
>
> 2019/11/19 05:30:53 INFO [ProducerStateManager
> partition=matchmaker_retry_clicks_15m-2] Writing producer snapshot at
> offset 788532 (kafka.log.ProducerStateManager)
> 2019/11/19 05:30:53 INFO [Log partition=matchmaker_retry_clicks_15m-2,
> dir=/mnt/kafka] Rolled new log segment at offset 788532 in 1 ms.
> (kafka.log.Log)
> 2019/11/19 05:31:01 ERROR [ReplicaManager broker=0] Error processing append
> operation on partition matchmaker_retry_clicks_15m-2
> (kafka.server.ReplicaManager)
> 2019/11/19 05:31:01 java.nio.BufferOverflowException
> 2019/11/19 05:31:01 at java.nio.Buffer.nextPutIndex(Buffer.java:527)
> 2019/11/19 05:31:01 at
> java.nio.DirectByteBuffer.putLong(DirectByteBuffer.java:797)
> 2019/11/19 05:31:01 at
> kafka.log.TimeIndex.$anonfun$maybeAppend$1(TimeIndex.scala:134)
> 2019/11/19 05:31:01 at
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> 2019/11/19 05:31:01 at
> kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
> 2019/11/19 05:31:01 at
> kafka.log.TimeIndex.maybeAppend(TimeIndex.scala:114)
> 2019/11/19 05:31:01 at
> kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:520)
> 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$roll$8(Log.scala:1690)
> 2019/11/19 05:31:01 at
> kafka.log.Log.$anonfun$roll$8$adapted(Log.scala:1690)
> 2019/11/19 05:31:01 at scala.Option.foreach(Option.scala:407)
> 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$roll$2(Log.scala:1690)
> 2019/11/19 05:31:01 at
> kafka.log.Log.maybeHandleIOException(Log.scala:2085)
> 2019/11/19 05:31:01 at kafka.log.Log.roll(Log.scala:1654)
> 2019/11/19 05:31:01 at kafka.log.Log.maybeRoll(Log.scala:1639)
> 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$append$2(Log.scala:966)
> 2019/11/19 05:31:01 at
> kafka.log.Log.maybeHandleIOException(Log.scala:2085)
> 2019/11/19 05:31:01 at kafka.log.Log.append(Log.scala:850)
> 2019/11/19 05:31:01 at kafka.log.Log.appendAsLeader(Log.scala:819)
> 2019/11/19 05:31:01 at
>
> kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:772)
> 2019/11/19 05:31:01 at
> kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
> 2019/11/19 05:31:01 at
> kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:259)
> 2019/11/19 05:31:01 at
> kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:759)
> 2019/11/19 05:31:01 at
>
> kafka.server.ReplicaManager.$anonfun$appendToLocalLog$2(ReplicaManager.scala:763)
> 2019/11/19 05:31:01 at
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
> 2019/11/19 05:31:01 at
> scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
> 2019/11/19 05:31:01 at
> scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
> 2019/11/19 05:31:01 at
> scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
> 2019/11/19 05:31:01 at
> scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
> 2019/11/19 05:31:01 at
> scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
> 2019/11/19 05:31:01 at
> scala.collection.TraversableLike.map(TraversableLike.scala:238)
> 2019/11/19 05:31:01 at
> scala.collection.TraversableLike.map$(TraversableLike.scala:231)
> 2019/11/19 05:31:01 at
> scala.collection.AbstractTraversable.map(Traversable.scala:108)
> 2019/11/19 05:31:01 at
> kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:751)
> 2019/11/19 05:31:01 at
> kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:492)
> 2019/11/19 05:31:01 at
> kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:544)
> 2019/11/19 05:31:01 at
> kafka.server.KafkaApis.handle(KafkaApis.scala:113)
> 2019/11/19 05:31:01 at
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
> 2019/11/19 05:31:01 at java.lang.Thread.run(Thread.java:748)
>
>
> This error persists until the broker gets restarted (or leadership gets
> moved to another broker).
>
> What could be the issue and how can we solve it?
>
> Thank you!
>
> Best regards,
> Daniyar.
>


BufferOverflowException on rolling new segment after upgrading Kafka from 1.1.0 to 2.3.1

2019-11-19 Thread Daniyar Kulakhmetov
Hi Kafka users,

We updated our Kafka cluster from version 1.1.0 to 2.3.1.
The message format and inter-broker protocol versions were left the same:

inter.broker.protocol.version=1.1
log.message.format.version=1.1

After upgrading, we started to get some occasional exceptions:

2019/11/19 05:30:53 INFO [ProducerStateManager
partition=matchmaker_retry_clicks_15m-2] Writing producer snapshot at
offset 788532 (kafka.log.ProducerStateManager)
2019/11/19 05:30:53 INFO [Log partition=matchmaker_retry_clicks_15m-2,
dir=/mnt/kafka] Rolled new log segment at offset 788532 in 1 ms.
(kafka.log.Log)
2019/11/19 05:31:01 ERROR [ReplicaManager broker=0] Error processing append
operation on partition matchmaker_retry_clicks_15m-2
(kafka.server.ReplicaManager)
2019/11/19 05:31:01 java.nio.BufferOverflowException
2019/11/19 05:31:01 at java.nio.Buffer.nextPutIndex(Buffer.java:527)
2019/11/19 05:31:01 at
java.nio.DirectByteBuffer.putLong(DirectByteBuffer.java:797)
2019/11/19 05:31:01 at
kafka.log.TimeIndex.$anonfun$maybeAppend$1(TimeIndex.scala:134)
2019/11/19 05:31:01 at
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
2019/11/19 05:31:01 at
kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
2019/11/19 05:31:01 at
kafka.log.TimeIndex.maybeAppend(TimeIndex.scala:114)
2019/11/19 05:31:01 at
kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:520)
2019/11/19 05:31:01 at kafka.log.Log.$anonfun$roll$8(Log.scala:1690)
2019/11/19 05:31:01 at
kafka.log.Log.$anonfun$roll$8$adapted(Log.scala:1690)
2019/11/19 05:31:01 at scala.Option.foreach(Option.scala:407)
2019/11/19 05:31:01 at kafka.log.Log.$anonfun$roll$2(Log.scala:1690)
2019/11/19 05:31:01 at
kafka.log.Log.maybeHandleIOException(Log.scala:2085)
2019/11/19 05:31:01 at kafka.log.Log.roll(Log.scala:1654)
2019/11/19 05:31:01 at kafka.log.Log.maybeRoll(Log.scala:1639)
2019/11/19 05:31:01 at kafka.log.Log.$anonfun$append$2(Log.scala:966)
2019/11/19 05:31:01 at
kafka.log.Log.maybeHandleIOException(Log.scala:2085)
2019/11/19 05:31:01 at kafka.log.Log.append(Log.scala:850)
2019/11/19 05:31:01 at kafka.log.Log.appendAsLeader(Log.scala:819)
2019/11/19 05:31:01 at
kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:772)
2019/11/19 05:31:01 at
kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
2019/11/19 05:31:01 at
kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:259)
2019/11/19 05:31:01 at
kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:759)
2019/11/19 05:31:01 at
kafka.server.ReplicaManager.$anonfun$appendToLocalLog$2(ReplicaManager.scala:763)
2019/11/19 05:31:01 at
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
2019/11/19 05:31:01 at
scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
2019/11/19 05:31:01 at
scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
2019/11/19 05:31:01 at
scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
2019/11/19 05:31:01 at
scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
2019/11/19 05:31:01 at
scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
2019/11/19 05:31:01 at
scala.collection.TraversableLike.map(TraversableLike.scala:238)
2019/11/19 05:31:01 at
scala.collection.TraversableLike.map$(TraversableLike.scala:231)
2019/11/19 05:31:01 at
scala.collection.AbstractTraversable.map(Traversable.scala:108)
2019/11/19 05:31:01 at
kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:751)
2019/11/19 05:31:01 at
kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:492)
2019/11/19 05:31:01 at
kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:544)
2019/11/19 05:31:01 at
kafka.server.KafkaApis.handle(KafkaApis.scala:113)
2019/11/19 05:31:01 at
kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
2019/11/19 05:31:01 at java.lang.Thread.run(Thread.java:748)


This error persists until the broker gets restarted (or leadership gets
moved to another broker).

What could be the issue and how can we solve it?

Thank you!

Best regards,
Daniyar.


upgrading kafka & zookeeper

2019-08-05 Thread Vignesh
Hi kafka Users,

I have a novice question about Kafka upgrades. This is the first time I'm
upgrading Kafka on Linux.

My current version is "kafka_2.11-1.0.0.tgz". When I initially set it up, I had
a folder named kafka_2.11-1.0.0.

Now I have downloaded a new version, "kafka_2.12-2.3.0.tgz". If I extract it, it
is going to create a new folder, kafka_2.12-2.3.0, which will result in 2
independent Kafka installations, each with its own server.properties.

As per the documentation, I have to update server.properties with the 2
properties below:

inter.broker.protocol.version=2.3
log.message.format.version=2.3

How does this work if it is going to be installed in a new directory with a new
server.properties?

How can I merge server.properties and do the upgrade? Please share if you
have documents or steps.



Thanks,
Vignesh
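
For illustration, a hedged sketch of one common way to handle this: keep each
release in its own directory, reuse the existing server.properties, and switch
a symlink. Every path and version value below is illustrative, and the
protocol-version staging follows the official upgrade notes:

```sh
# A hedged sketch; all paths are illustrative. "Update the code" effectively
# means swapping the broker's bin/ and libs/ for the new release while keeping
# the existing server.properties (and the log.dirs data) untouched.
tar -xzf kafka_2.12-2.3.0.tgz -C /opt
cp /opt/kafka_2.11-1.0.0/config/server.properties /opt/kafka_2.12-2.3.0/config/

# Per the upgrade docs, keep inter.broker.protocol.version at the *current*
# version (1.0 here) during the rolling restart; bump it to 2.3 only after
# every broker runs the new binaries.
/opt/kafka_2.11-1.0.0/bin/kafka-server-stop.sh
ln -sfn /opt/kafka_2.12-2.3.0 /opt/kafka   # a symlink avoids editing service files
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
```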


Re: Upgrading Kafka from Version 0.10.2 to 1.0.0

2018-01-16 Thread Tim Visher
On Tue, Jan 9, 2018 at 4:50 PM, ZigSphere Tech 
wrote:

> Is it easy to upgrade from Kafka version 0.10.2 to 1.0.0 or do I need to
> upgrade to version 0.11.0 first? Anything to expect?
>

We just did (almost) exactly this upgrade. 2.11-0.10.1.0 to 2.11-1.0.0.

The main issue we faced was the broker memory leak that was discussed
several times on this list.
https://lists.apache.org/thread.html/a1d6ea46d29d8a4e4e7aaee57b09a3c7de44e911efd2ddbe3ab11cf5@%3Cusers.kafka.apache.org%3E

We ended up having to upgrade all the things to mitigate. I believe the
official recommendation is to wait till 1.0.1.

--

In Christ,

Timmy V.

https://blog.twonegatives.com
https://five.sentenc.es


Upgrading Kafka from Version 0.10.2 to 1.0.0

2018-01-09 Thread ZigSphere Tech
Hello All,

Is it easy to upgrade from Kafka version 0.10.2 to 1.0.0 or do I need to
upgrade to version 0.11.0 first? Anything to expect?

Thanks


Re: Upgrading Kafka to 11.0

2017-09-12 Thread kiran kumar
[re-posting]

Hi All,

   1. Upgrade the brokers one at a time: shut down the broker, update the
   code, and restart it.

What does it mean to "update the code"?
Does it mean replacing the old lib folder with the latest, or replacing both
lib and bin with the latest?

Could someone clarify?

On Fri, Sep 8, 2017 at 10:22 PM, kiran kumar  wrote:

> Hi All,
>
>1. Upgrade the brokers one at a time: shut down the broker, update the
>code, and restart it.
>
> What does it mean to "update the code"?
> Does it mean replacing the old lib folder with the latest, or replacing both
> lib and bin with the latest?
>
> Could someone clarify?
>
> Thanks,
> Kiran
>



-- 
G.Kiran Kumar


Disks full after upgrading kafka version : 0.8.1.1 to 0.10.0.0

2017-05-24 Thread Milind Vaidya
Within 24 hours, the brokers started getting killed due to full disks.

The retention period is 48 hrs, and with 0.8 the disks used to fill to ~65%.

What is going wrong here?

This is a production system. I am reducing the retention for the time being
to 24 hrs.


Commitlog path while Upgrading kafka server from 0.9 to 0.10.2.0

2017-04-20 Thread dhiraj prajapati
Hi,
I want to do a rolling upgrade of the Kafka server from 0.9 to 0.10.2.0. Should
I keep the path of the commit logs the same? What is the impact of keeping the
path the same vs. different?

Thanks in advance.


Re: Lost ISR when upgrading kafka from 0.10.0.1 to any newer version like 0.10.1.0 or 0.10.2.0

2017-03-15 Thread Ismael Juma
Looking at the output you pasted, broker `0` was the one being upgraded? A
few things to check:

1. Does broker `0` connect to the other brokers after the restart
2. Is broker `0` able to connect to zookeeper
3. Does everything look OK in the controller and state-change logs in the
controller node
4. Did you allow enough time for the restarted broker to rejoin the ISR

Ismael
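
For item 4 in the checklist above, a hedged sketch of one way to check, reusing
the ZooKeeper address from this thread:

```sh
# After restarting each broker, wait until this prints nothing before moving
# on to the next broker; an empty result means every replica (including the
# upgraded broker) is back in the ISR.
bin/kafka-topics.sh --describe --zookeeper kazoo002.#.prv \
  --under-replicated-partitions
```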

On Tue, Mar 14, 2017 at 1:37 PM, Thomas KIEFFER <
thomas.kief...@olamobile.com.invalid> wrote:

> Yes, I've set the inter.broker.protocol.version=0.10.0 before restarting
> each broker on a previous update. Clusters currently run with this config.
>
> On 03/14/2017 12:34 PM, Ismael Juma wrote:
>
> So, to double-check, you set inter.broker.protocol.version=0.10.0 before
> bouncing each broker?
>
> On Tue, Mar 14, 2017 at 11:22 AM, Thomas KIEFFER 
>  wrote:
>
>
> Hello Ismael,
>
> Thank you for your feedback.
>
> Yes, I've done these changes on a previous upgrade and set them accordingly
> to the new version when trying to do the upgrade.
>
> inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0,
> 0.10.0 or 0.10.1).
> log.message.format.version=CURRENT_KAFKA_VERSION (See potential
> performance impact following the upgrade for the details on what this
> configuration does.)
> On 03/14/2017 11:26 AM, Ismael Juma wrote:
>
> Hi Thomas,
>
> Did you follow the 
> instructions:https://kafka.apache.org/documentation/#upgrade
>
> Ismael
>
> On Mon, Mar 13, 2017 at 9:43 AM, Thomas KIEFFER 
>   
> wrote:
>
>
> I'm trying to perform an upgrade of 2 Kafka clusters of 5 instances each. When
> I do the switch from 0.10.0.1 to 0.10.1.0 or 0.10.2.0, I see that the
> ISR is lost when I upgrade one instance. I haven't yet found anything
> relevant about this problem; the logs seem just fine.
> e.g.
>
> kafka-topics.sh --describe --zookeeper kazoo002.#.prv --topic redirects
> Topic: redirects  PartitionCount: 6  ReplicationFactor: 2  Configs: retention.bytes=10737418240
>     Topic: redirects  Partition: 0  Leader: 1  Replicas: 1,2  Isr: 1,2
>     Topic: redirects  Partition: 1  Leader: 2  Replicas: 2,0  Isr: 2
>     Topic: redirects  Partition: 2  Leader: 1  Replicas: 0,1  Isr: 1
>     Topic: redirects  Partition: 3  Leader: 1  Replicas: 1,0  Isr: 1
>     Topic: redirects  Partition: 4  Leader: 2  Replicas: 2,1  Isr: 2,1
>     Topic: redirects  Partition: 5  Leader: 2  Replicas: 0,2  Isr: 2
>
> It runs with ZooKeeper 3.4.6.
>
> As those clusters are in production, I didn't try to migrate more than 1
> instance after spotting this ISR problem, and then rolled back to the
> original version, 0.10.0.1.
>
> Any update about this would be greatly appreciated.
>
> --
> Thomas Kieffer
>
> Senior Linux Systems Administrator
>
> Skype: thomas.kieffer.corporate | Phone: (+352) 691444263 | www.olamobile.com
>
> The information transmitted is intended only for the person or entity to
> which it is addressed and may contain confidential and/or privileged
> material. Any review, retransmission, dissemination or other use of, or
> taking of any action in reliance upon, this information by persons or
> entities other than the intended recipient is prohibited. If you received
> this in error, please contact the sender and delete the material from any
> computer.
>
>
> --
> Thomas Kieffer
>
> Senior Linux Systems Administrator
>
> Skype: thomas.kieffer.corporate | Phone: (+352) 691444263 | www.olamobile.com
>
> The information transmitted is intended only for the person or entity to
> which it is addressed and may contain confidential and/or privileged
> material. Any review, retransmission, dissemination or other use of, or
> taking of any action in reliance upon, this information by persons or
> entities other than the intended recipient is prohibited. If you received
> this in error, please contact the sender and delete the material from any
> computer.
>
>
>
> --
> Thomas Kieffer
>
> Senior Linux Systems Administrator
>
> Skype: thomas.kieffer.corporate | Phone: (+352) 691444263 | www.olamobile.com
>
> The information transmitted is intended only for the person or entity to
> which it is addressed and may contain confidential and/or privileged
> material. Any review, retransmission, dissemination or other use of, or
> taking of any action 

Re: Lost ISR when upgrading kafka from 0.10.0.1 to any newer version like 0.10.1.0 or 0.10.2.0

2017-03-14 Thread Thomas KIEFFER
Yes, I've set the inter.broker.protocol.version=0.10.0 before restarting 
each broker on a previous update. Clusters currently run with this config.



On 03/14/2017 12:34 PM, Ismael Juma wrote:

So, to double-check, you set inter.broker.protocol.version=0.10.0 before
bouncing each broker?

On Tue, Mar 14, 2017 at 11:22 AM, Thomas KIEFFER <
thomas.kief...@olamobile.com.invalid> wrote:


Hello Ismael,

Thank you for your feedback.

Yes, I've done these changes on a previous upgrade and set them accordingly
to the new version when trying to do the upgrade.

inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0,
0.10.0 or 0.10.1).
log.message.format.version=CURRENT_KAFKA_VERSION (See potential
performance impact following the upgrade for the details on what this
configuration does.)
On 03/14/2017 11:26 AM, Ismael Juma wrote:

Hi Thomas,

Did you follow the instructions:
https://kafka.apache.org/documentation/#upgrade

Ismael

On Mon, Mar 13, 2017 at 9:43 AM, Thomas KIEFFER 
 wrote:


I'm trying to perform an upgrade of 2 Kafka clusters of 5 instances each. When
I do the switch from 0.10.0.1 to 0.10.1.0 or 0.10.2.0, I see that the
ISR is lost when I upgrade one instance. I haven't yet found anything
relevant about this problem; the logs seem just fine.
e.g.

kafka-topics.sh --describe --zookeeper kazoo002.#.prv --topic redirects
Topic: redirects  PartitionCount: 6  ReplicationFactor: 2  Configs: retention.bytes=10737418240
    Topic: redirects  Partition: 0  Leader: 1  Replicas: 1,2  Isr: 1,2
    Topic: redirects  Partition: 1  Leader: 2  Replicas: 2,0  Isr: 2
    Topic: redirects  Partition: 2  Leader: 1  Replicas: 0,1  Isr: 1
    Topic: redirects  Partition: 3  Leader: 1  Replicas: 1,0  Isr: 1
    Topic: redirects  Partition: 4  Leader: 2  Replicas: 2,1  Isr: 2,1
    Topic: redirects  Partition: 5  Leader: 2  Replicas: 0,2  Isr: 2

It runs with ZooKeeper 3.4.6.

As those clusters are in production, I didn't try to migrate more than 1
instance after spotting this ISR problem, and then rolled back to the original
version, 0.10.0.1.

Any update about this would be greatly receive.



Re: Lost ISR when upgrading kafka from 0.10.0.1 to any newer version like 0.10.1.0 or 0.10.2.0

2017-03-14 Thread Ismael Juma
So, to double-check, you set inter.broker.protocol.version=0.10.0 before
bouncing each broker?

On Tue, Mar 14, 2017 at 11:22 AM, Thomas KIEFFER <
thomas.kief...@olamobile.com.invalid> wrote:

> Hello Ismael,
>
> Thank you for your feedback.
>
> Yes, I've made these changes on a previous upgrade and set them accordingly
> to the new version when attempting the upgrade.
>
> inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0,
> 0.10.0 or 0.10.1).
> log.message.format.version=CURRENT_KAFKA_VERSION (See potential
> performance impact following the upgrade for the details on what this
> configuration does.)
> On 03/14/2017 11:26 AM, Ismael Juma wrote:
>
> Hi Thomas,
>
> Did you follow the instructions:
> https://kafka.apache.org/documentation/#upgrade
>
> Ismael
>
> On Mon, Mar 13, 2017 at 9:43 AM, Thomas KIEFFER 
>  wrote:
>
>
> I'm trying to upgrade two Kafka clusters of 5 instances each. When
> I switch between 0.10.0.1 and 0.10.1.0 or 0.10.2.0, I see that the
> ISR is lost when I upgrade one instance. I haven't found anything
> relevant about this problem yet; the logs seem just fine.
> e.g.:
>
> kafka-topics.sh --describe --zookeeper kazoo002.#.prv --topic redirects
> Topic: redirects  PartitionCount: 6  ReplicationFactor: 2  Configs: retention.bytes=10737418240
>     Topic: redirects  Partition: 0  Leader: 1  Replicas: 1,2  Isr: 1,2
>     Topic: redirects  Partition: 1  Leader: 2  Replicas: 2,0  Isr: 2
>     Topic: redirects  Partition: 2  Leader: 1  Replicas: 0,1  Isr: 1
>     Topic: redirects  Partition: 3  Leader: 1  Replicas: 1,0  Isr: 1
>     Topic: redirects  Partition: 4  Leader: 2  Replicas: 2,1  Isr: 2,1
>     Topic: redirects  Partition: 5  Leader: 2  Replicas: 0,2  Isr: 2
>
> It runs with Zookeeper 3.4.6.
>
> As those clusters are in production, I didn't try to migrate more than one
> instance after spotting this ISR problem, and then rolled back to the
> original version, 0.10.0.1.
>
> Any update on this would be greatly appreciated.


Re: Lost ISR when upgrading kafka from 0.10.0.1 to any newer version like 0.10.1.0 or 0.10.2.0

2017-03-14 Thread Thomas KIEFFER

Hello Ismael,

Thank you for your feedback.

Yes, I've made these changes on a previous upgrade and set them
accordingly to the new version when attempting the upgrade.


inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0, 
0.10.0 or 0.10.1).
log.message.format.version=CURRENT_KAFKA_VERSION (See potential 
performance impact following the upgrade for the details on what this 
configuration does.)
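
To make the staged procedure concrete, here is a minimal server.properties
sketch of a rolling upgrade from 0.10.0.x to 0.10.2.0 (the version numbers
are illustrative and must match the releases actually involved):

# Phase 1: install the new binaries, but keep both settings pinned to the
# version the cluster currently runs, and restart brokers one at a time:
inter.broker.protocol.version=0.10.0
log.message.format.version=0.10.0

# Phase 2: once every broker runs the new code, raise the protocol version
# and do a second rolling restart:
inter.broker.protocol.version=0.10.2

# Phase 3: after all clients are upgraded, raise the message format version
# and do a final rolling restart:
log.message.format.version=0.10.2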


On 03/14/2017 11:26 AM, Ismael Juma wrote:

Hi Thomas,

Did you follow the instructions:

https://kafka.apache.org/documentation/#upgrade

Ismael

On Mon, Mar 13, 2017 at 9:43 AM, Thomas KIEFFER <
thomas.kief...@olamobile.com.invalid> wrote:


I'm trying to upgrade two Kafka clusters of 5 instances each. When
I switch between 0.10.0.1 and 0.10.1.0 or 0.10.2.0, I see that the
ISR is lost when I upgrade one instance. I haven't found anything
relevant about this problem yet; the logs seem just fine.
e.g.:

kafka-topics.sh --describe --zookeeper kazoo002.#.prv --topic redirects
Topic: redirects  PartitionCount: 6  ReplicationFactor: 2  Configs: retention.bytes=10737418240
    Topic: redirects  Partition: 0  Leader: 1  Replicas: 1,2  Isr: 1,2
    Topic: redirects  Partition: 1  Leader: 2  Replicas: 2,0  Isr: 2
    Topic: redirects  Partition: 2  Leader: 1  Replicas: 0,1  Isr: 1
    Topic: redirects  Partition: 3  Leader: 1  Replicas: 1,0  Isr: 1
    Topic: redirects  Partition: 4  Leader: 2  Replicas: 2,1  Isr: 2,1
    Topic: redirects  Partition: 5  Leader: 2  Replicas: 0,2  Isr: 2

It runs with Zookeeper 3.4.6.

As those clusters are in production, I didn't try to migrate more than one
instance after spotting this ISR problem, and then rolled back to the
original version, 0.10.0.1.

Any update on this would be greatly appreciated.



Re: Lost ISR when upgrading kafka from 0.10.0.1 to any newer version like 0.10.1.0 or 0.10.2.0

2017-03-14 Thread Ismael Juma
Hi Thomas,

Did you follow the instructions:

https://kafka.apache.org/documentation/#upgrade

Ismael

On Mon, Mar 13, 2017 at 9:43 AM, Thomas KIEFFER <
thomas.kief...@olamobile.com.invalid> wrote:

> I'm trying to upgrade two Kafka clusters of 5 instances each. When
> I switch between 0.10.0.1 and 0.10.1.0 or 0.10.2.0, I see that the
> ISR is lost when I upgrade one instance. I haven't found anything
> relevant about this problem yet; the logs seem just fine.
> e.g.:
>
> kafka-topics.sh --describe --zookeeper kazoo002.#.prv --topic redirects
> Topic: redirects  PartitionCount: 6  ReplicationFactor: 2  Configs: retention.bytes=10737418240
>     Topic: redirects  Partition: 0  Leader: 1  Replicas: 1,2  Isr: 1,2
>     Topic: redirects  Partition: 1  Leader: 2  Replicas: 2,0  Isr: 2
>     Topic: redirects  Partition: 2  Leader: 1  Replicas: 0,1  Isr: 1
>     Topic: redirects  Partition: 3  Leader: 1  Replicas: 1,0  Isr: 1
>     Topic: redirects  Partition: 4  Leader: 2  Replicas: 2,1  Isr: 2,1
>     Topic: redirects  Partition: 5  Leader: 2  Replicas: 0,2  Isr: 2
>
> It runs with Zookeeper 3.4.6.
>
> As those clusters are in production, I didn't try to migrate more than one
> instance after spotting this ISR problem, and then rolled back to the
> original version, 0.10.0.1.
>
> Any update on this would be greatly appreciated.


Lost ISR when upgrading kafka from 0.10.0.1 to any newer version like 0.10.1.0 or 0.10.2.0

2017-03-13 Thread Thomas KIEFFER
I'm trying to upgrade two Kafka clusters of 5 instances each. When
I switch between 0.10.0.1 and 0.10.1.0 or 0.10.2.0, I see that the
ISR is lost when I upgrade one instance. I haven't found anything
relevant about this problem yet; the logs seem just fine.

e.g.:

kafka-topics.sh --describe --zookeeper kazoo002.#.prv --topic redirects
Topic: redirects  PartitionCount: 6  ReplicationFactor: 2  Configs: retention.bytes=10737418240
    Topic: redirects  Partition: 0  Leader: 1  Replicas: 1,2  Isr: 1,2
    Topic: redirects  Partition: 1  Leader: 2  Replicas: 2,0  Isr: 2
    Topic: redirects  Partition: 2  Leader: 1  Replicas: 0,1  Isr: 1
    Topic: redirects  Partition: 3  Leader: 1  Replicas: 1,0  Isr: 1
    Topic: redirects  Partition: 4  Leader: 2  Replicas: 2,1  Isr: 2,1
    Topic: redirects  Partition: 5  Leader: 2  Replicas: 0,2  Isr: 2


It runs with Zookeeper 3.4.6.

As those clusters are in production, I didn't try to migrate more than one
instance after spotting this ISR problem, and then rolled back to the
original version, 0.10.0.1.

Any update on this would be greatly appreciated.
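
For a cluster-wide view of the same symptom, kafka-topics.sh can also list
just the partitions whose ISR is smaller than the replica list; a minimal
sketch using the same zookeeper address:

kafka-topics.sh --describe --zookeeper kazoo002.#.prv --under-replicated-partitions

Watching this output return to empty after each broker restart is a quick way
to confirm that replicas have rejoined the ISR.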



Re: Disadvantages of Upgrading Kafka server without upgrading client libraries?

2016-11-29 Thread Ismael Juma
That's right, there should be no performance penalty if the broker is
configured to use the older message format. The downside is that the
timestamps introduced in the 0.10 message format won't be supported in that case.

Ismael
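
The relevant knob is the broker's log.message.format.version; a minimal
server.properties sketch, assuming the oldest clients in use are on 0.8.2
(the value is illustrative and should match your oldest client version):

# Keep on-disk messages in the old format so old consumers can be served
# without down-conversion (preserving zero-copy):
log.message.format.version=0.8.2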

On Tue, Nov 29, 2016 at 11:31 PM, Hans Jespersen  wrote:

> The performance impact of upgrading and some settings you can use to
> mitigate this impact when the majority of your clients are still 0.8.x are
> documented on the Apache Kafka website
> https://kafka.apache.org/documentation#upgrade_10_performance_impact
>
> -hans
>
> /**
>  * Hans Jespersen, Principal Systems Engineer, Confluent Inc.
>  * h...@confluent.io (650)924-2670
>  */
>
> On Tue, Nov 29, 2016 at 3:27 PM, Apurva Mehta  wrote:
>
> > I may be wrong, but since there have been message format changes between
> > 0.8.2 and 0.10.1, there will be a performance penalty if the clients are
> > not also upgraded. This is because you lose the zero-copy semantics on
> the
> > server side as the messages have to be converted to the old format before
> > being sent out on the wire to the old clients.
> >
> > On Tue, Nov 29, 2016 at 10:06 AM, Thomas Becker 
> wrote:
> >
> > > The only obvious downside I'm aware of is not being able to benefit
> > > from the bugfixes in the client. We are essentially doing the same
> > > thing; we upgraded the broker side to 0.10.0.0 but have yet to upgrade
> > > our clients from 0.8.1.x.
> > >
> > > On Tue, 2016-11-29 at 09:30 -0500, Tim Visher wrote:
> > > > Hi Everyone,
> > > >
> > > > I have an install of Kafka 0.8.2.1 which I'm upgrading to 0.10.1.0. I
> > > > see
> > > > that Kafka 0.10.1.0 should be backwards compatible with client
> > > > libraries
> > > > written for older versions but that newer client libraries are only
> > > > compatible with their version and up.
> > > >
> > > > My question is what disadvantages would there be to never upgrading
> > > > the
> > > > clients? I'm mainly asking because it would be advantageous to save
> > > > some
> > > > time here with a little technical debt if the costs weren't too high.
> > > > If
> > > > there are major issues then I can take on the client upgrade as well.
> > > >
> > > > Thanks in advance!
> > > >
> >
>


Re: Disadvantages of Upgrading Kafka server without upgrading client libraries?

2016-11-29 Thread Hans Jespersen
The performance impact of upgrading and some settings you can use to
mitigate this impact when the majority of your clients are still 0.8.x are
documented on the Apache Kafka website
https://kafka.apache.org/documentation#upgrade_10_performance_impact

-hans

/**
 * Hans Jespersen, Principal Systems Engineer, Confluent Inc.
 * h...@confluent.io (650)924-2670
 */

On Tue, Nov 29, 2016 at 3:27 PM, Apurva Mehta  wrote:

> I may be wrong, but since there have been message format changes between
> 0.8.2 and 0.10.1, there will be a performance penalty if the clients are
> not also upgraded. This is because you lose the zero-copy semantics on the
> server side as the messages have to be converted to the old format before
> being sent out on the wire to the old clients.
>
> On Tue, Nov 29, 2016 at 10:06 AM, Thomas Becker  wrote:
>
> > The only obvious downside I'm aware of is not being able to benefit
> > from the bugfixes in the client. We are essentially doing the same
> > thing; we upgraded the broker side to 0.10.0.0 but have yet to upgrade
> > our clients from 0.8.1.x.
> >
> > On Tue, 2016-11-29 at 09:30 -0500, Tim Visher wrote:
> > > Hi Everyone,
> > >
> > > I have an install of Kafka 0.8.2.1 which I'm upgrading to 0.10.1.0. I
> > > see
> > > that Kafka 0.10.1.0 should be backwards compatible with client
> > > libraries
> > > written for older versions but that newer client libraries are only
> > > compatible with their version and up.
> > >
> > > My question is what disadvantages would there be to never upgrading
> > > the
> > > clients? I'm mainly asking because it would be advantageous to save
> > > some
> > > time here with a little technical debt if the costs weren't too high.
> > > If
> > > there are major issues then I can take on the client upgrade as well.
> > >
> > > Thanks in advance!
> > >
> >
>


Re: Disadvantages of Upgrading Kafka server without upgrading client libraries?

2016-11-29 Thread Apurva Mehta
I may be wrong, but since there have been message format changes between
0.8.2 and 0.10.1, there will be a performance penalty if the clients are
not also upgraded. This is because you lose the zero-copy semantics on the
server side as the messages have to be converted to the old format before
being sent out on the wire to the old clients.

On Tue, Nov 29, 2016 at 10:06 AM, Thomas Becker  wrote:

> The only obvious downside I'm aware of is not being able to benefit
> from the bugfixes in the client. We are essentially doing the same
> thing; we upgraded the broker side to 0.10.0.0 but have yet to upgrade
> our clients from 0.8.1.x.
>
> On Tue, 2016-11-29 at 09:30 -0500, Tim Visher wrote:
> > Hi Everyone,
> >
> > I have an install of Kafka 0.8.2.1 which I'm upgrading to 0.10.1.0. I
> > see
> > that Kafka 0.10.1.0 should be backwards compatible with client
> > libraries
> > written for older versions but that newer client libraries are only
> > compatible with their version and up.
> >
> > My question is what disadvantages would there be to never upgrading
> > the
> > clients? I'm mainly asking because it would be advantageous to save
> > some
> > time here with a little technical debt if the costs weren't too high.
> > If
> > there are major issues then I can take on the client upgrade as well.
> >
> > Thanks in advance!
> >
>


Re: Disadvantages of Upgrading Kafka server without upgrading client libraries?

2016-11-29 Thread Thomas Becker
The only obvious downside I'm aware of is not being able to benefit
from the bugfixes in the client. We are essentially doing the same
thing; we upgraded the broker side to 0.10.0.0 but have yet to upgrade
our clients from 0.8.1.x.

On Tue, 2016-11-29 at 09:30 -0500, Tim Visher wrote:
> Hi Everyone,
>
> I have an install of Kafka 0.8.2.1 which I'm upgrading to 0.10.1.0. I
> see
> that Kafka 0.10.1.0 should be backwards compatible with client
> libraries
> written for older versions but that newer client libraries are only
> compatible with their version and up.
>
> My question is what disadvantages would there be to never upgrading
> the
> clients? I'm mainly asking because it would be advantageous to save
> some
> time here with a little technical debt if the costs weren't too high.
> If
> there are major issues then I can take on the client upgrade as well.
>
> Thanks in advance!
>


Re: Disadvantages of Upgrading Kafka server without upgrading client libraries?

2016-11-29 Thread Gwen Shapira
Most people upgrade clients to enjoy new client features, fix bugs or
improve performance. If none of these apply, no need to upgrade.

Since you are upgrading to 0.10.1.0, read the upgrade docs closely -
there are specific server settings regarding the message format that
you need to configure a certain way if the clients are not upgraded.

Gwen

On Tue, Nov 29, 2016 at 6:30 AM, Tim Visher  wrote:
> Hi Everyone,
>
> I have an install of Kafka 0.8.2.1 which I'm upgrading to 0.10.1.0. I see
> that Kafka 0.10.1.0 should be backwards compatible with client libraries
> written for older versions but that newer client libraries are only
> compatible with their version and up.
>
> My question is what disadvantages would there be to never upgrading the
> clients? I'm mainly asking because it would be advantageous to save some
> time here with a little technical debt if the costs weren't too high. If
> there are major issues then I can take on the client upgrade as well.
>
> Thanks in advance!
>



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Disadvantages of Upgrading Kafka server without upgrading client libraries?

2016-11-29 Thread Tim Visher
Hi Everyone,

I have an install of Kafka 0.8.2.1 which I'm upgrading to 0.10.1.0. I see
that Kafka 0.10.1.0 should be backwards compatible with client libraries
written for older versions but that newer client libraries are only
compatible with their version and up.

My question is what disadvantages would there be to never upgrading the
clients? I'm mainly asking because it would be advantageous to save some
time here with a little technical debt if the costs weren't too high. If
there are major issues then I can take on the client upgrade as well.

Thanks in advance!

--

In Christ,

Timmy V.

http://blog.twonegatives.com/
http://five.sentenc.es/ -- Spend less time on mail


Re: upgrading Kafka

2016-05-26 Thread Mudit Agarwal
Yes, you can use constraints and the same volumes. That can be trusted.

  From: Radoslaw Gruchalski <ra...@gruchalski.com>
 To: "Karnam, Kiran" <kkar...@ea.com>; users@kafka.apache.org 
 Sent: Thursday, 26 May 2016 2:31 AM
 Subject: Re: upgrading Kafka
   
Kiran,

If you’re using Docker, you can use Docker on Mesos: use constraints to
force a relaunched Kafka broker to always come back on the same agent, and
use Docker volumes to persist the data.
Not sure if https://github.com/mesos/kafka provides these capabilities.
–  
Best regards,

Radek Gruchalski

ra...@gruchalski.com
de.linkedin.com/in/radgruchalski


On May 25, 2016 at 10:58:06 PM, Karnam, Kiran (kkar...@ea.com) wrote:

Hi All,  

We are using Docker containers to deploy Kafka, and we are planning to use
Mesos for the deployment and maintenance of the containers. Is there a way,
during an upgrade, to persist the data so that it is available to the
upgraded container?

We don't want the clusters to go into chaos, with data replicating around the
network, because a node that was upgraded suddenly has no data.

Thanks,  
Kiran  

  

Re: upgrading Kafka

2016-05-25 Thread craig w
More specifically, see:
https://github.com/mesos/kafka#failed-broker-recovery

On Wed, May 25, 2016 at 6:02 PM, craig w  wrote:

> The Kafka framework can be used to deploy brokers. It will also bring a
> broker back up on the server it was last running on (within a certain
> amount of time).
>
> However the Kafka framework doesn't run brokers in containers.
>
>
> On Wednesday, May 25, 2016, Radoslaw Gruchalski 
> wrote:
>
>> Kiran,
>>
>> If you’re using Docker, you can use Docker on Mesos: use constraints to
>> force a relaunched Kafka broker to always come back on the same agent, and
>> use Docker volumes to persist the data.
>> Not sure if https://github.com/mesos/kafka provides these capabilities.
>> –
>> Best regards,
>> Radek Gruchalski
>> ra...@gruchalski.com
>> de.linkedin.com/in/radgruchalski
>>
>>
>> On May 25, 2016 at 10:58:06 PM, Karnam, Kiran (kkar...@ea.com) wrote:
>>
>> Hi All,
>>
>> We are using Docker containers to deploy Kafka, and we are planning to use
>> Mesos for the deployment and maintenance of the containers. Is there a way,
>> during an upgrade, to persist the data so that it is available to the
>> upgraded container?
>>
>> We don't want the clusters to go into chaos, with data replicating around
>> the network, because a node that was upgraded suddenly has no data.
>>
>> Thanks,
>> Kiran
>>


-- 

https://github.com/mindscratch
https://www.google.com/+CraigWickesser
https://twitter.com/mind_scratch
https://twitter.com/craig_links


Re: upgrading Kafka

2016-05-25 Thread craig w
The Kafka framework can be used to deploy brokers. It will also bring a
broker back up on the server it was last running on (within a certain
amount of time).

However the Kafka framework doesn't run brokers in containers.

On Wednesday, May 25, 2016, Radoslaw Gruchalski 
wrote:

> Kiran,
>
> If you’re using Docker, you can use Docker on Mesos: use constraints to
> force a relaunched Kafka broker to always come back on the same agent, and
> use Docker volumes to persist the data.
> Not sure if https://github.com/mesos/kafka provides these capabilities.
> –
> Best regards,
> Radek Gruchalski
> ra...@gruchalski.com 
> de.linkedin.com/in/radgruchalski
>
>
> On May 25, 2016 at 10:58:06 PM, Karnam, Kiran (kkar...@ea.com
> ) wrote:
>
> Hi All,
>
> We are using Docker containers to deploy Kafka, and we are planning to use
> Mesos for the deployment and maintenance of the containers. Is there a way,
> during an upgrade, to persist the data so that it is available to the
> upgraded container?
>
> We don't want the clusters to go into chaos, with data replicating around
> the network, because a node that was upgraded suddenly has no data.
>
> Thanks,
> Kiran
>


-- 

https://github.com/mindscratch
https://www.google.com/+CraigWickesser
https://twitter.com/mind_scratch
https://twitter.com/craig_links


Re: upgrading Kafka

2016-05-25 Thread Radoslaw Gruchalski
Kiran,

If you’re using Docker, you can use Docker on Mesos: use constraints to
force a relaunched Kafka broker to always come back on the same agent, and
use Docker volumes to persist the data.
Not sure if https://github.com/mesos/kafka provides these capabilities.
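
A minimal sketch of that approach (the image name, host path, and agent
hostname are illustrative):

# Keep the broker's data on a bind-mounted host volume so it survives the
# container being replaced:
docker run -d --name kafka-0 \
  -v /var/lib/kafka:/var/lib/kafka \
  some-org/kafka
# ...with server.properties inside the container pointing at it:
# log.dirs=/var/lib/kafka

# In a Marathon app definition, a CLUSTER constraint pins the relaunched
# task to one agent:
# "constraints": [["hostname", "CLUSTER", "kafka-agent-1.example.com"]]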
–  
Best regards,

Radek Gruchalski

ra...@gruchalski.com
de.linkedin.com/in/radgruchalski


On May 25, 2016 at 10:58:06 PM, Karnam, Kiran (kkar...@ea.com) wrote:

Hi All,  

We are using Docker containers to deploy Kafka, and we are planning to use
Mesos for the deployment and maintenance of the containers. Is there a way,
during an upgrade, to persist the data so that it is available to the
upgraded container?

We don't want the clusters to go into chaos, with data replicating around the
network, because a node that was upgraded suddenly has no data.

Thanks,  
Kiran  


upgrading Kafka

2016-05-25 Thread Karnam, Kiran
Hi All,

We are using Docker containers to deploy Kafka, and we are planning to use
Mesos for the deployment and maintenance of the containers. Is there a way,
during an upgrade, to persist the data so that it is available to the
upgraded container?

We don't want the clusters to go into chaos, with data replicating around the
network, because a node that was upgraded suddenly has no data.

Thanks,
Kiran


Re: Best practice for upgrading Kafka cluster from 0.8.1 to 0.8.1.1

2014-12-04 Thread Yu Yang
Guozhang,

We haven't enabled message compression yet. In this case, what shall we do
when we upgrade to 0.8.2?  Must we launch a new cluster, redirect the
traffic to the new cluster, and turn off the old one?

Thanks!

-Yu


On Tue, Dec 2, 2014 at 4:33 PM, Guozhang Wang wangg...@gmail.com wrote:

 Yu,

 Are you enabling message compression in 0.8.1 now? If you have already then
 upgrading to 0.8.2 will not change its behavior.

 Guozhang

 On Tue, Dec 2, 2014 at 4:21 PM, Yu Yang yuyan...@gmail.com wrote:

  Hi Neha,
 
  Thanks for the reply!  We know that Kafka 0.8.2 will be released soon. If
  we want to upgrade to Kafka 0.8.2 and enable message compression, will we
  still be able to do this in the same way, or do we need to handle it
 differently?
 
  Thanks!
 
  Regards,
  -Yu
 
  On Tue, Dec 2, 2014 at 3:11 PM, Neha Narkhede neha.narkh...@gmail.com
  wrote:
 
   Will doing one broker at
   a time by bringing the broker down, updating the code, and restarting it
  be
   sufficient?
  
   Yes this should work for the upgrade.
  
   On Mon, Dec 1, 2014 at 10:23 PM, Yu Yang yuyan...@gmail.com wrote:
  
Hi,
   
We have a kafka cluster that runs Kafka 0.8.1 that we are considering
upgrading to 0.8.1.1. The Kafka documentation
http://kafka.apache.org/documentation.html#upgrade mentions
  upgrading
from 0.8 to 0.8.1, but not from 0.8.1 to 0.8.1.1.  Will doing one
  broker
   at
 a time by bringing the broker down, updating the code, and restarting
 it
   be
sufficient? Any best practice suggestions?
   
Thanks!
   
Regards,
Yu
   
  
 



 --
 -- Guozhang



Re: Best practice for upgrading Kafka cluster from 0.8.1 to 0.8.1.1

2014-12-04 Thread Yu Yang
Thanks, Guozhang!

On Thu, Dec 4, 2014 at 9:08 AM, Guozhang Wang wangg...@gmail.com wrote:

 You can still do the in-place upgrade, and the logs on the broker will be
 then mixed with uncompressed and compressed messages. This is fine also
 since the consumers are able to de-compress dynamically based on the
 message type when consuming the data.

 Guozhang

 On Wed, Dec 3, 2014 at 11:33 AM, Yu Yang yuyan...@gmail.com wrote:

  Guozhang,
 
  We haven't enabled message compression yet. In this case, what shall we do
  when we upgrade to 0.8.2?  Must we launch a new cluster, redirect the
  traffic to the new cluster, and turn off the old one?
 
  Thanks!
 
  -Yu
 
 
  On Tue, Dec 2, 2014 at 4:33 PM, Guozhang Wang wangg...@gmail.com
 wrote:
 
   Yu,
  
   Are you enabling message compression in 0.8.1 now? If you have already
  then
   upgrading to 0.8.2 will not change its behavior.
  
   Guozhang
  
   On Tue, Dec 2, 2014 at 4:21 PM, Yu Yang yuyan...@gmail.com wrote:
  
Hi Neha,
   
Thanks for the reply!  We know that Kafka 0.8.2 will be released
 soon.
  If
we want to upgrade to Kafka 0.8.2 and enable message compression,
 will
  we
 still be able to do this in the same way, or do we need to handle it
   differently?
   
Thanks!
   
Regards,
-Yu
   
On Tue, Dec 2, 2014 at 3:11 PM, Neha Narkhede 
 neha.narkh...@gmail.com
  
wrote:
   
 Will doing one broker at
  a time by bringing the broker down, updating the code, and
 restarting
  it
be
 sufficient?

 Yes this should work for the upgrade.

 On Mon, Dec 1, 2014 at 10:23 PM, Yu Yang yuyan...@gmail.com
 wrote:

  Hi,
 
  We have a kafka cluster that runs Kafka 0.8.1 that we are
  considering
  upgrading to 0.8.1.1. The Kafka documentation
  http://kafka.apache.org/documentation.html#upgrade mentions
upgrading
  from 0.8 to 0.8.1, but not from 0.8.1 to 0.8.1.1.  Will doing one
broker
 at
  a time by bringing the broker down, and
  restarting
   it
 be
  sufficient? Any best practice suggestions?
 
  Thanks!
 
  Regards,
  Yu
 

   
  
  
  
   --
   -- Guozhang
  
 



 --
 -- Guozhang



Re: Best practice for upgrading Kafka cluster from 0.8.1 to 0.8.1.1

2014-12-02 Thread Neha Narkhede
Will doing one broker at
a time by bringing the broker down, updating the code, and restarting it be
sufficient?

Yes, this should work for the upgrade.
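
As a sketch, such a rolling bounce could look like the following loop (host
names, install paths, and the start/stop mechanism are illustrative; the
point is to let under-replicated partitions clear before each next broker):

for broker in kafka1 kafka2 kafka3; do
  # stop the broker, swap in the 0.8.1.1 build, and start it again
  ssh "$broker" '/opt/kafka/bin/kafka-server-stop.sh'
  ssh "$broker" 'tar -C /opt -xzf /tmp/kafka_2.10-0.8.1.1.tgz &&
                 ln -sfn /opt/kafka_2.10-0.8.1.1 /opt/kafka'
  ssh "$broker" 'nohup /opt/kafka/bin/kafka-server-start.sh \
                 /opt/kafka/config/server.properties >/tmp/kafka.log 2>&1 &'
  # proceed only once no partition is under-replicated
  until [ -z "$(/opt/kafka/bin/kafka-topics.sh --describe \
      --zookeeper zk1:2181 --under-replicated-partitions)" ]; do
    sleep 10
  done
done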

On Mon, Dec 1, 2014 at 10:23 PM, Yu Yang yuyan...@gmail.com wrote:

 Hi,

 We have a kafka cluster that runs Kafka 0.8.1 that we are considering
 upgrading to 0.8.1.1. The Kafka documentation
 http://kafka.apache.org/documentation.html#upgrade mentions upgrading
 from 0.8 to 0.8.1, but not from 0.8.1 to 0.8.1.1.  Will doing one broker at
 a time by bringing the broker down, updating the code, and restarting it be
 sufficient? Any best practice suggestions?

 Thanks!

 Regards,
 Yu



Re: Best practice for upgrading Kafka cluster from 0.8.1 to 0.8.1.1

2014-12-02 Thread Guozhang Wang
Yu,

Are you enabling message compression in 0.8.1 now? If you have already then
upgrading to 0.8.2 will not change its behavior.

Guozhang
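
For reference, compression in the 0.8.x line is a producer-side setting; a
minimal producer.properties sketch for the old producer (the codec and topic
names are illustrative):

# compress everything the producer sends:
compression.codec=snappy
# or limit compression to specific topics:
compressed.topics=topicA,topicB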

On Tue, Dec 2, 2014 at 4:21 PM, Yu Yang yuyan...@gmail.com wrote:

 Hi Neha,

 Thanks for the reply!  We know that Kafka 0.8.2 will be released soon. If
 we want to upgrade to Kafka 0.8.2 and enable message compression, will we
  still be able to do this in the same way, or do we need to handle it differently?

 Thanks!

 Regards,
 -Yu

 On Tue, Dec 2, 2014 at 3:11 PM, Neha Narkhede neha.narkh...@gmail.com
 wrote:

  Will doing one broker at
  a time by bringing the broker down, updating the code, and restarting it
 be
  sufficient?
 
  Yes this should work for the upgrade.
 
  On Mon, Dec 1, 2014 at 10:23 PM, Yu Yang yuyan...@gmail.com wrote:
 
   Hi,
  
   We have a kafka cluster that runs Kafka 0.8.1 that we are considering
   upgrading to 0.8.1.1. The Kafka documentation
   http://kafka.apache.org/documentation.html#upgrade mentions
 upgrading
   from 0.8 to 0.8.1, but not from 0.8.1 to 0.8.1.1.  Will doing one
 broker
  at
   a time by bringing the broker down, updating the code, and
  be
   sufficient? Any best practice suggestions?
  
   Thanks!
  
   Regards,
   Yu
  
 




-- 
-- Guozhang


Best practice for upgrading Kafka cluster from 0.8.1 to 0.8.1.1

2014-12-01 Thread Yu Yang
Hi,

We have a kafka cluster that runs Kafka 0.8.1 that we are considering
upgrading to 0.8.1.1. The Kafka documentation
http://kafka.apache.org/documentation.html#upgrade mentions upgrading
from 0.8 to 0.8.1, but not from 0.8.1 to 0.8.1.1.  Will doing one broker at
a time by bringing the broker down, updating the code, and restarting it be
sufficient? Any best practice suggestions?

Thanks!

Regards,
Yu