--bootstrap-server [server:port] upgrade --metadata 3.6
This is doable using a declarative Kubernetes Job.
Thank you very much, I appreciate your help.
Jiri
From: Jakub Scholz
Sent: Thursday, October 19, 2023 5:18 PM
To: users@kafka.apache.org
Subject: [External Email] Re: Upgrading Kafka Kraft
Hi Jiří,
Why can't you run it from another Pod? You should be able to specify
--bootstrap-server and point it to the brokers to connect to. You can also
pass further properties to it using the --command-config option. It should
also be possible to use it from the Admin API.
Hello all,
The final step of the upgrade procedure is to run a command like:
"./bin/kafka-features.sh upgrade --metadata 3.6"
In the Kubernetes world, which works with desired-state configuration (YAMLs
and their higher-level abstractions), this is quite complicated.
The first thing I tried to
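One declarative shape for that final imperative step, along the lines the thread suggests, is a one-shot Kubernetes Job wrapping kafka-features.sh. This is only a sketch: the image tag, bootstrap service name, and script path are assumptions, not from the thread.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kafka-metadata-upgrade
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: upgrade
          image: apache/kafka:3.6.0            # hypothetical image tag
          command:
            - ./bin/kafka-features.sh
            - --bootstrap-server
            - my-cluster-kafka-bootstrap:9092  # hypothetical service name
            - upgrade
            - --metadata
            - "3.6"
```

Because a Job runs to completion once, it fits a desired-state workflow better than exec'ing into a broker Pod; --command-config can be added the same way if the cluster needs client properties.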
Hi,
Is there any wiki for upgrading Kafka from 2.5 to 2.8 in production
projects/spring-kafka
>
> Best.
> Bruno
>
> On 24.09.21 00:25, Chang Liu wrote:
> > Hi Kafka users,
> >
> > I start running into the following error after upgrading `Kafka-clients`
> from 2.5.0 to 3.0.0. And I see the same error with 2.8.1. I don’t see a
> working
Hi Kafka users,
I started running into the following error after upgrading `Kafka-clients` from
2.5.0 to 3.0.0. And I see the same error with 2.8.1. I don’t see a working
solution by searching on Google:
https://stackoverflow.com/questions/46914225/kafka-cannot-create-embedded-kafka-server
Bumping this up with a new update:
I've investigated another occurrence of this exception.
For the analysis, I used:
1) a memory dump that was taken from the broker
2) kafka log file
3) kafka state-change log
4) log, index and time-index files of a failed segment
5) Kafka source code, version 2.3.1
Filed JIRA bug:
https://issues.apache.org/jira/browse/KAFKA-9213
Can you please file a JIRA?
Ismael
On Tue, Nov 19, 2019 at 11:52 AM Daniyar Kulakhmetov <
dkulakhme...@liftoff.io> wrote:
> Hi Kafka users,
>
> We updated our Kafka cluster from 1.1.0 version to 2.3.1.
> Message format and inter-broker protocol versions left the same:
>
>
Hi,
We followed the upgrade instruction (
https://kafka.apache.org/documentation/#upgrade) up to step 2, and as it is
said in step 3
"Once the cluster's behavior and performance has been verified, bump the
protocol version by editing", we were verifying the cluster's behavior.
Thanks,
On Tue, Nov
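The "bump the protocol version by editing" step referred to above is typically a change like this in each broker's server.properties, followed by one more rolling restart (version numbers illustrative, not from the thread):

```properties
# After verifying cluster behavior on the new binaries:
inter.broker.protocol.version=2.3
# Optionally, once all clients are upgraded as well:
log.message.format.version=2.3
```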
Hi,
Is there any reason why you haven’t performed the upgrade based on the official
docs? Or is this something you’re planning to do now?
Thanks,
On Tue, 19 Nov 2019 at 19:52, Daniyar Kulakhmetov
wrote:
> Hi Kafka users,
>
> We updated our Kafka cluster from 1.1.0 version to 2.3.1.
> Message
Hi Kafka users,
We updated our Kafka cluster from version 1.1.0 to 2.3.1.
Message format and inter-broker protocol versions were left the same:
inter.broker.protocol.version=1.1
log.message.format.version=1.1
After upgrading, we started to get some occasional exceptions:
2019/11/19 05:30:53 INFO
Hi kafka Users,
I have a novice question about a Kafka upgrade. This is the first time I'm
upgrading Kafka on Linux.
My current version is "kafka_2.11-1.0.0.tgz". When I initially set it up I had
a folder named kafka_2.11-1.0.0.
Now I downloaded a new version, "kafka_2.12-2.3.0.tgz". If I extract it
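A common pattern for side-by-side versioned install directories like these is a version-independent symlink, so service scripts never hardcode a version. A sketch using throwaway directories (real setups keep data and config outside the extracted tree):

```shell
# Work in a scratch directory so nothing real is touched.
cd "$(mktemp -d)"
mkdir -p kafka_2.11-1.0.0 kafka_2.12-2.3.0   # stand-ins for the extracted tgz trees

# Point a stable "kafka" symlink at the new version; flipping it back is the rollback.
ln -sfn kafka_2.12-2.3.0 kafka
readlink kafka   # -> kafka_2.12-2.3.0
```

With this layout, the old directory stays untouched until the new version is proven out.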
On Tue, Jan 9, 2018 at 4:50 PM, ZigSphere Tech
wrote:
> Is it easy to upgrade from Kafka version 0.10.2 to 1.0.0 or do I need to
> upgrade to version 0.11.0 first? Anything to expect?
>
We just did (almost) exactly this upgrade. 2.11-0.10.1.0 to 2.11-1.0.0.
The main
Hello All,
Is it easy to upgrade from Kafka version 0.10.2 to 1.0.0 or do I need to
upgrade to version 0.11.0 first? Anything to expect?
Thanks
[re-posting]
Hi All,
1. Upgrade the brokers one at a time: shut down the broker, update the
code, and restart it.
What does it mean to "update the code"?
Does it mean replacing the old lib folder with the latest, or replacing both
lib and bin with the latest?
Could someone clarify?
On Fri, Sep 8, 2017
Within 24 hours the brokers started getting killed due to full disks.
The retention period is 48 hours, and with 0.8 the disks used to fill to ~65%.
What is going wrong here?
This is a production system. I am reducing the retention to 24 hours for the
time being.
Hi,
I want to do a rolling upgrade of the Kafka server from 0.9 to 0.10.2.0. Should
I keep the path of the commit logs the same? What is the impact of keeping the
path the same vs. different?
Thanks in advance.
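For context on the path question: the broker locates its existing partition data through log.dirs, so keeping the same path lets the upgraded broker reuse its data, while a new, empty path makes it start without local data and re-replicate from the other brokers. A sketch (the path itself is illustrative):

```properties
# server.properties — keep the same data path before and after the upgrade:
log.dirs=/var/kafka-logs
```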
Looking at the output you pasted, broker `0` was the one being upgraded? A
few things to check:
1. Does broker `0` connect to the other brokers after the restart?
2. Is broker `0` able to connect to ZooKeeper?
3. Does everything look OK in the controller and state-change logs on the
controller node?
Yes, I've set the inter.broker.protocol.version=0.10.0 before restarting
each broker on a previous update. Clusters currently run with this config.
On 03/14/2017 12:34 PM, Ismael Juma wrote:
So, to double-check, you set inter.broker.protocol.version=0.10.0 before
bouncing each broker?
On
So, to double-check, you set inter.broker.protocol.version=0.10.0 before
bouncing each broker?
On Tue, Mar 14, 2017 at 11:22 AM, Thomas KIEFFER <
thomas.kief...@olamobile.com.invalid> wrote:
> Hello Ismael,
>
> Thank you for your feedback.
>
> Yes I've done these changes on a previous upgrade
Hello Ismael,
Thank you for your feedback.
Yes, I've done these changes on a previous upgrade and set them
accordingly to the new version when trying to do the upgrade.
inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0,
0.10.0 or 0.10.1).
Hi Thomas,
Did you follow the instructions:
https://kafka.apache.org/documentation/#upgrade
Ismael
On Mon, Mar 13, 2017 at 9:43 AM, Thomas KIEFFER <
thomas.kief...@olamobile.com.invalid> wrote:
> I'm trying to perform an upgrade of 2 kafka cluster of 5 instances, When
> I'm doing the switch
I'm trying to perform an upgrade of 2 Kafka clusters of 5 instances each. When
I'm doing the switch between 0.10.0.1 and 0.10.1.0 or 0.10.2.0, I saw
that ISR is lost when I upgrade one instance. I didn't find anything
relevant about this problem yet; the logs seem just fine.
eg.
kafka-topics.sh
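A quick way to watch for the lost-ISR symptom after bouncing each instance is the under-replicated-partitions query. The block below only constructs and prints the command; the ZooKeeper address is an example, and on these 0.10.x versions kafka-topics.sh still takes --zookeeper rather than --bootstrap-server:

```shell
ZK="localhost:2181"   # example address, not from the thread
CHECK="./bin/kafka-topics.sh --describe --under-replicated-partitions --zookeeper ${ZK}"
echo "${CHECK}"   # run this against the cluster after each broker bounce
```

When the tool prints nothing, all partitions have their full ISR back, and it is safe to bounce the next broker.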
That's right, there should be no performance penalty if the broker is
configured to use the older message format. The downside is that timestamps
introduced in message format version 2 won't be supported in that case.
Ismael
On Tue, Nov 29, 2016 at 11:31 PM, Hans Jespersen
The performance impact of upgrading and some settings you can use to
mitigate this impact when the majority of your clients are still 0.8.x are
documented on the Apache Kafka website
https://kafka.apache.org/documentation#upgrade_10_performance_impact
-hans
/**
* Hans Jespersen, Principal
I may be wrong, but since there have been message format changes between
0.8.2 and 0.10.1, there will be a performance penalty if the clients are
not also upgraded. This is because you lose the zero-copy semantics on the
server side as the messages have to be converted to the old format before
The only obvious downside I'm aware of is not being able to benefit
from the bugfixes in the client. We are essentially doing the same
thing; we upgraded the broker side to 0.10.0.0 but have yet to upgrade
our clients from 0.8.1.x.
On Tue, 2016-11-29 at 09:30 -0500, Tim Visher wrote:
> Hi
Most people upgrade clients to enjoy new client features, fix bugs or
improve performance. If none of these apply, no need to upgrade.
Since you are upgrading to 0.10.1.0, read the upgrade docs closely -
there are specific server settings regarding the message format that
you need to configure a
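The server settings alluded to above amount to roughly these two lines in each broker's server.properties for an 0.8.2.1 → 0.10.1.0 rolling upgrade, per the upgrade notes (versions shown are illustrative):

```properties
# Speak the new inter-broker protocol once all brokers run 0.10.1.x:
inter.broker.protocol.version=0.10.1
# Keep the old on-disk message format while most clients are still 0.8.x,
# avoiding per-fetch down-conversion (and the loss of zero-copy sends):
log.message.format.version=0.8.2
```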
Hi Everyone,
I have an install of Kafka 0.8.2.1 which I'm upgrading to 0.10.1.0. I see
that Kafka 0.10.1.0 should be backwards compatible with client libraries
written for older versions but that newer client libraries are only
compatible with their version and up.
My question is what
Yes, you can use constraints and the same volumes. That can be trusted.
From: Radoslaw Gruchalski <ra...@gruchalski.com>
To: "Karnam, Kiran" <kkar...@ea.com>; users@kafka.apache.org
Sent: Thursday, 26 May 2016 2:31 AM
Subject: Re: upgrading Kafka
Kiran,
If you’re
More specifically, see:
https://github.com/mesos/kafka#failed-broker-recovery
On Wed, May 25, 2016 at 6:02 PM, craig w wrote:
> The Kafka framework can be used to deploy brokers. It will also bring a
> broker back up on the server it was last running on (within a certain
>
The Kafka framework can be used to deploy brokers. It will also bring a
broker back up on the server it was last running on (within a certain
amount of time).
However the Kafka framework doesn't run brokers in containers.
On Wednesday, May 25, 2016, Radoslaw Gruchalski
Kiran,
If you’re using Docker on Mesos, you can use constraints to force a
relaunched Kafka broker to always relaunch on the same agent, and you can
use Docker volumes to persist the data.
Not sure if https://github.com/mesos/kafka provides these capabilities.
–
Best regards,
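Sketching that suggestion as a Marathon app definition, the constraint pins the broker to one agent and the host-path volume keeps its data across container replacements. All IDs, hostnames, images, and paths here are hypothetical:

```json
{
  "id": "/kafka/broker-0",
  "instances": 1,
  "constraints": [["hostname", "CLUSTER", "agent-1.example.com"]],
  "container": {
    "type": "DOCKER",
    "docker": { "image": "my-org/kafka:0.10.2" },
    "volumes": [
      {
        "hostPath": "/data/kafka-broker-0",
        "containerPath": "/var/kafka-logs",
        "mode": "RW"
      }
    ]
  }
}
```

With the data on the agent's disk, replacing the container for an upgrade leaves the broker's partitions in place, at the cost of tying that broker to a single machine.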
Hi All,
We are using Docker containers to deploy Kafka, and we are planning to use Mesos
for the deployment and maintenance of the containers. Is there a way to persist
the data during an upgrade so that it is available to the upgraded container?
We don't want the clusters to go into chaos with
Guozhang,
We haven't enabled message compression yet. In this case, what shall we do
when we upgrade to 0.8.2? Must we launch a new cluster, redirect the
traffic to the new cluster, and turn off the old one?
Thanks!
-Yu
On Tue, Dec 2, 2014 at 4:33 PM, Guozhang Wang wangg...@gmail.com wrote:
Thanks, Guozhang!
On Thu, Dec 4, 2014 at 9:08 AM, Guozhang Wang wangg...@gmail.com wrote:
You can still do the in-place upgrade, and the logs on the broker will then be
mixed with uncompressed and compressed messages. This is also fine,
since the consumers are able to decompress dynamically
Will doing one broker at
a time by bringing the broker down, updating the code, and restarting it be
sufficient?
Yes this should work for the upgrade.
On Mon, Dec 1, 2014 at 10:23 PM, Yu Yang yuyan...@gmail.com wrote:
Hi,
We have a kafka cluster that runs Kafka 0.8.1 that we are considering
Yu,
Are you enabling message compression in 0.8.1 now? If you have already then
upgrading to 0.8.2 will not change its behavior.
Guozhang
On Tue, Dec 2, 2014 at 4:21 PM, Yu Yang yuyan...@gmail.com wrote:
Hi Neha,
Thanks for the reply! We know that Kafka 0.8.2 will be released soon. If
we
Hi,
We have a Kafka cluster that runs Kafka 0.8.1 that we are considering
upgrading to 0.8.1.1. The Kafka documentation
http://kafka.apache.org/documentation.html#upgrade mentions upgrading
from 0.8 to 0.8.1, but not from 0.8.1 to 0.8.1.1. Will doing one broker at
a time by bringing the broker