Hi Nicolas,

Yes, those steps are important and very prescriptive. You need to follow them exactly as prescribed by the doctor :)

In your scenario, inter.broker.protocol.version needs to stay at 2.0 at the beginning. Then, once everything has stabilized and you are satisfied, you bump it to 2.8 and restart the brokers. You can still go back at any point before that second restart, but once you have changed it to 2.8 and restarted the brokers, you can't fall back to 2.0 anymore. The two phases look roughly like the sketch below.
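For reference, the relevant server.properties lines for the two phases would look roughly like this (a sketch; 2.0 stands in for whatever version you are coming from, and the official upgrade notes also cover log.message.format.version):

  # Phase 1: keep the protocol pinned to the old version while you swap
  # the binaries to 2.8.1 and roll-restart the brokers one at a time
  inter.broker.protocol.version=2.0

  # Phase 2: once the cluster is stable on the new binaries, bump the
  # protocol and do one more rolling restart; this second step is the
  # irreversible one
  inter.broker.protocol.version=2.8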
Israel Ekpo
Lead Instructor, IzzyAcademy.com
https://www.youtube.com/c/izzyacademy
https://izzyacademy.com/

On Tue, Jan 25, 2022 at 9:50 AM Nicolas Carlot
<nicolas.car...@chronopost.fr.invalid> wrote:

> Hi Israel,
>
> I followed these steps, but instead of setting inter.broker.protocol.version
> to 2.8 in step 3 I just removed it. I don't know if that changes anything?
> Regarding ZooKeeper, we're using 3.5.8.
> Luckily this was a pre-production cluster (we were testing the upgrade here)
> and we do have backups.
>
> Is there any way to ensure that all ZK nodes are in sync when performing
> the upgrade?
> I fail to understand how this can lead to such a situation. Shouldn't the
> out-of-sync ZK node's data be discarded automatically?
>
> On Tue, Jan 25, 2022 at 15:32, Israel Ekpo <israele...@gmail.com> wrote:
>
> > Hi Nicolas,
> >
> > Did you follow the upgrade steps here?
> >
> > https://kafka.apache.org/documentation/#upgrade_2_8_1
> >
> > Also, the recommended ZooKeeper version for 2.0 (3.4.13) is different
> > from that of 2.8 (3.5.9):
> >
> > https://github.com/apache/kafka/blob/2.0/gradle/dependencies.gradle#L87
> > https://github.com/apache/kafka/blob/2.8/gradle/dependencies.gradle#L122
> >
> > It looks like your ZooKeeper quorum was not in sync before the upgrade:
> > two of the nodes agree on the topic id, but one is different.
> >
> > If this is production, you may have to decide which data to keep. You
> > may need to upgrade ZooKeeper first, make sure all nodes are in sync
> > with your approved data, and then bring Kafka back online.
> >
> > I generally recommend backing up your cluster data (in remote storage)
> > and your config data (in version control) before any upgrade, so that
> > you can always restore the state if things do not go well.
> >
> > This could be due to ZooKeeper version incompatibility or to a bad
> > state of the data in ZooKeeper before the upgrade.
> >
> > Israel Ekpo
> > Lead Instructor, IzzyAcademy.com
> > https://www.youtube.com/c/izzyacademy
> > https://izzyacademy.com/
> >
> >
> > On Tue, Jan 25, 2022 at 7:21 AM Nicolas Carlot
> > <nicolas.car...@chronopost.fr.invalid> wrote:
> >
> > > Hello everyone,
> > >
> > > I just had a major failure while upgrading a Kafka cluster from 2.0
> > > to 2.8.1, following the provided migration process.
> > > I understand that a topicId is now assigned to each topic, stored
> > > both in ZooKeeper and in the partition.metadata file of each
> > > partition.
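(For context, a sketch of the two places 2.8 stores the topic id. The log-dir path below is an assumption; use your own log.dirs. The id value is the one from the describe output that follows.)

  # On each broker, per partition directory:
  $ cat /var/kafka-logs/PARCEL360.LT-0/partition.metadata
  version: 0
  topic_id: XjIuCqy2TcKu-M5smrz9iA

  # And in the topic znode in ZooKeeper (JSON, abbreviated):
  $ /opt/java/j2ee/kafka/bin/zookeeper-shell.sh satezookeeperi1:62181 get /brokers/topics/PARCEL360.LT
  {"version":3,"topic_id":"XjIuCqy2TcKu-M5smrz9iA","partitions":{"0":[1,2,3],...}}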
> > > While describing the topic, it seems I have a different topicId
> > > depending on the ZK node I'm querying:
> > >
> > > [kafkaadm@lyn3e154(PFI):~ 13:16:28]$ /opt/java/j2ee/kafka/bin/kafka-topics.sh --zookeeper satezookeeperi1:62181 --describe --topic PARCEL360.LT
> > > Topic: PARCEL360.LT  TopicId: XjIuCqy2TcKu-M5smrz9iA  PartitionCount: 10  ReplicationFactor: 3  Configs: compression.type=lz4
> > >     Topic: PARCEL360.LT  Partition: 0  Leader: 3  Replicas: 1,2,3  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 1  Leader: 3  Replicas: 2,3,1  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 2  Leader: 3  Replicas: 3,1,2  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 3  Leader: 3  Replicas: 1,2,3  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 4  Leader: 3  Replicas: 2,3,1  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 5  Leader: 3  Replicas: 3,1,2  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 6  Leader: 3  Replicas: 1,2,3  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 7  Leader: 3  Replicas: 2,3,1  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 8  Leader: 3  Replicas: 3,1,2  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 9  Leader: 3  Replicas: 1,2,3  Isr: 3
> > > [kafkaadm@lyn3e154(PFI):~ 13:17:06]$ /opt/java/j2ee/kafka/bin/kafka-topics.sh --zookeeper satezookeeperi2:62181 --describe --topic PARCEL360.LT
> > > Topic: PARCEL360.LT  TopicId: zwbQDd9NRjGwq-v2twHfIQ  PartitionCount: 10  ReplicationFactor: 3  Configs: compression.type=lz4
> > >     Topic: PARCEL360.LT  Partition: 0  Leader: 3  Replicas: 1,2,3  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 1  Leader: 3  Replicas: 2,3,1  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 2  Leader: 3  Replicas: 3,1,2  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 3  Leader: 3  Replicas: 1,2,3  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 4  Leader: 3  Replicas: 2,3,1  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 5  Leader: 3  Replicas: 3,1,2  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 6  Leader: 3  Replicas: 1,2,3  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 7  Leader: 3  Replicas: 2,3,1  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 8  Leader: 3  Replicas: 3,1,2  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 9  Leader: 3  Replicas: 1,2,3  Isr: 3
> > > [kafkaadm@lyn3e154(PFI):~ 13:17:11]$ /opt/java/j2ee/kafka/bin/kafka-topics.sh --zookeeper satezookeeperi3:62181 --describe --topic PARCEL360.LT
> > > Topic: PARCEL360.LT  TopicId: XjIuCqy2TcKu-M5smrz9iA  PartitionCount: 10  ReplicationFactor: 3  Configs: compression.type=lz4
> > >     Topic: PARCEL360.LT  Partition: 0  Leader: 3  Replicas: 1,2,3  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 1  Leader: 3  Replicas: 2,3,1  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 2  Leader: 3  Replicas: 3,1,2  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 3  Leader: 3  Replicas: 1,2,3  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 4  Leader: 3  Replicas: 2,3,1  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 5  Leader: 3  Replicas: 3,1,2  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 6  Leader: 3  Replicas: 1,2,3  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 7  Leader: 3  Replicas: 2,3,1  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 8  Leader: 3  Replicas: 3,1,2  Isr: 3
> > >     Topic: PARCEL360.LT  Partition: 9  Leader: 3  Replicas: 1,2,3  Isr: 3
> > >
> > > Any idea what's happening here?
>
> --
> *Nicolas Carlot*
> *Lead dev*, Direction des Systèmes d'Information
> 3 boulevard Romain Rolland
> 75014 Paris
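For anyone who lands on this thread with the same symptom, here is one way to compare the topic metadata each ZooKeeper node is serving (a sketch reusing the hosts and paths from the thread; with the ZK 3.5 client, a plain "get" prints only the znode data, so the JSON should be the last line of output):

  for zk in satezookeeperi1:62181 satezookeeperi2:62181 satezookeeperi3:62181; do
    echo "== $zk =="
    # ask each node directly for the topic znode; compare the topic_id fields
    /opt/java/j2ee/kafka/bin/zookeeper-shell.sh "$zk" \
      get /brokers/topics/PARCEL360.LT 2>/dev/null | tail -n 1
  done

If the topic_id values differ across nodes, as in the describe output above, the quorum itself is serving inconsistent data and should be repaired (or restored from backup) before the brokers are brought back online.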