Re: __consumer_offsets & __transaction_state topics have ReplicationFactor: 1

2022-12-16 Thread Chris Peart
Hi Andrew, 
Would you be able to provide an example of the JSON with all the partitions in, 
please? I tried this on our dev cluster but it didn’t work. 
Many Thanks,
Chris
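
For reference, a minimal sketch that generates such a reassignment JSON for every partition of __consumer_offsets. It assumes the default of 50 partitions (offsets.topic.num.partitions) and uses placeholder broker IDs 5, 6 and 7; substitute three of your new brokers and adjust the range to match your cluster:

```shell
# Sketch: build reassignment.json covering partitions 0-49 of
# __consumer_offsets, assigning each one three replicas.
# Broker IDs 5, 6 and 7 are placeholders for three of the new brokers.
{
  printf '{"version":1,"partitions":['
  for p in $(seq 0 49); do
    # Comma-separate all entries after the first one
    [ "$p" -gt 0 ] && printf ','
    printf '{"topic":"__consumer_offsets","partition":%s,"replicas":[5,6,7]}' "$p"
  done
  printf ']}\n'
} > reassignment.json
```

Passing the resulting file to kafka-reassign-partitions.sh --execute, as in Andrew's example quoted below, should start the reassignment; the same loop with the topic name swapped in should cover __transaction_state as well.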

> On 16 Dec 2022, at 9:03 pm, Andrew Grant  wrote:
> 
> Hey Chris,
> 
> You'd need to do the same for all partitions. I just showed partition 49 as
> an example - I picked 49 because when I ran a describe it showed up at the
> bottom of my terminal :) You could do all the partitions in the same
> reassignment. In that JSON I just put partition 49 but you could add all
> the other partitions in it.
> 
> Yeah, I'm pretty sure you'd do basically the same for __transaction_state. I
> haven't tested that myself locally, so it might be worth doing so on your end.
> 
> Hope that helps a bit.
> 
> Andrew
> 
>> On Fri, Dec 16, 2022 at 11:41 AM Chris Peart  wrote:
>> 
>> Hi Andrew,
>> 
>> Thanks for the speedy reply. So do I just need to do this for partition
>> 49? What about partitions 0-48? Will these be covered by reassigning
>> partition 49?
>> 
>> Do I need to do this for the __transaction_state topics too?
>> 
>> Many thanks,
>> Chris
>> 
>>> On 16 Dec 2022, at 4:17 pm, Andrew Grant 
>> wrote:
>>> 
>>> Hey Chris,
>>> I think you should be able to use the reassignment tool to add replicas.
>>> You should be able to do something similar to migrate the partitions away
>>> from the old brokers and onto the new ones and also increase the
>>> replication factor at the same time. I tested just increasing the
>>> replication factor with the following commands:
>>> 
>>> kafka % ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic
>>> __consumer_offsets --describe | grep 'Partition: 49'
>>> Topic: __consumer_offsets Partition: 49 Leader: 1 Replicas: 1 Isr: 1
>>> Offline:
>>> 
>>> kafka % cat reassignment.json
>>> {
>>> "version": 1,
>>> "partitions": [
>>>   {
>>> "topic": "__consumer_offsets",
>>> "partition": 49,
>>> "replicas": [ 1, 0 ]
>>>   }
>>> ]
>>> }
>>> 
>>> kafka % ./bin/kafka-reassign-partitions.sh --bootstrap-server
>>> localhost:9092 --reassignment-json-file reassignment.json --execute
>>> Current partition replica assignment
>>> 
>>> 
>> {"version":1,"partitions":[{"topic":"__consumer_offsets","partition":49,"replicas":[1],"log_dirs":["any"]}]}
>>> 
>>> Save this to use as the --reassignment-json-file option during rollback
>>> Successfully started partition reassignment for __consumer_offsets-49
>>> kafka % ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic
>>> __consumer_offsets --describe | grep 'Partition: 49'
>>> Topic: __consumer_offsets Partition: 49 Leader: 1 Replicas: 1,0 Isr: 1,0
>>> Offline:
>>> 
>>> 
>>> Andrew
>>> 
>>> 
 On Fri, Dec 16, 2022 at 9:46 AM Chris Peart  wrote:
 
 
 
 Hi,
 
 We have a Kafka production cluster that was set up with the defaults for
 __consumer_offsets & __transaction_state topics.
 
 Is there a way to increase the replication factor from 1 to 3 using the
 kafka-reassign-partitions tool?
 
 We are also replacing our 4 brokers with new brokers. This has been
 completed, so we now have an 8-broker cluster and have migrated all the
 topics to the new brokers using the reassign tool, except for the
 __consumer_offsets & __transaction_state topics.
 
 We stopped Kafka on the old brokers today, but all our consumers failed
 due to the __consumer_offsets & __transaction_state topics residing on
 the old brokers.
 
 I'm thinking we could move the __consumer_offsets &
 __transaction_state topics to the new brokers using the reassign tool; I
 have done this on our dev platform and all is good. If you think this is
 a good idea, we can then stop Kafka on the old nodes and then work on
 the replication factor on the new nodes.
 
 The problem I have is how do we change the replication factor to 3 after
 we migrate the __consumer_offsets & __transaction_state topics?
 
 Thanks in advance.
 
 Chris
>> 
>> 



Re: [ANNOUNCE] New committer: Ron Dagostino

2022-12-16 Thread Raymond Ng
Congratulations, Ron!

/Ray

On Fri, Dec 16, 2022 at 3:04 PM Ron Dagostino  wrote:

> Thanks again, everyone!
>
> Ron
>
> > On Dec 16, 2022, at 12:37 PM, Bill Bejeck  wrote:
> >
> > Congratulations, Ron!
> >
> > -Bill
> >
> >> On Fri, Dec 16, 2022 at 12:33 PM Matthias J. Sax 
> wrote:
> >>
> >> Congrats!
> >>
> >>> On 12/15/22 7:09 AM, Rajini Sivaram wrote:
> >>> Congratulations, Ron! Well deserved!!
> >>>
> >>> Regards,
> >>>
> >>> Rajini
> >>>
> >>> On Thu, Dec 15, 2022 at 11:42 AM Ron Dagostino 
> >> wrote:
> >>>
>  Thank you, everyone!
> 
>  Ron
> 
> > On Dec 15, 2022, at 5:09 AM, Bruno Cadonna 
> wrote:
> >
> > Congrats Ron!
> >
> > Best,
> > Bruno
> >
> >> On 15.12.22 10:23, Viktor Somogyi-Vass wrote:
> >> Congrats Ron! :)
> >>> On Thu, Dec 15, 2022 at 10:22 AM Mickael Maison <
>  mickael.mai...@gmail.com>
> >>> wrote:
> >>> Congratulations Ron!
> >>>
> >>> On Thu, Dec 15, 2022 at 9:41 AM Eslam Farag 
>  wrote:
> 
>  Congratulations, Ron ☺️
> 
>  On Thu, 15 Dec 2022 at 10:40 AM Tom Bentley 
>  wrote:
> 
> > Congratulations!
> >
> > On Thu, 15 Dec 2022 at 07:40, Satish Duggana <
>  satish.dugg...@gmail.com
> 
> > wrote:
> >
> >> Congratulations, Ron!!
> >>
> >> On Thu, 15 Dec 2022 at 07:48, ziming deng <
> >> dengziming1...@gmail.com
> >
> >> wrote:
> >>
> >>> Congratulations, Ron!
> >>> Well deserved!
> >>>
> >>> --
> >>> Ziming
> >>>
>  On Dec 15, 2022, at 09:16, Luke Chen 
> wrote:
> 
>  Congratulations, Ron!
>  Well deserved!
> 
>  Luke
> >>>
> >>>
> >>
> >
> >>>
> 
> >>>
> >>
>


Re: [ANNOUNCE] New committer: Ron Dagostino

2022-12-16 Thread Ron Dagostino
Thanks again, everyone!

Ron

> On Dec 16, 2022, at 12:37 PM, Bill Bejeck  wrote:
> 
> Congratulations, Ron!
> 
> -Bill
> 
>> On Fri, Dec 16, 2022 at 12:33 PM Matthias J. Sax  wrote:
>> 
>> Congrats!
>> 
>>> On 12/15/22 7:09 AM, Rajini Sivaram wrote:
>>> Congratulations, Ron! Well deserved!!
>>> 
>>> Regards,
>>> 
>>> Rajini
>>> 
>>> On Thu, Dec 15, 2022 at 11:42 AM Ron Dagostino 
>> wrote:
>>> 
 Thank you, everyone!
 
 Ron
 
> On Dec 15, 2022, at 5:09 AM, Bruno Cadonna  wrote:
> 
> Congrats Ron!
> 
> Best,
> Bruno
> 
>> On 15.12.22 10:23, Viktor Somogyi-Vass wrote:
>> Congrats Ron! :)
>>> On Thu, Dec 15, 2022 at 10:22 AM Mickael Maison <
 mickael.mai...@gmail.com>
>>> wrote:
>>> Congratulations Ron!
>>> 
>>> On Thu, Dec 15, 2022 at 9:41 AM Eslam Farag 
 wrote:
 
 Congratulations, Ron ☺️
 
 On Thu, 15 Dec 2022 at 10:40 AM Tom Bentley 
 wrote:
 
> Congratulations!
> 
> On Thu, 15 Dec 2022 at 07:40, Satish Duggana <
 satish.dugg...@gmail.com
 
> wrote:
> 
>> Congratulations, Ron!!
>> 
>> On Thu, 15 Dec 2022 at 07:48, ziming deng <
>> dengziming1...@gmail.com
> 
>> wrote:
>> 
>>> Congratulations, Ron!
>>> Well deserved!
>>> 
>>> --
>>> Ziming
>>> 
 On Dec 15, 2022, at 09:16, Luke Chen  wrote:
 
 Congratulations, Ron!
 Well deserved!
 
 Luke
>>> 
>>> 
>> 
> 
>>> 
 
>>> 
>> 


Re: __consumer_offsets & __transaction_state topics have ReplicationFactor: 1

2022-12-16 Thread Andrew Grant
Hey Chris,

You'd need to do the same for all partitions. I just showed partition 49 as
an example - I picked 49 because when I ran a describe it showed up at the
bottom of my terminal :) You could do all the partitions in the same
reassignment. In that JSON I just put partition 49 but you could add all
the other partitions in it.

Yeah, I'm pretty sure you'd do basically the same for __transaction_state. I
haven't tested that myself locally, so it might be worth doing so on your end.

Hope that helps a bit.

Andrew

On Fri, Dec 16, 2022 at 11:41 AM Chris Peart  wrote:

> Hi Andrew,
>
> Thanks for the speedy reply. So do I just need to do this for partition
> 49? What about partitions 0-48? Will these be covered by reassigning
> partition 49?
>
> Do I need to do this for the __transaction_state topics too?
>
> Many thanks,
> Chris
>
> > On 16 Dec 2022, at 4:17 pm, Andrew Grant 
> wrote:
> >
> > Hey Chris,
> > I think you should be able to use the reassignment tool to add replicas.
> > You should be able to do something similar to migrate the partitions away
> > from the old brokers and onto the new ones and also increase the
> > replication factor at the same time. I tested just increasing the
> > replication factor with the following commands:
> >
> > kafka % ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic
> > __consumer_offsets --describe | grep 'Partition: 49'
> > Topic: __consumer_offsets Partition: 49 Leader: 1 Replicas: 1 Isr: 1
> > Offline:
> >
> > kafka % cat reassignment.json
> > {
> >  "version": 1,
> >  "partitions": [
> >{
> >  "topic": "__consumer_offsets",
> >  "partition": 49,
> >  "replicas": [ 1, 0 ]
> >}
> >  ]
> > }
> >
> > kafka % ./bin/kafka-reassign-partitions.sh --bootstrap-server
> > localhost:9092 --reassignment-json-file reassignment.json --execute
> > Current partition replica assignment
> >
> >
> {"version":1,"partitions":[{"topic":"__consumer_offsets","partition":49,"replicas":[1],"log_dirs":["any"]}]}
> >
> > Save this to use as the --reassignment-json-file option during rollback
> > Successfully started partition reassignment for __consumer_offsets-49
> > kafka % ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic
> > __consumer_offsets --describe | grep 'Partition: 49'
> > Topic: __consumer_offsets Partition: 49 Leader: 1 Replicas: 1,0 Isr: 1,0
> > Offline:
> >
> >
> > Andrew
> >
> >
> >> On Fri, Dec 16, 2022 at 9:46 AM Chris Peart  wrote:
> >>
> >>
> >>
> >> Hi,
> >>
> >> We have a Kafka production cluster that was set up with the defaults for
> >> __consumer_offsets & __transaction_state topics.
> >>
> >> Is there a way to increase the replication factor from 1 to 3 using the
> >> kafka-reassign-partitions tool?
> >>
> >> We are also replacing our 4 brokers with new brokers. This has been
> >> completed, so we now have an 8-broker cluster and have migrated all the
> >> topics to the new brokers using the reassign tool, except for the
> >> __consumer_offsets & __transaction_state topics.
> >>
> >> We stopped Kafka on the old brokers today, but all our consumers failed
> >> due to the __consumer_offsets & __transaction_state topics residing on
> >> the old brokers.
> >>
> >> I'm thinking we could move the __consumer_offsets &
> >> __transaction_state topics to the new brokers using the reassign tool; I
> >> have done this on our dev platform and all is good. If you think this is
> >> a good idea, we can then stop Kafka on the old nodes and then work on
> >> the replication factor on the new nodes.
> >>
> >> The problem I have is how do we change the replication factor to 3 after
> >> we migrate the __consumer_offsets & __transaction_state topics?
> >>
> >> Thanks in advance.
> >>
> >> Chris
>
>


Re: [ANNOUNCE] New committer: Ron Dagostino

2022-12-16 Thread Bill Bejeck
Congratulations, Ron!

-Bill

On Fri, Dec 16, 2022 at 12:33 PM Matthias J. Sax  wrote:

> Congrats!
>
> On 12/15/22 7:09 AM, Rajini Sivaram wrote:
> > Congratulations, Ron! Well deserved!!
> >
> > Regards,
> >
> > Rajini
> >
> > On Thu, Dec 15, 2022 at 11:42 AM Ron Dagostino 
> wrote:
> >
> >> Thank you, everyone!
> >>
> >> Ron
> >>
> >>> On Dec 15, 2022, at 5:09 AM, Bruno Cadonna  wrote:
> >>>
> >>> Congrats Ron!
> >>>
> >>> Best,
> >>> Bruno
> >>>
>  On 15.12.22 10:23, Viktor Somogyi-Vass wrote:
>  Congrats Ron! :)
> > On Thu, Dec 15, 2022 at 10:22 AM Mickael Maison <
> >> mickael.mai...@gmail.com>
> > wrote:
> > Congratulations Ron!
> >
> > On Thu, Dec 15, 2022 at 9:41 AM Eslam Farag 
> >> wrote:
> >>
> >> Congratulations, Ron ☺️
> >>
> >> On Thu, 15 Dec 2022 at 10:40 AM Tom Bentley 
> >> wrote:
> >>
> >>> Congratulations!
> >>>
> >>> On Thu, 15 Dec 2022 at 07:40, Satish Duggana <
> >> satish.dugg...@gmail.com
> >>
> >>> wrote:
> >>>
>  Congratulations, Ron!!
> 
>  On Thu, 15 Dec 2022 at 07:48, ziming deng <
> dengziming1...@gmail.com
> >>>
>  wrote:
> 
> > Congratulations, Ron!
> > Well deserved!
> >
> > --
> > Ziming
> >
> >> On Dec 15, 2022, at 09:16, Luke Chen  wrote:
> >>
> >> Congratulations, Ron!
> >> Well deserved!
> >>
> >> Luke
> >
> >
> 
> >>>
> >
> >>
> >
>


Re: [ANNOUNCE] New committer: Viktor Somogyi-Vass

2022-12-16 Thread Bill Bejeck
Congratulations, Viktor!

-Bill

On Fri, Dec 16, 2022 at 12:32 PM Matthias J. Sax  wrote:

> Congrats!
>
> On 12/15/22 7:10 AM, Rajini Sivaram wrote:
> > Congratulations, Viktor!
> >
> > Regards,
> >
> > Rajini
> >
> >
> > On Thu, Dec 15, 2022 at 11:41 AM Ron Dagostino 
> wrote:
> >
> >> Congrats to you too, Victor!
> >>
> >> Ron
> >>
> >>> On Dec 15, 2022, at 4:59 AM, Viktor Somogyi-Vass <
> >> viktor.somo...@cloudera.com.invalid> wrote:
> >>>
> >>> Thank you everyone! :)
> >>>
>  On Thu, Dec 15, 2022 at 10:22 AM Mickael Maison <
> >> mickael.mai...@gmail.com>
>  wrote:
> 
>  Congratulations Viktor!
> 
> > On Thu, Dec 15, 2022 at 10:06 AM Tamas Barnabas Egyed
> >  wrote:
> >
> > Congratulations, Viktor!
> 
> >>
> >
>


Re: [ANNOUNCE] New committer: Ron Dagostino

2022-12-16 Thread Matthias J. Sax

Congrats!

On 12/15/22 7:09 AM, Rajini Sivaram wrote:

Congratulations, Ron! Well deserved!!

Regards,

Rajini

On Thu, Dec 15, 2022 at 11:42 AM Ron Dagostino  wrote:


Thank you, everyone!

Ron


On Dec 15, 2022, at 5:09 AM, Bruno Cadonna  wrote:

Congrats Ron!

Best,
Bruno


On 15.12.22 10:23, Viktor Somogyi-Vass wrote:
Congrats Ron! :)

On Thu, Dec 15, 2022 at 10:22 AM Mickael Maison <

mickael.mai...@gmail.com>

wrote:
Congratulations Ron!

On Thu, Dec 15, 2022 at 9:41 AM Eslam Farag 

wrote:


Congratulations, Ron ☺️

On Thu, 15 Dec 2022 at 10:40 AM Tom Bentley 

wrote:



Congratulations!

On Thu, 15 Dec 2022 at 07:40, Satish Duggana <

satish.dugg...@gmail.com



wrote:


Congratulations, Ron!!

On Thu, 15 Dec 2022 at 07:48, ziming deng 


wrote:


Congratulations, Ron!
Well deserved!

--
Ziming


On Dec 15, 2022, at 09:16, Luke Chen  wrote:

Congratulations, Ron!
Well deserved!

Luke















Re: [ANNOUNCE] New committer: Viktor Somogyi-Vass

2022-12-16 Thread Matthias J. Sax

Congrats!

On 12/15/22 7:10 AM, Rajini Sivaram wrote:

Congratulations, Viktor!

Regards,

Rajini


On Thu, Dec 15, 2022 at 11:41 AM Ron Dagostino  wrote:


Congrats to you too, Victor!

Ron


On Dec 15, 2022, at 4:59 AM, Viktor Somogyi-Vass <

viktor.somo...@cloudera.com.invalid> wrote:


Thank you everyone! :)


On Thu, Dec 15, 2022 at 10:22 AM Mickael Maison <

mickael.mai...@gmail.com>

wrote:

Congratulations Viktor!


On Thu, Dec 15, 2022 at 10:06 AM Tamas Barnabas Egyed
 wrote:

Congratulations, Viktor!








Re: __consumer_offsets & __transaction_state topics have ReplicationFactor: 1

2022-12-16 Thread Chris Peart
Hi Andrew,

Thanks for the speedy reply. So do I just need to do this for partition 49? 
What about partitions 0-48? Will these be covered by reassigning partition 49? 

Do I need to do this for the __transaction_state topics too?

Many thanks,
Chris

> On 16 Dec 2022, at 4:17 pm, Andrew Grant  wrote:
> 
> Hey Chris,
> I think you should be able to use the reassignment tool to add replicas.
> You should be able to do something similar to migrate the partitions away
> from the old brokers and onto the new ones and also increase the
> replication factor at the same time. I tested just increasing the
> replication factor with the following commands:
> 
> kafka % ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic
> __consumer_offsets --describe | grep 'Partition: 49'
> Topic: __consumer_offsets Partition: 49 Leader: 1 Replicas: 1 Isr: 1
> Offline:
> 
> kafka % cat reassignment.json
> {
>  "version": 1,
>  "partitions": [
>{
>  "topic": "__consumer_offsets",
>  "partition": 49,
>  "replicas": [ 1, 0 ]
>}
>  ]
> }
> 
> kafka % ./bin/kafka-reassign-partitions.sh --bootstrap-server
> localhost:9092 --reassignment-json-file reassignment.json --execute
> Current partition replica assignment
> 
> {"version":1,"partitions":[{"topic":"__consumer_offsets","partition":49,"replicas":[1],"log_dirs":["any"]}]}
> 
> Save this to use as the --reassignment-json-file option during rollback
> Successfully started partition reassignment for __consumer_offsets-49
> kafka % ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic
> __consumer_offsets --describe | grep 'Partition: 49'
> Topic: __consumer_offsets Partition: 49 Leader: 1 Replicas: 1,0 Isr: 1,0
> Offline:
> 
> 
> Andrew
> 
> 
>> On Fri, Dec 16, 2022 at 9:46 AM Chris Peart  wrote:
>> 
>> 
>> 
>> Hi,
>> 
>> We have a Kafka production cluster that was set up with the defaults for
>> __consumer_offsets & __transaction_state topics.
>> 
>> Is there a way to increase the replication factor from 1 to 3 using the
>> kafka-reassign-partitions tool?
>> 
>> We are also replacing our 4 brokers with new brokers. This has been
>> completed, so we now have an 8-broker cluster and have migrated all the
>> topics to the new brokers using the reassign tool, except for the
>> __consumer_offsets & __transaction_state topics.
>> 
>> We stopped Kafka on the old brokers today, but all our consumers failed
>> due to the __consumer_offsets & __transaction_state topics residing on
>> the old brokers.
>> 
>> I'm thinking we could move the __consumer_offsets &
>> __transaction_state topics to the new brokers using the reassign tool; I
>> have done this on our dev platform and all is good. If you think this is
>> a good idea, we can then stop Kafka on the old nodes and then work on
>> the replication factor on the new nodes.
>> 
>> The problem I have is how do we change the replication factor to 3 after
>> we migrate the __consumer_offsets & __transaction_state topics?
>> 
>> Thanks in advance.
>> 
>> Chris



Re: __consumer_offsets & __transaction_state topics have ReplicationFactor: 1

2022-12-16 Thread Andrew Grant
Hey Chris,
I think you should be able to use the reassignment tool to add replicas.
You should be able to do something similar to migrate the partitions away
from the old brokers and onto the new ones and also increase the
replication factor at the same time. I tested just increasing the
replication factor with the following commands:

kafka % ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic
__consumer_offsets --describe | grep 'Partition: 49'
Topic: __consumer_offsets Partition: 49 Leader: 1 Replicas: 1 Isr: 1
Offline:

kafka % cat reassignment.json
{
  "version": 1,
  "partitions": [
{
  "topic": "__consumer_offsets",
  "partition": 49,
  "replicas": [ 1, 0 ]
}
  ]
}

kafka % ./bin/kafka-reassign-partitions.sh --bootstrap-server
localhost:9092 --reassignment-json-file reassignment.json --execute
Current partition replica assignment

{"version":1,"partitions":[{"topic":"__consumer_offsets","partition":49,"replicas":[1],"log_dirs":["any"]}]}

Save this to use as the --reassignment-json-file option during rollback
Successfully started partition reassignment for __consumer_offsets-49
kafka % ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic
__consumer_offsets --describe | grep 'Partition: 49'
Topic: __consumer_offsets Partition: 49 Leader: 1 Replicas: 1,0 Isr: 1,0
Offline:


Andrew


On Fri, Dec 16, 2022 at 9:46 AM Chris Peart  wrote:

>
>
> Hi,
>
> We have a Kafka production cluster that was set up with the defaults for
> __consumer_offsets & __transaction_state topics.
>
> Is there a way to increase the replication factor from 1 to 3 using the
> kafka-reassign-partitions tool?
>
> We are also replacing our 4 brokers with new brokers. This has been
> completed, so we now have an 8-broker cluster and have migrated all the
> topics to the new brokers using the reassign tool, except for the
> __consumer_offsets & __transaction_state topics.
>
> We stopped Kafka on the old brokers today, but all our consumers failed
> due to the __consumer_offsets & __transaction_state topics residing on
> the old brokers.
>
> I'm thinking we could move the __consumer_offsets &
> __transaction_state topics to the new brokers using the reassign tool; I
> have done this on our dev platform and all is good. If you think this is
> a good idea, we can then stop Kafka on the old nodes and then work on
> the replication factor on the new nodes.
>
> The problem I have is how do we change the replication factor to 3 after
> we migrate the __consumer_offsets & __transaction_state topics?
>
> Thanks in advance.
>
> Chris


Re: Need clarity on a statement in the Upgrade path

2022-12-16 Thread Tom Cooper
Yes, the IBP (inter-broker protocol version) is used as a gate for certain 
features. 

Once the brokers in your cluster are using it, they might respond to requests 
using that version in ways that require writing new formats to disk (among 
other behaviours). 

If you were then to introduce a broker running an older binary (through a 
downgrade of an existing node, or by spinning up a new node with an older 
binary version), that older version may not understand the new feature/message 
when it sees it and would fail.

That is why you should first upgrade all the brokers in your cluster to the new 
binary version but with the old IBP version (and log message format version for 
older Kafka versions), then establish that everything is running smoothly and 
that all your client applications are happy **before** you upgrade the IBP.

Tom Cooper
@tomncooper | tomcooper.dev


--- Original Message ---
On Thursday, December 15th, 2022 at 08:33, Swathi Mocharla 
 wrote:


> Hello,
> In the upgrade docs, it is mentioned "Restart the brokers one by one for
> the new protocol version to take effect. Once the brokers begin using the
> latest protocol version, it will no longer be possible to downgrade the
> cluster to an older version."
> 
> What does this mean? Is this statement applicable to every version other
> than 2.1 as well? Does that mean after the update of the inter broker
> protocol, one should not rollback to a prior version?
> 
> Thanks,
> Swathi
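
The two-phase rolling upgrade described above can be sketched as server.properties fragments; the version numbers 3.2 and 3.3 here are illustrative, so substitute your actual current and target releases:

```properties
# Phase 1: roll out the new binaries broker by broker, but pin the old
# inter-broker protocol (and, on pre-3.0 clusters, the old message format):
inter.broker.protocol.version=3.2
log.message.format.version=3.2

# Phase 2: only after every broker runs the new binary and the cluster and
# clients are healthy, bump the IBP and do a second rolling restart.
# This step is the point of no return for downgrades:
inter.broker.protocol.version=3.3
```
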


Re: [VOTE] 3.3.2 RC0

2022-12-16 Thread jacob bogers
Hi, I have tried several times to unsubscribe from d...@kafka.apache.org, but it
just isn't working.

Can someone help me?

cheers
Jacob

On Thu, Dec 15, 2022 at 5:37 PM Chris Egerton 
wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 3.3.2.
>
> This is a bugfix release with several fixes since the release of 3.3.1. A
> few of the major issues include:
>
> * KAFKA-14358 Users should not be able to create a regular topic name
> __cluster_metadata
> * KAFKA-14379 Consumer should refresh preferred read replica on update
> metadata
> * KAFKA-13586 Prevent exception thrown during connector update from
> crashing distributed herder
>
>
> Release notes for the 3.3.2 release:
> https://home.apache.org/~cegerton/kafka-3.3.2-rc0/RELEASE_NOTES.html
>
>
>
> *** Please download, test and vote by Tuesday, December 20, 10pm UTC
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~cegerton/kafka-3.3.2-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~cegerton/kafka-3.3.2-rc0/javadoc/
>
> * Tag to be voted upon (off 3.3 branch) is the 3.3.2 tag:
> https://github.com/apache/kafka/releases/tag/3.3.2-rc0
>
> * Documentation:
> https://kafka.apache.org/33/documentation.html
>
> * Protocol:
> https://kafka.apache.org/33/protocol.html
>
> The most recent build has had test failures. These all appear to be due to
> flakiness, but it would be nice if someone more familiar with the failed
> tests could confirm this. I may update this thread with passing build links
> if I can get one, or start a new release vote thread if test failures must
> be addressed beyond re-running builds until they pass.
>
> Unit/integration tests:
> https://ci-builds.apache.org/job/Kafka/job/kafka/job/3.3/135/testReport/
>
> System tests:
>
> http://confluent-kafka-branch-builder-system-test-results.s3-us-west-2.amazonaws.com/system-test-kafka-branch-builder--1670984851--apache--3.3--22af3f29ce/2022-12-13--001./2022-12-13--001./report.html
> (initial with three flaky failures)
> Follow-up system tests:
> https://home.apache.org/~cegerton/system_tests/2022-12-14--015/report.html
> ,
> https://home.apache.org/~cegerton/system_tests/2022-12-14--016/report.html
> ,
>
> http://confluent-kafka-branch-builder-system-test-results.s3-us-west-2.amazonaws.com/system-test-kafka-branch-builder--1671061000--apache--3.3--69fbaf2457/2022-12-14--001./2022-12-14--001./report.html
>
> (Note that the exact commit used for some of the system test runs will not
> precisely match the commit for the release candidate, but that all
> differences between those two commits should have no effect on the
> relevance or accuracy of the test results.)
>
> Thanks,
>
> Chris
>


__consumer_offsets & __transaction_state topics have ReplicationFactor: 1

2022-12-16 Thread Chris Peart



Hi,

We have a Kafka production cluster that was set up with the defaults for 
__consumer_offsets & __transaction_state topics.


Is there a way to increase the replication factor from 1 to 3 using the 
kafka-reassign-partitions tool?


We are also replacing our 4 brokers with new brokers. This has been 
completed, so we now have an 8-broker cluster and have migrated all the 
topics to the new brokers using the reassign tool, except for the 
__consumer_offsets & __transaction_state topics.


We stopped Kafka on the old brokers today, but all our consumers failed 
due to the __consumer_offsets & __transaction_state topics residing on 
the old brokers.


I'm thinking we could move the __consumer_offsets & 
__transaction_state topics to the new brokers using the reassign tool; I 
have done this on our dev platform and all is good. If you think this is 
a good idea, we can then stop Kafka on the old nodes and then work on 
the replication factor on the new nodes.


The problem I have is how do we change the replication factor to 3 after 
we migrate the __consumer_offsets & __transaction_state topics?


Thanks in advance.

Chris

Re: [VOTE] 3.3.2 RC0

2022-12-16 Thread Federico Valeri
Hi, I did the following to validate the release:

- Checksums and signatures ok
- Build from source using Java 17 and Scala 2.13 ok
- Unit and integration tests ok
- Quickstart in both ZK and KRaft modes ok
- Test app with staging Maven artifacts ok

Documentation still has 3.3.1 version references, but I guess this
will be updated later.

+1 (non binding)

Thanks
Fede


On Fri, Dec 16, 2022 at 11:51 AM jacob bogers  wrote:
>
> Hi, I have tried several times to unsubscribe from d...@kafka.apache.org, but it
> just isn't working.
>
> Can someone help me?
>
> cheers
> Jacob
>
> On Thu, Dec 15, 2022 at 5:37 PM Chris Egerton 
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first candidate for release of Apache Kafka 3.3.2.
> >
> > This is a bugfix release with several fixes since the release of 3.3.1. A
> > few of the major issues include:
> >
> > * KAFKA-14358 Users should not be able to create a regular topic name
> > __cluster_metadata
> > * KAFKA-14379 Consumer should refresh preferred read replica on update
> > metadata
> > * KAFKA-13586 Prevent exception thrown during connector update from
> > crashing distributed herder
> >
> >
> > Release notes for the 3.3.2 release:
> > https://home.apache.org/~cegerton/kafka-3.3.2-rc0/RELEASE_NOTES.html
> >
> >
> >
> > *** Please download, test and vote by Tuesday, December 20, 10pm UTC
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~cegerton/kafka-3.3.2-rc0/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~cegerton/kafka-3.3.2-rc0/javadoc/
> >
> > * Tag to be voted upon (off 3.3 branch) is the 3.3.2 tag:
> > https://github.com/apache/kafka/releases/tag/3.3.2-rc0
> >
> > * Documentation:
> > https://kafka.apache.org/33/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/33/protocol.html
> >
> > The most recent build has had test failures. These all appear to be due to
> > flakiness, but it would be nice if someone more familiar with the failed
> > tests could confirm this. I may update this thread with passing build links
> > if I can get one, or start a new release vote thread if test failures must
> > be addressed beyond re-running builds until they pass.
> >
> > Unit/integration tests:
> > https://ci-builds.apache.org/job/Kafka/job/kafka/job/3.3/135/testReport/
> >
> > System tests:
> >
> > http://confluent-kafka-branch-builder-system-test-results.s3-us-west-2.amazonaws.com/system-test-kafka-branch-builder--1670984851--apache--3.3--22af3f29ce/2022-12-13--001./2022-12-13--001./report.html
> > (initial with three flaky failures)
> > Follow-up system tests:
> > https://home.apache.org/~cegerton/system_tests/2022-12-14--015/report.html
> > ,
> > https://home.apache.org/~cegerton/system_tests/2022-12-14--016/report.html
> > ,
> >
> > http://confluent-kafka-branch-builder-system-test-results.s3-us-west-2.amazonaws.com/system-test-kafka-branch-builder--1671061000--apache--3.3--69fbaf2457/2022-12-14--001./2022-12-14--001./report.html
> >
> > (Note that the exact commit used for some of the system test runs will not
> > precisely match the commit for the release candidate, but that all
> > differences between those two commits should have no effect on the
> > relevance or accuracy of the test results.)
> >
> > Thanks,
> >
> > Chris
> >


Re: [SUSPECTED SPAM] Re: Critical bug in Kafka 2.8.1 | topic Id in memory: <> does not match the topic Id for partition <> provided in the request: <>.

2022-12-16 Thread Atul Kumar (atkumar3)
Hi Divij,

Thanks for the reply!

We were using the Kafka-manager tool (with an older Kafka client), which was the 
root cause of this issue.




Regards,​
Atul

From: Divij Vaidya 
Sent: 08 December 2022 17:55
To: atul.ku...@appdynamics.com.invalid 
Cc: users@kafka.apache.org 
Subject: [SUSPECTED SPAM] Re: Critical bug in Kafka 2.8.1 | topic Id in memory: 
<> does not match the topic Id for partition <> provided in the request: <>.

Hi Atul

There is a known bug which has similar symptoms that you observed. See:
https://issues.apache.org/jira/browse/KAFKA-14190
The bug manifests when you use a client < 2.8.0 with the `--zookeeper` flag. It
can be avoided by either upgrading the client or by using
`--bootstrap-server` instead of `--zookeeper`.

Does the above explanation match your Kafka setup?

--
Divij Vaidya



On Thu, Dec 8, 2022 at 1:16 PM Atul Kumar (atkumar3)
 wrote:

> Hello everyone,
>
> We are using Kafka 2.8.1, and recently, during a rolling restart of one of our
> clusters, we encountered the following issue on the Kafka cluster:
>
> topic Id in memory: <> does not match the topic Id for partition <>
> provided in the request: <>.
>
> All the produce requests to the impacted topic were failing. We were able
> to fix the issue by deleting partition.metadata from each broker for the
> impacted topic and doing a rolling restart of the Kafka cluster. We are
> using the Kafka stream application.
>
> The same issue was reported in Jira:
> https://issues.apache.org/jira/browse/KAFKA-12835 and is marked as fixed
> in version 2.8.1, but we can still see this issue.
>
> Any help would be highly appreciated on any fix for this issue.
>
> Thanks,
> Atul Kumar
>
> Regards,​
> Atul
>