Thanks Luke.

This is my revised JSON file:

{
 "version":1,
 "partitions":[
{"topic":"md","partition":0,"replicas":[1,3,7]},
{"topic":"md","partition":1,"replicas":[2,8,9]},
{"topic":"md","partition":2,"replicas":[7,10,12]}
]
}
Implemented as follows:

kafka-reassign-partitions.sh --bootstrap-server rhes75:9092
--reassignment-json-file ./reduce_replication_factor2.json --execute
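
Once the reassignment has started, the same JSON file can (as I understand it) be passed back with --verify to confirm when it has completed and to clear any throttles that were set:

kafka-reassign-partitions.sh --bootstrap-server rhes75:9092
--reassignment-json-file ./reduce_replication_factor2.json --verify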

The output looks healthier

Current partition replica assignment
{"version":1,"partitions":[{"topic":"md","partition":0,"replicas":[1,3,7],"log_dirs":["any","any","any"]},{"topic":"md","partition":1,"replicas":[2,8,9],"log_dirs":["any","any","any"]},{"topic":"md","partition":2,"replicas":[7,10,12],"log_dirs":["any","any","any"]}]}
Save this to use as the --reassignment-json-file option during rollback
Successfully started partition reassignments for md-0,md-1,md-2
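
For the record, that "Current partition replica assignment" line can be saved to a file (the file name below is only an example) and replayed later with --execute if a rollback is needed:

kafka-reassign-partitions.sh --bootstrap-server rhes75:9092
--reassignment-json-file ./rollback_md_0_2.json --execute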

kafka-topics.sh --describe --bootstrap-server rhes75:9092 --topic md

Topic: md       TopicId: UfQly87bQPCbVKoH-PQheg PartitionCount: 9       ReplicationFactor: 3    Configs: segment.bytes=1073741824
        Topic: md       Partition: 0    Leader: 1       Replicas: 1,3,7 Isr: 1,3,7
        Topic: md       Partition: 1    Leader: 2       Replicas: 2,8,9 Isr: 2,8,9
        Topic: md       Partition: 2    Leader: 7       Replicas: 7,10,12       Isr: 10,7,12
        Topic: md       Partition: 3    Leader: 1       Replicas: 1,12,9,11,7,3,10,8,2  Isr: 10,1,9,2,12,7,3,11,8
        Topic: md       Partition: 4    Leader: 7       Replicas: 7,9,11,1,3,10,8,2,12  Isr: 10,1,9,2,12,7,3,11,8
        Topic: md       Partition: 5    Leader: 3       Replicas: 3,11,1,7,10,8,2,12,9  Isr: 10,1,9,2,12,7,3,11,8
        Topic: md       Partition: 6    Leader: 10      Replicas: 10,1,7,3,8,2,12,9,11  Isr: 10,1,9,2,12,7,3,11,8
        Topic: md       Partition: 7    Leader: 8       Replicas: 8,7,3,10,2,12,9,11,1  Isr: 10,1,9,2,12,7,3,11,8
        Topic: md       Partition: 8    Leader: 2       Replicas: 2,3,10,8,12,9,11,1,7  Isr: 10,1,9,2,12,7,3,11,8
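
Note that the JSON above only covered partitions 0-2, so partitions 3 to 8 still show 9 replicas each. If the aim is to bring the whole topic down to 3 replicas, I assume a second pass with entries of the same shape for the remaining partitions is needed, something like the sketch below (the broker choices are only illustrative; here I simply kept each partition's current leader plus the next two replicas in its list):

{
 "version":1,
 "partitions":[
{"topic":"md","partition":3,"replicas":[1,12,9]},
{"topic":"md","partition":4,"replicas":[7,9,11]},
{"topic":"md","partition":5,"replicas":[3,11,1]},
{"topic":"md","partition":6,"replicas":[10,1,7]},
{"topic":"md","partition":7,"replicas":[8,7,3]},
{"topic":"md","partition":8,"replicas":[2,3,10]}
]
}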



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Mon, 15 May 2023 at 07:41, Luke Chen <show...@gmail.com> wrote:

> Hi Mich,
>
> You might want to take a look at this section: "Increasing replication
> factor" in the doc:
>
> https://kafka.apache.org/documentation/#basic_ops_increase_replication_factor
>
> Simply put, the JSON file provided to kafka-reassign-partitions.sh should
> describe the final replica assignment after the operation.
> In your case, each replica list should have 3 entries, but I saw you put 9
> replicas there.
>
> Hope it helps.
> Luke
>
> On Sat, May 13, 2023 at 7:20 PM Mich Talebzadeh <mich.talebza...@gmail.com>
> wrote:
>
> > Hi,
> >
> > From the following list
> >
> >  kafka-topics.sh --describe --bootstrap-server rhes75:9092 --topic md
> >
> > Topic: md       TopicId: UfQly87bQPCbVKoH-PQheg PartitionCount: 9       ReplicationFactor: 9    Configs: segment.bytes=1073741824,retention.bytes=1073741824
> >         Topic: md       Partition: 0    Leader: 12      Replicas: 12,10,8,2,9,11,1,7,3  Isr: 10,1,9,2,12,7,3,11,8
> >         Topic: md       Partition: 1    Leader: 9       Replicas: 9,8,2,12,11,1,7,3,10  Isr: 10,1,9,2,12,7,3,11,8
> >         Topic: md       Partition: 2    Leader: 11      Replicas: 11,2,12,9,1,7,3,10,8  Isr: 10,1,9,2,12,7,3,11,8
> >         Topic: md       Partition: 3    Leader: 1       Replicas: 1,12,9,11,7,3,10,8,2  Isr: 10,1,9,2,12,7,3,11,8
> >         Topic: md       Partition: 4    Leader: 7       Replicas: 7,9,11,1,3,10,8,2,12  Isr: 10,1,9,2,12,7,3,11,8
> >         Topic: md       Partition: 5    Leader: 3       Replicas: 3,11,1,7,10,8,2,12,9  Isr: 10,1,9,2,12,7,3,11,8
> >         Topic: md       Partition: 6    Leader: 10      Replicas: 10,1,7,3,8,2,12,9,11  Isr: 10,1,9,2,12,7,3,11,8
> >         Topic: md       Partition: 7    Leader: 8       Replicas: 8,7,3,10,2,12,9,11,1  Isr: 10,1,9,2,12,7,3,11,8
> >         Topic: md       Partition: 8    Leader: 2       Replicas: 2,3,10,8,12,9,11,1,7  Isr: 10,1,9,2,12,7,3,11,8
> >
> > So for topic md I have 9 partitions and a replication factor of 9.
> >
> > For redundancy and to prevent data loss I only need 3 replicas (the leader
> > and 2 followers), so I use the following to reduce the number of replicas
> > to 3:
> >
> >
> > {
> >  "version":1,
> >  "partitions":[
> > {"topic":"md","partition":0,"replicas":[12,10,8,2,9,11,1,7,3]},
> > {"topic":"md","partition":1,"replicas":[9,8,2,12,11,1,7,3,10]},
> > {"topic":"md","partition":2,"replicas":[11,2,12,9,1,7,3,10,8]}
> > ]
> > }
> > with the following command
> >
> > kafka-reassign-partitions.sh --bootstrap-server rhes75:9092
> > --reassignment-json-file ./reduce_replication_factor2.json --execute
> >
> > and this is the output
> >
> > Current partition replica assignment
> >
> > {"version":1,"partitions":[{"topic":"md","partition":0,"replicas":[12,10,8,2,9,11,1,7,3],"log_dirs":["any","any","any","any","any","any","any","any","any"]},{"topic":"md","partition":1,"replicas":[9,8,2,12,11,1,7,3,10],"log_dirs":["any","any","any","any","any","any","any","any","any"]},{"topic":"md","partition":2,"replicas":[11,2,12,9,1,7,3,10,8],"log_dirs":["any","any","any","any","any","any","any","any","any"]}]}
> >
> > Save this to use as the --reassignment-json-file option during rollback
> >
> > *Successfully started partition reassignments for md-0,md-1,md-2*
> >
> >
> > It says it is doing it, but nothing is happening!
> >
> >
> > This is the remaining size of the Kafka topic in MB per partition:
> >
> >
> > kafka-log-dirs.sh --bootstrap-server rhes75:9092 --topic-list md --describe |
> >   grep -oP '(?<=size":)\d+' | awk '{ sum += $1 } END { print sum/1024/1024/9 }'
> >
> >
> > Which comes back with 81.5 MB
> >
> >
> > Will this work, given that I have stopped the queue but the data is still
> > there? In short, is downsizing practical?
> >
> >
> > Thanks
> >
> >
>
