It is a problem on my side. The code was changing the replicas count but not the log_dirs. Since I am migrating from 0.10, this part of the code had not been updated.
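For reference, here is what a valid entry for the failing partition could look like according to the help text quoted below; the three "any" entries are only an illustration, the point being that "log_dirs", when present, must have exactly one entry per replica (or the key can be dropped altogether):

{"version": 1,
 "partitions": [
   {"topic": "Topic3",
    "partition": 7,
    "replicas": [3, 0, 2],
    "log_dirs": ["any", "any", "any"]}
 ]}

My file had kept a single-entry ["any"] list after the replicas list was grown to three brokers, which is exactly what the "Size of replicas list Vector(3, 0, 2) is different from size of log dirs list Vector(any)" error below complains about.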
I have a follow-up question: what is the default value of log_dirs if I don't specify it in reassignment.json?

On Sat, Jun 30, 2018 at 11:15 AM, Debraj Manna <subharaj.ma...@gmail.com> wrote:

> I am generating the reassignment.json like below
>
> /home/ubuntu/deploy/kafka/bin/kafka-reassign-partitions.sh --zookeeper 127.0.0.1:2181 --generate --topics-to-move-json-file /home/ubuntu/deploy/kafka/topics_to_move.json --broker-list '%s' | tail -1 > /home/ubuntu/deploy/kafka/reassignment.json"
>
> Then I am doing the reassignment using the generated file
>
> /home/ubuntu/deploy/kafka/bin/kafka-reassign-partitions.sh --zookeeper 127.0.0.1:2181 --execute --reassignment-json-file /home/ubuntu/deploy/kafka/reassignment.json
>
> The kafka-reassign-partitions.sh help states:
>
>> The JSON file with the partition reassignment configuration. The format to use is -
>> {"partitions":[{"topic": "foo", "partition": 1, "replicas": [1,2,3], "log_dirs": ["dir1","dir2","dir3"]}], "version":1}
>> Note that "log_dirs" is optional. When it is specified, its length must equal the length of the replicas list. The value in this list can be either "any" or the absolution path of the log directory on the broker. If absolute log directory path is specified, it is currently required that the replica has not already been created on that broker. The replica will then be created in the specified log directory on the broker later.
>
> So it appears the reassignment json that is generated by kafka-reassign-partitions.sh is creating an issue with log_dirs. Is this some issue in kafka-reassign-partitions.sh or some misconfiguration on my side?
>
> On Sat, Jun 30, 2018 at 10:26 AM, Debraj Manna <subharaj.ma...@gmail.com> wrote:
>
>> Please find the server.properties from one of the brokers.
>>
>> broker.id=0
>> port=9092
>> num.network.threads=3
>> num.io.threads=8
>> socket.send.buffer.bytes=102400
>> socket.receive.buffer.bytes=102400
>> socket.request.max.bytes=104857600
>> log.dirs=/var/lib/kafka/kafka-logs
>> num.recovery.threads.per.data.dir=1
>> log.retention.hours=36
>> log.retention.bytes=1073741824
>> log.segment.bytes=536870912
>> log.retention.check.interval.ms=300000
>> log.cleaner.enable=false
>> zookeeper.connect=platform1:2181,platform2:2181,platform3:2181
>> message.max.bytes=15000000
>> replica.fetch.max.bytes=15000000
>> auto.create.topics.enable=true
>> zookeeper.connection.timeout.ms=6000
>> unclean.leader.election.enable=false
>> delete.topic.enable=false
>> offsets.topic.replication.factor=1
>> transaction.state.log.replication.factor=1
>> transaction.state.log.min.isr=1
>>
>> I have placed the server.log from a broker at https://gist.github.com/debraj-manna/4b4bdae8a1c15c36b313a04f37e8776d
>>
>> On Sat, Jun 30, 2018 at 8:16 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>>
>>> Seems to be related to KIP-113.
>>>
>>> server.properties didn't go thru. Do you mind pastebin'ing its content?
>>>
>>> If you can pastebin logs from the broker, that should help.
>>>
>>> Thanks
>>>
>>> On Fri, Jun 29, 2018 at 10:37 AM, Debraj Manna <subharaj.ma...@gmail.com> wrote:
>>>
>>> > Hi
>>> >
>>> > I altered a topic like below in kafka 1.1.0
>>> >
>>> > /home/ubuntu/deploy/kafka/bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --alter --topic Topic3 --config min.insync.replicas=2
>>> >
>>> > But whenever I try to verify the reassignment it shows the below exception
>>> >
>>> > /home/ubuntu/deploy/kafka/bin/kafka-reassign-partitions.sh --zookeeper 127.0.0.1:2181 --reassignment-json-file /home/ubuntu/deploy/kafka/reassignment.json --verify
>>> >
>>> > Partitions reassignment failed due to Size of replicas list Vector(3, 0, 2) is different from size of log dirs list Vector(any) for partition Topic3-7
>>> > kafka.common.AdminCommandFailedException: Size of replicas list Vector(3, 0, 2) is different from size of log dirs list Vector(any) for partition Topic3-7
>>> >     at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4$$anonfun$apply$5.apply(ReassignPartitionsCommand.scala:262)
>>> >     at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4$$anonfun$apply$5.apply(ReassignPartitionsCommand.scala:251)
>>> >     at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>>> >     at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>>> >     at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4.apply(ReassignPartitionsCommand.scala:251)
>>> >     at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4.apply(ReassignPartitionsCommand.scala:250)
>>> >     at scala.collection.immutable.List.foreach(List.scala:392)
>>> >     at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1.apply(ReassignPartitionsCommand.scala:250)
>>> >     at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1.apply(ReassignPartitionsCommand.scala:249)
>>> >     at scala.collection.immutable.List.foreach(List.scala:392)
>>> >     at kafka.admin.ReassignPartitionsCommand$.parsePartitionReassignmentData(ReassignPartitionsCommand.scala:249)
>>> >     at kafka.admin.ReassignPartitionsCommand$.verifyAssignment(ReassignPartitionsCommand.scala:90)
>>> >     at kafka.admin.ReassignPartitionsCommand$.verifyAssignment(ReassignPartitionsCommand.scala:84)
>>> >     at kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:58)
>>> >     at kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
>>> >
>>> > My reassignment.json & server.properties are attached. The same thing used to work fine in kafka 0.10. Can someone let me know what is going wrong? Is anything changed related to this in kafka 1.1.0?
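PS: for completeness, a rough sketch of the kind of fix I mean on my side (simplified, not the actual migration code; the file path and the choice of padding "log_dirs" with "any" are just for illustration):

#!/usr/bin/env python
# Simplified sketch: whenever the replicas list of a partition is
# rewritten, keep "log_dirs" the same length (one entry per replica,
# as kafka-reassign-partitions.sh requires), or drop the key entirely.
import json

def set_replicas(assignment, topic, partition, new_replicas):
    for p in assignment["partitions"]:
        if p["topic"] == topic and p["partition"] == partition:
            p["replicas"] = new_replicas
            if "log_dirs" in p:
                # Pad/shrink to one "any" entry per replica.
                p["log_dirs"] = ["any"] * len(new_replicas)
    return assignment

if __name__ == "__main__":
    path = "/home/ubuntu/deploy/kafka/reassignment.json"  # illustrative path
    with open(path) as f:
        assignment = json.load(f)
    assignment = set_replicas(assignment, "Topic3", 7, [3, 0, 2])
    with open(path, "w") as f:
        json.dump(assignment, f)

With "log_dirs" kept the same length as "replicas", the --verify and --execute steps quoted above should no longer reject the file with the "Size of replicas list ... is different from size of log dirs list" error.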