I have three nodes: 100, 101, and 102.

After restarting all of them, everything now seems to be OK, but I would
like to paste the error messages I got from server.log on each node, to see
if you can help me understand what the problem is.

On node 100:
[2014-12-23 00:04:39,401] ERROR [KafkaApi-100] Error when processing fetch
request for partition [perf_producer_p8_test,7] offset 125000 from follower
with correlation id 0 (kafka.server.KafkaApis)
kafka.common.OffsetOutOfRangeException: Request for offset 125000 but we
only have log segments in the range 0 to 0.
        at kafka.log.Log.read(Log.scala:380)
        at kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:530)
        at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:476)
        at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:471)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.immutable.Map$Map3.foreach(Map.scala:154)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.AbstractTraversable.map(Traversable.scala:105)
..
..
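
(For reference on that OffsetOutOfRangeException: the leader's actual
earliest/latest offsets can be checked with the stock GetOffsetShell tool,
roughly like below; this is only a sketch, and the broker address/port are
my guesses for this cluster:

bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 10.100.98.100:9092 --topic perf_producer_p8_test --time -1
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 10.100.98.100:9092 --topic perf_producer_p8_test --time -2

If the latest offset (--time -1) really is 0 on broker 100, that matches the
"log segments in the range 0 to 0" message above: the leader simply does not
have the data the followers are fetching from offset 125000.)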


On nodes 101 and 102:
[2014-12-23 00:04:39,440] ERROR [ReplicaFetcherThread-0-100], Current
offset 125000 for partition [perf_producer_p8_test,1] out of range; reset
offset to 0 (kafka.server.ReplicaFetcherThread)
[2014-12-23 00:04:39,442] INFO Truncating log perf_producer_p8_test-7 to
offset 0. (kafka.log.Log)
[2014-12-23 00:04:39,452] WARN [ReplicaFetcherThread-0-100], Replica 102
for partition [perf_producer_p8_test,7] reset its fetch offset to current
leader 100's latest offset 0 (kafka.server.ReplicaFetcherThread)
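
(After the followers reset and truncate like that, the topic state can be
re-checked with a describe; I assume --under-replicated-partitions is the
right flag for listing lagging partitions:

bin/kafka-topics.sh --describe --zookeeper 10.100.98.100:2181 --topic perf_producer_p8_test
bin/kafka-topics.sh --describe --zookeeper 10.100.98.100:2181 --under-replicated-partitions

If all three brokers show up again in the Isr column and nothing is listed
as under-replicated, the fetchers have caught back up to the leader's now
empty log.)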






On Mon, Dec 22, 2014 at 3:55 PM, Sa Li <sal...@gmail.com> wrote:
>
> Hello, Neha
>
> This is the error from server.log:
>
> [2014-12-22 23:53:25,663] WARN [KafkaApi-100] Fetch request with
> correlation id 1227732 from client ReplicaFetcherThread-0-100 on partition
> [perf_producer_p8_test,1] failed due to Leader not local for partition
> [perf_producer_p8_test,1] on broker 100 (kafka.server.KafkaApis)
>
>
> On Mon, Dec 22, 2014 at 3:50 PM, Sa Li <sal...@gmail.com> wrote:
>>
>> I restarted the Kafka server and it is the same thing; sometimes nothing is
>> listed for the ISR or the leader. I checked the state-change log:
>>
>> [2014-12-22 23:46:38,164] TRACE Broker 100 cached leader info
>> (LeaderAndIsrInfo:(Leader:101,ISR:101,102,100,LeaderEpoch:0,ControllerEpoch:4),ReplicationFactor:3),AllReplicas:101,102,100)
>> for partition [perf_producer_p8_test,1] in response to UpdateMetadata
>> request sent by controller 101 epoch 4 with correlation id 138
>> (state.change.logger)
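>>
>> (To rule out brokers simply not being registered in ZooKeeper, I believe
>> the bundled zookeeper-shell can be used to inspect the standard znodes,
>> roughly like this; just a sketch:
>>
>> echo "ls /brokers/ids" | bin/zookeeper-shell.sh 10.100.98.100:2181
>> echo "get /controller" | bin/zookeeper-shell.sh 10.100.98.100:2181
>>
>> All three broker ids (100, 101, 102) should show up under /brokers/ids,
>> and /controller should point at the live controller, which the trace above
>> says is 101.)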
>>
>>
>>
>> On Mon, Dec 22, 2014 at 2:46 PM, Sa Li <sal...@gmail.com> wrote:
>>
>>> Hi, All
>>>
>>> I created a topic with 3 replicas and 6 partitions, but when I describe
>>> this topic, it seems no leader and no ISR were set for it, see:
>>>
>>> bin/kafka-topics.sh --create --zookeeper 10.100.98.100:2181
>>> --replication-factor 3 --partitions 6 --topic perf_producer_p6_test
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in
>>> [jar:file:/etc/kafka/core/build/dependant-libs-2.10.4/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>> [jar:file:/etc/kafka/core/build/dependant-libs-2.10.4/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> explanation.
>>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>> Created topic "perf_producer_p6_test".
>>>
>>> root@precise64:/etc/kafka# bin/kafka-topics.sh --describe --zookeeper
>>> 10.100.98.100:2181 --topic perf_producer_p6_test
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in
>>> [jar:file:/etc/kafka/core/build/dependant-libs-2.10.4/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>> [jar:file:/etc/kafka/core/build/dependant-libs-2.10.4/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> explanation.
>>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>> Topic:perf_producer_p6_test     PartitionCount:6        ReplicationFactor:3     Configs:
>>>         Topic: perf_producer_p6_test    Partition: 0    Leader: none    Replicas: 100,101,102   Isr:
>>>         Topic: perf_producer_p6_test    Partition: 1    Leader: none    Replicas: 101,102,100   Isr:
>>>         Topic: perf_producer_p6_test    Partition: 2    Leader: none    Replicas: 102,100,101   Isr:
>>>         Topic: perf_producer_p6_test    Partition: 3    Leader: none    Replicas: 100,102,101   Isr:
>>>         Topic: perf_producer_p6_test    Partition: 4    Leader: none    Replicas: 101,100,102   Isr:
>>>         Topic: perf_producer_p6_test    Partition: 5    Leader: none    Replicas: 102,101,100   Isr:
>>>
>>> Is there a way to explicitly set the leader and ISR from the command line
>>> (there is a sketch of the preferred-replica election tool after the
>>> 5-partition output below)? It is strange that when I create the topic with
>>> 5 partitions, it does get a leader and ISR:
>>> root@precise64:/etc/kafka# bin/kafka-topics.sh --describe --zookeeper
>>> 10.100.98.100:2181 --topic perf_producer_p5_test
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in
>>> [jar:file:/etc/kafka/core/build/dependant-libs-2.10.4/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>> [jar:file:/etc/kafka/core/build/dependant-libs-2.10.4/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> explanation.
>>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>> Topic:perf_producer_p5_test     PartitionCount:5        ReplicationFactor:3     Configs:
>>>         Topic: perf_producer_p5_test    Partition: 0    Leader: 102     Replicas: 102,100,101   Isr: 102,100,101
>>>         Topic: perf_producer_p5_test    Partition: 1    Leader: 102     Replicas: 100,101,102   Isr: 102,101
>>>         Topic: perf_producer_p5_test    Partition: 2    Leader: 101     Replicas: 101,102,100   Isr: 101,102,100
>>>         Topic: perf_producer_p5_test    Partition: 3    Leader: 102     Replicas: 102,101,100   Isr: 102,101,100
>>>         Topic: perf_producer_p5_test    Partition: 4    Leader: 102     Replicas: 100,102,101   Isr: 102,101
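>>>
>>> (On the question above about setting the leader from the command line: as
>>> far as I know the ISR cannot be set directly, but the bundled
>>> preferred-replica election tool can ask the controller to move leadership
>>> back to the first replica in the list. Rough sketch only:
>>>
>>> bin/kafka-preferred-replica-election.sh --zookeeper 10.100.98.100:2181
>>>
>>> It only works when the preferred replica is alive and already in the ISR,
>>> so it would not help the 6-partition topic above that shows no ISR at
>>> all.)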
>>>
>>>
>>> Any ideas?
>>>
>>> thanks
>>>
>>> --
>>>
>>> Alec Li
>>>
>>
>>
>> --
>>
>> Alec Li
>>
>
>
> --
>
> Alec Li
>


-- 

Alec Li
