kafka 0.9.0.1: FATAL exception on startup

2016-03-08 Thread Anatoly Deyneka
Hi,

I need your advice on how to start the server in the following situation:
it fails on startup with a FATAL error:
[2016-03-07 16:30:53,495] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
kafka.common.InvalidOffsetException: Attempt to append an offset (211046544) to position 40048 no larger than the last offset appended (211046546) to xyz/000210467262.index.
    at kafka.log.OffsetIndex$$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:207)
    at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:197)
    at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:197)
    at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
    at kafka.log.OffsetIndex.append(OffsetIndex.scala:197)
    at kafka.log.LogSegment.recover(LogSegment.scala:188)
    at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:188)
    at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:160)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:778)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:777)
    at kafka.log.Log.loadSegments(Log.scala:160)
    at kafka.log.Log.<init>(Log.scala:90)
    at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:150)
    at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)

http://stackoverflow.com/questions/35849673/kafka-0-9-0-1-fails-with-fatal-exception-on-startup
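
One workaround I am considering (based on that thread) is to stop the broker, delete the .index files for the affected partition, and let the broker rebuild them from the .log segments during log recovery on the next startup. Roughly (the data directory below is only a placeholder for ours; xyz is the partition directory from the error):

# stop the broker, then remove the offset index files for the affected partition
ls -la /var/kafka-logs/xyz/*.index
rm /var/kafka-logs/xyz/*.index
# on restart the broker should rebuild the .index files from the .log segments
bin/kafka-server-start.sh config/server.properties

Is that safe here, or could the .log segments themselves be damaged as well?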

Thanks,
Anatoly


data archiving

2014-06-16 Thread Anatoly Deyneka
Hi all,

I'm looking for a way to archive data.
The data is hot for a few days in our system.
After that it is rarely used. Speed is not so important for the archive.

Let's say we have a kafka cluster and a storage system.
It would be great if kafka supported moving data to the storage system instead
of evicting it, and the end user could specify which storage system is used
(dynamo, s3, hadoop, etc.).
Is it possible to implement?

What other solutions can you advise?
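
For example, as a stopgap I could probably drain old topics into the archive myself with the console consumer and copy the dump to the store, roughly like this (the topic name, zookeeper address and s3 bucket are just placeholders for illustration):

# read the topic from the beginning and dump it to a local file
bin/kafka-console-consumer.sh --zookeeper zookeeper1:2181 --topic old.topic \
  --from-beginning > /tmp/old.topic.dump
# copy the dump to the archive store (s3 in this example)
aws s3 cp /tmp/old.topic.dump s3://my-archive-bucket/kafka/old.topic/$(date +%F).dump

But that loses partitioning and offsets and needs external scheduling, which is why built-in support for moving data to a configurable store would be much nicer.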

Regards,
Anatoly


Re: kafka availability

2014-04-07 Thread Anatoly Deyneka
I have found the problem - it's a misconfiguration in /etc/hosts.
The current host was defined twice:
127.0.1.1 kafka-broker1
192.168.25.134 kafka-broker1

Once I removed the first entry, the availability test passed.
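
In case anyone else hits this, the quickest sanity check I found was to look at what the broker hostname actually resolves to (the hostname below is from my setup):

# with the stale entry in place this returned 127.0.1.1; after removing it,
# it resolves to the address the broker really listens on
getent hosts kafka-broker1
grep kafka-broker1 /etc/hosts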

Thank you for the help.

Anatoly


Re: kafka availability

2014-04-04 Thread Anatoly Deyneka
Yes, broker 0 is started.

INFO conflict in /controller data: { "brokerid":0,
"timestamp":"1396619945779", "version":1 } stored data: { "brokerid":1,
"timestamp":"1396511882085", "version":1 } (kafka.utils.ZkUtils$)
INFO [Kafka Server 0], Started (kafka.server.KafkaServer)

If the problem is a conflict, I guess the log level should be at least WARN.
How can I solve this conflict?
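
For reference, I can see what is currently stored in the controller znode with the zookeeper shell shipped with kafka (zookeeper1:2181 is just my ensemble address):

bin/zookeeper-shell.sh zookeeper1:2181
# then, at the prompt:
get /controller
get /controller_epoch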

Anatoly


kafka availability

2014-04-02 Thread Anatoly Deyneka
Hi,

I am performing availability tests on the following kafka setup:
- 3 nodes (2f+1) for zookeeper (zookeeper1, zookeeper2, zookeeper3)
- 2 nodes (f+1) for kafka brokers (id:0,host:kafka.broker1,port:9092;
  id:1,host:kafka.broker2,port:9092)
I use the java producer and the console consumer.

Test steps:
1. create topic test.app4

bin/kafka-topics.sh --zookeeper zookeeper1 --create --replication-factor 2
--partitions 1 --topic test.app4

bin/kafka-topics.sh --zookeeper zookeeper1 --topic test.app4 --describe
Topic:test.app4  PartitionCount:1  ReplicationFactor:2  Configs:
  Topic: test.app4  Partition: 0  Leader: 1  Replicas: 1,0  Isr: 1

2. shut down kafka.broker2 (the leader for the topic); kafka.broker1 stays alive
--> the consumer fails in an infinite loop:

WARN Fetching topic metadata with correlation id 13 for topics
[Set(test.app4)] from broker [id:0,host:kafka.broker1,port:9092] failed
(kafka.client.ClientUtils$)
java.net.ConnectException: Connection refused
...
WARN
[console-consumer-95116_ad-laptop-1396433768258-bb700f4c-leader-finder-thread],
Failed to find leader for Set([test.app4,0])
(kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
kafka.common.KafkaException: fetching topic metadata for topics
[Set(test.app4)] from broker
[ArrayBuffer(id:0,host:kafka.broker1,port:9092)] failed
...
Caused by: java.net.ConnectException: Connection refused
...
ERROR Producer connection to kafka.broker1:9092 unsuccessful
(kafka.producer.SyncProducer)
java.net.ConnectException: Connection refused
...

bin/kafka-topics.sh --zookeeper zookeeper1 --topic test.app4 --describe
Topic:test.app4  PartitionCount:1  ReplicationFactor:2  Configs:
  Topic: test.app4  Partition: 0  Leader: 0  Replicas: 1,0  Isr: 0
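
To double-check which host/port the clients are being told to connect to, the broker registration in zookeeper can also be inspected (again via the shell shipped with kafka; the ensemble address is mine):

bin/zookeeper-shell.sh zookeeper1:2181
# then, at the prompt:
ls /brokers/ids
get /brokers/ids/0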

The java producer error:
INFO  [kafka.producer.async.DefaultEventHandler] Back off for 100 ms before
retrying send. Remaining retries = 2
INFO  [kafka.client.ClientUtils$] Fetching metadata from broker
id:0,host:kafka.broker1,port:9092 with correlation id 236 for 1 topic(s)
Set(test.app4)
ERROR [kafka.producer.SyncProducer] Producer connection to
kafka.broker1:9092 unsuccessful
java.net.ConnectException: Connection refused
...
WARN  [kafka.client.ClientUtils$] Fetching topic metadata with correlation
id 236 for topics [Set(test.app4)] from broker
[id:0,host:kafka.broker1,port:9092] failed
java.net.ConnectException: Connection refused
...
INFO  [kafka.client.ClientUtils$] Fetching metadata from broker
id:1,host:kafka.broker2,port:9092 with correlation id 236 for 1 topic(s)
Set(test.app4)
ERROR [kafka.producer.SyncProducer] Producer connection to
kafka.broker2:9092 unsuccessful
java.net.ConnectException: Connection refused
   ...
WARN  [kafka.client.ClientUtils$] Fetching topic metadata with correlation
id 236 for topics [Set(test.app4)] from broker
[id:1,host:kafka.broker2,port:9092] failed
java.net.ConnectException: Connection refused
...
ERROR [kafka.utils.Utils$] fetching topic metadata for topics
[Set(test.app4)] from broker
[ArrayBuffer(id:0,host:kafka.broker1,port:9092,
id:1,host:kafka.broker2,port:9092)] failed
kafka.common.KafkaException: fetching topic metadata for topics
[Set(test.app4)] from broker
[ArrayBuffer(id:0,host:kafka.broker1,port:9092,
id:1,host:kafka.broker2,port:9092)] failed
...
Caused by: java.net.ConnectException: Connection refused
...
2014-04-02 13:31:12,419 DEBUG [kafka.producer.BrokerPartitionInfo] Getting
broker partition info for topic test.app4
2014-04-02 13:31:12,419 DEBUG [kafka.producer.BrokerPartitionInfo]
Partition [test.app4,0] has leader 1
2014-04-02 13:31:12,419 DEBUG [kafka.producer.async.DefaultEventHandler]
Broker partitions registered for topic: test.app4 are 0
2014-04-02 13:31:12,419 DEBUG [kafka.producer.async.DefaultEventHandler]
Sending 1 messages with no compression to [test.app4,0]
2014-04-02 13:31:12,420 DEBUG [kafka.producer.async.DefaultEventHandler]
Producer sending messages with correlation id 238 for topics [test.app4,0]
to broker 1 on kafka.broker2:9092
2014-04-02 13:31:12,423 ERROR [kafka.producer.SyncProducer] Producer
connection to kafka.broker2:9092 unsuccessful
java.net.ConnectException: Connection refused
...
WARN  [kafka.producer.async.DefaultEventHandler] Failed to send producer
request with correlation id 238 to broker 1 with data for partitions
[test.app4,0]
java.net.ConnectException: Connection refused
...
INFO  [kafka.producer.async.DefaultEventHandler] Back off for 100 ms before
retrying send. Remaining retries = 1

3. start up kafka.broker2
--> it fails in an infinite loop too:

INFO Reconnect due to socket error: null (kafka.consumer.SimpleConsumer)
WARN [ReplicaFetcherThread-0-0], Error in fetch Name: FetchRequest;
Version: 0; CorrelationId: 183; ClientId: ReplicaFetcherThread-0-0;
ReplicaId: 1; MaxWait: 500 ms; MinBytes: 1 bytes; RequestInfo:
[test.app4,0] -> PartitionFetchInfo(2,1048576)
(kafka.server.ReplicaFetcherThread)
java.net.ConnectException: Connection refused
...
[2014-04-02 12:56:33,816] INFO Reconnect