Re: kafka deleted old logs but not released

2015-01-07 Thread Yonghui Zhao
Yes, and I found the reason: the rename during deletion fails. During the rename the file is deleted (?), and then the exception prevents the file from being closed in Kafka. But I don't know how the rename failure can happen. [2015-01-07 00:10:48,685] ERROR Uncaught exception in scheduled task 'kafka-log-retention'

Re: mirrormaker tool in 0.82beta

2015-01-07 Thread svante karlsson
No, I missed that. thanks, svante 2015-01-07 6:44 GMT+01:00 Jun Rao j...@confluent.io: Did you set offsets.storage to kafka in the consumer of mirror maker? Thanks, Jun On Mon, Jan 5, 2015 at 3:49 PM, svante karlsson s...@csi.se wrote: I'm using 0.82beta and I'm trying to
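For reference, a minimal sketch of the consumer settings being asked about, assuming the 0.8.2 high-level consumer and a properties file passed to mirror maker via --consumer.config (hostnames and group id below are placeholders, not values from the original post):

    # consumer.properties for mirror maker (sketch)
    zookeeper.connect=source-zk-host:2181
    group.id=mirror-maker-group
    # commit offsets to Kafka instead of ZooKeeper
    offsets.storage=kafka
    # optionally keep committing to ZooKeeper too while migrating
    dual.commit.enabled=true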

Re: kafka deleted old logs but not released

2015-01-07 Thread Harsha
Yonghui, Which OS are you running? -Harsha On Wed, Jan 7, 2015, at 01:38 AM, Yonghui Zhao wrote: Yes, and I found the reason: the rename during deletion fails. During the rename the file is deleted (?), and then the exception prevents the file from being closed in Kafka. But I don't know how the rename

Re: NotLeaderForPartitionException while doing performance test

2015-01-07 Thread Sa Li
Yes, it is a weird hostname ;), that is what our system guys named it. How do I measure the connections open to 10.100.98.102? Thanks AL On Jan 7, 2015 9:42 PM, Jaikiran Pai jai.forums2...@gmail.com wrote: On Thursday 08 January 2015 01:51 AM, Sa Li wrote: I see this type of error

Re: kafka deleted old logs but not released

2015-01-07 Thread Jaikiran Pai
Apart from the fact that the file rename is failing (the API notes that there are chances of the rename failing), it looks like the implementation in FileMessageSet's rename can cause a couple of issues, one of them being a leak. The implementation looks like this
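For illustration only, a minimal Java sketch of the kind of pattern being described (this is not the actual FileMessageSet code, which is Scala): if the rename fails and the exception propagates before the channel is closed, the descriptor leaks, and on Linux the space of a segment that has already been unlinked is not released until the process closes the descriptor or exits.

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.channels.FileChannel;

    // Hypothetical delete path: rename the segment to *.deleted, then close and delete it.
    class SegmentDeleteSketch {
        private final File file;
        private final FileChannel channel;

        SegmentDeleteSketch(File file) throws IOException {
            this.file = file;
            this.channel = new RandomAccessFile(file, "rw").getChannel();
        }

        void delete() throws IOException {
            File renamed = new File(file.getPath() + ".deleted");
            if (!file.renameTo(renamed)) {
                // Throwing here skips channel.close() below, so the open (and possibly
                // already unlinked) segment keeps holding its disk space until the JVM exits.
                throw new IOException("Failed to rename " + file + " to " + renamed);
            }
            channel.close();                 // only reached when the rename succeeds
            if (!renamed.delete()) {
                throw new IOException("Failed to delete " + renamed);
            }
        }
    }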

Re: NotLeaderForPartitionException while doing performance test

2015-01-07 Thread Jaikiran Pai
On Thursday 08 January 2015 01:51 AM, Sa Li wrote: I see this type of error again; it goes back to normal in a few secs [2015-01-07 20:19:49,744] WARN Error in I/O with harmful-jar.master/ 10.100.98.102 That's a really weird hostname, harmful-jar.master. Is that really your hostname? You mention

Re: kafka deleted old logs but not released

2015-01-07 Thread Yonghui Zhao
CentOS release 6.3 (Final) 2015-01-07 22:18 GMT+08:00 Harsha ka...@harsha.io: Yonghui, Which OS are you running? -Harsha On Wed, Jan 7, 2015, at 01:38 AM, Yonghui Zhao wrote: Yes, and I found the reason: the rename during deletion fails. During the rename the file is deleted (?)

Re: question about jmxtrans to get kafka metrics

2015-01-07 Thread Jaikiran Pai
Hi Sa, Are you really sure w2 is a real hostname, something that is resolvable from the system where you are running this? The JSON output you posted seems very close to the example from the jmxtrans project page https://code.google.com/p/jmxtrans/wiki/GraphiteWriter, so I suspect you aren't
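For comparison, a minimal jmxtrans Graphite JSON sketch; the servers[].host entry must be a hostname that actually resolves to the JMX-enabled Kafka broker. The JMX port, MBean name, and Graphite host below are illustrative assumptions, not values from the original post:

    {
      "servers": [{
        "host": "your-kafka-broker",
        "port": "9999",
        "queries": [{
          "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
          "attr": ["Count", "OneMinuteRate"],
          "outputWriters": [{
            "@class": "com.googlecode.jmxtrans.model.output.GraphiteWriter",
            "settings": { "host": "your-graphite-host", "port": 2003 }
          }]
        }]
      }]
    }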

Re: NotLeaderForPartitionException while doing performance test

2015-01-07 Thread Jaikiran Pai
There are different ways to find the connection count and each one depends on the operating system that's being used. lsof -i is one option, for example, on *nix systems. -Jaikiran On Thursday 08 January 2015 11:40 AM, Sa Li wrote: Yes, it is weird hostname, ;), that is what our system guys
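As a concrete (hedged) example on a *nix box, assuming the broker listens on 9092 as in the perf-test command earlier in the thread, something like this counts the open TCP connections to that broker:

    lsof -i TCP@10.100.98.102:9092 | wc -l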

Re: Is it possible to enforce an unique constraint through Kafka?

2015-01-07 Thread Joseph Pachod
Thanks for your answers. @Mark Well, basically we agree. My question was more about figuring out the limits of Kafka; that's why I picked uniqueness as the test. Uniqueness doesn't imply ACID, yet it's already much more than a stream. I was wondering whether some clever trick could achieve it.

NotLeaderForPartitionException while doing performance test

2015-01-07 Thread Sa Li
Hi all, I am doing a performance test with bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance test-rep-three 5 100 -1 acks=1 bootstrap.servers=10.100.98.100:9092, 10.100.98.101:9092,10.100.98.102:9092 buffer.memory=67108864 batch.size=8196 where the topic

Re: NotLeaderForPartitionException while doing performance test

2015-01-07 Thread Sa Li
I see this type of error again; it goes back to normal in a few secs [2015-01-07 20:19:49,744] WARN Error in I/O with harmful-jar.master/ 10.100.98.102 (org.apache.kafka.common.network.Selector) java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

Re: KafkaUtils not consuming all the data from all partitions

2015-01-07 Thread Mukesh Jha
I understand that I have to create 10 parallel streams. My code runs fine when the number of partitions is ~20, but when I increase the number of partitions I keep hitting this issue. Below is my code to create the Kafka streams, along with the configs used. Map<String, String> kafkaConf = new

Re: KafkaUtils not consuming all the data from all partitions

2015-01-07 Thread francois . garillot
Hi Mukesh, If my understanding is correct, each Stream only has a single Receiver. So, if you have each receiver consuming 9 partitions, you need 10 input DStreams to create 10 concurrent receivers:
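A sketch of that pattern in Java, assuming the receiver-based KafkaUtils.createStream from spark-streaming-kafka for Kafka 0.8 (the topic name, ZooKeeper address, group id, and receiver/thread counts are placeholders):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    public class MultiReceiverSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("kafka-multi-receiver-sketch");
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(2));

            int numReceivers = 10;                           // one receiver per input DStream
            Map<String, Integer> topicMap = new HashMap<String, Integer>();
            topicMap.put("my-topic", 9);                     // 9 consumer threads inside each receiver

            List<JavaPairDStream<String, String>> streams =
                new ArrayList<JavaPairDStream<String, String>>(numReceivers);
            for (int i = 0; i < numReceivers; i++) {
                streams.add(KafkaUtils.createStream(jssc, "zk-host:2181", "my-group", topicMap));
            }
            // Union the per-receiver streams into one DStream for downstream processing.
            JavaPairDStream<String, String> unified =
                jssc.union(streams.get(0), streams.subList(1, streams.size()));
            unified.print();

            jssc.start();
            jssc.awaitTermination();
        }
    }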

Re: KafkaUtils not consuming all the data from all partitions

2015-01-07 Thread Gerard Maas
Hi, Could you add the code where you create the Kafka consumer? -kr, Gerard. On Wed, Jan 7, 2015 at 3:43 PM, francois.garil...@typesafe.com wrote: Hi Mukesh, If my understanding is correct, each Stream only has a single Receiver. So, if you have each receiver consuming 9 partitions, you

Re: KafkaUtils not consuming all the data from all partitions

2015-01-07 Thread francois . garillot
- You are launching up to 10 threads/topic per Receiver. Are you sure your receivers can support 10 threads each (i.e., in the default configuration, do they have 10 cores)? If they have 2 cores, that would explain why this works with 20 partitions or less. - If you have 90 partitions, why

connection error among nodes

2015-01-07 Thread Sa Li
Hi experts, Our cluster is a 3-node cluster and I am simply testing the producer locally, see bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance test-rep-three 100 3000 -1 acks=1 bootstrap.servers=10.100.98.100:9092 buffer.memory=67108864 batch.size=8196 But I got this error, and I

Re: NotLeaderForPartitionException while doing performance test

2015-01-07 Thread Sa Li
I checked the topic config; the ISR changes dynamically. root@voluminous-mass:/srv/kafka# bin/kafka-topics.sh --describe --zookeeper 10.100.98.101:2181 --topic test-rep-three Topic:test-rep-three  PartitionCount:8  ReplicationFactor:3  Configs:  Topic: test-rep-three  Partition: 0

Re: connection error among nodes

2015-01-07 Thread Sa Li
What bothers me is that sometimes the errors don't show up and sometimes they do. Why? On Wed, Jan 7, 2015 at 1:49 PM, Sa Li sal...@gmail.com wrote: Hi experts, Our cluster is a 3-node cluster and I am simply testing the producer locally, see bin/kafka-run-class.sh

question about jmxtrans to get kafka metrics

2015-01-07 Thread Sa Li
Hi all, I installed jmxtrans and Graphite and wish to be able to graph stuff from Kafka, but when I start jmxtrans I get these errors (I use the example graphite JSON). ./jmxtrans.sh start graphite.json [07 Jan 2015 17:55:58] [ServerScheduler_Worker-4] 180214 DEBUG

Re: KafkaUtils not consuming all the data from all partitions

2015-01-07 Thread Gerard Maas
AFAIK, there are two levels of parallelism related to the Spark Kafka consumer: At the JVM level: For each receiver, one can specify the number of threads for a given topic, provided as a map [topic -> nthreads]. This will effectively start n JVM threads consuming partitions of that Kafka topic. At

Re: Consumer and offset management support in 0.8.2 and 0.9

2015-01-07 Thread Jay Kreps
Hey guys, We need to take the versioning of the protocol seriously. People are definitely using the offset commit functionality in 0.8.1 and I really think we should treat this as a bug and revert the change to version 0. -Jay On Wed, Jan 7, 2015 at 9:24 AM, Jun Rao j...@confluent.io wrote:

Re: Consumer and offset management support in 0.8.2 and 0.9

2015-01-07 Thread Joe Stein
"We need to take the versioning of the protocol seriously" amen. "People are definitely using the offset commit functionality in 0.8.1" agreed. "I really think we should treat this as a bug and revert the change to version 0." What do you mean exactly by revert? Why can't we use version as a

Re: Consumer and offset management support in 0.8.2 and 0.9

2015-01-07 Thread Jay Kreps
Yes, I think we are saying the same thing. Basically I am saying version 0 should be considered frozen as of the format and behavior in the 0.8.1 release and we can do whatever we want as a version 1+. -Jay On Wed, Jan 7, 2015 at 10:10 AM, Joe Stein joe.st...@stealth.ly wrote: We need to take

Re: Consumer and offset management support in 0.8.2 and 0.9

2015-01-07 Thread Jun Rao
Yes, we did make an incompatible change to OffsetCommitRequest in 0.8.2, which was a mistake. The incompatible change was introduced in KAFKA-1012 in March 2014 when we added the Kafka-based offset management support. However, we didn't realize that this broke the wire protocol until much later.