Thanks guys for your support.
@Jay, this approach seems the easiest right now, I will try it.
@Jun, following your suggestion I’ve submitted a Jira:
https://issues.apache.org/jira/browse/KAFKA-1689
thanks,
Javier
On Wednesday 8 October 2014 at 00:45, Jay Kreps wrote:
I think the
Hi Jun,
Would by the end of next week be acceptable for 0.8.2?
Thanks,
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr Elasticsearch Support * http://sematext.com/
On Tue, Oct 7, 2014 at 4:04 PM, Jun Rao jun...@gmail.com wrote:
Otis,
Yes, if you guys
Kafka 0.7 has the following property for the producer:
connect.timeout.ms (default: 5000): the maximum time spent
by kafka.producer.SyncProducer trying
to connect to the Kafka broker. Once it elapses, the producer throws an
ERROR and stops.
But when I checked the Kafka 0.8 config, I couldn't find any such property.
Is
Even though I am able to ping the broker machine from my producer
machine, the producer is throwing the below exception while connecting to the
broker.
I wanted to increase the timeout for the producer but couldn't find any
parameter for that in Kafka 0.8.
Any idea what's wrong here?
[2014-10-08 09:29:47,762]
Can you check that you can connect on port 9092 from the producer to the
broker? (Check with telnet or something similar.)
Ping may succeed even when a port is blocked.
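One way to do that port check programmatically (rather than with telnet) is a plain TCP connect; here's a minimal Python sketch, assuming the broker listens on 9092:

```python
import socket

def can_connect(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: can_connect("broker-host", 9092)
```

Unlike ping (ICMP), this exercises the same TCP path the producer uses, so a firewall rule blocking the port shows up here.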
On Wed, Oct 8, 2014 at 9:40 AM, ravi singh rrs120...@gmail.com wrote:
Even though I am able to ping to the broker machine from my producer
Hi,
I'm not even sure if this is a valid use-case, but I really wanted to run
it by you guys. How do I load balance my consumers? For example, if my
consumer machine is under load, I'd like to spin up another VM with another
consumer process to keep reading messages off any topic. On similar
Thanks Gwen.
When you're saying that I can add consumers to the same group, does that
also hold true if those consumers are running on different machines? Or in
different JVMs?
--
Sharninder
On Wed, Oct 8, 2014 at 11:35 PM, Gwen Shapira gshap...@cloudera.com wrote:
If you use the high level
If you use the high level consumer implementation, and register all
consumers as part of the same group - they will load-balance
automatically.
When you add a consumer to the group, if there are enough partitions
in the topic, some of the partitions will be assigned to the new
consumer.
When a
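The rebalancing Gwen describes can be illustrated with a toy version of range-style assignment. This is a simplified sketch for intuition, not Kafka's actual rebalance algorithm:

```python
def assign_partitions(partitions, consumers):
    """Spread partition ids across consumers, roughly evenly (range-style sketch)."""
    consumers = sorted(consumers)
    n, k = len(partitions), len(consumers)
    per, extra = divmod(n, k)
    assignment, start = {}, 0
    for i, c in enumerate(consumers):
        count = per + (1 if i < extra else 0)
        assignment[c] = partitions[start:start + count]
        start += count
    return assignment

# With one consumer in the group it owns every partition;
# add a second consumer and some partitions move over to it.
```

The key point is the one Gwen makes: as long as there are more partitions than consumers, adding a consumer to the group shifts some partitions onto it.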
yep. exactly.
On Wed, Oct 8, 2014 at 11:07 AM, Sharninder sharnin...@gmail.com wrote:
Thanks Gwen.
When you're saying that I can add consumers to the same group, does that
also hold true if those consumers are running on different machines? Or in
different JVMs?
--
Sharninder
On Wed,
Here's an example (from ConsumerOffsetChecker tool) of 1 topic (t1)
and 1 consumer group (flume), each of the 3 topic partitions is being
read by a different machine running the flume consumer:
Group   Topic   Pid   Offset   logSize   Lag   Owner
flume
Thanks Gwen. This really helped.
Yes, Kafka is the best thing ever :)
Now how would this be done with the Simple consumer? I'm guessing I'll have
to maintain my own state in Zookeeper or something of that sort?
On Thu, Oct 9, 2014 at 12:01 AM, Gwen Shapira gshap...@cloudera.com wrote:
Here's
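Maintaining your own state with the SimpleConsumer, as suggested above, boils down to committing and fetching the next offset per (group, topic, partition). A minimal Python sketch of that bookkeeping, with a dict standing in for ZooKeeper or any durable store (the path layout here is illustrative, loosely mimicking the high-level consumer's znodes):

```python
class OffsetStore:
    """Tracks the next offset to fetch per (group, topic, partition)."""

    def __init__(self):
        self._store = {}  # stand-in for ZooKeeper znodes or a DB table

    def _path(self, group, topic, partition):
        return f"/consumers/{group}/offsets/{topic}/{partition}"

    def commit(self, group, topic, partition, offset):
        """Record the offset to resume from after the current batch is processed."""
        self._store[self._path(group, topic, partition)] = offset

    def fetch(self, group, topic, partition, default=0):
        """Return the last committed offset, or default for a fresh partition."""
        return self._store.get(self._path(group, topic, partition), default)
```

The consumer loop would fetch the offset at startup, read from there, and commit after processing; how often you commit is the usual trade-off between duplicate reads and lost progress on a crash.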
Hi,
I am trying to create a topic using TopicCommand and I get an error back. I see
a lot of rebalance-attempt logs. Can someone help me with this issue?
12702927406-17ccd847], begin rebalancing consumer
System_Dashboard1_DMIPVM-1412702927406-17ccd847 try #0
-
I have a few questions regarding the Kafka consumer.
In the Kafka properties we only mention the ZooKeeper IPs to connect to.
But I assume the consumer also connects to the Kafka broker for actually
consuming the messages.
We have a firewall enabled on ports, so in order to connect from my consumer
I need to
Yes, that is all correct--the consumer will use zookeeper for discovery and
then make direct connections to the appropriate brokers on 9092 or whatever
port you have configured.
-Jay
On Wed, Oct 8, 2014 at 3:32 PM, ravi singh rrs120...@gmail.com wrote:
I have few questions regarding Kafka
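So the consumer config only names ZooKeeper, but the firewall must also allow outbound connections to every broker's listener port. A sketch of the relevant 0.8 high-level consumer properties (hostnames and ports here are placeholders):

```properties
# consumer.properties (0.8 high-level consumer)
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
group.id=my-group
# The consumer discovers brokers via ZooKeeper, then connects directly
# to each broker's port (9092 by default), so the firewall needs both
# 2181 (ZooKeeper) and 9092 (every broker) open.
```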
Hi, All
I set up a Kafka cluster and plan to publish messages from the Web to Kafka.
The messages are in the form of JSON. I want to implement a consumer to write
the messages I consume to a PostgreSQL DB, no aggregation at all. I was thinking
to use KafkaSpout in Storm to make it happen, now I
I have seen a very high Fetch-Consumer-RequestsPerSec (like 15K) per broker
in a relatively idle cluster. My hypothesis is that some misbehaving consumer
has a tight polling loop with empty fetches and no back-off logic.
Unfortunately, this metric doesn't have a per-topic breakdown like
BytesInPerSec or
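A well-behaved consumer backs off when a fetch returns nothing, which keeps the request rate low on an idle topic. A sketch of the exponential back-off loop (pure illustration, not the Kafka client API; the sleep function is injected so the behavior is easy to observe):

```python
def poll_with_backoff(fetch, handle, sleep, iters, base_ms=100, cap_ms=5000):
    """Poll fetch(); back off exponentially (up to cap_ms) on empty results."""
    backoff = base_ms
    for _ in range(iters):
        batch = fetch()
        if batch:
            handle(batch)
            backoff = base_ms              # reset after useful work
        else:
            sleep(backoff / 1000.0)        # idle: wait before polling again
            backoff = min(backoff * 2, cap_ms)
```

Without the sleep, an idle consumer issues fetch requests as fast as the round-trip allows, which would produce exactly the kind of RequestsPerSec spike described above.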
Otis,
Just have the patch ready asap. We can make a call then.
Thanks,
Jun
On Wed, Oct 8, 2014 at 6:13 AM, Otis Gospodnetic otis.gospodne...@gmail.com
wrote:
Hi Jun,
Would by the end of next week be acceptable for 0.8.2?
Thanks,
Otis
--
Monitoring * Alerting * Anomaly Detection *
It's called request.timeout.ms in 0.8.
Thanks,
Jun
On Wed, Oct 8, 2014 at 8:58 AM, ravi singh rrs120...@gmail.com wrote:
Kafka 0.7 has the following property for the producer:
connect.timeout.ms (default: 5000): the maximum time spent
by kafka.producer.SyncProducer trying
to connect to the Kafka broker. Once it
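For reference, a sketch of where that setting lives in an 0.8 producer config (broker names are placeholders, and the value shown is illustrative; check your release's documentation for the actual default):

```properties
# producer.properties (0.8)
metadata.broker.list=broker1:9092,broker2:9092
# Maximum time the producer waits for a broker to respond to a request
request.timeout.ms=10000
```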
Which version of Kafka are you using?
Thanks,
Jun
On Wed, Oct 8, 2014 at 12:17 PM, Pradeep Badiger pradeepbadi...@fico.com
wrote:
Hi,
I am trying to create a topic using TopicCommand and I get an error back.
I see a lot of rebalance-attempt logs. Can someone help me with this issue?
You can look at the consumer example at
http://kafka.apache.org/documentation.html#highlevelconsumerapi
Thanks,
Jun
On Wed, Oct 8, 2014 at 7:51 PM, Sa Li sal...@gmail.com wrote:
Hi, All
I setup a kafka cluster, and plan to publish the messages from Web to
kafka, the messages are in the
If you enable request logging, you can find this out.
Thanks,
Jun
On Wed, Oct 8, 2014 at 8:57 PM, Steven Wu stevenz...@gmail.com wrote:
I have seen very high Fetch-Consumer-RequestsPerSec (like 15K) per broker
in a relatively idle cluster. My hypothesis some misbehaving consumer has a
tight
Jun, you mean trace level logging for requestAppender?
log4j.logger.kafka.network.Processor=TRACE, requestAppender
if it happens again, I can try to enable it.
On Wed, Oct 8, 2014 at 9:54 PM, Jun Rao jun...@gmail.com wrote:
If you enable request logging, you can find this out.
Thanks,
Jun