We are able to telnet to each of the Kafka nodes from the producer host, so it
doesn't appear to be a connectivity issue.

DNVCOML-2D3FFT3:~ uhodgjo$ telnet x.x.x.168 9092
Trying x.x.x.168...
Connected to x.x.x.168.
Escape character is '^]'.
^CConnection closed by foreign host.
DNVCOML-2D3FFT3:~ uhodgjo$ telnet x.x.x.48 9092
Trying x.x.x.48...
Connected to x.x.x.48.
Escape character is '^]'.
^CConnection closed by foreign host.
DNVCOML-2D3FFT3:~ uhodgjo$ telnet x.x.x.234 9092
Trying x.x.x.234...
Connected to x.x.x.234.
Escape character is '^]'.
^CConnection closed by foreign host.
DNVCOML-2D3FFT3:~ uhodgjo$ telnet x.x.x.121 9092
Trying x.x.x.121...
Connected to x.x.x.121.
Escape character is '^]'.
^CConnection closed by foreign host.
DNVCOML-2D3FFT3:~ uhodgjo$ telnet x.x.x.236 9092
Trying x.x.x.236...
Connected to x.x.x.236.
Escape character is '^]'.
^CConnection closed by foreign host.
DNVCOML-2D3FFT3:~ uhodgjo$
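
One thing that stands out in the stack traces below is the "log4j:WARN No
appenders could be found" line, which means the producer never logs the
underlying error before it gives up. A minimal sketch of a logging bootstrap we
could call at the top of main() before constructing the producer (this assumes
log4j 1.x is on the classpath; the class name here is made up):

import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class ProducerLoggingBootstrap {
    // Call once at the start of main(), before creating the Producer.
    public static void enableConsoleLogging() {
        // Attach a console appender to the root logger so the error that
        // precedes FailedToSendMessageException is actually printed.
        BasicConfigurator.configure();
        // DEBUG is noisy, but it shows the metadata fetch and per-broker
        // connection attempts made during each retry.
        Logger.getRootLogger().setLevel(Level.DEBUG);
    }
}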


On Tue, Jun 25, 2013 at 4:57 AM, Jonathan Hodges <hodg...@gmail.com> wrote:

> Hi Florin,
>
> I work with Yogesh, so it is interesting that you mention the
> 'metadata.broker.list' property, as that was the subject of the first error
> we saw.  Consider the following producer code.
>
> Properties props = new Properties();
> props.put("broker.list", "x.x.x.x:9092, x.x.x.x :9092, x.x.x.x :9092,
> x.x.x.x :9092, x.x.x.x :9092");
> props.put("producer.type", "sync");
> props.put("compression.codec", "2");  //snappy
> ProducerConfig config = new ProducerConfig(props);
> producer = new Producer<byte[], byte[]>(config);
>
> This throws the following exception about the required property
> 'metadata.broker.list':
>
> java.lang.IllegalArgumentException: requirement failed: Missing required property 'metadata.broker.list'
>         at scala.Predef$.require(Predef.scala:145)
>         at kafka.utils.VerifiableProperties.getString(VerifiableProperties.scala:158)
>         at kafka.producer.ProducerConfig.<init>(ProducerConfig.scala:66)
>         at kafka.producer.ProducerConfig.<init>(ProducerConfig.scala:56)
>         at com.pearson.firehose.KafkaProducer.<init>(KafkaProducer.java:21)
>         at com.pearson.firehose.KafkaProducer.main(KafkaProducer.java:40)
>
> So we added the 'metadata' prefix to the 'broker.list' property above, which
> fixed that exception.  However, this is where we started to see the producer
> retries error in the logs.  Could there be some problem with the value we are
> using for 'metadata.broker.list' that is preventing the producer from
> connecting?
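>
> For reference, here is a cleaned-up sketch of what the constructor looks like
> after that change (addresses are masked the same way as in the telnet output
> above, and we assume no other properties are set):
>
> import java.util.Properties;
> import kafka.javaapi.producer.Producer;
> import kafka.producer.ProducerConfig;
>
> Properties props = new Properties();
> // Comma-separated host:port pairs, no embedded spaces.
> props.put("metadata.broker.list",
>     "x.x.x.168:9092,x.x.x.48:9092,x.x.x.234:9092,x.x.x.121:9092,x.x.x.236:9092");
> props.put("producer.type", "sync");
> props.put("compression.codec", "2");  // 2 = snappy
> ProducerConfig config = new ProducerConfig(props);
> Producer<byte[], byte[]> producer = new Producer<byte[], byte[]>(config);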
>
> Thanks,
> Jonathan
>
>
>
> On Tue, Jun 25, 2013 at 1:12 AM, Florin Trofin <ftro...@adobe.com> wrote:
>
>> I got the same error, but I think I had a different issue than you: my code
>> was written for Kafka 0.7, and when I switched to 0.8 I changed the
>> "zk.connect" property to "metadata.broker.list" but left it with the same
>> value (which was, of course, the ZooKeeper host and port). In other words,
>> a "pilot error" :-) The snippet you provided doesn't seem to have this
>> problem, but it is interesting that we got the same error (it would be nice
>> if the error could be customized depending on the actual problem: host
>> unreachable, not responding, etc.)
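>>
>> In case it helps anyone else making the same switch, a tiny sketch of that
>> 0.7 -> 0.8 change (the host names here are made up):
>>
>> Properties props = new Properties();
>> // 0.7 producer: pointed at ZooKeeper.
>> //   props.put("zk.connect", "zkhost:2181");
>> // 0.8 producer: point at the brokers themselves, not ZooKeeper.
>> props.put("metadata.broker.list", "kafkabroker1:9092,kafkabroker2:9092");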
>>
>> F.
>>
>> On 6/24/13 10:55 PM, "Markus Roder" <roder.marku...@gmail.com> wrote:
>>
>> >We had this issue as well, but nevertheless the message was enqueued
>> >four times into the cluster. It would be great to get any hint on this
>> >issue.
>> >
>> >regards
>> >
>> >--
>> >Markus Roder
>> >
>> >Am 25.06.2013 um 07:18 schrieb Yogesh Sangvikar
>> ><yogesh.sangvi...@gmail.com>:
>> >
>> >> Hi Jun,
>> >>
>> >> The stack trace we found is as follow,
>> >>
>> >> log4j:WARN No appenders could be found for logger
>> >> (kafka.utils.VerifiableProperties).
>> >> log4j:WARN Please initialize the log4j system properly.
>> >> kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
>> >>        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
>> >>        at kafka.producer.Producer.send(Producer.scala:74)
>> >>        at kafka.javaapi.producer.Producer.send(Producer.scala:32)
>> >>        at com.pearson.firehose.KafkaProducer.publishTinCanMessage(KafkaProducer.java:27)
>> >>        at com.pearson.firehose.KafkaProducer.main(KafkaProducer.java:44)
>> >>
>> >> Please let me know if you need the complete producer code.
>> >>
>> >> Thanks,
>> >> Yogesh Sangvikar
>> >>
>> >>
>> >> On Tue, Jun 25, 2013 at 10:05 AM, Jun Rao <jun...@gmail.com> wrote:
>> >>
>> >>> Could you attach the log before FailedToSendMessageException in the
>> >>> producer? It should tell you the reason why the message can't be sent.
>> >>>
>> >>> Thanks,
>> >>>
>> >>> Jun
>> >>>
>> >>>
>> >>> On Mon, Jun 24, 2013 at 9:20 PM, Yogesh Sangvikar <
>> >>> yogesh.sangvi...@gmail.com> wrote:
>> >>>
>> >>>> Hi Team,
>> >>>>
>> >>>> We are using the kafka-0.8.0-beta1-candidate1 release (
>> >>>> https://github.com/apache/kafka/tree/0.8.0-beta1-candidate1).
>> >>>> While running the producer with the following configuration, we hit the
>> >>>> issue "kafka.common.FailedToSendMessageException: Failed to send
>> >>>> messages after 3 tries".
>> >>>>
>> >>>> We are using the default broker configurations.
>> >>>>
>> >>>> Code snippet:
>> >>>>
>> >>>> private Producer<byte[], byte[]> producer = null;
>> >>>>
>> >>>>  public KafkaProducer() {
>> >>>>    Properties props = new Properties();
>> >>>>    // broker1..broker5 = 10.252.8.168, .48, .234, .121, .236
>> >>>>    props.put("metadata.broker.list",
>> >>>>        "broker1:9092,broker2:9092,broker3:9092,broker4:9092,broker5:9092");
>> >>>>    props.put("producer.type", "sync");
>> >>>>    ProducerConfig config = new ProducerConfig(props);
>> >>>>    producer = new Producer<byte[], byte[]>(config);
>> >>>>  }
>> >>>>
>> >>>>  public void publishTinCanMessage(String message, int event) throws Exception {
>> >>>>    KeyedMessage<byte[], byte[]> data = new KeyedMessage<byte[], byte[]>(
>> >>>>        "tin_can_topic", String.valueOf(event % 3).getBytes(), message.getBytes());
>> >>>>    producer.send(data);
>> >>>>  }
>> >>>> ......
>> >>>>
>> >>>> Found issue:
>> >>>> kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
>> >>>>        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
>> >>>>        at kafka.producer.Producer.send(Producer.scala:74)
>> >>>>        at kafka.javaapi.producer.Producer.send(Producer.scala:32)
>> >>>>        at com.pearson.firehose.KafkaProducer.publishTinCanMessage(KafkaProducer.java:27)
>> >>>>        at com.pearson.firehose.KafkaProducer.main(KafkaProducer.java:44)
>> >>>>
>> >>>>
>> >>>>
>> >>>> But with props.put("producer.type", "async"); the producer was working
>> >>>> fine and generating messages.
>> >>>>
>> >>>> Could you please help us understand whether there is any configuration
>> >>>> missing, or whether there is an issue with "producer.type=sync"?
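>> >>>>
>> >>>> For what it is worth, a sketch of the retry-related producer properties
>> >>>> we believe control this behaviour in 0.8 (the "3 tries" appears to come
>> >>>> from the default message.send.max.retries; the values below are just
>> >>>> examples), in case tuning them helps while debugging:
>> >>>>
>> >>>>    // Retry each failed send more times, with a longer pause between
>> >>>>    // attempts, so transient metadata/leader issues have time to clear.
>> >>>>    props.put("message.send.max.retries", "10");
>> >>>>    props.put("retry.backoff.ms", "500");
>> >>>>    // Have the leader acknowledge each message in sync mode.
>> >>>>    props.put("request.required.acks", "1");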
>> >>>>
>> >>>>
>> >>>> Thanks in advance.
>> >>>>
>> >>>> Thanks,
>> >>>> Yogesh Sangvikar
>> >>>
>>
>>
>
