Did you try setting message.max.bytes and replica.fetch.max.bytes to
values larger than the message you are trying to send?
From the error message, they should be at least 1550939497.
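
For example, the broker side would need something like the lines below added
to server.properties (the values here are only placeholders; size them to the
largest message you actually expect to send):

# maximum size of a message the broker will accept
message.max.bytes=2097152
# must be at least as large as message.max.bytes so followers can replicate
replica.fetch.max.bytes=2097152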

On Sat, Jul 11, 2015 at 10:14 PM, David Montgomery
<davidmontgom...@gmail.com> wrote:
> Hi
>
>
> Below is my server.properties
>
> I am not having an issue with consuming from my Kafka broker.  I am
> having an issue writing to my broker.  A single send bombs.
>
>
> # limitations under the License.
> # see kafka.server.KafkaConfig for additional details and defaults
>
> ############################# Server Basics #############################
>
> # The id of the broker. This must be set to a unique integer for each
> # broker.
> broker.id=<%=@broker_id%>
>
> ############################# Socket Server Settings #############################
>
> # The port the socket server listens on
> port=9092
>
> # Hostname the broker will bind to and advertise to producers and consumers.
> # If not set, the server will bind to all interfaces and advertise the
> # value returned from java.net.InetAddress.getCanonicalHostName().
> host.name=<%=@ipaddress%>
>
> # The number of threads handling network requests
> num.network.threads=2
>
> # The number of threads doing disk I/O
> num.io.threads=2
>
> # The send buffer (SO_SNDBUF) used by the socket server
> socket.send.buffer.bytes=1048576
>
> # The receive buffer (SO_RCVBUF) used by the socket server
> socket.receive.buffer.bytes=1048576
>
> # The maximum size of a request that the socket server will accept
> # (protection against OOM)
> socket.request.max.bytes=104857600
>
>
> ############################# Log Basics #############################
>
> # A comma-separated list of directories under which to store log files
> log.dirs=/tmp/kafka-logs
>
>
>
> ############################# Log Flush Policy #############################
>
> # The following configurations control the flush of data to disk. This is
> # among the most important performance knobs in Kafka.
> # There are a few important trade-offs here:
> #    1. Durability: Unflushed data may be lost if you are not using replication.
> #    2. Latency: Very large flush intervals may lead to latency spikes when
> #       the flush does occur, as there will be a lot of data to flush.
> #    3. Throughput: The flush is generally the most expensive operation,
> #       and a small flush interval may lead to excessive seeks.
> # The settings below allow one to configure the flush policy to flush data
> # after a period of time or every N messages (or both). This can be done
> # globally and overridden on a per-topic basis.
>
> # The number of messages to accept before forcing a flush of data to disk
> log.flush.interval.messages=10000
>
> # The maximum amount of time a message can sit in a log before we force a
> # flush
> log.flush.interval.ms=1000
>
> # Per-topic overrides for log.flush.interval.ms
> #log.flush.intervals.ms.per.topic=topic1:1000, topic2:3000
>
> ############################# Log Retention Policy #############################
>
> # The following configurations control the disposal of log segments. The
> # policy can be set to delete segments after a period of time, or after a
> # given size has accumulated.
> # A segment will be deleted whenever *either* of these criteria is met.
> # Deletion always happens from the end of the log.
>
> # The minimum age of a log file to be eligible for deletion
> log.retention.hours=168
>
> # A size-based retention policy for logs. Segments are pruned from the log
> # as long as the remaining segments don't drop below log.retention.bytes.
> #log.retention.bytes=1073741824
>
> # The maximum size of a log segment file. When this size is reached a new
> # log segment will be created.
> log.segment.bytes=536870912
>
> # The interval at which log segments are checked to see if they can be
> # deleted according to the retention policies
> log.cleanup.interval.mins=1
>
> ############################# Zookeeper #############################
>
> # Zookeeper connection string (see zookeeper docs for details).
> # This is a comma-separated list of host:port pairs, each corresponding to a
> # ZooKeeper server, e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
> # You can also append an optional chroot string to the urls to specify the
> # root directory for all kafka znodes.
> #zookeeper.connect=localhost:2181
> zookeeper.connect=<%=@zookeeper%>
>
>
> # Timeout in ms for connecting to zookeeper
> zookeeper.connection.timeout.ms=1000000
>
>
> # The number of logical partitions per topic per server. More partitions
> # allow greater parallelism for consumption, but also mean more files.
> num.partitions=<%=@paritions%>
>
> On Sun, Jul 12, 2015 at 12:21 PM, Gwen Shapira <gshap...@cloudera.com>
> wrote:
>
>> You need to configure the Kafka broker to allow you to send larger
>> messages.
>> The relevant parameters are:
>>
>> message.max.bytes (default:1000000) – Maximum size of a message the
>> broker will accept. This has to be smaller than the consumer
>> fetch.message.max.bytes, or the broker will have messages that can’t
>> be consumed, causing consumers to hang.
>> replica.fetch.max.bytes (default: 1MB) – Maximum size of data that a
>> broker can replicate. This has to be larger than message.max.bytes, or
>> a broker will accept messages and fail to replicate them, leading to
>> potential data loss.
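>>
>> For example, if message.max.bytes is raised to 2097152 (2MB), the
>> consumer side needs at least as much (the value here is only
>> illustrative):
>>
>> # consumer configuration
>> fetch.message.max.bytes=2097152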
>>
>> Gwen
>>
>> On Sat, Jul 11, 2015 at 9:08 PM, David Montgomery
>> <davidmontgom...@gmail.com> wrote:
>> > I can't send this soooo simple payload using Python.
>> >
>> > topic: topic-test-development
>> > payload: {"utcdt": "2015-07-12T03:59:36", "ghznezzhmx": "apple"}
>> >
>> >
>> > No handlers could be found for logger "kafka.conn"
>> > Traceback (most recent call last):
>> >   File "/home/ubuntu/workspace/feed-tests/tests/druid-adstar.py", line 81, in <module>
>> >     test_send_data_to_realtimenode()
>> >   File "/home/ubuntu/workspace/feed-tests/tests/druid-adstar.py", line 38, in test_send_data_to_realtimenode
>> >     response = producer.send_messages(test_topic,test_payload)
>> >   File "/usr/local/lib/python2.7/dist-packages/kafka/producer/simple.py", line 54, in send_messages
>> >     topic, partition, *msg
>> >   File "/usr/local/lib/python2.7/dist-packages/kafka/producer/base.py", line 349, in send_messages
>> >     return self._send_messages(topic, partition, *msg)
>> >   File "/usr/local/lib/python2.7/dist-packages/kafka/producer/base.py", line 390, in _send_messages
>> >     fail_on_error=self.sync_fail_on_error
>> >   File "/usr/local/lib/python2.7/dist-packages/kafka/client.py", line 480, in send_produce_request
>> >     (not fail_on_error or not self._raise_on_response_error(resp))]
>> >   File "/usr/local/lib/python2.7/dist-packages/kafka/client.py", line 247, in _raise_on_response_error
>> >     raise resp
>> > kafka.common.FailedPayloadsError
>> >
>> > Here is what is in my logs
>> > [2015-07-12 03:29:58,103] INFO Closing socket connection to
>> > /xxx.xxx.xxx.xxx due to invalid request: Request of length 1550939497 is
>> > not valid, it is larger than the maximum size of 104857600 bytes.
>> > (kafka.network.Processor)
>> >
>> >
>> >
>> > The server has 4 GB of RAM.
>> >
>> > I used export KAFKA_HEAP_OPTS=-Xmx256M -Xms128M in kafka-server-start.sh
>> >
>> > So.....why?
>>
