Yes, that is correlated; thanks for the reminder.
I've updated the JIRA to reflect your observations as well.
Guozhang
On Wed, Mar 28, 2018 at 12:41 AM, Mihaela Stoycheva <mihaela.stoych...@gmail.com> wrote:
Hello Guozhang,
Thank you for the answer, that could explain what is happening. Is it
possible that this is related in some way to
https://issues.apache.org/jira/browse/KAFKA-6538?
Mihaela
On Wed, Mar 28, 2018 at 2:21 AM, Guozhang Wang wrote:
Hello Mihaela,
It is possible that, when you have caching enabled, the value of the record has already been serialized before being sent to the changelogger, while the key was not. Admittedly, that is not very friendly for troubleshooting via the related log4j entries.
Guozhang
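A toy sketch of the situation Guozhang describes (hypothetical names, not the actual Streams cache code): when the value is serialized on its way to the changelog but the key is not, a log line printing both shows a readable key next to raw bytes, which is why these entries are hard to read.

```python
# Hypothetical illustration: a cache entry whose value is already
# serialized while the key is still a plain object.
def serialize(s: str) -> bytes:
    return s.encode("utf-8")

key = "user-42"                       # not yet serialized
value = serialize("id-1,id-2,id-3")   # already bytes by logging time

# A log entry rendering both: the key reads fine, the value does not.
log_line = "forwarding key=%s value=%r" % (key, value)
print(log_line)
```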
On Tue, Mar 27, 2018 at 5:25, Mihaela Stoycheva <mihaela.stoych...@gmail.com> wrote:
Hello,
I have a Kafka Streams application that is consuming from two topics and internally aggregating, transforming, and joining data. One of the aggregation steps adds an id to an ArrayList of ids. Naturally, since there was a lot of data, the changelog message became too big and was not sent.
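The unbounded-growth mechanics can be sketched outside of Kafka Streams (a hypothetical Python stand-in for the aggregation step, with made-up id values): every update re-emits the entire list as the changelog value, so the record grows with each id until it passes the topic's message size limit (about 1 MB by default).

```python
import json

# Hypothetical stand-in for the aggregation step described above:
# each incoming record's id is appended to the running aggregate.
def aggregate(agg, record_id):
    agg.append(record_id)
    return agg

agg = []
for i in range(100_000):
    agg = aggregate(agg, "id-%07d" % i)

# The changelog value is the whole serialized aggregate, so its size
# grows with every update; 100k short ids already tops ~1 MB.
changelog_value = json.dumps(agg).encode("utf-8")
print(len(changelog_value))
```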
@James: that was incredible. Thank you.
On Wed, Apr 26, 2017 at 9:53 PM, James Cheng wrote:
Ramya, Todd, Jiefu, David,
Sorry to drag up an ancient thread. I was looking for something in my email archives, ran across this, and I might have solved part of these mysteries.
I ran across this post that talked about seeing weirdly large allocations when incorrect requests are accidentally sent to the broker.
Are you actually getting requests that are 1.3 GB in size, or is something
else happening, like someone trying to make HTTP requests against the Kafka
broker port?
-Todd
On Mon, Dec 12, 2016 at 4:19 AM, Ramya Ramamurthy <ramyaramamur...@teledna.com> wrote:
We have got exactly the same problem:
Invalid receive (size = 1347375956 larger than 104857600).
When trying to increase the size, we get a Java OutOfMemoryError.
Did you find a workaround for this?
Thanks.
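Todd's suspicion is easy to test: a Kafka broker reads the first four bytes on a connection as a big-endian int32 request size, so the leading bytes of some other protocol decode directly into these huge "lengths". A small sketch (the client strings are guesses that happen to reproduce the sizes in these messages):

```python
import struct

def as_request_size(first_bytes: bytes) -> int:
    # What the broker sees: the first 4 bytes as a big-endian int32.
    return struct.unpack(">i", first_bytes[:4])[0]

print(as_request_size(b"GET / HTTP/1.1"))    # 1195725856
print(as_request_size(b"POST /x HTTP/1.1"))  # 1347375956, the size above
print(as_request_size(b"stats\r\n"))         # 1937006964, seen in another message
```

So a "1.3 GB request" is usually just an HTTP verb (or some other protocol's greeting) being misread as a size prefix.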
Hi,
Does anybody know how to fix the following error? I didn't send any large message; it seems the system was sending a large message itself:
[2016-02-26 20:33:43,025] INFO Closing socket connection to /x due to invalid request: Request of length 1937006964 is not valid, it is larger than the maximum size of 104857600 bytes.
File "…", line 54, in send_messages
    topic, partition, *msg
File "/usr/local/lib/python2.7/dist-packages/kafka/producer/base.py", line 349, in send_messages
    return self._send_messages(topic, partition, *msg)
File "/usr/local/lib/python2.7/dist-packages/kafka/producer/base.py", line 390, in _send_messages
    fail_on_error=self.sync_fail_on_error
File "/usr/local/lib/python2.7/dist-packages/kafka/client.py", line 480, in send_produce_request
    (not fail_on_error or not self._raise_on_response_error(resp))]
File "/usr/local/lib/python2.7/dist-packages/kafka/client.py", line 247, in _raise_on_response_error
    raise resp
kafka.common.FailedPayloadsError

Here is what is in my logs:
[2015-07-12 03:29:58,103] INFO Closing socket connection to /xxx.xxx.xxx.xxx due to invalid request: Request of length 1550939497 is not valid, it is larger than the maximum size of 104857600 bytes. (kafka.network.Processor)

The server has 4 GB of RAM.
I used export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M" in kafka-server-start.sh.
So why?
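This connects the two symptoms in these messages: the broker allocates a buffer for whatever size the 4-byte prefix declares, up to socket.request.max.bytes (104857600, i.e. 100 MB, by default), so raising that cap on a 256 MB heap lets one garbage prefix request a gigabyte-scale allocation and OOM the JVM. A hedged sketch of the check behind the "is not valid" log line (names hypothetical, not the broker source):

```python
import struct

MAX_REQUEST_SIZE = 104857600  # default socket.request.max.bytes (100 MB)

def declared_request_size(prefix: bytes) -> int:
    # Validate the declared size *before* allocating a buffer for it;
    # garbage prefixes (e.g. HTTP verbs) fail here instead of OOMing.
    size = struct.unpack(">i", prefix[:4])[0]
    if not (0 < size <= MAX_REQUEST_SIZE):
        raise ValueError("Request of length %d is not valid, it is larger "
                         "than the maximum size of %d bytes"
                         % (size, MAX_REQUEST_SIZE))
    return size
```

In the logs above the broker is doing exactly this and dropping the connection, which is the safe behavior; the fix is to find the foreign client talking to the broker port, not to raise the limit.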