Hi Daniel,

I observed that if I update log4j.properties and set
"log4j.logger.org.apache.kafka.clients=DEBUG", the swallowed ERROR appears
in a DEBUG message:

[2022-05-23 16:23:25,272] DEBUG [Producer clientId=producer-3] Exception
occurred during message send:
(org.apache.kafka.clients.producer.KafkaProducer:1000)
org.apache.kafka.common.errors.RecordTooLargeException: The message is
3503993 bytes when serialized which is larger than 1048576, which is the
value of the max.request.size configuration.
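For reference, the change that surfaces the swallowed error is a one-line addition to the worker's log4j config (the path below assumes the stock Connect distribution layout; adjust it to wherever your worker's log4j properties file lives):

```
# config/connect-log4j.properties
log4j.logger.org.apache.kafka.clients=DEBUG
```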

This confirms what you suggested.

After adding "max.request.size=5242880" to
config/connect-distributed.properties and raising the "max.message.bytes"
property of the connect-configs topic, the connector now works.
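The two changes above can be sketched as shell commands. This is a sketch under the assumptions from this thread: the broker address, file paths, and the 5 MB value are examples, and "connect-configs" is the default config.storage.topic name (yours may differ):

```shell
# 1) Raise the producer request size for the Connect worker
#    (takes effect after restarting the worker):
echo "max.request.size=5242880" >> config/connect-distributed.properties

# 2) Allow larger messages on the internal config topic
#    (topic-level default is ~1 MB):
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name connect-configs \
  --alter --add-config max.message.bytes=5242880
```

Note that both limits have to be raised together: the producer rejects oversized requests client-side before the broker's topic limit is ever consulted.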

Thanks for the help,

Ryan

On Sat, May 21, 2022 at 10:21 AM Urbán Dániel <urb.dani...@gmail.com> wrote:

> Hi Ryan,
>
> There are some limits, as the configs are stored inside an internal
> topic of Connect. So the usual message size and producer request size
> limitations apply.
> You can reconfigure the internal topic to allow larger messages than the
> default (I think it's 1MB), and the producer max request size for the
> Connect worker.
>
> Besides that, there is one extra bottleneck - Connect followers might
> forward the task configs to the leader through the REST API. The default
> request size limit of the REST client can also cause an issue; I'm not
> sure if there is a way to reconfigure that.
>
> I think you should be able to find some kind of a related error message,
> probably in the leader of the cluster, or the worker which was hosting
> the connector itself.
>
> Daniel
>
> On 2022-05-19 at 17:01, Ryan Slominski wrote:
> > Hi,
> >     Are there limits to the size of configuration data passed via the
> > taskConfigs method of the Connector class? I'm observing a situation
> > where, if I use a large configuration, no tasks are created and no log
> > messages appear in the connect log file. Using a smaller configuration
> > works. If there are limits, can I increase them? Also, it would probably
> > be a good idea for Kafka to log a warning message of some kind in this
> > scenario instead of failing silently. I'm using a custom Source Connector
> > and I have documented steps to reproduce the issue using Docker Compose
> > here:
> >
> > https://github.com/JeffersonLab/epics2kafka/issues/11
> >
> > Thanks for any insights!
> >
> > Ryan
> >
>
>
