Hi Druid Dev Team,

We have a use case where data from a single Kafka topic serves as the
source for multiple datasources. Currently, we create a separate ingestion
spec per datasource, each with its own filter criteria. This works, but it
also creates N supervisors and their associated tasks. We have over 1,000
datasources and growing, and Overlord memory needs to be bumped up
frequently to keep them all running.
Is it possible to do this ingestion in a single supervisor/task, where the
filter criteria drive the target datasource? This would help a great many
use cases, including multi-tenant ones. If such a feature is not available,
can we raise a PR for it?
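
To make the ask concrete, something along the lines of the sketch below is
what we have in mind. This is purely hypothetical syntax; nothing like it
exists in Druid today, and the "datasourceMapping" field name is invented
just for discussion:

  {
    "type": "kafka",
    "spec": {
      "ioConfig": {
        "topic": "events",
        "inputFormat": { "type": "json" },
        "consumerProperties": { "bootstrap.servers": "kafka-broker:9092" }
      },
      "datasourceMapping": [
        {
          "dataSource": "tenant_a_events",
          "filter": { "type": "selector", "dimension": "tenant", "value": "tenant_a" }
        },
        {
          "dataSource": "tenant_b_events",
          "filter": { "type": "selector", "dimension": "tenant", "value": "tenant_b" }
        }
      ]
    }
  }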

Thanks
