JD>we still face the join problem though

Can you please clarify what a typical dataset you are trying to join looks
like (in number of rows/bytes)?
Am I right that you struggle with "fetch everything from Druid and join
via Enumerable", but are completely fine with "fetch everything from
Druid and join via Spark"?

I'm not sure Spark by itself would make things much faster.

Could you share some queries along with dataset sizes and expected/actual
execution plans?
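For reference, Calcite can print a plan with EXPLAIN PLAN FOR; the table and
column names below are hypothetical placeholders, not your actual schema:

```sql
-- Hypothetical example: show the plan Calcite chooses for a Druid-backed join
EXPLAIN PLAN FOR
SELECT d.dim_value, SUM(d.metric_value)
FROM druid_table d
JOIN lookup_table l ON d.dim_value = l.dim_value
GROUP BY d.dim_value;
```

Seeing whether the join lands in an EnumerableJoin over full Druid scans, and
how large the scanned inputs are, would help pinpoint where the time goes.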

Vladimir
