I have a big folder of ORC files. The files have a duration field (e.g.
3, 12, 26, etc.)
I also have a small JSON file (just 8 rows) with range definitions (min, max,
name):
0, 10, A
10, 20, B
20, 30, C
etc.

Because I can't do an equi-join between duration and the range min/max, I need
to do a cross join and apply a WHERE condition to keep only the records that
fall within the range.
A cross join is an expensive operation, so I think this particular join is
better done as a map join.

How do I do a map join in Spark SQL?
