I defined my own relation (extending BaseRelation) and implemented the
PrunedFilteredScan interface, but discovered that if the column referenced
in a WHERE clause equality predicate is a user-defined type or a field of a
struct column, then Spark SQL passes NO filters to the
PrunedFilteredScan.buildScan method, rendering the interface useless for
those columns. Is there really no way to implement a relation that can
optimize scans on such fields?
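
For reference, here is a minimal sketch of the kind of relation I mean (class and field names are made up for illustration). With a schema like this, a predicate on the top-level "id" column shows up in buildScan as an EqualTo filter, but a predicate on "payload.key" (or on a UDT column) arrives with an empty filters array:

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, Filter, PrunedFilteredScan}
import org.apache.spark.sql.types._

// Hypothetical relation illustrating the behavior described above.
class MyRelation(override val sqlContext: SQLContext)
    extends BaseRelation with PrunedFilteredScan {

  override def schema: StructType = StructType(Seq(
    StructField("id", StringType),
    // A struct column; filters on payload.key are NOT pushed down.
    StructField("payload", StructType(Seq(
      StructField("key", StringType))))
  ))

  override def buildScan(
      requiredColumns: Array[String],
      filters: Array[Filter]): RDD[Row] = {
    // WHERE id = 'x'          -> filters contains EqualTo("id", "x")
    // WHERE payload.key = 'x' -> filters is empty, so the relation
    //                            cannot prune the scan on that field
    sqlContext.sparkContext.emptyRDD[Row]
  }
}
```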

-- 
Rich
