Having to restructure my queries isn't a very satisfactory solution,
unfortunately.
I did notice that if I implement the CatalystScan interface instead, the
filters DO get passed in, although the column identifiers would need some
translation to be usable, so that's another option.
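
For reference, this is roughly the shape of the CatalystScan variant I was
experimenting with (condensed and untested; the class and column names are
just placeholders, not my actual code):

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.catalyst.expressions.{Attribute, Expression}
import org.apache.spark.sql.sources.{BaseRelation, CatalystScan}
import org.apache.spark.sql.types._

class MyCatalystRelation(val sqlContext: SQLContext)
    extends BaseRelation with CatalystScan {

  // "a" is a struct column with a nested field "c"
  override def schema: StructType = StructType(Seq(
    StructField("a", StructType(Seq(StructField("c", IntegerType))))))

  override def buildScan(requiredColumns: Seq[Attribute],
                         filters: Seq[Expression]): RDD[Row] = {
    // Unlike with PrunedFilteredScan, a predicate such as a.c > 5 does
    // arrive here, but as a raw Catalyst expression tree (the column
    // references are Attribute / struct-field nodes), so it has to be
    // translated before the underlying source can use it.
    filters.foreach(f => println(f.treeString))
    sqlContext.sparkContext.emptyRDD[Row]
  }
}
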
Hi Richard,
I am not sure how to support user-defined types, but regarding your second
question, you can use a workaround like the following.
Suppose you have a struct column a and want to filter on a.c with a.c > X.
You can define an alias C for a.c and add an extra column C to the schema of
the relation, so that a filter on C is pushed down to buildScan like any
other top-level column filter.
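
A rough, untested sketch of what I mean (the struct/column names and the
filter value 5 are just placeholders):

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, Filter, GreaterThan, PrunedFilteredScan}
import org.apache.spark.sql.types._

class FlattenedRelation(val sqlContext: SQLContext)
    extends BaseRelation with PrunedFilteredScan {

  // expose a.c a second time as the flat alias column C
  override def schema: StructType = StructType(Seq(
    StructField("a", StructType(Seq(StructField("c", IntegerType)))),
    StructField("C", IntegerType)))

  override def buildScan(requiredColumns: Array[String],
                         filters: Array[Filter]): RDD[Row] = {
    // A query like  SELECT * FROM t WHERE C > 5  now hands the predicate
    // in as GreaterThan("C", 5), which the relation can map back to a.c
    // when scanning the underlying source.
    filters.collect { case GreaterThan("C", v) => v }.foreach(println)
    sqlContext.sparkContext.emptyRDD[Row]
  }
}
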
I defined my own relation (extending BaseRelation) and implemented the
PrunedFilteredScan interface, but discovered that if the column referenced
in a WHERE = clause is a user-defined type or a field of a struct column,
then Spark SQL passes NO filters to the PrunedFilteredScan.buildScan
method,