Well, looking at the source, it looks like it's not implemented:
https://github.com/databricks/spark-csv/blob/master/src/main/scala/com/databricks/spark/csv/util/TextFile.scala#L34-L36
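As a possible workaround (a sketch, not tested end to end; it assumes the twitter/hadoop-lzo jars and native codec are on the classpath and that the .lzo file has already been indexed, e.g. with the DistributedLzoIndexer), you could do the splittable read yourself through hadoop-lzo's LzoTextInputFormat and then hand the resulting RDD[String] to spark-csv's CsvParser:

```scala
import com.hadoop.mapreduce.LzoTextInputFormat  // from twitter/hadoop-lzo (assumed available)
import org.apache.hadoop.io.{LongWritable, Text}
import com.databricks.spark.csv.CsvParser

// Splittable read: LzoTextInputFormat consults the .lzo.index file and
// creates one split per indexed block instead of one split per file.
val lines = sc.newAPIHadoopFile(
    "/user/sy/data.csv.lzo",
    classOf[LzoTextInputFormat],
    classOf[LongWritable],
    classOf[Text])
  .map(_._2.toString)

// Feed the decompressed lines to spark-csv's RDD entry point.
val df = new CsvParser()
  .withUseHeader(true)
  .withInferSchema(false)
  .csvRdd(sqlContext, lines)

df.count()
```

This sidesteps TextFile.scala entirely, so whether it pays off depends on hadoop-lzo being set up correctly on the cluster.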
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Reading-lzo-index-with-spark-csv-Splittable-reads-tp26103p26105.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
" -> "/user/sy/data.csv.lzo", "header" -> "true",
"inferSchema" -> "false")).load().count()
Does anyone know if this is currently supported?