Hello all,

Why are all DataFrameReader and DataFrameWriter methods tied to paths? In other words, why are there no readers/writers that can read data from an InputStream or write data to an OutputStream?
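
For context, as far as I can tell both entry points only accept path strings (a minimal illustration, assuming Spark 2.x; the paths below are placeholders):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("path-only-api")
      .master("local[*]")
      .getOrCreate()

    // Both reader and writer are path-based; there is no stream variant.
    val df = spark.read.parquet("hdfs:///data/in")   // DataFrameReader.parquet(paths: String*)
    df.write.parquet("hdfs:///data/out")             // DataFrameWriter.parquet(path: String)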

After looking at the Spark source code, I realize that there is no easy way to add methods that can handle I/O streams.

Does anybody know of a solution that does not require writing a full-blown connector?


I basically want to export a DataFrame as a Parquet byte stream and then take care of persisting the blob myself.
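
To make that concrete: the only route I can see today is a detour through a temp directory, roughly like the sketch below (a hypothetical helper, not tested; assumes local mode or a driver-visible filesystem, and a frame small enough to coalesce into a single partition):

    import java.io.File
    import java.nio.file.Files

    import org.apache.spark.sql.DataFrame

    // Hypothetical helper: write df as Parquet into a temp directory and
    // read the single part file back into memory as a byte array.
    def dataFrameToParquetBytes(df: DataFrame): Array[Byte] = {
      val tmpDir = Files.createTempDirectory("df-parquet-").toFile
      val outDir = new File(tmpDir, "out")

      // coalesce(1) so exactly one part file is written (only sensible for small frames)
      df.coalesce(1).write.parquet(outDir.getAbsolutePath)

      val partFile = outDir.listFiles()
        .find(f => f.getName.startsWith("part-") && f.getName.endsWith(".parquet"))
        .getOrElse(sys.error("no Parquet part file found in " + outDir))

      Files.readAllBytes(partFile.toPath)
    }

A DataFrameWriter variant that accepted an OutputStream would remove all of that temp-file bookkeeping, which is exactly what I am after.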

--
Regards
Roger Holenweger
LotaData <http://lotadata.com/>
spatiotemporal intelligence
