I'm using a statement like the one quoted at the end of this message to load my dataframe from a tab-delimited text file.

Upon encountering the first error, the load throws an exception and processing stops.

I'd like to continue loading even if that results in zero rows in my dataframe. How can I do that?
Thanks


spark.read.schema(SomeSchema).option("sep", "\t").format("csv").load("somepath")
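
I came across the CSV reader's "mode" option (PERMISSIVE, DROPMALFORMED, FAILFAST), which appears to control how malformed records are handled. Below is a rough Scala sketch of what I'm considering; SomeSchema and "somepath" are just placeholders for my real schema and path, and I haven't confirmed this is the right approach:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

// Placeholder session and schema -- substitute the real ones.
val spark = SparkSession.builder().appName("tsv-load-sketch").getOrCreate()

val SomeSchema = new StructType()
  .add("id", IntegerType)
  .add("name", StringType)

val df = spark.read
  .schema(SomeSchema)
  .option("sep", "\t")
  // "DROPMALFORMED" skips records that don't match the schema instead of failing;
  // "PERMISSIVE" (the default) keeps them and sets unparsable fields to null.
  .option("mode", "DROPMALFORMED")
  .format("csv")
  .load("somepath")

If keeping track of the bad rows matters, the columnNameOfCorruptRecord option in PERMISSIVE mode also looks relevant, but I haven't tried it.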

