As with `RDD.map`, you can throw any exception from your data source; it
will be propagated to the driver side and fail the Spark job.
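
For illustration, here is a minimal sketch of throwing from a reader,
assuming the Spark 3.x connector API
(`org.apache.spark.sql.connector.read.PartitionReader`; the 2.4-era
package names differ). The class name and `path` field are placeholders,
not part of any real connector. Checked exceptions declared on the
interface (e.g. `IOException` on `next()`) can be thrown directly;
anything else can be wrapped in an unchecked exception such as
`RuntimeException`. Either way the task fails and the cause surfaces on
the driver, typically wrapped in a `SparkException`.

    import java.io.IOException;

    import org.apache.spark.sql.catalyst.InternalRow;
    import org.apache.spark.sql.connector.read.PartitionReader;

    // Hypothetical reader, for illustration only; "RootPartitionReader"
    // and "path" are placeholders, not part of any real connector.
    class RootPartitionReader implements PartitionReader<InternalRow> {
      private final String path;

      RootPartitionReader(String path) {
        this.path = path;
      }

      @Override
      public boolean next() throws IOException {
        // next() declares IOException, so I/O failures can be thrown as-is.
        throw new IOException("cannot read " + path);
      }

      @Override
      public InternalRow get() {
        // get() declares no checked exceptions; wrap failures in an
        // unchecked exception. Spark fails the task and reports the
        // cause on the driver.
        throw new RuntimeException("malformed record in " + path);
      }

      @Override
      public void close() throws IOException {
        // release file handles, buffers, etc. here
      }
    }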

On Mon, Apr 8, 2019 at 3:10 PM Andrew Melo <andrew.m...@gmail.com> wrote:

> Hello,
>
> I'm developing a (Java) DataSourceV2 to read a columnar file format
> popular in a number of physical sciences (https://root.cern.ch/). (I
> also understand that the API isn't finalized and is subject to change.)
>
> My question is -- what is the expected way to propagate exceptions from
> the DataSource up to Spark? The DSV2 interface (unless I'm misreading
> it) doesn't declare any checked exceptions that can be thrown in the
> DS, so should I instead catch and rethrow any exceptions as unchecked
> exceptions? If so, is there a recommended hierarchy to throw from?
>
> thanks!
> Andrew
>
