As far as I know, in general, there isn't a way to distinguish explicit
null values from missing ones.  (Someone please correct me if I'm wrong,
since I would love to be able to do this for my own reasons.)  If you
really must do it, and don't care about performance at all (it will
be horrible), read each JSON object as its own one-row batch with schema
inference enabled.  If the inferred schema contains the column but the
value is null, you'll know it was explicitly set that way.  If the schema
doesn't contain the column, you'll know it was missing.
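
The same distinction is visible outside Spark, which may help illustrate the idea: when each object is parsed on its own, its "schema" is just the set of keys it contains.  A minimal plain-Python sketch (the field names `name` and `score` are made up for illustration):

```python
import json

# Parsing each record individually preserves the difference between an
# explicit null (key present, value None) and a missing key.
records = [
    '{"name": "a", "score": null}',  # score explicitly null
    '{"name": "b"}',                 # score absent entirely
]

for raw in records:
    obj = json.loads(raw)
    if "score" not in obj:
        print("missing")          # key never appeared in this record
    elif obj["score"] is None:
        print("explicit null")    # key present, set to null
    else:
        print("value:", obj["score"])
```

Doing the Spark equivalent means building a single-row DataFrame per object with schema inference on, then checking the inferred schema's field list -- hence the terrible performance.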

On Tue, Jun 23, 2020 at 7:34 AM Harmanat Singh <wish.man...@gmail.com>
wrote:

> Hi
>
> Please look at my issue from the link below.
>
> https://stackoverflow.com/questions/62526118/how-to-differentiate-between-null-and-missing-mongogdb-values-in-a-spark-datafra
>
>
> Kindly Help
>
>
> Best
> Mannat
>