Is there a way to check whether a nested column exists in a DataFrame's schema in PySpark?
http://stackoverflow.com/questions/37471346/automatically-and-elegantly-flatten-dataframe-in-spark-sql
shows how to get the list of nested columns in Scala, but can the same be
done in PySpark?
Please help.
On Mon, Sep 12, 2016
I'm trying to analyze XML documents using the spark-xml package. Since all XML
columns are optional, some columns may or may not exist. When I register
the DataFrame as a table, how do I check whether a nested column exists or
not? My column name is "emp", which is already exploded, and I am trying to