Thanks Mich, Nilesh.
What also works is creating a schema object and providing it via .schema(X) in the
spark.read statement.
Thanks a lot.
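For reference, a minimal sketch of that .schema(X) approach, assuming an existing SparkSession named `spark`; the field names and the path are placeholders, not taken from this thread:

```scala
// Hedged sketch: declare the expected schema up front so spark.read returns a
// DataFrame with known columns even when the parquet path holds no data files.
import org.apache.spark.sql.types.{StructType, StructField, LongType, StringType}

// Field names here are illustrative assumptions.
val expectedSchema = StructType(Seq(
  StructField("id", LongType, nullable = true),
  StructField("payload", StringType, nullable = true)
))

val df = spark.read
  .schema(expectedSchema)        // skip schema inference entirely
  .parquet("/path/to/parquet")   // placeholder path
```

With an explicit schema, Spark does not need to infer anything from the files, so an empty path no longer fails schema inference.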
On Sun, May 10, 2020 at 2:37 AM Nilesh Kuchekar wrote:
Hi Chetan,
You can keep a static parquet file created up front, and when you create a
dataframe you can pass the locations of both files with the option mergeSchema
set to true. This will always get you a dataframe even if the original file is
not present.
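A hedged sketch of that suggestion, assuming an existing SparkSession named `spark`; both paths are placeholders:

```scala
// Hedged sketch: read the real path together with a known-good static parquet
// file and merge their schemas, so the read succeeds even if the real path is
// empty. spark.read.parquet accepts multiple paths.
val df = spark.read
  .option("mergeSchema", "true")                        // unify the two schemas
  .parquet("/path/to/static.parquet", "/path/to/data")  // placeholder paths
```

The static file guarantees at least one readable footer, so the resulting dataframe always has the expected columns.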
Kuchekar, Nilesh
On Sat, May 9,
Have you tried catching the error when you are creating the dataframe?

import scala.util.{Try, Success, Failure}

val df = Try(spark.read.
  format("com.databricks.spark.xml").
  option("rootTag", "hierarchy").
  option("rowTag", "sms_request").
  load("/path/to/xml"))  // placeholder path; the original message was truncated here
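The pattern above can be sketched with plain Scala, independent of Spark; loadOrElse is a hypothetical helper, not part of any library:

```scala
// Hedged sketch of the Try-based fallback: wrap any loader in Try and return a
// default value when it throws, mirroring "create a dummy dataframe if no data
// is found". The loader argument is by-name, so it only runs inside Try.
import scala.util.{Try, Success, Failure}

def loadOrElse[A](load: => A, fallback: A): A =
  Try(load) match {
    case Success(value) => value
    case Failure(_)     => fallback // e.g. substitute an empty DataFrame here
  }

val ok = loadOrElse("real data", "dummy")                                      // → "real data"
val recovered = loadOrElse(throw new RuntimeException("path missing"), "dummy") // → "dummy"
```

In the Spark case, `fallback` would be a dummy dataframe built with `spark.createDataFrame` against the expected schema.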
Hi Spark Users,
I have a Spark job where I am reading from a parquet path. The data at that
path is generated by other systems, and some of the parquet paths contain no
data, which is possible. Is there any way to read the parquet so that, if no
data is found, I can create a dummy dataframe and go ahead?