Hi all,

Is there a way I can provide static partitions in partitionBy()?
Like:

    df.write.mode("overwrite").format("MyDataSource").partitionBy("c=c1").save()

The above code fails because Spark looks for a column literally named `c=c1` in df:

    org.apache.spark.sql.AnalysisException: Partition column `c=c1` not found in schema struct<a:string,b:string,c:string>;

Thanks,
Shubham
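A possible workaround, sketched under the assumption that the standard Spark DataFrameWriter API is in play (`MyDataSource` is the format name from the question): partitionBy() accepts column names only, not "name=value" pairs, so the static value can be written into the partition column as a literal before saving. The writer then produces the usual Hive-style directory layout (.../c=c1/):

    import org.apache.spark.sql.functions.lit

    // partitionBy() expects column names, not "name=value" pairs.
    // Workaround sketch: overwrite column `c` with the static value,
    // then partition by `c`; output paths take the form .../c=c1/.
    df.withColumn("c", lit("c1"))
      .write
      .mode("overwrite")
      .format("MyDataSource")   // format taken from the question
      .partitionBy("c")
      .save()

Note this writes every row into the single static partition c=c1; it does not restrict overwriting to that partition unless the data source (or spark.sql.sources.partitionOverwriteMode=dynamic, on supported sources) handles that.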