amandeep-sharma commented on a change in pull request #31545: URL: https://github.com/apache/spark/pull/31545#discussion_r584049545
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/DataFrameNaFunctions.scala
##########

```diff
@@ -395,9 +395,9 @@ final class DataFrameNaFunctions private[sql](df: DataFrame) {
   private def fillMap(values: Seq[(String, Any)]): DataFrame = {
     // Error handling
-    values.foreach { case (colName, replaceValue) =>
+    val resolved = values.map { case (colName, replaceValue) =>
       // Check column name exists
-      df.resolve(colName)
+      val resolvedColumn = df.resolve(colName)
```

Review comment:
   The approach mentioned above will not work if the null-fill map uses a qualified column name (`dfAlias.columnName`), as in the following example:
   ```scala
   def payment = spark.read.format("csv").option("header", "true")
     .schema(paymentSchema)
     .load("Payment.csv").as("payment")

   payment.na.fill(Map("payment.`Customer.Id`" -> -1))
     .show()
   ```
   A column name quoted with backticks (`` ` ``) will also not work.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
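To make the failure mode concrete: a hypothetical sketch (not Spark's actual resolver code) of why a naive split on `.` mishandles a backtick-quoted name such as `payment.`Customer.Id``. All names here (`NameSplitSketch`, `naiveParts`, `quotedParts`) are illustrative, not part of the Spark API.

```scala
// Hypothetical sketch: contrasts a naive dot-split with a backtick-aware
// split for multi-part column names like "payment.`Customer.Id`".
object NameSplitSketch {
  // Naive split: treats every '.' as a qualifier separator, so the
  // quoted part "`Customer.Id`" is wrongly broken into two pieces.
  def naiveParts(name: String): Seq[String] = name.split("\\.").toSeq

  // Backtick-aware split: dots inside `...` are kept as part of the name.
  def quotedParts(name: String): Seq[String] = {
    val parts = scala.collection.mutable.ListBuffer.empty[String]
    val current = new StringBuilder
    var inQuote = false
    name.foreach {
      case '`'               => inQuote = !inQuote  // toggle quoting, drop the backtick
      case '.' if !inQuote   => parts += current.toString; current.clear()
      case c                 => current += c
    }
    parts += current.toString
    parts.toList
  }

  def main(args: Array[String]): Unit = {
    println(naiveParts("payment.`Customer.Id`"))   // wrong: three fragments
    println(quotedParts("payment.`Customer.Id`"))  // right: qualifier + column
  }
}
```

The point of the sketch: any resolution path that re-parses the raw string without honoring backtick quoting will see three name parts instead of two, which is why both qualified and backtick-quoted names need to go through the analyzer's resolver rather than an ad-hoc split.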