kbendick commented on a change in pull request #4396:
URL: https://github.com/apache/iceberg/pull/4396#discussion_r835698360
##########
File path:
spark/v3.2/spark-extensions/src/main/scala/org/apache/spark/sql/catalyst/parser/extensions/IcebergSparkSqlExtensionsParser.scala
##########
@@ -177,11 +189,6 @@ class IcebergSparkSqlExtensionsParser(delegate: ParserInterface) extends ParserI
private def isIcebergCommand(sqlText: String): Boolean = {
val normalized = sqlText.toLowerCase(Locale.ROOT).trim()
Review comment:
Under the newly suggested form of first normalizing the SQL text in a
separate function and then passing it to this one, I chose to leave
`toLowerCase` inside this function, because receiving input that had not
been lowercased could silently break the check.
As mentioned, I'd prefer to keep all of the logic within `isIcebergCommand`,
given that we don't want to use the normalized text outside of this function /
this specific purpose.
That said, if you both agree that the named function is better (despite
possibly being unsafe), I can move all of the normalization into the new
function and leave just the boolean check in `isIcebergCommand`.
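For concreteness, here is a minimal sketch of the two shapes being compared. The command tokens and helper names (`normalizeSql`, the `startsWith` checks) are illustrative assumptions, not the actual matching logic in the parser:

```scala
import java.util.Locale

object Sketch {
  // Current preference: normalization stays inside isIcebergCommand,
  // so the check can never observe un-lowercased / untrimmed input.
  def isIcebergCommand(sqlText: String): Boolean = {
    val normalized = sqlText.toLowerCase(Locale.ROOT).trim()
    // Illustrative tokens only; the real check matches Iceberg's extension commands.
    normalized.startsWith("call") || normalized.startsWith("alter table")
  }

  // Alternative named-function form: normalization is factored out, but the
  // boolean check now silently assumes callers pass already-normalized text.
  def normalizeSql(sqlText: String): String =
    sqlText.toLowerCase(Locale.ROOT).trim()

  def isIcebergCommandUnsafe(normalized: String): Boolean =
    normalized.startsWith("call") || normalized.startsWith("alter table")
}
```

The trade-off mirrors the comment above: the first form is self-contained and safe by construction, while the second gives the normalization step a name at the cost of an implicit precondition on its input.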
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]