Github user zero323 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16792#discussion_r99486909

    --- Diff: python/pyspark/sql/dataframe.py ---
    @@ -1272,16 +1272,18 @@ def replace(self, to_replace, value, subset=None):
         """Returns a new :class:`DataFrame` replacing a value with another value.
         :func:`DataFrame.replace` and :func:`DataFrameNaFunctions.replace` are
         aliases of each other.
    +    Values `to_replace` and `value` should be homogeneous. Mixed string and numeric
    --- End diff --

    Challenge accepted :) This makes me think we should also document the uniqueness requirements. A user might expect that:

    ```
    df.replace([1, 1.0], [2, 3.0])
    ```

    or

    ```
    df.replace({1: 2, 1.0: 3.0})
    ```

    replaces both `1` and `1.0`, when in fact only one replacement pair will be considered.
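As an aside on why the dict form collapses: this is plausibly plain Python behavior rather than anything PySpark-specific, since `1 == 1.0` and the two values hash identically, so they collide as dict keys before `replace` ever sees them. A minimal illustration in pure Python (no Spark required, and only a sketch of the likely mechanism):

```python
# 1 and 1.0 compare equal and hash identically, so as dict keys they
# collide: the first key object is kept, the later value overwrites.
mapping = {1: 2, 1.0: 3.0}

print(mapping)                         # {1: 3.0} -- only one pair survives
print(len(mapping))                    # 1
print(1 == 1.0, hash(1) == hash(1.0))  # True True
```

So `df.replace({1: 2, 1.0: 3.0})` receives a one-entry dict, which supports documenting the uniqueness requirement explicitly.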