GitHub user icexelloss commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21427#discussion_r191016970

    --- Diff: python/pyspark/worker.py ---
    @@ -111,9 +114,16 @@ def wrapped(key_series, value_series):
                         "Number of columns of the returned pandas.DataFrame "
                         "doesn't match specified schema. "
                         "Expected: {} Actual: {}".format(len(return_type), len(result.columns)))
    -        arrow_return_types = (to_arrow_type(field.dataType) for field in return_type)
    -        return [(result[result.columns[i]], arrow_type)
    -                for i, arrow_type in enumerate(arrow_return_types)]
    +        try:
    +            # Assign result columns by schema name
    +            return [(result[field.name], to_arrow_type(field.dataType)) for field in return_type]
    +        except KeyError:
    --- End diff --

    One potential issue: if `to_arrow_type(field.dataType)` ever throws a `KeyError`, it will be caught here as well, which can lead to unintended behavior. If we want to rely on `KeyError`, maybe we should limit the try block to just the column lookup?
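A minimal sketch of the narrower try block the reviewer suggests, outside of Spark. The `to_arrow_type` stub, the `assign_columns` helper, and the dict-based "DataFrame" are all hypothetical stand-ins for illustration; only the names `result`, `return_type`, and `to_arrow_type` come from the diff above. The point is that only the by-name column lookup is guarded, so a `KeyError` raised during type conversion propagates instead of being masked:

```python
def to_arrow_type(data_type):
    # Hypothetical stub: like the real conversion, it may raise KeyError
    # for unsupported types, which is why the try block must stay narrow.
    supported = {"long": "int64", "string": "utf8"}
    return supported[data_type]

def assign_columns(result, return_type):
    # result: mapping of column name -> values (stand-in for a DataFrame)
    # return_type: list of (name, data_type) pairs (stand-in for a schema)
    out = []
    for name, data_type in return_type:
        try:
            # Only the by-name column lookup is guarded...
            column = result[name]
        except KeyError:
            raise RuntimeError(
                "Column %r of the returned DataFrame is missing" % name)
        # ...so a KeyError from to_arrow_type propagates unchanged
        # instead of being misreported as a missing column.
        out.append((column, to_arrow_type(data_type)))
    return out

print(assign_columns({"id": [1, 2], "name": ["a", "b"]},
                     [("id", "long"), ("name", "string")]))
```

With this shape, a missing result column still produces the intended error message, while an unsupported type surfaces as its own distinct failure.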