[ https://issues.apache.org/jira/browse/SPARK-13802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983094#comment-15983094 ]
Furcy Pin commented on SPARK-13802:
-----------------------------------

Hi,

I ran into similar issues and found this Jira, so I would like to add some grist to the mill. I ran this in the pyspark shell (v2.1.0):

{code}
>>> from pyspark.sql.types import *
>>> from pyspark.sql import Row
>>>
>>> rdd = spark.sparkContext.parallelize(range(1, 4))
>>>
>>> schema = StructType([StructField('a', IntegerType()),
...                      StructField('b', StringType())])
>>> spark.createDataFrame(rdd.map(lambda r: Row(a=1, b=None)), schema).collect()
[Row(a=1, b=None), Row(a=1, b=None), Row(a=1, b=None)]
>>>
>>> schema = StructType([StructField('b', IntegerType()),
...                      StructField('a', StringType())])
>>> spark.createDataFrame(rdd.map(lambda r: Row(b=1, a=None)), schema).collect()
[Row(b=1, a=None), Row(b=1, a=None), Row(b=1, a=None)]
{code}

When a schema is applied, the Row's field names seem to be correctly matched and reordered to fit the schema, which is quite nice, even though a Row created on its own orders its fields differently:

{code}
>>> Row(b=1, a=None)
Row(a=None, b=1)
{code}

However, I get inconsistent behavior as soon as structs are involved:

{code}
>>> schema = StructType([StructField('a', IntegerType()),
...                      StructField('b', StructType([StructField('c', StringType())]))])
>>> spark.createDataFrame(rdd.map(lambda r: Row(a=1, b=None)), schema).collect()
[Row(a=1, b=None), Row(a=1, b=None), Row(a=1, b=None)]
>>>
>>> schema = StructType([StructField('b', IntegerType()),
...                      StructField('a', StructType([StructField('c', StringType())]))])
>>> spark.createDataFrame(rdd.map(lambda r: Row(b=1, a=None)), schema).collect()
...
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "spark/python/lib/pyspark.zip/pyspark/worker.py", line 174, in main
    process()
  File "spark/python/lib/pyspark.zip/pyspark/worker.py", line 169, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "spark/python/lib/pyspark.zip/pyspark/serializers.py", line 268, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "spark/python/pyspark/sql/types.py", line 576, in toInternal
    return tuple(f.toInternal(v) for f, v in zip(self.fields, obj))
  File "spark/python/pyspark/sql/types.py", line 576, in <genexpr>
    return tuple(f.toInternal(v) for f, v in zip(self.fields, obj))
  File "spark/python/lib/pyspark.zip/pyspark/sql/types.py", line 436, in toInternal
    return self.dataType.toInternal(obj)
  File "spark/python/lib/pyspark.zip/pyspark/sql/types.py", line 593, in toInternal
    raise ValueError("Unexpected tuple %r with StructType" % obj)
ValueError: Unexpected tuple 1 with StructType
{code}

So it seems that pyspark can match a Row's field names against a schema, but only when no struct is involved. This is not very consistent, so I believe it should be considered a bug.
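The traceback gives a hint about where the two orders collide: {{StructType.toInternal}} zips the schema fields with the row values positionally ({{zip(self.fields, obj)}}), while {{Row(**kwargs)}} has already sorted its field names alphabetically. A minimal sketch of just that reordering (no SparkSession needed; the {{zip}} below only imitates what {{toInternal}} does):

{code}
>>> from pyspark.sql import Row
>>>
>>> # Row(**kwargs) sorts the field names alphabetically,
>>> # so the stored tuple is (a, b) = (None, 1).
>>> row = Row(b=1, a=None)
>>> row.__fields__
['a', 'b']
>>>
>>> # Zipping a schema declared as (b, a) against those values, the way
>>> # StructType.toInternal does, hands the integer 1 to field 'a'; when
>>> # 'a' is a StructType, that is exactly the ValueError above.
>>> list(zip(['b', 'a'], row))
[('b', None), ('a', 1)]
{code}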
> Fields order in Row(**kwargs) is not consistent with Schema.toInternal method
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-13802
>                 URL: https://issues.apache.org/jira/browse/SPARK-13802
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.6.0
>            Reporter: Szymon Matejczyk
>
> When using the Row constructor with kwargs, the fields in the underlying tuple are sorted by name. When the schema reads the row, it does not consume the fields in that order.
>
> {code}
> from pyspark.sql import Row
> from pyspark.sql.types import *
>
> schema = StructType([
>     StructField("id", StringType()),
>     StructField("first_name", StringType())])
>
> row = Row(id="39", first_name="Szymon")
> schema.toInternal(row)
> Out[5]: ('Szymon', '39')
> {code}
> {code}
> df = sqlContext.createDataFrame([row], schema)
> df.show(1)
> +------+----------+
> |    id|first_name|
> +------+----------+
> |Szymon|        39|
> +------+----------+
> {code}
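For anyone hitting this, a possible workaround, sketched against the same 2.1 shell session as the comment above ({{MyRow}} is a name made up here for illustration, and the outputs shown are what positional matching should produce): build the rows positionally, either through a Row class with an explicit field order or through plain tuples, so that values are matched to the schema by position and the alphabetical kwargs sorting never comes into play.

{code}
>>> from pyspark.sql.types import *
>>> from pyspark.sql import Row
>>>
>>> rdd = spark.sparkContext.parallelize(range(1, 4))
>>> schema = StructType([StructField('b', IntegerType()),
...                      StructField('a', StructType([StructField('c', StringType())]))])
>>>
>>> # A Row class built from field names keeps the declared order.
>>> MyRow = Row('b', 'a')
>>> spark.createDataFrame(rdd.map(lambda r: MyRow(1, None)), schema).collect()
[Row(b=1, a=None), Row(b=1, a=None), Row(b=1, a=None)]
>>>
>>> # Plain tuples in schema order behave the same way.
>>> spark.createDataFrame(rdd.map(lambda r: (1, None)), schema).collect()
[Row(b=1, a=None), Row(b=1, a=None), Row(b=1, a=None)]
{code}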