[ 
https://issues.apache.org/jira/browse/SPARK-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994849#comment-15994849
 ] 

John Berryman commented on SPARK-12467:
---------------------------------------

Here's a slightly different example that I think points out another problem:

{code}
from datetime import datetime 
from pyspark.sql import Row

rows = [
    Row(number=1, letters='real1', some_date=datetime(2017,12,1,3,15)),
    Row(number=2, letters='real2', some_date=datetime(2017,12,2,3,15)),
    Row(number=3, letters='real3', some_date=datetime(2017,12,3,3,15)),
]
rows_rdd = spark.sparkContext.parallelize(rows)
df = spark.createDataFrame(rows_rdd)

spark.sql('CREATE DATABASE test_trash')
df.write.mode(saveMode='overwrite').saveAsTable('test_trash.thingy')
schema = spark.sql('SELECT number, letters, some_date FROM test_trash.thingy').schema

df = spark.createDataFrame(rows_rdd, schema)
df.count()
{code}
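For reference, the field-order mismatch can be reproduced without a Spark cluster by mimicking Row's kwargs handling (a minimal sketch based on the {{\_\_new\_\_}} implementation quoted below, not pyspark's actual class):

{code}
# Minimal sketch of pyspark Row's kwargs handling: keyword fields
# are sorted alphabetically before the tuple is built.
class MiniRow(tuple):
    def __new__(cls, **kwargs):
        names = sorted(kwargs)  # the sort at the heart of this issue
        row = tuple.__new__(cls, [kwargs[n] for n in names])
        row.__fields__ = names
        return row

r = MiniRow(number=1, letters='real1', some_date='2017-12-01')
# Stored field order is alphabetical, not the call order:
print(r.__fields__)  # ['letters', 'number', 'some_date']
print(tuple(r))      # ('real1', 1, '2017-12-01')
{code}

So the rows are really the tuples {{('real1', 1, ...)}}, while the schema read back from the table lists {{number}} first. When {{createDataFrame}} matches the tuples against that schema positionally, the string lands in the integer column, which would explain the error.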

- In the first part of the code I define a bunch of Rows with the implicit 
schema {{number=int, letters=string, some_date=datetime}}.
- In the second part of the code I query a table made from that data set, and 
I select the fields in the same order ({{number, letters, some_date}}), so the 
schema should be exactly the same. (Though I still think order shouldn't 
matter, since Rows have named fields.)
- In the third part of the code I attempt to create a DataFrame using the 
original data and the schema that was created _from_ the original data. But I 
get an error saying that the original data doesn't fit _its own implied 
schema_.

If you can't write data into its own implied schema, then this is a bug.
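One way to sidestep the sorting today is pyspark's two-step Row usage, where a row class is built first with an explicit, positional field order ({{Row("number", "letters", "some_date")(1, 'real1', ...)}}). Here is a pure-Python sketch of that idea (the factory and class names are illustrative, not pyspark's implementation):

{code}
class OrderedRow(tuple):
    """Sketch of an order-preserving row: fields fixed positionally."""
    def __new__(cls, fields, values):
        if len(fields) != len(values):
            raise ValueError("field/value count mismatch")
        row = tuple.__new__(cls, values)
        row.__fields__ = list(fields)
        return row

def row_class(*fields):
    # Analogous to pyspark's Row("number", "letters", ...) factory:
    # returns a callable that builds rows in the declared order.
    return lambda *values: OrderedRow(fields, values)

Thingy = row_class("number", "letters", "some_date")
r = Thingy(1, "real1", "2017-12-01")
print(r.__fields__)  # ['number', 'letters', 'some_date']
{code}

Since the field order is declared once and never sorted, round-tripping through a schema built from the same order can't reshuffle the columns.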

> Get rid of sorting in Row's constructor in pyspark
> --------------------------------------------------
>
>                 Key: SPARK-12467
>                 URL: https://issues.apache.org/jira/browse/SPARK-12467
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, SQL
>    Affects Versions: 1.5.2
>            Reporter: Irakli Machabeli
>            Priority: Minor
>
> Current implementation of Row's __new__ sorts columns by name
> First of all, there is no obvious reason to sort. Second, if one converts a 
> dataframe to an rdd and then back to a dataframe, the order of columns changes. 
> While this is not a bug, it nevertheless makes looking at the data really 
> inconvenient.
>     def __new__(self, *args, **kwargs):
>         if args and kwargs:
>             raise ValueError("Can not use both args "
>                              "and kwargs to create Row")
>         if args:
>             # create row class or objects
>             return tuple.__new__(self, args)
>         elif kwargs:
>             # create row objects
>             names = sorted(kwargs.keys()) # just get rid of sorting here!!!
>             row = tuple.__new__(self, [kwargs[n] for n in names])
>             row.__fields__ = names
>             return row
>         else:
>             raise ValueError("No args or kwargs")



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
