Thanks all. I created a WIP PR at https://github.com/apache/spark/pull/26496,
we can further discuss the details in there.
On Thu, Nov 7, 2019 at 7:01 PM Takuya UESHIN wrote:
> +1
>
+1
On Thu, Nov 7, 2019 at 6:54 PM Shane Knapp wrote:
> +1
>
+1
On Thu, Nov 7, 2019 at 6:08 PM Hyukjin Kwon wrote:
>
> +1
+1
On Wed, Nov 6, 2019 at 11:38 PM, Wenchen Fan wrote:
> Sounds reasonable to me. We should make the behavior consistent within
> Spark.
>
Sounds reasonable to me. We should make the behavior consistent within
Spark.
On Tue, Nov 5, 2019 at 6:29 AM Bryan Cutler wrote:
> Currently, when a PySpark Row is created with keyword arguments, the
> fields are sorted alphabetically.
Currently, when a PySpark Row is created with keyword arguments, the fields
are sorted alphabetically. This has created a lot of confusion with users
because it is not obvious (although it is stated in the pydocs) that they
will be sorted alphabetically. Then later when applying a schema and the
fi
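For readers unfamiliar with the behavior Bryan describes, here is a minimal pure-Python sketch of it. `LegacyRow` is a hypothetical stand-in, not the actual `pyspark.sql.Row` implementation; it only illustrates how sorting keyword fields alphabetically makes the stored order differ from the order written at the call site:

```python
class LegacyRow(tuple):
    """Sketch of the legacy behavior: keyword fields sorted alphabetically."""

    def __new__(cls, **kwargs):
        # The alphabetical sort under discussion: field order is determined
        # by the sorted names, not by the order the caller passed them in.
        names = sorted(kwargs.keys())
        row = tuple.__new__(cls, (kwargs[n] for n in names))
        row.__fields__ = names
        return row

    def __repr__(self):
        return "Row(%s)" % ", ".join(
            "%s=%r" % (n, v) for n, v in zip(self.__fields__, self))


r = LegacyRow(name="Alice", age=11)
print(r)             # Row(age=11, name='Alice') -- not the call-site order
print(r.__fields__)  # ['age', 'name']
```

The surprise is that `Row(name=..., age=...)` yields values in `(age, name)` order, so a schema declared in call-site order no longer lines up with the values; the proposal in this thread is to preserve the order the caller wrote instead.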