Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-10-07 Thread Micah Kornfield
In case anyone wants to comment further, I've opened
https://github.com/apache/arrow/pull/8374 to canonicalize the details.



Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-09-28 Thread Micah Kornfield
OK, I will try to update documentation reflecting this in the next few days
(in particular it would be good to document which implementations are
willing to support byte flipping).



Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-09-22 Thread Antoine Pitrou



On 22/09/2020 at 06:36, Micah Kornfield wrote:
> I wanted to give this thread a bump, does the proposal I made below sound
> reasonable?

It does!

Regards

Antoine.




Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-09-22 Thread Kazuaki Ishizaki
Hi Micah,
Thank you. Your proposal also sounds reasonable to me.

Best Regards,
Kazuaki Ishizaki






Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-09-21 Thread Fan Liya
Hi Micah,

Thanks for your summary. Your proposal sounds reasonable to me.

Best,
Liya Fan




Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-09-21 Thread Micah Kornfield
I wanted to give this thread a bump, does the proposal I made below sound
reasonable?



Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-09-13 Thread Micah Kornfield
If I read the responses so far, it seems like the following might be a good
compromise/summary:

1. It does not seem too invasive to support native endianness in
implementation libraries, as long as there is appropriate performance
testing and CI infrastructure to demonstrate the changes work.
2. It is up to implementation maintainers whether they wish to accept PRs that
handle byte swapping between different architectures. (Right now it sounds
like C++ is potentially OK with it, and for Java at least Jacques is opposed
to it?)

Testing changes that break big-endian can be a potential drag on developer
productivity but there are methods to run locally (at least on more recent
OSes).

Thoughts?

Thanks,
Micah
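
The "native endianness" idea in point 1 can be sketched with plain java.nio,
independent of Arrow's actual code: a reader honors the byte order the
producer declares for a buffer, and pays a swap cost only when that order
differs from the platform's. A minimal, hypothetical illustration (names are
not from any Arrow API):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class NativeEndianRead {
    // Read 32-bit values from a payload whose byte order is declared by the
    // producer. ByteBuffer applies the declared order on access, so a swap
    // happens only when it differs from the platform's native order.
    static int[] readInts(byte[] payload, ByteOrder declared) {
        ByteBuffer buf = ByteBuffer.wrap(payload).order(declared);
        int[] out = new int[payload.length / Integer.BYTES];
        for (int i = 0; i < out.length; i++) {
            out[i] = buf.getInt(i * Integer.BYTES);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] bigEndian = {0, 0, 0, 1, 0, 0, 0, 2};     // [1, 2] encoded big-endian
        int[] values = readInts(bigEndian, ByteOrder.BIG_ENDIAN);
        System.out.println(values[0] + " " + values[1]); // prints "1 2" on any platform
        System.out.println("native order: " + ByteOrder.nativeOrder());
    }
}
```

The point of the sketch is that supporting native endianness is mostly about
never hard-coding LITTLE_ENDIAN where the format says "native".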



Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-08-31 Thread Fan Liya
Thank Kazuaki for the survey and thank Micah for starting the discussion.

I do not oppose supporting BE. In fact, I am in general optimistic about
the performance impact (for Java).
IMO, this is going to be a painful path (many byte-order-related problems
are tricky to debug), so I hope we can keep it short.

It is good that someone is willing to take this on, and I would like to
provide help if needed.

Best,
Liya Fan





Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-08-31 Thread Bryan Cutler
I also think this would be a worthwhile addition and help the project
expand in more areas. Beyond the Apache Spark optimization use case, having
Arrow interoperability with the Python data science stack on BE would be
very useful. I have looked at the remaining PRs for Java and they seem
pretty minimal and straightforward. Implementing the equivalent record
batch swapping as done in C++ at [1] would be a little more involved, but
still reasonable. Would it make sense to create a branch to apply all
remaining changes, with CI, to get a better picture before deciding on
bringing them into the master branch? I could help out with shepherding this
effort and assist in maintenance, if we decide to accept.

Bryan

[1] https://github.com/apache/arrow/pull/7507
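
The record batch swapping done by the C++ PR linked above boils down, for
fixed-width value buffers, to reversing each value's bytes in place. A hedged
Java sketch of just that core loop (illustrative only; a real implementation
must also walk the batch's buffers by type, and validity bitmaps, being
byte-oriented, would typically be left alone):

```java
import java.nio.ByteBuffer;

public class BufferSwap {
    // Reverse the byte order of every 32-bit slot in a buffer, in place.
    // Integer.reverseBytes is order-agnostic, so applying this twice
    // restores the original bytes.
    static void swapInt32Buffer(ByteBuffer buf) {
        for (int i = 0; i + Integer.BYTES <= buf.limit(); i += Integer.BYTES) {
            buf.putInt(i, Integer.reverseBytes(buf.getInt(i)));
        }
    }
}
```

Per-width variants (Short.reverseBytes, Long.reverseBytes) cover the other
fixed-width types; variable-width data needs its offsets buffer swapped but
not its raw bytes.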



Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-08-31 Thread Wes McKinney
I think it's well within the right of an implementation to reject BE
data (or non-native-endian), but if an implementation chooses to
implement and maintain the endianness conversions, then it does not
seem so bad to me.
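
The "reject or convert" choice amounts to one check at batch-load time:
compare the stream's declared endianness against the platform's. A
hypothetical sketch (the Endianness enum mirrors the schema-level field
discussed later in the thread; all names here are illustrative, not Arrow's
API):

```java
import java.nio.ByteOrder;

public class EndiannessPolicy {
    // Illustrative stand-in for the schema-level endianness declaration.
    enum Endianness { LITTLE, BIG }

    // True when the declared order matches this platform's native order,
    // i.e. the buffers can be used zero-copy. Otherwise an implementation
    // must either byte-swap on ingest or reject the stream.
    static boolean isNativeOrder(Endianness declared) {
        boolean nativeLittle = ByteOrder.nativeOrder() == ByteOrder.LITTLE_ENDIAN;
        return (declared == Endianness.LITTLE) == nativeLittle;
    }
}
```

Exactly one of the two declared orders is native on any given machine, which
is what makes "reject non-native data" a well-defined policy.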



Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-08-31 Thread Jacques Nadeau
And yes, for those of you looking closely, I commented on ARROW-245 when it
was committed. I just forgot about it.

It looks like I had mostly the same concerns then that I do now :) Now I'm
just more worried about format sprawl...

On Mon, Aug 31, 2020 at 1:30 PM Jacques Nadeau  wrote:

> What do you mean?  The Endianness field (a Big|Little enum) was added 4
>> years ago:
>> https://issues.apache.org/jira/browse/ARROW-245
>
>
> I didn't realize that was done, my bad. Good example of format rot from my
> pov.
>
>
>


Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-08-31 Thread Jacques Nadeau
>
> What do you mean?  The Endianness field (a Big|Little enum) was added 4
> years ago:
> https://issues.apache.org/jira/browse/ARROW-245


I didn't realize that was done, my bad. Good example of format rot from my
pov.


Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-08-31 Thread Antoine Pitrou
>>> E. The primitive data types in memory (e.g. Decimal128 in C++ and
>>> UnsafeDirectLittleEndian in Java) is read/written using the NE.
>>>
>>> A and B-C are typical use cases in Apache Arrow. Therefore, no endian
>> swap
>>> occurs in these use cases without performance overhead. B-D is rarely
>> used
>>> (e.g. send data from x86_64 to s390x). Thus, the data swap occurs only
>> once
>>> at the receive. After that, no data swap occurs for performance. For some
>>> use cases, this swap can be stopped by using an option. In these cases,
>>> Arrow will not process any data.
>>> E. allows us to accessing primitive data (e.g. int32, double, decimal128)
>>> without performance loss by using the platform-native endian load/stores.
>>>
>>> 2-1. Implementation strategy in Java Language
>>> The existing primitive data structures such as UnsafeDirectLittleEndian,
>>> ArrowBuf, and ValueVector should handle platform-native endian for the
>>> strategies A, B-C, and E without performance overhead.
>>> In the remaining strategy D, the method
>>> MessageSerializer.deserializeRecordBatch() will handle data swap when the
>>> endian of the host is different from that of the client, which
>> corresponds
>>> to the PR [6] in C++.
>>>
>>> 3. Testing plan
>>> For testing the strategies, A, B-C, and E, it would be good to increase
>>> the test coverage regardless of endianness e.g. increase the types of a
>>> schema to be tested in flight-core).
>>> For testing the strategy D, I already prepared data for be and le. When a
>>> PR will enable the data swap, the PR will also enable integration test.
>>> For performance testing, we can use the existing framework [7] by
>>> extending the support for other languages. We can run performance
>>> benchmarks on a little-endian platform to avoid performance regression.
>>>
>>> [1] https://arrow.apache.org/blog/2017/07/26/spark-arrow/
>>> [2]
>>>
>> https://databricks.com/blog/2017/10/30/introducing-vectorized-udfs-for-pyspark.html
>>> [3]
>>>
>> https://databricks.com/jp/blog/2020/06/01/vectorized-r-i-o-in-upcoming-apache-spark-3-0.html
>>> [4] https://databricks.com/jp/session_na20/wednesday-morning-keynotes
>>> [5] https://github.com/apache/arrow/pull/7507#discussion_r46819873
>>> [6] https://github.com/apache/arrow/pull/7507
>>> [7] https://github.com/apache/arrow/pull/7940#issuecomment-672690540
>>>
>>> Best Regards,
>>> Kazuaki Ishizaki
>>>
>>> Wes McKinney  wrote on 2020/08/26 21:27:49:
>>>
>>>> From: Wes McKinney 
>>>> To: dev , Micah Kornfield >>
>>>> Cc: Fan Liya 
>>>> Date: 2020/08/26 21:28
>>>> Subject: [EXTERNAL] Re: [DISCUSS] Big Endian support in Arrow (was:
>>>> Re: [Java] Supporting Big Endian)
>>>>
>>>> hi Micah,
>>>>
>>>> I agree with your reasoning. If supporting BE in some languages (e.g.
>>>> Java) is impractical due to performance regressions on LE platforms,
>>>> then I don't think it's worth it. But if it can be handled at compile
>>>> time or without runtime overhead, and tested / maintained properly on
>>>> an ongoing basis, then it seems reasonable to me. It seems that the
>>>> number of Arrow stakeholders will only increase from here so I would
>>>> hope that there will be more people invested in helping maintain BE in
>>>> the future.
>>>>
>>>> - Wes
>>>>
>>>> On Tue, Aug 25, 2020 at 11:33 PM Micah Kornfield
>>>>  wrote:
>>>>>
>>>>> I'm expanding the scope of this thread since it looks like work has
>>> also
>>>>> started for making golang support BigEndian architectures.
>>>>>
>>>>> I think as a community we should come to a consensus on whether we
>>> want to
>>>>> support Big Endian architectures in general.  I don't think it is a
>>> good
>>>>> outcome if some implementations accept PRs for Big Endian fixes and
>>> some
>>>>> don't.
>>>>>
>>>>> But maybe this is OK with others?
>>>>>
>>>>> My current opinion on the matter is that we should support it under
>> the
>>>>> following conditions:
>>>>>
>>>>> 1.  As long as there is CI in place to catch regressions (right now I
>>> think
>>>>> the CI is fairly unreliable?)
>>>>> 2.  No degradation in performance for little-endian architectures
>>>>> (verified by additional micro benchmarks)
>>>>> 3.  Not a large amount of invasive code to distinguish between platforms.

Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-08-30 Thread Jacques Nadeau
> > 3. Testing plan
> > For testing the strategies, A, B-C, and E, it would be good to increase
> > the test coverage regardless of endianness e.g. increase the types of a
> > schema to be tested in flight-core).
> > For testing the strategy D, I already prepared data for be and le. When a
> > PR will enable the data swap, the PR will also enable integration test.
> > For performance testing, we can use the existing framework [7] by
> > extending the support for other languages. We can run performance
> > benchmarks on a little-endian platform to avoid performance regression.
> >
> > [1] https://arrow.apache.org/blog/2017/07/26/spark-arrow/
> > [2]
> >
> https://databricks.com/blog/2017/10/30/introducing-vectorized-udfs-for-pyspark.html
> > [3]
> >
> https://databricks.com/jp/blog/2020/06/01/vectorized-r-i-o-in-upcoming-apache-spark-3-0.html
> > [4] https://databricks.com/jp/session_na20/wednesday-morning-keynotes
> > [5] https://github.com/apache/arrow/pull/7507#discussion_r46819873
> > [6] https://github.com/apache/arrow/pull/7507
> > [7] https://github.com/apache/arrow/pull/7940#issuecomment-672690540
> >
> > Best Regards,
> > Kazuaki Ishizaki
> >
> > Wes McKinney  wrote on 2020/08/26 21:27:49:
> >
> > > From: Wes McKinney 
> > > To: dev , Micah Kornfield  >
> > > Cc: Fan Liya 
> > > Date: 2020/08/26 21:28
> > > Subject: [EXTERNAL] Re: [DISCUSS] Big Endian support in Arrow (was:
> > > Re: [Java] Supporting Big Endian)
> > >
> > > hi Micah,
> > >
> > > I agree with your reasoning. If supporting BE in some languages (e.g.
> > > Java) is impractical due to performance regressions on LE platforms,
> > > then I don't think it's worth it. But if it can be handled at compile
> > > time or without runtime overhead, and tested / maintained properly on
> > > an ongoing basis, then it seems reasonable to me. It seems that the
> > > number of Arrow stakeholders will only increase from here so I would
> > > hope that there will be more people invested in helping maintain BE in
> > > the future.
> > >
> > > - Wes
> > >
> > > On Tue, Aug 25, 2020 at 11:33 PM Micah Kornfield
> > >  wrote:
> > > >
> > > > I'm expanding the scope of this thread since it looks like work has
> > also
> > > > started for making golang support BigEndian architectures.
> > > >
> > > > I think as a community we should come to a consensus on whether we
> > want to
> > > > support Big Endian architectures in general.  I don't think it is a
> > good
> > > > outcome if some implementations accept PRs for Big Endian fixes and
> > some
> > > > don't.
> > > >
> > > > But maybe this is OK with others?
> > > >
> > > > My current opinion on the matter is that we should support it under
> the
> > > > following conditions:
> > > >
> > > > 1.  As long as there is CI in place to catch regressions (right now I
> > think
> > > > the CI is fairly unreliable?)
> > > > 2.  No degradation in performance for little-endian architectures
> > (verified
> > > > by additional micro benchmarks)
> > > > 3.  Not a large amount of invasive code to distinguish between
> > platforms.
> > > >
> > > > Kazuaki Ishizaki I asked question previously, but could you give some
> > data
> > > > points around:
> > > > 1.  The current state of C++ support (how much code needed to
> change)?
> > > > 2.  How many more PRs you expect to need for Java (and approximate
> > size)?
> > > >
> > > > I think this would help myself and others in the decision making
> > process.
> > > >
> > > > Thanks,
> > > > Micah
> > > >
> > > > On Tue, Aug 18, 2020 at 9:15 AM Micah Kornfield <
> emkornfi...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > My thoughts on the points raised so far:
> > > > >
> > > > > * Does supporting Big Endian increase the reach of Arrow by a lot?
> > > > >
> > > > > Probably not a significant amount, but it does provide one more
> > avenue of
> > > > > adoption.
> > > > >
> > > > > * Does it increase code complexity?
> > > > >
> > > > > Yes.  I agree this is a concern.

Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-08-30 Thread Micah Kornfield
Looking over the outstanding PRs while the code isn't necessarily pretty, I
don't think they are too invasive.

Also it seems that Kazuaki Ishizaki is willing to add benchmarks where
necessary to verify the lack of performance regressions.  (Please correct
me if I misunderstood).

Jacques and Liya Fan does this address your concerns?  Are there further
details that you would like to discuss?  Are you still opposed to support
in Java?

Do maintainers of other implementations have concerns (in particular Go
seems to be the other language in progress)?

Thanks,
Micah

On Wed, Aug 26, 2020 at 6:57 AM Kazuaki Ishizaki 
wrote:

> Hi,
> I waited for comments regarding Java Big-Endian (BE) support during my
> one-week vacation. Thank you for good suggestions and comments.
> I already responded to some questions in another mail. This mail addresses
> the remaining questions: Use cases, holistic strategy for BE support, and
> testing plans
>
> 1. Use cases
> The use case of Arrow Java is in Apache Spark, which was already published
> in Arrow Blog [1]. This is used as the typical performance acceleration of
> Spark with other languages such as Python [2] and R [3]. In DataBricks
> notebook, 68% of commands come from Python [4].
>
> 2. Holistic strategy of BE support across languages
> I mostly completed BE support in C++. This implementation uses the
> following strategy:
> A. Write and read data in a record batch using platform-native endian (NE)
> when the data is created on a host. The endianness is stored in an endian
> field in the schema.
> B. Send data using the IPC-host endian among processes using IPC.
> C. At B, if an IPC-client endian is different from the received data
> endian, the IPC client receives data without data copy.
> D. At B, if an IPC-client endian is different from the received data
> endian, the IPC client swaps endian of the received data to match the
> endian with the IPC-client endian as default.
> E. The primitive data types in memory (e.g. Decimal128 in C++ and
> UnsafeDirectLittleEndian in Java) is read/written using the NE.
>
> A and B-C are typical use cases in Apache Arrow. Therefore, no endian swap
> occurs in these use cases without performance overhead. B-D is rarely used
> (e.g. send data from x86_64 to s390x). Thus, the data swap occurs only once
> at the receive. After that, no data swap occurs for performance. For some
> use cases, this swap can be stopped by using an option. In these cases,
> Arrow will not process any data.
> E. allows us to accessing primitive data (e.g. int32, double, decimal128)
> without performance loss by using the platform-native endian load/stores.
>
> 2-1. Implementation strategy in Java Language
> The existing primitive data structures such as UnsafeDirectLittleEndian,
> ArrowBuf, and ValueVector should handle platform-native endian for the
> strategies A, B-C, and E without performance overhead.
> In the remaining strategy D, the method
> MessageSerializer.deserializeRecordBatch() will handle data swap when the
> endian of the host is different from that of the client, which corresponds
> to the PR [6] in C++.
>
> 3. Testing plan
> For testing the strategies, A, B-C, and E, it would be good to increase
> the test coverage regardless of endianness e.g. increase the types of a
> schema to be tested in flight-core).
> For testing the strategy D, I already prepared data for be and le. When a
> PR will enable the data swap, the PR will also enable integration test.
> For performance testing, we can use the existing framework [7] by
> extending the support for other languages. We can run performance
> benchmarks on a little-endian platform to avoid performance regression.
>
> [1] https://arrow.apache.org/blog/2017/07/26/spark-arrow/
> [2]
> https://databricks.com/blog/2017/10/30/introducing-vectorized-udfs-for-pyspark.html
> [3]
> https://databricks.com/jp/blog/2020/06/01/vectorized-r-i-o-in-upcoming-apache-spark-3-0.html
> [4] https://databricks.com/jp/session_na20/wednesday-morning-keynotes
> [5] https://github.com/apache/arrow/pull/7507#discussion_r46819873
> [6] https://github.com/apache/arrow/pull/7507
> [7] https://github.com/apache/arrow/pull/7940#issuecomment-672690540
>
> Best Regards,
> Kazuaki Ishizaki
>
> Wes McKinney  wrote on 2020/08/26 21:27:49:
>
> > From: Wes McKinney 
> > To: dev , Micah Kornfield 
> > Cc: Fan Liya 
> > Date: 2020/08/26 21:28
> > Subject: [EXTERNAL] Re: [DISCUSS] Big Endian support in Arrow (was:
> > Re: [Java] Supporting Big Endian)
> >
> > hi Micah,
> >
> > I agree with your reasoning. If supporting BE in some languages (e.g.
> > Java) is impractical due to performance regressions on LE platforms,
> > then I don't think it's worth it.

RE: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-08-26 Thread Kazuaki Ishizaki
Hi,
I waited for comments regarding Java Big-Endian (BE) support during my
one-week vacation. Thank you for the good suggestions and comments.
I already responded to some questions in another mail. This mail addresses
the remaining questions: use cases, the holistic strategy for BE support,
and the testing plan.

1. Use cases
A major use case of Arrow Java is Apache Spark, as already published on the
Arrow blog [1]. Arrow is used as the typical performance accelerator for
exchanging data between Spark and other languages such as Python [2] and
R [3]. In Databricks notebooks, 68% of commands come from Python [4].

2. Holistic strategy for BE support across languages
I have mostly completed BE support in C++. The implementation uses the
following strategy:
A. Write and read data in a record batch using the platform-native
endianness (NE) when the data is created on a host. The endianness is
recorded in the endianness field of the schema.
B. Send data between processes over IPC using the IPC host's endianness.
C. At B, if the IPC client's endianness is the same as that of the received
data, the IPC client receives the data without a copy.
D. At B, if the IPC client's endianness differs from that of the received
data, the IPC client by default swaps the received data to match its own
endianness.
E. The primitive data types in memory (e.g. Decimal128 in C++ and
UnsafeDirectLittleEndian in Java) are read/written using the NE.

A and B-C are the typical use cases in Apache Arrow; no endian swap occurs
in these cases, so there is no performance overhead. B-D is rarely used
(e.g. sending data from x86_64 to s390x). There, the data swap occurs only
once, at receive time; after that, no further swap is needed, preserving
performance. For some use cases, this swap can be disabled with an option,
in which case Arrow does not convert the data.
E allows us to access primitive data (e.g. int32, double, decimal128)
without performance loss by using platform-native endian loads/stores.
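For illustration only (this is a sketch, not Arrow code, and the class and method names are hypothetical), the following shows the two operations the strategy relies on: reading values with the platform-native byte order (strategies A and E), and swapping a buffer of fixed-width values once at receive time when the producer's endianness differs (strategy D).

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianSketch {
    // Strategies A/E: read values using the platform-native byte order,
    // so no conversion cost is paid on the common same-endian path.
    static int readNative(ByteBuffer buf, int index) {
        return buf.order(ByteOrder.nativeOrder()).getInt(index * Integer.BYTES);
    }

    // Strategy D: reverse the bytes of every 4-byte value in place. This is
    // done once, at receive time, when the producer's endianness differs.
    static void swapInt32Buffer(ByteBuffer buf) {
        for (int i = 0; i + Integer.BYTES <= buf.limit(); i += Integer.BYTES) {
            buf.putInt(i, Integer.reverseBytes(buf.getInt(i)));
        }
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8).order(ByteOrder.nativeOrder());
        buf.putInt(0, 1).putInt(4, 258); // 258 = 0x00000102
        swapInt32Buffer(buf);            // simulate receiving foreign-endian data
        System.out.println(readNative(buf, 1)); // prints 33619968 (0x02010000)
    }
}
```

After the single swap, all subsequent reads use native-order loads, which is why the strategy has no steady-state overhead.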

2-1. Implementation strategy for the Java language
The existing primitive data structures such as UnsafeDirectLittleEndian,
ArrowBuf, and ValueVector should handle the platform-native endianness for
strategies A, B-C, and E without performance overhead.
For the remaining strategy D, the method
MessageSerializer.deserializeRecordBatch() will swap the data when the
endianness of the host differs from that of the client; this corresponds
to the C++ PR [6].
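As a hedged sketch of the dispatch involved (hypothetical names, not the actual MessageSerializer API): the endianness recorded in the schema is compared against the JVM's native order once per record batch, so the per-value hot path carries no endian branch.

```java
import java.nio.ByteOrder;

public class EndianDispatch {
    // Mirrors the Big|Little Endianness enum stored in the Arrow schema.
    enum Endianness { LITTLE, BIG }

    // Resolved once at class-load time; the JIT treats this as a constant,
    // so same-endian code paths compile without an extra branch.
    static final Endianness NATIVE =
        ByteOrder.nativeOrder() == ByteOrder.LITTLE_ENDIAN
            ? Endianness.LITTLE : Endianness.BIG;

    // Decided once per received record batch, not once per value.
    static boolean needsSwap(Endianness batchEndianness) {
        return batchEndianness != NATIVE;
    }

    public static void main(String[] args) {
        // Exactly one of these is true, depending on the host platform.
        System.out.println(needsSwap(Endianness.LITTLE));
        System.out.println(needsSwap(Endianness.BIG));
    }
}
```

Hoisting the comparison to a per-batch (or class-load-time) decision is what keeps strategy D from penalizing the common same-endian case.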

3. Testing plan
For strategies A, B-C, and E, it would be good to increase test coverage
regardless of endianness (e.g. increase the types of a schema tested in
flight-core).
For strategy D, I have already prepared big-endian and little-endian test
data. The PR that enables the data swap will also enable the integration
test.
For performance testing, we can use the existing framework [7], extending
its support to other languages. We can run the performance benchmarks on a
little-endian platform to catch performance regressions.

[1] https://arrow.apache.org/blog/2017/07/26/spark-arrow/
[2] 
https://databricks.com/blog/2017/10/30/introducing-vectorized-udfs-for-pyspark.html
[3] 
https://databricks.com/jp/blog/2020/06/01/vectorized-r-i-o-in-upcoming-apache-spark-3-0.html
[4] https://databricks.com/jp/session_na20/wednesday-morning-keynotes
[5] https://github.com/apache/arrow/pull/7507#discussion_r46819873
[6] https://github.com/apache/arrow/pull/7507
[7] https://github.com/apache/arrow/pull/7940#issuecomment-672690540

Best Regards,
Kazuaki Ishizaki

Wes McKinney  wrote on 2020/08/26 21:27:49:

> From: Wes McKinney 
> To: dev , Micah Kornfield 
> Cc: Fan Liya 
> Date: 2020/08/26 21:28
> Subject: [EXTERNAL] Re: [DISCUSS] Big Endian support in Arrow (was: 
> Re: [Java] Supporting Big Endian)
> 
> hi Micah,
> 
> I agree with your reasoning. If supporting BE in some languages (e.g.
> Java) is impractical due to performance regressions on LE platforms,
> then I don't think it's worth it. But if it can be handled at compile
> time or without runtime overhead, and tested / maintained properly on
> an ongoing basis, then it seems reasonable to me. It seems that the
> number of Arrow stakeholders will only increase from here so I would
> hope that there will be more people invested in helping maintain BE in
> the future.
> 
> - Wes
> 
> On Tue, Aug 25, 2020 at 11:33 PM Micah Kornfield 
>  wrote:
> >
> > I'm expanding the scope of this thread since it looks like work has 
also
> > started for making golang support BigEndian architectures.
> >
> > I think as a community we should come to a consensus on whether we 
want to
> > support Big Endian architectures in general.  I don't think it is a 
good
> > outcome if some implementations accept PRs for Big Endian fixes and 
some
> > don't.
> >
> > But maybe this is OK with others?
> >
> > My current opinion on the matter is that we should support it under the
> > following conditions:

Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-08-26 Thread Kazuaki Ishizaki
Hi Micah,
Thank you for expanding the scope to Big Endian support in Arrow in
general. I am glad to see this now that I am back from a one-week vacation.
I agree with this direction, since we have just seen the kickoff of BE
support in Go.


Hi Wes,
Thank you for your positive comments. We should carefully implement BE 
support without performance overhead.

Let me respond to Micah's comments and questions. 

> 1.  As long as there is CI in place to catch regressions (right now I 
think the CI is fairly unreliable?)
I agree. While Travis CI on s390x is unreliable, each platform can set up
its own CI script.

> 2.  No degradation in performance for little-endian architectures 
(verified by additional micro benchmarks)
Yes, we can do that. @kou suggested using the existing mechanism [1]. It
currently supports C++; we could expand it to other languages.

> 3.  Not a large amount of invasive code to distinguish between 
platforms.
Yes, I will describe the current holistic strategy for an
endian-independent implementation in another mail.


> Kazuaki Ishizaki I asked question previously, but could you give some 
data points around:
Sorry for the delay. I was waiting for comments during my one-week 
vacation.

> 1.  The current state of C++ support (how much code needed to change)?
Except for Parquet support, my understanding is that [2] is the last PR
needed to support big-endian intra-process and inter-process use in C++.

> 2.  How many more PRs you expect to need for Java (and approximate 
size)?
I have just submitted two PRs [5, 6] today to pass the current tests using
“mvn install”. The total size of the four PRs [3, 4, 5, 6] is around
200 lines. In addition, I will submit another PR (~500 lines, estimated)
to pass the integration test between different endians, which corresponds
to the C++ PR [2].
For Java CI, I submitted one draft PR [7] (~100 lines).


BTW, looking at some test cases, I think it would be good to increase test
coverage for both little-endian and big-endian by adding test cases (e.g.
increasing the types of a schema tested in flight-core, as in [8]). If
that is acceptable, we can submit more PRs regardless of endianness.

[1] https://github.com/apache/arrow/pull/7940#issuecomment-672690540
[2] https://github.com/apache/arrow/pull/7507
[3] https://github.com/apache/arrow/pull/7942
[4] https://github.com/apache/arrow/pull/7944
[5] https://github.com/apache/arrow/pull/8056
[6] https://github.com/apache/arrow/pull/8057
[7] https://github.com/apache/arrow/pull/7938
[8] https://github.com/apache/arrow/pull/7555

Best Regards,
Kazuaki Ishizaki

Wes McKinney  wrote on 2020/08/26 21:27:49:

> From: Wes McKinney 
> To: dev , Micah Kornfield 
> Cc: Fan Liya 
> Date: 2020/08/26 21:28
> Subject: [EXTERNAL] Re: [DISCUSS] Big Endian support in Arrow (was: 
> Re: [Java] Supporting Big Endian)
> 
> hi Micah,
> 
> I agree with your reasoning. If supporting BE in some languages (e.g.
> Java) is impractical due to performance regressions on LE platforms,
> then I don't think it's worth it. But if it can be handled at compile
> time or without runtime overhead, and tested / maintained properly on
> an ongoing basis, then it seems reasonable to me. It seems that the
> number of Arrow stakeholders will only increase from here so I would
> hope that there will be more people invested in helping maintain BE in
> the future.
> 
> - Wes
> 
> On Tue, Aug 25, 2020 at 11:33 PM Micah Kornfield 
>  wrote:
> >
> > I'm expanding the scope of this thread since it looks like work has 
also
> > started for making golang support BigEndian architectures.
> >
> > I think as a community we should come to a consensus on whether we 
want to
> > support Big Endian architectures in general.  I don't think it is a 
good
> > outcome if some implementations accept PRs for Big Endian fixes and 
some
> > don't.
> >
> > But maybe this is OK with others?
> >
> > My current opinion on the matter is that we should support it under 
the
> > following conditions:
> >
> > 1.  As long as there is CI in place to catch regressions (right now I 
think
> > the CI is fairly unreliable?)
> > 2.  No degradation in performance for little-endian architectures 
(verified
> > by additional micro benchmarks)
> > 3.  Not a large amount of invasive code to distinguish between 
platforms.
> >
> > Kazuaki Ishizaki I asked question previously, but could you give some 
data
> > points around:
> > 1.  The current state of C++ support (how much code needed to change)?
> > 2.  How many more PRs you expect to need for Java (and approximate 
size)?
> >
> > I think this would help myself and others in the decision making 
process.
> >
> > Thanks,
> > Micah
> >
> > On Tue, Aug 18, 2020 at 9:15 AM Micah Kornfield  wrote:

RE: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-08-26 Thread Kazuaki Ishizaki
Kazuaki Ishizaki, Ph.D., Senior Technical Staff Member (STSM), IBM 
Research - Tokyo
ACM Distinguished Member - Apache Spark committer - IBM Academy of 
Technology Member

Wes McKinney  wrote on 2020/08/26 21:27:49:

> From: Wes McKinney 
> To: dev , Micah Kornfield 
> Cc: Fan Liya 
> Date: 2020/08/26 21:28
> Subject: [EXTERNAL] Re: [DISCUSS] Big Endian support in Arrow (was: 
> Re: [Java] Supporting Big Endian)
> 
> hi Micah,
> 
> I agree with your reasoning. If supporting BE in some languages (e.g.
> Java) is impractical due to performance regressions on LE platforms,
> then I don't think it's worth it. But if it can be handled at compile
> time or without runtime overhead, and tested / maintained properly on
> an ongoing basis, then it seems reasonable to me. It seems that the
> number of Arrow stakeholders will only increase from here so I would
> hope that there will be more people invested in helping maintain BE in
> the future.
> 
> - Wes
> 
> On Tue, Aug 25, 2020 at 11:33 PM Micah Kornfield 
>  wrote:
> >
> > I'm expanding the scope of this thread since it looks like work has 
also
> > started for making golang support BigEndian architectures.
> >
> > I think as a community we should come to a consensus on whether we 
want to
> > support Big Endian architectures in general.  I don't think it is a 
good
> > outcome if some implementations accept PRs for Big Endian fixes and 
some
> > don't.
> >
> > But maybe this is OK with others?
> >
> > My current opinion on the matter is that we should support it under 
the
> > following conditions:
> >
> > 1.  As long as there is CI in place to catch regressions (right now I 
think
> > the CI is fairly unreliable?)
> > 2.  No degradation in performance for little-endian architectures 
(verified
> > by additional micro benchmarks)
> > 3.  Not a large amount of invasive code to distinguish between 
platforms.
> >
> > Kazuaki Ishizaki I asked question previously, but could you give some 
data
> > points around:
> > 1.  The current state of C++ support (how much code needed to change)?
> > 2.  How many more PRs you expect to need for Java (and approximate 
size)?
> >
> > I think this would help myself and others in the decision making 
process.
> >
> > Thanks,
> > Micah
> >
> > On Tue, Aug 18, 2020 at 9:15 AM Micah Kornfield 

> > wrote:
> >
> > > My thoughts on the points raised so far:
> > >
> > > * Does supporting Big Endian increase the reach of Arrow by a lot?
> > >
> > > Probably not a significant amount, but it does provide one more 
avenue of
> > > adoption.
> > >
> > > * Does it increase code complexity?
> > >
> > > Yes.  I agree this is a concern.  The PR in question did not seem 
too bad
> > > to me but this is subjective.  I think the remaining question is how 
many
> > > more places need to be fixed up in the code base and how invasive 
are the
> > > changes.  In C++ IIUC it turned out to be a relatively small number 
of
> > > places.
> > >
> > > Kazuaki Ishizaki have you been able to get the Java implementation 
working
> > > fully locally?  How many additional PRs will be needed and what do
> > > they look like (I think there already a few more in the queue)?
> > >
> > > * Will it introduce performance regressions?
> > >
> > > If done properly I suspect no, but I think if we continue with 
BigEndian
> > > support the places that need to be touched should have benchmarks 
added to
> > > confirm this (including for PRs already merged).
> > >
> > > Thanks,
> > > Micah
> > >
> > > On Sun, Aug 16, 2020 at 7:37 PM Fan Liya  
wrote:
> > >
> > >> Thank Kazuaki Ishizaki for working on this.
> > >> IMO, supporting the big-endian should be a large change, as in many
> > >> places of the code base, we have implicitly assumed the little-endian
> > >> platform (e.g.
> > >> https://github.com/apache/arrow/blob/master/java/memory/memory-core/src/main/java/org/apache/arrow/memory/util/ByteFunctionHelpers.java
> > >> ).
> > >> Supporting the big-endian platform may introduce branches in such places
> > >> (or virtual calls) which will affect the performance.
> > >> So it would be helpful to evaluate the performance impact.

Re: [DISCUSS] Big Endian support in Arrow (was: Re: [Java] Supporting Big Endian)

2020-08-26 Thread Wes McKinney
hi Micah,

I agree with your reasoning. If supporting BE in some languages (e.g.
Java) is impractical due to performance regressions on LE platforms,
then I don't think it's worth it. But if it can be handled at compile
time or without runtime overhead, and tested / maintained properly on
an ongoing basis, then it seems reasonable to me. It seems that the
number of Arrow stakeholders will only increase from here so I would
hope that there will be more people invested in helping maintain BE in
the future.

- Wes

On Tue, Aug 25, 2020 at 11:33 PM Micah Kornfield  wrote:
>
> I'm expanding the scope of this thread since it looks like work has also
> started for making golang support BigEndian architectures.
>
> I think as a community we should come to a consensus on whether we want to
> support Big Endian architectures in general.  I don't think it is a good
> outcome if some implementations accept PRs for Big Endian fixes and some
> don't.
>
> But maybe this is OK with others?
>
> My current opinion on the matter is that we should support it under the
> following conditions:
>
> 1.  As long as there is CI in place to catch regressions (right now I think
> the CI is fairly unreliable?)
> 2.  No degradation in performance for little-endian architectures (verified
> by additional micro benchmarks)
> 3.  Not a large amount of invasive code to distinguish between platforms.
>
> Kazuaki Ishizaki I asked question previously, but could you give some data
> points around:
> 1.  The current state of C++ support (how much code needed to change)?
> 2.  How many more PRs you expect to need for Java (and approximate size)?
>
> I think this would help myself and others in the decision making process.
>
> Thanks,
> Micah
>
> On Tue, Aug 18, 2020 at 9:15 AM Micah Kornfield 
> wrote:
>
> > My thoughts on the points raised so far:
> >
> > * Does supporting Big Endian increase the reach of Arrow by a lot?
> >
> > Probably not a significant amount, but it does provide one more avenue of
> > adoption.
> >
> > * Does it increase code complexity?
> >
> > Yes.  I agree this is a concern.  The PR in question did not seem too bad
> > to me but this is subjective.  I think the remaining question is how many
> > more places need to be fixed up in the code base and how invasive are the
> > changes.  In C++ IIUC it turned out to be a relatively small number of
> > places.
> >
> > Kazuaki Ishizaki have you been able to get the Java implementation working
> > fully locally?  How many additional PRs will be needed and what do
> > they look like (I think there already a few more in the queue)?
> >
> > * Will it introduce performance regressions?
> >
> > If done properly I suspect no, but I think if we continue with BigEndian
> > support the places that need to be touched should have benchmarks added to
> > confirm this (including for PRs already merged).
> >
> > Thanks,
> > Micah
> >
> > On Sun, Aug 16, 2020 at 7:37 PM Fan Liya  wrote:
> >
> >> Thank Kazuaki Ishizaki for working on this.
> >> IMO, supporting the big-endian should be a large change, as in many
> >> places of the code base, we have implicitly assumed the little-endian
> >> platform (e.g.
> >> https://github.com/apache/arrow/blob/master/java/memory/memory-core/src/main/java/org/apache/arrow/memory/util/ByteFunctionHelpers.java
> >> ).
> >> Supporting the big-endian platform may introduce branches in such places
> >> (or virtual calls) which will affect the performance.
> >> So it would be helpful to evaluate the performance impact.
> >>
> >> Best,
> >> Liya Fan
> >>
> >>
> >> On Sat, Aug 15, 2020 at 7:54 AM Jacques Nadeau 
> >> wrote:
> >>
> >>> Hey Micah, thanks for starting the discussion.
> >>>
> >>> I just skimmed that thread and it isn't entirely clear that there was a
> >>> conclusion that the overhead was worth it. I think everybody agrees that
> >>> it
> >>> would be nice to have the code work on both platforms. On the flipside,
> >>> the
> >>> code noise for a rare case makes the cost-benefit questionable.
> >>>
> >>> In the Java code, we wrote the code to explicitly disallow big endian
> >>> platforms and put preconditions checks in. I definitely think if we want
> >>> to
> >>> support this, it should be done holistically across the code with
> >>> appropriate test plan (both functional and perf).
> >>>
> >>> To me, the question is really about how many use cases are blocked by
> >>> this.
> >>> I'm not sure I've heard anyone say that the limiting factor to leveraging
> >>> Java Arrow was the block on endianess. Keep in mind that until very
> >>> recently, using any Arrow Java code would throw a preconditions check
> >>> before you could even get started on big-endian and I don't think we've
> >>> seen a bunch of messages on that exception. Adding if conditions
> >>> throughout
> >>> the codebase like this patch: [1] isn't exactly awesome and it can also
> >>> risk performance impacts depending on how carefully it is done.
> >>>
> >>> If there isn't a preponderance of e