On Mon, 10 Sep 2018 12:35:22 -0400
Wes McKinney <wesmck...@gmail.com> wrote:
> Yes, that's the error. Any patch that has a base prior to the
> parquet-cpp merge has to be rebased. I'm not sure what I did wrong (if
> anything) since I effectively cherry-picked 318 commits into master,
> but for some reason it's fouled up the part of the merge script that
> adds the squashed commit messages.

Apparently it's the "git merge" invocation that fails to find the right
ancestor.

Regards

Antoine.


> On Mon, Sep 10, 2018 at 12:14 PM Antoine Pitrou <solip...@pitrou.net> wrote:
> >
> >
> > Hi Wes,
> >
> > I've just got the following error trying to merge a PR after rebasing,
> > is that what you meant?
> >
> >
> > $ ./dev/merge_arrow_pr.py
> > ARROW_HOME = /home/antoine/arrow
> > PROJECT_NAME = arrow
> > Which pull request would you like to merge? (e.g. 34): 2492
> >
> > === Pull Request #2492 ===
> > title   ARROW-3170: [C++] Experimental readahead spooler
> > source  pitrou/ARROW-501-readahead
> > target  master
> > url     https://api.github.com/repos/apache/arrow/pulls/2492
> >
> > Proceed with merging pull request #2492? (y/n): y
> > From https://github.com/apache/arrow
> >  * [new ref]           refs/pull/2492/head -> PR_TOOL_MERGE_PR_2492
> > From github.com:apache/arrow
> >  * [new branch]        master     -> PR_TOOL_MERGE_PR_2492_MASTER
> >    498215fb..a42d4bf1  master     -> apache/master
> > Switched to branch 'PR_TOOL_MERGE_PR_2492_MASTER'
> > Automatic merge went well; stopped before committing as requested
> > Traceback (most recent call last):
> >   File "./dev/merge_arrow_pr.py", line 375, in <module>
> >     merge_hash = merge_pr(pr_num, target_ref)
> >   File "./dev/merge_arrow_pr.py", line 199, in merge_pr
> >     merge_message_flags)
> >   File "./dev/merge_arrow_pr.py", line 101, in run_cmd
> >     output = subprocess.check_output(cmd)
> >   File "/home/antoine/miniconda3/envs/pyarrow/lib/python3.7/subprocess.py", line 376, in check_output
> >     **kwargs).stdout
> >   File "/home/antoine/miniconda3/envs/pyarrow/lib/python3.7/subprocess.py", line 453, in run
> >     with Popen(*popenargs, **kwargs) as process:
> >   File "/home/antoine/miniconda3/envs/pyarrow/lib/python3.7/subprocess.py", line 756, in __init__
> >     restore_signals, start_new_session)
> >   File "/home/antoine/miniconda3/envs/pyarrow/lib/python3.7/subprocess.py", line 1499, in _execute_child
> >     raise child_exception_type(errno_num, err_msg, err_filename)
> > OSError: [Errno 7] Argument list too long: 'git'
> >
> >
> > Regards
> >
> > Antoine.
> >
> >
> >
> > On Sat, 8 Sep 2018 13:08:56 -0400
> > Wes McKinney <wesmck...@gmail.com> wrote:  
> > > I'm on plane wifi right now so it's hard for me to investigate too
> > > much, but for the time being any outstanding patches should be rebased
> > > before running the merge script.
> > >
> > > Note that any committer can rebase a contributor's patch if the
> > > contributor has not disallowed it. Please post here if you have any
> > > questions or issues
> > > On Sat, Sep 8, 2018 at 12:50 PM Wes McKinney <wesmck...@gmail.com> wrote: 
> > >  
> > > >
> > > > There's some strangeness with our merge script after the parquet-cpp
> > > > codebase graft -- I just reverted the most recent commit and am taking
> > > > a look  
> > >  
> >
> 
