revisiting this thread...

i pushed a small change to some R test code (
https://github.com/apache/spark/pull/21864), and the appveyor build timed
out after 90 minutes:

https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark/build/2440-master

to be honest, i don't have a lot of time to debug *why* this happened, or
how to go about triggering another build, but at the very least we should
up the timeout.
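
as an aside, for anyone following the rebase-vs-merge discussion further down the thread: a rebase replays your commits on top of master without creating a merge commit (the merge commit being what can re-trigger the appveyor build when it touches R). a quick self-contained sketch on a throwaway repo (all names here are made up for the demo):

```shell
# throwaway demo repo -- repo/branch names are made up for illustration
set -e
cd "$(mktemp -d)"
git init -q -b master demo && cd demo
git config user.email dev@example.com
git config user.name dev

echo base > file.txt && git add file.txt && git commit -qm "base"

# PR branch with one commit
git checkout -qb my-pr-branch
echo change > pr.txt && git add pr.txt && git commit -qm "pr change"

# meanwhile, master moves ahead
git checkout -q master
echo more > upstream.txt && git add upstream.txt && git commit -qm "upstream change"

# rebase the PR branch: replays "pr change" on top of master,
# leaving a linear history with zero merge commits
git checkout -q my-pr-branch
git rebase -q master
git rev-list --count --merges HEAD   # prints 0
```

on a real PR branch the equivalent would be roughly `git fetch upstream && git rebase upstream/master` followed by a `git push --force-with-lease`, whereas `git merge upstream/master` would add the extra merge commit.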

On Sun, May 13, 2018 at 7:38 PM, Hyukjin Kwon <gurwls...@gmail.com> wrote:

> Yup, I am not saying it's required, but it might be better since that's
> what's written in the guide, and at least in my experience rebasing is more common.
> Also, merge commits usually trigger the AppVeyor build if they include
> changes in R.
> It's fine to merge the commits, but rebasing is better to save AppVeyor
> resources and prevent such confusion.
>
>
> 2018-05-14 10:05 GMT+08:00 Holden Karau <hol...@pigscanfly.ca>:
>
>> On Sun, May 13, 2018 at 9:43 PM Hyukjin Kwon <gurwls...@gmail.com> wrote:
>>
>>> From a very quick look, I believe that's just an occasional network issue
>>> in AppVeyor. For example, in this case:
>>>   Downloading: https://repo.maven.apache.org/
>>> maven2/org/scala-lang/scala-compiler/2.11.8/scala-compiler-2.11.8.jar
>>> took 26-ish minutes, and the subsequent jar downloads also seem to have
>>> taken much longer than usual.
>>>
>>> FYI, the build usually takes 35-40 minutes and the R tests 25-30 minutes,
>>> so a run usually ends up at around 1 hour 5 minutes.
>>> I will take another look at reducing the time if the usual run time
>>> approaches 1 hour 30 minutes (the current AppVeyor limit).
>>> I did this a few times before - https://github.com/apache/spark/pull/19722
>>> and https://github.com/apache/spark/pull/19816.
>>>
>>> The timeout has already been increased from 1 hour to 1 hour 30 minutes,
>>> and they don't seem willing to increase it any further.
>>> I contacted them a few times and manually requested this.
>>>
>>> Ideally, I believe we should usually just rebase rather than merge the
>>> commits in any case, as mentioned in the contribution guide.
>>>
>> I don’t recall this being something we actually go that far in
>> encouraging. The guide says rebasing is one of the ways folks can keep
>> their PRs up to date, but no actual preference is stated. I tend to see
>> PRs from different folks doing either rebases or merges, since we squash
>> commits anyway.
>>
>> I know that for some developers, keeping their branch up to date with
>> merge commits tends to be less effort, and provided the diff is still
>> clear and the resulting merge is clean, I don’t see an issue.
>>
>>> The test failure in the PR should be ignorable if it's not directly
>>> related to SparkR.
>>>
>>>
>>> Thanks.
>>>
>>>
>>>
>>> 2018-05-14 8:45 GMT+08:00 Ilan Filonenko <i...@cornell.edu>:
>>>
>>>> Hi dev,
>>>>
>>>> I recently updated an ongoing PR
>>>> [https://github.com/apache/spark/pull/21092] with a merge that included
>>>> a lot of commits from master, and I got the following error:
>>>>
>>>> *continuous-integration/appveyor/pr *— AppVeyor build failed
>>>>
>>>> due to:
>>>>
>>>> *Build execution time has reached the maximum allowed time for your
>>>> plan (90 minutes).*
>>>>
>>>> seen here:
>>>> https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark/build/2300-master
>>>>
>>>> As this is the first time I am seeing this, I am wondering whether it is
>>>> related to the large merge and, if so, whether the timeout can be
>>>> increased.
>>>>
>>>> Thanks!
>>>>
>>>> Best,
>>>> Ilan Filonenko
>>>>
>>>
>>> --
>> Twitter: https://twitter.com/holdenkarau
>>
>
>


-- 
Shane Knapp
UC Berkeley EECS Research / RISELab Staff Technical Lead
https://rise.cs.berkeley.edu
