Hello Jean,

Nice work!

On 28/12/2020 at 23:21, Jean Helou wrote:
> Hello again jamers!
> 
> It's time for a new irregular report on the CI effort on Apache infra 🎅!
> 
> Let's start with the good news: today I finally reached a successful build
> https://builds.apache.org/blue/organizations/jenkins/james%2FApacheJames/detail/PR-268/45/pipeline
> (the first fully successful build on Apache infra)

\o/

> 
> You can see in the pipeline that, as discussed before, the testing phase
> is split in 2 parts: stable tests vs unstable tests. A failure in the
> first phase will fail the build; failures in the unstable phase will not
> be considered a build failure (the failed tests should still be collected
> in the reports; however, the recent failures were mostly memory related,
> in which case the surefire report is not generated :( )
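
As a note for readers following along: a minimal sketch of what such a
split can look like in a declarative Jenkinsfile (the stage names, Maven
profile and report path below are my illustration, not the actual PR-268
pipeline):

    stage('Stable Tests') {
      steps {
        // Any failure here fails the build.
        sh 'mvn -B test'
      }
    }
    stage('Unstable Tests') {
      steps {
        // catchError keeps the build green but marks the stage unstable.
        catchError(buildResult: 'SUCCESS', stageResult: 'UNSTABLE') {
          sh 'mvn -B test -Punstable-tests'
        }
      }
      post {
        always {
          // allowEmptyResults copes with OOM-killed forks that leave
          // no surefire report behind.
          junit testResults: '**/target/surefire-reports/*.xml',
                allowEmptyResults: true
        }
      }
    }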
> 
> Over the last 2 weeks and 45 build attempts, I tagged all the failing
> tests as "Unstable". I also increased the heap of the forked surefire JVM
> to resolve some of the OutOfMemoryError failures.
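
For those curious what the tagging looks like: a sketch assuming a JUnit 5
tag along these lines (the tag name and the test class are illustrative,
not the exact code of PR-268):

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    @Tag("unstable") // kept out of the build-failing stable phase
    class SomeFlakyIntegrationTest {
        @Test
        void shouldSurviveMemoryPressure() {
            // flaky on the shared CI agents
        }
    }

Surefire can then filter on the tag (its groups/excludedGroups settings
map to JUnit 5 tags), and the heap bump presumably goes through its
argLine configuration.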
> 
> At this stage I would really like to see this merged (if only to be able to
> evaluate dangerous changes such as
> https://github.com/apache/james-project/pull/282)

+1. I think this deserves a separate thread, however. I will start it now.

> 
> You can look at https://github.com/apache/james-project/pull/268 to see
> which tests have been marked as Unstable. It was rebased on master this
> morning and I intend to clean up the history tonight.

Will do, maybe tomorrow. On principle, it is a yes from my side.

As someone operating another CI, I want to run even the unstable tests on
every build. Is some adaptation needed to do this?
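
I assume it comes down to surefire tag filtering: if the stable run
excludes an "unstable" tag via excludedGroups, something like the
following should let me run them every time (the tag and property names
are my guess at the PR's conventions):

    # run only the tests tagged unstable
    mvn -B test -Dgroups=unstable
    # or run everything by clearing the exclusion
    mvn -B test -DexcludedGroups=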

> I also removed some invasive logging from the webadmin test code (it used
> to log every single HTTP request made in the tests). The full log is
> still a bit over 30MB...

Nice enhancement; it was likely some long-forgotten debug statements.
> 
> Best regards,
> Jean
> 
> On Fri, Dec 11, 2020 at 12:25 PM Jean Helou <jean.he...@gmail.com> wrote:
> 
>> I conclude that my effort to get CI working is cursed by the gods,
>> remember:
>>
>>>> {"message":"No such image: quay.io/testcontainers/ryuk:0.2.3"}
>>>
>>> which repeats for most test failures; this seems to be common enough
>>> that there is a Stack Overflow question for it:
>>>
>>> https://stackoverflow.com/questions/61887363/testcontainers-cant-pull-ryuk-image-quay-io-is-not-reachable
>>> I have attempted to upgrade Testcontainers to 1.15.0 (it pulls ryuk
>>> from Docker Hub instead of quay.io since 1.14.3, and we were using
>>> 1.12); hopefully this will help :)
>>>
>>
>> A Docker API change broke most Testcontainers versions, which won't be
>> able to pull images that are not already available locally!
>> https://github.com/testcontainers/testcontainers-java/issues/3574
>>> yes, this Docker API change applies to most Testcontainers versions.
>>
>> They should release a 1.15.1 to resolve the issue shortly. I have tried
>> explicitly pulling the image in the build steps before running the
>> tests, but sadly it doesn't seem to have helped :(
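
Side note: while waiting for 1.15.1, one stopgap I know of is to disable
ryuk entirely on agents where the CI infrastructure already reaps leftover
containers; Testcontainers honors this environment variable (a workaround
for the pull failure, not a fix):

    # on the CI agents; safe only if containers are cleaned up by other means
    TESTCONTAINERS_RYUK_DISABLED=true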
>>
>> jean
>>
>>>
> 
