On Fri, 1 Nov 2024 at 15:31, Andi Kleen wrote:
>
> On Fri, Nov 01, 2024 at 06:57:21AM +, ci_not...@linaro.org wrote:
> > Dear contributor, our automatic CI has detected problems related to your
> > patch(es). Please find some details below. If you have any questions,
> > please follow up o
Hi Peter,
You can ignore this notification: we had a temporary hack in our scripts to
apply that patch before it was merged, hence the conflict after your merge.
We have reverted our hack, so builds should be ok again.
Thanks,
Christophe
On Fri, 1 Nov 2024 at 07:47, wrote:
> Dear contribut
On Tue, 29 Oct 2024 at 19:25, Sam James wrote:
>
> Christophe Lyon writes:
>
> > On Sat, 26 Oct 2024 at 14:17, Sam James via Gcc-regression
> > wrote:
> >>
> >> ci_not...@linaro.org writes:
> >>
> >> > Dear contributor, our automa
On Sat, 26 Oct 2024 at 14:17, Sam James via Gcc-regression
wrote:
>
> ci_not...@linaro.org writes:
>
> > Dear contributor, our automatic CI has detected problems related to
> > your patch(es). Please find some details below. If you have any
> > questions, please follow up on linaro-toolchain@lis
Hi!
On Mon, 28 Oct 2024 at 08:56, Li, Pan2 via Gcc-regression
wrote:
>
> I have a try with below command but get error like "cc1: error:
> '-mfloat-abi=hard': selected architecture lacks an FPU".
> Linux ubuntu-arm 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC
> 2018 aarch64 aarch64
On Thu, 17 Oct 2024 at 06:28, Sam James via Gcc-regression
wrote:
>
> This is https://gcc.gnu.org/PR117177 which has a patch posted by Jakub
> already at
> https://inbox.sourceware.org/gcc-patches/ZxArjATvc%2FnI6YiO@tucnak/.
Indeed, thanks for the pointer!
Christophe
On Thu, 10 Oct 2024 at 12:15, Jonathan Wakely via Gcc-regression
wrote:
>
> On Thu, 10 Oct 2024 at 06:33, wrote:
> >
> > Dear contributor, our automatic CI has detected problems related to your
> > patch(es). Please find some details below. If you have any questions,
> > please follow up on l
On Mon, 30 Sept 2024 at 10:49, Jonathan Wakely via Gcc-regression
wrote:
>
> On Mon, 30 Sept 2024 at 07:22, wrote:
> >
> > Dear contributor, our automatic CI has detected problems related to your
> > patch(es). Please find some details below. If you have any questions,
> > please follow up on
Hi,
Sorry for the delay
On Tue, 24 Sept 2024 at 15:17, Jason Merrill wrote:
>
> On 9/23/24 2:08 AM, ci_not...@linaro.org wrote:
> > Dear contributor, our automatic CI has detected problems related to your
> > patch(es). Please find some details below. If you have any questions,
> > please fo
On Fri, 27 Sept 2024 at 11:42, Sam James via Gcc-regression
wrote:
>
> ci_not...@linaro.org writes:
>
> > Dear contributor, our automatic CI has detected problems related to
> > your patch(es). Please find some details below. If you have any
> > questions, please follow up on linaro-toolchain@li
On Tue, 24 Sept 2024 at 13:56, Christophe Lyon
wrote:
>
> On Mon, 23 Sept 2024 at 19:54, David Malcolm wrote:
> >
> > On Mon, 2024-09-23 at 15:18 +0200, Christophe Lyon wrote:
> > > Hi David,
> > >
> > > On Sun, 22 Sept 2024 at 00:39, David Malcolm
On Mon, 23 Sept 2024 at 19:54, David Malcolm wrote:
>
> On Mon, 2024-09-23 at 15:18 +0200, Christophe Lyon wrote:
> > Hi David,
> >
> > On Sun, 22 Sept 2024 at 00:39, David Malcolm
> > wrote:
> > >
> > > On Sat, 2024-09-21 at 04:30 +, ci_not...
On Mon, 23 Sept 2024 at 15:44, Guinevere Larsen wrote:
>
> On 9/23/24 10:33 AM, Christophe Lyon wrote:
> > Hi Guinevere,
> >
> > On Mon, 23 Sept 2024 at 14:05, Guinevere Larsen
> > wrote:
> >> I think some issue has happened in the CI. Both this and 2 pat
Hi Guinevere,
On Mon, 23 Sept 2024 at 14:05, Guinevere Larsen wrote:
>
> I think some issue has happened in the CI. Both this and 2 patches I've
> sent to the mailing list (one that changes no code, only the
> SECURITY.txt file) say that I've introduced regressions, yet the
> relevant test only h
Hi David,
On Sun, 22 Sept 2024 at 00:39, David Malcolm wrote:
>
> On Sat, 2024-09-21 at 04:30 +, ci_not...@linaro.org wrote:
> > Dear contributor, our automatic CI has detected problems related to
> > your patch(es). Please find some details below. If you have any
> > questions, please foll
On Fri, 13 Sept 2024 at 15:55, Jason Merrill wrote:
>
> On 9/12/24 9:13 PM, ci_not...@linaro.org wrote:
> > Dear contributor, our automatic CI has detected problems related to your
> > patch(es). Please find some details below. If you have any questions,
> > please follow up on linaro-toolchai
On Wed, 18 Sept 2024 at 02:05, Alexandre Oliva via Gcc-regression
wrote:
>
> On Sep 17, 2024, ci_not...@linaro.org wrote:
>
> > regressions.sum:
> > FAIL: c-c++-common/analyzer/out-of-bounds-diagram-8.c -std=c++
> > expected multiline pattern lines 20-35
>
> > improvements.sum:
> > FAIL: c-c++-c
Hi,
On Fri, 20 Sept 2024 at 15:24, wrote:
>
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below. If you have any questions,
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's
> #linaro-tcwg channel
On Fri, 20 Sept 2024 at 15:34, Mark Wielaard wrote:
>
> Hi Christophe,
>
> On Fri, 2024-09-20 at 15:30 +0200, Christophe Lyon wrote:
> > Looks like our build queue is full (well more than full ~60 builds of
> > this type pending), bisecting regressions introduced before I
On Fri, 20 Sept 2024 at 14:47, Mark Wielaard wrote:
>
> Hi,
>
> I thought this was resolved, but...
>
me too :-)
> On Thu, 2024-09-12 at 21:47 +0200, Mark Wielaard wrote:
> > On Thu, Sep 12, 2024 at 09:25:08AM +0200, Christophe Lyon wrote:
> > > Right, sorry fo
Hi Jason,
On Thu, 12 Sept 2024 at 00:15, wrote:
>
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below. If you have any questions,
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's
> #linaro-tcwg c
Right, sorry for the breakage.
This should now be fixed by
https://sourceware.org/pipermail/binutils/2024-September/136743.html
Christophe
On Thu, 12 Sept 2024 at 08:33, Laurent Alfonsi
wrote:
>
> Yes, I reported to Christophe yesterday, he confirmed this comes from his
> linker commit and he's
Hi Jakub,
On Tue, 10 Sept 2024 at 05:17, wrote:
>
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below. If you have any questions,
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's
> #linaro-tcwg
On Sun, 4 Aug 2024 at 00:25, Sam James via Gcc-regression
wrote:
>
> ci_not...@linaro.org writes:
>
> > Dear contributor, our automatic CI has detected problems related to
> > your patch(es). Please find some details below. If you have any
> > questions, please follow up on linaro-toolchain@list
Hi Harald,
On Mon, 6 May 2024 at 21:02, wrote:
>
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below. If you have any questions,
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's
> #linaro-tcwg c
Hi Tom.
As you may have noticed, your patch below caused regressions in the
libstdc++ testsuite on aarch64:
FAIL: libstdc++-prettyprinters/debug.cc print redirected
FAIL: libstdc++-prettyprinters/simple.cc print redirected
FAIL: libstdc++-prettyprinters/simple11.cc print redirected
For instance,
Hi Pedro,
As you may have noticed, this patch caused new failures on arm.
Are you working on a fix?
Thanks,
Christophe
On Sat, 13 Apr 2024 at 16:59, wrote:
>
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below. If you have a
Hi!
On Mon, 15 Apr 2024 at 15:39, Metzger, Markus T
wrote:
>
> Hello,
>
> > | 4 patches in gdb
> > | Patchwork URL: https://patchwork.sourceware.org/patch/88278
> > | 343a2568d2c gdb, infrun: fix multi-threaded reverse stepping
> > | a4cfc3d32a8 gdb, infrun, record: move no-history notificati
On Mon, 15 Apr 2024 at 12:15, wrote:
>
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below. If you have any questions,
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's
> #linaro-tcwg channel, or p
On Tue, 5 Mar 2024 at 21:24, wrote:
>
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below. If you have any questions,
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's
> #linaro-tcwg channel, or pi
Hi Patrick,
This report can be considered a false alarm: the errors were
already present in the baseline, but the ICE line number changed since
your patch modified the code in the file where the ICE occurs.
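To illustrate why the scripts get confused, here is a minimal sketch in
Python, assuming the comparison boils down to a plain set difference of
FAIL lines (the real scripts are more involved; the test name and the
second line number below are only illustrative):

# Simplified sketch, not the actual CI scripts: comparing .sum FAIL lines
# as raw strings makes an ICE whose source line moved look like one
# failure disappearing and a new one appearing.
import re

baseline = {"FAIL: g++.dg/modules/xtreme-header-1_a.H -std=c++2b "
            "(internal compiler error: in core_vals, at cp/module.cc:6110)"}
after    = {"FAIL: g++.dg/modules/xtreme-header-1_a.H -std=c++2b "
            "(internal compiler error: in core_vals, at cp/module.cc:6118)"}

print(after - baseline)  # naive diff: one "new" FAIL, hence the false alarm

def normalize(line):
    # Strip the source line number from ICE messages before comparing.
    return re.sub(r"(at \S+\.cc):\d+", r"\1", line)

print({normalize(l) for l in after} - {normalize(l) for l in baseline})  # empty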
That being said, I've noticed another report saying that your patch
broke bootstrap on
On Mon, 26 Feb 2024 at 10:41, Jan Beulich wrote:
>
> On 26.02.2024 10:08, Christophe Lyon wrote:
> > On Mon, 26 Feb 2024 at 09:05, Jan Beulich wrote:
> >> On 23.02.2024 15:24, ci_not...@linaro.org wrote:
> >>> Dear contributor, our automatic CI has detected prob
Hi Jan,
On Mon, 26 Feb 2024 at 09:05, Jan Beulich wrote:
>
> On 23.02.2024 15:24, ci_not...@linaro.org wrote:
> > Dear contributor, our automatic CI has detected problems related to your
> > patch(es). Please find some details below. If you have any questions,
> > please follow up on linaro-t
Hi Stephan,
Sorry, this clearly looks like a false alarm.
We have enabled maintainer mode at configure time and it seems to have
unexpected consequences.
We've disabled it again, and will investigate what happened.
Sorry for the inconvenience.
Thanks,
Christophe
On Mon, 12 Feb 2024 at 14:31, w
Hi,
I guess this is a false alarm since the error message was
FAIL: g++.dg/modules/xtreme-header-1_a.H -std=c++2b (internal compiler
error: in core_vals, at cp/module.cc:6110)
and is now:
FAIL: g++.dg/modules/xtreme-header-1_a.H -std=c++2b (internal compiler
error: in core_vals, at cp/module.cc:61
Hi David,
As you have probably guessed, this is a false alarm: the testcases you
updated were already failing before your patch, but it changed the
line numbers, thus making the scripts think a failure disappeared and
a new one appeared.
Thanks,
Christophe
On Mon, 8 Jan 2024 at 01:15, wrote:
>
Hi Jakub,
Of course the CI is confused and reports regressions: after your patch
there are new "FAIL" results, since you fixed the "ERROR" cases.
It sees new "FAIL" entries and interprets them as regressions.
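To spell it out, a minimal sketch in Python, assuming the check is
simply "any FAIL line not present in the baseline" (the test name below
is made up for illustration):

# Simplified sketch, not the actual CI scripts: a test that previously
# ended in ERROR now runs and FAILs, so a naive "new FAIL" check flags
# it as a regression even though the patch is an improvement.
baseline = ["ERROR: gcc.dg/example-1.c: unknown dg option"]
after    = ["FAIL: gcc.dg/example-1.c execution test"]

new_fails = [line for line in after
             if line.startswith("FAIL:") and line not in baseline]
print(new_fails)  # non-empty, hence reported as a regression (false alarm)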
Thanks,
Christophe
On Tue, 12 Dec 2023 at 17:17, wrote:
>
> Dear contributor, our au
Hi!
On Fri, 24 Nov 2023 at 11:41, wrote:
>
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below. If you have any questions,
> please follow up on linaro-toolchain@lists.linaro.org mailing list, Libera's
> #linaro-tcwg channel,
Sorry, there was a temporary breakage in our CI scripts; you can
ignore this bogus report.
On Tue, 24 Oct 2023 at 18:41, wrote:
>
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below. If you have any questions,
> please follow
Sorry, there was a temporary breakage in our CI scripts; you can
ignore this bogus report.
On Tue, 24 Oct 2023 at 18:40, wrote:
>
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below. If you have any questions,
> please follow
Sorry, there was a temporary breakage in our CI scripts; you can
ignore this bogus report.
On Tue, 24 Oct 2023 at 18:38, wrote:
>
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below. If you have any questions,
> please follow
Sorry, there was a temporary breakage in our CI scripts; you can
ignore this bogus report.
On Tue, 24 Oct 2023 at 18:35, wrote:
>
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below. If you have any questions,
> please follow
Sorry, there was a temporary breakage in our CI scripts; you can
ignore this bogus report.
On Tue, 24 Oct 2023 at 18:25, wrote:
>
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below. If you have any questions,
> please follow
Hi,
The error reported below was in fact caused by a bug in these tests, which
has now been fixed.
Sorry for the false alarm.
Thanks
On Tue, 26 Sept 2023 at 16:42, wrote:
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below.
Hi,
The error reported below was in fact caused by a bug in these tests, which
has now been fixed.
Sorry for the false alarm.
Thanks
On Tue, 26 Sept 2023 at 16:38, wrote:
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below.
Hi,
The error reported below was in fact caused by a bug in these tests, which
has now been fixed.
Sorry for the false alarm.
Thanks
On Tue, 26 Sept 2023 at 16:38, wrote:
> Dear contributor, our automatic CI has detected problems related to your
> patch(es). Please find some details below.
Hi Jonathan,
Thanks for the heads up.
We do run contrib/gcc_update --touch after applying patches, and before
starting the build, but I realize it doesn't help in the case of
bits/version.h.
It looks like we should run make update-version?
Is that documented somewhere? (I'm wondering what's the l
== Progress ==
* GCC
- MVE/vectorization: committed patches for vec_pack / vec_unpack
- handling feedback on patch for PR 100757
* GCC upstream validation:
- reported a couple of regressions
== Next ==
Now leaving Linaro, hopefully I can continue to work on:
* MVE auto-vectorization/intrinsics im
== Progress ==
* GCC
- MVE/vectorization: committed patches for vhadd/vrhadd and vclz
- handling feedback on patch for vec_pack / vec_unpack
- PR 100757
* GCC upstream validation:
- reported a couple of regressions
== Next ==
* MVE auto-vectorization/intrinsics improvements
* GCC/cortex-M testing
== Progress ==
* GCC upstream validation:
- reported a couple of regressions
* GCC
- MVE/vectorization: committed patches for vans
- submitted patch for vhadd/vrhadd
- WIP on vclz / vec_pack / vec_unpack
- PR 100757
== Next ==
* MVE auto-vectorization/intrinsics improvements
* GCC/cortex-M testin
== Progress ==
* GCC upstream validation:
- reported a couple of regressions
* GCC
- MVE/vectorization: committed patches for vld2/vst2, vld4/st4, vaddv
- WIP on vhadd/vrhadd
== Next ==
* MVE auto-vectorization/intrinsics improvements
* GCC/cortex-M testing improvements & fixes
* GDB/cortex-M
== Progress ==
* GCC upstream validation:
- reported a couple of regressions
* GCC
- MVE/vectorization: committed patches for vcmp, waiting for
feedback on the remaining patches for vld2/vst2, vld4/st4
- started work on vaddv support
- committed a few testsuite improvement patches
- committed patc
Short week (2.5 days off)
== Progress ==
* GCC upstream validation:
- discussing update of the list of configs
* GCC
- MVE/vectorization: committing cleanup patches for vcmp, waiting for
feedback on the remaining patches for vcmp, vld2/vst2, vld4/st4
* Misc
- scripts patch reviews
- looking at g
== Progress ==
* GCC upstream validation:
- Reported a few regressions
- discussing update of the list of configs
- tried qemu-6.0, issue with hwasan testing on aarch64
* GCC
- MVE/vectorization: waiting for feedback on patches for vcmp,
vld2/vst2, vld4/st4
* Misc
- scripts patch reviews
- lookin
== Progress ==
* GCC upstream validation:
- Reported a few regressions
* GCC
- committed cleanup patches
- sent a few testsuite improvement patches
- MVE/vectorization: Sent patches for vcmp, vld2/vst2, vld4/st4
* Misc
- scripts patch reviews
== Next ==
* MVE auto-vectorization/intrinsics improv
== Progress ==
* GCC upstream validation:
- Reported a few regressions
* GCC
- committed further fix for testcase for PR96770
- sent a few testsuite improvement patches
- resumed work on MVE/auto-vectorization. Added support for vcmp.f16.
Checking fp16 support in previous patches.
* Misc
- script
== Progress ==
* GCC upstream validation:
- Reported a few regressions
- Reduced build frequency on release branches, now same as trunk:
daily bump and arm/aarch64 "interesting" commits
* GCC
- pinged further fix for testcase for PR96770
- preparing cortex-m55 validation setup
- looking at cmse te
== Progress ==
* GCC upstream validation:
- No regression to report this week. Issues on gcc-9 and gcc-10
release branches had already been reported by other people.
* GCC
- pinged further fix for testcase for PR96770
- Looking at failures for cortex-M, only found testisms so far
* Misc
- Fixed b
== Progress ==
* GCC upstream validation:
- Reported minor testsuite issues (e.g. failures with -mabi=ilp32 on aarch64)
- re-started looking at validation for cortex-m55, realized that qemu
does not support MVE yet
* GCC
- posted further fix for testcase for PR96770
- fixed PR 99786
- committed fix
== Progress ==
* GCC upstream validation:
- No regression to report this week
* GCC
- testsuite cleanup: committed a patch series
- fixed PR 99727
- filed / discussed PR 99773
- WIP PR 99786
* Misc
== Next ==
* MVE auto-vectorization/intrinsics improvements
* GCC/cortex-M testing improvements &
== Progress ==
* GCC upstream validation:
- Small improvement to pre-commit testing scripts to allow running a
subset of the tests (and thus save a lot of time)
* GCC
- MVE autovectorization:
- vcmp support of FP types OK.
- testsuite cleanup: looking at current failures, only found issues
with
== Progress ==
* GCC upstream validation:
- Bisected last week's regressions, but they had already been reported/fixed.
* GCC
- MVE autovectorization:
- vcmp support mostly complete. Support of FP types looks OK, though
trickier than expected.
- vld2/vst2 and vld4/vst4 done.
* Misc
- stm32 ben
== Progress ==
* GCC upstream validation:
- a couple of regressions to bisect.
- minor testcase fix
- reported a couple of new failures
* GCC
- MVE autovectorization:
- vcmp support mostly complete. Minor update needed to support FP types.
- working on interleaved vector load/store support
*
== Progress ==
* GCC upstream validation:
- a few regressions to bisect. Fixed a minor testcase issue
* GCC
- MVE autovectorization: Working on vcmp. After some cleanup &
factorization, the cmp operators work on GCC vectors. I will now
resume work on auto-vectorization.
* Misc
- fixes in stm32 be
== Progress ==
* GCC upstream validation:
- a few regressions to bisect. Fixed a minor testcase issue
- native validation in Linaro's lab: we still see a few random results
* GCC
- MVE autovectorization: Working on vcmp.
* Misc
- fixes in stm32 benchmarking harness
== Next ==
* MVE auto-vectoriz
== Progress ==
* GCC upstream validation:
- not really catching up, now ~15 days late due to the numerous
commits. Manually fast-forwarded the latest build to today. I'll
bisect manually for regressions if needed.
- re-enabled native validation in Linaro's lab: we are sending test
results again
*
== Progress ==
* GCC upstream validation:
- not really catching up, now ~10 days late due to the numerous commits
* GCC
- Neon intrinsics: vceqzq improvement (PR98730) committed
- MVE autovectorization: vorn patch submitted
- opened PR98891 about Neon vectorization regression
* infra:
- reviewed p
== Progress ==
* GCC upstream validation:
- catching up, still ~1 week late due to the numerous commits
* GCC
- Neon intrinsics: looking at vceqzq improvement (PR98730)
- MVE autovectorization: WIP on vorn and vcmp.
- helping with cortex-m0 libgcc patch validation. Opened PR98779 about
issues in l
== Progress ==
* GCC upstream validation:
- catching up, still ~1 week late due to the numerous commits
* GCC
- Neon intrinsics: vceqq, vceqz and vceqzq for p64 patch: committed
- MVE autovectorization: movmisalign, vshl and vshr: committed. WIP on
next operators.
- opened 2 PRs about missed optim
== Progress ==
* GCC upstream validation:
- restarted validation after end-of-year maintenance
- catching up, no regression to report so far
* GCC
- Neon intrinsics: vceqq, vceqz and vceqzq for p64 patch: no feedback
- MVE autovectorization: movmisalign accepted, waiting for bootstrap results
vshl an
== Progress ==
* GCC upstream validation:
- reported several regressions/new failures, trunk build broken several times
* GCC
- Neon intrinsics: vceqq, vceqz and vceqzq for p64 patch: no feedback
- MVE autovectorization: committed veor, vbic, vmvn and vneg. Sent
movmisalign, vshl and vshr patches.
== Progress ==
* GCC upstream validation:
- reported several regressions/new failures
* GCC
- Neon intrinsics: vceqq, vceqz and vceqzq for p64 patch: no feedback
- MVE autovectorization: committed vand and vorr patches. Sent updated
versions of veor, vbic and vmvn. vshr and vshl need some
refactor
== Progress ==
* GCC upstream validation:
- reported several regressions/new failures
* GCC
- Neon intrinsics: vceqq, vceqz and vceqzq for p64 patch: no feedback
- MVE autovectorization: handling feedback on (vand, vorr, veor, vshr,
vshl). WIP vmvn, vbic. Found further vectorization issues.
* benc
== Progress ==
* GCC upstream validation:
- reported several regressions/new failures
* GCC
- Neon intrinsics: vceqq, vceqz and vceqzq for p64 patch: no feedback
- MVE autovectorization: sent a few patches (vand, vorr, veor, vshr,
vshl). WIP vmvn, vbic
* benchmarking:
- Scripts to run coremark on
== Progress ==
* GCC upstream validation:
- reported several regressions/new failures
* GCC
- Neon intrinsics: vceqq, vceqz and vceqzq for p64 patch: no feedback
- MVE autovectorization: more pattern updates (vshl, vshr). WIP
* benchmarking:
- Scripts to run coremark on stm32 now merged, debuggi
== Progress ==
* GCC upstream validation:
- reported several regressions/new failures
* GCC
- Neon intrinsics: vceqq, vceqz and vceqzq for p64 patch: no feedback
- MVE autovectorization: started pattern updates (vand, vorr, veor)
* Misc
- investigated LP bug 1747966
(https://bugs.launchpad.net/g
== Progress ==
* GCC upstream validation:
- reported several regressions/new failures
* GCC
- PR96767: patch committed
- PR96770: patch committed
- patch for C++ thunks with -mpure-code and cortex-m0: patches committed
- Neon intrinsics: vceqq, vceqz and vceqzq for p64 patch: handling feedback
*
* 2 days off
== Progress ==
* GCC upstream validation:
- reported several regressions/new failures
* GCC
- PR96767: patch accepted
- PR96770: patch accepted
- patch for C++ thunks with -mpure-code and cortex-m0: iterating, almost OK
- Neon intrinsics: vceqq, vceqz and vceqzq for p64 patch: no fe
* 2 days off
== Progress ==
* GCC upstream validation:
- identified several regressions, but they had already been reported
* GCC
- PR96767: no feedback
- PR96770: no feedback
- patch for C++ thunks with -mpure-code and cortex-m0: handling
further feedback
- Neon intrinsics: vceqq, vceqz and vce
== Progress ==
* GCC upstream validation:
- reported several regressions
- committed minor cleanup fixes
- fixed broken trunk build with gcc-4.8
* GCC
- PR96767: no feedback
- PR96770: no feedback
- patch for C++ thunks with -mpure-code and cortex-m0: sent updated patch
- Neon intrinsics: sent pa
== Progress ==
* GCC upstream validation:
- reported several regressions
- committed minor cleanup fixes
- reduced the number of gcc-testresults emails to avoid too much traffic
* GCC
- PR96767: no feedback
- PR96770: no feedback
- patch for C++ thunks with -mpure-code and cortex-m0: received
com
== Progress ==
* GCC upstream validation:
- reported several regressions
- committed minor cleanup fixes
- improved gcc-testresults email titles after discussion with other contributors
* GCC
- PR96767: patch sent
- PR96770: patch sent
- sent another patch to fix C++ thunks with -mpure-code and c
== Progress ==
* GCC upstream validation:
- reported several regressions
- committed minor cleanup fixes
* GCC
- PR96767: patch almost ready
- PR71233: missing ACLE intrinsics, updated the list. Created PR96914
with the missing MVE intrinsics
* benchmarking:
- Scripts to run coremark on stm32 now
== Progress ==
* GCC upstream validation:
- reported several regressions
- committed testcase fixes
* GCC
- PR96768: switch tables for thumb-1 with -mpure-code. Patch sent,
discussion ongoing
- PR96769: fix committed.
- PR71233: missing ACLE intrinsics, updated the list. Created PR96914
with the
1 day off
== Progress ==
* GCC upstream validation:
- reported several regressions
- committed testcase fixes
* GCC
- patch for PR94758 (-mpure-code and cortex-m23) applied to
trunk/gcc-10/gcc-9. Bug closed, but opened several others to track
performance improvements with -mpure-code
- PR96768: swi
== Progress ==
* GCC upstream validation:
- reported several regressions
- committed testcase fixes
* GCC
- sent patch for PR94758 (-mpure-code and cortex-m23)
* benchmarking:
- Scripts to run coremark on stm32 now merged, working on using them
in production
* misc:
- infra patches/reviews
== Ne
- 2 half days off
== Progress ==
* GCC upstream validation:
- reported several regressions (trunk and release branches)
- committed 2 testcase fixes
* GCC
- resumed work on PR94758 (-mpure-code and cortex-m23). Patch almost ready
* benchmarking:
- Scripts to run coremark on stm32 now merged, work
== Progress ==
* GCC upstream validation:
- reported a few regressions / minor testcase fix
* benchmarking:
- Created the Jenkins job to run coremark on stm32 (added/updated
scripts for that)
* misc:
- infra patches/reviews
== Next ==
* GCC/cortex-M testing improvements & fixes
* cortex-m benchm
== Progress ==
* GCC upstream validation:
- reported a few regressions / minor testcase fix
- enabled gcc-testresults for release branches, which will send even more emails
* benchmarking:
- added HAL support for the stm32 board we have in the Lab. Will start
testing once the board is actually con
== Progress ==
* GCC upstream validation:
- reported a few regressions
- added fortran to arm-none-eabi configs
- enabled gcc-testresults for most configurations, which now sends a
lot of emails
* GCC:
- PR94743 (IRQ handler and Neon registers): patch committed.
* benchmarking:
- cleanup of hal
== Progress ==
* GCC upstream validation:
- reported a few regressions
* GCC:
- PR94743 (IRQ handler and Neon registers): No feedback yet.
* misc:
- cleanup of hal lib to run benchmarks on stm32
== Next ==
* PR94743
* GCC/cortex-M testing improvements & fixes
* cortex-m benchmarking
* FDPIC GDB
* Training all week
== Progress ==
* GCC upstream validation:
- reported a few regressions
- enabled sending of some validation results to gcc-testresults mailing-list
* GCC:
- PR94743 (IRQ handler and Neon registers): No feedback yet.
* misc:
- infra fixes / troubleshooting / reviews
- cleanup
(0.5 day off)
== Progress ==
* GCC upstream validation:
- scripts updates and cleanup for cortex-m33 with qemu-system-mode
- investigated a random error with a C++ testcase under qemu-aarch64
since I upgraded to qemu-5.0.
- reported a few regressions
- fixed cross-build of GCC after C++11 upgrade,
Short week (1.5 days off)
== Progress ==
* GCC upstream validation:
- scripts updates and cleanup for cortex-m33 with qemu-system-mode
- reported a few regressions
* GCC:
- PR94743 (IRQ handler and Neon registers): No feedback yet.
* misc:
- infra fixes / troubleshooting / reviews
- looking at c
== Progress ==
* GCC upstream validation:
- scripts updates and cleanup
- new scheme for arm-eabi now in production (cortex-a7 in arm and
thumb modes, cortex-m[0347,33])
- upgraded to qemu-5.0
- reported a few regressions
* GCC:
- PR94743 (IRQ handler and Neon registers): No feedback yet.
* misc:
Short week (3 days)
== Progress ==
* GCC upstream validation:
- updating / cleaning up scripts to switch arm-eabi validation to new scheme
* GCC:
- PR94743 (IRQ handler and Neon registers): No feedback yet.
* misc:
- infra fixes / troubleshooting / reviews
- looking at cortex-m benchmarking har
== Progress ==
* GCC upstream validation:
- reported a couple of regressions
- sent an email to discuss the preferred combinations when running
the testsuite
* GCC:
- PR94743 (IRQ handler and Neon registers): iterating. Sent updated
patch to emit a warning. Cleanup patches merged. Sent WIP patc
Short week (4 days)
== Progress ==
* GCC upstream validation:
- Added gcc-10 branch
- maybe we should agree on a common way of running the testsuite
* GCC:
- PR94743 (IRQ handler and Neon registers): iterating. Refining patch
that emits a warning (testsuite refinements...). Need to update
additi
Morello
memory tagging
v8.6 bfloat
v8.1-M MVE
TrustZone
would be for next cycle
Short week (4 days)
== Progress ==
* GCC upstream validation:
- reported a couple of failures/regressions
- still looking at improving MVE tests to avoid failures in several
non-supported configurations. No satisfactory solution so far (there
are always combinations of GCC configure option and val