[OE-core] [PATCH v4] libtirpc: Support ipv6 in DISTRO_FEATURES

2023-10-16 Thread Jörg Sommer via lists . openembedded . org
If the ipv6 feature for the distribution is not set, the package should not
contain settings for ipv6. This prevents rpcbind from trying to bind to an
IPv6 socket and complaining when that fails.
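For reference, the PACKAGECONFIG default added below relies on
bb.utils.filter() to pick the feature out of DISTRO_FEATURES. A minimal,
self-contained Python sketch of what that helper does (the real
implementation lives in bitbake's bb/utils.py; FakeData below is only an
illustration, not a bitbake API):

    # Rough model of bb.utils.filter() as used in
    # PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)}"
    class FakeData:
        """Stand-in for the bitbake datastore (illustration only)."""
        def __init__(self, variables):
            self.variables = variables

        def getVar(self, name):
            return self.variables.get(name)

    def feature_filter(variable, checkvalues, d):
        """Return the words of checkvalues that are also set in variable."""
        present = set((d.getVar(variable) or "").split())
        wanted = set(checkvalues.split())
        return " ".join(sorted(wanted & present))

    d = FakeData({"DISTRO_FEATURES": "acl ipv4 ipv6 xattr"})
    print(feature_filter("DISTRO_FEATURES", "ipv6", d))  # -> "ipv6"

    d = FakeData({"DISTRO_FEATURES": "acl ipv4 xattr"})
    print(feature_filter("DISTRO_FEATURES", "ipv6", d))  # -> "" (ipv6 off)

So with ipv6 in DISTRO_FEATURES the ipv6 PACKAGECONFIG is enabled and
--enable-ipv6 is passed to configure; otherwise it stays empty and
--disable-ipv6 is used.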

Signed-off-by: Jörg Sommer 
---
 .../libtirpc/libtirpc/ipv6.patch  | 52 +++
 .../libtirpc/libtirpc_1.3.3.bb|  9 +++-
 2 files changed, 60 insertions(+), 1 deletion(-)
 create mode 100644 meta/recipes-extended/libtirpc/libtirpc/ipv6.patch

diff --git a/meta/recipes-extended/libtirpc/libtirpc/ipv6.patch 
b/meta/recipes-extended/libtirpc/libtirpc/ipv6.patch
new file mode 100644
index 00..f746f986f4
--- /dev/null
+++ b/meta/recipes-extended/libtirpc/libtirpc/ipv6.patch
@@ -0,0 +1,52 @@
+From 077bbd32e8b7474dc5f153997732e1e6aec7fad6 Mon Sep 17 00:00:00 2001
+Message-Id: 
<077bbd32e8b7474dc5f153997732e1e6aec7fad6.1697120796.git.joerg.som...@navimatix.de>
+From: =?UTF-8?q?J=C3=B6rg=20Sommer?= 
+Date: Thu, 12 Oct 2023 16:22:59 +0200
+Subject: [PATCH] netconfig: remove tcp6, udp6 on --disable-ipv6
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+If the configuration for IPv6 is disabled, the netconfig should not contain
+settings for tcp6 and udp6.
+
+The test for the configure option didn't work, because it checked the wrong
+variable.
+
+Signed-off-by: Jörg Sommer 
+Upstream-Status: Submitted [libtirpc-de...@lists.sourceforge.net]
+Upstream-Status: Submitted [linux-...@vger.kernel.org]
+---
+ configure.ac| 2 +-
+ doc/Makefile.am | 5 +
+ 2 files changed, 6 insertions(+), 1 deletion(-)
+
+diff --git a/configure.ac b/configure.ac
+index fe6c517..b687f8d 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -64,7 +64,7 @@ fi
+ AC_ARG_ENABLE(ipv6,
+   [AC_HELP_STRING([--disable-ipv6], [Disable IPv6 support 
@<:@default=no@:>@])],
+   [],[enable_ipv6=yes])
+-AM_CONDITIONAL(INET6, test "x$disable_ipv6" != xno)
++AM_CONDITIONAL(INET6, test "x$enable_ipv6" != xno)
+ if test "x$enable_ipv6" != xno; then
+   AC_DEFINE(INET6, 1, [Define to 1 if IPv6 is available])
+ fi
+diff --git a/doc/Makefile.am b/doc/Makefile.am
+index d42ab90..b9678f6 100644
+--- a/doc/Makefile.am
++++ b/doc/Makefile.am
+@@ -2,3 +2,8 @@ dist_sysconf_DATA  = netconfig bindresvport.blacklist
+ 
+ CLEANFILES   = cscope.* *~
+ DISTCLEANFILES   = Makefile.in
++
++if ! INET6
++install-exec-hook:
++  $(SED) -i '/^tcp6\|^udp6/d' "$(DESTDIR)$(sysconfdir)"/netconfig
++endif
+-- 
+2.34.1
+
diff --git a/meta/recipes-extended/libtirpc/libtirpc_1.3.3.bb 
b/meta/recipes-extended/libtirpc/libtirpc_1.3.3.bb
index b27c302460..898a952a8b 100644
--- a/meta/recipes-extended/libtirpc/libtirpc_1.3.3.bb
+++ b/meta/recipes-extended/libtirpc/libtirpc_1.3.3.bb
@@ -9,7 +9,9 @@ LIC_FILES_CHKSUM = 
"file://COPYING;md5=f835cce8852481e4b2bbbdd23b5e47f3 \
 
 PROVIDES = "virtual/librpc"
 
-SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BP}.tar.bz2"
+SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BP}.tar.bz2 \
+   file://ipv6.patch \
+"
 UPSTREAM_CHECK_URI = "https://sourceforge.net/projects/libtirpc/files/libtirpc/"
 UPSTREAM_CHECK_REGEX = "(?P<pver>\d+(\.\d+)+)/"
 SRC_URI[sha256sum] = 
"6474e98851d9f6f33871957ddee9714fdcd9d8a5ee9abb5a98d63ea2e60e12f3"
@@ -21,6 +23,11 @@ inherit autotools pkgconfig
 PACKAGECONFIG ??= ""
 PACKAGECONFIG[gssapi] = "--enable-gssapi,--disable-gssapi,krb5"
 
+PACKAGECONFIG ??= "\
+   ${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} \
+"
+PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6"
+
 do_install:append() {
test -e ${D}${sysconfdir}/netconfig && chown root:root 
${D}${sysconfdir}/netconfig
 }
-- 
2.34.1





[OE-core] [PATCH] base-files: Remove localhost ::1 from hosts if ipv6 missing

2023-10-16 Thread Jörg Sommer via lists . openembedded . org
If a distribution doesn't provide IPv6, the mapping between localhost and ::1
has to be removed from /etc/hosts.
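Note that the inline Python in the hunk below expands to the shell word
"false" when ipv6 *is* in DISTRO_FEATURES, so the sed only runs when the
feature is absent. A rough, self-contained sketch of how bb.utils.contains()
behaves (the real helper lives in bitbake's bb/utils.py; the plain dict used
as a datastore here is only an illustration):

    # Simplified model of
    # bb.utils.contains('DISTRO_FEATURES', 'ipv6', 'false', 'true', d)
    def contains(variable, checkvalues, truevalue, falsevalue, datastore):
        """Return truevalue if every word of checkvalues is set in variable."""
        present = set(datastore.get(variable, "").split())
        wanted = set(checkvalues.split())
        return truevalue if wanted.issubset(present) else falsevalue

    datastore = {"DISTRO_FEATURES": "acl ipv4 xattr"}  # no ipv6
    print(contains("DISTRO_FEATURES", "ipv6", "false", "true", datastore))
    # -> "true": the shell 'if' body runs and ::1 loses its localhost alias

With ipv6 enabled the expression expands to "false" and /etc/hosts is left
untouched.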

Signed-off-by: Jörg Sommer 
---
 meta/recipes-core/base-files/base-files_3.0.14.bb | 4 
 1 file changed, 4 insertions(+)

diff --git a/meta/recipes-core/base-files/base-files_3.0.14.bb 
b/meta/recipes-core/base-files/base-files_3.0.14.bb
index 6ba3971e32..4d246126a2 100644
--- a/meta/recipes-core/base-files/base-files_3.0.14.bb
+++ b/meta/recipes-core/base-files/base-files_3.0.14.bb
@@ -136,6 +136,10 @@ do_install () {
echo ${hostname} > ${D}${sysconfdir}/hostname
echo "127.0.1.1 ${hostname}" >> ${D}${sysconfdir}/hosts
fi
+
+   if ${@bb.utils.contains('DISTRO_FEATURES', 'ipv6', 'false', 'true', 
d)}; then
+   sed -i '/^::1/s/ localhost//' ${D}${sysconfdir}/hosts
+   fi
 }
 
 do_install:append:libc-glibc () {
-- 
2.34.1





Re: [gentoo-user] OFF TOPIC Need Ubuntu network help

2023-10-16 Thread Wols Lists

On 16/10/2023 08:51, Dale wrote:
Anyone here have ideas?  Keep in mind, that thing uses systemd.  I 
thought I hated that before.  I truly hate that thing now.  Trying to 
figure out how to restart something is like pulling teeth with no pain 
meds.


systemctl restart servicename?

I like systemd, but given my battles with other stuff, I feel your pain. 
Having had to WRITE a service file, though, oh I'm so glad I wasn't 
messing with SystemV or stuff like that!


Just be warned - I feel about apt stuff just like you feel about systemd ...


But anyways. Does your hard disk kernel have the appropriate module for 
the network card loaded? I can't remember the name of the systemd 
networking service, but did you "systemctl enable" it?


Oh, and I think it fires up DHCP by default so you don't need to enable
any of that stuff.


Hopefully those tips will get you somewhere - this is what I remember 
from enabling systemd on gentoo...


Cheers,
Wol



Re: [PATCH 10/11] aarch64: Fix branch-protection error message tests

2023-10-13 Thread Richard Earnshaw (lists)
On 05/09/2023 16:00, Richard Sandiford via Gcc-patches wrote:
> Szabolcs Nagy  writes:
>> Update tests for the new branch-protection parser errors.
>>
>> gcc/testsuite/ChangeLog:
>>
>>  * gcc.target/aarch64/branch-protection-attr.c: Update.
>>  * gcc.target/aarch64/branch-protection-option.c: Update.
> 
> OK, thanks.  (And I agree these are better messages. :))
> 
> I think that's the last of the AArch64-specific ones.  The others
> will need to be reviewed by Kyrill or Richard.
> 
> Richard
> 
>> ---
>>  gcc/testsuite/gcc.target/aarch64/branch-protection-attr.c   | 6 +++---
>>  gcc/testsuite/gcc.target/aarch64/branch-protection-option.c | 2 +-
>>  2 files changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/gcc/testsuite/gcc.target/aarch64/branch-protection-attr.c 
>> b/gcc/testsuite/gcc.target/aarch64/branch-protection-attr.c
>> index 272000c2747..dae2a758a56 100644
>> --- a/gcc/testsuite/gcc.target/aarch64/branch-protection-attr.c
>> +++ b/gcc/testsuite/gcc.target/aarch64/branch-protection-attr.c
>> @@ -4,19 +4,19 @@ void __attribute__ ((target("branch-protection=leaf")))
>>  foo1 ()
>>  {
>>  }
>> -/* { dg-error {invalid protection type 'leaf' in 
>> 'target\("branch-protection="\)' pragma or attribute} "" { target *-*-* } 5 
>> } */
>> +/* { dg-error {invalid argument 'leaf' for 
>> 'target\("branch-protection="\)'} "" { target *-*-* } 5 } */
>>  /* { dg-error {pragma or attribute 'target\("branch-protection=leaf"\)' is 
>> not valid} "" { target *-*-* } 5 } */

'leaf' is really a modifier for the other branch protection strategies; perhaps 
it would be better to describe it as that.

But this brings up another issue/question.  If the compiler has been configured 
with, say, '--enable-branch-protection=standard' or some other variety, is 
there (or do we want) a way to extend that to leaf functions without changing 
the underlying strategy?

>>  
>>  void __attribute__ ((target("branch-protection=none+pac-ret")))
>>  foo2 ()
>>  {
>>  }
>> -/* { dg-error "unexpected 'pac-ret' after 'none'" "" { target *-*-* } 12 } 
>> */
>> +/* { dg-error {argument 'none' can only appear alone in 
>> 'target\("branch-protection="\)'} "" { target *-*-* } 12 } */

Or maybe better still: "branch protection strategies 'none' and 'pac-ret' are 
incompatible".

>>  /* { dg-error {pragma or attribute 
>> 'target\("branch-protection=none\+pac-ret"\)' is not valid} "" { target 
>> *-*-* } 12 } */
>>  
>>  void __attribute__ ((target("branch-protection=")))
>>  foo3 ()
>>  {
>>  }
>> -/* { dg-error {missing argument to 'target\("branch-protection="\)' pragma 
>> or attribute} "" { target *-*-* } 19 } */
>> +/* { dg-error {invalid argument '' for 'target\("branch-protection="\)'} "" 
>> { target *-*-* } 19 } */
>>  /* { dg-error {pragma or attribute 'target\("branch-protection="\)' is not 
>> valid} "" { target *-*-* } 19 } */
>> diff --git a/gcc/testsuite/gcc.target/aarch64/branch-protection-option.c 
>> b/gcc/testsuite/gcc.target/aarch64/branch-protection-option.c
>> index 1b3bf4ee2b8..e2f847a31c4 100644
>> --- a/gcc/testsuite/gcc.target/aarch64/branch-protection-option.c
>> +++ b/gcc/testsuite/gcc.target/aarch64/branch-protection-option.c
>> @@ -1,4 +1,4 @@
>>  /* { dg-do "compile" } */
>>  /* { dg-options "-mbranch-protection=leaf -mbranch-protection=none+pac-ret" 
>> } */
>>  
>> -/* { dg-error "unexpected 'pac-ret' after 'none'"  "" { target *-*-* } 0 } 
>> */
>> +/* { dg-error "argument 'none' can only appear alone in 
>> '-mbranch-protection='" "" { target *-*-* } 0 } */

But this is all a matter of taste.

However, this patch should be merged with the patch that changes the error 
messages.  Or has that already gone in?

R


Re: [OE-core] [PATCH v3] libtirpc: Support ipv6 in DISTRO_FEATURES

2023-10-13 Thread Jörg Sommer via lists . openembedded . org
On 12 October 2023 21:13, Dan McGregor wrote:
> On Thu, 12 Oct 2023 at 11:10, Jörg Sommer via lists.openembedded.org
>  wrote:
> >
> > This is only a minor change, because oelint-adv had warned about the space
> > after the opening " of the PACKAGECONFIG value.
> >
> > 
> > From: openembedded-core@lists.openembedded.org 
> >  on behalf of Jörg Sommer via 
> > lists.openembedded.org 
> > Sent: Thursday, 12 October 2023 18:34
> > To: openembedded-core@lists.openembedded.org 
> > 
> > Cc: Jörg Sommer 
> > Subject: [OE-core] [PATCH v3] libtirpc: Support ipv6 in DISTRO_FEATURES
> >
> > If the ipv6 feature for the distribution is not set, the package should not
> > contain settings for ipv6. This prevents rpcbind from trying to bind to an
> > IPv6 socket and complaining when that fails.

> > +PACKAGECONFIG ??= "\
> > +    ${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} \
> > +"
> > +PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6"
> 
> Looks like this will conflict with recent changes I made to master, so
> hopefully those can be resolved automatically.

I'm happy to provide a new version that applies on master, if it's needed. Let 
me know if I should create it.

-- 
Navimatix GmbH
Tatzendpromenade 2
07745 Jena  
Geschäftsführer: Steffen Späthe, Jan Rommeley
Registergericht: Amtsgericht Jena, HRB 501480




Re: [OE-core] [PATCH v3] libtirpc: Support ipv6 in DISTRO_FEATURES

2023-10-12 Thread Jörg Sommer via lists . openembedded . org
This is only a minor change, because oelint-adv had warned about the space
after the opening " of the PACKAGECONFIG value.


From: openembedded-core@lists.openembedded.org 
 on behalf of Jörg Sommer via 
lists.openembedded.org 
Sent: Thursday, 12 October 2023 18:34
To: openembedded-core@lists.openembedded.org 

Cc: Jörg Sommer 
Subject: [OE-core] [PATCH v3] libtirpc: Support ipv6 in DISTRO_FEATURES

If the ipv6 feature for the distribution is not set, the package should not
contain settings for ipv6. This prevents rpcbind from trying to bind to an
IPv6 socket and complaining when that fails.

Signed-off-by: Jörg Sommer 
---
 .../libtirpc/libtirpc/ipv6.patch  | 52 +++
 .../libtirpc/libtirpc_1.3.2.bb|  6 +++
 2 files changed, 58 insertions(+)
 create mode 100644 meta/recipes-extended/libtirpc/libtirpc/ipv6.patch

diff --git a/meta/recipes-extended/libtirpc/libtirpc/ipv6.patch 
b/meta/recipes-extended/libtirpc/libtirpc/ipv6.patch
new file mode 100644
index 00..f746f986f4
--- /dev/null
+++ b/meta/recipes-extended/libtirpc/libtirpc/ipv6.patch
@@ -0,0 +1,52 @@
+From 077bbd32e8b7474dc5f153997732e1e6aec7fad6 Mon Sep 17 00:00:00 2001
+Message-Id: 
<077bbd32e8b7474dc5f153997732e1e6aec7fad6.1697120796.git.joerg.som...@navimatix.de>
+From: =?UTF-8?q?J=C3=B6rg=20Sommer?= 
+Date: Thu, 12 Oct 2023 16:22:59 +0200
+Subject: [PATCH] netconfig: remove tcp6, udp6 on --disable-ipv6
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+If the configuration for IPv6 is disabled, the netconfig should not contain
+settings for tcp6 and udp6.
+
+The test for the configure option didn't work, because it checked the wrong
+variable.
+
+Signed-off-by: Jörg Sommer 
+Upstream-Status: Submitted [libtirpc-de...@lists.sourceforge.net]
+Upstream-Status: Submitted [linux-...@vger.kernel.org]
+---
+ configure.ac| 2 +-
+ doc/Makefile.am | 5 +
+ 2 files changed, 6 insertions(+), 1 deletion(-)
+
+diff --git a/configure.ac b/configure.ac
+index fe6c517..b687f8d 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -64,7 +64,7 @@ fi
+ AC_ARG_ENABLE(ipv6,
+[AC_HELP_STRING([--disable-ipv6], [Disable IPv6 support 
@<:@default=no@:>@])],
+[],[enable_ipv6=yes])
+-AM_CONDITIONAL(INET6, test "x$disable_ipv6" != xno)
++AM_CONDITIONAL(INET6, test "x$enable_ipv6" != xno)
+ if test "x$enable_ipv6" != xno; then
+AC_DEFINE(INET6, 1, [Define to 1 if IPv6 is available])
+ fi
+diff --git a/doc/Makefile.am b/doc/Makefile.am
+index d42ab90..b9678f6 100644
+--- a/doc/Makefile.am
++++ b/doc/Makefile.am
+@@ -2,3 +2,8 @@ dist_sysconf_DATA  = netconfig bindresvport.blacklist
+
+ CLEANFILES   = cscope.* *~
+ DISTCLEANFILES   = Makefile.in
++
++if ! INET6
++install-exec-hook:
++  $(SED) -i '/^tcp6\|^udp6/d' "$(DESTDIR)$(sysconfdir)"/netconfig
++endif
+--
+2.34.1
+
diff --git a/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb 
b/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb
index 6980135a92..edb98082f2 100644
--- a/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb
+++ b/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb
@@ -11,6 +11,7 @@ PROVIDES = "virtual/librpc"

 SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BP}.tar.bz2 \
file://CVE-2021-46828.patch \
+  file://ipv6.patch \
   "
 UPSTREAM_CHECK_URI = "https://sourceforge.net/projects/libtirpc/files/libtirpc/"
 UPSTREAM_CHECK_REGEX = "(?P<pver>\d+(\.\d+)+)/"
@@ -20,6 +21,11 @@ inherit autotools pkgconfig

 EXTRA_OECONF = "--disable-gssapi"

+PACKAGECONFIG ??= "\
+${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} \
+"
+PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6"
+
 do_install:append() {
 test -e ${D}${sysconfdir}/netconfig && chown root:root 
${D}${sysconfdir}/netconfig
 }
--
2.34.1





[OE-core] [PATCH v3] libtirpc: Support ipv6 in DISTRO_FEATURES

2023-10-12 Thread Jörg Sommer via lists . openembedded . org
If the ipv6 feature for the distribution is not set, the package should not
contain settings for ipv6. This prevents rpcbind from trying to bind to an
IPv6 socket and complaining when that fails.

Signed-off-by: Jörg Sommer 
---
 .../libtirpc/libtirpc/ipv6.patch  | 52 +++
 .../libtirpc/libtirpc_1.3.2.bb|  6 +++
 2 files changed, 58 insertions(+)
 create mode 100644 meta/recipes-extended/libtirpc/libtirpc/ipv6.patch

diff --git a/meta/recipes-extended/libtirpc/libtirpc/ipv6.patch 
b/meta/recipes-extended/libtirpc/libtirpc/ipv6.patch
new file mode 100644
index 00..f746f986f4
--- /dev/null
+++ b/meta/recipes-extended/libtirpc/libtirpc/ipv6.patch
@@ -0,0 +1,52 @@
+From 077bbd32e8b7474dc5f153997732e1e6aec7fad6 Mon Sep 17 00:00:00 2001
+Message-Id: 
<077bbd32e8b7474dc5f153997732e1e6aec7fad6.1697120796.git.joerg.som...@navimatix.de>
+From: =?UTF-8?q?J=C3=B6rg=20Sommer?= 
+Date: Thu, 12 Oct 2023 16:22:59 +0200
+Subject: [PATCH] netconfig: remove tcp6, udp6 on --disable-ipv6
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+If the configuration for IPv6 is disabled, the netconfig should not contain
+settings for tcp6 and udp6.
+
+The test for the configure option didn't work, because it checked the wrong
+variable.
+
+Signed-off-by: Jörg Sommer 
+Upstream-Status: Submitted [libtirpc-de...@lists.sourceforge.net]
+Upstream-Status: Submitted [linux-...@vger.kernel.org]
+---
+ configure.ac| 2 +-
+ doc/Makefile.am | 5 +
+ 2 files changed, 6 insertions(+), 1 deletion(-)
+
+diff --git a/configure.ac b/configure.ac
+index fe6c517..b687f8d 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -64,7 +64,7 @@ fi
+ AC_ARG_ENABLE(ipv6,
+   [AC_HELP_STRING([--disable-ipv6], [Disable IPv6 support 
@<:@default=no@:>@])],
+   [],[enable_ipv6=yes])
+-AM_CONDITIONAL(INET6, test "x$disable_ipv6" != xno)
++AM_CONDITIONAL(INET6, test "x$enable_ipv6" != xno)
+ if test "x$enable_ipv6" != xno; then
+   AC_DEFINE(INET6, 1, [Define to 1 if IPv6 is available])
+ fi
+diff --git a/doc/Makefile.am b/doc/Makefile.am
+index d42ab90..b9678f6 100644
+--- a/doc/Makefile.am
++++ b/doc/Makefile.am
+@@ -2,3 +2,8 @@ dist_sysconf_DATA  = netconfig bindresvport.blacklist
+ 
+ CLEANFILES   = cscope.* *~
+ DISTCLEANFILES   = Makefile.in
++
++if ! INET6
++install-exec-hook:
++  $(SED) -i '/^tcp6\|^udp6/d' "$(DESTDIR)$(sysconfdir)"/netconfig
++endif
+-- 
+2.34.1
+
diff --git a/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb 
b/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb
index 6980135a92..edb98082f2 100644
--- a/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb
+++ b/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb
@@ -11,6 +11,7 @@ PROVIDES = "virtual/librpc"
 
 SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BP}.tar.bz2 \
   file://CVE-2021-46828.patch \
+  file://ipv6.patch \
  "
 UPSTREAM_CHECK_URI = "https://sourceforge.net/projects/libtirpc/files/libtirpc/"
 UPSTREAM_CHECK_REGEX = "(?P<pver>\d+(\.\d+)+)/"
@@ -20,6 +21,11 @@ inherit autotools pkgconfig
 
 EXTRA_OECONF = "--disable-gssapi"
 
+PACKAGECONFIG ??= "\
+${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} \
+"
+PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6"
+
 do_install:append() {
test -e ${D}${sysconfdir}/netconfig && chown root:root 
${D}${sysconfdir}/netconfig
 }
-- 
2.34.1





[OE-core] [PATCH v2] libtirpc: Support ipv6 in DISTRO_FEATURES

2023-10-12 Thread Jörg Sommer via lists . openembedded . org
If the ipv6 feature for the distribution is not set, the package should not
contain settings for ipv6. This prevents rpcbind from trying to bind to an
IPv6 socket and complaining when that fails.

Signed-off-by: Jörg Sommer 
---
 .../libtirpc/libtirpc/ipv6.patch  | 52 +++
 .../libtirpc/libtirpc_1.3.2.bb|  6 +++
 2 files changed, 58 insertions(+)
 create mode 100644 meta/recipes-extended/libtirpc/libtirpc/ipv6.patch

diff --git a/meta/recipes-extended/libtirpc/libtirpc/ipv6.patch 
b/meta/recipes-extended/libtirpc/libtirpc/ipv6.patch
new file mode 100644
index 00..f746f986f4
--- /dev/null
+++ b/meta/recipes-extended/libtirpc/libtirpc/ipv6.patch
@@ -0,0 +1,52 @@
+From 077bbd32e8b7474dc5f153997732e1e6aec7fad6 Mon Sep 17 00:00:00 2001
+Message-Id: 
<077bbd32e8b7474dc5f153997732e1e6aec7fad6.1697120796.git.joerg.som...@navimatix.de>
+From: =?UTF-8?q?J=C3=B6rg=20Sommer?= 
+Date: Thu, 12 Oct 2023 16:22:59 +0200
+Subject: [PATCH] netconfig: remove tcp6, udp6 on --disable-ipv6
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+If the configuration for IPv6 is disabled, the netconfig should not contain
+settings for tcp6 and udp6.
+
+The test for the configure option didn't work, because it checked the wrong
+variable.
+
+Signed-off-by: Jörg Sommer 
+Upstream-Status: Submitted [libtirpc-de...@lists.sourceforge.net]
+Upstream-Status: Submitted [linux-...@vger.kernel.org]
+---
+ configure.ac| 2 +-
+ doc/Makefile.am | 5 +
+ 2 files changed, 6 insertions(+), 1 deletion(-)
+
+diff --git a/configure.ac b/configure.ac
+index fe6c517..b687f8d 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -64,7 +64,7 @@ fi
+ AC_ARG_ENABLE(ipv6,
+   [AC_HELP_STRING([--disable-ipv6], [Disable IPv6 support 
@<:@default=no@:>@])],
+   [],[enable_ipv6=yes])
+-AM_CONDITIONAL(INET6, test "x$disable_ipv6" != xno)
++AM_CONDITIONAL(INET6, test "x$enable_ipv6" != xno)
+ if test "x$enable_ipv6" != xno; then
+   AC_DEFINE(INET6, 1, [Define to 1 if IPv6 is available])
+ fi
+diff --git a/doc/Makefile.am b/doc/Makefile.am
+index d42ab90..b9678f6 100644
+--- a/doc/Makefile.am
++++ b/doc/Makefile.am
+@@ -2,3 +2,8 @@ dist_sysconf_DATA  = netconfig bindresvport.blacklist
+ 
+ CLEANFILES   = cscope.* *~
+ DISTCLEANFILES   = Makefile.in
++
++if ! INET6
++install-exec-hook:
++  $(SED) -i '/^tcp6\|^udp6/d' "$(DESTDIR)$(sysconfdir)"/netconfig
++endif
+-- 
+2.34.1
+
diff --git a/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb 
b/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb
index 6980135a92..dca5a964a8 100644
--- a/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb
+++ b/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb
@@ -11,6 +11,7 @@ PROVIDES = "virtual/librpc"
 
 SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BP}.tar.bz2 \
   file://CVE-2021-46828.patch \
+  file://ipv6.patch \
  "
 UPSTREAM_CHECK_URI = "https://sourceforge.net/projects/libtirpc/files/libtirpc/"
 UPSTREAM_CHECK_REGEX = "(?P<pver>\d+(\.\d+)+)/"
@@ -20,6 +21,11 @@ inherit autotools pkgconfig
 
 EXTRA_OECONF = "--disable-gssapi"
 
+PACKAGECONFIG ??= " \
+${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} \
+"
+PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6"
+
 do_install:append() {
test -e ${D}${sysconfdir}/netconfig && chown root:root 
${D}${sysconfdir}/netconfig
 }
-- 
2.34.1





[OE-core] [PATCH] libtirpc: Support ipv6 in DISTRO_FEATURES

2023-10-12 Thread Jörg Sommer via lists . openembedded . org
If the ipv6 feature for the distribution is not set, the package should not
contain settings for ipv6. This prevents rpcbind from trying to bind to an
IPv6 socket and complaining when that fails.

Signed-off-by: Jörg Sommer 
---
 meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb | 14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb 
b/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb
index 6980135a92..14db4a5eda 100644
--- a/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb
+++ b/meta/recipes-extended/libtirpc/libtirpc_1.3.2.bb
@@ -20,8 +20,20 @@ inherit autotools pkgconfig
 
 EXTRA_OECONF = "--disable-gssapi"
 
+PACKAGECONFIG ??= " \
+${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} \
+"
+PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6"
+
 do_install:append() {
-   test -e ${D}${sysconfdir}/netconfig && chown root:root 
${D}${sysconfdir}/netconfig
+   if test -e ${D}${sysconfdir}/netconfig
+   then
+   if ${@bb.utils.contains('DISTRO_FEATURES', 'ipv6', 'false', 
'true', d)}
+   then
+   sed -i '/^tcp6\|^udp6/d' ${D}${sysconfdir}/netconfig
+   fi
+   chown root:root ${D}${sysconfdir}/netconfig
+   fi
 }
 
 BBCLASSEXTEND = "native nativesdk"
-- 
2.34.1





[meta-virtualization][PATCH v2] packagegroup-container: require ipv6 for podman

2023-10-12 Thread Jörg Sommer via lists . yoctoproject . org
The recipe *podman* requires the distro feature *ipv6*. Using a distro
without it causes the build of *packagegroup-container* to fail, even if
*packagegroup-podman* is not used:

ERROR: Nothing RPROVIDES 'podman' (but 
/build/../work/layers-3rdparty/meta-virtualization/recipes-core/packagegroups/packagegroup-container.bb
 RDEPENDS on or otherwise requires it)
podman was skipped: missing required distro feature 'ipv6' (not in 
DISTRO_FEATURES)
NOTE: Runtime target 'podman' is unbuildable, removing...
Missing or unbuildable dependency chain was: ['podman']
NOTE: Runtime target 'packagegroup-docker' is unbuildable, removing...
Missing or unbuildable dependency chain was: ['packagegroup-docker', 
'podman']
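Worth noting: when bb.utils.contains() is given several space-separated
values, all of them have to be present for the true branch to be taken, so
packagegroup-podman is now only pulled in when both seccomp and ipv6 are
enabled. A tiny Python sketch of that semantics (the datastore is replaced by
a plain string for illustration):

    def contains_all(checkvalues, features):
        """True only if every requested feature is in the feature string."""
        return set(checkvalues.split()).issubset(features.split())

    print(contains_all("seccomp ipv6", "seccomp ipv6 acl"))  # True
    print(contains_all("seccomp ipv6", "seccomp acl"))       # False -> podman dropped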

Signed-off-by: Jörg Sommer 
---
 recipes-core/packagegroups/packagegroup-container.bb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/recipes-core/packagegroups/packagegroup-container.bb 
b/recipes-core/packagegroups/packagegroup-container.bb
index 8d418e9..8309a08 100644
--- a/recipes-core/packagegroups/packagegroup-container.bb
+++ b/recipes-core/packagegroups/packagegroup-container.bb
@@ -9,7 +9,7 @@ PACKAGES = "\
 packagegroup-lxc \
 packagegroup-docker \
 packagegroup-oci \
-${@bb.utils.contains('DISTRO_FEATURES', 'seccomp', \
+${@bb.utils.contains('DISTRO_FEATURES', 'seccomp ipv6', \
  'packagegroup-podman', '', d)} \
 packagegroup-containerd \
 "
-- 
2.34.1





Re: [gentoo-user] world updates blocked by Qt

2023-10-12 Thread Wols Lists

On 11/10/2023 17:44, Philip Webb wrote:

231011 Alan McKinnon wrote:

Today a sync and emerge world produces a huge list of blockers.
qt 5.15.10 is currently installed and qt 5.15.11 is new in the tree and
being blocked.
All the visible blockers are Qt itself so --verbose-conflicts is needed.


My experience for some time has been that Qt pkgs block one another,
so that the only way out is to unmerge them all, then remerge them all.
If anyone knows a better method, please let us know.

I haven't had that in a long time. If I get blocks like that (rare),
--backtrack=100 (or whatever it is) usually unblocks it.


The other thing is, I don't have any explicit perl code on my system, 
but on at least one occasion running perl-cleaner --all unbunged a 
problem ...


There's a whole bunch of incantations that are rarely needed but need to 
be remembered for when they are ...


Cheers,
Wol



Re: Principles of the C99 testsuite conversion

2023-10-11 Thread Richard Earnshaw (lists)
On 11/10/2023 14:56, Jeff Law wrote:
> 
> 
> On 10/11/23 04:39, Florian Weimer wrote:
>> I've started to look at what it is required to convert the testsuite to
>> C99 (without implicit ints, without implicit function declarations, and
>> a few other legacy language features).
> I bet those older tests originating from c-torture will be a bit painful.  
> Torbjorn liked having them minimized, to the point of squashing out nearly 
> everything he considered extraneous.  I'd bet many of those older tests are 
> going to need lots of changes.
> 

I've often wondered just how much of the original c-torture suite is still 
relevant today.  Most of those tests were written at a time when the compiler 
expanded tree directly into RTL and I suspect that today the tests never get 
even close to tickling the original bug they were intended to validate.

R.



Re: some mails get delayed 5min, some don't

2023-10-11 Thread lists
On Wed, October 11, 2023 10:44 pm, Matus UHLAR - fantomas wrote:
> On 11.10.23 22:21, li...@sbt.net.au wrote:

> Here is your 5-minute delay. It's possible that scanning took too much
> time. Perhaps too many concurrent amavis sessions?

from htop , by CPU %

  PID USER  PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
22097 amavis 20   0  392M 70084  5676 R 100.  1.8  0:27.24
/usr/sbin/amavisd (virgin child)
 8675 amavis 20   0  403M 97980  6264 S  0.0  2.5  9:59.10
/usr/sbin/amavisd (ch11-avail)
17438 amavis 20   0  386M 37200   664 S  0.0  1.0  0:08.36
/usr/sbin/amavisd (master)
24018 amavis 20   0 1833M 1320M  1392 S  0.0 34.8  0:00.87
/usr/sbin/clamd -c /etc/clamd.d/amavisd.conf
24003 amavis 20   0 1833M 1320M  1392 S  0.0 34.8 25:32.45
/usr/sbin/clamd -c /etc/clamd.d/amavisd.conf
22250 amavis 20   0 1833M 1320M  1392 S  0.0 34.8  0:00.20
/usr/sbin/clamd -c /etc/clamd.d/amavisd.conf
22251 amavis 20   0 1833M 1320M  1392 S  0.0 34.8  0:00.11
/usr/sbin/clamd -c /etc/clamd.d/amavisd.conf

as I'm watching, the very first line seems stuck at first line position,
others come and go

Voytek



Re: Register allocation cost question

2023-10-11 Thread Richard Earnshaw (lists) via Gcc
On 11/10/2023 09:58, Andrew Stubbs wrote:
> On 11/10/2023 07:54, Chung-Lin Tang wrote:
>>
>>
>> On 2023/10/10 11:11 PM, Andrew Stubbs wrote:
>>> Hi all,
>>>
>>> I'm trying to add a new register set to the GCN port, but I've hit a
>>> problem I don't understand.
>>>
>>> There are 256 new registers (each 2048 bit vector register) but the
>>> register file has to be divided between all the running hardware
>>> threads; if you can use fewer registers you can get more parallelism,
>>> which means that it's important that they're allocated in order.
>>>
>>> The problem is that they're not allocated in order. Somehow the IRA pass
>>> is calculating different costs for the registers within the class. It
>>> seems to prefer registers a32, a96, a160, and a224.
>>>
>>> The internal regno are 448, 512, 576, 640. These are not random numbers!
>>> They all have zero for the 6 LSB.
>>>
>>> What could cause this? Did I overrun some magic limit? What target hook
>>> might I have miscoded?
>>>
>>> I'm also seeing wrong-code bugs when I allow more than 32 new registers,
>>> but that might be an unrelated problem. Or the allocation is broken? I'm
>>> still analyzing this.
>>>
>>> If it matters, ... the new registers can't be used for general purposes,
>>> so I'm trying to set them up as a temporary spill destination. This
>>> means they're typically not busy. It feels like it shouldn't be this
>>> hard... :(
>>
>> Have you tried experimenting with REG_ALLOC_ORDER? I see that the GCN port 
>> currently isn't using this target macro.
> 
> The default definition is 0,1,2,3,4 and is already the desired behaviour.
> 
> Andrew

You may need to define HONOR_REG_ALLOC_ORDER though.


[meta-virtualization][PATCH] packagegroup-container: require ipv6 for podman

2023-10-11 Thread Jörg Sommer via lists . yoctoproject . org
The recipe *podman* requires *ipv6* in *DISTRO_FEATURES*, which causes the
build of the whole recipe to fail, even if packagegroup-podman is not used.

Signed-off-by: Jörg Sommer 
---
 recipes-core/packagegroups/packagegroup-container.bb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/recipes-core/packagegroups/packagegroup-container.bb 
b/recipes-core/packagegroups/packagegroup-container.bb
index 8d418e9..8309a08 100644
--- a/recipes-core/packagegroups/packagegroup-container.bb
+++ b/recipes-core/packagegroups/packagegroup-container.bb
@@ -9,7 +9,7 @@ PACKAGES = "\
 packagegroup-lxc \
 packagegroup-docker \
 packagegroup-oci \
-${@bb.utils.contains('DISTRO_FEATURES', 'seccomp', \
+${@bb.utils.contains('DISTRO_FEATURES', 'seccomp ipv6', \
  'packagegroup-podman', '', d)} \
 packagegroup-containerd \
 "
-- 
2.34.1





Re: [OE-Core][PATCH 0/2] Fix regression reporting for master-next

2023-10-11 Thread Alexis Lothoré via lists . openembedded . org
On 10/10/23 11:30, Alexis Lothoré via lists.openembedded.org wrote:
> With those two patches, I have been able to properly generate the
> regression report from [1] with the following command:

It looks like I forgot to paste the relevant command. Here it is, for
documentation purpose:
yocto_testresults_query.py regression-report
561c63e94710a755227357e90004aafa63ec9c7e 
2994f51ffc4699970abed42d2e2a6452d04128cd

With 2994f51ffc4699970abed42d2e2a6452d04128cd being the SHA-1 of master-next
HEAD at the time of sending this series. But the command may not work anymore
as-is, because master-next has likely been force-pushed since then.

> [1] 
> https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/6025/steps/29/logs/stdio


-- 
Alexis Lothoré, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com





Re: [PATCH 6/6] aarch64: Add front-end argument type checking for target builtins

2023-10-10 Thread Richard Earnshaw (lists)
On 09/10/2023 14:12, Victor Do Nascimento wrote:
> 
> 
> On 10/7/23 12:53, Richard Sandiford wrote:
>> Richard Earnshaw  writes:
>>> On 03/10/2023 16:18, Victor Do Nascimento wrote:
 In implementing the ACLE read/write system register builtins it was
 observed that leaving argument type checking to be done at expand-time
 meant that poorly-formed function calls were being "fixed" by certain
 optimization passes, meaning bad code wasn't being properly picked up
 in checking.

 Example:

     const char *regname = "amcgcr_el0";
     long long a = __builtin_aarch64_rsr64 (regname);

 is reduced by the ccp1 pass to

     long long a = __builtin_aarch64_rsr64 ("amcgcr_el0");

 As these functions require an argument of STRING_CST type, there needs
 to be a check carried out by the front-end capable of picking this up.

 The introduced `check_general_builtin_call' function will be called by
 the TARGET_CHECK_BUILTIN_CALL hook whenever a call to a builtin
 belonging to the AARCH64_BUILTIN_GENERAL category is encountered,
 carrying out any appropriate checks associated with a particular
 builtin function code.
>>>
>>> Doesn't this prevent reasonable wrapping of the __builtin... names with
>>> something more palatable?  Eg:
>>>
>>> static inline __attribute__(("always_inline")) long long get_sysreg_ll
>>> (const char *regname)
>>> {
>>>     return __builtin_aarch64_rsr64 (regname);
>>> }
>>>
>>> ...
>>>     long long x = get_sysreg_ll("amcgcr_el0");
>>> ...
>>
>> I think it's a case of picking your poison.  If we didn't do this,
>> and only checked later, then it's unlikely that GCC and Clang would
>> be consistent about when a constant gets folded soon enough.
>>
>> But yeah, it means that the above would need to be a macro in C.
>> Enlightened souls using C++ could instead do:
>>
>>    template
>>    long long get_sysreg_ll()
>>    {
>>  return __builtin_aarch64_rsr64(regname);
>>    }
>>
>>    ... get_sysreg_ll<"amcgcr_el0">() ...
>>
>> Or at least I hope so.  Might be nice to have a test for this.
>>
>> Thanks,
>> Richard
> 
> As Richard Earnshaw mentioned, this does break the use of `static inline 
> __attribute__(("always_inline"))', something I had found out in my testing.  
> My chosen implementation was indeed, to quote Richard Sandiford, a case of 
> "picking your poison" to have things line up with Clang and behaving 
> consistently across optimization levels.
> 
> Relaxing the use of `TARGET_CHECK_BUILTIN_CALL' meant optimizations were 
> letting too many things through. Example:
> 
> const char *regname = "amcgcr_el0";
> long long a = __builtin_aarch64_rsr64 (regname);
> 
> gets folded to
> 
> long long a = __builtin_aarch64_rsr64 ("amcgcr_el0");
> 
> and compilation passes at -01 even though it fails at -O0.
> 
> I had, however, not given any thought to the use of a template as a valid C++ 
> alternative.
> 
> I will evaluate the use of templates and add tests accordingly.

This just seems inconsistent with all the builtins we already have that require 
literal constants for parameters.  For example (to pick just one of many), 
vshr_n_q8(), where the second parameter must be a literal value.  In practice 
we accept anything that resolves to a compile-time constant integer expression 
and rely on that to avoid having to have hundreds of macros binding the ACLE 
names to the underlying builtin equivalents.

Furthermore, I don't really see the problem with the examples you cite.  It's 
not as though the user can change these at run-time and expect to get a 
different register.

R.

> 
> Cheers,
> Victor



Re: solving file conflicts

2023-10-10 Thread Genes Lists

On 10/10/23 07:00, Erich Eckner wrote:

Hi,


...
This is (one possibility for) the second option, that I mentioned, which 
I was afraid might break a lot of stuff on my machines :-/


I meant put it in /usr/local/xxx/ not directly in /usr/local.
Or in /opt/xxx/



I see that the root cause (and the "clean fix") is to rename the backup 
script in my package. 


Yah that is the right way. What I do is name all my stuff to ensure the 
name is reasonably unique ( not UUID unique but good enough ).


For example you could simply add a prefix (or suffix) - like backup-ee.

best

gene




Re: Documenting common C/C++ options

2023-10-10 Thread Richard Earnshaw (lists) via Gcc
On 10/10/2023 11:46, Richard Earnshaw (lists) via Gcc wrote:
> On 10/10/2023 10:47, Florian Weimer via Gcc wrote:
>> Currently, -fsigned-char and -funsigned-char are only documented as C
>> language options, although they work for C++ as well (and Objective-C
>> and Objective-C++, I assume, but I have not tested this).  There does
>> not seem to be a place for this kind of options in the manual.
>>
>> The options -fshort-enums and -fshort-wchar are documented under
>> code-generation options, but this seems to be a bit of a stretch because
>> (at least for -fshort-wchar), these too seem to be more about front-end
>> behavior.
>>
>> What would be a good way to address this?
>>
>> Thanks,
>> Florian
>>
> 
> 
> All of these are ABI; so where ever it goes, it should be documented that 
> changing them will potentially cause issues with any pre-compiled object 
> files having different settings.
> 
> R.

And you can add -f[un]signed-bitfield to that list as well.

R.


Re: Documenting common C/C++ options

2023-10-10 Thread Richard Earnshaw (lists) via Gcc
On 10/10/2023 10:47, Florian Weimer via Gcc wrote:
> Currently, -fsigned-char and -funsigned-char are only documented as C
> language options, although they work for C++ as well (and Objective-C
> and Objective-C++, I assume, but I have not tested this).  There does
> not seem to be a place for this kind of options in the manual.
> 
> The options -fshort-enums and -fshort-wchar are documented under
> code-generation options, but this seems to be a bit of a stretch because
> (at least for -fshort-wchar), these too seem to be more about front-end
> behavior.
> 
> What would be a good way to address this?
> 
> Thanks,
> Florian
> 


All of these are ABI; so where ever it goes, it should be documented that 
changing them will potentially cause issues with any pre-compiled object files 
having different settings.

R.


[OE-Core][PATCH 2/2] oeqa/utils/gitarchive: ensure tag matches regex before getting its fields

2023-10-10 Thread Alexis Lothoré via lists . openembedded . org
From: Alexis Lothoré 

Whenever we ask gitarchive to retrieve test results for specific revisions,
we first do a "large" search in get_tags, which uses glob patterns with git
ls-remote, and then we filter the received tags with a regex to parse the tag
fields.
Currently gitarchive assumes that all tags returned by get_tags will match
the regex. This assumption is wrong (for example, searching for "master-next"
in get_tags may return tags like "abelloni/master-next"), which then leads
to an exception when we try to retrieve the tag fields:
Traceback (most recent call last):
  File "/home/pokybuild/yocto-worker/a-full/build/scripts/resulttool", line 78, 
in 
sys.exit(main())
 ^^
  File "/home/pokybuild/yocto-worker/a-full/build/scripts/resulttool", line 72, 
in main
ret = args.func(args, logger)
  ^^^
  File 
"/home/pokybuild/yocto-worker/a-full/build/scripts/lib/resulttool/regression.py",
 line 315, in regression_git
revs2 = gitarchive.get_test_revs(logger, repo, tag_name, 
branch=args.branch2)

^
  File 
"/home/pokybuild/yocto-worker/a-full/build/meta/lib/oeqa/utils/gitarchive.py", 
line 246, in get_test_revs
fields, runs = get_test_runs(log, repo, tag_name, **kwargs)
   
  File 
"/home/pokybuild/yocto-worker/a-full/build/meta/lib/oeqa/utils/gitarchive.py", 
line 238, in get_test_runs
groups = m.groupdict()

Fix this exception by simply skipping those additional tags, which won't
match the regex.
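The failure mode is the usual re.match() one: a tag that does not match the
pattern yields None, and calling groupdict() on None raises. A small
standalone illustration (the pattern below is a simplified stand-in, not the
exact regex gitarchive builds from its tagname format):

    import re

    # Simplified tag pattern; the real one has more fields.
    tag_re = re.compile(r"master-next/(?P<rev>[^/]+)/(?P<counter>\d+)")

    tags = [
        "master-next/123-gabc/0",            # matches
        "abelloni/master-next/456-gdef/0",   # extra prefix, no match
    ]
    for tag in tags:
        m = tag_re.match(tag)
        if not m:   # the guard added by this patch: skip non-matching tags
            continue
        print(m.groupdict())

Without the 'if not m' check, the second tag leaves m as None and the
groupdict() call raises, which is exactly the traceback above.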

Signed-off-by: Alexis Lothoré 
---
 meta/lib/oeqa/utils/gitarchive.py | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/meta/lib/oeqa/utils/gitarchive.py 
b/meta/lib/oeqa/utils/gitarchive.py
index 2fe48cdcac7f..10cb267dfa92 100644
--- a/meta/lib/oeqa/utils/gitarchive.py
+++ b/meta/lib/oeqa/utils/gitarchive.py
@@ -235,6 +235,8 @@ def get_test_runs(log, repo, tag_name, **kwargs):
 revs = []
 for tag in tags:
 m = tag_re.match(tag)
+if not m:
+continue
 groups = m.groupdict()
 revs.append([groups[f] for f in undef_fields] + [tag])
 
-- 
2.42.0





[OE-Core][PATCH 0/2] Fix regression reporting for master-next

2023-10-10 Thread Alexis Lothoré via lists . openembedded . org
There are still issues in Autobuilder when trying to generate regression
reports on master-next branch, as visible in [1]. This issue makes build
status as failed

After being finally able to replicate the issue locally (which is quite
difficult since master-next is a force-pushed branch), I observed that the
issue lies within tag searching/management in tests results repository: we
execute a "two-step" search, first with git ls-remote + glob patterns, and
then filtered with a regex. The second step may fail because some
additional tags retrieved from first step do not match exactly what is
expected by the regex. For example, tag
abelloni/master-next/69846-g01903ca0b1f7e120abd5135fb8554216ae7059c6/0
will be returned when searching for master-next tags

The two small patches in this series aim to:
1. restrict list of returned tags
2. make sure that once regex is used, the tag has matched before trying to
parse its fields

With those two patches, I have been able to properly generate the
regression report from [1] with the following command:

[1] 
https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/6025/steps/29/logs/stdio

Alexis Lothoré (2):
  oeqa/utils/gitarchive: fix tag pattern searching
  oeqa/utils/gitarchive: ensure tag matches regex before getting its
fields

 meta/lib/oeqa/utils/gitarchive.py | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

-- 
2.42.0





[OE-Core][PATCH 1/2] oeqa/utils/gitarchive: fix tag pattern searching

2023-10-10 Thread Alexis Lothoré via lists . openembedded . org
From: Alexis Lothoré 

Whenever we ask gitarchive to search for tags, we can provide it with a
pattern (containing glob patterns). However, when searching for example for
tags matching branch master-next, it can find additional tags which do not
correspond exactly to branch master-next (e.g. abelloni/master-next tags
will also match).

Prevent those additional tags from being fetched by gitarchive by using a
more specific pattern: prefix the user-provided pattern with "refs/tags/".

Signed-off-by: Alexis Lothoré 
---
 meta/lib/oeqa/utils/gitarchive.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meta/lib/oeqa/utils/gitarchive.py 
b/meta/lib/oeqa/utils/gitarchive.py
index f9c152681db7..2fe48cdcac7f 100644
--- a/meta/lib/oeqa/utils/gitarchive.py
+++ b/meta/lib/oeqa/utils/gitarchive.py
@@ -113,7 +113,7 @@ def get_tags(repo, log, pattern=None, url=None):
 # First try to fetch tags from repository configured remote
 cmd.append('origin')
 if pattern:
-cmd.append(pattern)
+cmd.append("refs/tags/"+pattern)
 try:
 tags_refs = repo.run_cmd(cmd)
 tags = ["".join(d.split()[1].split('/', 2)[2:]) for d in 
tags_refs.splitlines()]
-- 
2.42.0





Re: solving file conflicts

2023-10-10 Thread Genes Lists

On 10/10/23 02:41, Erich Eckner wrote:

Hi fellow-archers,


Hi!



I have two packages, A and B, which both provide the same file X. 

...

Without knowing more it's a bit hard to say.

For example, are A and B actually the same application but different 
versions (git vs stable)? If so, then just make the unofficial git 
version conflict with the official one.


If not as above, without knowing more it's difficult to say.

One possibility is to simply change the unofficial one to install in 
/usr/local (all of it: /usr/local/etc, /usr/local/bin, /usr/local/lib, 
/usr/local/var and so on). That way nothing will conflict.


That said, no two packages, unofficial or official, should ever provide 
the same file at the same path location, other than the case above where 
only one of the two can be installed due to conflicts=(other).


regards,

gene








Re: How to enable opencl?

2023-10-07 Thread Genes Lists

On 10/7/23 02:59, Zener wrote:

Hi.




[opencl_init] could not get platforms: Unknown OpenCL error
[opencl_init] FINALLY: opencl is NOT AVAILABLE and NOT ENABLED.



I don't use this but the docs [1] have a (long) list of requirements that 
must be met to use it.  It sounds like one or more is not satisfied.


Maybe the manual helps:


 - Check if the available graphics card comes with OpenCL support.
 - A sufficient amount of graphics memory (1GB+) needs to be available
 - If that check passes, darktable tries to set up its OpenCL environment:
   - a processing context needs to be initialized,
   - a calculation pipeline to be started,
   - OpenCL source code files (extension is .cl) need to be read and 
compiled and the included routines (OpenCL kernels) need to be prepared 
for darktable’s modules.


 If all of that completes successfully, the preparation is complete.
---

gene

[1] 
https://docs.darktable.org/usermanual/development/en/special-topics/opencl/activate-opencl/


[BlueOnyx:26538] Netcraft

2023-10-06 Thread Steve Lists via Blueonyx
Just an FYI for you all, Netcraft have been flagging the contents of the 
/error/ folder (a bunch of basic plain HTML) as phishing. They de-listed the 
ones I responded to, and I have a separate ticket with them about the issue in 
general - they have acknowledged it is a mistake.

I do wonder if that folder could be blocked in some manner by default? Doesn't 
really need to be exposed. But clearly it's also harmless!

Steve


Re: [halLEipzig] Tischfahne für OSM-Stammtisch-Treffen

2023-10-06 Thread Antonin Delpeuch (lists)

Hello,

I would find it very useful! Last time we actually had exactly the problem 
that our table was not really recognizable.

To the other HalLEipzig members: should I simply order an OSM flag for 
our Stammtisch?

By the way, I would like us to meet up again soon :)

Best regards,

Antonin

On 05/10/2023 17:57, Katja Haferkorn wrote:

Hello,

if your Stammtisch is actively running and you would like an OSM table flag, 
please read here: 
https://community.openstreetmap.org/t/tischfahne-fur-osm-stammtisch/104575

Kind regards
Katja



[OE-core] [PATCH] scripts/resulttool: do not try to parse metadata as tests results

2023-10-06 Thread Alexis Lothoré via lists . openembedded . org
From: Alexis Lothoré 

When regression report is computed during a CI build, a lot of errors
often appears regarding missing test status:

ERROR: Failed to retrieved base test case status: ptestresult.sections
ERROR: Failed to retrieved base test case status: ptestresult.sections
ERROR: Failed to retrieved base test case status: reproducible
ERROR: Failed to retrieved base test case status: reproducible.rawlogs
[...]

Those errors are caused by entries in the test results which are not actual
test results (i.e. not entries with a relevant "status" field containing a
value such as "PASSED", "FAILED", "SKIPPED", etc.) but additional data, which
depends on the log parser associated with the test, or comes from tests which
store results in a
different way. For example, the ptestresult.sections entry is generated by
the ptest log parser and can contain additional info about ptest such as
"begin", "end", "duration", "exitcode" or "timeout". Another example is a
"reproducible" section, which does not have a "status" field but rather
contains a big "files" entry containing lists of identical, missing, or
different files between two builds.

Remove those errors by adding a list of known entries which do not hold
test results as expected by resulttool, and by ignoring those keys when
encountered during test results comparison. I could also have completely
removed the warning about missing test case status, but that would
silently hide any real future issue with relevant test results
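For illustration, a trimmed-down sketch of what such a results dict can look
like and why the comparison loop trips over the metadata keys (the field
names and layout are simplified, not the exact autobuilder format):

    # Hypothetical, simplified testresults payload: real test entries carry a
    # "status" field, metadata entries such as "ptestresult.sections" do not.
    result = {
        "ptestresult.glibc.tst-foo": {"status": "PASSED"},
        "ptestresult.sections": {"glibc": {"duration": "42", "exitcode": "0"}},
        "reproducible.rawlogs": {"log": "..."},
    }

    METADATA_KEYS = ["ptestresult.sections", "reproducible.rawlogs"]

    for k, testcase in result.items():
        if k in METADATA_KEYS:      # skip known non-test entries
            continue
        if not testcase.get("status"):
            # this is where resulttool would log the missing-status error
            print("no status for", k)

Only entries that really should have a status but don't will still produce
the error, which is the behaviour the patch keeps on purpose.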

Signed-off-by: Alexis Lothoré 
---
 scripts/lib/resulttool/regression.py | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/scripts/lib/resulttool/regression.py 
b/scripts/lib/resulttool/regression.py
index 3d64b8f4af7c..e15a268c0206 100644
--- a/scripts/lib/resulttool/regression.py
+++ b/scripts/lib/resulttool/regression.py
@@ -78,6 +78,16 @@ STATUS_STRINGS = {
 "None": "No matching test result"
 }
 
+TEST_KEY_WHITELIST = [
+"ltpposixresult.rawlogs",
+"ltpposixresult.sections",
+"ltpresult.rawlogs",
+"ltpresult.sections",
+"ptestresult.sections",
+"reproducible",
+"reproducible.rawlogs"
+]
+
 def test_has_at_least_one_matching_tag(test, tag_list):
 return "oetags" in test and any(oetag in tag_list for oetag in 
test["oetags"])
 
@@ -189,6 +199,10 @@ def compare_result(logger, base_name, target_name, 
base_result, target_result):
 
 if base_result and target_result:
 for k in base_result:
+# Some entries present in test results are known not to be test
+# results but metadata about tests
+if k in TEST_KEY_WHITELIST:
+continue
 base_testcase = base_result[k]
 base_status = base_testcase.get('status')
 if base_status:
-- 
2.42.0





Re: [PATCH v2] Add a GCC Security policy

2023-10-05 Thread Richard Earnshaw (lists)
On 28/09/2023 12:55, Siddhesh Poyarekar wrote:
> +Security features implemented in GCC
> +
> +
[...]
> +
> +Similarly, GCC may transform code in a way that the correctness of
> +the expressed algorithm is preserved, but supplementary properties
> +that are not specifically expressible in a high-level language
> +are not preserved. Examples of such supplementary properties
> +include absence of sensitive data in the program's address space
> +after an attempt to wipe it, or data-independent timing of code.
> +When the source code attempts to express such properties, failure
> +to preserve them in resulting machine code is not a security issue
> +in GCC.

I think it would be worth mentioning here that compilers interpret source code 
according to an abstract machine defined by the source language.  Properties of 
a program that cannot be described in the abstract machine may not be 
translated into the generated machine code.

This is, fundamentally, describing the 'as if' rule.

R.


[oe] [meta-oe][PATCH] collectd: Use https in SRC_URI, add HOMEPAGE

2023-10-05 Thread Jörg Sommer via lists . openembedded . org
Signed-off-by: Jörg Sommer 
---
 meta-oe/recipes-extended/collectd/collectd_5.12.0.bb | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/meta-oe/recipes-extended/collectd/collectd_5.12.0.bb 
b/meta-oe/recipes-extended/collectd/collectd_5.12.0.bb
index 479c12d6a..bd4a5b3e8 100644
--- a/meta-oe/recipes-extended/collectd/collectd_5.12.0.bb
+++ b/meta-oe/recipes-extended/collectd/collectd_5.12.0.bb
@@ -1,11 +1,12 @@
 SUMMARY = "Collects and summarises system performance statistics"
 DESCRIPTION = "collectd is a daemon which collects system performance 
statistics periodically and provides mechanisms to store the values in a 
variety of ways, for example in RRD files."
+HOMEPAGE = "https://collectd.org/"
 LICENSE = "GPL-2.0-only & MIT"
 LIC_FILES_CHKSUM = "file://COPYING;md5=1bd21f19f7f0c61a7be8ecacb0e28854"
 
 DEPENDS = "rrdtool curl libpcap libxml2 yajl libgcrypt libtool lvm2"
 
-SRC_URI = "http://collectd.org/files/collectd-${PV}.tar.bz2 \
+SRC_URI = "https://collectd.org/files/collectd-${PV}.tar.bz2 \
file://collectd.init \
file://collectd.service \
file://no-gcrypt-badpath.patch \
-- 
2.34.1





Re: [PATCH7/8] vect: Add TARGET_SIMD_CLONE_ADJUST_RET_OR_PARAM

2023-10-04 Thread Andre Vieira (lists)




On 04/10/2023 11:41, Richard Biener wrote:

On Wed, 4 Oct 2023, Andre Vieira (lists) wrote:




On 30/08/2023 14:04, Richard Biener wrote:

On Wed, 30 Aug 2023, Andre Vieira (lists) wrote:


This patch adds a new target hook to enable us to adapt the types of return
and parameters of simd clones.  We use this in two ways, the first one is
to
make sure we can create valid SVE types, including the SVE type attribute,
when creating a SVE simd clone, even when the target options do not support
SVE.  We are following the same behaviour seen with x86 that creates simd
clones according to the ABI rules when no simdlen is provided, even if that
simdlen is not supported by the current target options.  Note that this
doesn't mean the simd clone will be used in auto-vectorization.


You are not documenting the bool parameter of the new hook.

What's wrong with doing the adjustment in TARGET_SIMD_CLONE_ADJUST?


simd_clone_adjust_argument_types is called after that hook, so by the time we
call TARGET_SIMD_CLONE_ADJUST the types are still in scalar, not vector.  The
same is true for the return type one.

Also the changes to the types need to be taken into consideration in
'adjustments' I think.


Nothing in the three existing implementations of TARGET_SIMD_CLONE_ADJUST
relies on this ordering I think, how about moving the hook invocation
after simd_clone_adjust_argument_types?



But that wouldn't change the 'ipa_param_body_adjustments' for when we 
have a function definition and we need to redo the body.

Richard.


PS: I hope the subject line survived, my email client is having a bit of a
wobble this morning... it's what you get for updating software :(


Re: [PATCH7/8] vect: Add TARGET_SIMD_CLONE_ADJUST_RET_OR_PARAM

2023-10-04 Thread Andre Vieira (lists)




On 30/08/2023 14:04, Richard Biener wrote:

On Wed, 30 Aug 2023, Andre Vieira (lists) wrote:


This patch adds a new target hook to enable us to adapt the types of return
and parameters of simd clones.  We use this in two ways, the first one is to
make sure we can create valid SVE types, including the SVE type attribute,
when creating a SVE simd clone, even when the target options do not support
SVE.  We are following the same behaviour seen with x86 that creates simd
clones according to the ABI rules when no simdlen is provided, even if that
simdlen is not supported by the current target options.  Note that this
doesn't mean the simd clone will be used in auto-vectorization.


You are not documenting the bool parameter of the new hook.

What's wrong with doing the adjustment in TARGET_SIMD_CLONE_ADJUST?


simd_clone_adjust_argument_types is called after that hook, so by the 
time we call TARGET_SIMD_CLONE_ADJUST the types are still in scalar, not 
vector.  The same is true for the return type one.


Also the changes to the types need to be taken into consideration in 
'adjustments' I think.


PS: I hope the subject line survived, my email client is having a bit of 
a wobble this morning... it's what you get for updating software :(


[yocto] [yocto-autobuilder-helper][PATCH 0/3] Make sure to pick tested rev as reference for regression report

2023-10-04 Thread Alexis Lothoré via lists . yoctoproject . org
Some failures have been observed on jobs targeting master-next, with the
following logs as an example:

Exception: No reference found for commit
3edb9acca18171894771c36c19b0c2e905852ce5 in /tmp/sendqaemail.dkwg__g9

See [1] for more logs; that job is trying to compare master-next results to
master results. While master-next results necessarily exist (they have been
generated in the very same build reporting the error), the same may not be
true for master: the current HEAD of master may not have been the subject of
a build that generated test results.

To fix that, this series proposes to stop blindly searching for test results
corresponding to the current HEAD of master: instead, it reads HEAD on the
master branch of the test results repository and extracts the corresponding
Poky revision from it, so the selected Poky revision is guaranteed to be both
the most recent one AND one that has corresponding test results.
The actual fix is in the last commit: the first commit re-clarifies the
naming used for the revisions being compared, and the second commit is a
small refactoring.

[1] 
https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/5973/steps/31/logs/stdio
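
As a rough sketch of the approach (not the exact implementation, see patch 3/3
below), resolving the last tested Poky revision from the test results
repository could look roughly like this in Python; the tag layout
("<branch>/<count>-g<sha>/<n>") is an assumption taken from the patches in
this series, and the repository URL is left as a parameter:

import re
import subprocess

def last_tested_revision(results_repo_url, branch):
    # List remote tags for the branch, e.g. refs/tags/master/1234-gabcdef12/0
    out = subprocess.check_output(
        ["git", "ls-remote", "--refs", "-t", results_repo_url,
         "refs/tags/" + branch + "/*"]).decode("utf-8").strip()
    latest_tag = out.splitlines()[-1].split()[1]
    # The "-g<sha>" component of the tag name encodes the tested Poky revision
    match = re.match(r"refs/tags/.*/\d+-g([a-f0-9]+)/\d+", latest_tag)
    return match.group(1) if match else None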

Alexis Lothoré (3):
  scripts/send_qa_email: re-clarify base and target revisions
  scripts/send-qa-email: define tests results repository url only once
  scripts/send_qa_email: guess latest tested revision when dealing with
branch

 scripts/send_qa_email.py  | 61 ++-
 scripts/test_send_qa_email.py | 31 ++
 scripts/utils.py  |  6 ++--
 3 files changed, 58 insertions(+), 40 deletions(-)

-- 
2.42.0





[yocto] [yocto-autobuilder-helper][PATCH 2/3] scripts/send-qa-email: define tests results repository url only once

2023-10-04 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

The test results repository URL is used at least twice, so define a constant
holding the URL instead of hardcoding it multiple times.

Signed-off-by: Alexis Lothoré 
---
 scripts/send_qa_email.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/scripts/send_qa_email.py b/scripts/send_qa_email.py
index f9a982ae9143..ac8b4716f07b 100755
--- a/scripts/send_qa_email.py
+++ b/scripts/send_qa_email.py
@@ -15,6 +15,7 @@ import logging
 
 import utils
 
+TEST_RESULTS_REPOSITORY_URL="g...@push.yoctoproject.org:yocto-testresults"
 exitcode = 0
 
 def is_release_version(version):
@@ -146,10 +147,10 @@ def send_qa_email():
 elif targetbranch:
 cloneopts = ["--branch", targetbranch]
 try:
-subprocess.check_call(["git", "clone", 
"g...@push.yoctoproject.org:yocto-testresults", tempdir, "--depth", "1"] + 
cloneopts)
+subprocess.check_call(["git", "clone", 
TEST_RESULTS_REPOSITORY_URL, tempdir, "--depth", "1"] + cloneopts)
 except subprocess.CalledProcessError:
 log.info("No comparision branch found, falling back to master")
-subprocess.check_call(["git", "clone", 
"g...@push.yoctoproject.org:yocto-testresults", tempdir, "--depth", "1"])
+subprocess.check_call(["git", "clone", 
TEST_RESULTS_REPOSITORY_URL, tempdir, "--depth", "1"])
 
 # If the base comparision branch isn't present regression 
comparision won't work
 # at least until we can tell the tool to ignore internal branch 
information
-- 
2.42.0





[yocto] [yocto-autobuilder-helper][PATCH 1/3] scripts/send_qa_email: re-clarify base and target revisions

2023-10-04 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

There are some inversions in the words used to describe the elements of
comparison for regression reporting: the main function of send_qa_email
starts by using "base" to talk about the target revision and "compare" to
talk about the reference against which it is compared. Then, later in the
script, "base" is used for the "base of comparison"/reference revision,
while the "target" branch/revision appears. This becomes quite confusing
when we need to update the script.

Re-align the wording to avoid confusion:
- always use "target" to talk about the current branch/revision of interest
  (the newest)
- always use "base" to talk about the reference branch/revision (the
  oldest), against which we want to compare the target revision

Signed-off-by: Alexis Lothoré 
---
This commit does not change any behavior in the script; it is only about
renaming variables.
---
 scripts/send_qa_email.py  | 44 ++-
 scripts/test_send_qa_email.py | 26 ++---
 scripts/utils.py  |  6 ++---
 3 files changed, 39 insertions(+), 37 deletions(-)

diff --git a/scripts/send_qa_email.py b/scripts/send_qa_email.py
index 54b701f409bf..f9a982ae9143 100755
--- a/scripts/send_qa_email.py
+++ b/scripts/send_qa_email.py
@@ -52,20 +52,22 @@ def get_previous_tag(targetrepodir, version):
 defaultbaseversion, _, _ = 
utils.get_version_from_string(subprocess.check_output(["git", "describe", 
"--abbrev=0"], cwd=targetrepodir).decode('utf-8').strip())
 return utils.get_tag_from_version(defaultbaseversion, None)
 
-def get_regression_base_and_target(basebranch, comparebranch, release, 
targetrepodir):
-if not basebranch:
-# Basebranch/comparebranch is an arbitrary configuration (not defined 
in config.json): do not run regression reporting
+def get_regression_base_and_target(targetbranch, basebranch, release, 
targetrepodir):
+if not targetbranch:
+# Targetbranch/basebranch is an arbitrary configuration (not defined 
in config.json): do not run regression reporting
 return None, None
 
 if is_release_version(release):
-# We are on a release: ignore comparebranch (which is very likely 
None), regression reporting must be done against previous tag
-return get_previous_tag(targetrepodir, release), basebranch
-elif comparebranch:
-# Basebranch/comparebranch is defined in config.json: regression 
reporting must be done against branches as defined in config.json
-return comparebranch, basebranch
+# We are on a release: ignore basebranch (which is very likely None),
+# regression reporting must be done against previous tag
+return get_previous_tag(targetrepodir, release), targetbranch
+elif basebranch:
+# Targetbranch/basebranch is defined in config.json: regression
+# reporting must be done against branches as defined in config.json
+return basebranch, targetbranch
 
 #Default case: return previous tag as base
-return get_previous_tag(targetrepodir, release), basebranch
+return get_previous_tag(targetrepodir, release), targetbranch
 
 def generate_regression_report(querytool, targetrepodir, base, target, 
resultdir, outputdir, log):
 log.info(f"Comparing {target} to {base}")
@@ -130,7 +132,7 @@ def send_qa_email():
 branch = repos['poky']['branch']
 repo = repos['poky']['url']
 
-basebranch, comparebranch = utils.getcomparisonbranch(ourconfig, repo, 
branch)
+targetbranch, basebranch = utils.getcomparisonbranch(ourconfig, repo, 
branch)
 report = subprocess.check_output([resulttool, "report", 
args.results_dir])
 with open(args.results_dir + "/testresult-report.txt", "wb") as f:
 f.write(report)
@@ -139,10 +141,10 @@ def send_qa_email():
 try:
 utils.printheader("Importing test results repo data")
 cloneopts = []
-if comparebranch:
-cloneopts = ["--branch", comparebranch]
-elif basebranch:
+if basebranch:
 cloneopts = ["--branch", basebranch]
+elif targetbranch:
+cloneopts = ["--branch", targetbranch]
 try:
 subprocess.check_call(["git", "clone", 
"g...@push.yoctoproject.org:yocto-testresults", tempdir, "--depth", "1"] + 
cloneopts)
 except subprocess.CalledProcessError:
@@ -151,30 +153,30 @@ def send_qa_email():
 
 # If the base comparision branch isn't present regression 
comparision won't work
 # at least until we can tell the tool to ignore internal branch 
information
-if basebranch:
+if targetbranch:
 try:
-subprocess.check_call(["git", "rev-parse", "--verify", 
basebranch], cwd=tempdir)
+subprocess.check_call(["git", "rev-parse", "--verify", 
targetbranch], cwd=tempdir)
 except 

[yocto] [yocto-autobuilder-helper][PATCH 3/3] scripts/send_qa_email: guess latest tested revision when dealing with branch

2023-10-04 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

It has been observed that regression report generation may fail when the
comparison base is a branch (e.g. master), because we cannot find any test
results associated with the branch HEAD. This is especially true for
branches which change often, because not all revisions on those branches
are subject to CI tests.

To fix that, whenever we are not dealing with a release, parse the latest
tested revision on the target branch in the test results repository in
order to guess the corresponding revision in the poky repository, so we are
sure that the revisions passed to yocto_testresults_query have indeed been
tested and the regression report can be generated.

Signed-off-by: Alexis Lothoré 
---
 scripts/send_qa_email.py  | 22 +-
 scripts/test_send_qa_email.py | 11 +++
 2 files changed, 24 insertions(+), 9 deletions(-)

diff --git a/scripts/send_qa_email.py b/scripts/send_qa_email.py
index ac8b4716f07b..14446a274e90 100755
--- a/scripts/send_qa_email.py
+++ b/scripts/send_qa_email.py
@@ -53,7 +53,17 @@ def get_previous_tag(targetrepodir, version):
 defaultbaseversion, _, _ = 
utils.get_version_from_string(subprocess.check_output(["git", "describe", 
"--abbrev=0"], cwd=targetrepodir).decode('utf-8').strip())
 return utils.get_tag_from_version(defaultbaseversion, None)
 
-def get_regression_base_and_target(targetbranch, basebranch, release, 
targetrepodir):
+def get_last_tested_rev_on_branch(branch, log):
+# Fetch latest test results revision on corresponding branch in test
+# results repository
+tags_list = subprocess.check_output(["git", "ls-remote", "--refs", "-t", 
TEST_RESULTS_REPOSITORY_URL, "refs/tags/" + branch + 
"/*"]).decode('utf-8').strip()
+latest_test_tag=tags_list.splitlines()[-1].split()[1]
+# From test results tag, extract Poky revision
+tested_revision = re.match('refs\/tags\/.*\/\d+-g([a-f0-9]+)\/\d', 
latest_test_tag).group(1)
+log.info(f"Last tested revision on branch {branch} is {tested_revision}")
+return tested_revision
+
+def get_regression_base_and_target(targetbranch, basebranch, release, 
targetrepodir, log):
 if not targetbranch:
 # Targetbranch/basebranch is an arbitrary configuration (not defined 
in config.json): do not run regression reporting
 return None, None
@@ -63,9 +73,11 @@ def get_regression_base_and_target(targetbranch, basebranch, 
release, targetrepo
 # regression reporting must be done against previous tag
 return get_previous_tag(targetrepodir, release), targetbranch
 elif basebranch:
-# Targetbranch/basebranch is defined in config.json: regression
-# reporting must be done against branches as defined in config.json
-return basebranch, targetbranch
+# Basebranch/targetbranch are defined in config.json: regression
+# reporting must be done between latest test result available on base 
branch
+# and latest result on targetbranch
+latest_tested_rev_on_basebranch = 
get_last_tested_rev_on_branch(basebranch, log)
+return latest_tested_rev_on_basebranch, targetbranch
 
 #Default case: return previous tag as base
 return get_previous_tag(targetrepodir, release), targetbranch
@@ -177,7 +189,7 @@ def send_qa_email():
 log.warning("Test results not published on release version. 
Faulty AB configuration ?")
 
 utils.printheader("Processing regression report")
-regression_base, regression_target = 
get_regression_base_and_target(targetbranch, basebranch, args.release, 
targetrepodir)
+regression_base, regression_target = 
get_regression_base_and_target(targetbranch, basebranch, args.release, 
targetrepodir, log)
 if regression_base and regression_target:
 generate_regression_report(querytool, targetrepodir, 
regression_base, regression_target, tempdir, args.results_dir, log)
 
diff --git a/scripts/test_send_qa_email.py b/scripts/test_send_qa_email.py
index 74d60d55655d..5509b3c2510e 100755
--- a/scripts/test_send_qa_email.py
+++ b/scripts/test_send_qa_email.py
@@ -11,7 +11,10 @@ import os
 import sys
 import unittest
 import send_qa_email
+import logging
 
+logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")
+log = logging.getLogger('send-qa-email')
 
 class TestVersion(unittest.TestCase):
 test_data_get_version = [
@@ -45,9 +48,9 @@ class TestVersion(unittest.TestCase):
 {"name": "Older release", "input": {"targetbranch": "kirkstone",
 "basebranch": None, "release": 
"yocto-4.0.8.rc2"}, "expected": ("yocto-4.0.7", "kirkstone")},
 {"name": "Master Next", "input": {"targetbranch": "master-next",
-  "basebranch": "master", "release": 
None}, "expected": ("master", "master-next")},
+  "basebranch": "master", "release": 
None}, "expected": ("LAST_TESTED_REV", 

Re: maximum ipv4 bgp prefix length of /24 ?

2023-10-03 Thread Justin Wilson (Lists)
I think it is going to have to happen.  We have several folks on the IX and 
various consulting clients who only need 3-6 Ips but have to burn a full /24 to 
participate in BGP. I wrote a blog post awhile back on this topic 
https://blog.j2sw.com/data-center/unpopular-opinion-bgp-should-accept-smaller-than-a-24/




Justin Wilson
j...@mtin.net

—
https://j2sw.com (AS399332)
https://blog.j2sw.com - Podcast and Blog

> On Sep 30, 2023, at 1:48 PM, Randy Bush  wrote:
> 
>> About 60% of the table is /24 routes.
>> Just going to /25 will probably double the table size.
> 
> or maybe just add 60%, not 100%.  and it would take time.
> 
> agree it would be quite painful.  would rather not go there.  sad to
> say, i suspect some degree of lengthening is inevitable.  we have
> ourselves to blame; but blame does not move packets.
> 
> randy, who was in the danvers cabal for the /19 agreement
> 



Re: Check that passes do not forget to define profile

2023-10-03 Thread Andre Vieira (lists)

Hi Honza,

My current patch set for AArch64 VLA omp codegen started failing on
gcc.dg/gomp/pr87898.c after this. I traced it back to
'move_sese_region_to_fn' in tree-cfg.cc not setting the count for the bb
created.


I was able to 'fix' it locally by setting the count of the new bb to the
accumulation of e->count () of all the entry_edges (if initialized).
I'm however not even close to certain that's the right approach;
patch attached for illustration.


Kind regards,
Andre

On 24/08/2023 14:14, Jan Hubicka via Gcc-patches wrote:

Hi,
this patch extends verifier to check that all probabilities and counts are
initialized if profile is supposed to be present.  This is a bit complicated
by the posibility that we inline !flag_guess_branch_probability function
into function with profile defined and in this case we need to stop
verification.  For this reason I added flag to cfg structure tracking this.

Bootstrapped/regtested x86_64-linux, comitted.

gcc/ChangeLog:

* cfg.h (struct control_flow_graph): New field full_profile.
* auto-profile.cc (afdo_annotate_cfg): Set full_profile to true.
* cfg.cc (init_flow): Set full_profile to false.
* graphite.cc (graphite_transform_loops): Set full_profile to false.
* lto-streamer-in.cc (input_cfg): Initialize full_profile flag.
* predict.cc (pass_profile::execute): Set full_profile to true.
* symtab-thunks.cc (expand_thunk): Set full_profile to true.
* tree-cfg.cc (gimple_verify_flow_info): Verify that profile is full
if full_profile is set.
* tree-inline.cc (initialize_cfun): Initialize full_profile.
(expand_call_inline): Combine full_profile.


diff --git a/gcc/auto-profile.cc b/gcc/auto-profile.cc
index e3af3555e75..ff3b763945c 100644
--- a/gcc/auto-profile.cc
+++ b/gcc/auto-profile.cc
@@ -1578,6 +1578,7 @@ afdo_annotate_cfg (const stmt_set _stmts)
  }
update_max_bb_count ();
profile_status_for_fn (cfun) = PROFILE_READ;
+  cfun->cfg->full_profile = true;
if (flag_value_profile_transformations)
  {
gimple_value_profile_transformations ();
diff --git a/gcc/cfg.cc b/gcc/cfg.cc
index 9eb9916f61a..b7865f14e7f 100644
--- a/gcc/cfg.cc
+++ b/gcc/cfg.cc
@@ -81,6 +81,7 @@ init_flow (struct function *the_fun)
  = ENTRY_BLOCK_PTR_FOR_FN (the_fun);
the_fun->cfg->edge_flags_allocated = EDGE_ALL_FLAGS;
the_fun->cfg->bb_flags_allocated = BB_ALL_FLAGS;
+  the_fun->cfg->full_profile = false;
  }
  
  /* Helper function for remove_edge and free_cffg.  Frees edge structure
diff --git a/gcc/cfg.h b/gcc/cfg.h
index a0e944979c8..53e2553012c 100644
--- a/gcc/cfg.h
+++ b/gcc/cfg.h
@@ -78,6 +78,9 @@ struct GTY(()) control_flow_graph {
/* Dynamically allocated edge/bb flags.  */
int edge_flags_allocated;
int bb_flags_allocated;
+
+  /* Set if the profile is computed on every edge and basic block.  */
+  bool full_profile;
  };
  
  
diff --git a/gcc/graphite.cc b/gcc/graphite.cc

index 19f8975ffa2..2b387d5b016 100644
--- a/gcc/graphite.cc
+++ b/gcc/graphite.cc
@@ -512,6 +512,8 @@ graphite_transform_loops (void)
  
if (changed)

  {
+  /* FIXME: Graphite does not update profile meaningfully currently.  */
+  cfun->cfg->full_profile = false;
cleanup_tree_cfg ();
profile_status_for_fn (cfun) = PROFILE_ABSENT;
release_recorded_exits (cfun);
diff --git a/gcc/lto-streamer-in.cc b/gcc/lto-streamer-in.cc
index 0cce14414ca..d3128fcebe4 100644
--- a/gcc/lto-streamer-in.cc
+++ b/gcc/lto-streamer-in.cc
@@ -1030,6 +1030,7 @@ input_cfg (class lto_input_block *ib, class data_in 
*data_in,
basic_block p_bb;
unsigned int i;
int index;
+  bool full_profile = false;
  
init_empty_tree_cfg_for_function (fn);
  
@@ -1071,6 +1072,8 @@ input_cfg (class lto_input_block *ib, class data_in *data_in,

  data_in->location_cache.input_location_and_block (>goto_locus,
, ib, data_in);
  e->probability = profile_probability::stream_in (ib);
+ if (!e->probability.initialized_p ())
+   full_profile = false;
  
  	}
  
@@ -1145,6 +1148,7 @@ input_cfg (class lto_input_block *ib, class data_in *data_in,
  
/* Rebuild the loop tree.  */

flow_loops_find (loops);
+  cfun->cfg->full_profile = full_profile;
  }
  
  
diff --git a/gcc/predict.cc b/gcc/predict.cc

index 5a1a561cc24..396746cbfd1 100644
--- a/gcc/predict.cc
+++ b/gcc/predict.cc
@@ -4131,6 +4131,7 @@ pass_profile::execute (function *fun)
  scev_initialize ();
  
tree_estimate_probability (false);

+  cfun->cfg->full_profile = true;
  
if (nb_loops > 1)

  scev_finalize ();
diff --git a/gcc/symtab-thunks.cc b/gcc/symtab-thunks.cc
index 4c04235c41b..23ead0d2138 100644
--- a/gcc/symtab-thunks.cc
+++ b/gcc/symtab-thunks.cc
@@ -648,6 +648,7 @@ expand_thunk (cgraph_node *node, bool output_asm_thunks,
  ? PROFILE_READ : PROFILE_GUESSED;
/* FIXME: C++ FE 

Re: [PATCH 6/8] vect: Add vector_mode paramater to simd_clone_usable

2023-09-28 Thread Andre Vieira (lists)




On 31/08/2023 07:39, Richard Biener wrote:

On Wed, Aug 30, 2023 at 5:02 PM Andre Vieira (lists)
 wrote:




On 30/08/2023 14:01, Richard Biener wrote:

On Wed, Aug 30, 2023 at 11:15 AM Andre Vieira (lists) via Gcc-patches
 wrote:


This patch adds a machine_mode parameter to the TARGET_SIMD_CLONE_USABLE
hook to enable rejecting SVE modes when the target architecture does not
support SVE.


How does the graph node of the SIMD clone lack this information?  That is, it
should have information on the types (and thus modes) for all formal arguments
and return values already, no?  At least the target would know how to
instantiate
it if it's not readily available at the point of use.



Yes it does, but that's the modes the simd clone itself uses, it does
not know what vector_mode we are currently vectorizing for. Which is
exactly why we need the vinfo's vector_mode to make sure the simd clone
and its types are compatible with the vector mode.

In practice, to make sure that SVE simd clones are only used in loops
being vectorized for SVE modes. Having said that... I just realized that
the simdlen check already takes care of that currently...

by simdlen check I mean the one that writes off simdclones that match:
  if (!constant_multiple_p (vf, n->simdclone->simdlen, _calls)

However, when using -msve-vector-bits this will become an issue, as the
VF will be constant and we will match NEON simdclones.  This requires
some further attention though given that we now also reject the use of
SVE simdclones when using -msve-vector-bits, and I'm not entirely sure
we should...


Hmm, but vectorizable_simdclone should check for compatible types here
and if they are compatible why should we reject them?  Are -msve-vector-bits
"SVE" modes different from "NEON" modes?  I suppose not, because otherwise
the type compatibility check would say incompatible.

Prior to the transformation we do all checks on the original scalar values,
not the vector types. But I do believe you are right in that we don't
need to pass the vector_mode. The simdlen check should be enough, and if
the length is the same or a multiple, the rest of the code should be
able to deal with that and any conversions when dealing with things like
SVE types that require the attribute.


I'll update the patch series soon and after that I'll look at how this 
reacts to -msve-vector-bits in more detail.


Thanks,
Andre


Re: [PING][PATCH 2/2] arm: Add support for MVE Tail-Predicated Low Overhead Loops

2023-09-28 Thread Andre Vieira (lists)

Hi,

On 14/09/2023 13:10, Kyrylo Tkachov via Gcc-patches wrote:

Hi Stam,





The arm parts look sensible but we'd need review for the df-core.h and 
df-core.cc changes.
Maybe Jeff can help or can recommend someone to take a look?
Thanks,
Kyrill



FWIW the changes LGTM, if we don't want these in df-core we can always 
implement the extra utility locally. It's really just a helper function 
to check if df_bb_regno_first_def_find and df_bb_regno_last_def_find 
yield the same result, meaning we only have a single definition.


Kind regards,
Andre


Re: [PATCH] vect, omp: inbranch simdclone dropping const

2023-09-27 Thread Andre Vieira (lists)



On 26/09/2023 17:37, Andrew Stubbs wrote:

I don't have authority to approve anything, but here's a review anyway.

Thanks for working on this.


Thank you for reviewing and apologies for the mess of a patch, may have 
rushed it ;)


diff --git a/gcc/testsuite/gcc.dg/vect/vect-simd-clone-19.c 
b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-19.c

new file mode 100644
index 
..09127b8cb6f2e3699b6073591f58be7047330273

--- /dev/null
+++ b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-19.c
@@ -0,0 +1,23 @@
+/* { dg-require-effective-target vect_simd_clones } */
+/* { dg-do compile } */
+/* { dg-additional-options "-fopenmp-simd" } */
+


Do you need -fopenmp-simd for this?

Nope, I keep forgetting that you only need it for pragmas.

Dealt with the other comments too.

Any thoughts on changing gimple_call_internal_fn instead? My main
argument against it is that IFN_MASK_CALL should not appear outside of
ifconvert and the vectorizer. On the other hand, we may inspect the flags
elsewhere in the vectorizer (now or in the future), and changing
gimple_call_internal_fn would prevent the need to handle the IFN
separately elsewhere.


Kind Regards,
Andre

diff --git a/gcc/testsuite/gcc.dg/vect/vect-simd-clone-19.c 
b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-19.c
new file mode 100644
index 
..e7ed56ca75470464307d0d266dacfa0d8d6e43c1
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-19.c
@@ -0,0 +1,22 @@
+/* { dg-require-effective-target vect_simd_clones } */
+/* { dg-do compile } */
+
+int __attribute__ ((__simd__, const)) fn (int);
+
+void test (int * __restrict__ a, int * __restrict__ b, int n)
+{
+  for (int i = 0; i < n; ++i)
+{
+  int a_;
+  if (b[i] > 0)
+a_ = fn (b[i]);
+  else
+a_ = b[i] + 5;
+  a[i] = a_;
+}
+}
+
+/* { dg-final { scan-tree-dump-not {loop contains function calls or data 
references} "vect" } } */
+
+/* The LTO test produces two dump files and we scan the wrong one.  */
+/* { dg-skip-if "" { *-*-* } { "-flto" } { "" } } */
diff --git a/gcc/tree-data-ref.cc b/gcc/tree-data-ref.cc
index 
6d3b7c2290e4db9c1168a4c763facb481157c97c..689aaeed72282bb0da2a17e19fb923a06e8d62fa
 100644
--- a/gcc/tree-data-ref.cc
+++ b/gcc/tree-data-ref.cc
@@ -100,6 +100,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "vr-values.h"
 #include "range-op.h"
 #include "tree-ssa-loop-ivopts.h"
+#include "calls.h"
 
 static struct datadep_stats
 {
@@ -5816,6 +5817,15 @@ get_references_in_stmt (gimple *stmt, vec *references)
}
  case IFN_MASK_LOAD:
  case IFN_MASK_STORE:
+ break;
+ case IFN_MASK_CALL:
+   {
+ tree orig_fndecl
+   = gimple_call_addr_fndecl (gimple_call_arg (stmt, 0));
+ if (!orig_fndecl
+ || (flags_from_decl_or_type (orig_fndecl) & ECF_CONST) == 0)
+   clobbers_memory = true;
+   }
break;
  default:
clobbers_memory = true;
@@ -5852,7 +5862,7 @@ get_references_in_stmt (gimple *stmt, vec *references)
 }
   else if (stmt_code == GIMPLE_CALL)
 {
-  unsigned i, n;
+  unsigned i = 0, n;
   tree ptr, type;
   unsigned int align;
 
@@ -5879,13 +5889,16 @@ get_references_in_stmt (gimple *stmt, vec *references)
   ptr);
references->safe_push (ref);
return false;
+ case IFN_MASK_CALL:
+   i = 1;
+   gcc_fallthrough ();
  default:
break;
  }
 
   op0 = gimple_call_lhs (stmt);
   n = gimple_call_num_args (stmt);
-  for (i = 0; i < n; i++)
+  for (; i < n; i++)
{
  op1 = gimple_call_arg (stmt, i);
 


Re: [PATCH] vect, omp: inbranch simdclone dropping const

2023-09-26 Thread Andre Vieira (lists)




On 26/09/2023 21:26, Bernhard Reutner-Fischer wrote:

On 26 September 2023 18:46:11 CEST, Tobias Burnus  
wrote:

On 26.09.23 18:37, Andrew Stubbs wrote:

If the fall-through is deliberate please add a /* FALLTHROUGH */
comment (or whatever spelling disables the warning).


It's: gcc_fallthrough ();

Which gets converted to "__attribute__((fallthrough))"; it could also
expand to "[[fallthrough]]" but that's C++17 (and, also, an C23 feature
- albeit so far unimplemented in gcc).


OT
IIRC we do parse comments for a number of spellings of the hint by the user 
that the fallthrough is deliberate:

https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html

See the numerous levels of -Wimplicit-fallthrough=n, the default being 3.

---8<---
-Wimplicit-fallthrough=3 case sensitively matches one of the following regular 
expressions:
-fallthrough
@fallthrough@
lint -fallthrough[ \t]*
[ \t.!]*(ELSE,? |INTENTIONAL(LY)? )?
FALL(S | |-)?THR(OUGH|U)[ \t.!]*(-[^\n\r]*)?
[ \t.!]*(Else,? |Intentional(ly)? )?
Fall((s | |-)[Tt]|t)hr(ough|u)[ \t.!]*(-[^\n\r]*)?
[ \t.!]*([Ee]lse,? |[Ii]ntentional(ly)? )?
fall(s | |-)?thr(ough|u)[ \t.!]*(-[^\n\r]*)?
---8<---

Just FWIW.
thanks,


I was surprised my bootstrap didn't catch this; I thought we generated
warnings in such cases, and bootstrap builds with -Werror, does it not?


Re: [PATCH] vect, omp: inbranch simdclone dropping const

2023-09-26 Thread Andre Vieira (lists)




On 26/09/2023 17:48, Jakub Jelinek wrote:

On Tue, Sep 26, 2023 at 05:24:26PM +0100, Andre Vieira (lists) wrote:

@@ -5816,6 +5817,18 @@ get_references_in_stmt (gimple *stmt, vec *references)
}
  case IFN_MASK_LOAD:
  case IFN_MASK_STORE:
+ case IFN_MASK_CALL:
+   {
+ tree orig_fndecl
+   = gimple_call_addr_fndecl (gimple_call_arg (stmt, 0));
+ if (!orig_fndecl)
+   {
+ clobbers_memory = true;
+ break;
+   }
+ if ((flags_from_decl_or_type (orig_fndecl) & ECF_CONST) == 0)
+   clobbers_memory = true;
+   }


Should IFN_MASK_LOAD/STORE really go through this?  I thought those have
first argument address of the memory being conditionally loaded/stored, not
function address.


No it shouldn't, my bad...
Surprisingly, testing didn't catch it though; I'm guessing
gimple_call_addr_fndecl just returned null every time for those. I'll
clean it up.


[PATCH] vect, omp: inbranch simdclone dropping const

2023-09-26 Thread Andre Vieira (lists)
The const attribute is ignored when simd clones are used inbranch. This
is due to the fact that when analyzing a MASK_CALL we were not looking
at the targeted function for flags, but instead only at the internal
function call itself.
This patch adds code to make sure we look at the target function to
check for the const attribute, and enables the autovectorization of
inbranch const simd clones without needing the loop to be adorned with
the 'openmp simd' pragma.


I'm not sure how to add new includes to the ChangeLog. Which brings me
to another point: I contemplated changing gimple_call_flags to do the
checking of the flags of the first argument of IFN_MASK_CALL itself rather
than only calling internal_fn_flags on gimple_call_internal_fn (stmt),
but that might be a bit too intrusive, opinions welcome :)


Bootstrapped and regression tested on aarch64-unknown-linux-gnu and 
x86_64-pc-linux-gnu.


Is this OK for trunk?

gcc/ChangeLog:

* tree-vect-data-ref.cc (include calls.h): Add new include.
(get_references_in_stmt): Correctly handle IFN_MASK_CALL.

gcc/testsuite/ChangeLog:

* gcc.dg/vect/vect-simd-clone-19.c: New test.diff --git a/gcc/testsuite/gcc.dg/vect/vect-simd-clone-19.c 
b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-19.c
new file mode 100644
index 
..09127b8cb6f2e3699b6073591f58be7047330273
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-19.c
@@ -0,0 +1,23 @@
+/* { dg-require-effective-target vect_simd_clones } */
+/* { dg-do compile } */
+/* { dg-additional-options "-fopenmp-simd" } */
+
+int __attribute__ ((__simd__, const)) fn (int);
+
+void test (int * __restrict__ a, int * __restrict__ b, int n)
+{
+  for (int i = 0; i < n; ++i)
+{
+  int a_;
+  if (b[i] > 0)
+a_ = fn (b[i]);
+  else
+a_ = b[i] + 5;
+  a[i] = a_;
+}
+}
+
+/* { dg-final { scan-tree-dump-not {loop contains function calls or data 
references} "vect" } } */
+
+/* The LTO test produces two dump files and we scan the wrong one.  */
+/* { dg-skip-if "" { *-*-* } { "-flto" } { "" } } */
diff --git a/gcc/tree-data-ref.cc b/gcc/tree-data-ref.cc
index 
6d3b7c2290e4db9c1168a4c763facb481157c97c..2926c3925ee7897fef53c16cfd1d19d23dbf05f3
 100644
--- a/gcc/tree-data-ref.cc
+++ b/gcc/tree-data-ref.cc
@@ -100,6 +100,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "vr-values.h"
 #include "range-op.h"
 #include "tree-ssa-loop-ivopts.h"
+#include "calls.h"
 
 static struct datadep_stats
 {
@@ -5816,6 +5817,18 @@ get_references_in_stmt (gimple *stmt, vec *references)
}
  case IFN_MASK_LOAD:
  case IFN_MASK_STORE:
+ case IFN_MASK_CALL:
+   {
+ tree orig_fndecl
+   = gimple_call_addr_fndecl (gimple_call_arg (stmt, 0));
+ if (!orig_fndecl)
+   {
+ clobbers_memory = true;
+ break;
+   }
+ if ((flags_from_decl_or_type (orig_fndecl) & ECF_CONST) == 0)
+   clobbers_memory = true;
+   }
break;
  default:
clobbers_memory = true;
@@ -5852,7 +5865,7 @@ get_references_in_stmt (gimple *stmt, vec *references)
 }
   else if (stmt_code == GIMPLE_CALL)
 {
-  unsigned i, n;
+  unsigned i  = 0, n;
   tree ptr, type;
   unsigned int align;
 
@@ -5879,13 +5892,15 @@ get_references_in_stmt (gimple *stmt, vec *references)
   ptr);
references->safe_push (ref);
return false;
+ case IFN_MASK_CALL:
+   i = 1;
  default:
break;
  }
 
   op0 = gimple_call_lhs (stmt);
   n = gimple_call_num_args (stmt);
-  for (i = 0; i < n; i++)
+  for (; i < n; i++)
{
  op1 = gimple_call_arg (stmt, i);
 


Re: [PATCH] AArch64: Remove BTI from outline atomics

2023-09-26 Thread Richard Earnshaw (lists)
On 26/09/2023 14:46, Wilco Dijkstra wrote:
> 
> The outline atomic functions have hidden visibility and can only be called
> directly.  Therefore we can remove the BTI at function entry.  This improves
> security by reducing the number of indirect entry points in a binary.
> The BTI markings on the objects are still emitted.

Please can you add a comment to that effect in the source code.  OK with that 
change.

R.

> 
> Passes regress, OK for commit?
> 
> libgcc/ChangeLog:
>     * config/aarch64/lse.S (BTI_C): Remove define.
> 
> ---
> 
> diff --git a/libgcc/config/aarch64/lse.S b/libgcc/config/aarch64/lse.S
> index 
> ba05047ff02b6fc5752235bffa924fc4a2f48c04..dbfb83fb09083641bf06c50b631a5f27bdf61b80
>  100644
> --- a/libgcc/config/aarch64/lse.S
> +++ b/libgcc/config/aarch64/lse.S
> @@ -163,8 +163,6 @@ see the files COPYING3 and COPYING.RUNTIME respectively.  
> If not, see
>  #define tmp3    14
>  #define tmp4    13
>  
> -#define BTI_C  hint    34
> -
>  /* Start and end a function.  */
>  .macro  STARTFN name
>  .text
> @@ -174,7 +172,6 @@ see the files COPYING3 and COPYING.RUNTIME respectively.  
> If not, see
>  .type   \name, %function
>  .cfi_startproc
>  \name:
> -   BTI_C
>  .endm
>  
>  .macro  ENDFN name



Re: Any though of having archlinux-keyring-wkd-sync check for iptables and recommend rule?

2023-09-24 Thread Genes Lists

On 9/24/23 07:22, Genes Lists wrote:


  nft -c nftables.conf


typo - should be:

nft -c -f nftables.conf

gene


Re: Any though of having archlinux-keyring-wkd-sync check for iptables and recommend rule?

2023-09-24 Thread Genes Lists

On 9/24/23 02:52, David C. Rankin wrote:

On 9/23/23 12:51, Christian wrote:




In addition to the workstation (single interface) nftables example, I 
have just uploaded an example of nftables firewall rules. i.e. for a 
router with 2 interfaces that sits between the internet and internal 
network.


This supports services provided by firewall itself (DNS or ssh etc) as 
well as forwarded services to servers on internal network (web server, 
ssh, vpn etc).


It has blocks and whitelist - and includes both inet and netdev blocks.

I hand-edited a fully working firewall for this example and hope it's
useful. After making edits, and before trying it, please confirm there are
no typos etc. by running a check:


 nft -c nftables.conf

 The nftables rules and sample files containing sets of CIDR blocks for 
whitelist or blocks are included. Obviously these will need editing.
The set files are designed to be easily generated from a script - after 
any changes to the sets, reload the rules to pick up the new set data.


It's available in my gh blog area in the nftables/firewall directory:

https://github.com/gene-git/blog/tree/master/nftables

Hope you find this helpful. And if you find typos or boo boos please let 
me know!


thanks

gene


Re: Any though of having archlinux-keyring-wkd-sync check for iptables and recommend rule?

2023-09-23 Thread Genes Lists

On 9/23/23 13:51, Christian wrote:
...


In case of interest, the nft rules that I shared with David previously 
are available here [1].


This is a sample nftables ruleset for a laptop or workstation.

It allows established / related packets to come back. These packets are 
returned after a connection is initiated from the local machine. e.g. 
going to a website, or sending an icmp ping.


It supports local services running on the same machine where the nftables
rules are installed (these are services which are available to the internet):


 - DNS server
 - SSH server
 - WEB server for http and https including http/2 and http/3.
   Uncomment to turn on.

It also allows blocking from a list of CIDR addresses. This prevents any 
IP from the blocked list any access to the above services offered on the 
machine.


N.B. Replies from these "blocked" IPs are still permitted to come back
in if they are related/established. E.g. if you go to a website hosted at
a blocked IP, everything should work normally.


The reason this works is that these 'blocks' are done in the 'inet'
table. If you want to block inbound SYN packets and, in addition, block
established/related traffic, then add similar blocks for the ingress hook
in the 'netdev' table.


Adding ingress blocks prevents any packets from those IPs from getting
in - regardless of whether they are related/established. It is very early
in the packet flow - see [2] for how packets flow in nftables. The ingress
hook is not available in (legacy) iptables, last I checked.



gene

[1] https://github.com/gene-git/blog/tree/master/nftables
[2] https://wiki.nftables.org/wiki-nftables/index.php/Netfilter_hooks





Re: [gentoo-user] Password questions, looking for opinions. cryptsetup question too.

2023-09-23 Thread Wols Lists

On 19/09/2023 10:13, Dale wrote:

That's a interesting way to come up with passwords tho.  I've seen that
is a few whodunit type shows.  Way back in the old days, they had some
interesting ways of coding messages.  Passwords are sort of similar.


Back when we were busy conquering India ...

The story goes that a General was trying to send a message back about his
latest conquest, but he didn't want to use codes because he had a suspicion
the Indians could read them if his messenger was captured.


It appears the story is apocryphal, but the message he sent read "peccavi".

https://www.ft.com/content/49036e66-ac48-11e8-94bd-cba20d67390c

Cheers,
Wol



Re: [gentoo-user] Password questions, looking for opinions. cryptsetup question too.

2023-09-23 Thread Wols Lists

On 20/09/2023 19:05, Frank Steinmetzger wrote:

In principle, a repeated space character in your passphrase could help
reduce the computational burden of an offline brute force attack, by e.g.
helping an attacker to identify the number of individual words in a
passphrase.



Due to the rotation, the Enigma encoded each subsequent letter differently,
even if the same one repeated, which was (one of) the big strengths of the
Enigma cipher. The flaws were elsewhere, for example that a character could
never be encrypted onto itself due to the internal wiring and certain
message parts were always the same, like message headers and greetings.


And, as always, one of the biggest weaknesses was the operator.

Enigma had three (or in later versions four) rotors. The code book 
specified the INITIAL "settings of the day" for those rotors. What was 
*supposed* to happen was the operator was supposed to select a random 
three/four character string, transmit that string twice, then reset the 
rotors to that string before carrying on. So literally no two messages 
were supposed to have the same settings beyond the first six characters.


Except that a lot of operators re-used the same characters time and time 
again. So if you got a message from an operator you recognised, you 
might well know his "seventh character reset". That saved a lot of grief 
trying to crack which of the several rotors were "the rotors of the day".


And given that, for a large chunk of the war, the radio operators were 
"chatty", you generally got a lot of six-character strings for which you 
had a damn good idea what the plain text was.


So even where some of the operators were seriously crypto-aware and 
careful, once you'd cracked the rotors and initial settings from the 
careless, you could read every message sent by everyone (using those 
settings) that day.


Along with other things like RDF giving subs positions away (although 
I'm not quite sure how much we had good RDF and how much it was a cover 
for us reading their location in status reports), it certainly helped us 
loads hunting them down.


Cheers,
Wol



Re: [gentoo-user] Password questions, looking for opinions. cryptsetup question too.

2023-09-23 Thread Wols Lists

On 19/09/2023 10:10, Jude DaShiell wrote:

Once the set spots got figured
five dice got used for letters add the total and subtract 4 for the
particular letter.


Which actually isn't random. It's a bell curve peaking probably between 
J and M. Think, if you throw 2 dice, there are 36 possible combinations. 
Only one of them generates 2, only one generates 12, but 6 combinations 
can generate 7.
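
For anyone curious about the size of the skew, a quick Python enumeration of
the five-dice scheme described above (mapping "sum minus 4" onto A..Z is my
reading of the quoted description):

from collections import Counter
from itertools import product

# Enumerate all 6**5 = 7776 outcomes of five dice and tally the sums.
counts = Counter(sum(roll) for roll in product(range(1, 7), repeat=5))
total = 6 ** 5
for s in sorted(counts):
    letter = chr(ord("A") + s - 5)   # sums 5..30, minus 4, map to A..Z
    print(f"{letter}: {counts[s] / total:.2%}")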


Cheers,
Wol



Re: [gentoo-user] Re: How to move ext4 partition

2023-09-22 Thread Wols Lists

On 20/09/2023 23:39, Grant Edwards wrote:

Assuming GParted is smart enough to do overlapping moves, is it smart
enough to only copy filesystem data and not copy "empty" sectors?
According to various forum posts, it is not: moving a partion copies
every sector. [That's certainly the obvious, safe thing to do.]


Seeing as it knows nothing about filesystems, and everything about 
partitions, it will treat the partition as an opaque blob and move it as 
a single object ...


The partition in question is 200GB, but only 7GB is used, so I think
backup/restore is the way to go...


You would think so :-)

I use ext4, and make heavy use of hard links. Last time I tried a 
straight copy (not backup/restore) I think the copied partition would 
have been three times the size of the original - that is if it hadn't 
run out of space first :-)


But it sounds like that would work well for you.

Cheers,
Wol



Re: [PATCH] AArch64: Fix strict-align cpymem/setmem [PR103100]

2023-09-20 Thread Richard Earnshaw (lists)
On 20/09/2023 14:50, Wilco Dijkstra wrote:
> 
> The cpymemdi/setmemdi implementation doesn't fully support strict alignment.
> Block the expansion if the alignment is less than 16 with STRICT_ALIGNMENT.
> Clean up the condition when to use MOPS.
> 
> Passes regress/bootstrap, OK for commit?
> 
> gcc/ChangeLog/
> PR target/103100
> * config/aarch64/aarch64.md (cpymemdi): Remove pattern condition.

Shouldn't this be a separate patch?  It's not immediately obvious that this is 
a necessary part of this change.

> (setmemdi): Likewise.
> * config/aarch64/aarch64.cc (aarch64_expand_cpymem): Support
> strict-align.  Cleanup condition for using MOPS.
> (aarch64_expand_setmem): Likewise.
> 
> ---
> 
> diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> index 
> dd6874d13a75f20d10a244578afc355b25c73da2..8f3bfb91c0f4ec43f37fe9289a66092a29a47e4d
>  100644
> --- a/gcc/config/aarch64/aarch64.cc
> +++ b/gcc/config/aarch64/aarch64.cc
> @@ -25261,27 +25261,23 @@ aarch64_expand_cpymem (rtx *operands)
>int mode_bits;
>rtx dst = operands[0];
>rtx src = operands[1];
> +  unsigned align = INTVAL (operands[3]);

This should read the value with UINTVAL.  Given the useful range of the 
alignment, it should be OK that we're not using unsigned HWI.

>rtx base;
>machine_mode cur_mode = BLKmode;
> +  bool size_p = optimize_function_for_size_p (cfun);
>  
> -  /* Variable-sized memcpy can go through the MOPS expansion if available.  
> */
> -  if (!CONST_INT_P (operands[2]))
> +  /* Variable-sized or strict-align copies may use the MOPS expansion.  */
> +  if (!CONST_INT_P (operands[2]) || (STRICT_ALIGNMENT && align < 16))
>  return aarch64_expand_cpymem_mops (operands);

So what about align=4 and copying, for example, 8 or 12 bytes; wouldn't we want 
a sequence of LDR/STR in that case?  Doesn't this fall back to MOPS too eagerly?


>  
>unsigned HOST_WIDE_INT size = INTVAL (operands[2]);
>  
> -  /* Try to inline up to 256 bytes or use the MOPS threshold if available.  
> */
> -  unsigned HOST_WIDE_INT max_copy_size
> -= TARGET_MOPS ? aarch64_mops_memcpy_size_threshold : 256;
> -
> -  bool size_p = optimize_function_for_size_p (cfun);
> +  /* Try to inline up to 256 bytes.  */
> +  unsigned max_copy_size = 256;
> +  unsigned max_mops_size = aarch64_mops_memcpy_size_threshold;

I find this name slightly confusing.  Surely it's min_mops_size (since above 
that we want to use MOPS rather than inlined loads/stores).  But why not just 
use aarch64_mops_memcpy_size_threshold directly in the one place it's used?

>  
> -  /* Large constant-sized cpymem should go through MOPS when possible.
> - It should be a win even for size optimization in the general case.
> - For speed optimization the choice between MOPS and the SIMD sequence
> - depends on the size of the copy, rather than number of instructions,
> - alignment etc.  */
> -  if (size > max_copy_size)
> +  /* Large copies use MOPS when available or a library call.  */
> +  if (size > max_copy_size || (TARGET_MOPS && size > max_mops_size))
>  return aarch64_expand_cpymem_mops (operands);
>  
>int copy_bits = 256;
> @@ -25445,12 +25441,13 @@ aarch64_expand_setmem (rtx *operands)

Similar comments apply to this code as well.

>unsigned HOST_WIDE_INT len;
>rtx dst = operands[0];
>rtx val = operands[2], src;
> +  unsigned align = INTVAL (operands[3]);
>rtx base;
>machine_mode cur_mode = BLKmode, next_mode;
>  
> -  /* If we don't have SIMD registers or the size is variable use the MOPS
> - inlined sequence if possible.  */
> -  if (!CONST_INT_P (operands[1]) || !TARGET_SIMD)
> +  /* Variable-sized or strict-align memset may use the MOPS expansion.  */
> +  if (!CONST_INT_P (operands[1]) || !TARGET_SIMD
> +  || (STRICT_ALIGNMENT && align < 16))
>  return aarch64_expand_setmem_mops (operands);
>  
>bool size_p = optimize_function_for_size_p (cfun);
> @@ -25458,10 +25455,13 @@ aarch64_expand_setmem (rtx *operands)

And here.

>/* Default the maximum to 256-bytes when considering only libcall vs
>   SIMD broadcast sequence.  */
>unsigned max_set_size = 256;
> +  unsigned max_mops_size = aarch64_mops_memset_size_threshold;
>  
>len = INTVAL (operands[1]);
> -  if (len > max_set_size && !TARGET_MOPS)
> -return false;
> +
> +  /* Large memset uses MOPS when available or a library call.  */
> +  if (len > max_set_size || (TARGET_MOPS && len > max_mops_size))
> +return aarch64_expand_setmem_mops (operands);
>  
>int cst_val = !!(CONST_INT_P (val) && (INTVAL (val) != 0));
>/* The MOPS sequence takes:
> @@ -25474,12 +25474,6 @@ aarch64_expand_setmem (rtx *operands)
>   the arguments + 1 for the call.  */
>unsigned libcall_cost = 4;
>  
> -  /* Upper bound check.  For large constant-sized setmem use the MOPS 
> sequence
> - when available.  */
> -  if (TARGET_MOPS
> -  && len >= 

Re: Any though of having archlinux-keyring-wkd-sync check for iptables and recommend rule?

2023-09-20 Thread Genes Lists

On 9/20/23 04:36, David C. Rankin wrote:

Archdevs,

   Depending on how restrictive the iptables rules, if the IP for 
archlinux-keyring-wkd-sync falls into a blocked range, the logs quickly 
fill. An idea is to have the service insert a temporary rule to either 
(1) allow the IP for the sync check, or (2) allow established, related 
connections while the service runs.


   It may also be worth updating the wiki to provide model rules for 
iptables/nftables to allow archlinux-keyring-wkd-sync to run successfully.


   Just food for thought.



You brought this up in Feb [1] and then, as now, I don't understand what
actual problem you're facing. Inbound 'blocked ranges' (SYN packets)
should have no effect. Nothing is 'inbound' to your machines other than
replies to connections initiated from your machine - i.e.
ESTABLISHED,RELATED. So if you're not able to get replies from the Arch
web servers back to your own machines, then your firewall rules are likely
incorrect.


As per the earlier thread, WKD is simply a "web key directory" service -
so all the application is doing is pulling from a web server.
Unless you're blocking outbound packets to such web servers, everything
should just work, provided you allow Arch to reply when you go to their
web servers.


You should not need any nftables (or legacy iptables) rules, provided you
allow the client machine to have access to the web servers.


One caveat would be if using nftables instead of iptables. nftables 
supports 2 kinds of blocks - 'netdev' and 'inet'. 'inet' are the normal 
blocks.  'netdev' blocks very early - and any IPs blocked at that level 
would not allow inbound or even replies. For anything you want to get 
replies for you should use 'inet' blocks not netdev. iptables didn't 
have netdev blocks available last I used it several years ago.


What problem do you actually have?


best

gene

[1] 
https://lists.archlinux.org/archives/list/arch-general@lists.archlinux.org/thread/6SS6WNPVXT44PJNQFDQAHQU2XO4IVLFV/#6SS6WNPVXT44PJNQFDQAHQU2XO4IVLFV





Re: mynetworks - should server IP be included ?

2023-09-20 Thread lists
On Sun, September 17, 2023 4:45 pm, Patrick Ben Koetter wrote:

Benny, Patrick, thanks

> Even simplier than that:
> $ postconf -d mynetworks
> mynetworks = 127.0.0.0/8 192.168.179.0/24 192.168.122.0/24 [::1]/128
> [fd00:0:0:1::]/64 [fe80::]/64

that includes 2nd ethernet, eth1, do I need to keep 10.0.12.0/22 ?

# postconf -d mynetworks
mynetworks = 127.0.0.0/8 103.106.168.104/29 10.0.12.0/22 [::1]/128
[fe80::]/64

eth1: flags=4163  mtu 1500
inet 10.0.12.3  netmask 255.255.252.0  broadcast 10.0.15.255
inet6 fe80::5054:ff:fe62:28c2  prefixlen 64  scopeid 0x20
ether 52:54:00:62:28:c2  txqueuelen 1000  (Ethernet)
RX packets 78  bytes 5460 (5.3 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 10  bytes 712 (712.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

thanks, Voytek



Re: [gentoo-user] Computer case for new build

2023-09-19 Thread Wols Lists

On 18/09/2023 11:13, Frank Steinmetzger wrote:

With so many drives, you should also include a pricey power supply. And/or a
server board which supports staggered spin-up. Also, drives of the home NAS
category (and consumer drives anyways) are only certified for operation in
groups of up to 8-ish. Anything above and you sail in grey warranty waters.
Higher-tier drives are specced for the vibrations of so many drives (at
least I hope, because that’s what they™ tell us).


Have you seen the article where somebody tests that? And yes, it's true. 
The more drives you have, the more you need damping. If all the drives 
move their heads together, the harder it is for them to home in on the 
correct track, to the point where you get the "perfect storm" of 
vibration causing them all to reset, go back to park, try again, and 
they are shaking so much none of them can find what they're looking for.



To be honest, I kinda like the Fractal Design Define 7
XL right now despite the higher cost.  I could make a NAS/backup box
with it and I doubt I'd run out of drive space even if I started using
RAID and mirrored everything, at a minimum.

With 12 drives, I would go for parity RAID with two parity drives per six
drives, not for a mirror. That way you get 2/3 storage efficiency vs. 1/2
and more robustness; in parity, any two drives may fail, but in a cluster of
mirrors, only specific drives may fail (not two of the same mirror). If the
drives are huge, nine drives with three parity drives may be even better
(because rebuilds get scarier the bigger the drives get).
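
Back-of-the-envelope arithmetic for the parity-versus-mirror figures above
(a sketch in Python, capacity and worst-case failures only; it ignores
rebuild behaviour, hot spares and the like):

def layout(name, drives, group_size, parity_per_group):
    groups = drives // group_size
    usable = groups * (group_size - parity_per_group)
    print(f"{name}: {usable}/{drives} drives usable ({usable / drives:.0%}), "
          f"survives any {parity_per_group} failure(s) within a group")

layout("two RAID-6 groups of six", drives=12, group_size=6, parity_per_group=2)
layout("six mirror pairs",         drives=12, group_size=2, parity_per_group=1)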

One of my projects in my copious (not) free time was to try and 
implement raid-61. Like raid-10, you could spread it across any number 
of drives (subject to a minimum). You could lose any 4 drives which 
gives you a minimum of five (although with that few that would be the 
equivalent of a five-times mirror).


Hey ho, I don't think that's going to happen now.

Cheers,
Wol



Re: [gentoo-user] Computer case for new build

2023-09-19 Thread Wols Lists

On 18/09/2023 12:16, Rich Freeman wrote:

This is part of why I like storage implementations that have more
robustness built into the software.  Granted, it is still only as good
as your clients, but with distributed storage I really don't want to
be paying for ECC on all of my nodes.  If the client calculates a
checksum and it remains independent of the data, then any RAM
corruption should be detectable as a mismatch (that of course assumes
the checksum is preserved and not re-calculated at any point).


Which is why I run raid-5 over dm-integrity. I'm not sure it's that 
stable :-( :-( but it means any disk corruption will get picked up at 
the integrity level, and raid-5 will just get a read error which will 
trigger a parity recalc without data loss.


Cheers,
Wol



Re: [gentoo-user] Computer case for new build

2023-09-19 Thread Wols Lists

On 19/09/2023 00:40, Dale wrote:

I get it when you wanna do it your way because it always worked™ (which is
not wrong — don’t misunderstand me) and perhaps you had some bad experience
in the past. OTOH it’s a pricey component usually only needed by gamers and
number crunchers. On-board graphics are just fine for Desktop and even
(very) light gaming and they lower power draw considerably. Give it a swirl,
maybe you like it.  Both Intel and AMD work just fine with the kernel
drivers.

Well, for one, I usually upgrade the video card several times before I
upgrade the mobo.  When it is built in, not a option.  I think I'm on my
third in this rig.  I also need multiple outputs, two at least.  One for
monitor and one for TV.  My little NAS box I'm currently using is a Dell
something.  The video works but it has no GUI.  At times during the boot
up process, things don't scroll up the screen.  I may be missing a
setting somewhere but when it blanks out, it comes back with a different
resolution and font size.  I figure it is blanking during the switch.
My Gentoo box doesn't do that.  I can see the screen from BIOS all the
way to when it finishes booting and the GUI comes up.  I'm one of those
who watches.  

Well, in my case I've only recently upgraded to a system where AGPUs are 
available :-)


Plus, although I haven't got it working, I want multi-seat (at present, 
my system won't boot with two video cards). You can run multi-head off 
integrated graphics, but as far as I know linux requires one video card 
per seat.


Oh, and to the best of my knowledge, you can combine a video card and an 
AGPU.


Cheers,
Wol




Re: [gentoo-user] What is the point of baloo?

2023-09-17 Thread Wols Lists

On 17/09/2023 19:37, Michael wrote:

However, unlike locate, baloo is meant to index not just file names, but also
metadata tags and relationships relevant to files, emails and contacts. Its
devs would argue it has a small footprint.  So it is meant to be *more* than a
simple file name indexer.


But what is the POINT of said index? If there's no point it's just a 
total and complete waste of time and space!


So far, the only point I'm aware of is that it is supposed to make kmail run 
faster - an application I've never used.


Cheers,
Wol



Re: [gentoo-user] What is the point of baloo?

2023-09-17 Thread Wols Lists

On 17/09/2023 19:35, Peter Böhm wrote:

Am Sonntag, 17. September 2023, 19:46:05 CEST schrieb Wols Lists:


It always annoys me, but baloo seems to be an absolute nightmare
at the moment.



I tried to kill it and it appears to have just restarted. Is there a use
flag I can use to just get rid of it completely?


Do you mean use-flag "semantic-desktop" ?

(I have disabled it in my make.conf)

I guess I do. I've just disabled indexing as per Mark, and it's reduced 
my load average from 12 "just like that". It's just an absolute pain in 
the arse given that about the only KDE app I actually use is Konqueror.


But yes, I'll set -semantic-desktop in make.conf.
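For anyone following along, a minimal sketch of the two knobs being discussed here, assuming the balooctl tool that ships with baloo (the USE change is the one Peter suggests; rebuild the affected KDE packages afterwards):

# stop baloo's indexing and throw the existing index away
balooctl disable
balooctl purge
balooctl status

# Gentoo: keep the indexer from being pulled in at all
# (append to the existing USE line in /etc/portage/make.conf)
USE="-semantic-desktop"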

Why does all this crap default to "I'll waste as much of your computer 
time as I can, and I won't tell you what I'm doing, or how to benefit 
from it"?


In turning off indexing, I notice there's also plasma search. But I 
haven't got a clue what all those widgets do. So I click on the "info" 
button and I just get the description AGAIN. What's the point of all 
this crap, if they can't be arsed to tell you what it DOES!?!?


Cheers,
Wol




[gentoo-user] What is the point of baloo?

2023-09-17 Thread Wols Lists
It always annoys me, but baloo seems to be an absolute nightmare 
at the moment.


IIRC, it's "the file indexer for KDE" - in other words it knackers your 
response time reading all the files, wastes disk space building an 
index, and all for what?


So that programs you never use can run a bit faster? What the hell is the 
point of shaving 10% off a run time of no seconds at all?


I tried to kill it and it appears to have just restarted. Is there a use 
flag I can use to just get rid of it completely?


What I find really frustrating is it claims to have been "built for 
speed". If it's streaming the contents of the disk into RAM so it can index 
it, it's going to completely knacker your system response regardless 
(especially if a program I WANT running is trying to do the same thing!)


Cheers,
Wol



bash 5.2 and BASH_COMPAT question

2023-09-17 Thread Genes Lists



Hi:

I believe that bash 5.2 was/is held back due to incompatible changes 
relative to 5.1 [1].


The obvious first thought would be to set BASH_COMPAT="5.1" globally and 
then update to the current 5.2 version.


This would work for any shell which sees the BASH_COMPAT variable.
User login shells, and anything users run which in turn invokes bash, 
would get the variable passed down. Users who want the 5.2 features 
would still have the ability to unset the variable - while keeping in 
mind the caveat below.
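A minimal sketch of the global approach; the drop-in path and file name are just an example, not a claim about where any particular distro sources it from:

# e.g. /etc/profile.d/bash-compat.sh  (path/name are an assumption)
export BASH_COMPAT=5.1

# a user who wants the 5.2 behaviour back in their own shells:
unset BASH_COMPAT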


I do see one caveat (there may be others):

When bash is invoked from a non-login shell without the BASH_COMPAT 
variable, or from a tool which cleans its environment before invoking 
bash, those callers would need adjusting to set their own BASH_COMPAT 
variable should they depend on the 5.1 behavior, or be changed to 
support 5.2.
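For that case, a sketch of forcing the value explicitly (the script name is a placeholder; the sudoers line only applies if sudo happens to be the thing scrubbing the environment):

# force the compat level for a single invocation
env BASH_COMPAT=5.1 bash ./some-script.sh

# sudo cleans the environment by default; the sudoers whitelist would be:
#   Defaults env_keep += "BASH_COMPAT"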


Which leads to:

 - do we have a list of tools that invoke bash but clean env variables 
before doing so? (pacman perhaps)


 - is there a task list of what needs to be done (either fix the 
incompatibilities or push the BASH_COMPAT variable down)?


 - what else needs to be done to allow us to update to 5.2?

Thoughts?

thanks

gene

 [1] https://github.com/bminor/bash/blob/master/COMPAT


mynetworks - should server IP be included ?

2023-09-16 Thread lists
I'm just checking my amavis setup; under mynetworks I have:

@mynetworks = qw( 127.0.0.0/8 [::1] [FE80::]/10 [FEC0::]/10
  10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 );

should I also include the actual server IP?

in postfix main.cf I have several IPs (server and backup server):

mynetworks = 103.106.111.222 103.106.111.333 125.168.111.222 127.0.0.1

thanks
Voytek



[OE-core] [kirkstone cherry-pick] dbus: Specify runstatedir configure option

2023-09-15 Thread Jörg Sommer via lists . openembedded . org
From: Pavel Zhukov 

Without specifying runstatedir, tmpfiles.d is configured to use /var/run
for dbus and this causes deprecation warnings in system logs.

(From OE-Core rev: 4df1a16e5c38d0fb724f63d37cc032aa37fa122f)

Signed-off-by: Pavel Zhukov 
Signed-off-by: Luca Ceresoli 
Signed-off-by: Richard Purdie 
---
 meta/recipes-core/dbus/dbus_1.14.8.bb | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/meta/recipes-core/dbus/dbus_1.14.8.bb b/meta/recipes-core/dbus/dbus_1.14.8.bb
index 2ba56bf782..4da307ecb3 100644
--- a/meta/recipes-core/dbus/dbus_1.14.8.bb
+++ b/meta/recipes-core/dbus/dbus_1.14.8.bb
@@ -25,6 +25,7 @@ EXTRA_OECONF = "--disable-xml-docs \
 --enable-tests \
 --enable-checks \
 --enable-asserts \
+--runstatedir=/run \
 "
 EXTRA_OECONF:append:class-target = " SYSTEMCTL=${base_bindir}/systemctl"
 
@@ -132,7 +133,7 @@ do_install() {
        sed 's:@bindir@:${bindir}:' < ${WORKDIR}/dbus-1.init >${WORKDIR}/dbus-1.init.sh
        install -m 0755 ${WORKDIR}/dbus-1.init.sh ${D}${sysconfdir}/init.d/dbus-1
        install -d ${D}${sysconfdir}/default/volatiles
-       echo "d messagebus messagebus 0755 ${localstatedir}/run/dbus none" \
+       echo "d messagebus messagebus 0755 /run/dbus none" \
 > ${D}${sysconfdir}/default/volatiles/99_dbus
fi
 
-- 
2.34.1


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#187665): 
https://lists.openembedded.org/g/openembedded-core/message/187665
Mute This Topic: https://lists.openembedded.org/mt/101376711/21656
Group Owner: openembedded-core+ow...@lists.openembedded.org
Unsubscribe: https://lists.openembedded.org/g/openembedded-core/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[OE-core] dbus patch 1916cb6 in master branch, but not in kirkstone

2023-09-15 Thread Jörg Sommer via lists . openembedded . org
Hello,

the patch »dbus: Specify runstatedir configure option« [1] is in the master 
branch, but not in the kirkstone branch. Is it possible to get it applied 
there, too? Whom can I ping for this?

[1] 
https://git.yoctoproject.org/poky/commit/meta/recipes-core/dbus?id=1916cb69980dbe1de79c3809f50280567e85792b

Kind regards

Jörg Sommer
-- 
Navimatix GmbH
Tatzendpromenade 2
07745 Jena

Geschäftsführer: Steffen Späthe, Jan Rommeley
Registergericht: Amtsgericht Jena, HRB 501480

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#187662): 
https://lists.openembedded.org/g/openembedded-core/message/187662
Mute This Topic: https://lists.openembedded.org/mt/101376323/21656
Group Owner: openembedded-core+ow...@lists.openembedded.org
Unsubscribe: https://lists.openembedded.org/g/openembedded-core/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: Arch Mailing List queries

2023-09-14 Thread Genes Lists

On 9/14/23 14:34, Polarian wrote:

Hello,

...

- DKIM signatures are broken in transit, and ...
...

hi

 There was a discussion of DKIM and mailing lists back last October [1].
There was, and it seems there still is, an open mailman issue [2].

The Arch thread is informative, in particular around 
content-transfer-encoding. Perhaps this is helpful.



Regards

gene


 [1] 
https://lists.archlinux.org/archives/list/arch-general@lists.archlinux.org/thread/IOVBBF7BZA354SB6R2DWAO3Q7IZDF6PF/#IOVBBF7BZA354SB6R2DWAO3Q7IZDF6PF


 [2] https://gitlab.com/mailman/mailman/-/issues/636



Re: [OSM-talk] When two bots go to war

2023-09-14 Thread Robert Whittaker (OSM lists)
Maybe there should be a general good-practice recommendation / policy
that bots running in this fashion to keep things in sync should only
automatically add/update/remove a tag that they've previously set if
the current state/value in OSM is unchanged from the last state/value
that the bot set. This way, bots could be used to keep things up to
date automatically, but would not automatically override any manually
applied changes by other mappers between runs. (A sensible bot owner
would have the bot send them a report of any tags that couldn't be
updated for manual review.)
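A sketch of what that guard could look like inside a bot's update step. Everything named here is hypothetical: bot_state_lookup/bot_state_store and osm_fetch_tag/osm_write_tag stand in for whatever the bot already uses to track its own edits and talk to the API; the comparison in the middle is the whole point.

#!/bin/bash
# only touch a tag if nobody has edited it since our last run
element="$1"; key="$2"; new_value="$3"

last_set=$(bot_state_lookup "$element" "$key")   # value this bot wrote last run (placeholder)
current=$(osm_fetch_tag "$element" "$key")       # value currently in OSM (placeholder)

if [ "$current" = "$last_set" ]; then
    osm_write_tag "$element" "$key" "$new_value"
    bot_state_store "$element" "$key" "$new_value"
else
    # leave the mapper's edit alone and flag it for manual review
    echo "SKIP $element $key: changed by someone else since last run" >> skipped-for-review.txt
fi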

Robert.

On Thu, 14 Sept 2023 at 08:41, Cj Malone
 wrote:
>
> On Tue, 2023-09-12 at 15:06 +0200, Snusmumriken via talk wrote:
> > My speculation is that Distriktstandvården (a chain of dental
> > clinics)
> > has taken "ownership" of "their" nodes and once a day check that the
> > values in osm database correspond to that of their internal database.
>
> I've added a more specific website tag to test this. If they restore it
> (Probably 03:00) to the generic home page I agree with you. They need
> to be informed that 1) their data needs improving (eg covid opening
> hours, POI specific not brand specific contact details) 2) they don't
> own these nodes, other people can edit them.
>
> CJ
>
> https://www.openstreetmap.org/changeset/141243391


-- 
Robert Whittaker

___
talk mailing list
talk@openstreetmap.org
https://lists.openstreetmap.org/listinfo/talk


Re: [gentoo-user] long compiles

2023-09-13 Thread Wols Lists

On 13/09/2023 12:28, Peter Humphrey wrote:

A thought on compiling, which I hope some devs will read: I was tempted to
push the system hard at first, with load average and jobs as high as I thought
I could set them. I've come to believe, though, that job control by portage
and /usr/bin/make is weak at very high loads, because I would usually find that
a few packages had failed to compile; also that some complex programs were
sometimes unstable. Therefore I've had to throttle the system to be sure(r) of
correctness. Seems a waste. Thus:


Bear in mind a lot of systems are thermally limited and can't run at 
full pelt anyway ...


You might find it's actually better (and more efficient) to run at lower 
loading. Certainly following the kernel lists you get the impression 
that the CPU regularly goes into thermal throttling under heavy load, 
and also that using a couple of cores lightly is more efficient than 
using one core heavily.
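For anyone wanting to try the throttled approach, a make.conf sketch; the numbers are only examples for a 16-thread machine, not recommendations from this thread:

# /etc/portage/make.conf
MAKEOPTS="-j12 -l10"                             # make stops spawning jobs above load 10
EMERGE_DEFAULT_OPTS="--jobs=2 --load-average=10"

Both portage and make then back off as the load average climbs, which keeps the load bounded without giving up much throughput.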


It's so difficult to know what's best ... (because too many people make 
decisions based on their interests, and then when you come along their 
decisions may conflict with each other and certainly conflict with you ...)


Cheers,
Wol



[pfx] Re: tracing smtp submission issues/ server timed out?

2023-09-12 Thread lists--- via Postfix-users
On Sun, September 10, 2023 2:03 am, Viktor Dukhovni via Postfix-users wrote:

> Hard to say, you're not well prepared to isolate the issue, and
> the symptoms are diverse.

Viktor, Matus, many thanks!!

Viktor, I think (and I'm afraid) you've hit the nail on the head... that's
certainly a large, if not the major, part of my problem...
thank you for pointing it out! I hope you've woken me up...!


> Your amavis content filter has a non-trivial backlog of mail, probably
> because each message takes a long time to process.  Here the message sat
> 5.4 seconds in the incoming queue and then took 11 seconds to deliver
> to amavis.  This bottleneck suggests that the amavis filter is doing remote
> DNS lookups that are quite slow.
> You need to review your amavis configuration and disable or tune the
> actions that lead to the processing delays.


OK, took out amavis from main.cf

#content_filter = smtp-amavis:[127.0.0.1]:10024

BIG reduction in load average, but the problem still persists

took out amavis line from master.cf submission block

submission inet n   -   n   -   -   smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o
smtpd_client_restrictions=permit_mynetworks,permit_sasl_authenticated,reject
#  -o content_filter=smtp-amavis:[127.0.0.1]:10026


user still reports problems...

wait... shouldn't main.cf mynetworks = INCLUDE user's fixed IP...??
I thought it always did...?

added the IP to mynetworks - I think it's working OK now..

so, it seems my issue was (partially?) not having the sender's fixed IP in
mynetworks?

(I'm still aiming to look at today's logs; earlier today there were timeouts,
but after editing mynetworks it seems OK)

>> hmmm... supposed to be using 587...
>
> if you properly uncommented submission service in master.cf, the smtp
> should log as postfix/smtps/smtpd or postfix/submission/smtpd
> or your user used port 25 which is used for server-server mail transfer
> and may have different setup.
>
> I e.g. use postscreen (which sometimes adds 6-seconds delay) and also
> spam and virus checking milters (like amavisd-milter) on 25. This takes
> much time.
>
> on port 587/465 I tend to use amavis as content_filter, which means mail
> is received from user and filtered afterwards. This makes apparent
> receiving mail from client much faster.

does this look OK? that's what I had:

submission inet n   -   n   -   -   smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o
smtpd_client_restrictions=permit_mynetworks,permit_sasl_authenticated,reject
  -o content_filter=smtp-amavis:[127.0.0.1]:10026


$interface_policy{'10026'} = 'ORIGINATING';

$policy_bank{'ORIGINATING'} = {  # mail supposedly originating from our users
  originating => 1,  # declare that mail was submitted by our smtp client
  allow_disclaimers => 1,  # enables disclaimer insertion if available
  # notify administrator of locally originating malware
  virus_admin_maps => ["virusalert\@$mydomain"],
  spam_admin_maps  => ["virusalert\@$mydomain"],
  warnbadhsender   => 1,
  # forward to a smtpd service providing DKIM signing service
#  forward_method => 'smtp:[127.0.0.1]:10027',
  # force MTA conversion to 7-bit (e.g. before DKIM signing)
  smtpd_discard_ehlo_keywords => ['8BITMIME'],
  bypass_banned_checks_maps => [1],  # allow sending any file names and types
  terminate_dsn_on_notify_success => 0,  # don't remove NOTIFY=SUCCESS option
};

___
Postfix-users mailing list -- postfix-users@postfix.org
To unsubscribe send an email to postfix-users-le...@postfix.org


RE: IOS17

2023-09-12 Thread lists


September 18.
-Original Message-
From: macvisionaries@googlegroups.com  On 
Behalf Of Lorie McCloud
Sent: Wednesday, September 13, 2023 7:11 AM
To: via MacVisionaries 
Subject: IOS17

does anybody know when iOS 17 will be formally released? I ask because I try to 
run one version behind the current iOS. I'm running 15 right now and I should probably 
upgrade to 16 before 17 comes out, and I can't.

thanks.
Lorie

Sent from my iPhone

-- 
The following information is important for all members of the Mac Visionaries 
list.

If you have any questions or concerns about the running of this list, or if you 
feel that a member's post is inappropriate, please contact the owners or 
moderators directly rather than posting on the list itself.

Your Mac Visionaries list moderator is Mark Taylor.  You can reach mark at:  
mk...@ucla.edu and your owner is Cara Quinn - you can reach Cara at 
caraqu...@caraquinn.com

The archives for this list can be searched at:
http://www.mail-archive.com/macvisionaries@googlegroups.com/
--- 
You received this message because you are subscribed to the Google Groups 
"MacVisionaries" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to macvisionaries+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/macvisionaries/261FD289-FCE4-4883-852E-A34DC425E06D%40gmail.com.

-- 
The following information is important for all members of the Mac Visionaries 
list.

If you have any questions or concerns about the running of this list, or if you 
feel that a member's post is inappropriate, please contact the owners or 
moderators directly rather than posting on the list itself.

Your Mac Visionaries list moderator is Mark Taylor.  You can reach mark at:  
mk...@ucla.edu and your owner is Cara Quinn - you can reach Cara at 
caraqu...@caraquinn.com

The archives for this list can be searched at:
http://www.mail-archive.com/macvisionaries@googlegroups.com/
--- 
You received this message because you are subscribed to the Google Groups 
"MacVisionaries" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to macvisionaries+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/macvisionaries/023801d9e5c5%243b56e9d0%24b204bd70%24%40sadamahmed.com.


Re: [PATCH 2/2] libstdc++: Add dg-require-thread-fence in several tests

2023-09-11 Thread Richard Earnshaw (lists) via Libstdc++
On 11/09/2023 16:22, Jonathan Wakely via Gcc-patches wrote:
> On Mon, 11 Sept 2023 at 14:57, Christophe Lyon
>  wrote:
>>
>>
>>
>> On Mon, 11 Sept 2023 at 15:12, Jonathan Wakely  wrote:
>>>
>>> On Mon, 11 Sept 2023 at 13:36, Christophe Lyon
>>>  wrote:



 On Mon, 11 Sept 2023 at 12:59, Jonathan Wakely  wrote:
>
> On Sun, 10 Sept 2023 at 20:31, Christophe Lyon
>  wrote:
>>
>> Some targets like arm-eabi with newlib and default settings rely on
>> __sync_synchronize() to ensure synchronization.  Newlib does not
>> implement it by default, to make users aware they have to take special
>> care.
>>
>> This makes a few tests fail to link.
>
> Does this mean those features are unusable on the target, or just that
> users need to provide their own __sync_synchronize to use them?


 IIUC the user is expected to provide them.
 Looks like we discussed this in the past :-)
 In  https://gcc.gnu.org/legacy-ml/gcc-patches/2016-10/msg01632.html,
 see the pointer to Ramana's comment: 
 https://gcc.gnu.org/ml/gcc-patches/2015-05/msg02751.html
>>>
>>> Oh yes, thanks for the reminder!
>>>

 The default arch for arm-eabi is armv4t which is very old.
 When running the testsuite with something more recent (either as default 
 by configuring GCC --with-arch=XXX or by forcing -march/-mcpu via 
 dejagnu's target-board), the compiler generates barrier instructions and 
 there are no such errors.
>>>
>>> Ah yes, that's fine then.
>>>
 For instance, here is a log with the defaults:
 https://git.linaro.org/toolchain/ci/base-artifacts/tcwg_gnu_embed_check_gcc/master-arm_eabi.git/tree/00-sumfiles?h=linaro-local/ci/tcwg_gnu_embed_check_gcc/master-arm_eabi
 and a log when we target cortex-m0 which is still a very small cpu but has 
 barriers:
 https://git.linaro.org/toolchain/ci/base-artifacts/tcwg_gnu_embed_check_gcc/master-thumb_m0_eabi.git/tree/00-sumfiles?h=linaro-local/ci/tcwg_gnu_embed_check_gcc/master-thumb_m0_eabi

 I somehow wanted to get rid of such errors with the default 
 configuration
>>>
>>> Yep, that makes sense, and we'll still be testing them for newer
>>> arches on the target, so it's not completely disabling those parts of
>>> the testsuite.
>>>
>>> But I'm still curious why some of those tests need this change. I
>>> think the ones I noted below are probably failing for some other
>>> reasons.
>>>
>> Just looked at  23_containers/span/back_assert_neg.cc, the linker says it 
>> needs
>> arm-eabi/libstdc++-v3/src/.libs/libstdc++.a(debug.o) to resolve
>> ./back_assert_neg-back_assert_neg.o (std::__glibcxx_assert_fail(char const*, 
>> int, char const*, char const*))
>> and indeed debug.o has a reference to __sync_synchronize
> 
> Aha, that's just because I put __glibcxx_assert_fail in debug.o, but
> there are no dependencies on anything else in that file, including the
> _M_detach member function that uses atomics.
> 
> This would also be solved by -Wl,--gc-sections :-)
> 
> I think it would be better to move __glibcxx_assert_fail to a new
> file, so that it doesn't make every assertion unnecessarily depend on
> __sync_synchronize. I'll do that now.
> 
> We could also make the atomics in debug.o conditional, so that debug
> mode doesn't depend on __sync_synchronize for single-threaded targets.
> Does the arm4t arch have pthreads support in newlib?  I didn't bother
> making the use of atomics conditional, because performance is not
> really a priority for debug mode bookkeeping. But the problem here
> isn't just a slight performance overhead of atomics, it's that they
> aren't even supported for arm4t.

I might be wrong, but I don't think newlib has any support for pthreads.

R.
> 



Re: [PATCH 2/2] libstdc++: Add dg-require-thread-fence in several tests

2023-09-11 Thread Richard Earnshaw (lists) via Gcc-patches
On 11/09/2023 16:22, Jonathan Wakely via Gcc-patches wrote:
> On Mon, 11 Sept 2023 at 14:57, Christophe Lyon
>  wrote:
>>
>>
>>
>> On Mon, 11 Sept 2023 at 15:12, Jonathan Wakely  wrote:
>>>
>>> On Mon, 11 Sept 2023 at 13:36, Christophe Lyon
>>>  wrote:



 On Mon, 11 Sept 2023 at 12:59, Jonathan Wakely  wrote:
>
> On Sun, 10 Sept 2023 at 20:31, Christophe Lyon
>  wrote:
>>
>> Some targets like arm-eabi with newlib and default settings rely on
>> __sync_synchronize() to ensure synchronization.  Newlib does not
>> implement it by default, to make users aware they have to take special
>> care.
>>
>> This makes a few tests fail to link.
>
> Does this mean those features are unusable on the target, or just that
> users need to provide their own __sync_synchronize to use them?


 IIUC the user is expected to provide them.
 Looks like we discussed this in the past :-)
 In  https://gcc.gnu.org/legacy-ml/gcc-patches/2016-10/msg01632.html,
 see the pointer to Ramana's comment: 
 https://gcc.gnu.org/ml/gcc-patches/2015-05/msg02751.html
>>>
>>> Oh yes, thanks for the reminder!
>>>

 The default arch for arm-eabi is armv4t which is very old.
 When running the testsuite with something more recent (either as default 
 by configuring GCC --with-arch=XXX or by forcing -march/-mcpu via 
 dejagnu's target-board), the compiler generates barrier instructions and 
 there are no such errors.
>>>
>>> Ah yes, that's fine then.
>>>
 For instance, here is a log with the defaults:
 https://git.linaro.org/toolchain/ci/base-artifacts/tcwg_gnu_embed_check_gcc/master-arm_eabi.git/tree/00-sumfiles?h=linaro-local/ci/tcwg_gnu_embed_check_gcc/master-arm_eabi
 and a log when we target cortex-m0 which is still a very small cpu but has 
 barriers:
 https://git.linaro.org/toolchain/ci/base-artifacts/tcwg_gnu_embed_check_gcc/master-thumb_m0_eabi.git/tree/00-sumfiles?h=linaro-local/ci/tcwg_gnu_embed_check_gcc/master-thumb_m0_eabi

 I somehow wanted to get rid of such errors with the default 
 configuration
>>>
>>> Yep, that makes sense, and we'll still be testing them for newer
>>> arches on the target, so it's not completely disabling those parts of
>>> the testsuite.
>>>
>>> But I'm still curious why some of those tests need this change. I
>>> think the ones I noted below are probably failing for some other
>>> reasons.
>>>
>> Just looked at  23_containers/span/back_assert_neg.cc, the linker says it 
>> needs
>> arm-eabi/libstdc++-v3/src/.libs/libstdc++.a(debug.o) to resolve
>> ./back_assert_neg-back_assert_neg.o (std::__glibcxx_assert_fail(char const*, 
>> int, char const*, char const*))
>> and indeed debug.o has a reference to __sync_synchronize
> 
> Aha, that's just because I put __glibcxx_assert_fail in debug.o, but
> there are no dependencies on anything else in that file, including the
> _M_detach member function that uses atomics.
> 
> This would also be solved by -Wl,--gc-sections :-)
> 
> I think it would be better to move __glibcxx_assert_fail to a new
> file, so that it doesn't make every assertion unnecessarily depend on
> __sync_synchronize. I'll do that now.
> 
> We could also make the atomics in debug.o conditional, so that debug
> mode doesn't depend on __sync_synchronize for single-threaded targets.
> Does the arm4t arch have pthreads support in newlib?  I didn't bother
> making the use of atomics conditional, because performance is not
> really a priority for debug mode bookkeeping. But the problem here
> isn't just a slight performance overhead of atomics, it's that they
> aren't even supported for arm4t.

I might be wrong, but I don't think newlib has any support for pthreads.

R.
> 



Re: Cauldron schedule: diagnostics and security features talks

2023-09-11 Thread Richard Earnshaw (lists) via Gcc
On 08/09/2023 19:18, Siddhesh Poyarekar wrote:
> Hello,
> 
> I want to begin by apologizing because I know from first hand experience that 
> scheduling can be an immensely painful job.
> 
> The Cauldron 2023 schedule[1] looks packed and I noticed that Qing and 
> David's talks on security features and diagnostics respectively are in the 
> same time slot.  Both those sessions are likely to have pretty big overlaps 
> in audience IMO since the topics are thematically related.  Is there a way in 
> which they could be put in different time slots?
> 
> IIRC they were in different time slots before and were probably moved around 
> to cater for another conflict (hence maybe making it harder to move them 
> again) but I figured I'd rather air my request and be turned down than have 
> to make the difficult choice :)
> 
> Thanks!
> Sid
> 
> [1] https://gcc.gnu.org/wiki/cauldron2023

This has been pointed out privately as well.  I'll have one more go at juggling 
the order, but you're right, this has been the most painful part of the 
organising so far.

R.


[pfx] Re: tracing smtp submission issues/ server timed out?

2023-09-09 Thread lists--- via Postfix-users
On Sat, September 9, 2023 9:00 pm, Matus UHLAR - fantomas via
Postfix-users wrote:
>> On Sat, September 9, 2023 2:42 am, Matus UHLAR - fantomas via
>> Postfix-users wrote:

Matus, Michel, thanks

> did you reorder those lines? look at timestamps.

didn't intend to, but I maybe stuffed up when I tried to pull the entries out of
the maillog like:
grep "Sep  8" followed by grep "16:40:" and grep "16:41:"
I was trying to get the entries between 16:40 and 16:41.
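For what it's worth, a single pattern over the raw file keeps the original ordering and avoids stitching separate grep outputs together (assuming the day field is padded with two spaces, as in the excerpts above):

grep -E 'Sep  8 16:4[01]:' /var/log/maillog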


On Sat, September 9, 2023 8:45 pm, Michel Verdier via Postfix-users wrote:

> How many cores do you have on that system?

2 cores, 4 GB


___
Postfix-users mailing list -- postfix-users@postfix.org
To unsubscribe send an email to postfix-users-le...@postfix.org


[pfx] Re: tracing smtp submission issues/ server timed out?

2023-09-09 Thread lists--- via Postfix-users
On Sat, September 9, 2023 3:52 am, Viktor Dukhovni via Postfix-users wrote:
> On Fri, Sep 08, 2023 at 11:13:02PM +1000, lists--- via Postfix-users
> wrote:


>
> Your amavis content filter has a non-trivial backlog of mail, probably
> because each message takes a long time to process.  Here the message sat
> 5.4 seconds in the incoming queue and then took 11 seconds to deliver
> to amavis.  This bottleneck suggests that the amavis filter is doing remote
> DNS lookups that are quite slow.
>
>
> You need to review your amavis configuration and disable or tune the
> actions that lead to the processing delays.


Viktor, thank you

hmmm, I noticed that the system has quite a high load average, reaching 1.5/1.6
when I was checking... is that my problem? or part of it?
have I overloaded/under-resourced it?

Tasks: 114, 98 thr; 2 running  2
Load average: 1.18 0.92 0.69


___
Postfix-users mailing list -- postfix-users@postfix.org
To unsubscribe send an email to postfix-users-le...@postfix.org


[pfx] Re: tracing smtp submission issues/ server timed out?

2023-09-09 Thread lists--- via Postfix-users
On Sat, September 9, 2023 2:42 am, Matus UHLAR - fantomas via
Postfix-users wrote:
> On 08.09.23 23:13, lists--- via Postfix-users wrote:


Matus, Viktor, thanks

> logs from unsuccessful attempts are important, not from the one that
> succeeded.

is there some proper way to identify that? Looking at the lines immediately
above, I see the following (screen-scraped from the maillog):

Sep  8 16:40:34 geko postfix/qmgr[1654]: 708204346EE: removed
Sep  8 16:40:37 geko postfix/postscreen[21264]: CONNECT from
[111.222.333.444]:50452 to [103.106.168.106]:25
Sep  8 16:40:37 geko postfix/postscreen[21264]: PASS OLD
[111.222.333.444]:50452
Sep  8 16:40:37 geko postfix/smtpd[15732]: connect from
unknown[111.222.333.444]
Sep  8 16:40:37 geko postfix/smtpd[15732]: Anonymous TLS connection
established from unknown[111.222.333.444]: TLSv1 with cipher
ECDHE-RSA-AES256-SHA (256/256 bits)
Sep  8 16:40:37 geko postfix/smtpd[15732]: lost connection after STARTTLS
from unknown[111.222.333.444]
Sep  8 16:40:37 geko postfix/smtpd[15732]: disconnect from
unknown[111.222.333.444] ehlo=1 starttls=1 commands=2
Sep  8 16:40:46 geko postfix/smtpd[15519]: connect from
unknown[111.222.333.444]
Sep  8 16:40:46 geko postfix/smtpd[15519]: Anonymous TLS connection
established from unknown[111.222.333.444]: TLSv1.3 with cipher
TLS_AES_128_GCM_SHA256 (128/128
Sep  8 16:40:47 geko postfix/smtpd[15519]: 2556C4346EC:
client=unknown[111.222.333.444], sasl_method=PLAIN,
sasl_username=i...@tld.com.au
Sep  8 16:44:24 geko postfix/anvil[1945]: statistics: max connection rate
4/3600s for (smtpd:185.222.58.40) at Sep  8 16:40:22
Sep  8 16:44:24 geko postfix/anvil[1945]: statistics: max connection count
3 for (smtpd:185.222.58.40) at Sep  8 16:40:19
Sep  8 16:41:06 geko postfix/smtpd[15519]: lost connection after DATA (0
bytes) from unknown[111.222.333.444]
Sep  8 16:41:06 geko postfix/smtpd[15519]: disconnect from
unknown[111.222.333.444] ehlo=2 starttls=1 auth=1 mail=1 rcpt=1 data=0/1
commands=6/7
Sep  8 16:41:24 geko postfix/smtpd[15518]: connect from
unknown[111.222.333.444]
Sep  8 16:41:25 geko postfix/smtpd[15518]: Anonymous TLS connection
established from unknown[111.222.333.444]: TLSv1.3 with cipher
TLS_AES_128_GCM_SHA256 (128/128
Sep  8 16:41:25 geko postfix/smtpd[15518]: C92564346E5:
client=unknown[111.222.333.444], sasl_method=PLAIN,
sasl_username=i...@tld.com.au
Sep  8 16:41:31 geko postfix/cleanup[15407]: C92564346E5:
message-id=


>
> so, your users send mail on port 25?


hmmm... supposed to be using 587...

>
>> Sep  8 16:41:31 geko postfix/cleanup[15407]: C92564346E5:
>> message-id=
>
> this one took 6 seconds.
>
>> Sep  8 16:41:31 geko opendkim[910]: C92564346E5: DKIM-Signature field
>> added (s=default, d=tld.com)
>
> and you run opendkim (milter) on that? any other milters?

dkim/dmarc



___
Postfix-users mailing list -- postfix-users@postfix.org
To unsubscribe send an email to postfix-users-le...@postfix.org


[pfx] tracing smtp submission issues/ server timed out?

2023-09-08 Thread lists--- via Postfix-users
a user reported mail client message:

"It hard to sent mail we try 2-3 times then sent."
screengrab from mail client had: sending failed, couldn't send, connection
to outgoing server timed out

I couldn't notice anything; I tailed the maillog and saw emails going through, so
I'm probably looking at the wrong things?

Subsequently I was told a reply email to me took two attempts; the log for the
received copy is below.

What/where should I look/check?

- also, in case this matters:
the sender has BOTH TLD.com.au as well as the same TLD.com (without .au);
the mail server was always TLD.com.au, and TLD.com was added as a domain alias
several years ago, around 2015 ('alias domain' in PFA)


# grep "C92564346E5"  /var/log/maillog
Sep  8 16:41:25 geko postfix/smtpd[15518]: C92564346E5:
client=unknown[111.222.333.444], sasl_method=PLAIN,
sasl_username=i...@tld.com.au
Sep  8 16:41:31 geko postfix/cleanup[15407]: C92564346E5:
message-id=
Sep  8 16:41:31 geko opendkim[910]: C92564346E5: DKIM-Signature field
added (s=default, d=tld.com)
Sep  8 16:41:31 geko postfix/qmgr[1654]: C92564346E5: from=,
size=3262, nrcpt=1 (queue active)
Sep  8 16:41:42 geko amavis[31308]: (31308-14) Passed CLEAN
{RelayedInternal}, ORIGINATING LOCAL [111.222.333.444]:52547
[111.222.333.444]  -> , Queue-ID: C92564346E5,
Message-ID: , mail_id:
zj3cR-iB-usR, Hits: -3.069, size: 3681, queued_as: F22794346E8, 10889 ms
Sep  8 16:41:42 geko postfix/smtp[15464]: C92564346E5: to=,
relay=127.0.0.1[127.0.0.1]:10026, delay=16, delays=5.4/0/0.01/11,
dsn=2.0.0, status=sent (250 2.0.0 from MTA(smtp:[127.0.0.1]:10025): 250
2.0.0 Ok: queued as F22794346E8)
Sep  8 16:41:42 geko postfix/qmgr[1654]: C92564346E5: removed

# grep "F22794346E8"  /var/log/maillog
Sep  8 16:41:41 geko postfix/smtpd[13013]: F22794346E8:
client=localhost[127.0.0.1]
Sep  8 16:41:41 geko postfix/cleanup[15407]: F22794346E8:
message-id=
Sep  8 16:41:42 geko postfix/qmgr[1654]: F22794346E8: from=,
size=4144, nrcpt=1 (queue active)
Sep  8 16:41:42 geko amavis[31308]: (31308-14) Passed CLEAN
{RelayedInternal}, ORIGINATING LOCAL [111.222.333.444]:52547
[111.222.333.444]  -> , Queue-ID: C92564346E5,
Message-ID: , mail_id:
zj3cR-iB-usR, Hits: -3.069, size: 3681, queued_as: F22794346E8, 10889 ms
Sep  8 16:41:42 geko postfix/smtp[15464]: C92564346E5: to=,
relay=127.0.0.1[127.0.0.1]:10026, delay=16, delays=5.4/0/0.01/11,
dsn=2.0.0, status=sent (250 2.0.0 from MTA(smtp:[127.0.0.1]:10025): 250
2.0.0 Ok: queued as F22794346E8)
Sep  8 16:41:42 geko postfix/pipe[15414]: F22794346E8: to=,
relay=dovecot, delay=0.09, delays=0.02/0/0/0.07, dsn=2.0.0, status=sent
(delivered via dovecot service)
Sep  8 16:41:42 geko postfix/qmgr[1654]: F22794346E8: removed


___
Postfix-users mailing list -- postfix-users@postfix.org
To unsubscribe send an email to postfix-users-le...@postfix.org


Re: [tor-relays] Metrics

2023-09-07 Thread lists
So you don't have to dig through the logs:
(as root or sudo)
~# cat /var/lib/tor/pt_state/obfs4_bridgeline.txt
~# cat /var/lib/tor/fingerprint

or with multiple instances:
~# cat /var/lib/tor-instances/NN/pt_state/obfs4_bridgeline.txt
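And if you run several instances, a small loop along the same lines (assuming each instance keeps its data under /var/lib/tor-instances/<name>/ as above):

~# for d in /var/lib/tor-instances/*/; do
       echo "== ${d}"
       cat "${d}fingerprint" "${d}pt_state/obfs4_bridgeline.txt" 2>/dev/null
   done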

-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!

signature.asc
Description: This is a digitally signed message part.
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Lists: Automated Request [ 07/09/2023 11:06:26 AM ]

2023-09-07 Thread Mail Lists I . T [ Do Not Reply ]
See attachment..
_
Encrypted by LISTS
From: Борис
Sent: 02/09/2023
To: Андроник Элеонора
Subject: АО «Корнилова, Захарова и Юдин»
‌
Задержать остановить валюта ботинок настать спасть. Невозможно пламя тяжелый ответить коричневый даль. Угодный коричневый тревога отдел основание миг слишком слишком.
Реклама настать освобождение угроза дошлый через жить.
Свежий жить что сопровождаться идея что. Упорно освобождение видимо господь торговля дьявол. Остановить пища изучить юный. Второй дошлый ботинок светило отметить очередной военный.
Мягкий привлекать угодный ночь райком аллея крутой. Написать строительство армейский спешить.
Инфекция отдел за кпсс бригада. Приятель ярко исполнять виднеться запустить.
Трубка куча указанный намерение вариант упорно развитый зато. Валюта некоторый необычный неожиданный прежний. Народ настать цепочка уничтожение тревога проход костер. Ведь реклама очко результат самостоятельно заплакать.
Легко адвокат правый природа. Юный граница дошлый деловой миф. Умолять спасть освободить тесно второй выразить призыв.
Зато потом эпоха скрытый зарплата. Роса куча очередной.
Район манера премьера протягивать развитый поговорить. Нажать крыса лететь.
Плавно какой беспомощный блин демократия висеть казнь. Темнеть передо опасность князь кузнец изображать ботинок. Конференция вскинуть покидать через понятный наслаждение.
Командующий табак космос необычный некоторый. Близко встать неудобно лететь. Палка ярко прелесть радость интеллектуальный мелочь точно.
Разнообразный дьявол порог бок угроза. Домашний тяжелый пространство армейский. Граница снимать рабочий ломать грудь естественный. Развернуться палец хозяйка наступать ход отражение выдержать бабочка.
Процесс о ложиться зеленый а покидать. Место парень каюта дорогой беспомощный приходить единый. Результат пасть зачем вздрагивать.
Район остановить головной ягода.
Салон военный холодно тесно неожиданно сверкающий помолчать. Тысяча умолять нервно сохранять засунуть угроза. Команда а теория крыса некоторый дошлый.
Господь цепочка набор пламя редактор граница. Руководитель исследование отметить счастье нажать протягивать. Грустный багровый сопровождаться.
‌
Борис
8 590 420 7664
АО «Корнилова, Захарова и Юдин»
п. Камышлов, алл. Солнечная, д. 1 к. 8, 046156

Config.Setup_Linux-erofs@LISTS.pdf
Description: Binary data


Re: [AFMUG] EPMP4600

2023-09-06 Thread Jeff Broadwick - Lists
Yeah, that is completely illegal and that sort of thing gives our entire industry a bad name.

On top of that, if you go to sell, you are going to have a major CF on your hands.

Jeff Broadwick
CTIconnect
312-205-2519 Office
574-220-7826 Cell
jbroadw...@cticonnect.com

On Sep 6, 2023, at 4:56 PM, dmmoff...@gmail.com wrote:

The industry already has solutions to the EIRP problem. Buy international units set to follow the rules in Russia or Hong Kong and then you can run any channel you want at whatever power you want. Alter the firmware on a US unit to enable rules for another country or maybe put it in an engineering test mode with no limitations. “Misconfigure” the antenna gain so you can turn up the Tx power. If antenna gain can’t be adjusted in the config, then buy integrated units with small antennas and then run pigtails off the circuit board to a bigger antenna.

You shouldn’t really do any of this, but all of those “solutions” have been seen in the wild.  For awhile you could factory reset AirMax gear and then on first login just pick whatever country you wanted.  I knew a guy who did that for all of his PTP links and I got him in trouble with his boss.  Or at least his boss pretended he was in trouble, but for all I know maybe they started laughing together as soon as I was out of the room.

If I had 6ghz licensed links right now I would be coming up with contingency plans for if/when they get trashed by some jabroni.  Like the private pool is about to become a public pool and you know all the neighborhood kids are gonna pee in it.  It might be time to look for another pool.  Just saying.

-Adam

From: AF  On Behalf Of castarritt
Sent: Wednesday, September 06, 2023 3:03 PM
To: AnimalFarm Microwave Users Group 
Subject: Re: [AFMUG] EPMP4600

36dbm is max for 6ghz, and that is assuming an SM with built in GPS.  SMs without GPS will be limited to 30dbm.  I haven't used any Mimosa, but I bet they will limit it to the same EIRP if you set them up with the correct antenna gain.

On Wed, Sep 6, 2023 at 1:56 PM Peter Kranz via AF  wrote:

I currently have a few experimental 6Ghz licenses, where we have been trialing the Mimosa A6 equipment. So far the A6 has not been super stable, although the performance when it is stable was exciting at > 800 Mbps for subs. Also has problems with timing for customers past 5 miles or so. Anyway, I’m thinking of switching gears to Cambium’s 4600 platform to have something more stable. Am I correct in that Cambium is limiting the EIRP of the SMs to 36db? So the 25db subscriber dish transmits into the dish at +11? This sounds like the solution is limited to about 4-5 miles as a result.. Am I missing something here?

Peter Kranz
www.UnwiredLtd.com
Desk: 510-868-1614 x100
Mobile: 510-207-
pkr...@unwiredltd.com

--
AF mailing list
AF@af.afmug.com
http://af.afmug.com/mailman/listinfo/af_af.afmug.com

--
AF mailing list
AF@af.afmug.com
http://af.afmug.com/mailman/listinfo/af_af.afmug.com

-- 
AF mailing list
AF@af.afmug.com
http://af.afmug.com/mailman/listinfo/af_af.afmug.com


Re: Question on aarch64 prologue code.

2023-09-06 Thread Richard Earnshaw (lists) via Gcc
On 06/09/2023 15:03, Iain Sandoe wrote:
> Hi Richard,
> 
>> On 6 Sep 2023, at 13:43, Richard Sandiford via Gcc  wrote:
>>
>> Iain Sandoe  writes:
> 
>>> On the Darwin aarch64 port, we have a number of cleanup test fails (pretty 
>>> much corresponding to the [still open] 
>>> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=39244).  However, let’s assume 
>>> that bug could be a red herring..
>>>
>>> the underlying reason is missing CFI for the set of the FP which [with 
>>> Darwin’s LLVM libunwind impl.] breaks the unwind through the function that 
>>> triggers a signal.
>>
>> Just curious, do you have more details about why that is?  If the unwinder
>> is sophisticated enough to process CFI, it seems odd that it requires the
>> CFA to be defined in terms of the frame pointer.
> 
> Let me see if I can answer that below.
> 
> 
> 
>>> <——
>>>
>>> I have currently worked around this by defining a 
>>> TARGET_FRAME_POINTER_REQUIRED which returns true unless the function is a 
>>> leaf (if that’s the correct solution, then all is fine).
>>
>> I suppose it depends on why the frame-pointer-based CFA is important
>> for Darwin.  If it's due to a more general requirement for a frame
>> pointer to be used, then yeah, that's probably the right fix.
> 
> The Darwin ABI  mandates a frame pointer (although it is omitted by clang for 
> leaf functions).
> 
>>  If it's
>> more a quirk of the unwinder. then we could probably expose whatever
>> that quirk is as a new status bit.  Target-independent code in
>> dwarf2cfi.cc would then need to be aware as well.
> 
> (I suspect) it is the interaction between the mandatory FP and the fact that 
> GCC lays out the stack differently from the other Darwin toolchains at 
> present [port Issue #19].
> 
> For the system toolchain, 30 and 29 are always placed first, right below the 
> SP (other callee saves are made below that in a specified order and always in 
> pairs - presumably, with an unnecessary spill half the time) - Actually, I 
> had a look at the weekend, but cannot find specific documentation on this 
> particular aspect of the ABI  (but, of course, the de facto ABI is what the 
> system toolchain does, regardless of presence/absence of any such doc).
> 
> However (speculation) that means that the FP is not saved where the system 
> tools expect it, maybe that is confusing the unwinder absent the fp cfa.  Of 
> course, it could also just be an unwinder bug that is never triggered by 
> clang’s codegen.
> 
> GCC’s different layout currently defeats compact unwinding on all but leaf 
> frames, so one day I want to fix it ...
> .. however making this change is quite heavy lifting and I think there are 
> higher priorities for the port (so, as long as we have working unwind and no 
> observable fallout, I am deferring that change).
> 
> Note that Darwin’s ABI also has a red zone (but we have not yet made any use 
> of this, since there is no existing aarch64 impl. and I’ve not had time to 
> get to it).  However, AFAICS that is an optimisation - we can still be 
> correct without it.
> 
>>> ———
>>>
>>> However, it does seem odd that the existing code sets up the FP, but never 
>>> produces any CFA for it.
>>>
>>> So is this a possible bug, or just that I misunderstand the relevant set of 
>>> circumstances?
>>
>> emit_frame_chain fulfills an ABI requirement that every non-leaf function
>> set up a frame-chain record.  When emit_frame_chain && !frame_pointer_needed,
>> we set up the FP for ABI purposes only.  GCC can still access everything
>> relative to the stack pointer, and it can still describe the CFI based
>> purely on the stack pointer.
> 
> Thanks that makes sense
> - I guess libunwind is never used with aarch64 linux, even in a clang/llvm 
> toolchain.
>>
>> glibc-based systems only need the CFA to be based on the frame pointer
>> if the stack pointer moves during the body of the function (usually due
>> to alloca or VLAs).
> 
> I’d have to poke more at the unwinder code and do some more debugging - it 
> seems reasonable that it could work for any unwinder that’s based on DWARF 
> (although, if we have completely missing unwind info, then the different 
> stack layout would surely defeat any fallback procedure).
> 

This is only a guess, but it sounds to me like the issue might be that although 
we create a frame record, we don't use the frame pointer for accessing stack 
variables unless SP can't be used (eg: because the function calls alloca()).  
This tends to be more efficient because offset addressing for SP is more 
flexible.  If we wanted to switch to making FP be the canonical frame address 
register we'd need to change all the code gen to use FP in addressing as well 
(or end up with some really messy translation when emitting debug information).

R.

> thanks
> Iain
> 
>>
>> Thanks,
>> Richard
> 



[Bug fortran/111304] Problem when passing implicit arrays of characters to functions

2023-09-06 Thread mailling-lists-bd at posteo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111304

--- Comment #1 from Baptiste Demoulin  ---
One comment: replacing `trim(prefix)` with `prefix(1:len_trim(prefix))` leads
to the same result, as does putting simply `prefix`, so the problem does not
seem to be related to using the `trim` function.

[Bug fortran/111304] New: Problem when passing implicit arrays of characters to functions

2023-09-06 Thread mailling-lists-bd at posteo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111304

Bug ID: 111304
   Summary: Problem when passing implicit arrays of characters to
functions
   Product: gcc
   Version: 13.2.1
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: fortran
  Assignee: unassigned at gcc dot gnu.org
  Reporter: mailling-lists-bd at posteo dot de
  Target Milestone: ---

Hi,

In the following code, the first call to `func1` works as expected and prints
the content of the array `my_directory`, while the second call, with the array
defined directly in the function argument, leads to SIGABRT and prints 
```
 ARRAY = my_directory/file1my_directory/file2my_directory/dum1
my_directory/dum2
corrupted size vs. prev_size
```

I have tested with gfortran 13.2.1 (gfortran -o test program.f90) on Fedora
38. It should also be noted that both function calls run fine if we remove
the `trim(prefix)//` from each character string.


```
program test
  implicit none

  character(len=256) :: test_array(4)
  character(len=:), allocatable :: prefix
  integer :: res

  prefix = 'my_directory'
  test_array = [ character(len=256) :: &
   & trim(prefix)//'/file1', &
   & trim(prefix)//'/file2', &
   & trim(prefix)//'/dum1', &
   & trim(prefix)//'/dum2' &
   & ]

  print *, 'Test with "res = func1(test_array)"'
  res = func1(test_array)

  print *, 'Test with "res = func1([ character(len=) :: ...] )"'
  res = func1([ character(len=256) :: &
   & trim(prefix)//'/file1', &
   & trim(prefix)//'/file2', &
   & trim(prefix)//'/dum1', &
   & trim(prefix)//'/dum2' &
   & ])


contains

  function func1(array) result(res)
character(len=*), intent(in) :: array(:)

integer :: res

print *, 'ARRAY = ', array
res = 0
  end function func1
end program test
```

[Bug fortran/111265] New: Compiler segfault with character array in deferred type, when returned by a function

2023-09-01 Thread mailling-lists-bd at posteo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111265

Bug ID: 111265
   Summary: Compiler segfault with character array in deferred
type, when returned by a function
   Product: gcc
   Version: 13.2.1
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: fortran
  Assignee: unassigned at gcc dot gnu.org
  Reporter: mailling-lists-bd at posteo dot de
  Target Milestone: ---

Hi,

the following code is giving a compiler segfault with Gfortran 13.2.1 on
Fedora, when compiled with `gfortran -c test.f90`.

``` test.f90
module mod_abs_type
  implicit none

  type, abstract :: my_abs_type_t
 private
 character(len=1024), allocatable :: str_(:)
   contains
 private
 procedure, public :: str
  end type my_abs_type_t

contains
  pure function str(self) result(res)
class(my_abs_type_t), intent(in) :: self
character(len=1024), allocatable :: res(:)

allocate(res(size(self%str_)))
res(:) = self%str_(:)
  end function str
end module mod_abs_type

module mod_container
  use mod_abs_type, only: my_abs_type_t
  implicit none

  type :: my_container_t
 private
 class(my_abs_type_t), allocatable :: ele
   contains
 private
 procedure, public :: str
  end type my_container_t

contains

  pure function str(self) result(res)
class(my_container_t), intent(in) :: self

character(len=1024), allocatable :: res(:)

res = self%ele%str()
  end function str

end module mod_container
```

The error message is:

   42 | res = self%ele%str()
  |1
internal compiler error: Segmentation fault
Please submit a full bug report, with preprocessed source.
See <http://bugzilla.redhat.com/bugzilla> for instructions.

Thanks !

[BlueOnyx:26422] Re: 5209R/5210R/5211R: "Easy-Backup" has been released!

2023-08-31 Thread Steve Lists via Blueonyx
No wait, that's dollars - sorry. Not had caffeine yet!

Steve

-Original Message-
From: Steve Lists 
Sent: Thursday, August 31, 2023 9:01 AM
To: Michael Stauber ; BlueOnyx General Mailing List 

Subject: RE: [BlueOnyx:26420] Re: 5209R/5210R/5211R: "Easy-Backup" has been 
released!

That looks really good . One question, It's listed as 98.94 EUR on 
https://www.solarspeed.net/easy-backup.html  and 149.00 EUR on 
https://shop.blueonyx.it/easy-backup.html . Is that a mistake?

Thanks.

Steve

-Original Message-
From: Blueonyx  On Behalf Of Michael Stauber 
via Blueonyx
Sent: Thursday, August 31, 2023 6:56 AM
To: blueonyx@mail.blueonyx.it
Subject: [BlueOnyx:26420] Re: 5209R/5210R/5211R: "Easy-Backup" has been 
released!

Hi all,

We're happy to announce that the PKGs for "Easy-Backup" (BlueOnyx 5209R, 5210R 
and 5211R) have just been published on NewLinQ.

Product page in the BlueOnyx shop:

https://www.solarspeed.net/easy-backup.html

Detailed manual of Easy-Backup:

https://www.blueonyx.it/easy-backup


So what is Easy-Backup?


In the shop it replaces the product "Automated Backup" and it allows you to 
cleanly export all relevant data from your BlueOnyx and store it on a remote 
server for recovery.

By default Easy-Backup does a full backup the first time around and then 
incremental backups on each subsequent run. So when you have several days of 
accumulated (yet space savingly stored) backups available, you can choose if 
you want to restore the latest backup, or from an older version.

Easy-Backup consists of a CLI component as well as a GUI and the GUI lets you 
manage the general configuration as well as restores. Even siteAdmins and 
regular users can be granted the right to restore their data from the backups 
themselves.


In a nutshell:
--

Just counting the Easy-Backup CLI command: it is CMU on steroids, without 
20+ years of accumulated crud and cruft, and built using modern components, 
standards and procedures.

The GUI then brings it to levels we've never had available before.

The usage of "duplicity" to store remote backups incrementally? A dream come 
true: Incremental backups, restoreable via the GUI (and command-line, of 
course), even by siteAdmins and Users.


Compatibility of backups between BlueOnyx versions?
===

Easy-Backup supports BlueOnyx 5209R, 5210R and 5211R. Any Easy-Backup export 
generated on one of these supported platforms can be imported on the same 
platform AND any other supported BlueOnyx platform.


Can this be used for migrations between BlueOnyx versions?
==

*Yes!* You can easily backup from an older BlueOnyx (5209R or 5210R) and import 
that backup set on a BlueOnyx 5211R. Easy-Backup will make all the necessary 
adjustments, no matter in which direction (old BlueOnyx to a newer one or 
back!) you "migrate". Just point the target server to the remote storage server 
and then import the latest backup.


Which Bundles is Easy-Backup part of?
==

- All Packages Bundle
- BlueOnyx Enterprise Edition
- BlueOnyx Professional Edition

And of course it is also available as a stand alone product:

https://www.solarspeed.net/easy-backup.html

If you have any questions, please let us know. Thank you!

--
With best regards

Michael Stauber
___
Blueonyx mailing list
Blueonyx@mail.blueonyx.it
http://mail.blueonyx.it/mailman/listinfo/blueonyx

___
Blueonyx mailing list
Blueonyx@mail.blueonyx.it
http://mail.blueonyx.it/mailman/listinfo/blueonyx


[BlueOnyx:26421] Re: 5209R/5210R/5211R: "Easy-Backup" has been released!

2023-08-31 Thread Steve Lists via Blueonyx
That looks really good . One question, It's listed as 98.94 EUR on 
https://www.solarspeed.net/easy-backup.html  and 149.00 EUR on 
https://shop.blueonyx.it/easy-backup.html . Is that a mistake?

Thanks.

Steve

-Original Message-
From: Blueonyx  On Behalf Of Michael Stauber 
via Blueonyx
Sent: Thursday, August 31, 2023 6:56 AM
To: blueonyx@mail.blueonyx.it
Subject: [BlueOnyx:26420] Re: 5209R/5210R/5211R: "Easy-Backup" has been 
released!

Hi all,

We're happy to announce that the PKGs for "Easy-Backup" (BlueOnyx 5209R, 5210R 
and 5211R) have just been published on NewLinQ.

Product page in the BlueOnyx shop:

https://www.solarspeed.net/easy-backup.html

Detailed manual of Easy-Backup:

https://www.blueonyx.it/easy-backup


So what is Easy-Backup?


In the shop it replaces the product "Automated Backup" and it allows you to 
cleanly export all relevant data from your BlueOnyx and store it on a remote 
server for recovery.

By default Easy-Backup does a full backup the first time around and then 
incremental backups on each subsequent run. So when you have several days of 
accumulated (yet space savingly stored) backups available, you can choose if 
you want to restore the latest backup, or from an older version.

Easy-Backup consists of a CLI component as well as a GUI and the GUI lets you 
manage the general configuration as well as restores. Even siteAdmins and 
regular users can be granted the right to restore their data from the backups 
themselves.


In a nutshell:
--

Just counting the Easy-Backup CLI command: it is CMU on steroids, without 
20+ years of accumulated crud and cruft, and built using modern components, 
standards and procedures.

The GUI then brings it to levels we've never had available before.

The usage of "duplicity" to store remote backups incrementally? A dream come 
true: Incremental backups, restoreable via the GUI (and command-line, of 
course), even by siteAdmins and Users.


Compatibility of backups between BlueOnyx versions?
===

Easy-Backup supports BlueOnyx 5209R, 5210R and 5211R. Any Easy-Backup export 
generated on one of these supported platforms can be imported on the same 
platform AND any other supported BlueOnyx platform.


Can this be used for migrations between BlueOnyx versions?
==

*Yes!* You can easily backup from an older BlueOnyx (5209R or 5210R) and import 
that backup set on a BlueOnyx 5211R. Easy-Backup will make all the necessary 
adjustments, no matter in which direction (old BlueOnyx to a newer one or 
back!) you "migrate". Just point the target server to the remote storage server 
and then import the latest backup.


Which Bundles is Easy-Backup part of?
==

- All Packages Bundle
- BlueOnyx Enterprise Edition
- BlueOnyx Professional Edition

And of course it is also available as a stand alone product:

https://www.solarspeed.net/easy-backup.html

If you have any questions, please let us know. Thank you!

--
With best regards

Michael Stauber
___
Blueonyx mailing list
Blueonyx@mail.blueonyx.it
http://mail.blueonyx.it/mailman/listinfo/blueonyx

___
Blueonyx mailing list
Blueonyx@mail.blueonyx.it
http://mail.blueonyx.it/mailman/listinfo/blueonyx


Re: [PATCH 6/8] vect: Add vector_mode paramater to simd_clone_usable

2023-08-30 Thread Andre Vieira (lists) via Gcc-patches




On 30/08/2023 14:01, Richard Biener wrote:

On Wed, Aug 30, 2023 at 11:15 AM Andre Vieira (lists) via Gcc-patches
 wrote:


This patch adds a machine_mode parameter to the TARGET_SIMD_CLONE_USABLE
hook to enable rejecting SVE modes when the target architecture does not
support SVE.


How does the graph node of the SIMD clone lack this information?  That is, it
should have information on the types (and thus modes) for all formal arguments
and return values already, no?  At least the target would know how to
instantiate
it if it's not readily available at the point of use.



Yes it does, but that's the modes the simd clone itself uses, it does 
not know what vector_mode we are currently vectorizing for. Which is 
exactly why we need the vinfo's vector_mode to make sure the simd clone 
and its types are compatible with the vector mode.


In practice, it is to make sure that SVE simd clones are only used in loops 
being vectorized for SVE modes. Having said that... I just realized that 
the simdlen check already takes care of that currently...


by simdlen check I mean the one that writes off simdclones that match:
if (!constant_multiple_p (vf, n->simdclone->simdlen, _calls)

However, when using -msve-vector-bits this will become an issue, as the 
VF will be constant and we will match NEON simdclones.  This requires 
some further attention though given that we now also reject the use of 
SVE simdclones when using -msve-vector-bits, and I'm not entirely sure 
we should...
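
To make that concern concrete, here is a minimal, self-contained sketch; the 
element size and clone simdlen below are assumptions picked for illustration, 
not values taken from the patches:

  /* With -msve-vector-bits=256 and 32-bit elements the vectorization
     factor is a compile-time constant, so a plain multiple check no
     longer tells an Advanced SIMD clone apart from an SVE one.  */
  int sve_bits = 256;                 /* -msve-vector-bits=256 */
  int elt_bits = 32;                  /* e.g. int or float elements */
  int vf = sve_bits / elt_bits;       /* constant VF = 8 */
  int neon_simdlen = 4;               /* an Advanced SIMD clone */
  int num_calls = vf / neon_simdlen;  /* 2: the multiple check passes, so
                                         the NEON clone is not written off
                                         for an SVE-vectorized loop */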


I'm going on holidays for 2 weeks now though, so I'll have a look at 
that scenario when I get back. Same with other feedback, didn't expect 
feedback this quickly ;) Thank you!!


Kind regards,
Andre



Re: disable badh checks ?

2023-08-30 Thread lists
On Wed, August 30, 2023 7:11 pm, Matus UHLAR - fantomas wrote:
> On 30.08.23 18:54, li...@sbt.net.au wrote:

>>
>> # grep badh /etc/amavisd/amavisd.conf
>>
>>
>> warnbadhsender   => 1,
>
> perhaps set this to 0
>

sorry, it wasn't clear from my post: the line "warnbadhsender   => 1," is
under the $policy_bank{'ORIGINATING'} section -

so I guess I would need

warnbadhsender   => 0,

after "# SOME OTHER VARIABLES WORTH CONSIDERING"

thanks again



Re: disable badh checks ?

2023-08-30 Thread lists
On Wed, August 30, 2023 7:11 pm, Matus UHLAR - fantomas wrote:
> On 30.08.23 18:54, li...@sbt.net.au wrote:

Matus, thanks!

>
> this could help:
>
> $bad_header_quarantine_to = undef;
> $final_bad_header_destiny = D_PASS;


I already set these the last time I asked:

$final_bad_header_destiny = D_PASS;
# was D_BOUNCE
$bad_header_quarantine_method = undef;
# was commented out


NOW I also added this one:

@bypass_header_checks_maps = ();

I'll try to test with that

>>
>> warnbadhsender   => 1,
>
> perhaps set this to 0
>

I'll try that next, thanks again

V



[PATCH 8/8] aarch64: Add SVE support for simd clones [PR 96342]

2023-08-30 Thread Andre Vieira (lists) via Gcc-patches
This patch finalizes adding support for the generation of SVE simd 
clones when no simdlen is provided, following the ABI rules where the 
widest data type determines the minimum number of elements in a 
length-agnostic vector.
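
As a hedged illustration of that ABI rule (the signature and figures below are 
an assumption based on the description above, not taken from the patch):

  /* No simdlen clause: the widest type in the signature (double, 64 bits)
     sets the minimum element count, 128 / 64 = 2, so the length-agnostic
     clone is generated for 2 x vscale lanes and mangled with 'x' for the
     VLA simdlen (see the ChangeLog below).  */
  #pragma omp declare simd
  extern double f (double x, float y);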


gcc/ChangeLog:

* config/aarch64/aarch64-protos.h (add_sve_type_attribute): 
Declare.

* config/aarch64/aarch64-sve-builtins.cc (add_sve_type_attribute): Make
visibility global.
* config/aarch64/aarch64.cc (aarch64_fntype_abi): Ensure SVE ABI is
chosen over SIMD ABI if a SVE type is used in return or arguments.
(aarch64_simd_clone_compute_vecsize_and_simdlen): Create VLA simd clone
when no simdlen is provided, according to ABI rules.
(aarch64_simd_clone_adjust): Add '+sve' attribute to SVE simd clones.
(aarch64_simd_clone_adjust_ret_or_param): New.
(TARGET_SIMD_CLONE_ADJUST_RET_OR_PARAM): Define.
* omp-simd-clone.cc (simd_clone_mangle): Print 'x' for VLA simdlen.
(simd_clone_adjust): Adapt safelen check to be compatible with VLA
simdlen.

gcc/testsuite/ChangeLog:

* c-c++-common/gomp/declare-variant-14.c: Adapt aarch64 scan.
* gfortran.dg/gomp/declare-variant-14.f90: Likewise.
* gcc.target/aarch64/declare-simd-1.c: Remove warning checks where no
longer necessary.
* gcc.target/aarch64/declare-simd-2.c: Add SVE clone scan.

diff --git a/gcc/config/aarch64/aarch64-protos.h 
b/gcc/config/aarch64/aarch64-protos.h
index 
70303d6fd953e0c397b9138ede8858c2db2e53db..d7888c95a4999fad1a4c55d5cd2287c2040302c8
 100644
--- a/gcc/config/aarch64/aarch64-protos.h
+++ b/gcc/config/aarch64/aarch64-protos.h
@@ -1001,6 +1001,8 @@ namespace aarch64_sve {
 #ifdef GCC_TARGET_H
   bool verify_type_context (location_t, type_context_kind, const_tree, bool);
 #endif
+ void add_sve_type_attribute (tree, unsigned int, unsigned int,
+ const char *, const char *);
 }
 
 extern void aarch64_split_combinev16qi (rtx operands[3]);
diff --git a/gcc/config/aarch64/aarch64-sve-builtins.cc 
b/gcc/config/aarch64/aarch64-sve-builtins.cc
index 
161a14edde7c9fb1b13b146cf50463e2d78db264..6f99c438d10daa91b7e3b623c995489f1a8a0f4c
 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins.cc
+++ b/gcc/config/aarch64/aarch64-sve-builtins.cc
@@ -569,14 +569,16 @@ static bool reported_missing_registers_p;
 /* Record that TYPE is an ABI-defined SVE type that contains NUM_ZR SVE vectors
and NUM_PR SVE predicates.  MANGLED_NAME, if nonnull, is the ABI-defined
   mangling of the type.  ACLE_NAME is the <arm_sve.h> name of the type.  */
-static void
+void
 add_sve_type_attribute (tree type, unsigned int num_zr, unsigned int num_pr,
const char *mangled_name, const char *acle_name)
 {
   tree mangled_name_tree
 = (mangled_name ? get_identifier (mangled_name) : NULL_TREE);
+  tree acle_name_tree
+= (acle_name ? get_identifier (acle_name) : NULL_TREE);
 
-  tree value = tree_cons (NULL_TREE, get_identifier (acle_name), NULL_TREE);
+  tree value = tree_cons (NULL_TREE, acle_name_tree, NULL_TREE);
   value = tree_cons (NULL_TREE, mangled_name_tree, value);
   value = tree_cons (NULL_TREE, size_int (num_pr), value);
   value = tree_cons (NULL_TREE, size_int (num_zr), value);
diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 
a13d3fba05f9f9d2989b36c681bc77d71e943e0d..492acb9ce081866162faa8dfca777e4cb943797f
 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -4034,13 +4034,13 @@ aarch64_takes_arguments_in_sve_regs_p (const_tree 
fntype)
 static const predefined_function_abi &
 aarch64_fntype_abi (const_tree fntype)
 {
-  if (lookup_attribute ("aarch64_vector_pcs", TYPE_ATTRIBUTES (fntype)))
-return aarch64_simd_abi ();
-
   if (aarch64_returns_value_in_sve_regs_p (fntype)
   || aarch64_takes_arguments_in_sve_regs_p (fntype))
 return aarch64_sve_abi ();
 
+  if (lookup_attribute ("aarch64_vector_pcs", TYPE_ATTRIBUTES (fntype)))
+return aarch64_simd_abi ();
+
   return default_function_abi;
 }
 
@@ -27327,7 +27327,7 @@ aarch64_simd_clone_compute_vecsize_and_simdlen (struct 
cgraph_node *node,
int num, bool explicit_p)
 {
   tree t, ret_type;
-  unsigned int nds_elt_bits;
+  unsigned int nds_elt_bits, wds_elt_bits;
   int count;
   unsigned HOST_WIDE_INT const_simdlen;
   poly_uint64 vec_bits;
@@ -27374,10 +27374,14 @@ aarch64_simd_clone_compute_vecsize_and_simdlen 
(struct cgraph_node *node,
   if (TREE_CODE (ret_type) != VOID_TYPE)
 {
   nds_elt_bits = lane_size (SIMD_CLONE_ARG_TYPE_VECTOR, ret_type);
+  wds_elt_bits = nds_elt_bits;
   vec_elts.safe_push (std::make_pair (ret_type, nds_elt_bits));
 }
   else
-nds_elt_bits = POINTER_SIZE;
+{
+  nds_elt_bits = POINTER_SIZE;
+  wds_elt_bits = 0;
+}
 
   int i;
   tree type_arg_types = TYPE_ARG_TYPES (TREE_TYPE (node->decl));
@@ -27385,30 +27389,36 @@ 

[PATCH 7/8] vect: Add TARGET_SIMD_CLONE_ADJUST_RET_OR_PARAM

2023-08-30 Thread Andre Vieira (lists) via Gcc-patches
This patch adds a new target hook that lets us adapt the types of the 
return value and parameters of simd clones.  We use this in two ways: the 
first is to make sure we can create valid SVE types, including the 
SVE type attribute, when creating an SVE simd clone, even when the target 
options do not support SVE.  We are following the same behaviour seen 
with x86, which creates simd clones according to the ABI rules when no 
simdlen is provided, even if that simdlen is not supported by the 
current target options.  Note that this doesn't mean the simd clone will 
be used in auto-vectorization.
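
As a purely hypothetical sketch of what a backend implementation of the new 
hook could look like (the function name is invented and the no-op body only 
shows the shape of the interface; it is not taken from the patch):

  /* Hypothetical target hook implementation: receive the return or
     parameter TYPE chosen for simd clone NODE and optionally replace it
     (e.g. with an ABI-mandated vector type) before it is installed.
     This sketch leaves the type unchanged.  */
  static tree
  example_simd_clone_adjust_ret_or_param (struct cgraph_node *ARG_UNUSED (node),
					  tree type,
					  bool ARG_UNUSED (is_mask))
  {
    return type;
  }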


gcc/ChangeLog:

(TARGET_SIMD_CLONE_ADJUST_RET_OR_PARAM): Define.
* doc/tm.texi (TARGET_SIMD_CLONE_ADJUST_RET_OR_PARAM): Document.
* doc/tm.texi.in (TARGET_SIMD_CLONE_ADJUST_RET_OR_PARAM): New.
* omp-simd-clone.cc (simd_adjust_return_type): Call new hook.
(simd_clone_adjust_argument_types): Likewise.
* target.def (adjust_ret_or_param): New hook.
* targhooks.cc (default_simd_clone_adjust_ret_or_param): New.
* targhooks.h (default_simd_clone_adjust_ret_or_param): New.

diff --git a/gcc/doc/tm.texi b/gcc/doc/tm.texi
index 
bde22e562ebb9069122eb3b142ab8f4a4ae56a3a..b80c09ec36d51f1bb55b14229f46207fb4457223
 100644
--- a/gcc/doc/tm.texi
+++ b/gcc/doc/tm.texi
@@ -6343,6 +6343,9 @@ non-negative number if it is usable.  In that case, the 
smaller the number is,
 the more desirable it is to use it.
 @end deftypefn
 
+@deftypefn {Target Hook} tree TARGET_SIMD_CLONE_ADJUST_RET_OR_PARAM (struct 
cgraph_node *@var{}, @var{tree}, @var{bool})
+If defined, this hook should adjust the type of the return or parameter
+@var{type} to be used by the simd clone @var{node}.
 @end deftypefn
 
 @deftypefn {Target Hook} int TARGET_SIMT_VF (void)
diff --git a/gcc/doc/tm.texi.in b/gcc/doc/tm.texi.in
index 
4ac96dc357d35e0e57bb43a41d1b1a4f66d05946..7496a32d84f7c422fe7ea88215ee72f3c354a3f4
 100644
--- a/gcc/doc/tm.texi.in
+++ b/gcc/doc/tm.texi.in
@@ -4211,6 +4211,8 @@ address;  but often a machine-dependent strategy can 
generate better code.
 
 @hook TARGET_SIMD_CLONE_USABLE
 
+@hook TARGET_SIMD_CLONE_ADJUST_RET_OR_PARAM
+
 @hook TARGET_SIMT_VF
 
 @hook TARGET_OMP_DEVICE_KIND_ARCH_ISA
diff --git a/gcc/omp-simd-clone.cc b/gcc/omp-simd-clone.cc
index 
ef0b9b48c7212900023bc0eaebca5e1f9389db77..c2fd4d3be878e56b6394e34097d2de826a0ba1ff
 100644
--- a/gcc/omp-simd-clone.cc
+++ b/gcc/omp-simd-clone.cc
@@ -736,6 +736,7 @@ simd_clone_adjust_return_type (struct cgraph_node *node)
   t = build_array_type_nelts (t, exact_div (node->simdclone->simdlen,
veclen));
 }
+  t = targetm.simd_clone.adjust_ret_or_param (node, t, false);
   TREE_TYPE (TREE_TYPE (fndecl)) = t;
   if (!node->definition)
 return NULL_TREE;
@@ -748,6 +749,7 @@ simd_clone_adjust_return_type (struct cgraph_node *node)
 
   tree atype = build_array_type_nelts (orig_rettype,
   node->simdclone->simdlen);
+  atype = targetm.simd_clone.adjust_ret_or_param (node, atype, false);
   if (maybe_ne (veclen, node->simdclone->simdlen))
 return build1 (VIEW_CONVERT_EXPR, atype, t);
 
@@ -880,6 +882,8 @@ simd_clone_adjust_argument_types (struct cgraph_node *node)
   ? IDENTIFIER_POINTER (DECL_NAME (parm))
   : NULL, parm_type, sc->simdlen);
}
+  adj.type = targetm.simd_clone.adjust_ret_or_param (node, adj.type,
+false);
   vec_safe_push (new_params, adj);
 }
 
@@ -912,6 +916,8 @@ simd_clone_adjust_argument_types (struct cgraph_node *node)
adj.type = build_vector_type (pointer_sized_int_node, veclen);
   else
adj.type = build_vector_type (base_type, veclen);
+  adj.type = targetm.simd_clone.adjust_ret_or_param (node, adj.type,
+true);
   vec_safe_push (new_params, adj);
 
   k = vector_unroll_factor (sc->simdlen, veclen);
@@ -937,6 +943,7 @@ simd_clone_adjust_argument_types (struct cgraph_node *node)
sc->args[i].simd_array = NULL_TREE;
}
   sc->args[i].orig_type = base_type;
+  sc->args[i].vector_type = adj.type;
   sc->args[i].arg_type = SIMD_CLONE_ARG_TYPE_MASK;
   sc->args[i].vector_type = adj.type;
 }
diff --git a/gcc/target.def b/gcc/target.def
index 
6a0cbc454526ee29011451b570354bf234a4eabd..665083ce035da03b40b15f23684ccdacce33c9d3
 100644
--- a/gcc/target.def
+++ b/gcc/target.def
@@ -1650,6 +1650,13 @@ non-negative number if it is usable.  In that case, the 
smaller the number is,\n
 the more desirable it is to use it.",
 int, (struct cgraph_node *, machine_mode), NULL)
 
+DEFHOOK
+(adjust_ret_or_param,
+"If defined, this hook should adjust the type of the return or parameter\n\
+@var{type} to be used by the simd clone @var{node}.",
+tree, (struct cgraph_node *, tree, 

Re: [PATCH 6/8] vect: Add vector_mode parameter to simd_clone_usable

2023-08-30 Thread Andre Vieira (lists) via Gcc-patches

Forgot to CC this one to maintainers...

On 30/08/2023 10:14, Andre Vieira (lists) via Gcc-patches wrote:
This patch adds a machine_mode parameter to the TARGET_SIMD_CLONE_USABLE 
hook to enable rejecting SVE modes when the target architecture does not 
support SVE.


gcc/ChangeLog:

 * config/aarch64/aarch64.cc (aarch64_simd_clone_usable): Add mode
 parameter and use it to reject SVE modes when the target architecture
 does not support SVE.
 * config/gcn/gcn.cc (gcn_simd_clone_usable): Add unused mode 
parameter.

 * config/i386/i386.cc (ix86_simd_clone_usable): Likewise.
 * doc/tm.texi (TARGET_SIMD_CLONE_USABLE): Document new parameter.
 * target.def (usable): Add new parameter.
 * tree-vect-stmts.cc (vectorizable_simd_clone_call): Pass vector mode
 to the TARGET_SIMD_CLONE_USABLE hook.


[PATCH 6/8] vect: Add vector_mode parameter to simd_clone_usable

2023-08-30 Thread Andre Vieira (lists) via Gcc-patches
This patch adds a machine_mode parameter to the TARGET_SIMD_CLONE_USABLE 
hook to enable rejecting SVE modes when the target architecture does not 
support SVE.


gcc/ChangeLog:

* config/aarch64/aarch64.cc (aarch64_simd_clone_usable): Add mode
parameter and use it to reject SVE modes when the target architecture does
not support SVE.
* config/gcn/gcn.cc (gcn_simd_clone_usable): Add unused mode parameter.
* config/i386/i386.cc (ix86_simd_clone_usable): Likewise.
* doc/tm.texi (TARGET_SIMD_CLONE_USABLE): Document new parameter.
* target.def (usable): Add new parameter.
* tree-vect-stmts.cc (vectorizable_simd_clone_call): Pass vector mode
to the TARGET_SIMD_CLONE_USABLE hook.

diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 
5fb4c863d875871d6de865e72ce360506a3694d2..a13d3fba05f9f9d2989b36c681bc77d71e943e0d
 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -27498,12 +27498,18 @@ aarch64_simd_clone_adjust (struct cgraph_node *node)
 /* Implement TARGET_SIMD_CLONE_USABLE.  */
 
 static int
-aarch64_simd_clone_usable (struct cgraph_node *node)
+aarch64_simd_clone_usable (struct cgraph_node *node, machine_mode vector_mode)
 {
   switch (node->simdclone->vecsize_mangle)
 {
 case 'n':
-  if (!TARGET_SIMD)
+  if (!TARGET_SIMD
+ || aarch64_sve_mode_p (vector_mode))
+   return -1;
+  return 0;
+case 's':
+  if (!TARGET_SVE
+ || !aarch64_sve_mode_p (vector_mode))
return -1;
   return 0;
 default:
diff --git a/gcc/config/gcn/gcn.cc b/gcc/config/gcn/gcn.cc
index 
02f4dedec4214b1eea9e6f5057ed57d7e0db316a..252676273f06500c99df6ae251f0406c618df891
 100644
--- a/gcc/config/gcn/gcn.cc
+++ b/gcc/config/gcn/gcn.cc
@@ -5599,7 +5599,8 @@ gcn_simd_clone_adjust (struct cgraph_node *ARG_UNUSED 
(node))
 /* Implement TARGET_SIMD_CLONE_USABLE.  */
 
 static int
-gcn_simd_clone_usable (struct cgraph_node *ARG_UNUSED (node))
+gcn_simd_clone_usable (struct cgraph_node *ARG_UNUSED (node),
+  machine_mode ARG_UNUSED (mode))
 {
   /* We don't need to do anything here because
  gcn_simd_clone_compute_vecsize_and_simdlen currently only returns one
diff --git a/gcc/config/i386/i386.cc b/gcc/config/i386/i386.cc
index 
5d57726e22cea8bcaa8ac8b1b25ac420193f39bb..84f0d5a7cb679e6be92001f59802276635506e97
 100644
--- a/gcc/config/i386/i386.cc
+++ b/gcc/config/i386/i386.cc
@@ -24379,7 +24379,8 @@ ix86_simd_clone_compute_vecsize_and_simdlen (struct 
cgraph_node *node,
slightly less desirable, etc.).  */
 
 static int
-ix86_simd_clone_usable (struct cgraph_node *node)
+ix86_simd_clone_usable (struct cgraph_node *node,
+   machine_mode mode ATTRIBUTE_UNUSED)
 {
   switch (node->simdclone->vecsize_mangle)
 {
diff --git a/gcc/doc/tm.texi b/gcc/doc/tm.texi
index 
95ba56e05ae4a0f11639cc4a21d6736c53ad5ef1..bde22e562ebb9069122eb3b142ab8f4a4ae56a3a
 100644
--- a/gcc/doc/tm.texi
+++ b/gcc/doc/tm.texi
@@ -6336,11 +6336,13 @@ This hook should add implicit 
@code{attribute(target("..."))} attribute
 to SIMD clone @var{node} if needed.
 @end deftypefn
 
-@deftypefn {Target Hook} int TARGET_SIMD_CLONE_USABLE (struct cgraph_node 
*@var{})
+@deftypefn {Target Hook} int TARGET_SIMD_CLONE_USABLE (struct cgraph_node 
*@var{}, @var{machine_mode})
 This hook should return -1 if SIMD clone @var{node} shouldn't be used
-in vectorized loops in current function, or non-negative number if it is
-usable.  In that case, the smaller the number is, the more desirable it is
-to use it.
+in vectorized loops being vectorized with mode @var{m} in current function, or
+non-negative number if it is usable.  In that case, the smaller the number is,
+the more desirable it is to use it.
+@end deftypefn
+
 @end deftypefn
 
 @deftypefn {Target Hook} int TARGET_SIMT_VF (void)
diff --git a/gcc/target.def b/gcc/target.def
index 
7d684296c17897b4ceecb31c5de1ae8665a8228e..6a0cbc454526ee29011451b570354bf234a4eabd
 100644
--- a/gcc/target.def
+++ b/gcc/target.def
@@ -1645,10 +1645,11 @@ void, (struct cgraph_node *), NULL)
 DEFHOOK
 (usable,
 "This hook should return -1 if SIMD clone @var{node} shouldn't be used\n\
-in vectorized loops in current function, or non-negative number if it is\n\
-usable.  In that case, the smaller the number is, the more desirable it is\n\
-to use it.",
-int, (struct cgraph_node *), NULL)
+in vectorized loops being vectorized with mode @var{m} in current function, 
or\n\
+non-negative number if it is usable.  In that case, the smaller the number 
is,\n\
+the more desirable it is to use it.",
+int, (struct cgraph_node *, machine_mode), NULL)
+
 
 HOOK_VECTOR_END (simd_clone)
 
diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index 
7217f36a250d549b955c874d7c7644d94982b0b5..dc2fc20ef9fe777132308c9e33f7731d62717466
 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -4195,7 +4195,7 @@ 

[PATCH 5/8] vect: Use inbranch simdclones in masked loops

2023-08-30 Thread Andre Vieira (lists) via Gcc-patches
This patch enables the compiler to use inbranch simdclones when 
generating masked loops in autovectorization.
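
For illustration (a hedged example, not taken from the patch or its 
testsuite): with an in-branch clone like the one below, the vectorizer can 
now keep the loop fully masked and feed the loop mask into the clone's mask 
argument instead of giving up on partial vectors:

  #pragma omp declare simd inbranch
  extern double bar (double x);

  void foo (double *a, double *b, int n)
  {
    #pragma omp simd
    for (int i = 0; i < n; i++)
      a[i] = bar (b[i]);
  }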


gcc/ChangeLog:

* omp-simd-clone.cc (simd_clone_adjust_argument_types): Make function
compatible with mask parameters in clone.
* tree-vect-stmts.cc (vect_convert): New helper function.
(vect_build_all_ones_mask): Allow vector boolean typed masks.
(vectorizable_simd_clone_call): Enable the use of masked clones in
fully masked loops.

diff --git a/gcc/omp-simd-clone.cc b/gcc/omp-simd-clone.cc
index 
a42643400ddcf10961633448b49d4caafb999f12..ef0b9b48c7212900023bc0eaebca5e1f9389db77
 100644
--- a/gcc/omp-simd-clone.cc
+++ b/gcc/omp-simd-clone.cc
@@ -807,8 +807,14 @@ simd_clone_adjust_argument_types (struct cgraph_node *node)
 {
   ipa_adjusted_param adj;
   memset (&adj, 0, sizeof (adj));
-  tree parm = args[i];
-  tree parm_type = node->definition ? TREE_TYPE (parm) : parm;
+  tree parm = NULL_TREE;
+  tree parm_type = NULL_TREE;
+  if(i < args.length())
+   {
+ parm = args[i];
+ parm_type = node->definition ? TREE_TYPE (parm) : parm;
+   }
+
   adj.base_index = i;
   adj.prev_clone_index = i;
 
@@ -1547,7 +1553,7 @@ simd_clone_adjust (struct cgraph_node *node)
  mask = gimple_assign_lhs (g);
  g = gimple_build_assign (make_ssa_name (TREE_TYPE (mask)),
   BIT_AND_EXPR, mask,
-  build_int_cst (TREE_TYPE (mask), 1));
+  build_one_cst (TREE_TYPE (mask)));
  gsi_insert_after (&gsi, g, GSI_CONTINUE_LINKING);
  mask = gimple_assign_lhs (g);
}
diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index 
664c3b5f7ca48fdb49383fb8a97f407465574479..7217f36a250d549b955c874d7c7644d94982b0b5
 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -1723,6 +1723,20 @@ check_load_store_for_partial_vectors (loop_vec_info 
loop_vinfo, tree vectype,
 }
 }
 
+/* Return SSA name of the result of the conversion of OPERAND into type TYPE.
+   The conversion statement is inserted at GSI.  */
+
+static tree
+vect_convert (vec_info *vinfo, stmt_vec_info stmt_info, tree type, tree 
operand,
+ gimple_stmt_iterator *gsi)
+{
+  operand = build1 (VIEW_CONVERT_EXPR, type, operand);
+  gassign *new_stmt = gimple_build_assign (make_ssa_name (type),
+  operand);
+  vect_finish_stmt_generation (vinfo, stmt_info, new_stmt, gsi);
+  return gimple_get_lhs (new_stmt);
+}
+
 /* Return the mask input to a masked load or store.  VEC_MASK is the vectorized
form of the scalar mask condition and LOOP_MASK, if nonnull, is the mask
that needs to be applied to all loads and stores in a vectorized loop.
@@ -2666,7 +2680,8 @@ vect_build_all_ones_mask (vec_info *vinfo,
 {
   if (TREE_CODE (masktype) == INTEGER_TYPE)
 return build_int_cst (masktype, -1);
-  else if (TREE_CODE (TREE_TYPE (masktype)) == INTEGER_TYPE)
+  else if (VECTOR_BOOLEAN_TYPE_P (masktype)
+  || TREE_CODE (TREE_TYPE (masktype)) == INTEGER_TYPE)
 {
   tree mask = build_int_cst (TREE_TYPE (masktype), -1);
   mask = build_vector_from_val (masktype, mask);
@@ -4018,7 +4033,7 @@ vectorizable_simd_clone_call (vec_info *vinfo, 
stmt_vec_info stmt_info,
   size_t i, nargs;
   tree lhs, rtype, ratype;
   vec<constructor_elt, va_gc> *ret_ctor_elts = NULL;
-  int arg_offset = 0;
+  int masked_call_offset = 0;
 
   /* Is STMT a vectorizable call?   */
   gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt);
@@ -4033,7 +4048,7 @@ vectorizable_simd_clone_call (vec_info *vinfo, 
stmt_vec_info stmt_info,
   gcc_checking_assert (TREE_CODE (fndecl) == ADDR_EXPR);
   fndecl = TREE_OPERAND (fndecl, 0);
   gcc_checking_assert (TREE_CODE (fndecl) == FUNCTION_DECL);
-  arg_offset = 1;
+  masked_call_offset = 1;
 }
   if (fndecl == NULL_TREE)
 return false;
@@ -4065,7 +4080,7 @@ vectorizable_simd_clone_call (vec_info *vinfo, 
stmt_vec_info stmt_info,
 return false;
 
   /* Process function arguments.  */
-  nargs = gimple_call_num_args (stmt) - arg_offset;
+  nargs = gimple_call_num_args (stmt) - masked_call_offset;
 
   /* Bail out if the function has zero arguments.  */
   if (nargs == 0)
@@ -4083,7 +4098,7 @@ vectorizable_simd_clone_call (vec_info *vinfo, 
stmt_vec_info stmt_info,
   thisarginfo.op = NULL_TREE;
   thisarginfo.simd_lane_linear = false;
 
-  op = gimple_call_arg (stmt, i + arg_offset);
+  op = gimple_call_arg (stmt, i + masked_call_offset);
   if (!vect_is_simple_use (op, vinfo, &thisarginfo.dt,
			    &thisarginfo.vectype)
  || thisarginfo.dt == vect_uninitialized_def)
@@ -4161,14 +4176,6 @@ vectorizable_simd_clone_call (vec_info *vinfo, 
stmt_vec_info stmt_info,
 }
 
   poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
-  if (!vf.is_constant ())
-{
-  if (dump_enabled_p ())
-   dump_printf_loc 

[PATCH 4/8] vect: don't allow fully masked loops with non-masked simd clones [PR 110485]

2023-08-30 Thread Andre Vieira (lists) via Gcc-patches
When analyzing a loop and choosing a simdclone to use, it is possible to 
choose a simdclone that cannot be used 'inbranch' for a loop that can 
use partial vectors.  This may lead to the vectorizer deciding to use 
partial vectors, which are not supported for notinbranch simd clones. 
This patch fixes that by disabling the use of partial vectors once a 
notinbranch simd clone has been selected.


gcc/ChangeLog:

PR tree-optimization/110485
* tree-vect-stmts.cc (vectorizable_simd_clone_call): Disable partial
vectors usage if a notinbranch simdclone has been selected.

gcc/testsuite/ChangeLog:

* gcc.dg/gomp/pr110485.c: New test.

diff --git a/gcc/testsuite/gcc.dg/gomp/pr110485.c 
b/gcc/testsuite/gcc.dg/gomp/pr110485.c
new file mode 100644
index 
..ba6817a127f40246071e32ccebf692cc4d121d15
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/gomp/pr110485.c
@@ -0,0 +1,19 @@
+/* PR 110485 */
+/* { dg-do compile } */
+/* { dg-additional-options "-Ofast -fdump-tree-vect-details" } */
+/* { dg-additional-options "-march=znver4 --param=vect-partial-vector-usage=1" 
{ target x86_64-*-* } } */
+#pragma omp declare simd notinbranch uniform(p)
+extern double __attribute__ ((const)) bar (double a, double p);
+
+double a[1024];
+double b[1024];
+
+void foo (int n)
+{
+  #pragma omp simd
+  for (int i = 0; i < n; ++i)
+a[i] = bar (b[i], 71.2);
+}
+
+/* { dg-final { scan-tree-dump-not "MASK_LOAD" "vect" } } */
+/* { dg-final { scan-tree-dump "can't use a fully-masked loop because a 
non-masked simd clone was selected." "vect" { target x86_64-*-* } } } */
diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index 
35207de7acb410358220dbe8d1af82215b5091bf..664c3b5f7ca48fdb49383fb8a97f407465574479
 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -4349,6 +4349,17 @@ vectorizable_simd_clone_call (vec_info *vinfo, 
stmt_vec_info stmt_info,
   ? boolean_true_node : boolean_false_node;
STMT_VINFO_SIMD_CLONE_INFO (stmt_info).safe_push (sll);
  }
+
+  if (!bestn->simdclone->inbranch)
+   {
+ if (dump_enabled_p ()
+ && LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P (loop_vinfo))
+   dump_printf_loc (MSG_NOTE, vect_location,
+"can't use a fully-masked loop because a"
+" non-masked simd clone was selected.\n");
+ LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P (loop_vinfo) = false;
+   }
+
   STMT_VINFO_TYPE (stmt_info) = call_simd_clone_vec_info_type;
   DUMP_VECT_SCOPE ("vectorizable_simd_clone_call");
 /*  vect_model_simple_cost (vinfo, stmt_info, ncopies,


[Patch 3/8] vect: Fix vect_get_smallest_scalar_type for simd clones

2023-08-30 Thread Andre Vieira (lists) via Gcc-patches
The vect_get_smallest_scalar_type helper function was using any argument 
to a simd clone call when trying to determine the smallest scalar type 
that would be vectorized.  This included the function pointer type in a 
MASK_CALL for instance, and would result in the wrong type being 
selected.  Instead this patch special-cases simd clone calls and uses 
only the scalar types of the original function that get transformed into 
vector types.
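
A hedged source-level illustration of the situation (this example is an 
assumption for clarity, not a reduced test case from the patch): for the 
in-branch call below, the smallest vectorized scalar type should come from 
the real argument and return types (double), not from the function-pointer 
operand of the masked internal call that wraps the clone call:

  #pragma omp declare simd inbranch
  extern double foo (double x);

  void g (double *a, double *b, int *c, int n)
  {
    #pragma omp simd
    for (int i = 0; i < n; i++)
      a[i] = c[i] ? foo (b[i]) : 0.0;
  }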


gcc/ChangeLog:

* tree-vect-data-refs.cc (vect_get_smallest_scalar_type): Special case
simd clone calls and only use types that are mapped to vectors.
* tree-vect-stmts.cc (simd_clone_call_p): New helper function.
* tree-vectorizer.h (simd_clone_call_p): Declare new function.

gcc/testsuite/ChangeLog:

* gcc.dg/vect/vect-simd-clone-16f.c: Remove unnecessary differentiation
between targets with different pointer sizes.
* gcc.dg/vect/vect-simd-clone-17f.c: Likewise.
* gcc.dg/vect/vect-simd-clone-18f.c: Likewise.

diff --git a/gcc/testsuite/gcc.dg/vect/vect-simd-clone-16f.c 
b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-16f.c
index 
574698d3e133ecb8700e698fa42a6b05dd6b8a18..7cd29e894d0502a59fadfe67db2db383133022d3
 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-simd-clone-16f.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-16f.c
@@ -7,9 +7,8 @@
 #include "vect-simd-clone-16.c"
 
 /* Ensure the the in-branch simd clones are used on targets that support them.
-   Some targets use pairs of vectors and do twice the calls.  */
-/* { dg-final { scan-tree-dump-times {[\n\r] [^\n]* = foo\.simdclone} 2 "vect" 
{ target { ! { { i?86-*-* x86_64-*-* } && { ! lp64 } } } } } } */
-/* { dg-final { scan-tree-dump-times {[\n\r] [^\n]* = foo\.simdclone} 4 "vect" 
{ target { { i?86*-*-* x86_64-*-* } && { ! lp64 } } } } } */
+ */
+/* { dg-final { scan-tree-dump-times {[\n\r] [^\n]* = foo\.simdclone} 2 "vect" 
} } */
 
 /* The LTO test produces two dump files and we scan the wrong one.  */
 /* { dg-skip-if "" { *-*-* } { "-flto" } { "" } } */
diff --git a/gcc/testsuite/gcc.dg/vect/vect-simd-clone-17f.c 
b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-17f.c
index 
8bb6d19301a67a3eebce522daaf7d54d88f708d7..177521dc44531479fca1f1a1a0f2010f30fa3fb5
 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-simd-clone-17f.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-17f.c
@@ -7,9 +7,8 @@
 #include "vect-simd-clone-17.c"
 
 /* Ensure the the in-branch simd clones are used on targets that support them.
-   Some targets use pairs of vectors and do twice the calls.  */
-/* { dg-final { scan-tree-dump-times {[\n\r] [^\n]* = foo\.simdclone} 2 "vect" 
{ target { ! { { i?86-*-* x86_64-*-* } && { ! lp64 } } } } } } */
-/* { dg-final { scan-tree-dump-times {[\n\r] [^\n]* = foo\.simdclone} 4 "vect" 
{ target { { i?86*-*-* x86_64-*-* } && { ! lp64 } } } } } */
+ */
+/* { dg-final { scan-tree-dump-times {[\n\r] [^\n]* = foo\.simdclone} 2 "vect" 
} } */
 
 /* The LTO test produces two dump files and we scan the wrong one.  */
 /* { dg-skip-if "" { *-*-* } { "-flto" } { "" } } */
diff --git a/gcc/testsuite/gcc.dg/vect/vect-simd-clone-18f.c 
b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-18f.c
index 
d34f23f4db8e9c237558cc22fe66b7e02b9e6c20..4dd51381d73c0c7c8ec812f24e5054df038059c5
 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-simd-clone-18f.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-18f.c
@@ -7,9 +7,8 @@
 #include "vect-simd-clone-18.c"
 
 /* Ensure the the in-branch simd clones are used on targets that support them.
-   Some targets use pairs of vectors and do twice the calls.  */
-/* { dg-final { scan-tree-dump-times {[\n\r] [^\n]* = foo\.simdclone} 2 "vect" 
{ target { ! { { i?86-*-* x86_64-*-* } && { ! lp64 } } } } } } */
-/* { dg-final { scan-tree-dump-times {[\n\r] [^\n]* = foo\.simdclone} 4 "vect" 
{ target { { i?86*-*-* x86_64-*-* } && { ! lp64 } } } } } */
+ */
+/* { dg-final { scan-tree-dump-times {[\n\r] [^\n]* = foo\.simdclone} 2 "vect" 
} } */
 
 /* The LTO test produces two dump files and we scan the wrong one.  */
 /* { dg-skip-if "" { *-*-* } { "-flto" } { "" } } */
diff --git a/gcc/tree-vect-data-refs.cc b/gcc/tree-vect-data-refs.cc
index 
a3570c45b5209281ac18c1220c3b95398487f389..1bdbea232afc6facddac23269ee3da033eb1ed50
 100644
--- a/gcc/tree-vect-data-refs.cc
+++ b/gcc/tree-vect-data-refs.cc
@@ -119,6 +119,7 @@ tree
 vect_get_smallest_scalar_type (stmt_vec_info stmt_info, tree scalar_type)
 {
   HOST_WIDE_INT lhs, rhs;
+  cgraph_node *node;
 
   /* During the analysis phase, this function is called on arbitrary
  statements that might not have scalar results.  */
@@ -145,6 +146,23 @@ vect_get_smallest_scalar_type (stmt_vec_info stmt_info, 
tree scalar_type)
scalar_type = rhs_type;
}
 }
+  else if (simd_clone_call_p (stmt_info->stmt, &node))
+{
+  auto clone = node->simd_clones->simdclone;
+  for (unsigned int i = 0; i < clone->nargs; ++i)
+   {
+ if (clone->args[i].arg_type == 

[Patch 2/8] parloops: Allow poly nit and bound

2023-08-30 Thread Andre Vieira (lists) via Gcc-patches
Teach parloops how to handle a poly nit and bound ahead of the changes 
to enable non-constant simdlen.


gcc/ChangeLog:

* tree-parloops.cc (try_transform_to_exit_first_loop_alt): Accept
poly NIT and ALT_BOUND.

diff --git a/gcc/tree-parloops.cc b/gcc/tree-parloops.cc
index 
a35f3d5023b06e5ef96eb4222488fcb34dd7bd45..cf713e53d712fb5ad050e274f373adba5a90c5a7
 100644
--- a/gcc/tree-parloops.cc
+++ b/gcc/tree-parloops.cc
@@ -2531,14 +2531,16 @@ try_transform_to_exit_first_loop_alt (class loop *loop,
   tree nit_type = TREE_TYPE (nit);
 
   /* Figure out whether nit + 1 overflows.  */
-  if (TREE_CODE (nit) == INTEGER_CST)
+  if (TREE_CODE (nit) == INTEGER_CST
+  || TREE_CODE (nit) == POLY_INT_CST)
 {
   if (!tree_int_cst_equal (nit, TYPE_MAX_VALUE (nit_type)))
{
  alt_bound = fold_build2_loc (UNKNOWN_LOCATION, PLUS_EXPR, nit_type,
   nit, build_one_cst (nit_type));
 
- gcc_assert (TREE_CODE (alt_bound) == INTEGER_CST);
+ gcc_assert (TREE_CODE (alt_bound) == INTEGER_CST
+ || TREE_CODE (alt_bound) == POLY_INT_CST);
  transform_to_exit_first_loop_alt (loop, reduction_list, alt_bound);
  return true;
}


[PATCH 1/8] parloops: Copy target and optimizations when creating a function clone

2023-08-30 Thread Andre Vieira (lists) via Gcc-patches


SVE simd clones need to be compiled with an SVE target enabled or the 
argument types will not be created properly. To achieve this we need to 
copy DECL_FUNCTION_SPECIFIC_TARGET from the original function 
declaration to the clones.  I decided it was probably also a good idea 
to copy DECL_FUNCTION_SPECIFIC_OPTIMIZATION in case the original 
function is meant to be compiled with specific optimization options.
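
As a hedged illustration of why this matters (the function and attribute 
below are an assumption, not taken from the patch): if the original function 
carries an SVE target attribute, the function clone that parloops outlines 
from it needs the same DECL_FUNCTION_SPECIFIC_TARGET, otherwise SVE types 
used inside it cannot be created properly:

  /* Hypothetical example: the clone outlined from this function must
     inherit the "+sve" target setting of its parent.  */
  __attribute__ ((target ("+sve")))
  void work (double *a, double *b, int n)
  {
    for (int i = 0; i < n; i++)
      a[i] += b[i];
  }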


gcc/ChangeLog:

* tree-parloops.cc (create_loop_fn): Copy specific target and
optimization options to clone.

diff --git a/gcc/tree-parloops.cc b/gcc/tree-parloops.cc
index 
e495bbd65270bdf90bae2c4a2b52777522352a77..a35f3d5023b06e5ef96eb4222488fcb34dd7bd45
 100644
--- a/gcc/tree-parloops.cc
+++ b/gcc/tree-parloops.cc
@@ -2203,6 +2203,11 @@ create_loop_fn (location_t loc)
   DECL_CONTEXT (t) = decl;
   TREE_USED (t) = 1;
   DECL_ARGUMENTS (decl) = t;
+  DECL_FUNCTION_SPECIFIC_TARGET (decl)
+= DECL_FUNCTION_SPECIFIC_TARGET (act_cfun->decl);
+  DECL_FUNCTION_SPECIFIC_OPTIMIZATION (decl)
+= DECL_FUNCTION_SPECIFIC_OPTIMIZATION (act_cfun->decl);
+
 
   allocate_struct_function (decl, false);
 


disable badh checks ?

2023-08-30 Thread lists
I have amavisd-new 2.12.1 with Postfix/Dovecot, running since like forever, all
working well

I'm thinking of disabling bad header checks (seems surplus today..?)

mails with badh get delivered to a 'badh' basket in Maildir

what's the proper way to disable the badh check?

if I delete the badh sub-folder in Maildir - will that then simply flag badh
and deliver to the main inbox, will that do?

is there a way to generate a test badh mail to check/test my setup ?

thanks for any pointers!

# grep badh /etc/amavisd/amavisd.conf

  warnbadhsender   => 1,
@addr_extension_bad_header_maps = ('badh');
# $warnbadhsender,
# $warnvirusrecip, $warnbannedrecip, $warnbadhrecip, (or @warn*recip_maps)




aarch64, vect, omp: Add SVE support for simd clones [PR 96342]

2023-08-30 Thread Andre Vieira (lists) via Gcc-patches

Hi,

This patch series aims to implement support for SVE simd clones when not 
specifying a 'simdlen' clause for AArch64. This patch depends on my 
earlier patch: '[PATCH] aarch64: enable mixed-types for aarch64 simdclones'.


Bootstrapped and regression tested the series on 
aarch64-unknown-linux-gnu and x86_64-pc-linux-gnu. I also tried building 
the patches separately, but that was before some further clean-up 
restructuring, so will do that again prior to pushing.


Andre Vieira (8):

parloops: Copy target and optimizations when creating a function clone
parloops: Allow poly nit and bound
vect: Fix vect_get_smallest_scalar_type for simd clones
vect: don't allow fully masked loops with non-masked simd clones [PR 110485]
vect: Use inbranch simdclones in masked loops
vect: Add vector_mode parameter to simd_clone_usable
vect: Add TARGET_SIMD_CLONE_ADJUST_RET_OR_PARAM
aarch64: Add SVE support for simd clones [PR 96342]

