Re: [PATCH] Fix vect-simd-clone testcase dump scanning

2023-04-14 Thread Andre Vieira (lists) via Gcc-patches
On the other thread I commented that inbranch simd clones are failing for 
AVX512F because it sets mask_mode, for which inbranch clones haven't been 
implemented, and so they are rejected.


On 14/04/2023 11:25, Jakub Jelinek via Gcc-patches wrote:

On Fri, Apr 14, 2023 at 10:15:06AM +, Richard Biener wrote:

Oops.  Indeed target_avx checks whether it can compile something with
-O2 -mavx rather than verifying AVX is actually present.  I've seen scan
failures with -m32/-march=cascadelake on a zen2 host.  I'm not exactly
sure why.


That is strange.  Sure, -march=cascadelake implies -mavx (-mavx512f even),
but it would surprise me if avx_runtime wasn't true on such a host.
But we've been there before; I think cascadelake turns on the vector
epilogues.
In r13-6784 I added --param vect-epilogues-nomask=0 to some testcases
that were affected at that point, but perhaps something else has become
affected since then.  Will have a look.

Jakub



Re: [PATCH] aarch64: Add -mveclibabi=sleefgnu

2023-04-14 Thread Andre Vieira (lists) via Gcc-patches
I have (outdated) RFCs here: 
https://gcc.gnu.org/pipermail/gcc-patches/2023-March/613593.html


I am working on this patch series for stage 1. The list of features I am 
working on is:

* SVE support for #pragma omp declare simd
* Support for simdclone usage in autovec from #pragma omp declare variant
  This gives us a more fine-grained way of defining what is and what is 
not available per function

* Support for use of simdclones in SLP

I am also planning to enable the use of mixed types, which is currently 
disabled for AArch64. It's not a feature I suspect we need for our 
use-case, but it will enable better testing, as we can then enable AArch64 
as a simdclone target in the testsuite.
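
For illustration, the kind of function all of this applies to is something 
like the minimal sketch below (the function itself is made up); GCC already 
emits fixed-width vector clones for such a declaration, and the SVE work 
above would additionally allow scalable clones on AArch64:

#pragma omp declare simd
double dowork (double x)
{
  return x * x + 1.0;
}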


I could try to post some updates to the RFCs; I have been rebasing them 
on top of Andrew Stubbs' latest patch to enable inbranch codegen. Let me 
know if you'd like to see these updates sooner rather than later so you 
can try them out for your use case.


Kind regards,
Andre

On 14/04/2023 10:34, Lou Knauer via Gcc-patches wrote:

-Original Message-
From: Andrew Pinski 
Sent: Friday, April 14, 2023 09:08
To: Lou Knauer 
Cc: gcc-patches@gcc.gnu.org; Etienne Renault 
Subject: Re: [PATCH] aarch64: Add -mveclibabi=sleefgnu

On Fri, Apr 14, 2023 at 12:03 AM Lou Knauer via Gcc-patches
 wrote:


This adds support for the -mveclibabi option to the AArch64 backend of GCC by
implementing the builtin_vectorized_function target hook for AArch64.
The SLEEF Vectorized Math Library's GNUABI interface is used, and
NEON/Advanced SIMD as well as SVE are supported.

This was tested with the GCC testsuite and the llvm-test-suite on an AArch64
host for NEON and SVE, as well as on hand-written benchmarks. Where the
vectorization of builtins was applied successfully in loops bound by the
calls to those builtins, significant (>2x) performance gains can be observed.


This is so wrong; it would be better if you actually just used a header
file instead, specifically the OpenMP vectorization pragmas.

Thanks,
Andrew Pinski



Thank you for your quick response. I do not fully understand your point:
the OpenMP Declare SIMD pragmas are not yet implemented for SVE (here [0]
someone started working on that, but it does not work in its current state).
The `-mveclibabi` flag seems to be the only solution for SVE vectorization of
libm functions from our point of view.

Indeed, a custom header that redirects regular libm function calls to their
Sleef equivalents would be a solution for NEON, since OpenMP Declare SIMD
pragmas are implemented for NEON in GCC. Nonetheless, as far as I can tell,
libmvec is not yet supported for AArch64, so Sleef is unavoidable. I
therefore opted for a solution similar to the one for x86 and the SVML, where
only an additional flag during compilation is needed (instead of having to
modify source code to add includes). From a vectorization-legality perspective,
this strategy also seems more reliable than a redirecting header, since
Sleef functions (even the scalar ones) never set errno, and GCC already
verifies such details when transforming libm calls to builtins.
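
For reference, the redirecting-header alternative mentioned above could be
as small as the sketch below; the exact set of declarations is an assumption
(whatever libm entry points one wants vectorized), and linking against the
SLEEF GNUABI library is still required.  "#pragma omp declare simd" on a
declaration only tells GCC that vector variants following the vector
function ABI naming exist:

/* sleef-simd.h -- hypothetical redirect header (sketch only). */
#pragma omp declare simd notinbranch
double sin (double x);

#pragma omp declare simd notinbranch
float sinf (float x);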

Alternatively, would you prefer a patch that adds SVE support for
#pragma omp declare simd declarations, thus enabling the same header-based
strategy for SVE as for NEON?

Thank you and kind regards,
Lou Knauer

[0]: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96342





gcc/ChangeLog:

 * config/aarch64/aarch64.opt: Add -mveclibabi option.
 * config/aarch64/aarch64-opts.h: Add aarch64_veclibabi enum.
 * config/aarch64/aarch64-protos.h: Add
 aarch64_builtin_vectorized_function declaration.
 * config/aarch64/aarch64.cc: Handle -mveclibabi option and pure
 scalable type info for scalable vectors without "SVE type" attributes.
 * config/aarch64/aarch64-builtins.cc: Add
 aarch64_builtin_vectorized_function definition.
 * doc/invoke.texi: Document -mveclibabi for AArch64 targets.

gcc/testsuite/ChangeLog:

 * gcc.target/aarch64/vect-vecabi-sleefgnu-neon.c: New testcase.
 * gcc.target/aarch64/vect-vecabi-sleefgnu-sve.c: New testcase.
---
  gcc/config/aarch64/aarch64-builtins.cc| 113 ++
  gcc/config/aarch64/aarch64-opts.h |   5 +
  gcc/config/aarch64/aarch64-protos.h   |   3 +
  gcc/config/aarch64/aarch64.cc |  66 ++
  gcc/config/aarch64/aarch64.opt|  15 +++
  gcc/doc/invoke.texi   |  15 +++
  .../aarch64/vect-vecabi-sleefgnu-neon.c   |  16 +++
  .../aarch64/vect-vecabi-sleefgnu-sve.c|  16 +++
  8 files changed, 249 insertions(+)
  create mode 100644 
gcc/testsuite/gcc.target/aarch64/vect-vecabi-sleefgnu-neon.c
  create mode 100644 gcc/testsuite/gcc.target/aarch64/vect-vecabi-sleefgnu-sve.c

diff --git a/gcc/config/aarch64/aarch64-builtins.cc 
b/gcc/config/aarch64/aarch64-builtins.cc
index cc6b7c01fd1..f53fa91b8d0 100644
--- a/gcc/config/aarch64/aarch64-builtins.cc
+++ 

Re: [r13-7135 Regression] FAIL: gcc.dg/vect/vect-simd-clone-18f.c scan-tree-dump-times vect "[\\n\\r] [^\\n]* = foo\\.simdclone" 2 on Linux/x86_64

2023-04-14 Thread Andre Vieira (lists) via Gcc-patches




On 14/04/2023 10:09, Richard Biener wrote:

On Fri, Apr 14, 2023 at 10:43 AM Andre Vieira (lists)
 wrote:


Resending this to everyone (sorry for the double send Richard).

On 14/04/2023 09:15, Andre Vieira (lists) wrote:
  >
  >
  > On 14/04/2023 07:55, Richard Biener wrote:
  >> On Thu, Apr 13, 2023 at 4:25 PM Andre Vieira (lists)
  >>  wrote:
  >>>
  >>>
  >>>
  >>> On 13/04/2023 15:00, Richard Biener wrote:
  >>>> On Thu, Apr 13, 2023 at 3:00 PM Andre Vieira (lists) via Gcc-patches
  >>>>  wrote:
  >>>>>
  >>>>>
  >>>>>
  >>>
  >>> But that's not it, I've been looking at it, and there is code in place
  >>> that does what I expected which is defer the choice of vectype for simd
  >>> clones until vectorizable_simd_clone_call, unfortunately it has a
  >>> mistaken assumption that simdclones don't return :/
  >>
  >> I think that's not it - when the SIMD clone returns a vector we have to
  >> determine the vector type in this function.  We cannot defer this.
  >
  > What's 'this function' here, do you mean we have to determine the
  > vectype in 'vect_get_vector_types_for_stmt' &
  > 'vect_determine_vf_for_stmt' ?


Yes.


Because at that time we don't yet know
  > what clone we will be using, this choice is done inside
  > vectorizable_simd_clone_call. In fact, to choose the simd clone, we need
  > to know the vf as that has to be a multiple of the chosen clone's
  > simdlen. So we simply can't use the simdclone's types (as that depends
  > on the simdlen) to choose the vf because the choice of simdlen depends
  > on the vf. And there was already code in place to handle this,
  > unfortunately that code was wrong and had the wrong assumption that
  > simdclones didn't return (probably was true at some point and bitrotted).


But to compute the VF we need to know the vector types!  We're only
calling vectorizable_* when the VF is final.  That said, the code you quote:


  >>
  >>> see vect_get_vector_types_for_stmt:
  >>> ...
  >>> if (gimple_get_lhs (stmt) == NULL_TREE


is just for the case of a function without return value.  For this case
it's OK to do nothing - 'vectype' is the vector type of all vector defs
a stmt produces.

For calls with a LHS it should fall through to generic code doing
get_vectype_for_scalar_type on the LHS type.


I think that may work, but right now it will still go and look at the 
arguments of the call and use the smallest type among them to adjust the 
nunits (in 'vect_get_vector_types_for_stmt').


In fact (this is just for illustration) if I hack that function like this:
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -12745,8 +12745,11 @@ vect_get_vector_types_for_stmt (vec_info 
*vinfo, stmt_vec_info stmt_info,

   /* The number of units is set according to the smallest scalar
 type (or the largest vector size, but we only support one
 vector size per vectorization).  */
-  scalar_type = vect_get_smallest_scalar_type (stmt_info,
-  TREE_TYPE (vectype));
+  if (simd_clone_call_p (stmt_info->stmt))
+   scalar_type = TREE_TYPE (vectype);
+  else
+   scalar_type = vect_get_smallest_scalar_type (stmt_info,
+TREE_TYPE (vectype));
   if (scalar_type != TREE_TYPE (vectype))
{
  if (dump_enabled_p ())

It will use the same types as before (i.e. as without -m32). Like I explained 
before, -m32 turns the function pointer inside the MASK_CALL into a 32-bit 
pointer, so the smallest size is now 32 bits. This makes it pick V8SI instead of 
the original V4DI (the scalar return type is DImode), changing the VF to 8 and 
thus unrolling the loop, as it needs to make 2 calls, each handling 4 units.






Re: [r13-7135 Regression] FAIL: gcc.dg/vect/vect-simd-clone-18f.c scan-tree-dump-times vect "[\\n\\r] [^\\n]* = foo\\.simdclone" 2 on Linux/x86_64

2023-04-14 Thread Andre Vieira (lists) via Gcc-patches

Resending this to everyone (sorry for the double send Richard).

On 14/04/2023 09:15, Andre Vieira (lists) wrote:
>
>
> On 14/04/2023 07:55, Richard Biener wrote:
>> On Thu, Apr 13, 2023 at 4:25 PM Andre Vieira (lists)
>>  wrote:
>>>
>>>
>>>
>>> On 13/04/2023 15:00, Richard Biener wrote:
>>>> On Thu, Apr 13, 2023 at 3:00 PM Andre Vieira (lists) via Gcc-patches
>>>>  wrote:
>>>>>
>>>>>
>>>>>
>>>
>>> But that's not it, I've been looking at it, and there is code in place
>>> that does what I expected which is defer the choice of vectype for simd
>>> clones until vectorizable_simd_clone_call, unfortunately it has a
>>> mistaken assumption that simdclones don't return :/
>>
>> I think that's not it - when the SIMD clone returns a vector we have to
>> determine the vector type in this function.  We cannot defer this.
>
> What's 'this function' here, do you mean we have to determine the
> vectype in 'vect_get_vector_types_for_stmt' &
> 'vect_determine_vf_for_stmt' ? Because at that time we don't yet know
> what clone we will be using, this choice is done inside
> vectorizable_simd_clone_call. In fact, to choose the simd clone, we need
> to know the vf as that has to be a multiple of the chosen clone's
> simdlen. So we simply can't use the simdclone's types (as that depends
> on the simdlen) to choose the vf because the choice of simdlen depends
> on the vf. And there was already code in place to handle this,
> unfortunately that code was wrong and had the wrong assumption that
> simdclones didn't return (probably was true at some point and bitrotted).
>
>>
>>> see vect_get_vector_types_for_stmt:
>>> ...
>>> if (gimple_get_lhs (stmt) == NULL_TREE
>>> /* MASK_STORE has no lhs, but is ok.  */
>>> && !gimple_call_internal_p (stmt, IFN_MASK_STORE))
>>>   {
>>> if (is_a <gcall *> (stmt))
>>>   {
>>> /* Ignore calls with no lhs.  These must be calls to
>>>#pragma omp simd functions, and what vectorization factor
>>>it really needs can't be determined until
>>>vectorizable_simd_clone_call.  */
>>> if (dump_enabled_p ())
>>>   dump_printf_loc (MSG_NOTE, vect_location,
>>>"defer to SIMD clone analysis.\n");
>>> return opt_result::success ();
>>>   }
>>>
>>> return opt_result::failure_at (stmt,
>>>"not vectorized: irregular
>>> stmt.%G", stmt);
>>>   }
>>> ...
>>>
>>> I'm working on a patch.
>>>>
>>>>> Kind Regards,
>>>>> Andre


Re: [r13-7135 Regression] FAIL: gcc.dg/vect/vect-simd-clone-18f.c scan-tree-dump-times vect "[\\n\\r] [^\\n]* = foo\\.simdclone" 2 on Linux/x86_64

2023-04-13 Thread Andre Vieira (lists) via Gcc-patches




On 13/04/2023 15:00, Richard Biener wrote:

On Thu, Apr 13, 2023 at 3:00 PM Andre Vieira (lists) via Gcc-patches
 wrote:




On 13/04/2023 11:01, Andrew Stubbs wrote:

Hi Andre,

I don't have a cascadelake device to test on, nor any knowledge about
what makes it different from regular x86_64.


Not sure you need one, but yeah I don't know either, it looks like it
fails because:
in-branch vector clones are not yet supported for integer mask modes.

A quick look tells me this is because mask_mode is not VOIDmode.
i386.cc's TARGET_SIMD_CLONE_COMPUTE_VECSIZE_AND_SIMDLEN will set
mask_mode to either DI or SI mode when TARGET_AVX512F. So I suspect
cascadelake is TARGET_AVX512F.

This is where I bail out as I really don't want to dive into the target
specific simd clone handling of x86 ;)



If the cascadelake device is supposed to work the same as other x86_64
devices for these vectors then the test has found a bug in the compiler
and you should be looking to fix that, not fudge the testcase.

Alternatively, if the device's capabilities really are different and the
tests should behave differently, then the actual expectations need to be
encoded in the dejagnu directives. If you can't tell the difference by
looking at the "x86_64*-*-*" target selector alone then the correct
solution is to invent a new "effective-target" selector. There are lots
of examples of using these throughout the testsuite (you could use
dg-require-effective-target to disable the whole testcase, or just use
the name in the scan-tree-dump-times directive to customise the
expectations), and the definitions can be found in the
lib/target-supports.exp and lib/target-supports-dg.exp scripts. Some are
fixed expressions and some run the compiler to probe the configuration,
but in this case you probably want to do something with "check-flags".


Even though I agree with you, I'm not the right person to do this
digging for such target specific stuff. So for now I'd probably suggest
xfailing this for avx512f.


For the unroll problem, you can probably tweak the optimization options
to disable that, the same as has been done for the epilogues feature
that had the same problem.


I mistook the current behaviour for unrolling; it's actually because of
a latent bug. The vectorizer calls `vect_get_smallest_scalar_type` to
determine the vectype of a stmt. For a function like foo, which has the
same type (long long) everywhere, this wouldn't be a problem; however,
because you transformed it into a MASK_CALL, it has a function pointer
(which is 32-bit with -m32) that now becomes the 'smallest' type.

This is all a red herring though: I don't think we should be calling
this function for potential simdclone calls, as the type on which the
veclen is based is not necessarily the 'smallest' type. And some arguments
(like uniform and linear) should be ignored anyway as they won't be mapped
to vectors.  So I do think this might have been broken even before your
changes, but it needs further investigation.

Since these are new tests for a new feature, I don't really understand
why this is classed as a regression?

Andrew

P.S. there was a commit to these tests in the last few days, so make
sure you pull that before making changes.


The latest commit to these tests was mine; it's the one Haochen is
reporting this regression against. My commit was to fix the issue richi
had introduced that was preventing the feature you introduced from
working. The reason nobody noticed was that the tests you introduced
didn't actually test your feature: since you didn't specify 'inbranch',
the omp declare simd pragma was allowing the use of not-inbranch simd
clones, and the vectorizer was being smart enough to circumvent the
conditional and was still able to use simdclones (non-inbranch ones), so
when inbranch stopped working, the test didn't notice.

The other changes to this test came after the fix for PR 108888
that broke the inbranch feature you added, and so they were fixing a
cascadelake testism, but for the not-inbranch simdclones. So basically
fixing a testism of a testism :/


I am working on simd clones for AArch64 for the next stage 1, so I don't mind
looking at the issue with the vectype being chosen wrongly; as for the
other x86-specific testisms, I'll leave them to someone else.


Btw, the new testsuite FAILs could be just epilogue vectorizations, so
maybe try the usual --param vect-epilogues-nomask=0 ...

It already has those, Jakub added them.

But that's not it. I've been looking at it, and there is code in place 
that does what I expected, which is to defer the choice of vectype for simd 
clones until vectorizable_simd_clone_call; unfortunately it has a 
mistaken assumption that simdclones don't return :/

see vect_get_vector_types_for_stmt:
...
  if (gimple_get_lhs (stmt) == NULL_TREE
  /* MASK_STORE has no lhs, but is ok.  */
  && !gimple_call_internal_p (stmt, IFN_MASK_STORE))
{
  if (is_a <gcall *> (stmt))
{
  /*

Re: [r13-7135 Regression] FAIL: gcc.dg/vect/vect-simd-clone-18f.c scan-tree-dump-times vect "[\\n\\r] [^\\n]* = foo\\.simdclone" 2 on Linux/x86_64

2023-04-13 Thread Andre Vieira (lists) via Gcc-patches




On 13/04/2023 11:01, Andrew Stubbs wrote:

Hi Andre,

I don't have a cascadelake device to test on, nor any knowledge about 
what makes it different from regular x86_64.


Not sure you need one, but yeah I don't know either, it looks like it 
fails because:

in-branch vector clones are not yet supported for integer mask modes.

A quick look tells me this is because mask_mode is not VOIDmode. 
i386.cc's TARGET_SIMD_CLONE_COMPUTE_VECSIZE_AND_SIMDLEN will set 
mask_mode to either DI or SI mode when TARGET_AVX512F. So I suspect 
cascadelake is TARGET_AVX512F.


This is where I bail out as I really don't want to dive into the target 
specific simd clone handling of x86 ;)




If the cascadelake device is supposed to work the same as other x86_64 
devices for these vectors then the test has found a bug in the compiler 
and you should be looking to fix that, not fudge the testcase.


Alternatively, if the device's capabilities really are different and the 
tests should behave differently, then the actual expectations need to be 
encoded in the dejagnu directives. If you can't tell the difference by 
looking at the "x86_64*-*-*" target selector alone then the correct 
solution is to invent a new "effective-target" selector. There are lots 
of examples of using these throughout the testsuite (you could use 
dg-require-effective-target to disable the whole testcase, or just use 
the name in the scan-tree-dump-times directive to customise the 
expectations), and the definitions can be found in the 
lib/target-supports.exp and lib/target-supports-dg.exp scripts. Some are 
fixed expressions and some run the compiler to probe the configuration, 
but in this case you probably want to do something with "check-flags".
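
For illustration only, the directive-based route described above might look
like the sketch below; "avx512f_masked_clones" is a hypothetical
effective-target name that would have to be defined in
lib/target-supports.exp, and the expected counts are purely made up:

/* { dg-require-effective-target vect_simd_clones } */
/* { dg-final { scan-tree-dump-times "foo\\.simdclone" 2 "vect" { target { ! avx512f_masked_clones } } } } */
/* { dg-final { scan-tree-dump-times "foo\\.simdclone" 0 "vect" { target avx512f_masked_clones } } } */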


Even though I agree with you, I'm not the right person to do this 
digging for such target specific stuff. So for now I'd probably suggest 
xfailing this for avx512f.


For the unroll problem, you can probably tweak the optimization options 
to disable that, the same as has been done for the epilogues feature 
that had the same problem.


I mistook the current behaviour for unrolling; it's actually because of 
a latent bug. The vectorizer calls `vect_get_smallest_scalar_type` to 
determine the vectype of a stmt. For a function like foo, which has the 
same type (long long) everywhere, this wouldn't be a problem; however, 
because you transformed it into a MASK_CALL, it has a function pointer 
(which is 32-bit with -m32) that now becomes the 'smallest' type.


This is all a red herring though: I don't think we should be calling 
this function for potential simdclone calls, as the type on which the 
veclen is based is not necessarily the 'smallest' type. And some arguments 
(like uniform and linear) should be ignored anyway as they won't be mapped 
to vectors.  So I do think this might have been broken even before your 
changes, but it needs further investigation.

Since these are new tests for a new feature, I don't really understand 
why this is classed as a regression?


Andrew

P.S. there was a commit to these tests in the last few days, so make 
sure you pull that before making changes.


The latest commit to these tests was mine; it's the one Haochen is 
reporting this regression against. My commit was to fix the issue richi 
had introduced that was preventing the feature you introduced from 
working. The reason nobody noticed was that the tests you introduced 
didn't actually test your feature: since you didn't specify 'inbranch', 
the omp declare simd pragma was allowing the use of not-inbranch simd 
clones, and the vectorizer was being smart enough to circumvent the 
conditional and was still able to use simdclones (non-inbranch ones), so 
when inbranch stopped working, the test didn't notice.


The other changes to this test came after the fix for PR 108888 
that broke the inbranch feature you added, and so they were fixing a 
cascadelake testism, but for the not-inbranch simdclones. So basically 
fixing a testism of a testism :/



I am working on simd clones for AArch64 for the next stage 1, so I don't mind 
looking at the issue with the vectype being chosen wrongly; as for the 
other x86-specific testisms, I'll leave them to someone else.


Kind Regards,
Andre


Re: [qubes-users] dependency problem after upgrading standalone debian 11 VM

2023-04-12 Thread qubes-lists

Do you have any recommendation on how to solve this issue?


I also tried:
https://github.com/QubesOS/qubes-dist-upgrade/blob/release4.0/scripts/upgrade-template-standalone.sh#L37-L72

found via:
https://github.com/QubesOS/qubes-issues/issues/7865#issuecomment-1407236960

but running:
apt-get install --allow-downgrades -y \
'xen-utils-common=4.14*' \
'libxenstore3.0=4.14*' \
'xenstore-utils=4.14*'

fails because I did run
'apt upgrade; apt dist-upgrade'
already:

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Selected version '4.14.5+94-ge49571868d-1' (Debian-Security:11/stable-security 
[amd64]) for 'xen-utils-common'
Selected version '4.14.5+94-ge49571868d-1' (Debian-Security:11/stable-security 
[amd64]) for 'libxenstore3.0'
Selected version '4.14.5+94-ge49571868d-1' (Debian-Security:11/stable-security 
[amd64]) for 'xenstore-utils'
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies:
 qubes-core-agent : Depends: xen-utils-guest but it is not going to be installed
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or 
specify a solution).
exit


Is there a way out?



[qubes-users] dependency problem after upgrading standalone debian 11 VM

2023-04-12 Thread qubes-lists

Hello!

A while ago, when migrating from Qubes 4.0 to Qubes 4.1,
I restored a standalone Debian VM (created on r4.0) on a fresh r4.1 system and 
did not notice that I should also have replaced the r4.0 repos _in_ the VM with 
r4.1 repos, but it still worked fine.

Today I replaced this line:
deb [arch=amd64] https://deb.qubes-os.org/r4.0/vm bullseye main

with this:
deb [arch=amd64] https://deb.qubes-os.org/r4.1/vm bullseye main

in the standalone VM.

After an
apt update
apt upgrade
apt dist-upgrade

I'm running into this error:


The following additional packages will be installed:
  xen-utils-guest
The following NEW packages will be installed:
  xen-utils-guest
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
36 not fully installed or removed.
Need to get 0 B/30.1 kB of archives.
After this operation, 53.2 kB of additional disk space will be used.
Do you want to continue? [Y/n]
(Reading database ... 242630 files and directories currently installed.)
Preparing to unpack .../xen-utils-guest_4.14.5-20+deb11u1_amd64.deb ...
Unpacking xen-utils-guest (4.14.5-20+deb11u1) ...
dpkg: error processing archive 
/var/cache/apt/archives/xen-utils-guest_4.14.5-20+deb11u1_amd64.deb (--unpack):
 trying to overwrite '/lib/systemd/system/xendriverdomain.service', which is 
also in package xen-utils-common 2001:4.8.5-42+deb11u1
Errors were encountered while processing:
 /var/cache/apt/archives/xen-utils-guest_4.14.5-20+deb11u1_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)


Do you have any recommendation on how to solve this issue?

thanks!



Re: [tor-relays] new exit relay

2023-04-12 Thread lists
On Mittwoch, 12. April 2023 17:14:46 CEST Linux-Hus Oni via tor-relays wrote:
> hi again, actually I have turned my exit into a bridge, since my bandwidth is not
> big enough for an exit. Will it automatically be removed from the metrics?

Get a new IP; you put users at risk!

It doesn't matter: even if your relay no longer appears in the metrics after a few 
days, every Tor relay IP ends up in many private and professional databases within a 
few hours.

-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: [tor-relays] Police request regarding relay

2023-04-12 Thread lists
On Mittwoch, 12. April 2023 18:28:09 CEST tor-opera...@urdn.com.ua wrote:
> Finn  wrote:
> > The weird thing is, that the relay in question is only a relay and
> > not an exit node since its creation (185.241.208.179)
> > (https://nusenu.github.io/OrNetStats/w/relay/B67C7039B04487854129A66B16F5E
> > E3CFFCBB491.html) - anyone has an idea how this happens? Best regards
> 
> We receive this mostly from France and Germany. We figured out that
> they downloaded the Tor Browser then looked at the Tor Circuit widget
> and just collected the addresses they could see there.
> 
> This is the same as when Police, Attention Seekers, Cyber White
> Knights, Censors and other scoundrels contact every ISP they see in a
> traceroute.

Without a court order, the cops have no right to request data at all.

Generally also for commercial providers:
The European Court of Justice ruled that German data retention 
(Vorratsdatenspeicherung) is incompatible with EU law and therefore 
inapplicable.

https://digitalcourage.de/blog/2023/vorratsdatenspeicherung-medienberichte
(only in German)

-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: PKCS#7 signature not signed with a trusted key

2023-04-12 Thread Genes Lists

On 4/12/23 07:54, Ralf Mardorf wrote:


I don't understand it. If it should be a signing issue, then it does
matter when using one mobo and doesn't matter, if the same SSD holding
the Arch Linux install is connected to another mobo? It only matters
when UEFI booting (with secure boot disabled), but doesn't matter when
legacy booting is enabled by the older mobo? Isn't this signing
independent of the used boot mechanism?

Maybe the culprit is something else, but I couldn't identify something
else.




1) Nothing you've shared so far indicates a fatal module signing issue, 
right? All I've seen is a benign warning.


2) UEFI vs. MBR booting is not directly related to signed modules, in-tree or 
out-of-tree (OOT) - no.


3) That said, if OOT signed modules are somehow producing a warning or 
error, please keep in mind that dkms is -supposed- to use the 
appropriate key to sign the modules - and that can happen on every boot 
with dkms if it decides to rebuild the out-of-tree module.


My comment was simply: make sure you always have the correct keys 
available for dkms to sign with - 'correct' being the same one compiled 
into the kernel, of course, as I describe on my GitHub page.


That way, when those OOT modules do get signed (via dkms) they at least 
get signed with a key the kernel trusts (the same one used when building 
that kernel).
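
(For illustration only: the manual equivalent of what dkms does is running the 
kernel's own sign-file helper with the same key/cert pair the kernel was built 
with; the key/cert paths and module name below are assumptions.)

/usr/lib/modules/$(uname -r)/build/scripts/sign-file sha512 \
    /path/to/signing_key.pem /path/to/signing_key.x509 mymodule.ko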





Re: PKCS#7 signature not signed with a trusted key

2023-04-12 Thread Genes Lists

On 4/12/23 03:55, Ralf Mardorf wrote:

Hi,


It's a bit hard to say from the above - clearly these use 2 different keys 
(right?). Also, you don't say what CONFIG_MODULE_COMPRESS_xx is set to 
either, since you have 2 different module compressions as well as the keys 
being different.


Maybe post the actual fatal kernel error exactly - is it possible the 
error you printed was non-fatal and something else killed the boot?


My understanding of out-of-tree signed kernel modules (and some tools) 
is captured here (the wiki is similar but likely a bit out of date vs. GitHub):


https://github.com/gene-git/Arch-SKM

That all said, it appears you've built the kernel with one cert and 
signed using a different one - maybe?


best,

gene



Re: [tor-relays] Police request regarding relay

2023-04-11 Thread lists
On Dienstag, 11. April 2023 14:09:15 CEST Finn wrote:
> Hello everyone,
> 
> We are hosting multiple relays under our AS 210558 and received an email
> from a local police station in Germany requesting user data, nothing
> unusual.
Nothing unusual? I had a house search because of exits but never a user data 
request because of entry nodes.

As a German organization, you must fully comply with the Telekommunikation-
Telemedien-Datenschutz-Gesetz §9 (the German telemedia data protection law), 
which prohibits logging any personally identifiable data or usage data unless 
required for billing purposes. As you do not charge for using your services, 
you may never retain any connection data. ¯\_(ツ)_/¯

Tor routers owned by German media services are protected by Telemediengesetz 
§8

https://www.gesetze-im-internet.de/ttdsg/__9.html
https://www.gesetze-im-internet.de/tmg/__8.html

Updated German exit-notice page
https://github.com/chgans/tor-exit-notice

-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: Creating a "multicast bridge"?

2023-04-09 Thread Why 42? The lists account.


On Thu, Apr 06, 2023 at 04:17:26PM +0200, Martin Schröder wrote:
> > I'd like to create a "bridge" between two IP networks which will pass
> > only multicast info. / traffic.
> 
> So it should only route FF00::/8?

I'm not exactly sure of the significance of that address range, but in
the current configuration/version the networks are both IPv4 with a /24
netmask. There's no intent to use IPv6 at the moment.

Actually, by way of clarification, in this system the two networks to be
bridged/connected are essentially the same:

Both networks are based on the same model of switch
Both have an identical set of devices
Both use the same IP addresses

The goal is to create a single "multicast domain" between the networks,
i.e. to allow multicast communication between applications running in
each of the networks ...

Does that make sense?

As I mentioned, grateful for any advice!

Cheers,
Robb.



Re: [halLEipzig] Getting the Leipzig OSM Stammtisch back on its feet?

2023-04-08 Thread Antonin Delpeuch (lists)
Great! Then I'll reserve a table there for six. More people can certainly 
join (there's quite a lot of space there).


Antonin

On 05/04/2023 13:42, wiebkerein...@posteo.de wrote:

Hello all,

yes, non-smoking.
Not entirely low-stimulus, because some music is playing and the room's 
acoustics amplify the mix of voices.

Just in case anyone needs to know.

Best,
Wiebke

On 05.04.2023 11:49, Antonin Delpeuch (lists) wrote:

Hello all,

the Volkshaus sounds perfect, given that the Moritzbastei already has
events. Is it non-smoking?

See you soon,
Antonin

On 05/04/2023 11:18, wiebkerein...@posteo.de wrote:

Hello all,

I'd suggest the Volkshaus on Karli.
Quite central, fair prices, tasty food, enough space.
Could someone make a reservation?
0341 23105505

Kind regards,
Wiebke

On 03.04.2023 19:16, Fabian Schmidt wrote:

Hello Antonin and silent companions,

so it will probably be Tuesday (the 25th).

There are two events in the mb. Do we want to try it there
anyway? The Green Soul is out.


Best regards,

Fabian.

On 29.03.23, Antonin Delpeuch wrote:


Hello Fabian,

great, many thanks for the poll :) I'll have a look at whether I could 
invite other local OSM users individually.


See you soon,
Antonin

On 28/03/2023 17:17, Fabian Schmidt wrote:

 Hello,

 fine with me! Who has time, e.g. at:

 https://dud-poll.inf.tu-dresden.de/x7yQ-4h_xw/ ?

 According to the wiki, the last Stammtisch meetups were at 18:30.


 Regards, Fabian.

 On 26.03.23, Antonin Delpeuch (lists) wrote:


 Hello everyone,

 I am new to Leipzig and would like to get to know the local 
OSM community
 Apparently the Stammtisch is no longer regularly 
active,
 but I wonder whether some people would still be interested 
in meeting

 up.

 Many thanks to everyone for the wonderful state of OSM in the 
area,

 in any case!

 Hopefully see you soon,

 Antonin (username Pintoch)




Re: firefox no longer starts - could it be wayland package?

2023-04-07 Thread Genes Lists

On 4/7/23 09:27, Genes Lists wrote:


Closing the loop - this is now been fixed by mesa 23.0.2 in testing repo.

Big thanks to heftig for sorting it out so quickly!


gene


Re: firefox no longer starts - could it be wayland package?

2023-04-07 Thread Genes Lists

On 4/7/23 09:31, Petr Mánek wrote:


See also:  https://bugs.archlinux.org/task/78137

Best I can tell there are 2 (possibly related) issues - (a) firefox 
crashes on start and (b) firefox crashes on exit.


Of course, to get to (b) you have to not experience (a) :)

gene


Re: firefox no longer starts - could it be wayland package?

2023-04-07 Thread Genes Lists

On 4/7/23 09:22, Genes Lists wrote:


Running on gnome

This may have to do with the updated wayland package (1.22.0-1) - 



I confirm the problem goes away if I roll wayland back to prev version 
(1.21.0-2)


So indeed the problem package is wayland.
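
(For anyone wanting to do the same: rolling back is just reinstalling the
previous package from the local pacman cache - the cached filename below is
an assumption.)

pacman -U /var/cache/pacman/pkg/wayland-1.21.0-2-x86_64.pkg.tar.zst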





firefox no longer starts - could it be wayland package?

2023-04-07 Thread Genes Lists



Running on gnome

This may have to do with the updated wayland package (1.22.0-1) - I 
didn't see much else that may be related.
I tried rolling back firefox and it makes no difference - I also tried with 
and without MOZ_ENABLE_WAYLAND - I always get an instant crash with:


ExceptionHandler::GenerateDump cloned child 5628
ExceptionHandler::SendContinueSignalToChild sent continue signal to child
ExceptionHandler::WaitForContinueSignal waiting for continue signal...

Anyone else seeing similar and any suggestions for fix?

thanks

gene



Creating a "multicast bridge"?

2023-04-06 Thread Why 42? The lists account.


Hi All,

I'd like to create a "bridge" between two IP networks which will pass
only multicast info. / traffic.

Is that something that I could do using OpenBSD and pf? I don't see
anything specific to multicasting in the pf.conf man page but I suppose
it should be possible to define a set of rules based on the standard
multicast address ranges that would pass (or forward?) traffic between
two interfaces X and Y.

In this case the traffic should be passed "bidirectionally", if that's
actually a word :-)
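
For concreteness, the kind of rule set I have in mind would be something like
the sketch below (the interface names em0/em1 are just placeholders, and
224.0.0.0/4 is the IPv4 multicast range) - though I realise pf by itself only
filters, so actually moving the traffic between the two segments presumably
still needs bridging or multicast routing on top:

block all
pass quick on { em0 em1 } inet proto igmp
pass quick on { em0 em1 } inet proto udp from any to 224.0.0.0/4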

Or, I see that "bridge(4)" might also be a potential solution for this,
although I've never used that before. Would that be a better basis?

Are there examples of how to define pf rules for a bridge configuration?

It's not entirely clear to me, but from what I've read it may be
necessary to pass additional management / meta traffic, in addition to
the actual multicast data, i.e. so that the switches on either side can
track the multicast groups being created and their members?

The source of the multicast data will be Windows 10 based applications.

Since I'll be creating the system specifically for this purpose, I can
use any version of OpenBSD for this.

When I get it running, I'd like to track the behaviour of the traffic.
Are there any tools that would be recommended for this? I thought of
using wireshark, or more likely tshark, perhaps with its "-z" statistics
option.

Grateful for any advice - thanks in advance!

Cheers,
Robb.



Re: [FRnOG] [TECH] MVNO Orange & SFR

2023-04-05 Thread Fabien VINCENT - lists via frnog
Ah no, for once a salesperson falls flat on his face in public, let's let him 
propose us a meeting! 


Fabien VINCENT
@beufanet


--- Original Message ---
On Wednesday, 5 April 2023 at 12:44, Jeremy  wrote:


> Can we ban this guy, who has no business being on this list and is in
> complete violation of the rules?
> On BIZ it's already pretty questionable (I got called out on that and
> learned my lesson), but on TECH it's completely out of line.
> 
> Thanks.
> Jérémy
> 
> On 05/04/2023 at 12:41, Francois SANTOS via frnog wrote:
> 
> > Hello Steeve,
> > 
> > I take good note of your comments.
> > 
> > On behalf of Phenix Partner, I offer you all our apologies.
> > 
> > The end of the year and the first quarter were very complicated, with a
> > major internal reorganization.
> > 
> > During this period, we evolved and enriched our offers
> > to be as relevant as possible.
> > 
> > If the private data collection topic is still on the table and you
> > accept my sincere apologies, could we schedule a new
> > working meeting?
> > 
> > Looking forward to hearing from you.
> > 
> > Best regards,
> > 
> > François SANTOS
> > 
> > f.san...@phenix-partner.fr
> > 
> > From: frnog-requ...@frnog.org frnog-requ...@frnog.org on behalf
> > of Steeve BEAUVAIS - Société Serinya Telecom
> > Sent: Monday, April 3, 2023 20:55
> > To: Charles ENEL-REHEL charles.enelre...@gmail.com
> > Cc: frnog-tech frnog-t...@frnog.org; Francois SANTOS
> > f.san...@phenix-partner.fr
> > Subject: Re: [FRnOG] [TECH] MVNO Orange & SFR
> > 
> > My advice: walk away.
> > 
> > I asked about the possibility of a dedicated APN with a private interconnect.
> > 
> > At first it's all rosy, they promise me it's fine and cheap.
> > My engineer even had stars in his eyes.
> > 
> > I dig a little, and every answer to my technical questions
> > was completely off the mark.
> > 
> > I gave them the opportunity to rework their answers calmly. My
> > contacts preferred to ghost us.
> > 
> > If it means having a partner who disappears at the slightest
> > difficulty, it's not worth it.
> > 
> > Steeve
> > 
> > On Mon, Apr 3, 2023, 12:31, Charles ENEL-REHEL
> > charles.enelre...@gmail.com wrote:
> > 
> > It's staggering, this propensity some people have to turn our
> > FRnOG into a marketplace. And in the [TECH] category, no less!
> > 
> > They just can't help themselves ...
> > 
> > Charles ENEL-REHEL
> > 
> > On Mon, Apr 3, 2023 at 12:07, Francois SANTOS via frnog
> > frnog@frnog.org wrote:
> > 
> > Hello,
> > 
> > Phenix Partner, an Orange and SFR MVNO and a challenger in indirect
> > sales, is looking for a few new partners.
> > 
> > We work exclusively through indirect channels to offer you
> > specific deals, with operational and functional support
> > to meet your customers' needs.
> > 
> > One of our strengths is the flexibility we give our
> > partners to change mobile plans on the fly with
> > immediate effect. This can be very relevant
> > for data-only plans.
> > 
> > Don't hesitate to reach out to me for a first contact.
> > 
> > Best regards,
> 
> 
> 


Re: [halLEipzig] Getting the Leipzig OSM Stammtisch back on its feet?

2023-04-05 Thread Antonin Delpeuch (lists)

Hello all,

the Volkshaus sounds perfect, given that the Moritzbastei already has 
events. Is it non-smoking?


See you soon,
Antonin

On 05/04/2023 11:18, wiebkerein...@posteo.de wrote:

Hello all,

I'd suggest the Volkshaus on Karli.
Quite central, fair prices, tasty food, enough space.
Could someone make a reservation?
0341 23105505

Kind regards,
Wiebke

On 03.04.2023 19:16, Fabian Schmidt wrote:

Hello Antonin and silent companions,

so it will probably be Tuesday (the 25th).

There are two events in the mb. Do we want to try it there
anyway? The Green Soul is out.


Best regards,

Fabian.

On 29.03.23, Antonin Delpeuch wrote:


Hello Fabian,

great, many thanks for the poll :) I'll have a look at whether I could 
invite other local OSM users individually.


See you soon,
Antonin

On 28/03/2023 17:17, Fabian Schmidt wrote:

 Hello,

 fine with me! Who has time, e.g. at:

 https://dud-poll.inf.tu-dresden.de/x7yQ-4h_xw/ ?

 According to the wiki, the last Stammtisch meetups were at 18:30.


 Regards, Fabian.

 On 26.03.23, Antonin Delpeuch (lists) wrote:


 Hello everyone,

 I am new to Leipzig and would like to get to know the local OSM community. 
 Apparently the Stammtisch is no longer regularly 
active,
 but I wonder whether some people would still be interested in meeting 
us

 up.

 Many thanks to everyone for the wonderful state of OSM in the 
area,

 in any case!

 Hopefully see you soon,

 Antonin (username Pintoch)





Re: [PATCH] tree-optimization/108888 - call if-conversion

2023-04-05 Thread Andre Vieira (lists) via Gcc-patches

Hi,

The original patch to fix this PR broke the if-conversion of calls into 
IFN_MASK_CALL.  This patch restores that original behaviour and makes 
sure the tests added earlier specifically test inbranch SIMD clones.


Bootstrapped and regression tested on aarch64-none-linux-gnu and 
x86_64-pc-linux-gnu.


Is this OK for trunk?

gcc/ChangeLog:

PR tree-optimization/108888
* tree-if-conv.cc (predicate_statements): Fix gimple call check.

gcc/testsuite/ChangeLog:

* gcc.dg/vect/vect-simd-clone-16.c: Make simd clone inbranch only.
* gcc.dg/vect/vect-simd-clone-17.c: Likewise.
* gcc.dg/vect/vect-simd-clone-18.c: Likewise.

On 23/02/2023 10:10, Richard Biener via Gcc-patches wrote:

The following makes sure to only predicate calls necessary.

Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed.

PR tree-optimization/108888
* tree-if-conv.cc (if_convertible_stmt_p): Set PLF_2 on
calls to predicate.
(predicate_statements): Only predicate calls with PLF_2.

* g++.dg/torture/pr108888.C: New testcase.
---
  gcc/testsuite/g++.dg/torture/pr108888.C | 18 ++
  gcc/tree-if-conv.cc | 17 ++---
  2 files changed, 28 insertions(+), 7 deletions(-)
  create mode 100644 gcc/testsuite/g++.dg/torture/pr108888.C

diff --git a/gcc/testsuite/g++.dg/torture/pr108888.C 
b/gcc/testsuite/g++.dg/torture/pr108888.C
new file mode 100644
index 000..29a22e21102
--- /dev/null
+++ b/gcc/testsuite/g++.dg/torture/pr108888.C
@@ -0,0 +1,18 @@
+// { dg-do compile }
+
+int scaleValueSaturate_scalefactor, scaleValueSaturate___trans_tmp_2,
+scaleValuesSaturate_i;
+int scaleValueSaturate(int value) {
+  int result = __builtin_clz(value);
+  if (value)
+if (-result <= scaleValueSaturate_scalefactor)
+  return 0;
+  return scaleValueSaturate___trans_tmp_2;
+}
+short scaleValuesSaturate_dst;
+short *scaleValuesSaturate_src;
+void scaleValuesSaturate() {
+  for (; scaleValuesSaturate_i; scaleValuesSaturate_i++)
+scaleValuesSaturate_dst =
+scaleValueSaturate(scaleValuesSaturate_src[scaleValuesSaturate_i]);
+}
diff --git a/gcc/tree-if-conv.cc b/gcc/tree-if-conv.cc
index a7a8406374d..0e384e36394 100644
--- a/gcc/tree-if-conv.cc
+++ b/gcc/tree-if-conv.cc
@@ -1099,6 +1099,7 @@ if_convertible_stmt_p (gimple *stmt, 
vec<data_reference_p> refs)
   n = n->simdclone->next_clone)
if (n->simdclone->inbranch)
  {
+   gimple_set_plf (stmt, GF_PLF_2, true);
need_to_predicate = true;
return true;
  }
@@ -2541,7 +2542,8 @@ predicate_statements (loop_p loop)
  release_defs (stmt);
  continue;
}
- else if (gimple_plf (stmt, GF_PLF_2))
+ else if (gimple_plf (stmt, GF_PLF_2)
+  && is_gimple_assign (stmt))
{
  tree lhs = gimple_assign_lhs (stmt);
  tree mask;
@@ -2625,13 +2627,14 @@ predicate_statements (loop_p loop)
  gimple_assign_set_rhs1 (stmt, ifc_temp_var (type, rhs, &gsi));
  update_stmt (stmt);
}
-
- /* Convert functions that have a SIMD clone to IFN_MASK_CALL.  This
-will cause the vectorizer to match the "in branch" clone variants,
-and serves to build the mask vector in a natural way.  */
- gcall *call = dyn_cast <gcall *> (gsi_stmt (gsi));
- if (call && !gimple_call_internal_p (call))
+ else if (gimple_plf (stmt, GF_PLF_2)
+  && is_gimple_call (stmt))
{
+ /* Convert functions that have a SIMD clone to IFN_MASK_CALL.
+This will cause the vectorizer to match the "in branch"
+clone variants, and serves to build the mask vector
+in a natural way.  */
+ gcall *call = dyn_cast <gcall *> (gsi_stmt (gsi));
  tree orig_fn = gimple_call_fn (call);
  int orig_nargs = gimple_call_num_args (call);
  auto_vec<tree> args;
diff --git a/gcc/testsuite/gcc.dg/vect/vect-simd-clone-16.c 
b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-16.c
index 
3ff1cfee05951609d8ca93291d5d7c47cb07ec0d..125ff4f6c8d7df5e289187e523d32e0d12db9769
 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-simd-clone-16.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-16.c
@@ -9,7 +9,7 @@
 #endif
 
 /* A simple function that will be cloned.  */
-#pragma omp declare simd
+#pragma omp declare simd inbranch
 TYPE __attribute__((noinline))
 foo (TYPE a)
 {
diff --git a/gcc/testsuite/gcc.dg/vect/vect-simd-clone-17.c 
b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-17.c
index 
803e0f25d45c1069633486c7b7d805638db83482..3430d6f5aa4f3ae3ed8bdfda80ef99d5517f15c6
 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-simd-clone-17.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-simd-clone-17.c
@@ -9,7 +9,7 @@
 #endif
 
 /* A simple function that will be cloned.  */
-#pragma omp declare simd 

[Wikidata-tech] Re: Request to whitelist domain for CORS

2023-04-02 Thread Antonin Delpeuch (lists)

Hi,

If you are only fetching data via the API, then you should only be 
making GET requests, right? In that case, did you try setting the 
"origin=*" GET parameter? That should be enough to set the appropriate 
CORS headers on the response.


See: https://www.mediawiki.org/wiki/API:Cross-site_requests
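
A minimal browser-side sketch of what that looks like (Q42 is just an example
item id):

fetch('https://www.wikidata.org/w/api.php?action=wbgetentities&ids=Q42&format=json&origin=*')
  .then(r => r.json())
  .then(data => console.log(data.entities));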

Cheers,
Antonin


On 31/03/2023 16:28, ad...@hindibox.co.in wrote:

Dear Wikidata API team,

I am writing to request that you whitelist my domain, hindibox.co.in, 
for Cross-Origin Resource Sharing (CORS). I am currently working on a 
project that requires fetching data from the Wikidata API, but I am 
encountering the "Cross-Origin Request Blocked" error due to the 
Same-Origin Policy.


I have tried using the https://cors-anywhere.herokuapp.com proxy to 
bypass this issue, but I am now getting a 403 error. After researching 
this issue, I found that the best solution is to request that my domain 
be added to your CORS whitelist.


I would greatly appreciate it if you could add hindibox.co.in to your 
CORS whitelist so that I can continue working on my project. Please let 
me know if there is any additional information or steps that I need to 
take to make this happen.


Thank you for your attention to this matter.

Best regards,

Hindibox Team




Re: Enabling CSM fails, can somebody recommend a bootloader

2023-04-02 Thread Genes Lists

On 4/2/23 12:07, Matthew Blankenbeheler wrote:

Does this method 2 mean making 3 partitions?



The UEFI spec requires that the Extended Boot Loader be its own 
partition of type XBOOTLDR (GPT type EA00) - so yes, that's correct.

One partition for the ESP (/efi), one for the extended boot loader (/boot), and 
whatever else you need for root, home, data, etc. In this setup the 
(strong) recommendation is to mount the ESP as /efi and NOT as /boot/efi.


hope that helps.

gene






Re: Enabling CSM fails, can somebody recommend a bootloader

2023-04-02 Thread Genes Lists

On 4/2/23 07:44, Genes Lists wrote:

  [1] XBOOTLDR 
https://uapi-group.org/specifications/specs/discoverable_partitions_specification/




  Oops, Forgot to provide this link as well:

   https://uapi-group.org/specifications/specs/boot_loader_specification/



Re: Enabling CSM fails, can somebody recommend a bootloader

2023-04-02 Thread Genes Lists

On 4/2/23 04:04, Ralf Mardorf wrote:


Assuming I would do without the museum, then the modern kernels would
have to be in the ESP, a FAT partition without file permissions. Or do I
misunderstand something?




Ralf

Here's a brief overview. There are 2 methods available for UEFI 
booting (see the spec [1] for more details).  Both require:


  - a partition with GPT type EF00 and a VFAT filesystem

  - Method 1.
     typically mounted on "/boot"; kernels and initrds both 
reside in this partition along with the loader and loader information.


  - Method 2.
     typically mounted as /efi, with a separate Extended Boot Loader 
Partition (XBOOTLDR), GPT type EA00, used to hold kernels and 
initrds. This can be any filesystem for which EFI drivers are available 
(see the efifs package); these include ext4, btrfs, etc.  The XBOOTLDR 
partition is mounted on "/boot".


With systemd-boot, which is clean, simple and very robust, one 
simply copies the efi drivers to /efi/EFI/systemd/drivers.
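
Purely as a sketch of method 2's layout and that driver copy - the device 
name /dev/sdX, the partition sizes and the efifs driver path/filename are 
all assumptions, not a recipe:

sgdisk -n1:0:+512M -t1:ef00 -c1:ESP      /dev/sdX   # VFAT, mounted at /efi
sgdisk -n2:0:+1G   -t2:ea00 -c2:XBOOTLDR /dev/sdX   # e.g. ext4, mounted at /boot
mkdir -p /efi/EFI/systemd/drivers
cp /usr/lib/efifs-x64/ext2_x64.efi /efi/EFI/systemd/drivers/   # driver path assumed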


 Hope this makes it more clear.

 [1] XBOOTLDR 
https://uapi-group.org/specifications/specs/discoverable_partitions_specification/




Re: [tor-relays] Selecting Exit Addresses

2023-03-31 Thread lists
On Freitag, 31. März 2023 16:56:16 CEST denny.obre...@a-n-o-n-y-m-e.net wrote:
> The second IP is still in "Exit Addresses" with the new configuration ...
> https://metrics.torproject.org/rs.html#details/3B85067588C3F017D5CCF7D8F65B
> 5881B7D4C97C

I don't understand that either. I have at least 200 relays configured 
like this, with different IPs and subnets. I always set OutboundBindAddress 
for relays, bridges and hidden services.

The only thing I can think of is that it may take up to 24 hours for the 2nd IP to 
disappear from Tor Metrics.


> torrc:
> 
> Address 209.141.39.157
> OutboundBindAddress 209.141.39.157
> ORPort  9001 IPv4Only


> 
> denny.obre...@a-n-o-n-y-m-e.net wrote ..
> 
> > Thanks Marco.
> > 
> > First, I had to change my ORPort to 9001 with your proposed configuration
> > because using 443 caused an error => "Could not bind to 0.0.0.0:443:
> > Address already in use. Is Tor already running?"
> > Probably because my other Tor instance (hidden service) is using it.
> > 
> > Now I'm just waiting for the metrics to update to see if everything is as
> > expected.
> > 
> > Finally, thanks for the help with IPv6 because I cannot get it to work.
> > Somehow when I try to check IPv6 availability (
> > https://community.torproject.org/relay/setup/post-install/ ), I get
> > "ping6: connect: Network is unreachable". I don't have time to set it up
> > right now (I already spent hours last week) so I'll get back to you for
> > that.
> > 
> > Denny
> > 


-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: [tor-relays] Selecting Exit Addresses

2023-03-31 Thread lists
Hi denny,

> Hi,
> 
> I just activated my first exit relay. (
> https://metrics.torproject.org/rs.html#details/3B85067588C3F017D5CCF7D8F65B
> 5881B7D4C97C ) I had the following in my torrc (plus some other things):

I've answered the rest to the list.
If you want to enable IPv6 at Frantech/BuyVM:

First create an IPv6 address in Stallion from your assigned subnet.
This is what my /etc/network/interfaces looks like at Frantech:


# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
address 104.244.73.43/24
gateway 104.244.73.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 127.0.0.1 107.189.0.68 107.189.0.69
dns-search for-privacy.net

iface eth0 inet6 static
address 2605:6400:0030:f78b::2/64
up  ip -6 route add 2605:6400:0030::1 dev eth0
up  ip -6 route add default via 2605:6400:0030::1
down ip -6 route del default via 2605:6400:0030::1
down ip -6 route del 2605:6400:0030::1 dev eth0
dns-nameservers ::1 IPv6ns1 IPv6ns2


-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: [tor-relays] Selecting Exit Addresses

2023-03-31 Thread lists
On Freitag, 31. März 2023 01:26:42 CEST denny.obre...@a-n-o-n-y-m-e.net wrote:
> Hi,
> 
> I just activated my first exit relay. (
> https://metrics.torproject.org/rs.html#details/3B85067588C3F017D5CCF7D8F65B
> 5881B7D4C97C ) I had the following in my torrc (plus some other things):

Don't forget to write Francisco a ticket, so he knows the abuse mails come from a 
Tor exit. https://buyvm.net/acceptable-use-policy/

> SocksPort 0
> ControlPort 9052
> ORPort  209.141.39.157:443
> 
> 
> I have 2 IPs on my server and I wanted Tor to use 209.141.39.157. I thought
> setting it with ORPort would suffice. But under "Exit Addresses" in the
> metrics it was my other IP. So I added the following in my torrc:
> 
> Address 209.141.39.157
> OutboundBindAddress 209.141.39.157

> 
> And now I have both IPs in the "Exit Addresses". How can I prevent my exit
> relay from using the other IP? Note that I have also another instance of
> Tor running a hidden service that I intended to run on the other IP.

For IPv4-only operation, a flag is missing on the ORPort line.
See [NEW FEATURE] Relay IPv6 Address Discovery
https://www.mail-archive.com/tor-relays@lists.torproject.org/msg17760.html

Dual stack config:
Address 185.220.101.33
Address [2a0b:f4c2:2::33]

OutboundBindAddress 185.220.101.33
OutboundBindAddress [2a0b:f4c2:2::33]

ORPort 185.220.101.33:9001
ORPort [2a0b:f4c2:2::33]:9001

IPv4 only:
Address 185.220.101.33
OutboundBindAddress 185.220.101.33
ORPort 9001 IPv4Only

Then restart the relay, a reload is not enough.

-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: [halLEipzig] Getting the Leipzig OSM Stammtisch back on its feet?

2023-03-29 Thread Antonin Delpeuch (lists)

Hello Fabian,

great, many thanks for the poll :) I'll see whether I can invite other local
OSM users individually.


See you soon,
Antonin

On 28/03/2023 17:17, Fabian Schmidt wrote:

Hello,

fine by me! Who has time, for example on:

https://dud-poll.inf.tu-dresden.de/x7yQ-4h_xw/ ?

According to the wiki, the last Stammtisch meetups started at 18:30.


Regards, Fabian.

On 26.03.23, Antonin Delpeuch (lists) wrote:


Hello everyone,

I am new to Leipzig and would like to get to know the local OSM community.
Apparently the Stammtisch (regular meetup) is no longer held regularly, but I
wonder whether some people would still be interested in meeting up.


In any case, many thanks to everyone for the wonderful state of OSM in the
area!


Hopefully see you soon,

Antonin (username Pintoch)




Re: [AFMUG] OT...killing..."It has begun " with dinner

2023-03-27 Thread Jeff Broadwick - Lists
:-)

Jeff Broadwick
CTIconnect
312-205-2519 Office
574-220-7826 Cell
jbroadw...@cticonnect.com

> On Mar 27, 2023, at 1:15 PM, Jaime Solorza  wrote:
> 
> 
> Here you go Jeff..
> <20230311_172607.jpg>


Re: [gentoo-user] PCIe x1 or PCIe x4 SATA controller card

2023-03-27 Thread Wols Lists

On 27/03/2023 01:18, Dale wrote:

Thanks for any light you can shed on this.  Googling just leads to a ton
of confusion.  What's true 6 months ago is wrong today.  :/  It's hard
to tell what still applies.


Well, back in the days of the megahurtz wars, a higher clock speed 
allegedly meant a faster CPU. Now they all run about 5GHz, and anything 
faster would break the speed of light ... so how they do it nowadays I 
really don't know ...


Cheers,
Wol



[halLEipzig] Getting the Leipzig OSM Stammtisch back on its feet?

2023-03-26 Thread Antonin Delpeuch (lists)

Hello everyone,

I am new to Leipzig and would like to get to know the local OSM community.
Apparently the Stammtisch (regular meetup) is no longer held regularly, but I
wonder whether some people would still be interested in meeting up.


In any case, many thanks to everyone for the wonderful state of OSM in the
area!


Hopefully see you soon,

Antonin (username Pintoch)


Re: gui network icon not connected

2023-03-26 Thread Genes Lists

On 3/26/23 01:02, rino mardo wrote:
if i stop the iwd, my wireless connections goes away. wlan0 also 


Try telling NetworkManager to use iwd - create this file:

   /etc/NetworkManager/conf.d/wifi_backend.conf

with these 2 lines

[device]
wifi.backend=iwd
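
Then restart NetworkManager so it picks up the new backend, and make sure iwd
itself is running, e.g.:

   systemctl enable --now iwd
   systemctl restart NetworkManager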




Re: [AFMUG] It has begun

2023-03-25 Thread Jeff Broadwick - Lists
Stop it please.  I enjoy this list for tidbits of industry info and its friendly banter.  I'm even interested in Jaime's breakfast and weather girls.  If the politics was light and friendly, I'd be all in…this is anything but.

Jeff Broadwick
CTIconnect
312-205-2519 Office
574-220-7826 Cell
jbroadw...@cticonnect.com

On Mar 25, 2023, at 3:45 PM, Jan-GAMs  wrote:

Darin, the real challenge is the station the right-wingers
  watch.  FOX.  Documented liars, well documented liars.  They make
  shit up and present it as fact.  This has been admitted to by
  their owner Murdock.  They make up news just to drive their
  advertisement income.  Why the hell should anyone watch that
  channel is beyond belief.  The problem is, this has been a known
  factoid for years, FOX makes shit up and presents it as news. 
  Until Congress makes laws concerning what a so-called "NEWS"
  channel can present as truth and still have a license to
  broadcast, I'm never going to trust any "NEWS" source.  Especially
  FOX.  Reagan did away with the "Fairness Doctrine" back in the
  '80's, which more or less ended news reporting facts and brought
  in news reporting as hype.  Facts and news have been at odds since
  Roger Ailes left the Nixon WH and started up FOX news.  Real news
  may never recover.

On 3/25/23 11:02, Darin Steffl wrote:


  
  Evan and others,


You guys really are delusional if you don't see
  what the Republicans are doing to strip away rights. At least
  call them out if you're going to vote their way.


Everything I shared were statements of fact
  easily verified with a Google search with reputable sources.


It's sad that I can share facts and then some
  want to shut down the conversation. Is that because you don't
  want to be wrong or don't think you're associated with people
  who hold more extreme views than you?


I'm asking you not to be complicit in the hate
  that's in the republican party. Speak out and tell your
  representatives to stop attacking people. Be kind and
  empathetic.


Evan, why are you worried about me for calling
  out hateful people? I'm worried for you if you think that's
  wrong for me to do. You should be against all the attacks on
  human rights too. I believe you and your wife, Sandra, to be
  good people. But there are some on the right who would tell
  your wife to go back to her country because they're xenophobic
  and racist. Wouldn't that bother you if someone said that to
  your wife?


Call out the bad that any party partakes in.
  During the Floyd riots, I didn't agree with any violence or
  destruction of property and I'm in full support of people
  being arrested who committed crimes. Same with the people on
  January 6. But why does the right think January 6 was a
  peaceful tour and no one should be arrested? It's
  hypocritical.


Same with law enforcement. We shouldn't support
  any profession unconditionally. That's dangerous. I support
  good cops but think we should hold bad ones accountable. This
  is all common sense stuff but the right thinks all cops are
  good which is false. There's bad people in every profession so
  let's weed them out. I have some sick friends and family that
  think the murder of George Floyd was justified, even after
  watching the full video of him being suffocated!! Sick people
  with sick minds.


The rest of the world watches our country with
  disgust that half the people hate themselves and their country
  so much that they continue to vote republican. The new
  republican party is far more different and extreme than the
  old one, which was somewhat reasonable and bipartisan.


There is no question that democrats are better
  for all people in terms of human rights, equality, and
  protections against employers and corporations. They pass more
  bills to protect people, the environment, and the world than
  the right. That does NOT mean the party is perfect and that
  there aren't shady politicians on both sides. It all comes
  back to calling out the bad things either party tries to do so
  we end up in the middle.


Being in the middle, centrist, should not be an
  extreme view. The facts and opinions I shared are centrist
  views so if you think I'm crazy or you're offended, it's
  likely that your views are more extreme than you think. I
  

[OE-core] [PATCH] scripts/yocto_testresults_query.py: fix regression reports for branches with slashes

2023-03-24 Thread Alexis Lothoré via lists . openembedded . org
From: Alexis Lothoré 

Regression reports are not generated on some integration branches because
yocto_testresults_query.py truncates branch names containing slashes when it
passes them to resulttool. For example, "abelloni/master-next" is truncated to
"abelloni".

Fix this unwanted truncation by fixing the tag parsing in yocto-testresults.
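
For illustration, with a hypothetical tag following the documented form and a
branch name containing one slash, the old and the new parsing give:

>>> tag = "refs/tags/abelloni/master-next/4.2_M3-60-gdeadbeef/1"  # made-up tag
>>> tag.split("/")[2]                      # old code: branch gets truncated
'abelloni'
>>> '/'.join(tag.split("/")[2:-2])         # new code: full branch is kept
'abelloni/master-next'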

Signed-off-by: Alexis Lothoré 
---
 scripts/yocto_testresults_query.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/yocto_testresults_query.py 
b/scripts/yocto_testresults_query.py
index 4df339c92eb..a5073736aab 100755
--- a/scripts/yocto_testresults_query.py
+++ b/scripts/yocto_testresults_query.py
@@ -41,7 +41,7 @@ def get_sha1(pokydir, revision):
 def get_branch(tag):
 # The tags in test results repository, as returned by git rev-list, have 
the following form:
 # refs/tags//-g/
-return tag.split("/")[2]
+return '/'.join(tag.split("/")[2:-2])
 
 def fetch_testresults(workdir, sha1):
 logger.info(f"Fetching test results for {sha1} in {workdir}")
-- 
2.40.0





Re: TeXLive 2023 update

2023-03-24 Thread Genes Lists

On 3/23/23 13:44, Rémy Oudompheng wrote:

Hello

Texlive packages have been updated to 2023 version in [testing].


...


Thank you Rémy.
I don't use luaxxx.  On the docs I've tested so far,  pdflatex is 
working well.


gene



[yocto] [yocto-autobuilder-helper][PATCH v2 0/2] expose regression reports on web page

2023-03-24 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

Regression reports are currently stored alongside test reports and other
artifacts on the autobuilder artifacts web page. This small update proposes to
add a link to the regression report (when available) on the main non-release
page ([1]) instead of having to navigate the directories manually to find it.

Changes since v1:
- put regression report link in results report column instead of dedicated 
column

[1] https://autobuilder.yocto.io/pub/non-release/

Alexis Lothoré (2):
  scripts/generate-testresult-index.py: fix typo in template var name
  scripts/generate-testresult-index.py: expose regression reports on web
page

 scripts/generate-testresult-index.py | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)

-- 
2.40.0





[yocto] [yocto-autobuilder-helper][PATCH v2 2/2] scripts/generate-testresult-index.py: expose regression reports on web page

2023-03-24 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

When available, expose testresult-regressions-report.txt on the non-release web
page, as is already done for many other artifacts.

Signed-off-by: Alexis Lothoré 
---
 scripts/generate-testresult-index.py | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/scripts/generate-testresult-index.py 
b/scripts/generate-testresult-index.py
index 09d2edb..29a6900 100755
--- a/scripts/generate-testresult-index.py
+++ b/scripts/generate-testresult-index.py
@@ -42,7 +42,10 @@ index_template = """
{{entry[0]}}
{% if entry[2] %} {{entry[2]}}{% endif %}
{% if entry[4] %} {{entry[4]}}{% endif %}
-{% if entry[3] %}Report{% endif %} 
+   
+ {% if entry[3] %}Report{% endif -%}
+ {% if entry[9] %}Regressions{% endif %}
+   

{% for perfrep in entry[6] %}
  {{perfrep[1]}}
@@ -129,6 +132,10 @@ for build in sorted(os.listdir(path), key=keygen, 
reverse=True):
 if os.path.exists(buildpath + "/testresult-report.txt"):
 testreport = reldir + "testresults/testresult-report.txt"
 
+regressionreport = ""
+if os.path.exists(buildpath + "/testresult-regressions-report.txt"):
+regressionreport = reldir + 
"testresults/testresult-regressions-report.txt"
+
 ptestlogs = []
 ptestseen = []
 for p in glob.glob(buildpath + "/*-ptest/*.log"):
@@ -165,7 +172,7 @@ for build in sorted(os.listdir(path), key=keygen, 
reverse=True):
 
 branch = get_build_branch(buildpath)
 
-entries.append((build, reldir, btype, testreport, branch, buildhistory, 
perfreports, ptestlogs, hd))
+entries.append((build, reldir, btype, testreport, branch, buildhistory, 
perfreports, ptestlogs, hd, regressionreport))
 
 # Also ensure we have saved out log data for ptest runs to aid debugging
 if "ptest" in btype or btype in ["full", "quick"]:
-- 
2.40.0
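
As a side note, a minimal sketch of how the template consumes the extended
entry tuple (index 9 holding the regression report path); the entry values
below are made up for illustration only:

from jinja2 import Template

# Only the indexes used by this template snippet matter here.
entry = ("build-1234", "build-1234/", "full",
         "testresults/testresult-report.txt", "master", None, [], [], None,
         "testresults/testresult-regressions-report.txt")   # entry[9]

t = Template("{{ e[0] }}:"
             "{% if e[3] %} report={{ e[3] }}{% endif %}"
             "{% if e[9] %} regressions={{ e[9] }}{% endif %}")
print(t.render(e=entry))
# prints: build-1234: report=testresults/testresult-report.txt
#         regressions=testresults/testresult-regressions-report.txt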





[yocto] [yocto-autobuilder-helper][PATCH v2 1/2] scripts/generate-testresult-index.py: fix typo in template var name

2023-03-24 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

Signed-off-by: Alexis Lothoré 
---
 scripts/generate-testresult-index.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/generate-testresult-index.py 
b/scripts/generate-testresult-index.py
index 1fc9f41..09d2edb 100755
--- a/scripts/generate-testresult-index.py
+++ b/scripts/generate-testresult-index.py
@@ -12,7 +12,7 @@ import json
 import subprocess
 from jinja2 import Template
 
-index_templpate = """
+index_template = """
 
 
 
@@ -181,6 +181,6 @@ for build in sorted(os.listdir(path), key=keygen, 
reverse=True):
 with open(f + "/resulttool-done.log", "a+") as tf:
 tf.write("\n")
 
-t = Template(index_templpate)
+t = Template(index_template)
 with open(os.path.join(path, "index.html"), 'w') as f:
 f.write(t.render(entries = entries))
-- 
2.40.0





Re: [yocto] [yocto-autobuilder-helper][PATCH 2/2] scripts/generate-testresult-index.py: expose regression reports on web page

2023-03-24 Thread Alexis Lothoré via lists . yoctoproject . org
Hi Richard,
On 3/24/23 10:55, Richard Purdie wrote:
> On Fri, 2023-03-24 at 10:00 +0100, Alexis Lothoré via
> lists.yoctoproject.org wrote:
>> From: Alexis Lothoré 
>> -entries.append((build, reldir, btype, testreport, branch, buildhistory, 
>> perfreports, ptestlogs, hd))
>> +entries.append((build, reldir, btype, testreport, branch, buildhistory, 
>> perfreports, ptestlogs, hd, regressionreport))
>>  
> 
> In the interests of keeping that index page a manageable size, instead
> of a new data column, I'd suggest we just add the link in the same TD
> cell with the name "Regression"?

Sure, I will update it with your suggestion

-- 
Alexis Lothoré, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com





[yocto] [yocto-autobuilder-helper][PATCH 1/2] scripts/generate-testresult-index.py: fix typo in template var name

2023-03-24 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

Signed-off-by: Alexis Lothoré 
---
 scripts/generate-testresult-index.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/generate-testresult-index.py 
b/scripts/generate-testresult-index.py
index 1fc9f41..09d2edb 100755
--- a/scripts/generate-testresult-index.py
+++ b/scripts/generate-testresult-index.py
@@ -12,7 +12,7 @@ import json
 import subprocess
 from jinja2 import Template
 
-index_templpate = """
+index_template = """
 
 
 
@@ -181,6 +181,6 @@ for build in sorted(os.listdir(path), key=keygen, 
reverse=True):
 with open(f + "/resulttool-done.log", "a+") as tf:
 tf.write("\n")
 
-t = Template(index_templpate)
+t = Template(index_template)
 with open(os.path.join(path, "index.html"), 'w') as f:
 f.write(t.render(entries = entries))
-- 
2.40.0





[yocto] [yocto-autobuilder-helper][PATCH 0/2] expose regression reports on web page

2023-03-24 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

Regression reports are currently stored alongside test reports and other
artifacts on the autobuilder artifacts web page. This small update proposes to
add a link to the regression report (when available) on the main non-release
page ([1]) instead of having to navigate the directories manually to find it.

[1] https://autobuilder.yocto.io/pub/non-release/

Alexis Lothoré (2):
  scripts/generate-testresult-index.py: fix typo in template var name
  scripts/generate-testresult-index.py: expose regression reports on web
page

 scripts/generate-testresult-index.py | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)

-- 
2.40.0





[yocto] [yocto-autobuilder-helper][PATCH 2/2] scripts/generate-testresult-index.py: expose regression reports on web page

2023-03-24 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

When available, expose testresult-regressions-report.txt on the non-release web
page, as is already done for many other artifacts.

Signed-off-by: Alexis Lothoré 
---
 scripts/generate-testresult-index.py | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/scripts/generate-testresult-index.py 
b/scripts/generate-testresult-index.py
index 09d2edb..122bac1 100755
--- a/scripts/generate-testresult-index.py
+++ b/scripts/generate-testresult-index.py
@@ -30,6 +30,7 @@ index_template = """
   Type
   Branch
   Test Results Report
+  Regressions Report
   Performance Reports
   ptest Logs
   Buildhistory
@@ -43,6 +44,7 @@ index_template = """
{% if entry[2] %} {{entry[2]}}{% endif %}
{% if entry[4] %} {{entry[4]}}{% endif %}
 {% if entry[3] %}Report{% endif %} 
+{% if entry[9] %}Report{% endif %} 

{% for perfrep in entry[6] %}
  {{perfrep[1]}}
@@ -129,6 +131,10 @@ for build in sorted(os.listdir(path), key=keygen, 
reverse=True):
 if os.path.exists(buildpath + "/testresult-report.txt"):
 testreport = reldir + "testresults/testresult-report.txt"
 
+regressionreport = ""
+if os.path.exists(buildpath + "/testresult-regressions-report.txt"):
+regressionreport = reldir + 
"testresults/testresult-regressions-report.txt"
+
 ptestlogs = []
 ptestseen = []
 for p in glob.glob(buildpath + "/*-ptest/*.log"):
@@ -165,7 +171,7 @@ for build in sorted(os.listdir(path), key=keygen, 
reverse=True):
 
 branch = get_build_branch(buildpath)
 
-entries.append((build, reldir, btype, testreport, branch, buildhistory, 
perfreports, ptestlogs, hd))
+entries.append((build, reldir, btype, testreport, branch, buildhistory, 
perfreports, ptestlogs, hd, regressionreport))
 
 # Also ensure we have saved out log data for ptest runs to aid debugging
 if "ptest" in btype or btype in ["full", "quick"]:
-- 
2.40.0





[yocto] [yocto-autobuilder-helper][PATCH 3/3] scripts/send_qa_email: return previous tag when running a non-release master build

2023-03-23 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

Some nightly builders are configured in yocto-autobuilder2 to run master builds.
Those build parameters currently skip all branches of
get_regression_base_and_target, which then returns None, while the caller
expects a base and target tuple.

Set the default behaviour for such builds to return the previous tag as the
comparison base and the passed branch as the target.

Signed-off-by: Alexis Lothoré 
---
 scripts/send_qa_email.py  | 3 +++
 scripts/test_send_qa_email.py | 2 ++
 2 files changed, 5 insertions(+)

diff --git a/scripts/send_qa_email.py b/scripts/send_qa_email.py
index 78e051a..4613bff 100755
--- a/scripts/send_qa_email.py
+++ b/scripts/send_qa_email.py
@@ -61,6 +61,9 @@ def get_regression_base_and_target(basebranch, comparebranch, 
release, targetrep
 # Basebranch/comparebranch is defined in config.json: regression 
reporting must be done against branches as defined in config.json
 return comparebranch, basebranch
 
+#Default case: return previous tag as base
+return get_previous_tag(targetrepodir, release), basebranch
+
 def generate_regression_report(querytool, targetrepodir, base, target, 
resultdir, outputdir):
 print(f"Comparing {target} to {base}")
 
diff --git a/scripts/test_send_qa_email.py b/scripts/test_send_qa_email.py
index ce0c6b7..974112a 100755
--- a/scripts/test_send_qa_email.py
+++ b/scripts/test_send_qa_email.py
@@ -48,6 +48,8 @@ class TestVersion(unittest.TestCase):
   "comparebranch": "master", 
"release": None}, "expected": ("master", "master-next")},
 {"name": "Fork Master Next", "input": {"basebranch": "ross/mut",
"comparebranch": "master", 
"release": None}, "expected": ("master", "ross/mut")},
+{"name": "Nightly a-quick", "input": {"basebranch": "master",
+   "comparebranch": None, 
"release": "20230322-2"}, "expected": ("LAST_TAG", "master")},
 ]
 
 def test_versions(self):
-- 
2.40.0





[yocto] [yocto-autobuilder-helper][PATCH 0/3] fix regression reporting for nightly build

2023-03-23 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

It has been observed that regression reporting is currently failing on nightly
builds ([1]). Those build parameters are not properly handled by the base and
target computation for regression reports. Add a default behaviour that
generates the report against the last tag.

[1] 
https://lore.kernel.org/yocto/20230313145145.2574842-1-alexis.loth...@bootlin.com/T/#m4c1e0a8124c1bcfb74a80c4ef64176f42fee4e4e

Alexis Lothoré (3):
  scripts/test_utils: test master nightly build case
  scripts/test_send_qa_email.py: allow tests with non static results
  scripts/send_qa_email: return previous tag when running a non-release
master build

 scripts/send_qa_email.py  |  3 +++
 scripts/test_send_qa_email.py | 15 +--
 scripts/test_utils.py | 10 ++
 3 files changed, 26 insertions(+), 2 deletions(-)

-- 
2.40.0





[yocto] [yocto-autobuilder-helper][PATCH 2/3] scripts/test_send_qa_email.py: allow tests with non static results

2023-03-23 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

When a test assertion is about a tag in Poky, the result depends on the tags
that exist at the time the tests are run.

Add a LAST_TAG marker to loosen the constraint while still allowing tests of the
general cases (e.g. checking that tag-dependent results are not None).

Signed-off-by: Alexis Lothoré 
---
 scripts/test_send_qa_email.py | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/scripts/test_send_qa_email.py b/scripts/test_send_qa_email.py
index ccdcba6..ce0c6b7 100755
--- a/scripts/test_send_qa_email.py
+++ b/scripts/test_send_qa_email.py
@@ -65,8 +65,17 @@ class TestVersion(unittest.TestCase):
 def test_get_regression_base_and_target(self):
 for data in self.regression_inputs:
 with self.subTest(data['name']):
-self.assertEqual(send_qa_email.get_regression_base_and_target(
-data['input']['basebranch'], 
data['input']['comparebranch'], data['input']['release'], 
os.environ.get("POKY_PATH")), data['expected'])
+base, target = send_qa_email.get_regression_base_and_target(
+data['input']['basebranch'], 
data['input']['comparebranch'], data['input']['release'], 
os.environ.get("POKY_PATH"))
+expected_base, expected_target = data["expected"]
+# The comparison base can not be set statically in tests when 
it is supposed to be the previous tag,
+# since the result will depend on current tags
+if expected_base == "LAST_TAG":
+self.assertIsNotNone(base)
+else:
+self.assertEqual(base, expected_base)
+self.assertEqual(target, expected_target)
+
 
 if __name__ == '__main__':
 if os.environ.get("POKY_PATH") is None:
-- 
2.40.0





[yocto] [yocto-autobuilder-helper][PATCH 1/3] scripts/test_utils: test master nightly build case

2023-03-23 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

Signed-off-by: Alexis Lothoré 
---
 scripts/test_utils.py | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/scripts/test_utils.py b/scripts/test_utils.py
index ab91e3b..d02e9b2 100755
--- a/scripts/test_utils.py
+++ b/scripts/test_utils.py
@@ -99,6 +99,16 @@ class TestGetComparisonBranch(unittest.TestCase):
 self.assertEqual(
 comparebranch, None,  msg="Arbitrary repo/branch should not return 
any specific comparebranch")
 
+def test_master_nightly(self):
+repo = "ssh://g...@push.yoctoproject.org/poky"
+branch = "master"
+basebranch, comparebranch = utils.getcomparisonbranch(
+self.TEST_CONFIG, repo, branch)
+self.assertEqual(
+basebranch, "master", msg="Master branch should be returned")
+self.assertEqual(
+comparebranch, None,  msg="No specific comparebranch should be 
returned")
+
 
 if __name__ == '__main__':
 unittest.main()
-- 
2.40.0





Re: [yocto] [yocto-autobuilder-helper][PATCH 0/8] fix regression reports generation on "master-next" branches

2023-03-22 Thread Alexis Lothoré via lists . yoctoproject . org
Hi Richard,
On 3/22/23 10:41, Richard Purdie wrote:
> On Mon, 2023-03-13 at 15:51 +0100, Alexis Lothoré via
> lists.yoctoproject.org wrote:
>> From: Alexis Lothoré 
>>
>> This series fixes regression report generation on "next" branches, as raised 
>> in
>> [1].
>>
>> The first five patches are preparatory updates for the real fix, being either
>> refactoring, cleanup or unit tests addition to better understand how 
>> integration
>> branches are used in send-qa-email.
>> The proper fix is in 6th commit, followed by corresponding tests
>> Finally, the last commit add Alexandre's "next" branch as "fork" branches to
>> enable regression reports generation when testing patches, as suggested in 
>> [1]
>> too.
>>
>> Since patch testing branches are force-pushed on a regular basis, it is quite
>> difficult to get a relevant testing scenario, so this series has been tested 
>> by
>> faking SHA1 in yocto_testresults_query to match some master-next results in
>> yocto-testresults at the time of testing this series. I would gladly take
>> feedback about this series running for real in a master-next branch
>>
>> [1] https://lists.yoctoproject.org/g/yocto/message/59067
>>
>> Alexis Lothoré (8):
>>   scripts/utils: add unit tests for getcomparisonbranch
>>   scripts/send-qa-email: remove unused variable
>>   scripts/send-qa-email: invert boolean logic for release check
>>   scripts/send-qa-email: protect is_release_version from None value
>>   scripts/send-qa-email: add tests for is_release_version
>>   scripts/send-qa-email: fix testing branches regression reporting
>>   scripts/test_send_qa_email.py: add tests for base/target pair guessing
>>   config: flag A. Belloni master-next branch as testing branch
> 
> I think there is a regression somewhere:
> 
> https://autobuilder.yoctoproject.org/typhoon/#/builders/85/builds/2085/steps/29/logs/stdio
ACK, will take a look at it

Regards,
-- 
Alexis Lothoré, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com





Re: linux headers

2023-03-21 Thread Genes Lists

On 3/21/23 04:26, lacsaP Patatetom wrote:

hi everybody,




my development workstation is running 6.1.20-1-lts and I made 
linux-lts-ro-6.1.15-1-x86_64.pkg.tar.zst and 
linux-lts-ro-headers-6.1.15-1-x86_64.pkg.tar.zst packages on it : if now 
I want to compile on this development workstation applications to run on 
another workstation which is running 6.1.15-1-lts-ro, what should I do 
so that the compilation takes this specificity into account ?


Unless you are compiling kernel modules the answer is nothing. The kernel team
takes great pains to ensure that user space will continue to work. Most of the
kernel's interface with user space is mediated by libraries such as glibc.


You do need to be sure that your user-space environment is consistent between
the compiling host and the application target - for example, that the target
machine runs the same version of glibc.
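
A quick way to sanity-check that is to compare the glibc version reported on
both machines, e.g.:

   ldd --version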


So, short answer: if the only difference between your compile machine and the
target application machine is the kernel, there is nothing special you need
to do.


Good luck.






[yocto] [yocto-autobuilder-helper][PATCH 1/1] config.json: fix A. Belloni configuration for regression reporting

2023-03-21 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

There is a typo in BUILD_HISTORY_FORKPUSH, leading to failures on Autobuilder
when trying to generate regression reports:

Traceback (most recent call last):
  File 
"/home/pokybuild/yocto-worker/a-full/yocto-autobuilder-helper/scripts/send-qa-email",
 line 213, in 
send_qa_email()
  File 
"/home/pokybuild/yocto-worker/a-full/yocto-autobuilder-helper/scripts/send-qa-email",
 line 117, in send_qa_email
basebranch, comparebranch = utils.getcomparisonbranch(ourconfig, repo, 
branch)
  File 
"/home/pokybuild/yocto-worker/a-full/yocto-autobuilder-helper/scripts/utils.py",
 line 392, in getcomparisonbranch
comparerepo, comparebranch = base.split(":")
ValueError: not enough values to unpack (expected 2, got 1)

Observed on build a-full/5070
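
The failure is easy to reproduce in isolation; with the mistyped value the
unpacking only gets one element:

>>> comparerepo, comparebranch = "poky/master".split(":")   # the typo
Traceback (most recent call last):
  ...
ValueError: not enough values to unpack (expected 2, got 1)
>>> "poky:master".split(":")                                # the fixed value
['poky', 'master']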

Signed-off-by: Alexis Lothoré 
---
 config.json | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/config.json b/config.json
index fcd0588..51c95e1 100644
--- a/config.json
+++ b/config.json
@@ -6,7 +6,7 @@
 "BUILD_HISTORY_DIR" : "buildhistory",
 "BUILD_HISTORY_REPO" : 
"ssh://g...@push.yoctoproject.org/poky-buildhistory",
 "BUILD_HISTORY_DIRECTPUSH" : ["poky:morty", "poky:pyro", "poky:rocko", 
"poky:sumo", "poky:thud", "poky:warrior", "poky:zeus", "poky:dunfell", 
"poky:gatesgarth", "poky:hardknott", "poky:honister", "poky:kirkstone", 
"poky:langdale", "poky:master"],
-"BUILD_HISTORY_FORKPUSH" : {"poky-contrib:ross/mut" : "poky:master", 
"poky-contrib:abelloni/master-next": "poky/master", "poky:master-next" : 
"poky:master"},
+"BUILD_HISTORY_FORKPUSH" : {"poky-contrib:ross/mut" : "poky:master", 
"poky-contrib:abelloni/master-next": "poky:master", "poky:master-next" : 
"poky:master"},
 
 "BUILDTOOLS_URL_TEMPLOCAL" : 
"/srv/autobuilder/autobuilder.yocto.io/pub/non-release/20210214-8/buildtools/x86_64-buildtools-extended-nativesdk-standalone-3.2+snapshot-7d38cc8e749aedb8435ee71847e04b353cca541d.sh",
 "BUILDTOOLS_URL_TEMPLOCAL2" : 
"https://downloads.yoctoproject.org/releases/yocto/milestones/yocto-3.1_M3/buildtools/x86_64-buildtools-extended-nativesdk-standalone-3.0+snapshot-20200315.sh;,
-- 
2.40.0





Re: linux headers

2023-03-20 Thread Genes Lists





if I understand correctly, I can install my package 
linux-lts-perso-6.1.15-2-x86_64.pkg.tar.zst, boot on it (eg. "my" 
kernel) and continue to use the binaries present on my system while they 
have not been compiled with its (new) headers and especially for the 
binaries that call the functions present in the blk-core.c file ?


The kernel headers are used to compile out-of-tree kernel modules for that
kernel. You should always install a kernel together with its companion headers
package. While there may be cases where the headers have not actually changed
and it may still work, good policy is to always install both the kernel and its
companion headers package. If you don't compile out-of-tree kernel modules then
you may not even need to install the headers.


Hope that is clear.





Re: linux headers

2023-03-20 Thread Genes Lists

On 3/20/23 06:44, lacsaP Patatetom wrote:

Please don't top post on mailing lists.

I don't understand what 'problem' you are speaking of. All you've asked is
whether you can install kernel headers from a different build - the general
answer is "no" - don't ever do that.


I already explained: if you want to compile a kernel package that you have
modified, bump pkgrel and compile it - it will create both the kernel and the
kernel headers package - install and use them both.
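
In practice that is roughly (run from the directory with the modified
PKGBUILD; the package name here follows the one you used earlier in the
thread):

   makepkg -s
   pacman -U linux-lts-perso-*.pkg.tar.zst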


Cross compile?? Unless you're compiling on one architecture for another 
I don't understand the question. Since you're compiling a kernel for a 
VM your host and VM are presumably both x86-64.


Honestly, my best advice to you is stop trying to build kernels.




Re: linux headers

2023-03-20 Thread Genes Lists

On 3/20/23 05:27, lacsaP Patatetom wrote:

hi,




 - When you change the source you must also bump pkgrel as the package 
is now different.


 - If you want to build your own version of an Arch package, you should not
use the same package name as the official Arch package - this will only lead
to bad things. Change the package name to something else.



 - current kernel LTS is 6.1.20 so probably you should throw away 
6.1.15 and use that anyway.



best,

gene





Re: [tor-relays] AirTor/ATOR continues to pester Tor relay operators, promising donations

2023-03-19 Thread lists
On Freitag, 17. März 2023 17:25:10 CET Bauruine wrote:

> ... but I'll
> just keep "mining" consensus weight. Because you don't need a modified
> version of Tor and you don't need the blockchain for that. Just download
> the consensus and look at the consensus weight and you have your proof
> of uptime and relaying.

Yeah, contribution in accumulated consensus weight, that's what nusenu has 
been doing for a long time:
https://nusenu.github.io/OrNetStats/#top-relay-contributors-by-aroi
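
For reference, that per-relay number can also be read straight from Onionoo
instead of parsing the consensus yourself; a rough sketch (the fingerprint is
just an example, put in the relay you care about):

import json
import urllib.request

fingerprint = "3B85067588C3F017D5CCF7D8F65B5881B7D4C97C"  # example relay
url = "https://onionoo.torproject.org/details?search=" + fingerprint

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

for relay in data.get("relays", []):
    print(relay["nickname"], relay.get("consensus_weight"))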

Besides, no reputable relay operator would use a modified
version of Tor. (from third-party sources) ;-)


-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: Orphaning packages

2023-03-17 Thread Genes Lists

On 3/14/23 12:50, Tobias Powalowski wrote:

Hi guys,
early spring cleanup on my adopted packages:

..

- mdadm



Thanks for all the work you've put into Arch Tobias - it is very much 
appreciated.


Of the packages you listed, I sure hope mdadm will be picked up - this 
quite obviously is a very important one. I'm sure many of us use RAID in 
various forms. I use MD for RAID-5/6.


Be good if someone were to pick this up and keep it in the official 
repos and not let it drop to AUR.


thank you to all those contributing to Arch.

regards,

gene


Re: [ping][vect-patterns] Refactor widen_plus/widen_minus as internal_fns

2023-03-17 Thread Andre Vieira (lists) via Gcc-patches

Hi Richard,

I'm only picking this up now. Just going through your earlier comments 
and stuff and I noticed we didn't address the situation with the 
gimple::build. Do you want me to add overloaded static member functions 
to cover all gimple_build_* functions, or just create one to replace 
vect_gimple_build and we create them as needed? It's more work but I 
think adding them all would be better. I'd even argue that it would be 
nice to replace the old ones with the new ones, but I can imagine you 
might not want that as it makes backporting and the likes a bit annoying...


Let me know what you prefer, I'll go work on your latest comments too.

Cheers,
Andre


Re: GESO (6) - Spring Cemetery stroll

2023-03-14 Thread lists
> On 14 Mar 2023, at 02:50, Rick Womer  wrote:
> 
> I took my camera for a walk on a lovely afternoon a week ago. These
> were my favorites.
> 
> https://rickwomer.smugmug.com/2023/March-2023/Woodland-Cemetery-3-6-23/
> 

Nice and pleasing images Rick!

It sure seems like lovely weather ;-)



Regards, JvW

=
Jan van Wijk; author of DFsee;  https://www.dfsee.com



[PATCH] ifcvt: Lower bitfields only if suitable for scalar register [PR tree/109005]

2023-03-13 Thread Andre Vieira (lists) via Gcc-patches

This patch fixes the condition check for eligibility of lowering bitfields.
Where before we would check for non-BLKmode types, in the hope of excluding
unsuitable aggregate types, we now check directly that the representative is
not an aggregate type, i.e. that it is suitable for a scalar register.

I tried adding the reduced testcase mentioned in the PR, but I couldn't 
get the Ada testsuite to run, so could an Ada maintainer add the test 
after verifying it runs properly?


OK for trunk?

gcc/ChangeLog:

PR tree/109005
* tree-if-conv.cc (get_bitfield_rep): Replace BLKmode check with
aggregate type check.
diff --git a/gcc/tree-if-conv.cc b/gcc/tree-if-conv.cc
index 
f133102ad3350a0fd3a09ad836c68e840f316a0e..ca1abd8656c6c47c314d2b2c9fa515e150d1703b
 100644
--- a/gcc/tree-if-conv.cc
+++ b/gcc/tree-if-conv.cc
@@ -3317,9 +3317,9 @@ get_bitfield_rep (gassign *stmt, bool write, tree *bitpos,
   tree field_decl = TREE_OPERAND (comp_ref, 1);
   tree rep_decl = DECL_BIT_FIELD_REPRESENTATIVE (field_decl);
 
-  /* Bail out if the representative is BLKmode as we will not be able to
- vectorize this.  */
-  if (TYPE_MODE (TREE_TYPE (rep_decl)) == E_BLKmode)
+  /* Bail out if the representative is not a suitable type for a scalar
+ register variable.  */
+  if (!is_gimple_reg_type (TREE_TYPE (rep_decl)))
 return NULL_TREE;
 
   /* Bail out if the DECL_SIZE of the field_decl isn't the same as the BF's


[yocto] [yocto-autobuilder-helper][PATCH 8/8] config: flag A. Belloni master-next branch as testing branch

2023-03-13 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

Add "abelloni/master-next" branch from poky-contrib in configuration so that
regression reports are generated when testing for patches

Signed-off-by: Alexis Lothoré 
---
 config.json | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/config.json b/config.json
index 687608d..fcd0588 100644
--- a/config.json
+++ b/config.json
@@ -6,7 +6,7 @@
 "BUILD_HISTORY_DIR" : "buildhistory",
 "BUILD_HISTORY_REPO" : 
"ssh://g...@push.yoctoproject.org/poky-buildhistory",
 "BUILD_HISTORY_DIRECTPUSH" : ["poky:morty", "poky:pyro", "poky:rocko", 
"poky:sumo", "poky:thud", "poky:warrior", "poky:zeus", "poky:dunfell", 
"poky:gatesgarth", "poky:hardknott", "poky:honister", "poky:kirkstone", 
"poky:langdale", "poky:master"],
-"BUILD_HISTORY_FORKPUSH" : {"poky-contrib:ross/mut" : "poky:master", 
"poky:master-next" : "poky:master"},
+"BUILD_HISTORY_FORKPUSH" : {"poky-contrib:ross/mut" : "poky:master", 
"poky-contrib:abelloni/master-next": "poky/master", "poky:master-next" : 
"poky:master"},
 
 "BUILDTOOLS_URL_TEMPLOCAL" : 
"/srv/autobuilder/autobuilder.yocto.io/pub/non-release/20210214-8/buildtools/x86_64-buildtools-extended-nativesdk-standalone-3.2+snapshot-7d38cc8e749aedb8435ee71847e04b353cca541d.sh",
 "BUILDTOOLS_URL_TEMPLOCAL2" : 
"https://downloads.yoctoproject.org/releases/yocto/milestones/yocto-3.1_M3/buildtools/x86_64-buildtools-extended-nativesdk-standalone-3.0+snapshot-20200315.sh;,
-- 
2.39.2





[yocto] [yocto-autobuilder-helper][PATCH 6/8] scripts/send-qa-email: fix testing branches regression reporting

2023-03-13 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

d6018b891a3b7c62c7a2883c7fb9ae55e66f1363 broke regression reporting for testing
branches (e.g. master-next in poky, ross/mut in poky-contrib) by ignoring the
comparebranch returned by utils.getcomparisonbranch.

Fix regression reporting for those branches by using comparebranch again. The
fix also refactors/adds an intermediary step to guess the base and target for
regression reporting, to isolate the logic a bit and make it easier to add
multiple base/target pairs later.
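
Concretely, with the values used in the test data added later in this series:
(basebranch=None, comparebranch=None, release=None) gives (None, None);
(basebranch="master", comparebranch=None, release="yocto-4.2_M3.rc1") gives
("4.2_M2", "master"); and (basebranch="master-next", comparebranch="master",
release=None) gives ("master", "master-next").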

Signed-off-by: Alexis Lothoré 
---
 scripts/send_qa_email.py | 27 +++
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/scripts/send_qa_email.py b/scripts/send_qa_email.py
index 540eb94..78e051a 100755
--- a/scripts/send_qa_email.py
+++ b/scripts/send_qa_email.py
@@ -49,18 +49,28 @@ def get_previous_tag(targetrepodir, version):
 defaultbaseversion, _, _ = 
utils.get_version_from_string(subprocess.check_output(["git", "describe", 
"--abbrev=0"], cwd=targetrepodir).decode('utf-8').strip())
 return utils.get_tag_from_version(defaultbaseversion, None)
 
-def generate_regression_report(querytool, targetrepodir, basebranch, 
resultdir, outputdir, yoctoversion):
-baseversion = get_previous_tag(targetrepodir, yoctoversion)
-print(f"Comparing {basebranch} to {baseversion}")
+def get_regression_base_and_target(basebranch, comparebranch, release, 
targetrepodir):
+if not basebranch:
+# Basebranch/comparebranch is an arbitrary configuration (not defined 
in config.json): do not run regression reporting
+return None, None
+
+if is_release_version(release):
+# We are on a release: ignore comparebranch (which is very likely 
None), regression reporting must be done against previous tag
+return get_previous_tag(targetrepodir, release), basebranch
+elif comparebranch:
+# Basebranch/comparebranch is defined in config.json: regression 
reporting must be done against branches as defined in config.json
+return comparebranch, basebranch
+
+def generate_regression_report(querytool, targetrepodir, base, target, 
resultdir, outputdir):
+print(f"Comparing {target} to {base}")
 
 try:
-regreport = subprocess.check_output([querytool, "regression-report", 
baseversion, basebranch, '-t', resultdir])
+regreport = subprocess.check_output([querytool, "regression-report", 
base, target, '-t', resultdir])
 with open(outputdir + "/testresult-regressions-report.txt", "wb") as f:
f.write(regreport)
 except subprocess.CalledProcessError as e:
 error = str(e)
-print(f"Error while generating report between {basebranch} and 
{baseversion} : {error}")
-
+print(f"Error while generating report between {target} and {base} : 
{error}")
 
 def send_qa_email():
 parser = utils.ArgParser(description='Process test results and optionally 
send an email about the build to prompt QA to begin testing.')
@@ -142,8 +152,9 @@ def send_qa_email():
 subprocess.check_call(["git", "push", "--all"], cwd=tempdir)
 subprocess.check_call(["git", "push", "--tags"], cwd=tempdir)
 
-if basebranch:
-generate_regression_report(querytool, targetrepodir, 
basebranch, tempdir, args.results_dir, args.release)
+regression_base, regression_target = 
get_regression_base_and_target(basebranch, comparebranch, args.release, 
targetrepodir)
+if regression_base and regression_target:
+generate_regression_report(querytool, targetrepodir, 
regression_base, regression_target, tempdir, args.results_dir)
 
 finally:
 subprocess.check_call(["rm", "-rf",  tempdir])
-- 
2.39.2





[yocto] [yocto-autobuilder-helper][PATCH 7/8] scripts/test_send_qa_email.py: add tests for base/target pair guessing

2023-03-13 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

Signed-off-by: Alexis Lothoré 
---
 scripts/test_send_qa_email.py | 21 +
 1 file changed, 21 insertions(+)

diff --git a/scripts/test_send_qa_email.py b/scripts/test_send_qa_email.py
index 48bca98..ccdcba6 100755
--- a/scripts/test_send_qa_email.py
+++ b/scripts/test_send_qa_email.py
@@ -35,6 +35,21 @@ class TestVersion(unittest.TestCase):
 {"input": None, "expected":False}
 ]
 
+# This data represent real data returned by utils.getcomparisonbranch
+# and the release argument passed to send-qa-email script
+regression_inputs = [
+{"name": "Arbitrary branch", "input": {"basebranch": None,
+   "comparebranch": None, 
"release": None}, "expected": (None, None)},
+{"name": "Master release", "input": {"basebranch": "master",
+ "comparebranch": None, "release": 
"yocto-4.2_M3.rc1"}, "expected": ("4.2_M2", "master")},
+{"name": "Older release", "input": {"basebranch": "kirkstone",
+"comparebranch": None, "release": 
"yocto-4.0.8.rc2"}, "expected": ("yocto-4.0.7", "kirkstone")},
+{"name": "Master Next", "input": {"basebranch": "master-next",
+  "comparebranch": "master", 
"release": None}, "expected": ("master", "master-next")},
+{"name": "Fork Master Next", "input": {"basebranch": "ross/mut",
+   "comparebranch": "master", 
"release": None}, "expected": ("master", "ross/mut")},
+]
+
 def test_versions(self):
 for data in self.test_data_get_version:
 test_name = data["input"]["version"]
@@ -47,6 +62,12 @@ class TestVersion(unittest.TestCase):
 with self.subTest(f"{data['input']}"):
 
self.assertEqual(send_qa_email.is_release_version(data['input']), 
data['expected'])
 
+def test_get_regression_base_and_target(self):
+for data in self.regression_inputs:
+with self.subTest(data['name']):
+self.assertEqual(send_qa_email.get_regression_base_and_target(
+data['input']['basebranch'], 
data['input']['comparebranch'], data['input']['release'], 
os.environ.get("POKY_PATH")), data['expected'])
+
 if __name__ == '__main__':
 if os.environ.get("POKY_PATH") is None:
 print("Please set POKY_PATH to proper poky clone location before 
running tests")
-- 
2.39.2





[yocto] [yocto-autobuilder-helper][PATCH 5/8] scripts/send-qa-email: add tests for is_release_version

2023-03-13 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

Signed-off-by: Alexis Lothoré 
---
 scripts/test_send_qa_email.py | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/scripts/test_send_qa_email.py b/scripts/test_send_qa_email.py
index c1347fb..48bca98 100755
--- a/scripts/test_send_qa_email.py
+++ b/scripts/test_send_qa_email.py
@@ -29,6 +29,12 @@ class TestVersion(unittest.TestCase):
 {"input": {"version": "4.1.rc4"}, "expected": "yocto-4.0"}
 ]
 
+test_data_is_release_version = [
+{"input": "yocto-4.2", "expected":True},
+{"input": "20230313-15", "expected":False},
+{"input": None, "expected":False}
+]
+
 def test_versions(self):
 for data in self.test_data_get_version:
 test_name = data["input"]["version"]
@@ -36,6 +42,10 @@ class TestVersion(unittest.TestCase):
 self.assertEqual(send_qa_email.get_previous_tag(os.environ.get(
 "POKY_PATH"), data["input"]["version"]), data["expected"])
 
+def test_is_release_version(self):
+for data in self.test_data_is_release_version:
+with self.subTest(f"{data['input']}"):
+
self.assertEqual(send_qa_email.is_release_version(data['input']), 
data['expected'])
 
 if __name__ == '__main__':
 if os.environ.get("POKY_PATH") is None:
-- 
2.39.2





[yocto] [yocto-autobuilder-helper][PATCH 4/8] scripts/send-qa-email: protect is_release_version from None value

2023-03-13 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

Signed-off-by: Alexis Lothoré 
---
 scripts/send_qa_email.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/send_qa_email.py b/scripts/send_qa_email.py
index 320ff24..540eb94 100755
--- a/scripts/send_qa_email.py
+++ b/scripts/send_qa_email.py
@@ -16,7 +16,7 @@ import utils
 
 def is_release_version(version):
 p = re.compile('\d{8}-\d+')
-return p.match(version) is None
+return version is not None and p.match(version) is None
 
 def get_previous_tag(targetrepodir, version):
 previousversion = None
-- 
2.39.2





[yocto] [yocto-autobuilder-helper][PATCH 0/8] fix regression reports generation on "master-next" branches

2023-03-13 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

This series fixes regression report generation on "next" branches, as raised in
[1].

The first five patches are preparatory updates for the real fix, being either
refactoring, cleanup or unit test additions to better understand how integration
branches are used in send-qa-email.
The proper fix is in the 6th commit, followed by the corresponding tests.
Finally, the last commit adds Alexandre's "next" branch as a "fork" branch to
enable regression report generation when testing patches, as suggested in [1]
too.

Since patch testing branches are force-pushed on a regular basis, it is quite
difficult to get a relevant testing scenario, so this series has been tested by
faking SHA1 in yocto_testresults_query to match some master-next results in
yocto-testresults at the time of testing this series. I would gladly take
feedback about this series running for real in a master-next branch

[1] https://lists.yoctoproject.org/g/yocto/message/59067

Alexis Lothoré (8):
  scripts/utils: add unit tests for getcomparisonbranch
  scripts/send-qa-email: remove unused variable
  scripts/send-qa-email: invert boolean logic for release check
  scripts/send-qa-email: protect is_release_version from None value
  scripts/send-qa-email: add tests for is_release_version
  scripts/send-qa-email: fix testing branches regression reporting
  scripts/test_send_qa_email.py: add tests for base/target pair guessing
  config: flag A. Belloni master-next branch as testing branch

 config.json   |   2 +-
 scripts/send_qa_email.py  |  34 +++
 scripts/test_send_qa_email.py |  31 ++
 scripts/test_utils.py | 104 ++
 4 files changed, 158 insertions(+), 13 deletions(-)
 create mode 100755 scripts/test_utils.py

-- 
2.39.2





[yocto] [yocto-autobuilder-helper][PATCH 2/8] scripts/send-qa-email: remove unused variable

2023-03-13 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

Signed-off-by: Alexis Lothoré 
---
 scripts/send_qa_email.py | 1 -
 1 file changed, 1 deletion(-)

diff --git a/scripts/send_qa_email.py b/scripts/send_qa_email.py
index 7999c1b..96225a8 100755
--- a/scripts/send_qa_email.py
+++ b/scripts/send_qa_email.py
@@ -83,7 +83,6 @@ def send_qa_email():
 
 args = parser.parse_args()
 
-scriptsdir = os.path.dirname(os.path.realpath(__file__))
 ourconfig = utils.loadconfig()
 
 with open(args.repojson) as f:
-- 
2.39.2





[yocto] [yocto-autobuilder-helper][PATCH 1/8] scripts/utils: add unit tests for getcomparisonbranch

2023-03-13 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

Signed-off-by: Alexis Lothoré 
---
 scripts/test_utils.py | 104 ++
 1 file changed, 104 insertions(+)
 create mode 100755 scripts/test_utils.py

diff --git a/scripts/test_utils.py b/scripts/test_utils.py
new file mode 100755
index 000..ab91e3b
--- /dev/null
+++ b/scripts/test_utils.py
@@ -0,0 +1,104 @@
+#!/usr/bin/env python3
+
+import os
+import unittest
+import utils
+
+
+class TestGetComparisonBranch(unittest.TestCase):
+TEST_CONFIG = {
+"BUILD_HISTORY_DIRECTPUSH": [
+"poky:morty",
+"poky:pyro",
+"poky:rocko",
+"poky:sumo",
+"poky:thud",
+"poky:warrior",
+"poky:zeus",
+"poky:dunfell",
+"poky:gatesgarth",
+"poky:hardknott",
+"poky:honister",
+"poky:kirkstone",
+"poky:langdale",
+"poky:master"
+], "BUILD_HISTORY_FORKPUSH": {
+"poky-contrib:ross/mut": "poky:master",
+"poky:master-next": "poky:master",
+"poky-contrib:abelloni/master-next": "poky:master"
+}
+}
+
+def test_release_master(self):
+repo = "ssh://g...@push.yoctoproject.org/poky"
+branch = "master"
+basebranch, comparebranch = utils.getcomparisonbranch(
+self.TEST_CONFIG, repo, branch)
+self.assertEqual(
+basebranch, "master", msg="Repo/branch pair present in 
BUILD_HISTORY_DIRECTPUSH must return corresponding base branch")
+self.assertEqual(
+comparebranch, None, msg="Repo/branch pair present in 
BUILD_HISTORY_DIRECTPUSH must return corresponding compare branch")
+
+def test_release_kirkstone(self):
+repo = "ssh://g...@push.yoctoproject.org/poky"
+branch = "kirkstone"
+basebranch, comparebranch = utils.getcomparisonbranch(
+self.TEST_CONFIG, repo, branch)
+self.assertEqual(basebranch, "kirkstone",
+ msg="Repo/branch pair present in 
BUILD_HISTORY_DIRECTPUSH must return corresponding base branch")
+self.assertEqual(
+comparebranch, None, msg="Repo/branch pair present in 
BUILD_HISTORY_DIRECTPUSH must return corresponding compare branch")
+
+def test_release_langdale(self):
+repo = "ssh://g...@push.yoctoproject.org/poky"
+branch = "langdale"
+basebranch, comparebranch = utils.getcomparisonbranch(
+self.TEST_CONFIG, repo, branch)
+self.assertEqual(basebranch, "langdale",
+ msg="Repo/branch pair present in 
BUILD_HISTORY_DIRECTPUSH must return corresponding base branch")
+self.assertEqual(
+comparebranch, None, msg="Repo/branch pair present in 
BUILD_HISTORY_DIRECTPUSH must return corresponding compare branch")
+
+def test_master_next(self):
+repo = "ssh://g...@push.yoctoproject.org/poky"
+branch = "master-next"
+basebranch, comparebranch = utils.getcomparisonbranch(
+self.TEST_CONFIG, repo, branch)
+self.assertEqual(basebranch, "master-next",
+ msg="Repo/branch pair present in 
BUILD_HISTORY_FORKPUSH must return corresponding base branch")
+self.assertEqual(comparebranch, "master",
+ msg="Repo/branch pair present in 
BUILD_HISTORY_FORKPUSH must return corresponding compare branch")
+
+def test_abelloni_master_next(self):
+repo = "ssh://g...@push.yoctoproject.org/poky-contrib"
+branch = "abelloni/master-next"
+basebranch, comparebranch = utils.getcomparisonbranch(
+self.TEST_CONFIG, repo, branch)
+self.assertEqual(basebranch, "abelloni/master-next",
+ msg="Repo/branch pair present in 
BUILD_HISTORY_FORKPUSH must return corresponding base branch")
+self.assertEqual(comparebranch, "master",
+ msg="Repo/branch pair present in 
BUILD_HISTORY_FORKPUSH must return corresponding compare branch")
+
+def test_ross_master_next(self):
+repo = "ssh://g...@push.yoctoproject.org/poky-contrib"
+branch = "ross/mut"
+basebranch, comparebranch = utils.getcomparisonbranch(
+self.TEST_CONFIG, repo, branch)
+self.assertEqual(basebranch, "ross/mut",
+ msg="Repo/branch pair present in 
BUILD_HISTORY_FORKPUSH must return corresponding base branch")
+self.assertEqual(comparebranch, "master",
+ msg="Repo/branch pair present in 
BUILD_HISTORY_FORKPUSH must return corresponding compare branch")
+
+def test_arbitrary_branch(self):
+repo = "ssh://g...@push.yoctoproject.org/poky-contrib"
+branch = "akanavin/package-version-updates"
+basebranch, comparebranch = utils.getcomparisonbranch(
+self.TEST_CONFIG, repo, branch)
+self.assertEqual(
+basebranch, None, 

[yocto] [yocto-autobuilder-helper][PATCH 3/8] scripts/send-qa-email: invert boolean logic for release check

2023-03-13 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

is_non_release_version has inverted logic, which makes its reuse quite
confusing.

Rename it to is_release_version and let callers do the negation if needed.

Signed-off-by: Alexis Lothoré 
---
 scripts/send_qa_email.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/scripts/send_qa_email.py b/scripts/send_qa_email.py
index 96225a8..320ff24 100755
--- a/scripts/send_qa_email.py
+++ b/scripts/send_qa_email.py
@@ -14,15 +14,15 @@ import re
 
 import utils
 
-def is_non_release_version(version):
+def is_release_version(version):
 p = re.compile('\d{8}-\d+')
-return p.match(version) is not None
+return p.match(version) is None
 
 def get_previous_tag(targetrepodir, version):
 previousversion = None
 previousmilestone = None
 if version:
-if is_non_release_version(version):
+if not is_release_version(version):
 return subprocess.check_output(["git", "describe", "--abbrev=0"], 
cwd=targetrepodir).decode('utf-8').strip()
 compareversion, comparemilestone, _ = 
utils.get_version_from_string(version)
 compareversionminor = compareversion[-1]
-- 
2.39.2
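
To see what the renamed check accepts, here is a tiny standalone sketch (the
version strings are only illustrative):

#!/usr/bin/env python3
# Standalone sketch of the renamed check; the version strings are illustrative.
import re

def is_release_version(version):
    # Autobuilder datestamp builds look like "20230313-1"; anything else is a release.
    return re.match(r'\d{8}-\d+', version) is None

assert not is_release_version("20230313-1")   # datestamp build -> not a release
assert is_release_version("yocto-4.0.7")      # tagged release -> release
print("ok")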





Re: Wifi Networking Regression in linux-6.2.3

2023-03-13 Thread Genes Lists

On 3/13/23 05:56, Genes Lists wrote:

On 3/13/23 03:00, David Bohman wrote:

There is a fairly serious regression in linux-6.2.3 that kills wifi:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=79d1ed5ca7db67d48e870c979f0e0f6b0947944a


Hi David:

  I am sure this commit is in 6.2.3 and all later stable kernels - and 
therefore is in 6.2.5, which is Arch's current kernel in the core repo.

Nope I was wrong - (bad git grep on my part) So sorry.


It is however in 6.2.6 and 6.1.19 (lts).

thanks and apologies for my bad grep.

gene



Re: Wifi Networking Regression in linux-6.2.3

2023-03-13 Thread Genes Lists

On 3/13/23 03:00, David Bohman wrote:

There is a fairly serious regression in linux-6.2.3 that kills wifi:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=79d1ed5ca7db67d48e870c979f0e0f6b0947944a


Hi David:

 I am sure this commit is in 6.2.3 and all later stable kernels - and 
therefore is in 6.2.5, which is Arch's current kernel in the core repo.


 gene






Re: [tor-relays] Confusing bridge signs...

2023-03-12 Thread lists
On Sonntag, 12. März 2023 04:45:21 CET Keifer Bly wrote:
> I do not use any scripts to start tor, I just type tor to start the process
> on debian.
That's where your problems begin. You start a 2nd tor process as root that 
doesn't take the default configs from:
/usr/share/tor/tor-service-defaults-torrc & /etc/tor/torrc

You have a systemd system & tor.service is activated by default. You don't 
have to do anything, tor runs automatically after a reboot|server start.

The systemd services are controlled with the following commands:
systemctl start tor.service
systemctl stop tor.service
systemctl restart tor.service
systemctl reload tor.service
systemctl status tor.service

> And yes the datacenter I run in has an external firewall which
> requires setting up port forwarding.
Ok, anything in the customer interface for the datacenter router.
 
> The result of running ls -A /var/log/tor
> 
> root@instance-1:/home/keifer_bly# ls -A /var/log/tor
> notices.log  notices.log.1  notices.log.2.gz  notices.log.3.gz
>  notices.log.4.gz  notices.log.5.gz
There are 6 log files of one of the tor processes. Both write to syslog.

> 
> So it's creating separate .gz files for some reason. I don't know why that
> is or what to do from here. Thanks.
I wrote, learn what _logrotate_ does. Hint: without that, the hd fills up.
man logrotate

> 
> 
> 
> --Keifer
> 
> On Fri, Mar 10, 2023 at 8:15 AM  wrote:
> > On Mittwoch, 8. März 2023 18:13:01 CET Keifer Bly wrote:
> > > Strangely, nothing whatsoever is being written to the notices.log file,
> > > upon checking it it is completely empty, nothing there.
> > 
> > That can't be, please post:
> > ~# ls -A /var/log/tor
> > 
> > In general, everything is always written to /var/log/syslog &
> > systemd-journald
> > to /var/log/journal (binaries).
> > ~$ man journalctl
> > 
> > > I wonder why that
> > 
> > Read what _logrotate_ does. Every tor restart creates a new empty log
> > file.
> > 
> > > would happen and how else to tell what's going on? Tor is running as
> > > root
> > 
> > Why do you change security-related default settings? Default tor user is:
> > debian-tor. (On Debian and Ubuntu systems)
> > 
> > > so it's not a permission issue, and I also set up a port forwarding rule
> > 
> > Why? You have a server in the data center. You only need forwarding on a
> > router! Packet forwarding is also disabled in /etc/sysctl.conf per
> > default.
> > 
> > Your iptables must start like this.
> > *filter
> > 
> > :INPUT DROP [0:0]
> > :FORWARD DROP [0:0]
> > :OUTPUT ACCEPT [0:0]
> > 
> > ...
> > -A INPUT -p tcp --dport   -j ACCEPT
> > ...
> > 
> > No FORWARD, no  OUTPUT rules.
> > 
> > --
> > ╰_╯ Ciao Marco!
> > 
> > Debian GNU/Linux
> > 
> > It's free software and it gives you
> > freedom!


-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: [tor-relays] Relay requirements

2023-03-10 Thread lists
On Dienstag, 7. März 2023 13:31:13 CET mail--- via tor-relays wrote:
 
> Running a few relays on 1-2 CPU cores with limited RAM is
> fine, but just keep an eye on it and don't run other memory intensive stuff
> on the server (like DNS query caching, which can take quite some RAM as
> well).

A recursive, and caching DNS server like unbound or PowerDNS(+dnsdist) is 
absolutely necessary on an exit or in your own network.

-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: [tor-relays] Confusing bridge signs...

2023-03-10 Thread lists
On Mittwoch, 8. März 2023 18:13:01 CET Keifer Bly wrote:

> Strangely, nothing whatsoever is being written to the notices.log file,
> upon checking it it is completely empty, nothing there.
That can't be, please post:
~# ls -A /var/log/tor

In general, everything is always written to /var/log/syslog & systemd-journald 
to /var/log/journal (binaries).
~$ man journalctl

> I wonder why that
Read what _logrotate_ does. Every tor restart creates a new empty log file.

> would happen and how else to tell what's going on? Tor is running as root
Why do you change security-related default settings? Default tor user is: 
debian-tor. (On Debian and Ubuntu systems)

> so it's not a permission issue, and I also set up a port forwarding rule
Why? You have a server in the data center. You only need forwarding on a 
router! Packet forwarding is also disabled in /etc/sysctl.conf per default.

Your iptables must start like this.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
...
-A INPUT -p tcp --dport   -j ACCEPT
...

No FORWARD, no  OUTPUT rules.

-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: [tor-relays] Relay requirements

2023-03-10 Thread lists
On Dienstag, 7. März 2023 03:00:49 CET Sydney wrote:
> Newbie here. No network experience but already running 2 TOR instances: 1
> TOR service + 1 bridge.
Never mix different relay types under one IP.

> I would like to "upgrade" to TOR relays but have a few questions relating to
> hardware needs.

1core 2GB RAM is enough for an exit. This one:
https://metrics.torproject.org/rs.html#details/D00795330D77C75344C54FB8800531FAB3C40FBE
1core, 2GB RAM, 10GB Network
You need bandwidth, _unlimited_ bandwidth. A relay easily has 50-100TB/month!
Tor relay (=router) bandwidth is in + out!

> I guess my fundamental question is what is the advantage of running multiple
> relays of the same type, on the same server?
Because C-tor is not multicore aware.

> I see some operators running
> dozens of them, all in the same country, same ISP. Why not just a single
> relay running with a large capacity?
see above (multicore) These are very powerful servers. Mostly their own, in 
colocation.
1x10G, 2x10G or more network connection, 64 or 128 CPU cores 256-512 GB RAM and 
_unlimited_ bandwidth.
In addition usually their own ASN. To advertise an AS via BGP, at least a /24 
(256 IP's) is required.

That's why I keep asking when we'll finally be able to run IPv6 only relays.
/24 IP + ASN approx. 5000 EUR/(1st)year. (Only via waiting list & if never 
received an IPv4 allocation)
/48 IPv6 + ASN approx. 100 Eur/year.

https://www.ripe.net/manage-ips-and-asns/ipv4/ipv4-waiting-list

> Also, is there a requirement for the
> number of relays per core? (Maybe this is the answer to my question.) I
> know my bridge is currently keeping one core of my 2-core server constantly
> under load. Thank in advance.
Rule of thumb - one instance per core.


-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: archlinux-keyring-wkd-sync returns 87 errors

2023-03-10 Thread Genes Lists




Curious - are you able to ping the WKD webserver from the failing machine?
ping openpgpkey.archlinux.org



Re: archlinux-keyring-wkd-sync returns 87 errors

2023-03-10 Thread Genes Lists

On 3/10/23 05:39, Łukasz Michalski wrote:


A and B. Both updated at the same time. On A service works, on B it fails.


I have a similar situation. On a machine that had failures, running it 
manually worked fine.


What happens if you run manually?

   /usr/bin/archlinux-keyring-wkd-sync

best

gene



[OE-core] [PATCH] scripts/yocto_testresults_query.py: set proper branches when using resulttool

2023-03-09 Thread Alexis Lothoré via lists . openembedded . org
From: Alexis Lothoré 

The script currently only works if base and target can be found on default
branches. It breaks if we try to generate a regression report between revisions
that live on different branches (as needed on integration and testing branches).
For example, the following command:

./scripts/yocto_testresults_query.py regression-report yocto-4.0.6 yocto-4.0.7

ends with the following error:

[...]
ERROR: Only 1 tester revisions found, unable to generate report
[...]

Read branches from tag names in the test results repository, and pass those
branches to resulttool when generating the report.

Signed-off-by: Alexis Lothoré 
---
 scripts/yocto_testresults_query.py | 21 +++--
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/scripts/yocto_testresults_query.py 
b/scripts/yocto_testresults_query.py
index 3df9d6015fe..4df339c92eb 100755
--- a/scripts/yocto_testresults_query.py
+++ b/scripts/yocto_testresults_query.py
@@ -38,18 +38,27 @@ def get_sha1(pokydir, revision):
 logger.error(f"Can not find SHA-1 for {revision} in {pokydir}")
 return None
 
+def get_branch(tag):
+# The tags in test results repository, as returned by git rev-list, have 
the following form:
+# refs/tags//-g/
+return tag.split("/")[2]
+
 def fetch_testresults(workdir, sha1):
 logger.info(f"Fetching test results for {sha1} in {workdir}")
 rawtags = subprocess.check_output(["git", "ls-remote", "--refs", "--tags", 
"origin", f"*{sha1}*"], cwd=workdir).decode('utf-8').strip()
 if not rawtags:
 raise Exception(f"No reference found for commit {sha1} in {workdir}")
+branch = ""
 for rev in [rawtag.split()[1] for rawtag in rawtags.splitlines()]:
-logger.info(f"Fetching matching revisions: {rev}")
+if not branch:
+branch = get_branch(rev)
+logger.info(f"Fetching matching revision: {rev}")
 subprocess.check_call(["git", "fetch", "--depth", "1", "origin", 
f"{rev}:{rev}"], cwd=workdir)
+return branch
 
-def compute_regression_report(workdir, baserevision, targetrevision):
+def compute_regression_report(workdir, basebranch, baserevision, targetbranch, 
targetrevision):
 logger.info(f"Running resulttool regression between SHA1 {baserevision} 
and {targetrevision}")
-report = subprocess.check_output([resulttool, "regression-git", 
"--commit", baserevision, "--commit2", targetrevision, workdir]).decode("utf-8")
+report = subprocess.check_output([resulttool, "regression-git", 
"--branch", basebranch, "--commit", baserevision, "--branch2", targetbranch, 
"--commit2", targetrevision, workdir]).decode("utf-8")
 return report
 
 def print_report_with_header(report, baseversion, baserevision, targetversion, 
targetrevision):
@@ -74,9 +83,9 @@ def regression(args):
 if not args.testresultsdir:
 subprocess.check_call(["rm", "-rf",  workdir])
 sys.exit(1)
-fetch_testresults(workdir, baserevision)
-fetch_testresults(workdir, targetrevision)
-report = compute_regression_report(workdir, baserevision, 
targetrevision)
+basebranch = fetch_testresults(workdir, baserevision)
+targetbranch = fetch_testresults(workdir, targetrevision)
+report = compute_regression_report(workdir, basebranch, baserevision, 
targetbranch, targetrevision)
 print_report_with_header(report, args.base, baserevision, args.target, 
targetrevision)
 finally:
 if not args.testresultsdir:
-- 
2.39.2
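
To illustrate the tag-name convention get_branch() relies on and how the
branches end up on the resulttool command line, here is a small sketch (the
tag, SHA1s and workdir are made-up values):

#!/usr/bin/env python3
# Illustrative only: the tag, SHA1s and workdir below are made-up values.
tag = "refs/tags/yocto-4.0/1234-gdeadbeef/0"
basebranch = tag.split("/")[2]     # what get_branch() returns: "yocto-4.0"

# One branch/commit pair per side of the comparison:
cmd = ["resulttool", "regression-git",
       "--branch", basebranch, "--commit", "aaaa1111",
       "--branch2", "yocto-4.0", "--commit2", "bbbb2222",
       "/tmp/testresults"]
print(" ".join(cmd))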





[RFC 6/X] omp: Allow creation of simd clones from omp declare variant with -fopenmp-simd flag

2023-03-08 Thread Andre Vieira (lists) via Gcc-patches

Hi,

This RFC is to propose relaxing the flag needed to allow the creation of 
simd clones from omp declare variants, such that we can use 
-fopenmp-simd rather than -fopenmp.
This should only change the behaviour of omp simd clones and should not 
enable any other openmp functionality, though I need to test this 
further; for the time being I have just played around a bit with some of 
the existing declare-variant tests.


Any objections to this in general? And/or ideas to properly test the 
effect of this on other omp codegen? My current plan is to have a look 
at the declare-variant tests we had before this patch series, locally 
modify them to pass -fopenmp-simd and make sure they fail the same way 
before and after this patch.

diff --git a/gcc/c/c-parser.cc b/gcc/c/c-parser.cc
index 
21bc3167ce224823c214efc064be399f2da9c787..b28e3d0a8adb520941dc3a17173cc07de4a653c5
 100644
--- a/gcc/c/c-parser.cc
+++ b/gcc/c/c-parser.cc
@@ -23564,6 +23564,13 @@ c_parser_omp_declare (c_parser *parser, enum 
pragma_context context)
  c_parser_omp_declare_reduction (parser, context);
  return false;
}
+  if (strcmp (p, "variant") == 0)
+   {
+ /* c_parser_consume_token (parser); done in
+c_parser_omp_declare_simd.  */
+ c_parser_omp_declare_simd (parser, context);
+ return true;
+   }
   if (!flag_openmp)  /* flag_openmp_simd  */
{
  c_parser_skip_to_pragma_eol (parser, false);
@@ -23575,13 +23582,6 @@ c_parser_omp_declare (c_parser *parser, enum 
pragma_context context)
  c_parser_omp_declare_target (parser);
  return false;
}
-  if (strcmp (p, "variant") == 0)
-   {
- /* c_parser_consume_token (parser); done in
-c_parser_omp_declare_simd.  */
- c_parser_omp_declare_simd (parser, context);
- return true;
-   }
 }
 
   c_parser_error (parser, "expected %, %, "
diff --git a/gcc/cp/decl.cc b/gcc/cp/decl.cc
index 
1aa5f1a7898df9483a2af4f6f9fea99e6b219271..7bd32fd3e345a003be03d1e9acf33db76eed9460
 100644
--- a/gcc/cp/decl.cc
+++ b/gcc/cp/decl.cc
@@ -8428,7 +8428,7 @@ cp_finish_decl (tree decl, tree init, bool 
init_const_expr_p,
suppress_warning (decl, OPT_Winit_self);
 }
 
-  if (flag_openmp
+  if (flag_openmp_simd
   && TREE_CODE (decl) == FUNCTION_DECL
   /* #pragma omp declare variant on methods handled in finish_struct
 instead.  */
diff --git a/gcc/cp/parser.cc b/gcc/cp/parser.cc
index 
1a124f5395e018f3c4b2f9f36fcd42159d0b868f..d1c7f9d91d2546ad8f5674232a05f7d7726eeafe
 100644
--- a/gcc/cp/parser.cc
+++ b/gcc/cp/parser.cc
@@ -47884,7 +47884,7 @@ cp_parser_omp_declare (cp_parser *parser, cp_token 
*pragma_tok,
  context, false);
  return true;
}
-  if (flag_openmp && strcmp (p, "variant") == 0)
+  if (strcmp (p, "variant") == 0)
{
  cp_lexer_consume_token (parser->lexer);
  cp_parser_omp_declare_simd (parser, pragma_tok,
diff --git a/gcc/testsuite/gcc.target/aarch64/declare-variant-1.c 
b/gcc/testsuite/gcc.target/aarch64/declare-variant-1.c
index 
c44c9464f4e27047db9be5b0c9710ae3cfee8eee..83eeadd108b5578623c63e73dea11b2b17a08618
 100644
--- a/gcc/testsuite/gcc.target/aarch64/declare-variant-1.c
+++ b/gcc/testsuite/gcc.target/aarch64/declare-variant-1.c
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O3 -fopenmp" } */
+/* { dg-options "-O3 -fopenmp-simd" } */
 
 #include "declare-variant-1.x"
 
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/declare-variant-1.c 
b/gcc/testsuite/gcc.target/aarch64/sve/declare-variant-1.c
index 
7a8129fe88ac9759b2337892a3d14f4e8196e61f..616b0ed1c1dc019103dae504d2cec65523a35a3d
 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/declare-variant-1.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/declare-variant-1.c
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O3 -fopenmp" } */
+/* { dg-options "-O3 -fopenmp-simd" } */
 
 #include "../declare-variant-1.x"
 
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/declare-variant-2.c 
b/gcc/testsuite/gcc.target/aarch64/sve/declare-variant-2.c
index 
2b6eabac76cf1cd059ec8d960ddd9e30973dc797..a832c5255306999b0006b68b1890c7f42c3dafb0
 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/declare-variant-2.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/declare-variant-2.c
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O3 -fopenmp -msve-vector-bits=128" } */
+/* { dg-options "-O3 -fopenmp-simd -msve-vector-bits=128" } */
 
 #include "../declare-variant-1.x"
 
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/declare-variant-3.c 
b/gcc/testsuite/gcc.target/aarch64/sve/declare-variant-3.c
index 
e8b598fe479d7e1e92eb7f9e3413d5ac183626a9..455c0338d4680d143daae666c29e4f018df5bff9
 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/declare-variant-3.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/declare-variant-3.c
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { 

[RFC 5/X] omp: Create simd clones from 'omp declare variant's

2023-03-08 Thread Andre Vieira (lists) via Gcc-patches

Hi,

This RFC extends the omp-simd-clone pass to create simd clones for 
functions with 'omp declare variant' pragmas that contain simd 
constructs. This patch also implements AArch64's use for this functionality.
This requires two extra pieces of information to be kept for each 
simd clone: a 'variant_name', since each variant has to be named upon 
declaration, and a 'device', since an omp variant can have 
device clauses that 'select' the device the variant is meant 
to be used with. For the latter I decided to currently implement it as 
an 'int', to keep a 'code' per device which is target dependent. Though 
we may want to expand this in the future to contain a target dependent 
'target selector' of sorts. This would enable the implementation of the 
'arch' device clause described in the BETA ABI, which can be found in the 
vfabia64 subdir of https://github.com/ARM-software/abi-aa/; this patch 
only implements support for the two 'isa' device clauses, isa("simd") and 
isa("sve").


I'll create a ChangeLog when I turn this into a PATCH if we agree on 
this direction.

diff --git a/gcc/cgraph.h b/gcc/cgraph.h
index 
b5fc739f1b0602a871040292a5bb1d69a9ef305f..ae1af65a9b5913ec435e783223e79767ddd68341
 100644
--- a/gcc/cgraph.h
+++ b/gcc/cgraph.h
@@ -810,6 +810,14 @@ struct GTY(()) cgraph_simd_clone {
   /* Original cgraph node the SIMD clones were created for.  */
   cgraph_node *origin;
 
+  /* This is a flag to indicate what device was selected for the variant
+ clone.  Always 0 for 'omp declare simd' clones.  */
+  unsigned device;
+
+  /* The identifier for the name of the variant in case of a declare variant
+ clone, this is NULL_TREE for declare simd clones.  */
+  tree variant_name;
+
   /* Annotated function arguments for the original function.  */
   cgraph_simd_clone_arg GTY((length ("%h.nargs"))) args[1];
 };
diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 
ef93a4e9d43799df4410f152cdd798db285e8897..344c6001fdd646a31326f5deb8ff94873d346ed1
 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -26970,15 +26970,28 @@ aarch64_simd_clone_compute_vecsize_and_simdlen 
(struct cgraph_node *node,
 
   clonei->mask_mode = VOIDmode;
   elt_bits = GET_MODE_BITSIZE (SCALAR_TYPE_MODE (base_type));
+  /* A simdclone without simdlen can legally originate from either a:
+ 'omp declare simd':
+   In this case generate at least 3 simd clones, one for Advanced SIMD
+   64-bit vectors, one for Advanced SIMD 128-bit vectors and one for SVE
+   vector length agnostic vectors.
+  'omp declare variant':
+   In this case we must be generating a simd clone for SVE vector length
+   agnostic vectors.
+   */
   if (known_eq (clonei->simdlen, 0U))
 {
-  if (num >= 2)
+  if (clonei->device == 2 || num >= 2)
{
+ count = 1;
  vec_bits = poly_uint64 (128, 128);
  clonei->simdlen = exact_div (vec_bits, elt_bits);
}
   else
{
+ if (clonei->device != 0)
+   return 0;
+
  count = 3;
  vec_bits = (num == 0 ? 64 : 128);
  clonei->simdlen = exact_div (vec_bits, elt_bits);
@@ -26991,7 +27004,14 @@ aarch64_simd_clone_compute_vecsize_and_simdlen (struct 
cgraph_node *node,
   /* For now, SVE simdclones won't produce illegal simdlen, So only check
 const simdlens here.  */
   if (clonei->simdlen.is_constant (_simdlen)
- && maybe_ne (vec_bits, 64U) && maybe_ne (vec_bits, 128U))
+ /* For Advanced SIMD we require either 64- or 128-bit vectors.  */
+ && ((clonei->device < 2
+  && maybe_ne (vec_bits, 64U)
+  && maybe_ne (vec_bits, 128U))
+ /* For SVE we require multiples of 128-bits.  TODO: should we check
+for max VL?  */
+ || (clonei->device == 2
+ && !constant_multiple_p (vec_bits, 128
{
  if (explicit_p)
warning_at (DECL_SOURCE_LOCATION (node->decl), 0,
@@ -27002,7 +27022,7 @@ aarch64_simd_clone_compute_vecsize_and_simdlen (struct 
cgraph_node *node,
}
 }
 
-  if (num >= 2)
+  if (clonei->device == 2 || num >= 2)
 {
   clonei->vecsize_mangle = 's';
   clonei->inbranch = 1;
@@ -27082,22 +27102,21 @@ aarch64_simd_clone_adjust_ret_or_param (struct 
cgraph_node *node, tree type,
aarch64_sve_vg = poly_uint16 (2, 2);
   unsigned int num_zr = 0;
   unsigned int num_pr = 0;
+  tree base_type = TREE_TYPE (type);
+  if (POINTER_TYPE_P (base_type))
+   base_type = pointer_sized_int_node;
+  scalar_mode base_mode = as_a  (TYPE_MODE (base_type));
+  machine_mode vec_mode = aarch64_full_sve_mode (base_mode).require ();
+  tree vectype = build_vector_type_for_mode (base_type, vec_mode);
   if (is_mask)
{
- type = truth_type_for (type);
  num_pr = 1;
+ type = truth_type_for (vectype);
}
   else
   

[RFC 4/X] omp, aarch64: Add SVE support for 'omp declare simd' [PR 96342]

2023-03-08 Thread Andre Vieira (lists) via Gcc-patches

Hi,

This patch adds SVE support for simd clone generation when using 'omp 
declare simd'. The design is based on what was discussed in PR 96342, 
but I did not look at YangYang's patch as I wasn't sure of whether that 
code's copyright had been assigned to FSF.


This patch also is not in accordance with the examples in the BETA 
VFABIA64 document that can be found in the vfabia64 subdir of 
https://github.com/ARM-software/abi-aa/

If we agree to this approach I will propose changes to the ABI.
It differs in that we take the omission of 'simdlen' to be the only way 
to create an SVE simd clone using 'omp declare simd', and that the 
current target defined on the command line has no bearing on what simd 
clones are generated. This SVE simd clone is always VLA.
The document describes a way to specify VLS SVE simd clones by using 
the simdlen clause, but that would require another way to toggle between 
SVE and Advanced SIMD; since there is no clause to do that for 'omp 
declare simd', I would have to assume this would be controlled by the 
command-line target options (march/mcpu).
By generating all possible Advanced SIMD simdlens and a VLA simdlen for 
SVE when ommitting simdlen we would be adhering to the same practice 
x86_64 does.


Targethook changes

This patch proposes two targethook changes:
1) Add mode parameter to TARGET_SIMD_CLONE_USABLE
We require the mode parameter to distinguish between calls to a simd 
clone from an Advanced SIMD mode and an SVE mode.


2) Add new TARGET_SIMD_CLONE_ADJUST_RET_OR_PARAM
We require this to be able to modify the types used in SVE simd clones, 
as we need to add the SVE type attribute so that the correct PCS can be 
applied.


Other notable changes:
- We discourage the use of an 'inbranch' simdclone for when the caller 
is not in a branch, such that it picks a 'notinbranch' variant if 
available over an inbranch one. (we could probably rely on ordering but 
that's quite error prone and the ordering I'm looking at is by 
definition target specific).
- I currently put the VLA mangling in the target agnostic mangling 
function, if other targets with VLA want to use a different mangling in 
the future we may want to change this into a targethook.



I'll create a ChangeLog when I turn this into a PATCH if we agree on 
this direction.

diff --git a/gcc/config/aarch64/aarch64-protos.h 
b/gcc/config/aarch64/aarch64-protos.h
index 
f75eb892f3daa7c2576efcedc8d944ab1e895cdb..122a473770eb4526ecce326f02d843608d088b5b
 100644
--- a/gcc/config/aarch64/aarch64-protos.h
+++ b/gcc/config/aarch64/aarch64-protos.h
@@ -995,6 +995,8 @@ namespace aarch64_sve {
 #ifdef GCC_TARGET_H
   bool verify_type_context (location_t, type_context_kind, const_tree, bool);
 #endif
+ void add_sve_type_attribute (tree, unsigned int, unsigned int,
+ const char *, const char *);
 }
 
 extern void aarch64_split_combinev16qi (rtx operands[3]);
diff --git a/gcc/config/aarch64/aarch64-sve-builtins.cc 
b/gcc/config/aarch64/aarch64-sve-builtins.cc
index 
161a14edde7c9fb1b13b146cf50463e2d78db264..6f99c438d10daa91b7e3b623c995489f1a8a0f4c
 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins.cc
+++ b/gcc/config/aarch64/aarch64-sve-builtins.cc
@@ -569,14 +569,16 @@ static bool reported_missing_registers_p;
 /* Record that TYPE is an ABI-defined SVE type that contains NUM_ZR SVE vectors
and NUM_PR SVE predicates.  MANGLED_NAME, if nonnull, is the ABI-defined
mangling of the type.  ACLE_NAME is the  name of the type.  */
-static void
+void
 add_sve_type_attribute (tree type, unsigned int num_zr, unsigned int num_pr,
const char *mangled_name, const char *acle_name)
 {
   tree mangled_name_tree
 = (mangled_name ? get_identifier (mangled_name) : NULL_TREE);
+  tree acle_name_tree
+= (acle_name ? get_identifier (acle_name) : NULL_TREE);
 
-  tree value = tree_cons (NULL_TREE, get_identifier (acle_name), NULL_TREE);
+  tree value = tree_cons (NULL_TREE, acle_name_tree, NULL_TREE);
   value = tree_cons (NULL_TREE, mangled_name_tree, value);
   value = tree_cons (NULL_TREE, size_int (num_pr), value);
   value = tree_cons (NULL_TREE, size_int (num_zr), value);
diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 
5c40b6ed22a508723bd535a7460762c3a243d441..ef93a4e9d43799df4410f152cdd798db285e8897
 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -4015,13 +4015,13 @@ aarch64_takes_arguments_in_sve_regs_p (const_tree 
fntype)
 static const predefined_function_abi &
 aarch64_fntype_abi (const_tree fntype)
 {
-  if (lookup_attribute ("aarch64_vector_pcs", TYPE_ATTRIBUTES (fntype)))
-return aarch64_simd_abi ();
-
   if (aarch64_returns_value_in_sve_regs_p (fntype)
   || aarch64_takes_arguments_in_sve_regs_p (fntype))
 return aarch64_sve_abi ();
 
+  if (lookup_attribute ("aarch64_vector_pcs", TYPE_ATTRIBUTES (fntype)))
+return aarch64_simd_abi ();
+
   return default_function_abi;
 }
 

[PATCH 3/X] parloops: Allow poly number of iterations

2023-03-08 Thread Andre Vieira (lists) via Gcc-patches

Hi,

This patch modifies try_transform_to_exit_first_loop_alt in parloops to 
allow it to handle loops with poly iteration counts.


gcc/ChangeLog:

* tree-parloops.cc (try_transform_to_exit_first_loop_alt): 
Handle poly nits.


Is this OK for Stage 1?

diff --git a/gcc/tree-parloops.cc b/gcc/tree-parloops.cc
index 
02c1ed3220a949c1349536ef3f74bb497bf76f71..0a3133a3ae7932e11aa680dc14b8ea01613a514c
 100644
--- a/gcc/tree-parloops.cc
+++ b/gcc/tree-parloops.cc
@@ -2531,14 +2531,16 @@ try_transform_to_exit_first_loop_alt (class loop *loop,
   tree nit_type = TREE_TYPE (nit);
 
   /* Figure out whether nit + 1 overflows.  */
-  if (TREE_CODE (nit) == INTEGER_CST)
+  if (TREE_CODE (nit) == INTEGER_CST
+  || TREE_CODE (nit) == POLY_INT_CST)
 {
   if (!tree_int_cst_equal (nit, TYPE_MAX_VALUE (nit_type)))
{
  alt_bound = fold_build2_loc (UNKNOWN_LOCATION, PLUS_EXPR, nit_type,
   nit, build_one_cst (nit_type));
 
- gcc_assert (TREE_CODE (alt_bound) == INTEGER_CST);
+ gcc_assert (TREE_CODE (alt_bound) == INTEGER_CST
+ || TREE_CODE (alt_bound) == POLY_INT_CST);
  transform_to_exit_first_loop_alt (loop, reduction_list, alt_bound);
  return true;
}


[PATCH 2/X] parloops: Copy target and optimizations when creating a function clone

2023-03-08 Thread Andre Vieira (lists) via Gcc-patches

Hi,

This patch makes sure we copy over 
DECL_FUNCTION_SPECIFIC_{TARGET,OPTIMIZATION} in parloops when creating 
function clones.  This is required for SVE clones as we will need to 
enable +sve for them, regardless of the current target options.
I don't actually need the 'OPTIMIZATION' for this patch, but it sounds 
like a nice feature to have, so you can use pragmas to better control 
options used in simd_clone generation.


gcc/ChangeLog:

* tree-parloops.cc (create_loop_fn): Copy specific target and 
optimization options when creating a function clone.

Is this OK for stage 1?

diff --git a/gcc/tree-parloops.cc b/gcc/tree-parloops.cc
index 
dfb75c369d6d00d893ddd6fc28f189ec0d774711..02c1ed3220a949c1349536ef3f74bb497bf76f71
 100644
--- a/gcc/tree-parloops.cc
+++ b/gcc/tree-parloops.cc
@@ -2203,6 +2203,11 @@ create_loop_fn (location_t loc)
   DECL_CONTEXT (t) = decl;
   TREE_USED (t) = 1;
   DECL_ARGUMENTS (decl) = t;
+  DECL_FUNCTION_SPECIFIC_TARGET (decl)
+= DECL_FUNCTION_SPECIFIC_TARGET (act_cfun->decl);
+  DECL_FUNCTION_SPECIFIC_OPTIMIZATION (decl)
+= DECL_FUNCTION_SPECIFIC_OPTIMIZATION (act_cfun->decl);
+
 
   allocate_struct_function (decl, false);
 


[PATCH 1/X] omp: Replace simd_clone_subparts with TYPE_VECTOR_SUBPARTS

2023-03-08 Thread Andre Vieira (lists) via Gcc-patches

Hi,

This patch replaces the uses of simd_clone_subparts with 
TYPE_VECTOR_SUBPARTS and removes the definition of the first.


gcc/ChangeLog:

* omp-simd-clone.cc (simd_clone_subparts): Remove.
(simd_clone_init_simd_arrays): Replace simd_clone_subparts with 
TYPE_VECTOR_SUBPARTS.

(ipa_simd_modify_function_body): Likewise.
* tree-vect-stmts.cc (simd_clone_subparts): Remove.
(vectorizable_simd_clone_call): Replace simd_clone_subparts 
with TYPE_VECTOR_SUBPARTS.

diff --git a/gcc/omp-simd-clone.cc b/gcc/omp-simd-clone.cc
index 
0949b8ba288dfc7e7692403bfc600983faddf5dd..48b480e7556d9ad8e5502e10e513ec36b17b9cbb
 100644
--- a/gcc/omp-simd-clone.cc
+++ b/gcc/omp-simd-clone.cc
@@ -255,16 +255,6 @@ ok_for_auto_simd_clone (struct cgraph_node *node)
   return true;
 }
 
-
-/* Return the number of elements in vector type VECTYPE, which is associated
-   with a SIMD clone.  At present these always have a constant length.  */
-
-static unsigned HOST_WIDE_INT
-simd_clone_subparts (tree vectype)
-{
-  return TYPE_VECTOR_SUBPARTS (vectype).to_constant ();
-}
-
 /* Allocate a fresh `simd_clone' and return it.  NARGS is the number
of arguments to reserve space for.  */
 
@@ -1027,7 +1017,7 @@ simd_clone_init_simd_arrays (struct cgraph_node *node,
}
  continue;
}
-  if (known_eq (simd_clone_subparts (TREE_TYPE (arg)),
+  if (known_eq (TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg)),
node->simdclone->simdlen))
{
  tree ptype = build_pointer_type (TREE_TYPE (TREE_TYPE (array)));
@@ -1039,7 +1029,7 @@ simd_clone_init_simd_arrays (struct cgraph_node *node,
}
   else
{
- unsigned int simdlen = simd_clone_subparts (TREE_TYPE (arg));
+ poly_uint64 simdlen = TYPE_VECTOR_SUBPARTS (TREE_TYPE (arg));
  unsigned int times = vector_unroll_factor (node->simdclone->simdlen,
 simdlen);
  tree ptype = build_pointer_type (TREE_TYPE (TREE_TYPE (array)));
@@ -1225,9 +1215,9 @@ ipa_simd_modify_function_body (struct cgraph_node *node,
  iter, NULL_TREE, NULL_TREE);
   adjustments->register_replacement (&(*adjustments->m_adj_params)[j], r);
 
-  if (multiple_p (node->simdclone->simdlen, simd_clone_subparts (vectype)))
+  if (multiple_p (node->simdclone->simdlen, TYPE_VECTOR_SUBPARTS 
(vectype)))
j += vector_unroll_factor (node->simdclone->simdlen,
-  simd_clone_subparts (vectype)) - 1;
+  TYPE_VECTOR_SUBPARTS (vectype)) - 1;
 }
   adjustments->sort_replacements ();
 
diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index 
df6239a1c61c7213ad3c1468723bc1adf70bc02c..c85b6babc4bc5bc3111ef326dcc8f32bb25333f6
 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -3964,16 +3964,6 @@ vect_simd_lane_linear (tree op, class loop *loop,
 }
 }
 
-/* Return the number of elements in vector type VECTYPE, which is associated
-   with a SIMD clone.  At present these vectors always have a constant
-   length.  */
-
-static unsigned HOST_WIDE_INT
-simd_clone_subparts (tree vectype)
-{
-  return TYPE_VECTOR_SUBPARTS (vectype).to_constant ();
-}
-
 /* Function vectorizable_simd_clone_call.
 
Check if STMT_INFO performs a function call that can be vectorized
@@ -4251,7 +4241,7 @@ vectorizable_simd_clone_call (vec_info *vinfo, 
stmt_vec_info stmt_info,
  slp_node);
if (arginfo[i].vectype == NULL
|| !constant_multiple_p (bestn->simdclone->simdlen,
-simd_clone_subparts (arginfo[i].vectype)))
+TYPE_VECTOR_SUBPARTS (arginfo[i].vectype)))
  return false;
   }
 
@@ -4349,15 +4339,19 @@ vectorizable_simd_clone_call (vec_info *vinfo, 
stmt_vec_info stmt_info,
case SIMD_CLONE_ARG_TYPE_VECTOR:
  atype = bestn->simdclone->args[i].vector_type;
  o = vector_unroll_factor (nunits,
-   simd_clone_subparts (atype));
+   TYPE_VECTOR_SUBPARTS (atype));
  for (m = j * o; m < (j + 1) * o; m++)
{
- if (simd_clone_subparts (atype)
- < simd_clone_subparts (arginfo[i].vectype))
+ poly_uint64 atype_subparts = TYPE_VECTOR_SUBPARTS (atype);
+ poly_uint64 arginfo_subparts
+   = TYPE_VECTOR_SUBPARTS (arginfo[i].vectype);
+ if (known_lt (atype_subparts, arginfo_subparts))
{
  poly_uint64 prec = GET_MODE_BITSIZE (TYPE_MODE (atype));
- k = (simd_clone_subparts (arginfo[i].vectype)
-  / simd_clone_subparts (atype));
+ if (!constant_multiple_p (atype_subparts,
+ 

[RFC 0/X] Implement GCC support for AArch64 libmvec

2023-03-08 Thread Andre Vieira (lists) via Gcc-patches

Hi all,

This is a series of patches/RFCs to implement support in GCC to be able 
to target AArch64's libmvec functions that will be/are being added to glibc.
We have chosen to use the omp pragma '#pragma omp declare variant ...' 
with a simd construct as the way for glibc to inform GCC what functions 
are available.


For example, if we would like to supply a vector version of the scalar 
'cosf' we would have an include file with something like:

typedef __attribute__((__neon_vector_type__(4))) float __f32x4_t;
typedef __attribute__((__neon_vector_type__(2))) float __f32x2_t;
typedef __SVFloat32_t __sv_f32_t;
typedef __SVBool_t __sv_bool_t;
__f32x4_t _ZGVnN4v_cosf (__f32x4_t);
__f32x2_t _ZGVnN2v_cosf (__f32x2_t);
__sv_f32_t _ZGVsMxv_cosf (__sv_f32_t, __sv_bool_t);
#pragma omp declare variant(_ZGVnN4v_cosf) \
match(construct = {simd(notinbranch, simdlen(4))}, device = 
{isa("simd")})

#pragma omp declare variant(_ZGVnN2v_cosf) \
match(construct = {simd(notinbranch, simdlen(2))}, device = 
{isa("simd")})

#pragma omp declare variant(_ZGVsMxv_cosf) \
match(construct = {simd(inbranch)}, device = {isa("sve")})
extern float cosf (float);

The BETA ABI can be found in the vfabia64 subdir of 
https://github.com/ARM-software/abi-aa/
This currently disagrees with how this patch series implements 'omp 
declare simd' for SVE and I also do not see a need for the 'omp declare 
variant' scalable extension constructs. I will make changes to the ABI 
once we've finalized the co-design of the ABI and this implementation.


The patch series has three main steps:
1) Add SVE support for 'omp declare simd', see PR 96342
2) Enable GCC to use omp declare variants with simd constructs as simd 
clones during auto-vectorization.
3) Add SLP support for vectorizable_simd_clone_call (This sounded like a 
nice thing to add as we want to move away from non-slp vectorization).


Below you can see the list of current Patches/RFCs, the difference being 
how confident I am in the proposed changes. For the RFCs I am hoping 
to get early comments on the approach, rather than more in-depth 
code reviews.


I appreciate we are still in Stage 4, so I can completely understand if 
you don't have time to review this now, but I thought it can't hurt to 
post these early.


Andre Vieira:
[PATCH] omp: Replace simd_clone_subparts with TYPE_VECTOR_SUBPARTS
[PATCH] parloops: Copy target and optimizations when creating a function 
clone

[PATCH] parloops: Allow poly nit and bound
[RFC] omp, aarch64: Add SVE support for 'omp declare simd' [PR 96342]
[RFC] omp: Create simd clones from 'omp declare variant's
[RFC] omp: Allow creation of simd clones from omp declare variant with 
-fopenmp-simd flag


Work in progress:
[RFC] vect: Enable SLP codegen for vectorizable_simd_clone_call


Re: PESO: Invaders!

2023-03-08 Thread lists



> On 5 Mar 2023, at 14:12, Alan C  wrote:
> 
> We were graced by the appearance of Guinea Fowl with chicks.


Nice one Alan!

Saw lots of those, sometimes in flocks of hundreds, last year in the Kalahari 
Desert (Deception Valley, Botswana).


> They have a hard time around here with all the feral cats. However we have 
> had 518mm rain in Feb. (a record) so the bush is very dense which gives them 
> better protection. Probably some of the older ones remembered being fed here 
> last year. They are still very skittish so I had to move very slowly to get a 
> few grabs.
> 
> https://www.flickr.com/photos/wisselstroom/52726656337/
> 
> K5 & HD 55-300

Regards, JvW

=
Jan van Wijk; author of DFsee;  https://www.dfsee.com



Re: Dual Root setup

2023-03-07 Thread Genes Lists

All

I have updated the code and now provide an inotify-based daemon to sync 
alternate esp's - and a systemd service unit to run it.


 I would very much appreciate it if others ran this - with the test 
option it does nothing but print what would happen, and it can be run as a 
non-root user.


 It is working here on a test machine set up with 2 disks, 2 esps and 
with btrfs raid1 for the root file system across the 2 disks (thanks 
Oscar).


 esp's are mounted on /efi0 and /efi1 - and the currently booted esp is 
bind mounted on /boot.


 One daemon bind mounts /boot automatically from the currently booted 
esp, and the other monitors for changes and updates any alternate esp(s).


  I consider this now feature complete :)

  Again, thanks for feedback and ideas.

  All code and docs and unit files are available via :

the AUR https://aur.archlinux.org/packages/dual-root
github  https://github.com/gene-git/dual-root

best

gene



Re: Dual Root setup

2023-03-07 Thread Genes Lists

On 3/7/23 03:50, Óscar García Amor wrote:

El lun, 06-03-2023 a las 21:30 -0500, Jonathan Whitlock escribió:


This is about having a computer that is resilient to root drive failure. 
 This is in addition to doing backups, certainly not a replacement :)



gene



Re: Dual Root setup

2023-03-06 Thread Genes Lists

On 3/4/23 12:56, Genes Lists wrote:


I know there's lots of info available about dual boot - but not much I 
could find on Dual Root.


What is Dual Root?

    This is a machine with 2 "root" disks where the second one is a hot 
standby - in event of root disk failure the second disk can be booted 
very quickly. This makes recovering very fast and thereafter replacing 
the bad drive and rebuilding very straightforward.  And no new install 
needed!


    I have now set this up on a few machines and its working well - so I 
wrote up some notes and am sharing them in case others who may be 
interested in doing something similar might find them useful.


    My notes explaining how to do this are available here:
    https://github.com/gene-git/blog


best

gene


I created a new repo for the code and provided an AUR package for it as 
well.


   https://github.com/gene-git/dual-root
   https://aur.archlinux.org/packages/dual-root

I'd appreciate wider testing with the currently booted ESP detection
Please run the script as non-root with no arguments (or -h for help)

   dual-root-tool

It should identify the esp used to boot current system and where it is 
mounted and print them out. Should work regardless of number of esp 
partitions on the system (1 or more).


  There is a companion bind-mount-efi which will bind mount the current 
esp onto /boot if it's not bound.  It uses the "-b" option of the tool to 
do this. See the README for more info on setting up dual root.

  Some more coding still to do, but at this point I'm happy to share it 
more widely.


Thanks again to all.


gene




Re: Dual Root setup

2023-03-06 Thread Genes Lists

On 3/6/23 02:50, Óscar García Amor wrote:



Interesting, I'll take a look at it when you upload the code.



I'd appreciate wider testing on the code - we all know that just because 
it works for me, doesn't mean it will work everywhere with certainty.


It would be super helpful if others can test the code.

A safe and simple test is just to run the dual-root-tool script with no 
arguments. This can be run as a non-root user. It simply prints some 
information about the currently booted esp.


Should work whether there is 1 esp or more.

I plan to upload the code today after some more local testing here.
I will also make an aur package which will provide both the tool and a 
systemd service to bind mount the currently booted esp onto /boot.


Details are in the notes [1]

Thanks again for sharing ideas and suggestions - it is definitely making 
things a lot better!


best

gene

 [1] https://github.com/gene-git/blog




Re: [gentoo-user] Setting a fixed nameserver for openvpn

2023-03-06 Thread Wols Lists

On 06/03/2023 11:08, Peter Humphrey wrote:

On Monday, 6 March 2023 10:56:37 GMT Wols Lists wrote:

On 06/03/2023 10:06, Michael wrote:



I suspect the behaviour you noticed is related to FF functionality like
TRR
(Trusted Recursive Resolver) farming all your DNS queries over to the
cloudfarce honeypot.

Have a look here if you want to disable it:

https://wiki.archlinux.org/title/Firefox/Privacy#Disable/
enforce_'Trusted_Recursive_Resolver'


Thanks. That led me to network.trr.allow-rfc1918, which (provided your
name has a dot in it!) appears to resolve addresses from /etc/hosts. I
guess that actually means firefox uses your local resolver first, and if
it returns an rfc1918 address, will use it.

Surely that should be the default! It shouldn't break a PRIVATE network
in the name of security !!!


It is the default here, in www-client/firefox-110.0.1 .

I'm running amd not ~amd, and I've got FF 102esr. As soon as I changed 
it to allow rfc1918, it started working ...


Cheers,
Wol



Re: [gentoo-user] Setting a fixed nameserver for openvpn

2023-03-06 Thread Wols Lists

On 06/03/2023 10:06, Michael wrote:

On Monday, 6 March 2023 08:24:35 GMT Wols Lists wrote:

On 06/03/2023 08:08, Neil Bothwick wrote:

On Mon, 6 Mar 2023 07:54:51 +, Wols Lists wrote:

There's another file - can't remember its name - that tells your
resolver what to try in what order - the hosts file, dns, what dhcp
told you, etc etc, so your resolver might not be using dns the way you
think.


Do you mean /etc/nsswitch.conf?


Ah yes. Any idea why Firefox seems to ignore it? Whenever I try to
browse to local machines in /etc/hosts, firefox gives me a google search
page which is a bloody nuisance. If I type a VALID ADDRESS in the
ADDRESS BAR, that's where I expect to go! Not some damn random search page!

Cheers,
Wol


I suspect the behaviour you noticed is related to FF functionality like TRR
(Trusted Recursive Resolver) farming all your DNS queries over to the
cloudfarce honeypot.

Have a look here if you want to disable it:

https://wiki.archlinux.org/title/Firefox/Privacy#Disable/
enforce_'Trusted_Recursive_Resolver'


Thanks. That led me to network.trr.allow-rfc1918, which (provided your 
name has a dot in it!) appears to resolve addresses from /etc/hosts. I 
guess that actually means firefox uses your local resolver first, and if 
it returns an rfc1918 address, will use it.


Surely that should be the default! It shouldn't break a PRIVATE network 
in the name of security !!!


Cheers,
Wol



Re: [gentoo-user] Setting a fixed nameserver for openvpn

2023-03-06 Thread Wols Lists

On 06/03/2023 08:08, Neil Bothwick wrote:

On Mon, 6 Mar 2023 07:54:51 +, Wols Lists wrote:


There's another file - can't remember its name - that tells your
resolver what to try in what order - the hosts file, dns, what dhcp
told you, etc etc, so your resolver might not be using dns the way you
think.


Do you mean /etc/nsswitch.conf?


Ah yes. Any idea why Firefox seems to ignore it? Whenever I try to 
browse to local machines in /etc/hosts, firefox gives me a google search 
page which is a bloody nuisance. If I type a VALID ADDRESS in the 
ADDRESS BAR, that's where I expect to go! Not some damn random search page!


Cheers,
Wol



Re: [gentoo-user] Setting a fixed nameserver for openvpn

2023-03-05 Thread Wols Lists

On 05/03/2023 18:41, Dale wrote:

I edited the file they say with kwrite.  Even after I restart openvpn,
the IP they want is there but it doesn't use it according to the site
they sent for me to check it with.  It shows other IP addresses.  I'm
sure I'm missing something, likely something simple, but I can't figure
out how to make it work.  I don't know if it is because I'm using openrc
or what.

Anyone have a idea on how to make this work?


resolv.conf tells DNS where to look. That's not openrc/systemd specific.

There's another file - can't remember its name - that tells your 
resolver what to try in what order - the hosts file, dns, what dhcp told 
you, etc etc, so your resolver might not be using dns the way you think.


I can't get that to work, either. I want my hosts file to take priority, 
but it's ignored.


And then, of course, to really screw you over your ISP might be 
hijacking dns.


Cheers,
Wol



Re: Dual Root setup

2023-03-05 Thread Genes Lists
I have updated the notes which now shows the original way but also the 
approach suggested by Oscar (thank you) - this is a superior method but 
bit more painful for existing installs.


This way has an esp on each disk along with btrfs raid1 for the rest, 
basically.


I have a working example doing this and its pretty nice!
Yes I had to repartition and reformat 2 disks for my tests but without 
actually doing it, its all just theory :)


This had one little puzzling wrinkle that needed to be solved to make 
this viable. And it turned out to be quite tricky. The challenge is to 
identify which of the 2 esp's was actually booted, so the correct one 
can be bind mounted on to /boot.


Now that I solved that little puzzle, the rest is pretty straightforward.
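
One way to solve it, shown here only as a simplified sketch of the idea (it 
assumes systemd-boot, which records the PARTUUID of the partition it was 
started from in the LoaderDevicePartUUID EFI variable):

#!/usr/bin/env python3
# Simplified sketch: find the esp the firmware actually booted from.
# Assumes systemd-boot; names below are real paths, not project code.
import subprocess

EFIVAR = ('/sys/firmware/efi/efivars/'
          'LoaderDevicePartUUID-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f')

def booted_esp_partuuid():
    # First 4 bytes of an efivarfs file are attributes, the rest is UTF-16LE text.
    with open(EFIVAR, 'rb') as f:
        data = f.read()
    return data[4:].decode('utf-16-le').rstrip('\x00').lower()

def device_of(partuuid):
    # Match the PARTUUID against the system's partitions.
    out = subprocess.check_output(['lsblk', '-rno', 'NAME,PARTUUID'], text=True)
    for line in out.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[1].lower() == partuuid:
            return '/dev/' + fields[0]
    return None

if __name__ == '__main__':
    uuid = booted_esp_partuuid()
    print('booted esp:', device_of(uuid), '(PARTUUID', uuid + ')')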

I have a little more coding to do and will make the code available soon. 
I still have some work to do on syncing the esp's, but now we know which 
esp is the 'other' one and which is the 'current' one - that work can 
proceed nicely.


The notes are updated -thanks for the ideas and feedback - much 
appreciated (esp. Oscar)


Current version now up on github
https://github.com/gene-git/blog

best

gene



[tor-relays] D5A3882CBDBE4CAD2F9DDA2AB80FE761BEDC3F11 is spoofing my contact info

2023-03-05 Thread lists
This is _not_ my relay:

https://metrics.torproject.org/rs.html#details/D5A3882CBDBE4CAD2F9DDA2AB80FE761BEDC3F11
https://nusenu.github.io/OrNetStats/w/relay/D5A3882CBDBE4CAD2F9DDA2AB80FE761BEDC3F11.html

-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: Dual Root setup

2023-03-05 Thread Genes Lists

On 3/5/23 07:11, Óscar García Amor wrote:


In fact at hook level you can put one like in the example of the manual

ml

Yes I agree that Hooks are useful, but they do only catch things on 
package updates as far as I know. If you want to catch manual changes, 
like an edit to a loader file, then an inotify-based daemon might be a 
better approach.


best

gene


Re: Dual Root setup

2023-03-05 Thread Genes Lists

On 3/5/23 07:11, Óscar García Amor wrote:

Thanks Oscar - I edited my notes to show this as the preferred approach.
Still needs more write up but I thought it best to get it up sooner than 
later.


Do you know if it would work to use separate /boot partitions, as I 
mention above, (each XBOOTLDR) but raid-1 them together with btrfs?

I imagine this would be fine, but have not tested to confirm.

If so this would seem like a nice variation. These would also use same 
loader configs.


Re: Dual Root setup

2023-03-05 Thread Genes Lists

On 3/5/23 05:13, Óscar García Amor wrote:
...

The method is simple as you simply need two partitions on the two
disks. The first one on each disk is the ESP and the second one is the
one you are going to use for the btrfs raid. Then you simply mount the
raid1 between both partitions btrfs[1] and the ESP partition you also
use it as boot.


Hi Óscar

This is a good approach as there are systemd drivers for btrfs. It's clean, 
simple and transparent.


I would keep separate boot partition(s) - using XBOOTLDR - these can 
also be mirrored using btrfs. As you said, you now only need to sync the 
esp's, and with kernels and initrds on a separate boot, the esp's will 
rarely change.


Summary:
  - 2 x esp - kept in sync
  - boot - btrfs raid-1 (data and metadata)
  - root - btrfs raid-1 (data and metadata)

I like it.
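
For concreteness, the two raid-1 pieces in that summary would be created 
with something like the following (device and partition names are purely 
illustrative):

mkfs.btrfs -m raid1 -d raid1 /dev/sda3 /dev/sdb3   # root, mirrored across both disks
mkfs.btrfs -m raid1 -d raid1 /dev/sda2 /dev/sdb2   # boot (XBOOTLDR), mirrored the same way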

The key to every approach with dual disk boot capability is having 
separate esp's on different disks.  Other than that, it's only a 
question of how much can be safely mirrored on disk and what's left to sync.


I would definitely consider this for a fresh install, or perhaps where 
the 2 disks can be set up separately from the existing boot disk. 
At least until the 2 are working - to minimize down time.


Thanks!

gene







Re: [tor-relays] Confusing bridge signs...

2023-03-04 Thread lists
On Samstag, 4. März 2023 02:09:19 CET Keifer Bly wrote:
> Wheres the pastebin page? Thanks.
$websearch pastebin

https://paste.debian.net/
https://paste.systemli.org/
https://pastebin.mozilla.org/
...


-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!


