[PATCH] aarch64: enable mixed-types for aarch64 simdclones

2023-07-26 Thread Andre Vieira (lists) via Gcc-patches

Hi,

This patch enables the use of mixed types for simd clones on AArch64 
and adds aarch64 to the targets supporting vect_simd_clones.
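
For illustration, here is a minimal, hypothetical example (not taken from
the patch or its testsuite) of the kind of mixed-type simd clone this
enables; per the new comment in the patch, on AArch64 the narrowest type is
what determines the simdlen:

/* Hypothetical example: return and argument types of different sizes.
   Previously the AArch64 backend warned about mixed size types for
   simd functions; with this patch the narrowest type (float here)
   determines the simdlen.  */
#pragma omp declare simd
double
scale (float x, double factor)
{
  return x * factor;
}

The expectation with this patch is that AArch64 now emits clones for such
declarations instead of the old warning.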


Bootstrapped and regression tested on aarch64-unknown-linux-gnu

gcc/ChangeLog:

* config/aarch64/aarch64.cc (currently_supported_simd_type): 
Remove.
(aarch64_simd_clone_compute_vecsize_and_simdlen): Use NFS type 
to determine simdlen.


gcc/testsuite/ChangeLog:

* lib/target-supports.exp: Add aarch64 targets to vect_simd_clones.
* c-c++-common/gomp/declare-variant-14.c: Add aarch64 checks 
and remove warning check.

* g++.dg/gomp/attrs-10.C: Likewise.
* g++.dg/gomp/declare-simd-1.C: Likewise.
* g++.dg/gomp/declare-simd-3.C: Likewise.
* g++.dg/gomp/declare-simd-4.C: Likewise.
* gcc.dg/gomp/declare-simd-3.c: Likewise.
* gcc.dg/gomp/simd-clones-2.c: Likewise.
* gfortran.dg/gomp/declare-variant-14.f90: Likewise.
* c-c++-common/gomp/pr60823-1.c: Remove warning check.
* c-c++-common/gomp/pr60823-3.c: Likewise.
* g++.dg/gomp/declare-simd-7.C: Likewise.
* g++.dg/gomp/declare-simd-8.C: Likewise.
* g++.dg/gomp/pr88182.C: Likewise.
* gcc.dg/declare-simd.c: Likewise.
* gcc.dg/gomp/declare-simd-1.c: Likewise.
* gcc.dg/gomp/pr87895-1.c: Likewise.
* gfortran.dg/gomp/declare-simd-2.f90: Likewise.
* gfortran.dg/gomp/declare-simd-coarray-lib.f90: Likewise.
* gfortran.dg/gomp/pr79154-1.f90: Likewise.
* gfortran.dg/gomp/pr83977.f90: Likewise.
* gcc.dg/gomp/pr87887-1.c: Add warning test.
* gcc.dg/gomp/pr89246-1.c: Likewise.
* gcc.dg/gomp/pr99542.c: Update warning test.

diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 560e5431636ef46c41d56faa0c4e95be78f64b50..ac6350a44481628a947a0f20e034acf92cde63ec 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -27194,21 +27194,6 @@ supported_simd_type (tree t)
   return false;
 }
 
-/* Return true for types that currently are supported as SIMD return
-   or argument types.  */
-
-static bool
-currently_supported_simd_type (tree t, tree b)
-{
-  if (COMPLEX_FLOAT_TYPE_P (t))
-    return false;
-
-  if (TYPE_SIZE (t) != TYPE_SIZE (b))
-    return false;
-
-  return supported_simd_type (t);
-}
-
 /* Implement TARGET_SIMD_CLONE_COMPUTE_VECSIZE_AND_SIMDLEN.  */
 
 static int
@@ -27217,7 +27202,7 @@ aarch64_simd_clone_compute_vecsize_and_simdlen (struct cgraph_node *node,
tree base_type, int num,
bool explicit_p)
 {
-  tree t, ret_type;
+  tree t, ret_type, nfs_type;
   unsigned int elt_bits, count;
   unsigned HOST_WIDE_INT const_simdlen;
   poly_uint64 vec_bits;
@@ -27240,55 +27225,61 @@ aarch64_simd_clone_compute_vecsize_and_simdlen (struct cgraph_node *node,
 }
 
   ret_type = TREE_TYPE (TREE_TYPE (node->decl));
+  /* According to AArch64's Vector ABI the type that determines the simdlen is
+ the narrowest of types, so we ignore base_type for AArch64.  */
   if (TREE_CODE (ret_type) != VOID_TYPE
-  && !currently_supported_simd_type (ret_type, base_type))
+  && !supported_simd_type (ret_type))
 {
   if (!explicit_p)
;
-  else if (TYPE_SIZE (ret_type) != TYPE_SIZE (base_type))
-   warning_at (DECL_SOURCE_LOCATION (node->decl), 0,
-   "GCC does not currently support mixed size types "
-   "for % functions");
-  else if (supported_simd_type (ret_type))
+  else if (COMPLEX_FLOAT_TYPE_P (ret_type))
warning_at (DECL_SOURCE_LOCATION (node->decl), 0,
"GCC does not currently support return type %qT "
-   "for % functions", ret_type);
+   "for simd", ret_type);
   else
warning_at (DECL_SOURCE_LOCATION (node->decl), 0,
-   "unsupported return type %qT for % functions",
+   "unsupported return type %qT for simd",
ret_type);
   return 0;
 }
 
+  nfs_type = ret_type;
   int i;
   tree type_arg_types = TYPE_ARG_TYPES (TREE_TYPE (node->decl));
   bool decl_arg_p = (node->definition || type_arg_types == NULL_TREE);
-
   for (t = (decl_arg_p ? DECL_ARGUMENTS (node->decl) : type_arg_types), i = 0;
t && t != void_list_node; t = TREE_CHAIN (t), i++)
 {
   tree arg_type = decl_arg_p ? TREE_TYPE (t) : TREE_VALUE (t);
-
   if (clonei->args[i].arg_type != SIMD_CLONE_ARG_TYPE_UNIFORM
- && !currently_supported_simd_type (arg_type, base_type))
+ && !supported_simd_type (arg_type))
{
  if (!explicit_p)
;
- else if (TYPE_SIZE (arg_type) != TYPE_SIZE (base_type))
+ else if (COMPLEX_FLOAT_TYPE_P (ret_type))
warning_at (DECL_SOURCE_LOCATION (node->decl), 0,
-   "GCC does not currently support mixed size types "
- 

Re: GNU Tools Cauldron 2023

2023-07-25 Thread Richard Earnshaw (lists)



It is now just under 2 months until the GNU Tools Cauldron.
Registration is still open, but we would really appreciate it if you
could register as soon as possible so that we have a clear idea of the
numbers.

Richard.

On 05/06/2023 14:59, Richard Earnshaw wrote:
> We are pleased to invite you all to the next GNU Tools Cauldron,
> taking place in Cambridge, UK, on September 22-24, 2023.
>
> As for the previous instances, we have setup a wiki page for
> details:
>
> https://gcc.gnu.org/wiki/cauldron2023
>
> Like last year, we are having to charge for attendance.  We are still
> working out what we will need to charge, but it will be no more than £250.

>
> Attendance will remain free for community volunteers and others who do
> not have a commercial backer and we will be providing a small number of
> travel bursaries for students to attend.
>
> For all details of how to register, and how to submit a proposal for a
> track session, please see the wiki page.
>
> The Cauldron is organized by a group of volunteers. We are keen to add
> some more people so others can stand down. If you'd like to be part of
> that organizing committee, please email the same address.
>
> This announcement is being sent to the main mailing list of the
> following groups: GCC, GDB, binutils, CGEN, DejaGnu, newlib and glibc.
>
> Please feel free to share with other groups as appropriate.
>
> Richard (on behalf of the GNU Tools Cauldron organizing committee).



Nested tuplet ratio notation

2023-07-25 Thread Lib Lists
Hello,
I'm testing different ways to notate the top-staff rhythm in the
example below (a 25:16 tuplet over two bars, or a quintuplet built on
four notes of another quintuplet). I have a couple of problems:

1. I'd like to notate the first staff ratio as in the attached image,
but I have no idea how to add a small '5' on top of the eighth note.
2. As soon as I add manual beams [ ] in the first staff, the tuplet
number and bracket go inside the staff, despite the \override
TupletBracket.outside-staff-priority setting.

Any help is really appreciated!
Lib

\version "2.25.5"

  <<
\new Staff
{
  \relative c'
  \repeat unfold 5 {
\override TupletBracket.outside-staff-priority = #0
\override TupletNumber.text =
#(tuplet-number::non-default-fraction-with-notes
  5 (ly:make-duration 3 0) 4 (ly:make-duration 3 0))
\tuplet 5/4 { \tuplet 5/4 {  c8[ c c c c]  } }
  }
}
\new Staff
{
  \relative c'
  \repeat unfold 4 {
\tuplet 5/4 { c8 c c c c }
  }
}
  >>


[OE-Core][PATCH] scripts/resulttool: add mention about new detected tests

2023-07-21 Thread Alexis Lothoré via lists . openembedded . org
From: Alexis Lothoré 

Some regression reports show a lot of "PASSED->None" transitions. When such
a large number of identical transitions is observed, it could be that tests
are now failing, but it could also be that some tests have been renamed.

To detect such cases, add a log line to the regression report giving the
number of new tests (i.e. tests that are present in the target results but
not in the base results). This new log also gives visibility into newly
added test sets.

Signed-off-by: Alexis Lothoré 
---
This commit is a follow-up to [1], which discusses the issue with the
regression report from the 4.3_M2 build.
An example of this regression report updated with the "newly added
tests count" log can be found here: [2]

[1] https://lore.kernel.org/yocto/e7e05ead7e1740041e7633d71943345460472964.ca...@linuxfoundation.org/
[2] https://pastebin.com/WQdgrpA0
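
As a rough illustration (not part of the patch itself), the new-test counting
boils down to something like the following standalone Python sketch, assuming
base and target results are dicts keyed by test case name as in
scripts/lib/resulttool/regression.py:

# Minimal sketch of the counting logic, under the assumption that
# base_result and target_result are dicts keyed by test case name.
def count_new_tests(base_result, target_result):
    return sum(1 for k in target_result if k not in base_result)

# Example: one test ("t2") present in target but not in base.
assert count_new_tests({"t1": {}}, {"t1": {}, "t2": {}}) == 1
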
---
 scripts/lib/resulttool/regression.py | 16 ++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/scripts/lib/resulttool/regression.py 
b/scripts/lib/resulttool/regression.py
index 1facbcd85e1e..f80a9182a9a9 100644
--- a/scripts/lib/resulttool/regression.py
+++ b/scripts/lib/resulttool/regression.py
@@ -178,6 +178,8 @@ def compare_result(logger, base_name, target_name, 
base_result, target_result):
 base_result = base_result.get('result')
 target_result = target_result.get('result')
 result = {}
+new_tests = 0
+
 if base_result and target_result:
 for k in base_result:
 base_testcase = base_result[k]
@@ -189,6 +191,13 @@ def compare_result(logger, base_name, target_name, 
base_result, target_result):
 result[k] = {'base': base_status, 'target': target_status}
 else:
 logger.error('Failed to retrieved base test case status: %s' % 
k)
+
+# Also count new tests that were not present in base results: these
+# could be newly added tests, but could also highlight some test
+# renames or fixed faulty ptests
+for k in target_result:
+if k not in base_result:
+new_tests += 1
 if result:
 new_pass_count = sum(test['target'] is not None and 
test['target'].startswith("PASS") for test in result.values())
 # Print a regression report only if at least one test has a regression 
status (FAIL, SKIPPED, absent...)
@@ -200,10 +209,13 @@ def compare_result(logger, base_name, target_name, 
base_result, target_result):
 if new_pass_count > 0:
 resultstring += f'Additionally, {new_pass_count} 
previously failing test(s) is/are now passing\n'
 else:
-resultstring = "Improvement: %s\n %s\n 
(+%d test(s) passing)" % (base_name, target_name, new_pass_count)
+resultstring = "Improvement: %s\n %s\n 
(+%d test(s) passing)\n" % (base_name, target_name, new_pass_count)
 result = None
 else:
-resultstring = "Match:   %s\n %s" % (base_name, 
target_name)
+resultstring = "Match:   %s\n %s\n" % (base_name, 
target_name)
+
+if new_tests > 0:
+resultstring += f'Additionally, {new_tests} new test(s) is/are 
present\n'
 return result, resultstring
 
 def get_results(logger, source):
-- 
2.41.0





Re: qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.

2023-07-21 Thread Alexis de BRUYN [Mailing Lists]

On 20/07/2023 11:58, Stuart Henderson wrote:

Did pkg_add -u complete successfully or did you have any errors or warnings?

No, no errors/warnings.

I did a sysclean -a afterwards, and deleted some libs.

You might get more information from trying to run a Qt-based program 
with LD_DEBUG set in the environment, but there will be a lot of output, 
probably needs running under script rather than relying on scrollback.


$ LD_DEBUG=1 flameshot
[...]
examining: '/usr/local/lib/qt5/libQt5XcbQpa.so.1.2'
loading: libz.so.5.0 required by /usr/local/lib/qt5/libQt5XcbQpa.so.1.2
dlopen: failed to open libz.so.5.0

$ ll /usr/local/lib/qt5/libQt5XcbQpa.so*
-rw-r--r--  1 root  bin  3043000 May 21 19:56 
/usr/local/lib/qt5/libQt5XcbQpa.so.0.1
-rw-r--r--  1 root  bin  3036224 Jul 17 17:30 
/usr/local/lib/qt5/libQt5XcbQpa.so.1.0
-rw-r--r--  1 root  bin  1709088 Mar  7  2020 
/usr/local/lib/qt5/libQt5XcbQpa.so.1.2


$ doas rm -f /usr/local/lib/qt5/libQt5XcbQpa.so.1.2
doas (ale...@ws-alexis.lan.mrs.de-bruyn.fr) password:

All Qt-based programs are running fine now.

Thanks Stuart and Rafael for your help.




--
   Sent from a phone, apologies for poor formatting.


On 20 July 2023 04:00:25 "Alexis de BRUYN [Mailing Lists]" 
 wrote:



On 19/07/2023 22:20, Rafael Sadowski wrote:
On Wed Jul 19, 2023 at 06:17:08PM +0200, Alexis de BRUYN [Mailing 
Lists] wrote:

Hi Everybody,

Following -current, I have just sysupgraded / pkg_add -u (after ~40 
days),

and I cannot launch qt applications anymore :

$ nextcloud
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" 
even though

it was found.
This application failed to start because no Qt platform plugin could be
initialized. Reinstalling the application may fix this problem.

Available platform plugins are: eglfs, minimal, minimalegl, 
offscreen, vnc,
wayland-egl, wayland, wayland-xcomposite-egl, 
wayland-xcomposite-glx, xcb.



Could you run qtdiag-qt5 and send the output?

$ QT_DEBUG_PLUGINS=1 qtdiag-qt5
QFactoryLoader::QFactoryLoader() checking directory path
"/usr/local/lib/qt5/plugins/platforms" ...
QFactoryLoader::QFactoryLoader() looking at
"/usr/local/lib/qt5/plugins/platforms/libqeglfs.so"
Found metadata in lib /usr/local/lib/qt5/plugins/platforms/libqeglfs.so,
metadata=
{
     "IID":
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",
     "MetaData": {
         "Keys": [
             "eglfs"
         ]
     },
     "archreq": 0,
     "className": "QEglFSIntegrationPlugin",
     "debug": false,
     "version": 331520
}


Got keys from plugin meta data ("eglfs")
QFactoryLoader::QFactoryLoader() looking at
"/usr/local/lib/qt5/plugins/platforms/libqminimal.so"
Found metadata in lib
/usr/local/lib/qt5/plugins/platforms/libqminimal.so, metadata=
{
     "IID":
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",
     "MetaData": {
         "Keys": [
             "minimal"
         ]
     },
     "archreq": 0,
     "className": "QMinimalIntegrationPlugin",
     "debug": false,
     "version": 331520
}


Got keys from plugin meta data ("minimal")
QFactoryLoader::QFactoryLoader() looking at
"/usr/local/lib/qt5/plugins/platforms/libqminimalegl.so"
Found metadata in lib
/usr/local/lib/qt5/plugins/platforms/libqminimalegl.so, metadata=
{
     "IID":
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",
     "MetaData": {
         "Keys": [
             "minimalegl"
         ]
     },
     "archreq": 0,
     "className": "QMinimalEglIntegrationPlugin",
     "debug": false,
     "version": 331520
}


Got keys from plugin meta data ("minimalegl")
QFactoryLoader::QFactoryLoader() looking at
"/usr/local/lib/qt5/plugins/platforms/libqoffscreen.so"
Found metadata in lib
/usr/local/lib/qt5/plugins/platforms/libqoffscreen.so, metadata=
{
     "IID":
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",
     "MetaData": {
         "Keys": [
             "offscreen"
         ]
     },
     "archreq": 0,
     "className": "QOffscreenIntegrationPlugin",
     "debug": false,
     "version": 331520
}


Got keys from plugin meta data ("offscreen")
QFactoryLoader::QFactoryLoader() looking at
"/usr/local/lib/qt5/plugins/platforms/libqvnc.so"
Found metadata in lib /usr/local/lib/qt5/plugins/platforms/libqvnc.so,
metadata=
{
     "IID":
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",
     "MetaData": {
         "Keys": [
      

Re: qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.

2023-07-19 Thread Alexis de BRUYN [Mailing Lists]

On 19/07/2023 22:20, Rafael Sadowski wrote:

On Wed Jul 19, 2023 at 06:17:08PM +0200, Alexis de BRUYN [Mailing Lists] wrote:

Hi Everybody,

Following -current, I have just sysupgraded / pkg_add -u (after ~40 days),
and I cannot launch qt applications anymore :

$ nextcloud
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though
it was found.
This application failed to start because no Qt platform plugin could be
initialized. Reinstalling the application may fix this problem.

Available platform plugins are: eglfs, minimal, minimalegl, offscreen, vnc,
wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, xcb.



Could you run qtdiag-qt5 and send the output?

$ QT_DEBUG_PLUGINS=1 qtdiag-qt5
QFactoryLoader::QFactoryLoader() checking directory path 
"/usr/local/lib/qt5/plugins/platforms" ...
QFactoryLoader::QFactoryLoader() looking at 
"/usr/local/lib/qt5/plugins/platforms/libqeglfs.so"
Found metadata in lib /usr/local/lib/qt5/plugins/platforms/libqeglfs.so, 
metadata=

{
"IID": 
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",

"MetaData": {
"Keys": [
"eglfs"
]
},
"archreq": 0,
"className": "QEglFSIntegrationPlugin",
"debug": false,
"version": 331520
}


Got keys from plugin meta data ("eglfs")
QFactoryLoader::QFactoryLoader() looking at 
"/usr/local/lib/qt5/plugins/platforms/libqminimal.so"
Found metadata in lib 
/usr/local/lib/qt5/plugins/platforms/libqminimal.so, metadata=

{
"IID": 
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",

"MetaData": {
"Keys": [
"minimal"
]
},
"archreq": 0,
"className": "QMinimalIntegrationPlugin",
"debug": false,
"version": 331520
}


Got keys from plugin meta data ("minimal")
QFactoryLoader::QFactoryLoader() looking at 
"/usr/local/lib/qt5/plugins/platforms/libqminimalegl.so"
Found metadata in lib 
/usr/local/lib/qt5/plugins/platforms/libqminimalegl.so, metadata=

{
"IID": 
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",

"MetaData": {
"Keys": [
"minimalegl"
]
},
"archreq": 0,
"className": "QMinimalEglIntegrationPlugin",
"debug": false,
"version": 331520
}


Got keys from plugin meta data ("minimalegl")
QFactoryLoader::QFactoryLoader() looking at 
"/usr/local/lib/qt5/plugins/platforms/libqoffscreen.so"
Found metadata in lib 
/usr/local/lib/qt5/plugins/platforms/libqoffscreen.so, metadata=

{
"IID": 
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",

"MetaData": {
"Keys": [
"offscreen"
]
},
"archreq": 0,
"className": "QOffscreenIntegrationPlugin",
"debug": false,
"version": 331520
}


Got keys from plugin meta data ("offscreen")
QFactoryLoader::QFactoryLoader() looking at 
"/usr/local/lib/qt5/plugins/platforms/libqvnc.so"
Found metadata in lib /usr/local/lib/qt5/plugins/platforms/libqvnc.so, 
metadata=

{
"IID": 
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",

"MetaData": {
"Keys": [
"vnc"
]
},
"archreq": 0,
"className": "QVncIntegrationPlugin",
"debug": false,
"version": 331520
}


Got keys from plugin meta data ("vnc")
QFactoryLoader::QFactoryLoader() looking at 
"/usr/local/lib/qt5/plugins/platforms/libqwayland-egl.so"
Found metadata in lib 
/usr/local/lib/qt5/plugins/platforms/libqwayland-egl.so, metadata=

{
"IID": 
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",

"MetaData": {
"Keys": [
"wayland-egl"
]
},
"archreq": 0,
"className": "QWaylandEglPlatformIntegrationPlugin",
"debug": false,
"version": 331520
}


Got keys from plugin meta data ("wayland-egl")
QFactoryLoader::QFactoryLoader() looking at 
"/usr/local/lib/qt5/plugins/platforms/libqwayland-generic.so"
Found metadata in lib 
/usr/local/lib/qt5/plugins/platforms/libqwayland-generic.so, metadata=

{
"IID": 
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",

"MetaData": {
"Keys": [
"wayland"
]

Re: [AFMUG] wispa

2023-07-19 Thread Jeff Broadwick - Lists
It's not legit. WISPA has made efforts in the past to stamp this out, but
it's pretty much impossible.

Jeff Broadwick
CTIconnect
312-205-2519 Office
574-220-7826 Cell
jbroadw...@cticonnect.com

On Jul 19, 2023, at 2:29 PM, David Hannum wrote:

> I can't imagine WISPA sanctioning someone selling this list, even if it
> were legit, when WISPA usually publishes it for free in the conference app.

On Tue, Jul 18, 2023 at 12:43 PM Josh Luthman wrote:

> This is a scam.  Just because they say they have something doesn't mean
> it's true.

On Tue, Jul 18, 2023 at 11:42 AM Trey Scarborough wrote:

> What I don't get is who this is: the event coordinator company, or
> someone going out and buying spots and reselling them. How do they
> profit off of this spam, or is it a scam?
>
> I do know that unfortunately the convention industry is just as bad as
> credit card companies with selling your information. Every vendor up
> and down the chain (Cvent, GES, Freeman, ChirpE, etc.) collects and
> sells your data and/or the organization's as well. You agree to it when
> you sign up as the organization hosting the event and as the individual
> attendee. It makes me wish you could generate a virtual identity for
> attending trade shows, just like you do with credit cards for suspect
> online orders.
>
> On 7/14/2023 10:06 AM, Chuck McCown via AF wrote:
>
>> I wonder if wispa sanctions this spam?
>>
>> From: Alicia Paul
>> Sent: Friday, July 14, 2023 8:59 AM
>> To: sa...@go-mtc.com
>> Subject: Go-Mtc
>>
>> Hi,
>>
>> Just wanted to do a quick follow-up on my below email.
>>
>> Please review my below email and let me know your interest.
>>
>> May I send quote/pricing details for decision-making?
>>
>> Looking forward to hearing from you.
>> Alicia
>>
>> From: Alicia Paul
>> Sent: Thursday, July 13, 2023, 10:53 AM
>> To: sa...@go-mtc.com
>> Subject: Go-Mtc
>>
>> Hi,
>>
>> We are happy to let you know that the pre-registered attendance list
>> for the "WISPAPALOOZA 2023" is now available to buy at the best
>> possible price.
>>
>> Attendees: WISP Industry Professionals | Industry Experts |
>> Decision-Makers in the ISP Industry | Service Providers | Leading
>> Suppliers in the Fixed Wireless Internet Industry and many more…
>>
>> Please let me know your views, so that I can share the counts and
>> pricing details.
>>
>> I look forward to your response.
>>
>> Regards,
>> Alicia Paul | Event Database Coordinator.


qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.

2023-07-19 Thread Alexis de BRUYN [Mailing Lists]

Hi Everybody,

Following -current, I have just sysupgraded / pkg_add -u (after ~40 
days), and I cannot launch qt applications anymore :


$ nextcloud
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even 
though it was found.
This application failed to start because no Qt platform plugin could be 
initialized. Reinstalling the application may fix this problem.


Available platform plugins are: eglfs, minimal, minimalegl, offscreen, 
vnc, wayland-egl, wayland, wayland-xcomposite-egl, 
wayland-xcomposite-glx, xcb.


Abort trap

$ doas pkg_add -r qtbase-5.15.10
quirks-6.134 signed on 2023-07-18T21:18:51Z

$ QT_PLUGIN_PATH=/usr/local/lib/qt5/plugins/ nextcloud
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even 
though it was found.
This application failed to start because no Qt platform plugin could be 
initialized. Reinstalling the application may fix this problem.


Available platform plugins are: eglfs, minimal, minimalegl, offscreen, 
vnc, wayland-egl, wayland, wayland-xcomposite-egl, 
wayland-xcomposite-glx, xcb.


Abort trap

$ QT_DEBUG_PLUGINS=1 nextcloud
QFactoryLoader::QFactoryLoader() checking directory path 
"/usr/local/lib/qt5/plugins/platforms" ...
QFactoryLoader::QFactoryLoader() looking at 
"/usr/local/lib/qt5/plugins/platforms/libqeglfs.so"
Found metadata in lib /usr/local/lib/qt5/plugins/platforms/libqeglfs.so, 
metadata=

{
"IID": 
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",

"MetaData": {
"Keys": [
"eglfs"
]
},
"archreq": 0,
"className": "QEglFSIntegrationPlugin",
"debug": false,
"version": 331520
}


Got keys from plugin meta data ("eglfs")
QFactoryLoader::QFactoryLoader() looking at 
"/usr/local/lib/qt5/plugins/platforms/libqminimal.so"
Found metadata in lib 
/usr/local/lib/qt5/plugins/platforms/libqminimal.so, metadata=

{
"IID": 
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",

"MetaData": {
"Keys": [
"minimal"
]
},
"archreq": 0,
"className": "QMinimalIntegrationPlugin",
"debug": false,
"version": 331520
}


Got keys from plugin meta data ("minimal")
QFactoryLoader::QFactoryLoader() looking at 
"/usr/local/lib/qt5/plugins/platforms/libqminimalegl.so"
Found metadata in lib 
/usr/local/lib/qt5/plugins/platforms/libqminimalegl.so, metadata=

{
"IID": 
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",

"MetaData": {
"Keys": [
"minimalegl"
]
},
"archreq": 0,
"className": "QMinimalEglIntegrationPlugin",
"debug": false,
"version": 331520
}


Got keys from plugin meta data ("minimalegl")
QFactoryLoader::QFactoryLoader() looking at 
"/usr/local/lib/qt5/plugins/platforms/libqoffscreen.so"
Found metadata in lib 
/usr/local/lib/qt5/plugins/platforms/libqoffscreen.so, metadata=

{
"IID": 
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",

"MetaData": {
"Keys": [
"offscreen"
]
},
"archreq": 0,
"className": "QOffscreenIntegrationPlugin",
"debug": false,
"version": 331520
}


Got keys from plugin meta data ("offscreen")
QFactoryLoader::QFactoryLoader() looking at 
"/usr/local/lib/qt5/plugins/platforms/libqvnc.so"
Found metadata in lib /usr/local/lib/qt5/plugins/platforms/libqvnc.so, 
metadata=

{
"IID": 
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",

"MetaData": {
"Keys": [
"vnc"
]
},
"archreq": 0,
"className": "QVncIntegrationPlugin",
"debug": false,
"version": 331520
}


Got keys from plugin meta data ("vnc")
QFactoryLoader::QFactoryLoader() looking at 
"/usr/local/lib/qt5/plugins/platforms/libqwayland-egl.so"
Found metadata in lib 
/usr/local/lib/qt5/plugins/platforms/libqwayland-egl.so, metadata=

{
"IID": 
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",

"MetaData": {
"Keys": [
"wayland-egl"
]
},
"archreq": 0,
"className": "QWaylandEglPlatformIntegrationPlugin",
"debug": false,
"version": 331520
}


Got keys from plugin meta data ("wayland-egl")
QFactoryLoader::QFactoryLoader() looking at 
"/usr/local/lib/qt5/plugins/platforms/libqwayland-generic.so"
Found metadata in lib 
/usr/local/lib/qt5/plugins/platforms/libqwayland-generic.so, metadata=

{
"IID": 
"org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",

"MetaData": {
"Keys": [
"wayland"
]
},
"archreq": 0,
"className": "QWaylandIntegrationPlugin",
"debug": false,
"version": 331520
}


Got keys from plugin meta data ("wayland")
QFactoryLoader::QFactoryLoader() looking at 
"/usr/local/lib/qt5/plugins/platforms/libqwayland-xcomposite-egl.so"
Found metadata in lib 
/usr/local/lib/qt5/plugins/platforms/libqwayland-xcomposite-egl.so, 
metadata=

{
"IID": 

Re: Arch Linux installing.

2023-07-18 Thread Genes Lists

On 7/18/23 14:52, Ralf Mardorf wrote:
...


if the cat paws at the keyboard, it doesn't need root privileges, it can
execute "rm /path/unified_kernel_image" with the cat's user privileges?


I think that non-root can only do that if mounted uid=.
So, as far as a cat-safe filesystem goes, isn't it no different for fat32,
ext4 or btrfs?


E.g. On my system here I get cat denied :)

as root:
# findmnt -t vfat /efi0
TARGET SOURCE  FSTYPE OPTIONS
/efi0  /dev/sda1 vfat ...

# ls -l /efi0/foo
0 -rwxr-xr-x 1 root root 0 Jul 18 15:06 /efi0/foo*

As user kitty:
$ rm -iv /efi0/foo
rm: remove write-protected regular empty file '/efi0/foo'? y
rm: cannot remove '/efi0/foo': Permission denied

$ ls -l /efi0/foo
0 -rwxr-xr-x 1 root root 0 Jul 18 15:06 /efi0/foo*




...I would also like to avoid fat as much as possible ... out of principle.


understood.





Re: Arch Linux installing.

2023-07-18 Thread Genes Lists

On 7/18/23 09:45, Ralf Mardorf wrote:


I don't install my kernels on a fat partition without UNIX privileges.
IOW if it should be required that the efi partition is a fat partition,
I wonder why this is recommended.

I assume part of your comment is about the security aspect. This of course 
can be addressed by UKI and secure boot, for example.


I'll let you peruse the references for more info. But one thing that 
comes to mind is - it's simpler.


Simpler in the sense that the UEFI boot process needs to be able to read 
the XBOOTLDR partition to load the kernel - this in turn only works if 
there are available EFI file system drivers which in turn must be 
installed (see package efifs).


Not a big deal but these drivers need to keep pace appropriately with 
the actual "kernel" drivers; at least to some degree. These efi drivers 
are separate from the in-tree kernel drivers of course.


Indeed efi filesys drivers are available for common filesystems 
including ext4 and btrfs.


By contrast, there are no EFI drivers available for md raid, so that 
cannot be used for /boot mounted as an XBOOTLDR partition.


gene







Re: Arch Linux installing.

2023-07-18 Thread Genes Lists

On 7/18/23 09:23, Genes Lists wrote:

While that is/was pretty common, I believe the current recommendation is 
to  mount:




To be more precise, the recommendation is to mount the efi as /boot, and 
as Sergey suggested, if your EFI partition is too small, then use 
separate efi and boot partitions as above.


gene


Re: Arch Linux installing.

2023-07-18 Thread Genes Lists

On 7/18/23 06:41, Sergey Filatov wrote:
  mount EFI partition as /boot/efi. 


While that is/was pretty common, I believe the current recommendation is 
to  mount:


  esp  onto /efi   gpt type : EF00
  boot onto /boot  gpt type : EA00  (type XBOOTLDR)

rather than having the efi mounted underneath another mount. For some 
additional info see [1] [2].


gene

.. [1] 
https://uapi-group.org/specifications/specs/boot_loader_specification/

.. [2] https://0pointer.net/blog/


Re: [tor-relays] Wrong "first seen" flag for bridges at metrics.torproject.org

2023-07-17 Thread lists
On Montag, 17. Juli 2023 20:12:34 CEST telekobold wrote:

> I have an issue regarding the "first seen" flag at
> metrics.torproject.org: It is definitely wrong for my two bridges - both
> dates are much too close in the past.

> Has anyone observed similar behavior for its relay? (I found it
> meaningful to first ask here before creating an issue.

Yeah, looks like a bug. My approx. 1 year old bridges are all:
First Seen 2023-06-20

https://metrics.torproject.org/rs.html#search/ForPrivacyNETbr

I don't care about the date, the only important thing is that they have users 
and make traffic ;-)

-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: [PATCH] Include insn-opinit.h in PLUGIN_H [PR110610]

2023-07-17 Thread Andre Vieira (lists) via Gcc-patches



On 11/07/2023 23:28, Jeff Law wrote:



On 7/11/23 04:37, Andre Vieira (lists) via Gcc-patches wrote:

Hi,

This patch fixes PR110610 by including OPTABS_H in the INTERNAL_FN_H 
list, as insn-opinit.h is now required by internal-fn.h. This will 
lead to insn-opinit.h, among the other OPTABS_H header files, being 
installed in the plugin directory.


Bootstrapped aarch64-unknown-linux-gnu.

@Jakub: could you check to see if it also addresses PR 110284?


gcc/ChangeLog:

 PR 110610
 * Makefile.in (INTERNAL_FN_H): Add OPTABS_H.
Why use OPTABS_H here?  Isn't the new dependency just on insn-opinit.h 
and insn-codes.h and neither of those #include other headers do they?





Yeah, there was no particular reason other than I just felt the Makefile 
structure sort of lent itself that way. I checked genopinit.cc and it 
seems insn-opinit.h doesn't include any other header files, only the 
sources do, so I've changed the patch to only add insn-opinit.h to 
INTERNAL_FN_H.


---

This patch fixes PR110610 by including insn-opinit.h in the 
INTERNAL_FN_H list, as insn-opinit.h is now required by internal-fn.h. 
This will lead to insn-opinit.h, among the other OPTABS_H header files, 
being installed in the plugin directory.


Bootstrapped aarch64-unknown-linux-gnu.

gcc/ChangeLog:
PR 110610
* Makefile.in (INTERNAL_FN_H): Add insn-opinit.h.

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index c478ec852013eae65b9f3ec0a443e023c7d8b452..683774ad446d545362644d2dbdc37723eea55bc3 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -976,7 +976,7 @@ READ_MD_H = $(OBSTACK_H) $(HASHTAB_H) read-md.h
 BUILTINS_DEF = builtins.def sync-builtins.def omp-builtins.def \
gtm-builtins.def sanitizer.def
 INTERNAL_FN_DEF = internal-fn.def
-INTERNAL_FN_H = internal-fn.h $(INTERNAL_FN_DEF)
+INTERNAL_FN_H = internal-fn.h $(INTERNAL_FN_DEF) insn-opinit.h
 TREE_CORE_H = tree-core.h $(CORETYPES_H) all-tree.def tree.def \
c-family/c-common.def $(lang_tree_files) \
$(BUILTINS_DEF) $(INPUT_H) statistics.h \


Issue with RCS + SUID

2023-07-14 Thread Lists

Hello,

We are trying to set up RCS with SUID to prevent users other than the 
primary user from deleting revisions. We followed the directions in the 
man page for ci, but we are getting permission issues when we try to 
check out as one of the other users.


This is our setup:

 * Master user: sjc001
 * Group permitted to CI/CO: fte
 * Test alternate user in the fte group: sha001

This is the procedure we followed from the man page:

 * mkdir /usr/local/rcs-sjc001
 * cp /usr/bin/ci /usr/local/rcs-sjc001/
 * cp /usr/bin/co /usr/local/rcs-sjc001/
 * cp /usr/bin/rcsclean /usr/local/rcs-sjc001/
 * chmod go-w,u+s /usr/local/rcs-sjc001/*
 * PATH=/usr/local/rcs-sjc001:$PATH; export $PATH
 * mkdir /projects/rcs-test/test
 * chmod go-w /projects/rcs-test/test

We then, with sjc001, create a test file and check it in. Once that is 
done, we try to check out with sha001 and get the following error:


test]$ co -l xorgxrdp.10.log
xorgxrdp.10.log,v  -->  xorgxrdp.10.log
revision 1.1 (locked)
co: xorgxrdp.10.log,v: Operation not permitted
co: saved in ,xorgxrdp.10.log,

Why, when we follow the setuid procedure from the man page, do we get 
permission denied when we try to check in/check out, even though we are 
using setuid?


Re: Fullscreen mode

2023-07-12 Thread Mailing Lists
Without knowing more, I think you should go fullscreen with the 
browser/application before connecting to the remote system.

Additionally, the target resolution set in the connection has an impact, and 
so does the configured/available resize method of the target system.

It may also vary depending on whether you initially connect or re-connect to 
an existing session.

I hope I could help.

Regards

Peter

via Smartphone

> Am 12.07.2023 um 16:13 schrieb Fatima Ezzahra Jaber 
> :
> 
> 
> Hello,
> 
> I have created my own guacamole application and I can see my remote desktop 
> on my web page, but I need it to go full screen. I tried using the 
> Element.requestFullscreen() method, but it only makes my display go 
> fullscreen; the guacamole display's size doesn't change. Does guacamole 
> handle fullscreen mode, and is there an instruction or a way to do that? 
> 
> 
> -- 
> JABER Fatima Ezzahra
> 


Re: Unable to Port Forward to a Virtual Machine

2023-07-11 Thread Lists
On Tuesday, July 11, 2023 6:36:22 PM PDT David King wrote:
> On 7/11/23 19:15, Lists wrote:
> 
> >
> >
> > I have a Fedora (35) workstation with some VMs running on a virtual 
> > LAN and I want to open service(s) to the local Physical LAN. Goal is 
> > to make an HTTP service running on 192.168.122.11:80 visible to 
> > 192.168.1.* as 192.168.1.62:80
> >
> >
> 
> The problem isn't your firewall configuration, instead it's that a VM 
> with a NIC configured in NAT mode has no network connection that would 
> allow traffic to flow from the 192.168.1.* network to the 192.168.122.* 
> network.  When I need to allow a VM to expose services to an external 
> network like your LAN, I set it up with a bridged network 
> configuration.  This configuration results in your VM being given its 
> own address on the 192.168.1.* network and any ports it exposes to be 
> visible to the other devices on that network.  No port forwarding is 
> necessary.  Firewall software running in the VM is used to control 
> access to these ports, the host's firewall is not a factor.  This Fedora 
> Docs article provides more details and describes how to set this up: 
> https://docs.fedoraproject.org/en-US/fedora-server/administration/virtual-routing-bridge/
 
Sorry if I didn't make myself clear: the 192.168.1.62 address is the virtual 
host. What I'm trying to do is get connections from 192.168.1.* to 
192.168.1.62 to be forwarded to the 192.168.122.11 VLAN address on the host. 

Because this is a dev laptop/workstation, the bridged process is a pain 
because the Internet device changes a lot; sometimes it's Wifi, sometimes LAN 
cable, etc. 





Unable to Port Forward to a Virtual Machine

2023-07-11 Thread Lists
I have a Fedora (35) workstation with some VMs running on a virtual LAN and I 
want to open 
service(s) to the local Physical LAN. Goal is to make an HTTP service running 
on 192.168.122.11:80 
visible to 192.168.1.* as 192.168.1.62:80

What am I missing!?!? 

Office Network
192.168.1.*
192.168.1.62 Fedora Workstation IP 

VirtD network
192.168.122.* 
192.168.122.11 Virtual Machine IP 


I have a script file as
firewall-cmd --add-service=http

firewall-cmd \
  --add-forward-port=port=80:proto=tcp:toport=80:toaddr=192.168.122.11 
firewall-cmd --add-masquerade 
firewall-cmd --add-forward 
firewall-cmd --add-port=80/tcp 

and sysctl -p reports: 
net.ipv4.ip_forward = 1
But when I attempt to get the service 
wget http://192.168.1.62

Connecting to 192.168.1.62:80... failed: Connection refused.

Although I can get the service directly
wget http://192.168.122.11
2023-07-11 15:33:04 (86.1 MB/s) - ‘index.html’ saved [612/612]

# This is the default target 
[root@tesla setup]# firewall-cmd --list-all
FedoraWorkstation (active) 
 target: default 
 icmp-block-inversion: no 
 interfaces: wlp6s0 
 sources:  
 services: dhcpv6-client http https mdns samba samba-client ssh 
 ports: 1025-65535/udp 1025-65535/tcp 80/tcp 443/tcp 
 protocols:  
 forward: yes 
 masquerade: yes 
 forward-ports:  
   port=80:proto=tcp:toport=80:toaddr=192.168.122.11 
   port=443:proto=tcp:toport=443:toaddr=192.168.122.11 
 source-ports:  
 icmp-blocks:  
 rich rules: 

# And I'm pretty sure this is related - I've tried opening up everything I can 
think of: 
[root@tesla setup]# firewall-cmd --list-all  --zone=libvirt
libvirt (active) 
 target: ACCEPT 
 icmp-block-inversion: no 
 interfaces: virbr0 
 sources:  
 services: dhcp dhcpv6 dns ssh tftp 
 ports: 1-65534/tcp 
 protocols: icmp ipv6-icmp 
 forward: yes 
 masquerade: yes 
 forward-ports:  
 source-ports:  
 icmp-blocks:  
 rich rules: 







Re: [PATCH] aarch64: Fix warnings during libgcc build

2023-07-11 Thread Richard Earnshaw (lists) via Gcc-patches

On 11/07/2023 15:54, Richard Earnshaw (lists) via Gcc-patches wrote:

On 11/07/2023 10:37, Florian Weimer via Gcc-patches wrote:

libgcc/

* config/aarch64/aarch64-unwind.h (aarch64_cie_signed_with_b_key):
Add missing const qualifier.  Cast from const unsigned char *
to const char *.  Use __builtin_strchr to avoid an implicit
function declaration.
* config/aarch64/linux-unwind.h (aarch64_fallback_frame_state):
Add missing cast.

---
diff --git a/libgcc/config/aarch64/linux-unwind.h 
b/libgcc/config/aarch64/linux-unwind.h

index 00eba866049..93da7a9537d 100644
--- a/libgcc/config/aarch64/linux-unwind.h
+++ b/libgcc/config/aarch64/linux-unwind.h
@@ -77,7 +77,7 @@ aarch64_fallback_frame_state (struct _Unwind_Context 
*context,

  }
    rt_ = context->cfa;
-  sc = &rt_->uc.uc_mcontext;
+  sc = (struct sigcontext *) &rt_->uc.uc_mcontext;
  /* This define duplicates the definition in aarch64.md */
  #define SP_REGNUM 31




This looks somewhat dubious.  I'm not particularly familiar with the 
kernel headers, but a quick look suggests an mcontext_t is nothing like 
a sigcontext_t.  So isn't the cast just papering over some more 
fundamental problem?


R.


Sorry, I was looking at the wrong set of headers.  It looks like these 
have to match. But in that case, I think we should have a comment about 
that here to explain the suspicious cast.


R.


Re: [PATCH] aarch64: Fix warnings during libgcc build

2023-07-11 Thread Richard Earnshaw (lists) via Gcc-patches

On 11/07/2023 10:37, Florian Weimer via Gcc-patches wrote:

libgcc/

* config/aarch64/aarch64-unwind.h (aarch64_cie_signed_with_b_key):
Add missing const qualifier.  Cast from const unsigned char *
to const char *.  Use __builtin_strchr to avoid an implicit
function declaration.
* config/aarch64/linux-unwind.h (aarch64_fallback_frame_state):
Add missing cast.

---
diff --git a/libgcc/config/aarch64/linux-unwind.h 
b/libgcc/config/aarch64/linux-unwind.h
index 00eba866049..93da7a9537d 100644
--- a/libgcc/config/aarch64/linux-unwind.h
+++ b/libgcc/config/aarch64/linux-unwind.h
@@ -77,7 +77,7 @@ aarch64_fallback_frame_state (struct _Unwind_Context *context,
  }
  
rt_ = context->cfa;

-  sc = &rt_->uc.uc_mcontext;
+  sc = (struct sigcontext *) &rt_->uc.uc_mcontext;
  
  /* This define duplicates the definition in aarch64.md */

  #define SP_REGNUM 31




This looks somewhat dubious.  I'm not particularly familiar with the 
kernel headers, but a quick look suggests an mcontext_t is nothing like 
a sigcontext_t.  So isn't the cast just papering over some more 
fundamental problem?


R.


[PATCH] Include insn-opinit.h in PLUGIN_H [PR110610]

2023-07-11 Thread Andre Vieira (lists) via Gcc-patches

Hi,

This patch fixes PR110610 by including OPTABS_H in the INTERNAL_FN_H 
list, as insn-opinit.h is now required by internal-fn.h. This will lead 
to insn-opinit.h, among the other OPTABS_H header files, being installed 
in the plugin directory.


Bootstrapped aarch64-unknown-linux-gnu.

@Jakub: could you check to see if it also addresses PR 110284?


gcc/ChangeLog:

PR 110610
* Makefile.in (INTERNAL_FN_H): Add OPTABS_H.

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index c478ec852013eae65b9f3ec0a443e023c7d8b452..d3ff210ee04414f4e238c087400dd21e1cb0fc18 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -976,7 +976,7 @@ READ_MD_H = $(OBSTACK_H) $(HASHTAB_H) read-md.h
 BUILTINS_DEF = builtins.def sync-builtins.def omp-builtins.def \
gtm-builtins.def sanitizer.def
 INTERNAL_FN_DEF = internal-fn.def
-INTERNAL_FN_H = internal-fn.h $(INTERNAL_FN_DEF)
+INTERNAL_FN_H = internal-fn.h $(INTERNAL_FN_DEF) $(OPTABS_H)
 TREE_CORE_H = tree-core.h $(CORETYPES_H) all-tree.def tree.def \
c-family/c-common.def $(lang_tree_files) \
$(BUILTINS_DEF) $(INPUT_H) statistics.h \


daily insecurity output (emails) end with: mtree special: exit code 2

2023-07-07 Thread Why 42? The lists account.


Hi All,

FYI, I noticed this in the last couple of daily insecurity output emails:

> From: "Charlie Root @ mjoelnir_aa1667" 
> Date: Fri,  7 Jul 2023 01:32:09 +0200 (CEST)
> 
> Running security(8):
> 
> Checking special files and directories.
> Output format is:
> filename:
> criteria (shouldbe, reallyis)
> etc/pf.conf:
> permissions (0600, 0640)
> mtree special: exit code 2

This seems to be since I updated to a snapshot:
mjoelnir:~ 7.07 14:15:44 % uname -a
OpenBSD mjoelnir.fritz.box 7.3 GENERIC.MP#1268 amd64



Re: Question regarding pf rules: block in on em0: ...

2023-07-07 Thread Why 42? The lists account.


I have no idea how I could make my question any clearer:
> My question is not about how to disable pf, but rather why the packets
> are seen as "in" when coming from my own address, and why they are
> blocked i.e. ...

On Thu, Jul 06, 2023 at 11:09:27AM -0600, Zack Newman wrote:
> For added clarity, this tcpdump you show is with pf disabled and all
> its rules flushed. The tcpdump you showed in the initial e-mail
> clearly was with active pf rules.
Dude, it is _literally_ the same trace output.

If you feel the need to try to help people, maybe calm down a bit and
actually read the question.

I'm out.

Robb.



Re: Question regarding pf rules: block in on em0: ...

2023-07-06 Thread Why 42? The lists account.


On Tue, Jul 04, 2023 at 10:42:39AM -0600, Zack Newman wrote:
> ...
> I am guessing you didn't flush the rules after disabling pf since
> clearly pf rules are still being used. Run pfctl -F all after disabling
> pf. Run pfctl -s all to verify there are no active rules.

Hi,

I see that I was not clear enough.

My question is not about how to disable pf, but rather why the packets
are seen as "in" when coming from my own address, and why they are
blocked i.e.

I noticed these block messages being logged when I click "discover/refresh" in 
simple-scan:

Jul 04 11:23:44.601042 rule 2/(match) block in on em0: 192.168.178.11.8612 > 
192.168.178.255.8612: udp 16
Jul 04 11:23:44.601051 rule 2/(match) block in on em0: 192.168.178.11.8612 > 
192.168.178.255.8610: udp 16
Jul 04 11:23:44.615516 rule 2/(match) block in on em0: 192.168.178.11.8612 > 
192.168.178.255.8612: udp 16
Jul 04 11:23:44.615523 rule 2/(match) block in on em0: 192.168.178.11.8612 > 
192.168.178.255.8610: udp 16
Jul 04 11:23:45.147239 rule 2/(match) block in on em0: 192.168.178.11.9609 > 
255.255.255.255.3289: udp 15 [ttl 1]
Jul 04 11:23:46.155868 rule 2/(match) block in on em0: 192.168.178.11.39413 > 
255.255.255.255.1124: udp 37 [ttl 1]

192.168.178.11 is my OpenBSD desktop (where pf is running).

I don't understand what I'm seeing here ...

 1. Why am I seeing traffic coming _in_ from my own address? Is that not
slightly weird? Is it because it is _to_ the .255 broadcast address?

 2. And why is it being blocked? Do I have to explicitly allow broadcast
traffic e.g. with rules to handle broadcast addresses? I don't think
I ever considered doing that before ...

The more I use pf, the less I seem to understand?

Danke im Voraus!

Robb.



Minor defect in OpenBSD install program ...

2023-07-04 Thread Why 42? The lists account.


Hi All,

FYI, I think there is a minor defect in the OpenBSD installation
program.

I noticed what looks like the use of an unset / uninitialised variable in
the text output:
> ...
> Let's install the sets!
> Location of sets? (disk http nfs or 'done') [http] 
> HTTP proxy URL? (e.g. 'http://proxy:8080', or 'none') [none] 
> (Unable to get list from openbsd.org, but that is OK)
> HTTP Server? (hostname or 'done') ftp.fau.de
> Server directory? [pub/OpenBSD/snapshots/armv7] 
> Unable to connect using HTTPS; using HTTP instead.
> Unable to get a verified list of distribution sets.
> Looked at  and found no OpenBSD/armv7 7.3 sets.  The set names looked for 
> were:
> bsd xbase73.tgz
> bsd.rd  xshare73.tgz
> base73.tgz  xfont73.tgz
> comp73.tgz  xserv73.tgz
> man73.tgz   site73.tgz
> game73.tgz  site73-novaya-zemlya.tgz

Notice the "Looked at  and found no" with double space.

I'm providing a valid host and (I believe) path.

This is a bit off the "main path" ... I think the root cause of the issue
here is that the Network (Ethernet) driver is not functioning correctly.

E.g.  if I drop out of the install I see ping statistics like this:
> ...
> Type 'exit' to return to install.
> novaya-zemlya# ping -v 192.168.178.85
> PING 192.168.178.85 (192.168.178.14 --> 192.168.178.85): 56 data bytes
> 64 bytes from 192.168.178.85: icmp_seq=0 ttl=64 time=1008.556 ms
> 64 bytes from 192.168.178.85: icmp_seq=1 ttl=64 time=2.239 ms
> 64 bytes from 192.168.178.85: icmp_seq=5 ttl=64 time=1.156 ms
> 64 bytes from 192.168.178.85: icmp_seq=7 ttl=64 time=0.939 ms
> 64 bytes from 192.168.178.85: icmp_seq=10 ttl=64 time=1.192 ms
> 64 bytes from 192.168.178.85: icmp_seq=19 ttl=64 time=1.131 ms
> 64 bytes from 192.168.178.85: icmp_seq=23 ttl=64 time=1.106 ms
> ^C
> --- 192.168.178.85 ping statistics ---
> 25 packets transmitted, 7 packets received, 72.0% packet loss
> round-trip min/avg/max/std-dev = 0.939/145.188/1008.556/352.469 ms

Presumably the same issue affects the install program's attempts to reach
the HTTP server, leading to some name variable not being set ...

This is with a 7.3 snapshot on a 32-bit ARM platform:
> novaya-zemlya# uname -a
> ksh: uname: not found

> novaya-zemlya# sysctl
> kern.osrelease=7.3
> hw.machine=armv7
> hw.model=ARM Cortex-A9 r2p10
> hw.product=Kosagi Novena Dual/Quad
> hw.disknames=sd0:60443d11093dd341,rd0:b66dc1c5a063c2b5,sd1:b4cca6f4102ee145,sd2:
> hw.ncpufound=1
> machdep.compatible=kosagi,imx6q-novena



Question regarding pf rules: block in on em0: ...

2023-07-04 Thread Why 42? The lists account.


Hi All,

I just noticed that "simple-scan" no longer discovers my scanner.

While trying to debug the issue, it occurred to me that it could be a
network / pf problem. This doesn't seem to be the issue though, even
after I disable pf (pfctl -d), the scanner is still not seen.

However, running "tcpdump -n -e -ttt -i pflog0" I noticed these block
messages being logged when I click "discover/refresh" in simple-scan:
...
Jul 04 11:23:44.601042 rule 2/(match) block in on em0: 192.168.178.11.8612 > 
192.168.178.255.8612: udp 16
Jul 04 11:23:44.601051 rule 2/(match) block in on em0: 192.168.178.11.8612 > 
192.168.178.255.8610: udp 16
Jul 04 11:23:44.615516 rule 2/(match) block in on em0: 192.168.178.11.8612 > 
192.168.178.255.8612: udp 16
Jul 04 11:23:44.615523 rule 2/(match) block in on em0: 192.168.178.11.8612 > 
192.168.178.255.8610: udp 16
Jul 04 11:23:45.147239 rule 2/(match) block in on em0: 192.168.178.11.9609 > 
255.255.255.255.3289: udp 15 [ttl 1]
Jul 04 11:23:46.155868 rule 2/(match) block in on em0: 192.168.178.11.39413 > 
255.255.255.255.1124: udp 37 [ttl 1]
...

192.168.178.11 is my OpenBSD desktop.

I don't understand what I'm seeing here ...

why am I seeing traffic coming _in_ from my own address? Is that not
slightly weird? Is it because it is _to_ the .255 broadcast address?

And why is it being blocked? Do I have to explicitly allow broadcast traffic
e.g. with rules to handle broadcast addresses? I don't think I ever
considered doing that before ...

Grateful for any advice!

Yours,
Puzzled in PF-Land


FYI:
This is with a 7.3 snapshot: 7.3 GENERIC.MP#1268 amd64

Output of ifconfig:
4.07 11:23:51 # ifconfig em0
em0: 
flags=a48843
 mtu 1492
lladdr 94:c6:91:aa:16:67
index 1 priority 0 llprio 3
groups: egress
media: Ethernet autoselect (1000baseT full-duplex)
status: active
inet6 fe80::96c6:91ff:feaa:1667%em0 prefixlen 64 scopeid 0x1
inet 192.168.178.11 netmask 0xff00 broadcast 192.168.178.255
inet6 2003:ee:1718:b100:39e3:3c67:bd3c:44f4 prefixlen 64 deprecated 
autoconf pltime 0 vltime 5213
inet6 2003:ee:1718:b100:3470:4349:f8d0:e1d2 prefixlen 64 deprecated 
autoconf temporary pltime 0 vltime 5213

Not sure what that "deprecated" means here.

Rule @2 is the "classic" block all rule ...

The contents of pf.conf:
#   $OpenBSD: pf.conf,v 1.55 2017/12/03 20:40:04 sthen Exp $
#
# See pf.conf(5) and /etc/examples/pf.conf

set skip on lo
set block-policy return
set debug warning

# By default, do not permit remote connections to X11
#block return in log on ! lo0 proto tcp to port 6000:6010
block log on ! lo0 all  # Begin by blocking everything

# Port build user does not need network
block return out log proto {tcp udp} user _pbuild

# Allow all outbound
pass out quick modulate state

# Local subnet ...
local_subnet_v4="{ 192.168.178.0/24 }"

local_subnet_v6="{ fe80::/10 }"
# TODO: Correct ???

# Local systems that I might trust ...
trusted_clients_v4="{ 192.168.178.10, 192.168.178.12, 192.168.178.13, 
192.168.178.14 }"

# Allow ssh in
pass in log inet proto tcp from $trusted_clients_v4 to (egress) port ssh 
modulate state

# Scanner discovery? Allow traffic from Canon pixma TR8550
#scanner_ports="{ 8610, 8612 }"
#passlog inet  proto udp from 192.168.178.85 port $scanner_ports
pass in log inet proto udp from 192.168.178.85 port 8610
pass in log inet proto udp from 192.168.178.85 port 8612
#
# Allow avahi? See: /usr/local/share/doc/pkg-readmes/avahi
pass in log inet  proto udp from any to 224.0.0.251 port mdns allow-opts
pass in log inet6 proto udp from any to ff02::fb port mdns allow-opts
# and for SSDP:
pass in log inet  proto udp from any to 239.255.255.250 port ssdp allow-opts
pass in log inet6 proto udp from any to { ff02::c, ff05::c, ff08::c } port ssdp 
allow-opts
#
# OK, then try allowing multicast in general ...
pass log inet  proto igmp from any allow-opts

# NFS: Allow access to local NFS server
nfs_ports="{ sunrpc, nfsd, 881 }"
#
# But is UDP really still necessary?
#pass in proto udp from $trusted_clients to (egress) port $nfs_ports keep state
#pass out proto udp from (egress) to $trusted_clients port $nfs_ports keep state
#
pass in proto tcp from $trusted_clients_v4 to (egress) port $nfs_ports modulate 
state
pass in proto tcp from (egress) to $trusted_clients_v4 port $nfs_ports modulate 
state

# ICMP: Limit ICMP to allowed types: echorep, unreach, squench, echoreq, timex:
icmp_types = "{ echoreq, echorep, unreach, squench, timex }"
# See also: "man 4 icmp"
pass in log inet proto icmp to (egress) icmp-type $icmp_types label "rule $nr: 
pass: $proto: $icmp_type"

# HTTP: Running http-file-server:
# PORT= bin/http-file-server -u ~/Public/
# 2020/07/13 16:11:35 serving local path "/space/home/robb/Public" on "/Public/"
# 2020/07/13 16:11:35 redirecting to "/Public/" from "/"
# 2020/07/13 16:11:35 http-file-server listening on ":"
fs_port="{  }"
pass in proto tcp from $trusted_clients_v4 to (egress) 

Re: [AFMUG] OT Everyone was right.

2023-07-03 Thread Jeff Broadwick - Lists
I'm guessing they'd be Libertarians.  ;-)

Jeff Broadwick
CTIconnect
312-205-2519 Office
574-220-7826 Cell
jbroadw...@cticonnect.com

On Jul 3, 2023, at 3:52 PM, Bill Prince wrote:

> I don't think grizzlies are either liberal or conservative. They operate
> on the rules of food and fear, and maybe not too much of the fear part.
>
> bp
>
> On 7/3/2023 9:55 AM, Chuck McCown via AF wrote:
>
>> Third draft:
>> Lessee, we can presume the bear is a liberal because it got vaccinated.
>> So a liberal bear is more likely to attack conservatives.
>>
>> So I think you guys are good.
>>
>> From: Bill Prince
>> Sent: Sunday, July 2, 2023 7:45 PM
>> To: af@af.afmug.com
>> Subject: Re: [AFMUG] OT Everyone was right.
>>
>>> I'm not a democrat
>>>
>>> bp
>>>
>>> On 7/2/2023 4:55 PM, Chuck McCown via AF wrote:
>>>
>>>> Not sure the logic of that joke makes sense.
>>>>
>>>> Lessee, we can presume the bear is a democrat because it got vaccinated.
>>>> So a democrat bear is more likely to attack republicans.
>>>>
>>>> So I think you guys are good.
>>>>
>>>> From: Chuck McCown via AF
>>>> Sent: Sunday, July 2, 2023 5:23 PM
>>>> To: af@af.afmug.com
>>>> Cc: ch...@go-mtc.com
>>>> Subject: Re: [AFMUG] OT Everyone was right.
>>>>
>>>>> Then they will not attack.  They only attack democrats!
>>>>>
>>>>> From: Bill Prince
>>>>> Sent: Sunday, July 2, 2023 12:58 PM
>>>>> To: af@af.afmug.com
>>>>> Subject: Re: [AFMUG] OT Everyone was right.
>>>>>
>>>>>> What if the grizzly is vaccinated?
>>>>>>
>>>>>> bp
>>>>>>
>>>>>> On 7/2/2023 11:08 AM, Chuck McCown via AF wrote:
>>>>>>
>>>>>>> .005 but that is attack, not death.  I actually calculated it.
>>>>>>> Deaths is .0005
>>>>>>>
>>>>>>> Lightning is 100 x more likely to get her.
>>>>>>>
>>>>>>> From: Bill Prince
>>>>>>> Sent: Sunday, July 2, 2023 11:25 AM
>>>>>>> To: af@af.afmug.com
>>>>>>> Subject: Re: [AFMUG] OT Everyone was right.
>>>>>>>
>>>>>>>> My wife wants to know the micromorts for death by grizzly bear
>>>>>>>> when backpacking in Yellowstone National Park.

Re: wishlist: support for shorter pointers

2023-07-03 Thread Richard Earnshaw (lists) via Gcc

On 03/07/2023 17:42, Rafał Pietrak via Gcc wrote:

Hi Ian,

W dniu 3.07.2023 o 17:07, Ian Lance Taylor pisze:
On Wed, Jun 28, 2023 at 11:21 PM Rafał Pietrak via Gcc 
 wrote:

[]

I was thinking about that, and it doesn't look like it requires that deep
a rewrite. An ABI spec that could accommodate the functionality could be as
little as one additional attribute on linker segments.


If I understand correctly, you are looking for something like the x32
mode that was available for a while on x86_64 processors:
https://en.wikipedia.org/wiki/X32_ABI .  That was a substantial amount
of work including changes to the compiler, assembler, linker, standard
library, and kernel.  And at least to me it's never seemed
particularly popular.


Yes.

And the Wiki reporting up to 40% performance improvements in some corner 
cases is impressive and encouraging. I believe that the reported 
average of 5-8% improvement would be significantly better within the tiny 
resource environment of an MCU. In the MCU world, such an improvement could 
mean the difference between a project fitting or not fitting into a 
particular device.


-R


I think you need to be very careful when reading benchmarketing (sic) 
numbers like this.  Firstly, this is a 32-bit vs 64-bit measurement; 
secondly, the benchmark (spec 2000) is very old now and IIRC was not 
fully optimized for 64-bit processors (it predates the 64-bit version of 
the x86 instruction set); thirdly, there are benchmarks in SPEC which 
are very sensitive to cache size and the 32-bit ABI just happened to 
allow them to fit enough data in the caches to make the numbers leap.


R.


[PATCH] vect: Treat vector widening IFN calls as 'simple' [PR110436]

2023-07-03 Thread Andre Vieira (lists) via Gcc-patches

Hi,

This patch makes the vectorizer treat any vector widening IFN as 'simple',
like it did with the tree codes VEC_WIDEN_*.

I wasn't sure whether I should make all IFNs simple and then exclude 
some (like the GOMP_ ones), or include more than just the new widening IFNs. 
But since this is the only behaviour that changed with the IFN patch, I 
decided to only special-case the widening IFNs for now. Let me know if 
you have different thoughts on this.
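
For context, here is a hedged, generic illustration of the kind of loop where
the vectorizer uses widening operations (it is not the pr83089.c testcase that
the new test includes, just a sketch of the shape involved):

void
widen_add (int *restrict out, const short *restrict a,
           const short *restrict b, int n)
{
  /* 16-bit + 16-bit -> 32-bit widening add: the sort of operation that
     used to be represented with VEC_WIDEN_* tree codes and, on some
     targets, is now emitted as a widening internal function call.  */
  for (int i = 0; i < n; i++)
    out[i] = (int) a[i] + (int) b[i];
}

Whether a given target actually takes the IFN path depends on its vector
patterns, so treat the above purely as an illustration.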


Bootstrapped and regression tested on aarch64-unknown-linux-gnu.

gcc/ChangeLog:

PR tree-optimization/110436
* tree-vect-stmts.cc (is_simple_and_all_uses_invariant): Treat widening
IFNs as simple.

gcc/testsuite/ChangeLog:

* gcc.dg/pr110436.c: New test.diff --git a/gcc/testsuite/gcc.dg/pr110436.c b/gcc/testsuite/gcc.dg/pr110436.c
new file mode 100644
index 
..c146f99fac9f0524eaa3b1230b56e9f94eed5bda
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr110436.c
@@ -0,0 +1,5 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+
+#include "pr83089.c"
+
diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index 
d642d3c257f8d540a8562eedbcd40372b9550959..706055e9af94f0c1500c25faf4bd74fc08bf3cd6
 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -296,8 +296,11 @@ is_simple_and_all_uses_invariant (stmt_vec_info stmt_info,
   tree op;
   ssa_op_iter iter;
 
-  gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
-  if (!stmt)
+  gimple *stmt = stmt_info->stmt;
+  if (!is_gimple_assign (stmt)
+  && !(is_gimple_call (stmt)
+  && gimple_call_internal_p (stmt)
+  && widening_fn_p (gimple_call_combined_fn (stmt))))
 return false;
 
   FOR_EACH_SSA_TREE_OPERAND (op, stmt, iter, SSA_OP_USE)


Re: gcc tricore porting

2023-07-03 Thread Richard Earnshaw (lists) via Gcc

On 03/07/2023 15:34, Joel Sherrill wrote:

On Mon, Jul 3, 2023, 4:33 AM Claudio Eterno 
wrote:


Hi Joel, I'll give an answer ASAP on the newlib and libgloss...
I supposed your question was about the licensing of newlib;
instead you were really asking what changed in the repo libs...



It was a bit of both. If they put the right licenses on the newlib and
libgloss ports, you should be able to use them and eventually submit them.
But GCC, binutils, and gdb would be gpl and require an assignment to the
FSF. That is all I meant.


It's not quite as restricted as that.  For GCC, I suggest reading 
https://gcc.gnu.org/contribute.html#legal for more details.


I think there are similar processes in place for binutils as well.  (I'm 
not quite so sure for GDB).


R.



An option here is to reach out to the authors and ask if they are willing
to do the FSF assignment. If they are, then any GPL licensed code from them
might be a baseline.

It looks like their current products may be based on LLVM.

--joel


C.



Il giorno dom 2 lug 2023 alle ore 19:53 Claudio Eterno <
eterno.clau...@gmail.com> ha scritto:


Hi Joel, can you give me more info regarding newlib or libgloss cases?
Unfortunately I'm a newbie in this world...
Thank you,
Claudio

Il giorno dom 2 lug 2023 alle ore 17:38 Joel Sherrill 
ha scritto:




On Sun, Jul 2, 2023, 3:29 AM Claudio Eterno 
wrote:


Hi Joel and Mikael,
taking a look at the code it seems that the repo owner is HighTec,
but we have no confirmation.
In fact, after a comparison with the original gcc 9.4.0 files I see this in
a lot of places ("WITH_HIGHTEC") [intl.c]:
[image: image.png]
Probably this version of gcc is a basic version of their tricore-gcc
and probably works fine, but that repo doesn't show any extra info.
It also seems impossible to contact the owner (that account doesn't show
any email or other info).
Honestly with these conditions, from gcc development point of view,
that repo has no value.



Without an assignment, you can't submit that code. That's a blocker on
using it if there isn't one.

But you can file an issue against the repo asking questions.


Anyway this is a good starting point...




Maybe not if you can't submit it. Anything that needs to be GPL licensed
and owned by the FSF is off limits.

But areas with permissive licenses might be ok if they stuck with those.
Look at what they did with newlib and libgloss.

--joel



C.



Il giorno lun 19 giu 2023 alle ore 18:55 Joel Sherrill 
ha scritto:




On Mon, Jun 19, 2023, 10:36 AM Mikael Pettersson via Gcc <
gcc@gcc.gnu.org> wrote:


(Note I'm reading the gcc mailing list via the Web archives, which
doesn't let me
create "proper" replies. Oh well.)

On Sun Jun 18 09:58:56 GMT 2023,  wrote:

Hi, this is my first time with open source development. I worked in
automotive for 22 years and we (generally) were using tricore series for
these products. GCC doesn't compile on that platform. I left my work some
days ago and so I'll have some spare time in the next few months. I would
like to know how difficult it is to port the tricore platform on gcc and if
during this process somebody can support me as tutor and... also if the gcc
team is interested in this item...


https://github.com/volumit has a port of gcc + binutils + newlib +
gdb
to Tricore,
and it's not _that_ ancient. I have no idea where it originates from
or how complete
it is, but I do know the gcc-4.9.4 based one builds with some tweaks.




https://github.com/volumit/package_494 says there is a port in
process to gcc 9. Perhaps digging in and assessing that would be a good
start.



One question is whether that code has proper assignments on file for
ultimate inclusion. That should be part of your assessment.

--joel





I don't know anything more about it, I'm just a collector of
cross-compilers for obscure / lost / forgotten / abandoned targets.

/Mikael





--
Claudio Eterno
via colle dell'Assietta 17
10036 Settimo Torinese (TO)












dvmrpd reports "route decision engine terminated; signal 11"

2023-07-03 Thread Why 42? The lists account.


Hi All,

FYI, after patching the kernel (See: discussion from June 7th entitled
"dvmrpd start causes kernel panic: assertion failed") I am able to run
the dvmrpd multicast routing daemon, and indeed it seems to be doing
something; I see messages logged regarding multicast IP address groups
or ranges that are in use, or at least configured.

Strangely though, the daemon occasionally logs these messages:
...
kmr_shutdown: interface em0
waiting for children to terminate
route decision engine terminated; signal 11
fatal in dvmrpe: msgbuf_write: Broken pipe

It's unclear to me if this is normal operation or not, but signal 11
(segmentation violation?) certainly doesn't look typical ...

Should a signal 11 result in a core file being dumped? I don't find any
in any of the likely places e.g. the starting directory.

Thanks for any tips!

Cheers,
Robb.



Re: nginx http3/quic support

2023-06-29 Thread Genes Lists

On 6/29/23 07:16, Genes Lists wrote:
Nginx mainline added (experimental) http3/quic support with version 1.25 


Seems our nginx-mainline does have '--with-http_v3_module' together with 
openssl. So it should mostly work as is.


Missed that as well - must need more coffee ...

gene



Re: nginx http3/quic support

2023-06-29 Thread Genes Lists

On 6/29/23 08:06, Genes Lists wrote:

On 6/29/23 07:16, Genes Lists wrote:

Actually the cleanest and simplest way is to use libressl which is 
... 
- I will build and test.




Very simple to build with libressl - preliminary testing nginx working 
fine for both http/2 and http/3.


Be good to have quic in the official nginx-mainline.

gene


Re: nginx http3/quic support

2023-06-29 Thread Genes Lists

On 6/29/23 07:16, Genes Lists wrote:

Actually the cleanest and simplest way is to use libressl, which is 
already nicely packaged in the repo. Don't know how I missed this earlier 
- I will build and test.



gene


nginx http3/quic support

2023-06-29 Thread Genes Lists
Nginx mainline added (experimental) http3/quic support with version 1.25 
in late May.


Is there any interest in adding support to our nginx-mainline package?

It can be optionally turned on in server config, so having it compiled 
in and available shouldn't have any impact until it's activated by 
changing the web server configs. I confirmed this with my web servers.


I've been running this for a while now (even before the quic branch was 
merged into mainline) and it has been working well both with and without 
http3. Since quic uses udp, I did need to change the firewall to allow 
udp in addition to tcp for the web servers on port 443.


In case of interest, here's what I did to build and get it running.

Since openssl doesn't support quic, nginx provides for some 
alternatives: quictls, boringssl or libressl. I chose to use quictls.


Since quictls is openssl plus quic support, I want to be sure it did not 
interfere in any way with the default Arch openssl libraries or binaries.


So, I made a quictls package which installed into its own tree rather than 
directly in /usr or /usr/local.  I chose to use /usr/local/quictls/. This keeps 
the binaries and libraries away from all normal paths while making the 
libraries readily available for nginx. I imagine there are other 
approaches to dealing with this.


Once quictls was built and installed it is quite simple to use it to add 
quic support to the nginx-mainline package.


As always, thanks to those keeping Arch vibrant and at the leading edge.

gene



Re: wishlist: support for shorter pointers

2023-06-28 Thread Richard Earnshaw (lists) via Gcc

On 28/06/2023 17:07, Martin Uecker wrote:

Am Mittwoch, dem 28.06.2023 um 16:44 +0100 schrieb Richard Earnshaw (lists):

On 28/06/2023 15:51, Rafał Pietrak via Gcc wrote:

Hi Martin,

W dniu 28.06.2023 o 15:00, Martin Uecker pisze:


Sounds like named address spaces to me:
https://gcc.gnu.org/onlinedocs/gcc/Named-Address-Spaces.html


Only to some extent, and only in the x86 case.

The goal of the wish-list item I've described is to shorten pointers. I may be
wrong and have misread the specs, but the "address spaces"
implementation you've pointed out doesn't look like it does that. In
particular the AVR variant applies to devices that have a "native int"
of 16 bits, and those devices (most of them) have an address space no
larger. So there is no gain. Their pointers cover all their address
space, and if one wanted to have shorter pointers ... like 12 bits -
those wouldn't "nicely fit into a register", or 8 bits - those would
reduce the "addressable" space to 256 bytes, which is VERY tight for any
practical application.

Additionally, the AVR case is explained as "only for rodata" - this
completely dismisses it from my use.

To explain a little more: the functionality I'm looking for is something
like x86 implementation of that "address spaces". The key functionality
here is the additional register like fs/gs (an address offset register).
IMHO the feature/implementation in question would HAVE TO use additional
register instead of letting linker adjust them at link time, because
those "short" pointers would need to be load-and-stored dynamically and
changed dynamically at runtime. That's why I've put an example of ARM
instruction that does this. Again IMHO the only "syntactic" feature that
is required for a compiler to do "the right thing" is to make compiler
consider segment (segment name, ordinary linker segment name) where a
particular pointer target resides. Then if that segment where data (of
that pointer) reside is declared "short pointers", then compiler loads
and uses additional register pointing to the base of that segment. Quite
like intel segments work in hardware.

Naturally, although I have hints on such mechanism behavior, I have no
skills to even imagine where to tweak the sources to achieve that.



I think I understand what you're asking for but:
1) You'd need a new ABI specification to handle this, probably involving
register assignments (for the 'segment' addresses), the initialization
of those at startup, assembler and linker extensions to allow for
relocations describing the symbols, etc.
2) Implementations for all of the above (it would be a lot of work -
weeks to months, not days).  Little existing code, including most of the
hand-written assembly routines is likely to be compatible with the
register conventions you'd need to define, so all that code would need
auditing and alternatives developed.
3) I doubt it would be an overall win in the end.

I base the last assertion on the fact that you'd now have three values
in many addresses, the base (segment), the pointer and then a final
offset.  This means quite a bit more code being generated, so you trade
smaller pointers in your data section for more code in your code
section.  For example,

struct f
{
    int a;
    int b;
};

int func (struct f *p)
{
    return p->b;
}

would currently compile to something like

ldr r0, [r0, #4]
bx lr

but with the new, shorter, pointer you'd end up with

add r0, r_seg, r0
ldr r0, [r0, #4]
bx lr

In some cases it might be even worse as you'd end up with
zero-extensions of the pointer values as well.



I do not quite understand why this wouldn't work with
named address spaces?

__near struct f {
   int a;
   int b;
};

int func (__near struct f *p)
{
   return p->b;
}

could produce exactly such code?   If you need multiple
such segments one could have __near0, ..., __near9.

Such a pointer could also be converted to a regular
pointer, which could reduce code overhead.

Martin


Named address spaces, as they exist today, don't really do anything (at 
least, in the Arm port).  A pointer is still 32-bits in size, so they 
become just syntactic sugar.


If you're going to use them as 'bases', then you still have to define 
how the base address is accessed - it doesn't just happen by magic.


R.









Best,
Martin

Am Dienstag, dem 27.06.2023 um 14:26 +0200 schrieb Rafał Pietrak via Gcc:

Hello everybody,

I'm not quite sure if this is correct mailbox for this suggestion (may
be "embedded" would be better), but let me present it first (and while
the examples is from ARM stm32 environment, the issue would equally
apply to i386 or even amd64). So:

1. Small MPU (like stm32f103) would normally have small amount of RAM,
and even somewhat larger variant do have its memory "partitioned/
dedicated" to various subsystems (like CloseCoupledMemory, Ethe

Re: wishlist: support for shorter pointers

2023-06-28 Thread Richard Earnshaw (lists) via Gcc

On 28/06/2023 15:51, Rafał Pietrak via Gcc wrote:

Hi Martin,

W dniu 28.06.2023 o 15:00, Martin Uecker pisze:


Sounds like named address spaces to me:
https://gcc.gnu.org/onlinedocs/gcc/Named-Address-Spaces.html


Only to some extent, and only in the x86 case.

The goal of the wish-list item I've described is to shorten pointers. I may be 
wrong and have misread the specs, but the "address spaces" 
implementation you've pointed out doesn't look like it does that. In 
particular the AVR variant applies to devices that have a "native int" 
of 16 bits, and those devices (most of them) have an address space no 
larger. So there is no gain. Their pointers cover all their address 
space, and if one wanted to have shorter pointers ... like 12 bits - 
those wouldn't "nicely fit into a register", or 8 bits - those would 
reduce the "addressable" space to 256 bytes, which is VERY tight for any 
practical application.


Additionally, the AVR case is explained as "only for rodata" - this 
completely dismisses it from my use.


To explain a little more: the functionality I'm looking for is something 
like x86 implementation of that "address spaces". The key functionality 
here is the additional register like fs/gs (an address offset register). 
IMHO the feature/implementation in question would HAVE TO use additional 
register instead of letting linker adjust them at link time, because 
those "short" pointers would need to be load-and-stored dynamically and 
changed dynamically at runtime. That's why I've put an example of ARM 
instruction that does this. Again IMHO the only "syntactic" feature that 
is required for a compiler to do "the right thing" is to make compiler 
consider segment (segment name, ordinary linker segment name) where a 
particular pointer target resides. Then if that segment where data (of 
that pointer) reside is declared "short pointers", then compiler loads 
and uses additional register pointing to the base of that segment. Quite 
like intel segments work in hardware.


Naturally, although I have hints on such mechanism behavior, I have no 
skills to even imagine where to tweak the sources to achieve that.



I think I understand what you're asking for but:
1) You'd need a new ABI specification to handle this, probably involving 
register assignments (for the 'segment' addresses), the initialization 
of those at startup, assembler and linker extensions to allow for 
relocations describing the symbols, etc.
2) Implementations for all of the above (it would be a lot of work - 
weeks to months, not days).  Little existing code, including most of the 
hand-written assembly routines is likely to be compatible with the 
register conventions you'd need to define, so all that code would need 
auditing and alternatives developed.

3) I doubt it would be an overall win in the end.

I base the last assertion on the fact that you'd now have three values 
in many addresses, the base (segment), the pointer and then a final 
offset.  This means quite a bit more code being generated, so you trade 
smaller pointers in your data section for more code in your code 
section.  For example,


struct f
{
  int a;
  int b;
};

int func (struct f *p)
{
  return p->b;
}

would currently compile to something like

ldr r0, [r0, #4]
bx lr

but with the new, shorter, pointer you'd end up with

add r0, r_seg, r0
ldr r0, [r0, #4]
bx lr

In some cases it might be even worse as you'd end up with 
zero-extensions of the pointer values as well.
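
As a purely illustrative aside (a sketch only, with made-up names, not
something proposed in this thread), the same base-plus-offset idea can already
be written by hand in today's C with 16-bit offsets into a static pool, which
makes the extra arithmetic visible:

#include <stdint.h>

struct node
{
  uint16_t next;   /* 16-bit "pointer": an offset into the pool, 0 = none  */
  uint16_t value;
};

static struct node pool[1024];      /* the "segment" the short offsets use  */

static inline struct node *
node_deref (uint16_t off)           /* every access pays the base+offset add */
{
  return off ? &pool[off] : (struct node *) 0;
}

static inline uint16_t
node_ref (const struct node *p)     /* full pointer -> short "pointer"       */
{
  return p ? (uint16_t) (p - pool) : 0;
}

The data shrinks to half the size, but each dereference carries the extra add
(and possibly a zero-extend), which is exactly the data-size versus code-size
trade-off weighed above.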


R.


-R



Best,
Martin

Am Dienstag, dem 27.06.2023 um 14:26 +0200 schrieb Rafał Pietrak via Gcc:

Hello everybody,

I'm not quite sure if this is correct mailbox for this suggestion (may
be "embedded" would be better), but let me present it first (and while
the examples is from ARM stm32 environment, the issue would equally
apply to i386 or even amd64). So:

1. Small MPU (like stm32f103) would normally have small amount of RAM,
and even somewhat larger variant do have its memory "partitioned/
dedicated" to various subsystems (like CloseCoupledMemory, Ethernet
buffers, USB buffs, etc).

2. to address any location within those sections of that memory (or
their entire RAM) it would suffice to use 16-bit pointers.

3. still, declaring a pointer in GCC always allocates the "natural" size of a
pointer for the given architecture. In the case of ARM stm32 that is 32 bits.

4. programs using pointers do keep them around in structures. So
programs with heavy use of pointers have those structures about 2 times
larger than necessary if only pointers were 16-bit. And memory in
those devices is scarce.

5. the same thing applies to the 64-bit world. Programs that don't require
huge memories but do use pointers extensively MUST take up 64 bits for a
pointer no matter what.

So I was wondering if it would be feasible for GCC to allow SEGMENT to
be declared as "small" (like 16-bit addressable in 32-bit CPU, or 32-bit
addressable in 64-bit CPU), and ANY pointer declared to reference
location 

Re: [PATCH 2/2] [testsuite, arm]: Make mve_fp_fpu[12].c accept single or double precision FPU

2023-06-28 Thread Richard Earnshaw (lists) via Gcc-patches

On 28/06/2023 10:26, Christophe Lyon via Gcc-patches wrote:

These tests currently expect a directive containing .fpu fpv5-sp-d16
and thus may fail if the test is executed for instance with
-march=armv8.1-m.main+mve.fp+fp.dp

This patch accepts either fpv5-sp-d16 or fpv5-d16 to avoid the failure.

2023-06-28  Christophe Lyon  

gcc/testsuite/
* gcc.target/arm/mve/intrinsics/mve_fp_fpu1.c: Fix .fpu
scan-assembler.
* gcc.target/arm/mve/intrinsics/mve_fp_fpu2.c: Likewise.
---
  gcc/testsuite/gcc.target/arm/mve/intrinsics/mve_fp_fpu1.c | 2 +-
  gcc/testsuite/gcc.target/arm/mve/intrinsics/mve_fp_fpu2.c | 2 +-
  2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/mve_fp_fpu1.c 
b/gcc/testsuite/gcc.target/arm/mve/intrinsics/mve_fp_fpu1.c
index e375327fb97..8358a616bb5 100644
--- a/gcc/testsuite/gcc.target/arm/mve/intrinsics/mve_fp_fpu1.c
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/mve_fp_fpu1.c
@@ -12,4 +12,4 @@ foo1 (int8x16_t value)
return b;
  }
  
-/* { dg-final { scan-assembler "\.fpu fpv5-sp-d16" }  } */
+/* { dg-final { scan-assembler "\.fpu fpv5(-sp|)-d16" }  } */
diff --git a/gcc/testsuite/gcc.target/arm/mve/intrinsics/mve_fp_fpu2.c 
b/gcc/testsuite/gcc.target/arm/mve/intrinsics/mve_fp_fpu2.c
index 1fca1100cf0..5dd2feefc35 100644
--- a/gcc/testsuite/gcc.target/arm/mve/intrinsics/mve_fp_fpu2.c
+++ b/gcc/testsuite/gcc.target/arm/mve/intrinsics/mve_fp_fpu2.c
@@ -12,4 +12,4 @@ foo1 (int8x16_t value)
return b;
  }
  
-/* { dg-final { scan-assembler "\.fpu fpv5-sp-d16" }  } */
+/* { dg-final { scan-assembler "\.fpu fpv5(-sp|)-d16" }  } */


OK.


Re: [PATCH 1/2] [testsuite,arm]: Make nomve_fp_1.c require arm_fp

2023-06-28 Thread Richard Earnshaw (lists) via Gcc-patches

On 28/06/2023 10:26, Christophe Lyon via Gcc-patches wrote:

If GCC is configured with the default (soft) -mfloat-abi, and we don't
override the target_board test flags appropriately,
gcc.target/arm/mve/general-c/nomve_fp_1.c fails for lack of
-mfloat-abi=softfp or -mfloat-abi=hard, because it doesn't use
dg-add-options arm_v8_1m_mve (on purpose, see comment in the test).

Require and use the options needed for arm_fp to fix this problem.

2023-06-28  Christophe Lyon  

gcc/testsuite/
* gcc.target/arm/mve/general-c/nomve_fp_1.c: Require arm_fp.
---
  gcc/testsuite/gcc.target/arm/mve/general-c/nomve_fp_1.c | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/gcc/testsuite/gcc.target/arm/mve/general-c/nomve_fp_1.c 
b/gcc/testsuite/gcc.target/arm/mve/general-c/nomve_fp_1.c
index 21c2af16a61..c9d279ead68 100644
--- a/gcc/testsuite/gcc.target/arm/mve/general-c/nomve_fp_1.c
+++ b/gcc/testsuite/gcc.target/arm/mve/general-c/nomve_fp_1.c
@@ -1,9 +1,11 @@
  /* { dg-do compile } */
  /* { dg-require-effective-target arm_v8_1m_mve_ok } */
+/* { dg-require-effective-target arm_fp_ok } */
  /* Do not use dg-add-options arm_v8_1m_mve, because this might expand to "",
 which could imply mve+fp depending on the user settings. We want to make
 sure the '+fp' extension is not enabled.  */
  /* { dg-options "-mfpu=auto -march=armv8.1-m.main+mve" } */
+/* { dg-add-options arm_fp } */
  
  #include 
  


OK.


Re: error when pkg_add'ing

2023-06-27 Thread lists
> Having A LOT of problems in pkg_check usually means something in your
> process is REALLY BAD.
> 
> As the guy who wrote (most of) pkg_add/pkg_check, I don't need pkg_check
> all that often.


Just wanted to say: thanks for writing pkg_add.

Don't know if it was by you, but I also really appreciated that semi-recent 
update to pkg_add which made package updates (pkg_add -u) significantly faster.



Re: Metronome marking with non-integer value

2023-06-26 Thread Lib Lists
> Interestingly, and this would have been the topic of my next message, the 
> resulting MIDI output is always correct whatever \tempo I put (it can be the 
> same \tempo for all staves). In the example below the top staff is always set 
> to 4 = 120 and the other staves correctly follow. I'm wondering if this is 
> because \scaleDurations overrides any \tempo indication, or because of 
> \cadenzaOn.
>
> In your code below, there are no custom MIDI tempo settings, since you did 
> not include any \set Score.tempoWholesPerMinute = ... command (\tempo adds 
> this along with a TempoChangeEvent, as Valentin explained).
>
> Even with \tempo, MIDI cannot have several tempi for several tracks I 
> believe. At any rate, LilyPond will just choose the last one seen, and 
> consistently apply it to all staves. Just try swapping the staves in
>
> \version "2.24.1"
>
> \score {
>   <<
> \new Staff { \tempo 4 = 60 c'4 4 4 4 }
> \new Staff { \tempo 4 = 120 c'4 4 4 4 }
>   >>
>   \midi { }
> }
>
> to hear the difference.

Hi Jean, thank you so much to you as well for the detailed explanation
and examples. I'm still getting the hang of how the many aspects of
Lilypond work, and all of this is really useful.

>
> \scaleDurations does not override \tempo. But, it makes notes performed at a 
> different pace than what their written durations would normally make for. For 
> example, “\scaleDurations 2” will turn quarter notes into half notes for 
> MIDI, while preserving their appearance (filled note heads) in the printed 
> output.
>
> But it's not a problem, as it works as it should.
>
> In any case, here below an example that uses your 'Weird tempo' marking. Also 
> I added a rounding function found on Stack Overflow to avoid a visually too 
> long decimal. However, the resulting MIDI file doesn't show any visible 
> rounding issue.
>
> What about just making your own tempo marks as \markup ? I think that's more 
> future-proof than putting a non-integer into 'metronome-count while the 
> parser code would only let an integer pass through and downstream code may 
> thus legitimately (IMHO) assume it's integer.
>
> Something like the following should do it:

Fantastic, thank you so much for this!

Cheers,
Lib



[OE-core] [PATCH] bonnie++: New recipe for version 2.0

2023-06-26 Thread Jörg Sommer via lists . openembedded . org
Newer versions of bonnie get published on
. Unfortunately, the new version
doesn't compile with g++ 11, which requires *fix-csv2html-data.patch*, and
configure fails due to cross compilation, which gets fixed
with *fix-configure-lfs.patch*.

Signed-off-by: Jörg Sommer 
---
 .../bonnie/bonnie++/fix-configure-lfs.patch   |  37 
 .../bonnie/bonnie++/fix-csv2html-data.patch   | 181 ++
 .../bonnie/bonnie++_2.00a.bb  |  33 
 3 files changed, 251 insertions(+)
 create mode 100644 
meta-oe/recipes-benchmark/bonnie/bonnie++/fix-configure-lfs.patch
 create mode 100644 
meta-oe/recipes-benchmark/bonnie/bonnie++/fix-csv2html-data.patch
 create mode 100644 meta-oe/recipes-benchmark/bonnie/bonnie++_2.00a.bb

diff --git a/meta-oe/recipes-benchmark/bonnie/bonnie++/fix-configure-lfs.patch 
b/meta-oe/recipes-benchmark/bonnie/bonnie++/fix-configure-lfs.patch
new file mode 100644
index 00..d28e28658c
--- /dev/null
+++ b/meta-oe/recipes-benchmark/bonnie/bonnie++/fix-configure-lfs.patch
@@ -0,0 +1,37 @@
+diff --git i/configure.in w/configure.in
+index 080e40c..f2a2bbe 100644
+--- i/configure.in
++++ w/configure.in
+@@ -82,8 +82,15 @@ void * thread_func(void * param) { return NULL; }
+   , thread_ldflags="-lpthread"
+   , thread_ldflags="-pthread")
+ 
+-AC_SUBST(large_file)
+-AC_TRY_RUN([#ifndef _LARGEFILE64_SOURCE
++AC_ARG_ENABLE(lfs,
++  [  --disable-lfs  disable large file support],
++  LFS_CHOICE=$enableval, LFS_CHOICE=check)
++
++if test "$LFS_CHOICE" = yes; then
++   bonniepp_cv_large_file=yes
++elif test "$LFS_CHOICE" = check; then
++   AC_CACHE_CHECK([whether to enable -D_LARGEFILE64_SOURCE], 
bonniepp_cv_large_file,
++  AC_TRY_RUN([#ifndef _LARGEFILE64_SOURCE
+ #define _LARGEFILE64_SOURCE
+ #endif
+ #include 
+@@ -118,8 +125,12 @@ int main () {
+   }
+   close(fd);
+   return 0;
+-}], large_file="yes")
+-if [[ -n "$large_file" ]]; then
++}], bonniepp_cv_large_file="yes"))
++fi
++
++AC_SUBST(large_file)
++
++if [[ -n "$bonniepp_cv_large_file" ]]; then
+large_file="#define _LARGEFILE64_SOURCE"
+ fi
+ 
diff --git a/meta-oe/recipes-benchmark/bonnie/bonnie++/fix-csv2html-data.patch 
b/meta-oe/recipes-benchmark/bonnie/bonnie++/fix-csv2html-data.patch
new file mode 100644
index 00..78ac347aa8
--- /dev/null
+++ b/meta-oe/recipes-benchmark/bonnie/bonnie++/fix-csv2html-data.patch
@@ -0,0 +1,181 @@
+commit 7e9433a56f22426b11cbc9bd80e0debca67c893b
+Author: Jörg Sommer 
+Date:   Mon Jun 26 12:38:30 2023 +0200
+
+csv2html: Explicitly reference data in top level
+
+With g++ 11 *data* became ambiguous with [std::data][1]. Therefore it's
+needed to explicitly address the variable from the top level scope.
+
+[1] https://en.cppreference.com/w/cpp/iterator/data
+
+diff --git a/bon_csv2html.cpp b/bon_csv2html.cpp
+index e9d9c50..652e330 100644
+--- a/bon_csv2html.cpp
++++ b/bon_csv2html.cpp
+@@ -87,8 +87,8 @@ int main(int argc, char **argv)
+ read_in(buf);
+   }
+ 
+-  props = new PPCCHAR[data.size()];
+-  for(i = 0; i < data.size(); i++)
++  props = new PPCCHAR[::data.size()];
++  for(i = 0; i < ::data.size(); i++)
+   {
+ props[i] = new PCCHAR[MAX_ITEMS];
+ props[i][0] = NULL;
+@@ -109,7 +109,7 @@ int main(int argc, char **argv)
+   }
+   calc_vals();
+   int mid_width = header();
+-  for(i = 0; i < data.size(); i++)
++  for(i = 0; i < ::data.size(); i++)
+   {
+ // First print the average speed line
+ printf("");
+@@ -171,23 +171,23 @@ int compar(const void *a, const void *b)
+ 
+ void calc_vals()
+ {
+-  ITEM *arr = new ITEM[data.size()];
++  ITEM *arr = new ITEM[::data.size()];
+   for(unsigned int column_ind = 0; column_ind < MAX_ITEMS; column_ind++)
+   {
+ switch(vals[column_ind])
+ {
+ case eNoCols:
+ {
+-  for(unsigned int row_ind = 0; row_ind < data.size(); row_ind++)
++  for(unsigned int row_ind = 0; row_ind < ::data.size(); row_ind++)
+   {
+ if(column_ind == COL_CONCURRENCY)
+ {
+-  if(data[row_ind][column_ind] && strcmp("1", 
data[row_ind][column_ind]))
++  if(::data[row_ind][column_ind] && strcmp("1", 
::data[row_ind][column_ind]))
+ col_used[column_ind] = true;
+ }
+ else
+ {
+-  if(data[row_ind][column_ind] && strlen(data[row_ind][column_ind]))
++  if(::data[row_ind][column_ind] && 
strlen(::data[row_ind][column_ind]))
+ col_used[column_ind] = true;
+ }
+   }
+@@ -195,22 +195,22 @@ void calc_vals()
+ break;
+ case eCPU:
+ {
+-  for(unsigned int row_ind = 0; row_ind < data.size(); row_ind++)
++  for(unsigned int row_ind = 0; row_ind < ::data.size(); row_ind++)
+   {
+ double work, cpu;
+ arr[row_ind].val = 0.0;
+-if(data[row_ind].size() > column_ind
+- && sscanf(data[row_ind][column_ind - 1], "%lf", &work) == 1
+- && sscanf(data[row_ind][column_ind], "%lf", &cpu) == 1
++

CCC Hacker Camp 2023 - Last slots to take part

2023-06-26 Thread Fabio Pietrosanti (Lists)

Good morning everyone,

as usual, as the Inclusive Hacker Framework we are organising the Italian 
Hacker Embassy, the gathering of Italian hackers at the international 
hacker camp CCC CAMP, which will be held in Germany.


For anyone who would like the mystical experience of taking part in 5 days 
of conferences, talks and workshops, while camping (including a "family 
village" area for those coming with children), at the intersection of 
technology, the information is here: https://events.ccc.de/camp/2023/infos/ .


The last 22 tickets remain for CCC CAMP 2023 and the ITALIAN HACKER 
EMBASSY, 15 to 19 August in Germany. They must be purchased by Thursday 29 
June; after that it will not be possible to get any more, and ticket sales 
by the CCC itself are already closed: https://pretix.eu/italianhackersembassy/ccc2023/


Fabio




[nexa] CCC Hacker Camp 2023 - Last participation slots available

2023-06-26 Thread Fabio Pietrosanti (Lists)

Good morning everyone,

as usual, as the Inclusive Hacker Framework we are organising the Italian 
Hacker Embassy, the gathering of Italian hackers at the international 
hacker camp CCC CAMP, which will be held in Germany.


For anyone who would like the mystical experience of taking part in 5 days 
of conferences, talks and workshops, while camping (including a "family 
village" area for those coming with children), at the intersection of 
technology, the information is here: https://events.ccc.de/camp/2023/infos/ .


The last 22 tickets remain for CCC CAMP 2023 and the ITALIAN HACKER 
EMBASSY, 15 to 19 August in Germany. They must be purchased by Thursday 29 
June; after that it will not be possible to get any more, and ticket sales 
by the CCC itself are already closed: https://pretix.eu/italianhackersembassy/ccc2023/


Fabio

___
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa


[yocto] [yocto-autobuilder-helper][PATCH 3/3] scripts/test_utils.py: update test after BUILD_HISTORY_DIRECTPUSH removal

2023-06-26 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

Update getcomparisonbranch unit tests by removing BUILD_HISTORY_DIRECTPUSH
entry in fake configuration

Signed-off-by: Alexis Lothoré 
---
 scripts/test_utils.py | 29 +++--
 1 file changed, 7 insertions(+), 22 deletions(-)

diff --git a/scripts/test_utils.py b/scripts/test_utils.py
index d02e9b2a5bb3..d149dc946ccd 100755
--- a/scripts/test_utils.py
+++ b/scripts/test_utils.py
@@ -7,22 +7,7 @@ import utils
 
 class TestGetComparisonBranch(unittest.TestCase):
 TEST_CONFIG = {
-"BUILD_HISTORY_DIRECTPUSH": [
-"poky:morty",
-"poky:pyro",
-"poky:rocko",
-"poky:sumo",
-"poky:thud",
-"poky:warrior",
-"poky:zeus",
-"poky:dunfell",
-"poky:gatesgarth",
-"poky:hardknott",
-"poky:honister",
-"poky:kirkstone",
-"poky:langdale",
-"poky:master"
-], "BUILD_HISTORY_FORKPUSH": {
+   "BUILD_HISTORY_FORKPUSH": {
 "poky-contrib:ross/mut": "poky:master",
 "poky:master-next": "poky:master",
 "poky-contrib:abelloni/master-next": "poky:master"
@@ -35,9 +20,9 @@ class TestGetComparisonBranch(unittest.TestCase):
 basebranch, comparebranch = utils.getcomparisonbranch(
 self.TEST_CONFIG, repo, branch)
 self.assertEqual(
-basebranch, "master", msg="Repo/branch pair present in 
BUILD_HISTORY_DIRECTPUSH must return corresponding base branch")
+basebranch, "master", msg="Release branch in poky must return 
corresponding base branch")
 self.assertEqual(
-comparebranch, None, msg="Repo/branch pair present in 
BUILD_HISTORY_DIRECTPUSH must return corresponding compare branch")
+comparebranch, None, msg="Release branch in poky must return 
corresponding compare branch")
 
 def test_release_kirkstone(self):
 repo = "ssh://g...@push.yoctoproject.org/poky"
@@ -45,9 +30,9 @@ class TestGetComparisonBranch(unittest.TestCase):
 basebranch, comparebranch = utils.getcomparisonbranch(
 self.TEST_CONFIG, repo, branch)
 self.assertEqual(basebranch, "kirkstone",
- msg="Repo/branch pair present in 
BUILD_HISTORY_DIRECTPUSH must return corresponding base branch")
+ msg="Release branch in poky must return corresponding 
base branch")
 self.assertEqual(
-comparebranch, None, msg="Repo/branch pair present in 
BUILD_HISTORY_DIRECTPUSH must return corresponding compare branch")
+comparebranch, None, msg="Release branch in poky must return 
corresponding compare branch")
 
 def test_release_langdale(self):
 repo = "ssh://g...@push.yoctoproject.org/poky"
@@ -55,9 +40,9 @@ class TestGetComparisonBranch(unittest.TestCase):
 basebranch, comparebranch = utils.getcomparisonbranch(
 self.TEST_CONFIG, repo, branch)
 self.assertEqual(basebranch, "langdale",
- msg="Repo/branch pair present in 
BUILD_HISTORY_DIRECTPUSH must return corresponding base branch")
+ msg="Release branch in poky must return corresponding 
base branch")
 self.assertEqual(
-comparebranch, None, msg="Repo/branch pair present in 
BUILD_HISTORY_DIRECTPUSH must return corresponding compare branch")
+comparebranch, None, msg="Release branch in poky must return 
corresponding compare branch")
 
 def test_master_next(self):
 repo = "ssh://g...@push.yoctoproject.org/poky"
-- 
2.41.0





[yocto] [yocto-autobuilder-helper][PATCH 2/3] config.json: remove BUILD_HISTORY_DIRECTPUSH

2023-06-26 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

Now that BUILD_HISTORY_DIRECTPUSH has been replaced by a hardcoded
condition, remove it from config.json

Signed-off-by: Alexis Lothoré 
---
 config.json | 1 -
 1 file changed, 1 deletion(-)

diff --git a/config.json b/config.json
index e7f308d0a3f6..f271ffaa402a 100644
--- a/config.json
+++ b/config.json
@@ -5,7 +5,6 @@
 
 "BUILD_HISTORY_DIR" : "buildhistory",
 "BUILD_HISTORY_REPO" : 
"ssh://g...@push.yoctoproject.org/poky-buildhistory",
-"BUILD_HISTORY_DIRECTPUSH" : ["poky:morty", "poky:pyro", "poky:rocko", 
"poky:sumo", "poky:thud", "poky:warrior", "poky:zeus", "poky:dunfell", 
"poky:gatesgarth", "poky:hardknott", "poky:honister", "poky:kirkstone", 
"poky:langdale", "poky:mickledore", "poky:master"],
 "BUILD_HISTORY_FORKPUSH" : {"poky-contrib:ross/mut" : "poky:master", 
"poky-contrib:abelloni/master-next": "poky:master", "poky:master-next" : 
"poky:master"},
 
 "BUILDTOOLS_URL_TEMPLOCAL" : 
"/srv/autobuilder/autobuilder.yocto.io/pub/non-release/20210214-8/buildtools/x86_64-buildtools-extended-nativesdk-standalone-3.2+snapshot-7d38cc8e749aedb8435ee71847e04b353cca541d.sh",
-- 
2.41.0





[yocto] [yocto-autobuilder-helper][PATCH 0/3] replace BUILD_HISTORY_DIRECTPUSH with hardcoded condition

2023-06-26 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

This series is a follow-up to [1], which hot-fixed test results not being
pushed by the Autobuilder by enriching the BUILD_HISTORY_DIRECTPUSH variable
with mickledore. Since the issue will likely happen for all new releases,
this series brings in a better fix (suggested by Richard) to systematically
include all "main" branches, based on their name and the target repository
(poky). Since the new condition is based on the branch name, it assumes
that except for the XXX-next branches, no other custom/non-release branch
will be pushed to poky (contrary to poky-contrib).

[1] https://lists.yoctoproject.org/g/yocto/topic/99523809#60297

Alexis Lothoré (3):
  scripts/utils.py: replace BUILD_HISTORY_DIRECTPUSH with hardcoded
condition
  config.json: remove BUILD_HISTORY_DIRECTPUSH
  scripts/test_utils.py: update test after BUILD_HISTORY_DIRECTPUSH
removal

 config.json   |  1 -
 scripts/test_utils.py | 29 +++--
 scripts/utils.py  | 13 +++--
 3 files changed, 18 insertions(+), 25 deletions(-)

-- 
2.41.0





[yocto] [yocto-autobuilder-helper][PATCH 1/3] scripts/utils.py: replace BUILD_HISTORY_DIRECTPUSH with hardcoded condition

2023-06-26 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

It has been observed that when a new release branch is created, it is quite
easy to forget to update the BUILD_HISTORY_DIRECTPUSH variable, which leads
to failures in autobuilder like test results not being pushed.
Replace the BUILD_HISTORY_DIRECTPUSH usage with a hardcoded condition which
validates any branch in poky representing a "main" branch, i.e all branches
not ending in "-next"

Signed-off-by: Alexis Lothoré 
---
 scripts/utils.py | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/scripts/utils.py b/scripts/utils.py
index 444b3ab55092..36b3e81bfc94 100644
--- a/scripts/utils.py
+++ b/scripts/utils.py
@@ -19,6 +19,15 @@ import fnmatch
 import glob
 import fcntl
 
+
+def is_a_main_branch(reponame, branchname):
+"""
+Checks if target repo/branch combo represent a main branch. This
+includes master and release branches in poky, while excluding "next"
+branches
+"""
+return reponame == "poky" and not branchname.endswith("-next")
+
 #
 # Check if config contains all the listed params
 #
@@ -212,7 +221,7 @@ def getbuildhistoryconfig(ourconfig, builddir, target, 
reponame, branchname, ste
 reponame = reponame.rsplit("/", 1)[1]
 if reponame.endswith(".git"):
 reponame = reponame[:-4]
-if (reponame + ":" + branchname) in 
getconfig("BUILD_HISTORY_DIRECTPUSH", ourconfig):
+if is_a_main_branch(reponame, branchname):
 base = reponame + ":" + branchname
 if (reponame + ":" + branchname) in 
getconfig("BUILD_HISTORY_FORKPUSH", ourconfig):
 base = getconfig("BUILD_HISTORY_FORKPUSH", ourconfig)[reponame 
+ ":" + branchname]
@@ -392,7 +401,7 @@ def getcomparisonbranch(ourconfig, reponame, branchname):
 comparerepo, comparebranch = base.split(":")
 print("Comparing to %s\n" % (comparebranch))
 return branchname, comparebranch
-if (reponame + ":" + branchname) in getconfig("BUILD_HISTORY_DIRECTPUSH", 
ourconfig):
+if is_a_main_branch(reponame, branchname):
 return branchname, None
 return None, None
 
-- 
2.41.0





Re: Metronome marking with non-integer value

2023-06-25 Thread Lib Lists
Hi Valentin,
thank you so much for your time, it works perfectly and the detailed
explanation is really helpful and welcome (now I need to experiment
more on my own to fully understand everything).
Interestingly, and this would have been the topic of my next message,
the resulting MIDI output is always correct whatever \tempo I put (it
can be the same \tempo for all staves). In the example below the top
staff is always set to 4 = 120 and the other staves correctly follow.
I'm wondering if this is because \scaleDurations overrides any \tempo
indication, or because of \cadenzaOn. But it's not a problem, as it
works as it should.

In any case, here below an example that uses your 'Weird tempo'
marking. Also I added a rounding function found on Stack Overflow to
avoid a visually too long decimal. However, the resulting MIDI file
doesn't show any visible rounding issue.

Thank you again,
Lib

P.S. Rational / decimal metronome markings can be useful for
analytical purposes (performance studies, or rhythmically complex /
generative music such as Nancarrow etc.). Also for music to be
performed if the piece has i.e. (multiple) click tracks or conductors,
or if it is music written for mechanical instruments. I agree though
that in the big scheme of things those are quite unusual cases :-)

\version "2.25.5"
#(ly:set-option 'midi-extension "mid")

voiceAmount = 7
% round-off taken from:
https://stackoverflow.com/questions/16302038/float-precision-and-removing-rounding-errors-in-scheme
#(define (round-off z n)
  (let ((power (expt 10 n)))
(/ (round (* power z)) power)))

\score {
  \new StaffGroup  <<
#@(map (lambda (i)
#{
  \new Staff {
\scaleDurations #(cons voiceAmount i) {
  #(make-music
'TempoChangeEvent
'metronome-count
(round-off (* i (/ 120.0 voiceAmount )) 2)
'tempo-unit
(ly:make-duration 2)
)
  \relative c'' \repeat unfold #i {{c c c c }} \bar "||"
}
  }
  #})
(iota voiceAmount voiceAmount -1))
  >>

  \layout {
\context {
  \Score
  \remove Metronome_mark_engraver
  \cadenzaOn
}
\context {
  \Staff
  \remove Time_signature_engraver
  \consists Metronome_mark_engraver
}
  }
  \midi { }
}



On Sun, 25 Jun 2023 at 14:13, Valentin Petzel  wrote:
>
> Hello Lib,
>
> this is a limitation of the Lilypond parser which implements the syntax
>
> \tempo [text] [duration = ...]
>
> Here it assumes that ... is an unsigned integer. This I think is not
> unreasonable, if you tell a musician to play something in MM 72.4 he will
> probably be a bit confused.
>
> Anyway, the solution to the problem would be to simply not use this parser
> feature. \tempo is only a parser feature and not a music function as a music
> functions do not allow the 4 = 120 syntax and does not allow having two
> arguments both of which can be optional but one being mandatory.
>
> But in the end \tempo ... still evaluates to a music event, as you can see if
> you do
>
> \displayMusic \tempo "some text" 4 = 120
>
> This creates two music events:
>
> (make-music
>   'TempoChangeEvent
>   'metronome-count
>   120
>   'tempo-unit
>   (ly:make-duration 2)
>   'text
>   "some text")
>
> is the event that is responsible for creating the tempo mark and
>
> (make-music
>   'ContextSpeccedMusic
>   'context-type
>   'Score
>   'element
>   (make-music
> 'PropertySet
> 'value
> (ly:make-moment 30)
> 'symbol
> 'tempoWholesPerMinute))
>
> is what makes midi tempo work (it basically sets Score.tempoWholesPerMinute to
> a moment of duration metronome marking * metronome base length, so in our case
> 120 * 1/4 = 30.
>
> So what you need to do to get non integral is to manually create the first
> thing:
>
> {
>   #(make-music
> 'TempoChangeEvent
> 'metronome-count
> 72.4
> 'tempo-unit
> (ly:make-duration 2)
> 'text
> "Weird tempo")
>   c
> }
>
> To get midi to work you’ll also need to do
>
> \score {
>   {
> #(make-music
>   'TempoChangeEvent
>   'metronome-count
>   72.4
>   'tempo-unit
>   (ly:make-duration 2)
>   'text
>   "Weird tempo")
> \set Score.tempoWholesPerMinute = #(ly:make-moment (* 724/10 1/4))
> c
>   }
>   \layout { }
>   \midi { }
> }
>
> If you do not want to do the conversion to rationals 72.4 → 724/10 yourself
> you can use (inexact->exact ...), but keep in mind that this will include
> rounding errors:
>
> (inexact->exact 72.4) -> 2547348539231437/35184372088832
>
> which is the representation of 72.4 with binary digits with machine precision
> (the denominator is 2^45).
&

Metronome marking with non-integer value

2023-06-25 Thread Lib Lists
Hello,

I realised that Lilypond doesn't like it if the metronome value is a
non-integer. In the example below, assigning 7 to the voiceAmount
variable triggers an 'error: not an unsigned integer'. I tried to
construct the metronome number marking as a markup, but without
success.

Any suggestions? The idea is to have the metronome marking values
automatically generated, starting from 4 = 120 in the upper staff.
Below are both a M(non-)WE and the complete example.

Thank you in advance for any help!

Cheers,
Lib

%%% MWE %%%
\version "2.25.5"
\score {
  \new Staff {
\tempo 4 = #(* 1 (/ 120 7 ))
{ c' }
  }
}
%%%


%%% COMPLETE EXAMPLE %%%
\version "2.25.5"

voiceAmount = 7

\score {
  \new StaffGroup  <<
#@(map (lambda (i)
#{
  \new Staff {
\scaleDurations #(cons voiceAmount i) {
  \tempo 4 = #(* i (/ 120 voiceAmount ))
  \relative c'' \repeat unfold #i {{c c c c }} \bar "||"
}
  }
  #})
(iota voiceAmount voiceAmount -1))
  >>

  \layout {
\context {
  \Score
  \remove Metronome_mark_engraver
  \cadenzaOn
}
\context {
  \Staff
  \remove Time_signature_engraver
  \consists Metronome_mark_engraver
}
  }
}
%%%



Re: [tor-relays] (EVENT) Tor Relay Operator Meetup - June 24, 2023 @ 18.00 UTC

2023-06-24 Thread lists
On Samstag, 24. Juni 2023 18:03:47 CEST li...@for-privacy.net wrote:
> On Dienstag, 20. Juni 2023 23:01:23 CEST gus wrote:
> > Just a friendly reminder that the Relay Operator meetup will happen this
> > Saturday, June 24 at 18 UTC.
> > 
> > ## Agenda
> > 
> > 1. Announcements
> > 
> >  - Tor Relay Operators meetup @ CCCamp 2023!
> >  - More unrestricted snowflake proxies are needed
> >  - Relays EOL (0.4.5.x) removal
> >  - IPv4 limit proposal
> > 
> > 2. Presentation about Webtunnel bridges with Tor Anti-censorship Team
> > 
> > 3. Tor Network Health proposals discussion
> > 
> >  - Meta proposal discussion
> >  - contactinfo proposal discussion
> > 
> > 4. Q&A
> > 
> > https://pad.riseup.net/p/tor-relay-op-meetup-june-keep
> 
> https://pad.riseup.net/ is down :-(
> As an alternative, the 'German riseup' systemli could be taken. systemli.org
> is hosted on its own servers at Community-IX.
> 
> https://pad.systemli.org/p/tor-relay-op-meetup-june-keep

I think gus copied the pad. Thanks. Hidden service link is:
http://mjrkrqnlf26etelsi7zpkqc3dzlrzyurvmd3jksmndarzzbugz5xctid.onion/p/tor-relay-op-meetup-june-keep

-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: [tor-relays] (EVENT) Tor Relay Operator Meetup - June 24, 2023 @ 18.00 UTC

2023-06-24 Thread lists
On Dienstag, 20. Juni 2023 23:01:23 CEST gus wrote:

> Just a friendly reminder that the Relay Operator meetup will happen this
> Saturday, June 24 at 18 UTC.
> 
> ## Agenda
> 
> 1. Announcements
>  - Tor Relay Operators meetup @ CCCamp 2023!
>  - More unrestricted snowflake proxies are needed
>  - Relays EOL (0.4.5.x) removal
>  - IPv4 limit proposal
> 
> 2. Presentation about Webtunnel bridges with Tor Anti-censorship Team
> 
> 3. Tor Network Health proposals discussion
>  - Meta proposal discussion
>  - contactinfo proposal discussion
> 
> 4. Q&A
> 
> https://pad.riseup.net/p/tor-relay-op-meetup-june-keep

https://pad.riseup.net/ is down :-(
As an alternative, the 'German riseup' systemli could be taken. systemli.org 
is hosted on its own servers at Community-IX.

https://pad.systemli.org/p/tor-relay-op-meetup-june-keep



-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: clef bass_16

2023-06-24 Thread Lib Lists
On Fri, 23 Jun 2023 at 12:24, Valentin Petzel  wrote:
>
> To be more general the clef modifier specifies a shift in steps where we start
> with 1 as unison. So as an octave is 7 steps n octaves will have a shift of
> 7*n, but we will add a 1 as we start with 1 = unison, which gives 0+1 = 1, 7+1
> = 8, 14+1 = 15, 21+1 = 22, ...
>
> Also the correct suffix would be 8va for ottava, but not 15va but 15ma for
> quindicesima, and similar then 22ma for ventiduesima.
>
> The suffix 8vb for the lower octavations is sadly quite common nowadays, but 
> quite
> stupid. It essentially assumes that 8va would be a contraction of "ottava
> alta" and thus uses 8vb as "ottava bassa", when in fact 8va is only a
> contraction of "ottava".
>
> If anything one should use 8va alta or 8aa for "ottava alta" and 8va bassa or
> 8ab for "ottava bassa" (the va or the first a should be superscript, the a/b
> normal text). The use of 8vb for "ottava bassa" leads to confusion, for if a
> score uses 8va below the staff people will then be confused whether to 
> octavate
> that part up or down.
>
> So it is better to avoid the use of 8vb and such, as it is not traditional
> notation, it will cause confusion and does not offer any additional
> information. The only case where this is relevant is when you notate an
> ottavation down above the stave, in which case 8va bassa is much clearer than
> doing 8vb.
>
> With regards to software both Finale and MuseScore have abandoned the use of
> 8vb and Dorico has opted for 8ba. The general preference seems to go in the
> direction of only printing the number, but if we print a suffix at this point
> 8vb is only advocated by Sibelius and the Lilypond glossary (which
> interestingly marks the 8va below as "unusual" as compared to the less
> traditional and more debated 8vb ... also the glossary suggests 15ma above the
> staff and 15va below the staff).
>
> Cheers,
> Valentin
>

I completely agree with you. My take is that ottava markings are such
a debated topic because of overlapping different traditions, increased
usage of instruments' lowest and highest range, the understandable
misunderstanding of abbreviations in the Italian language, different
publishers' preferences, etc.  My personal preference nowadays is to
use only the number, or '8va'. I don't like the highly disputed 8vb,
but I find it completely clear and it seems it is the standard in some
countries / communities.
Also the name of the sign itself can be confusing: is it a bracket, a
sign, a mark, or a line? I personally call it a line or a mark.
However, when I search for how to write it in Lilypond, I have to
remember to google 'ottava bracket'. Otherwise, if I search for
'octave mark' I get either the glossary page (where it is actually
called 'octave mark'), or Lilypond's octave changing mark (' or ,).

Cheers,
Lib



Re: [PATCH 2/2] arm: Add support for MVE Tail-Predicated Low Overhead Loops

2023-06-23 Thread Andre Vieira (lists) via Gcc-patches

+  /* In order to find out if the loop is of type A or B above look for the
+ loop counter: it will either be incrementing by one per iteration or
+ it will be decrementing by num_of_lanes.  We can find the loop counter
+ in the condition at the end of the loop.  */
+  rtx_insn *loop_cond = prev_nonnote_nondebug_insn_bb (BB_END (body));
+  gcc_assert (cc_register (XEXP (PATTERN (loop_cond), 0), VOIDmode)
+ && GET_CODE (XEXP (PATTERN (loop_cond), 1)) == COMPARE);

Not sure this should be an assert. If we do encounter a differently 
formed loop, we should bail out of DLSTPing for now but we shouldn't ICE.
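
For reference, a hedged sketch (illustrative only, not taken from the patch or
its testsuite; assumes an MVE-enabled target and arm_mve.h) of roughly the
"type B" loop shape being described here, with the counter decrementing by the
number of lanes:

#include <arm_mve.h>

void
vadd_tail_predicated (int32_t *a, const int32_t *b, const int32_t *c, int n)
{
  while (n > 0)
    {
      /* Build the tail predicate from the remaining element count.  */
      mve_pred16_t p = vctp32q ((uint32_t) n);
      int32x4_t vb = vld1q_z_s32 (b, p);            /* predicated loads     */
      int32x4_t vc = vld1q_z_s32 (c, p);
      vst1q_p_s32 (a, vaddq_x_s32 (vb, vc, p), p);  /* predicated add/store */
      a += 4; b += 4; c += 4;
      n -= 4;                                       /* num_of_lanes         */
    }
}

The intent of the series is that loops of this shape can drop the explicit
per-iteration vctp/VPR bookkeeping in favour of the implicit predication of
dlstp/letp.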



+  /* The loop latch has to be empty.  When compiling all the known MVE 
LoLs in
+ user applications, none of those with incrementing counters had 
any real
+ insns in the loop latch.  As such, this function has only been 
tested with

+ an empty latch and may misbehave or ICE if we somehow get here with an
+ increment in the latch, so, for sanity, error out early.  */
+  rtx_insn *dec_insn = BB_END (body->loop_father->latch);
+  if (NONDEBUG_INSN_P (dec_insn))
+gcc_unreachable ();

Similarly here I'd return false rather than gcc_unreachable ();


+  /* Find where both of those are modified in the loop body bb.  */
+  rtx condcount_reg_set = PATTERN (DF_REF_INSN (df_bb_regno_only_def_find
+(body, REGNO (condcount;
Put = on newline, breaks it down nicer.

+ counter_orig_set = XEXP (PATTERN
+   (DF_REF_INSN
+ (DF_REF_NEXT_REG
+   (DF_REG_DEF_CHAIN
+(REGNO
+  (XEXP (condcount_reg_set, 0)), 
1);

This makes me a bit nervous: can we be certain that the PATTERN of the 
next insn that sets it is indeed a set?  Heck, can we even be sure 
DF_REG_DEF_CHAIN returns a non-null?  I can't imagine why not, but maybe 
there are some constructs it can't follow up on.  Might just be worth 
checking these steps and bailing out.




+  /* When we find the vctp instruction: This may be followed by
+  a zero-extend insn to SImode.  If it is, then save the
+  zero-extended REG into vctp_vpr_generated.  If there is no
+  zero-extend, then store the raw output of the vctp.
+  For any VPT-predicated instructions we need to ensure that
+  the VPR they use is the same as the one given here and
+  they often consume the output of a subreg of the SImode
+  zero-extended VPR-reg.  As a result, comparing against the
+  output of the zero-extend is more likely to succeed.
+  This code also guarantees to us that the vctp comes before
+  any instructions that use the VPR within the loop, for the
+  dlstp/letp transform to succeed.  */

Wrong comment indent after first line.

+  rtx_insn *vctp_insn = arm_mve_get_loop_vctp (body);
+  if (!vctp_insn || !arm_mve_loop_valid_for_dlstp (body))
+return GEN_INT (1);

arm_mve_loop_valid_for_dlstp already calls arm_mve_get_loop_vctp, so 
maybe have arm_mve_loop_valid_for_dlstp return vctp_insn (or NULL) to 
indicate success or failure; that avoids looping through the BB again.


For the same reason I'd also pass vctp_insn down to 
'arm_mve_check_df_chain_back_for_implic_predic'.
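
I.e. something like (rough sketch, not tested):

/* Return the vctp insn if BODY looks like a loop we can dlstp-transform,
   or NULL otherwise, so callers don't have to walk the BB again.  */
static rtx_insn *
arm_mve_loop_valid_for_dlstp (basic_block body)
{
  rtx_insn *vctp_insn = arm_mve_get_loop_vctp (body);
  if (!vctp_insn)
    return NULL;
  /* ... the rest of the existing checks, returning NULL on failure ...  */
  return vctp_insn;
}

and in the caller:

  rtx_insn *vctp_insn = arm_mve_loop_valid_for_dlstp (body);
  if (!vctp_insn)
    return GEN_INT (1);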


+ if (GET_CODE (SET_SRC (single_set (next_use1))) == ZERO_EXTEND)
+   {
+ rtx_insn *next_use2 = NULL;

Are we sure single_set can never return 0 here? Maybe worth an extra 
check and bail out if it does?
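
E.g. (sketch; substitute whatever "give up" looks like in this function):

  rtx next_use1_set = single_set (next_use1);
  /* Bail out if NEXT_USE1 is not a single set, rather than segfault on
     SET_SRC below.  */
  if (!next_use1_set)
    return false;
  if (GET_CODE (SET_SRC (next_use1_set)) == ZERO_EXTEND)
    {
      /* ... as before ...  */
    }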


+   /* If the insn pattern requires the use of the VPR value from the
+ vctp as an input parameter.  */
s/as an input parameter./as an input parameter for predication./

+ /* None of registers USE-d by the instruction need can be the VPR
+vctp_vpr_generated.  This blocks the optimisation if there any
+instructions that use the optimised-out VPR value in any way
+other than as a VPT block predicate.  */

Reword this slightly to be less complex, something like:
This instruction USE-s the vctp_vpr_generated for something other than 
predication; this blocks the transformation, as we are not allowed to 
optimise the VPR value away.


Will continue reviewing next week :)

On 15/06/2023 12:47, Stamatis Markianos-Wright via Gcc-patches wrote:

     Hi all,

     This is the 2/2 patch that contains the functional changes needed
     for MVE Tail Predicated Low Overhead Loops.  See my previous email
     for a general introduction of MVE LOLs.

     This support is added through the already existing loop-doloop
     mechanisms that are used for non-MVE dls/le looping.

     Mid-end changes are:

     1) Relax the loop-doloop mechanism in the mid-end to allow for
    decrement numbers other than -1 and for `count` to be an
    rtx containing a simple REG (which in this case will contain
    the number of elements to be processed), 

Re: [PATCH 2/2] arm: Add support for MVE Tail-Predicated Low Overhead Loops

2023-06-23 Thread Andre Vieira (lists) via Gcc-patches

+  if (insn != arm_mve_get_loop_vctp (body))
+{

It's probably a good idea to invert the condition here and return false 
early; that helps reduce the indentation in this function.
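
I.e.:

  if (insn == arm_mve_get_loop_vctp (body))
    return false;
  /* ... rest of the function, now one level less indented ...  */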



+   /* Starting from the current insn, scan backwards through the insn
+  chain until BB_HEAD: "for each insn in the BB prior to the current".
+   */

There's trailing whitespace after "insn", but I'd also rewrite this bit. 
The "for each insn in the BB prior to the current" part is superfluous 
and even confusing to me. How about:
"Scan backwards from the current INSN through the instruction chain 
until the start of the basic block.  "



I find 'that previous insn' confusing, as you don't mention any 
previous insn before it. So how about something along the lines of:
'If a previous insn defines a register that INSN uses, then return true 
if...'



Do we need to check: 'insn != prev_insn' ? Any reason why you can't 
start the loop with:

'for (rtx_insn *prev_insn = PREV_INSN (insn);'

Now I also found a case where things might go wrong in:
+   /* Look at all the DEFs of that previous insn: if one of them is on
+  the same REG as our current insn, then recurse in order to check
+  that insn's USEs.  If any of these insns return true as
+  MVE_VPT_UNPREDICATED_INSN_Ps, then the whole chain is affected
+  by the change in behaviour from being placed in dlstp/letp loop.
+   */
+   df_ref prev_insn_defs = NULL;
+   FOR_EACH_INSN_DEF (prev_insn_defs, prev_insn)
+ {
+   if (DF_REF_REGNO (insn_uses) == DF_REF_REGNO (prev_insn_defs)
+   && insn != prev_insn
+   && body == BLOCK_FOR_INSN (prev_insn)
+   && !arm_mve_vec_insn_is_predicated_with_this_predicate
+(insn, vctp_vpr_generated)
+   && arm_mve_check_df_chain_back_for_implic_predic
+(prev_insn, vctp_vpr_generated))
+ return true;
+ }

The 'body == BLOCK_FOR_INSN (prev_insn)' check hinted me at it: if a def 
comes from outside of the BB (so outside of the loop's body), then it is 
by definition unpredicated by the vctp.  I think you want to check that, 
if prev_insn defines a register used by insn, you return true when 
prev_insn isn't in the same BB or when its chain is not predicated, 
i.e. in addition to 
'!arm_mve_vec_insn_is_predicated_with_this_predicate (insn, 
vctp_vpr_generated) && arm_mve_check_df_chain_back_for_implic_predic 
(prev_insn, vctp_vpr_generated)' you also check 
'body != BLOCK_FOR_INSN (prev_insn)'.
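
Roughly (untested):

  df_ref prev_insn_defs = NULL;
  FOR_EACH_INSN_DEF (prev_insn_defs, prev_insn)
    {
      /* A def coming from outside the loop body is by definition not
         predicated by the vctp, so treat it as affected straight away.  */
      if (DF_REF_REGNO (insn_uses) == DF_REF_REGNO (prev_insn_defs)
          && insn != prev_insn
          && (body != BLOCK_FOR_INSN (prev_insn)
              || (!arm_mve_vec_insn_is_predicated_with_this_predicate
                    (insn, vctp_vpr_generated)
                  && arm_mve_check_df_chain_back_for_implic_predic
                       (prev_insn, vctp_vpr_generated))))
        return true;
    }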



I also found some other issues; this currently loloops (i.e. it gets
converted into a tail-predicated low-overhead loop):

uint16_t  test (uint16_t *a, int n)
{
  uint16_t res =0;
  while (n > 0)
{
  mve_pred16_t p = vctp16q (n);
  uint16x8_t va = vldrhq_u16 (a);
  res = vaddvaq_u16 (res, va);
  res = vaddvaq_p_u16 (res, va, p);
  a += 8;
  n -= 8;
}
  return res;
}

But it shouldn't, because there is currently no handling of 
across-vector instructions. Luckily, in MVE all across-vector 
instructions have the side-effect that they write to a scalar register; 
even the vshlcq instruction does (it writes to a scalar carry output).
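
So one way to catch these (just an idea, completely untested, and the
helper name is made up) would be to flag any MVE instruction in the loop
that consumes vector operands but also defines a scalar integer-mode
register, and require such instructions to be explicitly predicated:

/* Return true if INSN takes vector inputs and writes a scalar integer
   register, which for MVE is a tell-tale of an across-vector operation
   (vaddvaq, vmaxvq, the vshlcq carry output, ...).  */
static bool
arm_mve_insn_has_scalar_result_from_vector_p (rtx_insn *insn)
{
  bool vector_input = false;
  df_ref ref;
  FOR_EACH_INSN_USE (ref, insn)
    if (VECTOR_MODE_P (GET_MODE (DF_REF_REG (ref))))
      vector_input = true;
  if (!vector_input)
    return false;
  FOR_EACH_INSN_DEF (ref, insn)
    if (GET_MODE_CLASS (GET_MODE (DF_REF_REG (ref))) == MODE_INT)
      return true;
  return false;
}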


This did lead me to find an ICE with:

uint16x8_t  test (uint16_t *a, int n)
{
  uint16x8_t res = vdupq_n_u16 (0);
  while (n > 0)
{
  uint16_t carry = 0;
  mve_pred16_t p = vctp16q (n);
  uint16x8_t va = vldrhq_u16 (a);
  res = vshlcq_u16 (va, &carry, 1);
  res = vshlcq_m_u16 (res, &carry, 1, p);
  a += 8;
  n -= 8;
}
  return res;
}

This is because:
+ /* If the USE is outside the loop body bb, or it is inside, but
+is an unpredicated store to memory.  */
+ if (BLOCK_FOR_INSN (insn) != BLOCK_FOR_INSN (next_use_insn)
+|| (arm_mve_vec_insn_is_unpredicated_or_uses_other_predicate
+(next_use_insn, vctp_vpr_generated)
+   && mve_memory_operand
+   (SET_DEST (single_set (next_use_insn)),
+GET_MODE (SET_DEST (single_set (next_use_insn))
+   return true;

Assumes single_set doesn't return 0.
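
Same pattern as before: take the single_set first and test it, e.g.
(sketch):

  rtx next_use_set = single_set (next_use_insn);
  /* If the USE is outside the loop body bb, or it is inside, but
     is an unpredicated store to memory.  */
  if (BLOCK_FOR_INSN (insn) != BLOCK_FOR_INSN (next_use_insn)
      || (next_use_set
          && arm_mve_vec_insn_is_unpredicated_or_uses_other_predicate
               (next_use_insn, vctp_vpr_generated)
          && mve_memory_operand
               (SET_DEST (next_use_set),
                GET_MODE (SET_DEST (next_use_set)))))
    return true;

(Whether a user that isn't a single_set should be treated conservatively
as "return true" here is worth deciding explicitly too.)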

Let's deal with these issues and I'll continue to review.

On 15/06/2023 12:47, Stamatis Markianos-Wright via Gcc-patches wrote:

     Hi all,

     This is the 2/2 patch that contains the functional changes needed
     for MVE Tail Predicated Low Overhead Loops.  See my previous email
     for a general introduction of MVE LOLs.

     This support is added through the already existing loop-doloop
     mechanisms that are used for non-MVE dls/le looping.

     Mid-end changes are:

     1) Relax the loop-doloop mechanism in the mid-end to allow for
    decrement numbers other than -1 and for `count` to be an
    rtx containing a simple REG (which in this case will contain
    the number of elements to be processed), rather
    than an expression for calculating the number of iterations.
  

Re: [OE-core] [PATCH 4/9] runqemu-ifup: remove only our taps

2023-06-23 Thread Jörg Sommer via lists . openembedded . org
On 22 June 2023 19:01, openembedded-core@lists.openembedded.org wrote:
> If there are other tap interfaces than the interfaces created by the
> runqemu-* scripts, these interfaces are not ignored. This is now fixed
> by filtering the interfaces for a specific prefix in the interface name.
>
> Signed-off-by: Adrian Freihofer 
> ---
>  scripts/runqemu-ifup | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/scripts/runqemu-ifup b/scripts/runqemu-ifup
> index fe4492e78b7..c65ceaf1c83 100755
> --- a/scripts/runqemu-ifup
> +++ b/scripts/runqemu-ifup
> @@ -45,7 +45,7 @@ if [ -z "$OE_TAP_NAME" ]; then
>  fi
>
>  if taps=$(ip tuntap list 2>/dev/null); then
> -   tap_no_last=$(echo "$taps" |cut -f 1 -d ":" |sed "s/$OE_TAP_NAME//g" 
> | sort -rn | head -n 1)
> +   tap_no_last=$(echo "$taps" |cut -f 1 -d ":" |grep -E 
> "^$OE_TAP_NAME.*" |sed "s/$OE_TAP_NAME//g" | sort -rn | head -n 1)

You can combine the cut+grep+sed to `sed "/^$OE_TAP_NAME/!d; s///; s/:.*//"`


Regards,

Jörg Sommer

Software Developer / Programmierer
--

Navimatix GmbH

Tatzendpromenade 2

07745 Jena
  

T: 03641 - 327 99 0

F: 03641 - 526 306

M: joerg.som...@navimatix.de

www.navimatix.de
  



Geschäftsführer: Steffen Späthe, Jan Rommeley

Registergericht: Amtsgericht Jena, HRB 501480






Re: [OE-core] [PATCH 1/9] runqemu-ifup: remove uid parameter

2023-06-23 Thread Jörg Sommer via lists . openembedded . org
On 22 June 2023 19:01, openembedded-core@lists.openembedded.org wrote:
> ip tuntap does not need the uid, it was an unused variable/parameter.
> Backward compatibility should be fine.
>
> Signed-off-by: Adrian Freihofer 
> ---
> scripts/runqemu-ifup | 13 -
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/scripts/runqemu-ifup b/scripts/runqemu-ifup
> index 5dc765dee82..26714518020 100755
> --- a/scripts/runqemu-ifup
> +++ b/scripts/runqemu-ifup
> @@ -21,7 +21,7 @@
> #
>
> usage() {
> -   echo "sudo $(basename $0)  "
> +   echo "sudo $(basename $0) "
> }
>
> if [ $EUID -ne 0 ]; then
> @@ -29,17 +29,20 @@ if [ $EUID -ne 0 ]; then
> exit 1
> fi
>
> -if [ $# -ne 2 ]; then
> +if [ $# -eq 2 ]; then
> +   echo "Warning: uid parameter is ignored. It is no longer needed."

Would it be better to send this message to stderr (use `>&2`)?

Regards

Jörg Sommer

Software Developer / Programmierer
--

Navimatix GmbH

Tatzendpromenade 2

07745 Jena


T: 03641 - 327 99 0

F: 03641 - 526 306

M: joerg.som...@navimatix.de

www.navimatix.de




Geschäftsführer: Steffen Späthe, Jan Rommeley

Registergericht: Amtsgericht Jena, HRB 501480






Re: [PATCH 2/2] arm: Add support for MVE Tail-Predicated Low Overhead Loops

2023-06-22 Thread Andre Vieira (lists) via Gcc-patches
Some comments below, all quite minor. I'll continue to review tomorrow, 
I need a fresher brain for arm_mve_check_df_chain_back_for_implic_predic 
 ;)


+static int
+arm_mve_get_vctp_lanes (rtx x)
+{
+  if (GET_CODE (x) == SET && GET_CODE (XEXP (x, 1)) == UNSPEC
+  && (XINT (XEXP (x, 1), 1) == VCTP || XINT (XEXP (x, 1), 1) == 
VCTP_M))

+{
+  switch (GET_MODE (XEXP (x, 1)))
+   {
+ case V16BImode:
+   return 16;
+ case V8BImode:
+   return 8;
+ case V4BImode:
+   return 4;
+ case V2QImode:
+   return 2;
+ default:
+   break;
+   }
+}
+  return 0;
+}

I think you can replace the switch with something along the lines of:
machine_mode mode = GET_MODE (XEXP (x, 1));
return VECTOR_MODE_P (mode) ? GET_MODE_NUNITS (mode) : 0;


+/* Check if an insn requires the use of the VPR_REG, if it does, return the
+   sub-rtx of the VPR_REG.  The `type` argument controls whether
+   this function should:
+   * For type == 0, check all operands, including the OUT operands,
+ and return the first occurance of the VPR_REG.

s/occurance/occurrence/

+ bool requires_vpr;
+  extract_constrain_insn (insn);

indent of requires_vpr is off.

+  if (type == 1 && (recog_data.operand_type[op] == OP_OUT
+   || recog_data.operand_type[op] == OP_INOUT))
+   continue;
+  else if (type == 2 && (recog_data.operand_type[op] == OP_IN
+|| recog_data.operand_type[op] == OP_INOUT))
+   continue;

Why skip INOUT? I guess this will become clear when I see the uses, but 
I'm wondering whether 'only check the input operands.' is clear enough. 
Maybe 'check operands that are input only.' would be more accurate?


+ /* Fetch the reg_class for each entry and check it against the
+  * VPR_REG reg_class.  */

Remove leading * on the second line.

+
+/* Wrapper function of arm_get_required_vpr_reg with type == 1, so return
+   something only if the VPR reg is an input operand to the insn.  */

When talking about a function parameter in comments capitalize (INSN) 
the name. Same for:


+/* Wrapper function of arm_get_required_vpr_reg with type == 2, so return
+   something only if the VPR reg is the return value, an output of, or is
+   clobbered by the insn.  */

+/* Return true if an insn is an MVE instruction that VPT-predicable, but in
+   its unpredicated form, or if it is predicated, but on a predicate other
+   than vpr_reg.  */

In this one, 'is an MVE instruction that is VPT-predicable' would also 
be better, I think.



On 15/06/2023 12:47, Stamatis Markianos-Wright via Gcc-patches wrote:
>  Hi all,
>
>  This is the 2/2 patch that contains the functional changes needed
>  for MVE Tail Predicated Low Overhead Loops.  See my previous email
>  for a general introduction of MVE LOLs.
>
>  This support is added through the already existing loop-doloop
>  mechanisms that are used for non-MVE dls/le looping.
>
>  Mid-end changes are:
>
>  1) Relax the loop-doloop mechanism in the mid-end to allow for
> decrement numbers other than -1 and for `count` to be an
> rtx containing a simple REG (which in this case will contain
> the number of elements to be processed), rather
> than an expression for calculating the number of iterations.
>  2) Added a new df utility function: `df_bb_regno_only_def_find` that
> will return the DEF of a REG only if it is DEF-ed once within the
> basic block.
>
>  And many things in the backend to implement the above optimisation:
>
>  3)  Implement the `arm_predict_doloop_p` target hook to instruct the
>  mid-end about Low Overhead Loops (MVE or not), as well as
>  `arm_loop_unroll_adjust` which will prevent unrolling of any loops
>  that are valid for becoming MVE Tail_Predicated Low Overhead Loops
>  (unrolling can transform a loop in ways that invalidate the dlstp/
>  letp transformation logic and the benefit of the dlstp/letp loop
>  would be considerably higher than that of unrolling)
>  4)  Appropriate changes to the define_expand of doloop_end, new
>  patterns for dlstp and letp, new iterators,  unspecs, etc.
>  5) `arm_mve_loop_valid_for_dlstp` and a number of checking functions:
> * `arm_mve_dlstp_check_dec_counter`
> * `arm_mve_dlstp_check_inc_counter`
> * `arm_mve_check_reg_origin_is_num_elems`
> * `arm_mve_check_df_chain_back_for_implic_predic`
> * `arm_mve_check_df_chain_fwd_for_implic_predic_impact`
> This all, in some way or another, are running checks on the loop
> structure in order to determine if the loop is valid for dlstp/letp
> transformation.
>  6) `arm_attempt_dlstp_transform`: (called from the define_expand of
>  doloop_end) this function re-checks for the loop's suitability for


Re: Vertical Spacing with Tuplets and Lyrics

2023-06-21 Thread Lib Lists
On Wed, 21 Jun 2023 at 00:33, Jean Abou Samra  wrote:
>
> Le mardi 20 juin 2023 à 23:44 +0300, Lib Lists a écrit :
>
> I think the problem is caused by Lilypond putting too many systems on the 
> page (have no idea why).
>
> The precise positioning of grobs such as tuplet brackets depends on the 
> horizontal spacing, which is not known yet during page breaking, so LilyPond 
> uses estimates at this stage. For tuplet brackets specifically, no such 
> estimates are currently implemented, so their height is not taken into 
> account, which is the reason why LilyPond ends up placing more systems on the 
> page than is reasonable in this code. This usually isn't a problem since 
> scores are usually not made of continuous tuplets (if there are tuplets 
> across all the score, you would customarily omit them after writing out the 
> first few).

Wow, interesting, and thank you (this is very useful to know)! I was
just reading yesterday about pure-unpure containers in Lilypond,
trying to figure out some possible use cases.

I think I've encountered a similar issue over the years, but I had no
idea what the reason was. I (think I) never asked for help here,
thinking it was a bug, and ended up instructing Lilypond to put fewer
systems per page or modifying the layout, staff size, etc. I personally
use a lot of brackets, and in general there are many cases in which the
bracket shouldn't be omitted. Is there any technical advantage in not
having estimates for the brackets?

Cheers,
Lib



Re: Vertical Spacing with Tuplets and Lyrics

2023-06-20 Thread Lib Lists
Maybe even better than writing explicitly the number of systems per page,
\paper { page-breaking-system-system-spacing.padding = 2 }
(https://lilypond.org/doc/v2.25/Documentation/notation/paper-variables-for-page-breaking)
seems to work.

On Tue, 20 Jun 2023 at 23:44, Lib Lists  wrote:
>
> On Tue, 20 Jun 2023 at 22:18, Mogens Lemvig Hansen  wrote:
> >
> > It struck we as weird to put the lyrics inside the DrumStaff, so I tried 
> > something closer to what I would have done for a choir:
> >
> > \score {
> >
> >   <<
> >
> > \new DrumStaff <<
> >
> >   \new DrumVoice { \voiceOne \CyBars }
> >
> >   \new DrumVoice { \voiceTwo \DrBars }
> >
> > >>
> >
> > \new Lyrics { \PrOne }
> >
> >   >>
> >
> > }
> >
> >
> >
> > Looks better to my eye.
> >
> >
> >
> > Regards,
> >
> > Mogens
>
> Yet if the lyrics are something as:
>
> PrOne = \lyricmode {
>   \override LyricText.self-alignment-X = #LEFT
>   \skip 1 "Lyrics for introduction"1
>   \skip 2 "Lyrics for introduction"2
>   \skip 1. "Lyrics for introduction"2
>   \skip 1. "Lyrics for introduction"2
>   \skip 1. "Lyrics for introduction"2
> }
>
> They still clash with the tuplet bracket.
>
> I think the problem is caused by Lilypond putting too many systems on
> the page (have no idea why). The verbose output says
> 'warning: compressing over-full page by 15.2 staff-spaces
> warning: page 1 has been compressed'.
>
> I added \paper { max-systems-per-page = 12 } and it seems to work as it 
> should.
>
> Cheers,
> Lib



Re: Vertical Spacing with Tuplets and Lyrics

2023-06-20 Thread Lib Lists
On Tue, 20 Jun 2023 at 22:18, Mogens Lemvig Hansen  wrote:
>
> It struck we as weird to put the lyrics inside the DrumStaff, so I tried 
> something closer to what I would have done for a choir:
>
> \score {
>
>   <<
>
> \new DrumStaff <<
>
>   \new DrumVoice { \voiceOne \CyBars }
>
>   \new DrumVoice { \voiceTwo \DrBars }
>
> >>
>
> \new Lyrics { \PrOne }
>
>   >>
>
> }
>
>
>
> Looks better to my eye.
>
>
>
> Regards,
>
> Mogens

Yet if the lyrics are something as:

PrOne = \lyricmode {
  \override LyricText.self-alignment-X = #LEFT
  \skip 1 "Lyrics for introduction"1
  \skip 2 "Lyrics for introduction"2
  \skip 1. "Lyrics for introduction"2
  \skip 1. "Lyrics for introduction"2
  \skip 1. "Lyrics for introduction"2
}

They still clash with the tuplet bracket.

I think the problem is caused by Lilypond putting too many systems on
the page (have no idea why). The verbose output says
'warning: compressing over-full page by 15.2 staff-spaces
warning: page 1 has been compressed'.

I added \paper { max-systems-per-page = 12 } and it seems to work as it should.

Cheers,
Lib



Re: Grace note spacing & alignment in score

2023-06-20 Thread Lib Lists
Hi Michael,

I'm also interested in knowing if there's a better solution, but in
the end this is fairly easy to implement, and one can decide how
compressed the grace spacing is by changing its duration. In the
example you posted it makes sense to have the grace notes starting
with the percussion upbeat because the score looks rhythmically neat,
and the spacing gives a realistic idea of the actual duration of the
grace (depending on the tempo).
Interestingly, I found this post from 2019 on Scoring Notes, though it
refers to grace notes at the end of a bar:
https://www.scoringnotes.com/tips/grace-notes-at-the-end-of-a-bar-in-sibelius/.
Maybe the solution proposed is even better than mine, as it includes
the rest before the grace notes in the same tuplet.
Cheers,
Lib

On Tue, 20 Jun 2023 at 15:44, Michael Seifert  wrote:
>
> Hey there Lib,
>
> Thanks for that — and sorry for the delayed reply.  I was hoping that 
> there would be some combination of overrides that could be set to fix the 
> spacing automatically (SpacingSpanner.strict-grace-spacing in combination 
> with something else?) but this will work, faute de mieux.
>
> I was going to submit a bug report about the collision issues, but it 
> appears that someone already did so about 10 years ago: 
> https://gitlab.com/lilypond/lilypond/-/issues/2630
>
> Take care,
>
> Mike Seifert
>
> > On Jun 15, 2023, at 12:54 PM, Lib Lists  wrote:
> >
> > Hi,
> > Here is a hack, and among the various things to be fixed, the beam
> > thickness of the fake grace notes needs to be checked more carefully
> > against the 'real' grace notes. I calculated the starting point of the
> > fake grace according to the percussion part, so it begins on the
> > upbeat of the 3rd beat. Another possibility is to use tuplets, but the
> > results are pretty much the same. I'm not a Lilypond expert, so
> > probably there are better solutions to this issue.
> > Cheers,
> > Lib
> >
> > \version "2.25.5"
> >
> > RHpianonotes = {
> >  \time 2/2
> >  \clef bass
> >
> >  \relative c {
> >\transposition c'
> >\stemUp \change Staff = "lower" \grace { d,32( a' bes \change
> > Staff = "upper" \stemDown e f fis} a8) \stemNeutral r8 r2*1/4
> >\stemUp \change Staff = "lower" \override Beam.length-fraction =
> > 1.1 \magnifyMusic 0.70  { d,,32*12/6[( a' bes \change Staff = "upper"
> > { \stemDown e f fis]}} a8) \stemNeutral r8
> >r2*1/4 \stemUp \change Staff = "lower" \magnifyMusic 0.70  {
> > d,,32*12/6[( a' bes \change Staff = "upper" { \stemDown e f fis]}} a8)
> > \stemNeutral r8  r4
> >  }
> > }
> >
> > LHpianonotes = {
> >  \time 2/2
> >  \clef bass
> >  \relative c {
> >\grace{s8.} s1*2
> >  }
> > }
> >
> > bassnotes = {
> >  \relative c {
> >\clef bass
> >\time 2/2
> >\transposition c
> >\grace{s8.} fis'8->( f) d4 r8  \clef tenor a'8 b[ a]
> >\tuplet 3/2 {c[->( b) a]} eis8. fis16 d4 r8 cis8~
> >  }
> > }
> >
> > \score{
> >  \layout {
> >\context {
> >  \Score
> >  %  \override SpacingSpanner.strict-grace-spacing = ##f
> >
> >}
> >  }
> >
> >  <<
> >\new PianoStaff
> ><<
> >  \new Staff = "upper" {\RHpianonotes}
> >  \new Staff = "lower" {\LHpianonotes}
> >>>
> >\new Staff="Staff_bass"
> ><< \bassnotes  >>
> >>>
> > }
> >
> > On Wed, 14 Jun 2023 at 05:04, Michael Seifert  wrote:
> >>
> >>Hello everyone,
> >>
> >>I’m working on a score transcription project, and I’m having some 
> >> trouble getting “nice” grace note placement in a section involving a piano 
> >> part and a double bass.
> >>
> >>Specifically, if I use the default settings in the snippet below, 
> >> the grace notes in the piano part cause extra space to be inserted between 
> >> two of the eighth notes in the bass line.  This makes the rhythm harder to 
> >> read for the conductor.
> >>
> >>If, on the other hand, I use the "\override 
> >> SpacingSpanner.strict-grace-spacing = ##t” line (currently commented out), 
> >> then the spacing of the bass line looks fine.  But the accidentals for the 
> >> grace notes collide with nearby noteheads, and the grace notes at the 
> >> start of the measure collide with the time signatu

Re: dvmrpd start causes kernel panic: assertion failed

2023-06-20 Thread Why 42? The lists account.


On Tue, Jun 13, 2023 at 03:12:01PM +0300, Vitaliy Makkoveev wrote:
> On Tue, Jun 13, 2023 at 10:04:35AM +0200, Why 42? The lists account. wrote:
> So, you tried this diff, and it fixed panic? 
> 
> > The system is running the 7.3 release, can I apply that patch directly
> > there somehow, or would I need to be using current / a snapshot?
> > 
> 
> This diff should be applicable to 7.3 release. 
 

Hi Again,

I was able to reboot with the new, patched kernel yesterday. The dvmrpd
multicast routing daemon now starts and runs perfectly. Well, it
starts and logs messages; now I just have to figure out how to use it ;-)

So the panic is fixed, thanks very much indeed!

Cheers,
Robb.



Re: After update, vim reports undefined symbols in libruby32.so

2023-06-18 Thread Why 42? The lists account.


On Tue, Jun 13, 2023 at 09:37:32AM +0200, Theo Buehler wrote:
> ...
> That's because libruby32 did not link explicitly against libc++abi, which
> is now needed on aarch64 and amd64 for the Rust-based YJIT compiler.
> 
> Fixed in this commit: 
> https://marc.info/?l=openbsd-ports-cvs=168663240314909=2
> 
> Once you get ruby-3.2.2p0 on your machine either by updating after it
> made it into snapshot packages or by building the latest lang/ruby/3.2
> yourself, this noise should go away.

That updated package has fixed it, thanks!

Cheers,
Robb.



Re: recent malloc changes

2023-06-18 Thread Why 42? The lists account.


On Sun, Jun 18, 2023 at 04:46:46PM +0200, Why 42? The lists account. wrote:
> Jun 18 16:18:23 mjoelnir mdnsd: startup
> Jun 18 16:18:23 mjoelnir mdnsd: fatal: bind: Address already in use
> Jun 18 16:23:02 mjoelnir mdnsd: startup
> Jun 18 16:23:02 mjoelnir mdnsd: fatal: bind: Address already in use

So, this issue "Address in use" was because mdnsd and avahi-daemon were
fighting over the same port, likely 5353. My fault.

Once I stop the avahi daemon, I can start mdnsd OK (and vice versa).

It still seems that mdnsd exits with 0 when it fails to start, e.g. when
given a nonexistent interface:

mjoelnir:robb 18.06 17:41:08 # mdnsd le0
mjoelnir:robb 18.06 17:41:12 # echo $?
0

mjoelnir:robb 18.06 17:41:18 # pgrep -l mdnsd
mjoelnir:robb 18.06 17:41:24 [$?==1]#

mjoelnir:robb 18.06 17:41:24 [$?==1]# grep mdnsd /var/log/daemon | tail -3
Jun 18 17:41:12 mjoelnir mdnsd: startup
Jun 18 17:41:12 mjoelnir mdnsd: Unknown interface le0
Jun 18 17:41:12 mjoelnir mdnsd: fatal: Couldn't find any interface

This then confuses the rcctl script.

So IMHO that would also be a potential improvement.

Cheers,
Robb.



Re: recent malloc changes

2023-06-18 Thread Why 42? The lists account.


On Sun, Jun 18, 2023 at 03:34:27PM +0200, Otto Moerbeek wrote:
> So what's in your malloc options?

Er, nothing:
mjoelnir:robb 18.06 15:57:29 [$?==1]# echo $MALLOC_OPTIONS

mjoelnir:robb 18.06 15:57:40 # echo $MALLOC_OPTIONS | cat -vet
$

mjoelnir:robb 18.06 15:59:25 [$?==1]# unset MALLOC_OPTIONS

mjoelnir:robb 18.06 15:59:30 # mdnsd -d
malloc() warning: unknown char in MALLOC_OPTIONS
malloc() warning: unknown char in MALLOC_OPTIONS


But I think that I now see how I managed to trigger this ...

I commented out the "mdnsd_flags" entry in /etc/rc.conf.local while
trying to debug the "simple-scan" application.

(simple-scan starts but doesn't do anything and I noticed that I had
previously (2021) left myself a hint in that file: "Reverted to avahid
since scanner not detected with mdnsd")

I would have probably done better to have changed that line to say "=NO"
instead.

Still doesn't start though ... Now I see this log message:
mjoelnir:robb 18.06 16:23:21 # grep mdnsd /var/log/daemon
...
Jun 18 16:18:23 mjoelnir mdnsd: startup
Jun 18 16:18:23 mjoelnir mdnsd: fatal: bind: Address already in use
Jun 18 16:23:02 mjoelnir mdnsd: startup
Jun 18 16:23:02 mjoelnir mdnsd: fatal: bind: Address already in use

Interesting that the startup script returns OK:
mjoelnir:robb 18.06 16:32:43 [$?==1]# rcctl start mdnsd
mdnsd(ok)

Log file:
Jun 18 16:32:45 mjoelnir mdnsd: startup
Jun 18 16:32:45 mjoelnir mdnsd: fatal: bind: Address already in use

mjoelnir:robb 18.06 16:34:21 # pgrep -l dns
mjoelnir:robb 18.06 16:34:26 [$?==1]#

Maybe the daemon isn't returning a bad exit status?

I tried "ktrace -di mdnsd" but couldn't spot any obvious error. It would
be nice if the error message included the bind address, I mean the port
number.

I'll try a reboot ...

Cheers,
Robb.



Re: recent malloc changes

2023-06-18 Thread Why 42? The lists account.


On Sun, Jun 04, 2023 at 01:03:14PM +0200, Otto Moerbeek wrote:
> Hello,
> 
> In the last few weeks a series of malloc diffs have been committed.
> The last one today. That one allows malloc to detect more cases of
> write-after-free. While that is a good thing, it might uncover latent
> bugs in applications. 
> 
> So if you are running current or snapshots, please keep an eye out for
> issues reported by malloc. If we get too many reports of issues I
> might change things so the extra write-after-free detection is only
> enabled when malloc option S is active.

Well, I don't know if this related, but I just noticed this:
mjoelnir:robb 18.06 15:17:30 # rcctl check mdnsd
mdnsd(failed)

mjoelnir:robb 18.06 15:17:32 [$?==1]# rcctl restart mdnsd
mdnsd(ok)

mjoelnir:robb 18.06 15:17:37 # rcctl check mdnsd
mdnsd(failed)

mjoelnir:robb 18.06 15:17:39 [$?==1]# pgrep -l mdnsd
mjoelnir:robb 18.06 15:18:06 [$?==1]#

mjoelnir:robb 18.06 15:18:07 [$?==1]# mdnsd -h
mdnsd: unknown option -- h
usage: mdnsd [-dw] ifname [ifnames...]
usage: mdnsd -v

mjoelnir:robb 18.06 15:18:22 [$?==1]# mdnsd -dv
malloc() warning: unknown char in MALLOC_OPTIONS
malloc() warning: unknown char in MALLOC_OPTIONS
OpenMdns Daemon 0.7 (2017-03-10)
Copyright (C) 2010-2014 Christiano F. Haesbaert
mjoelnir:robb 18.06 15:18:30 #

mjoelnir:robb 18.06 15:18:33 # pgrep -l mdnsd
mjoelnir:robb 18.06 15:18:36 [$?==1]# mdnsd -d
malloc() warning: unknown char in MALLOC_OPTIONS
malloc() warning: unknown char in MALLOC_OPTIONS
usage: mdnsd [-dw] ifname [ifnames...]
usage: mdnsd -v

This is with a 7.3 snapshot from about a week ago:
mjoelnir:robb 18.06 15:22:22 # ls -ltr /bsd*
-rwx--  1 root  wheel  25245701 Jun 11 13:42 /bsd.sp
-rw---  1 root  wheel   4674809 Jun 11 13:42 /bsd.rd
-rwx--  1 root  wheel  25364480 Jun 11 13:44 /bsd.booted
-rwx--  1 root  wheel  25375272 Jun 11 13:56 /bsd

mjoelnir:robb 18.06 15:21:53 # uname -a
OpenBSD mjoelnir.fritz.box 7.3 GENERIC.MP#1230 amd64



CCC Camp 2023 and last call for tickets, 24 June 12.00

2023-06-18 Thread Fabio Pietrosanti (Lists)

CCC Hacker Camp 2023 is approaching, and the Italian Hackers Embassy will be there!

To join the group and contribute: https://t.me/italianembassycongress

Saturday the 24th at 12.00 is the last chance to get a ticket -> 
https://events.ccc.de/2023/05/29/camp23-presale/#camp23-presale-en


-naif




[nexa] CCC Hacker Camp 2023 and the Italian Hackers Embassy

2023-06-17 Thread Fabio Pietrosanti (Lists)

CCC Hacker Camp 2023 is approaching, and the Italian Hackers Embassy will be there!
To join the group and contribute: https://t.me/italianembassycongress

Saturday the 24th at 12.00 is the last chance to get a ticket -> 
https://events.ccc.de/2023/05/29/camp23-presale/#camp23-presale-en


-naif



Re: [gentoo-user] Loading modules prevents shutdown

2023-06-17 Thread Wols Lists

On 17/06/2023 12:57, dhk wrote:
Thanks for the tips.  After spending a lot of time on and off for a few 
weeks trying to keep /lib/modules on its own partition, it just did not 
work right; the system was scrapped and rebuilt per the trivial solution 
with /lib/modules on the root partition.  Now it works as expected.


A good explanation as to why /lib/modules cannot be a separate partition 
would be nice, but after learning the hard way again, it stays 
on the root partition going forward.


The kernel needs to load modules to boot fully. If mount hasn't run by 
the time the kernel needs a module, you have a problem ...


Even worse, if mount needs the kernel to load a module, you're stuffed ...

Cheers,
Wol



Fedora on a 2015 iMac?

2023-06-17 Thread Lists
I have a 27" late 2015 iMac with i5 processor and 16 GB of RAM. It runs El 
Capitan just fine after wiping the drive and doing an Internet install.  

This would make a fabulous Fedora workstation! However, I have had trouble 
getting Fedora installer to run much at all.  

I have an F38 install ISO dd'd to a thumb disk. On my Dell laptop, the 
installer starts fine.  

When I put the thumb disk on the iMac and hold  during boot, I see "EFI 
Boot" without issue, but every attempt to get it to boot or start 
into the installer has failed.  

Is there anybody here who has had success loading/running Fedora 37/38 on 
Intel iMacs?  

Thanks  
Ben Smith




Re: [yocto] [yocto-autobuilder-helper][PATCH 0/3] fix test results storage for mickledore

2023-06-16 Thread Alexis Lothoré via lists . yoctoproject . org
On 6/16/23 18:30, Richard Purdie wrote:
> On Fri, 2023-06-16 at 16:58 +0200, Alexis Lothoré wrote:
>> On 6/15/23 22:34, Alexis Lothoré wrote:
>>> Hello Richard, Michael,
>>> On 6/15/23 15:41, Richard Purdie wrote:
 On Wed, 2023-06-14 at 10:56 +0200, Alexis Lothoré via 
 lists.yoctoproject.org wrote:
> From: Alexis Lothoré 
>
> There must be a more robust rework to do (because the issue will likely
> happen on each major delivery), but I aimed for the quick and small fix to
> quickly bring back tests results storage without breaking other things in
> the process

 Thanks, I've merged this as it is a good first set of steps.

 As I mentioned, I think we should hardcode poky + "not ending with -
 next" as the test, then we shouldn't run into this issue again.
>>>
>>> ACK, will do the fix

 I'd also like to retroactively push the test results for 4.2 since we
 have them and should be able to merge them onto the branch. I'd then
 like to see what the revised 4.3 M1 report looks like.
>>>
>>> I have started importing the archive kindly prepared by Michael in 
>>> poky-contrib
>>> test-results repository, but I am struggling a bit regarding regression 
>>> report
>>> generation with freshly imported result. I still have to confirm if it is 
>>> the
>>> generated tag that is faulty or if it is a kind of an edge case in 
>>> resulttool
>>
>> So, I have managed to generate the regression report locally (there's likely 
>> a
>> tag issue for older tests stored in test-results to be circumvented in
>> resulttool), and it is a bit disappointing. The report is 13MB large, and is
>> filled once again with false positives, likely due to non-static ptest names,
>> themselves likely due to leaky build logs. Here's a sample:
>>
>> ptestresult.gcc-g++-user.c-c++-common/Wbidi-chars-ranges.c  -std=gnu++14
>> expected multiline pattern lines 13-17 was found: "\s*/\* \}
>> if \(isAdmin\)  begin admins only \*/[^\n\r]*\n
>> \^\n
>> \|   \|
>> \|[^\n\r]*\n   \|   \|
>> end of bidirectional context[^\n\r]*\n   U\+202E \(RIGHT-TO-LEFT
>> OVERRIDE\) U\+2066 \(LEFT-TO-RIGHT ISOLATE\)[^\n\r]*\n": PASS -> None
>> ptestresult.gcc-g++-user.c-c++-common/Wbidi-chars-ranges.c  -std=gnu++14
>> expected multiline pattern lines 26-31 was found: " /\* end admins only
>>  \{ \*/[^\n\r]*\n   
>> 
>> \^\n\|  \|\|[^\n\r]*\n
>>  \|  \|end of bidirectional context[^\n\r]*\n
>> \|  U\+2066 \(LEFT-TO-RIGHT ISOLATE\)[^\n\r]*\n
>>   U\+202E \(RIGHT-TO-LEFT OVERRIDE\)[^\n\r]*\n": PASS -> None
>>
>> Most of this noise is about gcc ptests, there is also a bit about python3 and
>> ltp. I manually trimmed gcc false positive to reach a reasonable size, here 
>> it is:
>> https://pastebin.com/rYZ3qYMK
> 
> Thanks for getting us the diff!
> 
> Going through the details there, most of it is "expected" due to
> changes in version of the components. I did wonder if we could somehow
> show that version change?
> 
> I'm starting to wonder if we should:
> 
> a) file two bugs for cleaning up the python3 and gcc test results
> b) summarise the python3 and gcc test results in the processing rather
> than printing in full if the differences exceed some threshold (40
> changes?)

I would say yes and yes, and I like the idea of setting a general threshold,
either an absolute one or a percentage of the total number of test cases in
the current test.

> 
> Basically we need to make this report useful somehow, even if we have
> to exclude some data for now until we can better process it.

Absolutely. I will use this report as a base to bring a new batch of
improvements. I will also add the stats I have been talking about earlier, to
know, for example, whether for a given test case the generated noise is really
affecting the whole test or is just a drop in the ocean.
> 
> I'm open to other ideas...
> 
> Cheers,
> 
> Richard
> 
> 
> 
> 
> 

-- 
Alexis Lothoré, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com





Re: [yocto] [yocto-autobuilder-helper][PATCH 0/3] fix test results storage for mickledore

2023-06-16 Thread Alexis Lothoré via lists . yoctoproject . org
On 6/15/23 22:34, Alexis Lothoré wrote:
> Hello Richard, Michael,
> On 6/15/23 15:41, Richard Purdie wrote:
>> On Wed, 2023-06-14 at 10:56 +0200, Alexis Lothoré via lists.yoctoproject.org 
>> wrote:
>>> From: Alexis Lothoré 
>>>
>>> There must be a more robust rework to do (because the issue will likely
>>> happen on each major delivery), but I aimed for the quick and small fix to
>>> quickly bring back tests results storage without breaking other things in
>>> the process
>>
>> Thanks, I've merged this as it is a good first set of steps.
>>
>> As I mentioned, I think we should hardcode poky + "not ending with -
>> next" as the test, then we shouldn't run into this issue again.
> 
> ACK, will do the fix
>>
>> I'd also like to retroactively push the test results for 4.2 since we
>> have them and should be able to merge them onto the branch. I'd then
>> like to see what the revised 4.3 M1 report looks like.
> 
> I have started importing the archive kindly prepared by Michael in 
> poky-contrib
> test-results repository, but I am struggling a bit regarding regression report
> generation with freshly imported result. I still have to confirm if it is the
> generated tag that is faulty or if it is a kind of an edge case in resulttool

So, I have managed to generate the regression report locally (there's likely a
tag issue for older tests stored in test-results to be circumvented in
resulttool), and it is a bit disappointing. The report is 13MB large, and is
filled once again with false positives, likely due to non-static ptest names,
themselves likely due to leaky build logs. Here's a sample:

ptestresult.gcc-g++-user.c-c++-common/Wbidi-chars-ranges.c  -std=gnu++14
expected multiline pattern lines 13-17 was found: "\s*/\* \}
if \(isAdmin\)  begin admins only \*/[^\n\r]*\n
\^\n
\|   \|
\|[^\n\r]*\n   \|   \|
end of bidirectional context[^\n\r]*\n   U\+202E \(RIGHT-TO-LEFT
OVERRIDE\) U\+2066 \(LEFT-TO-RIGHT ISOLATE\)[^\n\r]*\n": PASS -> None
ptestresult.gcc-g++-user.c-c++-common/Wbidi-chars-ranges.c  -std=gnu++14
expected multiline pattern lines 26-31 was found: " /\* end admins only
 \{ \*/[^\n\r]*\n   
\^\n\|  \|\|[^\n\r]*\n
 \|  \|end of bidirectional context[^\n\r]*\n
\|  U\+2066 \(LEFT-TO-RIGHT ISOLATE\)[^\n\r]*\n
  U\+202E \(RIGHT-TO-LEFT OVERRIDE\)[^\n\r]*\n": PASS -> None

Most of this noise is about gcc ptests, there is also a bit about python3 and
ltp. I manually trimmed gcc false positive to reach a reasonable size, here it 
is:
https://pastebin.com/rYZ3qYMK


> 
> Kind regards,
> 
>> Cheers,
>>
>> Richard
> 
> 

-- 
Alexis Lothoré, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com





Re: [yocto] [yocto-autobuilder-helper][PATCH 0/3] fix test results storage for mickledore

2023-06-15 Thread Alexis Lothoré via lists . yoctoproject . org
Hello Richard, Michael,
On 6/15/23 15:41, Richard Purdie wrote:
> On Wed, 2023-06-14 at 10:56 +0200, Alexis Lothoré via lists.yoctoproject.org 
> wrote:
>> From: Alexis Lothoré 
>>
>> There must be a more robust rework to do (because the issue will likely
>> happen on each major delivery), but I aimed for the quick and small fix to
>> quickly bring back tests results storage without breaking other things in
>> the process
> 
> Thanks, I've merged this as it is a good first set of steps.
> 
> As I mentioned, I think we should hardcode poky + "not ending with -
> next" as the test, then we shouldn't run into this issue again.

ACK, will do the fix
> 
> I'd also like to retroactively push the test results for 4.2 since we
> have them and should be able to merge them onto the branch. I'd then
> like to see what the revised 4.3 M1 report looks like.

I have started importing the archive kindly prepared by Michael in poky-contrib
test-results repository, but I am struggling a bit regarding regression report
generation with freshly imported result. I still have to confirm if it is the
generated tag that is faulty or if it is a kind of an edge case in resulttool

Kind regards,

> Cheers,
> 
> Richard


-- 
Alexis Lothoré, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com





Re: Grace note spacing & alignment in score

2023-06-15 Thread Lib Lists
Hi,
Here is a hack, and among the various things to be fixed, the beam
thickness of the fake grace notes needs to be checked more carefully
against the 'real' grace notes. I calculated the starting point of the
fake grace according to the percussion part, so it begins on the
upbeat of the 3rd beat. Another possibility is to use tuplets, but the
results are pretty much the same. I'm not a Lilypond expert, so
probably there are better solutions to this issue.
Cheers,
Lib

\version "2.25.5"

RHpianonotes = {
  \time 2/2
  \clef bass

  \relative c {
\transposition c'
\stemUp \change Staff = "lower" \grace { d,32( a' bes \change
Staff = "upper" \stemDown e f fis} a8) \stemNeutral r8 r2*1/4
\stemUp \change Staff = "lower" \override Beam.length-fraction =
1.1 \magnifyMusic 0.70  { d,,32*12/6[( a' bes \change Staff = "upper"
{ \stemDown e f fis]}} a8) \stemNeutral r8
r2*1/4 \stemUp \change Staff = "lower" \magnifyMusic 0.70  {
d,,32*12/6[( a' bes \change Staff = "upper" { \stemDown e f fis]}} a8)
\stemNeutral r8  r4
  }
}

LHpianonotes = {
  \time 2/2
  \clef bass
  \relative c {
\grace{s8.} s1*2
  }
}

bassnotes = {
  \relative c {
\clef bass
\time 2/2
\transposition c
\grace{s8.} fis'8->( f) d4 r8  \clef tenor a'8 b[ a]
\tuplet 3/2 {c[->( b) a]} eis8. fis16 d4 r8 cis8~
  }
}

\score{
  \layout {
\context {
  \Score
  %  \override SpacingSpanner.strict-grace-spacing = ##f

}
  }

  <<
\new PianoStaff
<<
  \new Staff = "upper" {\RHpianonotes}
  \new Staff = "lower" {\LHpianonotes}
>>
\new Staff="Staff_bass"
<< \bassnotes  >>
  >>
}

On Wed, 14 Jun 2023 at 05:04, Michael Seifert  wrote:
>
> Hello everyone,
>
> I’m working on a score transcription project, and I’m having some 
> trouble getting “nice” grace note placement in a section involving a piano 
> part and a double bass.
>
> Specifically, if I use the default settings in the snippet below, the 
> grace notes in the piano part cause extra space to be inserted between two of 
> the eighth notes in the bass line.  This makes the rhythm harder to read for 
> the conductor.
>
> If, on the other hand, I use the "\override 
> SpacingSpanner.strict-grace-spacing = ##t” line (currently commented out), 
> then the spacing of the bass line looks fine.  But the accidentals for the 
> grace notes collide with nearby noteheads, and the grace notes at the start 
> of the measure collide with the time signature (or, in other situations, the 
> preceding bar line).
>
> What I would like is for grace notes at the start of a bar to lead to 
> additional space insertion in other lines, but mid-bar arpeggios not to lead 
> to additional space insertion.  This is effectively the convention suggested 
> in Gould (p. 127–8), and it's also what’s done in the hand-engraved score I’m 
> transcribing (screenshots attached;  they also include a percussion part 
> which I’ve omitted from my MWE.)
>
> Any ideas on how to accomplish this?  Thanks in advance for your help!
>
> Mike Seifert
> Quaker Hill, CT, USA
>
>
>
> Code follows:
> —
> \version "2.24.0"
>
> RHpianonotes = {
>
> \time 2/2
> \clef bass
>
> \relative c {
> \transposition c'
>
> \stemUp \change Staff = "lower" \grace { d,32( a' bes \change Staff = "upper" 
> \stemDown e f fis} a8) \stemNeutral r8 r4
> r4 \stemUp \change Staff = "lower" \grace { d,,32( a' bes \change Staff = 
> "upper" \stemDown e f fis} a8) \stemNeutral r8
>
> r2 \stemUp \change Staff = "lower" \grace { d,,32( a' bes \change Staff = 
> "upper" \stemDown e f fis} a8) \stemNeutral r8 r4
>
> }
> }
>
> LHpianonotes = {
>
> \time 2/2
>
> \clef bass
>
> \relative c {
>
> \grace{s8.} s1*2
> }
> }
>
> bassnotes = {
>
> \relative c {
>
> \clef bass
> \time 2/2
> \transposition c
>
> \grace{s8.} fis'8->( f) d4 r8 \clef tenor a'8 b a
> \tuplet 3/2 {c->( b) a} eis8. fis16 d4 r8 cis8~
>
> }
>
> }
>
> \score{
>
> \layout {
> \context {
>   \Score
> % \override SpacingSpanner.strict-grace-spacing = ##t
> }
>   }
>
> <<
>
> \new PianoStaff
> <<
> \new Staff = "upper" {\RHpianonotes}
>   \new Staff = "lower" {\LHpianonotes}
> >>
>
> \new Staff="Staff_bass"
> << \bassnotes  >>
>
> >>
> }
>




Re: [yocto] [yocto-autobuilder-helper][PATCH 0/3] fix test results storage for mickledore

2023-06-14 Thread Alexis Lothoré via lists . yoctoproject . org
On 6/14/23 12:31, Richard Purdie wrote:
> On Wed, 2023-06-14 at 10:56 +0200, Alexis Lothoré via
> lists.yoctoproject.org wrote:
>> From: Alexis Lothoré 
>>
>> This series is a follow-up for the 4.3_M1.rc1 regression report issue.
>>
>> It has been observed that the report is empty. This issue is linked to
>> configuration description in yocto-autobuilder-helper, and has been
>> identified through the following steps:
>> - empty report is supposed to be a comparison between yocto-4.2 (4.2.rc3)
>>   and 4.3_M1.rc1
>> - yocto-4.2 results are almost empty: we only find test results from Intel
>>   QA (pushed _after_ the AB build) and not the AB test results
>> - tests results are managed by send-qa-email.send-qa-email uses resulttool
>>   to systematically gather and store test results in local git directory
>> - however, it looks for basebranch/comparebranch to know if those results
>>   can be pushed onto git server, and those variables depend on config.json
>>   content
>> - yocto-4.2 (4.2.rc3) has been built on release branch mickledore
>>   (https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/5212)
>> - since mickledore is not yet described in config.json, send-qa-email
>>   considers it as a "work" branch (contrary to a "release" branch) and does
>>   not push test results
>>
>> As a consequence:
>> - first commit brings in python logger
>> - second commit adds a warning when such case happen, since we are able to
>>   detect it
>> - third fix actually adds mickledore as a release branch to properly store
>>   again test results
>>
>> There must be a more robust rework to do (because the issue will likely
>> happen on each major delivery), but I aimed for the quick and small fix to
>> quickly bring back tests results storage without breaking other things in
>> the process
>>
>> Alexis Lothoré (3):
>>   scripts/send-qa-email: use logger instead of raw prints
>>   scripts/send-qa-email: print warning when test results are not stored
>>   config.json: add mickledore as direct push branch for test results
> 
> Thanks for the analysis. I agree we need to somehow fix this properly.
> One solution might be to always push for poky if the branch name
> doesn't end with -next?

That might work indeed. If we are sure enough that no custom/feature branch will
be used in poky with send-qa-email (ie, only in poky-contrib), I can do the fix
this way
> 
> Since we have the release artefacts for the release, could we add the
> test results after the fact now?>
> Id' be interested to see the 4.3 M1 to 4.2 comparison rerun with that
> added.

I am not sure where to find those artifacts for yocto-4.2? If you are
referring to https://autobuilder.yocto.io/pub/, yocto-4.2 has already been
removed from there. And if you are referring to the archived release on main
site
(https://downloads.yoctoproject.org/releases/yocto/yocto-4.2/poky-21790e71d55f417f27cd51fae9dd47549758d4a0.tar.bz2),
it only contains a single 40-line testresults.json, so that's definitely not the
full AB test results.

> 
> Cheers,
> 
> Richard
> 
> 
> 
> 
> 

-- 
Alexis Lothoré, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com





[yocto] [yocto-autobuilder-helper][PATCH 0/3] fix test results storage for mickledore

2023-06-14 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

This series is a follow-up for the 4.3_M1.rc1 regression report issue.

It has been observed that the report is empty. This issue is linked to
configuration description in yocto-autobuilder-helper, and has been
identified through the following steps:
- empty report is supposed to be a comparison between yocto-4.2 (4.2.rc3)
  and 4.3_M1.rc1
- yocto-4.2 results are almost empty: we only find test results from Intel
  QA (pushed _after_ the AB build) and not the AB test results
- tests results are managed by send-qa-email.send-qa-email uses resulttool
  to systematically gather and store test results in local git directory
- however, it looks for basebranch/comparebranch to know if those results
  can be pushed onto git server, and those variables depend on config.json
  content
- yocto-4.2 (4.2.rc3) has been built on release branch mickledore
  (https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/5212)
- since mickledore is not yet described in config.json, send-qa-email
  considers it as a "work" branch (contrary to a "release" branch) and does
  not push test results

As a consequence:
- first commit brings in python logger
- second commit adds a warning when such case happen, since we are able to
  detect it
- third fix actually adds mickledore as a release branch to properly store
  again test results

There must be a more robust rework to do (because the issue will likely
happen on each major delivery), but I aimed for the quick and small fix to
quickly bring back tests results storage without breaking other things in
the process

Alexis Lothoré (3):
  scripts/send-qa-email: use logger instead of raw prints
  scripts/send-qa-email: print warning when test results are not stored
  config.json: add mickledore as direct push branch for test results

 config.json  |  2 +-
 scripts/send_qa_email.py | 17 -
 2 files changed, 13 insertions(+), 6 deletions(-)

-- 
2.41.0





[yocto] [yocto-autobuilder-helper][PATCH 3/3] config.json: add mickledore as direct push branch for test results

2023-06-14 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

Now that mickledore is released, builds are executed on the mickledore release
branch. If it is not properly described in config.json, it will be considered a
"work" branch, and as a consequence test results will not be pushed onto the
test results git repository.
Add a mickledore entry in config.json to fix test results storage.

Signed-off-by: Alexis Lothoré 
---
Example of such failure is AB build 5212
(https://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/5212/steps/29/logs/stdio)
for yocto-4.2 (mickledore release), which lead to empty regression report
for 4.3_M1.rc1 since it was compared to 4.2
---
 config.json | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/config.json b/config.json
index 7fe2baea3428..e7f308d0a3f6 100644
--- a/config.json
+++ b/config.json
@@ -5,7 +5,7 @@
 
 "BUILD_HISTORY_DIR" : "buildhistory",
 "BUILD_HISTORY_REPO" : 
"ssh://g...@push.yoctoproject.org/poky-buildhistory",
-"BUILD_HISTORY_DIRECTPUSH" : ["poky:morty", "poky:pyro", "poky:rocko", 
"poky:sumo", "poky:thud", "poky:warrior", "poky:zeus", "poky:dunfell", 
"poky:gatesgarth", "poky:hardknott", "poky:honister", "poky:kirkstone", 
"poky:langdale", "poky:master"],
+"BUILD_HISTORY_DIRECTPUSH" : ["poky:morty", "poky:pyro", "poky:rocko", 
"poky:sumo", "poky:thud", "poky:warrior", "poky:zeus", "poky:dunfell", 
"poky:gatesgarth", "poky:hardknott", "poky:honister", "poky:kirkstone", 
"poky:langdale", "poky:mickledore", "poky:master"],
 "BUILD_HISTORY_FORKPUSH" : {"poky-contrib:ross/mut" : "poky:master", 
"poky-contrib:abelloni/master-next": "poky:master", "poky:master-next" : 
"poky:master"},
 
 "BUILDTOOLS_URL_TEMPLOCAL" : 
"/srv/autobuilder/autobuilder.yocto.io/pub/non-release/20210214-8/buildtools/x86_64-buildtools-extended-nativesdk-standalone-3.2+snapshot-7d38cc8e749aedb8435ee71847e04b353cca541d.sh",
-- 
2.41.0





[yocto] [yocto-autobuilder-helper][PATCH 1/3] scripts/send-qa-email: use logger instead of raw prints

2023-06-14 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

As for other scripts in yocto-autobuilder-helper or oecore, use python
logger class instead of raw print calls to allow log level distinction

Signed-off-by: Alexis Lothoré 
---
 scripts/send_qa_email.py | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/scripts/send_qa_email.py b/scripts/send_qa_email.py
index 4613bff892e0..8a8454d09c2f 100755
--- a/scripts/send_qa_email.py
+++ b/scripts/send_qa_email.py
@@ -11,6 +11,7 @@ import sys
 import subprocess
 import tempfile
 import re
+import logging
 
 import utils
 
@@ -64,8 +65,8 @@ def get_regression_base_and_target(basebranch, comparebranch, 
release, targetrep
 #Default case: return previous tag as base
 return get_previous_tag(targetrepodir, release), basebranch
 
-def generate_regression_report(querytool, targetrepodir, base, target, 
resultdir, outputdir):
-print(f"Comparing {target} to {base}")
+def generate_regression_report(querytool, targetrepodir, base, target, 
resultdir, outputdir, log):
+log.info(f"Comparing {target} to {base}")
 
 try:
 regreport = subprocess.check_output([querytool, "regression-report", 
base, target, '-t', resultdir])
@@ -73,9 +74,13 @@ def generate_regression_report(querytool, targetrepodir, 
base, target, resultdir
f.write(regreport)
 except subprocess.CalledProcessError as e:
 error = str(e)
-print(f"Error while generating report between {target} and {base} : 
{error}")
+log.error(f"Error while generating report between {target} and {base} 
: {error}")
 
 def send_qa_email():
+# Setup logging
+logging.basicConfig(level=logging.INFO, format="%(levelname)s: 
%(message)s")
+log = logging.getLogger('send-qa-email')
+
 parser = utils.ArgParser(description='Process test results and optionally 
send an email about the build to prompt QA to begin testing.')
 
 parser.add_argument('send',
@@ -132,7 +137,7 @@ def send_qa_email():
 try:
 subprocess.check_call(["git", "clone", 
"g...@push.yoctoproject.org:yocto-testresults", tempdir, "--depth", "1"] + 
cloneopts)
 except subprocess.CalledProcessError:
-print("No comparision branch found, falling back to master")
+log.info("No comparision branch found, falling back to master")
 subprocess.check_call(["git", "clone", 
"g...@push.yoctoproject.org:yocto-testresults", tempdir, "--depth", "1"])
 
 # If the base comparision branch isn't present regression 
comparision won't work
@@ -157,7 +162,7 @@ def send_qa_email():
 
 regression_base, regression_target = 
get_regression_base_and_target(basebranch, comparebranch, args.release, 
targetrepodir)
 if regression_base and regression_target:
-generate_regression_report(querytool, targetrepodir, 
regression_base, regression_target, tempdir, args.results_dir)
+generate_regression_report(querytool, targetrepodir, 
regression_base, regression_target, tempdir, args.results_dir, log)
 
 finally:
 subprocess.check_call(["rm", "-rf",  tempdir])
-- 
2.41.0
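
A minimal, self-contained sketch of the logging pattern this patch adopts (the generate_report helper and the release values below are hypothetical stand-ins, not taken from send_qa_email.py):

import logging
import subprocess

# Configure the root handler once, then hand a named logger to helpers
# instead of calling print() directly.
logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")
log = logging.getLogger('send-qa-email')

def generate_report(base, target, log):
    # INFO for normal progress, ERROR for failures; callers can filter by level.
    log.info(f"Comparing {target} to {base}")
    try:
        subprocess.check_output(["true"])
    except subprocess.CalledProcessError as e:
        log.error(f"Error while generating report between {target} and {base}: {e}")

generate_report("yocto-4.2", "yocto-4.3_M1.rc1", log)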





[yocto] [yocto-autobuilder-helper][PATCH 2/3] scripts/send-qa-email: print warning when test results are not stored

2023-06-14 Thread Alexis Lothoré via lists . yoctoproject . org
From: Alexis Lothoré 

The test results push command depends on the basebranch and comparebranch
variables, which are computed from the config.json content. If this file is
not in sync with the current release branch, test results will be properly
stored in the git directory but not pushed to the test results server. Since we
are able to detect this scenario, print at least a warning, without
breaking the current build, since it could be a release.

Signed-off-by: Alexis Lothoré 
---
 scripts/send_qa_email.py | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/scripts/send_qa_email.py b/scripts/send_qa_email.py
index 8a8454d09c2f..fc7fccc6f6f7 100755
--- a/scripts/send_qa_email.py
+++ b/scripts/send_qa_email.py
@@ -159,6 +159,8 @@ def send_qa_email():
 elif basebranch:
 subprocess.check_call(["git", "push", "--all"], cwd=tempdir)
 subprocess.check_call(["git", "push", "--tags"], cwd=tempdir)
+elif is_release_version(args.release) and not comparebranch and 
not basebranch:
+log.warning("Test results not published on release version. 
Faulty AB configuration ?")
 
 regression_base, regression_target = 
get_regression_base_and_target(basebranch, comparebranch, args.release, 
targetrepodir)
 if regression_base and regression_target:
-- 
2.41.0
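
A rough sketch of the ordering this hunk relies on, with hypothetical stand-ins for the surrounding send_qa_email() logic (the push calls and the release flag are only mimicked here):

import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")
log = logging.getLogger('send-qa-email')

def publish_results(release_is_release, basebranch, comparebranch):
    # The earlier branches handle the normal push paths; the new elif only
    # fires when a release build resolved neither branch, i.e. config.json
    # is likely out of sync with the release branch.
    if comparebranch:
        log.info("Would push to the comparison branch")
    elif basebranch:
        log.info("Would push all refs and tags")
    elif release_is_release and not comparebranch and not basebranch:
        log.warning("Test results not published on release version. Faulty AB configuration?")

publish_results(True, None, None)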





Re: Compilation time (was: Generate \scaleDurations procedurally)

2023-06-13 Thread Lib Lists
Dear Jean (and Valentin!),
thank you so much! The page breaker was the problem, and after
applying your fix the compilation time on my machine is around 3
minutes. I'll soon be able to finish the score, just need to fix the
duration ratio and the layout.
Again, thank you!
Lib
P.S. I just found an old pdf containing two pages from the same piece,
made with Lilypond 2.9.19, I guess around the year 2006. Unfortunately
I couldn't find the .ly file...



Re: [gentoo-user] trying to get sd card reader to work

2023-06-13 Thread Wols Lists

On 13/06/2023 03:01, John Blinka wrote:

Good to know it all works, but if you're sticking a new card in an old
reader, they may not be compatible.


Don’t know what constitutes new/old, but these are <1 year old cards. 
Satisfied with empirical evidence that it all works. Have written mp3 
files to this card and played them via Arduino/attached mp3 board. 
Sufficient for my purposes. Amazed that it all works! (Pushing beyond my 
comfort level with card reader/Arduino/mp3 board/wiring all this stuff 
together.)


Basically, just a little bit of history ...

When these cards came out, they were true SD, with a max capacity of 4GB 
(4GB cards are actually rare as hen's teeth ...)


As 2GB became cheap and common, the technology transitioned to SDHC, so 
your 4GB card is almost certainly SDHC, and will not work in a true SD 
reader (like my 2009-era satnav).


That had a limit of - iirc - 32GB, and as that became common the 
technology transitioned to SDXC. This is where my knowledge becomes 
rather hazy...


But anyway, wherever the card is newer than the reader, you have the 
possibility of problems. It rarely happens, but I've been bitten twice 
trying to upgrade the chips in cameras ...


Cheers,
Wol



Re: dvmrpd start causes kernel panic: assertion failed

2023-06-13 Thread Why 42? The lists account.
On Mon, Jun 12, 2023 at 11:56:43PM +0300, Vitaliy Makkoveev wrote:
> ...
> We have missing kernel lock around (*if_sysctl)(). Diff below should fix
> it.
> 
> Index: sys/netinet/ip_mroute.c
> ===
> RCS file: /cvs/src/sys/netinet/ip_mroute.c,v
> retrieving revision 1.138
> diff -u -p -r1.138 ip_mroute.c
> --- sys/netinet/ip_mroute.c   19 Apr 2023 20:03:51 -  1.138
> +++ sys/netinet/ip_mroute.c   12 Jun 2023 20:55:05 -
> @@ -718,7 +718,9 @@ add_vif(struct socket *so, struct mbuf *
>   satosin(_addr)->sin_len = sizeof(struct sockaddr_in);
>   satosin(_addr)->sin_family = AF_INET;
>   satosin(_addr)->sin_addr = zeroin_addr;
> + KERNEL_LOCK();
>   error = (*ifp->if_ioctl)(ifp, SIOCADDMULTI, (caddr_t));
> + KERNEL_UNLOCK();
>   if (error)
>   return (error);
>   }

Cool, well, not cool, but you know what I mean ... another problem fixed.
:-) Thanks for the support.

The system is running the 7.3 release, can I apply that patch directly
there somehow, or would I need to be using current / a snapshot?

Thanks again.

Cheers,
Robb.



Compilation time (was: Generate \scaleDurations procedurally)

2023-06-13 Thread Lib Lists
Hello,
When trying to compile the complete score of the piece below (88
staves and 120 quarter notes), I noticed it was taking many hours (I
didn't finish the compilation). Trying with shorter versions, I got
the following compilation times:
- 2x quarter notes -> 16.7",
- 4x quarter notes -> 1'17",
- 8x quarter notes -> > 57'47" (!)

Is there anything that could be optimised to reduce the compilation
time? I'm afraid the final version could potentially take many days to
compile.

I'm on a MacBook Air M2 with macOS Ventura 13.4, Lilypond 2.25.5

Thank you for any hint,
Lib

- - -

\version "2.25.5"

#(set-default-paper-size "a0")
#(set-global-staff-size 15)

mus = { \relative c'  \repeat unfold 1 { c c c c }} % modify the
\repeat unfold value for testing, final version should be 40

#(define my-semitone->pitch
   (make-semitone->pitch
(music-pitches #{ { c cis d ees e f fis g gis a bes b } #})))

\new StaffGroup  <<
  #@(map (lambda (i)
   #{
  \new Staff {
\scaleDurations #(cons 120 i) {
  \transpose c' #(my-semitone->pitch (- (- 120 i))) {
#(cond
  ((<= 108 i 120) #{ \ottava 2 #})
  ((<= 97 i 107) #{ \ottava 1 #})
  ((<= 72 i 96) #{ \ottava 0 #})
  ((<= 48 i 71) #{ \clef bass #})
  ((<= 33 i 47) #{ \clef bass \ottava -1 #} )
  (else #{ #}))
\mus
  }
}
  }
#})
 (iota 88 120 -1))
>>

   \layout {
 indent = #0
\context {
  \Score
  \override SpacingSpanner.base-shortest-duration = #(ly:make-moment 1/4)
  proportionalNotationDuration = #(ly:make-moment 1/10)
  \override SpacingSpanner.uniform-stretching = ##t
  \override SpacingSpanner.strict-note-spacing = ##t
  forbidBreakBetweenBarLines = ##f
  \cadenzaOn
}
\context {
  \Staff
  \remove Time_signature_engraver
}
\context {
  \Voice
  \remove Forbid_line_break_engraver
}
  }



Re: dvmrpd start causes kernel panic: assertion failed

2023-06-12 Thread Why 42? The lists account.


On Wed, Jun 07, 2023 at 03:50:29PM +0300, Vitaliy Makkoveev wrote:
> > ...
> > Please, share your dvmrpd.conf.
> > 
> 
> Also, you could try to use ktrace to provide some additional info.


Hi Again,

On site I had to power cycle the ThinkPad to be able to get control.

The contents of the dvmrpd config file should be visible here:
dvmrpd.conf+ifconfig.jpg    https://paste.c-net.org/SlimeReply

In order to be able to show progress, I tried using "mrouted" instead.
It seems to have resulted in much the same panic.
So apparently the problem may not be specific to dvmrpd.
Maybe something related to the USB Ethernet adaptor? I see some
references to both "ure" and "usb" in the stack traces ...

See for example:
mrouted_panic.jpg   https://paste.c-net.org/YolandaSamir
ddb_show_panic+trace.jpghttps://paste.c-net.org/TrackParent
ddbcpu0+1.jpg   https://paste.c-net.org/HansonAinsley
ddbcpu3+4+5.jpg https://paste.c-net.org/MidtermsComposer
ddbcpu6+7.jpg   https://paste.c-net.org/CostaScratchy

Sorry about all the photos, it was the best I could do. I'm driving the
system via a pretty rubbish KVM switch.

Hope this helps with the analysis. In the meantime I'll look around for
some other multicast routing solution.

Cheers,
Robb.



Re: [tor-relays] Comcast blocks ALL traffic with tor relays

2023-06-12 Thread lists
On Sonntag, 11. Juni 2023 13:46:06 CEST xmrk2 via tor-relays wrote:

> Background: I am running a lightning node, lightning is a layer 2 protocol
> to scale Bitcoin. Lightning nodes need to be connected to each other
> ideally 24/7. I was contacted by the operator of another Lightning node,
> complaining that he cannot connect to my node. He is Comcast customer, I am
> not. I was also running a tor relay on the same public IPv4 address.
> 
> 
> Any ideas on how to combat this?
It might help to configure the Lightning node as a hidden service.
I offer my Monero and Bitcoin RPC & P2P ports as a hidden service.

And additionally set the SocksPort flag 'OnionTrafficOnly' on the client and
hidden service side.
SocksPort 9050 OnionTrafficOnly
# Tell the tor client to only connect to .onion addresses in response to 
SOCKS5 requests on this connection.
# This is equivalent to NoDNSRequest, NoIPv4Traffic, NoIPv6Traffic.

> I was thinking about including some false positives in tor relay list.
I wouldn't do that. I think you'll end up on the bad-relay list in no time.
I would rather write to the Comcast network admins first. Give them good 
examples. E.g. in Germany the ISPs support Tor (NetCologne, Deutsche Telekom, 
...)

Mirror:
https://torproject.netcologne.de/dist/
Our Traffic sponsors:
https://www.community-ix.net/sponsors/

-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: [PATCH] inline: improve internal function costs

2023-06-12 Thread Andre Vieira (lists) via Gcc-patches




On 05/06/2023 04:04, Jan Hubicka wrote:

On Thu, 1 Jun 2023, Andre Vieira (lists) wrote:


Hi,

This is a follow-up of the internal function patch to add widening and
narrowing patterns.  This patch improves the inliner cost estimation for
internal functions.


I have no idea why calls are special in IPA analyze_function_body
and so I cannot say whether treating all internal fn calls as
non-calls is correct there.  Honza?


The reason is that normal statements are accounted as part of the
function body, while calls have their costs attached to call edges
(so the cost can be adjusted when the call is inlined or otherwise optimized).

However, since internal functions have no cgraph edges, it looks like
a bug that we do not test for this (the code was written before internal
calls were introduced).



This sounds to me like you agree with my approach to treat internal 
calls differently from regular calls.



I wonder if we don't want to have is_noninternal_gimple_call that could
be used by IPA code to test whether cgraph edge should exist for
the statement.


I'm happy to add such a helper function. @richi, @rsandifo: are you ok with that?


The tree-inline.cc change is OK though (you can push that separately).

The rest is OK too.
Honza


Thanks,
Richard.


Bootstrapped and regression tested on aarch64-unknown-linux-gnu.

gcc/ChangeLog:

 * ipa-fnsummary.cc (analyze_function_body): Correctly handle
 non-zero costed internal functions.
 * tree-inline.cc (estimate_num_insns): Improve costing for internal
 functions.



--
Richard Biener 
SUSE Software Solutions Germany GmbH, Frankenstrasse 146, 90461 Nuernberg,
Germany; GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman;
HRB 36809 (AG Nuernberg)


Re: How to generate \scaleDurations values procedurally

2023-06-12 Thread Lib Lists
On Mon, 12 Jun 2023 at 12:44, Jean Abou Samra  wrote:
>
> [Adding back the list]

Oops, thank you!

>
> Le lundi 12 juin 2023 à 12:23 +0200, Lib Lists a écrit :
>
> > On Mon, 12 Jun 2023 at 10:58, Jean Abou Samra 
> > <[j...@abou-samra.fr](mailto:j...@abou-samra.fr)> wrote:
> > Dear Jean, all clear, thank you so much! I was exactly trying to
> > figure out the accidental issue and realised there was a problem
> > there.
> > Thank you for the help with the Scheme/Lily syntax, it's now obvious
> > what was wrong in my attempt.
> >
> >
> > > By the way, \remove'ing Timing_translator sounds a bit... scary, it's a 
> > > kind of
> > > fundamental translator, I don't know if we really support removing it 
> > > entirely.
> > > Also, your method of removing bar lines will still make LilyPond insert 
> > > them.
> > > I think it's better to just use \cadenzaOn.
> >
> >
> > Thank you for the tip. Reading the docs and various online examples I
> > automatically associated removing Timing_translator to a requisite for
> > every polyrhythmic / polymetric music notation, and kept it there (I
> > admit that I don't always accurately understand the meaning of all the
> > lilypond options,  at times I just experiment until it works)
>
>
> Basically, for polymeter, you move it from Score level to Staff level.
> Namely, you remove it from Score and you add it to Staff. That is also
> exactly what \enablePolymeter does (it is a relatively recent command,
> which explains why you might still find online examples that move
> Timing_translator explicitly).
>
> Now I realize that you did in fact have \enablePolymeter, so
> Timing_translator was present in Staff, not completely removed, and
> \remove Timing_translator in Score was simply redundant.
>
> For unmetered music, I maintain that \cadenzaOn is the easiest solution.
>
>
> > > \version "2.25.5"
> > > [...]
> >
> > That's fabulous, thank you!
> > I'm trying to add ottava signs to only some of the pitches (the top
> > staff is the piano's highest note). I changed mus = {\ottava #2
> > \repeat unfold 1 \relative c'  { c c c c }}, which obviously prints
> > the ottava on all the pitches.
> > The solution I found is to have multiple map loops,
> > depending on the pitch's need for ottava sign (15, 8, 8b, 15b) and
> > clef. It works, but I'm wondering if there's a
> > smarter solution.
> > Thank you again for your help!
> >
> > mus = {\ottava #2 \repeat unfold 1 \relative c'  { c c c c }}
> > musLower = { \repeat unfold 1 \relative c'  { c c c c }}
> > musLowest = { \clef bass \repeat unfold 1 \relative c,  { c c c c }}
> >
> > #(define my-semitone->pitch
> >(make-semitone->pitch
> > (music-pitches #{ { c cis d ees e f fis g gis a bes b } #})))
> >
> > \score {
> >   \new StaffGroup  <<
> > #@(map (lambda (i)
> >  #{
> > \new Staff {
> >   \scaleDurations #(cons 60 i) {
> > \transpose c' #(my-semitone->pitch (- (- 60 i))) {
> >   \mus
> > }
> >   }
> > }
> >   #})
> >(iota 3 60 -1))
> >#@(map (lambda (i)
> >  #{
> > \new Staff {
> >   \scaleDurations #(cons 63 i) {
> > \transpose c' #(my-semitone->pitch (- (- 60 i))) {
> >   \musLower
> > }
> >   }
> > }
> >   #})
> >(iota 3 60 -1))
> >#@(map (lambda (i)
> >  #{
> > \new Staff {
> >   \scaleDurations #(cons 66 i) {
> > \transpose c' #(my-semitone->pitch (- (- 60 i))) {
> >   \musLowest
> > }
> >   }
> > }
> >   #})
> >(iota 3 60 -1))
> >   >>
>
>
> Sure, use a cond form:
>
> \version "2.25.5"
>
>
> mus = {\repeat unfold 5 \relative c'  { c c c c }}
>
> #(define my-semitone->pitch
>(make-semitone->pitch
> (music-pitches #{ { c cis d ees e f fis g gis a bes b } #})))
>
> \layout {
>   \context {
> \Score
> \cadenzaOn
>   }
> }
>
> \new StaffGroup  <<
>   #@(map (lambda (i)
>#{
>

Re: How to generate \scaleDurations values procedurally

2023-06-12 Thread Lib Lists
On Mon, 12 Jun 2023 at 00:08, Jean Abou Samra  wrote:
>
> Le dimanche 11 juin 2023 à 23:55 +0200, Lib Lists a écrit :
>
> Hello, I'm (re)working on a series of pieces for player piano. I'd like to 
> find a way to generate all the \scaleDurations values so that I don't have to 
> type them by hand. In the example below they follow a simple pattern (60/60k 
> 60/59, 60/58, etc.). Unfortunately my knowledge of Scheme is very limited. 
> Moreover, I wouldn't know how to insert the generated values to the right 
> staves. Any hint would be really appreciated!
>
> Like this?
>
> \version "2.25.5"
>
> mus = \repeat unfold 3 { c c c c }
>
>
> \score {
>   \new StaffGroup  <<
> #@(map (lambda (i)
>  #{ \new Staff { \scaleDurations #(cons 60 i) \mus } #})
>(iota 10 60 -1))
>   >>
>
>\layout {
> \enablePolymeter
> \context {
>   \Score
>   \override SpacingSpanner.base-shortest-duration = #(ly:make-moment 1/4)
>   proportionalNotationDuration = #(ly:make-moment 1/10)
>   \override SpacingSpanner.uniform-stretching = ##t
>   \override SpacingSpanner.strict-note-spacing = ##t
>   \remove "Timing_translator"
>   forbidBreakBetweenBarLines = ##f
> }
>
> \context {
>   \Staff
>   \remove "Time_signature_engraver"
>   \override BarLine.stencil = ##f
>   \override BarLine.allow-span-bar = ##f
> }
>
> \context {
>   \Voice
>   \remove Forbid_line_break_engraver
> }
>   }
> }
>
> The 60/59 notation is just LilyPond syntax for the Scheme pair (60 . 59).
>
> There is some info about #@ here (and if you didn't know about pairs, you can 
> read this).
>
> Best,
>
> Jean

Hi Jean,
thank you so much, that works perfectly! And thank you also for the resources.
If I may still ask for some help, I now tried to transpose the pitches
so that the first staff has 'c', the second staff 'b', and so on. In
other words, each staff's pitch is one semitone lower than the
previous (or other transposition interval). I tried to add the
transpose function but got stuck.

In your example, I changed the line:
#{ \new Staff { \scaleDurations #(cons 60 i) \mus } #})

to this:
#{ \new Staff { \scaleDurations #(cons 60 i)  #(ly:music-transpose
{\mus} i)  } #})

but clearly there's something wrong,

Thank you and
best regards,

Lib



How to generate \scaleDurations values procedurally

2023-06-11 Thread Lib Lists
Hello,
I'm (re)working on a series of pieces for player piano. I'd like to
find a way to generate all the \scaleDurations values so that I don't
have to type them by hand. In the example below they follow a simple
pattern (60/60, 60/59, 60/58, etc.). Unfortunately my knowledge of
Scheme is very limited. Moreover, I wouldn't know how to insert the
generated values to the right staves.
Any hint would be really appreciated!

Cheers,
Lib

\version "2.25.5"

partOne = \relative c' {
  \repeat unfold 3 { c c c c }
}

partTwo = \relative c' {
  \scaleDurations 60/59
  \repeat unfold 3 { c c c c }
}

partThree = \relative c' {
  \scaleDurations 60/58
  \repeat unfold 3 { c c c c }
}

\score {
  \new StaffGroup  <<
\new Staff = "one" {\partOne }
\new Staff = "two" { \partTwo }
\new Staff = "three" {\partThree }
  >>

   \layout {
\enablePolymeter
\context {
  \Score
  \override SpacingSpanner.base-shortest-duration = #(ly:make-moment 1/4)
  proportionalNotationDuration = #(ly:make-moment 1/10)
  \override SpacingSpanner.uniform-stretching = ##t
  \override SpacingSpanner.strict-note-spacing = ##t
  \remove "Timing_translator"
  forbidBreakBetweenBarLines = ##f
}

\context {
  \Staff
  \remove "Time_signature_engraver"
  \override BarLine.stencil = ##f
  \override BarLine.allow-span-bar = ##f
}

\context {
  \Voice
  \remove Forbid_line_break_engraver
}
  }
}



After update, vim reports undefined symbols in libruby32.so

2023-06-11 Thread Why 42? The lists account.


Hi All,

FYI, after running "sysupgrade -s" + "pkg_add -u" earlier today, I now
see these messages when I exit vim:

mjoelnir:awk 11.06 18:42:45 % vi substrtest.awk
...
vim:/usr/local/lib/libruby32.so: undefined symbol '_Unwind_Backtrace'
vim:/usr/local/lib/libruby32.so: undefined symbol '_Unwind_GetIP'
vim:/usr/local/lib/libruby32.so: undefined symbol '_Unwind_GetCFA'
vim:/usr/local/lib/libruby32.so: undefined symbol 
'_Unwind_FindEnclosingFunction'
vim:/usr/local/lib/libruby32.so: undefined symbol '_Unwind_GetDataRelBase'
vim:/usr/local/lib/libruby32.so: undefined symbol '_Unwind_GetTextRelBase'
vim:/usr/local/lib/libruby32.so: undefined symbol 
'_Unwind_GetLanguageSpecificData'
vim:/usr/local/lib/libruby32.so: undefined symbol '_Unwind_GetIPInfo'
vim:/usr/local/lib/libruby32.so: undefined symbol '_Unwind_GetRegionStart'
vim:/usr/local/lib/libruby32.so: undefined symbol '_Unwind_SetGR'
vim:/usr/local/lib/libruby32.so: undefined symbol '_Unwind_SetIP'
vim:/usr/local/lib/libruby32.so: undefined symbol '_Unwind_DeleteException'
vim:/usr/local/lib/libruby32.so: undefined symbol '_Unwind_RaiseException'

% uname -a
OpenBSD mjoelnir.fritz.box 7.3 GENERIC.MP#1230 amd64

% pkg_info | grep vim
vim-9.0.1536p0-no_x11-perl-python3-ruby vi clone, many additional features
vim-spell-de-9.0German spell-check files for Vim

It looks as if I received new versions of both ruby and vim:
# grep ruby /var/log/messages
Jun 11 13:47:01 mjoelnir pkg_add: Added ruby-3.2.2
Jun 11 13:52:56 mjoelnir pkg_add: Added ruby-3.1.4->3.1.4
Jun 11 13:53:22 mjoelnir pkg_add: Added 
vim-9.0.1536-no_x11-perl-python3-ruby->9.0.1536p0-no_x11-perl-python3-ruby
Jun 11 14:06:06 mjoelnir pkg_delete: Removed ruby-3.1.4



Re: dvmrpd start causes kernel panic: assertion failed

2023-06-11 Thread Why 42? The lists account.


On Wed, Jun 07, 2023 at 03:50:29PM +0300, Vitaliy Makkoveev wrote:
> > Please, share your dvmrpd.conf.
> > 
> 
> Also, you could try to use ktrace to provide some additional info.

Hi,

Thanks for responding; the system is some 30 km away and, er, crashed.
But maybe I will get there tomorrow. I wasn't able to get it to react to
input from the remote KVM system that I was using.

AFAICR, the dvmrpd.conf just contained a copy of the file from examples,
with the interface names changed to "em0" and "ure0" i.e. the two "up"
interfaces on the system (ure being a T-Link USB-Ethernet adaptor).

Forgive my ignorance, but does this matter? I mean the error looks (to
me) like an attempt to catch an unexpected set of circumstances, i.e.

  kernel diagnostic assertion "ident ==  || timo || _kernel_lock_held()" 
failed
 
It seems as if none of those three things were true, therefore the
assertion failed, so we just need to know why it was written in the first
place and the meaning of those clauses, if you see what I mean?

Cheers,
Robb.

P.S. I was starting the daemon manually via a terminal window, just as
you suggested.



Openscape for iOS?

2023-06-09 Thread lists
Hi all,
I had a link on Twitter that opened a page telling you about Openscape
and asking to install TestFlight and then install the app.
I did this, but when I went looking for a link it said the app wasn't available
in my country.
TIA.
 
I know this is wrong since I have it 



[OE-core] [PATCH 2/2] runqemu-ifupdown/get-tapdevs: Add support for ip tuntap

2023-06-09 Thread Jörg Sommer via lists . openembedded . org
The *ip* command has supported the creation and destruction of TAP devices
since 2009 and is more likely to be installed on systems than *tunctl*.
Therefore we should try to set up or tear down the TAP interface with *ip*
before falling back to *tunctl*.

https://git.kernel.org/pub/scm/network/iproute2/iproute2.git/commit/?id=580fbd88f75cc9eea0d28a48c025b090eb9419a7

Signed-off-by: Jörg Sommer 
---
 scripts/runqemu-gen-tapdevs | 26 +-
 scripts/runqemu-ifdown  | 14 --
 scripts/runqemu-ifup| 31 +++
 3 files changed, 44 insertions(+), 27 deletions(-)

diff --git a/scripts/runqemu-gen-tapdevs b/scripts/runqemu-gen-tapdevs
index f2d6cc39c2..ffb82adce6 100755
--- a/scripts/runqemu-gen-tapdevs
+++ b/scripts/runqemu-gen-tapdevs
@@ -50,12 +50,6 @@ if ! [ $COUNT -ge 0 ]; then
exit 1
 fi
 
-TUNCTL=$STAGING_BINDIR_NATIVE/tunctl
-if [[ ! -x "$TUNCTL" || -d "$TUNCTL" ]]; then
-   echo "Error: $TUNCTL is not an executable"
-   usage
-fi
-
 if [ $EUID -ne 0 ]; then
echo "Error: This script must be run with root privileges"
exit
@@ -68,15 +62,29 @@ if [ ! -x "$RUNQEMU_IFUP" ]; then
exit 1
 fi
 
-if ! interfaces=`ip link` 2>/dev/null; then
+TUNCTL=$STAGING_BINDIR_NATIVE/tunctl
+ip_supports_tuntap=false
+if interfaces=`ip tuntap list` 2>/dev/null; then
+   ip_supports_tuntap=true
+   interfaces=`echo "$interfaces" | cut -f1 -d:`
+elif [[ ! -x "$TUNCTL" || -d "$TUNCTL" ]]; then
+   echo "Error: $TUNCTL is not an executable"
+   usage
+elif interfaces=`ip link` 2>/dev/null; then
+   interfaces=`echo "$interfaces" | sed '/^[0-9]\+: 
\(docker[0-9]\+\):.*/!d; s//\1/'`
+else
echo "Failed to call 'ip link'" >&2
exit 1
 fi
 
 # Ensure we start with a clean slate
-for tap in `echo "$interfaces" | sed '/^[0-9]\+: \(docker[0-9]\+\):.*/!d; 
s//\1/'`; do
+for tap in $interfaces; do
echo "Note: Destroying pre-existing tap interface $tap..."
-   $TUNCTL -d $tap
+   if $ip_supports_tuntap; then
+   ip tuntap del $tap mode tap
+   else
+   $TUNCTL -d $tap
+   fi
 done
 rm -f /etc/runqemu-nosudo
 
diff --git a/scripts/runqemu-ifdown b/scripts/runqemu-ifdown
index e0eb5344c6..f72166b32b 100755
--- a/scripts/runqemu-ifdown
+++ b/scripts/runqemu-ifdown
@@ -33,13 +33,15 @@ fi
 TAP=$1
 STAGING_BINDIR_NATIVE=$2
 
-TUNCTL=$STAGING_BINDIR_NATIVE/tunctl
-if [ ! -e "$TUNCTL" ]; then
-   echo "Error: Unable to find tunctl binary in '$STAGING_BINDIR_NATIVE', 
please bitbake qemu-helper-native"
-   exit 1
-fi
+if ! ip tuntap del $TAP mode tap 2>/dev/null; then
+   TUNCTL=$STAGING_BINDIR_NATIVE/tunctl
+   if [ ! -e "$TUNCTL" ]; then
+   echo "Error: Unable to find tunctl binary in 
'$STAGING_BINDIR_NATIVE', please bitbake qemu-helper-native"
+   exit 1
+   fi
 
-$TUNCTL -d $TAP
+   $TUNCTL -d $TAP
+fi
 
 IFCONFIG=`which ip 2> /dev/null`
 if [ "x$IFCONFIG" = "x" ]; then
diff --git a/scripts/runqemu-ifup b/scripts/runqemu-ifup
index bb661740c5..5fdcddeeda 100755
--- a/scripts/runqemu-ifup
+++ b/scripts/runqemu-ifup
@@ -41,22 +41,29 @@ USERID="-u $1"
 GROUP="-g $2"
 STAGING_BINDIR_NATIVE=$3
 
-TUNCTL=$STAGING_BINDIR_NATIVE/tunctl
-if [ ! -x "$TUNCTL" ]; then
-   echo "Error: Unable to find tunctl binary in '$STAGING_BINDIR_NATIVE', 
please bitbake qemu-helper-native"
-   exit 1
+if taps=$(ip tuntap list 2>/dev/null); then
+   tap_no=$(( $(echo "$taps" |sort -r |sed 's/^tap//; s/:.*//; q') + 1 ))
+   ip tuntap add tap$tap_no mode tap group $2 && TAP=tap$tap_no
 fi
 
-TAP=`$TUNCTL -b $GROUP 2>&1`
-STATUS=$?
-if [ $STATUS -ne 0 ]; then
-# If tunctl -g fails, try using tunctl -u, for older host kernels 
-# which do not support the TUNSETGROUP ioctl
-   TAP=`$TUNCTL -b $USERID 2>&1`
+if [ -z $TAP ]; then
+   TUNCTL=$STAGING_BINDIR_NATIVE/tunctl
+   if [ ! -x "$TUNCTL" ]; then
+   echo "Error: Unable to find tunctl binary in 
'$STAGING_BINDIR_NATIVE', please bitbake qemu-helper-native"
+   exit 1
+   fi
+
+   TAP=`$TUNCTL -b $GROUP 2>&1`
STATUS=$?
if [ $STATUS -ne 0 ]; then
-   echo "tunctl failed:"
-   exit 1
+   # If tunctl -g fails, try using tunctl -u, for older host kernels
+   # which do not support the TUNSETGROUP ioctl
+   TAP=`$TUNCTL -b $USERID 2>&1`
+   STATUS=$?
+   if [ $STATUS -ne 0 ]; then
+   echo "tunctl failed:"
+   exit 1
+   fi
fi
 fi
 
-- 
2.34.1



[OE-core] [PATCH 1/2] runqemu-gen-tapdevs: Refactoring

2023-06-09 Thread Jörg Sommer via lists . openembedded . org
The changes are mostly about early exits, which cause indentation changes;
check with `git diff -w`. Another change is the check for ip: simply call it
and decide from the exit code whether it is usable.

Signed-off-by: Jörg Sommer 
---
 scripts/runqemu-gen-tapdevs | 77 ++---
 1 file changed, 37 insertions(+), 40 deletions(-)

diff --git a/scripts/runqemu-gen-tapdevs b/scripts/runqemu-gen-tapdevs
index a6ee4517da..f2d6cc39c2 100755
--- a/scripts/runqemu-gen-tapdevs
+++ b/scripts/runqemu-gen-tapdevs
@@ -44,6 +44,12 @@ GID=$2
 COUNT=$3
 STAGING_BINDIR_NATIVE=$4
 
+# check if COUNT is a number and >= 0
+if ! [ $COUNT -ge 0 ]; then
+   echo "Error: Incorrect count: $COUNT"
+   exit 1
+fi
+
 TUNCTL=$STAGING_BINDIR_NATIVE/tunctl
 if [[ ! -x "$TUNCTL" || -d "$TUNCTL" ]]; then
echo "Error: $TUNCTL is not an executable"
@@ -62,48 +68,39 @@ if [ ! -x "$RUNQEMU_IFUP" ]; then
exit 1
 fi
 
-IFCONFIG=`which ip 2> /dev/null`
-if [ -z "$IFCONFIG" ]; then
-   # Is it ever anywhere else?
-   IFCONFIG=/sbin/ip
-fi
-if [ ! -x "$IFCONFIG" ]; then
-   echo "$IFCONFIG cannot be executed"
-   exit 1
-fi
-
-if [ $COUNT -ge 0 ]; then
-   # Ensure we start with a clean slate
-   for tap in `$IFCONFIG link | grep tap | awk '{ print \$2 }' | sed 
s/://`; do
-   echo "Note: Destroying pre-existing tap interface $tap..."
-   $TUNCTL -d $tap
-   done
-   rm -f /etc/runqemu-nosudo
-else
-   echo "Error: Incorrect count: $COUNT"
+if ! interfaces=`ip link` 2>/dev/null; then
+   echo "Failed to call 'ip link'" >&2
exit 1
 fi
 
-if [ $COUNT -gt 0 ]; then
-   echo "Creating $COUNT tap devices for UID: $TUID GID: $GID..."
-   for ((index=0; index < $COUNT; index++)); do
-   echo "Creating tap$index"
-   ifup=`$RUNQEMU_IFUP $TUID $GID $STAGING_BINDIR_NATIVE 2>&1`
-   if [ $? -ne 0 ]; then
-   echo "Error running tunctl: $ifup"
-   exit 1
-   fi
-   done
+# Ensure we start with a clean slate
+for tap in `echo "$interfaces" | sed '/^[0-9]\+: \(docker[0-9]\+\):.*/!d; 
s//\1/'`; do
+   echo "Note: Destroying pre-existing tap interface $tap..."
+   $TUNCTL -d $tap
+done
+rm -f /etc/runqemu-nosudo
 
-   echo "Note: For systems running NetworkManager, it's recommended"
-   echo "Note: that the tap devices be set as unmanaged in the"
-   echo "Note: NetworkManager.conf file. Add the following lines to"
-   echo "Note: /etc/NetworkManager/NetworkManager.conf"
-   echo "[keyfile]"
-   echo "unmanaged-devices=interface-name:tap*"
-
-   # The runqemu script will check for this file, and if it exists,
-   # will use the existing bank of tap devices without creating
-   # additional ones via sudo.
-   touch /etc/runqemu-nosudo
+if [ $COUNT -eq 0 ]; then
+   exit 0
 fi
+
+echo "Creating $COUNT tap devices for UID: $TUID GID: $GID..."
+for ((index=0; index < $COUNT; index++)); do
+   echo "Creating tap$index"
+   if ! ifup=`$RUNQEMU_IFUP $TUID $GID $STAGING_BINDIR_NATIVE 2>&1`; then
+   echo "Error running tunctl: $ifup"
+   exit 1
+   fi
+done
+
+echo "Note: For systems running NetworkManager, it's recommended"
+echo "Note: that the tap devices be set as unmanaged in the"
+echo "Note: NetworkManager.conf file. Add the following lines to"
+echo "Note: /etc/NetworkManager/NetworkManager.conf"
+echo "[keyfile]"
+echo "unmanaged-devices=interface-name:tap*"
+
+# The runqemu script will check for this file, and if it exists,
+# will use the existing bank of tap devices without creating
+# additional ones via sudo.
+touch /etc/runqemu-nosudo
-- 
2.34.1





[OE-core] Fw: [poky] [PATCH] runqemu-ifupdown: Add support for ip tuntap

2023-06-09 Thread Jörg Sommer via lists . openembedded . org
Sorry, I have to resend this message, because I wasn't subscribed to oe-core.


From: Jörg Sommer 
Sent: Friday, 9 June 2023 09:30
To: Richard Purdie ; 
p...@lists.yoctoproject.org ; 
openembedded-core@lists.openembedded.org 

Subject: Re: [poky] [PATCH] runqemu-ifupdown: Add support for ip tuntap

@openembedded: I have proposed a patch to runqemu-ifup/down to use `ip tuntap` 
as an alternative to tunctl for setting up the tap interface. Now the question 
came up whether tunctl could be fully dropped.

On 8 June 2023 22:18, Richard Purdie wrote:
> On Thu, 2023-06-08 at 15:07 +0200, Jörg Sommer via
> lists.yoctoproject.org wrote:
> > The *ip* command supports the creation and destruction of TAP devices since
> > 2009 and might be more likely installed on systems then *tunctl*. Therefore
> > it should be tried to setup or teardown the TAP interface with *ip* before
> > falling back to *tunctl*.
> >
> > https://git.kernel.org/pub/scm/network/iproute2/iproute2.git/commit/?id=580fbd88f75cc9eea0d28a48c025b090eb9419a7
> >
> > Signed-off-by: Jörg Sommer 
> > ---
> >  scripts/runqemu-ifdown | 14 --
> >  scripts/runqemu-ifup   | 31 +++
> >  2 files changed, 27 insertions(+), 18 deletions(-)
>
> This does make me wonder if we could just drop tunctl now?

I think so. But do all systems support ip, now? If so, the part for ifconfig 
could be dropped.

> We originally had this as ifconfig couldn't do what we needed and ip
> was comparatively rare on systems. Things have changed and moved on!
>
> Did the gen-tap-devs script also need updating?

Yeah, you're right. I forgot about it.

> Also, this patch does need to go to the openembedded-core list as it is
> changing that repository which poky is built from.

Thanks for pointing that out.

Kind regards

Jörg Sommer

Software Developer / Programmer

--
Navimatix GmbH
Tatzendpromenade 2
07745 Jena

T: 03641 - 327 99 0
F: 03641 - 526 306
M: joerg.som...@navimatix.de
www.navimatix.de

Managing directors: Steffen Späthe, Jan Rommeley
Register court: Amtsgericht Jena, HRB 501480






Re: [OE-Core][PATCH v3 4/4] core-image-ptest: append ptest directory to artifacts list

2023-06-09 Thread Alexis Lothoré via lists . openembedded . org
Hi Mikko,

On 6/9/23 08:52, Mikko Rapeli wrote:
> Hi,
> 
> On Fri, Jun 09, 2023 at 08:48:02AM +0200, Alexis Lothoré via 
> lists.openembedded.org wrote:
>> From: Alexis Lothoré 
>>
>> TESTIMAGE_FAILED_QA_ARTIFACTS is defined in testimage.bbclass with a
>> minimal list of files to retrieve when a test fail. By appending the ptest
>> directory only in core-image-ptest.bb, thanks to multiconfig feature used
>> in the recipe, only failing ptests will lead to corresponding ptest
>> artifacts retrieval, instead of all ptests artifacts retrieval.
>>
>> Signed-off-by: Alexis Lothoré 
>> ---
>>  meta/recipes-core/images/core-image-ptest.bb | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/meta/recipes-core/images/core-image-ptest.bb 
>> b/meta/recipes-core/images/core-image-ptest.bb
>> index 90c26641ba3a..e1be81bb2666 100644
>> --- a/meta/recipes-core/images/core-image-ptest.bb
>> +++ b/meta/recipes-core/images/core-image-ptest.bb
>> @@ -28,6 +28,7 @@ QB_MEM:virtclass-mcextend-lttng-tools = "-m 4096"
>>  QB_MEM:virtclass-mcextend-python3-cryptography = "-m 4096"
>>  
>>  TEST_SUITES = "ping ssh parselogs ptest"
>> +TESTIMAGE_FAILED_QA_ARTIFACTS:append=" ${libdir}/${MCNAME}/ptest"
> 
> Why not += ? Also, spaces around =.
> 
> If :append is used, bbappend in other layers can not easily override
> this variable.

Good catch, thanks, I'll wait a bit for any more reviews and send a new version
with this point fixed.

Thanks,
Alexis

> 
> Cheers,
> 
> -Mikko
> 
> 
> 
> 
> 

-- 
Alexis Lothoré, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com





[OE-Core][PATCH v3 4/4] core-image-ptest: append ptest directory to artifacts list

2023-06-09 Thread Alexis Lothoré via lists . openembedded . org
From: Alexis Lothoré 

TESTIMAGE_FAILED_QA_ARTIFACTS is defined in testimage.bbclass with a
minimal list of files to retrieve when a test fails. By appending the ptest
directory only in core-image-ptest.bb, thanks to the multiconfig feature used
in the recipe, only failing ptests will lead to retrieval of the corresponding
ptest artifacts, instead of retrieval of all ptest artifacts.

Signed-off-by: Alexis Lothoré 
---
 meta/recipes-core/images/core-image-ptest.bb | 1 +
 1 file changed, 1 insertion(+)

diff --git a/meta/recipes-core/images/core-image-ptest.bb 
b/meta/recipes-core/images/core-image-ptest.bb
index 90c26641ba3a..e1be81bb2666 100644
--- a/meta/recipes-core/images/core-image-ptest.bb
+++ b/meta/recipes-core/images/core-image-ptest.bb
@@ -28,6 +28,7 @@ QB_MEM:virtclass-mcextend-lttng-tools = "-m 4096"
 QB_MEM:virtclass-mcextend-python3-cryptography = "-m 4096"
 
 TEST_SUITES = "ping ssh parselogs ptest"
+TESTIMAGE_FAILED_QA_ARTIFACTS:append=" ${libdir}/${MCNAME}/ptest"
 
 # Sadly at the moment the full set of ptests is not robust enough and 
sporadically fails in random places
 PTEST_EXPECT_FAILURE = "1"
-- 
2.40.1





[OE-Core][PATCH v3 1/4] oeqa/core/runner: add helper to know about expected failures

2023-06-09 Thread Alexis Lothoré via lists . openembedded . org
From: Alexis Lothoré 

The testing framework currently uses the unittest.expectedFailure decorator for
tests that can have intermittent failures (see PTEST_EXPECT_FAILURE = "1"
in core-image-ptest.bb). While it allows upper layers to run tests without
failing on "fragile" tests, it prevents them from knowing more about those
failing tests since they are not accounted as failures (for example, we
could want to retrieve some logs about failed tests to improve them, and
eventually drop the expectedFailure decorator).

Add a helper to allow upper layers to know about those failures, which won't
make the global testing session fail.

Signed-off-by: Alexis Lothoré 
---
 meta/lib/oeqa/core/runner.py | 4 
 1 file changed, 4 insertions(+)

diff --git a/meta/lib/oeqa/core/runner.py b/meta/lib/oeqa/core/runner.py
index d50690ab37f8..5077eb8e3e32 100644
--- a/meta/lib/oeqa/core/runner.py
+++ b/meta/lib/oeqa/core/runner.py
@@ -229,6 +229,10 @@ class OETestResult(_TestResult):
 # Override as we unexpected successes aren't failures for us
 return (len(self.failures) == len(self.errors) == 0)
 
+def hasAnyFailingTest(self):
+# Account for expected failures
+return not self.wasSuccessful() or len(self.expectedFailures)
+
 class OEListTestsResult(object):
 def wasSuccessful(self):
 return True
-- 
2.40.1
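
A rough illustration, using plain unittest rather than the oeqa classes, of how an upper layer could use such a helper (ResultWithHelper below is a hypothetical stand-in for OETestResult, not the real class):

import unittest

class ResultWithHelper(unittest.TestResult):
    # Mirrors the idea of the patch: expected failures do not make the run
    # unsuccessful, but hasAnyFailingTest() still reports that something failed.
    def hasAnyFailingTest(self):
        return not self.wasSuccessful() or len(self.expectedFailures) > 0

class Fragile(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_flaky(self):
        self.assertTrue(False)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Fragile)
result = ResultWithHelper()
suite.run(result)
print(result.wasSuccessful())      # True: the expected failure is not counted
print(result.hasAnyFailingTest())  # True: but callers can still collect logs for it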





[OE-Core][PATCH v3 2/4] oeqa/target/ssh: update options for SCP

2023-06-09 Thread Alexis Lothoré via lists . openembedded . org
From: Alexis Lothoré 

By default scp expects files. Passing the -r option allows copying directories
too.

Signed-off-by: Alexis Lothoré 
---
Changes since v1:
- drop legacy scp protocol option
---
 meta/lib/oeqa/core/target/ssh.py | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/meta/lib/oeqa/core/target/ssh.py b/meta/lib/oeqa/core/target/ssh.py
index 51079075b5bd..e650302052db 100644
--- a/meta/lib/oeqa/core/target/ssh.py
+++ b/meta/lib/oeqa/core/target/ssh.py
@@ -40,8 +40,11 @@ class OESSHTarget(OETarget):
 '-o', 'StrictHostKeyChecking=no',
 '-o', 'LogLevel=ERROR'
 ]
+scp_options = [
+'-r'
+]
 self.ssh = ['ssh', '-l', self.user ] + ssh_options
-self.scp = ['scp'] + ssh_options
+self.scp = ['scp'] + ssh_options + scp_options
 if port:
 self.ssh = self.ssh + [ '-p', port ]
 self.scp = self.scp + [ '-P', port ]
-- 
2.40.1
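
A standalone sketch of the command-list composition this hunk changes (the user, host, port and paths are placeholder values, not ones used by the framework):

import subprocess

user, host, port = "root", "192.168.7.2", "22"   # placeholder target
ssh_options = ['-o', 'UserKnownHostsFile=/dev/null',
               '-o', 'StrictHostKeyChecking=no',
               '-o', 'LogLevel=ERROR']
scp_options = ['-r']   # without -r, scp refuses to copy directories

scp = ['scp'] + ssh_options + scp_options + ['-P', port]

def copy_dir_from_target(remote_dir, local_dir):
    # e.g. pull a whole ptest output directory after a failing test
    return subprocess.run(scp + ['%s@%s:%s' % (user, host, remote_dir), local_dir])

# copy_dir_from_target('/usr/lib/ptest', './artifacts')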




