Re: How to create a custom Debian ISO

2024-05-19 Thread Thomas Schmitt
Hi,

In the second part of this mail I discuss a possible problem with the
script 04_create_iso.sh regarding bootability on Legacy BIOS from a USB
stick.
(I apologize in advance for being off topic by talking about an Ubuntu
ISO.)


Roland Clobus wrote:
> Thanks for pointing to live-build :-)

Just connecting my users. :))


> > > [1] https://github.com/t2linux/T2-Ubuntu/tree/LTS
I wrote:
> > Just out of pure curiosity: From what script in particular do you get
> > this impression ?

Roland Clobus wrote:
> The scripts with the sequence numbers 01-04 mirror the steps that are done
> in live-build,

01_build_file_system.sh looks quite debianish. (Not my turf.)
02_build_image.sh mentions squashfs, which is indeed typical for Live ISOs.
But as said, with Ubuntu desktop there is no non-Live ISO.
03_prepare_iso.sh prepares bootloader files.
(I know the topics of 02 and 03 only as bystander.)

--
Now for the announced discussion of a possible problem in the production
of derived Ubuntu ISOs:

04_create_iso.sh looks at first glance like it produces a usual amd64
hybrid ISO for {Legacy,EFI} x {DVD,HDD}, but in detail it is somewhat
puzzling:

  20   --grub2-mbr "/usr/lib/grub/i386-pc/boot_hybrid.img" \
  ...
  24   -isohybrid-mbr "${ROOT_PATH}/files/isohdpfx.bin" \

First xorriso is instructed to use boot_hybrid.img as MBR with some extra
info for GRUB.
But then this is overridden by the instruction to use isohdpfx.bin as MBR
with some extra info for ISOLINUX/SYSLINUX.

In my local experiments this leads to dysfunctional MBR code because
-isohybrid-mbr works its magic only if an ISOLINUX boot image for El Torito
is present. But I understand from 03_prepare_iso.sh that the image file
given by option -b is, despite its name "isolinux/bios.img", concatenated
from the GRUB files cdboot.img and core.img.

I wonder whether "T2 Macs" would be able to boot via MBR code at all.
That would be EFI in CSM mode with the ISO on a USB stick.
Does this work with the currently produced ISOs?

Obviously I am not smart enough to find one of the resulting ISOs on the
web for inspection. Shrug.

Could it be that -isohybrid-mbr was used because otherwise
  25   -isohybrid-gpt-basdat -isohybrid-apm-hfsplus \
would cause an error?
If so, then option

  -part_like_isohybrid

might be the better alternative, because it does not override the
preparation for using boot_hybrid.img as --grub2-mbr.
If, in my experiment, I replace the line
  -isohybrid-mbr "${ROOT_PATH}/files/isohdpfx.bin" \
by -part_like_isohybrid, then I get an ISO where xorriso can recognize
an MBR with the patched-in extra info caused by option --grub2-mbr.
The partition table is sparser than with a real ISOLINUX MBR, but it
marks the EFI partition in both MBR and (pseudo-)GPT.
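
Expressed as a minimal diff against the quoted lines of 04_create_iso.sh
(just to make the substitution explicit; the script lines between the
quoted ones are omitted here):

     --grub2-mbr "/usr/lib/grub/i386-pc/boot_hybrid.img" \
  -  -isohybrid-mbr "${ROOT_PATH}/files/isohdpfx.bin" \
  +  -part_like_isohybrid \
     -isohybrid-gpt-basdat -isohybrid-apm-hfsplus \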


Whatever, current Ubuntu desktop ISOs do not use -isohybrid-gpt-basdat
and -isohybrid-apm-hfsplus any more.
One may run e.g.:

  xorriso -indev ubuntu-22.04.3-desktop-amd64.iso -report_el_torito as_mkisofs

to see -append_partition, -appended_part_as_gpt,
and -e '--interval:appended_partition_2:all::' serving instead of
-part_like_isohybrid (or -isohybrid-mbr), -e "EFI/efiboot.img", and
-isohybrid-gpt-basdat.


Have a nice day :)

Thomas



Re: [PATCH v2] ntp: safeguard against time_constant overflow case

2024-05-19 Thread Thomas Gleixner
On Fri, May 17 2024 at 22:18, Justin Stitt wrote:
> On Fri, May 17, 2024, 19:33 Thomas Gleixner  wrote:
> I accidentally sent a Frankenstein-esque creation of two patches I was
> working on. Not my brightest moment. It got past my testing because (as you
> pointed out) I only ran the reproducer against my _fix_...

Shit happens.

> Let me really parse everything you've said and v3 will surely knock your
> socks off. You'll have to wait till Monday though :)

Take your time. There is no rush.

Thanks,

tglx



[tmux/tmux] ac6c1e: remove prototype with no matching function

2024-05-18 Thread 'Thomas Adam' via tmux-git
  Branch: refs/heads/master
  Home:   https://github.com/tmux/tmux
  Commit: ac6c1e9589bb5afa12e57b7a6cd8d50174253cce
  
https://github.com/tmux/tmux/commit/ac6c1e9589bb5afa12e57b7a6cd8d50174253cce
  Author: jsg 
  Date:   2024-05-19 (Sun, 19 May 2024)

  Changed paths:
M tmux.h

  Log Message:
  ---
  remove prototype with no matching function


  Commit: 4c2eedca5a75dedc3540e3373048ce7979c82064
  
https://github.com/tmux/tmux/commit/4c2eedca5a75dedc3540e3373048ce7979c82064
  Author: Thomas Adam 
  Date:   2024-05-19 (Sun, 19 May 2024)

  Changed paths:
M tmux.h

  Log Message:
  ---
  Merge branch 'obsd-master'


Compare: https://github.com/tmux/tmux/compare/0903790b0037...4c2eedca5a75

To unsubscribe from these emails, change your notification settings at 
https://github.com/tmux/tmux/settings/notifications

-- 
You received this message because you are subscribed to the Google Groups 
"tmux-git" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to tmux-git+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/tmux-git/tmux/tmux/push/refs/heads/master/090379-4c2eed%40github.com.


Re: Streaming read-ready sequential scan code

2024-05-18 Thread Thomas Munro
On Sun, May 19, 2024 at 7:00 AM Alexander Lakhin  wrote:
> With blocknums[1], timing is changed, but the effect is not persistent.
> 10 query15 executions in a row, b7b0f3f27:
> 277.932 ms
> 281.805 ms
> 278.335 ms
> 281.565 ms
> 284.167 ms
> 283.171 ms
> 281.165 ms
> 281.615 ms
> 285.394 ms
> 277.301 ms

The bad time 10/10.

> b7b0f3f27~1:
> 159.789 ms
> 165.407 ms
> 160.893 ms
> 159.343 ms
> 160.936 ms
> 161.577 ms
> 161.637 ms
> 163.421 ms
> 163.143 ms
> 167.109 ms

The good time 10/10.

> b7b0f3f27 + blocknums[1]:
> 164.133 ms
> 280.920 ms
> 160.748 ms
> 163.182 ms
> 161.709 ms
> 161.998 ms
> 161.239 ms
> 276.256 ms
> 161.601 ms
> 160.384 ms

The good time 8/10, the bad time 2/10.

Thanks for checking!  I bet all branches can show that flip/flop
instability in these adverse conditions, depending on random
scheduling details.  I will start a new thread with a patch for the
root cause of that, i.e. problem #2 (this will need back-patching), and
post a fix for #3 (the v17 blocknums[N] tweak affecting
fairness/likelihood, which was probably a bit of ill-advised
premature optimisation) here in a few days.




Re: Requiring LLVM 14+ in PostgreSQL 18

2024-05-18 Thread Thomas Munro
On Sun, May 19, 2024 at 10:46 AM Ole Peder Brandtzæg
 wrote:
> On Wed, May 15, 2024 at 07:20:09AM +0200, Peter Eisentraut wrote:
> > Yes, let's get that v3-0001 patch into PG17.
>
> Upon seeing this get committed in 4dd29b6833, I noticed that the docs
> still advertise the llvm-config-$version search dance. That's still
> correct for Meson-based builds since we use their config-tool machinery,
> but no longer holds for configure-based builds. The attached patch
> updates the docs accordingly.

Oops, right I didn't know we had that documented.  Thanks.  Will hold
off doing anything until the thaw.

Hmm, I also didn't know that Meson had its own list like our just-removed one:

https://github.com/mesonbuild/meson/blob/master/mesonbuild/environment.py#L183

Unsurprisingly, it suffers from maintenance lag, priority issues etc
(new major versions pop out every 6 months):

https://github.com/mesonbuild/meson/issues/10483




Re: speed up a logical replica setup

2024-05-18 Thread Thomas Munro
040_pg_createsubscriber.pl seems to be failing occasionally on
culicidae near "--dry-run on node S".  I couldn't immediately see why.
That animal is using EXEC_BACKEND and I also saw a one-off failure a
bit like that on my own local Linux + EXEC_BACKEND test run
(sorry I didn't keep the details around).  Coincidence?




[KBibTeX] [Bug 484421] KBibTex sometimes loses track of the active element, resulting in shortcuts affecting the wrong item and other issues

2024-05-18 Thread Thomas Fischer
https://bugs.kde.org/show_bug.cgi?id=484421

Thomas Fischer  changed:

   What|Removed |Added

  Latest Commit||7790859456a50ff8e31e918da2835b03cda51075
   Version Fixed In||0.10.1
 Ever confirmed|0   |1
 Status|REPORTED|ASSIGNED

--- Comment #1 from Thomas Fischer  ---
I am having trouble reproducing the described problem *sometimes*, i.e.
sometimes I get the result you describe, sometimes not.

I suspect the issue you describe is caused when the information about what is
selected (no, one, or several lines) diverges from the 'current' line. Maybe
there is a deeper problem or a bug in Qt, but it seems to suffice to set the
current line to be one of the selected ones (if there are any) when the main
list receives focus after closing the menu.
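
For illustration, the general idea in plain Qt terms (only a sketch with a
made-up class name, not the actual KBibTeX code referenced below):

  // Sketch: re-align the 'current' index with the selection when the
  // list view regains focus. The class name is illustrative only.
  #include <QListView>
  #include <QItemSelectionModel>
  #include <QFocusEvent>

  class ElementListView : public QListView
  {
  protected:
      void focusInEvent(QFocusEvent *event) override
      {
          QListView::focusInEvent(event);
          QItemSelectionModel *sel = selectionModel();
          if (sel == nullptr)
              return;
          const QModelIndexList selected = sel->selectedIndexes();
          // If something is selected but the current index is not part of
          // the selection, move the current index onto a selected item.
          if (!selected.isEmpty() && !selected.contains(sel->currentIndex()))
              sel->setCurrentIndex(selected.first(),
                                   QItemSelectionModel::NoUpdate);
      }
  };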

Please let me know if this commit fixes your problem:
https://invent.kde.org/thomasfischer/kbibtex/-/commit/7790859456a50ff8e31e918da2835b03cda51075

-- 
You are receiving this mail because:
You are watching all bug changes.

[Craft] [Bug 486905] Qt6 Webengine crash - required icudtl.dat is missing from image

2024-05-18 Thread Thomas Friedrichsmeier
https://bugs.kde.org/show_bug.cgi?id=486905

Thomas Friedrichsmeier  changed:

   What|Removed |Added

 Resolution|--- |FIXED
 Status|REPORTED|RESOLVED
  Latest Commit||a7b43b7a4c54c487f86ca6b0725d7979f62a51d9

-- 
You are receiving this mail because:
You are on the CC list for the bug.

[Craft] [Bug 486905] Qt6 Webengine crash - required icudtl.dat is missing from image

2024-05-18 Thread Thomas Friedrichsmeier
https://bugs.kde.org/show_bug.cgi?id=486905

Thomas Friedrichsmeier  changed:

   What|Removed |Added

 Resolution|--- |FIXED
 Status|REPORTED|RESOLVED
  Latest Commit||a7b43b7a4c54c487f86ca6b0725d7979f62a51d9

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: Perl ports in arm64 vs -current

2024-05-18 Thread Thomas Frohwein
On Sat, May 18, 2024 at 06:37:09PM +, Lucas Gabriel Vuotto wrote:
> Hello ports@,
> 
> Since today's snapshot, it seems that something is off in arm64 and
> Perl:
> 
> $ perl -MNet::SSLeay -e 'print "works\n"'
> SSLeay.c: loadable library and perl binaries are mismatched (got first 
> handshake key 0x1060, needed 0x10d0)
> 
> On the contrary, on amd64 updated today too,
> 
> $ perl -MNet::SSLeay -e 'print "works\n"'
> works
> 
> The issue is present with other modules, Net::SSLeay was chosen as it
> was the one giving me an error message. But I tried p5-EV with a similar
> error except for the filename.
> 
> Rebuilding the package locally makes the error go away, so I guess it's
> related to builders not being up-to-date with latest Perl, leading to
> errors for packages with native extensions?

Yes, that seems to be it and should resolve soon as new packages show up.

> dmesgs for systems follow.
> 
>   Lucas
> 
> 
> ==> arm64
> 
> OpenBSD 7.5-current (GENERIC.MP) #40: Fri May 17 14:59:13 MDT 2024
> dera...@arm64.openbsd.org:/usr/src/sys/arch/arm64/compile/GENERIC.MP
> real mem  = 4185792512 (3991MB)
> avail mem = 3971817472 (3787MB)
> random: good seed from bootblocks
> mainbus0 at root: ACPI
> psci0 at mainbus0: PSCI 1.0, SMCCC 1.1
> efi0 at mainbus0: UEFI 2.7
> efi0: EDK II rev 0x1
> smbios0 at efi0: SMBIOS 3.0.0
> smbios0: vendor Hetzner version "2017" date 11/11/2017
> smbios0: Hetzner vServer
> cpu0 at mainbus0 mpidr 0: ARM Neoverse N1 r3p1
> cpu0: 64KB 64b/line 4-way L1 PIPT I-cache, 64KB 64b/line 4-way L1 D-cache
> cpu0: 1024KB 64b/line 8-way L2 cache
> cpu0: 
> DP,RDM,Atomic,CRC32,SHA2,SHA1,AES+PMULL,LRCPC,DPB,ASID16,PAN+ATS1E1,LO,HPDS,VH,HAFDBS,CSV3,CSV2,SSBS+MSR
> cpu1 at mainbus0 mpidr 1: ARM Neoverse N1 r3p1
> cpu1: 64KB 64b/line 4-way L1 PIPT I-cache, 64KB 64b/line 4-way L1 D-cache
> cpu1: 1024KB 64b/line 8-way L2 cache
> apm0 at mainbus0
> agintc0 at mainbus0 shift 4:4 nirq 288 nredist 2 ipi: 0, 1, 2: 
> "interrupt-controller"
> agintcmsi0 at agintc0
> agtimer0 at mainbus0: 25000 kHz
> acpi0 at mainbus0: ACPI 5.1
> acpi0: sleep states
> acpi0: tables DSDT FACP APIC GTDT MCFG SPCR DBG2 IORT BGRT
> acpi0: wakeup devices
> acpimcfg0 at acpi0
> acpimcfg0: addr 0x401000, bus 0-255
> acpiiort0 at acpi0
> "ACPI0007" at acpi0 not configured
> "ACPI0007" at acpi0 not configured
> pluart0 at acpi0 COM0 addr 0x900/0x1000 irq 33
> pluart0: console
> "LNRO0015" at acpi0 not configured
> "LNRO0015" at acpi0 not configured
> "QEMU0002" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> "LNRO0005" at acpi0 not configured
> acpipci0 at acpi0 PCI0
> pci0 at acpipci0
> "Red Hat Host" rev 0x00 at pci0 dev 0 function 0 not configured
> virtio0 at pci0 dev 1 function 0 "Qumranet Virtio 1.x GPU" rev 0x01
> viogpu0 at virtio0: 1024x768, 32bpp
> wsdisplay0 at viogpu0 mux 1: console (std, vt100 emulation)
> wsdisplay0: screen 1-5 added (std, vt100 emulation)
> virtio0: msix per-VQ
> ppb0 at pci0 dev 2 function 0 "Red Hat PCIE" rev 0x00: irq 37
> pci1 at ppb0 bus 1
> virtio1 at pci1 dev 0 function 0 "Qumranet Virtio 1.x Network" rev 0x01
> vio0 at virtio1: address 96:00:02:40:c5:c9
> virtio1: msix shared
> ppb1 at pci0 dev 2 function 1 "Red Hat PCIE" rev 0x00: irq 37
> pci2 at ppb1 bus 2
> xhci0 at pci2 dev 0 function 0 "Red Hat xHCI" rev 0x01: msix, xHCI 0.0
> usb0 at xhci0: USB revision 3.0
> uhub0 at usb0 configuration 1 interface 0 "Red Hat xHCI root hub" rev 
> 3.00/1.00 addr 1
> ppb2 at pci0 dev 2 function 2 "Red Hat PCIE" rev 0x00: irq 37
> pci3 at ppb2 bus 3
> virtio2 at pci3 dev 0 function 0 "Qumranet Virtio 1.x Console" rev 0x01
> virtio2: no matching child driver; not configured
> ppb3 at pci0 dev 2 function 3 "Red Hat PCIE" rev 0x00: irq 37
> pci4 at ppb3 bus 4
> virtio3 at pci4 dev 0 function 0 "Qumranet Virtio 1.x Memory Balloon" rev 0x01
> viomb0 at virtio3
> 

[KBibTeX] [Bug 484418] Support for searching and adding book citations (e.g. from Google Books, WorldCat)

2024-05-18 Thread Thomas Fischer
https://bugs.kde.org/show_bug.cgi?id=484418

Thomas Fischer  changed:

   What|Removed |Added

  Latest Commit|ffe0fb736ac6a377b772bc6f5a7b7edb0e004b18|37858b41ce6a19158586546d5ca15998dc4124b1

-- 
You are receiving this mail because:
You are watching all bug changes.

[kmymoney] [Bug 430047] Feature request: Budgeting based on cash flow

2024-05-18 Thread Thomas Baumgart
https://bugs.kde.org/show_bug.cgi?id=430047

Thomas Baumgart  changed:

   What|Removed |Added

 Status|REPORTED|RESOLVED
 Resolution|--- |FIXED
   Version Fixed In||5.2
  Latest Commit||https://invent.kde.org/office/kmymoney/-/commit/8cdd4536e30eabf898f8b6aca8d5d9159676bdbb

--- Comment #5 from Thomas Baumgart  ---
Git commit 8cdd4536e30eabf898f8b6aca8d5d9159676bdbb by Thomas Baumgart.
Committed on 18/05/2024 at 16:40.
Pushed by tbaumgart into branch 'master'.

Reporting for budgeting of asset/liability accounts

This amends commit e79a1cab and provides the reporting part of the
feature.
FIXED-IN: 5.2

M  +9    -0    kmymoney/mymoney/mymoneyreport.cpp
M  +25   -20   kmymoney/plugins/views/reports/core/pivottable.cpp

https://invent.kde.org/office/kmymoney/-/commit/8cdd4536e30eabf898f8b6aca8d5d9159676bdbb

-- 
You are receiving this mail because:
You are watching all bug changes.

[kmymoney] [Bug 430047] Feature request: Budgeting based on cash flow

2024-05-18 Thread Thomas Baumgart via KMyMoney-devel
https://bugs.kde.org/show_bug.cgi?id=430047

Thomas Baumgart  changed:

   What|Removed |Added

 Status|REPORTED|RESOLVED
 Resolution|--- |FIXED
   Version Fixed In||5.2
  Latest Commit||https://invent.kde.org/office/kmymoney/-/commit/8cdd4536e30eabf898f8b6aca8d5d9159676bdbb

--- Comment #5 from Thomas Baumgart  ---
Git commit 8cdd4536e30eabf898f8b6aca8d5d9159676bdbb by Thomas Baumgart.
Committed on 18/05/2024 at 16:40.
Pushed by tbaumgart into branch 'master'.

Reporting for budgeting of asset/liability accounts

This amends commit e79a1cab and provides the reporting part of the
feature.
FIXED-IN: 5.2

M  +9    -0    kmymoney/mymoney/mymoneyreport.cpp
M  +25   -20   kmymoney/plugins/views/reports/core/pivottable.cpp

https://invent.kde.org/office/kmymoney/-/commit/8cdd4536e30eabf898f8b6aca8d5d9159676bdbb

-- 
You are receiving this mail because:
You are the assignee for the bug.

[systemsettings] [Bug 487194] Crash on close while in SDDM settings

2024-05-18 Thread Thomas Bertels
https://bugs.kde.org/show_bug.cgi?id=487194

Thomas Bertels  changed:

   What|Removed |Added

Summary|Crash on close  |Crash on close while in SDDM settings

-- 
You are receiving this mail because:
You are watching all bug changes.

[systemsettings] [Bug 487194] New: Crash on close

2024-05-18 Thread Thomas Bertels
https://bugs.kde.org/show_bug.cgi?id=487194

Bug ID: 487194
   Summary: Crash on close
Classification: Applications
   Product: systemsettings
   Version: 6.0.4
  Platform: Manjaro
OS: Linux
Status: REPORTED
  Keywords: drkonqi
  Severity: crash
  Priority: NOR
 Component: generic-crash
  Assignee: plasma-b...@kde.org
  Reporter: tbert...@gmail.com
  Target Milestone: ---

Application: systemsettings (6.0.4)

Qt Version: 6.7.0
Frameworks Version: 6.1.0
Operating System: Linux 6.6.30-2-MANJARO x86_64
Windowing System: Wayland
Distribution: Manjaro Linux
DrKonqi: 6.0.4 [CoredumpBackend]

-- Information about the crash:
To reproduce:
* Open Settings manager
* Search for "session"
* Open SDDM settings
* Click on "Behavior" at the top
* Close Settings manager and don't save settings when asked

The crash can be reproduced every time.

-- Backtrace (Reduced):
#4  0x7f7b6e2fa86d in KPageWidget::currentPage() const () at
/usr/lib/libKF6WidgetsAddons.so.6
#5  0x7f7b6eb64cc8 in operator() (__closure=0x555711a79320) at
/usr/src/debug/systemsettings/systemsettings-6.0.4/core/ModuleView.cpp:282
#8  QtPrivate::QCallableObject, QtPrivate::List<>, void>::impl(int,
QtPrivate::QSlotObjectBase *, QObject *, void **, bool *) (which=, this_=0x555711a79310, r=, a=,
ret=) at /usr/include/qt6/QtCore/qobjectdefs_impl.h:555
#9  0x7f7b6bd9b57f in QtPrivate::QSlotObjectBase::call (a=0x7ffc96ccb918,
r=0x555710df2800, this=0x555711a79310, this=, r=,
a=) at
/usr/src/debug/qt6-base/qtbase/src/corelib/kernel/qobjectdefs_impl.h:469
#10 doActivate (sender=0x555711941d60, signal_index=5,
argv=0x7ffc96ccb918) at
/usr/src/debug/qt6-base/qtbase/src/corelib/kernel/qobject.cpp:4078


Reported using DrKonqi

-- 
You are receiving this mail because:
You are watching all bug changes.

[OAUTH-WG] Re: New Internet Draft: OAuth 2.0 Delegated B2B Authorization

2024-05-18 Thread Thomas Broyer
Isn't that covered by Token Exchange already?
https://datatracker.ietf.org/doc/html/rfc8693

On Sat, May 18, 2024, 16:29, Igor Janicijevic wrote:

> Dear All,
>
>
>
> I have published an Internet Draft document that I would like to introduce
> to the OAuth working group for consideration. Here is the link for your
> reference:
> https://www.ietf.org/archive/id/draft-janicijevic-oauth-b2b-authorization-00.html
>
>
>
> Abstract
>
> Delegated B2B Authorization enables a third-party OAuth client to obtain a
> limited access to an HTTP service on behalf of another OAuth client which
> is acting as a resource owner. This specification extends the OAuth 2.0
> Authorization Framework with two new endpoints which allow a resource owner
> OAuth client to manage access for a third-party OAuth client.
>
>
>
> Motivation
>
> I work for a large financial services organization, and we are using OAuth
> 2.0 extensively to secure API based B2B integrations with various third
> parties by utilizing OAuth client_credentials grant type. Some of those
> third parties are our customers, while others are either our partners or
> partners of our customers. One of the challenges that we have encountered
> is that there is no standard way to delegate access to resources in B2B
> integrations, so that one party can obtain access to protected resources on
> behalf of another party. The above internet draft describes a possible
> extension to OAuth 2.0 that may be able to address this issue.
>
>
>
> I am looking forward to receiving your feedback.
>
>
>
> Regards,
>
> Igor
> ___
> OAuth mailing list -- oauth@ietf.org
> To unsubscribe send an email to oauth-le...@ietf.org
>
___
OAuth mailing list -- oauth@ietf.org
To unsubscribe send an email to oauth-le...@ietf.org


Re: [VOTE] Release Apache Commons Daemon 1.4.0 based on RC1

2024-05-18 Thread Mark Thomas

Hi Gary,

Looks like I missed adding them before I committed the changes. I've 
just done that.


Mark


On 18/05/2024 15:28, Gary Gregory wrote:

Hi Mark,

Thank you for preparing this release candidate.

There are no SHA512 files in:

https://dist.apache.org/repos/dist/dev/commons/daemon/1.4.0-RC1/source/
https://dist.apache.org/repos/dist/dev/commons/daemon/1.4.0-RC1/binaries/

Gary

On Fri, May 17, 2024 at 2:06 PM Mark Thomas  wrote:


We have fixed a few bugs, added enhancements and updated the minimum
Java and Windows version since Apache Commons Daemon 1.3.4 was released,
so I would like to release Apache Commons Daemon 1.4.0.

Apache Commons Daemon 1.4.0 RC1 is available for review here:
  https://dist.apache.org/repos/dist/dev/commons/daemon/1.4.0-RC1
(svn revision 69267)

The Git tag commons-daemon-1.4.0-RC1 commit for this RC is
6b911598b815a4a7b8ab2b8a8a2157593effc6bc which you can browse here:

https://gitbox.apache.org/repos/asf?p=commons-daemon.git;a=commit;h=6b911598b815a4a7b8ab2b8a8a2157593effc6bc
You may checkout this tag using:
  git clone https://gitbox.apache.org/repos/asf/commons-daemon.git
--branch commons-daemon-1.4.0-RC1 commons-daemon-1.4.0-RC1

Maven artifacts are here:

https://repository.apache.org/content/repositories/orgapachecommons-1729/commons-daemon/commons-daemon/1.4.0/

These are the artifacts and their hashes:

#Release SHA-512s
#Fri May 17 16:28:36 BST 2024
commons-daemon-1.4.0-bin-windows.zip=5974d638994cbf821c17d0fc6b69bace08b0314ea5614c1a57175a02cda7c57a6b8ee49f8892206061f9d3385da5841db31d9ce9b3ce74cf4afc10ad8e68
commons-daemon-1.4.0-bin.tar.gz=15fccd35a711f91e5b4466d56f50585c7ae3a787a39c16e006617c86b9e9feee9fbf902582b08c2e896ca6a655500d805fdbb9c97f04f70321631168b8d42c81
commons-daemon-1.4.0-bin.zip=3652ed9ed9cf6fcb0d4b5067570c322b0b3c9ae0a81dee1d7b0992bb7ff5654a7c4dc89c0c2d966c9962778288c6ad60bd8ac10f62898c9e10261bec6e61d3ea
commons-daemon-1.4.0-bom.json=0de219d72a63d8154f42ef5bd6c348936e14d65efec3e54a55ebfb9bc757e4ceac7aabd8c8b85d94657ed76f44069ac56b2bb231aba5419733f00a3dc85f6601
commons-daemon-1.4.0-bom.xml=bc0dba27a50ca6c5d30015f97bd258325452e6fabefd1cf38b94d0ce5699233a18b456fd701761a5f8cedf847cbd152879e0dec9add548611d5593b910c90244
commons-daemon-1.4.0-javadoc.jar=8fd299a3d228c4ab4ea8455b81319d80b3e27cac1c31bed1e03cc7a3391d59f18e037adcb72e68202511a45ef5bc49274d6e9cf38c860b55bb9b874a92044d2e
commons-daemon-1.4.0-native-src.tar.gz=8a54200d547ef7ee647e8d4910fd3cb55bf7d8fc75de8f0e01bc701ef0b386ddc3843e6c9189e34d2afd62060fb6299ea83c421cf60c7d105d04cb45904500d3
commons-daemon-1.4.0-native-src.zip=cb6b12bbd775eba7d012744cf908f42fc6d39e421c1f41546f230b431c1d239cc3e2d9c09520165b5db7a95701b651a6738a5d1915d39a4520b1ff07ce4f65a5
commons-daemon-1.4.0-sources.jar=701b3646ea29de5ea69d72c8741a2dc56a44a57168c0e7d1afab87f89d9cab75c413f1fe3d09f5765e4dbe2b2af0951125ee0f6a0a4d5b4fafcf49bfd0b03cbf
commons-daemon-1.4.0-src.tar.gz=285f33ce36e2591f49b6067da16612ec1b49b23a8637d077618aefaae4452993dc2a31660665551ea761857390d940100e162e205fe7c0fad9c72374f2d15bb8
commons-daemon-1.4.0-src.zip=190d6b8b65d71594ff02bade3fbcd6b09d5b2e68413a2a23ef2cbf945d2e19655c1d480484ec198f7e140eaa3744c970770cea17498c12f9bfe284f5bd28a51d
commons-daemon-1.4.0-test-sources.jar=e889d8b5bda1e0a89d33741e9308739b732e938ef13b552acf7dc0ba52845766e6a49f3fbb6c821655d295e18b9accbfeac1c26b8afacc088084511cea301bcd
commons-daemon-1.4.0-tests.jar=b392bdaa59e3d75e7aa023f65514385edfc44bc1bc088826b643186bfeaf47215375a814af3637e585bde201dd6ee5ef3669f2b4a3cf2e275da4fc6ccd91dfda
commons-daemon_commons-daemon-1.4.0.spdx.json=47c669c16aca4588d4940a4dcec162a619587f8fc8d6a74a5abbe8562296f0eb08f271db531e678a939355a9b7f669cb9ade864d953c77402b60e8c183f1faed



Details of changes since 1.3.4 are in the change log:

https://dist.apache.org/repos/dist/dev/commons/daemon/1.4.0-RC1/RELEASE-NOTES.txt

https://github.com/apache/commons-daemon/blob/master/src/changes/changes.xml

KEYS:
https://downloads.apache.org/commons/KEYS

Please review the release candidate and vote.
This vote will close no sooner than 72 hours from now.

[ ] +1 Release these artifacts
[ ] +0 OK, but...
[ ] -0 OK, but really should fix...
[ ] -1 I oppose this release because...

Thank you,

Mark Thomas,
Release Manager (using key 10C01C5A2F6059E7)

The following is intended as a helper and refresher for reviewers.

Validating a release candidate
==

These guidelines are NOT complete.

Requirements: Git, Java, Maven.

You can validate a release from a release candidate (RC) tag as follows.

1a) Clone and checkout the RC tag

git clone https://gitbox.apache.org/repos/asf/commons-daemon.git
--branch commons-daemon-1.4.0-RC1 commons-daemon-1.4.0-RC1
cd commons-daemon-1.4.0-RC1

1b) Download and unpack the source archive from:

https://dist.apache.org/repos/dist/dev/commons/daemon/1.4.0-RC1/source

2) Check Apache licenses

This step is not required if the site includes a RAT report page which
you then must check

[NTG-context] Re: Special character

2024-05-18 Thread Thomas Meyer

Thanks to all of you for your hints and comments!
I thought: \l works, so why does \L not?
Typing \L is faster than changing the keyboard language when I am writing
in German, as I usually do.


Greetings

Am 18.05.24 um 13:33 schrieb Bruce Horrocks:



On 18 May 2024, at 11:54, Thomas Meyer  wrote:

I know I can copy and paste it, like here (copied from Wikipedia), but if I 
don't have a template in a hurry ...


On the Mac, if you are using an 'English' keyboard, you can hold down any letter 
for half a second or so and see a popup selection of alternate accented versions of 
that letter. Not all text editors / word processors support it but TeXShop does. No 
need for cut & paste templates.

—
Bruce Horrocks
Hampshire, UK

___
If your question is of interest to others as well, please add an entry to the 
Wiki!

maillist :ntg-context@ntg.nl  
/https://mailman.ntg.nl/mailman3/lists/ntg-context.ntg.nl
webpage  :https://www.pragma-ade.nl  /https://context.aanhet.net  (mirror)
archive  :https://github.com/contextgarden/context
wiki :https://wiki.contextgarden.net
___
___
If your question is of interest to others as well, please add an entry to the 
Wiki!

maillist : ntg-context@ntg.nl / 
https://mailman.ntg.nl/mailman3/lists/ntg-context.ntg.nl
webpage  : https://www.pragma-ade.nl / https://context.aanhet.net (mirror)
archive  : https://github.com/contextgarden/context
wiki : https://wiki.contextgarden.net
___


Re: How to create a custom Debian ISO

2024-05-18 Thread Thomas Schmitt
Hi,

since Aditya Garg gets a reply here at debian-live, I add a link to the
thread on debian-user, beginning with Marvin Renich's proposal to
continue the thread there:

  https://lists.debian.org/debian-user/2024/05/msg00149.html

(I proposed for Debian Live
  https://live-team.pages.debian.net/live-manual/html/live-manual/index.en.html
and for Debian installation ISOs
  https://wiki.debian.org/RepackBootableISO
)


Roland Clobus wrote:
> I had a quick peek at the scripts you use [1].
> [1] https://github.com/t2linux/T2-Ubuntu/tree/LTS
> It appears to me that you want to create a custom Debian Live ISO (not a
> netinst image).

Just out of pure curiosity: From what script in particular do you get
this impression ?

AFAIK, Ubuntu desktop ISOs are always Live systems.
  https://releases.ubuntu.com/noble/
says
  "The desktop image allows you to try Ubuntu without changing your
   computer at all, and at your option to install it permanently later."


Have a nice day :)

Thomas



[Bug 2065321] Re: Kubuntu 24.04 Second boot always black screen

2024-05-18 Thread chaillan thomas
Replacing 'quiet splash' with just 'quiet' in GRUB_CMDLINE_LINUX_DEFAULT
in /etc/default/grub also allows a clean boot.
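
For reference, a minimal sketch of the edit (only this one line changes;
the GRUB configuration then needs to be regenerated, e.g. with
'sudo update-grub'):

  # /etc/default/grub
  GRUB_CMDLINE_LINUX_DEFAULT="quiet"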


** Attachment added: "prevboot.txt"
   
https://bugs.launchpad.net/ubuntu/+source/plymouth/+bug/2065321/+attachment/5779622/+files/prevboot.txt

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2065321

Title:
  Kubuntu 24.04 Second boot always black screen

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/plymouth/+bug/2065321/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 2065321] Re: Kubuntu 24.04 Second boot always black screen

2024-05-18 Thread chaillan thomas
Replacing 'quiet splash' with just 'quiet' in GRUB_CMDLINE_LINUX_DEFAULT
in /etc/default/grub also allows a clean boot.


** Attachment added: "prevboot.txt"
   
https://bugs.launchpad.net/ubuntu/+source/plymouth/+bug/2065321/+attachment/5779622/+files/prevboot.txt

-- 
You received this bug notification because you are a member of Kubuntu
Bugs, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/2065321

Title:
  Kubuntu 24.04 Second boot always black screen

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/plymouth/+bug/2065321/+subscriptions


-- 
kubuntu-bugs mailing list
kubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/kubuntu-bugs


[tmux/tmux] da0671: remove externs with no matching var; ok nicm@

2024-05-18 Thread 'Thomas Adam' via tmux-git
  Branch: refs/heads/master
  Home:   https://github.com/tmux/tmux
  Commit: da067193091c957b88770902cd49b9b0dece0836
  
https://github.com/tmux/tmux/commit/da067193091c957b88770902cd49b9b0dece0836
  Author: jsg 
  Date:   2024-05-18 (Sat, 18 May 2024)

  Changed paths:
M cmd.c

  Log Message:
  ---
  remove externs with no matching var; ok nicm@


  Commit: 03de52653ed845e43e25ccd17498f028dc618fe8
  
https://github.com/tmux/tmux/commit/03de52653ed845e43e25ccd17498f028dc618fe8
  Author: jsg 
  Date:   2024-05-18 (Sat, 18 May 2024)

  Changed paths:
M tmux.h

  Log Message:
  ---
  remove prototypes with no matching function; ok nicm@


  Commit: 0903790b0037ccacff1c4117bbf29e2f95fa3697
  
https://github.com/tmux/tmux/commit/0903790b0037ccacff1c4117bbf29e2f95fa3697
  Author: Thomas Adam 
  Date:   2024-05-18 (Sat, 18 May 2024)

  Changed paths:
M cmd.c
M tmux.h

  Log Message:
  ---
  Merge branch 'obsd-master'


Compare: https://github.com/tmux/tmux/compare/fc84097379d8...0903790b0037

To unsubscribe from these emails, change your notification settings at 
https://github.com/tmux/tmux/settings/notifications

-- 
You received this message because you are subscribed to the Google Groups 
"tmux-git" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to tmux-git+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/tmux-git/tmux/tmux/push/refs/heads/master/fc8409-090379%40github.com.


[NTG-context] Re: Special character

2024-05-18 Thread Thomas Meyer
I know I can copy and paste it, like here (copied from Wikipedia), but 
if I don't have a template in a hurry ...


Am 18.05.24 um 11:45 schrieb vm via ntg-context:



On 18/05/2024 11:17, Thomas Meyer wrote:

How can I write Łódź (on a Mac)?


Just as you wrote it in your mail message?
ConTeXt knows how to deal with utf-8

___ 

If your question is of interest to others as well, please add an entry 
to the Wiki!


maillist : ntg-context@ntg.nl / 
https://mailman.ntg.nl/mailman3/lists/ntg-context.ntg.nl
webpage  : https://www.pragma-ade.nl / https://context.aanhet.net 
(mirror)

archive  : https://github.com/contextgarden/context
wiki : https://wiki.contextgarden.net
___ 

___
If your question is of interest to others as well, please add an entry to the 
Wiki!

maillist : ntg-context@ntg.nl / 
https://mailman.ntg.nl/mailman3/lists/ntg-context.ntg.nl
webpage  : https://www.pragma-ade.nl / https://context.aanhet.net (mirror)
archive  : https://github.com/contextgarden/context
wiki : https://wiki.contextgarden.net
___


[kcolorchooser] [Bug 479406] The "Pick Screen Color" button is missing on Wayland session

2024-05-18 Thread Thomas Weißschuh
https://bugs.kde.org/show_bug.cgi?id=479406

Thomas Weißschuh  changed:

   What|Removed |Added

 CC||tho...@t-8ch.de

--- Comment #17 from Thomas Weißschuh  ---
This is a race condition in Qt. See
https://bugreports.qt.io/browse/QTBUG-120957?focusedId=794902=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-794902


The following workaround in kcolorchooser works for me:
```
diff --git a/kcolorchooser.cpp b/kcolorchooser.cpp
index 97297071e07e..bd9c6ef6cf94 100644
--- a/kcolorchooser.cpp
+++ b/kcolorchooser.cpp
@@ -67,6 +67,9 @@ int main(int argc, char *argv[])


QApplication::setWindowIcon(QIcon::fromTheme(QStringLiteral("kcolorchooser")));

+/* Work around https://bugreports.qt.io/browse/QTBUG-120957 */
+app.processEvents();
+
 QColorDialog dlg;
 dlg.setOption(QColorDialog::DontUseNativeDialog);
 QDialogButtonBox *box = dlg.findChild();
```

-- 
You are receiving this mail because:
You are watching all bug changes.

[NTG-context] Special character

2024-05-18 Thread Thomas Meyer

Hi folks,

I have problems producing the Polish letter ł/Ł.

\l gives ł, but \L gives nothing!

How can I write Łódź (on a Mac)?

Thanks and nice weekend.
Thomas

___
If your question is of interest to others as well, please add an entry to the 
Wiki!

maillist : ntg-context@ntg.nl / 
https://mailman.ntg.nl/mailman3/lists/ntg-context.ntg.nl
webpage  : https://www.pragma-ade.nl / https://context.aanhet.net (mirror)
archive  : https://github.com/contextgarden/context
wiki : https://wiki.contextgarden.net
___


Re: race condition when writing pg_control

2024-05-17 Thread Thomas Munro
On Fri, May 17, 2024 at 4:46 PM Thomas Munro  wrote:
> The specific problem here is that LocalProcessControlFile() runs in
> every launched child for EXEC_BACKEND builds.  Windows uses
> EXEC_BACKEND, and Windows' NTFS file system is one of the two file
> systems known to this list to have the concurrent read/write data
> mashing problem (the other being ext4).

Phngh... this is surprisingly difficult to fix.

Things that don't work: we "just" need to acquire ControlFileLock
while reading the file or examining the object in shared memory, or get
a copy of it passed through the EXEC_BACKEND BackendParameters that was
acquired while holding the lock.  But the current location of this code
in child startup is too early to use LWLocks, and the postmaster can't
acquire locks either, so it can't even safely take a copy to pass on.
You could reorder startup so that we are allowed to
acquire LWLocks in children at that point, but then you'd need to
convince yourself that there is no danger of breaking some ordering
requirement in external preload libraries, and figure out what to do
about children that don't even attach to shared memory.  Maybe that's
possible, but that doesn't sound like a good idea to back-patch.

First idea I've come up with to avoid all of that: pass a copy of
the "proto-controlfile", to coin a term for the one read early in
postmaster startup by LocalProcessControlFile().  As far as I know,
the only reason we need it is to suck some settings out of it that
don't change while a cluster is running (mostly can't change after
initdb, and checksums can only be {en,dis}abled while down).  Right?
Children can just "import" that sucker instead of calling
LocalProcessControlFile() to figure out the size of WAL segments yada
yada, I think?  Later they will attach to the real one in shared
memory for all future purposes, once normal interlocking is allowed.

I dunno.  Draft patch attached.  Better plans welcome.  This passes CI
on Linux systems afflicted by EXEC_BACKEND, and Windows.  Thoughts?
From 48c2de14bd9368b4708a99ecbb75452dc327e608 Mon Sep 17 00:00:00 2001
From: Thomas Munro 
Date: Sat, 18 May 2024 13:41:09 +1200
Subject: [PATCH v1 1/3] Fix pg_control corruption in EXEC_BACKEND startup.

When backend processes were launched in EXEC_BACKEND builds, they would
run LocalProcessControlFile() to read in pg_control and extract several
important settings.

This happens too early to acquire ControlFileLock, and the postmaster is
also not allowed to acquire ControlFileLock, so it can't safely take a
copy to give to the child.

Instead, pass down the "proto-controlfile" that was read by the
postmaster in LocalProcessControlFile().  Introduce functions
ExportProtoControlFile() and ImportProtoControlFile() to allow that.
Subprocesses will extract information from that, and then later attach
to the current control file in shared memory.

Reported-by: Melanie Plageman  per Windows CI failure
Discussion: https://postgr.es/m/CAAKRu_YNGwEYrorQYza_W8tU%2B%3DtoXRHG8HpyHC-KDbZqA_ZVSA%40mail.gmail.com
---
 src/backend/access/transam/xlog.c   | 46 +++--
 src/backend/postmaster/launch_backend.c | 19 ++
 src/include/access/xlog.h   |  5 +++
 3 files changed, 62 insertions(+), 8 deletions(-)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 330e058c5f2..b69a0d95af9 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -568,6 +568,10 @@ static WALInsertLockPadded *WALInsertLocks = NULL;
  */
 static ControlFileData *ControlFile = NULL;
 
+#ifdef EXEC_BACKEND
+static ControlFileData *ProtoControlFile = NULL;
+#endif
+
 /*
  * Calculate the amount of space left on the page after 'endptr'. Beware
  * multiple evaluation!
@@ -686,6 +690,7 @@ static bool PerformRecoveryXLogAction(void);
 static void InitControlFile(uint64 sysidentifier);
 static void WriteControlFile(void);
 static void ReadControlFile(void);
+static void ScanControlFile(void);
 static void UpdateControlFile(void);
 static char *str_time(pg_time_t tnow);
 
@@ -4309,9 +4314,7 @@ WriteControlFile(void)
 static void
 ReadControlFile(void)
 {
-	pg_crc32c	crc;
 	int			fd;
-	static char wal_segsz_str[20];
 	int			r;
 
 	/*
@@ -4344,6 +4347,15 @@ ReadControlFile(void)
 
 	close(fd);
 
+	ScanControlFile();
+}
+
+static void
+ScanControlFile(void)
+{
+	static char wal_segsz_str[20];
+	pg_crc32c	crc;
+
 	/*
 	 * Check for expected pg_control format version.  If this is wrong, the
 	 * CRC check will likely fail because we'll be checking the wrong number
@@ -4815,8 +4827,33 @@ LocalProcessControlFile(bool reset)
 	Assert(reset || ControlFile == NULL);
 	ControlFile = palloc(sizeof(ControlFileData));
 	ReadControlFile();
+
+#ifdef EXEC_BACKEND
+	/* We need to be able to give this to subprocesses. */
+	ProtoControlFile = ControlFile;
+#endif
+}
+
+#ifdef EXEC_BACKEND
+void
+E

Re: Refactoring backend fork+exec code

2024-05-17 Thread Thomas Munro
On Mon, Mar 18, 2024 at 10:41 PM Heikki Linnakangas  wrote:
> Committed, with some final cosmetic cleanups. Thanks everyone!

Nitpicking from UBSan with EXEC_BACKEND on Linux (line numbers may be
a bit off, from a branch of mine):

../src/backend/postmaster/launch_backend.c:772:2: runtime error: null
pointer passed as argument 2, which is declared to never be null
==13303==Using libbacktrace symbolizer.
#0 0x564b0202 in save_backend_variables
../src/backend/postmaster/launch_backend.c:772
#1 0x564b0242 in internal_forkexec
../src/backend/postmaster/launch_backend.c:311
#2 0x564b0bdd in postmaster_child_launch
../src/backend/postmaster/launch_backend.c:244
#3 0x564b3121 in StartChildProcess
../src/backend/postmaster/postmaster.c:3928
#4 0x564b933a in PostmasterMain
../src/backend/postmaster/postmaster.c:1357
#5 0x562de4ad in main ../src/backend/main/main.c:197
#6 0x7667ad09 in __libc_start_main
(/lib/x86_64-linux-gnu/libc.so.6+0x23d09)
#7 0x55e34279 in _start
(/tmp/cirrus-ci-build/build/tmp_install/usr/local/pgsql/bin/postgres+0x8e0279)

This silences it:

-   memcpy(param->startup_data, startup_data, startup_data_len);
+   if (startup_data_len > 0)
+   memcpy(param->startup_data, startup_data, startup_data_len);

(I found that out by testing EXEC_BACKEND on CI.  I also learned that
the Mac and FreeBSD tasks fail with EXEC_BACKEND because of SysV shmem
bleating.  We probably should go and crank up the relevant sysctls in
the .cirrus.tasks.yml...)




Re: Streaming read-ready sequential scan code

2024-05-17 Thread Thomas Munro
On Sat, May 18, 2024 at 11:30 AM Thomas Munro  wrote:
> Andres happened to have TPC-DS handy, and reproduced that regression
> in q15.  We tried some stuff and figured out that it requires
> parallel_leader_participation=on, ie that this looks like some kind of
> parallel fairness and/or timing problem.  It seems to be a question of
> which worker finishes up processing matching rows, and the leader gets
> a ~10ms head start but may be a little more greedy with the new
> streaming code.  He tried reordering the table contents and then saw
> 17 beat 16.  So for q15, initial indications are that this isn't a
> fundamental regression, it's just a test that is sensitive to some
> arbitrary conditions.
>
> I'll try to figure out some more details about that, ie is it being
> too greedy on small-ish tables,

After more debugging, we learned a lot more things...

1.  That query produces spectacularly bad estimates, so we finish up
having to increase the number of buckets in a parallel hash join many
times.  That is quite interesting, but unrelated to new code.
2.  Parallel hash join is quite slow at negotiating an increase in the
number of hash buckets if all of the input tuples are being filtered
out by quals, because of the choice of where workers check for
PHJ_GROWTH_NEED_MORE_BUCKETS.  That could be improved quite easily, I
think.  I have put that on my todo list 'cause that's also my code,
but it's not a new issue, it's just one that is now highlighted...
3.  This bit of read_stream.c is exacerbating unfairness in the
underlying scan, so that 1 and 2 come together and produce a nasty
slowdown, which goes away if you change it like so:

-   BlockNumber blocknums[16];
+   BlockNumber blocknums[1];

I will follow up after some more study.




Re: [PATCH v2] ntp: safeguard against time_constant overflow case

2024-05-17 Thread Thomas Gleixner
Justin!

On Fri, May 17 2024 at 00:47, Justin Stitt wrote:
>   if (txc->modes & ADJ_TIMECONST) {
> - time_constant = txc->constant;
> - if (!(time_status & STA_NANO))
> + if (!(time_status & STA_NANO) && time_constant < MAXTC)
>   time_constant += 4;
>   time_constant = min(time_constant, (long)MAXTC);
>   time_constant = max(time_constant, 0l);

Let me digest this.

The original code does:

time_constant = txc->constant;
if (!(time_status & STA_NANO))
time_constant += 4;
time_constant = min(time_constant, (long)MAXTC);
time_constant = max(time_constant, 0l);

Your change results in:

if (!(time_status & STA_NANO) && time_constant < MAXTC)
time_constant += 4;
time_constant = min(time_constant, (long)MAXTC);
time_constant = max(time_constant, 0l);

IOW, you lost the intent of the code to assign the user space supplied
value of txc->constant.
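
For illustration, one way to keep that assignment while still avoiding the
overflow would be to clamp the user supplied value before the += 4, in the
same min/max style as the original code (a sketch only, not a committed
fix):

	if (txc->modes & ADJ_TIMECONST) {
		/* Clamp into [0, MAXTC] first so the += 4 cannot overflow */
		time_constant = min(txc->constant, (long)MAXTC);
		time_constant = max(time_constant, 0l);
		if (!(time_status & STA_NANO))
			time_constant += 4;
		time_constant = min(time_constant, (long)MAXTC);
	}

(Note that this clamps negative values before the += 4, which is a slight
behavioural change for out-of-range negative input.)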

Aside from that, you clearly failed to map the deep analysis I provided
to you for the time_maxerror issue onto this one:

# git grep 'time_constant.*=' kernel/time/
ntp.c:66:static longtime_constant = 2;

  That's the static initializer

kernel/time/ntp.c:736:  time_constant = txc->constant;
kernel/time/ntp.c:738:  time_constant += 4;
kernel/time/ntp.c:739:  time_constant = min(time_constant, 
(long)MAXTC);
kernel/time/ntp.c:740:  time_constant = max(time_constant, 0l);

  That's the part of process_adjtimex_modes() you are trying to
  "fix". So it's exactly the same problem as with time_maxerror, no?

And therefore you provide a "safeguard" against overflow at the price of
making the syscall dysfunctional. Seriously?

Did you even try to run something other than the bad-case reproducer
against your fix?

No. You did not. Any of the related real use case tests would have
failed.

I told you yesterday:

   Tools are good to pin-point symptoms, but they are by definition
   patently bad in root cause analysis. Otherwise we could just let the
   tool write the "fix".

Such a tool would have at least produced a correct "fix" to cure the
symptom.

Thanks,

tglx



[jira] [Commented] (FLINK-16686) [State TTL] Make user class loader available in native RocksDB compaction thread

2024-05-17 Thread Thomas Weise (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17847467#comment-17847467
 ] 

Thomas Weise commented on FLINK-16686:
--

Flink 1.17:
{code:java}
Exception in thread "Thread-14" java.lang.IllegalArgumentException: classLoader 
cannot be null.
at com.esotericsoftware.kryo.Kryo.setClassLoader(Kryo.java:975)
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.checkKryoInitialized(KryoSerializer.java:550)
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:391)
at 
org.apache.flink.api.common.typeutils.CompositeSerializer.deserialize(CompositeSerializer.java:156)
at 
org.apache.flink.contrib.streaming.state.ttl.RocksDbTtlCompactFiltersManager$ListElementFilter.nextElementLastAccessTimestamp(RocksDbTtlCompactFiltersManager.java:205)
at 
org.apache.flink.contrib.streaming.state.ttl.RocksDbTtlCompactFiltersManager$ListElementFilter.nextUnexpiredOffset(RocksDbTtlCompactFiltersManager.java:191)
 {code}

> [State TTL] Make user class loader available in native RocksDB compaction 
> thread
> 
>
> Key: FLINK-16686
> URL: https://issues.apache.org/jira/browse/FLINK-16686
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.8.0, 1.11.3, 1.13.0, 1.12.3, 1.17.0
>Reporter: Andrey Zagrebin
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> The issue is initially reported 
> [here|https://stackoverflow.com/questions/60745711/flink-kryo-serializer-because-chill-serializer-couldnt-be-found].
> The problem is that the java code of Flink compaction filter is called from 
> RocksDB native C++ code. It is called in the context of the native compaction 
> thread. RocksDB has utilities to create java Thread context for the Flink 
> java callback. Presumably, the Java thread context class loader is not set at 
> all and if it is queried then it produces NullPointerException.
> The provided report enabled a list state with TTL. The compaction filter has 
> to deserialise elements to check expiration. The deserialiser relies on Kryo 
> which queries the thread context class loader which is expected to be the 
> user class loader of the task but turns out to be null.
> We should investigate how to pass the user class loader to the compaction 
> thread of the list state with TTL.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-16686) [State TTL] Make user class loader available in native RocksDB compaction thread

2024-05-17 Thread Thomas Weise (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Weise updated FLINK-16686:
-
Affects Version/s: 1.17.0

> [State TTL] Make user class loader available in native RocksDB compaction 
> thread
> 
>
> Key: FLINK-16686
> URL: https://issues.apache.org/jira/browse/FLINK-16686
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.8.0, 1.11.3, 1.13.0, 1.12.3, 1.17.0
>Reporter: Andrey Zagrebin
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> The issue is initially reported 
> [here|https://stackoverflow.com/questions/60745711/flink-kryo-serializer-because-chill-serializer-couldnt-be-found].
> The problem is that the java code of Flink compaction filter is called from 
> RocksDB native C++ code. It is called in the context of the native compaction 
> thread. RocksDB has utilities to create java Thread context for the Flink 
> java callback. Presumably, the Java thread context class loader is not set at 
> all and if it is queried then it produces NullPointerException.
> The provided report enabled a list state with TTL. The compaction filter has 
> to deserialise elements to check expiration. The deserialiser relies on Kryo 
> which queries the thread context class loader which is expected to be the 
> user class loader of the task but turns out to be null.
> We should investigate how to pass the user class loader to the compaction 
> thread of the list state with TTL.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Streaming read-ready sequential scan code

2024-05-17 Thread Thomas Munro
On Sat, May 18, 2024 at 8:09 AM Thomas Munro  wrote:
> On Sat, May 18, 2024 at 1:00 AM Alexander Lakhin  wrote:
> > I decided to compare v17 vs v16 performance (as I did the last year [1])
> > and discovered that v17 loses to v16 in the pg_tpcds (s64da_tpcds)
> > benchmark, query15 (and several others, but I focused on this one):
> > Best pg-src-master--.* worse than pg-src-16--.* by 52.2 percents (229.84 > 
> > 151.03): pg_tpcds.query15
> > Average pg-src-master--.* worse than pg-src-16--.* by 53.4 percents (234.20 
> > > 152.64): pg_tpcds.query15
> > Please look at the full html report attached in case you're interested.
> >
> > (I used my pg-mark tool to measure/analyze performance, but I believe the
> > same results can be seen without it.)
>
> Will investigate, but if it's easy for you to rerun, does it help if
> you increase Linux readahead, eg blockdev --setra setting?

Andres happened to have TPC-DS handy, and reproduced that regression
in q15.  We tried some stuff and figured out that it requires
parallel_leader_participation=on, ie that this looks like some kind of
parallel fairness and/or timing problem.  It seems to be a question of
which worker finishes up processing matching rows, and the leader gets
a ~10ms head start but may be a little more greedy with the new
streaming code.  He tried reordering the table contents and then saw
17 beat 16.  So for q15, initial indications are that this isn't a
fundamental regression, it's just a test that is sensitive to some
arbitrary conditions.

I'll try to figure out some more details about that, ie is it being
too greedy on small-ish tables, and generally I do wonder about the
interactions between the heuristics and batching working at different
levels (OS, seq scan, read stream, hence my earlier ra question which
is likely a red herring) and how there might be unintended
consequences/interference patterns, but this particular case seems
more data dependent.




User set to nologin

2024-05-17 Thread Thomas Minney
I recently tried to login to pause.perl.org with username tminney in order to 
delete the account as I no longer use it. Please can you help delete the 
account.



Re: About i386 support

2024-05-17 Thread Thomas Hochstein
Victor Gamper wrote:

> Is there a reason to do this? If so, what would be required to keep
> the i386 version, seeing as it still is important and used?





Re: Broken page link for buster installation

2024-05-17 Thread Thomas Lange
>>>>> On Fri, 17 May 2024 17:14:19 +0200, Holger Wansing  
>>>>> said:

> The buster release page says "To obtain and install Debian, see the 
installation 
> information page ..." but such page is not existant.
I removed this sentence.

-- 
regards Thomas



Bug#1071278: systemd 256 breaks dracut

2024-05-17 Thread Thomas Lange
Hi Luca,

it breaks the current version in unstable and earlier. So please add
Breaks: dracut (<= 060+5-7)

-- 

regards Thomas



Re: Streaming read-ready sequential scan code

2024-05-17 Thread Thomas Munro
On Sat, May 18, 2024 at 1:00 AM Alexander Lakhin  wrote:
> I decided to compare v17 vs v16 performance (as I did the last year [1])
> and discovered that v17 loses to v16 in the pg_tpcds (s64da_tpcds)
> benchmark, query15 (and several others, but I focused on this one):
> Best pg-src-master--.* worse than pg-src-16--.* by 52.2 percents (229.84 > 
> 151.03): pg_tpcds.query15
> Average pg-src-master--.* worse than pg-src-16--.* by 53.4 percents (234.20 > 
> 152.64): pg_tpcds.query15
> Please look at the full html report attached in case you're interested.
>
> (I used my pg-mark tool to measure/analyze performance, but I believe the
> same results can be seen without it.)

Will investigate, but if it's easy for you to rerun, does it help if
you increase Linux readahead, eg blockdev --setra setting?
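
For example (the device name is just a placeholder; check the current
value first, and note that --setra counts 512-byte sectors):

  blockdev --getra /dev/nvme0n1
  blockdev --setra 4096 /dev/nvme0n1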




[VOTE] Release Apache Commons Daemon 1.4.0 based on RC1

2024-05-17 Thread Mark Thomas
We have fixed a few bugs, added enhancements and updated the minimum 
Java and Windows version since Apache Commons Daemon 1.3.4 was released, 
so I would like to release Apache Commons Daemon 1.4.0.


Apache Commons Daemon 1.4.0 RC1 is available for review here:
https://dist.apache.org/repos/dist/dev/commons/daemon/1.4.0-RC1 
(svn revision 69267)


The Git tag commons-daemon-1.4.0-RC1 commit for this RC is 
6b911598b815a4a7b8ab2b8a8a2157593effc6bc which you can browse here:


https://gitbox.apache.org/repos/asf?p=commons-daemon.git;a=commit;h=6b911598b815a4a7b8ab2b8a8a2157593effc6bc
You may checkout this tag using:
git clone https://gitbox.apache.org/repos/asf/commons-daemon.git 
--branch commons-daemon-1.4.0-RC1 commons-daemon-1.4.0-RC1


Maven artifacts are here:

https://repository.apache.org/content/repositories/orgapachecommons-1729/commons-daemon/commons-daemon/1.4.0/

These are the artifacts and their hashes:

#Release SHA-512s
#Fri May 17 16:28:36 BST 2024
commons-daemon-1.4.0-bin-windows.zip=5974d638994cbf821c17d0fc6b69bace08b0314ea5614c1a57175a02cda7c57a6b8ee49f8892206061f9d3385da5841db31d9ce9b3ce74cf4afc10ad8e68
commons-daemon-1.4.0-bin.tar.gz=15fccd35a711f91e5b4466d56f50585c7ae3a787a39c16e006617c86b9e9feee9fbf902582b08c2e896ca6a655500d805fdbb9c97f04f70321631168b8d42c81
commons-daemon-1.4.0-bin.zip=3652ed9ed9cf6fcb0d4b5067570c322b0b3c9ae0a81dee1d7b0992bb7ff5654a7c4dc89c0c2d966c9962778288c6ad60bd8ac10f62898c9e10261bec6e61d3ea
commons-daemon-1.4.0-bom.json=0de219d72a63d8154f42ef5bd6c348936e14d65efec3e54a55ebfb9bc757e4ceac7aabd8c8b85d94657ed76f44069ac56b2bb231aba5419733f00a3dc85f6601
commons-daemon-1.4.0-bom.xml=bc0dba27a50ca6c5d30015f97bd258325452e6fabefd1cf38b94d0ce5699233a18b456fd701761a5f8cedf847cbd152879e0dec9add548611d5593b910c90244
commons-daemon-1.4.0-javadoc.jar=8fd299a3d228c4ab4ea8455b81319d80b3e27cac1c31bed1e03cc7a3391d59f18e037adcb72e68202511a45ef5bc49274d6e9cf38c860b55bb9b874a92044d2e
commons-daemon-1.4.0-native-src.tar.gz=8a54200d547ef7ee647e8d4910fd3cb55bf7d8fc75de8f0e01bc701ef0b386ddc3843e6c9189e34d2afd62060fb6299ea83c421cf60c7d105d04cb45904500d3
commons-daemon-1.4.0-native-src.zip=cb6b12bbd775eba7d012744cf908f42fc6d39e421c1f41546f230b431c1d239cc3e2d9c09520165b5db7a95701b651a6738a5d1915d39a4520b1ff07ce4f65a5
commons-daemon-1.4.0-sources.jar=701b3646ea29de5ea69d72c8741a2dc56a44a57168c0e7d1afab87f89d9cab75c413f1fe3d09f5765e4dbe2b2af0951125ee0f6a0a4d5b4fafcf49bfd0b03cbf
commons-daemon-1.4.0-src.tar.gz=285f33ce36e2591f49b6067da16612ec1b49b23a8637d077618aefaae4452993dc2a31660665551ea761857390d940100e162e205fe7c0fad9c72374f2d15bb8
commons-daemon-1.4.0-src.zip=190d6b8b65d71594ff02bade3fbcd6b09d5b2e68413a2a23ef2cbf945d2e19655c1d480484ec198f7e140eaa3744c970770cea17498c12f9bfe284f5bd28a51d
commons-daemon-1.4.0-test-sources.jar=e889d8b5bda1e0a89d33741e9308739b732e938ef13b552acf7dc0ba52845766e6a49f3fbb6c821655d295e18b9accbfeac1c26b8afacc088084511cea301bcd
commons-daemon-1.4.0-tests.jar=b392bdaa59e3d75e7aa023f65514385edfc44bc1bc088826b643186bfeaf47215375a814af3637e585bde201dd6ee5ef3669f2b4a3cf2e275da4fc6ccd91dfda
commons-daemon_commons-daemon-1.4.0.spdx.json=47c669c16aca4588d4940a4dcec162a619587f8fc8d6a74a5abbe8562296f0eb08f271db531e678a939355a9b7f669cb9ade864d953c77402b60e8c183f1faed



Details of changes since 1.3.4 are in the change log:

https://dist.apache.org/repos/dist/dev/commons/daemon/1.4.0-RC1/RELEASE-NOTES.txt

https://github.com/apache/commons-daemon/blob/master/src/changes/changes.xml

KEYS:
  https://downloads.apache.org/commons/KEYS

Please review the release candidate and vote.
This vote will close no sooner than 72 hours from now.

  [ ] +1 Release these artifacts
  [ ] +0 OK, but...
  [ ] -0 OK, but really should fix...
  [ ] -1 I oppose this release because...

Thank you,

Mark Thomas,
Release Manager (using key 10C01C5A2F6059E7)

The following is intended as a helper and refresher for reviewers.

Validating a release candidate
==

These guidelines are NOT complete.

Requirements: Git, Java, Maven.

You can validate a release from a release candidate (RC) tag as follows.

1a) Clone and checkout the RC tag

git clone https://gitbox.apache.org/repos/asf/commons-daemon.git 
--branch commons-daemon-1.4.0-RC1 commons-daemon-1.4.0-RC1

cd commons-daemon-1.4.0-RC1

1b) Download and unpack the source archive from:

https://dist.apache.org/repos/dist/dev/commons/daemon/1.4.0-RC1/source

2) Check Apache licenses

This step is not required if the site includes a RAT report page which 
you then must check.


mvn apache-rat:check

3) Check binary compatibility

Older components still use Apache Clirr:

This step is not required if the site includes a Clirr report page which 
you then must check.


mvn clirr:check

Newer components use JApiCmp with the japicmp Maven Profile:

This step is not required if the site includes a JApiCmp report page 
which you then must check.


mvn install -DskipTests -P japicmp japicmp:cmp

4) Build the package

mvn -V
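
In addition, reviewers may want to verify the published hashes and the
release manager's signature; a minimal sketch (file names are taken from
the hash listing above, and the .asc detached signatures are assumed to
sit next to the archives in the RC directory):

  # compare against the SHA-512 value listed in the vote mail
  sha512sum commons-daemon-1.4.0-src.tar.gz

  # import the committers' keys and check the detached signature
  wget https://downloads.apache.org/commons/KEYS
  gpg --import KEYS
  gpg --verify commons-daemon-1.4.0-src.tar.gz.asc commons-daemon-1.4.0-src.tar.gz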

Re: problem with the heartbeat interval feature

2024-05-17 Thread Thomas Peyric
thanks Hongshun for your response!

On Fri, 17 May 2024 at 07:51, Hongshun Wang  wrote:

> Hi Thomas,
>
> The Debezium docs say: For the connector to detect and process events from
> a heartbeat table, you must add the table to the PostgreSQL publication
> specified by the publication.name
> <https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-property-publication-name>
>  property.
> If this publication predates your Debezium deployment, the connector uses
> the publications as defined. If the publication is not already configured
> to automatically replicate changes FOR ALL TABLES in the database, you
> must explicitly add the heartbeat table to the publication[2].
>
> Thus, if you want to use heartbeat in CDC:
>
>1. add a heartbeat table to publication: ALTER PUBLICATION
>** ADD TABLE **;
>2. set heartbeatInterval
>3. add debezium.heartbeat.action.query
>
> <https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-property-heartbeat-action-query>
> [3]
>
> However, when I use it in CDC, an exception occurs:
>
> Caused by: java.lang.NullPointerException
> at 
> io.debezium.heartbeat.HeartbeatFactory.createHeartbeat(HeartbeatFactory.java:55)
> at io.debezium.pipeline.EventDispatcher.(EventDispatcher.java:127)
> at io.debezium.pipeline.EventDispatcher.(EventDispatcher.java:94)
>
>
>
>
> It seems CDC doesn't add a HeartbeatConnectionProvider when configuring
> PostgresEventDispatcher:
>
> //org.apache.flink.cdc.connectors.postgres.source.fetch.PostgresSourceFetchTaskContext#configurethis.postgresDispatcher
>  =
> new PostgresEventDispatcher<>(
> dbzConfig,
> topicSelector,
> schema,
> queue,
> dbzConfig.getTableFilters().dataCollectionFilter(),
> DataChangeEvent::new,
> metadataProvider,
> schemaNameAdjuster);
>
>
> In Debezium, when PostgresConnectorTask starts, it will do this:
>
> //io.debezium.connector.postgresql.PostgresConnectorTask#start  final 
> PostgresEventDispatcher dispatcher = new PostgresEventDispatcher<>(
> connectorConfig,
> topicNamingStrategy,
> schema,
> queue,
> connectorConfig.getTableFilters().dataCollectionFilter(),
> DataChangeEvent::new,
> PostgresChangeRecordEmitter::updateSchema,
> metadataProvider,
> connectorConfig.createHeartbeat(
> topicNamingStrategy,
> schemaNameAdjuster,
> () -> new 
> PostgresConnection(connectorConfig.getJdbcConfig(), 
> PostgresConnection.CONNECTION_GENERAL),
> exception -> {
> String sqlErrorId = exception.getSQLState();
> switch (sqlErrorId) {
> case "57P01":
> // Postgres error admin_shutdown, see 
> https://www.postgresql.org/docs/12/errcodes-appendix.html 
>throw new DebeziumException("Could not execute heartbeat 
> action query (Error: " + sqlErrorId + ")", exception);
> case "57P03":
> // Postgres error cannot_connect_now, 
> see https://www.postgresql.org/docs/12/errcodes-appendix.html 
>throw new RetriableException("Could not execute 
> heartbeat action query (Error: " + sqlErrorId + ")", exception);
> default:
> break;
> }
> }),
> schemaNameAdjuster,
> signalProcessor);
>
> Thus, I have created a new jira[4] to fix it.
>
>
>
>  [1]
> https://nightlies.apache.org/flink/flink-cdc-docs-master/docs/connectors/legacy-flink-cdc-sources/postgres-cdc/
>
> [2]
> https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-property-heartbeat-interval-ms
>
> [3]
> https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-property-heartbeat-action-query
>
> [4] https://issues.apache.org/jira/browse/FLINK-35387
>
>
> Best
>
> Hongshun
>
>
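
(To make the quoted steps 1-3 concrete, here is a minimal, untested
sketch; the database, publication and table names are placeholders, and
the heartbeat interval plus debezium.heartbeat.action.query then go into
the connector options as described in the quoted mail.)

  # create a placeholder heartbeat table and add it to the publication used by the connector
  psql -d mydb -c "CREATE TABLE IF NOT EXISTS debezium_heartbeat (id int PRIMARY KEY, ts timestamptz)"
  psql -d mydb -c "ALTER PUBLICATION my_publication ADD TABLE debezium_heartbeat"
  # debezium.heartbeat.action.query could then be something like:
  #   INSERT INTO debezium_heartbeat (id, ts) VALUES (1, now())
  #     ON CONFLICT (id) DO UPDATE SET ts = now()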

[Craft] [Bug 486905] Qt6 Webengine crash - required icudtl.dat is missing from image

2024-05-17 Thread Thomas Friedrichsmeier
https://bugs.kde.org/show_bug.cgi?id=486905

--- Comment #2 from Thomas Friedrichsmeier  
---
Some more observations:
- The qtwebengine version in the craft cache was indeed built with
-DQT_FEATURE_webengine_system=ON (see
https://files.kde.org/craft/Qt6/24.04/windows/cl/msvc2022/x86_64/RelWithDebInfo/libs/qt6/qtwebengine/).
- This _should_ mean that icudtl.dat is not needed, but apparently the option
is not working correctly on Windows. The log shows "bundled_icu" being built
(around steps 5280+).
- I note a similar-sounding bug report in homebrew (MacOS):
https://github.com/Homebrew/homebrew-core/issues/104008 . Here, the conclusion
has apparently been to make system-icu a linux only option. In fact, the Mac
logs, too, show "bundled_icu" being built despite the build being configured for
system_icu.

So, apparently what's happening is that the option is ignored while building,
and then (inappropriately) honored while installing.

The easiest solution would be to confine the option to Linux. (Still hoping to
verify the approach with a test-build, though. Let's see if my (virtual) disk
space is large enough, this time).

-- 
You are receiving this mail because:
You are watching all bug changes.

[pve-devel] applied: [PATCH qemu] fixes for QEMU 9.0

2024-05-17 Thread Thomas Lamprecht
Am 17/05/2024 um 10:44 schrieb Fiona Ebner:
> Most importantly, fix forwards and backwards migration with VirtIO-GPU
> display.
> 
> Other fixes are for a regression in pflash device (introduced in 8.2)
> and some fixes for x86(_64) TCG emulation. One of the patches needed
> to be adapted, because it removed a helper that is still in use in
> 9.0.0.
> 
> There also is a revert for a fix in VirtIO PCI devices that turned out
> to cause some issues, see the revert itself for more details.
> 
> Lastly, there is a change to move compatibility flags for a new
> VirtIO-net feature to the correct machine type. The feature was
> introduced in QEMU 8.2, but the compatibility flags got added to
> machine version 8.0 instead of 8.1. This breaks backwards migration
> with machine version 8.1 from a 8.2/9.0 binary to an 8.1 binary, in
> cases where the guest kernel enables the feature (e.g. Ubuntu 23.10).
> While that breaks migration with machine version 8.1 from an unpatched
> to a patched binary, Proxmox VE only ever had 8.2 on the test
> repository and 9.0 not yet in any public repository. An upstream
> developer suggested it is the proper fix [0]. Upstream submission [1].
> 
> [0]: 
> https://lore.kernel.org/qemu-devel/cacgkmetzrjuhof+hugvrvllqe+8nqe5xmshpt0naq1epnqf...@mail.gmail.com/T/#u
> [1]: 
> https://lore.kernel.org/qemu-devel/20240517075336.104091-1-f.eb...@proxmox.com/T/#u
> 
> Signed-off-by: Fiona Ebner 
> ---
>  .../0006-virtio-gpu-fix-v2-migration.patch| 98 +++
>  ...0007-hw-pflash-fix-block-write-start.patch | 59 +++
>  ...operand-size-for-DATA16-REX.W-POPCNT.patch | 51 ++
>  ...ru-wrpkru-are-no-prefix-instructions.patch | 40 
>  ...6-fix-feature-dependency-for-WAITPKG.patch | 33 +++
>  ...tio-pci-fix-use-of-a-released-vector.patch | 87 
>  ...move-compatibility-flags-for-VirtIO-.patch | 57 +++
>  ...sed-balloon-qemu-4-0-config-size-fal.patch |  4 +-
>  debian/patches/series |  7 ++
>  9 files changed, 434 insertions(+), 2 deletions(-)
>  create mode 100644 
> debian/patches/extra/0006-virtio-gpu-fix-v2-migration.patch
>  create mode 100644 
> debian/patches/extra/0007-hw-pflash-fix-block-write-start.patch
>  create mode 100644 
> debian/patches/extra/0008-target-i386-fix-operand-size-for-DATA16-REX.W-POPCNT.patch
>  create mode 100644 
> debian/patches/extra/0009-target-i386-rdpkru-wrpkru-are-no-prefix-instructions.patch
>  create mode 100644 
> debian/patches/extra/0010-target-i386-fix-feature-dependency-for-WAITPKG.patch
>  create mode 100644 
> debian/patches/extra/0011-Revert-virtio-pci-fix-use-of-a-released-vector.patch
>  create mode 100644 
> debian/patches/extra/0012-hw-core-machine-move-compatibility-flags-for-VirtIO-.patch
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Bug#1071182: dracut: requires changes for systemd 256; boot fails otherwise

2024-05-17 Thread Thomas Lange
The related systemd bug is #1071278
-- 
regards Thomas



Re: [PATCH v2 2/2] drm/mgag200: Add an option to disable Write-Combine

2024-05-17 Thread Thomas Zimmermann

Hi

Am 17.05.24 um 17:09 schrieb Jocelyn Falempe:

Unfortunately, the G200 ioburst workaround doesn't work on some
servers like Dell poweredge XR11, XR5610, or HPE XL260. In this case
completely disabling WC is the only option to achieve low-latency.
So this adds a new Kconfig option to disable WC mapping of the G200.

Signed-off-by: Jocelyn Falempe 


Reviewed-by: Thomas Zimmermann 

Thanks a lot for the fix.

Best regards
Thomas


---
  drivers/gpu/drm/mgag200/Kconfig   | 10 ++
  drivers/gpu/drm/mgag200/mgag200_drv.c |  6 ++
  2 files changed, 16 insertions(+)

diff --git a/drivers/gpu/drm/mgag200/Kconfig b/drivers/gpu/drm/mgag200/Kconfig
index b28c5e4828f47..3096944a8f0ab 100644
--- a/drivers/gpu/drm/mgag200/Kconfig
+++ b/drivers/gpu/drm/mgag200/Kconfig
@@ -11,3 +11,13 @@ config DRM_MGAG200
 MGA G200 desktop chips and the server variants. It requires 0.3.0
 of the modesetting userspace driver, and a version of mga driver
 that will fail on KMS enabled devices.
+
+config DRM_MGAG200_DISABLE_WRITECOMBINE
+   bool "Disable Write Combine mapping of VRAM"
+   depends on DRM_MGAG200 && PREEMPT_RT
+   help
+ The VRAM of the G200 is mapped with Write-Combine to improve
+ performances. This can interfere with real-time tasks; even if they
+ are running on other CPU cores than the graphics output.
+ Enable this option only if you run realtime tasks on a server with a
+ Matrox G200.
\ No newline at end of file
diff --git a/drivers/gpu/drm/mgag200/mgag200_drv.c 
b/drivers/gpu/drm/mgag200/mgag200_drv.c
index 3883f25ca4d8b..62080cf0f2da4 100644
--- a/drivers/gpu/drm/mgag200/mgag200_drv.c
+++ b/drivers/gpu/drm/mgag200/mgag200_drv.c
@@ -146,12 +146,18 @@ int mgag200_device_preinit(struct mga_device *mdev)
}
mdev->vram_res = res;
  
+#if defined(CONFIG_DRM_MGAG200_DISABLE_WRITECOMBINE)

+   mdev->vram = devm_ioremap(dev->dev, res->start, resource_size(res));
+   if (!mdev->vram)
+   return -ENOMEM;
+#else
mdev->vram = devm_ioremap_wc(dev->dev, res->start, resource_size(res));
if (!mdev->vram)
return -ENOMEM;
  
  	/* Don't fail on errors, but performance might be reduced. */

devm_arch_phys_wc_add(dev->dev, res->start, resource_size(res));
+#endif
  
  	return 0;

  }


--
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)



Bug#1071182: dracut: requires changes for systemd 256; boot fails otherwise

2024-05-17 Thread Thomas Lange
I also filed a bug against systemd because this problem can be solved
by both packages and there are plans to replace dracut by dracut-ng.
But that will need more time.


regards Thomas



Bug#1071278: systemd 256 breaks dracut

2024-05-17 Thread Thomas Lange


Package: systemd
Version: 256~rc2-3
Severity: serious

systemd changed its behaviour and now makes /usr read-only in the
initrd. This breaks dracut and vice versa.
This bug is related to #1071182 which says dracut breaks systemd.
Please add a Breaks: dracut(<<..)

Currently I do not know when I will update dracut, because there are
also plans to replace dracut by dracut-ng, which may involve more
time. I am not sure in which package I will invest my available time.

In order not to break the systems of our users, IMO the smallest change
would be to add the Breaks: line to systemd.

-- 
 Thomas



Re: [pve-devel] [RFC qemu] savevm-async: improve check for blockers

2024-05-17 Thread Thomas Lamprecht
subject might be improved by being less general/ambiguous, something like:

savevm-async: improve coverage by also checking for migration blockers

or 

savevm-async: block snapshot also if migration would fail

or

savevm-async: reuse migration blocker check for snapshots

Would have helped me to have a better initial context for reading this commit
(message).

Am 17/05/2024 um 13:39 schrieb Fiona Ebner:
> Same rationale as with upstream QEMU commit 5aaac46793 ("migration:
> savevm: consult migration blockers"), migration and (async) snapshot
> are essentially the same operation and thus snapshot also needs to
> check for migration blockers. For example, this catches passed-through
> PCI devices, where the driver does not support migration.
> 
> However, the commit notes:
> 
>> There is really no difference between live migration and savevm, except
>> that savevm does not require bdrv_invalidate_cache to be implemented
>> by all disks.  However, it is unlikely that savevm is used with anything
>> except qcow2 disks, so the penalty is small and worth the improvement
>> in catching bad usage of savevm.
> 
> and for Proxmox VE, suspend-to-disk with VMDK does use savevm-async
> and would be broken by simply using migration_is_blocked(). To keep
> this working, introduce a new helper that filters blockers with the
> prefix used by the VMDK migration blocker.
> 
> The function qemu_savevm_state_blocked() is called as part of
> migration_is_blocked_allow_prefix() so no check is lost with this
> patch.
> 
> Signed-off-by: Fiona Ebner 
> ---
> 
> An alternative would be to mark the VMDK blocker as a
> "live-migration-only" blocker and extending migration_is_blocked() or
> using separate helpers to check for migration and snapshot blockers
> differently. But that requires touching more machinery and probably
> needs more adaptation going forward than the approach here.
> 
>  migration/migration.c| 22 ++
>  migration/migration.h|  1 +
>  migration/savevm-async.c |  7 ++-
>  3 files changed, 29 insertions(+), 1 deletion(-)
> 
> diff --git a/migration/migration.c b/migration/migration.c
> index b8d7e471a4..6235309a00 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1897,6 +1897,28 @@ void qmp_migrate_pause(Error **errp)
> "during postcopy-active or postcopy-recover state");
>  }
>  
> +/*
> + * HACK to allow hibernation in Proxmox VE even when VMDK image is present.
> + */
> +bool migration_is_blocked_allow_prefix(Error **errp, const char *prefix)
> +{
> +GSList *blockers = migration_blockers[migrate_mode()];
> +
> +if (qemu_savevm_state_blocked(errp)) {
> +return true;
> +}
> +
> +while (blockers) {
> +if (!g_str_has_prefix(error_get_pretty(blockers->data), prefix)) {
> +error_propagate(errp, error_copy(blockers->data));
> +return true;
> +}
> +blockers = g_slist_next(blockers);
> +}
> +
> +return false;
> +}
> +
>  bool migration_is_blocked(Error **errp)
>  {
>  GSList *blockers = migration_blockers[migrate_mode()];
> diff --git a/migration/migration.h b/migration/migration.h
> index 8045e39c26..575805a8e2 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -484,6 +484,7 @@ int migration_call_notifiers(MigrationState *s, 
> MigrationEventType type,
>   Error **errp);
>  
>  int migrate_init(MigrationState *s, Error **errp);
> +bool migration_is_blocked_allow_prefix(Error **errp, const char *prefix);
>  bool migration_is_blocked(Error **errp);
>  /* True if outgoing migration has entered postcopy phase */
>  bool migration_in_postcopy(void);
> diff --git a/migration/savevm-async.c b/migration/savevm-async.c
> index bf36fc06d2..33085446e1 100644
> --- a/migration/savevm-async.c
> +++ b/migration/savevm-async.c
> @@ -363,7 +363,12 @@ void qmp_savevm_start(const char *statefile, Error 
> **errp)
>  return;
>  }
>  
> -if (qemu_savevm_state_blocked(errp)) {
> +/*
> + * The limitation for VMDK images only applies to live-migration, not
> + * snapshots, see commit 5aaac46793 ("migration: savevm: consult 
> migration
> + * blockers").
> + */
> +if (migration_is_blocked_allow_prefix(errp, "The vmdk format used by 
> node")) {

meh, not a big fan of matching strings here, especially as that is not
stable ABI, I mean I did not check, but I would be surprised if that's
the case – maybe we could factor out that string here and when its added
as blocker into a common constant so that we'd notice if it changes.

And if we only use this here, then why add a generic "ignore one specific
blocker" helper, might be better to at least contain that detail in a
"qemu_savevm_async_state_blocked" one that takes only the `errp` as
parameter, as hacks should IMO always be quite specific to avoid the
spread of them (I know you would check in detail before doing so, but
not everybody does).

>  

Re: [Daemon] Anything to appease "Wrong type of arguments to formatting function"

2024-05-17 Thread Mark Thomas

Set them as false positives or just ignore them.

Mark


On 17/05/2024 15:09, Gary Gregory wrote:

Mark and all:

Is there anything simple to do to appease the warnings "Wrong type of
arguments to formatting function" seen at
https://github.com/apache/commons-daemon/security/code-scanning ?

TY
Gary

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: Broken page link for buster installation

2024-05-17 Thread Thomas Lange
Having LTS support does not mean that we recommend installing new
systems using the outdated release. Therefore I've removed the link.

And where is that page mentioned?


> Editing the page like this is not professional (mentioning a page,
> which is not there anymore).
> Buster is still under LTS support, so not archived strictly.

-- 
regards Thomas



[systemdgenie] [Bug 487148] New: SystemdGenie unresponsive (hangs) while refreshing after enabling/disabling or starting/stopping a system unit

2024-05-17 Thread Thomas Bertels
https://bugs.kde.org/show_bug.cgi?id=487148

Bug ID: 487148
   Summary: SystemdGenie unresponsive (hangs) while refreshing
after enabling/disabling or starting/stopping a system
unit
Classification: Plasma
   Product: systemdgenie
   Version: unspecified
  Platform: Manjaro
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: general
  Assignee: rthoms...@gmail.com
  Reporter: tbert...@gmail.com
  Target Milestone: ---

SUMMARY

SystemdGenie hangs while refreshing the list after enabling/disabling or
starting/stopping a system unit.

STEPS TO REPRODUCE
1. Open SystemdGenie
2. Check the boxes to show inactive and unloaded
3. Enable a system unit
3. (alternative way) open a console and enter the command "systemctl status
[unit name]"

OBSERVED RESULT

SystemdGenie hangs.

EXPECTED RESULT

SystemdGenie quickly updates the status of the enabled unit.

SOFTWARE/OS VERSIONS
SystemdGenie: 0.99.0
KDE Plasma Version: 6.0.4
KDE Frameworks Version: 6.1.0
Qt Version: 6.7.0

ADDITIONAL INFORMATION

The systemd process cpu usage jumps to 100% while SystemdGenie hangs.

systemctl status [unit name] quickly shows the updated unit status.
That same command also hangs SystemdGenie if it wasn't hanging already.

-- 
You are receiving this mail because:
You are watching all bug changes.

[gcc r14-10216] Fortran: Fix select type regression due to r14-9489 [PR114874]

2024-05-17 Thread Paul Thomas via Gcc-cvs
https://gcc.gnu.org/g:c887341432bb71cf5540d54955ad7265b0aaca77

commit r14-10216-gc887341432bb71cf5540d54955ad7265b0aaca77
Author: Paul Thomas 
Date:   Fri May 17 15:19:26 2024 +0100

Fortran: Fix select type regression due to r14-9489 [PR114874]

2024-05-17  Paul Thomas  

gcc/fortran
PR fortran/114874
* gfortran.h: Add 'assoc_name_inferred' to gfc_namespace.
* match.cc (gfc_match_select_type): Set 'assoc_name_inferred'
in select type namespace if the selector has inferred type.
* primary.cc (gfc_match_varspec): If a select type temporary
is apparently scalar and a left parenthesis has been detected,
check the current namespace has 'assoc_name_inferred' set. If
so, set inferred_type.
* resolve.cc (resolve_variable): If the namespace of a select
type temporary is marked with 'assoc_name_inferred' call
gfc_fixup_inferred_type_refs to ensure references are OK.
(gfc_fixup_inferred_type_refs): Catch invalid array refs..

gcc/testsuite/
PR fortran/114874
* gfortran.dg/pr114874_1.f90: New test for valid code.
* gfortran.dg/pr114874_2.f90: New test for invalid code.

(cherry picked from commit 5f5074fe7aaf9524defb265299a985eecba7f914)

Diff:
---
 gcc/fortran/gfortran.h   |  4 +++
 gcc/fortran/match.cc | 21 +
 gcc/fortran/primary.cc   | 10 +++---
 gcc/fortran/resolve.cc   | 17 +++---
 gcc/testsuite/gfortran.dg/pr114874_1.f90 | 32 +++
 gcc/testsuite/gfortran.dg/pr114874_2.f90 | 53 
 6 files changed, 128 insertions(+), 9 deletions(-)

diff --git a/gcc/fortran/gfortran.h b/gcc/fortran/gfortran.h
index 58505446bac5..de3d9e25911b 100644
--- a/gcc/fortran/gfortran.h
+++ b/gcc/fortran/gfortran.h
@@ -2241,6 +2241,10 @@ typedef struct gfc_namespace
   /* Set when resolve_types has been called for this namespace.  */
   unsigned types_resolved:1;
 
+  /* Set if the associate_name in a select type statement is an
+ inferred type.  */
+  unsigned assoc_name_inferred:1;
+
   /* Set to 1 if code has been generated for this namespace.  */
   unsigned translated:1;
 
diff --git a/gcc/fortran/match.cc b/gcc/fortran/match.cc
index 4539c9bb1344..1851a8f94a54 100644
--- a/gcc/fortran/match.cc
+++ b/gcc/fortran/match.cc
@@ -6721,6 +6721,27 @@ gfc_match_select_type (void)
   goto cleanup;
 }
 
+  /* Select type namespaces are not filled until resolution. Therefore, the
+ namespace must be marked as having an inferred type associate name if
+ either expr1 is an inferred type variable or expr2 is. In the latter
+ case, as well as the symbol being marked as inferred type, it might be
+ that it has not been detected to be so. In this case the target has
+ unknown type. Once the namespace is marked, the fixups in resolution can
+ be triggered.  */
+  if (!expr2
+  && expr1->symtree->n.sym->assoc
+  && expr1->symtree->n.sym->assoc->inferred_type)
+gfc_current_ns->assoc_name_inferred = 1;
+  else if (expr2 && expr2->expr_type == EXPR_VARIABLE
+  && expr2->symtree->n.sym->assoc)
+{
+  if (expr2->symtree->n.sym->assoc->inferred_type)
+   gfc_current_ns->assoc_name_inferred = 1;
+  else if (expr2->symtree->n.sym->assoc->target
+  && expr2->symtree->n.sym->assoc->target->ts.type == BT_UNKNOWN)
+   gfc_current_ns->assoc_name_inferred = 1;
+}
+
   new_st.op = EXEC_SELECT_TYPE;
   new_st.expr1 = expr1;
   new_st.expr2 = expr2;
diff --git a/gcc/fortran/primary.cc b/gcc/fortran/primary.cc
index 606e84432be6..c4821030ebb5 100644
--- a/gcc/fortran/primary.cc
+++ b/gcc/fortran/primary.cc
@@ -2113,13 +2113,13 @@ gfc_match_varspec (gfc_expr *primary, int equiv_flag, 
bool sub_flag,
 
   inferred_type = IS_INFERRED_TYPE (primary);
 
-  /* SELECT TYPE and SELECT RANK temporaries within an ASSOCIATE block, whose
- selector has not been parsed, can generate errors with array and component
- refs.. Use 'inferred_type' as a flag to suppress these errors.  */
+  /* SELECT TYPE temporaries within an ASSOCIATE block, whose selector has not
+ been parsed, can generate errors with array refs.. The SELECT TYPE
+ namespace is marked with 'assoc_name_inferred'. During resolution, this is
+ detected and gfc_fixup_inferred_type_refs is called.  */
   if (!inferred_type
-  && (gfc_peek_ascii_char () == '(' && !sym->attr.dimension)
-  && !sym->attr.codimension
   && sym->attr.select_type_temporary
+  && sym->ns->assoc_name_inferred
   && !sym->attr.select_rank_temporary)
 inferred_type = true;
 
diff --git

[gcc r15-633] Fortran: Fix select type regression due to r14-9489 [PR114874]

2024-05-17 Thread Paul Thomas via Gcc-cvs
https://gcc.gnu.org/g:5f5074fe7aaf9524defb265299a985eecba7f914

commit r15-633-g5f5074fe7aaf9524defb265299a985eecba7f914
Author: Paul Thomas 
Date:   Fri May 17 15:19:26 2024 +0100

Fortran: Fix select type regression due to r14-9489 [PR114874]

2024-05-17  Paul Thomas  

gcc/fortran
PR fortran/114874
* gfortran.h: Add 'assoc_name_inferred' to gfc_namespace.
* match.cc (gfc_match_select_type): Set 'assoc_name_inferred'
in select type namespace if the selector has inferred type.
* primary.cc (gfc_match_varspec): If a select type temporary
is apparently scalar and a left parenthesis has been detected,
check the current namespace has 'assoc_name_inferred' set. If
so, set inferred_type.
* resolve.cc (resolve_variable): If the namespace of a select
type temporary is marked with 'assoc_name_inferred' call
gfc_fixup_inferred_type_refs to ensure references are OK.
(gfc_fixup_inferred_type_refs): Catch invalid array refs..

gcc/testsuite/
PR fortran/114874
* gfortran.dg/pr114874_1.f90: New test for valid code.
* gfortran.dg/pr114874_2.f90: New test for invalid code.

Diff:
---
 gcc/fortran/gfortran.h   |  4 +++
 gcc/fortran/match.cc | 21 +
 gcc/fortran/primary.cc   | 10 +++---
 gcc/fortran/resolve.cc   | 17 +++---
 gcc/testsuite/gfortran.dg/pr114874_1.f90 | 32 +++
 gcc/testsuite/gfortran.dg/pr114874_2.f90 | 53 
 6 files changed, 128 insertions(+), 9 deletions(-)

diff --git a/gcc/fortran/gfortran.h b/gcc/fortran/gfortran.h
index a7a0fdba3dd3..de1a7cd09352 100644
--- a/gcc/fortran/gfortran.h
+++ b/gcc/fortran/gfortran.h
@@ -2242,6 +2242,10 @@ typedef struct gfc_namespace
   /* Set when resolve_types has been called for this namespace.  */
   unsigned types_resolved:1;
 
+  /* Set if the associate_name in a select type statement is an
+ inferred type.  */
+  unsigned assoc_name_inferred:1;
+
   /* Set to 1 if code has been generated for this namespace.  */
   unsigned translated:1;
 
diff --git a/gcc/fortran/match.cc b/gcc/fortran/match.cc
index 4539c9bb1344..1851a8f94a54 100644
--- a/gcc/fortran/match.cc
+++ b/gcc/fortran/match.cc
@@ -6721,6 +6721,27 @@ gfc_match_select_type (void)
   goto cleanup;
 }
 
+  /* Select type namespaces are not filled until resolution. Therefore, the
+ namespace must be marked as having an inferred type associate name if
+ either expr1 is an inferred type variable or expr2 is. In the latter
+ case, as well as the symbol being marked as inferred type, it might be
+ that it has not been detected to be so. In this case the target has
+ unknown type. Once the namespace is marked, the fixups in resolution can
+ be triggered.  */
+  if (!expr2
+  && expr1->symtree->n.sym->assoc
+  && expr1->symtree->n.sym->assoc->inferred_type)
+gfc_current_ns->assoc_name_inferred = 1;
+  else if (expr2 && expr2->expr_type == EXPR_VARIABLE
+  && expr2->symtree->n.sym->assoc)
+{
+  if (expr2->symtree->n.sym->assoc->inferred_type)
+   gfc_current_ns->assoc_name_inferred = 1;
+  else if (expr2->symtree->n.sym->assoc->target
+  && expr2->symtree->n.sym->assoc->target->ts.type == BT_UNKNOWN)
+   gfc_current_ns->assoc_name_inferred = 1;
+}
+
   new_st.op = EXEC_SELECT_TYPE;
   new_st.expr1 = expr1;
   new_st.expr2 = expr2;
diff --git a/gcc/fortran/primary.cc b/gcc/fortran/primary.cc
index 8e7833769a8f..76f6bcb8a789 100644
--- a/gcc/fortran/primary.cc
+++ b/gcc/fortran/primary.cc
@@ -2113,13 +2113,13 @@ gfc_match_varspec (gfc_expr *primary, int equiv_flag, 
bool sub_flag,
 
   inferred_type = IS_INFERRED_TYPE (primary);
 
-  /* SELECT TYPE and SELECT RANK temporaries within an ASSOCIATE block, whose
- selector has not been parsed, can generate errors with array and component
- refs.. Use 'inferred_type' as a flag to suppress these errors.  */
+  /* SELECT TYPE temporaries within an ASSOCIATE block, whose selector has not
+ been parsed, can generate errors with array refs.. The SELECT TYPE
+ namespace is marked with 'assoc_name_inferred'. During resolution, this is
+ detected and gfc_fixup_inferred_type_refs is called.  */
   if (!inferred_type
-  && (gfc_peek_ascii_char () == '(' && !sym->attr.dimension)
-  && !sym->attr.codimension
   && sym->attr.select_type_temporary
+  && sym->ns->assoc_name_inferred
   && !sym->attr.select_rank_temporary)
 inferred_type = true;
 
diff --git a/gcc/fortran/resolve.cc b/gcc/fortran/resolve.cc
index 4368627041ed..d7a08

[jira] [Commented] (DAEMON-447) Allow to rotate stdout and stderr redirected logs

2024-05-17 Thread Mark Thomas (Jira)


[ 
https://issues.apache.org/jira/browse/DAEMON-447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847313#comment-17847313
 ] 

Mark Thomas commented on DAEMON-447:


Note: DAEMON-213 has a link to a potential alternative suggestion

> Allow to rotate stdout and stderr redirected logs 
> --
>
> Key: DAEMON-447
> URL: https://issues.apache.org/jira/browse/DAEMON-447
> Project: Commons Daemon
>  Issue Type: Improvement
>  Components: prunsrv
>Affects Versions: 1.3.0, 1.3.1
> Environment: Windows 10; WIndows Server  2016
>Reporter: Ivan Pedruzzi
>Priority: Major
>
> We have a large legacy web application which makes use of System.out.println 
> to print errors.  
> Our Tomcat 9 is configured to redirect stdout to file using switch --StdOut
> In some peculiar conditions our web application can print a very large amount 
> error which end up in the log file and can quickly fill the hard drive, 
> crashing the system.  
> Looking at the code in prunsrv.c  it is possible to implement a simple 
> rotation policy which would limit the size of the log from stdout to a 
> configurable number of bytes.
> Piggybacking on the worker thread "eventThread", when the log file size is 
> above a configurable threshold (new option  StdOutFileMaxSize) we could make 
> a copy of the log and truncate the file.   
> To enable the rotation for the redirects, we would need 2 options:
> --Rotate 
> --StdOutFileMaxSize   
> These could be used for both stderr and stdout or split in dedicated options
>  
> Here is the worker thread, altered with my change. In my local tests it behaves as I 
> expect.  
> DWORD WINAPI eventThread(LPVOID lpParam)
> {
>     DWORD dwRotateCnt = SO_LOGROTATE;
>     for (;;) {
>         DWORD dw = WaitForSingleObject(gSignalEvent, 1000);
>         if (dw == WAIT_TIMEOUT) {
>             /* Do process maintenance */
>             if (SO_LOGROTATE != 0 && --dwRotateCnt == 0) {
>                 /* Perform log rotation. */
>                
>               /* START CHANGE */ 
>         
>                 __int64 MAX_Mbytes = SO_STDOUTFILEMAXSIZE;
>                 struct _stat64 fileInfo;
>                 if (gStdwrap.szStdOutFilename 
>                     && gStdwrap.fpStdOutFile
>                     && _fstat64(_fileno(gStdwrap.fpStdOutFile), &fileInfo) == 0
>                     && fileInfo.st_size > MAX_Mbytes) {
>                     WCHAR sPath[SIZ_PATHLEN];
>                     lstrlcpyW(sPath, MAX_PATH, gStdwrap.szStdOutFilename);
>                     lstrlcatW(sPath, SIZ_PATHMAX, L"-backup.log");
>                     //Make a copy of current log before truncating it
>                     CopyFileW(gStdwrap.szStdOutFilename, sPath, FALSE);
>                     //close current handle 
>                     fclose(gStdwrap.fpStdOutFile);
>                     //re-open file to truncate it
>                     FILE* tempHandle = _wfsopen(gStdwrap.szStdOutFilename, 
> L"w", _SH_DENYNO);
>                     fclose(tempHandle);
>                     
>                     //re-open in append mode
>                     gStdwrap.fpStdOutFile = 
> _wfsopen(gStdwrap.szStdOutFilename, L"a", _SH_DENYNO);
>                     _dup2(_fileno(gStdwrap.fpStdOutFile), 1);
>                     *stdout = *(gStdwrap.fpStdOutFile);
>                  }
>                  /* END CHANGE */ 
>                  dwRotateCnt = SO_LOGROTATE;
>             }
>             continue;
>         }
>         if (dw == WAIT_OBJECT_0 && gSignalValid) {
>             if (!GenerateConsoleCtrlEvent(CTRL_BREAK_EVENT, 0)) {
>                 /* Invoke Thread dump */
>                 if (gWorker && _jni_startup)
>                     apxJavaDumpAllStacks(gWorker);
>             }
>             ResetEvent(gSignalEvent);
>             continue;
>         }
>         break;
>     }
>     ExitThread(0);
>     return 0;
>     UNREFERENCED_PARAMETER(lpParam);
> }
>    
>  
>  
>  
>  
>  
>  
>  
>  
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (DAEMON-213) procun log rotation support

2024-05-17 Thread Mark Thomas (Jira)


 [ 
https://issues.apache.org/jira/browse/DAEMON-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Thomas resolved DAEMON-213.

Resolution: Duplicate

> procun log rotation support
> ---
>
> Key: DAEMON-213
> URL: https://issues.apache.org/jira/browse/DAEMON-213
> Project: Commons Daemon
>  Issue Type: Improvement
>  Components: Procrun
>Affects Versions: 1.0.4, 1.0.5, 1.0.6
> Environment: os: winxp
>Reporter: viola.lu
>Priority: Minor
>
> currently, procun doesn't support log rotation. Should add an option



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Broken page link for buster installation

2024-05-17 Thread Thomas Lange
Yes, it's correct that this link is not working any more. You can find
the installer ISO here:
  https://cdimage.debian.org/mirror/cdimage/archive/10.13.0/amd64/iso-cd/
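
For completeness, a rough sketch of fetching and verifying an image from
that directory; the exact ISO file name below is an assumption and should
be checked against the directory listing:

  wget https://cdimage.debian.org/mirror/cdimage/archive/10.13.0/amd64/iso-cd/debian-10.13.0-amd64-netinst.iso
  wget https://cdimage.debian.org/mirror/cdimage/archive/10.13.0/amd64/iso-cd/SHA512SUMS
  sha512sum -c SHA512SUMS --ignore-missing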

>>>>> On Thu, 16 May 2024 17:49:46 +, Heidi Fehr  said:

> Hello!
> I recently went to go look for an installer for Debian Buster v10 but it 
appears that the page link is no longer working.
> https://www.debian.org/releases/buster/debian-installer/

-- 
kind regards Thomas



Bug#1040186: NMU for fixing this bug in Bookworm

2024-05-17 Thread Thomas Goirand

Hi,

Since there's been very low activity in this bug, and I cannot see 
if the current maintainer is willing to fix the bug, I have opened a bug 
against the Stable release team to fix this issue in Bookworm:


https://bugs.debian.org/1071264

You'll find the package debdiff over there.

Please let me know if you prefer to fix this yourself, or if it's ok for 
me to upload the fixed package in Bookworm.


Cheers,

Thomas Goirand (zigo)



Bug#1071264: autopkgtest fails with networkx 3.2.1

2024-05-17 Thread Thomas Goirand
Source: seirsplus
Version: 1.0.9-1
Severity: serious

Hi,

Your package fails autopkgtest with the current version of networkx in Unstable.
Please fix it.

Cheers,

Thomas Goirand (zigo)



Re: openarena mouse not working on multiple openbsd systems

2024-05-17 Thread Thomas Frohwein
On Thu, May 16, 2024 at 04:31:22PM +0200, Divan Santana wrote:
> Greetings :)
> 
> So I've tried openarena on 7.5 on multiple systems [1,2], both systems
> the mouse refuses to work, the left / right movement is not working.
> 
> On the one system, it briefly works but stops after less than 15s.
> 
> I've tried launching the game with:
> 
> SDL_VIDEO_X11_DGAMOUSE=0
> 
> I've also tried this setting
> 
> root@cephas:~# cat /etc/X11/xorg.conf
> Section "Module"
> SubSection "extmod"
> # Don't initialize the DGA extension
> Option "omit xfree86-dga"
> EndSubSection
> EndSection
> 
> After restarting xenodm, I can see in the Xorg log, it's read the config
> file, but the issue persists in the game.
> 
> Any ideas?

I tried it and what I'm seeing is that the mouse works in the menu, and
after I launch the game itself, it stops working after less than a
second. However, if I go back into the menu, the mouse works there
again. No change with SDL_VIDEO_X11_DGAMOUSE=0.

I've looked through the port build and the patches. I don't see
anything that looks like it could be an OpenBSD-specific cause for this
behavior. You would need to run this by someone who has more
familiarity with the openarena codebase. It seems that upstream manages
support/issues via http://www.openarena.ws/board/ ...
> 
> [1]:
> OpenBSD 7.5 (GENERIC.MP) #82: Wed Mar 20 15:48:40 MDT 2024
> dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> real mem = 67784667136 (64644MB)
> avail mem = 65708625920 (62664MB)
> random: good seed from bootblocks
> mpath0 at root
> scsibus0 at mpath0: 256 targets
> mainbus0 at root
> bios0 at mainbus0: SMBIOS rev. 3.6 @ 0xb9ad6000 (43 entries)
> bios0: vendor American Megatrends International, LLC. version "1.93" date 
> 01/26/2024
> bios0: Micro-Star International Co., Ltd. MS-7E26
> efi0 at bios0: UEFI 2.9
> efi0: American Megatrends rev 0x50020
> acpi0 at bios0: ACPI 6.5
> acpi0: sleep states S0 S3 S4 S5
> acpi0: tables DSDT FACP SSDT SSDT FIDT MCFG HPET WDRT UEFI FPDT VFCT SSDT 
> SSDT SSDT SSDT SSDT SSDT WSMT APIC IVRS SSDT SSDT SSDT SSDT SSDT BGRT
> acpi0: wakeup devices GPP3(S4) GPP4(S4) GPP5(S4) GPP6(S4) GP17(S4) XHC0(S4) 
> XHC1(S4) XHC2(S4) GPP0(S4) GPP1(S4) GPP2(S4) GPP7(S4) UP00(S4) DP48(S4) 
> EP00(S4) DP50(S4) [...]
> acpitimer0 at acpi0: 3579545 Hz, 32 bits
> acpimcfg0 at acpi0
> acpimcfg0: addr 0xe000, bus 0-255
> acpihpet0 at acpi0: 14318180 Hz
> acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
> cpu0 at mainbus0: apid 0 (boot processor)
> cpu0: AMD Ryzen 9 7900 12-Core Processor, 3700.00 MHz, 19-61-02, patch 
> 0a601206
> cpu0: 
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,MWAIT,SSSE3,FMA3,CX16,SSE4.1,SSE4.2,MOVBE,POPCNT,AES,XSAVE,AVX,F16C,RDRAND,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,EAPICSP,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,IBS,SKINIT,TCE,TOPEXT,CPCTR,DBKP,PCTRL3,MWAITX,HWPSTATE,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,PQM,AVX512F,AVX512DQ,RDSEED,ADX,SMAP,AVX512IFMA,CLFLUSHOPT,CLWB,AVX512CD,SHA,AVX512BW,AVX512VL,AVX512VBMI,UMIP,PKU,L1DF,IBPB,IBRS,STIBP,STIBP_ALL,IBRS_PREF,IBRS_SM,SSBD,XSAVEOPT,XSAVEC,XGETBV1,XSAVES
> cpu0: 32KB 64b/line 8-way D-cache, 32KB 64b/line 8-way I-cache, 1MB 64b/line 
> 8-way L2 cache, 32MB 64b/line 16-way L3 cache
> cpu0: smt 0, core 0, package 0
> mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
> cpu0: apic clock running at 25MHz
> cpu0: mwait min=64, max=64, C-substates=1.1, IBE
> cpu1 at mainbus0: apid 2 (application processor)
> cpu1: AMD Ryzen 9 7900 12-Core Processor, 3700.00 MHz, 19-61-02, patch 
> 0a601206
> cpu1: 
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,MWAIT,SSSE3,FMA3,CX16,SSE4.1,SSE4.2,MOVBE,POPCNT,AES,XSAVE,AVX,F16C,RDRAND,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,EAPICSP,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,IBS,SKINIT,TCE,TOPEXT,CPCTR,DBKP,PCTRL3,MWAITX,HWPSTATE,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,PQM,AVX512F,AVX512DQ,RDSEED,ADX,SMAP,AVX512IFMA,CLFLUSHOPT,CLWB,AVX512CD,SHA,AVX512BW,AVX512VL,AVX512VBMI,UMIP,PKU,L1DF,IBPB,IBRS,STIBP,STIBP_ALL,IBRS_PREF,IBRS_SM,SSBD,XSAVEOPT,XSAVEC,XGETBV1,XSAVES
> cpu1: 32KB 64b/line 8-way D-cache, 32KB 64b/line 8-way I-cache, 1MB 64b/line 
> 8-way L2 cache, 32MB 64b/line 16-way L3 cache
> cpu1: smt 0, core 1, package 0
> cpu2 at mainbus0: apid 4 (application processor)
> cpu2: AMD Ryzen 9 7900 12-Core Processor, 3700.00 MHz, 19-61-02, patch 
> 0a601206
> cpu2: 
> 

[PULL 6/6] hw/intc/s390_flic: Fix crash that occurs when saving the machine state

2024-05-17 Thread Thomas Huth
adapter_info_so_needed() treats its "opaque" parameter as a S390FLICState,
but the function belongs to a VMStateDescription that is attached to a
TYPE_VIRTIO_CCW_BUS device. This is currently causing a crash when the
user tries to save or migrate the VM state. Fix it by using s390_get_flic()
to get the correct device here instead.

Reported-by: Marc Hartmayer 
Fixes: 9d1b0f5bf5 ("s390_flic: add migration-enabled property")
Message-ID: <20240517061553.564529-1-th...@redhat.com>
Reviewed-by: Cédric Le Goater 
Tested-by: Marc Hartmayer 
Signed-off-by: Thomas Huth 
---
 hw/intc/s390_flic.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/intc/s390_flic.c b/hw/intc/s390_flic.c
index 7f93080087..6771645699 100644
--- a/hw/intc/s390_flic.c
+++ b/hw/intc/s390_flic.c
@@ -459,7 +459,7 @@ type_init(qemu_s390_flic_register_types)
 
 static bool adapter_info_so_needed(void *opaque)
 {
-S390FLICState *fs = S390_FLIC_COMMON(opaque);
+S390FLICState *fs = s390_get_flic();
 
 return fs->migration_enabled;
 }
-- 
2.45.0




[PULL 4/6] tests/lcitool/projects/qemu.yml: Sort entries alphabetically again

2024-05-17 Thread Thomas Huth
Let's try to keep the entries in alphabetical order here!

Message-ID: <20240516084059.511463-5-th...@redhat.com>
Reviewed-by: Daniel P. Berrangé 
Signed-off-by: Thomas Huth 
---
 tests/lcitool/projects/qemu.yml | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/tests/lcitool/projects/qemu.yml b/tests/lcitool/projects/qemu.yml
index b63b6bd850..7511ec7ccb 100644
--- a/tests/lcitool/projects/qemu.yml
+++ b/tests/lcitool/projects/qemu.yml
@@ -35,8 +35,8 @@ packages:
  - hostname
  - json-c
  - libaio
- - libattr
  - libasan
+ - libattr
  - libbpf
  - libc-static
  - libcacard
@@ -54,6 +54,7 @@ packages:
  - libjpeg
  - libnfs
  - libnuma
+ - libpipewire-dev
  - libpmem
  - libpng
  - librbd
@@ -73,27 +74,26 @@ packages:
  - llvm
  - lttng-ust
  - lzo
+ - make
+ - mesa-libgbm
+ - meson
  - mtools
+ - ncursesw
  - netcat
  - nettle
  - ninja
  - nsis
- - make
- - mesa-libgbm
- - meson
- - ncursesw
  - pam
  - pcre-static
  - pixman
- - libpipewire-dev
  - pkg-config
  - pulseaudio
  - python3
- - python3-PyYAML
  - python3-numpy
  - python3-opencv
  - python3-pillow
  - python3-pip
+ - python3-PyYAML
  - python3-sphinx
  - python3-sphinx-rtd-theme
  - python3-sqlite3
@@ -121,6 +121,6 @@ packages:
  - which
  - xen
  - xorriso
- - zstdtools
  - zlib
  - zlib-static
+ - zstdtools
-- 
2.45.0




[PULL 2/6] tests/lcitool: Remove 'xfsprogs' from QEMU

2024-05-17 Thread Thomas Huth
From: Philippe Mathieu-Daudé 

QEMU's commit a5730b8bd3 ("block/file-posix: Simplify the
XFS_IOC_DIOINFO handling") removed the need for the 'xfsprogs'
package.

Signed-off-by: Philippe Mathieu-Daudé 
[thuth: Adjusted the patch from the lcitools repo to QEMU's repo]
Message-ID: <20240516084059.511463-3-th...@redhat.com>
Reviewed-by: Daniel P. Berrangé 
Signed-off-by: Thomas Huth 
---
 tests/lcitool/projects/qemu.yml | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tests/lcitool/projects/qemu.yml b/tests/lcitool/projects/qemu.yml
index 149b15de57..9173d1e36e 100644
--- a/tests/lcitool/projects/qemu.yml
+++ b/tests/lcitool/projects/qemu.yml
@@ -121,7 +121,6 @@ packages:
  - vte
  - which
  - xen
- - xfsprogs
  - xorriso
  - zstdtools
  - zlib
-- 
2.45.0




[PULL 5/6] tests/docker/dockerfiles: Update container files with "lcitool-refresh"

2024-05-17 Thread Thomas Huth
Run "make lcitool-refresh" after the previous changes to the
lcitool files. This removes the g++ and xfslibs-dev packages
from the dockerfiles (except for the fedora-win64-cross dockerfile
where we keep the C++ compiler).

Message-ID: <20240516084059.511463-6-th...@redhat.com>
Reviewed-by: Daniel P. Berrangé 
Signed-off-by: Thomas Huth 
---
 tests/docker/dockerfiles/alpine.docker| 4 
 tests/docker/dockerfiles/centos9.docker   | 4 
 tests/docker/dockerfiles/debian-amd64-cross.docker| 4 
 tests/docker/dockerfiles/debian-arm64-cross.docker| 4 
 tests/docker/dockerfiles/debian-armel-cross.docker| 4 
 tests/docker/dockerfiles/debian-armhf-cross.docker| 4 
 tests/docker/dockerfiles/debian-i686-cross.docker | 4 
 tests/docker/dockerfiles/debian-mips64el-cross.docker | 4 
 tests/docker/dockerfiles/debian-mipsel-cross.docker   | 4 
 tests/docker/dockerfiles/debian-ppc64el-cross.docker  | 4 
 tests/docker/dockerfiles/debian-riscv64-cross.docker  | 3 ---
 tests/docker/dockerfiles/debian-s390x-cross.docker| 4 
 tests/docker/dockerfiles/debian.docker| 4 
 tests/docker/dockerfiles/fedora-win64-cross.docker| 2 +-
 tests/docker/dockerfiles/fedora.docker| 4 
 tests/docker/dockerfiles/opensuse-leap.docker | 4 
 tests/docker/dockerfiles/ubuntu2204.docker| 4 
 17 files changed, 1 insertion(+), 64 deletions(-)

diff --git a/tests/docker/dockerfiles/alpine.docker 
b/tests/docker/dockerfiles/alpine.docker
index cd9d7af1ce..554464f31e 100644
--- a/tests/docker/dockerfiles/alpine.docker
+++ b/tests/docker/dockerfiles/alpine.docker
@@ -32,7 +32,6 @@ RUN apk update && \
 findutils \
 flex \
 fuse3-dev \
-g++ \
 gcc \
 gcovr \
 gettext \
@@ -110,7 +109,6 @@ RUN apk update && \
 vte3-dev \
 which \
 xen-dev \
-xfsprogs-dev \
 xorriso \
 zlib-dev \
 zlib-static \
@@ -119,10 +117,8 @@ RUN apk update && \
 rm -f /usr/lib*/python3*/EXTERNALLY-MANAGED && \
 apk list --installed | sort > /packages.txt && \
 mkdir -p /usr/libexec/ccache-wrappers && \
-ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/c++ && \
 ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/cc && \
 ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/clang && \
-ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/g++ && \
 ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/gcc
 
 ENV CCACHE_WRAPPERSDIR "/usr/libexec/ccache-wrappers"
diff --git a/tests/docker/dockerfiles/centos9.docker 
b/tests/docker/dockerfiles/centos9.docker
index 6cf47ce786..0256865b9e 100644
--- a/tests/docker/dockerfiles/centos9.docker
+++ b/tests/docker/dockerfiles/centos9.docker
@@ -34,7 +34,6 @@ RUN dnf distro-sync -y && \
 flex \
 fuse3-devel \
 gcc \
-gcc-c++ \
 gettext \
 git \
 glib2-devel \
@@ -115,7 +114,6 @@ RUN dnf distro-sync -y && \
 util-linux \
 vte291-devel \
 which \
-xfsprogs-devel \
 xorriso \
 zlib-devel \
 zlib-static \
@@ -125,10 +123,8 @@ RUN dnf distro-sync -y && \
 rm -f /usr/lib*/python3*/EXTERNALLY-MANAGED && \
 rpm -qa | sort > /packages.txt && \
 mkdir -p /usr/libexec/ccache-wrappers && \
-ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/c++ && \
 ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/cc && \
 ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/clang && \
-ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/g++ && \
 ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/gcc
 
 ENV CCACHE_WRAPPERSDIR "/usr/libexec/ccache-wrappers"
diff --git a/tests/docker/dockerfiles/debian-amd64-cross.docker 
b/tests/docker/dockerfiles/debian-amd64-cross.docker
index d0b0e9778e..f8c61d1191 100644
--- a/tests/docker/dockerfiles/debian-amd64-cross.docker
+++ b/tests/docker/dockerfiles/debian-amd64-cross.docker
@@ -79,7 +79,6 @@ RUN export DEBIAN_FRONTEND=noninteractive && \
 eatmydata apt-get dist-upgrade -y && \
 eatmydata apt-get install --no-install-recommends -y dpkg-dev && \
 eatmydata apt-get install --no-install-recommends -y \
-  g++-x86-64-linux-gnu \
   gcc-x86-64-linux-gnu \
   libaio-dev:amd64 \
   libasan6:amd64 \
@@ -149,7 +148,6 @@ RUN export DEBIAN_FRONTEND=noninteractive && \
   libzstd-dev:amd64 \
   nettle-dev:amd64 \
   systemtap-sdt-dev:amd64 \
-  xfslibs-dev:amd64 \
   zlib1g-dev:amd64 &&

[PULL 1/6] tests/lcitool/refresh: Treat the output of lcitool as text, not as bytes

2024-05-17 Thread Thomas Huth
In case lcitool fails (e.g. with a python backtrace), this makes
the output of lcitool much more readable.

Suggested-by: Daniel P. Berrangé 
Message-ID: <20240516084059.511463-2-th...@redhat.com>
Reviewed-by: Daniel P. Berrangé 
Signed-off-by: Thomas Huth 
---
 tests/lcitool/refresh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/lcitool/refresh b/tests/lcitool/refresh
index 24a735a3f2..174818d9c9 100755
--- a/tests/lcitool/refresh
+++ b/tests/lcitool/refresh
@@ -43,12 +43,12 @@ def atomic_write(filename, content):
 
 def generate(filename, cmd, trailer):
 print("Generate %s" % filename)
-lcitool = subprocess.run(cmd, capture_output=True)
+lcitool = subprocess.run(cmd, capture_output=True, encoding='utf8')
 
 if lcitool.returncode != 0:
 raise Exception("Failed to generate %s: %s" % (filename, 
lcitool.stderr))
 
-content = lcitool.stdout.decode("utf8")
+content = lcitool.stdout
 if trailer is not None:
 content += trailer
 atomic_write(filename, content)
-- 
2.45.0




[PULL 0/6] Fix s390x crash and clean up container images

2024-05-17 Thread Thomas Huth
The following changes since commit 85ef20f1673feaa083f4acab8cf054df77b0dbed:

  Merge tag 'pull-maintainer-may24-160524-2' of https://gitlab.com/stsquad/qemu 
into staging (2024-05-16 10:02:56 +0200)

are available in the Git repository at:

  https://gitlab.com/thuth/qemu.git tags/pull-request-2024-05-17

for you to fetch changes up to bebe9603fcb072dcdb7fb22005781b3582a4d701:

  hw/intc/s390_flic: Fix crash that occurs when saving the machine state 
(2024-05-17 11:18:32 +0200)


* Fix s390x crash when doing migration / savevm
* Decrease size of CI containers by removing unnecessary packages


Philippe Mathieu-Daudé (1):
  tests/lcitool: Remove 'xfsprogs' from QEMU

Thomas Huth (5):
  tests/lcitool/refresh: Treat the output of lcitool as text, not as bytes
  tests/lcitool: Remove g++ from the containers (except for the MinGW one)
  tests/lcitool/projects/qemu.yml: Sort entries alphabetically again
  tests/docker/dockerfiles: Update container files with "lcitool-refresh"
  hw/intc/s390_flic: Fix crash that occurs when saving the machine state

 hw/intc/s390_flic.c   |  2 +-
 tests/docker/dockerfiles/alpine.docker|  4 
 tests/docker/dockerfiles/centos9.docker   |  4 
 tests/docker/dockerfiles/debian-amd64-cross.docker|  4 
 tests/docker/dockerfiles/debian-arm64-cross.docker|  4 
 tests/docker/dockerfiles/debian-armel-cross.docker|  4 
 tests/docker/dockerfiles/debian-armhf-cross.docker|  4 
 tests/docker/dockerfiles/debian-i686-cross.docker |  4 
 tests/docker/dockerfiles/debian-mips64el-cross.docker |  4 
 tests/docker/dockerfiles/debian-mipsel-cross.docker   |  4 
 tests/docker/dockerfiles/debian-ppc64el-cross.docker  |  4 
 tests/docker/dockerfiles/debian-riscv64-cross.docker  |  3 ---
 tests/docker/dockerfiles/debian-s390x-cross.docker|  4 
 tests/docker/dockerfiles/debian.docker|  4 
 tests/docker/dockerfiles/fedora-win64-cross.docker|  2 +-
 tests/docker/dockerfiles/fedora.docker|  4 
 tests/docker/dockerfiles/opensuse-leap.docker |  4 
 tests/docker/dockerfiles/ubuntu2204.docker|  4 
 tests/lcitool/projects/qemu-minimal.yml   |  1 -
 tests/lcitool/projects/qemu-win-installer.yml |  4 
 tests/lcitool/projects/qemu.yml   | 18 --
 tests/lcitool/refresh |  5 +++--
 22 files changed, 17 insertions(+), 78 deletions(-)
 create mode 100644 tests/lcitool/projects/qemu-win-installer.yml




[PULL 3/6] tests/lcitool: Remove g++ from the containers (except for the MinGW one)

2024-05-17 Thread Thomas Huth
We don't need C++ for the normal QEMU builds anymore, so installing
g++ in each and every container seems to be a waste of time and disk
space. The only container that still needs it is the Fedora MinGW
container that builds the only remaining C++ code in ./qga/vss-win32/
and we can install it there with an extra project yml file instead.

Message-ID: <20240516084059.511463-4-th...@redhat.com>
Reviewed-by: Daniel P. Berrangé 
Signed-off-by: Thomas Huth 
---
 tests/lcitool/projects/qemu-minimal.yml   | 1 -
 tests/lcitool/projects/qemu-win-installer.yml | 4 
 tests/lcitool/projects/qemu.yml   | 1 -
 tests/lcitool/refresh | 1 +
 4 files changed, 5 insertions(+), 2 deletions(-)
 create mode 100644 tests/lcitool/projects/qemu-win-installer.yml

diff --git a/tests/lcitool/projects/qemu-minimal.yml 
b/tests/lcitool/projects/qemu-minimal.yml
index d44737dc1d..6bc232a1c3 100644
--- a/tests/lcitool/projects/qemu-minimal.yml
+++ b/tests/lcitool/projects/qemu-minimal.yml
@@ -7,7 +7,6 @@ packages:
  - ccache
  - findutils
  - flex
- - g++
  - gcc
  - gcc-native
  - glib2
diff --git a/tests/lcitool/projects/qemu-win-installer.yml 
b/tests/lcitool/projects/qemu-win-installer.yml
new file mode 100644
index 00..86aa22297c
--- /dev/null
+++ b/tests/lcitool/projects/qemu-win-installer.yml
@@ -0,0 +1,4 @@
+# Additional packages that are required to build the code in qga/vss-win32/
+---
+packages:
+ - g++
diff --git a/tests/lcitool/projects/qemu.yml b/tests/lcitool/projects/qemu.yml
index 9173d1e36e..b63b6bd850 100644
--- a/tests/lcitool/projects/qemu.yml
+++ b/tests/lcitool/projects/qemu.yml
@@ -22,7 +22,6 @@ packages:
  - findutils
  - flex
  - fuse3
- - g++
  - gcc
  - gcc-native
  - gcovr
diff --git a/tests/lcitool/refresh b/tests/lcitool/refresh
index 174818d9c9..789acefb75 100755
--- a/tests/lcitool/refresh
+++ b/tests/lcitool/refresh
@@ -192,6 +192,7 @@ try:
 "s390x-softmmu,s390x-linux-user"))
 
 generate_dockerfile("fedora-win64-cross", "fedora-38",
+project='qemu,qemu-win-installer',
 cross="mingw64",
 trailer=cross_build("x86_64-w64-mingw32-",
 "x86_64-softmmu"))
-- 
2.45.0




[frameworks-kio] [Bug 450727] Can't edit an application's desktop entry which is a symlink, because symlink copied instead of target file

2024-05-17 Thread Thomas Duckworth
https://bugs.kde.org/show_bug.cgi?id=450727

Thomas Duckworth  changed:

   What|Removed |Added

 CC||tduck973...@gmail.com

-- 
You are receiving this mail because:
You are watching all bug changes.

ocfs2_dlmfs missing from the cloud kernel

2024-05-17 Thread Thomas Goirand

Hi,

The module ocfs2_dlmfs may be used to have a filesystem shared between 2 VMs 
(using for example OpenStack Cinder multi-attach). It's there in Ubuntu, 
in the "normal" kernel, but not in the Debian cloud kernel.


Would it be possible to *not* strip this module out of the cloud kernel, please?


In the source package of the kernel, I can read in debian/config/config.cloud:


## file: fs/ocfs2/Kconfig
##
# CONFIG_OCFS2_FS is not set

How do I change this?

Cheers,

Thomas Goirand (zigo)





Re: [PATCH 2/2] drm/mgag200: Add an option to disable Write-Combine

2024-05-17 Thread Thomas Zimmermann

Hi,

just nits below.

Am 16.05.24 um 18:17 schrieb Jocelyn Falempe:

Unfortunately, the G200 ioburst workaround doesn't work on some servers
like Dell poweredge XR11, XR5610, or HPE XL260
In this case completely disabling WC is the only option to achieve
low-latency.
So this adds a new Kconfig option, to disable WC mapping of the G200


The formatting looks off. Maybe make this one single paragraph.

No comma after 'option'.



Signed-off-by: Jocelyn Falempe 
---
  drivers/gpu/drm/mgag200/Kconfig   | 10 ++
  drivers/gpu/drm/mgag200/mgag200_drv.c |  7 +++
  2 files changed, 17 insertions(+)

diff --git a/drivers/gpu/drm/mgag200/Kconfig b/drivers/gpu/drm/mgag200/Kconfig
index b28c5e4828f47..73ab5730b74d9 100644
--- a/drivers/gpu/drm/mgag200/Kconfig
+++ b/drivers/gpu/drm/mgag200/Kconfig
@@ -11,3 +11,13 @@ config DRM_MGAG200
 MGA G200 desktop chips and the server variants. It requires 0.3.0
 of the modesetting userspace driver, and a version of mga driver
 that will fail on KMS enabled devices.
+
+config DRM_MGAG200_DISABLE_WRITECOMBINE
+   bool "Disable Write Combine mapping of VRAM"
+   depends on DRM_MGAG200 && PREEMPT_RT
+   help
+ The VRAM of the G200 is mapped with Write-Combine, to improve

No comma after Write-Combine.

+ performances. However this increases the system latency a lot, even
Just say "This can interfere with real-time tasks; even if they are 
running on other CPU cores then the graphics output."



+ for realtime tasks running on other CPU cores. Typically 40us-80us
+ latencies are measured with hwlat when Write Combine is enabled.


Leave out the next sentence: "Typically ..." The measured numbers 
depend on the hardware and everyone is encouraged to test on their own 
system. You could mention the numbers in the commit description, as you 
already mention the affected systems there.



+ Recommended if you run realtime tasks on a server with a Matrox G200.


I still think that we should not encourage anyone to use this option. 
Maybe say "Enable this option only if you run..."



\ No newline at end of file
diff --git a/drivers/gpu/drm/mgag200/mgag200_drv.c 
b/drivers/gpu/drm/mgag200/mgag200_drv.c
index 3883f25ca4d8b..7461e3f984eff 100644
--- a/drivers/gpu/drm/mgag200/mgag200_drv.c
+++ b/drivers/gpu/drm/mgag200/mgag200_drv.c
@@ -146,12 +146,19 @@ int mgag200_device_preinit(struct mga_device *mdev)
}
mdev->vram_res = res;
  
+#if defined(CONFIG_DRM_MGAG200_DISABLE_WRITECOMBINE)

+   drm_info(dev, "Disable Write Combine\n");


I would not print this drm_info() here. The user has selected the config 
option, so they should know what happens. It's also listed in /proc/mtrr 
IIRC.


Best regards
Thomas


+   mdev->vram = devm_ioremap(dev->dev, res->start, resource_size(res));
+   if (!mdev->vram)
+   return -ENOMEM;
+#else
mdev->vram = devm_ioremap_wc(dev->dev, res->start, resource_size(res));
if (!mdev->vram)
return -ENOMEM;
  
  	/* Don't fail on errors, but performance might be reduced. */

devm_arch_phys_wc_add(dev->dev, res->start, resource_size(res));
+#endif
  
  	return 0;

  }


--
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)



Re: [PATCH 1/2] Revert "drm/mgag200: Add a workaround for low-latency"

2024-05-17 Thread Thomas Zimmermann




Am 16.05.24 um 18:17 schrieb Jocelyn Falempe:

This reverts commit bfa4437fd3938ae2e186e7664b2db65bb8775670.

This workaround doesn't work reliably on all servers.
I'll replace it with an option to disable Write-Combine,
which has more impact on performance, but fix the latency
issue on all hardware.

Signed-off-by: Jocelyn Falempe 


Reviewed-by: Thomas Zimmermann 


---
  drivers/gpu/drm/mgag200/Kconfig| 12 
  drivers/gpu/drm/mgag200/mgag200_drv.c  | 17 -
  drivers/gpu/drm/mgag200/mgag200_mode.c |  8 
  3 files changed, 37 deletions(-)

diff --git a/drivers/gpu/drm/mgag200/Kconfig b/drivers/gpu/drm/mgag200/Kconfig
index 5e4d48df4854c..b28c5e4828f47 100644
--- a/drivers/gpu/drm/mgag200/Kconfig
+++ b/drivers/gpu/drm/mgag200/Kconfig
@@ -11,15 +11,3 @@ config DRM_MGAG200
 MGA G200 desktop chips and the server variants. It requires 0.3.0
 of the modesetting userspace driver, and a version of mga driver
 that will fail on KMS enabled devices.
-
-config DRM_MGAG200_IOBURST_WORKAROUND
-   bool "Disable buffer caching"
-   depends on DRM_MGAG200 && PREEMPT_RT && X86
-   help
- Enable a workaround to avoid I/O bursts within the mgag200 driver at
- the expense of overall display performance.
- It restores the   
-#if defined(CONFIG_DRM_MGAG200_IOBURST_WORKAROUND)

-static struct drm_gem_object *mgag200_create_object(struct drm_device *dev, 
size_t size)
-{
-   struct drm_gem_shmem_object *shmem;
-
-   shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
-   if (!shmem)
-   return NULL;
-
-   shmem->map_wc = true;
-   return &shmem->base;
-}
-#endif
-
  /*
   * DRM driver
   */
@@ -113,9 +99,6 @@ static const struct drm_driver mgag200_driver = {
.major = DRIVER_MAJOR,
.minor = DRIVER_MINOR,
.patchlevel = DRIVER_PATCHLEVEL,
-#if defined(CONFIG_DRM_MGAG200_IOBURST_WORKAROUND)
-   .gem_create_object = mgag200_create_object,
-#endif
DRM_GEM_SHMEM_DRIVER_OPS,
  };
  
diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c

index fc54851d3384d..d3d820f7a77d7 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -13,7 +13,6 @@
  
  #include 

  #include 
-#include 
  #include 
  #include 
  #include 
@@ -438,13 +437,6 @@ static void mgag200_handle_damage(struct mga_device *mdev, 
const struct iosys_ma
  
  	iosys_map_incr(&dst, drm_fb_clip_offset(fb->pitches[0], fb->format, clip));

	drm_fb_memcpy(&dst, fb->pitches, vmap, fb, clip);
-
-   /* Flushing the cache greatly improves latency on x86_64 */
-#if defined(CONFIG_DRM_MGAG200_IOBURST_WORKAROUND)
-   if (!vmap->is_iomem)
-   drm_clflush_virt_range(vmap->vaddr + clip->y1 * fb->pitches[0],
-  drm_rect_height(clip) * fb->pitches[0]);
-#endif
  }
  
  /*


--
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)



[PATCH] arch: Fix name collision with ACPI's video.o

2024-05-17 Thread Thomas Zimmermann
Commit 2fd001cd3600 ("arch: Rename fbdev header and source files")
renames the video source files under arch/ such that they do not
refer to fbdev any longer. The new files named video.o conflict with
ACPI's video.ko module. Modprobing the ACPI module can then fail with
warnings about missing symbols, as shown below.

  (i915_selftest:1107) igt_kmod-WARNING: i915: Unknown symbol 
acpi_video_unregister (err -2)
  (i915_selftest:1107) igt_kmod-WARNING: i915: Unknown symbol 
acpi_video_register_backlight (err -2)
  (i915_selftest:1107) igt_kmod-WARNING: i915: Unknown symbol 
__acpi_video_get_backlight_type (err -2)
  (i915_selftest:1107) igt_kmod-WARNING: i915: Unknown symbol 
acpi_video_register (err -2)

Fix the issue by renaming the architecture's video.o to video-common.o.

Reported-by: Chaitanya Kumar Borah 
Closes: 
https://lore.kernel.org/intel-gfx/9dcac6e9-a3bf-4ace-bbdc-f697f767f...@suse.de/T/#t
Signed-off-by: Thomas Zimmermann 
Fixes: 2fd001cd3600 ("arch: Rename fbdev header and source files")
Cc: Arnd Bergmann 
Cc: linux-a...@vger.kernel.org
Cc: linux-fb...@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
---
 arch/sparc/video/Makefile| 2 +-
 arch/sparc/video/{video.c => video-common.c} | 0
 arch/x86/video/Makefile  | 2 +-
 arch/x86/video/{video.c => video-common.c}   | 0
 4 files changed, 2 insertions(+), 2 deletions(-)
 rename arch/sparc/video/{video.c => video-common.c} (100%)
 rename arch/x86/video/{video.c => video-common.c} (100%)

diff --git a/arch/sparc/video/Makefile b/arch/sparc/video/Makefile
index fdf83a408d750..dcfbe7a5912c0 100644
--- a/arch/sparc/video/Makefile
+++ b/arch/sparc/video/Makefile
@@ -1,3 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0-only
 
-obj-y  += video.o
+obj-y  += video-common.o
diff --git a/arch/sparc/video/video.c b/arch/sparc/video/video-common.c
similarity index 100%
rename from arch/sparc/video/video.c
rename to arch/sparc/video/video-common.c
diff --git a/arch/x86/video/Makefile b/arch/x86/video/Makefile
index fdf83a408d750..dcfbe7a5912c0 100644
--- a/arch/x86/video/Makefile
+++ b/arch/x86/video/Makefile
@@ -1,3 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0-only
 
-obj-y  += video.o
+obj-y  += video-common.o
diff --git a/arch/x86/video/video.c b/arch/x86/video/video-common.c
similarity index 100%
rename from arch/x86/video/video.c
rename to arch/x86/video/video-common.c
-- 
2.45.0






Re: [PATCH] ntp: remove accidental integer wrap-around

2024-05-17 Thread Thomas Gleixner
On Thu, May 16 2024 at 16:40, Justin Stitt wrote:
> On Tue, May 14, 2024 at 3:38 AM Thomas Gleixner  wrote:
>> So how can 0xf42400 + 50/1000 overflow in the first place?
>>
>> It can't unless time_maxerror is somehow initialized to a bogus
>> value and indeed it is:
>>
>> process_adjtimex_modes()
>> 
>> if (txc->modes & ADJ_MAXERROR)
>> time_maxerror = txc->maxerror;
>>
>> So that wants to be fixed and not the symptom.
>
> Isn't this usually supplied from the user and can be some pretty
> random stuff?

Sure it comes from user space and can contain random nonsense as
syzkaller demonstrated.

> Are you suggesting we update timekeeping_validate_timex() to include a
> check to limit the maxerror field to (NTP_PHASE_LIMIT-(MAXFREQ /
> NSEC_PER_USEC))? It seems like we should handle the overflow case
> where it happens: in second_overflow().
>
> The clear intent of the existing code was to saturate at
> NTP_PHASE_LIMIT, they just did it in a way where the check itself
> triggers overflow sanitizers.

The clear intent of the code is to do saturation of a bound value.

Clearly the user space interface fails to validate the input to be in a
sane range and that makes you go and prevent the resulting overflow at
the usage site. Seriously?

Obviously the sanitizer detects the stupid in second_overflow(), but
that does not mean that the proper solution is to add overflow
protection to that code.

Tools are good to pin-point symptoms, but they are by definition
patently bad in root cause analysis. Otherwise we could just let the
tool write the "fix".

The obvious first question in such a case is to ask _WHY_ does
time_maxerror have a bogus value, which clearly cannot be achieved from
regular operation.

Once you figured out that the only way to set time_maxerror to a bogus
value is the user space interface the obvious follow up question is
whether such a value has to be considered as valid or not.

As it is obviously invalid the logical consequence is to add a sanity
check and reject that nonsense at that boundary, no?
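
A minimal sketch of what such a boundary clamp amounts to (illustrative
only, not a patch; standalone user-space code, and the limit below is an
assumed stand-in for NTP_PHASE_LIMIT from kernel/time/ntp.c):

#include <stdio.h>

/* Assumed stand-in for NTP_PHASE_LIMIT, in microseconds. */
#define MAXERROR_LIMIT 16000000LL

/* Clamp a user-supplied maxerror at the boundary so the NTP core
 * never sees a value it cannot represent. */
static long long clamp_maxerror(long long requested)
{
	if (requested < 0)
		return 0;
	if (requested > MAXERROR_LIMIT)
		return MAXERROR_LIMIT;
	return requested;
}

int main(void)
{
	/* A syzkaller-style bogus value saturates here instead of
	 * overflowing later in second_overflow(). */
	printf("%lld\n", clamp_maxerror(0x7fffffffffffffLL));
	printf("%lld\n", clamp_maxerror(500000));
	return 0;
}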

Thanks,

tglx



[OE-core][PATCH] maintainers.inc: maintainer for opensbi

2024-05-17 Thread Thomas Perrot via lists.openembedded.org
From: Thomas Perrot 

Signed-off-by: Thomas Perrot 
---
 meta/conf/distro/include/maintainers.inc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meta/conf/distro/include/maintainers.inc 
b/meta/conf/distro/include/maintainers.inc
index 014cf32e4091..d98cb0f194bc 100644
--- a/meta/conf/distro/include/maintainers.inc
+++ b/meta/conf/distro/include/maintainers.inc
@@ -556,7 +556,7 @@ RECIPE_MAINTAINER:pn-npth = "Alexander Kanavin 
"
 RECIPE_MAINTAINER:pn-nss-myhostname = "Anuj Mittal "
 RECIPE_MAINTAINER:pn-numactl = "Richard Purdie 
"
 RECIPE_MAINTAINER:pn-ofono = "Ross Burton "
-RECIPE_MAINTAINER:pn-opensbi = "Unassigned "
+RECIPE_MAINTAINER:pn-opensbi = "Thomas Perrot "
 RECIPE_MAINTAINER:pn-openssh = "Unassigned "
 RECIPE_MAINTAINER:pn-openssl = "Alexander Kanavin "
 RECIPE_MAINTAINER:pn-opkg = "Alex Stewart "
-- 
2.45.0


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#199509): 
https://lists.openembedded.org/g/openembedded-core/message/199509
Mute This Topic: https://lists.openembedded.org/mt/106150269/21656
Group Owner: openembedded-core+ow...@lists.openembedded.org
Unsubscribe: https://lists.openembedded.org/g/openembedded-core/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[dolphin] [Bug 464919] Dolphin cannot read anon_inodes

2024-05-17 Thread Thomas Bertels
https://bugs.kde.org/show_bug.cgi?id=464919

Thomas Bertels  changed:

   What|Removed |Added

 Status|REPORTED|NEEDSINFO
 Resolution|--- |WAITINGFORINFO

--- Comment #2 from Thomas Bertels  ---
-> NEEDSINFO

-- 
You are receiving this mail because:
You are watching all bug changes.

[dolphin] [Bug 464919] Dolphin cannot read anon_inodes

2024-05-17 Thread Thomas Bertels
https://bugs.kde.org/show_bug.cgi?id=464919

Thomas Bertels  changed:

   What|Removed |Added

 CC||tbert...@gmail.com

--- Comment #1 from Thomas Bertels  ---
Is this still reproduceable?
This may have been fixed by
https://invent.kde.org/frameworks/kio/-/merge_requests/1237

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: [PATCH] gitlab-ci: Replace Docker with Kaniko

2024-05-17 Thread Thomas Huth

On 16/05/2024 20.24, Daniel P. Berrangé wrote:

On Thu, May 16, 2024 at 05:52:43PM +0100, Camilla Conte wrote:

Enables caching from the qemu-project repository.

Uses a dedicated "$NAME-cache" tag for caching, to address limitations.
See issue "when using --cache=true, kaniko fail to push cache layer [...]":
https://github.com/GoogleContainerTools/kaniko/issues/1459

...

TL;DR: functionally this patch is capable of working. The key downside
is that it doubles our storage usage. I'm not convinced Kaniko offers
a compelling enough benefit to justify this penalty.


Will this patch fix the issues that we are currently seeing with the k8s 
runners not working in the upstream CI? If so, I think that would be enough 
benefit, wouldn't it?


 Thomas




[PATCH] hw/intc/s390_flic: Fix crash that occurs when saving the machine state

2024-05-17 Thread Thomas Huth
adapter_info_so_needed() treats its "opaque" parameter as a S390FLICState,
but the function belongs to a VMStateDescription that is attached to a
TYPE_VIRTIO_CCW_BUS device. This is currently causing a crash when the
user tries to save or migrate the VM state. Fix it by using s390_get_flic()
to get the correct device here instead.

Reported-by: Marc Hartmayer 
Fixes: 9d1b0f5bf5 ("s390_flic: add migration-enabled property")
Signed-off-by: Thomas Huth 
---
 hw/intc/s390_flic.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/intc/s390_flic.c b/hw/intc/s390_flic.c
index 7f93080087..6771645699 100644
--- a/hw/intc/s390_flic.c
+++ b/hw/intc/s390_flic.c
@@ -459,7 +459,7 @@ type_init(qemu_s390_flic_register_types)
 
 static bool adapter_info_so_needed(void *opaque)
 {
-S390FLICState *fs = S390_FLIC_COMMON(opaque);
+S390FLICState *fs = s390_get_flic();
 
 return fs->migration_enabled;
 }
-- 
2.45.0




Re: [PATCH 02/13] s390_flic: add migration-enabled property

2024-05-16 Thread Thomas Huth

On 16/05/2024 16.42, Marc Hartmayer wrote:

On Thu, May 09, 2024 at 07:00 PM +0200, Paolo Bonzini  
wrote:

Instead of mucking with css_migration_enabled(), add a property specific to
the FLIC device, similar to what is done for TYPE_S390_STATTRIB.

Signed-off-by: Paolo Bonzini 
---
  include/hw/s390x/s390_flic.h | 1 +
  hw/intc/s390_flic.c  | 6 +-
  hw/s390x/s390-virtio-ccw.c   | 1 +
  3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/include/hw/s390x/s390_flic.h b/include/hw/s390x/s390_flic.h
index 3907a13d076..bcb081def58 100644
--- a/include/hw/s390x/s390_flic.h
+++ b/include/hw/s390x/s390_flic.h
@@ -47,6 +47,7 @@ struct S390FLICState {
  /* to limit AdapterRoutes.num_routes for compat */
  uint32_t adapter_routes_max_batch;
  bool ais_supported;
+bool migration_enabled;
  };
  
  
diff --git a/hw/intc/s390_flic.c b/hw/intc/s390_flic.c

index f4a848460b8..7f930800877 100644
--- a/hw/intc/s390_flic.c
+++ b/hw/intc/s390_flic.c
@@ -405,6 +405,8 @@ static void qemu_s390_flic_class_init(ObjectClass *oc, void 
*data)
  static Property s390_flic_common_properties[] = {
  DEFINE_PROP_UINT32("adapter_routes_max_batch", S390FLICState,
 adapter_routes_max_batch, ADAPTER_ROUTES_MAX_GSI),
+DEFINE_PROP_BOOL("migration-enabled", S390FLICState,
+ migration_enabled, true),
  DEFINE_PROP_END_OF_LIST(),
  };
  
@@ -457,7 +459,9 @@ type_init(qemu_s390_flic_register_types)
  
  static bool adapter_info_so_needed(void *opaque)

  {
-return css_migration_enabled();
+S390FLICState *fs = S390_FLIC_COMMON(opaque);
+
+return fs->migration_enabled;
  }

...

This patch causes QEMU to crash when trying to save the domain state
(e.g. using libvirt)


Oh, drat, that vmstate belongs to a ccw device, not to a flic device, so the 
"opaque" pointer in adapter_info_so_needed points to the wrong structure.


I guess the easiest fix is:

diff --git a/hw/intc/s390_flic.c b/hw/intc/s390_flic.c
--- a/hw/intc/s390_flic.c
+++ b/hw/intc/s390_flic.c
@@ -459,7 +459,7 @@ type_init(qemu_s390_flic_register_types)

 static bool adapter_info_so_needed(void *opaque)
 {
-S390FLICState *fs = S390_FLIC_COMMON(opaque);
+S390FLICState *fs = s390_get_flic();

 return fs->migration_enabled;
 }

I'll send it as a proper patch...

 Thomas




Dennis van Dok: Application Manager report

2024-05-16 Thread Thomas Goirand (via nm.debian.org)
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

For nm.debian.org, at 2024-05-17:
After looking at Dennis van Dok 's contributions and after 
exchanging some emails to
get to know him a bit better, I agree with the advocate(s) that Dennis van Dok 
 can
and should indeed be a Debian Developer, uploading right now. Indeed, it feels 
like to me that he has
the knowledge and skills. Plus he also seems interested in constantly learning, 
which is always good.
Let's not make him wait any longer.
-BEGIN PGP SIGNATURE-

iQIzBAEBCAAdFiEE3+Kkgn20FPaRPp/ST56os/RrPrsFAmZG73EACgkQT56os/Rr
Prvh7hAAxa/P7qDWANOzqshi/DqmgqbL5wEi37i4EhwdKwD7/YMX7Jo/u1FyMyY7
vYApPiulpKVy4E+uPxvLLDF+fJvo2zRYH0pbfaX1dqxrhkq3b9ssSGbWszIaMsjK
XB9rJG82JrS0Dezc2L9Y2W1nV53mXD4WsNiysLkR+qdZSWtjXQ0Ib2z/pM0AXqQQ
/Z9wxwknIBm1CS9PbXXJphQQVTw8ubI8MEVpiMuquWoa3Mppp7rexrGZy5lwDcLp
+xhTfz4zEQSksv8lNjbkk40WjtVvF90jSIOIyrA6SGJU9tAtR4jg0uM4lkR0DoYl
h6GyVjWVhNeD0MTP4NkibgwhfeM0PoOZAFsnBootqxWJRr/aXKpfT4HokGLk27kY
kLEkjatO1FK7OQwgzNFfyYbAViu7XDzbFLowMV5rGRnx3MYLqGg6LDt5gLtsooP8
aPx5fk1Qygmp6ipVA2H1OUVE5yqvHYmLUmvJiC+jZrHv5BIEByCgzpkQoN7AuH1I
xDLNNKT2clJbW7dghiinvBBMpHTXAlv6koFOvpQCU9Hwv9x1RNmmnrYZMuCaoPB3
j30kr94r4Sk8Wjo1ib397W7YtDvbpsrBv51DuTFXzdq56psvgV9Zac/pv4FU1PNr
sLh4ePFp3mblQX9ZRSchHEMezDnjsP+6IJ4JbMEx3sxeZMFaGnU=
=L7DV
-END PGP SIGNATURE-

Thomas Goirand (via nm.debian.org)

For details and to comment, visit https://nm.debian.org/process/1279/
-- 
https://nm.debian.org/process/1279/



Re: [PATCH v2 1/3] docs: introduce dedicated page about code provenance / sign-off

2024-05-16 Thread Thomas Huth

On 16/05/2024 19.43, Peter Maydell wrote:

On Thu, 16 May 2024 at 18:34, Michael S. Tsirkin  wrote:


On Thu, May 16, 2024 at 06:29:39PM +0100, Peter Maydell wrote:

On Thu, 16 May 2024 at 17:22, Daniel P. Berrangé  wrote:


Currently we have a short paragraph saying that patches must include
a Signed-off-by line, and merely link to the kernel documentation.
The linked kernel docs have a lot of content beyond the part about
sign-off an thus are misleading/distracting to QEMU contributors.


Thanks for this -- I've felt for ages that it was a bit awkward
that we didn't have a good place to link people to for the fuller
explanation of this.


This introduces a dedicated 'code-provenance' page in QEMU talking
about why we require sign-off, explaining the other tags we commonly
use, and what to do in some edge cases.


The version of the kernel SubmittingPatches we used to link to
includes the text "sorry, no pseudonyms or anonymous contributions".
This new documentation doesn't say anything either way about
our approach to pseudonyms. I think we should probably say
something, but I don't know if we have an in-practice consensus
there, so maybe we should approach that as a separate change on
top of this patch.



Well given we referred to kernel previously then I guess that's
the concensus, no?


AIUI the kernel devs have changed their point of view on the
pseudonym question, so it's a question of whether we were
deliberately referring to that specific revision of the kernel's
practice because we agreed with it or just by chance...

https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=d4563201f33a022fc0353033d9dfeb1606a88330

is where the kernel changed to saying merely "no anonymous
contributions", dropping the 'pseudonyms' part.


FWIW, we had a clear statement in our document in the past:

https://gitlab.com/qemu-project/qemu/-/commit/ca127fe96ddb827f3ea153610c1e8f6e374708e2#9620a1442f724c9d8bfd5408e4611ba1839fcb8a_315_321

Quoting: "Please use your real name to sign a patch (not an alias or acronym)."

But it got lost in that rework, I assume by accident?

So IMHO we had a consensus once to not allow anonymous contributions. I'm in 
favor of adding such a sentence back here now.


 Thomas




Re: race condition when writing pg_control

2024-05-16 Thread Thomas Munro
The specific problem here is that LocalProcessControlFile() runs in
every launched child for EXEC_BACKEND builds.  Windows uses
EXEC_BACKEND, and Windows' NTFS file system is one of the two file
systems known to this list to have the concurrent read/write data
mashing problem (the other being ext4).




Re: A Script To Display The Viewrendered3 Pane in the Same Place s The Body

2024-05-16 Thread Thomas Passin
Here is a revised version of my script to show the viewrendered3 plugin in 
the same pane as the body editor.  The command toggles between the VR3 
rendered view and the body editor view.  I have it connected to a button on 
the button bar.  You can use it without even enabling the plugin in your 
settings file.

@language python
"""Toggles the body frame between the body editor and
a Viewrendered3 rendered view.
"""
from leo.core.leoQt import QtWidgets
import leo.plugins.viewrendered3 as v3

def toggle_vr3_in_body(c):
flc = c.free_layout  # free layout controller
top = flc.get_top_splitter()
if top:
bfw = top.find_child(QtWidgets.QWidget, "bodyFrame")
kids = bfw.children()

qf0 = kids[0]
stacked_layout = qf0.children()[0]
h = c.hash()
if stacked_layout.count() == 1:
v3.controllers[h] = vr3 = v3.ViewRenderedController3(c)
stacked_layout.addWidget(vr3)

if stacked_layout.currentIndex() == 0:
# vr3 = v3.controllers[h]
# stacked_layout.setCurrentWidget(vr3)
vr3 = stacked_layout.widget(1)
stacked_layout.setCurrentIndex(1)
vr3.set_unfreeze()
else:
vr3 = stacked_layout.widget(1)
vr3.set_freeze()
stacked_layout.setCurrentIndex(0)

toggle_vr3_in_body(c)

-- 
You received this message because you are subscribed to the Google Groups 
"leo-editor" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to leo-editor+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/leo-editor/5b8886b9-bd10-405c-8ef6-d22af41d648en%40googlegroups.com.


Re: glitches installing and starting Leo

2024-05-16 Thread Thomas Passin

On Thursday, May 16, 2024 at 11:04:15 PM UTC-4 andyjim wrote:

one more question and I'll try to leave you alone:
I use Bike, an outlining editor. The file extension is .bike
Can I represent an external Bike file in a Leo outline and from that Leo 
outline can I open the external file in the Bike application? 

 
I don't know anything about a Bike file.  In general, if you can run Bike 
and get it to open a file from the command line, we can easily issue that 
command from within Leo.  I see that a Bike outline is an html file. I 
don't know where they keep their images, etc, but it's not hard to display 
an html file in Leo.  Editing it is something else, though. I see that Bike 
also has an OPML format too.  OPML is a XML file but has a bad design for 
interchanging anything complicated.  In theory, an XSLT transformation 
could be written to convert it into a Leo file.  It's been so long since I 
worked on XSLT transformations (about 20 years) that I'm awfully rusty. 
There's also said to be a text format for Bike but I don't know anything 
about that.

So the answer might be that it can probably be done, but you might not be 
able to do the kind of editing of rich text that you can apparently do in 
Bike.  OTOH, Leo could be made to use a rich text editor instead of the 
plain text-based one it normally uses.  There's at least one plugin that 
does that, though I don't know if it still works.

Anyway, the answer to the question of representing an external Bike file in 
Leo is "probably yes, depending on what you want to do with it in Leo".

Reading a little more (Bike: An Elegant Outliner For Mac-Focused Workflows 
),
 
it looks like it's an elegant riff off of Dave Winer's MORE outliner.  
Surprise, so is Leo!  Dave strikes again.  And OPML is a data format from 
... Dave.  Or it could be more reminiscent of Radio Userland 
, another Dave Winer 
project.  Anyone else remember Radio Userland?

-- 
You received this message because you are subscribed to the Google Groups 
"leo-editor" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to leo-editor+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/leo-editor/2c2d9547-e6c4-4627-81e1-b8045af83b3dn%40googlegroups.com.


Re: Status report re PR #3911

2024-05-16 Thread Thomas Passin
This will be complicated enough and have a potentially big impact on 
existing code and users that I'd like to see a concept of operations and 
some requirements.  For one thing, without something like them no one will 
be able to help you.  For another, no one will know how things should work 
nor how to write code for them.In addition, it's much easier to correct 
mistakes early, and if we can have some discussion about the goals and the 
design, the community could help.

Saying that you will "invert" Terry's code so the helpers are higher up or 
more visible is not very definite or specific.  I'd rather start with what 
Terry's code does that we like, and what about it we don't like.  For 
example, I found I can open VR3 on top of the body editor without using a  
nested splitter except to use it to find the stacked widget that contains 
the body editor.  I need the free layout instance to get that splitter but not 
for anything else.  If I could directly get that stacked widget I would not 
have to talk to the free layout or a nested splitter at all.  So to me, it 
would be very desirable to have a method on c to return a container of a 
known object, and probably to enumerate those objects and containers, 
without having to know how to work my way through all the current layers of 
splitters and container widgets.  Notionally, something like this:

body_widget = c.getObject('bodyWrapper1')
container = body_widget.parent()
my_widget_index  = container.add(myWidget(c))
container.setIndex(my_widget_index)

The log frame works much like this.  Why can't all the container frames be 
more or less like the Log frame?

There are many Qt applications out there that have multiple panels and 
frames.  In some of them parts can be torn off and moved elsewhere.  New 
frames can be opened and populated. I don't suppose they all do it the same 
way, but there are probably some standard ways.  Pyzo is probably as good 
an exemplar as we could find.  Just how should the revised Leo interface 
work, from the point of view of users?  That would be a good place to start.

On Thursday, May 16, 2024 at 7:44:19 PM UTC-4 Edward K. Ream wrote:

> On Thursday, May 16, 2024 at 1:44:43 PM UTC-5 Edward K. Ream wrote:
>
> >> I hope that existing GUI plugins that use the 
> nested-splitter/free-layout will be able to continue working without 
> needing to be reworked
>
> > When True, the *g.allow_nested_splitter* switch enables both plugins to 
> work as before. As noted in the PR, this switch might be on "forever."
>
> Belay that. Leo's codebase should not contain toxic code switches. Such 
> switches are intolerable in the long run.
>
> I feel strongly enough about this that I am willing to convert legacy code 
> myself. This offer extends to you, Thomas, and anyone else.
>
> *Assuming the PR succeeds,* here is my present plan:
>
> - Terry's plugins (and the switch) will be part of Leo 6.7.9.
>
> - As part of 6.7.9, I'll convert all affected code in LeoPyRef.py.
>
> - The 6.7.9 release notes will warn of a breaking change in Leo 6.7.10 and 
> will offer to help with conversion.
>
> - I'll remove the switch and the two plugins as soon as 6.7.9 goes out the 
> door.
>
> *Summary*
>
> If PR #3911 <https://github.com/leo-editor/leo-editor/pull/3911> succeeds, 
> Leo 6.7.9 will be the last release that supports the free_layout and 
> nested_splitter plugins.
>
> The 6.7.9 release notes will warn of the breaking change and will offer 
> to help convert any existing code.
>
> Leo's long history includes removing many overly complex features. 
> Removing all vestiges of these plugins will make Leo simpler and more 
> maintainable.
>
> Again, I welcome all comments.
>
> Edward
>

-- 
You received this message because you are subscribed to the Google Groups 
"leo-editor" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to leo-editor+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/leo-editor/6e0608b6-1b25-4142-84b8-105b58f9bc8bn%40googlegroups.com.


Re: Mailing list SPF Failure

2024-05-16 Thread Michael Thomas



On 5/16/24 7:36 PM, John R. Levine wrote:

I think a lot of us have nanog whitelisted or otherwise special cased.


I don't, and gmail is my backend. That's trivial falsification of the claim 
that lack of an SPF record alone will cause gmail rejects.


Mike



Also, it's been pumping out list mail for decades and I expect has a 
close to zero complaint rate, so even without the SPF the IPs it sends 
from have a good reputation.


On Thu, 16 May 2024, Scott Q. wrote:


I'm surprised nobody noticed for close to 10 days. I was away
from work and upon coming back I saw the little discussion there was, 
in my Spam folder.

On Thursday, 16/05/2024 at 18:56 John R. Levine wrote:

On Thu, 16 May 2024, William Herrin wrote:

The message content (including the message headers) is theoretically
not used for SPF validation. In practice, some SPF validators don't
have direct access to the SMTP session so they rely on the SMTP
session placing the envelope sender in the Return-path header.


But that wasn't the problem here, the SPF record was just
gone.  Oops.

I see that the SPF record is back and seems have the correct addresses
so we can now return to our previously scheduled flamage.


Re: Should FCC look at SS7 vulnerabilities or BGP vulnerabilities

2024-05-16 Thread Michael Thomas



On 5/16/24 6:55 PM, John Levine wrote:

It appears that Brandon Martin  said:

I think the issue with their lack of effectiveness on spam calls is due
to the comparatively small number of players in the PSTN (speaking of
both classic TDM and modern IP voice-carrying and signaling networks)
world allowing lots of regulatory capture.

It's the opposite. SS7 was designed for a world with a handful of
large trustworthy telcos. But now that we have VoIP, it's a world of a
zillion sleasy little VoIP carriers stuffing junk into the network.
The real telcos have no desire to deliver spam calls. Everything is
bill and keep so they get no revenue and a lot of complaints.

Mike is right that STIR/SHAKEN is more complex than it needs to be but
even after it was widely deployed, the telcos had to argue with the
FCC to change the rules so they were allowed to drop spam calls which
only changed recently. That's why you see PROBABLE SPAM rather than
just not getting the call.


I was screaming at the top of my lungs that P-Asserted-Identity was 
going to bite them in the ass 20 years ago. And then they eventually 
came up with something that solved the wrong problem in the most 
bellheaded way possible 15 years later. Bellheads should not be trusted 
with internet security. The FCC is most likely not blameless here either 
but the telcos/bellheads most certainly aren't either. Anybody who 
thinks this is an either/or problem is wrong.


Mike



Re: Mailing list SPF Failure

2024-05-16 Thread Michael Thomas


On 5/16/24 7:22 PM, Scott Q. wrote:
Mike, you do realize Google/Gmail rejects e-mails with invalid/missing 
SPF right ?


I was receiving the mail while NANOG had no SPF record, so no? Any 
receiver would be really stupid to take a single signal as disqualifying.


Mike




If you want to tell them they're broken...there's a few guys on the 
list here.


On Thursday, 16/05/2024 at 19:17 Michael Thomas wrote:

On 5/16/24 3:54 PM, William Herrin wrote:
> On Thu, May 16, 2024 at 12:03 PM John Levine <jo...@iecc.com> wrote:
>> It appears that Michael Thomas <m...@mtcc.com> said:
>>> Since probably 99% of the mail from NANOG is through this list, it
>>> hardly matters since SPF will always fail.
>> Sorry, but no. A mailing list puts its own envelope return
address on
>> the message so with a reasonable SPF record, SPF will normally
>> succeed.
> Exactly. SPF acts on the -envelope- sender. That means the one
> presented in the SMTP From:<> command. For mail from nanog, that's:
> nanog-bounces+addr...@nanog.org, regardless of what the sender's
> header From address is.
>
> The message content (including the message headers) is theoretically
> not used for SPF validation. In practice, some SPF validators don't
> have direct access to the SMTP session so they rely on the SMTP
> session placing the envelope sender in the Return-path header.

Yes, and why is that needed? The mailing list resigning has the same
effect and then you only need one mechanism instead of two and
with DKIM
you get the benefit that it's signing the 822 address which can be
used
for user level stuff in way that SPF is a little sus. So it makes SPF
pretty irrelevant. IMO, SPF was always a stopgap since there was no
guarantee that DKIM would be deployed. 20 years on, I guess I
don't feel
like I need to keep my trap shut about that.

If a receiving site is rejecting something solely based on the
lack of a
SPF record but has a valid DKIM signature, the site is broken IMO.

Mike


Re: Potential stack overflow in incremental base backup

2024-05-16 Thread Thomas Munro
On Tue, Apr 16, 2024 at 4:10 AM Robert Haas  wrote:
> On Wed, Apr 10, 2024 at 9:55 PM Thomas Munro  wrote:
> > To rescue my initdb --rel-segsize project[1] for v18, I will have a go
> > at making that dynamic.  It looks like we don't actually need to
> > allocate that until we get to the GetFileBackupMethod() call, and at
> > that point we have the file size.  If I understand correctly,
> > statbuf.st_size / BLCKSZ would be enough, so we could embiggen our
> > block number buffer there if required, right?
>
> Yes.

Here is a first attempt at that.  Better factoring welcome.  New
observations made along the way: the current coding can exceed
MaxAllocSize and error out, or overflow 32 bit size_t and allocate
nonsense.  Do you think it'd be better to raise an error explaining
that, or silently fall back to full backup (tried that way in the
attached), or that + log messages?  Another option would be to use
huge allocations, so we only have to deal with that sort of question
for 32 bit systems (i.e. effectively hypothetical/non-relevant
scenarios), but I don't love that idea.
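
For reference, the arithmetic behind those limits, as a standalone sketch
(assuming the default BLCKSZ of 8192, a 4-byte BlockNumber and MaxAllocSize
of 0x3fffffff; just the numbers, not PostgreSQL code):

#include <stdio.h>

int main(void)
{
	const unsigned long long blcksz = 8192;              /* default BLCKSZ */
	const unsigned long long max_alloc = 0x3fffffffULL;  /* MaxAllocSize */
	const unsigned long long segsizes[] = {
		1ULL << 30,	/* 1 GB, the default segment size */
		2ULL << 40,	/* 2 TB */
		8ULL << 40,	/* 8 TB */
	};

	for (int i = 0; i < 3; i++) {
		unsigned long long nblocks = segsizes[i] / blcksz;
		unsigned long long bytes = nblocks * 4;  /* sizeof(BlockNumber) */

		printf("%4llu GB segment: %10llu block numbers = %10llu bytes; "
		       "> MaxAllocSize: %s; overflows 32-bit size_t: %s\n",
		       segsizes[i] >> 30, nblocks, bytes,
		       bytes > max_alloc ? "yes" : "no",
		       bytes > 0xffffffffULL ? "yes" : "no");
	}
	return 0;
}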

> ...
> I do understand that a 1GB segment size is not that big in 2024, and
> that filesystems with a 2GB limit are thought to have died out a long
> time ago, and I'm not against using larger segments. I do think,
> though, that increasing the segment size by 32768x in one shot is
> likely to be overdoing it.

My intuition is that the primary interesting lines to cross are at 2GB
and 4GB due to data type stuff.  I defend against the most basic
problem in my proposal: I don't let you exceed your off_t type, but
that doesn't mean we don't have off_t/ssize_t/size_t/long snafus
lurking in our code that could bite a 32 bit system with large files.
If you make it past those magic numbers and your tools are all happy,
I think you should be home free until you hit file system limits,
which are effectively unhittable on most systems except ext4's already
bemoaned 16TB limit AFAIK.  But all the same, I'm contemplating
limiting the range to 1TB in the first version, not because of general
fear of unknown unknowns, but just because it means we don't need to
use "huge" allocations for this known place, maybe until we can
address that.
From 30efb6d39c83e9d4f338e10e1b8944c64f8799c5 Mon Sep 17 00:00:00 2001
From: Thomas Munro 
Date: Wed, 15 May 2024 17:23:21 +1200
Subject: [PATCH v1] Limit block number buffer size in incremental backup.

Previously, basebackup.c would allocate an array big enough to hold a
block number for every block in a full sized md.c segment file.  That
works out to 512kB by default, which should be no problem.  However,
users can change the segment size at compile time, and the required
space for the array could be many gigabytes, posing problems:

1. For segment sizes over 2TB, MaxAllocSize would be exceeded,
   raising an error.
2. For segment sizes over 8TB, size_t arithmetic would overflow on 32 bit
   systems, leading to the wrong size allocation.
3. For any very large segment size, it's non-ideal to allocate a huge
   amount of memory if you're not actually going to use it.

This isn't a fundamental fix for the high memory requirement of the
algorithm as coded, but it seems like a good idea to avoid those limits
with a fallback strategy, and defer allocating until we see how big the
segments actually are in the cluster being backed up, upsizing as
required.

These are mostly theoretical problems at the moment, because it is so
hard to change the segment size in practice that people don't do it.  A
new proposal would make it changeable at run time, hence interest in
tidying these rough edges up.

Discussion: https://postgr.es/m/CA%2BhUKG%2B2hZ0sBztPW4mkLfng0qfkNtAHFUfxOMLizJ0BPmi5%2Bg%40mail.gmail.com

diff --git a/src/backend/backup/basebackup.c b/src/backend/backup/basebackup.c
index 9a2bf59e84e..0703b77af94 100644
--- a/src/backend/backup/basebackup.c
+++ b/src/backend/backup/basebackup.c
@@ -34,6 +34,7 @@
 #include "pgstat.h"
 #include "pgtar.h"
 #include "port.h"
+#include "port/pg_bitutils.h"
 #include "postmaster/syslogger.h"
 #include "postmaster/walsummarizer.h"
 #include "replication/walsender.h"
@@ -45,6 +46,7 @@
 #include "storage/reinit.h"
 #include "utils/builtins.h"
 #include "utils/guc.h"
+#include "utils/memutils.h"
 #include "utils/ps_status.h"
 #include "utils/relcache.h"
 #include "utils/resowner.h"
@@ -1198,13 +1200,7 @@ sendDir(bbsink *sink, const char *path, int basepathlen, bool sizeonly,
 	bool		isGlobalDir = false;
 	Oid			dboid = InvalidOid;
 	BlockNumber *relative_block_numbers = NULL;
-
-	/*
-	 * Since this array is relatively large, avoid putting it on the stack.
-	 * But we don't need it at all if this is not an incremental backup.
-	 */
-	if (ib != NULL)
-		relative_block_numbers 

Re: Should FCC look at SS7 vulnerabilities or BGP vulnerabilities

2024-05-16 Thread Michael Thomas



On 5/16/24 4:17 PM, Brandon Martin wrote:


I think the issue with their lack of effectiveness on spam calls is 
due to the comparatively small number of players in the PSTN (speaking 
of both classic TDM and modern IP voice-carrying and signaling 
networks) world allowing lots of regulatory capture. That's going to 
keep the FCC from issuing mandatory rules much beyond what much of the 
industry is on the road to implementing already to keep their 
customers placated.


I think it should be pointed out that the STIR/SHAKEN crowd doesn't 
really get it either. The problem is mainly a problem of the border 
between bad guys and the onramps onto the PSTN. SIP has made that dirt 
cheap and something anybody can do it for nothing at all down in their 
basements. It's essentially the same thing as email back in the days of 
open relays and no submission auth. STIR/SHAKEN obfuscated that problem 
by trying to solve the problem of who is allowed to assert what E.164 
address when it's much easier to solve in the "where did this come from 
and who should I blame?" realm. I don't hear anybody moaning about 
deploying DKIM except maybe spammer sites that don't want accountability 
and their onramp sites that turn a blind eye making money off them. They 
care these days because for legit senders, baddies cost them money due 
to deliverability. It would have been trivial to attach a DKIM like 
signature to SIP messages and be done with it instead of trying to boil 
the legacy addressing ocean. I should know, I did that for shits and 
giggles about 20 years ago.


Mike




Re: Mailing list SPF Failure

2024-05-16 Thread Michael Thomas



On 5/16/24 3:54 PM, William Herrin wrote:

On Thu, May 16, 2024 at 12:03 PM John Levine  wrote:

It appears that Michael Thomas  said:

Since probably 99% of the mail from NANOG is through this list, it
hardly matters since SPF will always fail.

Sorry, but no. A mailing list puts its own envelope return address on
the message so with a reasonable SPF record, SPF will normally
succeed.

Exactly. SPF acts on the -envelope- sender. That means the one
presented in the SMTP From:<> command. For mail from nanog, that's:
nanog-bounces+addr...@nanog.org, regardless of what the sender's
header From address is.

The message content (including the message headers) is theoretically
not used for SPF validation. In practice, some SPF validators don't
have direct access to the SMTP session so they rely on the SMTP
session placing the envelope sender in the Return-path header.


Yes, and why is that needed? The mailing list re-signing has the same 
effect, and then you only need one mechanism instead of two, and with DKIM 
you get the benefit that it's signing the 822 address, which can be used 
for user-level stuff in a way that SPF is a little sus. So it makes SPF 
pretty irrelevant. IMO, SPF was always a stopgap since there was no 
guarantee that DKIM would be deployed. 20 years on, I guess I don't feel 
like I need to keep my trap shut about that.


If a receiving site is rejecting something that has a valid DKIM signature 
solely based on the lack of an SPF record, the site is broken IMO.


Mike



Re: Requiring LLVM 14+ in PostgreSQL 18

2024-05-16 Thread Thomas Munro
On Fri, May 17, 2024 at 3:17 AM Nazir Bilal Yavuz  wrote:
> Actually, 32 bit builds are working but the Perl version needs to be
> updated to 'perl5.36-i386-linux-gnu' in .cirrus.tasks.yml. I changed
> 0001 with the working version of 32 bit builds [1] and the rest is the
> same. All tests pass now [2].

Ahh, right, thanks!  I will look at committing your CI/fixup patches.




Re: starting rkward with root priveleges in opensuse leap 15.5

2024-05-16 Thread Thomas Friedrichsmeier
Hi,

On Wed, 15 May 2024 20:09:14 -0500
ndordea  wrote:
> Could you please tell me  if it is possible to start rkward 0.7.5 [
> from opensuse R-release repo] as root .
> I tried using --no-sandbox and starting rkward from a root session or
> via kdesu -c ' rkward --no- sandbox'

I am somewhat baffled why some people would like to run rkward as
root. It is definitely not something recommended, and I am not sure
whether there is any true use case for it (although you are not the
first person to ask). Perhaps, if you could tell us what you want to
achieve, we can suggest a workaround.

Anyway, the immediate reason why running as root fails is indeed
sandboxing restrictions in the embedded QWebEngine. The correct way to
pass additional parameters (such as --no-sandbox) to the QWebEngine is
by using environment variables:

  sudo QTWEBENGINE_CHROMIUM_FLAGS="--no-sandbox" rkward

Regards
Thomas


pgp7XYjMOi84H.pgp
Description: OpenPGP digital signature

