[webkit-changes] [WebKit/WebKit] 42f434: [Writing Tools] Smart replies erases all content i...

2024-06-12 Thread Richard Robinson
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: 42f4341e3c1b709ecd35df29d4756df51038122e
  
https://github.com/WebKit/WebKit/commit/42f4341e3c1b709ecd35df29d4756df51038122e
  Author: Richard Robinson 
  Date:   2024-06-12 (Wed, 12 Jun 2024)

  Changed paths:
M Source/WebCore/page/unified-text-replacement/UnifiedTextReplacementController.mm

  Log Message:
  ---
  [Writing Tools] Smart replies erases all content in mail compose, fails to 
insert reply, WebKit.content crashes @ com.apple.WebKit: 
IPC::ArgumentCoder::encode
https://bugs.webkit.org/show_bug.cgi?id=275419
rdar://129697243

Reviewed by Megan Gardner and Aditya Keerthi.

When using Smart Replies, an attributed string was being generated from the 
entire editable content. However, Smart Replies doesn't need any attributed 
string context at all, so fix this by returning an empty attributed string in 
that case.

* Source/WebCore/page/unified-text-replacement/UnifiedTextReplacementController.mm:
(WebCore::UnifiedTextReplacementController::willBeginTextReplacementSession):

Canonical link: https://commits.webkit.org/279968@main
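The shape of the fix can be sketched in a few lines of Python (all names here are illustrative stand-ins, not the actual WebCore API):

```python
def context_for_session(session_type, editable_text):
    """Return the attributed-string context for a writing-tools session.

    Sketch of the fix: Smart Replies need no attributed-string context,
    so hand back an empty string instead of one built from the entire
    editable content, whose size could overflow the IPC encoder.
    """
    if session_type == "smart-reply":
        return ""
    return editable_text
```

The guard runs before any serialization, so the oversized payload is never constructed in the first place.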



To unsubscribe from these emails, change your notification settings at 
https://github.com/WebKit/WebKit/settings/notifications
___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


Re: Guix system image record with a large root partition

2024-06-12 Thread Richard Sent
Sergey Trofimov  writes:

> Building a full-sized disk image seems wasteful, especially for large
> partitions.

This hasn't been merged, but there was a patch proposed for a
resize-fs-service that would resize the partition on boot.
https://issues.guix.gnu.org/69090. Maybe worth taking a look at?

-- 
Take it easy,
Richard Sent
Making my computer weirder one commit at a time.



[Qemu-commits] [qemu/qemu] b4912a: scsi-disk: Fix crash for VM configured with USB CD...

2024-06-12 Thread Richard Henderson via Qemu-commits
  Commit: c94eb5db8e409c932da9eb187e68d4cdc14acc5b
  
https://github.com/qemu/qemu/commit/c94eb5db8e409c932da9eb187e68d4cdc14acc5b
  Author: Pankaj Gupta 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M target/i386/sev.c

  Log Message:
  ---
  i386/sev: fix unreachable code coverity issue

Set 'finish->id_block_en' early, so that it is properly reset.

Fixes coverity CID 1546887.

Fixes: 7b34df4426 ("i386/sev: Introduce 'sev-snp-guest' object")
Signed-off-by: Pankaj Gupta 
Message-ID: <20240607183611.100-2-pankaj.gu...@amd.com>
Signed-off-by: Paolo Bonzini 


  Commit: 48779faef3c8e2fe70bd8285bffa731bd76dc844
  
https://github.com/qemu/qemu/commit/48779faef3c8e2fe70bd8285bffa731bd76dc844
  Author: Pankaj Gupta 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M target/i386/sev.c

  Log Message:
  ---
  i386/sev: Move SEV_COMMON null check before dereferencing

Fixes Coverity CID 1546886.

Fixes: 9861405a8f ("i386/sev: Invoke launch_updata_data() for SEV class")
Signed-off-by: Pankaj Gupta 
Message-ID: <20240607183611.100-3-pankaj.gu...@amd.com>
Signed-off-by: Paolo Bonzini 


  Commit: cd7093a7a168a823d07671348996f049d45e8f67
  
https://github.com/qemu/qemu/commit/cd7093a7a168a823d07671348996f049d45e8f67
  Author: Pankaj Gupta 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M target/i386/sev.c

  Log Message:
  ---
  i386/sev: Return when sev_common is null

Fixes Coverity CID 1546885.

Fixes: 16dcf200dc ("i386/sev: Introduce "sev-common" type to encapsulate common 
SEV state")
Signed-off-by: Pankaj Gupta 
Message-ID: <20240607183611.100-4-pankaj.gu...@amd.com>
Signed-off-by: Paolo Bonzini 


  Commit: 4228eb8cc6ba44d35cd52b05508a47e780668051
  
https://github.com/qemu/qemu/commit/4228eb8cc6ba44d35cd52b05508a47e780668051
  Author: Paolo Bonzini 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M target/i386/tcg/decode-new.c.inc
M target/i386/tcg/decode-new.h
M target/i386/tcg/emit.c.inc

  Log Message:
  ---
  target/i386: remove CPUX86State argument from generator functions

The CPUX86State argument was only used to fetch bytes, but that has to be
done before the generator function is called.  So remove it, and all
temptation together with it.

Reviewed-by: Richard Henderson 
Signed-off-by: Paolo Bonzini 


  Commit: cc155f19717ced44d70df3cd5f149a5b9f9a13f1
  
https://github.com/qemu/qemu/commit/cc155f19717ced44d70df3cd5f149a5b9f9a13f1
  Author: Paolo Bonzini 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M target/i386/cpu.h
M target/i386/tcg/emit.c.inc

  Log Message:
  ---
  target/i386: rewrite flags writeback for ADCX/ADOX

Avoid using set_cc_op() in preparation for implementing APX; treat
CC_OP_EFLAGS similar to the case where we have the "opposite" cc_op
(CC_OP_ADOX for ADCX and CC_OP_ADCX for ADOX), except the resulting
cc_op is not CC_OP_ADCOX. This is written easily as two "if"s, whose
conditions are both false for CC_OP_EFLAGS, both true for CC_OP_ADCOX,
and one each true for CC_OP_ADCX/ADOX.

The new logic also makes it easy to drop usage of tmp0.

Reviewed-by: Richard Henderson 
Signed-off-by: Paolo Bonzini 
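For reference, the architectural behavior that makes this bookkeeping necessary: ADCX and ADOX perform the same unsigned add, but chain the carry through CF and OF respectively, leaving every other flag untouched. A small Python model of the two instructions (a sketch of the ISA semantics, not QEMU code):

```python
MASK64 = (1 << 64) - 1

def adcx(dst, src, flags):
    # ADCX: add with carry-in from CF; writes CF only, OF is preserved.
    total = dst + src + flags["CF"]
    return total & MASK64, {**flags, "CF": int(total > MASK64)}

def adox(dst, src, flags):
    # ADOX: the same addition, but chained through OF; CF is preserved.
    total = dst + src + flags["OF"]
    return total & MASK64, {**flags, "OF": int(total > MASK64)}
```

Because the two carry chains are independent, tracking which of CF/OF is live after each instruction is exactly what the CC_OP_ADCX/CC_OP_ADOX/CC_OP_ADCOX states encode.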


  Commit: e628387cf9a27a4895b00821313635fad4cfab43
  
https://github.com/qemu/qemu/commit/e628387cf9a27a4895b00821313635fad4cfab43
  Author: Paolo Bonzini 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M target/i386/tcg/decode-new.c.inc
M target/i386/tcg/emit.c.inc

  Log Message:
  ---
  target/i386: put BLS* input in T1, use generic flag writeback

This makes for easier cpu_cc_* setup, and not using set_cc_op()
should come in handy if QEMU ever implements APX.

Reviewed-by: Richard Henderson 
Signed-off-by: Paolo Bonzini 


  Commit: c2b6b6a65a227d2bb45e1b2694cf064b881543e4
  
https://github.com/qemu/qemu/commit/c2b6b6a65a227d2bb45e1b2694cf064b881543e4
  Author: Paolo Bonzini 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M target/i386/tcg/decode-new.c.inc
M target/i386/tcg/emit.c.inc

  Log Message:
  ---
  target/i386: change X86_ENTRYr to use T0

I am not sure why I made it use T1.  It is a bit more symmetric with
respect to X86_ENTRYwr (which uses T0 for the "w"ritten operand
and T1 for the "r"ead operand), but it is also less flexible because it
does not let you apply zextT0/sextT0.

Reviewed-by: Richard Henderson 
Signed-off-by: Paolo Bonzini 


  Commit: 4e2dc59cf99b5d352b426ee30b8fbb9804e237d1
  
https://github.com/qemu/qemu/commit/4e2dc59cf99b5d352b426ee30b8fbb9804e237d1
  Author: Paolo Bonzini 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M target/i386/tcg/decode-new.c.inc
M target/i386/tcg/emit.c.inc

  Log Message:
  ---
  target/i386: change X86_ENTRYwr to use T0, use it for moves

Just like X86_ENTRYr, X86_ENTRYwr is easily changed to use only T0.
In this case, the motivation is to use it for the MOV instruction.

Re: [PULL 0/6] Tracing patches

2024-06-12 Thread Richard Henderson

On 6/10/24 10:13, Stefan Hajnoczi wrote:

The following changes since commit 80e8f0602168f451a93e71cbb1d59e93d745e62e:

   Merge tag 'bsd-user-misc-2024q2-pull-request' of gitlab.com:bsdimp/qemu into 
staging (2024-06-09 11:21:55 -0700)

are available in the Git repository at:

   https://gitlab.com/stefanha/qemu.git  tags/tracing-pull-request

for you to fetch changes up to 4c2b6f328742084a5bd770af7c3a2ef07828c41c:

   tracetool: Forbid newline character in event format (2024-06-10 13:05:27 
-0400)


Pull request

Cleanups from Philippe Mathieu-Daudé.


Applied, thanks.  Please update https://wiki.qemu.org/ChangeLog/9.1 as 
appropriate.


r~




[Qemu-commits] [qemu/qemu] 0e2b9e: tracetool: Remove unused vcpu.py script

2024-06-12 Thread Richard Henderson via Qemu-commits
  Branch: refs/heads/master
  Home:   https://github.com/qemu/qemu
  Commit: 0e2b9edfb6126fd9ce235a1b34ba20bbeb2547ae
  
https://github.com/qemu/qemu/commit/0e2b9edfb6126fd9ce235a1b34ba20bbeb2547ae
  Author: Philippe Mathieu-Daudé 
  Date:   2024-06-10 (Mon, 10 Jun 2024)

  Changed paths:
M meson.build
M scripts/tracetool/__init__.py
R scripts/tracetool/vcpu.py

  Log Message:
  ---
  tracetool: Remove unused vcpu.py script

vcpu.py is pointless since commit 89aafcf2a7 ("trace:
remove code that depends on setting vcpu"), remove it.

Signed-off-by: Philippe Mathieu-Daudé 
Reviewed-by: Daniel P. Berrangé 
Reviewed-by: Zhao Liu 
Message-id: 20240606102631.78152-1-phi...@linaro.org
Signed-off-by: Stefan Hajnoczi 


  Commit: 7682ecd48d6c177667e02a64b4287d7f31c27bd8
  
https://github.com/qemu/qemu/commit/7682ecd48d6c177667e02a64b4287d7f31c27bd8
  Author: Philippe Mathieu-Daudé 
  Date:   2024-06-10 (Mon, 10 Jun 2024)

  Changed paths:
M backends/tpm/tpm_util.c
M backends/tpm/trace-events

  Log Message:
  ---
  backends/tpm: Remove newline character in trace event

Split the 'tpm_util_show_buffer' event in two to avoid
using a newline character.

Signed-off-by: Philippe Mathieu-Daudé 
Acked-by: Mads Ynddal 
Reviewed-by: Daniel P. Berrangé 
Reviewed-by: Stefan Berger 
Message-id: 20240606103943.79116-2-phi...@linaro.org
Signed-off-by: Stefan Hajnoczi 


  Commit: 769244f9fcd12d91c56db1ad9f318f5bb28e4907
  
https://github.com/qemu/qemu/commit/769244f9fcd12d91c56db1ad9f318f5bb28e4907
  Author: Philippe Mathieu-Daudé 
  Date:   2024-06-10 (Mon, 10 Jun 2024)

  Changed paths:
M hw/sh4/trace-events

  Log Message:
  ---
  hw/sh4: Remove newline character in trace events

Trace events aren't designed to be multi-line. Remove
the newline character, which doesn't add much value.

Signed-off-by: Philippe Mathieu-Daudé 
Acked-by: Mads Ynddal 
Reviewed-by: Daniel P. Berrangé 
Message-id: 20240606103943.79116-3-phi...@linaro.org
Signed-off-by: Stefan Hajnoczi 


  Commit: ce3d01da898ad73509c0d5a851d775670fb7ba1e
  
https://github.com/qemu/qemu/commit/ce3d01da898ad73509c0d5a851d775670fb7ba1e
  Author: Philippe Mathieu-Daudé 
  Date:   2024-06-10 (Mon, 10 Jun 2024)

  Changed paths:
M hw/usb/trace-events

  Log Message:
  ---
  hw/usb: Remove newline character in trace events

Trace events aren't designed to be multi-line.
Remove the newline characters.

Signed-off-by: Philippe Mathieu-Daudé 
Acked-by: Mads Ynddal 
Reviewed-by: Daniel P. Berrangé 
Message-id: 20240606103943.79116-4-phi...@linaro.org
Signed-off-by: Stefan Hajnoczi 


  Commit: 956f63f87826fcd96d256bdbca17d31b060940e0
  
https://github.com/qemu/qemu/commit/956f63f87826fcd96d256bdbca17d31b060940e0
  Author: Philippe Mathieu-Daudé 
  Date:   2024-06-10 (Mon, 10 Jun 2024)

  Changed paths:
M hw/vfio/trace-events

  Log Message:
  ---
  hw/vfio: Remove newline character in trace events

Trace events aren't designed to be multi-line.
Remove the newline characters.

Signed-off-by: Philippe Mathieu-Daudé 
Acked-by: Mads Ynddal 
Reviewed-by: Daniel P. Berrangé 
Message-id: 20240606103943.79116-5-phi...@linaro.org
Signed-off-by: Stefan Hajnoczi 


  Commit: 4c2b6f328742084a5bd770af7c3a2ef07828c41c
  
https://github.com/qemu/qemu/commit/4c2b6f328742084a5bd770af7c3a2ef07828c41c
  Author: Philippe Mathieu-Daudé 
  Date:   2024-06-10 (Mon, 10 Jun 2024)

  Changed paths:
M scripts/tracetool/__init__.py

  Log Message:
  ---
  tracetool: Forbid newline character in event format

Events aren't designed to be multi-line; multiple events
can be used instead. Prevent formats from spanning lines
by forbidding the newline character.

Signed-off-by: Philippe Mathieu-Daudé 
Acked-by: Mads Ynddal 
Reviewed-by: Daniel P. Berrangé 
Message-id: 20240606103943.79116-6-phi...@linaro.org
Signed-off-by: Stefan Hajnoczi 
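The check being added boils down to a build-time validation like the following (a sketch of the idea, not the actual tracetool code):

```python
def check_event_format(name, fmt):
    # Trace events are single-line records; a newline in the format
    # string would split one event across lines in the output, so
    # reject it at build time instead.
    if "\n" in fmt:
        raise ValueError(
            f"trace event '{name}': newline forbidden in format string; "
            "split the event in two instead")
    return fmt
```

Failing the build is preferable to silently emitting malformed trace output that downstream log parsers would choke on.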


  Commit: f3e8cc47de2bc537d4991e883a85208e4e1c0f98
  
https://github.com/qemu/qemu/commit/f3e8cc47de2bc537d4991e883a85208e4e1c0f98
  Author: Richard Henderson 
  Date:   2024-06-12 (Wed, 12 Jun 2024)

  Changed paths:
M backends/tpm/tpm_util.c
M backends/tpm/trace-events
M hw/sh4/trace-events
M hw/usb/trace-events
M hw/vfio/trace-events
M meson.build
M scripts/tracetool/__init__.py
R scripts/tracetool/vcpu.py

  Log Message:
  ---
  Merge tag 'tracing-pull-request' of https://gitlab.com/stefanha/qemu into 
staging

Pull request

Cleanups from Philippe Mathieu-Daudé.


Re: BBC 1 TV Schedules not downloading

2024-06-12 Thread Richard
On Wednesday, 12 June 2024 23:33:33 BST MacFH - C E Macfarlane - News wrote:
> As per subject, myself and at least one or two others are not able to
> obtain BBC 1 TV Schedules, what we see is this ...
> 
> WARNING: Got 0 programmes for BBC One schedule page (HTML):
> https://www.bbc.co.uk/schedules/p00fzl6n/2024/w24
> 
> WARNING: Failed to parse BBC One schedule page:
> https://www.bbc.co.uk/schedules/p00fzl6n/2024/w24
> 
> Other channels seem fine.
> 
> ___
> get_iplayer mailing list
> get_iplayer@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/get_iplayer

I'm getting the same error here.

-- 
Richard.

signature.asc
Description: This is a digitally signed message part.


RE: Questions regarding HBO Max, Hallmark Movies, and PBS

2024-06-12 Thread Richard Turner
Audio Description on the iPhone using the Prime app works fine.



Richard, USA
"It's no great honor to be blind, but it's more than a nuisance and less than a 
disaster. Either you're going to fight like hell when your sight fails or 
you're going to stand on the sidelines for the rest of your life." -- Dr. 
Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009)

My web site: https://www.turner42.com

-Original Message-
From: 'Kliph Miller Sr' via VIPhone  
Sent: Wednesday, June 12, 2024 1:41 PM
To: viphone@googlegroups.com
Subject: Re: Questions regarding HBO Max, Hallmark Movies, and PBS

I don’t know about the audio description through Prime, but I do know the app is 
most accessible on the Apple TV; the phone is usable, and so is the iPad, but the 
Apple TV will give you the best experience.


> On Jun 12, 2024, at 12:29 AM, Terri Stimmel  
> wrote:
> 
> Hello everyone,
> 
> 
> I currently dropped my cable package, as it was just getting way too 
> expensive.
> 
> However, in doing this I lost free access to HBO Max. As long as I had the 
> HBO channel, I got access to Max for free, through Spectrum.
> 
> 
> I have other Apps, such as Peacock, Prime video, and Paramount. I use my 
> iPhone to watch stuff. As well as my iPad, and my Samsung TV.
> 
> 
> So, my question is this.
> 
> What might be the easiest way to access Max? Where might the App be the most 
> accessible?
> 
> Also, if I purchase it using something like Amazon Prime, will I then have 
> access to all of the content with audio description?
> 
> This is very important to me, and what I want.
> 
> 
> Another couple of questions I have are as follows.
> 
> 
> What is the easiest, most accessible way to watch Hallmark movies?
> 
> Is this even an option?
> 
> 
> And my last question for now is this.
> 
> 
> Is there a way to easily access PBS, and any of the audio described content 
> offered there?
> 
> From my understanding, audio described content can be hit, or miss.
> 
> Does anyone know if this is still the case?
> 
> 
> Any thoughts, or advice on any of these Apps, will be very much appreciated!
> 
> 
> Thank you,
> 
> 
> Terri
> 
> -- 
> The following information is important for all members of the V iPhone list.
> 
> If you have any questions or concerns about the running of this list, or if 
> you feel that a member's post is inappropriate, please contact the owners or 
> moderators directly rather than posting on the list itself.
> 
> Your V iPhone list moderator is Mark Taylor.  Mark can be reached at:  
> mk...@ucla.edu.  Your list owner is Cara Quinn - you can reach Cara at 
> caraqu...@caraquinn.com
> 
> The archives for this list can be searched at:
> http://www.mail-archive.com/viphone@googlegroups.com/
> --- You received this message because you are subscribed to the Google Groups 
> "VIPhone" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to viphone+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/viphone/CY8PR14MB6851BDFDB840A39A79E64A2DA8C02%40CY8PR14MB6851.namprd14.prod.outlook.com.

-- 
The following information is important for all members of the V iPhone list.

If you have any questions or concerns about the running of this list, or if you 
feel that a member's post is inappropriate, please contact the owners or 
moderators directly rather than posting on the list itself.

Your V iPhone list moderator is Mark Taylor.  Mark can be reached at:  
mk...@ucla.edu.  Your list owner is Cara Quinn - you can reach Cara at 
caraqu...@caraquinn.com

The archives for this list can be searched at:
http://www.mail-archive.com/viphone@googlegroups.com/
--- 
You received this message because you are subscribed to the Google Groups 
"VIPhone" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to viphone+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/viphone/15C4C366-DA41-4823-85E1-95E16AC73318%40icloud.com.



Re: [PATCH v3 1/2] arm: Zero/Sign extends for CMSE security on Armv8-M.baseline [PR115253]

2024-06-12 Thread Richard Sandiford
"Richard Earnshaw (lists)"  writes:
> On 10/06/2024 15:04, Torbjörn SVENSSON wrote:
>> Properly handle zero and sign extension for Armv8-M.baseline as
>> Cortex-M23 can have the security extension active.
>> Currently, there is an internal compiler error on Cortex-M23 for the
>> epilog processing of sign extension.
>> 
>> This patch addresses the following CVE-2024-0151 for Armv8-M.baseline.
>> 
>> gcc/ChangeLog:
>> 
>>  PR target/115253
>>  * config/arm/arm.cc (cmse_nonsecure_call_inline_register_clear):
>>  Sign extend for Thumb1.
>>  (thumb1_expand_prologue): Add zero/sign extend.
>> 
>> Signed-off-by: Torbjörn SVENSSON 
>> Co-authored-by: Yvan ROUX 
>> ---
>>  gcc/config/arm/arm.cc | 71 ++-
>>  1 file changed, 63 insertions(+), 8 deletions(-)
>> 
>> diff --git a/gcc/config/arm/arm.cc b/gcc/config/arm/arm.cc
>> index ea0c963a4d6..e7b4caf1083 100644
>> --- a/gcc/config/arm/arm.cc
>> +++ b/gcc/config/arm/arm.cc
>> [...]
>> +&& known_ge (GET_MODE_SIZE (TYPE_MODE (ret_type)), 2))
>
> You can use known_eq here.  We'll never have any value other than 2, given 
> the known_le (4) above and anyway it doesn't make sense to call extendhisi 
> with any other size.

BTW, I'm surprised we need known_* in arm-specific code.  Is it actually
needed?  Or is this just a conditioned response? ;)  

Richard



[webkit-changes] [WebKit/WebKit] eb9b61: Fix the build (again) after 279936@main

2024-06-12 Thread Richard Robinson
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: eb9b6104d51de9f07bbddbbd43ea3816efac524d
  
https://github.com/WebKit/WebKit/commit/eb9b6104d51de9f07bbddbbd43ea3816efac524d
  Author: Richard Robinson 
  Date:   2024-06-12 (Wed, 12 Jun 2024)

  Changed paths:
M Source/WebKit/UIProcess/API/Cocoa/WKWebViewInternal.h
M Source/WebKit/UIProcess/API/mac/WKWebViewMac.mm

  Log Message:
  ---
  Fix the build (again) after 279936@main

Unreviewed build fix.

* Source/WebKit/UIProcess/API/Cocoa/WKWebViewInternal.h:
* Source/WebKit/UIProcess/API/mac/WKWebViewMac.mm:

Canonical link: https://commits.webkit.org/279958@main





[OE-core] [PATCH] oeqa/sdk/case: Ensure DL_DIR is populated with artefacts if used

2024-06-12 Thread Richard Purdie
Where we're using DL_DIR in the SDK archive code to cache testing artefacts,
copy downloads into the cache so that it gets populated and this doesn't have to
be done manually. Currently we're making a lot of repeat requests to github as
the cache wasn't being populated.

Signed-off-by: Richard Purdie 
---
 meta/lib/oeqa/sdk/case.py | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/meta/lib/oeqa/sdk/case.py b/meta/lib/oeqa/sdk/case.py
index c45882689cb..46a3789f572 100644
--- a/meta/lib/oeqa/sdk/case.py
+++ b/meta/lib/oeqa/sdk/case.py
@@ -6,6 +6,7 @@
 
 import os
 import subprocess
+import shutil
 
 from oeqa.core.case import OETestCase
 
@@ -21,12 +22,14 @@ class OESDKTestCase(OETestCase):
         archive = os.path.basename(urlparse(url).path)
 
         if dl_dir:
-            tarball = os.path.join(dl_dir, archive)
-            if os.path.exists(tarball):
-                return tarball
+            archive_tarball = os.path.join(dl_dir, archive)
+            if os.path.exists(archive_tarball):
+                return archive_tarball
 
         tarball = os.path.join(workdir, archive)
         subprocess.check_output(["wget", "-O", tarball, url], stderr=subprocess.STDOUT)
+        if dl_dir and not os.path.exists(archive_tarball):
+            shutil.copyfile(tarball, archive_tarball)
         return tarball
 
     def check_elf(self, path, target_os=None, target_arch=None):
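Restated as a standalone helper, the patched logic looks like this (a sketch with the downloader made injectable for clarity; the real code is a method on OESDKTestCase that shells out to wget):

```python
import os
import shutil
import subprocess
from urllib.parse import urlparse

def fetch_tarball(url, workdir, dl_dir=None, download=None):
    # Prefer a previously cached copy in dl_dir; otherwise download into
    # workdir and, when a cache dir is configured, copy the result back
    # so later runs skip the network entirely.
    if download is None:
        download = lambda url, dest: subprocess.check_output(
            ["wget", "-O", dest, url], stderr=subprocess.STDOUT)
    archive = os.path.basename(urlparse(url).path)
    if dl_dir:
        archive_tarball = os.path.join(dl_dir, archive)
        if os.path.exists(archive_tarball):
            return archive_tarball
    tarball = os.path.join(workdir, archive)
    download(url, tarball)
    if dl_dir and not os.path.exists(archive_tarball):
        shutil.copyfile(tarball, archive_tarball)
    return tarball
```

The first call populates the cache as a side effect of the download; every subsequent call with the same dl_dir returns the cached path without touching the network, which is exactly the repeat-request problem the patch fixes.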

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#200580): 
https://lists.openembedded.org/g/openembedded-core/message/200580
Mute This Topic: https://lists.openembedded.org/mt/106639873/21656
Group Owner: openembedded-core+ow...@lists.openembedded.org
Unsubscribe: https://lists.openembedded.org/g/openembedded-core/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: Update on TomEE 10

2024-06-12 Thread Richard Zowalla
Hi Markus,

Thanks for your reply.

I think we need to keep the Java 17 baseline anyway (we need it for ActiveMQ). 
The CXF downgrade is just a matter of reverting two commits, so it should be the 
easiest option.

I asked CXF what they plan for 4.1.0 here: 
https://lists.apache.org/thread/rggjhm3w4gr81y8dmskp4mc7cydq13zq
If one looks into their Jira / EE10 ticket, the current statement is „somewhere 
this year“. So I wouldn’t count on it landing soon.

Gruß
Richard


> Am 12.06.2024 um 19:02 schrieb Markus Jung :
> 
> Hey Richard,
> 
> huge +1 from my side for an M2 when my OIDC PR is merged, that would allow us 
> to become the first real user of this feature ASAP and probably provide some 
> more feedback and patches ;)
> 
> Don't know how likely a CXF 4.1.0 release/milestone is in the near future, 
> I'm fine if we downgrade again for now. However, my OIDC code already uses 
> some language features not available in Java 11. So I think we need to keep 
> Java 17 as a requirement.
> 
> 
> Thanks
> 
> Markus
> 
>> On 12. Jun 2024, at 06:17, Richard Zowalla  wrote:
>> 
>> Hi all,
>> 
>> Here is a new update on TomEE 10.
>> 
>> Markus Jung has implemented the missing part of the EE10 security spec: [1] 
>> and the TCK for it looks good. Thanks a lot for that contribution! If 
>> anybody wants to give it a review, you find it here: [1]
>> 
>> We have updated most MicroProfile specs to be compliant with MP6 and the TCK 
>> for it looks good.
>> 
>> The only MicroProfile implementation missing is OpenTelemetry 1.0 [2] (and 
>> the removal of OpenTracing). There is a branch with a basic integration 
>> (TOMEE-4343) but while working on it, I found something odd, which I did 
>> discuss with Romain via Slack. The result is [3]. I hope to get some 
>> additional input from Mark Struberg on it, so we can hopefully find a way to 
>> fix the odd CDI part here. Overall, the related TCK has around 4-5 which are 
>> (most likely) a result of [3] because the interceptor is not working as 
>> expected.
>> 
>> Since we are more and more in a (better) EE10 shape, we need to go back into 
>> fixing/adding the existing/remaining TCKs inside the TomEE build to see, if 
>> we need to do some work in our upstream dependencies. I am planning to send 
>> an update for that area soon, so we get an overview of what is already added 
>> and what is broken / missing,
>> 
>> We are blocked by a release of CXF 4.1.0-SNAPSHOT. 
>> 
>> We should (imho) discuss, if it is worth to release a M2 with a downgrade to 
>> the latest stable CXF release since we added new features (MicroProfile 
>> updates, potentially OIDC soon) and upgraded a lot of 3rd party CVEs. So 
>> from my POV it would be crucial to get some feedback on a new milestone 
>> release. WDYT?
>> 
>> Gruß
>> Richard
>> 
>> [1] https://github.com/apache/tomee/pull/1178
>> [2] https://issues.apache.org/jira/browse/TOMEE-4343
>> [3] https://issues.apache.org/jira/browse/OWB-1441
> 
> 



RE: my SE3 not allowing me to get app updates

2024-06-12 Thread Richard Turner
I suggest it may be time to call Apple.
This makes less than no sense.



Richard, USA

My web site: https://www.turner42.com

-Original Message-
From: viphone@googlegroups.com  On Behalf Of 
llump...@austin.rr.com
Sent: Wednesday, June 12, 2024 11:06 AM
To: viphone@googlegroups.com
Subject: RE: my SE3 not allowing me to get app updates

I just did that and it didn't work.


-Original Message-
From: viphone@googlegroups.com  On Behalf Of Richard 
Turner
Sent: Wednesday, June 12, 2024 1:03 PM
To: viphone@googlegroups.com
Subject: RE: my SE3 not allowing me to get app updates

Yep, I've been there too.
I go into settings, wi-Fi and toggle it off and back on.
When it reconnects, then I try again.  99% of the time, that resolves it.



Richard, USA

My web site: https://www.turner42.com

-Original Message-
From: viphone@googlegroups.com  On Behalf Of 
llump...@austin.rr.com
Sent: Wednesday, June 12, 2024 11:01 AM
To: viphone@googlegroups.com
Subject: RE: my SE3 not allowing me to get app updates

I was hooked to my wi-fi last I knew. The phone didn't say otherwise.


-Original Message-
From: viphone@googlegroups.com  On Behalf Of Richard 
Turner
Sent: Wednesday, June 12, 2024 1:00 PM
To: viphone@googlegroups.com
Subject: RE: my SE3 not allowing me to get app updates

Are you sure you are connected to the internet?

I don't download over cellular, so I have to be on Wi-Fi to check for, and get 
updates.

I use the triple tap on the app store icon, then flick right to updates and 
double tap.

Then, sometimes, have to do the three finger swipe down to refresh the screen.



Richard, USA

My web site: https://www.turner42.com

-Original Message-
From: viphone@googlegroups.com  On Behalf Of 
llump...@austin.rr.com
Sent: Wednesday, June 12, 2024 10:53 AM
To: viphone@googlegroups.com
Subject: my SE3 not allowing me to get app updates

When I go to my account screen, I am unable to check for updates. When I get to 
the place where I should see my recent updates, all I get is the "sign out" 
button. I am asked to sign out, but when I do and sign back in, I am back where 
I was. Any ideas? My wife's phone worked fine.


--
The following information is important for all members of the V iPhone list.

If you have any questions or concerns about the running of this list, or if you 
feel that a member's post is inappropriate, please contact the owners or 
moderators directly rather than posting on the list itself.

Your V iPhone list moderator is Mark Taylor.  Mark can be reached at:  
mk...@ucla.edu.  Your list owner is Cara Quinn - you can reach Cara at 
caraqu...@caraquinn.com

The archives for this list can be searched at:
http://www.mail-archive.com/viphone@googlegroups.com/
---
You received this message because you are subscribed to the Google Groups 
"VIPhone" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to viphone+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/viphone/001401dabcf1%2454597520%24fd0c5f60%24%40austin.rr.com.


RE: Today's WWDC keynote and iOS 18 announcement

2024-06-12 Thread Richard Turner
Hopefully longer than the 14 Pro and Pro Max.  They used to be good for at 
least 4 to 5 years.  And of course, they still are if you don't want the newest 
features ... oh well.


Richard, USA

My web site: https://www.turner42.com

-Original Message-
From: viphone@googlegroups.com  On Behalf Of Malcolm 
Parfitt
Sent: Wednesday, June 12, 2024 11:11 AM
To: viphone@googlegroups.com
Subject: Re: Today's WWDC keynote and iOS 18 announcement

Quite right Richard, I wonder how long the shelf life of the 15 Pro Max will 
prove to be? 
Malcolm Parfitt

> On 12 Jun 2024, at 7:02 PM, Richard Turner  
> wrote:
> 
> That is what "planned obsolescence" means.  Companies plan on a piece of 
> equipment or software becoming obsolete so you have to replace it, or put 
> up with reduced effectiveness...
> 
> 
> 
> Richard, USA
> "It's no great honor to be blind, but it's more than a nuisance and less than 
> a disaster. Either you're going to fight like hell when your sight fails or 
> you're going to stand on the sidelines for the rest of your life." -- Dr. 
> Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009)
> 
> My web site: https://www.turner42.com
> 
> -Original Message-
> From: viphone@googlegroups.com  On Behalf Of 
> Carolyn Arnold
> Sent: Wednesday, June 12, 2024 10:59 AM
> To: viphone@googlegroups.com
> Subject: RE: Today's WWDC keynote and iOS 18 announcement
> 
> It's a way to sell more phones too.
> 
> -Original Message-
> From: viphone@googlegroups.com [mailto:viphone@googlegroups.com] On Behalf Of 
> Richard Turner
> Sent: Wednesday, June 12, 2024 11:26 AM
> To: viphone@googlegroups.com
> Subject: RE: Today's WWDC keynote and iOS 18 announcement
> 
> Nope.
> 
> Apparently only the A17 chip will be able to take advantage of Apple 
> Intelligence.  I believe yours has the A16 chip.
> 
> As Sieghard mentioned, with the 14 and 15 they give the non-pro models the 
> previous phone’s chip.  So the 14 and 14 plus have the same chip as the 13 
> pro and pro max, the 14 pro and pro max had the newest chip, then the 15 and 
> 15 plus have the same A16 chip as the 14 pro and pro max, and the 15 pro and 
> pro max got the new A17.
> 
> So, logically, Apple will give the 16 and 16 plus the A17 and the 16 pro and 
> pro max the A18…
> 
> 
> 
> This is called by some, like me, planned obsolescence.  
> 
> Most companies practice this, so it isn’t just Apple.
> 
> Richard, USA
> 
> "It's no great honor to be blind, but it's more than a nuisance and less than 
> a disaster. Either you're going to fight like hell when your sight fails or 
> you're going to stand on the sidelines for the rest of your life." -- Dr. 
> Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009)
> 
> 
> 
> My web site: https://www.turner42.com
> 
> 
> 
> From: viphone@googlegroups.com  On Behalf Of 
> mi...@eastlink.ca
> Sent: Wednesday, June 12, 2024 8:14 AM
> To: viphone@googlegroups.com
> Subject: RE: Today's WWDC keynote and iOS 18 announcement
> 
> 
> 
> Hi, I just got an iPhone 14 Pro Max, so will this phone have Apple
> Intelligence? From Mich.
> 
> 
> 
> From: viphone@googlegroups.com On Behalf Of Sieghard Weitzel
> Sent: June 11, 2024 8:27 PM
> To: viphone@googlegroups.com
> Subject: RE: Today's WWDC keynote and iOS 18 announcement
> 
> 
> 
> Actually Apple Intelligence will only be available on phones with the A17 or 
> newer processor which means the iPhone 15 Pro and 15 Pro Max as well as 
> probably all this year's iPhone 16 models.
> 
> Starting with the iPhone 14 series phones Apple differentiated the regular 
> iPhone and iPhone Plus from the Pro and Pro Max by only giving the Pro phones 
> the latest processor and the regular phones had the processor from the year 
> before. Therefore, the iPhone 14 and 14 Plus have the same processor as the 
> iPhone 13 series phones, the A15. Only the iPhone 14 Pro and Pro Max received 
> the A16 Bionic processor in 2022. Then in 2023 when the iPhone 15/15 Plus and 
> 15 Pro and 15 Pro Max were released, the 15 and 15 Plus received the A16 
> Bionic from 2022 and the same as 

RE: my SE3 not allowing me to get app updates

2024-06-12 Thread Richard Turner
Yep, I've been there too.
I go into settings, wi-Fi and toggle it off and back on.
When it reconnects, then I try again.  99% of the time, that resolves it.



Richard, USA
"It's no great honor to be blind, but it's more than a nuisance and less than a 
disaster. Either you're going to fight like hell when your sight fails or 
you're going to stand on the sidelines for the rest of your life." -- Dr. 
Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009)

My web site: https://www.turner42.com

-Original Message-
From: viphone@googlegroups.com  On Behalf Of 
llump...@austin.rr.com
Sent: Wednesday, June 12, 2024 11:01 AM
To: viphone@googlegroups.com
Subject: RE: my SE3 not allowing me to get app updates

I was hooked to my wi-fi last I knew. The phone didn't say otherwise.


-Original Message-
From: viphone@googlegroups.com  On Behalf Of Richard 
Turner
Sent: Wednesday, June 12, 2024 1:00 PM
To: viphone@googlegroups.com
Subject: RE: my SE3 not allowing me to get app updates

Are you sure you are connected to the internet?

I don't download over cellular, so I have to be on Wi-Fi to check for, and get 
updates.

I use the triple tap on the app store icon, then flick right to updates and 
double tap.

Then, sometimes, have to do the three finger swipe down to refresh the screen.



Richard, USA
"It's no great honor to be blind, but it's more than a nuisance and less than a 
disaster. Either you're going to fight like hell when your sight fails or 
you're going to stand on the sidelines for the rest of your life." -- Dr. 
Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009)

My web site: https://www.turner42.com

-Original Message-
From: viphone@googlegroups.com  On Behalf Of 
llump...@austin.rr.com
Sent: Wednesday, June 12, 2024 10:53 AM
To: viphone@googlegroups.com
Subject: my SE3 not allowing me to get app updates

When I go to my account screen, I am unable to check for updates. When I get to 
the place where I should see my recent updates, all I get is the "sign out" 
button. I am asked to sign out but when I do and re-sign in, I am back where I 
was. Any ideas? My wife's phone worked fine.



RE: Today's WWDC keynote and iOS 18 announcement

2024-06-12 Thread Richard Turner
That is what " planned obsolescence" means.  Companies plan on a piece of 
equipment or software to become obsolete so you have to replace it, or put up 
with the reduced effectiveness...
 


Richard, USA
"It's no great honor to be blind, but it's more than a nuisance and less than a 
disaster. Either you're going to fight like hell when your sight fails or 
you're going to stand on the sidelines for the rest of your life." -- Dr. 
Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009)

My web site: https://www.turner42.com

-Original Message-
From: viphone@googlegroups.com  On Behalf Of Carolyn 
Arnold
Sent: Wednesday, June 12, 2024 10:59 AM
To: viphone@googlegroups.com
Subject: RE: Today's WWDC keynote and iOS 18 announcement

It's a way to sell more phones too. 

-Original Message-
From: viphone@googlegroups.com [mailto:viphone@googlegroups.com] On Behalf Of 
Richard Turner
Sent: Wednesday, June 12, 2024 11:26 AM
To: viphone@googlegroups.com
Subject: RE: Today's WWDC keynote and iOS 18 announcement

Nope.

Apparently only the A17 chip will be able to take advantage of Apple 
Intelligence.  I believe yours has the A16 chip.

As Sieghard mentioned, with the 14 and 15 they give the non-pro models the 
previous phone’s chip.  So the 14 and 14 plus have the same chip as the 13 pro 
and pro max, the 14 pro and pro max had the newest chip, then the 15 and 15 
plus have the same A16 chip as the 14 pro and pro max, and the 15 pro and pro 
max got the new A17.

So, logically, Apple will give the 16 and 16 plus the A17 and the 16 pro and 
pro max the A18…

 

This is called by some, like me, planned obsolescence.  

Most companies practice this, so it isn’t just Apple.

Richard, USA

"It's no great honor to be blind, but it's more than a nuisance and less than a 
disaster. Either you're going to fight like hell when your sight fails or 
you're going to stand on the sidelines for the rest of your life." -- Dr. 
Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009)

 

My web site: https://www.turner42.com

 

From: viphone@googlegroups.com  On Behalf Of 
mi...@eastlink.ca
Sent: Wednesday, June 12, 2024 8:14 AM
To: viphone@googlegroups.com
Subject: RE: Today's WWDC keynote and iOS 18 announcement

 

Hi, I just got an iPhone 14 Pro Max, so will this phone have Apple Intelligence?
From Mich.

 

From: viphone@googlegroups.com On Behalf Of Sieghard Weitzel
Sent: June 11, 2024 8:27 PM
To: viphone@googlegroups.com
Subject: RE: Today's WWDC keynote and iOS 18 announcement

 

Actually Apple Intelligence will only be available on phones with the A17 or 
newer processor which means the iPhone 15 Pro and 15 Pro Max as well as 
probably all this year's iPhone 16 models.

Starting with the iPhone 14 series phones Apple differentiated the regular 
iPhone and iPhone Plus from the Pro and Pro Max by only giving the Pro phones 
the latest processor and the regular phones had the processor from the year 
before. Therefore, the iPhone 14 and 14 Plus have the same processor as the 
iPhone 13 series phones, the A15. Only the iPhone 14 Pro and Pro Max received 
the A16 Bionic processor in 2022. Then in 2023 when the iPhone 15/15 Plus and 
15 Pro and 15 Pro Max were released, the 15 and 15 Plus received the A16 Bionic 
from 2022 and the same as in iPhone 14 Pro/Pro Max. Only the 15 Pro and Pro Max 
received the new A17 Bionic chip released last year and it is that chip which 
seems to be the minimum requirement for Apple Intelligence. This year the 
regular iPhone 16 and 16 Plus will get the A17 Bionic from last year and I 
assume that these phones will therefore meet the minimum requirements for Apple 
Intelligence. The iPhone 16 Pro and Pro Max if that is what they will be called 
will get the latest A18 chip.

I assume Apple will want as many people to have access to Apple Intelligence 
and I therefore don't think they would consider making it available only on Pro 
phones and nothing was said about it during the presentation either. My 2-year 
contract for my 13 Pro will be up I think in late November and I may just have 
to upgrade in order to get access to Apple Intelligence and all it can do and 
all the new Siri with Chat GPT integration can do. Normally I would have 
probably kept my iPhone 13 Pro for another year, but this all sounds just too 
cool and useful to pass it up.

 

From: viphone@googlegroups.com On Behalf Of Chela Robles
Sent: Monday, June 10, 2024 9:44 PM
To: viphone@googlegroups.com
Subject: Re: Today's WWDC keynote and iOS 18 announcement

 

Well, I got an email from Apple today and it looks like all the AI stuff is 
gonna be on all of

RE: my SE3 not allowing me to get app updates

2024-06-12 Thread Richard Turner
Are you sure you are connected to the internet?

I don't download over cellular, so I have to be on Wi-Fi to check for, and get 
updates.

I use the triple tap on the app store icon, then flick right to updates and 
double tap.

Then, sometimes, have to do the three finger swipe down to refresh the screen.



Richard, USA
"It's no great honor to be blind, but it's more than a nuisance and less than a 
disaster. Either you're going to fight like hell when your sight fails or 
you're going to stand on the sidelines for the rest of your life." -- Dr. 
Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009)

My web site: https://www.turner42.com

-Original Message-
From: viphone@googlegroups.com  On Behalf Of 
llump...@austin.rr.com
Sent: Wednesday, June 12, 2024 10:53 AM
To: viphone@googlegroups.com
Subject: my SE3 not allowing me to get app updates

When I go to my account screen, I am unable to check for updates. When I get to 
the place where I should see my recent updates, all I get is the "sign out" 
button. I am asked to sign out but when I do and re-sign in, I am back where I 
was. Any ideas? My wife's phone worked fine.




[Qemu-commits] [qemu/qemu] f3e8cc: Merge tag 'tracing-pull-request' of https://gitlab...

2024-06-12 Thread Richard Henderson via Qemu-commits
  Branch: refs/heads/staging
  Home:   https://github.com/qemu/qemu
  Commit: f3e8cc47de2bc537d4991e883a85208e4e1c0f98
  
https://github.com/qemu/qemu/commit/f3e8cc47de2bc537d4991e883a85208e4e1c0f98
  Author: Richard Henderson 
  Date:   2024-06-12 (Wed, 12 Jun 2024)

  Changed paths:
M backends/tpm/tpm_util.c
M backends/tpm/trace-events
M hw/sh4/trace-events
M hw/usb/trace-events
M hw/vfio/trace-events
M meson.build
M scripts/tracetool/__init__.py
R scripts/tracetool/vcpu.py

  Log Message:
  ---
  Merge tag 'tracing-pull-request' of https://gitlab.com/stefanha/qemu into 
staging

Pull request

Cleanups from Philippe Mathieu-Daudé.

# -----BEGIN PGP SIGNATURE-----
#
# iQEzBAABCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmZnNCQACgkQnKSrs4Gr
# c8hRQgf/WDNO0IvplK4U9PO5+Zm165xqY6lttfgniJzT2jb4p/dg0LiNOSqHx53Q
# 2eM/YJl7GxSXwnIESqNVuVxixh8DvExmOtM8RJm3HyJWtZoKfgMOV/dzHEhST3xj
# PglIEwL5Cm14skhQAVhJXzFlDjZ8seoz+YCbLhcYWk2B3an+5PvFySbp4iHS9cXJ
# lZUZx/aa9xjviwzMbsMxzFt3rA22pgNaxemV40FBIMWC0H+jP5pgBdZXE2n8jJvB
# 9eXZyG1kdkJKXO2DMhPYuG4rEEWOhV6dckXzmxCQEbHlGTH7X3Pn1F5B3+agi9g3
# 39U1Z+WFb8JFLOQMCQ3jlcbkIfULzQ==
# =wqXR
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 10 Jun 2024 10:13:08 AM PDT
# gpg:                using RSA key 8695A8BFD3F97CDAAC35775A9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi " [full]
# gpg: aka "Stefan Hajnoczi " [full]

* tag 'tracing-pull-request' of https://gitlab.com/stefanha/qemu:
  tracetool: Forbid newline character in event format
  hw/vfio: Remove newline character in trace events
  hw/usb: Remove newline character in trace events
  hw/sh4: Remove newline character in trace events
  backends/tpm: Remove newline character in trace event
  tracetool: Remove unused vcpu.py script

Signed-off-by: Richard Henderson 



To unsubscribe from these emails, change your notification settings at 
https://github.com/qemu/qemu/settings/notifications



Re: Mosquitto library.

2024-06-12 Thread Richard Gaskin via use-livecode
Mike Kerner wrote:

> Richard wrote:
>> Either way, I'd imagine a subscribe client looking to avoid polling
>> is going to depend on a long-lived socket, no?
>
> That's part of the point of a websocket. you don't have to keep
> reopening it, and both ends can use it, as needed.

Exactly, websockets are useful in browser apps because browsers don't offer 
direct socket support.

LiveCode makes OS-native apps and supports sockets.

The socketTimeoutInterval lets us set how long they live.

What am I missing?

--
Richard Gaskin
FourthWorld.com

___
use-livecode mailing list
use-livecode@lists.runrev.com
Please visit this url to subscribe, unsubscribe and manage your subscription 
preferences:
http://lists.runrev.com/mailman/listinfo/use-livecode


Re: Having ten thousands of mount bind causes various processes to go into loops

2024-06-12 Thread Richard
Best question probably is: what exactly do you need 14,000 mounts for? Even
snaps shouldn't be that ridiculous. So what's your use case? Maybe there's a
better solution to what you are doing. If it's just about having a place that
is rw only without execution permissions, just create a separate partition,
mount it somewhere - e.g. /home/test/mounts - and tell mount/fstab to use the
noexec option. No need for your script. Or if it's a more advanced file system
like btrfs, you may be able to simply create a subvolume with the same
capabilities, no need to tinker around with partitions.
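
The noexec alternative suggested above can be sketched as follows (a minimal
sketch; the device name /dev/sdb1 and the mount point are placeholder
assumptions, not taken from the thread):

```shell
# Mount a spare partition read-write but with execution disabled for
# everything on it (placeholder device and mount point).
mkdir -p /home/test/mounts
mount -o rw,noexec /dev/sdb1 /home/test/mounts

# Equivalent persistent entry for /etc/fstab:
# /dev/sdb1  /home/test/mounts  ext4  rw,noexec  0  2

# Verify the option took effect:
findmnt -no OPTIONS /home/test/mounts
```

One rw,noexec mount replaces the thousands of per-directory bind mounts
entirely, which sidesteps the mount-table scanning that the original report
describes.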

It's true this issue should be looked into, but it doesn't look urgent as
long as there are alternatives.

Richard

On Wed, 12 Jun 2024 at 16:33, Julien Petit  wrote:

> Dear,
>
> Not sure i should report a bug so here is a report first. For more
> than 10 years now, we've been using mount binds to create shares rw or
> ro. It's been working perfectly under older Debian. A few months ago,
> we migrated to Ubuntu Jammy and started having processes running 100%
> non stop. While examining the processes in question, we could see the
> same thing: it seemed to be reading all the mounts indefinitely.
> It started with the phpsessionclean.service. We managed to fix it
> editing /lib/systemd/system/phpsessionclean.service and disabling
> sandboxing entries. But then it started to happen with other
> processes.
> Anything related to systemd seems affected in a way. For instance, we
> cannot start haproxy if the mounts are mounted.
> We tested with the last Debian and it is affected too.
>
> We understand that 14 000 mounts is a lot. So maybe our usage will be
> questioned. But this has been working for ages so why not now?
>
> The problem can be very easily reproduced:
>
> 1. Launch the latest Debian stable
> 2. Execute the following script to create mounts:
> #!/bin/bash
> mkdir /home/test/directories
> mkdir /home/test/mounts
>
> for i in {1..14000}
> do
>echo "Mounting dir $i"
>mkdir "/home/test/directories/dir_$i"
>mkdir "/home/test/mounts/dir_$i"
>mount --bind -o rw "/home/test/directories/dir_$i"
> "/home/test/mounts/dir_$i"
> done
>
> After that, the "top" command will show processes getting stuck using
> 100% of CPU never ending.
>
> Has anyone a clue if this is fixable? Should i report a bug?
> Thanks for your help.
>
>


Re: Please help me identify package so I can report an important bug

2024-06-12 Thread Richard
Good catch. Given the title of this thread and the lack of any proper
description of what's actually wrong on GitHub, I figured the change of the
adapter name was meant. Yes, with MAC randomization, that's what you'll get.
But it's nothing Debian defaults to. So the question is: can this be disabled
on Proxmox? With this hint, it should be easy enough to figure out whether it
can be deactivated on the affected systems; if not, the bug reports must be
filed against those projects, as Debian itself doesn't do such things. If it
is an issue with Debian preventing the disablement, the devs need to talk to
each other.

Richard

On Wed, 12 Jun 2024 at 17:10, Jeffrey Walton <
noloa...@gmail.com> wrote:

> The random MAC address discussed in the bug report (with mention of
> Network Manager) could be
> <
> https://blogs.gnome.org/thaller/2016/08/26/mac-address-spoofing-in-networkmanager-1-4-0/
> >.
>
> Jeff
>


Re: [PATCH] rtlanal: Correct cost regularization in pattern_cost

2024-06-12 Thread Richard Sandiford
Richard Biener  writes:
> On Fri, May 10, 2024 at 4:25 AM HAO CHEN GUI  wrote:
>>
>> Hi,
>>The cost return from set_src_cost might be zero. Zero for
>> pattern_cost means unknown cost. So the regularization converts the zero
>> to COSTS_N_INSNS (1).
>>
>>// pattern_cost
>>cost = set_src_cost (SET_SRC (set), GET_MODE (SET_DEST (set)), speed);
>>return cost > 0 ? cost : COSTS_N_INSNS (1);
>>
>>But if set_src_cost returns a value less than COSTS_N_INSNS (1), it's
>> untouched and just returned by pattern_cost. Thus "zero" from set_src_cost
>> is higher than "one" from set_src_cost.
>>
>>   For instance, i386 returns cost "one" for zero_extend op.
>> //ix86_rtx_costs
>> case ZERO_EXTEND:
>>   /* The zero extensions is often completely free on x86_64, so make
>>  it as cheap as possible.  */
>>   if (TARGET_64BIT && mode == DImode
>>   && GET_MODE (XEXP (x, 0)) == SImode)
>> *total = 1;
>>
>>   This patch fixes the problem by converting all costs which are less than
>> COSTS_N_INSNS (1) to COSTS_N_INSNS (1).
>>
>>   Bootstrapped and tested on x86 and powerpc64-linux BE and LE with no
>> regressions. Is it OK for the trunk?
>
> But if targets return sth < COSTS_N_INSNS (1) but > 0 this is now no
> longer meaningful.  So shouldn't it instead be
>
>   return cost > 0 ? cost : 1;
>
> ?  Alternatively returning fractions of COSTS_N_INSNS (1) from set_src_cost
> is invalid and thus the target is at fault (I do think that making zero the
> unknown value is quite bad since that makes it impossible to have zero
> as cost represented).

I agree zero is an unfortunate choice.  No-op moves should really have
zero cost, without having to be special-cased by callers.  And it came
as a surprise to me that we had this rule.

But like Segher says, it seems to have been around for a long time
(since 2004 by the looks of it, r0-59417).  Which just goes to show,
every day is a learning day. :)

IMO it would be nice to change it.  But then it would be even nicer
to get rid of pattern_cost and move everything to insn_cost.  And that's
going to be a lot of work to do together.

Maybe a compromise would be to open-code pattern_cost into insn_cost
and change the return value for insn_cost only?  That would still mean
auditing all current uses of insn_cost and all current target definitions
of the insn_cost hook, but at least it would be isolated from the work
of removing pattern_cost.

Thanks,
Richard


[webkit-changes] [WebKit/WebKit] b7108a: Fix the build after 279936@main

2024-06-12 Thread Richard Robinson
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: b7108a8d0f3cdec2cb2cf69a0144da7711e26502
  
https://github.com/WebKit/WebKit/commit/b7108a8d0f3cdec2cb2cf69a0144da7711e26502
  Author: Richard Robinson 
  Date:   2024-06-12 (Wed, 12 Jun 2024)

  Changed paths:
M Source/WebKit/UIProcess/ios/WKContentViewInteraction.h
M Source/WebKit/UIProcess/ios/WKExtendedTextInputTraits.h

  Log Message:
  ---
  Fix the build after 279936@main

Unreviewed build fix.

* Source/WebKit/UIProcess/ios/WKContentViewInteraction.h:
* Source/WebKit/UIProcess/ios/WKExtendedTextInputTraits.h:

Canonical link: https://commits.webkit.org/279948@main



To unsubscribe from these emails, change your notification settings at 
https://github.com/WebKit/WebKit/settings/notifications
___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


Re: [PATCH v2 8/9] target/arm: Add aarch64_tcg_ops

2024-06-12 Thread Richard Henderson

On 6/12/24 07:36, Alex Bennée wrote:

What happens when the CPU is running mixed mode code and jumping between
64 and 32 bit? Wouldn't it be easier to have a helper that routes to the
correct unwinder, c.f. gen_intermediate_code


GDB can't switch modes, so there is *never* any mode switching.


r~



Re: [yocto] [yocto-autobuilder-helper] config-json: use master branch for meta-agl

2024-06-12 Thread Richard Purdie
On Wed, 2024-06-12 at 11:30 -0400, Scott Murray via
lists.yoctoproject.org wrote:
> On Wed, 12 Jun 2024, Steve Sakoman via lists.yoctoproject.org wrote:
> 
> > scarthgap is no longer supported on next branch
> > 
> > Signed-off-by: Steve Sakoman 
> > ---
> >  config.json | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/config.json b/config.json
> > index fbf2e6c..2089fa9 100644
> > --- a/config.json
> > +++ b/config.json
> > @@ -1853,7 +1853,7 @@
> >  },
> >  "meta-agl": {
> >  "url" :
> > "https://git.automotivelinux.org/AGL/meta-agl",
> > -    "branch" : "next",
> > +    "branch" : "master",
> >  "revision" : "HEAD",
> >  "no-layer-add" : true
> >  },
> 
> Is this patch missing a "[scarthgap]"?  I ask because it makes sense
> for the yocto-autobuilder-helper scarthgap branch, but master AB will
> need to keep using meta-agl's next branch.

It is and I did check that with Steve on irc. It only merged to the
scarthgap branch.

Cheers,

Richard

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#63334): https://lists.yoctoproject.org/g/yocto/message/63334
Mute This Topic: https://lists.yoctoproject.org/mt/106633253/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



RE: Today's WWDC keynote and iOS 18 announcement

2024-06-12 Thread Richard Turner
Nope.

Apparently only the A17 chip will be able to take advantage of Apple 
Intelligence.  I believe yours has the A16 chip.

As Sieghard mentioned, with the 14 and 15 they give the non-pro models the 
previous phone’s chip.  So the 14 and 14 plus have the same chip as the 13 pro 
and pro max, the 14 pro and pro max had the newest chip, then the 15 and 15 
plus have the same A16 chip as the 14 pro and pro max, and the 15 pro and pro 
max got the new A17.

So, logically, Apple will give the 16 and 16 plus the A17 and the 16 pro and 
pro max the A18…

 

This is called by some, like me, planned obsolescence.  

Most companies practice this, so it isn’t just Apple.

Richard, USA

"It's no great honor to be blind, but it's more than a nuisance and less than a 
disaster. Either you're going to fight like hell when your sight fails or 
you're going to stand on the sidelines for the rest of your life." -- Dr. 
Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009)

 

My web site: https://www.turner42.com

 

From: viphone@googlegroups.com  On Behalf Of 
mi...@eastlink.ca
Sent: Wednesday, June 12, 2024 8:14 AM
To: viphone@googlegroups.com
Subject: RE: Today's WWDC keynote and iOS 18 announcement

 

Hi, I just got an iPhone 14 Pro Max, so will this phone have Apple Intelligence?
From Mich.

 

From: viphone@googlegroups.com On Behalf Of Sieghard Weitzel
Sent: June 11, 2024 8:27 PM
To: viphone@googlegroups.com
Subject: RE: Today's WWDC keynote and iOS 18 announcement

 

Actually Apple Intelligence will only be available on phones with the A17 or 
newer processor which means the iPhone 15 Pro and 15 Pro Max as well as 
probably all this year's iPhone 16 models.

Starting with the iPhone 14 series phones Apple differentiated the regular 
iPhone and iPhone Plus from the Pro and Pro Max by only giving the Pro phones 
the latest processor and the regular phones had the processor from the year 
before. Therefore, the iPhone 14 and 14 Plus have the same processor as the 
iPhone 13 series phones, the A15. Only the iPhone 14 Pro and Pro Max received 
the A16 Bionic processor in 2022. Then in 2023 when the iPhone 15/15 Plus and 
15 Pro and 15 Pro Max were released, the 15 and 15 Plus received the A16 Bionic 
from 2022 and the same as in iPhone 14 Pro/Pro Max. Only the 15 Pro and Pro Max 
received the new A17 Bionic chip released last year and it is that chip which 
seems to be the minimum requirement for Apple Intelligence. This year the 
regular iPhone 16 and 16 Plus will get the A17 Bionic from last year and I 
assume that these phones will therefore meet the minimum requirements for Apple 
Intelligence. The iPhone 16 Pro and Pro Max if that is what they will be called 
will get the latest A18 chip.

I assume Apple will want as many people to have access to Apple Intelligence 
and I therefore don't think they would consider making it available only on Pro 
phones and nothing was said about it during the presentation either. My 2-year 
contract for my 13 Pro will be up I think in late November and I may just have 
to upgrade in order to get access to Apple Intelligence and all it can do and 
all the new Siri with Chat GPT integration can do. Normally I would have 
probably kept my iPhone 13 Pro for another year, but this all sounds just too 
cool and useful to pass it up.

 

From: viphone@googlegroups.com On Behalf Of Chela Robles
Sent: Monday, June 10, 2024 9:44 PM
To: viphone@googlegroups.com
Subject: Re: Today's WWDC keynote and iOS 18 announcement

 

Well, I got an email from Apple today and it looks like all the AI stuff is 
gonna be on all of the iPhone 15 models, not the 14 series. So I, for one, 
won't be getting any major update or seeing anything with AI, to my knowledge, 
since I have an iPhone 14.

Sent from my iPhone

 

On Jun 10, 2024, at 8:36 PM, Dennis Long <dennisl1...@gmail.com> wrote:



I agree.

 

From: viphone@googlegroups.com On Behalf Of Sieghard Weitzel
Sent: Monday, June 10, 2024 5:45 PM
To: viphone@googlegroups.com
Subject: RE: Today's WWDC keynote and iOS 18 announcement

 

We all know what Siri is like right now, but with full AI integration I have a 
feeling we may all be surprised by how amazing it may end up being. Of course, 
from the little I heard, this will not be one of those on/off moments where, as 
soon as iOS 18 is released in September, it will include all of what it will 
have a year from now and after the main .1, .2 and .3 updates which typically 
happen in late October, before Christmas, or in the new year and ag

[OE-core] [PATCH] selftest/spdx: Fix for SPDX_VERSION addition

2024-06-12 Thread Richard Purdie
Update the test for the addition of SPDX_VERSION to the deploy path.

Signed-off-by: Richard Purdie 
---
 meta/lib/oeqa/selftest/cases/spdx.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/meta/lib/oeqa/selftest/cases/spdx.py 
b/meta/lib/oeqa/selftest/cases/spdx.py
index 05fc4e390b2..7685a81e7fb 100644
--- a/meta/lib/oeqa/selftest/cases/spdx.py
+++ b/meta/lib/oeqa/selftest/cases/spdx.py
@@ -25,10 +25,11 @@ INHERIT += "create-spdx"
 
 deploy_dir = get_bb_var("DEPLOY_DIR")
 machine_var = get_bb_var("MACHINE")
+spdx_version = get_bb_var("SPDX_VERSION")
 # qemux86-64 creates the directory qemux86_64
 machine_dir = machine_var.replace("-", "_")
 
-full_file_path = os.path.join(deploy_dir, "spdx", machine_dir, 
high_level_dir, spdx_file)
+full_file_path = os.path.join(deploy_dir, "spdx", spdx_version, 
machine_dir, high_level_dir, spdx_file)
 
 try:
 os.remove(full_file_path)
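For readers following along, the effect of the change is just one extra path component under the deploy directory. A quick illustrative sketch (the variable values below are made up for illustration, not taken from a real build):

```python
import os

# Illustrative stand-ins for the values the selftest reads via get_bb_var().
deploy_dir = "tmp/deploy"
spdx_version = "3.0"                          # assumed SPDX_VERSION value
machine_dir = "qemux86-64".replace("-", "_")  # qemux86-64 -> qemux86_64

# Old layout: DEPLOY_DIR/spdx/MACHINE_DIR/...
old_path = os.path.join(deploy_dir, "spdx", machine_dir)
# New layout: DEPLOY_DIR/spdx/SPDX_VERSION/MACHINE_DIR/...
new_path = os.path.join(deploy_dir, "spdx", spdx_version, machine_dir)

print(old_path)  # tmp/deploy/spdx/qemux86_64
print(new_path)  # tmp/deploy/spdx/3.0/qemux86_64
```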

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#200575): 
https://lists.openembedded.org/g/openembedded-core/message/200575
Mute This Topic: https://lists.openembedded.org/mt/106634063/21656
Group Owner: openembedded-core+ow...@lists.openembedded.org
Unsubscribe: https://lists.openembedded.org/g/openembedded-core/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: gcc git locked out for hours second day in a row

2024-06-12 Thread Richard Earnshaw (lists) via Gcc
On 12/06/2024 14:23, Mikael Morin via Gcc wrote:
> On 12/06/2024 at 14:58, Jonathan Wakely wrote:
>> On Wed, 12 Jun 2024 at 13:57, Mikael Morin via Gcc  wrote:
>>>
>>> On 12/06/2024 at 13:48, Jakub Jelinek wrote:
 Hi!

 Yesterday the gcc git repository was locked for 3 hours
 locked by user mikael at 2024-06-11 13:27:44.301067 (pid = 974167)
 78:06 python hooks/update.py 
 refs/users/mikael/tags/fortran-dev_merges/r10-1545 
  
 c2f9fe1d8111b9671bf0aa8362446516fd942f1d
 process until overseers killed it but today we have the same
 situation for 3 hours and counting again:
 locked by user mikael at 2024-06-12 08:35:48.137564 (pid = 2219652)
 78:06 python hooks/update.py refs/users/mikael/tags/toto 
  
 cca005166dba2cefeb51afac3ea629b3972acea3

 It is possible we have some bug in the git hook scripts, but it would
 be helpful trying to understand what exactly you're trying to commit
 and why nobody else (at least to my knowledge) has similarly stuck commits.

 The effect is that nobody can push anything else to gcc git repo
 for hours.

    Jakub

>>> Yes, sorry for the inconvenience.
>>> I tried pushing a series of tags labeling merge points between the
>>> fortran-dev branch and recent years master.
>>
>> Just pushing tags should not cause a problem, assuming all the commits
>> being tagged already exist. What exactly are you pushing?
>>
> Well, the individual commits to be merged do exist, but the merge points 
> don't and they are what I'm trying to push.
> 
> To be clear, the branch hasn't seen any update for years, and I'm trying to 
> reapply what happened on trunk since, in a step-wise manner.  With 300 merges 
> I'm summing up 6 commits of history.
> 
>>
>>> The number of merge points is a bit high (329) but I expected it to be a
>>> manageable number.  I tried again today with just the most recent merge
>>> point, but it got stuck again.  I should try with the oldest one, but
>>> I'm afraid locking the repository again.
>>>
>>> I waited for the push to finish for say one hour before killing it
>>> yesterday, and no more than 15 minutes today.  Unfortunately, killing
>>> the process doesn't seem to unlock things on the server side.
>>>
>>> It may be a misconfiguration on my side, but I have never had this
>>> problem before.
>>>
>>> Sorry again.
>>>
>>>
> 

Perhaps you could create a mirror version of the repo and do some experiments 
locally on that to identify where the bottle-neck is coming from?

R.
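One way to run such an experiment entirely locally, without touching the live repository, is to stand a toy update hook in for gcc's hooks/update.py (everything below is a self-contained sketch; the repo names and hook contents are invented for illustration):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# "Server": a bare repo whose update hook logs every ref it processes,
# standing in for the real hooks/update.py.
git init -q --bare server.git
cat > server.git/hooks/update <<'EOF'
#!/bin/sh
echo "hook: $1" >> hook.log   # the update hook runs once per pushed ref
exit 0
EOF
chmod +x server.git/hooks/update

# "Client": a little history plus a batch of tags, mimicking the merge-point tags.
git init -q work && cd work
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m base
for i in 1 2 3; do git tag "merge-point-$i"; done
git remote add origin ../server.git
time git push -q origin --tags   # the hook fires once per tag; time the whole push
cd ..
cat server.git/hook.log
```

Swapping the toy hook for the real hook script would then show which ref shapes make it slow.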


Re: arm: Add .type and .size to __gnu_cmse_nonsecure_call [PR115360]

2024-06-12 Thread Richard Earnshaw (lists)
On 12/06/2024 09:53, Andre Vieira (lists) wrote:
> 
> 
> On 06/06/2024 12:53, Richard Earnshaw (lists) wrote:
>> On 05/06/2024 17:07, Andre Vieira (lists) wrote:
>>> Hi,
>>>
>>> This patch adds missing assembly directives to the CMSE library wrapper to 
>>> call functions with attribute cmse_nonsecure_call.  Without the .type 
>>> directive the linker will fail to produce the correct veneer if a call to 
> >>> this wrapper function is too far from the wrapper itself.  The .size was 
>>> added for completeness, though we don't necessarily have a usecase for it.
>>>
>>> I did not add a testcase as I couldn't get dejagnu to disassemble the 
>>> linked binary to check we used an appropriate branch instruction, I did 
>>> however test it locally and with this change the GNU linker now generates 
>>> an appropriate veneer and call to that veneer when 
>>> __gnu_cmse_nonsecure_call is too far.
>>>
>>> OK for trunk and backport to any release branches still in support (after 
>>> waiting a week or so)?
>>>
>>> libgcc/ChangeLog:
>>>
>>>  PR target/115360
>>>  * config/arm/cmse_nonsecure_call.S: Add .type and .size directives.
>>
>> OK.
>>
>> R.
> 
> OK to backport? I was thinking of backporting it as far as gcc-11 (we haven't 
> done a 11.5 yet).
> 
> Kind Regards,
> Andre

Yes.

R.


Re: [RBW] New Bike Day - Lugged Susie

2024-06-12 Thread Richard Rose
What Valerie says precisely. Susie / Gus bikes are mountain bikes. “Hillibike” is clever & cute but does not do them justice, in my humble opinion. It’s confusing because you can ride them for other duties, but if you have tires that optimize a Gus/Susie for MTB use, it’s not fantastic on pavement. That said, I do not hesitate to ride my Gus to the trail or use it for gravel or bikepacking trips. But if dirt is not a big part of my ride I take my Clem. BTW, a riding friend just bought a used Sam - it was Rich Lesnik’s bike from Riv! It’s a beauty & a 26” bike. We did a Sub24 mostly gravel bikepacking trip. He on his Sam, I on my Clem. He was loving the Sam. But he does not do MTB.

Sent from my iPhone

On Jun 12, 2024, at 12:30 AM, Valerie Yates wrote:

I have both and consider them very different bikes. With the right tires, you can definitely ride your Appa on dirt and trails. It is my choice for loaded tours and scenic rides. The Susie has a very different feel to me. The tires can go much bigger and somehow its geometry makes me feel much more confident taking it on trails with more variable terrain. Downhills are a blast. But it is not my ride around town choice. To me, the Susie is the ultimate rigid mountain bike. It is big and bouncy and fun. If I could only have one, I consider the Appa more versatile. But it is mighty nice to have both. If you are on the fence, start taking your Appa on trails for now and see if you want a more dirt-oriented bike.

Best,
Val in Boulder CO

On Tuesday, June 11, 2024 at 8:35:30 PM UTC-6 Matthew Williams wrote:

Can someone who’s ridden both bikes tell me: what’s the difference between the Susie and the Appaloosa? I’d like to start riding more dirt and trails—but no crazy fast stuff or jumps—and I’m wondering if I should get a Susie or just stick with my Appaloosa.

Does the Susie have better clearance, geometry, and/or strength to make it a better choice for dirt and trails? Or is the Susie similar enough to the Appaloosa that I won’t notice the difference?



-- 
You received this message because you are subscribed to the Google Groups "RBW Owners Bunch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to rbw-owners-bunch+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/rbw-owners-bunch/d2b0868a-1eb3-4520-a03e-42d9d1601ae4n%40googlegroups.com.






Re: [PATCH 2/3] Enabled LRA for ia64.

2024-06-12 Thread Richard Biener
On Wed, 12 Jun 2024, René Rebe wrote:

> Hi,
> 
> > On Jun 12, 2024, at 13:01, Richard Biener  wrote:
> > 
> > On Wed, 12 Jun 2024, Rene Rebe wrote:
> >> 
> >> gcc/
> >>* config/ia64/ia64.cc: Enable LRA for ia64.
> >>* config/ia64/ia64.md: Likewise.
> >>* config/ia64/predicates.md: Likewise.
> > 
> > That looks simple enough.  I cannot find any copyright assignment on
> > file with the FSF so you probably want to contribute to GCC under
> > the DCO (see https://gcc.gnu.org/dco.html), in that case please post
> > patches with Signed-off-by: tags.
> 
> If it helps for the future, I can apply for copyright assignment, too.

It's not a requirement - you as contributor get the choice under
which legal framework you contribute to GCC, for the DCO there's
the formal requirement of Signed-off-by: tags.

> > For this patch please state how you tested it, I assume you
> > bootstrapped GCC natively on ia64-linux and ran the testsuite.
> > I can find two gcc-testresult postings, one appearantly with LRA
> > and one without?  Both from May:
> > 
> > https://sourceware.org/pipermail/gcc-testresults/2024-May/816422.html
> > https://sourceware.org/pipermail/gcc-testresults/2024-May/816346.html
> 
> Yes, that are the two I quoted in the patch cover letter.
> 
>   https://gcc.gnu.org/pipermail/gcc-patches/2024-June/654321.html
> 
> > somehow for example libstdc++ summaries were not merged, it might
> > be you do not have recent python installed on the system?  Or you
> > didn't use contrib/test_summary to create those mails.  It would be
> > nice to see the difference between LRA and not LRA in the testresults,
> > can you quote that?
> 
> We usually cross-compile gcc, but also ran natively for the testsuite.
> Given the tests run quite long natively on the hardware we currently
> have, I summed the results up in the cover letter. I would assume
> that should be enough to include, with a note that the resulting kernel and
> user-space world was booted and worked without issues?

I specifically wondered if bootstrap with LRA enabled succeeds.
That needs either native or emulated hardware.  I think we consider
ia64-linux a host platform and not only a cross compiler target.

> If so, I’ll just resend with the additional information added.

For the LRA enablement patch the requirement is that patches should
state how they were tested - usually you'll see sth like

Bootstrapped and tested on x86_64-unknown-linux-gnu.

In your case it was

Cross-built from x86_64-linux(?) to ia64-linux, natively tested

not sure how you exactly did this though?  I've never tried
testing of a canadian-cross tree - did you copy the whole build
tree over from the x86 to the ia64 machine?

Thanks,
Richard.

> Thank you so much,
>   René
> 
> > Thanks,
> > Richard.
> > 
> >> ---
> >> gcc/config/ia64/ia64.cc   | 7 ++-
> >> gcc/config/ia64/ia64.md   | 4 ++--
> >> gcc/config/ia64/predicates.md | 2 +-
> >> 3 files changed, 5 insertions(+), 8 deletions(-)
> >> 
> >> diff --git a/gcc/config/ia64/ia64.cc b/gcc/config/ia64/ia64.cc
> >> index ac3d56073ac..d189bfb2cb4 100644
> >> --- a/gcc/config/ia64/ia64.cc
> >> +++ b/gcc/config/ia64/ia64.cc
> >> @@ -618,9 +618,6 @@ static const scoped_attribute_specs *const 
> >> ia64_attribute_table[] =
> >> #undef TARGET_LEGITIMATE_ADDRESS_P
> >> #define TARGET_LEGITIMATE_ADDRESS_P ia64_legitimate_address_p
> >> 
> >> -#undef TARGET_LRA_P
> >> -#define TARGET_LRA_P hook_bool_void_false
> >> -
> >> #undef TARGET_CANNOT_FORCE_CONST_MEM
> >> #define TARGET_CANNOT_FORCE_CONST_MEM ia64_cannot_force_const_mem
> >> 
> >> @@ -1329,7 +1326,7 @@ ia64_expand_move (rtx op0, rtx op1)
> >> {
> >>   machine_mode mode = GET_MODE (op0);
> >> 
> >> -  if (!reload_in_progress && !reload_completed && !ia64_move_ok (op0, 
> >> op1))
> >> +  if (!lra_in_progress && !reload_completed && !ia64_move_ok (op0, op1))
> >> op1 = force_reg (mode, op1);
> >> 
> >>   if ((mode == Pmode || mode == ptr_mode) && symbolic_operand (op1, 
> >> VOIDmode))
> >> @@ -1776,7 +1773,7 @@ ia64_expand_movxf_movrf (machine_mode mode, rtx 
> >> operands[])
> >> }
> >> }
> >> 
> >> -  if (!reload_in_progress && !reload_completed)
> >> +  if (!lra_in_progress && !reload_completed)
> >> {
> >>   operands[1] = spill_xfmode_rfmode_operand (operands[1], 0, m

Re: [PATCH] match: Improve gimple_bitwise_equal_p and gimple_bitwise_inverted_equal_p for truncating casts [PR115449]

2024-06-12 Thread Richard Biener
On Wed, Jun 12, 2024 at 6:39 AM Andrew Pinski  wrote:
>
> As mentioned by Jeff in r15-831-g05daf617ea22e1d818295ed2d037456937e23530, we 
> don't handle
> `(X | Y) & ~Y` -> `X & ~Y` on the gimple level when there are some different 
> signed
> (but same precision) types dealing with matching `~Y` with the `Y` part. This
> improves both gimple_bitwise_equal_p and gimple_bitwise_inverted_equal_p to
> be able to say `(truncate)a` and `(truncate)a` are bitwise_equal and
> that `~(truncate)a` and `(truncate)a` are bitwise_invert_equal.
>
> Bootstrapped and tested on x86_64-linux-gnu with no regressions.

OK.

Richard.

> PR tree-optimization/115449
>
> gcc/ChangeLog:
>
> * gimple-match-head.cc (gimple_maybe_truncate): New declaration.
> (gimple_bitwise_equal_p): Match truncations that differ only
> in types with the same precision.
> (gimple_bitwise_inverted_equal_p): For matching after bit_not_with_nop
> call gimple_bitwise_equal_p.
> * match.pd (maybe_truncate): New match pattern.
>
> gcc/testsuite/ChangeLog:
>
> * gcc.dg/tree-ssa/bitops-10.c: New test.
>
> Signed-off-by: Andrew Pinski 
> ---
>  gcc/gimple-match-head.cc  | 17 +---
>  gcc/match.pd  |  7 +
>  gcc/testsuite/gcc.dg/tree-ssa/bitops-10.c | 34 +++
>  3 files changed, 48 insertions(+), 10 deletions(-)
>  create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/bitops-10.c
>
> diff --git a/gcc/gimple-match-head.cc b/gcc/gimple-match-head.cc
> index e26fa0860ee..924d3f1e710 100644
> --- a/gcc/gimple-match-head.cc
> +++ b/gcc/gimple-match-head.cc
> @@ -243,6 +243,7 @@ optimize_successive_divisions_p (tree divisor, tree 
> inner_div)
>gimple_bitwise_equal_p (expr1, expr2, valueize)
>
>  bool gimple_nop_convert (tree, tree *, tree (*) (tree));
> +bool gimple_maybe_truncate (tree, tree *, tree (*) (tree));
>
>  /* Helper function for bitwise_equal_p macro.  */
>
> @@ -271,6 +272,10 @@ gimple_bitwise_equal_p (tree expr1, tree expr2, tree 
> (*valueize) (tree))
>  }
>if (expr2 != expr4 && operand_equal_p (expr1, expr4, 0))
>  return true;
> +  if (gimple_maybe_truncate (expr3, &expr3, valueize)
> +  && gimple_maybe_truncate (expr4, &expr4, valueize)
> +  && operand_equal_p (expr3, expr4, 0))
> +return true;
>return false;
>  }
>
> @@ -318,21 +323,13 @@ gimple_bitwise_inverted_equal_p (tree expr1, tree 
expr2, bool &wascmp, tree (*va
>/* Try if EXPR1 was defined as ~EXPR2. */
>if (gimple_bit_not_with_nop (expr1, &other, valueize))
>  {
> -  if (operand_equal_p (other, expr2, 0))
> -   return true;
> -  tree expr4;
> -  if (gimple_nop_convert (expr2, &expr4, valueize)
> - && operand_equal_p (other, expr4, 0))
> +  if (gimple_bitwise_equal_p (other, expr2, valueize))
> return true;
>  }
>/* Try if EXPR2 was defined as ~EXPR1. */
>if (gimple_bit_not_with_nop (expr2, &other, valueize))
>  {
> -  if (operand_equal_p (other, expr1, 0))
> -   return true;
> -  tree expr3;
> -  if (gimple_nop_convert (expr1, &expr3, valueize)
> - && operand_equal_p (other, expr3, 0))
> +  if (gimple_bitwise_equal_p (other, expr1, valueize))
> return true;
>  }
>
> diff --git a/gcc/match.pd b/gcc/match.pd
> index 5cfe81e80b3..3204cf41538 100644
> --- a/gcc/match.pd
> +++ b/gcc/match.pd
> @@ -200,6 +200,13 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>  (match (maybe_bit_not @0)
>   (bit_xor_cst@0 @1 @2))
>
> +#if GIMPLE
> +(match (maybe_truncate @0)
> + (convert @0)
> + (if (INTEGRAL_TYPE_P (type)
> +  && TYPE_PRECISION (type) < TYPE_PRECISION (TREE_TYPE (@0)
> +#endif
> +
>  /* Transform likes of (char) ABS_EXPR <(int) x> into (char) ABSU_EXPR 
> ABSU_EXPR returns unsigned absolute value of the operand and the operand
> of the ABSU_EXPR will have the corresponding signed type.  */
> diff --git a/gcc/testsuite/gcc.dg/tree-ssa/bitops-10.c 
> b/gcc/testsuite/gcc.dg/tree-ssa/bitops-10.c
> new file mode 100644
> index 000..000c5aef237
> --- /dev/null
> +++ b/gcc/testsuite/gcc.dg/tree-ssa/bitops-10.c
> @@ -0,0 +1,34 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O1 -fdump-tree-optimized-raw" } */
> +/* PR tree-optimization/115449 */
> +
> +void setBit_un(unsigned char *a, int b) {
> +   unsigned char c = 0x1UL << b;
> +   *a &= ~c;
> +   *a |= c;
> +}
> +
> +void setBit_sign(signed char *a, int b) {
> +   signed char c = 0x1UL << b;
> +   *a &= ~c;
> +   *a |= c;
> +}
> +
>

Re: [PATCH v2] middle-end: Drop __builtin_prefetch calls in autovectorization [PR114061]

2024-06-12 Thread Richard Biener
On Tue, Jun 11, 2024 at 11:46 AM Victor Do Nascimento
 wrote:
>
> At present the autovectorizer fails to vectorize simple loops
> involving calls to `__builtin_prefetch'.  A simple example of such
> loop is given below:
>
> void foo(double * restrict a, double * restrict b, int n){
>   int i;
>   for(i=0; i<n; i++){
> a[i] = a[i] + b[i];
> __builtin_prefetch(&(b[i+8]));
>   }
> }
>
> The failure stems from two issues:
>
> 1. Given that it is typically not possible to fully reason about a
>function call due to the possibility of side effects, the
>autovectorizer does not attempt to vectorize loops which make such
>calls.
>
>Given the memory reference passed to `__builtin_prefetch', in the
>absence of assurances about its effect on the passed memory
>location the compiler deems the function unsafe to vectorize,
>marking it as clobbering memory in `vect_find_stmt_data_reference'.
>This leads to the failure in autovectorization.
>
> 2. Notwithstanding the above issue, though the prefetch statement
>would be classed as `vect_unused_in_scope', the loop invariant that
>is used in the address of the prefetch is the scalar loop's and not
>the vector loop's IV. That is, it still uses `i' and not `vec_iv'
>because the instruction wasn't vectorized, causing DCE to think the
>value is live, such that we now have both the vector and scalar loop
>invariant actively used in the loop.
>
> This patch addresses both of these:
>
> 1. About the issue regarding the memory clobber, data prefetch does
>not generate faults if its address argument is invalid and does not
>write to memory.  Therefore, it does not alter the internal state
>of the program or its control flow under any circumstance.  As
>such, it is reasonable that the function be marked as not affecting
>memory contents.
>
>To achieve this, we add the necessary logic to
>`get_references_in_stmt' to ensure that builtin functions are given
>given the same treatment as internal functions.  If the gimple call
>is to a builtin function and its function code is
>`BUILT_IN_PREFETCH', we mark `clobbers_memory' as false.
>
> 2. Finding precedence in the way clobber statements are handled,
>whereby the vectorizer drops these from both the scalar and
>vectorized versions of a given loop, we choose to drop prefetch
>hints in a similar fashion.  This seems appropriate given how
>software prefetch hints are typically ignored by processors across
>architectures, as they seldom lead to performance gain over their
>hardware counterparts.

OK.

Thanks,
Richard.

>PR tree-optimization/114061
>
> gcc/ChangeLog:
>
> * tree-data-ref.cc (get_references_in_stmt): set
> `clobbers_memory' to false for __builtin_prefetch.
> * tree-vect-loop.cc (vect_transform_loop): Drop all
> __builtin_prefetch calls from loops.
>
> gcc/testsuite/ChangeLog:
>
> * gcc.dg/vect/vect-prefetch-drop.c: New test.
> * gcc.target/aarch64/vect-prefetch-drop.c: Likewise.
> ---
>  gcc/testsuite/gcc.dg/vect/vect-prefetch-drop.c  | 12 
>  .../gcc.target/aarch64/vect-prefetch-drop.c | 13 +
>  gcc/tree-data-ref.cc|  2 ++
>  gcc/tree-vect-loop.cc   |  6 --
>  4 files changed, 31 insertions(+), 2 deletions(-)
>  create mode 100644 gcc/testsuite/gcc.dg/vect/vect-prefetch-drop.c
>  create mode 100644 gcc/testsuite/gcc.target/aarch64/vect-prefetch-drop.c
>
> diff --git a/gcc/testsuite/gcc.dg/vect/vect-prefetch-drop.c 
> b/gcc/testsuite/gcc.dg/vect/vect-prefetch-drop.c
> new file mode 100644
> index 000..7a8915eb716
> --- /dev/null
> +++ b/gcc/testsuite/gcc.dg/vect/vect-prefetch-drop.c
> @@ -0,0 +1,12 @@
> +/* { dg-do compile } */
> +/* { dg-require-effective-target vect_int } */
> +
> +void foo(int * restrict a, int * restrict b, int n){
> +  int i;
> +  for(i=0; i<n; i++){
> +    a[i] = a[i] + b[i];
> +__builtin_prefetch(&(b[i+8]));
> +  }
> +}
> +
> +/* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect"  } } */
> diff --git a/gcc/testsuite/gcc.target/aarch64/vect-prefetch-drop.c 
> b/gcc/testsuite/gcc.target/aarch64/vect-prefetch-drop.c
> new file mode 100644
> index 000..e654b99fde8
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/vect-prefetch-drop.c
> @@ -0,0 +1,13 @@
> +/* { dg-do compile { target { aarch64*-*-* } } } */
> +/* { dg-additional-options "-O3 -march=armv9.2-a+sve --std=c99" { target { 
> aarch64*-*-* } } } */
> +
> +void foo(double * restrict a, double * r

[gcc r12-10555] cfgrtl: Fix MEM_EXPR update in duplicate_insn_chain [PR114924]

2024-06-12 Thread Richard Biener via Gcc-cvs
https://gcc.gnu.org/g:33663c0701a723846527f9bf2ea01d67d7033c0b

commit r12-10555-g33663c0701a723846527f9bf2ea01d67d7033c0b
Author: Alex Coplan 
Date:   Fri May 3 09:23:59 2024 +0100

cfgrtl: Fix MEM_EXPR update in duplicate_insn_chain [PR114924]

The PR shows that when cfgrtl.cc:duplicate_insn_chain attempts to
update the MR_DEPENDENCE_CLIQUE information for a MEM_EXPR we can end up
accidentally dropping (e.g.) an ARRAY_REF from the MEM_EXPR and end up
replacing it with the underlying MEM_REF.  This leads to an
inconsistency in the MEM_EXPR information, and could lead to wrong code.

While the walk down to the MEM_REF is necessary to update
MR_DEPENDENCE_CLIQUE, we should use the outer tree expression for the
MEM_EXPR.  This patch does that.

gcc/ChangeLog:

PR rtl-optimization/114924
* cfgrtl.cc (duplicate_insn_chain): When updating MEM_EXPRs,
don't strip (e.g.) ARRAY_REFs from the final MEM_EXPR.

(cherry picked from commit fe40d525619eee9c2821126390df75068df4773a)

Diff:
---
 gcc/cfgrtl.cc | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/gcc/cfgrtl.cc b/gcc/cfgrtl.cc
index 8decf4007e83..a8c95d82a2a7 100644
--- a/gcc/cfgrtl.cc
+++ b/gcc/cfgrtl.cc
@@ -4374,12 +4374,13 @@ duplicate_insn_chain (rtx_insn *from, rtx_insn *to,
   since MEM_EXPR is shared so make a copy and
   walk to the subtree again.  */
tree new_expr = unshare_expr (MEM_EXPR (*iter));
+   tree orig_new_expr = new_expr;
if (TREE_CODE (new_expr) == WITH_SIZE_EXPR)
  new_expr = TREE_OPERAND (new_expr, 0);
while (handled_component_p (new_expr))
  new_expr = TREE_OPERAND (new_expr, 0);
MR_DEPENDENCE_CLIQUE (new_expr) = newc;
-   set_mem_expr (const_cast <rtx> (*iter), new_expr);
+   set_mem_expr (const_cast <rtx> (*iter), orig_new_expr);
  }
  }
}


[gcc r12-10553] middle-end/40635 - SSA update losing PHI arg locations

2024-06-12 Thread Richard Biener via Gcc-cvs
https://gcc.gnu.org/g:844ff32c04a4e36bf69f3878634d9f50aec3a332

commit r12-10553-g844ff32c04a4e36bf69f3878634d9f50aec3a332
Author: Richard Biener 
Date:   Mon Dec 5 16:03:21 2022 +0100

middle-end/40635 - SSA update losing PHI arg locations

The following fixes an issue where SSA update loses PHI argument
locations when updating PHI nodes it didn't create as part of the
SSA update.  For the case where the reaching def is the same as
the current argument opt to do nothing and for the case where the
PHI argument already has a location keep that (that's an indication
the PHI node wasn't created as part of the update SSA process).

PR middle-end/40635
* tree-into-ssa.cc (rewrite_update_phi_arguments): Only
update the argument when the reaching definition is different
from the current argument.  Keep an existing argument
location.

* gcc.dg/uninit-pr40635.c: New testcase.

(cherry picked from commit 0d14720f93a8139a7f234b2762c361e8e5da99cc)

Diff:
---
 gcc/testsuite/gcc.dg/uninit-pr40635.c | 33 +
 gcc/tree-into-ssa.cc  | 11 +++
 2 files changed, 40 insertions(+), 4 deletions(-)

diff --git a/gcc/testsuite/gcc.dg/uninit-pr40635.c 
b/gcc/testsuite/gcc.dg/uninit-pr40635.c
new file mode 100644
index ..fab7c3d49d9d
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/uninit-pr40635.c
@@ -0,0 +1,33 @@
+/* { dg-do compile } */
+/* { dg-options "-O -Wuninitialized" } */
+
+struct hostent {
+char **h_addr_list;
+};
+struct hostent *gethostbyname(const char*);
+int socket(void);
+int close(int);
+int connect(int, const char*);
+
+int get_tcp_socket(const char *machine)
+{
+  struct hostent *hp;
+  int s42, x;
+  char **addr;
+
+  hp = gethostbyname(machine);
+  x = 0;
+  for (addr = hp->h_addr_list; *addr; addr++)
+{
+  s42 = socket();
+  if (s42 < 0)
+   return -1;
+  x = connect(s42, *addr);
+  if (x == 0)
+   break;
+  close(s42);
+}
+  if (x < 0)
+return -1;
+  return s42;  /* { dg-warning "uninitialized" } */
+}
diff --git a/gcc/tree-into-ssa.cc b/gcc/tree-into-ssa.cc
index dd41b1b77b7a..fa53beec2abf 100644
--- a/gcc/tree-into-ssa.cc
+++ b/gcc/tree-into-ssa.cc
@@ -2107,7 +2107,6 @@ rewrite_update_phi_arguments (basic_block bb)
 symbol we may find NULL arguments.  That's why we
 take the symbol from the LHS of the PHI node.  */
  reaching_def = get_reaching_def (lhs_sym);
-
}
  else
{
@@ -2119,8 +2118,9 @@ rewrite_update_phi_arguments (basic_block bb)
reaching_def = get_reaching_def (arg);
}
 
-  /* Update the argument if there is a reaching def.  */
- if (reaching_def)
+ /* Update the argument if there is a reaching def different
+from arg.  */
+ if (reaching_def && reaching_def != arg)
{
  location_t locus;
  int arg_i = PHI_ARG_INDEX_FROM_USE (arg_p);
@@ -2130,6 +2130,10 @@ rewrite_update_phi_arguments (basic_block bb)
  /* Virtual operands do not need a location.  */
  if (virtual_operand_p (reaching_def))
locus = UNKNOWN_LOCATION;
+ /* If SSA update didn't insert this PHI the argument
+might have a location already, keep that.  */
+ else if (gimple_phi_arg_has_location (phi, arg_i))
+   locus = gimple_phi_arg_location (phi, arg_i);
  else
{
  gimple *stmt = SSA_NAME_DEF_STMT (reaching_def);
@@ -2147,7 +2151,6 @@ rewrite_update_phi_arguments (basic_block bb)
  gimple_phi_arg_set_location (phi, arg_i, locus);
}
 
-
  if (e->flags & EDGE_ABNORMAL)
SSA_NAME_OCCURS_IN_ABNORMAL_PHI (USE_FROM_PTR (arg_p)) = 1;
}


[gcc r12-10554] [PR111497][LRA]: Copy substituted equivalence

2024-06-12 Thread Richard Biener via Gcc-cvs
https://gcc.gnu.org/g:959cef942508b818c7dcb8df0f3c7bf4968d406a

commit r12-10554-g959cef942508b818c7dcb8df0f3c7bf4968d406a
Author: Vladimir N. Makarov 
Date:   Mon Sep 25 16:19:50 2023 -0400

[PR111497][LRA]: Copy substituted equivalence

When we substitute the equivalence and it becomes shared, we can fail
to correctly update reg info used by LRA.  This can result in wrong
code generation, e.g. because of incorrect live analysis.  It can also
result in compiler crash as the pseudo survives RA.  This is what
exactly happened for the PR.  This patch solves this problem by
unsharing substituted equivalences.

gcc/ChangeLog:

PR middle-end/111497
* lra-constraints.cc (lra_constraints): Copy substituted
equivalence.
* lra.cc (lra): Change comment for calling unshare_all_rtl_again.

gcc/testsuite/ChangeLog:

PR middle-end/111497
* g++.target/i386/pr111497.C: new test.

(cherry picked from commit 3c23defed384cf17518ad6c817d94463a445d21b)

Diff:
---
 gcc/lra-constraints.cc   |  5 +
 gcc/lra.cc   |  5 ++---
 gcc/testsuite/g++.target/i386/pr111497.C | 22 ++
 3 files changed, 29 insertions(+), 3 deletions(-)

diff --git a/gcc/lra-constraints.cc b/gcc/lra-constraints.cc
index d92ab76908c8..04b0b6fbfc2a 100644
--- a/gcc/lra-constraints.cc
+++ b/gcc/lra-constraints.cc
@@ -5139,6 +5139,11 @@ lra_constraints (bool first_p)
   loc_equivalence_callback, curr_insn);
  if (old != *curr_id->operand_loc[0])
{
+ /* If we substitute pseudo by shared equivalence, we can fail
+to update LRA reg info and this can result in many
+unexpected consequences.  So keep rtl unshared:  */
+ *curr_id->operand_loc[0]
+   = copy_rtx (*curr_id->operand_loc[0]);
  lra_update_insn_regno_info (curr_insn);
  changed_p = true;
}
diff --git a/gcc/lra.cc b/gcc/lra.cc
index 1444cb759144..5e29d3270d7d 100644
--- a/gcc/lra.cc
+++ b/gcc/lra.cc
@@ -2535,9 +2535,8 @@ lra (FILE *f)
   if (inserted_p)
 commit_edge_insertions ();
 
-  /* Replacing pseudos with their memory equivalents might have
- created shared rtx.  Subsequent passes would get confused
- by this, so unshare everything here.  */
+  /* Subsequent passes expect that rtl is unshared, so unshare everything
+ here.  */
   unshare_all_rtl_again (get_insns ());
 
   if (flag_checking)
diff --git a/gcc/testsuite/g++.target/i386/pr111497.C 
b/gcc/testsuite/g++.target/i386/pr111497.C
new file mode 100644
index ..a645bb95907e
--- /dev/null
+++ b/gcc/testsuite/g++.target/i386/pr111497.C
@@ -0,0 +1,22 @@
+// { dg-do compile { target ia32 } }
+// { dg-options "-march=i686 -mtune=generic -fPIC -O2 -g" }
+
+class A;
+struct B { const char *b1; int b2; };
+struct C : B { C (const char *x, int y) { b1 = x; b2 = y; } };
+struct D : C { D (B x) : C (x.b1, x.b2) {} };
+struct E { E (A *); };
+struct F : E { D f1, f2, f3, f4, f5, f6; F (A *, const B &, const B &, const B 
&); };
+struct G : F { G (A *, const B &, const B &, const B &); };
+struct H { int h; };
+struct I { H i; };
+struct J { I *j; };
+struct A : J {};
+inline F::F (A *x, const B &y, const B &z, const B &w)
+  : E(x), f1(y), f2(z), f3(w), f4(y), f5(z), f6(w) {}
+G::G (A *x, const B &y, const B &z, const B &w) : F(x, y, z, w)
+{
+  H *h = &x->j->i;
+  if (h)
+h->h++;
+}


[gcc r12-10552] rtl-optimization/54052 - RTL SSA PHI insertion compile-time hog

2024-06-12 Thread Richard Biener via Gcc-cvs
https://gcc.gnu.org/g:1edc6a71feeb8460fbd4938b8926b5692fbab43f

commit r12-10552-g1edc6a71feeb8460fbd4938b8926b5692fbab43f
Author: Richard Biener 
Date:   Mon Feb 19 11:10:50 2024 +0100

rtl-optimization/54052 - RTL SSA PHI insertion compile-time hog

The following tries to address the PHI insertion compile-time hog in
RTL fwprop observed with the PR54052 testcase where the loop computing
the "unfiltered" set of variables possibly needing PHI nodes for each
block exhibits quadratic compile-time and memory-use.

It does so by pruning the local DEFs with LR_OUT of the block, removing
regs that can never be LR_IN (defined by this block) in the dominance
frontier.

PR rtl-optimization/54052
* rtl-ssa/blocks.cc (function_info::place_phis): Filter
local defs by LR_OUT.

(cherry picked from commit c7151283dc747769d4ac4f216d8f519bda2569b5)

Diff:
---
 gcc/rtl-ssa/blocks.cc | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/gcc/rtl-ssa/blocks.cc b/gcc/rtl-ssa/blocks.cc
index 959fad8f829d..0c5998f4b65f 100644
--- a/gcc/rtl-ssa/blocks.cc
+++ b/gcc/rtl-ssa/blocks.cc
@@ -639,7 +639,12 @@ function_info::place_phis (build_info &bi)
   if (bitmap_empty_p (&frontiers[b1]))
continue;
 
-  bitmap b1_def = &DF_LR_BB_INFO (BASIC_BLOCK_FOR_FN (m_fn, b1))->def;
+  // Defs in B1 that are possibly in LR_IN in the dominance frontier
+  // blocks.
+  auto_bitmap b1_def;
+  bitmap_and (b1_def, &DF_LR_BB_INFO (BASIC_BLOCK_FOR_FN (m_fn, b1))->def,
+ DF_LR_OUT (BASIC_BLOCK_FOR_FN (m_fn, b1)));
+
   bitmap_iterator bmi;
   unsigned int b2;
   EXECUTE_IF_SET_IN_BITMAP (&frontiers[b1], 0, b2, bmi)


Re: [PATCH] aarch64: Add vector popcount besides QImode [PR113859]

2024-06-12 Thread Richard Sandiford
Pengxuan Zheng  writes:
> This patch improves GCC’s vectorization of __builtin_popcount for aarch64 
> target
> by adding popcount patterns for vector modes besides QImode, i.e., HImode,
> SImode and DImode.
>
> With this patch, we now generate the following for HImode:
>   cnt v1.16b, v.16b
>   uaddlp  v2.8h, v1.16b
>
> For SImode, we generate:
>   cnt v1.16b, v.16b
>   uaddlp  v2.8h, v1.16b
>   uaddlp  v3.4s, v2.8h
>
> For V2DI, we generate:
>   cnt v1.16b, v.16b
>   uaddlp  v2.8h, v1.16b
>   uaddlp  v3.4s, v2.8h
>   uaddlp  v4.2d, v3.4s
>
> gcc/ChangeLog:
>
>   PR target/113859
>   * config/aarch64/aarch64-simd.md (popcount<mode>2): New define_expand.
>
> gcc/testsuite/ChangeLog:
>
>   PR target/113859
>   * gcc.target/aarch64/popcnt-vec.c: New test.
>
> Signed-off-by: Pengxuan Zheng 
> ---
>  gcc/config/aarch64/aarch64-simd.md| 40 
>  gcc/testsuite/gcc.target/aarch64/popcnt-vec.c | 48 +++
>  2 files changed, 88 insertions(+)
>  create mode 100644 gcc/testsuite/gcc.target/aarch64/popcnt-vec.c
>
> diff --git a/gcc/config/aarch64/aarch64-simd.md 
> b/gcc/config/aarch64/aarch64-simd.md
> index f8bb973a278..093c32ee8ff 100644
> --- a/gcc/config/aarch64/aarch64-simd.md
> +++ b/gcc/config/aarch64/aarch64-simd.md
> @@ -3540,6 +3540,46 @@ (define_insn "popcount<mode>2"
>[(set_attr "type" "neon_cnt")]
>  )
>  
> +(define_expand "popcount<mode>2"
> +  [(set (match_operand:VQN 0 "register_operand" "=w")
> +(popcount:VQN (match_operand:VQN 1 "register_operand" "w")))]
> +  "TARGET_SIMD"
> +  {
> +rtx v = gen_reg_rtx (V16QImode);
> +rtx v1 = gen_reg_rtx (V16QImode);
> +emit_move_insn (v, gen_lowpart (V16QImode, operands[1]));
> +emit_insn (gen_popcountv16qi2 (v1, v));
> +if (<MODE>mode == V8HImode)
> +  {
> +/* For V8HI, we generate:
> +cnt v1.16b, v.16b
> +uaddlp  v2.8h, v1.16b */
> +emit_insn (gen_aarch64_uaddlpv16qi (operands[0], v1));
> +DONE;
> +  }
> +rtx v2 = gen_reg_rtx (V8HImode);
> +emit_insn (gen_aarch64_uaddlpv16qi (v2, v1));
> +if (<MODE>mode == V4SImode)
> +  {
> +/* For V4SI, we generate:
> +cnt v1.16b, v.16b
> +uaddlp  v2.8h, v1.16b
> +uaddlp  v3.4s, v2.8h */
> +emit_insn (gen_aarch64_uaddlpv8hi (operands[0], v2));
> +DONE;
> +  }
> +/* For V2DI, we generate:
> +cnt v1.16b, v.16b
> +uaddlp  v2.8h, v1.16b
> +uaddlp  v3.4s, v2.8h
> +uaddlp  v4.2d, v3.4s */
> +rtx v3 = gen_reg_rtx (V4SImode);
> +emit_insn (gen_aarch64_uaddlpv8hi (v3, v2));
> +emit_insn (gen_aarch64_uaddlpv4si (operands[0], v3));
> +DONE;
> +  }
> +)
> +

Could you add support for V4HI and V2SI at the same time?

I think it's possible to handle all 5 modes iteratively, like so:

(define_expand "popcount<mode>2"
  [(set (match_operand:VDQHSD 0 "register_operand")
(popcount:VDQHSD (match_operand:VDQHSD 1 "register_operand")))]
  "TARGET_SIMD"
{
  /* Generate a byte popcount.  */
  machine_mode mode = <bitsize> == 64 ? V8QImode : V16QImode;
  rtx tmp = gen_reg_rtx (mode);
  auto icode = optab_handler (popcount_optab, mode);
  emit_insn (GEN_FCN (icode) (tmp, gen_lowpart (mode, operands[1])));

  /* Use a sequence of UADDLPs to accumulate the counts.  Each step doubles
 the element size and halves the number of elements.  */
  do
{
  auto icode = code_for_aarch64_addlp (ZERO_EXTEND, GET_MODE (tmp));
  mode = insn_data[icode].operand[0].mode;
  rtx dest = mode == <MODE>mode ? operands[0] : gen_reg_rtx (mode);
  emit_insn (GEN_FCN (icode) (dest, tmp));
  tmp = dest;
}
  while (mode != <MODE>mode);
  DONE;
})

(only lightly tested).  This requires changing:

(define_expand "aarch64_<su>addlp<mode>"

to:

(define_expand "@aarch64_<su>addlp<mode>"

Thanks,
Richard

>  ;; 'across lanes' max and min ops.
>  
>  ;; Template for outputting a scalar, so we can create __builtins which can be
> diff --git a/gcc/testsuite/gcc.target/aarch64/popcnt-vec.c 
> b/gcc/testsuite/gcc.target/aarch64/popcnt-vec.c
> new file mode 100644
> index 000..4c9a1b95990
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/popcnt-vec.c
> @@ -0,0 +1,48 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2" } */
> +
> +/* This function should produce cnt v.16b. */
> +void
> +bar (unsigned char *__restrict b, unsigned char *__restrict d)
> +{
> +  for (int i = 0; i < 1024; i++)
> +d[i] = __builtin_popcount (b[i]);
> +}
> +

[Perl/perl5] c79fe2: S_new_SV: args unused, static inline & defined, re...

2024-06-12 Thread Richard Leach via perl5-changes
  Branch: refs/heads/blead
  Home:   https://github.com/Perl/perl5
  Commit: c79fe2b42ae2a540552f87251aa0e36a060dd584
  
https://github.com/Perl/perl5/commit/c79fe2b42ae2a540552f87251aa0e36a060dd584
  Author: Richard Leach 
  Date:   2024-06-12 (Wed, 12 Jun 2024)

  Changed paths:
M embed.fnc
M proto.h
M sv.h
M sv_inline.h

  Log Message:
  ---
  S_new_SV: args unused, static inline & defined, rename with Perl_ prefix

When sv_inline.h was created in
https://github.com/Perl/perl5/commit/75acd14e43f2ffb698fc7032498f31095b56adb5
and a number of things moved into it, the S_new_SV debugging function should
have been made PERL_STATIC_INLINE and given the Perl_ prefix. This commit
now does those things. It also marks the arguments to Perl_new_SV as
PERL_UNUSED_ARG, reducing warnings on some builds.

Additionally, now that we can use inline functions, the new_SV() macro is
now just a call to this inline function, rather than being that only on
DEBUG_LEAKING_SCALARS builds and a multi-line macro the rest of the time.



To unsubscribe from these emails, change your notification settings at 
https://github.com/Perl/perl5/settings/notifications


[Perl/perl5] 864b12: perlguts.pod - add some description of real vs fak...

2024-06-12 Thread Richard Leach via perl5-changes
  Branch: refs/heads/blead
  Home:   https://github.com/Perl/perl5
  Commit: 864b1299a6432ecfb65be2e5c9bf83988b0da0da
  
https://github.com/Perl/perl5/commit/864b1299a6432ecfb65be2e5c9bf83988b0da0da
  Author: Richard Leach 
  Date:   2024-06-12 (Wed, 12 Jun 2024)

  Changed paths:
M pod/perlguts.pod

  Log Message:
  ---
  perlguts.pod - add some description of real vs fake AVs





Re: [PATCH 2/3] Enabled LRA for ia64.

2024-06-12 Thread Richard Biener
On Wed, 12 Jun 2024, Rene Rebe wrote:
>
> gcc/
> * config/ia64/ia64.cc: Enable LRA for ia64.
> * config/ia64/ia64.md: Likewise.
> * config/ia64/predicates.md: Likewise.

That looks simple enough.  I cannot find any copyright assignment on
file with the FSF so you probably want to contribute to GCC under
the DCO (see https://gcc.gnu.org/dco.html), in that case please post
patches with Signed-off-by: tags.

For this patch please state how you tested it, I assume you
bootstrapped GCC natively on ia64-linux and ran the testsuite.
I can find two gcc-testresult postings, one apparently with LRA
and one without?  Both from May:

https://sourceware.org/pipermail/gcc-testresults/2024-May/816422.html
https://sourceware.org/pipermail/gcc-testresults/2024-May/816346.html

Somehow, for example, libstdc++ summaries were not merged; it might
be that you do not have a recent python installed on the system, or you
didn't use contrib/test_summary to create those mails.  It would be
nice to see the difference between LRA and not LRA in the testresults,
can you quote that?

Thanks,
Richard.

> ---
>  gcc/config/ia64/ia64.cc   | 7 ++-
>  gcc/config/ia64/ia64.md   | 4 ++--
>  gcc/config/ia64/predicates.md | 2 +-
>  3 files changed, 5 insertions(+), 8 deletions(-)
> 
> diff --git a/gcc/config/ia64/ia64.cc b/gcc/config/ia64/ia64.cc
> index ac3d56073ac..d189bfb2cb4 100644
> --- a/gcc/config/ia64/ia64.cc
> +++ b/gcc/config/ia64/ia64.cc
> @@ -618,9 +618,6 @@ static const scoped_attribute_specs *const 
> ia64_attribute_table[] =
>  #undef TARGET_LEGITIMATE_ADDRESS_P
>  #define TARGET_LEGITIMATE_ADDRESS_P ia64_legitimate_address_p
>  
> -#undef TARGET_LRA_P
> -#define TARGET_LRA_P hook_bool_void_false
> -
>  #undef TARGET_CANNOT_FORCE_CONST_MEM
>  #define TARGET_CANNOT_FORCE_CONST_MEM ia64_cannot_force_const_mem
>  
> @@ -1329,7 +1326,7 @@ ia64_expand_move (rtx op0, rtx op1)
>  {
>machine_mode mode = GET_MODE (op0);
>  
> -  if (!reload_in_progress && !reload_completed && !ia64_move_ok (op0, op1))
> +  if (!lra_in_progress && !reload_completed && !ia64_move_ok (op0, op1))
>  op1 = force_reg (mode, op1);
>  
>if ((mode == Pmode || mode == ptr_mode) && symbolic_operand (op1, 
> VOIDmode))
> @@ -1776,7 +1773,7 @@ ia64_expand_movxf_movrf (machine_mode mode, rtx 
> operands[])
>   }
>  }
>  
> -  if (!reload_in_progress && !reload_completed)
> +  if (!lra_in_progress && !reload_completed)
>  {
>operands[1] = spill_xfmode_rfmode_operand (operands[1], 0, mode);
>  
> diff --git a/gcc/config/ia64/ia64.md b/gcc/config/ia64/ia64.md
> index 698e302081e..d485acc0ea8 100644
> --- a/gcc/config/ia64/ia64.md
> +++ b/gcc/config/ia64/ia64.md
> @@ -2318,7 +2318,7 @@
> (match_operand:DI 3 "register_operand" "f"))
>(match_operand:DI 4 "nonmemory_operand" "rI")))
> (clobber (match_scratch:DI 5 "=f"))]
> -  "reload_in_progress"
> +  "lra_in_progress"
>"#"
>[(set_attr "itanium_class" "unknown")])
>  
> @@ -3407,7 +3407,7 @@
>  (match_operand:DI 2 "shladd_operand" "n"))
> (match_operand:DI 3 "nonmemory_operand" "r"))
>(match_operand:DI 4 "nonmemory_operand" "rI")))]
> -  "reload_in_progress"
> +  "lra_in_progress"
>"* gcc_unreachable ();"
>"reload_completed"
>[(set (match_dup 0) (plus:DI (mult:DI (match_dup 1) (match_dup 2))
> diff --git a/gcc/config/ia64/predicates.md b/gcc/config/ia64/predicates.md
> index 01a4effd339..85f5380e734 100644
> --- a/gcc/config/ia64/predicates.md
> +++ b/gcc/config/ia64/predicates.md
> @@ -347,7 +347,7 @@
>  allows reload the opportunity to avoid spilling addresses to
>  the stack, and instead simply substitute in the value from a
>  REG_EQUIV.  We'll split this up again when splitting the insn.  */
> - if (reload_in_progress || reload_completed)
> + if (lra_in_progress || reload_completed)
> return true;
>  
>   /* Some symbol types we allow to use with any offset.  */
> 

-- 
Richard Biener 
SUSE Software Solutions Germany GmbH,
Frankenstrasse 146, 90461 Nuernberg, Germany;
GF: Ivo Totev, Andrew McDonald, Werner Knoblich; (HRB 36809, AG Nuernberg)


Re: Please help me identify package so I can report an important bug

2024-06-12 Thread Richard
Question is, does it make that much sense to report it to Debian directly?
Are you encountering this issue on Debian itself or
Armbian/Raspbian/whatever? You reported this to the Raspberry Pi GitHub, so
I'd expect them to take this up with the upstream devs themselves, so by
the time Trixie is being released, it may already be included.

But besides that, what you describe in the first link sounds to me not like
a bug, but like a well-thought-through decision. Network adapter names like
eth0 were dropped in Debian 11 (I think, maybe even 10). So don't
get your hopes up too high to ever see this coming back. Also, just
searching the web for this topic, you should have come across this page
answering your questions: https://wiki.debian.org/NetworkInterfaceNames
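For reference, the wiki page above also documents how to pin a stable name yourself; a minimal sketch (the MAC address below is a placeholder to be replaced by the adapter's real one) is a systemd .link file:

```ini
# /etc/systemd/network/10-persistent-net.link (hypothetical example)
[Match]
# Placeholder MAC -- substitute the adapter's real address.
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=eth0
```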

Richard

Am Mi., 12. Juni 2024 um 12:43 Uhr schrieb Peter Goodall <
pjgood...@gmail.com>:

> Hello,
>
> This  bug, or a close relative, has already been reported in
> https://github.com/raspberrypi/bookworm-feedback/issues/239
> as 'Predictable network names broken for ASIX USB ethernet in kernel
> 6.6.20'
>
> I added a comment reporting my experience in Proxmox here:
>
> https://github.com/raspberrypi/bookworm-feedback/issues/239#issuecomment-2162166863
>
> Because it happens in proxmox and rpi I assume its Debian or higher. I
> have not reported a Debian bug before...
>
> Thanks,
> --Peter G
>


Re: Question about optimizing function pointers for direct function calls

2024-06-12 Thread Richard Biener via Gcc
On Wed, Jun 12, 2024 at 11:57 AM Hanke Zhang  wrote:
>
> Richard Biener  于2024年5月24日周五 14:39写道:
> >
> > On Fri, May 24, 2024 at 5:53 AM Hanke Zhang via Gcc  wrote:
> > >
> > > Hi,
> > > I got a question about optimizing function pointers for direct
> > > function calls in C.
> > >
> > > Consider the following scenario: one of the fields of a structure is a
> > > function pointer, and all its assignments come from the same function.
> > > Can all its uses be replaced by direct calls to this function? So the
> > > later passes can do more optimizations.
> > >
> > > Here is the example:
> > >
> > > int add(int a, int b) { return a + b; }
> > > int sub(int a, int b) { return a - b; }
> > >
> > > struct Foo {
> > > int (*add)(int, int);
> > > };
> > > int main()
> > > {
> > > struct Foo foo[5] = malloc(sizeof(struct Foo) * 5);
> > >
> > > for (int i = 0; i < 5; i++) {
> > > foo[i].add = add;
> > > }
> > >
> > > int sum = 0;
> > > for (int i = 0; i < 5; i++) {
> > > sum += foo[i].add(1, 2);
> > > }
> > >
> > > return 0;
> > > }
> > >
> > > Can I replace the code above to the code below?
> > >
> > > int add(int a, int b) { return a + b; }
> > > int sub(int a, int b) { return a - b; }
> > >
> > > struct Foo {
> > > int (*add)(int, int);
> > > };
> > > int main()
> > > {
> > > struct Foo foo[5] = malloc(sizeof(struct Foo) * 5);
> > >
> > > for (int i = 0; i < 5; i++) {
> > > foo[i].add = add;
> > > }
> > >
> > > int sum = 0;
> > > for (int i = 0; i < 5; i++) {
> > > sum += add(1,2);
> > > }
> > >
> > > return 0;
> > > }
> > >
> > > My idea is to determine whether the assignment of the field is the
> > > same function, and if so, perform the replacement.
> >
> > If it's as simple as above then sure, even CSE should do it.  If you
> > can prove otherwise the memory location with the function pointer
> > always has the same value you are obviously fine.  If you just
> > do not see any other store via 'add's FIELD_DECL then no, that
> > isn't good enough.  Every void * store you do not know where it
> > goes might go to that slot.
> >
> > > Of course this is not a reasonable optimization, I just want to know
> > > if there are security issues in doing so, and if I want to do it in
> > > the IPA stage, is it possible?
> >
> > For the more general case you can do what we do for speculative
> > devirtualization - replace the code with
> >
> >   sum += foo[i].add == add ? add (1,2) : foo[i].add(1,2);
>
> Hi Richard,
>
> I'm trying to do what you suggested. (sum += foo[i].add == add ? add
> (1,2) : foo[i].add(1,2);)
>
> I created a new IPA-Pass before IPA-INLINE and I made the changes on
> the GIMPLE in the "function_transform" stage. But my newly created
> "foo[i].add(1,2)" seems to fail to be inlined. And it was optimized
> out in the subsequent FRE. I would like to ask if there is any way to
> mark my newly created cgraph_edge as "inline" in the
> "function_transform" stage.
>
> Here is part of the code creating the call_stmt in true basic_block in
> my IPA-PASS:
>
> // ...
> // part of the code have been omitted
> auto_vec<tree> params;
> for (int i = 0; i < gimple_call_num_args (call_stmt); i++) {
>   params.safe_push (gimple_call_arg (call_stmt, i));
> }
> gimple* tbb_call = gimple_build_call_vec (fndecl, params); /// = fn()
> tree tbb_ssa;
> if (ret_val_flag) {
>   tbb_ssa = make_ssa_name (TREE_TYPE (gimple_call_lhs(call_stmt)), NULL);
>   gimple_call_set_lhs (tbb_call, tbb_ssa); /// _2 = fn()
> }
> gsi = gsi_start_bb (tbb);
> gsi_insert_before (&gsi, tbb_call, GSI_SAME_STMT);
> cgraph_edge* tbb_callee = node->create_edge (cgraph_node::get_create
> (fndecl), (gcall*)tbb_call, tbb_call->bb->count, true);
> // what should I do to this cgraph_edge to mark it to be inlined
> // ...
>
> Or should I not do these things in the function_transform stage?

That's indeed too late.  You have to create speculative call edges;
I would suggest looking at how IPA devirtualization handles this case
and possibly simply amending it.  I'm CCing Honza, who should know
more about this; I do not know the details.

Richard.

> Thanks
> Hanke Zhang
>
>
> >
> > that way we can inline the direct call and hopefully the branch will be
> > well predicted.
> >
> > In some SPEC there is IIRC the situation where such speculative
> > devirtualization candidates can be found solely based on function
> > signature.  With LTO/IPA you'd basically collect candidate targets
> > for each indirect call (possibly address-taken function definitions
> > with correct signature) and if there's just a single one you can
> > choose that as speculative devirt target.
> >
> > Speculative devirt as we have now of course works with profile
> > data to identify the most probable candidate.
> >
> > Richard.
> >
> > >
> > > Thanks
> > > Hanke Zhang


Re: [PATCH v2] Arm: Fix disassembly error in Thumb-1 relaxed load/store [PR115188]

2024-06-12 Thread Richard Earnshaw



On 12/06/2024 11:35, Richard Earnshaw (lists) wrote:
> On 11/06/2024 17:35, Wilco Dijkstra wrote:
>> Hi Christophe,
>>
>>>  PR target/115153
>> I guess this is a typo (should be 115188)?
>>
>> Correct.
>>
>>>> +/* { dg-options "-O2 -mthumb" } */
>>> -mthumb is included in arm_arch_v6m, so I think you don't need to add it
>>> here?
>>
>> Indeed, it's not strictly necessary. Fixed in v2:
>>
>> A Thumb-1 memory operand allows single-register LDMIA/STMIA. This doesn't get
>> printed as LDR/STR with writeback in unified syntax, resulting in strange
>> assembler errors if writeback is selected.  To work around this, use the 'Uw'
>> constraint that blocks writeback.
> 
> Doing just this will mean that the register allocator will have to undo a 
> pre/post memory operand that was accepted by the predicate (memory_operand).  
> I think we really need a tighter predicate (let's call it noautoinc_mem_op) 
> here to avoid that.  Note that the existing uses of Uw also had another 
> alternative that did permit 'm', so this wasn't previously practical, but 
> they had alternative ways of being reloaded.

No, sorry that won't work; there's another 'm' alternative here as well.
The correct fix is to add alternatives for T1, I think, similar to the one in 
thumb1_movsi_insn.

Also, by observation I think there's a similar problem in the load operations.

R.

> 
> R.
> 
>>
>> Passes bootstrap & regress, OK for commit and backport?
>>
>> gcc:
>> PR target/115188
>> * config/arm/sync.md (arm_atomic_load<mode>): Use 'Uw' constraint.
>> (arm_atomic_store<mode>): Likewise.
>>
>> gcc/testsuite:
>> PR target/115188
>> * gcc.target/arm/pr115188.c: Add new test.
>>
>> ---
>>
>> diff --git a/gcc/config/arm/sync.md b/gcc/config/arm/sync.md
>> index 
>> df8dbe170cacb6b60d56a6f19aadd5a6c9c51f7a..e856ee51d9ae7b945c4d1e9d1f08afeedc95707a
>>  100644
>> --- a/gcc/config/arm/sync.md
>> +++ b/gcc/config/arm/sync.md
>> @@ -65,7 +65,7 @@
>>  (define_insn "arm_atomic_load<mode>"
>>[(set (match_operand:QHSI 0 "register_operand" "=r,l")
>>  (unspec_volatile:QHSI
>> -  [(match_operand:QHSI 1 "memory_operand" "m,m")]
>> +  [(match_operand:QHSI 1 "memory_operand" "m,Uw")]
>>VUNSPEC_LDR))]
>>""
>>"ldr\t%0, %1"
>> @@ -81,7 +81,7 @@
>>  )
>>
>>  (define_insn "arm_atomic_store<mode>"
>> -  [(set (match_operand:QHSI 0 "memory_operand" "=m,m")
>> +  [(set (match_operand:QHSI 0 "memory_operand" "=m,Uw")
>>  (unspec_volatile:QHSI
>>[(match_operand:QHSI 1 "register_operand" "r,l")]
>>VUNSPEC_STR))]
>> diff --git a/gcc/testsuite/gcc.target/arm/pr115188.c 
>> b/gcc/testsuite/gcc.target/arm/pr115188.c
>> new file mode 100644
>> index 
>> ..9a4022b56796d6962bb3f22e40bac4b81eb78ccf
>> --- /dev/null
>> +++ b/gcc/testsuite/gcc.target/arm/pr115188.c
>> @@ -0,0 +1,10 @@
>> +/* { dg-do assemble } */
>> +/* { dg-require-effective-target arm_arch_v6m_ok } */
>> +/* { dg-options "-O2" } */
>> +/* { dg-add-options arm_arch_v6m } */
>> +
>> +void init (int *p, int n)
>> +{
>> +  for (int i = 0; i < n; i++)
>> +__atomic_store_4 (p + i, 0, __ATOMIC_RELAXED);
>> +}
>>
> 


Re: [PATCH] tree-optimization/115385 - handle more gaps with peeling of a single iteration

2024-06-12 Thread Richard Biener
On Wed, 12 Jun 2024, Richard Sandiford wrote:

> Richard Biener  writes:
> > On Wed, 12 Jun 2024, Richard Biener wrote:
> >
> >> On Tue, 11 Jun 2024, Richard Sandiford wrote:
> >> 
> >> > Don't think it makes any difference, but:
> >> > 
> >> > Richard Biener  writes:
> >> > > @@ -2151,7 +2151,16 @@ get_group_load_store_type (vec_info *vinfo, 
> >> > > stmt_vec_info stmt_info,
> >> > > access excess elements.
> >> > > ???  Enhancements include peeling multiple 
> >> > > iterations
> >> > > or using masked loads with a static mask.  */
> >> > > -|| (group_size * cvf) % cnunits + group_size - gap < 
> >> > > cnunits))
> >> > > +|| ((group_size * cvf) % cnunits + group_size - gap < 
> >> > > cnunits
> >> > > +/* But peeling a single scalar iteration is 
> >> > > enough if
> >> > > +   we can use the next power-of-two sized partial
> >> > > +   access.  */
> >> > > +&& ((cremain = (group_size * cvf - gap) % 
> >> > > cnunits), true
> >> > 
> >> > ...this might be less surprising as:
> >> > 
> >> >&& ((cremain = (group_size * cvf - gap) % cnunits, true)
> >> > 
> >> > in terms of how the & line up.
> >> 
> >> Yeah - I'll fix before pushing.
> >
> > The aarch64 CI shows that a few testcases no longer use SVE
> > (gcc.target/aarch64/sve/slp_perm_{4,7,8}.c) because peeling
> > for gaps is deemed insufficient.  Formerly we had
> >
> >   if (loop_vinfo
> >   && *memory_access_type == VMAT_CONTIGUOUS
> >   && SLP_TREE_LOAD_PERMUTATION (slp_node).exists ()
> >   && !multiple_p (group_size * LOOP_VINFO_VECT_FACTOR 
> > (loop_vinfo),
> >   nunits))
> > {
> >   unsigned HOST_WIDE_INT cnunits, cvf;
> >   if (!can_overrun_p
> >   || !nunits.is_constant ()
> >   || !LOOP_VINFO_VECT_FACTOR (loop_vinfo).is_constant 
> > ()
> >   /* Peeling for gaps assumes that a single scalar 
> > iteration
> >  is enough to make sure the last vector iteration 
> > doesn't
> >  access excess elements.
> >  ???  Enhancements include peeling multiple iterations
> >  or using masked loads with a static mask.  */
> >   || (group_size * cvf) % cnunits + group_size - gap < 
> > cnunits)
> > {
> >   if (dump_enabled_p ())
> > dump_printf_loc (MSG_MISSED_OPTIMIZATION, 
> > vect_location,
> >  "peeling for gaps insufficient for "
> >  "access\n");
> >
> > and in all cases multiple_p (group_size * LOOP_VINFO_VECT_FACTOR, nunits)
> > is true so we didn't check for whether peeling one iteration is
> > sufficient.  But after the refactoring the outer checks merely
> > indicates there's overrun (which is there already because gap != 0).
> >
> > That is, we never verified, for the "regular" gap case, whether peeling
> > for a single iteration is sufficient.  But now of course we run into
> > the inner check which will always trigger if earlier checks didn't
> > work out to set overrun_p to false.
> >
> > For slp_perm_8.c we have a group_size of two, nunits is {16, 16}
> > and VF is {8, 8} and gap is one.  Given we know the
> > multiple_p we know that (group_size * cvf) % cnunits is zero,
> > so what remains is group_size - gap < nunits but 1 is probably
> > always less than {16, 16}.
> 
> I thought the idea was that the size of the gap was immaterial
> for VMAT_CONTIGUOUS, on the assumption that it would never be
> bigger than a page.  That is, any gap loaded by the final
> unpeeled iteration would belong to the same page as the non-gap
> data from either the same vector iteration or the subsequent
> peeled scalar iteration.

The subsequent scalar iteration might be on the same page as the
last vector iteration but that accessing elements beyond those
touched by the subsequent scalar iteration (which could be on

Re: [PATCH v2] Arm: Fix disassembly error in Thumb-1 relaxed load/store [PR115188]

2024-06-12 Thread Richard Earnshaw (lists)
On 11/06/2024 17:35, Wilco Dijkstra wrote:
> Hi Christophe,
> 
>>  PR target/115153
> I guess this is a typo (should be 115188)?
> 
> Correct.
> 
>>> +/* { dg-options "-O2 -mthumb" } */
>> -mthumb is included in arm_arch_v6m, so I think you don't need to add it
>> here?
> 
> Indeed, it's not strictly necessary. Fixed in v2:
> 
> A Thumb-1 memory operand allows single-register LDMIA/STMIA. This doesn't get
> printed as LDR/STR with writeback in unified syntax, resulting in strange
> assembler errors if writeback is selected.  To work around this, use the 'Uw'
> constraint that blocks writeback.

Doing just this will mean that the register allocator will have to undo a 
pre/post memory operand that was accepted by the predicate (memory_operand).  I 
think we really need a tighter predicate (let's call it noautoinc_mem_op) here 
to avoid that.  Note that the existing uses of Uw also had another alternative 
that did permit 'm', so this wasn't previously practical, but they had 
alternative ways of being reloaded.

R.

> 
> Passes bootstrap & regress, OK for commit and backport?
> 
> gcc:
> PR target/115188
> * config/arm/sync.md (arm_atomic_load<mode>): Use 'Uw' constraint.
> (arm_atomic_store<mode>): Likewise.
> 
> gcc/testsuite:
> PR target/115188
> * gcc.target/arm/pr115188.c: Add new test.
> 
> ---
> 
> diff --git a/gcc/config/arm/sync.md b/gcc/config/arm/sync.md
> index 
> df8dbe170cacb6b60d56a6f19aadd5a6c9c51f7a..e856ee51d9ae7b945c4d1e9d1f08afeedc95707a
>  100644
> --- a/gcc/config/arm/sync.md
> +++ b/gcc/config/arm/sync.md
> @@ -65,7 +65,7 @@
>  (define_insn "arm_atomic_load<mode>"
>[(set (match_operand:QHSI 0 "register_operand" "=r,l")
>  (unspec_volatile:QHSI
> -  [(match_operand:QHSI 1 "memory_operand" "m,m")]
> +  [(match_operand:QHSI 1 "memory_operand" "m,Uw")]
>VUNSPEC_LDR))]
>""
>"ldr\t%0, %1"
> @@ -81,7 +81,7 @@
>  )
> 
>  (define_insn "arm_atomic_store<mode>"
> -  [(set (match_operand:QHSI 0 "memory_operand" "=m,m")
> +  [(set (match_operand:QHSI 0 "memory_operand" "=m,Uw")
>  (unspec_volatile:QHSI
>[(match_operand:QHSI 1 "register_operand" "r,l")]
>VUNSPEC_STR))]
> diff --git a/gcc/testsuite/gcc.target/arm/pr115188.c 
> b/gcc/testsuite/gcc.target/arm/pr115188.c
> new file mode 100644
> index 
> ..9a4022b56796d6962bb3f22e40bac4b81eb78ccf
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/arm/pr115188.c
> @@ -0,0 +1,10 @@
> +/* { dg-do assemble } */
> +/* { dg-require-effective-target arm_arch_v6m_ok } */
> +/* { dg-options "-O2" } */
> +/* { dg-add-options arm_arch_v6m } */
> +
> +void init (int *p, int n)
> +{
> +  for (int i = 0; i < n; i++)
> +__atomic_store_4 (p + i, 0, __ATOMIC_RELAXED);
> +}
> 



Re: [PATCH] tree-optimization/115385 - handle more gaps with peeling of a single iteration

2024-06-12 Thread Richard Sandiford
Richard Biener  writes:
> On Wed, 12 Jun 2024, Richard Biener wrote:
>
>> On Tue, 11 Jun 2024, Richard Sandiford wrote:
>> 
>> > Don't think it makes any difference, but:
>> > 
>> > Richard Biener  writes:
>> > > @@ -2151,7 +2151,16 @@ get_group_load_store_type (vec_info *vinfo, 
>> > > stmt_vec_info stmt_info,
>> > >   access excess elements.
>> > >   ???  Enhancements include peeling multiple 
>> > > iterations
>> > >   or using masked loads with a static mask.  */
>> > > -  || (group_size * cvf) % cnunits + group_size - gap < 
>> > > cnunits))
>> > > +  || ((group_size * cvf) % cnunits + group_size - gap < 
>> > > cnunits
>> > > +  /* But peeling a single scalar iteration is 
>> > > enough if
>> > > + we can use the next power-of-two sized partial
>> > > + access.  */
>> > > +  && ((cremain = (group_size * cvf - gap) % 
>> > > cnunits), true
>> > 
>> > ...this might be less surprising as:
>> > 
>> >  && ((cremain = (group_size * cvf - gap) % cnunits, true)
>> > 
>> > in terms of how the & line up.
>> 
>> Yeah - I'll fix before pushing.
>
> The aarch64 CI shows that a few testcases no longer use SVE
> (gcc.target/aarch64/sve/slp_perm_{4,7,8}.c) because peeling
> for gaps is deemed insufficient.  Formerly we had
>
>   if (loop_vinfo
>   && *memory_access_type == VMAT_CONTIGUOUS
>   && SLP_TREE_LOAD_PERMUTATION (slp_node).exists ()
>   && !multiple_p (group_size * LOOP_VINFO_VECT_FACTOR 
> (loop_vinfo),
>   nunits))
> {
>   unsigned HOST_WIDE_INT cnunits, cvf;
>   if (!can_overrun_p
>   || !nunits.is_constant ()
>   || !LOOP_VINFO_VECT_FACTOR (loop_vinfo).is_constant 
> ()
>   /* Peeling for gaps assumes that a single scalar 
> iteration
>  is enough to make sure the last vector iteration 
> doesn't
>  access excess elements.
>  ???  Enhancements include peeling multiple iterations
>  or using masked loads with a static mask.  */
>   || (group_size * cvf) % cnunits + group_size - gap < 
> cnunits)
> {
>   if (dump_enabled_p ())
> dump_printf_loc (MSG_MISSED_OPTIMIZATION, 
> vect_location,
>  "peeling for gaps insufficient for "
>  "access\n");
>
> and in all cases multiple_p (group_size * LOOP_VINFO_VECT_FACTOR, nunits)
> is true so we didn't check for whether peeling one iteration is
> sufficient.  But after the refactoring the outer checks merely
> indicates there's overrun (which is there already because gap != 0).
>
> That is, we never verified, for the "regular" gap case, whether peeling
> for a single iteration is sufficient.  But now of course we run into
> the inner check which will always trigger if earlier checks didn't
> work out to set overrun_p to false.
>
> For slp_perm_8.c we have a group_size of two, nunits is {16, 16}
> and VF is {8, 8} and gap is one.  Given we know the
> multiple_p we know that (group_size * cvf) % cnunits is zero,
> so what remains is group_size - gap < nunits but 1 is probably
> always less than {16, 16}.

I thought the idea was that the size of the gap was immaterial
for VMAT_CONTIGUOUS, on the assumption that it would never be
bigger than a page.  That is, any gap loaded by the final
unpeeled iteration would belong to the same page as the non-gap
data from either the same vector iteration or the subsequent
peeled scalar iteration.

Will have to think more about this if that doesn't affect the
rest of the message, but FWIW...

> The new logic I added in the later patch that peeling a single
> iteration is OK when we use a smaller, rounded-up to power-of-two
> sized access is
>
>   || ((group_size * cvf) % cnunits + group_size - gap < 
> cnunits
>   /* But peeling a single scalar iteration is enough 
> if
>  we can use the next power-of-two sized partial
>  access.  */
>   && (cremain = (group_size * cvf - gap) % cnunits, 
> true)
>

[PATCH 3/3][v3] Improve code generation of strided SLP loads

2024-06-12 Thread Richard Biener
This avoids falling back to elementwise accesses for strided SLP
loads when the group size is not a multiple of the vector element
size.  Instead we can use a smaller vector or integer type for the load.

For stores we can do the same, though restrictions on the stores we
handle and the fact that store-merging covers things up make this mostly
effective for cost modeling, as shown by gcc.target/i386/vect-strided-3.c,
which we now vectorize with V4SI vectors rather than just V2SI ones.

For all of this there's still the opportunity to use non-uniform
accesses, say for a 6-element group with a VF of two do
V4SI, { V2SI, V2SI }, V4SI.  But that's for a possible followup.

* gcc.target/i386/vect-strided-1.c: New testcase.
* gcc.target/i386/vect-strided-2.c: Likewise.
* gcc.target/i386/vect-strided-3.c: Likewise.
* gcc.target/i386/vect-strided-4.c: Likewise.
---
 .../gcc.target/i386/vect-strided-1.c  |  24 +
 .../gcc.target/i386/vect-strided-2.c  |  17 +++
 .../gcc.target/i386/vect-strided-3.c  |  20 
 .../gcc.target/i386/vect-strided-4.c  |  20 
 gcc/tree-vect-stmts.cc| 100 --
 5 files changed, 127 insertions(+), 54 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/i386/vect-strided-1.c
 create mode 100644 gcc/testsuite/gcc.target/i386/vect-strided-2.c
 create mode 100644 gcc/testsuite/gcc.target/i386/vect-strided-3.c
 create mode 100644 gcc/testsuite/gcc.target/i386/vect-strided-4.c

diff --git a/gcc/testsuite/gcc.target/i386/vect-strided-1.c 
b/gcc/testsuite/gcc.target/i386/vect-strided-1.c
new file mode 100644
index 000..db4a06711f1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/vect-strided-1.c
@@ -0,0 +1,24 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -msse2 -mno-avx" } */
+
+void foo (int * __restrict a, int *b, int s)
+{
+  for (int i = 0; i < 1024; ++i)
+{
+  a[8*i+0] = b[s*i+0];
+  a[8*i+1] = b[s*i+1];
+  a[8*i+2] = b[s*i+2];
+  a[8*i+3] = b[s*i+3];
+  a[8*i+4] = b[s*i+4];
+  a[8*i+5] = b[s*i+5];
+  a[8*i+6] = b[s*i+4];
+  a[8*i+7] = b[s*i+5];
+}
+}
+
+/* Three two-element loads, two four-element stores.  On ia32 we elide
+   a permute and perform a redundant load.  */
+/* { dg-final { scan-assembler-times "movq" 2 } } */
+/* { dg-final { scan-assembler-times "movhps" 2 { target ia32 } } } */
+/* { dg-final { scan-assembler-times "movhps" 1 { target { ! ia32 } } } } */
+/* { dg-final { scan-assembler-times "movups" 2 } } */
diff --git a/gcc/testsuite/gcc.target/i386/vect-strided-2.c 
b/gcc/testsuite/gcc.target/i386/vect-strided-2.c
new file mode 100644
index 000..6fd64e28cf0
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/vect-strided-2.c
@@ -0,0 +1,17 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -msse2 -mno-avx" } */
+
+void foo (int * __restrict a, int *b, int s)
+{
+  for (int i = 0; i < 1024; ++i)
+{
+  a[4*i+0] = b[s*i+0];
+  a[4*i+1] = b[s*i+1];
+  a[4*i+2] = b[s*i+0];
+  a[4*i+3] = b[s*i+1];
+}
+}
+
+/* One two-element load, one four-element store.  */
+/* { dg-final { scan-assembler-times "movq" 1 } } */
+/* { dg-final { scan-assembler-times "movups" 1 } } */
diff --git a/gcc/testsuite/gcc.target/i386/vect-strided-3.c 
b/gcc/testsuite/gcc.target/i386/vect-strided-3.c
new file mode 100644
index 000..b462701a0b2
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/vect-strided-3.c
@@ -0,0 +1,20 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -msse2 -mno-avx -fno-tree-slp-vectorize" } */
+
+void foo (int * __restrict a, int *b, int s)
+{
+  if (s >= 6)
+for (int i = 0; i < 1024; ++i)
+  {
+   a[s*i+0] = b[4*i+0];
+   a[s*i+1] = b[4*i+1];
+   a[s*i+2] = b[4*i+2];
+   a[s*i+3] = b[4*i+3];
+   a[s*i+4] = b[4*i+0];
+   a[s*i+5] = b[4*i+1];
+  }
+}
+
+/* While the vectorizer generates 6 uint64 stores.  */
+/* { dg-final { scan-assembler-times "movq" 4 } } */
+/* { dg-final { scan-assembler-times "movhps" 2 } } */
diff --git a/gcc/testsuite/gcc.target/i386/vect-strided-4.c 
b/gcc/testsuite/gcc.target/i386/vect-strided-4.c
new file mode 100644
index 000..dd922926a2a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/vect-strided-4.c
@@ -0,0 +1,20 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -msse4.2 -mno-avx -fno-tree-slp-vectorize" } */
+
+void foo (int * __restrict a, int * __restrict b, int *c, int s)
+{
+  if (s >= 2)
+for (int i = 0; i < 1024; ++i)
+  {
+   a[s*i+0] = c[4*i+0];
+   a[s*i+1] = c[4*i+1];
+   b[s*i+0] = c[4*i+2];
+   b[s*i+1] = c[4*i+3];
+  }
+}
+
+/* Vectorization factor two, two two-element stores to a using movq
+   and two two-element stores to b via pextrq/movhps of the high part.  */
+/* { dg-final { scan-assembler-times "movq" 2 } } */
+/* { dg-final { scan-assembler-times "pextrq" 2 { target { ! ia32 } } } } */
+/* { dg-final { scan-assembler-times "movhps" 2 { target { ia32 } } } } */

[PATCH 2/3][v3] tree-optimization/115385 - handle more gaps with peeling of a single iteration

2024-06-12 Thread Richard Biener
The following makes peeling of a single scalar iteration handle more
gaps, including non-power-of-two cases.  This can be done by rounding
up the remaining access to the next power-of-two which ensures that
the next scalar iteration will pick at least the number of excess
elements we access.

I've added a correctness testcase and one x86 specific scanning for
the optimization.

PR tree-optimization/115385
* tree-vect-stmts.cc (get_group_load_store_type): Peeling
of a single scalar iteration is sufficient if we can narrow
the access to the next power of two of the bits in the last
access.
(vectorizable_load): Ensure that the last access is narrowed.

* gcc.dg/vect/pr115385.c: New testcase.
* gcc.target/i386/vect-pr115385.c: Likewise.
---
 gcc/testsuite/gcc.dg/vect/pr115385.c  | 88 +++
 gcc/testsuite/gcc.target/i386/vect-pr115385.c | 53 +++
 gcc/tree-vect-stmts.cc| 44 --
 3 files changed, 180 insertions(+), 5 deletions(-)
 create mode 100644 gcc/testsuite/gcc.dg/vect/pr115385.c
 create mode 100644 gcc/testsuite/gcc.target/i386/vect-pr115385.c

diff --git a/gcc/testsuite/gcc.dg/vect/pr115385.c 
b/gcc/testsuite/gcc.dg/vect/pr115385.c
new file mode 100644
index 000..a18cd665d7d
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/vect/pr115385.c
@@ -0,0 +1,88 @@
+/* { dg-require-effective-target mmap } */
+
+#include 
+#include 
+
+#define COUNT 511
+#define MMAP_SIZE 0x2
+#define ADDRESS 0x112200
+#define TYPE unsigned char
+
+#ifndef MAP_ANONYMOUS
+#define MAP_ANONYMOUS MAP_ANON
+#endif
+
+void __attribute__((noipa)) foo(TYPE * __restrict x,
+TYPE *y, int n)
+{
+  for (int i = 0; i < n; ++i)
+{
+  x[16*i+0] = y[3*i+0];
+  x[16*i+1] = y[3*i+1];
+  x[16*i+2] = y[3*i+2];
+  x[16*i+3] = y[3*i+0];
+  x[16*i+4] = y[3*i+1];
+  x[16*i+5] = y[3*i+2];
+  x[16*i+6] = y[3*i+0];
+  x[16*i+7] = y[3*i+1];
+  x[16*i+8] = y[3*i+2];
+  x[16*i+9] = y[3*i+0];
+  x[16*i+10] = y[3*i+1];
+  x[16*i+11] = y[3*i+2];
+  x[16*i+12] = y[3*i+0];
+  x[16*i+13] = y[3*i+1];
+  x[16*i+14] = y[3*i+2];
+  x[16*i+15] = y[3*i+0];
+}
+}
+
+void __attribute__((noipa)) bar(TYPE * __restrict x,
+TYPE *y, int n)
+{
+  for (int i = 0; i < n; ++i)
+{
+  x[16*i+0] = y[5*i+0];
+  x[16*i+1] = y[5*i+1];
+  x[16*i+2] = y[5*i+2];
+  x[16*i+3] = y[5*i+3];
+  x[16*i+4] = y[5*i+4];
+  x[16*i+5] = y[5*i+0];
+  x[16*i+6] = y[5*i+1];
+  x[16*i+7] = y[5*i+2];
+  x[16*i+8] = y[5*i+3];
+  x[16*i+9] = y[5*i+4];
+  x[16*i+10] = y[5*i+0];
+  x[16*i+11] = y[5*i+1];
+  x[16*i+12] = y[5*i+2];
+  x[16*i+13] = y[5*i+3];
+  x[16*i+14] = y[5*i+4];
+  x[16*i+15] = y[5*i+0];
+}
+}
+
+TYPE x[COUNT * 16];
+
+int
+main (void)
+{
+  void *y;
+  TYPE *end_y;
+
+  y = mmap ((void *) ADDRESS, MMAP_SIZE, PROT_READ | PROT_WRITE,
+MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+  if (y == MAP_FAILED)
+{
+  perror ("mmap");
+  return 1;
+}
+
+  end_y = (TYPE *) ((char *) y + MMAP_SIZE);
+
+  foo (x, end_y - COUNT * 3, COUNT);
+  bar (x, end_y - COUNT * 5, COUNT);
+
+  return 0;
+}
+
+/* We always require a scalar epilogue here but we don't know which
+   targets support vector composition this way.  */
diff --git a/gcc/testsuite/gcc.target/i386/vect-pr115385.c 
b/gcc/testsuite/gcc.target/i386/vect-pr115385.c
new file mode 100644
index 000..a6be9ce4e54
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/vect-pr115385.c
@@ -0,0 +1,53 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -msse4.1 -mno-avx -fdump-tree-vect-details" } */
+
+void __attribute__((noipa)) foo(unsigned char * __restrict x,
+unsigned char *y, int n)
+{
+  for (int i = 0; i < n; ++i)
+{
+  x[16*i+0] = y[3*i+0];
+  x[16*i+1] = y[3*i+1];
+  x[16*i+2] = y[3*i+2];
+  x[16*i+3] = y[3*i+0];
+  x[16*i+4] = y[3*i+1];
+  x[16*i+5] = y[3*i+2];
+  x[16*i+6] = y[3*i+0];
+  x[16*i+7] = y[3*i+1];
+  x[16*i+8] = y[3*i+2];
+  x[16*i+9] = y[3*i+0];
+  x[16*i+10] = y[3*i+1];
+  x[16*i+11] = y[3*i+2];
+  x[16*i+12] = y[3*i+0];
+  x[16*i+13] = y[3*i+1];
+  x[16*i+14] = y[3*i+2];
+  x[16*i+15] = y[3*i+0];
+}
+}
+
+void __attribute__((noipa)) bar(unsigned char * __restrict x,
+unsigned char *y, int n)
+{
+  for (int i = 0; i < n; ++i)
+{
+  x[16*i+0] = y[5*i+0];
+  x[16*i+1] = y[5*i+1];
+  x[16*i+2] = y[5*i+2];
+  x[16*i+3] = y[5*i+3];
+  x[16*i+4] = y[5*i+4];
+  x[16*i+5] = y[5*i+0];
+  x[16*i+6] = y[5*i+1];
+  x[16*i+7] = y[5*i+2];
+  x[16*i+8] = y[5*i+3];
+  x[16*i+9] = y[5*i+4];
+  x[16*i+10] = y[5*i+0];
+  x[16*i+11] = y[5*i+1];
+  x[16*i+12] = y[5*i+2];
+  x[16*i+13] = y[5*i+3];
+  

[PATCH 1/3][v3] tree-optimization/114107 - avoid peeling for gaps in more cases

2024-06-12 Thread Richard Biener
The following refactors the code to detect necessary peeling for
gaps, in particular the PR103116 case when there is no gap but
the group size is smaller than the vector size.  The testcase in
PR114107 shows we fail to SLP

  for (int i=0; i

Re: [PATCH] tree-optimization/115385 - handle more gaps with peeling of a single iteration

2024-06-12 Thread Richard Biener
On Wed, 12 Jun 2024, Richard Biener wrote:

> On Tue, 11 Jun 2024, Richard Sandiford wrote:
> 
> > Don't think it makes any difference, but:
> > 
> > Richard Biener  writes:
> > > @@ -2151,7 +2151,16 @@ get_group_load_store_type (vec_info *vinfo, 
> > > stmt_vec_info stmt_info,
> > >access excess elements.
> > >???  Enhancements include peeling multiple iterations
> > >or using masked loads with a static mask.  */
> > > -   || (group_size * cvf) % cnunits + group_size - gap < cnunits))
> > > +   || ((group_size * cvf) % cnunits + group_size - gap < cnunits
> > > +   /* But peeling a single scalar iteration is enough if
> > > +  we can use the next power-of-two sized partial
> > > +  access.  */
> > > +   && ((cremain = (group_size * cvf - gap) % cnunits), true
> > 
> > ...this might be less surprising as:
> > 
> >   && ((cremain = (group_size * cvf - gap) % cnunits, true)
> > 
> > in terms of how the & line up.
> 
> Yeah - I'll fix before pushing.

The aarch64 CI shows that a few testcases no longer use SVE
(gcc.target/aarch64/sve/slp_perm_{4,7,8}.c) because peeling
for gaps is deemed insufficient.  Formerly we had

  if (loop_vinfo
  && *memory_access_type == VMAT_CONTIGUOUS
  && SLP_TREE_LOAD_PERMUTATION (slp_node).exists ()
  && !multiple_p (group_size * LOOP_VINFO_VECT_FACTOR 
(loop_vinfo),
  nunits))
{
  unsigned HOST_WIDE_INT cnunits, cvf;
  if (!can_overrun_p
  || !nunits.is_constant ()
  || !LOOP_VINFO_VECT_FACTOR (loop_vinfo).is_constant 
()
  /* Peeling for gaps assumes that a single scalar 
iteration
 is enough to make sure the last vector iteration 
doesn't
 access excess elements.
 ???  Enhancements include peeling multiple iterations
 or using masked loads with a static mask.  */
  || (group_size * cvf) % cnunits + group_size - gap < 
cnunits)
{
  if (dump_enabled_p ())
dump_printf_loc (MSG_MISSED_OPTIMIZATION, 
vect_location,
 "peeling for gaps insufficient for "
 "access\n");

and in all cases multiple_p (group_size * LOOP_VINFO_VECT_FACTOR, nunits)
is true so we didn't check for whether peeling one iteration is
sufficient.  But after the refactoring the outer checks merely
indicate there's overrun (which is there already because gap != 0).

That is, we never verified, for the "regular" gap case, whether peeling
for a single iteration is sufficient.  But now of course we run into
the inner check which will always trigger if earlier checks didn't
work out to set overrun_p to false.

For slp_perm_8.c we have a group_size of two, nunits is {16, 16}
and VF is {8, 8} and gap is one.  Given we know the
multiple_p we know that (group_size * cvf) % cnunits is zero,
so what remains is group_size - gap < nunits but 1 is probably
always less than {16, 16}.

The new logic I added in the later patch that peeling a single
iteration is OK when we use a smaller, rounded-up to power-of-two
sized access is

  || ((group_size * cvf) % cnunits + group_size - gap < 
cnunits
  /* But peeling a single scalar iteration is enough 
if
 we can use the next power-of-two sized partial
 access.  */
  && (cremain = (group_size * cvf - gap) % cnunits, 
true)
  && (cpart_size = (1 << ceil_log2 (cremain))) != 
cnunits
  && vector_vector_composition_type
   (vectype, cnunits / cpart_size,
    &half_vtype) == NULL_TREE)))

again knowing the multiple we know cremain is nunits - gap and with
gap == 1 rounding this size up will yield nunits and thus the existing
peeling is OK.  Something is inconsistent here and the pre-existing

  (group_size * cvf) % cnunits + group_size - gap < cnunits

check looks suspicious for a general check.

  (group_size * cvf - gap)

should be the number of elements we can access without touching
excess elements.  Peeling a single iteration will make sure
group_size * cvf + group_size - gap is accessed
(that's group_size * (cvf + 1) - gap).  The excess elements
touched in the vector loop are

  cnunits - (group_size * cvf - gap) % cnunits

I think that number needs to be less than or equal to group_size, so
the correct check is

  cnunits - (group_size * cvf - gap) % cnunits <= group_size
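As a sanity check of the arithmetic above, here is a small standalone C
sketch (an illustration mirroring the quoted formulas, not the actual
GCC implementation; it assumes constant nunits and VF):

```c
/* Return 1 if peeling a single scalar iteration is enough to cover the
   excess elements touched by the last vector iteration, mirroring the
   derivation above.  */
static int
peel_one_iteration_suffices (int group_size, int vf, int nunits, int gap)
{
  /* Elements we can access without touching excess elements.  */
  int accessed = group_size * vf - gap;
  /* Excess elements touched in the vector loop.  */
  int excess = nunits - accessed % nunits;
  /* A single peeled scalar iteration contributes group_size accesses.  */
  return excess <= group_size;
}
```

For the slp_perm_8.c numbers quoted above (group_size 2, VF 8, nunits 16,
gap 1) this yields excess = 16 - 15 = 1 <= 2, so the existing peeling is
sufficient.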

Re: [OE-core] Patchtest results for [PATCH v2 1/2] ccache: 4.9.1 -> 4.10

2024-06-12 Thread Richard Purdie
On Wed, 2024-06-12 at 08:54 +, Peter Kjellerstedt via
lists.openembedded.org wrote:
> > -Original Message-
> > From: openembedded-core@lists.openembedded.org  > c...@lists.openembedded.org> On Behalf Of Robert Yang via
> > lists.openembedded.org
> > Sent: den 12 juni 2024 07:54
> > To: patcht...@automation.yoctoproject.org
> > Cc: openembedded-core@lists.openembedded.org
> > Subject: Re: [OE-core] Patchtest results for [PATCH v2 1/2] ccache:
> > 4.9.1
> > -> 4.10
> > 
> > On 6/12/24 13:50, patcht...@automation.yoctoproject.org wrote:
> > > Thank you for your submission. Patchtest identified one
> > > or more issues with the patch. Please see the log below for
> > > more information:
> > > 
> > > ---
> > > Testing patch /home/patchtest/share/mboxes/v2-1-2-ccache-4.9.1---
> > 4.10.patch
> > > 
> > > FAIL: test lic files chksum modified not mentioned:
> > > LIC_FILES_CHKSUM
> > changed without "License-Update:" tag and description in commit
> > message
> > (test_metadata.TestMetadata.test_lic_files_chksum_modified_not_ment
> > ioned)
> > 
> > I did add "License-Update:" in V2:
> > 
> >  * License-Update:
> >    - Update LIC_FILES_CHKSUM because a few third party licenses
> > have
> > been
> > removed:
> >    $ git diff --stat v4.9.1..v4.10 LICENSE.adoc
> >  LICENSE.adoc | 222 +-
> >  1 file changed, 15 insertions(+), 207 deletions(-)
> > 
> >    And add more licenses for third party files.
> > 
> > // Robert
> 
> I believe it is expected to be at the start of the line, i.e., as a
> Git trailer 
> (similar to, e.g., Signed-off-by).

I think we can relax the constraint for License-Update a little; the
main point is that it gets mentioned somewhere in the commit.

Cheers,

Richard




Re: [PATCH] aarch64: Use bitreverse rtl code instead of unspec [PR115176]

2024-06-12 Thread Richard Sandiford
Andrew Pinski  writes:
> Bitreverse rtl code was added with r14-1586-g6160572f8d243c. So let's
> use it instead of an unspec. This is just a small cleanup but it does
> have one small fix with respect to rtx costs which didn't handle vector modes
> correctly for the UNSPEC and now it does.
> This is part of the first step in adding __builtin_bitreverse's builtins
> but it is independent of it though.

Nice cleanup.

> Bootstrapped and tested on aarch64-linux-gnu with no regressions.
>
> gcc/ChangeLog:
>
>   PR target/115176
>   * config/aarch64/aarch64-simd.md (aarch64_rbit): Use
>   bitreverse instead of unspec.
>   * config/aarch64/aarch64-sve-builtins-base.cc (svrbit): Convert over to 
> using
>   rtx_code_function instead of unspec_based_function.
>   * config/aarch64/aarch64-sve.md: Update comment where RBIT is included.
>   * config/aarch64/aarch64.cc (aarch64_rtx_costs): Handle BITREVERSE like 
> BSWAP.
>   Remove UNSPEC_RBIT support.
>   * config/aarch64/aarch64.md (unspec): Remove UNSPEC_RBIT.
>   (aarch64_rbit): Use bitreverse instead of unspec.
>   * config/aarch64/iterators.md (SVE_INT_UNARY): Add bitreverse.
>   (optab): Likewise.
>   (sve_int_op): Likewise.
>   (SVE_INT_UNARY): Remove UNSPEC_RBIT.
>   (optab): Likewise.
>   (sve_int_op): Likewise.
>   (min_elem_bits): Likewise.
>
> Signed-off-by: Andrew Pinski 
> ---
>  gcc/config/aarch64/aarch64-simd.md  |  3 +--
>  gcc/config/aarch64/aarch64-sve-builtins-base.cc |  2 +-
>  gcc/config/aarch64/aarch64-sve.md   |  2 +-
>  gcc/config/aarch64/aarch64.cc   | 10 ++
>  gcc/config/aarch64/aarch64.md   |  3 +--
>  gcc/config/aarch64/iterators.md | 10 +-
>  6 files changed, 11 insertions(+), 19 deletions(-)
>
> diff --git a/gcc/config/aarch64/aarch64-simd.md 
> b/gcc/config/aarch64/aarch64-simd.md
> index f644bd1731e..0bb39091a38 100644
> --- a/gcc/config/aarch64/aarch64-simd.md
> +++ b/gcc/config/aarch64/aarch64-simd.md
> @@ -377,8 +377,7 @@ (define_insn "bswap2"
>  
>  (define_insn "aarch64_rbit"
>[(set (match_operand:VB 0 "register_operand" "=w")
> - (unspec:VB [(match_operand:VB 1 "register_operand" "w")]
> -UNSPEC_RBIT))]
> + (bitreverse:VB (match_operand:VB 1 "register_operand" "w")))]
>"TARGET_SIMD"
>"rbit\\t%0., %1."
>[(set_attr "type" "neon_rbit")]
> diff --git a/gcc/config/aarch64/aarch64-sve-builtins-base.cc 
> b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> index 0d2edf3f19e..dea2f6e6bfc 100644
> --- a/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> +++ b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
> @@ -3186,7 +3186,7 @@ FUNCTION (svqincp, svqdecp_svqincp_impl, (SS_PLUS, 
> US_PLUS))
>  FUNCTION (svqincw, svqinc_bhwd_impl, (SImode))
>  FUNCTION (svqincw_pat, svqinc_bhwd_impl, (SImode))
>  FUNCTION (svqsub, rtx_code_function, (SS_MINUS, US_MINUS, -1))
> -FUNCTION (svrbit, unspec_based_function, (UNSPEC_RBIT, UNSPEC_RBIT, -1))
> +FUNCTION (svrbit, rtx_code_function, (BITREVERSE, BITREVERSE, -1))
>  FUNCTION (svrdffr, svrdffr_impl,)
>  FUNCTION (svrecpe, unspec_based_function, (-1, UNSPEC_URECPE, UNSPEC_FRECPE))
>  FUNCTION (svrecps, unspec_based_function, (-1, -1, UNSPEC_FRECPS))
> diff --git a/gcc/config/aarch64/aarch64-sve.md 
> b/gcc/config/aarch64/aarch64-sve.md
> index d69db34016a..5331e7121d5 100644
> --- a/gcc/config/aarch64/aarch64-sve.md
> +++ b/gcc/config/aarch64/aarch64-sve.md
> @@ -3083,6 +3083,7 @@ (define_expand "vec_extract"
>  ;; - CLS (= clrsb)
>  ;; - CLZ
>  ;; - CNT (= popcount)
> +;; - RBIT (= bitreverse)
>  ;; - NEG
>  ;; - NOT
>  ;; -
> @@ -3171,7 +3172,6 @@ (define_insn "*cond__any"
>  ;;  [INT] General unary arithmetic corresponding to unspecs
>  ;; -
>  ;; Includes
> -;; - RBIT
>  ;; - REVB
>  ;; - REVH
>  ;; - REVW
> diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
> index 13191ec8e34..0e9d7b1ec0f 100644
> --- a/gcc/config/aarch64/aarch64.cc
> +++ b/gcc/config/aarch64/aarch64.cc
> @@ -14690,6 +14690,7 @@ cost_plus:
>   return true;
>}
>  
> +case BITREVERSE:
>  case BSWAP:
>*cost = COSTS_N_INSNS (1);
>  
> @@ -15339,16 +15340,9 @@ cost_plus:
>  
>return false;
>  }
> -
> -  if (XINT (x, 1) == UNSPEC_RBIT)
> -{
> - 

Re: [PATCH v2 0/4] Libatomic: Cleanup ifunc selector and aliasing

2024-06-12 Thread Richard Sandiford
Victor Do Nascimento  writes:
> Changes in V2:
>
> As explained in patch v2 1/4, it has become clear that the current
> approach of querying assembler support for newer architectural
> extensions at compile time is undesirable both from a maintainability
> as well as a consistency standpoint - Different compiled versions of
> Libatomic may have different features depending on the machine on
> which they were built.
>
> These issues make for difficult testing as the explosion in number of
> `#ifdef' guards makes maintenance error-prone and the dependence on
> binutils version means that, as well as deploying changes for testing
> in a variety of target configurations, testing must also involve
> compiling the library on an increasing number of host configurations,
> meaning that the chance of bugs going undetected increases (as was
> proved in the pre-commit CI which, due to the use of an older version
> of Binutils, picked up on a runtime-error that had hitherto gone
> unnoticed).
>
> We therefore do away with the use of all assembly instructions
> dependent on Binutils 2.42, choosing to replace them with `.inst's
> instead.  This eliminates the latent bug picked up by CI and will
> ensure consistent builds of Libatomic across all versions of Binutils.

Nice!  Thanks for doing this.  It seems much cleaner and more flexible
than the current approach.

Thanks also for the clear organisation of the series.

OK for trunk.  (For the record, I didn't hand-check the encodings of the
.insts ...)

Richard

> ---
>
> The recent introduction of the optional LSE128 and RCPC3 architectural
> extensions to AArch64 has further led to the increased flexibility of
> atomic support in the architecture, with many extensions providing
> support for distinct atomic operations, each with different potential
> applications in mind.
>
> This has led to maintenance difficulties in Libatomic, in particular
> regarding the way the ifunc selector is generated via a series of
> macro expansions at compile-time.
>
> Until now, irrespective of the atomic operation in question, all atomic
> functions for a particular operand size were expected to have the same
> number of ifunc alternatives, meaning that a one-size-fits-all
> approach could reasonably be taken for the selector.
>
> This meant that if, hypothetically, for a particular architecture and
> operand size one particular atomic operation was to have 3 different
> implementations associated with different extensions, libatomic would
> likewise be required to present three ifunc alternatives for all other
> atomic functions.
>
> The consequence in the design choice was the unnecessary use of
> function aliasing and the unwieldy code which resulted from this.
>
> This patch series attempts to remediate this issue by making the
> preprocessor macros defining the number of ifunc alternatives and
> their respective selection functions dependent on the file importing
> the ifunc selector-generating framework.
>
> all files are given `LAT_' macros, defined at the beginning
> and undef'd at the end of the file.  It is these macros that are
> subsequently used to fine-tune the behaviors of `libatomic_i.h' and
> `host-config.h'.
>
> In particular, the definition of the `IFUNC_NCOND(N)' and
> `IFUNC_COND_' macros in host-config.h can now be guarded behind
> these new file-specific macros, which ultimately control what the
> `GEN_SELECTOR(X)' macro in `libatomic_i.h' expands to.  As both of
> these headers are imported once per file implementing some atomic
> operation, fine-tuned control is now possible.
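A compilable miniature of that scheme (the `LAT_' and `IFUNC_NCOND'
names follow the cover letter; the specific identifiers and counts are
invented for illustration):

```c
/* As if at the top of cas_n.c: identify the including file.  */
#define LAT_CAS_N

/* As if in host-config.h: the number of ifunc alternatives is keyed
   off the file-specific macro instead of being one-size-fits-all.  */
#if defined (LAT_CAS_N)
# define IFUNC_NCOND(N) 2  /* hypothetical: one extension variant + baseline */
#else
# define IFUNC_NCOND(N) 1  /* hypothetical: other operations get one variant */
#endif
```

In the real library the macro would be `#undef`'d again at the end of
the file, so each operation's translation unit gets its own selector
shape.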
>
> Regtested with both `--enable-gnu-indirect-function' and
> `--disable-gnu-indirect-function' configurations on armv9.4-a target
> with LRCPC3 and LSE128 support and without.
>
> Victor Do Nascimento (4):
>   Libatomic: AArch64: Convert all lse128 assembly to .insn directives
>   Libatomic: Define per-file identifier macros
>   Libatomic: Make ifunc selector behavior contingent on importing file
>   Libatomic: Clean up AArch64 `atomic_16.S' implementation file
>
>  libatomic/acinclude.m4   |  18 -
>  libatomic/auto-config.h.in   |   3 -
>  libatomic/cas_n.c|   2 +
>  libatomic/config/linux/aarch64/atomic_16.S   | 511 +--
>  libatomic/config/linux/aarch64/host-config.h |  35 +-
>  libatomic/configure  |  43 --
>  libatomic/configure.ac   |   3 -
>  libatomic/exch_n.c   |   2 +
>  libatomic/fadd_n.c   |   2 +
>  libatomic/fand_n.c   |   2 +
>  libatomic/fence.c|   2 +
>  libatomic/fenv.c  

Re: Update on TomEE 10

2024-06-12 Thread Richard Zowalla
Hi,

The thing is that we cannot release with a SNAPSHOT dependency. While it
might be "ok" for binaries, it has the potential to break every user who
is using a TomEE dependency inside their environment for building,
testing or embedding applications.

If we decide to include CXF 4.1.0-SNAPSHOT, we would need to create a
short-lived fork of the current CXF version (with different group-ids)
in order to have something "stable", or shade it (as was done before CXF
4.x) just for an M2 release. The best case would be for CXF to release a
milestone, but I don't think they will do it, so that leaves us only
these options.

Best
Richard

On 2024/06/12 06:12:21 Alex The Rocker wrote:
> Hello Richard,
> 
> First of all, thank you very much for these updates, and a big thanks
> to Markus for his major contribution !
> 
> To answer your question, I believe that a Milestone 2 release at the
> end of June would indeed be perfect timing on my side to run many
> tests on (relatively) varied uses of TomEE.
> 
> However, I'm not sure that rolling back to the latest stable CXF would
> be a good idea - unless we plan to turn Milestone 2 into a released
> version within a short time after it. If we are not in such a hurry,
> then I would prefer TomEE 10 Milestone 2 to use the ongoing CXF
> release that will eventually be used in the final TomEE 10, so as to
> help the CXF community get feedback through TomEE 10 Milestone 2
> testers.
> 
> That's my 2 cents !
> 
> Thanks,
> Alex
> 
> 
> On Wed, Jun 12, 2024 at 06:17, Richard Zowalla  wrote:
> >
> > Hi all,
> >
> > Here is a new update on TomEE 10.
> >
> > Markus Jung has implemented the missing part of the EE10 security spec: [1] 
> > and the TCK for it looks good. Thanks a lot for that contribution! If 
> > anybody wants to give it a review, you find it here: [1]
> >
> > We have updated most MicroProfile specs to be compliant with MP6 and the 
> > TCK for it looks good.
> >
> > The only MicroProfile implementation missing is OpenTelemetry 1.0 [2] (and 
> > the removal of OpenTracing). There is a branch with a basic integration 
> > (TOMEE-4343) but while working on it, I found something odd, which I did 
> > discuss with Romain via Slack. The result is [3]. I hope to get some 
> > additional input from Mark Struberg on it, so we can hopefully find a way 
> > to fix the odd CDI part here. Overall, the related TCK has around
> > 4-5 failures which are (most likely) a result of [3] because the
> > interceptor is not working as expected.
> >
> > Since we are more and more in (better) EE10 shape, we need to go back
> > to fixing/adding the existing/remaining TCKs inside the TomEE build
> > to see if we need to do some work in our upstream dependencies. I am
> > planning to send an update for that area soon, so we get an overview
> > of what is already added and what is broken / missing.
> >
> > We are blocked by a release of CXF 4.1.0-SNAPSHOT.
> >
> > We should (imho) discuss if it is worth releasing an M2 with a
> > downgrade to the latest stable CXF release, since we added new
> > features (MicroProfile updates, potentially OIDC soon) and upgraded
> > a lot of 3rd-party dependencies with CVEs. So from my POV it would
> > be crucial to get some feedback on a new milestone release. WDYT?
> >
> > Gruß
> > Richard
> >
> > [1] https://github.com/apache/tomee/pull/1178
> > [2] https://issues.apache.org/jira/browse/TOMEE-4343
> > [1] https://issues.apache.org/jira/browse/OWB-1441
> 


Re: [PATCH-1v3] fwprop: Replace rtx_cost with insn_cost in try_fwprop_subst_pattern [PR113325]

2024-06-12 Thread Richard Sandiford
HAO CHEN GUI  writes:
> Hi,
>   This patch replaces rtx_cost with insn_cost in forward propagation.
> In the PR, one constant vector should be propagated and replace a
> pseudo in a store insn if we know it's a duplicated constant vector.
> It reduces the insn cost but not rtx cost. In this case, the cost is
> determined by destination operand (memory or pseudo). Unfortunately,
> rtx cost can't help.
>
>   The test case is added in the second rs6000 specific patch.
>
>   Compared to previous version, the main changes are:
> 1. Invoke change_is_worthwhile to judge if the cost is reduced and
> the replacement is worthwhile.
> 2. Invalidate recog data before getting the insn cost for the new
> rtl as insn cost might call extract_constrain_insn_cached and
> extract_insn_cached to cache the recog data. The cache data is
> invalid for the new rtl and it causes ICE.
> 3. Check if the insn cost of the new rtl is zero, which means unknown
> cost. The replacement should be rejected in this situation.
>
> Previous version
> https://gcc.gnu.org/pipermail/gcc-patches/2024-May/651233.html
>
> The patch causes a regression on i386 as the pattern cost
> regulation has a bug. Please refer to the patch and discussion here.
> https://gcc.gnu.org/pipermail/gcc-patches/2024-May/651363.html
>
>   Bootstrapped and tested on powerpc64-linux BE and LE with no
> regressions. Is it OK for the trunk?
>
> ChangeLog
> fwprop: invoke change_is_worthwhile to judge if a replacement is worthwhile
>
> gcc/
>   * fwprop.cc (try_fwprop_subst_pattern): Invoke change_is_worthwhile
>   to judge if a replacement is worthwhile.
>   * rtl-ssa/changes.cc (rtl_ssa::changes_are_worthwhile): Invalidate
>   recog data before getting the insn cost for the new rtl.  Check if
>   the insn cost of new rtl is unknown and fail the replacement.
>
> patch.diff
> diff --git a/gcc/fwprop.cc b/gcc/fwprop.cc
> index de543923b92..975de0eec7f 100644
> --- a/gcc/fwprop.cc
> +++ b/gcc/fwprop.cc
> @@ -471,29 +471,19 @@ try_fwprop_subst_pattern (obstack_watermark , 
> insn_change _change,
>redo_changes (0);
>  }
>
> -  /* ??? In theory, it should be better to use insn costs rather than
> - set_src_costs here.  That would involve replacing this code with
> - change_is_worthwhile.  */
>bool ok = recog (attempt, use_change);
> -  if (ok && !prop.changed_mem_p () && !use_insn->is_asm ())
> -if (rtx use_set = single_set (use_rtl))
> -  {
> - bool speed = optimize_bb_for_speed_p (BLOCK_FOR_INSN (use_rtl));
> - temporarily_undo_changes (0);
> - auto old_cost = set_src_cost (SET_SRC (use_set),
> -   GET_MODE (SET_DEST (use_set)), speed);
> - redo_changes (0);
> - auto new_cost = set_src_cost (SET_SRC (use_set),
> -   GET_MODE (SET_DEST (use_set)), speed);
> - if (new_cost > old_cost
> - || (new_cost == old_cost && !prop.likely_profitable_p ()))
> -   {
> - if (dump_file)
> -   fprintf (dump_file, "change not profitable"
> -" (cost %d -> cost %d)\n", old_cost, new_cost);
> - ok = false;
> -   }
> -  }
> +  if (ok && !prop.changed_mem_p () && !use_insn->is_asm ()
> +  && single_set (use_rtl))
> +{
> +  if (!change_is_worthwhile (use_change, false)
> +   || (!prop.likely_profitable_p ()
> +   && !change_is_worthwhile (use_change, true)))
> + {
> +   if (dump_file)
> + fprintf (dump_file, "change not profitable");
> +   ok = false;
> + }
> +}

It should only be necessary to call change_is_worthwhile once,
with strict == !prop.likely_profitable_p ()

So something like:

  bool ok = recog (attempt, use_change);
  if (ok && !prop.changed_mem_p () && !use_insn->is_asm ())
{
  bool strict_p = !prop.likely_profitable_p ();
  if (!change_is_worthwhile (use_change, strict_p))
{
  if (dump_file)
fprintf (dump_file, "change not profitable");
  ok = false;
}
}

> diff --git a/gcc/rtl-ssa/changes.cc b/gcc/rtl-ssa/changes.cc
> index 11639e81bb7..9bad6c2070c 100644
> --- a/gcc/rtl-ssa/changes.cc
> +++ b/gcc/rtl-ssa/changes.cc
> @@ -185,7 +185,18 @@ rtl_ssa::changes_are_worthwhile (array_slice *const> changes,
> * change->old_cost ());
>if (!change->is_deletion ())
>   {
> +   /* Invalidate recog data as insn_cost may call
> +  extract_insn_cached.  */
> +   INSN_CODE (change->rtl ()) = -1;

The:

  bool o

Re: How to represent a fallthrough condtion (with no else) in "Match and Simplify"?

2024-06-12 Thread Richard Biener via Gcc
On Wed, Jun 12, 2024 at 8:57 AM Hanke Zhang via Gcc  wrote:
>
> Hi,
>
> I'm trying to study "Match and Simplify" recently, and I had this sample code:
>
> int main() {
>   int n = 1000;
>   int *a = malloc (sizeof(int) * n);
>   int *b = malloc (sizeof(int) * n);
>   int *c = malloc (sizeof(int) * n);
>   for (int i = 0; i < n; i++) {
> if (a[i] & b[i]) {
>   a[i] ^= c[i];
> }
>   }
> }
>
> But this code cannot be vectorized very well. I hope it can become like this:
>
> int main() {
>   int n = 1000;
>   int *a = malloc (sizeof(int) * n);
>   int *b = malloc (sizeof(int) * n);
>   int *c = malloc (sizeof(int) * n);
>   for (int i = 0; i < n; i++) {
> int cond = ((a[i] & b[i]) == 1);
> unsigned int mask = cond ? -1 : 0;
> a[i] ^= (c[i] & mask);
>   }
> }
>
>
> This can finally result in concise and efficient vectorized
> instructions. But I want to know if this can be achieved through
> "Match and Simplify"? Because when I tried to write the pattern, I
> found that the conditional statement here seemed not to be matched
> well, as there is not an else block.
>
> Or is this not possible with "Match and Simplify"? Is it possible to
> implement it in if-conversion?

It's not possible to perform this transform in match-and-simplify,
if-conversion does this but it considers 'a' to be possibly not
writable and thus the conditional store has to be preserved.  It
should use a .MASK_STORE here and I verified it does with
-mavx2.  Note your testcase is optimized away as it's full
of dead code.

int foo (int n, int *a, int *b, int *c) {
  for (int i = 0; i < n; i++) {
if (a[i] & b[i]) {
  a[i] ^= c[i];
}
  }
}

is what I tried.  I suppose other compilers do not consider
read-only memory mappings?  Note there's also store data races
to be considered (but -Ofast might help with that).

In my testcase the c[i] access could also trap, requiring .MASK_LOAD
(I'm quite sure we can't analyze allocated array bounds when the
allocation stmt is seen as in your case).

Richard.

> Thanks
> Hanke Zhang
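For what it's worth, the branchless form Hanke is aiming at can be written out as a plain scalar sketch (`xor_masked` is an illustrative name, not anything from the thread); whether a given target then turns it into masked vector code still depends on the vectorizer and flags:

```c
#include <assert.h>
#include <stdint.h>

/* Branchless form of the loop body in question:
 *     if (a[i] & b[i]) a[i] ^= c[i];
 * A nonzero condition yields an all-ones mask, zero yields an all-zero
 * mask, so the XOR is applied only where the condition holds.  Note
 * that Hanke's sketch tested (a[i] & b[i]) == 1; the predicate that is
 * faithful to the original "if" is != 0, which is used here.  */
static void xor_masked(int32_t *a, const int32_t *b, const int32_t *c, int n)
{
    for (int i = 0; i < n; i++) {
        int32_t cond = (a[i] & b[i]) != 0;
        int32_t mask = cond ? -1 : 0;   /* all ones or all zeros */
        a[i] ^= (c[i] & mask);
    }
}
```

Because this form stores to a[i] unconditionally, it also sidesteps the "a might be read-only" concern Richard raises below, at the cost of a store data race that the original conditional form did not have.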


Re: [PATCH] tree-optimization/115385 - handle more gaps with peeling of a single iteration

2024-06-12 Thread Richard Biener
On Tue, 11 Jun 2024, Richard Sandiford wrote:

> Don't think it makes any difference, but:
> 
> Richard Biener  writes:
> > @@ -2151,7 +2151,16 @@ get_group_load_store_type (vec_info *vinfo, 
> > stmt_vec_info stmt_info,
> >  access excess elements.
> >  ???  Enhancements include peeling multiple iterations
> >  or using masked loads with a static mask.  */
> > - || (group_size * cvf) % cnunits + group_size - gap < cnunits))
> > + || ((group_size * cvf) % cnunits + group_size - gap < cnunits
> > + /* But peeling a single scalar iteration is enough if
> > +we can use the next power-of-two sized partial
> > +access.  */
> > + && ((cremain = (group_size * cvf - gap) % cnunits), true
> 
> ...this might be less surprising as:
> 
> && ((cremain = (group_size * cvf - gap) % cnunits, true)
> 
> in terms of how the & line up.

Yeah - I'll fix before pushing.

Thanks,
Richard.

> Thanks,
> Richard
> 
> > + && ((cpart_size = (1 << ceil_log2 (cremain)))
> > + != cnunits)
> > + && vector_vector_composition_type
> > +  (vectype, cnunits / cpart_size,
> > +   _vtype) == NULL_TREE
> > {
> >   if (dump_enabled_p ())
> > dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> > @@ -11599,6 +11608,27 @@ vectorizable_load (vec_info *vinfo,
> >   gcc_assert (new_vtype
> >   || LOOP_VINFO_PEELING_FOR_GAPS
> >(loop_vinfo));
> > +   /* But still reduce the access size to the next
> > +  required power-of-two so peeling a single
> > +  scalar iteration is sufficient.  */
> > +   unsigned HOST_WIDE_INT cremain;
> > +   if (remain.is_constant ())
> > + {
> > +   unsigned HOST_WIDE_INT cpart_size
> > + = 1 << ceil_log2 (cremain);
> > +   if (known_gt (nunits, cpart_size)
> > +   && constant_multiple_p (nunits, cpart_size,
> > +   ))
> > + {
> > +   tree ptype;
> > +   new_vtype
> > + = vector_vector_composition_type (vectype,
> > +   num,
> > +   );
> > +   if (new_vtype)
> > + ltype = ptype;
> > + }
> > + }
> >   }
> >   }
> > tree offset
> 

-- 
Richard Biener 
SUSE Software Solutions Germany GmbH,
Frankenstrasse 146, 90461 Nuernberg, Germany;
GF: Ivo Totev, Andrew McDonald, Werner Knoblich; (HRB 36809, AG Nuernberg)
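As a side note, the remainder arithmetic in the patch is compact enough to sketch in isolation. In the sketch below, `ceil_log2` is a plain C stand-in for GCC's internal helper of the same name, and `part_size` is an illustrative wrapper rather than a GCC function:

```c
#include <assert.h>

/* Stand-in for GCC's ceil_log2: smallest log such that 1 << log >= x.  */
static unsigned ceil_log2(unsigned x)
{
    unsigned log = 0;
    while ((1u << log) < x)
        log++;
    return log;
}

/* The patch rounds the number of leftover elements of a group access
 * (group_size * vf - gap, modulo the vector unit count) up to the next
 * power of two: the smallest partial access that still covers them when
 * peeling a single scalar iteration.  */
static unsigned part_size(unsigned group_size, unsigned vf, unsigned gap,
                          unsigned nunits)
{
    unsigned remain = (group_size * vf - gap) % nunits;
    return 1u << ceil_log2(remain);
}
```

For example, a group of 3 with VF 4, a gap of 1, and 8-element vectors leaves 3 trailing elements, which get covered by a 4-element partial access.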


Re: [PATCH v2] Target-independent store forwarding avoidance.

2024-06-12 Thread Richard Biener
On Tue, 11 Jun 2024, Jeff Law wrote:

> 
> 
> On 6/11/24 7:52 AM, Philipp Tomsich wrote:
> > On Tue, 11 Jun 2024 at 15:37, Jeff Law  wrote:
> >>
> >>
> >>
> >> On 6/11/24 1:22 AM, Richard Biener wrote:
> >>
> >>>> Absolutely.   But forwarding from a smaller store to a wider load is
> >>>> painful
> >>>> from a hardware standpoint and if we can avoid it from a codegen
> >>>> standpoint,
> >>>> we should.
> >>>
> >>> Note there's also the possibility to increase the distance between the
> >>> store and the load - in fact the time a store takes to a) retire and
> >>> b) get from the store buffers to where the load-store unit would pick it
> >>> up (L1-D) is another target specific tuning knob.  That said, if that
> >>> distance isn't too large (on x86 there might be only an upper bound
> >>> given by the OOO window size and the L1D store latency(?), possibly
> >>> also additionally by the store buffer size) attacking the issue in
> >>> sched1 or sched2 might be another possibility.  So I think pass placement
> >>> is another thing to look at - I'd definitely place it after sched1
> >>> but I guess without looking at the pass again it's way before that?
> >> True, but I doubt there are enough instructions we could sink the load
> >> past to make a measurable difference.  This is especially true on the
> >> class of uarchs where this is going to be most important.
> >>
> >> In the case where the store/load can't be interchanged and thus this new
> >> pass rejects any transformation, we could try to do something in the
> >> scheduler to defer the load as long as possible.  Essentially it's a
> >> true dependency through a memory location using must-aliasing properties
> >> and in that case we'd want to crank up the "latency" of the store so
> >> that the load gets pushed away.
> >>
> >> I think one of the difficulties here is we often model stores as not
> >> having any latency (which is probably OK in most cases).  Input data
> >> dependencies and structural hazards dominate dominate considerations for
> >> stores.
> > 
> > I don't think that TARGET_SCHED_ADJUST_COST would even be called for a
> > data-dependence through a memory location.
> Probably correct, but we could adjust that behavior or add another mechanism
> to adjust costs based on memory dependencies.
> 
> > 
> > Note that, strictly speaking, the store does not have an extended
> > latency; it will be the load that will have an increased latency
> > (almost as if we knew that the load will miss to one of the outer
> > points-of-coherence).  The difference being that the load would not
> > hang around in a scheduling queue until being dispatched, but its
> > execution would start immediately and take more cycles (and
> > potentially block an execution pipeline for longer).
> Absolutely true.  I'm being imprecise in my language, increasing the "latency"
> of the store is really a proxy for "do something to encourage the load to move
> away from the store".
> 
> But overall rewriting the sequence is probably the better choice.  In my mind
> the scheduler approach would be a secondary attempt if we couldn't interchange
> the store/load.  And I'd make a small bet that its impact would be on the
> margins if we're doing a reasonable job in the new pass.

One of the points I wanted to make is that sched1 can make quite a
difference as to the relative distance of the store and load and
we have the instruction window the pass considers when scanning
(possibly driven by target uarch details).  So doing the rewriting
before sched1 might be not ideal (but I don't know how much cleanup
work the pass leaves behind - there's nothing between sched1 and RA).

On the hardware side I always wondered whether a failed load-to-store
forward results in the load uop stalling (because the hardware actually
_did_ see the conflict with an in-flight store) or whether this gets
caught later as the hardware speculates a load from L1 (with the
wrong value) but has to roll back because of the conflict.  I would
imagine the latter is cheaper to implement but worse in case of
conflict.

Richard.
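For readers following along, the hazard under discussion — a narrow store immediately followed by a wider, overlapping load — can be sketched in plain C. Both functions below are illustrative (the names are not from the patch); they produce the same bytes and differ only in whether the wide load overlaps an in-flight narrow store:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* with_hazard: a 2-byte store into an 8-byte slot, immediately followed
 * by an 8-byte load of that slot.  Many cores cannot forward the narrow
 * store to the wider load and stall until the store drains.  */
static uint64_t with_hazard(uint64_t *p, uint16_t v)
{
    memcpy(p, &v, sizeof v);    /* narrow store ...           */
    return *p;                  /* ... wider overlapping load */
}

/* without_hazard: load the full slot first, merge the narrow value in
 * (compilers typically lower this small memcpy to register moves), and
 * write it back with one full-width store.  Same resulting bytes, but
 * no narrow-store/wide-load overlap.  */
static uint64_t without_hazard(uint64_t *p, uint16_t v)
{
    uint64_t wide = *p;
    memcpy(&wide, &v, sizeof v);
    *p = wide;
    return wide;
}
```

This mirrors the direction of the proposed pass: keep the narrow value in registers and merge there, rather than round-tripping it through memory right before a wider load.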


Re: [PATCH v1] Widening-Mul: Take gsi after_labels instead of start_bb for gcall insertion

2024-06-12 Thread Richard Biener
On Tue, Jun 11, 2024 at 3:53 PM  wrote:
>
> From: Pan Li 
>
> We inserted the gcall of .SAT_ADD before the gsi_start_bb for avoiding
> the ssa def after use ICE issue.  Unfortunately,  there will be the
> potential ICE when the first stmt is label.  We cannot insert the gcall
> before the label.  Thus,  we take gsi_after_labels to locate the
> 'really' stmt that the gcall will insert before.
>
> The existing test cases pr115387-1.c and pr115387-2.c cover this change.

OK

> The below test suites are passed for this patch.
> * The rv64gcv fully regression test with newlib.
> * The x86 regression test.
> * The x86 bootstrap test.
>
> gcc/ChangeLog:
>
> * tree-ssa-math-opts.cc (math_opts_dom_walker::after_dom_children):
> Leverage gsi_after_labels instead of gsi_start_bb to skip the
> leading labels of bb.
>
> Signed-off-by: Pan Li 
> ---
>  gcc/tree-ssa-math-opts.cc | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/gcc/tree-ssa-math-opts.cc b/gcc/tree-ssa-math-opts.cc
> index fbb8e0ea306..c09e9006443 100644
> --- a/gcc/tree-ssa-math-opts.cc
> +++ b/gcc/tree-ssa-math-opts.cc
> @@ -6102,7 +6102,7 @@ math_opts_dom_walker::after_dom_children (basic_block 
> bb)
>for (gphi_iterator psi = gsi_start_phis (bb); !gsi_end_p (psi);
>  gsi_next ())
>  {
> -  gimple_stmt_iterator gsi = gsi_start_bb (bb);
> +  gimple_stmt_iterator gsi = gsi_after_labels (bb);
>match_unsigned_saturation_add (, psi.phi ());
>  }
>
> --
> 2.34.1
>


[webkit-changes] [WebKit/WebKit] c35d1f: [Writing Tools] Reverting a list transformation le...

2024-06-11 Thread Richard Robinson
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: c35d1fad75d58b9de8042a06f0fc21a9d411544e
  
https://github.com/WebKit/WebKit/commit/c35d1fad75d58b9de8042a06f0fc21a9d411544e
  Author: Richard Robinson 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M Source/WebCore/WebCore.xcodeproj/project.pbxproj
M Source/WebCore/editing/ReplaceSelectionCommand.h
M Source/WebCore/page/Page.cpp
M Source/WebCore/page/Page.h
M 
Source/WebCore/page/unified-text-replacement/UnifiedTextReplacementController.h
M 
Source/WebCore/page/unified-text-replacement/UnifiedTextReplacementController.mm
M Source/WebKit/WebProcess/WebPage/Cocoa/TextIndicatorStyleController.mm

  Log Message:
  ---
  [Writing Tools] Reverting a list transformation leaves a single errant bullet
https://bugs.webkit.org/show_bug.cgi?id=275336
rdar://126139492

Reviewed by Wenson Hsieh and Tim Horton.

Previously, we were undoing and restoring the text by creating and storing 
document fragments
and replacing the current selection. However, this approach is flawed for cases 
where lists
or table elements are used, since selecting all contents will exclude the first 
list bullet.

Fix by changing the implementation to instead just unapply and reapply the 
editing command itself,
which does not rely on selection.

Also, slightly refactor all the stateful instance variables into a single state 
struct.

* Source/WebCore/WebCore.xcodeproj/project.pbxproj:
* Source/WebCore/editing/ReplaceSelectionCommand.h:
* Source/WebCore/page/Page.cpp:
(WebCore::Page::willBeginTextReplacementSession):
(WebCore::Page::didBeginTextReplacementSession):
(WebCore::Page::textReplacementSessionDidReceiveReplacements):
(WebCore::Page::textReplacementSessionDidUpdateStateForReplacement):
(WebCore::Page::didEndTextReplacementSession):
(WebCore::Page::textReplacementSessionDidReceiveTextWithReplacementRange):
(WebCore::Page::contextRangeForSessionWithID const):
(WebCore::Page::textReplacementSessionDidReceiveEditAction):
* Source/WebCore/page/Page.h:
(WebCore::Page::unifiedTextReplacementController const): Deleted.
* 
Source/WebCore/page/unified-text-replacement/UnifiedTextReplacementController.h:
* 
Source/WebCore/page/unified-text-replacement/UnifiedTextReplacementController.mm:
(WebCore::UnifiedTextReplacementController::willBeginTextReplacementSession):
(WebCore::UnifiedTextReplacementController::textReplacementSessionDidReceiveReplacements):
(WebCore::UnifiedTextReplacementController::textReplacementSessionDidUpdateStateForReplacement):
(WebCore::UnifiedTextReplacementController::textReplacementSessionDidReceiveTextWithReplacementRange):
(WebCore::UnifiedTextReplacementController::textReplacementSessionDidReceiveEditAction):
(WebCore::UnifiedTextReplacementController::textReplacementSessionDidReceiveEditAction):
(WebCore::UnifiedTextReplacementController::textReplacementSessionDidReceiveEditAction):
(WebCore::UnifiedTextReplacementController::didEndTextReplacementSession):
(WebCore::UnifiedTextReplacementController::didEndTextReplacementSession):
(WebCore::UnifiedTextReplacementController::updateStateForSelectedReplacementIfNeeded):
(WebCore::UnifiedTextReplacementController::contextRangeForSessionWithID const):
(WebCore::UnifiedTextReplacementController::stateForSession):
(WebCore::UnifiedTextReplacementController::replaceContentsOfRangeInSessionInternal):
(WebCore::UnifiedTextReplacementController::replaceContentsOfRangeInSession):
* Source/WebKit/WebProcess/WebPage/Cocoa/TextIndicatorStyleController.mm:
(WebKit::TextIndicatorStyleController::contextRangeForSessionWithID const):

Canonical link: https://commits.webkit.org/279937@main



To unsubscribe from these emails, change your notification settings at 
https://github.com/WebKit/WebKit/settings/notifications
___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


Update on TomEE 10

2024-06-11 Thread Richard Zowalla
Hi all,

Here is a new update on TomEE 10.

Markus Jung has implemented the missing part of the EE10 security spec: [1] and 
the TCK for it looks good. Thanks a lot for that contribution! If anybody wants 
to give it a review, you can find it here: [1]

We have updated most MicroProfile specs to be compliant with MP6 and the TCK 
for it looks good.

The only MicroProfile implementation missing is OpenTelemetry 1.0 [2] (and the 
removal of OpenTracing). There is a branch with a basic integration 
(TOMEE-4343), but while working on it I found something odd, which I
discussed with Romain via Slack. The result is [3]. I hope to get some additional
input from Mark Struberg on it, so we can hopefully find a way to fix the odd
CDI part here. Overall, the related TCK has around 4-5 failures which are (most
likely) a result of [3], because the interceptor is not working as expected.

Since we are getting into better and better EE10 shape, we need to go back to
fixing/adding the existing/remaining TCKs inside the TomEE build to see if we
need to do some work in our upstream dependencies. I am planning to send an
update for that area soon, so we get an overview of what is already added and
what is broken / missing.

We are blocked by a release of CXF 4.1.0-SNAPSHOT. 

We should (imho) discuss whether it is worth releasing an M2 with a downgrade to
the latest stable CXF release, since we added new features (MicroProfile
updates, potentially OIDC soon) and upgraded a lot of 3rd-party dependencies
with CVEs. So from my POV it would be crucial to get some feedback on a new
milestone release. WDYT?

Gruß
Richard

[1] https://github.com/apache/tomee/pull/1178
[2] https://issues.apache.org/jira/browse/TOMEE-4343
[3] https://issues.apache.org/jira/browse/OWB-1441

[webkit-changes] [WebKit/WebKit] c6800d: [Writing Tools] Replace `ENABLE_UNIFIED_TEXT_REPLA...

2024-06-11 Thread Richard Robinson
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: c6800d4fa31447eac584dee14396ed6d4efbac18
  
https://github.com/WebKit/WebKit/commit/c6800d4fa31447eac584dee14396ed6d4efbac18
  Author: Richard Robinson 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M Source/WTF/Scripts/Preferences/UnifiedWebPreferences.yaml
M Source/WTF/wtf/PlatformEnable.h
M Source/WTF/wtf/PlatformEnableCocoa.h
M Source/WebCore/dom/DocumentMarker.h
M Source/WebCore/dom/DocumentMarkerController.cpp
M Source/WebCore/loader/EmptyClients.cpp
M Source/WebCore/page/ChromeClient.h
M Source/WebCore/page/ContextMenuClient.h
M Source/WebCore/page/ContextMenuController.cpp
M Source/WebCore/page/Page.cpp
M Source/WebCore/page/Page.h
M 
Source/WebCore/page/unified-text-replacement/UnifiedTextReplacementController.h
M 
Source/WebCore/page/unified-text-replacement/UnifiedTextReplacementController.mm
M Source/WebCore/page/unified-text-replacement/UnifiedTextReplacementTypes.h
M Source/WebCore/platform/LocalizedStrings.cpp
M Source/WebCore/platform/LocalizedStrings.h
M Source/WebCore/rendering/MarkedText.cpp
M Source/WebCore/rendering/MarkedText.h
M Source/WebCore/rendering/StyledMarkedText.cpp
M Source/WebCore/rendering/TextBoxPainter.cpp
M Source/WebCore/testing/Internals.cpp
M Source/WebCore/testing/Internals.h
M Source/WebCore/testing/Internals.idl
M Source/WebKit/Shared/TextIndicatorStyle.serialization.in
M Source/WebKit/Shared/WebCoreArgumentCoders.serialization.in
M Source/WebKit/UIProcess/API/APIPageConfiguration.h
M Source/WebKit/UIProcess/API/Cocoa/WKWebView.mm
M Source/WebKit/UIProcess/API/Cocoa/WKWebViewConfiguration.mm
M Source/WebKit/UIProcess/API/Cocoa/WKWebViewInternal.h
M Source/WebKit/UIProcess/API/mac/WKWebViewMac.mm
M Source/WebKit/UIProcess/Cocoa/PageClientImplCocoa.h
M Source/WebKit/UIProcess/Cocoa/PageClientImplCocoa.mm
M Source/WebKit/UIProcess/Cocoa/WebPageProxyCocoa.mm
M Source/WebKit/UIProcess/PageClient.h
M Source/WebKit/UIProcess/WKSTextStyleManager.h
M Source/WebKit/UIProcess/WebPageProxy.h
M Source/WebKit/UIProcess/WebPageProxy.messages.in
M Source/WebKit/UIProcess/WebPageProxyInternals.h
M Source/WebKit/UIProcess/ios/WKContentViewInteraction.h
M Source/WebKit/UIProcess/ios/WKContentViewInteraction.mm
M Source/WebKit/UIProcess/ios/WKExtendedTextInputTraits.mm
M Source/WebKit/UIProcess/mac/PageClientImplMac.h
M Source/WebKit/UIProcess/mac/PageClientImplMac.mm
M Source/WebKit/UIProcess/mac/WKTextIndicatorStyleManager.h
M Source/WebKit/UIProcess/mac/WKTextIndicatorStyleManager.mm
M Source/WebKit/UIProcess/mac/WebContextMenuProxyMac.mm
M Source/WebKit/UIProcess/mac/WebViewImpl.h
M Source/WebKit/UIProcess/mac/WebViewImpl.mm
M Source/WebKit/WebProcess/WebCoreSupport/WebChromeClient.cpp
M Source/WebKit/WebProcess/WebCoreSupport/WebChromeClient.h
M Source/WebKit/WebProcess/WebCoreSupport/WebContextMenuClient.h
M Source/WebKit/WebProcess/WebCoreSupport/mac/WebContextMenuClientMac.mm
M Source/WebKit/WebProcess/WebPage/Cocoa/TextIndicatorStyleController.mm
M Source/WebKit/WebProcess/WebPage/Cocoa/WebPageCocoa.mm
M Source/WebKit/WebProcess/WebPage/TextIndicatorStyleController.h
M Source/WebKit/WebProcess/WebPage/WebPage.cpp
M Source/WebKit/WebProcess/WebPage/WebPage.h
M Source/WebKit/WebProcess/WebPage/WebPage.messages.in
M Source/WebKitLegacy/mac/WebCoreSupport/WebContextMenuClient.h
M Source/WebKitLegacy/mac/WebCoreSupport/WebContextMenuClient.mm
M Tools/TestWebKitAPI/Tests/WebCore/MarkedText.cpp

  Log Message:
  ---
  [Writing Tools] Replace `ENABLE_UNIFIED_TEXT_REPLACEMENT` with 
`ENABLE_WRITING_TOOLS` and remove UTRAdditions.h
https://bugs.webkit.org/show_bug.cgi?id=275376
rdar://129627552

Reviewed by Megan Gardner.

* Source/WTF/Scripts/Preferences/UnifiedWebPreferences.yaml:
* Source/WTF/wtf/PlatformEnable.h:
* Source/WTF/wtf/PlatformEnableCocoa.h:
* Source/WebCore/dom/DocumentMarker.h:
(WebCore::DocumentMarker::allMarkers):
(WebCore::DocumentMarker::description const):
* Source/WebCore/dom/DocumentMarkerController.cpp:
(WebCore::shouldInsertAsSeparateMarker):
(WebCore::DocumentMarkerController::addMarker):
(WebCore::DocumentMarkerController::removeMarkers):
(WebCore::DocumentMarkerController::unifiedTextReplacementAnimationTimerFired):
* Source/WebCore/loader/EmptyClients.cpp:
* Source/WebCore/page/ChromeClient.h:
* Source/WebCore/page/ContextMenuClient.h:
* Source/WebCore/page/ContextMenuController.cpp:
(WebCore::ContextMenuController::contextMenuItemSelected):
(WebCore::ContextMenuController::populate):
* Source/WebCore/page/Page.cpp:
* Source/WebCore/page/Page.h:
* 
Source/WebCore/page/unified-text-replacement/UnifiedTextReplacementController.h:
* 
Source/WebCore/page/unified-text-replacement/UnifiedTextReplacementController.mm

[Qemu-commits] [qemu/qemu] b67e35: block: drop force_dup parameter of raw_reconfigure...

2024-06-11 Thread Richard Henderson via Qemu-commits
used instead. Prevent that format using multi-lines
by forbidding the newline character.

Signed-off-by: Philippe Mathieu-Daudé 
Acked-by: Mads Ynddal 
Reviewed-by: Daniel P. Berrangé 
Message-id: 20240606103943.79116-6-phi...@linaro.org
Signed-off-by: Stefan Hajnoczi 


  Commit: 903916f0a017fe4b7789f1c6c6982333a5a71876
  
https://github.com/qemu/qemu/commit/903916f0a017fe4b7789f1c6c6982333a5a71876
  Author: Chuang Xu 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M target/i386/cpu.c

  Log Message:
  ---
  i386/cpu: fixup number of addressable IDs for processor cores in the physical 
package

When QEMU is started with:
-cpu host,host-cache-info=on,l3-cache=off \
-smp 2,sockets=1,dies=1,cores=1,threads=2
Guest can't acquire maximum number of addressable IDs for processor cores in
the physical package from CPUID[04H].

When creating a CPU topology of 1 core per package, host-cache-info only
uses the Host's addressable core IDs field (CPUID.04H.EAX[bits 31-26]),
resulting in a conflict (on the multicore Host) between the Guest core
topology information in this field and the Guest's actual cores number.

Fix it by removing the unnecessary condition to cover 1 core per package
case. This is safe because cores_per_pkg will not be 0 and will be at
least 1.

Fixes: d7caf13b5fcf ("x86: cpu: fixup number of addressable IDs for logical 
processors sharing cache")
Signed-off-by: Guixiong Wei 
Signed-off-by: Yipeng Yin 
Signed-off-by: Chuang Xu 
Reviewed-by: Zhao Liu 
Message-ID: <20240611032314.64076-1-xuchuangxc...@bytedance.com>
Signed-off-by: Paolo Bonzini 


  Commit: c94eb5db8e409c932da9eb187e68d4cdc14acc5b
  
https://github.com/qemu/qemu/commit/c94eb5db8e409c932da9eb187e68d4cdc14acc5b
  Author: Pankaj Gupta 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M target/i386/sev.c

  Log Message:
  ---
  i386/sev: fix unreachable code coverity issue

Set 'finish->id_block_en' early, so that it is properly reset.

Fixes coverity CID 1546887.

Fixes: 7b34df4426 ("i386/sev: Introduce 'sev-snp-guest' object")
Signed-off-by: Pankaj Gupta 
Message-ID: <20240607183611.100-2-pankaj.gu...@amd.com>
Signed-off-by: Paolo Bonzini 


  Commit: 48779faef3c8e2fe70bd8285bffa731bd76dc844
  
https://github.com/qemu/qemu/commit/48779faef3c8e2fe70bd8285bffa731bd76dc844
  Author: Pankaj Gupta 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M target/i386/sev.c

  Log Message:
  ---
  i386/sev: Move SEV_COMMON null check before dereferencing

Fixes Coverity CID 1546886.

Fixes: 9861405a8f ("i386/sev: Invoke launch_updata_data() for SEV class")
Signed-off-by: Pankaj Gupta 
Message-ID: <20240607183611.100-3-pankaj.gu...@amd.com>
Signed-off-by: Paolo Bonzini 


  Commit: cd7093a7a168a823d07671348996f049d45e8f67
  
https://github.com/qemu/qemu/commit/cd7093a7a168a823d07671348996f049d45e8f67
  Author: Pankaj Gupta 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M target/i386/sev.c

  Log Message:
  ---
  i386/sev: Return when sev_common is null

Fixes Coverity CID 1546885.

Fixes: 16dcf200dc ("i386/sev: Introduce "sev-common" type to encapsulate common 
SEV state")
Signed-off-by: Pankaj Gupta 
Message-ID: <20240607183611.100-4-pankaj.gu...@amd.com>
Signed-off-by: Paolo Bonzini 


  Commit: 4228eb8cc6ba44d35cd52b05508a47e780668051
  
https://github.com/qemu/qemu/commit/4228eb8cc6ba44d35cd52b05508a47e780668051
  Author: Paolo Bonzini 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M target/i386/tcg/decode-new.c.inc
M target/i386/tcg/decode-new.h
M target/i386/tcg/emit.c.inc

  Log Message:
  ---
  target/i386: remove CPUX86State argument from generator functions

CPUX86State argument would only be used to fetch bytes, but that has to be
done before the generator function is called.  So remove it, and all
temptation together with it.

Reviewed-by: Richard Henderson 
Signed-off-by: Paolo Bonzini 


  Commit: cc155f19717ced44d70df3cd5f149a5b9f9a13f1
  
https://github.com/qemu/qemu/commit/cc155f19717ced44d70df3cd5f149a5b9f9a13f1
  Author: Paolo Bonzini 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M target/i386/cpu.h
M target/i386/tcg/emit.c.inc

  Log Message:
  ---
  target/i386: rewrite flags writeback for ADCX/ADOX

Avoid using set_cc_op() in preparation for implementing APX; treat
CC_OP_EFLAGS similar to the case where we have the "opposite" cc_op
(CC_OP_ADOX for ADCX and CC_OP_ADCX for ADOX), except the resulting
cc_op is not CC_OP_ADCOX. This is written easily as two "if"s, whose
conditions are both false for CC_OP_EFLAGS, both true for CC_OP_ADCOX,
and one each true for CC_OP_ADCX/ADOX.

The new logic also makes it easy to drop usage of tmp0.

Reviewed-by: Richard Henderson 
Signed-off-by: Paolo Bonzini 


  Commit: e628387cf9a27a4895b00821313635fad4cfab43
  
https:/

Improving build-time checks for kernel module configuration + two other discussion points

2024-06-11 Thread Richard Sent
Hi Guix!

Guix provides both linux-libre and linux-libre-*-generic kernels.
The generic kernels seem to match the upstream defconfigs very closely
with a few minor adjustments (namely default-extra-linux-options) while
the linux-libre kernel is entirely customized.

This can result in awkward bugs when using Guix services that expect
certain options to be set. Generally, the linux-libre kernel seems to
have plenty of options set for Guix-packaged services to operate while
-generic kernels do not. These bugs are difficult for users to
troubleshoot without a lucky dive on the mailing list [1].

Unfortunately, -generic kernels can have better support for running Guix
on certain single board computers [1][2] or specific devices (e.g.
Pinebook Pro). This places users in a lose-lose situation of either a)
manually customizing the linux-libre kernel with the appropriate
upstream options until the system boots or b) adding config options
piecemeal to the -generic kernel as runtime breakages are detected.

My idea for how to mitigate this is adding some sort of extensible
service where services can register necessary kernel configuration
settings. When the config option is unset, a build-time error is raised.

Alternatively instead of merely verifying the config option, this
service could set the config option outright. However, without enhancing
customize-linux to recursively enable config dependencies [3] this may
not be feasible.

Does this sound reasonable? Looking at the code it seems the appropriate
places for verification would be in (guix scripts system) and (gnu
machine ssh) . Does anyone have a different suggestion? Did I miss the
mark?

Having skimmed the code this may be challenging to implement as it would
have to occur after the kernel is built. (Or at least after
guix_defconfig is generated.) At present all checks seem to be performed
/before/ building the OS. There's also system containers to consider.

A couple of other thoughts while I'm at it:

There doesn't seem to be much shared understanding on the meaning of
-generic kernels [4]. Perhaps we should consider renaming them while
we're at it or at least better document the distinction.

Several options are set in -generic kernels to provide support for
specific boards. (This is most notable in linux-libre-arm64-generic).
This doesn't feel like the cleanest solution to me. I think we should
instead make those kernels smaller, customized variants of (ideally)
linux-libre or linux-libre-*-generic so their purpose is a bit more
distinct.

To summarize here's the action items:

1. Add build-time checks for kernel config options
2. Better identify and/or document the meaning of -generic kernels.
3. If needed, adjust the behavior of -generic kernels to match 2 and add
variants as necessary.

Thoughts?

[1]: https://issues.guix.gnu.org/61173
[2]: 
https://git.sr.ht/~freakingpenguin/rsent/tree/master/item/rsent/machines/lan/caustic.scm
[3]: https://issues.guix.gnu.org/66355#1-lineno28
[4]: https://issues.guix.gnu.org/43078#2

-- 
Take it easy,
Richard Sent
Making my computer weirder one commit at a time.



Re: default value of Right-to-left cursor movement preference

2024-06-11 Thread Richard Kimberly Heck

On 6/10/24 15:44, Udicoudco wrote:

Dear all,

Personally I don't know anyone who intentionally
uses the current default value of this preference,
so I thought about changing the default from 'logical'
to 'visual' to make the experience of new users easier,
but I don't want to do that without it being a consensus.

If you want to understand what this preference is for,
follow these steps:

* Open a new document
* Open the command buffer (View->Toolbars->Command Buffer->On)
* Start a new language with an RTL script by typing e.g. language hebrew
   in the command buffer
* Open an equation inset, and type several characters in it and outside of it.
* Start moving the caret with the left and right arrow keys

You will notice how, inside the equation, the caret navigates in the opposite
direction from the one you are ordering it to, unlike in normal text.
Note that it happens not only in equations, but in other places which
are considered intrinsically LTR, like ERT.

After switching the Right-to-left cursor movement preference
to visual, you should notice that the caret now always moves to the
direction you are ordering it.

Is there anyone who would prefer to leave things
as they are?


That makes sense to me. But I'm not sure we can do it in 2.4.x. 
Typically, when we make this kind of change, we add something to 
prefs2prefs_prefs.py so that things do not change for existing users. 
I.e., if someone did want to use 'logical', that would not be written to 
the preferences file, since it's the default. Changing the default would 
then surprise them. So the prefs2prefs code would add


\visual_cursor false

if there's nothing in the existing preferences file.

Riki


--
lyx-users mailing list
lyx-users@lists.lyx.org
http://lists.lyx.org/mailman/listinfo/lyx-users


Re: [LyX/master] Add "full" drawing strategy

2024-06-11 Thread Richard Kimberly Heck

On 6/11/24 12:06, Jean-Marc Lasgouttes wrote:

On 11/06/2024 at 17:44, Richard Kimberly Heck wrote:

On 6/11/24 09:36, Jean-Marc Lasgouttes wrote:
The point is to offer a mode where the screen is redrawn fully every 
time (only the drawing, not the metrics computation). The 
performance seems quite reasonable to me, actually.


I assume it is otherwise safe?


This is what I assume too :) All it does is to add the 
Update::ForceDraw flag to force a full redraw. So it goes through 
known drawing mechanisms to do its thing.


Having a proper UI for it would be great, but I am procrastinating 
on that.


What's our policy on preference updates? (I can't remember.) Do we 
care if a preference file saved with 2.4.1 cannot be used with 2.4.0?


This is a good question. As it is, it will fall back to the default 
value ("partial" if supported, "backingstore" otherwise) if it does 
not understand the value that was given (I just checked). 


OK, go ahead.

Riki


--
lyx-devel mailing list
lyx-devel@lists.lyx.org
http://lists.lyx.org/mailman/listinfo/lyx-devel


Re: [LyX/master] Remove hebrew letter document class

2024-06-11 Thread Richard Kimberly Heck

On 6/11/24 14:39, Udicoudco wrote:

On Tue, Jun 11, 2024 at 9:37 PM Udicoudco  wrote:

How can it break old documents?
Other than custom user layouts,
which requires this one, I can't think
of a problem.

And in such a rare case they can simply input
the regular letter layout instead (which gives
the correct alignment in the GUI anyway).


It's more a matter of policy. Our goal is that any document created with 
LyX can be opened in any later version, and compiled to produce the same 
output. Of course, we can't always do that, but we try. In this case, 
removing this file will prevent a document that uses it from loading 
properly, let alone compiling. So we keep it and mark it obsolete.


Riki


--
lyx-devel mailing list
lyx-devel@lists.lyx.org
http://lists.lyx.org/mailman/listinfo/lyx-devel


Re: [PATCH] Improve code generation of strided SLP loads

2024-06-11 Thread Richard Sandiford
Richard Biener  writes:
> This avoids falling back to elementwise accesses for strided SLP
> loads when the group size is not a multiple of the vector element
> size.  Instead we can use a smaller vector or integer type for the load.
>
> For stores we can do the same though restrictions on stores we handle
> and the fact that store-merging covers up makes this mostly effective
> for cost modeling which shows for gcc.target/i386/vect-strided-3.c
> which we now vectorize with V4SI vectors rather than just V2SI ones.
>
> For all of this there's still the opportunity to use non-uniform
> accesses, say for a 6-element group with a VF of two do
> V4SI, { V2SI, V2SI }, V4SI.  But that's for a possible followup.
>
> Bootstrapped and tested on x86_64-unknown-linux-gnu, textually
> this depends on the gap improvement series so I'll push only
> after those.  Target independent testing is difficult, strided
> accesses are difficult for VLA - I suppose they should go
> through gather/scatter but we have to be able to construct the
> offset vector there.

Yeah, agreed.  And I suppose for tests like these, which load
consecutive pairs of 32-bit elements, we'd want to generate a gather
of 64-bit elements.  So there'd be a similar accretion process,
but only if it applies regularly across the whole vector.

Richard
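As a rough illustration of the accretion idea being discussed — covering a group whose size is not a multiple of the vector length with smaller power-of-two accesses — here is a simplified Python model (not the vectorizer's actual logic):

```python
def access_chunks(group_size, nunits):
    """Greedily cover group_size elements with power-of-two sized accesses
    no larger than the full vector length nunits."""
    chunks, remaining = [], group_size
    while remaining:
        size = min(nunits, 1 << (remaining.bit_length() - 1))
        chunks.append(size)
        remaining -= size
    return chunks
```

For a 6-element group and 4-element vectors this yields a V4SI plus a V2SI access; an 8-element group becomes two full vectors.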

>
> Richard.
>
>   * gcc.target/i386/vect-strided-1.c: New testcase.
>   * gcc.target/i386/vect-strided-2.c: Likewise.
>   * gcc.target/i386/vect-strided-3.c: Likewise.
>   * gcc.target/i386/vect-strided-4.c: Likewise.
> ---
>  .../gcc.target/i386/vect-strided-1.c  |  24 +
>  .../gcc.target/i386/vect-strided-2.c  |  17 +++
>  .../gcc.target/i386/vect-strided-3.c  |  20 
>  .../gcc.target/i386/vect-strided-4.c  |  20 
>  gcc/tree-vect-stmts.cc| 100 --
>  5 files changed, 127 insertions(+), 54 deletions(-)
>  create mode 100644 gcc/testsuite/gcc.target/i386/vect-strided-1.c
>  create mode 100644 gcc/testsuite/gcc.target/i386/vect-strided-2.c
>  create mode 100644 gcc/testsuite/gcc.target/i386/vect-strided-3.c
>  create mode 100644 gcc/testsuite/gcc.target/i386/vect-strided-4.c
>
> diff --git a/gcc/testsuite/gcc.target/i386/vect-strided-1.c 
> b/gcc/testsuite/gcc.target/i386/vect-strided-1.c
> new file mode 100644
> index 000..db4a06711f1
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/vect-strided-1.c
> @@ -0,0 +1,24 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -msse2 -mno-avx" } */
> +
> +void foo (int * __restrict a, int *b, int s)
> +{
> +  for (int i = 0; i < 1024; ++i)
> +{
> +  a[8*i+0] = b[s*i+0];
> +  a[8*i+1] = b[s*i+1];
> +  a[8*i+2] = b[s*i+2];
> +  a[8*i+3] = b[s*i+3];
> +  a[8*i+4] = b[s*i+4];
> +  a[8*i+5] = b[s*i+5];
> +  a[8*i+6] = b[s*i+4];
> +  a[8*i+7] = b[s*i+5];
> +}
> +}
> +
> +/* Three two-element loads, two four-element stores.  On ia32 we elide
> +   a permute and perform a redundant load.  */
> +/* { dg-final { scan-assembler-times "movq" 2 } } */
> +/* { dg-final { scan-assembler-times "movhps" 2 { target ia32 } } } */
> +/* { dg-final { scan-assembler-times "movhps" 1 { target { ! ia32 } } } } */
> +/* { dg-final { scan-assembler-times "movups" 2 } } */
> diff --git a/gcc/testsuite/gcc.target/i386/vect-strided-2.c 
> b/gcc/testsuite/gcc.target/i386/vect-strided-2.c
> new file mode 100644
> index 000..6fd64e28cf0
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/vect-strided-2.c
> @@ -0,0 +1,17 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -msse2 -mno-avx" } */
> +
> +void foo (int * __restrict a, int *b, int s)
> +{
> +  for (int i = 0; i < 1024; ++i)
> +{
> +  a[4*i+0] = b[s*i+0];
> +  a[4*i+1] = b[s*i+1];
> +  a[4*i+2] = b[s*i+0];
> +  a[4*i+3] = b[s*i+1];
> +}
> +}
> +
> +/* One two-element load, one four-element store.  */
> +/* { dg-final { scan-assembler-times "movq" 1 } } */
> +/* { dg-final { scan-assembler-times "movups" 1 } } */
> diff --git a/gcc/testsuite/gcc.target/i386/vect-strided-3.c 
> b/gcc/testsuite/gcc.target/i386/vect-strided-3.c
> new file mode 100644
> index 000..b462701a0b2
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/i386/vect-strided-3.c
> @@ -0,0 +1,20 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -msse2 -mno-avx -fno-tree-slp-vectorize" } */
> +
> +void foo (int * __restrict a, int *b, int s)
> +{
> +  if (s >= 6)
> +for (int i = 0; i < 1024; ++i)
> +  {
> + a[s*i+0] = b[4*i+0];
> + a[s*i+1] =

Re: [PATCH] tree-optimization/115385 - handle more gaps with peeling of a single iteration

2024-06-11 Thread Richard Sandiford
Don't think it makes any difference, but:

Richard Biener  writes:
> @@ -2151,7 +2151,16 @@ get_group_load_store_type (vec_info *vinfo, 
> stmt_vec_info stmt_info,
>access excess elements.
>???  Enhancements include peeling multiple iterations
>or using masked loads with a static mask.  */
> -   || (group_size * cvf) % cnunits + group_size - gap < cnunits))
> +   || ((group_size * cvf) % cnunits + group_size - gap < cnunits
> +   /* But peeling a single scalar iteration is enough if
> +  we can use the next power-of-two sized partial
> +  access.  */
> +   && ((cremain = (group_size * cvf - gap) % cnunits), true

...this might be less surprising as:

  && ((cremain = (group_size * cvf - gap) % cnunits, true)

in terms of how the & line up.

Thanks,
Richard

> +   && ((cpart_size = (1 << ceil_log2 (cremain)))
> +   != cnunits)
> +   && vector_vector_composition_type
> +(vectype, cnunits / cpart_size,
> + &half_vtype) == NULL_TREE
>   {
> if (dump_enabled_p ())
>   dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -11599,6 +11608,27 @@ vectorizable_load (vec_info *vinfo,
> gcc_assert (new_vtype
> || LOOP_VINFO_PEELING_FOR_GAPS
>  (loop_vinfo));
> + /* But still reduce the access size to the next
> +required power-of-two so peeling a single
> +scalar iteration is sufficient.  */
> + unsigned HOST_WIDE_INT cremain;
> + if (remain.is_constant (&cremain))
> +   {
> + unsigned HOST_WIDE_INT cpart_size
> +   = 1 << ceil_log2 (cremain);
> + if (known_gt (nunits, cpart_size)
> + && constant_multiple_p (nunits, cpart_size,
> + &num))
> +   {
> + tree ptype;
> + new_vtype
> +   = vector_vector_composition_type (vectype,
> + num,
> + );
> + if (new_vtype)
> +   ltype = ptype;
> +   }
> +   }
> }
> }
>   tree offset
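The cpart_size computation in the hunk above — round the remaining elements up to the next power of two so that peeling a single scalar iteration is enough — can be modeled in Python (a standalone sketch, not the GCC code):

```python
def next_pow2_part_size(group_size, vf, gap, nunits):
    """Mirror the patch's arithmetic: cremain is what is left over after
    full vectors; cpart_size rounds it up to a power of two (ceil_log2).
    Returns None when no partial access is needed."""
    cremain = (group_size * vf - gap) % nunits
    if cremain == 0:
        return None  # the access is a whole number of vectors
    cpart_size = 1 << (cremain - 1).bit_length()
    return cpart_size if cpart_size != nunits else None
```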


Re: [patch, rs6000, middle-end 0/1] v1: Add implementation for different targets for pair mem fusion

2024-06-11 Thread Richard Sandiford
Ajit Agarwal  writes:
> Hello Richard:
>
> On 11/06/24 9:41 pm, Richard Sandiford wrote:
>> Ajit Agarwal  writes:
>>>>> Thanks a lot. Can I know what should we be doing with neg (fma)
>>>>> correctness failures with load fusion.
>>>>
>>>> I think it would involve:
>>>>
>>>> - describing lxvp and stxvp as unspec patterns, as I mentioned
>>>>   in the previous reply
>>>>
>>>> - making plain movoo split loads and stores into individual
>>>>   lxv and stxvs.  (Or, alternative, it could use lxvp and stxvp,
>>>>   but internally swap the registers after load and before store.)
>>>>   That is, movoo should load the lower-numbered register from the
>>>>   lower address and the higher-numbered register from the higher
>>>>   address, and likewise for stores.
>>>>
>>>
>>> Would you mind elaborating the above.
>> 
>> I think movoo should use rs6000_split_multireg_move for all alternatives,
>> like movxo does.  movoo should split into 2 V1TI loads/stores and movxo
>> should split into 4 V1TI loads/stores.  lxvp and stxvp would be
>> independent patterns of the form:
>> 
>>   (set ...
>>(unspec [...] UNSPEC_FOO))
>> 
>> ---
>> 
>
> In load fusion pass I generate the above pattern for adjacent merge
> pairs.
>
>> rs6000_split_multireg_move has:
>> 
>>   /* The __vector_pair and __vector_quad modes are multi-register
>>  modes, so if we have to load or store the registers, we have to be
>>  careful to properly swap them if we're in little endian mode
>>  below.  This means the last register gets the first memory
>>  location.  We also need to be careful of using the right register
>>  numbers if we are splitting XO to OO.  */
>> 
>> But I don't see how this can work reliably if we allow the kind of
>> subregs that you want to create here.  The register order is the opposite
>> from the one that GCC expects.
>> 
>> This is more a question for the PowerPC maintainers though.
>>
>
> Above unspec pattern generated and modified the movoo pattern to accept
> the above spec it goes through the rs6000_split_multireg_move
> it splits into 2 VITI loads and generate consecutive loads with sequential
> registers. In load_fusion pass I generate the subreg along with load results 
> subreg (reg OO R) 16 and subreg (reg OO R) 0.
>
> But it doesnt generate lxvp instruction. If above unspec instruction
> pattern and write separate pattern in md file to generate lxvp instead of
> normal movoo, then it won't go through rs6000_split_multireg_move

I don't understand the last bit, sorry.  Under the scheme I described,
lxvp should be generated only through an unspec (and no other way).
Same for stxvp.  The fusion pass should generate those unspecs.

If the fusion pass has generated the code correctly, the lxvp unspec
will remain throughout compilation, unless all uses of it are later
deleted as dead.

The movoo rtl pattern should continue to be:

  [(set (match_operand:OO 0 "nonimmediate_operand" "=wa,ZwO,wa")
(match_operand:OO 1 "input_operand" "ZwO,wa,wa"))]

But movoo should generate individual loads, stores and moves.  By design,
it should never generate lxvp or stxvp.

This means that, if a fused load is spilled, the sequence will be
something like:

  lxvp ...   // original fused load (unspec)
  ...
  stxv ...   // store one half to the stack (split from movoo)
  stxv ...   // store the other half to the stack (split from movoo)

Then insns that use the pair will load whichever half they need
from the stack.

I realise that isn't great, but it should at least be correct.

Thanks,
Richard


bug#71495: Add command line flag to add to load path without evaluation

2024-06-11 Thread Richard Sent
Hi Guix!

In Guile, -L is an equivalent shorthand for adding to the %load-path
variable. No actual files are evaluated. In Guix, -L actually evaluates
files (at least in some capacity) to look for package definitions,
allowing for uses like $ guix -L . .

This has a performance impact as channels grow, so it would be nice if
there was an alternative command line flag that matched Guile's
behavior.

To showcase the issue, here's three examples of "building" an
already-built home environment. I would use $ guix repl instead, but -L
in guix repl seems to match Guile's behavior, not Guix's.

--8<---cut here---start->8---
# Baseline, no load path additions
gibraltar :) rsent$ bash -c 'time guix home build rsent/home/minimal.scm'
/gnu/store/5m062lg4f32j9hlirfkcp5141px6sgkv-home

real    0m9.776s
user    0m22.981s
sys     0m0.233s

# GUILE_LOAD_PATH, within margin of error of baseline
gibraltar :) rsent$ GUILE_LOAD_PATH=. bash -c 'time guix home build 
rsent/home/minimal.scm'
/gnu/store/5m062lg4f32j9hlirfkcp5141px6sgkv-home

real    0m10.016s
user    0m23.064s
sys     0m0.186s

# -L ., consistently ~25% longer to complete 
gibraltar :) rsent$ bash -c 'time guix home build -L . rsent/home/minimal.scm'
/gnu/store/5m062lg4f32j9hlirfkcp5141px6sgkv-home

real    0m12.791s
user    0m29.569s
sys     0m0.247s
--8<---cut here---end--->8---

At present one can set GUILE_LOAD_PATH manually to work around this
issue. In my opinion this isn't very discoverable. Furthermore, it can't
_cleanly_ handle cases when GUILE_LOAD_PATH is already set or needs
multiple entries. It also makes certain commands with bash builtins
(like time...) awkward since you have to enter a subshell.

-- 
Take it easy,
Richard Sent
Making my computer weirder one commit at a time.





Re: [PATCH] ifcvt: Clarify if_info.original_cost.

2024-06-11 Thread Richard Sandiford
Robin Dapp  writes:
>> I was looking at the code in more detail and just wanted to check.
>> We have:
>> 
>>   int last_needs_comparison = -1;
>> 
>>   bool ok = noce_convert_multiple_sets_1
(if_info, &need_no_cmov, &rewired_src, &targets, &temporaries,
 &unmodified_insns, &last_needs_comparison);
>>   if (!ok)
>>   return false;
>> 
>>   /* If there are insns that overwrite part of the initial
>>  comparison, we can still omit creating temporaries for
>>  the last of them.
>>  As the second try will always create a less expensive,
>>  valid sequence, we do not need to compare and can discard
>>  the first one.  */
>>   if (last_needs_comparison != -1)
>> {
>>   end_sequence ();
>>   start_sequence ();
>>   ok = noce_convert_multiple_sets_1
  (if_info, &need_no_cmov, &rewired_src, &targets, &temporaries,
   &unmodified_insns, &last_needs_comparison);
>>   /* Actually we should not fail anymore if we reached here,
>>   but better still check.  */
>>   if (!ok)
>>return false;
>> }
>> 
>> But noce_convert_multiple_sets_1 ends with:
>> 
>>   /* Even if we did not actually need the comparison, we want to make sure
>>  to try a second time in order to get rid of the temporaries.  */
>>   if (*last_needs_comparison == -1)
>> *last_needs_comparison = 0;
>> 
>> 
>>   return true;
>> 
>> AFAICT that means that the first attempt is always redundant.
>> 
>> Have I missed something?
>
> (I might not have fully gotten the question)
>
> The idea is that the first attempt goes through all insns and sets
> *last_need_comparison to the insn number that either
> - used the condition/comparison by preferring seq1 or
> - used the condition as a side-effect insn when creating a CC-using
>   insn in seq2.
> (And we only know that after actually creating the sequences). 
>
> The second attempt then improves on the first one by skipping
> any temporary destination registers after the last insn that required
> the condition (even though its target overlaps with the condition
> registers).  This is true for all cmovs that only use the CC
> (instead of the condition).  Essentially, we know that all following
> cmovs can be created via the CC which is not overwritten.
>
> So, even when we never used the condition because of all CC-using
> cmovs we would skip the temporary targets in the second attempt.
> But we can't know that all we ever needed is the CC comparison
> before actually creating the sequences in the first attempt.

Hmm, ok.  The bit that confused me most was:

  if (last_needs_comparison != -1)
{
  end_sequence ();
  start_sequence ();
  ...
}

which implied that the second attempt was made conditionally.
It seems like it's always used and is an inherent part of the
algorithm.

If the problem is tracking liveness, wouldn't it be better to
iterate over the "then" block in reverse order?  We would start
with the liveness set for the join block and update as we move
backwards through the "then" block.  This liveness set would
tell us whether the current instruction needs to preserve a
particular register.  That should make it possible to do the
transformation in one step, and so avoid the risk that the
second attempt does something that is unexpectedly different
from the first attempt.
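The reverse walk suggested here is standard backward liveness propagation; a minimal sketch with a hypothetical insn representation (register-name sets, not ifcvt's RTL structures):

```python
def live_before_each(insns, live_out):
    """Walk a block in reverse, maintaining the live set: a register is live
    before an insn if the insn uses it, or if it is live after and not
    defined there.  insns: list of (defs, uses) pairs of register-name sets."""
    live = set(live_out)
    before = [None] * len(insns)
    for i in range(len(insns) - 1, -1, -1):
        defs, uses = insns[i]
        live = (live - set(defs)) | set(uses)
        before[i] = set(live)
    return before
```

An insn may clobber a register exactly when that register is not in the live set computed for the point after it.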

FWIW, the reason for asking was that it seemed safer to pass
use_cond_earliest back from noce_convert_multiple_sets_1
to noce_convert_multiple_sets, as another parameter,
and then do the adjustment around noce_convert_multiple_sets's
call to targetm.noce_conversion_profitable_p.  That would avoid
the need for a new if_info field, which in turn would make it
less likely that stale information is carried over from one attempt
to the next (e.g. if other ifcvt techniques end up using the same
field in future).

Thanks,
Richard


[clang] [Clang] allow `` `@$ `` in raw string delimiters in C++26 (PR #93216)

2024-06-11 Thread Richard Smith via cfe-commits


@@ -2261,8 +2261,17 @@ bool Lexer::LexRawStringLiteral(Token &Result, const char *CurPtr,
 
   unsigned PrefixLen = 0;
 
-  while (PrefixLen != 16 && isRawStringDelimBody(CurPtr[PrefixLen]))
+  while (PrefixLen != 16 && isRawStringDelimBody(CurPtr[PrefixLen])) {
 ++PrefixLen;
+if (!isLexingRawMode() &&
+llvm::is_contained({'$', '@', '`'}, CurPtr[PrefixLen])) {

zygoloid wrote:

There's an off-by-one error here: we're incrementing `PrefixLen` before 
checking the character, so this is checking the character *after* the one we 
just processed. Hence we don't diagnose `"@(foo)@"`, because the characters we 
look at are the `(` and `"` after the `@`s, not the `@`s themselves.
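The off-by-one can be reproduced outside the lexer; here is a small Python model of the scan (simplified delimiter-body test, not Clang's actual isRawStringDelimBody):

```python
SPECIAL = {"$", "@", "`"}

def flagged_specials(text, buggy):
    """Scan a raw-string delimiter prefix and return the indices of special
    characters that would be diagnosed.  The buggy variant increments first,
    so it inspects the character *after* the one just consumed."""
    def is_delim_body(c):
        return c.isalnum() or c in "_$@`"   # simplified approximation
    flagged, n = [], 0
    while n < len(text) and is_delim_body(text[n]):
        n += 1
        idx = n if buggy else n - 1
        if idx < len(text) and text[idx] in SPECIAL:
            flagged.append(idx)
    return flagged
```

For the delimiter in R"@(foo)@" the buggy scan flags nothing — it looks at the `(` and the closing quote — while checking before the increment flags the `@` at index 0.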

https://github.com/llvm/llvm-project/pull/93216
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[Perl/perl5] 06e421: S_fold_constants: remove early SvREADONLY(sv) to a...

2024-06-11 Thread Richard Leach via perl5-changes
  Branch: refs/heads/blead
  Home:   https://github.com/Perl/perl5
  Commit: 06e421c559c63975f29c35ba3588a0e6b0c75eca
  
https://github.com/Perl/perl5/commit/06e421c559c63975f29c35ba3588a0e6b0c75eca
  Author: Richard Leach 
  Date:   2024-06-11 (Tue, 11 Jun 2024)

  Changed paths:
M ext/Devel-Peek/t/Peek.t
M op.c
M t/op/undef.t

  Log Message:
  ---
  S_fold_constants: remove early SvREADONLY(sv) to allow SvIsCOW(sv)

Standard CONST PVs have the IsCOW flag set, meaning that COW can
be used when assigning the CONST to a variable, rather than making
a copy of the buffer. CONST PVs arising from constant folding have
been lacking this flag, leading to unnecessary copying of PV buffers.

This seems to have occurred because a common branch in S_fold_constants
marks SVs as READONLY before the new CONST OP is created. When the OP
is created, the Perl_ck_svconst() check function is called - this is
the same as when a standard CONST OP is created. If the SV is not
already marked as READONLY, the check function will try to set IsCOW
if it is safe to do so, then in either case will make sure that the
READONLY flag is set.

This commit therefore removes the SvREADONLY(sv) statement from
S_fold_constants(), allowing Perl_ck_svconst() to set the IsCOW
and READONLY flags itself. Minor test updates are also included.



To unsubscribe from these emails, change your notification settings at 
https://github.com/Perl/perl5/settings/notifications


Re: [PATCH] hostfs: Add const qualifier to host_root in hostfs_fill_super()

2024-06-11 Thread Richard Weinberger
- Original Message -
> From: "Nathan Chancellor" 
> To: "Christian Brauner" 
> CC: "Hongbo Li" , "richard" , "anton 
> ivanov" ,
> "Johannes Berg" , "linux-um" 
> , "linux-kernel"
> , "Nathan Chancellor" 
> Sent: Tuesday, June 11, 2024 21:58:41
> Subject: [PATCH] hostfs: Add const qualifier to host_root in 
> hostfs_fill_super()

> After the recent conversion to the new mount API, there is a warning
> when building hostfs (which may be upgraded to an error via
> CONFIG_WERROR=y):
> 
>  fs/hostfs/hostfs_kern.c: In function 'hostfs_fill_super':
>  fs/hostfs/hostfs_kern.c:942:27: warning: initialization discards 'const'
>  qualifier from pointer target type [-Wdiscarded-qualifiers]
>942 | char *host_root = fc->source;
>|   ^~
> 
> Add the 'const' qualifier, as host_root will not be modified after its
> assignment. Move the assignment to keep the existing reverse Christmas
> tree order intact.
> 
> Fixes: cd140ce9f611 ("hostfs: convert hostfs to use the new mount API")
> Signed-off-by: Nathan Chancellor 

Acked-by: Richard Weinberger 

Thanks,
//richard



[tor-announce] New Release: Tor Browser 13.0.16 (Android, Windows, macOS, Linux)

2024-06-11 Thread Richard Pospesel

Hi everyone,

Tor Browser 13.0.16 has now been published for all platforms. For 
details please see our blog post:

- https://blog.torproject.org/new-release-tor-browser-13016/

Changelog:

Tor Browser 13.0.16 - June 11th 2024
 * All Platforms
   * Updated Tor to 0.4.8.12
   * Updated OpenSSL to 3.0.14
   * Bug 42625: Rebase Tor Browser Stable 13.0 onto 115.12.0esr [tor-browser]
 * Windows + macOS + Linux
   * Updated Firefox to 115.12.0esr
 * Android
   * Updated GeckoView to 115.12.0esr
   * Bug 42621: Backport security fixes from Firefox 127 [tor-browser]
 * Build System
   * All Platforms
 * Updated Go to 1.21.11


best,
-richard


OpenPGP_0xDE47360363F34B2C.asc
Description: OpenPGP public key


OpenPGP_signature.asc
Description: OpenPGP digital signature
___
tor-announce mailing list
tor-announce@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-announce


[tor-announce] New Release: Tor Browser 13.5a9 (Android, Windows, macOS, Linux)

2024-06-11 Thread Richard Pospesel

Hi everyone,

Tor Browser 13.5a9 has now been published for all platforms. For details 
please see our blog post:

- https://blog.torproject.org/new-alpha-release-tor-browser-135a9/

Changelog:

Tor Browser 13.5a9 - June 10 2024
 * All Platforms
   * Updated Tor to 0.4.8.12
   * Updated OpenSSL to 3.0.14
   * Bug 41467: compat: beacon: re-enable the API but transform it to a no-op 
[tor-browser]
   * Bug 42604: Add some debug logs about circuits [tor-browser]
   * Bug 42614: Rebase Tor Browser Stable onto 115.12.0esr [tor-browser]
 * Windows + macOS + Linux
   * Updated Firefox to 115.12.0esr
   * Bug 41149: Add be, bg and pt-PT translations to nightlies 
[tor-browser-build]
 * Windows + macOS
   * Bug 42586: Add support link to OS deprecation message [tor-browser]
 * Windows
   * Bug 41859: Font used for IPs in circuit display is illegible [tor-browser]
 * Android
   * Updated GeckoView to 115.12.0esr
   * Bug 42593: Unable to disable bridges after they have been configured and 
successfully bootstrapped [tor-browser]
   * Bug 42621: Backport security fixes from Firefox 127 [tor-browser]
 * Build System
   * All Platforms
 * Updated Go to 1.21.11
 * Bug 42594: Update mach to work with python 3.12 [tor-browser]
 * Bug 41153: Update README for Ubuntu 24.04 unprivileged user namespace 
changes [tor-browser-build]
 * Bug 41154: Update keyring/boklm.gpg for new subkeys [tor-browser-build]
   * Windows
 * Bug 41150: Do not check for SSE2 on the Windows installer 
[tor-browser-build]
   * Linux
 * Bug 41126: Make deb and rpm packages for Tor Browser [tor-browser-build]
 * Bug 41160: Enable build of Tor Browser rpm/deb packages for nightly only 
[tor-browser-build]
   * Android
 * Bug 42568: Remove legacy tor dependencies from firefox-android 
[tor-browser]
 * Bug 42569: Remove tor-onion-proxy-library and and tor-android-service 
deployment/ingestion steps from firefox-android dev tools/scripts [tor-browser]
 * Bug 42570: Add tor-expert-bundle aar depoyment/ingestion step to 
firefox-android dev tools/scripts [tor-browser]
 * Bug 42581: Check if a file exists before trying to sign it in 
tools/tba-sign-devbuild.sh [tor-browser]
 * Bug 41139: Remove tor-android-service and tor-onion-proxy-library 
dependencies and ingestion steps from firefox-android config and build script 
[tor-browser-build]
 * Bug 41140: Remove tor-onion-proxy-library and tor-android-service 
projects from tor-browser-build [tor-browser-build]
 * Bug 41141: Add tor-expert-bundle aar dependency to firefox-android 
[tor-browser-build]


best,
-richard




[tor-commits] [Git][tpo/applications/tor-browser-update-responses][main] release: new version, 13.0.16

2024-06-11 Thread richard (@richard) via tor-commits


richard pushed to branch main at The Tor Project / Applications / Tor Browser 
update responses


Commits:
0da5aaef by Richard Pospesel at 2024-06-11T18:48:06+00:00
release: new version, 13.0.16

- - - - -


30 changed files:

- update_3/release/.htaccess
- − update_3/release/13.0.12-13.0.15-linux-i686-ALL.xml
- − update_3/release/13.0.12-13.0.15-linux-x86_64-ALL.xml
- − update_3/release/13.0.12-13.0.15-macos-ALL.xml
- − update_3/release/13.0.12-13.0.15-windows-i686-ALL.xml
- − update_3/release/13.0.12-13.0.15-windows-x86_64-ALL.xml
- − update_3/release/13.0.13-13.0.15-linux-i686-ALL.xml
- − update_3/release/13.0.13-13.0.15-linux-x86_64-ALL.xml
- − update_3/release/13.0.13-13.0.15-macos-ALL.xml
- − update_3/release/13.0.13-13.0.15-windows-i686-ALL.xml
- − update_3/release/13.0.13-13.0.15-windows-x86_64-ALL.xml
- + update_3/release/13.0.13-13.0.16-linux-i686-ALL.xml
- + update_3/release/13.0.13-13.0.16-linux-x86_64-ALL.xml
- + update_3/release/13.0.13-13.0.16-macos-ALL.xml
- + update_3/release/13.0.13-13.0.16-windows-i686-ALL.xml
- + update_3/release/13.0.13-13.0.16-windows-x86_64-ALL.xml
- − update_3/release/13.0.14-13.0.15-linux-i686-ALL.xml
- − update_3/release/13.0.14-13.0.15-linux-x86_64-ALL.xml
- − update_3/release/13.0.14-13.0.15-macos-ALL.xml
- − update_3/release/13.0.14-13.0.15-windows-i686-ALL.xml
- − update_3/release/13.0.14-13.0.15-windows-x86_64-ALL.xml
- + update_3/release/13.0.14-13.0.16-linux-i686-ALL.xml
- + update_3/release/13.0.14-13.0.16-linux-x86_64-ALL.xml
- + update_3/release/13.0.14-13.0.16-macos-ALL.xml
- + update_3/release/13.0.14-13.0.16-windows-i686-ALL.xml
- + update_3/release/13.0.14-13.0.16-windows-x86_64-ALL.xml
- + update_3/release/13.0.15-13.0.16-linux-i686-ALL.xml
- + update_3/release/13.0.15-13.0.16-linux-x86_64-ALL.xml
- + update_3/release/13.0.15-13.0.16-macos-ALL.xml
- + update_3/release/13.0.15-13.0.16-windows-i686-ALL.xml


The diff was not included because it is too large.


View it on GitLab: 
https://gitlab.torproject.org/tpo/applications/tor-browser-update-responses/-/commit/0da5aaef63a331e16a079fd990fc00ebd4156d75

-- 
View it on GitLab: 
https://gitlab.torproject.org/tpo/applications/tor-browser-update-responses/-/commit/0da5aaef63a331e16a079fd990fc00ebd4156d75
You're receiving this email because of your account on gitlab.torproject.org.


___
tor-commits mailing list
tor-commits@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-commits


[tbb-commits] [Git][tpo/applications/tor-browser-update-responses][main] release: new version, 13.0.16

2024-06-11 Thread richard (@richard)


richard pushed to branch main at The Tor Project / Applications / Tor Browser 
update responses


Commits:
0da5aaef by Richard Pospesel at 2024-06-11T18:48:06+00:00
release: new version, 13.0.16

- - - - -


30 changed files:

- update_3/release/.htaccess
- − update_3/release/13.0.12-13.0.15-linux-i686-ALL.xml
- − update_3/release/13.0.12-13.0.15-linux-x86_64-ALL.xml
- − update_3/release/13.0.12-13.0.15-macos-ALL.xml
- − update_3/release/13.0.12-13.0.15-windows-i686-ALL.xml
- − update_3/release/13.0.12-13.0.15-windows-x86_64-ALL.xml
- − update_3/release/13.0.13-13.0.15-linux-i686-ALL.xml
- − update_3/release/13.0.13-13.0.15-linux-x86_64-ALL.xml
- − update_3/release/13.0.13-13.0.15-macos-ALL.xml
- − update_3/release/13.0.13-13.0.15-windows-i686-ALL.xml
- − update_3/release/13.0.13-13.0.15-windows-x86_64-ALL.xml
- + update_3/release/13.0.13-13.0.16-linux-i686-ALL.xml
- + update_3/release/13.0.13-13.0.16-linux-x86_64-ALL.xml
- + update_3/release/13.0.13-13.0.16-macos-ALL.xml
- + update_3/release/13.0.13-13.0.16-windows-i686-ALL.xml
- + update_3/release/13.0.13-13.0.16-windows-x86_64-ALL.xml
- − update_3/release/13.0.14-13.0.15-linux-i686-ALL.xml
- − update_3/release/13.0.14-13.0.15-linux-x86_64-ALL.xml
- − update_3/release/13.0.14-13.0.15-macos-ALL.xml
- − update_3/release/13.0.14-13.0.15-windows-i686-ALL.xml
- − update_3/release/13.0.14-13.0.15-windows-x86_64-ALL.xml
- + update_3/release/13.0.14-13.0.16-linux-i686-ALL.xml
- + update_3/release/13.0.14-13.0.16-linux-x86_64-ALL.xml
- + update_3/release/13.0.14-13.0.16-macos-ALL.xml
- + update_3/release/13.0.14-13.0.16-windows-i686-ALL.xml
- + update_3/release/13.0.14-13.0.16-windows-x86_64-ALL.xml
- + update_3/release/13.0.15-13.0.16-linux-i686-ALL.xml
- + update_3/release/13.0.15-13.0.16-linux-x86_64-ALL.xml
- + update_3/release/13.0.15-13.0.16-macos-ALL.xml
- + update_3/release/13.0.15-13.0.16-windows-i686-ALL.xml


The diff was not included because it is too large.


View it on GitLab: 
https://gitlab.torproject.org/tpo/applications/tor-browser-update-responses/-/commit/0da5aaef63a331e16a079fd990fc00ebd4156d75

-- 
View it on GitLab: 
https://gitlab.torproject.org/tpo/applications/tor-browser-update-responses/-/commit/0da5aaef63a331e16a079fd990fc00ebd4156d75
You're receiving this email because of your account on gitlab.torproject.org.


___
tbb-commits mailing list
tbb-commits@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tbb-commits


[tbb-commits] [Git][tpo/applications/tor-browser] Pushed new tag base-browser-115.12.0esr-13.0-1-build1

2024-06-11 Thread richard (@richard)


richard pushed new tag base-browser-115.12.0esr-13.0-1-build1 at The Tor 
Project / Applications / Tor Browser

-- 
View it on GitLab: 
https://gitlab.torproject.org/tpo/applications/tor-browser/-/tree/base-browser-115.12.0esr-13.0-1-build1
You're receiving this email because of your account on gitlab.torproject.org.




[tor-commits] [Git][tpo/applications/tor-browser] Pushed new tag base-browser-115.12.0esr-13.0-1-build1

2024-06-11 Thread richard (@richard) via tor-commits


richard pushed new tag base-browser-115.12.0esr-13.0-1-build1 at The Tor 
Project / Applications / Tor Browser

-- 
View it on GitLab: 
https://gitlab.torproject.org/tpo/applications/tor-browser/-/tree/base-browser-115.12.0esr-13.0-1-build1
You're receiving this email because of your account on gitlab.torproject.org.




[Tails-dev] Tor Browser 13.0.16 (Android, Windows, macOS, Linux)

2024-06-11 Thread Richard Pospesel

Hello,

Unsigned Tor Browser 13.0.16 release candidate builds are now available 
for testing:


- 
https://tb-build-02.torproject.org/~richard/builds/torbrowser/release/unsigned/13.0.16/


This should be the last stable release in the 13.0 series before 13.5 
scheduled for early next week.


The full changelog can be found here:

- 
https://gitlab.torproject.org/tpo/applications/tor-browser-build/-/raw/tbb-13.0.16-build1/projects/browser/Bundle-Data/Docs-TBB/ChangeLog.txt


best,
-richard


___
Tails-dev mailing list
Tails-dev@boum.org
https://www.autistici.org/mailman/listinfo/tails-dev
To unsubscribe from this list, send an empty email to 
tails-dev-unsubscr...@boum.org.


[tor-qa] Tor Browser 13.0.16 (Android, Windows, macOS, Linux)

2024-06-11 Thread Richard Pospesel

Hello,

Unsigned Tor Browser 13.0.16 release candidate builds are now available 
for testing:


- 
https://tb-build-02.torproject.org/~richard/builds/torbrowser/release/unsigned/13.0.16/


The full changelog can be found here:

- 
https://gitlab.torproject.org/tpo/applications/tor-browser-build/-/raw/tbb-13.0.16-build1/projects/browser/Bundle-Data/Docs-TBB/ChangeLog.txt


best,
-richard


___
tor-qa mailing list
tor-qa@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-qa


[tor-commits] [Git][tpo/applications/tor-browser-update-responses][main] alpha: new version, 13.5a9

2024-06-11 Thread richard (@richard) via tor-commits


richard pushed to branch main at The Tor Project / Applications / Tor Browser 
update responses


Commits:
5a2b962f by Richard Pospesel at 2024-06-11T16:48:19+00:00
alpha: new version, 13.5a9

- - - - -


30 changed files:

- update_3/alpha/.htaccess
- − update_3/alpha/13.5a5-13.5a8-linux-i686-ALL.xml
- − update_3/alpha/13.5a5-13.5a8-linux-x86_64-ALL.xml
- − update_3/alpha/13.5a5-13.5a8-macos-ALL.xml
- − update_3/alpha/13.5a5-13.5a8-windows-i686-ALL.xml
- − update_3/alpha/13.5a5-13.5a8-windows-x86_64-ALL.xml
- − update_3/alpha/13.5a6-13.5a8-linux-i686-ALL.xml
- − update_3/alpha/13.5a6-13.5a8-linux-x86_64-ALL.xml
- − update_3/alpha/13.5a6-13.5a8-macos-ALL.xml
- − update_3/alpha/13.5a6-13.5a8-windows-i686-ALL.xml
- − update_3/alpha/13.5a6-13.5a8-windows-x86_64-ALL.xml
- + update_3/alpha/13.5a6-13.5a9-linux-i686-ALL.xml
- + update_3/alpha/13.5a6-13.5a9-linux-x86_64-ALL.xml
- + update_3/alpha/13.5a6-13.5a9-macos-ALL.xml
- + update_3/alpha/13.5a6-13.5a9-windows-i686-ALL.xml
- + update_3/alpha/13.5a6-13.5a9-windows-x86_64-ALL.xml
- − update_3/alpha/13.5a7-13.5a8-linux-i686-ALL.xml
- − update_3/alpha/13.5a7-13.5a8-linux-x86_64-ALL.xml
- − update_3/alpha/13.5a7-13.5a8-macos-ALL.xml
- − update_3/alpha/13.5a7-13.5a8-windows-i686-ALL.xml
- − update_3/alpha/13.5a7-13.5a8-windows-x86_64-ALL.xml
- + update_3/alpha/13.5a7-13.5a9-linux-i686-ALL.xml
- + update_3/alpha/13.5a7-13.5a9-linux-x86_64-ALL.xml
- + update_3/alpha/13.5a7-13.5a9-macos-ALL.xml
- + update_3/alpha/13.5a7-13.5a9-windows-i686-ALL.xml
- + update_3/alpha/13.5a7-13.5a9-windows-x86_64-ALL.xml
- + update_3/alpha/13.5a8-13.5a9-linux-i686-ALL.xml
- + update_3/alpha/13.5a8-13.5a9-linux-x86_64-ALL.xml
- + update_3/alpha/13.5a8-13.5a9-macos-ALL.xml
- + update_3/alpha/13.5a8-13.5a9-windows-i686-ALL.xml


The diff was not included because it is too large.


-- 
View it on GitLab: 
https://gitlab.torproject.org/tpo/applications/tor-browser-update-responses/-/commit/5a2b962f7fe1d1841a8d5025a44326d81ee4db64
You're receiving this email because of your account on gitlab.torproject.org.




[tbb-commits] [Git][tpo/applications/tor-browser-update-responses][main] alpha: new version, 13.5a9

2024-06-11 Thread richard (@richard)


richard pushed to branch main at The Tor Project / Applications / Tor Browser 
update responses


Commits:
5a2b962f by Richard Pospesel at 2024-06-11T16:48:19+00:00
alpha: new version, 13.5a9

- - - - -


30 changed files:

- update_3/alpha/.htaccess
- − update_3/alpha/13.5a5-13.5a8-linux-i686-ALL.xml
- − update_3/alpha/13.5a5-13.5a8-linux-x86_64-ALL.xml
- − update_3/alpha/13.5a5-13.5a8-macos-ALL.xml
- − update_3/alpha/13.5a5-13.5a8-windows-i686-ALL.xml
- − update_3/alpha/13.5a5-13.5a8-windows-x86_64-ALL.xml
- − update_3/alpha/13.5a6-13.5a8-linux-i686-ALL.xml
- − update_3/alpha/13.5a6-13.5a8-linux-x86_64-ALL.xml
- − update_3/alpha/13.5a6-13.5a8-macos-ALL.xml
- − update_3/alpha/13.5a6-13.5a8-windows-i686-ALL.xml
- − update_3/alpha/13.5a6-13.5a8-windows-x86_64-ALL.xml
- + update_3/alpha/13.5a6-13.5a9-linux-i686-ALL.xml
- + update_3/alpha/13.5a6-13.5a9-linux-x86_64-ALL.xml
- + update_3/alpha/13.5a6-13.5a9-macos-ALL.xml
- + update_3/alpha/13.5a6-13.5a9-windows-i686-ALL.xml
- + update_3/alpha/13.5a6-13.5a9-windows-x86_64-ALL.xml
- − update_3/alpha/13.5a7-13.5a8-linux-i686-ALL.xml
- − update_3/alpha/13.5a7-13.5a8-linux-x86_64-ALL.xml
- − update_3/alpha/13.5a7-13.5a8-macos-ALL.xml
- − update_3/alpha/13.5a7-13.5a8-windows-i686-ALL.xml
- − update_3/alpha/13.5a7-13.5a8-windows-x86_64-ALL.xml
- + update_3/alpha/13.5a7-13.5a9-linux-i686-ALL.xml
- + update_3/alpha/13.5a7-13.5a9-linux-x86_64-ALL.xml
- + update_3/alpha/13.5a7-13.5a9-macos-ALL.xml
- + update_3/alpha/13.5a7-13.5a9-windows-i686-ALL.xml
- + update_3/alpha/13.5a7-13.5a9-windows-x86_64-ALL.xml
- + update_3/alpha/13.5a8-13.5a9-linux-i686-ALL.xml
- + update_3/alpha/13.5a8-13.5a9-linux-x86_64-ALL.xml
- + update_3/alpha/13.5a8-13.5a9-macos-ALL.xml
- + update_3/alpha/13.5a8-13.5a9-windows-i686-ALL.xml


The diff was not included because it is too large.


-- 
View it on GitLab: 
https://gitlab.torproject.org/tpo/applications/tor-browser-update-responses/-/commit/5a2b962f7fe1d1841a8d5025a44326d81ee4db64
You're receiving this email because of your account on gitlab.torproject.org.


___
tbb-commits mailing list
tbb-commits@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tbb-commits


RE: Today's WWDC keynote and iOS 18 announcement

2024-06-11 Thread Richard Turner
It was yesterday, June 10th, at 10 AM Pacific.

You can watch it at this link:

https://www.apple.com/apple-events/

 

 

 

Richard, USA

"It's no great honor to be blind, but it's more than a nuisance and less than a 
disaster. Either you're going to fight like hell when your sight fails or 
you're going to stand on the sidelines for the rest of your life." -- Dr. 
Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009)

 

My web site:  <https://www.turner42.com> https://www.turner42.com

 

From: viphone@googlegroups.com  On Behalf Of Esther 
Levegnale
Sent: Tuesday, June 11, 2024 8:56 AM
To: viphone@googlegroups.com
Subject: Re: Today's WWDC keynote and iOS 18 announcement

 

Hi,everyone,

 

What time will this be held?  I'm looking forward to listening to the WWDC 
about the new phones that are coming out.

 

Thanks.

 

Esther

Sent From Esther's Amazing and Awesome iPhone 13 Pro Max!





On Jun 11, 2024, at 11:42 AM, Chela Robles mailto:cdrobles...@gmail.com> > wrote:

I believe the 15 series is the first one to offer the M chip.

Sent from my iPhone





On Jun 11, 2024, at 8:07 AM, Richard Turner mailto:richardr_tur...@comcast.net> > wrote:



I kind of figured that, but was hoping.  Is the 15 the first model to use the M 
chips?  I think it is.

I’m too lazy to go searching for that info right now.

If in the fall when the 16 comes out, if my carrier offers a great trade-in 
deal on my 13 pro, I might update to the 15 pro, or even the 16 if the offer is 
good enough.  I won’t be holding my breath though.

 

 

 

Richard, USA

"It's no great honor to be blind, but it's more than a nuisance and less than a 
disaster. Either you're going to fight like hell when your sight fails or 
you're going to stand on the sidelines for the rest of your life." -- Dr. 
Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009)

 

My web site:  <https://www.turner42.com> https://www.turner42.com

 

From: viphone@googlegroups.com <mailto:viphone@googlegroups.com>  
mailto:viphone@googlegroups.com> > On Behalf Of 
Chela Robles
Sent: Monday, June 10, 2024 9:44 PM
To: viphone@googlegroups.com <mailto:viphone@googlegroups.com> 
Subject: Re: Today's WWDC keynote and iOS 18 announcement

 

Well, I got an email from Apple today and it looks like all the AI stuff is 
gonna be on all of the iPhone 15 models, not series 14. So I for one won’t be 
getting any major update as far as seeing anything with AI as to my knowledge 
since I have an iPhone 14.

Sent from my iPhone






On Jun 10, 2024, at 8:36 PM, Dennis Long mailto:dennisl1...@gmail.com> > wrote:



I agree.

 

From: viphone@googlegroups.com <mailto:viphone@googlegroups.com>  
mailto:viphone@googlegroups.com> > On Behalf Of 
Sieghard Weitzel
Sent: Monday, June 10, 2024 5:45 PM
To: viphone@googlegroups.com <mailto:viphone@googlegroups.com> 
Subject: RE: Today's WWDC keynote and iOS 18 announcement

 

We all know what Siri is like right now, but with full AI integration I have a 
feeling we may all be surprised how amazing it may end up being; of course from 
the little I heard this will not be one of these on/off moments where as soon 
as iOS 18 is released in September it will include all of what it will have a 
year from now and after the main .1, .2 and .3 updates which typically happen 
in Late October, before Christmas or in the new year and again in early March. 
And then of course it is my understanding that the true Apple Intelligence will 
require at least an iPhone 15 Pro or later so for many of us it may be years 
before we experience it all and by then of course even more powerful hardware 
and advances in AI will even add more functionality. One thing I am sure of is 
that in the next years we'll see a lot of cool stuff around all the AI 
development and we'll benefit greatly from it.

 

From: viphone@googlegroups.com <mailto:viphone@googlegroups.com>  
mailto:viphone@googlegroups.com> > On Behalf Of 
Cristóbal Muñoz
Sent: Monday, June 10, 2024 12:30 PM
To: viphone@googlegroups.com <mailto:viphone@googlegroups.com> 
Subject: RE: Today's WWDC keynote and iOS 18 announcement

 

I literally asked Siri this morning what time was the WWDC keynote and I got 
one of those generic “this is what I found on the web.”

Whatever they do or don’t do with Siri, it can’t be as bad as it is X years 
later than when it was introduced. Just one hot useless mess.

 

Cristóbal

 

From: viphone@googlegroups.com <mailto:viphone@googlegroups.com>  
mailto:viphone@googlegroups.com> > On Behalf Of Tai 
Tomasi
Sent: Monday, June 10, 2024 11:10 AM
To: viphone@googlegroups.com <mailto:viphone@googlegroups.com> 
Subject: Re: Today's WWDC keynote and iOS 18 announcement

 

We are currently into a whole new section where they talk about AI. They just 
separated it.

Tai Tomasi, J.D., M.P.A.

Email: tai.t

Re: [patch, rs6000, middle-end 0/1] v1: Add implementation for different targets for pair mem fusion

2024-06-11 Thread Richard Sandiford
Ajit Agarwal  writes:
>>> Thanks a lot. Can I know what should we be doing with neg (fma)
>>> correctness failures with load fusion.
>> 
>> I think it would involve:
>> 
>> - describing lxvp and stxvp as unspec patterns, as I mentioned
>>   in the previous reply
>> 
>> - making plain movoo split loads and stores into individual
>>   lxv and stxvs.  (Or, alternative, it could use lxvp and stxvp,
>>   but internally swap the registers after load and before store.)
>>   That is, movoo should load the lower-numbered register from the
>>   lower address and the higher-numbered register from the higher
>>   address, and likewise for stores.
>> 
>
> Would you mind elaborating the above.

I think movoo should use rs6000_split_multireg_move for all alternatives,
like movxo does.  movoo should split into 2 V1TI loads/stores and movxo
should split into 4 V1TI loads/stores.  lxvp and stxvp would be
independent patterns of the form:

  (set ...
   (unspec [...] UNSPEC_FOO))

---

rs6000_split_multireg_move has:

  /* The __vector_pair and __vector_quad modes are multi-register
 modes, so if we have to load or store the registers, we have to be
 careful to properly swap them if we're in little endian mode
 below.  This means the last register gets the first memory
 location.  We also need to be careful of using the right register
 numbers if we are splitting XO to OO.  */

But I don't see how this can work reliably if we allow the kind of
subregs that you want to create here.  The register order is the opposite
from the one that GCC expects.
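To make the ordering concern concrete, here is a small, self-contained Python
sketch (illustrative only — register numbers and the "lo_addr"/"hi_addr" labels
are made up, and this is not the actual rs6000_split_multireg_move logic) of the
pairing described in the quoted comment: on little endian the last register gets
the first memory location, whereas GCC's multi-register model expects the
lower-numbered register to correspond to the lower address.

```python
def split_multireg_load(first_reg, mem_halves, little_endian):
    """Return (register, memory half) pairs for a 2-register load.

    mem_halves[0] is the half at the lower address.  On little endian
    the *last* register receives the *first* memory location, mirroring
    the comment quoted from rs6000_split_multireg_move; on big endian
    the pairing is simply in address order.
    """
    regs = [first_reg, first_reg + 1]
    halves = list(reversed(mem_halves)) if little_endian else list(mem_halves)
    return list(zip(regs, halves))

print(split_multireg_load(32, ["lo_addr", "hi_addr"], little_endian=False))
# -> [(32, 'lo_addr'), (33, 'hi_addr')]
print(split_multireg_load(32, ["lo_addr", "hi_addr"], little_endian=True))
# -> [(32, 'hi_addr'), (33, 'lo_addr')]
```

In the little-endian case register 33 (the higher-numbered one) is paired with
the lower address, which is the reverse of the order subregs of an OO-mode
pseudo would imply.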

This is more a question for the PowerPC maintainers though.

And this is one of the (admittedly many) times when I wish GCC's
subreg model was more like LLVM's. :)

Thanks,
Richard


Re: [Qgis-user] Fwd: Export data with tiff and csv files in QGIS 3.36.3

2024-06-11 Thread Richard McDonnell via QGIS-User
Hi Sofia,
I unfortunately am unable to access the files you sent, as there are 
restrictions on our systems here in relation to accessing files.
Apologies for that.
Kind Regards,

Richard


——
Richard McDonnell MSc GIS, FME Certified Professional
Flood Risk Management - Data Management

——
Oifig na nOibreacha Poiblí
Office of Public Works

Sráid Jonathan Swift, Baile Átha Troim, Co na Mí, C15 NX36
Jonathan Swift Street, Trim, Co Meath, C15 NX36
——
M +353 87 688 5964 T +353 46 942 2409
https://gov.ie/opw

——
To send me files larger than 30MB, please use the link below 
https://filetransfer.opw.ie/filedrop/richard.mcdonn...@opw.ie

Email Disclaimer: 
https://www.gov.ie/en/organisation-information/439daf-email-disclaimer/
From: ΣΟΦΙΑ ΤΣΙΤΟΥ 
Sent: 11 June 2024 14:34
To: Richard McDonnell 
Cc: qgis-user@lists.osgeo.org
Subject: Re: [Qgis-user] Fwd: Export data with tiff and csv files in QGIS 3.36.3

igme5000.tif <https://drive.google.com/file/d/1vxfYWoy-Z24hCoYXNwELB6gJqQC2I6c0/view?usp=drive_web>
Dear Richard,

Thank you very much for your quick response. I followed this procedure many 
times but unfortunately I got null values. Attached you can find the files that 
I used (two different tiff maps-I use one map each time- and one csv file with 
the coordinates). I can't find out if there's a problem with my files or 
something I'm not doing right in the process.

Thank you.

Warm regards,
Sofia Tsitou

On Tue, 11 Jun 2024 at 3:42 PM, Richard McDonnell 
mailto:richard.mcdonn...@opw.ie>> wrote:
Hi Sofia,
So I will start from the beginning, hopefully I understand your query properly…


1.   Add your Raster to the Canvas

2.   Using the Data Source Manager and Delimited Text, I add the CSV, 
making sure the Headers, Columns and rows are correct, while also 
setting/specifying the X, Y and Z fields (as required)

3.   This will result in a Point Dataset being added to the Canvas, with 
the CSV loaded as a temporary layer

4.   In the Processing Toolbox search bar, type “sample” and then select 
Sample raster values

a.   Input layer set to the Point Dataset

b.   Raster Layer set to the Raster you want to analyse

c.   Output column Prefix is handy if you have multiple datasets you want 
to sample, as you can change this for each subsequent sample.

d.   Sampled you can specify a Location for your output dataset, or you can 
leave it as a temporary layer until you have carried out all sampling.
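For readers hitting the same null-value symptom, a minimal, library-free Python
sketch of what step 4 ("Sample raster values") does under the hood; the grid,
origin, pixel size and point coordinates below are made-up illustration values,
not taken from Sofia's data. A point whose coordinates fall outside the raster
extent samples as null:

```python
def sample(grid, origin_x, origin_y, pixel_size, x, y):
    """Return the raster cell value under point (x, y), or None if the
    point falls outside the grid (this is the 'null value' QGIS reports)."""
    col = int((x - origin_x) / pixel_size)
    row = int((origin_y - y) / pixel_size)  # rows grow downward from the top-left origin
    if 0 <= row < len(grid) and 0 <= col < len(grid[0]):
        return grid[row][col]
    return None

# Toy 2x2 raster with top-left corner at (100, 200) and 10-unit pixels.
grid = [[1.0, 2.0],
        [3.0, 4.0]]

print(sample(grid, 100.0, 200.0, 10.0, 105.0, 195.0))  # inside the extent  -> 1.0
print(sample(grid, 100.0, 200.0, 10.0, 500.0, 195.0))  # outside the extent -> None
```

The most common cause of all-null samples is a CRS mismatch: if the raster is in
a projected CRS and the CSV coordinates are geographic (or vice versa), every
point maps outside the grid. Checking both layers' CRS in QGIS before sampling
usually resolves it.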

I hope that clarifies things,

Kind Regards,

Richard




From: QGIS-User 
mailto:qgis-user-boun...@lists.osgeo.org>> 
On Behalf Of ΣΟΦΙΑ ΤΣΙΤΟΥ via QGIS-User
Sent: 11 June 2024 12:52
To: qgis-user@lists.osgeo.org<mailto:qgis-user@lists.osgeo.org>
Subject: [Qgis-user] Fwd: Export data with tiff and csv files in QGIS 3.36.3


Dear QGIS team,

I have two maps in tiff format and coordinates in csv format and I try through 
zonal statistics and sample raster values to export the data but it gives me 
null values. Could you tell me the steps to extract the data because I might be 
making a mistake?

Thank you in advance,
Sofia Tsitou
--
Tsitou Sofia

Adjunct Lecturer
Department of Economics
University of Ioannina
Greece

Post Doctoral Fellow
Department of Economics
University of Macedonia, Thessaloniki
Greece

Research Fellow
DAISSy Research Group
Hellenic Open University
Greece

e-mail: sofitsi...@gmail.com<mailto:sofitsi...@gmail.com>
Homepage: https://sites.google.com/view/sofiatsitou/home 
<https://sites.google.com/view/sofiatsitou/home>



Re: [PATCH] ifcvt: Clarify if_info.original_cost.

2024-06-11 Thread Richard Sandiford
Robin Dapp  writes:
> The attached v3 tracks the use of cond_earliest as you suggested
> and adds its cost in default_noce_conversion_profitable_p.
>
> Bootstrapped and regtested on x86 and p10, aarch64 still
> running.  Regtested on riscv64.
>
> Regards
>  Robin
>
> Before noce_find_if_block processes a block it sets up an if_info
> structure that holds the original costs.  At that point the costs of
> the then/else blocks have not been added so we only care about the
> "if" cost.
>
> The code originally used BRANCH_COST for that but was then changed
> to COST_N_INSNS (2) - a compare and a jump.
>
> This patch computes the jump costs via
>   insn_cost (if_info.jump, ...)
> under the assumption that the target takes BRANCH_COST into account
> when costing a jump instruction.
>
> In noce_convert_multiple_sets, we keep track of the need for the initial
> CC comparison.  If we needed it for the generated sequence we add its
> cost in default_noce_conversion_profitable_p.

I was looking at the code in more detail and just wanted to check.
We have:

  int last_needs_comparison = -1;

  bool ok = noce_convert_multiple_sets_1
(if_info, _no_cmov, _src, , ,
 _insns, _needs_comparison);
  if (!ok)
  return false;

  /* If there are insns that overwrite part of the initial
 comparison, we can still omit creating temporaries for
 the last of them.
 As the second try will always create a less expensive,
 valid sequence, we do not need to compare and can discard
 the first one.  */
  if (last_needs_comparison != -1)
{
  end_sequence ();
  start_sequence ();
  ok = noce_convert_multiple_sets_1
(if_info, _no_cmov, _src, , ,
 _insns, _needs_comparison);
  /* Actually we should not fail anymore if we reached here,
 but better still check.  */
  if (!ok)
  return false;
}

But noce_convert_multiple_sets_1 ends with:

  /* Even if we did not actually need the comparison, we want to make sure
 to try a second time in order to get rid of the temporaries.  */
  if (*last_needs_comparison == -1)
*last_needs_comparison = 0;


  return true;

AFAICT that means that the first attempt is always redundant.

Have I missed something?

I don't know if this was something that Manolis's patches addressed.

Thanks,
Richard

>
> gcc/ChangeLog:
>
>   * ifcvt.cc (default_noce_conversion_profitable_p):  Add cost of
>   CC comparison.
>   (noce_convert_multiple_sets_1): Set use_cond_earliest.
>   (noce_process_if_block): Just use original cost.
>   (noce_find_if_block): Use insn_cost (jump_insn).
>   * ifcvt.h (struct noce_if_info): Add use_cond_earliest.
> ---
>  gcc/ifcvt.cc | 37 ++---
>  gcc/ifcvt.h  |  3 +++
>  2 files changed, 25 insertions(+), 15 deletions(-)
>
> diff --git a/gcc/ifcvt.cc b/gcc/ifcvt.cc
> index 58ed42673e5..9b408eeb313 100644
> --- a/gcc/ifcvt.cc
> +++ b/gcc/ifcvt.cc
> @@ -814,7 +814,16 @@ default_noce_conversion_profitable_p (rtx_insn *seq,
>/* Cost up the new sequence.  */
>unsigned int cost = seq_cost (seq, speed_p);
>  
> -  if (cost <= if_info->original_cost)
> +  /* If the created sequence does not use cond_earliest (but the jump
> + does) add its cost to the original_cost here.  */
> +  unsigned int cost_adjust = 0;
> +
> +  if (if_info->jump != if_info->cond_earliest
> +  && !if_info->use_cond_earliest)
> +cost_adjust = insn_cost (if_info->cond_earliest,
> +  if_info->speed_p);
> +
> +  if (cost <= if_info->original_cost + cost_adjust)
>  return true;
>  
>/* When compiling for size, we can make a reasonably accurately guess
> @@ -3780,6 +3789,7 @@ noce_convert_multiple_sets_1 (struct noce_if_info 
> *if_info,
> temp_dest = temp_dest2;
> if (!second_try && read_comparison)
>   *last_needs_comparison = count;
> +   if_info->use_cond_earliest = true;
>   }
>else
>   {
> @@ -3931,16 +3941,13 @@ noce_process_if_block (struct noce_if_info *if_info)
>   to calculate a value for x.
>   ??? For future expansion, further expand the "multiple X" rules.  */
>  
> -  /* First look for multiple SETS.  The original costs already include
> - a base cost of COSTS_N_INSNS (2): one instruction for the compare
> - (which we will be needing either way) and one instruction for the
> - branch.  When comparing costs we want to use the branch instruction
> - cost and the sets vs. the cmovs generated here.  Therefore subtract
> - the costs of the compare before checking.
> - ??? Actually, instead of the branch instruction costs we might want
> - to 

Re: [LyX/master] Add "full" drawing strategy

2024-06-11 Thread Richard Kimberly Heck

On 6/11/24 09:36, Jean-Marc Lasgouttes wrote:

Le 11/06/2024 à 15:17, Jean-Marc Lasgouttes a écrit :

commit f48cf461010daa8aceb220a6762cb50c1192db0d
Author: Jean-Marc Lasgouttes 
Date:   Sat Jul 15 11:46:25 2023 +0200

 Add "full" drawing strategy


Riki, would it be OK to backport this change to 2.4.1? I would like to 
know whether it helps people with weird display issues.


The point is to offer a mode where the screen is redrawn fully every 
time (only the drawing, not the metrics computation). The performance 
seems quite reasonable to me, actually.


I assume it is otherwise safe?

Having a proper UI for it would be great, but I am procrastinating on 
that.


What's our policy on preference updates? (I can't remember.) Do we care 
if a preference file saved with 2.4.1 cannot be used with 2.4.0?


Riki


--
lyx-devel mailing list
lyx-devel@lists.lyx.org
http://lists.lyx.org/mailman/listinfo/lyx-devel


Re: [patch, rs6000, middle-end 0/1] v1: Add implementation for different targets for pair mem fusion

2024-06-11 Thread Richard Sandiford
Ajit Agarwal  writes:
> On 11/06/24 7:07 pm, Richard Sandiford wrote:
>> Ajit Agarwal  writes:
>>> Hello Richard:
>>> On 11/06/24 6:12 pm, Richard Sandiford wrote:
>>>> Ajit Agarwal  writes:
>>>>> Hello Richard:
>>>>>
>>>>> On 11/06/24 5:15 pm, Richard Sandiford wrote:
>>>>>> Ajit Agarwal  writes:
>>>>>>> Hello Richard:
>>>>>>> On 11/06/24 4:56 pm, Ajit Agarwal wrote:
>>>>>>>> Hello Richard:
>>>>>>>>
>>>>>>>> On 11/06/24 4:36 pm, Richard Sandiford wrote:
>>>>>>>>> Ajit Agarwal  writes:
>>>>>>>>>>>>>> After LRA reload:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> (insn 9299 2472 2412 187 (set (reg:V2DF 51 19 [orig:240 
>>>>>>>>>>>>>> vect__302.545 ] [240])
>>>>>>>>>>>>>> (mem:V2DF (plus:DI (reg:DI 8 8 [orig:1285 ivtmp.886 ] 
>>>>>>>>>>>>>> [1285])
>>>>>>>>>>>>>> (const_int 16 [0x10])) [1 MEM >>>>>>>>>>>>> real(kind=8)> [(real(kind=8) *)_4188]+16 S16 A64])) 
>>>>>>>>>>>>>> "shell_lam.fppized.f":238:72 1190 {vsx_movv2df_64bit}
>>>>>>>>>>>>>>  (nil))
>>>>>>>>>>>>>> (insn 2412 9299 2477 187 (set (reg:V2DF 51 19 [orig:240 
>>>>>>>>>>>>>> vect__302.545 ] [240])
>>>>>>>>>>>>>> (neg:V2DF (fma:V2DF (reg:V2DF 39 7 [ MEM >>>>>>>>>>>>> real(kind=8)> [(real(kind=8) *)_4050]+16 ])
>>>>>>>>>>>>>> (reg:V2DF 44 12 [3119])
>>>>>>>>>>>>>> (neg:V2DF (reg:V2DF 51 19 [orig:240 
>>>>>>>>>>>>>> vect__302.545 ] [240]) {*vsx_nfmsv2df4}
>>>>>>>>>>>>>>  (nil))
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> (insn 2473 9311 9312 187 (set (reg:V2DF 38 6 [orig:905 
>>>>>>>>>>>>>> vect__302.545 ] [905])
>>>>>>>>>>>>>> (neg:V2DF (fma:V2DF (reg:V2DF 44 12 [3119])
>>>>>>>>>>>>>> (reg:V2DF 38 6 [orig:2561 MEM >>>>>>>>>>>>> real(kind=8)> [(real(kind=8) *)_4050] ] [2561])
>>>>>>>>>>>>>> (neg:V2DF (reg:V2DF 47 15 [5266]) 
>>>>>>>>>>>>>> {*vsx_nfmsv2df4}
>>>>>>>>>>>>>>  (nil))
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In the above allocated code it assign registers 51 and 47 and 
>>>>>>>>>>>>>> they are not sequential.
>>>>>>>>>>>>>
>>>>>>>>>>>>> The reload for 2412 looks valid.  What was the original pre-reload
>>>>>>>>>>>>> version of insn 2473?  Also, what happened to insn 2472?  Was it 
>>>>>>>>>>>>> deleted?
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> This is preload version of 2473:
>>>>>>>>>>>>
>>>>>>>>>>>> (insn 2473 2396 2478 161 (set (reg:V2DF 905 [ vect__302.545 ])
>>>>>>>>>>>> (neg:V2DF (fma:V2DF (reg:V2DF 4283 [3119])
>>>>>>>>>>>> (subreg:V2DF (reg:OO 2561 [ MEM >>>>>>>>>>> real(kind=8)> [(real(kind=8) *)_4050] ]) 0)
>>>>>>>>>>>> (neg:V2DF (subreg:V2DF (reg:OO 2572 [ 
>>>>>>>>>>>> vect__300.543_236 ]) 0) {*vsx_nfmsv2df4}
>>>>>>>>>>>>  (expr_list:REG_DEAD (reg:OO 2572 [ vect__300.543_236 ])
>>>>>>>>>>>> (expr_list:REG_DEAD (reg:OO 2561 [ MEM >>>>>>>>>>> real(kind=8)> [(real(kind=8) *)_4050] ])

RE: Today's WWDC keynote and iOS 18 announcement

2024-06-11 Thread Richard Turner
I kind of figured that, but was hoping.  Is the 15 the first model to use the M 
chips?  I think it is.

I’m too lazy to go searching for that info right now.

If in the fall when the 16 comes out, if my carrier offers a great trade-in 
deal on my 13 pro, I might update to the 15 pro, or even the 16 if the offer is 
good enough.  I won’t be holding my breath though.

 

 

 

Richard, USA

"It's no great honor to be blind, but it's more than a nuisance and less than a 
disaster. Either you're going to fight like hell when your sight fails or 
you're going to stand on the sidelines for the rest of your life." -- Dr. 
Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009)

 

My web site:  <https://www.turner42.com> https://www.turner42.com

 

From: viphone@googlegroups.com  On Behalf Of Chela 
Robles
Sent: Monday, June 10, 2024 9:44 PM
To: viphone@googlegroups.com
Subject: Re: Today's WWDC keynote and iOS 18 announcement

 

Well, I got an email from Apple today and it looks like all the AI stuff is 
gonna be on all of the iPhone 15 models, not series 14. So I for one won’t be 
getting any major update as far as seeing anything with AI as to my knowledge 
since I have an iPhone 14.

Sent from my iPhone





On Jun 10, 2024, at 8:36 PM, Dennis Long mailto:dennisl1...@gmail.com> > wrote:



I agree.

 

From: viphone@googlegroups.com <mailto:viphone@googlegroups.com>  
mailto:viphone@googlegroups.com> > On Behalf Of 
Sieghard Weitzel
Sent: Monday, June 10, 2024 5:45 PM
To: viphone@googlegroups.com <mailto:viphone@googlegroups.com> 
Subject: RE: Today's WWDC keynote and iOS 18 announcement

 

We all know what Siri is like right now, but with full AI integration I have a 
feeling we may all be surprised how amazing it may end up being; of course from 
the little I heard this will not be one of these on/off moments where as soon 
as iOS 18 is released in September it will include all of what it will have a 
year from now and after the main .1, .2 and .3 updates which typically happen 
in Late October, before Christmas or in the new year and again in early March. 
And then of course it is my understanding that the true Apple Intelligence will 
require at least an iPhone 15 Pro or later so for many of us it may be years 
before we experience it all and by then of course even more powerful hardware 
and advances in AI will even add more functionality. One thing I am sure of is 
that in the next years we'll see a lot of cool stuff around all the AI 
development and we'll benefit greatly from it.

 

From: viphone@googlegroups.com <mailto:viphone@googlegroups.com>  
mailto:viphone@googlegroups.com> > On Behalf Of 
Cristóbal Muñoz
Sent: Monday, June 10, 2024 12:30 PM
To: viphone@googlegroups.com <mailto:viphone@googlegroups.com> 
Subject: RE: Today's WWDC keynote and iOS 18 announcement

 

I literally asked Siri this morning what time was the WWDC keynote and I got 
one of those generic “this is what I found on the web.”

Whatever they do or don’t do with Siri, it can’t be as bad as it is X years 
later than when it was introduced. Just one hot useless mess.

 

Cristóbal

 

From: viphone@googlegroups.com <mailto:viphone@googlegroups.com>  
mailto:viphone@googlegroups.com> > On Behalf Of Tai 
Tomasi
Sent: Monday, June 10, 2024 11:10 AM
To: viphone@googlegroups.com <mailto:viphone@googlegroups.com> 
Subject: Re: Today's WWDC keynote and iOS 18 announcement

 

We are currently into a whole new section where they talk about AI. They just 
separated it.

Tai Tomasi, J.D., M.P.A.

Email: tai.toma...@gmail.com <mailto:tai.toma...@gmail.com> 

Sent from my iPhone. Please excuse my brevity and any grammatical errors.

 

On Jun 10, 2024, at 2:08 PM, Sieghard Weitzel mailto:siegh...@live.ca> > wrote:

 

After all the hype in the tech media about how Apple was going to announce how 
Siri in iOS 18 would be deeply integrated with AI, it was disappointing to 
listen to the section about iOS 18 and to my knowledge not hear the word Siri 
or AI even once. Of course this could still mean that Apple decided to not go 
into this because maybe it's not yet ready to be presented and we'll hear all 
about it when the new iPhones are released in September, but I thought it was a 
bit of a let down anyways.

 

Best regards,

Sieghard

 

-- 
The following information is important for all members of the V iPhone list.
 
If you have any questions or concerns about the running of this list, or if you 
feel that a member's post is inappropriate, please contact the owners or 
moderators directly rather than posting on the list itself.
 
Your V iPhone list moderator is Mark Taylor. Mark can be reached at: 
mk...@ucla.edu <mailto:mk...@ucla.edu> . Your list owner is Cara Quinn - you 
can reach Cara at caraqu...@caraquinn.com <mailto:caraqu...@caraquinn.com> 
 
The archives for this list can be searched at:
http://www.

GitHub email lists require running nnnfree JS to join.

2024-06-11 Thread Richard Stallman
--- Start of forwarded message ---
Date: Sat, 08 Jun 2024 09:44:58 +0300
Message-Id: <86jzj0djbp@gnu.org>
From: Eli Zaretskii 
To: r...@gnu.org
Cc: a...@alphapapa.net, y...@rabkins.net, emacs-de...@gnu.org
In-Reply-To:  (message from Richard
Stallman on Fri, 07 Jun 2024 22:54:26 -0400)
Subject: Re: Emms and the Spotify Search API

> From: Richard Stallman 
> Cc: y...@rabkins.net, emacs-de...@gnu.org
> Date: Fri, 07 Jun 2024 22:54:26 -0400
> 
>   > Sure, please see <https://github.com/alphapapa/listen.el/discussions/29>.
> 
> Is it possible to participate in that discussion without running Github's
> nonfree Javascript code?  I know it is impossible to make a Github account
> without running that code.  Can you participate in that discussion without
> a Github account?

You can participate via email, but I think doing so requires that you
first login and add yourself to the people who are tracking the
discussion.  After you add yourself, you get all the posts via email
and can reply via email.  But the requirement to login and add
yourself probably means that in practice the answer is NO, at least as
long as email participation is considered.

Maybe there are other methods, but I'm unaware of them.
--- End of forwarded message ---

-- 
Dr Richard Stallman (https://stallman.org)
Chief GNUisance of the GNU Project (https://gnu.org)
Founder, Free Software Foundation (https://fsf.org)
Internet Hall-of-Famer (https://internethalloffame.org)





Re: [Intel-wired-lan] [PATCH] ixgbe: Add support for firmware update

2024-06-11 Thread Chien, Richard (Options Engineering)
> I would also think about why Intel has not submitted this code before?
> Maybe because it does things the wrong way? Please look at how other
> Ethernet drivers support firmware. Is it the same? It might be you need to
> throw away this code and reimplement it to mainline standards, maybe using
> devlink flash, or ethtool -f.

See Jacob's reply for details.
 
> One additional question. Is the firmware part of linux-firmware? Given this is
> Intel, I expect the firmware is distributable, but have they distributed it?

It is the Intel 10G NIC firmware embedded into HPE firmware update packages and 
redistributed to the end user.

Thanks,
Richard


Re: [Intel-wired-lan] [PATCH] igb: Add support for firmware update

2024-06-11 Thread Chien, Richard (Options Engineering)
> However, this implementation is wrong. It is exposing the
> ETHTOOL_GEEPROM and ETHTOOL_SEEPROM interface and abusing it to
> implement a non-standard interface that is custom to the out-of-tree Intel
> drivers to support the flash update utility.
> 
> This implementation was widely rejected when discovered in i40e and in
> submissions for the  ice driver. It abuses the ETHTOOL_GEEPROM and
> ETHTOOL_SEEPROM interface in order to allow tools to access the hardware.
> The use violates the documented behavior of the ethtool interface and breaks
> the intended functionality of ETHTOOL_GEEPROM and ETHTOOL_SEEPROM.

Thank you for your detailed explanation.

> The correct way to implement flash update is via the devlink dev flash
> interface, using request_firmware, and implementing the entire update
> process in the driver. The common portions of this could be done in a shared
> module.

In that case, does Intel have a plan to implement this mechanism
in in-kernel drivers?

> Attempting to support the broken legacy update that is supported by the out-
> of-tree drivers is a non-starter for upstream. We (Intel) have known this for
> some time, and this is why the patches and support have never been
> published.

Although the utility in question has been enhanced to perform firmware
updates on Intel 1G/10G NICs by using /dev/mem, that method does not work
when Secure Boot is enabled. Since out-of-band firmware update (via the
BMC) is not supported for Intel 1G/10G NICs, it would be desirable to have
support for the devlink dev flash interface in the in-kernel drivers
(igb & ixgbe).

Thanks
Richard  
 


Re: [PATCH] [testsuite] [arm] test board cflags in multilib.exp

2024-06-11 Thread Richard Earnshaw (lists)
On 07/06/2024 05:47, Alexandre Oliva wrote:
> 
> multilib.exp checks for multilib-altering flags in a board's
> multilib_flags and skips the test when any are present, but if such
> flags appear in the board's cflags, where they have the same distorting
> effect on the tested multilibs, we fail to skip the test.
> 
> Extend the skipping logic to board's cflags as well.
> 
> Regstrapping on x86_64-linux-gnu.  Already tested on arm-eabi (gcc-13
> and trunk).  Ok to install?
> 

OK, thanks.

R.

> 
> for  gcc/testsuite/ChangeLog
> 
>   * gcc.target/arm/multilib.exp: Skip based on board cflags too.
> ---
>  gcc/testsuite/gcc.target/arm/multilib.exp |8 +---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/gcc/testsuite/gcc.target/arm/multilib.exp 
> b/gcc/testsuite/gcc.target/arm/multilib.exp
> index 4442d5d754bd6..12c93bc89d222 100644
> --- a/gcc/testsuite/gcc.target/arm/multilib.exp
> +++ b/gcc/testsuite/gcc.target/arm/multilib.exp
> @@ -18,13 +18,15 @@ load_lib gcc-dg.exp
>  
>  dg-init
>  
> -if { [board_info [target_info name] exists multilib_flags] 
> - && [regexp 
> {(-marm|-mthumb|-march=.*|-mcpu=.*|-mfpu=.*|-mfloat-abi=.*)\y} [board_info 
> [target_info name] multilib_flags]] } {
> +foreach flagsvar {multilib_flags cflags} {
> +  if { [board_info [target_info name] exists $flagsvar] 
> + && [regexp 
> {(-marm|-mthumb|-march=.*|-mcpu=.*|-mfpu=.*|-mfloat-abi=.*)\y} [board_info 
> [target_info name] $flagsvar]] } {
>   
>  # Multilib flags override anything we can apply to a test, so
>  # skip if any of the above options are set there.
> -verbose "skipping multilib tests due to multilib_flags setting" 1
> +verbose "skipping multilib tests due to $flagsvar setting" 1
>  return
> +  }
>  }
>  
>  # We don't want to run this test multiple times in a parallel make check.
> 



Re: [PATCH v3 2/2] testsuite: Fix expand-return CMSE test for Armv8.1-M [PR115253]

2024-06-11 Thread Richard Earnshaw (lists)
On 10/06/2024 15:04, Torbjörn SVENSSON wrote:
> For Armv8.1-M, the clearing of the registers is handled differently than
> for Armv8-M, so update the test case accordingly.
> 
> gcc/testsuite/ChangeLog:
> 
>   PR target/115253
>   * gcc.target/arm/cmse/extend-return.c: Update test case
>   condition for Armv8.1-M.
> 
> Signed-off-by: Torbjörn SVENSSON 
> Co-authored-by: Yvan ROUX 
> ---
>  .../gcc.target/arm/cmse/extend-return.c   | 62 +--
>  1 file changed, 56 insertions(+), 6 deletions(-)
> 
> diff --git a/gcc/testsuite/gcc.target/arm/cmse/extend-return.c 
> b/gcc/testsuite/gcc.target/arm/cmse/extend-return.c
> index 081de0d699f..2288d166bd3 100644
> --- a/gcc/testsuite/gcc.target/arm/cmse/extend-return.c
> +++ b/gcc/testsuite/gcc.target/arm/cmse/extend-return.c
> @@ -1,5 +1,7 @@
>  /* { dg-do compile } */
>  /* { dg-options "-mcmse -fshort-enums" } */
> +/* ARMv8-M expectation with target { ! arm_cmse_clear_ok }.  */
> +/* ARMv8.1-M expectation with target arm_cmse_clear_ok.  */
>  /* { dg-final { check-function-bodies "**" "" "" } } */
>  
>  #include 
> @@ -20,7 +22,15 @@ typedef enum offset __attribute__ ((cmse_nonsecure_call)) 
> ns_enum_foo_t (void);
>  typedef bool __attribute__ ((cmse_nonsecure_call)) ns_bool_foo_t (void);
>  
>  /*
> -**unsignNonsecure0:
> +**unsignNonsecure0:  { target arm_cmse_clear_ok }
> +**   ...
> +**   blxns   r[0-3]
> +**   ...
> +**   uxtbr0, r0
> +**   ...
> +*/
> +/*
> +**unsignNonsecure0: { target { ! arm_cmse_clear_ok } }
>  **   ...
>  **   bl  __gnu_cmse_nonsecure_call
>  **   uxtbr0, r0
> @@ -32,7 +42,15 @@ unsigned char unsignNonsecure0 (ns_unsign_foo_t * ns_foo_p)
>  }
>  
>  /*
> -**signNonsecure0:
> +**signNonsecure0:  { target arm_cmse_clear_ok }
> +**   ...
> +**   blxns   r[0-3]
> +**   ...
> +**   sxtbr0, r0
> +**   ...
> +*/
> +/*
> +**signNonsecure0: { target { ! arm_cmse_clear_ok } }
>  **   ...
>  **   bl  __gnu_cmse_nonsecure_call
>  **   sxtbr0, r0
> @@ -44,7 +62,15 @@ signed char signNonsecure0 (ns_sign_foo_t * ns_foo_p)
>  }
>  
>  /*
> -**shortUnsignNonsecure0:
> +**shortUnsignNonsecure0:  { target arm_cmse_clear_ok }
> +**   ...
> +**   blxns   r[0-3]
> +**   ...
> +**   uxthr0, r0
> +**   ...
> +*/
> +/*
> +**shortUnsignNonsecure0: { target { ! arm_cmse_clear_ok } }
>  **   ...
>  **   bl  __gnu_cmse_nonsecure_call
>  **   uxthr0, r0
> @@ -56,7 +82,15 @@ unsigned short shortUnsignNonsecure0 
> (ns_short_unsign_foo_t * ns_foo_p)
>  }
>  
>  /*
> -**shortSignNonsecure0:
> +**shortSignNonsecure0:  { target arm_cmse_clear_ok }
> +**   ...
> +**   blxns   r[0-3]
> +**   ...
> +**   sxthr0, r0
> +**   ...
> +*/
> +/*
> +**shortSignNonsecure0: { target { ! arm_cmse_clear_ok } }
>  **   ...
>  **   bl  __gnu_cmse_nonsecure_call
>  **   sxthr0, r0
> @@ -68,7 +102,15 @@ signed short shortSignNonsecure0 (ns_short_sign_foo_t * 
> ns_foo_p)
>  }
>  
>  /*
> -**enumNonsecure0:
> +**enumNonsecure0:  { target arm_cmse_clear_ok }
> +**   ...
> +**   blxns   r[0-3]
> +**   ...
> +**   uxtbr0, r0
> +**   ...
> +*/
> +/*
> +**enumNonsecure0: { target { ! arm_cmse_clear_ok } }
>  **   ...
>  **   bl  __gnu_cmse_nonsecure_call
>  **   uxtbr0, r0
> @@ -80,7 +122,15 @@ unsigned char __attribute__((noipa)) enumNonsecure0 
> (ns_enum_foo_t * ns_foo_p)
>  }
>  
>  /*
> -**boolNonsecure0:
> +**boolNonsecure0:  { target arm_cmse_clear_ok }
> +**   ...
> +**   blxns   r[0-3]
> +**   ...
> +**   uxtbr0, r0
> +**   ...
> +*/
> +/*
> +**boolNonsecure0: { target { ! arm_cmse_clear_ok } }
>  **   ...
>  **   bl  __gnu_cmse_nonsecure_call
>  **   uxtbr0, r0

OK when the nits in the first patch are sorted.

R.


Re: [PATCH v3 1/2] arm: Zero/Sign extends for CMSE security on Armv8-M.baseline [PR115253]

2024-06-11 Thread Richard Earnshaw (lists)
On 10/06/2024 15:04, Torbjörn SVENSSON wrote:
> Properly handle zero and sign extension for Armv8-M.baseline as
> Cortex-M23 can have the security extension active.
> Currently, there is an internal compiler error on Cortex-M23 in the
> epilogue processing of sign extension.
> 
> This patch addresses the following CVE-2024-0151 for Armv8-M.baseline.
> 
> gcc/ChangeLog:
> 
>   PR target/115253
>   * config/arm/arm.cc (cmse_nonsecure_call_inline_register_clear):
>   Sign extend for Thumb1.
>   (thumb1_expand_prologue): Add zero/sign extend.
> 
> Signed-off-by: Torbjörn SVENSSON 
> Co-authored-by: Yvan ROUX 
> ---
>  gcc/config/arm/arm.cc | 71 ++-
>  1 file changed, 63 insertions(+), 8 deletions(-)
> 
> diff --git a/gcc/config/arm/arm.cc b/gcc/config/arm/arm.cc
> index ea0c963a4d6..e7b4caf1083 100644
> --- a/gcc/config/arm/arm.cc
> +++ b/gcc/config/arm/arm.cc
> @@ -19220,17 +19220,22 @@ cmse_nonsecure_call_inline_register_clear (void)
> || TREE_CODE (ret_type) == BOOLEAN_TYPE)
> && known_lt (GET_MODE_SIZE (TYPE_MODE (ret_type)), 4))
>   {
> -   machine_mode ret_mode = TYPE_MODE (ret_type);
> +   rtx ret_reg = gen_rtx_REG (TYPE_MODE (ret_type), R0_REGNUM);
> +   rtx si_reg = gen_rtx_REG (SImode, R0_REGNUM);
> rtx extend;
> if (TYPE_UNSIGNED (ret_type))
> - extend = gen_rtx_ZERO_EXTEND (SImode,
> -   gen_rtx_REG (ret_mode, 
> R0_REGNUM));
> + extend = gen_rtx_SET (si_reg, gen_rtx_ZERO_EXTEND (SImode,
> +ret_reg));
> else
> - extend = gen_rtx_SIGN_EXTEND (SImode,
> -   gen_rtx_REG (ret_mode, 
> R0_REGNUM));
> -   emit_insn_after (gen_rtx_SET (gen_rtx_REG (SImode, R0_REGNUM),
> -  extend), insn);
> -
> + /* Signed-extension is a special case because of
> +thumb1_extendhisi2.  */
> + if (TARGET_THUMB1

You effectively have an 'else if' split across a comment here, and the 
indentation looks weird.  Either write 'else if' on one line (and re-indent 
accordingly) or put this entire block inside braces.

> + && known_ge (GET_MODE_SIZE (TYPE_MODE (ret_type)), 2))

You can use known_eq here.  We'll never have any value other than 2, given the
known_lt (4) above, and anyway it doesn't make sense to call extendhisi with any
other size.

> +   extend = gen_thumb1_extendhisi2 (si_reg, ret_reg);
> + else
> +   extend = gen_rtx_SET (si_reg, gen_rtx_SIGN_EXTEND (SImode,
> +  ret_reg));
> +   emit_insn_after (extend, insn);
>   }
>  
>  
> @@ -27250,6 +27255,56 @@ thumb1_expand_prologue (void)
>live_regs_mask = offsets->saved_regs_mask;
>lr_needs_saving = live_regs_mask & (1 << LR_REGNUM);
>  

Similar comments to above apply to the hunk below.

> +  /* The AAPCS requires the callee to widen integral types narrower
> + than 32 bits to the full width of the register; but when handling
> + calls to non-secure space, we cannot trust the callee to have
> + correctly done so.  So forcibly re-widen the result here.  */
> +  if (IS_CMSE_ENTRY (func_type))
> +{
> +  function_args_iterator args_iter;
> +  CUMULATIVE_ARGS args_so_far_v;
> +  cumulative_args_t args_so_far;
> +  bool first_param = true;
> +  tree arg_type;
> +  tree fndecl = current_function_decl;
> +  tree fntype = TREE_TYPE (fndecl);
> +  arm_init_cumulative_args (_so_far_v, fntype, NULL_RTX, fndecl);
> +  args_so_far = pack_cumulative_args (_so_far_v);
> +  FOREACH_FUNCTION_ARGS (fntype, arg_type, args_iter)
> + {
> +   rtx arg_rtx;
> +
> +   if (VOID_TYPE_P (arg_type))
> + break;
> +
> +   function_arg_info arg (arg_type, /*named=*/true);
> +   if (!first_param)
> + /* We should advance after processing the argument and pass
> +the argument we're advancing past.  */
> + arm_function_arg_advance (args_so_far, arg);
> +   first_param = false;
> +   arg_rtx = arm_function_arg (args_so_far, arg);
> +   gcc_assert (REG_P (arg_rtx));
> +   if ((TREE_CODE (arg_type) == INTEGER_TYPE
> +   || TREE_CODE (arg_type) == ENUMERAL_TYPE
> +   || TREE_CODE (arg_type) == BOOLEAN_TYPE)
> +   && known_lt (GET_MODE_SIZE (GET_MODE (arg_rtx)), 4))
> + {
> +   rtx res_reg = gen_rtx_REG (SImode, REGNO (arg_rtx));
> +   if (TYPE_UNSIGNED (arg_type))
> + emit_set_insn (res_reg, gen_rtx_ZERO_EXTEND (SImode, arg_rtx));
> +   else
> + /* Signed-extension is a special case because of
> +thumb1_extendhisi2.  */
> + if 

[jira] [Resolved] (DERBY-6445) JDBC 4.2: Add support for new date and time classes

2024-06-11 Thread Richard N. Hillegas (Jira)


 [ 
https://issues.apache.org/jira/browse/DERBY-6445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard N. Hillegas resolved DERBY-6445.

Fix Version/s: 10.15.2.1
   10.16.1.2
   10.17.1.1
   10.18.0.0
   Resolution: Fixed

> JDBC 4.2: Add support for new date and time classes
> ---
>
> Key: DERBY-6445
> URL: https://issues.apache.org/jira/browse/DERBY-6445
> Project: Derby
>  Issue Type: Improvement
>  Components: JDBC
>Affects Versions: 10.10.1.1
>Reporter: Knut Anders Hatlen
>Priority: Major
> Fix For: 10.15.2.1, 10.16.1.2, 10.17.1.1, 10.18.0.0
>
> Attachments: DERBY-6445.patch, Derby-6445.html, Derby-6445.html, 
> derby-6445-01-aa-DERBY-6445.patchPlusJavadocCleanup.diff, 
> derby-6445-01-ab-DERBY-6445.patchPlusPlusTweaks.diff, 
> derby-6445-02-aa-patchExplanation.diff, tweaks.diff
>
>
> JDBC 4.2 added type mappings for new date and time classes found in Java 8. 
> Derby should support these new mappings.
> This would at least affect Derby's implementation of the various getObject(), 
> setObject() and setNull() methods in ResultSet, PreparedStatement and 
> CallableStatement.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [patch, rs6000, middle-end 0/1] v1: Add implementation for different targets for pair mem fusion

2024-06-11 Thread Richard Sandiford
Ajit Agarwal  writes:
> Hello Richard:
> On 11/06/24 6:12 pm, Richard Sandiford wrote:
>> Ajit Agarwal  writes:
>>> Hello Richard:
>>>
>>> On 11/06/24 5:15 pm, Richard Sandiford wrote:
>>>> Ajit Agarwal  writes:
>>>>> Hello Richard:
>>>>> On 11/06/24 4:56 pm, Ajit Agarwal wrote:
>>>>>> Hello Richard:
>>>>>>
>>>>>> On 11/06/24 4:36 pm, Richard Sandiford wrote:
>>>>>>> Ajit Agarwal  writes:
>>>>>>>>>>>> After LRA reload:
>>>>>>>>>>>>
>>>>>>>>>>>> (insn 9299 2472 2412 187 (set (reg:V2DF 51 19 [orig:240 
>>>>>>>>>>>> vect__302.545 ] [240])
>>>>>>>>>>>> (mem:V2DF (plus:DI (reg:DI 8 8 [orig:1285 ivtmp.886 ] 
>>>>>>>>>>>> [1285])
>>>>>>>>>>>> (const_int 16 [0x10])) [1 MEM <vector(2) real(kind=8)> [(real(kind=8) *)_4188]+16 S16 A64])) 
>>>>>>>>>>>> "shell_lam.fppized.f":238:72 1190 {vsx_movv2df_64bit}
>>>>>>>>>>>>  (nil))
>>>>>>>>>>>> (insn 2412 9299 2477 187 (set (reg:V2DF 51 19 [orig:240 
>>>>>>>>>>>> vect__302.545 ] [240])
>>>>>>>>>>>> (neg:V2DF (fma:V2DF (reg:V2DF 39 7 [ MEM <vector(2) real(kind=8)> [(real(kind=8) *)_4050]+16 ])
>>>>>>>>>>>> (reg:V2DF 44 12 [3119])
>>>>>>>>>>>> (neg:V2DF (reg:V2DF 51 19 [orig:240 vect__302.545 
>>>>>>>>>>>> ] [240]) {*vsx_nfmsv2df4}
>>>>>>>>>>>>  (nil))
>>>>>>>>>>>>
>>>>>>>>>>>> (insn 2473 9311 9312 187 (set (reg:V2DF 38 6 [orig:905 
>>>>>>>>>>>> vect__302.545 ] [905])
>>>>>>>>>>>> (neg:V2DF (fma:V2DF (reg:V2DF 44 12 [3119])
>>>>>>>>>>>> (reg:V2DF 38 6 [orig:2561 MEM <vector(2) real(kind=8)> [(real(kind=8) *)_4050] ] [2561])
>>>>>>>>>>>> (neg:V2DF (reg:V2DF 47 15 [5266]) 
>>>>>>>>>>>> {*vsx_nfmsv2df4}
>>>>>>>>>>>>  (nil))
>>>>>>>>>>>>
>>>>>>>>>>>> In the above allocated code it assigns registers 51 and 47, and they
>>>>>>>>>>>> are not sequential.
>>>>>>>>>>>
>>>>>>>>>>> The reload for 2412 looks valid.  What was the original pre-reload
>>>>>>>>>>> version of insn 2473?  Also, what happened to insn 2472?  Was it 
>>>>>>>>>>> deleted?
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> This is the pre-reload version of 2473:
>>>>>>>>>>
>>>>>>>>>> (insn 2473 2396 2478 161 (set (reg:V2DF 905 [ vect__302.545 ])
>>>>>>>>>> (neg:V2DF (fma:V2DF (reg:V2DF 4283 [3119])
>>>>>>>>>> (subreg:V2DF (reg:OO 2561 [ MEM <vector(2) real(kind=8)> [(real(kind=8) *)_4050] ]) 0)
>>>>>>>>>> (neg:V2DF (subreg:V2DF (reg:OO 2572 [ 
>>>>>>>>>> vect__300.543_236 ]) 0) {*vsx_nfmsv2df4}
>>>>>>>>>>  (expr_list:REG_DEAD (reg:OO 2572 [ vect__300.543_236 ])
>>>>>>>>>> (expr_list:REG_DEAD (reg:OO 2561 [ MEM <vector(2) real(kind=8)> [(real(kind=8) *)_4050] ])
>>>>>>>>>> (nil
>>>>>>>>>>
>>>>>>>>>> insn 2472 is replaced with 9299 after reload.
>>>>>>>>>
>>>>>>>>> You'd have to check the dumps to be sure, but I think 9299 is instead
>>>>>>>>> generated as an input reload of 2412, rather than being a replacement
>>>>>>>>> of insn 2472.  T
>>>>>>>>
>>>>>>>> Yes it is 
