[plasmashell] [Bug 436318] Save session doesn't work under Wayland
https://bugs.kde.org/show_bug.cgi?id=436318 Michael D changed: What|Removed |Added CC|nortex...@gmail.com | -- You are receiving this mail because: You are watching all bug changes.
[jira] [Commented] (COMDEV-546) project.apache.org uses google charts
[ https://issues.apache.org/jira/browse/COMDEV-546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17852709#comment-17852709 ] Gary D. Gregory commented on COMDEV-546: What does DPA mean in this context? > project.apache.org uses google charts > - > > Key: COMDEV-546 > URL: https://issues.apache.org/jira/browse/COMDEV-546 > Project: Community Development > Issue Type: Bug >Reporter: Sebb >Priority: Major > > p.a.o references Google charts. We don't (and won't) have a DPA with them. > The terms of Google charts do not allow for local hosting, so either prior > permission must be sought from the user, or the charting code must be > replaced. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: dev-unsubscr...@community.apache.org For additional commands, e-mail: dev-h...@community.apache.org
[Issue 24588] Buy Psilocybin Magic Mushrooms
https://issues.dlang.org/show_bug.cgi?id=24588 RazvanN changed: What|Removed |Added Status|NEW |RESOLVED CC||razvan.nitu1...@gmail.com Resolution|--- |INVALID --
Re: How to pass in reference a fixed array in parameter
On Wednesday, 5 June 2024 at 10:36:50 UTC, Nick Treleaven wrote:

```d
import std.stdio;

alias s_cell = int;

void main()
{
    writeln("Maze generation demo");
    s_cell [5][5] maze;
    int n;
    foreach (i, row; maze)
        foreach (j, col; row)
            maze[i][j] = n++;
    s_cell[][5] slices;
    foreach (i, _; maze)
        slices[i] = maze[i];
    print_maze (slices);
}

void print_maze ( s_cell [][] maze )
{
    foreach (a; maze)
        a.writeln();
}
```

Thanks for the feedback. Almost all my projects work intensively with multi-dimensional arrays, so I want to know the best way to manage them. I guess the best solutions so far are:

1) Only use dynamic arrays.
2) Use a single-dimension array, and compute the index value from x,y,z coordinates (makes dynamic allocation easier). This solution could work well with pointers too.
3) Make my own data structure or class containing the array, allowing the structure/class to be passed by reference. This could allow encapsulating single- or multi-dimensional arrays.

About .ptr, the documentation page states that:

~~~
The .ptr property for static and dynamic arrays will give the address of the first element in the array:
~~~

So I assumed that the following expressions were equivalent, but I guess the multiple dimensions do complicate things:

~~~
array.ptr == &array == &array[0]
~~~

So omitting the "ref" keyword, it's as if the data were read-only even though the variable is not passed by value. That means that this signature cannot modify the content of the 2D array:

~~~
void print_maze ( s_cell [][] maze )
~~~

For the create_maze() function, I would need to use the following signature, since it changes the content of the array:

~~~
void create_maze ( ref s_cell [][] maze )
~~~

I imagine `a.writeln();` is the same as `writeln(a);` ? Your foreach loops look better than mine. Here is the code I have been using to print the maze.
~~~
void print_maze ( s_cell [][] maze )
{
    // print top row, assume full
    foreach ( cell; maze[0] )
        write("+---");
    writeln("+");
    for ( int y = 0; y < maze.length; y++)
    {
        // print content
        write("|"); // assume left edge is always full
        for ( int x = 0; x < maze[y].length; x++)
        {
            write("   ");
            write( maze[y][x].east ? "|" : " " );
        }
        writeln();
        foreach ( cell; maze[y] )
            write( cell.south ? "+---" : "+   " );
        writeln("+");
    }
}
~~~

Your iteration version is neater:

~~~
foreach (i, row; maze)
    foreach (j, col; row)
        maze[i][j] = n++;
~~~

I like that using 2 variables (i,row) or (j,col) allows accessing the variable later as an element in a collection or as an index. It's more flexible. I guess I'll need to read more code to avoid programming too much old school (^_^).
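Option 2 above (a single flat array plus computed indices) can be sketched like this. This is a minimal illustration added for clarity, not code from the thread; the `Grid2D` name and layout choice are hypothetical:

```d
import std.stdio;

// A 2-D grid backed by one dynamic array, row-major layout.
struct Grid2D(T)
{
    T[] data;
    size_t width, height;

    this(size_t w, size_t h)
    {
        width = w;
        height = h;
        data = new T[](w * h); // single allocation for the whole grid
    }

    // Cell (x, y) lives at index y * width + x.
    ref T opIndex(size_t x, size_t y)
    {
        return data[y * width + x];
    }
}

void main()
{
    auto maze = Grid2D!int(5, 5);
    int n;
    foreach (y; 0 .. maze.height)
        foreach (x; 0 .. maze.width)
            maze[x, y] = n++;
    writeln(maze[2, 3]); // prints 17 (3 * 5 + 2)
}
```

Because the struct holds a slice, passing a `Grid2D` by value still shares the underlying storage, and passing it by `ref` additionally lets a callee resize or rebind it, which matches option 3 as well.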
Bug#1072035: marked as done (bookworm-pu: package dns-root-data/2024041801~deb12u1)
Control: reopen -1 Control: tags -1 + pending On Wed, 2024-06-05 at 21:51 +, Debian Bug Tracking System wrote: > Your message dated Wed, 05 Jun 2024 21:47:08 + > with message-id > and subject line Bug#1072035: fixed in dns-root-data > 2024041801~deb12u1 > has caused the Debian Bug report #1072035, > regarding bookworm-pu: package dns-root-data/2024041801~deb12u1 > to be marked as done. Please don't close release.debian.org bugs in your uploads. We'll do that once the fix is actually in (old)stable, i.e. after a point release. Regards, Adam
Compatibility with jdk 21
Hi team, Which version of Kafka is compatible with JDK 21, or is there a compatibility matrix I can refer to for this info? Regards, Sahil
Re: [edk2-devel] Call for TianoCore Community Meeting topics
No topics received. Meeting canceled. Mike From: Kinney, Michael D Sent: Tuesday, June 4, 2024 11:19 PM To: edk2-devel-groups-io Cc: Kinney, Michael D Subject: Call for TianoCore Community Meeting topics Hi, Are there any topics for the TianoCore community meeting this month? Thanks, Mike -=-=-=-=-=-=-=-=-=-=-=- Groups.io Links: You receive all messages sent to this group. View/Reply Online (#119502): https://edk2.groups.io/g/devel/message/119502 Mute This Topic: https://groups.io/mt/106498468/21656 Group Owner: devel+ow...@edk2.groups.io Unsubscribe: https://edk2.groups.io/g/devel/unsub [arch...@mail-archive.com] -=-=-=-=-=-=-=-=-=-=-=-
[USRP-users] Re: GPS fix behavior on USRP E320
On 05/06/2024 22:36, David Raeman wrote: Correct, gpsd was stopped (in fact I cannot even open the tty device if gpsd is running). I am also going to backpedal, because I haven't been able to reproduce what I saw/logged in the earlier test. The largest NMEA sentence burst I’m seeing is about 550 bytes. It's possible my earlier observation was a sporadic issue with the receiver, but it’s more likely I botched something in my test, because I cannot reproduce that behavior.

I did find the root cause of my problem, though, and it’s unrelated to the SDR. I have a Raspberry Pi in the same chassis as the USRP E320, and it has an attached USB3/Ethernet dongle. There’s a well-known issue where certain USB3 devices and cables emit significant broadband RF interference via the high-speed bus signaling. Afflicted devices can jam co-located receivers, including GPS and WiFi. Intel published a whitepaper on the topic more than a decade ago [1]. When I remove this USB3/Ethernet dongle from the system, GPS immediately works well. When I plug it back in, I immediately lose the satellites again. This dongle has nothing to do with the USRP’s function, but it was positioned just 3-4 inches from the GPS antenna that feeds into the USRP. So not an SDR issue, but perhaps this thread may help a USRP user in the future.

[1] https://www.usb.org/sites/default/files/327216.pdf

Thanks for sleuthing this, David!

*From:* Marcus D. Leech *Sent:* Wednesday, June 5, 2024 7:59 PM *To:* David Raeman ; usrp-users@lists.ettus.com *Subject:* Re: [USRP-users] Re: GPS fix behavior on USRP E320

On 05/06/2024 11:19, David Raeman wrote: Thanks for the suggestion – in this case they were all sitting on the roof of my vehicle in an open parking lot, with 6-8” separation between radios. I guess there could be minimal shadowing for satellites at low grazing angles, but I’m skeptical of that as a full explanation. I have a hypothesis that the default 5Hz update rate is problematic on these devices.
The serial connection between the GPS receiver and the Zynq PS runs at 38400 baud. With standard 8N1 framing, that only allows for 768 bytes of sentence data per 200ms cycle. If I capture the raw GPS serial output (by directly watching /dev/ttyPS1, not the scrubbed data filtered through gpsd), it’s quickly obvious that many sentences get truncated and/or dropped. For example, there are very frequent “time skips” happening in the time-related sentences, as well as random sentence fragments. Some cycles would be expected to have a larger data volume, such as when multiple GPGSV sentences list all satellites in view, and I think that’s mangling the serial stream. This explains discrepancies in what ‘gpsmon’ sees, as well as discrepancies I’ve sometimes seen on E320s trying to sync common GPS time with PPS assertion (sometimes radios are wrong by 200ms). This should not impact the “gps_locked” sensor, which gets its state via an I/O signal from the GPS receiver and not by parsing sentences. However, I am currently using information from sentences to determine lock status because “gps_locked” doesn’t seem to work as expected in UHD 4.4 on the E320 (it looks like that might’ve been fixed in UHD 4.5, though). So long story short – I think the 5Hz update rate is problematic. It can be changed to 1Hz by removing a resistor, and as far as I can tell, neither UHD nor the radio filesystem would care about that change. I may try this on one radio and see if it helps improve consistency. -David

You're not trying to capture /dev/ttyPS1 data *while* GPSD is capturing it, are you? You can't usefully share a resource like a serial port -- some characters will go to you, some to GPSD. Now, having said that, yeah, only 768 bytes per update interval max. How many bytes in a typical NMEA sentence, and how many sentences per interval?

*From:* Marcus D.
Leech <mailto:patchvonbr...@gmail.com> *Sent:* Wednesday, June 5, 2024 8:56 AM *To:* usrp-users@lists.ettus.com *Subject:* [USRP-users] Re: GPS fix behavior on USRP E320 On 05/06/2024 08:43, David Raeman via USRP-users wrote: Hello, I'm having a difficult time getting consistent GPS fix behavior from a set of USRP E320 radios. They are all using UHD 4.4 with the same active GPS antenna (Siretta Tango 21, which has a 28dB LNA and short ~6" coax run). When outside with a view of the sky and 6 radios sitting together, 10-15 minutes after power-on, some of the radios will have a lock and others will not. For radios that get a lock, sometimes they will briefly glitch into "unlocked" state briefly every 20-30 seconds before reporting as locked again. If I let it sit another 10-15 minutes, nothing really changes. Looking at the output of
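The 768-byte figure discussed in the thread above follows directly from the UART framing. A quick sanity check (added here for clarity; it assumes the standard 10 bits on the wire per 8N1 character, i.e. 1 start + 8 data + 1 stop bit):

```d
import std.stdio;

void main()
{
    enum baud = 38_400;     // bits per second on the serial link
    enum bitsPerChar = 10;  // 8N1 framing: 1 start + 8 data + 1 stop bit
    enum cycleMs = 200;     // 5 Hz update rate -> 200 ms per cycle

    enum bytesPerSecond = baud / bitsPerChar;             // 3840
    enum bytesPerCycle = bytesPerSecond * cycleMs / 1000; // 768
    writeln(bytesPerCycle); // prints 768
}
```

At a 1 Hz update rate the same link would allow 3840 bytes per cycle, which is why dropping from 5 Hz to 1 Hz gives so much more headroom for bursts of GPGSV sentences.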
Re: [edk2-devel] GitHub PR Code Review process now active
Hi Guo, Thanks. Edk2 is running PatchCheck.py as part of CI. The Maintainer still needs to verify the commit message contents even if it passes PatchCheck.py. Mike > -Original Message- > From: Dong, Guo > Sent: Wednesday, June 5, 2024 6:22 PM > To: devel@edk2.groups.io; Kinney, Michael D > Subject: RE: [edk2-devel] GitHub PR Code Review process now active > > > Hi Mike, > > Glad to see EDK2 PR code review process is active. > In Slim Bootloader project, it runs BaseTools/Scripts/PatchCheck.py to check > the PR commit message when running QEMU CI test > Maybe you could refer > https://github.com/slimbootloader/slimbootloader/blob/48a24b87824321c053cccf3 > 67c7c3637ff581fdf/.azurepipelines/azure-pipelines.yml#L38 to check if EDK2 > could use similar check. > > Thanks, > Guo > > -Original Message- > From: devel@edk2.groups.io On Behalf Of Michael D > Kinney > Sent: Wednesday, June 5, 2024 3:21 PM > To: devel@edk2.groups.io > Cc: Kinney, Michael D > Subject: Re: [edk2-devel] GitHub PR Code Review process now active > > Hello, > > The PR code review process has been active for a little over a week now. > > There have been about 17 PRs merged since the switch and it appears to have > been mostly working well. I also note that the emails per day on this mailing > list is much smaller as the code reviews have migrated to PRs. > > A few issues have been noted: > > 1) Contributors that are not EDK II Maintainers do not have permissions >to assign reviewers > >* Mitigation #1: EDK II Maintainers review new PRs and assign reviewers >* Mitigation #2: Use CODEOWNERS to auto assign maintainers. WIP. > > * EDK II Maintainers must review the commit message in each commit and > to make sure it follows the required commit message format and has > an appropriate Signed-off-by tag. GitHub does not provide a way to > provide review comments for a commit message. 
Instead, feedback on > commit messages must be provided in the main PR conversation and quote > commit message that requires changes as required. > > * Slow CI Performance > > This appears to be due to longer than expected queues in Azure Pipelines. > Azure Pipelines is working through the backlog. It may help if the > number of requests to rebase and number of new commits to open PRs are > reduced. The Tools/CI team will continue to collect data and determine > if other changes are needed to reduce the CI overhead. > > * Some PRs have been merged using the "Rebase and Merge" button in the > PR after all required reviews completed and all CI checks pass. Instead, > the "push" label should continue to be used. There does not appear to be > any unexpected side effects from the "Rebase and Merge" button, but that > option is not available if the PR needs to be rebased. This is what > Mergify handles through a merge queue, so the easiest way to merge right > now is the "push" label. > > If the most recent commit was not performed by an EDK II Maintainers, then > Mergify attempt to rebase may fail. > > Mitigation #1: EDK II Maintainer perform a rebase > Mitigation #2: Update Mergify to use a bot account with write permission >to perform rebase operations. > > There was feedback earlier in the year that the git commit history does > not indicate which maintainer was the committer. Instead it always shows > Mergify. > > The use of GitHub Merge Queues will be evaluated to see if it can be used > instead of Mergify and remove the need for the "push" label and allow the > "Rebase and Merge" button to be used and avoid the Mergify permission > issues. > > * Some PRs do not complete all CI checks waiting for "Workflow Approval". > this can occur when a PR is updated by an outside collaborator that does > not have any previous "Workflow Approvals" accepted. 
> > Mitigation #1: EDK II Maintainers review PRs and accept the "Workflow > Approval" > if the PR looks like a good change request. > Mitigation #2: Relax the edk2 repo configuration settings related to > workflows > > * When a PR needs to be rebased, there are 2 options available through the > Web UI: > > * Update with merge commit (Never use - generates PatchCheck errors) > * Update with rebase (Only use this one) > > If Update with merge commit is accidently applied, then redo again > Using "Update with rebase" > > Please provide feedback if you are seeing other issues or have other > suggestions to improve the process. > > Thanks, >
[TLS]Is NIST actually prohibiting X25519?
Andrei Popov writes: > This is a complicated compliance question. I'm not qualified to > comment on this option. I think it's worth investigating, considering the following NIST quote: Their associated key agreement schemes, X25519 and X448, will be considered for inclusion in a subsequent revision to SP 800-56A. The CMVP does not intend to enforce compliance with SP 800-56A until these revisions are complete. https://web.archive.org/web/20200810165057/https://csrc.nist.gov/projects/cryptographic-module-validation-program/notices Does anyone have any documents showing that NIST has reneged on the above announcement? Possibilities: * Yes: then I'd appreciate a pointer so that concerned members of the community can tell NIST what they think about this and, hopefully, get NIST to change course. * No: then the announcement and consistent handling of this by NIST would be another reason for IETF to not be dragged down by the current limitations of NIST SP 800-56A. If nobody has ever tried asking NIST to approve an X25519 solution as per the above announcement, surely that would be a useful experiment, creating a path towards simplifying subsequent TLS WG discussions. ---D. J. Bernstein ___ TLS mailing list -- tls@ietf.org To unsubscribe send an email to tls-le...@ietf.org
[USRP-users] Re: GPS fix behavior on USRP E320
On 05/06/2024 11:19, David Raeman wrote: Thanks for the suggestion – in this case they were all sitting on the roof of my vehicle in an open parking lot, with 6-8” separation between radios. I guess there could be minimal shadowing for satellites at low grazing angles, but I’m skeptical of that as a full explanation. I have a hypothesis that the default 5Hz update rate is problematic on these devices.

The serial connection between the GPS receiver and the Zynq PS runs at 38400 baud. With standard 8N1 framing, that only allows for 768 bytes of sentence data per 200ms cycle. If I capture the raw GPS serial output (by directly watching /dev/ttyPS1, not the scrubbed data filtered through gpsd), it’s quickly obvious that many sentences get truncated and/or dropped. For example, there are very frequent “time skips” happening in the time-related sentences, as well as random sentence fragments. Some cycles would be expected to have a larger data volume, such as when multiple GPGSV sentences list all satellites in view, and I think that’s mangling the serial stream. This explains discrepancies in what ‘gpsmon’ sees, as well as discrepancies I’ve sometimes seen on E320s trying to sync common GPS time with PPS assertion (sometimes radios are wrong by 200ms). This should not impact the “gps_locked” sensor, which gets its state via an I/O signal from the GPS receiver and not by parsing sentences. However, I am currently using information from sentences to determine lock status because “gps_locked” doesn’t seem to work as expected in UHD 4.4 on the E320 (it looks like that might’ve been fixed in UHD 4.5, though). So long story short – I think the 5Hz update rate is problematic. It can be changed to 1Hz by removing a resistor, and as far as I can tell, neither UHD nor the radio filesystem would care about that change. I may try this on one radio and see if it helps improve consistency. -David

You're not trying to capture /dev/ttyPS1 data *while* GPSD is capturing it, are you?
You can't usefully share a resource like a serial port -- some characters will go to you, some to GPSD. Now, having said that, yeah, only 768 bytes per update interval max. How many bytes in a typical NMEA sentence, and how many sentences per interval? *From:*Marcus D. Leech *Sent:* Wednesday, June 5, 2024 8:56 AM *To:* usrp-users@lists.ettus.com *Subject:* [USRP-users] Re: GPS fix behavior on USRP E320 On 05/06/2024 08:43, David Raeman via USRP-users wrote: Hello, I'm having a difficult time getting consistent GPS fix behavior from a set of USRP E320 radios. They are all using UHD 4.4 with the same active GPS antenna (Siretta Tango 21, which has a 28dB LNA and short ~6" coax run). When outside with a view of the sky and 6 radios sitting together, 10-15 minutes after power-on, some of the radios will have a lock and others will not. For radios that get a lock, sometimes they will briefly glitch into "unlocked" state briefly every 20-30 seconds before reporting as locked again. If I let it sit another 10-15 minutes, nothing really changes. Looking at the output of 'gpsmon' on the radio, the radios which never locked will see fewer satellites, and the ones in common will have far different SNR levels. I'm trying to find a solution for more consistent behavior, especially since these are outside with a view of the sky. I confirmed the radio's GPS ANT port has the +3.3V bias so I assume the antennas receive power as expected. Searching the mailing list, over the years this topic has come up a couple times specifically with E320 radios. I know the same Jackson Labs LTE-Lite SOM is also used in the newer X410 radios, though it's configured a bit differently via strapping pins. I think: * The X410 sets the module in 1Hz mode instead of 5Hz. * The X410 uses it in "mobile" mode instead of auto-surveying “stationary” mode. * Curiously, the E320 seems to connect pin 1 (EFC) to pin 2 (NC), though this doesn't make any sense based on the LTE-Lite public tech manual. 
The X410 leaves them NC. Does anybody know whether any of these changes (or others) represent "lessons learned" that would improve GPS TTFF or disciplining behavior? I don’t mind changing resistor populations if there is a reason to. Or any other suggestions around this topic? Thank you, David Raeman

___
USRP-users mailing list -- usrp-users@lists.ettus.com
To unsubscribe send an email to usrp-users-le...@lists.ettus.com

If you move the antennas further apart, what happens? If they are all tightly packed together, there's an opportunity for shadowing (small, but, maybe?).
Re: [edk2-devel] GitHub PR Code Review process now active
The Merge button may not work in that case either if there is parallel activity by other developers or Mergify. Using the "push" label is recommended so Maintainers do not have to wait and rebase. I understand the desire to apply to other repos quickly. Let's work through some of these known issues for another week and then evaluate if it should be applied to edk2-platforms yet. Mike > -Original Message- > From: Rebecca Cran > Sent: Wednesday, June 5, 2024 3:48 PM > To: devel@edk2.groups.io; Kinney, Michael D > Subject: Re: [edk2-devel] GitHub PR Code Review process now active > > On 6/5/2024 4:21 PM, Michael D Kinney via groups.io wrote: > > * Some PRs have been merged using the "Rebase and Merge" button in the > >PR after all required reviews completed and all CI checks pass. Instead, > >the "push" label should continue to be used. There does not appear to be > >any unexpected side effects from the "Rebase and Merge" button, but that > >option is not available if the PR needs to be rebased. This is what > >Mergify handles through a merge queue, so the easiest way to merge right > >now is the "push" label. > > > >If the most recent commit was not performed by an EDK II Maintainers, > then > >Mergify attempt to rebase may fail. > > > > Mitigation #1: EDK II Maintainer perform a rebase > > Mitigation #2: Update Mergify to use a bot account with write > permission > > to perform rebase operations. > > > >There was feedback earlier in the year that the git commit history does > >not indicate which maintainer was the committer. Instead it always > shows > >Mergify. > > > >The use of GitHub Merge Queues will be evaluated to see if it can be > used > >instead of Mergify and remove the need for the "push" label and allow > the > >"Rebase and Merge" button to be used and avoid the Mergify permission > issues. > > So it sounds like using the "Merge" button is fine as long as the user > understands they may need to rebase, wait for CI to finish and then merge? 
> > Also, is there a timeline on enabling PRs for the other repos? I'd
> really like to switch to them for edk2-platforms, even if it means
> having to update settings in multiple repos as we find issues with the
> process.
>
>
> --
> Rebecca Cran
[Issue 24589] New: [std.sreaching] take functions seem to be missing range overload/version
https://issues.dlang.org/show_bug.cgi?id=24589 Issue ID: 24589 Summary: [std.sreaching] take functions seem to be missing range overload/version Product: D Version: D2 Hardware: All URL: http://dlang.org/phobos/ OS: All Status: NEW Severity: enhancement Priority: P3 Component: phobos Assignee: nob...@puremagic.com Reporter: crazymonk...@gmail.com

```
import std;

auto takeUntil(R)(R base, R other)
{
    static struct Result
    {
        R base;
        R other;
        alias base this;
        bool empty() => base == other;
    }
    return Result(base, other);
}

unittest
{
    iota(5).takeUntil(iota(5).drop(3)).writeln;
}
```
--
Re: [edk2-devel] GitHub PR Code Review process now active
Hello,

The PR code review process has been active for a little over a week now. There have been about 17 PRs merged since the switch and it appears to have been mostly working well. I also note that the number of emails per day on this mailing list is much smaller as the code reviews have migrated to PRs.

A few issues have been noted:

1) Contributors that are not EDK II Maintainers do not have permissions to assign reviewers

   * Mitigation #1: EDK II Maintainers review new PRs and assign reviewers
   * Mitigation #2: Use CODEOWNERS to auto assign maintainers. WIP.

* EDK II Maintainers must review the commit message in each commit to make sure it follows the required commit message format and has an appropriate Signed-off-by tag. GitHub does not provide a way to leave review comments on a commit message. Instead, feedback on commit messages must be provided in the main PR conversation, quoting the commit message that requires changes.

* Slow CI performance

  This appears to be due to longer than expected queues in Azure Pipelines. Azure Pipelines is working through the backlog. It may help if the number of requests to rebase and the number of new commits to open PRs are reduced. The Tools/CI team will continue to collect data and determine if other changes are needed to reduce the CI overhead.

* Some PRs have been merged using the "Rebase and Merge" button in the PR after all required reviews completed and all CI checks passed. Instead, the "push" label should continue to be used. There does not appear to be any unexpected side effect from the "Rebase and Merge" button, but that option is not available if the PR needs to be rebased. This is what Mergify handles through a merge queue, so the easiest way to merge right now is the "push" label.

  If the most recent commit was not performed by an EDK II Maintainer, then Mergify's attempt to rebase may fail.
  * Mitigation #1: EDK II Maintainer performs a rebase
  * Mitigation #2: Update Mergify to use a bot account with write permission to perform rebase operations.

  There was feedback earlier in the year that the git commit history does not indicate which maintainer was the committer. Instead it always shows Mergify.

  The use of GitHub Merge Queues will be evaluated to see if it can be used instead of Mergify, remove the need for the "push" label, allow the "Rebase and Merge" button to be used, and avoid the Mergify permission issues.

* Some PRs do not complete all CI checks, waiting for "Workflow Approval". This can occur when a PR is updated by an outside collaborator that does not have any previous "Workflow Approvals" accepted.

  * Mitigation #1: EDK II Maintainers review PRs and accept the "Workflow Approval" if the PR looks like a good change request.
  * Mitigation #2: Relax the edk2 repo configuration settings related to workflows.

* When a PR needs to be rebased, there are 2 options available through the Web UI:

  * Update with merge commit (never use - generates PatchCheck errors)
  * Update with rebase (only use this one)

  If "Update with merge commit" is accidentally applied, then redo it using "Update with rebase".

Please provide feedback if you are seeing other issues or have other suggestions to improve the process.

Thanks,

Mike

> -Original Message-
> From: Kinney, Michael D
> Sent: Monday, June 3, 2024 12:38 PM
> To: Neal Gompa ; devel@edk2.groups.io
> Cc: Kinney, Michael D
> Subject: RE: [edk2-devel] GitHub PR Code Review process now active
>
> The reason to allow a draft PR is to allow contributors to run all the
> CI tests to see if they pass and resolve issues before starting a review.
>
> The CI tests include combinations of OS/compiler that not all contributors
> have available.
> > Mike > > > -Original Message- > > From: Neal Gompa > > Sent: Monday, June 3, 2024 11:47 AM > > To: devel@edk2.groups.io; Kinney, Michael D > > Subject: Re: [edk2-devel] GitHub PR Code Review process now active > > > > Hmm, I don't see a setting for it anymore, maybe that's not a thing > anymore? > > > > I seemingly recall that draft PRs didn't get CI runs, but if that's > > not a thing anymore, then that's fine. > > > > That said, draft PRs cannot be reviewed, so we should not be telling > > people to make draft PRs. > > > > > > On Mon, Jun 3, 2024 at 12:26 PM Michael D Kinney via groups.io > > > > wrote: > > > > > > CI jobs are dispatched to both GitHub Actions and Azure Pipelines. > > > > > > For Draft PRs, I see both GitHub Actions and Azure Pipelines jobs > running. > > > > > > This must
Re: [cayugabirds-l] Bluebird nest question
I would think that disturbing the nest would be a bad idea (and probably against the bird protection act). The eggs might be sterile. Possibly, the adults will come back, build a new nest and try again. John Newman

On Wednesday, June 5, 2024, 09:05:23 AM EDT, Peter Saracino wrote: So, a friend has this bluebird house which has had 2 blue eggs in it for a while, weeks at least. No one seems to be taking care of them, but the bluebirds return every now and then and check it out and go away. Should she remove the eggs? The whole nest? Thank you. Peter Saracino / NY State Master Naturalist Volunteer

--
Cayugabirds-L List Info: Welcome and Basics | Rules and Information | Subscribe, Configuration and Leave | Archives: The Mail Archive | Surfbirds | ABA | Please submit your observations to eBird!
[TLS]Re: Curve-popularity data?
Andrei Popov writes:
> I support this change, willing to implement it in the Windows TLS
> stack. We have thousands of customers concerned about increased
> latencies due to the enablement of TLS 1.3. The services they connect
> to require NIST curves and HRR is required to get TLS clients to send
> appropriate key shares.

To clarify, when you say "require NIST curves", do you mean "require conformance with NIST SP 800-56A"? In other words, will another curve be allowed once it's added to NIST SP 800-56A? Maybe faster: would the short-term problem be addressed if we can convince NIST to announce that it will consider X25519 and X448 for a revision of SP 800-56A, and doesn't intend to enforce conformance of cryptographic modules with SP 800-56A until the revisions are done?

Or are you saying that, independently of NIST's decisions, the services in question are for some reason specifically requiring what's typically called the "NIST curves", namely the fifteen NSA curves that NIST standardized? Or the subset of those that NIST hasn't deprecated yet?

Thanks in advance for the clarification.

---D. J. Bernstein
Re: bool passed by ref, safe or not ?
On Wednesday, 5 June 2024 at 01:18:06 UTC, Paul Backus wrote:

On Tuesday, 4 June 2024 at 16:58:50 UTC, Basile B. wrote:

```d
void main(string[] args)
{
    ushort a = 0b;
    bool* b = cast(bool*) &a;
    setIt(*b);
    assert(a == 0b);     // what actually happens
    assert(a == 0b1110); // what would be safe
}
```

[...] Do I corrupt memory here or not? Is that a safety violation?

`cast(bool*)` is a safety violation. The only [safe values][1] for a `bool` are 0 (false) and 1 (true). By creating a `bool*` that points to a different value, you have violated the language's safety invariants. Because of this, operations that would normally be safe (reading or writing through the `bool*`) may now result in undefined behavior.

[1]: https://dlang.org/spec/function.html#safe-values

Obviously the topic was created because of the recent move D made. Sorry for the "catchy" aspect, BTW. Now I remember that D safety is unrelated to undefined behaviors.
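Paul's point about safe values can be illustrated with a small standalone sketch. This is an editor-added example under stated assumptions, not code from the thread; the names `raw` and `forged` are hypothetical:

```d
// Illustration of the D spec's "safe values" rule for bool.
import std.stdio;

void main()
{
    bool t = true;
    // A bool set through normal means always stores exactly 1.
    ubyte normal = *cast(ubyte*) &t;
    writeln(normal); // prints 1

    ubyte raw = 0b0000_0010;         // 2: not a safe value for a bool
    bool* forged = cast(bool*) &raw; // @system cast; breaks the invariant
    // Reading *forged is where the undefined behavior lives: the compiler
    // may assume a bool's byte is exactly 0 or 1, so comparisons against
    // true and false can behave inconsistently depending on codegen.
    writeln(*forged == true, " ", *forged == false);
}
```

The forged pointer itself is harmless; it is the read through it that relies on an invariant the cast already broke, which is why the quoted answer calls the cast the safety violation.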
Re: How to pass in reference a fixed array in parameter
On Wednesday, 5 June 2024 at 06:22:34 UTC, Eric P626 wrote: I tried the following signatures with the ref keyword and it did not change anything: ~~~ void print_maze ( ref s_cell maze ) void print_maze ( ref s_cell [][] maze ) ~~~ From what I found, arrays passed in parameters are always passed by reference. So the ref keyword seems pointless. There is one useful functionality of the ref keyword when passing arrays: it lets you change the original array reference to another array reference. Ex.
```
import std.stdio;

void foo(ref int[] x) { x = [1,2,3]; }
void bar(int[] y) { y = [1,2,3]; }

void main()
{
    auto x = [0,0,0];
    auto y = [1,1,1];
    foo(x);
    bar(y);
    writeln(x);
    writeln(y);
}
```
The output of the program is:
```
[1, 2, 3]
[1, 1, 1]
```
Of course in your case this doesn't matter, but I just wanted to point out that adding ref to array parameters actually serves a purpose.
[Issue 20010] allow cast of type, not only expressions
https://issues.dlang.org/show_bug.cgi?id=20010 Bolpat changed: What|Removed |Added CC||qs.il.paperi...@gmail.com --- Comment #1 from Bolpat --- I’d be in favor of the cast. Grammar: ```diff CastQual: - cast ( TypeCtors? ) UnaryExpression + cast ( MemberFunctionAttributes? ) UnaryExpression + cast ( ... MemberFunctionAttributes ) UnaryExpression + cast ( ! ...[opt] MemberFunctionAttributes ) UnaryExpression ``` Note that `MemberFunctionAttributes` includes `TypeCtors`. Semantics of `cast()` must remain so that it only removes `TypeCtor`s. To remove other things, either use e.g. `cast(@system)` or fix https://issues.dlang.org/show_bug.cgi?id=24587. One issue I see, however, is with delegate types: `cast(immutable)` would produce `immutable(R delegate())` and there’s no way to go from `R delegate()` to `R delegate() immutable` with a cast. This is what the second rule does: It applies the member function attribute to the function type even if it’s a type qualifier. `cast(pure)` is unambiguous, so it doesn’t require `cast(... pure)`, but `cast(... pure)` should be legal. The `...` essentially becomes `typeof(UnaryExpression)`. As for Issue 24587 (negated forms), that’s the last rule: `cast(! ... const)` removes `const` as a member function attribute. --
[Issue 24587] New: Allow negated qualifiers in cast expressions
https://issues.dlang.org/show_bug.cgi?id=24587 Issue ID: 24587 Summary: Allow negated qualifiers in cast expressions Product: D Version: D2 Hardware: All OS: All Status: NEW Severity: enhancement Priority: P1 Component: dmd Assignee: nob...@puremagic.com Reporter: qs.il.paperi...@gmail.com Allow a `!` in a cast expression to remove specific qualifiers from a type. Unlike `cast()`, notably `cast(!shared)` removes `shared` only and leaves `inout` and/or `const` intact if they’re present. More than one can be used and `!` negates all of them: `cast(!const shared)` removes `const` and `shared`, but leaves `inout` intact. Grammar: ```diff CastQual: cast ( TypeCtors? ) UnaryExpression + cast ( ! TypeCtors ) UnaryExpression ``` Note that the `TypeCtors` in the second rule aren’t optional. --
[USRP-users] Re: GPS fix behavior on USRP E320
On 05/06/2024 08:43, David Raeman via USRP-users wrote: Hello, I'm having a difficult time getting consistent GPS fix behavior from a set of USRP E320 radios. They are all using UHD 4.4 with the same active GPS antenna (Siretta Tango 21, which has a 28dB LNA and short ~6" coax run). When outside with a view of the sky and 6 radios sitting together, 10-15 minutes after power-on, some of the radios will have a lock and others will not. For radios that get a lock, sometimes they will briefly glitch into "unlocked" state every 20-30 seconds before reporting as locked again. If I let it sit another 10-15 minutes, nothing really changes. Looking at the output of 'gpsmon' on the radio, the radios which never locked will see fewer satellites, and the ones in common will have far different SNR levels. I'm trying to find a solution for more consistent behavior, especially since these are outside with a view of the sky. I confirmed the radio's GPS ANT port has the +3.3V bias so I assume the antennas receive power as expected. Searching the mailing list, over the years this topic has come up a couple times specifically with E320 radios. I know the same Jackson Labs LTE-Lite SOM is also used in the newer X410 radios, though it's configured a bit differently via strapping pins. I think: * The X410 sets the module in 1Hz mode instead of 5Hz. * The X410 uses it in "mobile" mode instead of auto-surveying “stationary” mode. * Curiously, the E320 seems to connect pin 1 (EFC) to pin 2 (NC), though this doesn't make any sense based on the LTE-Lite public tech manual. The X410 leaves them NC. Does anybody know whether any of the changes (or others) represent "lessons learned" that would improve GPS TTFF or disciplining behavior? I don’t mind changing resistor populations if there is a reason to. Or any other suggestions around this topic? 
Thank you, David Raeman ___ USRP-users mailing list -- usrp-users@lists.ettus.com To unsubscribe send an email to usrp-users-le...@lists.ettus.com If you move the antennas further apart, what happens? If they are all tightly packed together, there's an opportunity for shadowing (small, but, maybe?). ___ USRP-users mailing list -- usrp-users@lists.ettus.com To unsubscribe send an email to usrp-users-le...@lists.ettus.com
[USRP-users] Re: Big network Latency on 100G port in X410
On 05/06/2024 08:32, zhou via USRP-users wrote: Hi All, I am using MCX516A-CCAT for X410 USRP. It has three network ports, two for 100Gb QSFP and one for 1Gb ethernet. They are directly connected to the host. Surprisingly, I find much bigger latency on the 100Gb link than the 1Gb link when pinging them. I didn't notice this before. Then I checked X310. Its latency is also pretty big compared to the 1Gb port: rtt min/avg/max/mdev = 0.341/0.539/0.793/0.187 ms Why is the latency on the 100Gb port bigger than on the 1Gb port? ~$ uhd_find_devices [INFO] [UHD] linux; GNU C++ version 11.4.0; Boost_107400; DPDK_21.11; UHD_4.5.0.0-0-g471af98f -- -- UHD Device 0 -- Device Address: serial: 3289B23 addr: 192.168.20.2 claimed: False fpga: CG_400 mgmt_addr: 192.168.10.2 mgmt_addr: 192.168.20.2 mgmt_addr: 192.168.6.66 name: ni-x4xx-3289B23 product: x410 type: x4xx ~$ ping 192.168.10.2 PING 192.168.10.2 (192.168.10.2) 56(84) bytes of data. 64 bytes from 192.168.10.2: icmp_seq=1 ttl=64 time=0.998 ms 64 bytes from 192.168.10.2: icmp_seq=2 ttl=64 time=0.888 ms 64 bytes from 192.168.10.2: icmp_seq=3 ttl=64 time=0.886 ms 64 bytes from 192.168.10.2: icmp_seq=4 ttl=64 time=0.894 ms ^C --- 192.168.10.2 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3036ms rtt min/avg/max/mdev = 0.886/0.916/0.998/0.047 ms ~$ ping 192.168.6.66 PING 192.168.6.66 (192.168.6.66) 56(84) bytes of data. 64 bytes from 192.168.6.66: icmp_seq=1 ttl=64 time=0.180 ms 64 bytes from 192.168.6.66: icmp_seq=2 ttl=64 time=0.143 ms 64 bytes from 192.168.6.66: icmp_seq=3 ttl=64 time=0.115 ms 64 bytes from 192.168.6.66: icmp_seq=4 ttl=64 time=0.119 ms ^C --- 192.168.6.66 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3080ms rtt min/avg/max/mdev = 0.115/0.139/0.180/0.025 ms Thanks, H. Probably because the RJ45 (1G) port is more directly routed to the CPU part of the RFSoC chip, since it is intended by the RFSoC architecture for management of the embedded Linux component. 
The 100G ports are, in this application, intended largely for radio sample traffic, and management functions, like ICMP, are more circuitously routed through the FPGA and into the Linux side of the house. Suffice it to say that sample traffic suffers no such long latency. ___ USRP-users mailing list -- usrp-users@lists.ettus.com To unsubscribe send an email to usrp-users-le...@lists.ettus.com
Re: How to pass in reference a fixed array in parameter
On Wednesday, 5 June 2024 at 11:27:32 UTC, Nick Treleaven wrote: On Wednesday, 5 June 2024 at 09:24:23 UTC, evilrat wrote: for simple cases like this it might work, but 2d array is not even contiguous, A 2D static array is contiguous: https://dlang.org/spec/arrays.html#rectangular-arrays D static arrays, while using the same syntax, are implemented as a fixed rectangular layout in a contiguous block of memory Yeah ok, I might have messed up the columns last time, but it works.
```d
int[5][5] test;
foreach(i; 0..5) {
    foreach(j; 0..5) {
        test[i][j] = i * 5 + j;
    }
}
foreach(i; 0..25) {
    assert(test[0].ptr[i] == i);
}
```
Re: How to pass in reference a fixed array in parameter
On Wednesday, 5 June 2024 at 09:24:23 UTC, evilrat wrote: for simple cases like this it might work, but 2d array is not even contiguous, A 2D static array is contiguous: https://dlang.org/spec/arrays.html#rectangular-arrays D static arrays, while using the same syntax, are implemented as a fixed rectangular layout in a contiguous block of memory
[Issue 24582] Detect unsafe `cast(bool[])`
https://issues.dlang.org/show_bug.cgi?id=24582 Nick Treleaven changed: What|Removed |Added See Also||https://issues.dlang.org/sh ||ow_bug.cgi?id=20148 --
[Issue 20148] void initializated bool can be both true and false
https://issues.dlang.org/show_bug.cgi?id=20148 Nick Treleaven changed: What|Removed |Added See Also||https://issues.dlang.org/sh ||ow_bug.cgi?id=24582 --
Re: How to pass in reference a fixed array in parameter
On Wednesday, 5 June 2024 at 10:27:47 UTC, Nick Treleaven wrote: foreach (i, row; maze) slices[i] = row; Sorry that assignment was wrong (edited at last minute). Fixed: ```d import std.stdio; alias s_cell = int; void main() { writeln("Maze generation demo"); s_cell [5][5] maze; int n; foreach (i, row; maze) foreach (j, col; row) maze[i][j] = n++; s_cell[][5] slices; foreach (i, _; maze) slices[i] = maze[i]; print_maze (slices); } void print_maze ( s_cell [][] maze ) { foreach (a; maze) a.writeln(); } ```
Re: How to pass in reference a fixed array in parameter
On Wednesday, 5 June 2024 at 10:27:47 UTC, Nick Treleaven wrote: //~ void print_maze ( s_cell [][] maze... ) I meant to delete that line!
Re: How to pass in reference a fixed array in parameter
On Tuesday, 4 June 2024 at 12:22:23 UTC, Eric P626 wrote: ~~~ void main() { writeln("Maze generation demo"); s_cell [5][5] maze; print_maze (maze); } void print_maze ( s_cell [][] maze ) { } ~~~ This is how to do it without GC allocations (I have used `int` instead for demo purposes): ```d import std.stdio; alias s_cell = int; void main() { writeln("Maze generation demo"); s_cell [5][5] maze = [0, 1, 2, 3, 4]; s_cell[][5] slices; // static array of 5 slices foreach (i, row; maze) slices[i] = row; print_maze (slices); } //~ void print_maze ( s_cell [][] maze... ) void print_maze ( s_cell [][] maze ) { foreach (a; maze) a.writeln(); } ```
Re: How to pass in reference a fixed array in parameter
With accessor: ``` void main() { s_cell[] maze=make(5,5); s_cell a=maze.get(1,2); print_maze(maze); } void print_maze(s_cell[] maze) { } s_cell[] make(int width, int height) { return new s_cell[width*height]; } s_cell get(s_cell[] maze, int x, int y) { return maze[5*y+x]; //oops } ``` looks like you need to store the maze width somewhere.
Re: How to pass in reference a fixed array in parameter
On Wednesday, 5 June 2024 at 06:22:34 UTC, Eric P626 wrote: Now according to the book, it's possible to assign a slice from a fixed array. This code will compile: ~~~ int[12] monthDays = [ 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 ]; int[] a_slice = monthDays; ~~~ The element types are both int, so the compiler can slice the static array. As if you had written `a_slice = monthDays[];`. How come the assignment does not work when passing a parameter. I tried the following and it failed: ~~~ s_cell [5][5] maze; The element type is s_cell[5]. s_cell [][] sliced_maze = maze; The element type of sliced_maze is s_cell[], so the element types are incompatible. ~~~ void print_maze ( ref s_cell maze ) void print_maze ( ref s_cell [][] maze ) ~~~ From what I found, arrays passed in parameters are always passed by reference. So the ref keyword seems pointless. You don't need `ref` to be able to read the array length and elements. However, if you want to modify the array length, and have it affect the caller's dynamic array, you need `ref`. --- The only solution left is to use pointers. But even this does not seem to work as in C. I created functions with different pointer signatures and they all fail. Normally in C, this would have worked: ~~~ s_cell [5][5] maze; create_maze(&maze); Pass `&maze[0][0]` instead. ~~~ Error: function `mprmaze.create_maze(s_cell[][]* maze)` is not callable using argument types `(s_cell[5][5]*)` cannot pass argument `& maze` of type `s_cell[5][5]*` to parameter `s_cell[][]* maze` ~~~ s_cell[5][5] cannot implicitly convert to s_cell[][]. Now I think it expects a 2D array of pointers instead of a pointer on a 2D array. It's also not clear if there is a difference between those 2 notations: ~~~ &maze maze.ptr ~~~ `&maze` is a pointer to s_cell[5][5]. `maze.ptr` is a pointer to s_cell[5]. `.ptr` means a pointer to the first element of the array.
Re: How to pass in reference a fixed array in parameter
On Tuesday, 4 June 2024 at 12:22:23 UTC, Eric P626 wrote: I try to create a 2D array of fixed length and pass it in parameter as a reference. Normally, in C, I would have used a pointer as parameter, and pass the address of the array. Not obvious what you're trying to do. How would you do it in C? Use one dimensional array? You can use one dimensional array in D too. If dimensions of the maze are dynamic, you just write the maze creation function that allocates the maze as you want. In simple case: ``` void main() { writeln("Maze generation demo"); s_cell [5][5] maze; print_maze (maze); } void print_maze (ref s_cell [5][5] maze ) { } ``` With factory: ``` void main() { s_cell[][] maze=make(5,5); print_maze(maze); } void print_maze(s_cell[][] maze) { } s_cell[][] make(int width, int height) { } ```
Re: How to pass in reference a fixed array in parameter
On Wednesday, 5 June 2024 at 06:22:34 UTC, Eric P626 wrote: On Tuesday, 4 June 2024 at 16:19:39 UTC, Andy Valencia wrote: On Tuesday, 4 June 2024 at 12:22:23 UTC, Eric P626 wrote: Thanks for the comments. So far, I only managed to make it work by creating a dynamic array and keeping the same signature: ~~~ void main() { s_cell [][] maze = new s_cell[][](5,5); print_maze (maze); } void print_maze ( s_cell [][] maze ) { } ~~~ Now according to the book, it's possible to assign a slice from a fixed array. This code will compile: ~~~ int[12] monthDays = [ 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 ]; int[] a_slice = monthDays; ~~~ for simple cases like this it might work, but 2d array is not even contiguous, simpler case like s_cell[5][] might work too. How come the assignment does not work when passing a parameter. I tried the following and it failed: ~~~ s_cell [5][5] maze; s_cell [][] sliced_maze = maze; ~~~ with this message: ~~~ Error: cannot implicitly convert expression `maze` of type `s_cell[5][5]` to `s_cell[][]` ~~~ Is it because it's a 2D array (slice of slice)? I need to manually copy each slice manually, or use a utility function to do the copy? This is why it cannot auto-magically do it with just when passing a parameter. very likely this is the only solution - make a dynamic array by copying all elements. there were a few old bug tracker issues discussing static arrays and the join function, but there seems to be no agreement so far. ~~~ Error: function `mprmaze.create_maze(s_cell[][]* maze)` is not callable using argument types `(s_cell[5][5]*)` cannot pass argument `& maze` of type `s_cell[5][5]*` to parameter `s_cell[][]* maze` ~~~ Now I think it expect a 2D array of pointers instead of a pointer on a 2D array. 
It's also not clear if there is a difference between those 2 notations: ~~~ &maze maze.ptr ~~~ there is, the array itself is a tuple of length and pointer, the .ptr notation is just the data location, this is what you usually pass to C functions, not the array itself. to sum up, i wasn't able to make fixed-size arrays work with dynamic arrays without making a copy, and I don't think this will change in the future because of various reasons including type system limitations and binary object formats. so if you really absolutely need static arrays, for example to avoid GC allocations in a hot path, then you need to make a function that takes a fixed-size array. in addition to that the spec (https://dlang.org/spec/arrays.html#static-arrays) says static arrays are passed by value, unlike dynamic arrays that, even when passed as a length-and-pointer tuple, will allow writing back to the original data.
Re: bool passed by ref, safe or not ?
On Wednesday, 5 June 2024 at 09:09:40 UTC, Kagamin wrote: On Wednesday, 5 June 2024 at 01:18:06 UTC, Paul Backus wrote: The only safe values for a `bool` are 0 (false) and 1 (true). AFAIK that was fixed and now full 8-bit range is safe. `cast(bool) someByte` is fine - that doesn't reinterpret the bit representation. The problem is certain values such as `0x2` for the byte representation can cause the boolean to be both true and false: https://issues.dlang.org/show_bug.cgi?id=20148#c3 Void initialization of bool and bool union fields are now deprecated in @safe functions as of 2.109. There is a remaining case of casting an array to bool[], which I am working on disallowing in @safe.
Re: bool passed by ref, safe or not ?
On Wednesday, 5 June 2024 at 01:18:06 UTC, Paul Backus wrote: The only safe values for a `bool` are 0 (false) and 1 (true). AFAIK that was fixed and now full 8-bit range is safe.
Re: bool passed by ref, safe or not ?
Basile B. kirjoitti 4.6.2024 klo 19.58: I understand that the notion of `bool` doesn't exist on X86, hence what will be used is rather an instruction that writes on the lower 8 bits, but with a 7 bits corruption. Do I corrupt memory here or not ? Is that a safety violation ? Viewing a valid boolean as an integer is still valid. Bit pattern of `false` is `0b0000_0000`, and bit pattern of `true` is `0b0000_0001`. And even if the boolean is invalid, viewing it as an integer is probably valid if it was assigned to as an integer and not as an invalid boolean. There's a related case though where the situation is unclear. How do `ubyte`s other than 1 or 0 behave when viewed as bools? We probably can't say it's undefined behaviour, since it is allowed in `@safe`. How I would define it, is that it's unspecified behaviour. That is, if you have
```d
bool* unspecified = cast(bool*) new ubyte(0xff);
```
then
```d
// same as void-initialising
bool a = *unspecified;

// also same as void-initialising
ubyte b = *unspecified;

// Reliably 0xff. It's using the memory slot as bool that makes it
// unspecified, but what's in the memory slot is not affected.
ubyte c = * cast(ubyte*) unspecified;

// Unspecified which happens. One and only one must happen though.
if (*unspecified) fun(); else gun();

// Should this be required to call the same function as above? I'm not sure.
if (*unspecified) fun(); else gun();
```
Re: How to pass in reference a fixed array in parameter
On Tuesday, 4 June 2024 at 16:19:39 UTC, Andy Valencia wrote: On Tuesday, 4 June 2024 at 12:22:23 UTC, Eric P626 wrote: I tried to find a solution on the internet, but could not find anything, I stumble a lot on threads about Go or Rust language even if I specify "d language" in my search. Aside from the excellent answer already present, I wanted to mention that searching with "dlang" has helped target my searches. Welcome to D! (From another newbie.) Andy Thanks for the comments. So far, I only managed to make it work by creating a dynamic array and keeping the same signature: ~~~ void main() { s_cell [][] maze = new s_cell[][](5,5); print_maze (maze); } void print_maze ( s_cell [][] maze ) { } ~~~ Now according to the book, it's possible to assign a slice from a fixed array. This code will compile: ~~~ int[12] monthDays = [ 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 ]; int[] a_slice = monthDays; ~~~ How come the assignment does not work when passing a parameter? I tried the following and it failed: ~~~ s_cell [5][5] maze; s_cell [][] sliced_maze = maze; ~~~ with this message: ~~~ Error: cannot implicitly convert expression `maze` of type `s_cell[5][5]` to `s_cell[][]` ~~~ Is it because it's a 2D array (slice of slice)? Do I need to copy each slice manually, or use a utility function to do the copy? Is this why it cannot auto-magically do it when passing a parameter? I tried the following signatures with the ref keyword and it did not change anything: ~~~ void print_maze ( ref s_cell maze ) void print_maze ( ref s_cell [][] maze ) ~~~ From what I found, arrays passed in parameters are always passed by reference. So the ref keyword seems pointless. --- The only solution left is to use pointers. But even this does not seem to work as in C. I created functions with different pointer signatures and they all fail. 
Normally in C, this would have worked: ~~~ s_cell [5][5] maze; create_maze(&maze); void create_maze ( s_cell *maze) { } ~~~ I get the following error ~~~ Error: function `mprmaze.create_maze(s_cell* maze)` is not callable using argument types `(s_cell[5][5]*)` cannot pass argument `& maze` of type `s_cell[5][5]*` to parameter `s_cell* maze` ~~~ But I get the idea of ambiguity, is the pointer pointing at a single cell, or an array of cells, so there might need to be a way to specify that it's not just an element. I tried this: ~~~ s_cell [5][5] maze; create_maze(&maze); void create_maze ( s_cell [][]*maze) { } ~~~ and get this error ~~~ Error: function `mprmaze.create_maze(s_cell[][]* maze)` is not callable using argument types `(s_cell[5][5]*)` cannot pass argument `& maze` of type `s_cell[5][5]*` to parameter `s_cell[][]* maze` ~~~ Now I think it expects a 2D array of pointers instead of a pointer on a 2D array. It's also not clear if there is a difference between those 2 notations: ~~~ &maze maze.ptr ~~~ Do you have a code sample on how to pass a 2D array by pointer? So far, the pointer solution seems like the only method that should be compatible with both fixed and dynamic arrays unless I am mistaken.
[edk2-devel] Call for TianoCore Community Meeting topics
Hi, Are there any topics for the TianoCore community meeting this month? Thanks, Mike -=-=-=-=-=-=-=-=-=-=-=- Groups.io Links: You receive all messages sent to this group. View/Reply Online (#119468): https://edk2.groups.io/g/devel/message/119468 Mute This Topic: https://groups.io/mt/106498468/21656 Group Owner: devel+ow...@edk2.groups.io Unsubscribe: https://edk2.groups.io/g/devel/unsub [arch...@mail-archive.com] -=-=-=-=-=-=-=-=-=-=-=-
Re: Planning for 12.6/11.10
press@ - please could you comment on the dates proposed below Thanks, Adam On Mon, 2024-05-27 at 13:07 +0100, Jonathan Wiltshire wrote: > Hi, > > The final bullseye point release 11.10 (and therefore also 12.6 for > versioning) should be soon after 10th June, when security team > support > will end. > > Please indicate availability for: > > Saturday 15th June > Saturday 22nd June > Saturday 29th June > > Thanks, >
Re: bool passed by ref, safe or not ?
On Wednesday, 5 June 2024 at 05:15:42 UTC, Olivier Pisano wrote: This is technically not a memory corruption, because as bool.sizeof < int.sizeof, you just write the low order byte of an int you allocated on the stack. It was not an int, it was a ushort. Anyway, what I wrote still applies.
Re: bool passed by ref, safe or not ?
On Tuesday, 4 June 2024 at 16:58:50 UTC, Basile B. wrote: question in the header, code in the body, execute on a X86 or X86_64 CPU I understand that the notion of `bool` doesn't exist on X86, hence what will be used is rather an instruction that write on the lower 8 bits, but with a 7 bits corruption. Do I corrupt memory here or not ? Is that a safety violation ? The problem is that while setIt() is @safe, your main function is not. So the pointer cast (which is not @safe) is permitted. A bool is a 1 byte type with two possible values : false (0) and true (1). When you set the value to false, you write 0 to the byte it points to. This is technically not a memory corruption, because as bool.sizeof < int.sizeof, you just write the low order byte of an int you allocated on the stack.
Re: bool passed by ref, safe or not ?
On Wednesday, 5 June 2024 at 01:18:06 UTC, Paul Backus wrote: On Tuesday, 4 June 2024 at 16:58:50 UTC, Basile B. wrote: you have violated the language's safety invariants. Ah, but no.
[Issue 24586] New: [REG 2.108] initialization of immutable arrays with a system function marks the array as system
https://issues.dlang.org/show_bug.cgi?id=24586 Issue ID: 24586 Summary: [REG 2.108] initialization of immutable arrays with a system function marks the array as system Product: D Version: D2 Hardware: All OS: All Status: NEW Keywords: rejects-valid Severity: regression Priority: P1 Component: dmd Assignee: nob...@puremagic.com Reporter: schvei...@gmail.com In 2.108, I started receiving deprecation messages about system variables. This bizarrely occurs on explicitly typed arrays initialized from unmarked CTFE functions, and not on inferred ones. ```d int[] arr() { return [1, 2, 3]; } static immutable x = arr(); static immutable int[] x2 = [1, 2, 3]; static immutable int[] x3 = arr(); void main() @safe { int v; v = x[0]; // ok v = x2[0]; // ok v = x3[0]; // deprecation } ``` The deprecation message is: ``` Deprecation: cannot access `@system` variable `x3` in @safe code ``` Marking `arr` as `@safe` fixes the problem. Using the preview switch indeed treats this as an error, so marking as rejects-valid. This does not happen for 2.107 and earlier, so this is a regression. --
Re: bool passed by ref, safe or not ?
On Tuesday, 4 June 2024 at 16:58:50 UTC, Basile B. wrote:
```d
void main(string[] args)
{
    ushort a = 0b1111;
    bool* b = cast(bool*) &a;
    setIt(*b);
    assert(a == 0b0000); // what actually happens
    assert(a == 0b1110); // what would be safe
}
```
[...] Do I corrupt memory here or not ? Is that a safety violation ? `cast(bool*)` is a safety violation. The only [safe values][1] for a `bool` are 0 (false) and 1 (true). By creating a `bool*` that points to a different value, you have violated the language's safety invariants. Because of this, operations that would normally be safe (reading or writing through the `bool*`) may now result in undefined behavior. [1]: https://dlang.org/spec/function.html#safe-values
Re: bool passed by ref, safe or not ?
On Tuesday, 4 June 2024 at 16:58:50 UTC, Basile B. wrote: question in the header, code in the body, execute on a X86 or X86_64 CPU
```d
module test;

void setIt(ref bool b) @safe
{
    b = false;
}

void main(string[] args)
{
    ushort a = 0b1111;
    bool* b = cast(bool*) &a;
    setIt(*b);
    assert(a == 0b0000); // what actually happens
    assert(a == 0b1110); // what would be safe
}
```
I understand that the notion of `bool` doesn't exist on X86, hence what will be used is rather an instruction that writes on the lower 8 bits, but with a 7 bits corruption. Do I corrupt memory here or not ? I don't think so. You passed an address to a bool, which uses 8 bits of space, even though the compiler treats it as a 1-bit integer. In order for your code to do what you expect, all bool writes would have to be read/modify/write operations. I don't think anyone would prefer this. Is that a safety violation ? No, you are not writing to memory you don't have access to. An address is pointing at a byte level, not a bit level. -Steve
[TLS]Re: Curve-popularity data?
Richard Barnes writes: > Popularity of algorithms is entirely > irrelevant to this working group's charter. To clarify, are you saying that there was some relevant charter change after the extensive TLS WG discussions that decided on the algorithms to include in TLS 1.3---which, naturally, included discussions of algorithm popularity? If so, can you please say which change you're referring to? Thanks in advance. ---D. J. Bernstein ___ TLS mailing list -- tls@ietf.org To unsubscribe send an email to tls-le...@ietf.org
[TLS]Re: Curve-popularity data?
Dennis Jackson writes: > Especially when the referenced comment was unconnected to any active > discussion within the WG or decisions made by the chairs. Hybrids are an ongoing topic of active discussion within the WG, with hundreds of messages on the WG mailing list in the past year (including 18 from me before this thread) that simultaneously mention post-quantum crypto and X25519. Beyond X25519 hybrids, there have been proposals to use, or at least allow, other curves in hybrids. It's clear that the WG will end up deciding what exactly to do for TLS. I designed X25519 in the first place to address various problems created by the NSA/NIST curves. Subsequent research has found even more reasons to recommend X25519 over P-256. So, of course, I recommend focusing on X25519 as the default curve for hybrids, as in the PQ deployment in OpenSSH, ALTS, etc., rather than using P-256 as in PQ3. (Sure, NIST took until 2023 to standardize Ed25519 and still hasn't standardized X25519. But my understanding is that NIST allows X25519+PQ when the PQ part is a NIST standard. More fundamentally, I think NIST standards shouldn't be allowed to drag down IETF standards---this would have stopped TLS from using X25519 in the first place!) Recently some people have instead been advocating P-256 over X25519--- not just for TLS, but certainly including TLS. See, e.g., the WG email dated 02 Jun 2024 23:02:39 +0200, confirming that there was already a plan to raise this on the WG list: I actually meant to bring this up ... it would actually make my life much easier if the one universal hybrid (and/or default client key share) was P-256+ML-KEM-768. I had, obviously before seeing that email, been pointed to a statement from October giving one of the explicit rationales for considering P-256: Should we still use 25519 for all new designs? Or should we take seriously the idea of using the P curves again? ... 
I think we should take it seriously because P 256 is the most popular curve in the world besides the bitcoin curve. And I don't have head-to-head numbers, and the bitcoin curve is SEC P, but P 256 is the most popular curve on the internet. So certificates, TLS, handshakes, all of that is like 70 plus percent negotiated with the P 256 curve. That was in another venue. That venue isn't a mailing list allowing open discussion. The TLS WG mailing list is an obvious venue for discussion: the source was appointed TLS co-chair in November; the quote mentions specifically "TLS, handshakes"; and, again, the TLS WG is certainly going to be taking action here. So I'm baffled at the notion that this is off topic for the TLS WG. I started this thread explicitly asking for the basis for the "world", "internet", and "handshake" popularity claims quoted above. I would expect the response to simply be a pointer to the data source (or a retraction of the claims if they aren't based on data), so that subsequent decisions can take that information into account. The TLS measurements that have been posted to the list so far are all very far from the "70 plus percent" claim, but they also have noticeable differences from each other (e.g., P-256 is reportedly 15% of the curves selected by Chrome handshakes on Windows, while other reports give much smaller percentages of handshakes selecting P-256), so it seems possible that the claims are coming from different measurements. Such divergence would be very interesting to study. ---D. J. Bernstein ___ TLS mailing list -- tls@ietf.org To unsubscribe send an email to tls-le...@ietf.org
Re: bool passed by ref, safe or not ?
On Tuesday, 4 June 2024 at 16:58:50 UTC, Basile B. wrote: question in the header, code in the body, execute on an X86 or X86_64 CPU

```d
module test;

void setIt(ref bool b) @safe
{
    b = false;
}

void main(string[] args)
{
    ushort a = 0b1111_1111_1111_1111;
    bool* b = cast(bool*) &a;
    setIt(*b);
    assert(a == 0b1111_1111_0000_0000); // what actually happens
    assert(a == 0b1111_1111_1111_1110); // what would be safe
}
```

I understand that the notion of `bool` doesn't exist on X86, hence what will be used is rather an instruction that writes the lower 8 bits, but with a 7-bit corruption. Do I corrupt memory here or not? Is that a safety violation?

No, everything is fine. A bool is the same size as a byte or char, so your cast makes a pointer to a byte. And this byte has to be made completely zero by setIt; otherwise it would not be false in the sense of the bool type.
bool passed by ref, safe or not ?
question in the header, code in the body, execute on an X86 or X86_64 CPU

```d
module test;

void setIt(ref bool b) @safe
{
    b = false;
}

void main(string[] args)
{
    ushort a = 0b1111_1111_1111_1111;
    bool* b = cast(bool*) &a;
    setIt(*b);
    assert(a == 0b1111_1111_0000_0000); // what actually happens
    assert(a == 0b1111_1111_1111_1110); // what would be safe
}
```

I understand that the notion of `bool` doesn't exist on X86, hence what will be used is rather an instruction that writes the lower 8 bits, but with a 7-bit corruption. Do I corrupt memory here or not? Is that a safety violation?
Re: How to pass in reference a fixed array in parameter
On Tuesday, 4 June 2024 at 12:22:23 UTC, Eric P626 wrote: I tried to find a solution on the internet, but could not find anything, I stumble a lot on threads about Go or Rust language even if I specify "d language" in my search. Aside from the excellent answer already present, I wanted to mention that searching with "dlang" has helped target my searches. Welcome to D! (From another newbie.) Andy
Re: lvm2 deadlock
On Tue, 4 Jun 2024, Roger Heflin wrote: My experience is that heavy disk io/batch disk io systems work better with these values being smallish. I don't see a use case for having large values. It seems to have no real upside and several downsides. Get the buffer size small enough and you will still get pauses to clear the writes, but the pauses will be short enough to not be a problem.

Not a normal situation, but I should mention my recent experience. One of the disks in an underlying RAID was going bad. It still worked, but the disk struggled manfully with multiple retries and recalibrates to complete many reads/writes - i.e. it was extremely slow. I was running into all kinds of strange boundary conditions because of this. E.g. VMs were getting timeouts on their virtio disk devices, leading to file system corruption and other issues. I was not modifying any LVM volumes, so did not run into any problems with LVM - but that is a boundary condition to keep in mind. You don't necessarily need to fully work under such conditions, but need to do something sane.
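For concreteness, the writeback buffering discussed above is typically tuned through the `vm.dirty_*` sysctls (an assumption on my part; the posts don't name the exact knobs). A small-buffer setup might look like:

```
# /etc/sysctl.d/99-writeback.conf -- hypothetical example values
# Capping dirty memory in bytes keeps flush pauses short even on slow disks.
vm.dirty_background_bytes = 67108864   # start background writeback at 64 MiB
vm.dirty_bytes = 268435456             # throttle writers at 256 MiB dirty
```

Applied with `sysctl --system`; suitable values depend on the device's sustained write throughput.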
[USRP-users] Re: usrp x310
On 04/06/2024 09:23, Moussa GUEMDANI wrote: Hello, I would like to know if I can use the usrp x310 as an O-RU, connected to a CU/DU via openfronthaul interface, (split 7.2x) Best regards Moussa

There is this: https://openairinterface.org/wp-content/uploads/2023/11/Neel-Pandeya-NI.pdf I'll ask Neel if there's an implementation for X310.

___
USRP-users mailing list -- usrp-users@lists.ettus.com
To unsubscribe send an email to usrp-users-le...@lists.ettus.com
Re: How to pass in reference a fixed array in parameter
On Tuesday, 4 June 2024 at 12:22:23 UTC, Eric P626 wrote: I am currently trying to learn how to program in D. I thought that I could start by trying some maze generation algorithms. I have a maze stored as a 2D array of a structure, defined as follows, which keeps track of wall positions:

~~~
struct s_cell {
    bool north = true;
    bool east = true;
    bool south = true;
    bool west = true;
}
~~~

I try to create a 2D array of fixed length and pass it as a parameter by reference. Normally, in C, I would have used a pointer as parameter and passed the address of the array. Here, I thought it would be easier just to pass a slice of the array, since a slice is a reference to the original array. So I wrote the signature like this:

~~~
void main() {
    writeln("Maze generation demo");
    s_cell [5][5] maze;
    print_maze (maze);
}

void print_maze ( s_cell [][] maze ) {
}
~~~

My idea is that print_maze uses a slice of whatever is sent as a parameter. Unfortunately, I get the following error message:

~~~
Error: function `mprmaze.print_maze(s_cell[][] maze)` is not callable using argument types `(s_cell[5][5])`
cannot pass argument `maze` of type `s_cell[5][5]` to parameter `s_cell[][] maze`
~~~

I tried to find a solution on the internet, but could not find anything; I stumble a lot on threads about the Go or Rust languages even when I specify "d language" in my search.

You have declared a static array here; it cannot be implicitly converted to a dynamic array. It is not very obvious, but it is part of the language design, to avoid unnecessary GC allocations and for C compatibility reasons in some cases (e.g. string literals known at compile time implicitly have a null appended so that their pointer can be passed as-is to C functions). IIRC you can explicitly cast it to s_cell[][] to make it work, but it will allocate a new array when you append to it.

Is there another way to pass an array by reference, using parameter modifiers like ref, in, out ...?

`ref` is exactly for that.

Or can it be done the C way, using pointers?
Absolutely, and even `ref` behind the scenes will basically do the same thing anyway.
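A minimal sketch of that `ref` variant, reusing the `s_cell` struct from the question (the fixed 5x5 dimensions become part of the signature, which is what `ref` to a static array requires):

```d
import std.stdio;

// Struct and dimensions follow the original post.
struct s_cell
{
    bool north = true;
    bool east  = true;
    bool south = true;
    bool west  = true;
}

// Taking the static array by ref: no copy, no GC allocation,
// and modifications inside the function affect the caller's array.
void print_maze(ref s_cell[5][5] maze)
{
    foreach (row; maze)
        row.writeln();
}

void main()
{
    writeln("Maze generation demo");
    s_cell[5][5] maze;
    print_maze(maze);
}
```

The trade-off is that the function only accepts 5x5 arrays; for arbitrary sizes, slicing into a `s_cell[][]` (as shown earlier in the thread) is the alternative.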
How to pass in reference a fixed array in parameter
I am currently trying to learn how to program in D. I thought that I could start by trying some maze generation algorithms. I have a maze stored as a 2D array of a structure, defined as follows, which keeps track of wall positions:

~~~
struct s_cell {
    bool north = true;
    bool east = true;
    bool south = true;
    bool west = true;
}
~~~

I try to create a 2D array of fixed length and pass it as a parameter by reference. Normally, in C, I would have used a pointer as parameter and passed the address of the array. Here, I thought it would be easier just to pass a slice of the array, since a slice is a reference to the original array. So I wrote the signature like this:

~~~
void main() {
    writeln("Maze generation demo");
    s_cell [5][5] maze;
    print_maze (maze);
}

void print_maze ( s_cell [][] maze ) {
}
~~~

My idea is that print_maze uses a slice of whatever is sent as a parameter. Unfortunately, I get the following error message:

~~~
Error: function `mprmaze.print_maze(s_cell[][] maze)` is not callable using argument types `(s_cell[5][5])`
cannot pass argument `maze` of type `s_cell[5][5]` to parameter `s_cell[][] maze`
~~~

I tried to find a solution on the internet, but could not find anything; I stumble a lot on threads about the Go or Rust languages even when I specify "d language" in my search. Is there another way to pass an array by reference, using parameter modifiers like ref, in, out ...? Or can it be done the C way, using pointers? Thank you.
[Issue 24585] Allow switch with multiple arguments
https://issues.dlang.org/show_bug.cgi?id=24585 Bolpat changed: What|Removed |Added Priority|P1 |P4 CC||qs.il.paperi...@gmail.com --
[Issue 24585] New: Allow switch with multiple arguments
https://issues.dlang.org/show_bug.cgi?id=24585 Issue ID: 24585 Summary: Allow switch with multiple arguments Product: D Version: D2 Hardware: All OS: All Status: NEW Severity: enhancement Priority: P1 Component: dmd Assignee: nob...@puremagic.com Reporter: qs.il.paperi...@gmail.com Allow `switch` statements to have more than one argument: `switch (x, y)`. Case labels then take the same number of arguments: `case (0, 0):`. The parentheses after `case` are then required. A case label matches if its components would match individually. While this should work for all types for which `switch` works, if the components' sizes add up to ≤ 64 bits, the arguments can be concatenated into an `ulong` (or smaller) and checked together. --
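The ≤ 64-bit lowering described in the enhancement request can be sketched in today's D by packing the components manually (the `pack` helper and all names here are illustrative, not part of the proposal):

```d
// Emulate the proposed `switch (x, y)` for two ushorts by
// concatenating them into a single uint.
uint pack(ushort x, ushort y) @safe pure nothrow
{
    return (cast(uint) x << 16) | y;
}

string classify(ushort x, ushort y)
{
    // `enum` forces compile-time evaluation, giving valid case labels.
    enum uint origin = pack(0, 0); // stands in for `case (0, 0):`
    enum uint east1  = pack(1, 0); // stands in for `case (1, 0):`
    switch (pack(x, y))
    {
        case origin: return "origin";
        case east1:  return "east neighbour";
        default:     return "other";
    }
}
```

The proposal would make the compiler generate essentially this packing automatically whenever the combined components fit in an `ulong`.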
Re: need help to use C++ callback from garnet
On Wednesday, 29 May 2024 at 09:01:13 UTC, evilrat wrote: On Wednesday, 29 May 2024 at 07:47:01 UTC, Dakota wrote: [...] (here is the signature of callback) https://github.com/microsoft/garnet/blob/ade2991f3737b9b5e3151d0dd0b614adfd4bcecd/libs/storage/Tsavorite/cc/src/device/async.h#L25 [...] Thanks for the tips.
Re: LDC 1.39.0-beta1
On Monday, 3 June 2024 at 19:42:42 UTC, kinke wrote: Glad to announce the first beta for LDC 1.39. Major changes: * Based on D 2.109.0. * LLVM for prebuilt packages bumped to v18.1.6. * Support for LLVM 11-14 was dropped. The CLI options `-passmanager` and `-opaque-pointers` were removed. Full release log and downloads: https://github.com/ldc-developers/ldc/releases/tag/v1.39.0-beta1 Please help test, and thanks to all contributors & sponsors!

1.38.0 works fine with `-mtriple=x86_64-w64-mingw32 -march=x86-64` from Linux. v1.39.0-beta1 throws this error:

```sh
call void @__assert_fail(ptr @.str, ptr @.str.8, i32 8860, ptr @.str.24) #2, !dbg !515
Incorrect number of arguments passed to called function!
call void @__assert_fail(ptr @.str.23, ptr @.str.8, i32 8858, ptr @.str.24) #2, !dbg !512
LLVM ERROR: Broken module found, compilation aborted!
#0 0x55b9b789c697 llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) (/usr/local/ldc2/bin/ldc2+0x6dcb697)
#1 0x55b9b789a48c llvm::sys::RunSignalHandlers() (/usr/local/ldc2/bin/ldc2+0x6dc948c)
#2 0x55b9b789cd3f SignalHandler(int) Signals.cpp:0:0
#3 0x7f2a09446050 (/lib/x86_64-linux-gnu/libc.so.6+0x3c050)
#4 0x7f2a09494e2c __pthread_kill_implementation ./nptl/pthread_kill.c:44:76
#5 0x7f2a09445fb2 raise ./signal/../sysdeps/posix/raise.c:27:6
#6 0x7f2a09430472 abort ./stdlib/abort.c:81:7
#7 0x55b9b7805372 llvm::report_fatal_error(llvm::Twine const&, bool) (/usr/local/ldc2/bin/ldc2+0x6d34372)
#8 0x55b9b78051a6 (/usr/local/ldc2/bin/ldc2+0x6d341a6)
#9 0x55b9b76945ac (/usr/local/ldc2/bin/ldc2+0x6bc35ac)
#10 0x55b9b7b7841d llvm::detail::PassModelllvm::VerifierPass, llvm::PreservedAnalyses, llvm::AnalysisManager>::run(llvm::Module&, llvm::AnalysisManager&) ld-temp.o:0:0
#11 0x55b9b766b214 llvm::PassManagerllvm::AnalysisManager>::run(llvm::Module&, llvm::AnalysisManager&) (/usr/local/ldc2/bin/ldc2+0x6b9a214)
#12 0x55b9b7b72696 runOptimizationPasses(llvm::Module*) (/usr/local/ldc2/bin/ldc2+0x70a1696)
#13 0x55b9b7c0200e writeModule(llvm::Module*, char const*) (/usr/local/ldc2/bin/ldc2+0x713100e)
#14 0x55b9b7c010da ldc::CodeGenerator::writeAndFreeLLModule(char const*) (/usr/local/ldc2/bin/ldc2+0x71300da)
#15 0x55b9b7c01b47 ldc::CodeGenerator::emit(Module*) (/usr/local/ldc2/bin/ldc2+0x7130b47)
#16 0x55b9b4640238 codegenModules(Array&) (/usr/local/ldc2/bin/ldc2+0x3b6f238)
#17 0x55b9b45deae1 mars_tryMain(Param&, Array&) (/usr/local/ldc2/bin/ldc2+0x3b0dae1)
#18 0x55b9b4643de0 cppmain() (/usr/local/ldc2/bin/ldc2+0x3b72de0)
#19 0x55b9b7dbc63d _D2rt6dmain212_d_run_main2UAAamPUQgZiZ6runAllMFZv (/usr/local/ldc2/bin/ldc2+0x72eb63d)
#20 0x55b9b7dbc418 _d_run_main2 (/usr/local/ldc2/bin/ldc2+0x72eb418)
#21 0x55b9b7dbc22d _d_run_main (/usr/local/ldc2/bin/ldc2+0x72eb22d)
#22 0x55b9b7bfe6e8 main (/usr/local/ldc2/bin/ldc2+0x712d6e8)
#23 0x7f2a0943124a __libc_start_call_main ./csu/../sysdeps/nptl/libc_start_call_main.h:74:3
#24 0x7f2a09431305 call_init ./csu/../csu/libc-start.c:128:20
#25 0x7f2a09431305 __libc_start_main ./csu/../csu/libc-start.c:347:5
#26 0x55b9b4647d5e _start (/usr/local/ldc2/bin/ldc2+0x3b76d5e)
Error: Error executing /usr/local/ldc2/bin/ldc2: Aborted
```
Integrated: 8332866: Crash in ImageIO JPEG decoding when MEM_STATS in enabled
On Fri, 24 May 2024 08:37:25 GMT, Jayathirth D V wrote: > In IJG library's jmemmgr.c file we can define MEM_STATS(by default this flag > is disabled and we don't see this issue) to enable printing of memory trace > logs when we have OOM. But if we enable it we get crash while disposing IJG > stored objects in jmemmgr->free-pool() function. > > This is happening because we delete the error handler before we actually > start deleting IJG stored objects and while freeing the IJG objects we try to > access cinfo->err->trace_level of error handler. This early deletion of error > handler is happening in imageioJPEG.c->imageio_dispose() function. > > Moved the logic to delete error handler after we are done with deleting IJG > stored objects, after this change there is no crash. There is no regression > test because this issue is seen only when we enable MEM_STATS flag in IJG > library. Ran jtreg ImageIO tests with code update and i don't see any > regressions. > > I have verified that this issue doesn't effect SplashScreen code path and > disposing of IJG objects is handled differently in SplashScreen. This pull request has now been integrated. Changeset: ca307263 Author:Jayathirth D V URL: https://git.openjdk.org/jdk/commit/ca3072635215755766575b4eb70dc6267969a550 Stats: 5 lines in 1 file changed: 2 ins; 2 del; 1 mod 8332866: Crash in ImageIO JPEG decoding when MEM_STATS in enabled Reviewed-by: abhiscxk, psadhukhan - PR: https://git.openjdk.org/jdk/pull/19386
Re: RFR: 8332866: Crash in ImageIO JPEG decoding when MEM_STATS in enabled [v2]
> In IJG library's jmemmgr.c file we can define MEM_STATS(by default this flag > is disabled and we don't see this issue) to enable printing of memory trace > logs when we have OOM. But if we enable it we get crash while disposing IJG > stored objects in jmemmgr->free-pool() function. > > This is happening because we delete the error handler before we actually > start deleting IJG stored objects and while freeing the IJG objects we try to > access cinfo->err->trace_level of error handler. This early deletion of error > handler is happening in imageioJPEG.c->imageio_dispose() function. > > Moved the logic to delete error handler after we are done with deleting IJG > stored objects, after this change there is no crash. There is no regression > test because this issue is seen only when we enable MEM_STATS flag in IJG > library. Ran jtreg ImageIO tests with code update and i don't see any > regressions. > > I have verified that this issue doesn't effect SplashScreen code path and > disposing of IJG objects is handled differently in SplashScreen. Jayathirth D V has updated the pull request incrementally with one additional commit since the last revision: Update copyright year - Changes: - all: https://git.openjdk.org/jdk/pull/19386/files - new: https://git.openjdk.org/jdk/pull/19386/files/abe4de70..69e9d1c7 Webrevs: - full: https://webrevs.openjdk.org/?repo=jdk=19386=01 - incr: https://webrevs.openjdk.org/?repo=jdk=19386=00-01 Stats: 1 line in 1 file changed: 0 ins; 0 del; 1 mod Patch: https://git.openjdk.org/jdk/pull/19386.diff Fetch: git fetch https://git.openjdk.org/jdk.git pull/19386/head:pull/19386 PR: https://git.openjdk.org/jdk/pull/19386
[Logica-l] LSFA 2024: Final Call For Papers
...@furg.br
Re: [PATCH] Fortran: fix ALLOCATE with SOURCE=, zero-length character [PR83865]
On 6/3/24 1:12 PM, Harald Anlauf wrote: Dear all, the attached simple patch fixes an ICE for ALLOCATE with SOURCE= of a deferred-length character array with source-expression being an array of character with length zero. The reason was that the array descriptor of the source-expression was discarded in the special case of length 0. Solution: restrict special case to rank 0. Regtested on x86_64-pc-linux-gnu. OK for mainline? The offending code was introduced during 7-development, so it is technically a regression. I would therefore like to backport after waiting for a week or two. Thanks, Harald OK and thanks for patch. Jerry
[jira] [Commented] (HTTPCLIENT-2329) BasicHttpClientConnectionManager reuses closed connection objects
[ https://issues.apache.org/jira/browse/HTTPCLIENT-2329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17851849#comment-17851849 ] Gary D. Gregory commented on HTTPCLIENT-2329: - [~ttang] I just uploaded a build, please try again. > BasicHttpClientConnectionManager reuses closed connection objects > - > > Key: HTTPCLIENT-2329 > URL: https://issues.apache.org/jira/browse/HTTPCLIENT-2329 > Project: HttpComponents HttpClient > Issue Type: Bug >Affects Versions: 5.3.1 >Reporter: Teresa Tang >Priority: Major > Attachments: TestBasicConnectionManager.java > > > In the discardEndpoint method of InternalExecRuntime.java, the endpoint and > connection are closed. The manager releases the connection with a timevalue > of 0 ms. Because 0 is not considered positive, this leads to the expiration > being set to Long.MAX_VALUE. Upon the next connection request, the manager > will continue to use this unexpired connection object, even though it is > closed. > > The intention of the 0 ms timevalue was to have the connection expire > immediately and then be discarded so that the manager will create a new > connection for the next request. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: dev-unsubscr...@hc.apache.org For additional commands, e-mail: dev-h...@hc.apache.org
[TLS]Re: Curve-popularity data?
Peter Gutmann writes:
> This will also heavily skew any statistics

Sorry, can you please clarify which statistics would be heavily skewed by Chrome retrying connections to 0.6% of servers?

Here's why I'm saying 0.6% here: My understanding is that you're talking specifically about connections from Chrome to servers that support P-256 and not X25519. Cloudflare just said that 97.6% of servers they connect to support P-256 and 97.0% support X25519. I presume that approximately all X25519 servers also support P-256 as required by the current specs.

Of course, if there are measurements coming up with a different number from 0.6%, I'd be interested in seeing that too.

---D. J. Bernstein
[TLS]Re: Curve-popularity data?
Dennis Jackson writes:
> This was used by Eli Biham and Lior Neumann to break the Bluetooth pairing
> standard back in 2018 [1]. The Bluetooth standard previously said
> implementers could choose to do full point validation or always use
> ephemeral keys, and folks opted for the less complex choice. This isn't a
> clear separator between X25519 and P-256 though, since X25519 would also
> need to reject small order points in order to avoid the same attack.

Unless I'm confused, the attack against Bluetooth's use of P-224 relied on the fact that Bluetooth's authentication covered only x-coordinates and failed to cover y-coordinates---so the attacker was free to manipulate y-coordinates. (And, yes, of course implementors didn't check whether (x,y) was on the curve.)

X25519 sends just an x-coordinate. Upgrading Bluetooth to X25519 would have simply scrapped the y-coordinate. Doesn't this mean that the full points would have been authenticated, stopping the attack? What exactly do you mean when you say "the same attack"?

To be clear, looking around enough _does_ find literature on protocols that, unlike the original DH protocol, need "contributory" behavior. In those protocols, points of low order (e.g., the point at infinity, which exists on any curve and is allowed by some point encodings) have to be rejected. The very first version of the Curve25519 web page---

    https://web.archive.org/web/20060210045513/https://cr.yp.to/ecdh.html

---already spelled out the list of inputs to reject for this case:

    How do I validate Curve25519 public keys? Don't. The Curve25519
    function was carefully designed to allow all 32-byte strings as
    Diffie-Hellman public keys. Relevant lower-level facts: ... This is
    discussed in more detail in the curve25519 paper. There are some
    unusual non-Diffie-Hellman elliptic-curve protocols that need to
    ensure "contributory" behavior. In those protocols, you should
    reject the 32-byte strings that, in little-endian form, represent 0,
    1, [various further numbers], and 2(2^255 - 19) + 1. But these
    exclusions are unnecessary for Diffie-Hellman.

For NSA/NIST ECC, ensuring contributory behavior is considerably more error-prone: sure, the list of low-order points is shorter per curve, but people again and again fail to check whether the input (x,y) is on the curve to begin with, and then the attacker has an infinite pool of low-order points to exploit. This is also a major reason that NSA/NIST ECC keeps failing to protect the secrecy of long-term DH keys.

Given how well known the oops-I-forgot-to-check failure mode is, why isn't every encryption protocol designed to just send an x-coordinate? Here are two answers:

 * For the NSA/NIST curves, people complain about the implementation impact of using x instead of (x,y).

 * Patent 6141420 (filed in 1994, expired 2014) claimed recovery of (x,y) from x and limited information about y. There's clear prior art (Eurocrypt 1992, page 171), but Certicom was scaring people for many years.

Montgomery curves neatly dodge both of these issues: the Montgomery ladder never looks at y, and it's simpler and faster than any other approach.

---D. J. Bernstein
[jira] [Commented] (HTTPCLIENT-2329) BasicHttpClientConnectionManager reuses closed connection objects
[ https://issues.apache.org/jira/browse/HTTPCLIENT-2329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17851820#comment-17851820 ] Gary D. Gregory commented on HTTPCLIENT-2329: - [~ttang] Point your POM to [https://repository.apache.org/content/repositories/snapshots/] See https://maven.apache.org/guides/mini/guide-multiple-repositories.html > BasicHttpClientConnectionManager reuses closed connection objects > - > > Key: HTTPCLIENT-2329 > URL: https://issues.apache.org/jira/browse/HTTPCLIENT-2329 > Project: HttpComponents HttpClient > Issue Type: Bug >Affects Versions: 5.3.1 >Reporter: Teresa Tang >Priority: Major > Attachments: TestBasicConnectionManager.java > > > In the discardEndpoint method of InternalExecRuntime.java, the endpoint and > connection are closed. The manager releases the connection with a timevalue > of 0 ms. Because 0 is not considered positive, this leads to the expiration > being set to Long.MAX_VALUE. Upon the next connection request, the manager > will continue to use this unexpired connection object, even though it is > closed. > > The intention of the 0 ms timevalue was to have the connection expire > immediately and then be discarded so that the manager will create a new > connection for the next request.
[TLS]Re: Curve-popularity data?
David Adrian writes:
> I believe we were also discussing _certificates_

Yes, I quoted that from the outset:

    P 256 is the most popular curve in the world besides the bitcoin
    curve. And I don't have head to head numbers, and the bitcoin curve
    is SEC P, but P 256 is most popular curve on the internet. So
    certificates, TLS, handshakes, all of that is like 70 plus percent
    negotiated with the P 256 curve.

Immediately after quoting that, I wrote the following: "Last I heard, _certificates_ hadn't upgraded to allowing Ed25519 yet. My question is about the 'handshake' claim, and more broadly about the 'internet' and 'world' claims."

> you decided to take the comment out of context

No. The specific quote that I had been pointed to was shorter. I looked at quite a bit of text before and after that, and ended up giving the longer quote shown above. If you think that the puzzling aspects of what I quoted are explained by further context, please give a fuller quote and explain the relevance. In any event, please refrain from personal attacks. Thanks in advance.

> and single out "the TLS co-chair"

As I said, the statement is from one of the current TLS co-chairs, a month before the co-chair appointment. The position as co-chair adds to the importance of ensuring accurate information.

> in a quote that begins with "I don't have the numbers".

Let's look again at what I quoted:

    P 256 is the most popular curve in the world besides the bitcoin
    curve. And I don't have head to head numbers, and the bitcoin curve
    is SEC P, but P 256 is most popular curve on the internet. So
    certificates, TLS, handshakes, all of that is like 70 plus percent
    negotiated with the P 256 curve.

The reader understands "I don't have head to head numbers" as referring to P-256 vs. the Bitcoin curve. That's not the part I'm asking about. Where does the "certificates, TLS, handshakes, all of that is like 70 plus percent negotiated with the P 256 curve" number come from? Where's the data showing that "P 256 is most popular curve on the internet", or "in the world besides the bitcoin curve"?

> the utter irrelevance of current popularity of curves to the
> introduction of a _new_ standard

It's obviously not _the same_ question, but I don't agree with the extreme claim of "utter irrelevance". The original text also doesn't agree. It says that P-256 should be taken "seriously" for "new designs" because P-256 is "the most popular curve" (aside from maybe the Bitcoin curve):

    Should we still use 25519 for all new designs? Or should we take
    seriously at the idea of using the P curves again? ... I think we
    should take seriously because P 256 is the most popular curve in the
    world besides the bitcoin curve. And I don't have head to head
    numbers, and the bitcoin curve is SEC P, but P 256 is most popular
    curve on the internet. So certificates, TLS, handshakes, all of that
    is like 70 plus percent negotiated with the P 256 curve.

People hearing that P-256 is the most popular curve on the Internet _presume_ that other curves don't have important advantages, and _worry_ that moving to another curve will incur tremendous startup costs. Are these guarantees? Of course not. Every solution that takes over because of its advantages has some initial time where it hasn't taken over; extrapolating from the initial unpopularity would be a mistake. But popularity measurements still give us _some_ sort of aggregate idea of what people care about.

The picture is very different if the facts are instead that X25519 is the most popular curve in handshakes, and more broadly on the Internet. Readers hearing this become much less worried about the startup costs, and _presume_ that people actually do care about the advantages. Again: relationships, not guarantees.

---D. J. Bernstein
Re: Release D 2.109.0
On Sunday, 2 June 2024 at 15:51:04 UTC, Iain Buclaw wrote: Glad to announce D 2.109.0, ♥ to the 44 contributors. This release comes with 15 major changes and 26 fixed Bugzilla issues, including: Thanks! I've written a changelog entry about reinterpreting a byte as bool being unsafe: https://github.com/dlang/dmd/pull/16560 I hope it's OK to update the online changelog with that.
LDC 1.39.0-beta1
Glad to announce the first beta for LDC 1.39. Major changes: * Based on D 2.109.0. * LLVM for prebuilt packages bumped to v18.1.6. * Support for LLVM 11-14 was dropped. The CLI options `-passmanager` and `-opaque-pointers` were removed. Full release log and downloads: https://github.com/ldc-developers/ldc/releases/tag/v1.39.0-beta1 Please help test, and thanks to all contributors & sponsors!
Re: [edk2-devel] GitHub PR Code Review process now active
The reason to allow a draft PR is to allow contributors to run all the CI tests to see if they pass and resolve issues before starting a review. The CI tests include combinations of OS/compiler that not all contributors have available. Mike > -Original Message- > From: Neal Gompa > Sent: Monday, June 3, 2024 11:47 AM > To: devel@edk2.groups.io; Kinney, Michael D > Subject: Re: [edk2-devel] GitHub PR Code Review process now active > > Hmm, I don't see a setting for it anymore, maybe that's not a thing anymore? > > I seemingly recall that draft PRs didn't get CI runs, but if that's > not a thing anymore, then that's fine. > > That said, draft PRs cannot be reviewed, so we should not be telling > people to make draft PRs. > > > On Mon, Jun 3, 2024 at 12:26 PM Michael D Kinney via groups.io > > wrote: > > > > CI jobs are dispatched to both GitHub Actions and Azure Pipelines. > > > > For Draft PRs, I see both GitHub Actions and Azure Pipelines jobs running. > > > > This must imply that edk2 repo allows this. Do you happen to know where > > this is configurable or a link to GitHub docs for configuration? > > > > Mike > > > > > -Original Message- > > > From: Neal Gompa > > > Sent: Monday, June 3, 2024 9:13 AM > > > To: devel@edk2.groups.io; Kinney, Michael D > > > Subject: Re: [edk2-devel] GitHub PR Code Review process now active > > > > > > On Tue, May 28, 2024 at 2:53 PM Michael D Kinney via groups.io > > > wrote: > > > > > > > > Hello, > > > > > > > > The GitHub PR code review process is now active. Please > > > > use the new PR based code review process for all new > > > > submissions starting today. > > > > > > > > * The Wiki has been updated with the process changes. > > > > > > > > https://github.com/tianocore/tianocore.github.io/wiki/EDK-II- > Development- > > > Process > > > > > > > > Big thanks to Michael Kubacki for writing up all the > > > > changes based on the RFC proposal and community discussions. 
> > > > > > > > We will learn by using, so if you see anything missing or > > > > incorrect or clarifications needed, please send feedback > > > > here so the Wiki pages can be updated quickly for everyone. > > > > > > > > * The edk2 repo settings have been updated to require > > > > a GitHub PR code review approval before merging and > > > > all conversations must be resolved before merging. > > > > > > > > * A PR has been opened that removes the requirement for > > > > Cc: tags in the commit messages and is the first PR > > > > that will use the new process. This PR needs to be > > > > reviewed and merged to support the revised commit > > > > message format. > > > > > > > > https://github.com/tianocore/edk2/pull/5688 > > > > > > > > https://github.com/tianocore/tianocore.github.io/wiki/Commit-Message- > > > Format > > > > > > > > * Please use "Draft" PRs to run CI without any reviews. > > > > Once ready for reviews, convert from "Draft" to > > > > "Ready for Review". > > > > > > > > > > Generally GitHub doesn't allow CI to run on PRs created as draft pull > > > requests. Was this changed for edk2? > > > > > > > > > -- > > > 真実はいつも一つ!/ Always, there's only one truth! > > > > > > > > > > > > > -- > 真実はいつも一つ!/ Always, there's only one truth! -=-=-=-=-=-=-=-=-=-=-=- Groups.io Links: You receive all messages sent to this group. View/Reply Online (#119434): https://edk2.groups.io/g/devel/message/119434 Mute This Topic: https://groups.io/mt/106355103/21656 Group Owner: devel+ow...@edk2.groups.io Unsubscribe: https://edk2.groups.io/g/devel/unsub [arch...@mail-archive.com] -=-=-=-=-=-=-=-=-=-=-=-
[Issue 24582] Detect unsafe `cast(bool[])`
https://issues.dlang.org/show_bug.cgi?id=24582 --- Comment #3 from Nick Treleaven ---

bool v = cast(bool) 2;
ubyte[] a = [2, 4];
auto b = cast(bool[]) a;
auto c = cast(bool[]) [2, 4]; // literal cast

import std.stdio;
writeln(*cast(byte*)&v);    // 1, OK
writeln(*cast(byte*)b.ptr); // 2, unsafe
writeln(*cast(byte*)c.ptr); // 1, OK

So only the runtime array cast is unsafe. --
[Issue 24582] Detect unsafe `cast(bool[])`
https://issues.dlang.org/show_bug.cgi?id=24582 Nick Treleaven changed: What|Removed |Added Summary|Detect unsafe casting to|Detect unsafe |bool|`cast(bool[])` --
[jira] [Commented] (CONFIGURATION-847) Property with an empty string value are not processed in the current main (2.11.0-snapshot)
[ https://issues.apache.org/jira/browse/CONFIGURATION-847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17851778#comment-17851778 ] Gary D. Gregory commented on CONFIGURATION-847: --- PR merged to git master. Build in [https://repository.apache.org/content/repositories/snapshots/] > Property with an empty string value are not processed in the current main > (2.11.0-snapshot) > --- > > Key: CONFIGURATION-847 > URL: https://issues.apache.org/jira/browse/CONFIGURATION-847 > Project: Commons Configuration > Issue Type: Bug >Affects Versions: Nightly Builds >Reporter: Andrea Bollini >Assignee: Gary D. Gregory >Priority: Critical > Fix For: 2.11.0 > > > I hit a side effect of the > https://issues.apache.org/jira/browse/CONFIGURATION-846 recently solved. > {{Assuming that we have a property file as configuration source like that}} > {{test.empty.property =}} > > and that we will try to inject such property in a spring bean > {{@Value("${test.empty.property"})}} > {{private String emptyValue;}} > {{ we will get an exception like: BeanDefinitionStore Invalid bean > definition ... Could not resolve placeholder}} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (CONFIGURATION-847) Property with an empty string value are not processed in the current main (2.11.0-snapshot)
[ https://issues.apache.org/jira/browse/CONFIGURATION-847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory resolved CONFIGURATION-847. --- Resolution: Fixed > Property with an empty string value are not processed in the current main > (2.11.0-snapshot) > --- > > Key: CONFIGURATION-847 > URL: https://issues.apache.org/jira/browse/CONFIGURATION-847 > Project: Commons Configuration > Issue Type: Bug >Affects Versions: Nightly Builds >Reporter: Andrea Bollini >Priority: Critical > Fix For: 2.11.0 > > > I hit a side effect of the > https://issues.apache.org/jira/browse/CONFIGURATION-846 recently solved. > {{Assuming that we have a property file as configuration source like that}} > {{test.empty.property =}} > > and that we will try to inject such property in a spring bean > {{@Value("${test.empty.property"})}} > {{private String emptyValue;}} > {{ we will get an exception like: BeanDefinitionStore Invalid bean > definition ... Could not resolve placeholder}} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (CONFIGURATION-847) Property with an empty string value are not processed in the current main (2.11.0-snapshot)
[ https://issues.apache.org/jira/browse/CONFIGURATION-847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary D. Gregory updated CONFIGURATION-847: -- Assignee: Gary D. Gregory > Property with an empty string value are not processed in the current main > (2.11.0-snapshot) > --- > > Key: CONFIGURATION-847 > URL: https://issues.apache.org/jira/browse/CONFIGURATION-847 > Project: Commons Configuration > Issue Type: Bug >Affects Versions: Nightly Builds >Reporter: Andrea Bollini >Assignee: Gary D. Gregory >Priority: Critical > Fix For: 2.11.0 > > > I hit a side effect of the > https://issues.apache.org/jira/browse/CONFIGURATION-846 recently solved. > {{Assuming that we have a property file as configuration source like that}} > {{test.empty.property =}} > > and that we will try to inject such property in a spring bean > {{@Value("${test.empty.property"})}} > {{private String emptyValue;}} > {{ we will get an exception like: BeanDefinitionStore Invalid bean > definition ... Could not resolve placeholder}} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[Issue 20423] di generator does not emit nothrow on dtors
https://issues.dlang.org/show_bug.cgi?id=20423 Richard (Rikki) Andrew Cattermole changed: What|Removed |Added CC||alphaglosi...@gmail.com Hardware|x86_64 |All Summary|Interface file and mangling |di generator does not emit |do not agree about nothrow |nothrow on dtors |dtors | OS|Linux |All Severity|enhancement |blocker --
Re: [edk2-devel] GitHub PR Code Review process now active
CI jobs are dispatched to both GitHub Actions and Azure Pipelines. For Draft PRs, I see both GitHub Actions and Azure Pipelines jobs running. This must imply that edk2 repo allows this. Do you happen to know where this is configurable or a link to GitHub docs for configuration? Mike > -Original Message- > From: Neal Gompa > Sent: Monday, June 3, 2024 9:13 AM > To: devel@edk2.groups.io; Kinney, Michael D > Subject: Re: [edk2-devel] GitHub PR Code Review process now active > > On Tue, May 28, 2024 at 2:53 PM Michael D Kinney via groups.io > wrote: > > > > Hello, > > > > The GitHub PR code review process is now active. Please > > use the new PR based code review process for all new > > submissions starting today. > > > > * The Wiki has been updated with the process changes. > > > > https://github.com/tianocore/tianocore.github.io/wiki/EDK-II-Development- > Process > > > > Big thanks to Michael Kubacki for writing up all the > > changes based on the RFC proposal and community discussions. > > > > We will learn by using, so if you see anything missing or > > incorrect or clarifications needed, please send feedback > > here so the Wiki pages can be updated quickly for everyone. > > > > * The edk2 repo settings have been updated to require > > a GitHub PR code review approval before merging and > > all conversations must be resolved before merging. > > > > * A PR has been opened that removes the requirement for > > Cc: tags in the commit messages and is the first PR > > that will use the new process. This PR needs to be > > reviewed and merged to support the revised commit > > message format. > > > > https://github.com/tianocore/edk2/pull/5688 > > > > https://github.com/tianocore/tianocore.github.io/wiki/Commit-Message- > Format > > > > * Please use "Draft" PRs to run CI without any reviews. > > Once ready for reviews, convert from "Draft" to > > "Ready for Review". > > > > Generally GitHub doesn't allow CI to run on PRs created as draft pull > requests. 
Was this changed for edk2? > > > -- > 真実はいつも一つ!/ Always, there's only one truth! -=-=-=-=-=-=-=-=-=-=-=- Groups.io Links: You receive all messages sent to this group. View/Reply Online (#119430): https://edk2.groups.io/g/devel/message/119430 Mute This Topic: https://groups.io/mt/106355103/21656 Group Owner: devel+ow...@edk2.groups.io Unsubscribe: https://edk2.groups.io/g/devel/unsub [arch...@mail-archive.com] -=-=-=-=-=-=-=-=-=-=-=-
[TLS]Re: Curve-popularity data?
Thanks to Martin Thomson, Bas Westerbaan, and David Adrian for the measurement data.

I'm still puzzled as to what led to the statement that I quoted at the beginning:

   P 256 is the most popular curve in the world besides the bitcoin
   curve. And I don't have head to head numbers, and the bitcoin curve
   is SEC P, but P 256 is most popular curve on the internet. So
   certificates, TLS, handshakes, all of that is like 70 plus percent
   negotiated with the P 256 curve.

Maybe the TLS co-chair has a comment? Again, I understand that certificates haven't upgraded to allowing Ed25519 yet; my question is about the "handshake", "internet", and "world" claims. In context, these popularity claims were presented as an argument for regressing to P-256: "Should we still use 25519 for all new designs? Or should we take seriously at the idea of using the P curves again? ... I think we should take seriously because P 256 is the most popular curve in the world besides the bitcoin curve."

John Mattsson writes:
> If you are doing hybrid for reason number 1, and you are currently
> using P-384 or P-521 to get a higher security level, you likely want
> to continue to use P-384 or P-521.

I agree that the obvious way to address the "Yikes this could be losing security" objection to post-quantum rollout---which is a reasonable objection both because of attacks against the math and because of attacks against the software---is to have a hybrid choose whichever pre-quantum system people were using already. However, endless combinations create their own slowdowns. If most connections are using X25519 anyway, then what's best for fast rollout is to get X25519+PQ moving as quickly as possible, not delaying that to figure out what should be done for the fringe cases (maybe X448+PQ).

> I think the NIST P-curves are well-designed for being published in
> 1998.

No, the Montgomery ladder was already introduced in Montgomery's 1987 paper. The speed and simplicity of the ladder were clear from the paper.
NSA's rationale for taking Weierstrass curves in Jacobian coordinates was the false claim that this provides "the fastest arithmetic on elliptic curves". That's a quote from IEEE P1363, so there can't have been any serious review. See the "fake mathematics" section in https://blog.cr.yp.to/20220805-nsa.html for another example. ---D. J. Bernstein ___ TLS mailing list -- tls@ietf.org To unsubscribe send an email to tls-le...@ietf.org
[Issue 24582] Detect unsafe casting to bool
https://issues.dlang.org/show_bug.cgi?id=24582 Dennis changed: What|Removed |Added CC||dkor...@live.nl Hardware|x86_64 |All OS|Linux |All --- Comment #2 from Dennis --- These casts only produce safe values 0 and 1 right? --
[Issue 24584] New: [phobos] `make unittest` should not rerun tests unnecessarily
https://issues.dlang.org/show_bug.cgi?id=24584 Issue ID: 24584 Summary: [phobos] `make unittest` should not rerun tests unnecessarily Product: D Version: D2 Hardware: x86_64 OS: Linux Status: NEW Severity: enhancement Priority: P1 Component: phobos Assignee: nob...@puremagic.com Reporter: n...@geany.org Currently it seems to run every test, regardless of whether relevant Phobos dependencies have changed. This makes it a very slow process to find & fix all errors when introducing a dmd error. Alternatively, having some way of running all tests despite any failing tests would be useful. --
[Issue 3947] Implicit and explicit casting of floating point to bool produces different results
https://issues.dlang.org/show_bug.cgi?id=3947 Nick Treleaven changed: What|Removed |Added CC||n...@geany.org --- Comment #1 from Nick Treleaven --- > - finite real numbers > -1 and < 1 are false That is not the case, at least for all compilers currently on run.dlang.io: static assert(cast(bool) 0.1f); Built-in complex numbers are now deprecated. --
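[Editor's note: a minimal program, my own illustration rather than part of the report, showing that an explicit `cast(bool)` in D is a non-zero test, consistent with the `static assert` above:]

```d
import std.stdio;

void main()
{
    float f = 0.1f;
    // cast(bool) tests for non-zero, so 0.1f converts to true
    // even though it lies strictly between -1 and 1.
    writeln(cast(bool) f);   // true
    // The same non-zero test applies in a boolean context:
    if (f) writeln("taken"); // printed, since f != 0
}
```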
[jira] [Commented] (NET-731) FTPSClient no longer supports fileTransferMode (eg DEFLATE)
[ https://issues.apache.org/jira/browse/NET-731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17851668#comment-17851668 ] Gary D. Gregory commented on NET-731: - Hello [~fanningpj] Thank you for your report. What version are you using compared to the previous behavior? > FTPSClient no longer supports fileTransferMode (eg DEFLATE) > --- > > Key: NET-731 > URL: https://issues.apache.org/jira/browse/NET-731 > Project: Commons Net > Issue Type: Task > Components: FTP >Reporter: PJ Fanning >Priority: Major > > The new openDataSecureConnection method in FTPSClient does not support > fileTransferMode (eg DEFLATE). > https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9[ > > |https://github.com/apache/commons-net/pull/90/files#diff-b4292a5bd3e39f502d24bce1eb934384a951a120080c870cdc68c0585a78c6e9] > > The FTPSClient code used to delegate to FTPClient _openDataConnection_ > [https://github.com/apache/commons-net/blob/b5038eff135dff54e2ee2d09b94ec7d8937cb09b/src/main/java/org/apache/commons/net/ftp/FTPClient.java#L696] > This method supports `wrapOnDeflate` while openDataSecureConnection does not. > I'm not sure if FTPS supports DEFLATE transfer mode but while implementing an > Apache Pekko workaround for the NET-718, I spotted the diff. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[Issue 24583] di generator emits return scope and scope return in wrong order
https://issues.dlang.org/show_bug.cgi?id=24583 Richard (Rikki) Andrew Cattermole changed: What|Removed |Added Keywords||mangling, safe --
Re: DConf '24 Schedule & BeerConf News
Very awesome lineup this year!
[Issue 24583] New: di generator emits return scope and scope return in wrong order
https://issues.dlang.org/show_bug.cgi?id=24583 Issue ID: 24583 Summary: di generator emits return scope and scope return in wrong order Product: D Version: D2 Hardware: All OS: All Status: NEW Severity: blocker Priority: P1 Component: dmd Assignee: nob...@puremagic.com Reporter: alphaglosi...@gmail.com

Currently the .di generator does not order ``scope`` and ``return`` correctly.

Actual:
``nothrow @nogc return scope @trusted sidero.base.text.unicode.builder_utf8.StringBuilder_UTF8 sidero.base.text.unicode.builder_utf8.StringBuilder_UTF8.append(const(char)[]...)``

Generated:
``nothrow @nogc scope return @trusted sidero.base.text.unicode.builder_utf8.StringBuilder_UTF8 sidero.base.text.unicode.builder_utf8.StringBuilder_UTF8.append(const(char)[]...)``

This blocks all usage of the .di generator. --
Re: DConf '24 Schedule & BeerConf News
On Monday, 3 June 2024 at 13:26:54 UTC, Mike Parker wrote: ## The DConf '24 Schedule The DConf '24 schedule is now live: https://dconf.org/2024/index.html#schedule You'll notice that we've departed from the norm in a few places. That's because of the number of submissions we received. Typically, we receive either just enough or many more than we need. This year, we needed 16 and received submissions from 19 people. Yay! Maybe on the template for next year, have an optional rating field for "How important is this talk to you / how time-critical is this talk" so you can sort by "is fine being deferred". My Neat talk was a bit time-critical to me due to the development schedule, and of course close to my heart; conversely, I wouldn't have had a problem with moving the Dustmite talk to '25, for instance.
DConf '24 Schedule & BeerConf News
## The DConf '24 Schedule

The DConf '24 schedule is now live:

https://dconf.org/2024/index.html#schedule

You'll notice that we've departed from the norm in a few places. That's because of the number of submissions we received. Typically, we receive either just enough or many more than we need. This year, we needed 16 and received submissions from 19 people.

As we were agonizing over which 3 submissions to reject, Atila announced that he'd be willing to give up his slot this year. That meant we only needed to reject 2. Then our most recent guest speaker invitation was declined. We decided to use that slot for one of the submissions as well.

At that point, none of us wanted to reject just a single submission. We wanted to hear all of them. The only practical way to do that without spending more money on the A/V set up on the Hackathon day would be to give up either the AUA or the Lightning Talks. So in the end, we decided to drop the AUA this year. We're still going to do it as a pre-DConf live stream session. The upside there is that we don't have to limit ourselves to an hour. We'll most likely do it on the last weekend of August or the first weekend of September. I'll announce it here and add it to the DConf schedule once it's settled.

We want to thank everyone who submitted a talk. We're excited about all of them.

## BeerConf

I've also added some information about this year's real-world BeerConf:

https://dconf.org/2024/index.html#beerconf

Once again, pub hire rates are ridiculously high, even more than last year. Fortunately, an anonymous donor has stepped forward to sponsor one night at a pub called The Trinity Bell, around a 15-minute walk from the venue. Drinks are on the house for all DConf attendees who show up at the pub on September 17th, the first night of DConf, from 18:00 until the tab runs out. Click the link above for more info.

It's unlikely we'll be able to hire the pub out for the remaining two nights, so until you hear otherwise from me, the lobby bar at Travelodge Central City Road is once again designated as the default nightly gathering spot for BeerConf.

## Registration

As a reminder, June 17 is the last day of early-bird registration. If you haven't signed up yet, get it done before then to get that 15% discount:

https://dconf.org/2024/index.html#register

See you in London!
[Issue 24582] Detect unsafe casting to bool
https://issues.dlang.org/show_bug.cgi?id=24582 Dlang Bot changed: What|Removed |Added Keywords||pull --- Comment #1 from Dlang Bot --- @ntrel created dlang/dmd pull request #16558 "Fix Bugzilla 24582 - Detect unsafe casting to bool" fixing this issue: - Fix Bugzilla 24582 - Detect unsafe casting to bool https://github.com/dlang/dmd/pull/16558 --
[Issue 24582] Detect unsafe casting to bool
https://issues.dlang.org/show_bug.cgi?id=24582 Nick Treleaven changed: What|Removed |Added Keywords||safe --
[Issue 24582] New: Detect unsafe casting to bool
https://issues.dlang.org/show_bug.cgi?id=24582 Issue ID: 24582 Summary: Detect unsafe casting to bool Product: D Version: D2 Hardware: x86_64 OS: Linux Status: NEW Severity: major Priority: P1 Component: dmd Assignee: nob...@puremagic.com Reporter: n...@geany.org

Given that only 0 and 1 are safe values:
https://dlang.org/spec/function.html#safe-values

Each of the casts below should fail.

void main() @safe
{
    bool v = cast(bool) 2;
    ubyte[] a = [2, 4];
    auto b = cast(bool[]) a;
    auto c = cast(bool[]) [2, 4]; // literal cast
}

PR incoming. --
[TLS]Re: Curve-popularity data?
ioned above. Sure, there could be one hybrid for TLS and a different hybrid for other protocols, but that split has its own costs. I think everyone agrees that, all else being equal, having fewer hybrids is better---but I'm applying this idea to a broader ecosystem.

Third, even if we ignore all the other uses of X25519, my understanding of the TLS situation today is that X25519 is in basically all TLS stacks anyway, for reasons that aren't going to disappear any time soon---and, in particular, that won't be changed by the PQ rollout. So it's not true that choosing P-256+PQ instead of X25519+PQ will suddenly create simpler TLS code. As for the long term, I don't see why speculation that X25519 will go away in N years should be given higher weight than speculation that P-256 will go away in N years.

> marginal performance benefit

Why say "marginal" instead of presenting some actual numbers? I ran the script below on a 2.245GHz AMD Zen 2 core (with Turbo Boost disabled; see https://blog.cr.yp.to/20230609-turboboost.html) running Debian 12 (where the built-in OpenSSL is OpenSSL 3.0.11 from September). The script downloads and compiles lib25519, and then runs benchmarks using "openssl speed" with a beta OpenSSL "provider" for lib25519. Here's the output:

                                 op       op/s
 256 bits ecdh (nistp256)   0.0001s    12982.0
 253 bits ecdh (X25519)     0.0000s    24152.0
                               sign    verify     sign/s   verify/s
 256 bits ecdsa (nistp256)  0.0000s   0.0001s    27993.0     9834.0
                               sign    verify     sign/s   verify/s
 253 bits EdDSA (Ed25519)   0.0000s   0.0001s    69821.4    17070.0

The actual gap is bigger: OpenSSL's built-in code cheats by not measuring various key-expansion steps, whereas lib25519 never cheats.

---D. J. Bernstein

#!/bin/sh
export LD_LIBRARY_PATH="$HOME/lib"
export LIBRARY_PATH="$HOME/lib"
export CPATH="$HOME/include"
export PATH="$HOME/bin:$PATH"
(
  wget -m https://randombytes.cr.yp.to/librandombytes-latest-version.txt
  version=$(cat randombytes.cr.yp.to/librandombytes-latest-version.txt)
  wget -m https://randombytes.cr.yp.to/librandombytes-$version.tar.gz
  tar -xzf randombytes.cr.yp.to/librandombytes-$version.tar.gz
  cd librandombytes-$version
  ./configure --prefix=$HOME && make -j install
)
(
  wget -m https://cpucycles.cr.yp.to/libcpucycles-latest-version.txt
  version=$(cat cpucycles.cr.yp.to/libcpucycles-latest-version.txt)
  wget -m https://cpucycles.cr.yp.to/libcpucycles-$version.tar.gz
  tar -xzf cpucycles.cr.yp.to/libcpucycles-$version.tar.gz
  cd libcpucycles-$version
  ./configure --prefix=$HOME && make -j install
)
(
  wget -m https://lib25519.cr.yp.to/lib25519-latest-version.txt
  version=$(cat lib25519.cr.yp.to/lib25519-latest-version.txt)
  wget -m https://lib25519.cr.yp.to/lib25519-$version.tar.gz
  tar -xzf lib25519.cr.yp.to/lib25519-$version.tar.gz
  cd lib25519-$version
  ./use-s2n-bignum
  ./configure --prefix=$HOME && make -j install
)
(
  wget -m https://cr.yp.to/2024/20240314/openssl_x25519_lib25519.c
  wget -m https://cr.yp.to/2024/20240314/openssl_ed25519_lib25519.c
  wget -m https://cr.yp.to/2024/20240314/xtest
  wget -m https://cr.yp.to/2024/20240314/edtest
  cd cr.yp.to/2024/20240314
  sh xtest
  sh edtest
)

___ TLS mailing list -- tls@ietf.org To unsubscribe send an email to tls-le...@ietf.org
[Issue 24581] Add a @gc attribute already
https://issues.dlang.org/show_bug.cgi?id=24581 Bolpat changed: What|Removed |Added CC||qs.il.paperi...@gmail.com --
[Issue 24581] New: Add a @gc attribute already
https://issues.dlang.org/show_bug.cgi?id=24581 Issue ID: 24581 Summary: Add a @gc attribute already Product: D Version: D2 Hardware: All OS: All Status: NEW Severity: enhancement Priority: P1 Component: dmd Assignee: nob...@puremagic.com Reporter: qs.il.paperi...@gmail.com Please add `@gc` as the contravariant counterpart of `@nogc`, like `@system` is to `@safe` and `throw` is to `nothrow`. --
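[Editor's note: a sketch of how the requested attribute might read. This is hypothetical syntax -- `@gc` does not exist in the language today -- shown by analogy with `@system`/`@safe` and `throw`/`nothrow`:]

```d
@nogc:

void fastPath()
{
    // int[] a = [1, 2]; // error: may allocate while @nogc is in effect
}

// Hypothetical: @gc would switch GC allocation back on for one symbol,
// just as @system opts a single function out of an enclosing @safe:
@gc void slowPath()
{
    int[] a = [1, 2]; // allowed again under the proposal
}
```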
[Issue 19916] union member access should be un-@safe
https://issues.dlang.org/show_bug.cgi?id=19916 Nick Treleaven changed: What|Removed |Added CC||n...@geany.org --- Comment #24 from Nick Treleaven --- (In reply to Simen Kjaeraas from comment #5) > struct NotAPointer { > private size_t n; > @disable this(); > @trusted this(int* p) { > assert(p.isValid); > n = cast(size_t)p; > } > @trusted void callMethod() { > *cast(int*)n = 3; > } > } The answer to this is to mark `n` as `@system`. --
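[Editor's note: for illustration, here is how that fix might look applied to the example from comment #5. This is a sketch assuming DIP 1035 `@system` variables, enabled with dmd's `-preview=systemVariables` switch:]

```d
struct NotAPointer
{
    // A @system variable cannot be read or written from @safe code,
    // so only the @trusted members below may touch n.
    @system private size_t n;

    @disable this();
    @trusted this(int* p) { n = cast(size_t) p; } // validity check elided
    @trusted void callMethod() { *cast(int*) n = 3; }
}
```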
Re: How does one attach a manifest file to a D executable on Windows?
On Sunday, 2 June 2024 at 21:46:41 UTC, solidstate1991 wrote: Well, it turns out I used the windres found in mingw instead of `rc.exe` since the latter cannot be found anywhere on my PC, even after reinstalling stuff. I need to hunt it down somehow. rc.exe comes with the Windows SDK - it gets installed in one of the subfolders of "C:\Program Files (x86)\Windows Kits\10\bin" (on my machine it's in "10.0.22000.0\x64").
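If it still can't be found, one way to hunt it down (a command I'd expect to work from a Command Prompt, assuming a default Windows SDK install location) is a recursive directory search:

```
dir /s /b "C:\Program Files (x86)\Windows Kits\10\bin\rc.exe"
```

Each match prints the full path, including the versioned subfolder (such as a `10.0.22000.0\x64` directory).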