Re: MRSP 2.9: Issues #252 and #266 - Incident Reporting

2023-07-26 Thread Ben Wilson
All,

We have created a draft wiki page to explain vulnerability disclosure being
proposed for v. 2.9 of the MRSP.  See
https://wiki.mozilla.org/CA/Vulnerability_Disclosure.

We did not want to confuse this security vulnerability reporting process
with the existing Incident Reporting process (
https://www.ccadb.org/cas/incident-report).

So, the proposed language is as follows:

"Additionally, and not in lieu of the requirement to publicly report
incidents as outlined above, a CA Operator MUST disclose a serious
vulnerability or security incident in Bugzilla as a secure bug [link] in
accordance with guidance found on the Vulnerability Disclosure wiki page
[link to https://wiki.mozilla.org/CA/Vulnerability_Disclosure]."

Also, in the MRSP where we refer to or link to Security Incident bugs, we
have changed the language to refer to a Vulnerability Disclosure "filed as
a secure bug in Bugzilla".

Here is how those proposed changes appear in Github:
https://github.com/mozilla/pkipolicy/compare/master...BenWilson-Mozilla:pkipolicy:67bbeb820dc2dce3cb54b4d54b9326dc75e1d79d

Please review this proposed addition and the draft wiki page and let us
know of any comments or concerns.

Thanks,
Ben and Kathleen

On Wed, Jul 12, 2023 at 12:43 AM Roman Fischer 
wrote:

> Dear Matt,
>
> The way towards something like full disclosure is a difficult one to walk.
> I was working in the airline industry for a couple of years and experienced
> firsthand what it means to establish and nurture a "no blame" culture that
> truly motivates people to talk about mistakes, drifts towards unsafe
> behaviour and such. It's a long process and all participants need to want
> to support it.
>
> I think that the current process of disclosing incidents publicly on
> Bugzilla does not help build a "full disclosure - no blame" culture. So
> CAs (and all the other participants in the ecosystem) will continue to try
> and limit the possible negative impact of what they have to disclose.
>
> From my point of view, it makes no big difference if the word
> "significant" is there or not. As long as the culture is "blame and shame",
> all participants will think more than twice before posting to Bugzilla.
>
> Kind regards
> Roman
>
> -----Original Message-----
> From: dev-security-policy@mozilla.org 
> On Behalf Of Matt Palmer
> Sent: Mittwoch, 12. Juli 2023 08:03
> To: dev-security-policy@mozilla.org
> Subject: Re: MRSP 2.9: Issues #252 and #266 - Incident Reporting
>
> On Tue, Jul 11, 2023 at 09:04:06AM -0600, Ben Wilson wrote:
> > effect, " 'Reportable Security Incident' means any security event,
> > breach, or compromise that has the potential to significantly impact
> > the confidentiality, integrity, or availability of CA infrastructure,
> > CA
>
> I'd suggest removing the word "significantly", because that's entirely
> open to interpretation, and history has shown that CAs aren't shy about
> interpreting things in a manner most favourable to their interests.  I
> don't see any real problem with requiring CAs to report *everything* with
> the potential to impact CIA of CA-related things, because even minor
> hiccups can become major, and they can also be a learning experience for
> everyone -- which is the same reason why most safety-critical industries
> require the reporting of near-misses, not just actual incidents.
>
> - Matt
>
> --
> You received this message because you are subscribed to the Google Groups "
> dev-security-policy@mozilla.org" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to dev-security-policy+unsubscr...@mozilla.org.
> To view this discussion on the web visit
> https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/ZK5CExtS3j0cKC5t%40hezmatt.org
> .
>
> --
> You received this message because you are subscribed to the Google Groups "
> dev-security-policy@mozilla.org" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to dev-security-policy+unsubscr...@mozilla.org.
> To view this discussion on the web visit
> https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/ZRAP278MB05627473060C050F4B0EB20CFA36A%40ZRAP278MB0562.CHEP278.PROD.OUTLOOK.COM
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/CA%2B1gtaac29x2CdtfKoYdnMVtcOTwoz%3D%3DPAunuddXQGbM2fcmCg%40mail.gmail.com.


Re: MRSP 2.9: Issue #123: Annual Compliance Self-Assessment

2023-07-26 Thread Ben Wilson
And, for section 3.3 (CPs and CPSes), I am thinking that the same change
should be made from 365 to 366 days, and that item 4 would read, "all CPs,
CPSes, and combined CP/CPSes MUST be reviewed and updated as necessary at
least once every 366 days."
Ben
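The leap-year flexibility being proposed is easy to verify: a CA that submits on the same calendar date every year will sometimes see a 366-day interval, which a strict 365-day rule would flag as late. A minimal sketch using java.time (the class name and dates are illustrative, not from the thread):

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class SelfAssessmentCadence {
    public static void main(String[] args) {
        // The same calendar date one year apart is usually 365 days away,
        // but becomes 366 days when the interval spans a February 29.
        long nonLeap = ChronoUnit.DAYS.between(
                LocalDate.of(2022, 3, 1), LocalDate.of(2023, 3, 1));
        long leap = ChronoUnit.DAYS.between(
                LocalDate.of(2023, 3, 1), LocalDate.of(2024, 3, 1));
        System.out.println(nonLeap + " / " + leap); // prints "365 / 366"
    }
}
```

Under an "at least every 366 days" rule, a fixed annual submission date always satisfies the requirement; under 365 days it fails whenever February 29 intervenes, which is also the root of Bruce Morton's creeping-deadline concern below.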

On Wed, Jul 26, 2023 at 3:35 PM Ben Wilson  wrote:

> All,
> For submission of self-assessments, what do people think about "at least
> every 366 days" instead of the original proposal of 365 days?  That gives
> flexibility for leap years.
> Ben
>
> On Thu, Jun 29, 2023 at 9:48 PM Antti Backman 
> wrote:
>
>> I concur with Bruce's concern.
>>
>> Although not directly concerning this discussion, we already have this
>> issue on our hands:
>> https://www.chromium.org/Home/chromium-security/root-ca-policy/#6-annual-self-assessments
>>
>> But yes, this will be a moving target. I would propose that it be tied
>> to the end of the audit period, which in any case is a hard-coded date.
>> Then, similarly to the posting of audit reports, the self-assessment
>> should be submitted (at the latest) within some fixed number of days
>> after the end of the audit period.
>>
>> Antti Backman
>> Telia Company
>>
>> torstai 29. kesäkuuta 2023 klo 22.36.32 UTC+3 Bruce Morton kirjoitti:
>>
>>> The issue I have with "at least every 365 days" is that I like to put
>>> something on the schedule and do it the same month every year. We do this
>>> with our annual compliance audit. If we have to provide the self-assessment
>>> at least every 365 days, then each year the date will creep earlier to
>>> provide some insurance time to meet the requirement. Is there any way we
>>> can word the requirement to stop this progression? Something like "on an
>>> annual basis, but no longer than 398 days between submissions".
>>>
>>> On Friday, June 23, 2023 at 12:05:03 PM UTC-4 Ben Wilson wrote:
>>>
>>>> All,
>>>>
>>>> Historically, Mozilla has required that CAs perform an annual
>>>> Self-Assessment of their compliance with the CA/Browser Forum's TLS
>>>> Baseline Requirements and Mozilla's Root Store Policy (MRSP).  See
>>>> https://wiki.mozilla.org/CA/Compliance_Self-Assessment. While there
>>>> has not been any requirement that CAs submit their self-assessments to
>>>> Mozilla, several CAs have made it a practice to do so.
>>>>
>>>> We would like to propose that the operators of TLS CAs (those with the
>>>> websites trust bit enabled) be required to submit these self-assessments
>>>> annually by providing a link to them in the Common CA Database (CCADB).
>>>> Therefore, we are proposing a new section 3.4 in the MRSP to read as
>>>> follows:
>>>>
>>>> - Begin Draft for MRSP -
>>>>
>>>> 3.4 Compliance Self-Assessments
>>>> Effective January 1, 2024, CA operators with CA certificates capable of
>>>> issuing working TLS server certificates MUST complete a [Compliance
>>>> Self-Assessment](https://www.ccadb.org/cas/self-assessment) at least
>>>> every 365 days and provide the Common CA Database with the location where
>>>> that Compliance Self-Assessment can be retrieved.
>>>>
>>>> - End Draft for MRSP -
>>>>
>>>> The effective date of January 1, 2024, is not intended to result in a
>>>> huge batch of self-assessments being submitted that day. Rather, we would
>>>> hope that CAs begin providing the locations of their self-assessments as
>>>> soon as possible by completing the "Self-Assessment" section under the
>>>> "Root Information" tab of an Add/Update Root Case in the CCADB
>>>> <https://www.ccadb.org/cas/updates>. (The field for this information
>>>> already exists in the CCADB under the heading "Self-Assessment".)
>>>>
>>>> Please provide any comments or suggestions.
>>>>
>>>> Thanks,
>>>>
>>>> Ben and Kathleen
>>>>
>>>>

-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/CA%2B1gtaaxbRDcz6wPr1U-Q8bH1MqhYQbR99aKeRzm2u-L_Ht7VA%40mail.gmail.com.




Re: MRSP 2.9: Issue#232: Root CA Lifecycles

2023-07-26 Thread Ben Wilson
Thanks, Rob.  I'll change it to a strong SHOULD.
Ben

On Wed, Jul 26, 2023 at 10:09 AM Rob Stradling  wrote:

> > CA operators MUST apply to Mozilla for inclusion of their next
> generation root certificate at least 2 years before the distrust date of
> the CA certificate they wish to replace.
>
> Hi Ben.  I would interpret that sentence to mean that if a CA operator
> misses the "at least 2 years" deadline then they are *forever forbidden*
> from submitting a next generation root certificate for inclusion in
> Mozilla's root store.  Is that the intent?
>
> I think CAs should certainly be encouraged to submit next gen roots in a
> timely fashion, and I think Mozilla shouldn't feel obliged to grant
> extensions on to-be-replaced root removals in order to support CAs that
> fail to do this "at least 2 years" in advance.  However, I think "forever
> forbidden" is unnecessarily harsh!
>
> So I suggest changing "MUST" to "SHOULD".
>
> --
> *From:* dev-security-policy@mozilla.org 
> on behalf of Ben Wilson 
> *Sent:* 26 July 2023 16:42
> *To:* dev-secur...@mozilla.org 
> *Subject:* MRSP 2.9: Issue#232: Root CA Lifecycles
>
>
> All,
>
> We previously announced this change in policy over a year ago, and will be
> finalizing it in Version 2.9 of the Mozilla Root Store Policy (MRSP).
> Please review this addition, and let us know if you have any final
> comments.
>
> - Begin MRSP Revision -
>
>
> *7.4 Root CA Lifecycles*
> For a root CA certificate trusted for server authentication, Mozilla will
> remove the websites trust bit when the CA key material is more than 15
> years old. For a root CA certificate trusted for secure email, Mozilla will
> set the "Distrust for S/MIME After Date" for the CA certificate to 18 years
> from the CA key material generation date. The CA key material generation
> date SHALL be determined by reference to the auditor-witnessed key
> generation ceremony report. If the CA operator cannot provide the key
> generation ceremony report for a root CA certificate created before July 1,
> 2012, then Mozilla will use the “Valid From” date in the root CA
> certificate to establish the key material generation date. For transition
> purposes, root CA certificates in the Mozilla root store will be distrusted
> according to the schedule located at
> https://wiki.mozilla.org/CA/Root_CA_Lifecycles, which is subject to
> change if underlying algorithms become more susceptible to cryptanalytic
> attack or if other circumstances arise that make this schedule obsolete.
> CA operators MUST apply to Mozilla for inclusion of their next generation
> root certificate at least 2 years before the distrust date of the CA
> certificate they wish to replace.
>
> - End MRSP Revision -
>
> Thanks,
>
> Ben
>
> --
> You received this message because you are subscribed to the Google Groups "
> dev-security-policy@mozilla.org" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to dev-security-policy+unsubscr...@mozilla.org.
> To view this discussion on the web visit
> https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/CA%2B1gtabwQ0tiADoo-YNvCSuu3dAxTJOjSKnUbWb6NQasoejQKg%40mail.gmail.com
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/CA%2B1gtaYs6mPsU_Hft__ygqm4_CO_2ySo%2Bkak%2B6QwWd66P%3Dobsw%40mail.gmail.com.




Re: [ovs-discuss] ovs pypi package

2023-07-26 Thread Terry Wilson via discuss
On Fri, Jul 21, 2023 at 3:45 AM Ilya Maximets  wrote:
>
> On 7/19/23 09:52, Felix Huettner via discuss wrote:
> > Hi everyone,
> >
> > I noticed that the latest release of the ovs library on pypi is for
> > 2.17.1 [1]. Would it be possible to publish newer versions of the ovs
> > python lib there as well, or are there reasons against that?
>
> The pypi package is maintained by OpenStack folks mostly
> for their own use.  But, I guess, it should be possible
> to update to some newer stable release.
>
> Terry, what do you think?
>
> Best regards, Ilya Maximets.
>
> >
> > [1] https://pypi.org/project/ovs/#history
> >
> > Thanks
> > Felix

I'll go ahead and make a release today. It would be worth discussing
having ownership of the pypi ovs package include people responsible
for OVS releases and making releasing there part of the release
process. One could argue that me owning the package upstream is a
little...weird. :)

Terry

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: Anybody like to ID spiders

2023-07-25 Thread mike wilson
Black ones seem to be Latrodectus hesperus, the western black widow.  The other 
is likely Latrodectus geometricus, the brown widow.  The latter is less deadly 
and has a much greater range, and is newly established in southern California.
> On 25/07/2023 18:35 Larry Colen  wrote:
> 
>  
> I'm pretty confident that the first one is a black widow, but I'm not so sure 
> about the ones with the striped legs:
> 
> https://www.flickr.com/photos/ellarsee/albums/72177720310015924
> 
> 
> --
> Larry Colen
> l...@red4est.com  sent from ret13est
> 
> 
> 
> --
> Pentax-Discuss Mail List
> To unsubscribe send an email to pdml-le...@pdml.net
> to UNSUBSCRIBE from the PDML, please visit the link directly above and follow 
> the directions.
--
Pentax-Discuss Mail List
To unsubscribe send an email to pdml-le...@pdml.net
to UNSUBSCRIBE from the PDML, please visit the link directly above and follow 
the directions.


Re: [Servercert-wg] Participation Proposal for Revised SCWG Charter

2023-07-25 Thread Ben Wilson via Servercert-wg
Thanks for your insights, Roman.

I'm not yet convinced that the attendance approach would not be effective.
Nevertheless, here are some other potential alternatives to discuss:

1 - require that a Certificate Consumer have a certain size userbase, or
alternatively, that they be a Root Store member of the Common CA Database
<https://www.ccadb.org/rootstores/how>, or
2 - require that a Certificate Consumer pay a membership fee to the
CA/Browser Forum.

Does anyone have any other ideas, proposals, or suggestions that we can
discuss?

The approaches listed above would be in addition to the following other
requirements already proposed:

The Certificate Consumer has public documentation stating that it requires
Certification Authorities to comply with the CA/Browser Forum’s Baseline
Requirements for the issuance and maintenance of TLS server certificates; its
membership-qualifying software product uses a list of CA certificates to
validate the chain of trust from a TLS certificate to a CA certificate in
such list; and it publishes how it decides to add or remove a CA
certificate from the root store used in its membership-qualifying software
product.

Thanks,

Ben

On Mon, Jul 24, 2023 at 10:48 PM Roman Fischer 
wrote:

> Dear Ben,
>
>
>
> As stated before, I’m against minimal attendance (or even participation –
> however you would measure that, numbers of words spoken or written?)
> requirements. I’ve seen in universities, in private associations, politics…
> that this simply doesn’t solve the problem. I totally agree with Tim: It
> will create administrative overhead and not solve the problem.
>
>
>
> IMHO non-participants taking part in the democratic process (i.e. voting)
> is just something we have to accept and factor in. It’s one end of the
> extreme spectrum. There might be over-active participants that overwhelm
> the group by pushing their own agenda… If we have minimum participation
> requirements, then we maybe should also have maximum participation rules?
> 
>
>
>
> Rgds
> Roman
>
>
>
> *From:* Servercert-wg  *On Behalf Of *Ben
> Wilson via Servercert-wg
> *Sent:* Montag, 24. Juli 2023 21:40
> *To:* Tim Hollebeek ; CA/B Forum Server
> Certificate WG Public Discussion List 
> *Subject:* Re: [Servercert-wg] Participation Proposal for Revised SCWG
> Charter
>
>
>
> Tim,
>
> One problem we're trying to address is the potential for a great number of
> “submarine voters”.  Such members may remain inactive for extended periods
> of time and then surface only to vote for or against something they
> suddenly are urged to support or oppose, without being aware of the
> issues.  This will skew and damage the decision-making process.
>
> Another problem, that I don't think has been mentioned before, is the
> reliability of the CA/Browser Forum to adopt well-informed standards going
> forward.  In other words, if something like I suggest happens, then I can
> see Certificate Consumers leaving the Forum and unilaterally setting very
> separate and distinct rules. This will result in fragmentation,
> inconsistency, and much more management overhead for CAs than the effort
> needed to keep track of attendance, which is already being done by the
> Forum.  (If you'd like, I can share with everyone the list of members who
> have not voted or attended meetings in over two years.)
>
> Ben
>
>
>
> On Mon, Jul 24, 2023 at 11:41 AM Tim Hollebeek 
> wrote:
>
> What is your argument in response to the point that any potential bad
> actors will be trivially able to satisfy the participation metrics?
>
>
>
> I’m very worried we’ll end up doing a lot of management and tracking work,
> without actually solving the problem.
>
>
>
> -Tim
>
>
>
> *From:* Ben Wilson 
> *Sent:* Monday, July 24, 2023 10:21 AM
> *To:* Ben Wilson ; CA/B Forum Server Certificate WG
> Public Discussion List 
> *Cc:* Tim Hollebeek 
> *Subject:* Re: [Servercert-wg] Participation Proposal for Revised SCWG
> Charter
>
>
>
> All,
>
> I have thought a lot about this, including various other formulas (e.g.
> market share) to come up with something reasonable, but I've come back to
> attendance as the key metric that we need to focus on. I just think that an
> attendance metric provides the only workable, measurable, and sound
> solution for determining the right to vote as a Certificate Consumer
> because it offers the following three elements:
>
>- Informed Decision-Making: Voting requires a comprehensive
>understanding of ongoing discussions and developments. Regular attendance
>provides members with the necessary context and knowledge to make
>well-informed decisions.
>- Commitment: Attendance is a tangible and measurable representation

Re: Cache write synchronization mode

2023-07-24 Thread Raymond Wilson
>>  However, if a primary node fails before at least 1 backup node receives
an update, then the update will be lost, and all nodes will have the old
value.

Does this imply that it is a good idea to have the FullSync write
synchronization mode? If the primary node 'comes back' after the primary
node failure would you expect the new value to propagate to all nodes?


On Tue, Jul 25, 2023 at 5:22 PM Pavel Tupitsyn  wrote:

> > if a hard failure occurs to one of the backup servers in the replicated
> cache will the server that failed have an inconsistent (old) copy of that
> element in the replicated cache when it restarts
>
> If only a backup server fails and restarts, it will get new data from the
> primary node, no issue here.
> However, if a primary node fails before at least 1 backup node receives an
> update, then the update will be lost, and all nodes will have the old value.
>
> Related: CacheConfiguration.ReadFromBackup property is true by default,
> meaning that with PrimarySync it is possible to get old value from a backup
> node after an update, before backups receive new data.
>
> On Mon, Jul 24, 2023 at 11:51 PM Raymond Wilson <
> raymond_wil...@trimble.com> wrote:
>
>> Hi Pavel,
>>
>> I understand the differences between the sync modes in terms of when the
>> write returns. What I want to understand is if there are consistency risks
>> with the PrimarySync versus FullSync modes.
>>
>> For example, if I have 4 nodes participating in the replicated cache (and
>> am using the default PrimarySync mode), then the write will return once the
>> primary node in the replicated cache has completed the write. At that point
>> if a hard failure occurs to one of the backup servers in the replicated
>> cache will the server that failed have an inconsistent (old) copy of that
>> element in the replicated cache when it restarts?
>>
>> Raymond.
>>
>>

-- 
Raymond Wilson
Trimble Distinguished Engineer, Civil Construction Software (CCS)
11 Birmingham Drive | Christchurch, New Zealand
raymond_wil...@trimble.com
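The PrimarySync/FullSync trade-off discussed in this thread maps to a single setting in Ignite's Java cache-configuration API. A sketch only (the cache name and key/value types are illustrative assumptions, and it requires the ignite-core dependency):

```java
// Sketch only: assumes ignite-core on the classpath; cache name and
// key/value types are illustrative, not taken from the thread.
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class FullSyncCacheConfig {
    public static CacheConfiguration<String, String> replicatedFullSync() {
        CacheConfiguration<String, String> cfg =
                new CacheConfiguration<>("replicated-cache");
        cfg.setCacheMode(CacheMode.REPLICATED);
        // FULL_SYNC: a write does not complete until every participating
        // node has acknowledged it, so losing the primary alone cannot drop
        // an acknowledged update -- at higher write latency than the
        // default PRIMARY_SYNC.
        cfg.setWriteSynchronizationMode(
                CacheWriteSynchronizationMode.FULL_SYNC);
        return cfg;
    }
}
```

Whether the extra write latency of FULL_SYNC is acceptable is workload-dependent; it changes when a write is considered complete, not how a restarted node catches up (a rejoining backup is rebalanced from the live copies either way, per the explanation quoted above).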


Re: [go-cd] shouldn't required resources also be at the pipeline level?

2023-07-24 Thread Chad Wilson
Yeah, just semantics.  Like you can't know the parameter's "name" or
anything like that in the script itself, or do things like "iterate on all
parameter names" - you can only know their injected *value*, so if you want
to make logic depend on a parameter's name, you have to assign it to a
local var in the script first and work with the variable known to the
script, e.g. with GoCD *parameter* PIPE_RESOURCE_PARAM=fast and task/script
content

#!/bin/sh
agent_resource="#{PIPE_RESOURCE_PARAM}"

At runtime, the shell runtime execution environment will see the below,
with parameter name gone

#!/bin/sh
agent_resource="fast"

So you can't write logic that varies based on the parameters' *names*
defined in GoCD (that's what I mean by meta-programming). But you can still
achieve most things you might like to with other workarounds.

By contrast, *with env vars* you can iterate on the entire env and see if
anything has been defined at GoCD level, e.g. with GoCD *env var*
PIPE_RESOURCE_ENV_VAR=fast

#!/bin/sh
env # <--- will print PIPE_RESOURCE_ENV_VAR=fast (among other things)

So slightly different semantics due to what the execution environment knows.

-Chad

On Tue, Jul 25, 2023 at 12:41 PM Joshua Franta  wrote:

>
> Yes - I'm aware that parameters are replaced before the script is written
> into the agent and executed.
> While it's correct that scripts don't know where the information inside
> them comes from, this is true for any script, not just ones used by GoCD.
> Perhaps these are more semantic-level points.
>
> E.g. I don't know what you mean about meta-programming - you can put a
> pipeline parameter into a variable and do whatever you like with it, same
> as an env var.
> But again, that has little to do with GoCD specifically.
>
> Regardless - yes, I got what I needed!
>
> Thanks again to everybody who tried to help.
>
>
>
> On Mon, Jul 24, 2023 at 11:27 PM Chad Wilson 
> wrote:
>
>> If you read Jason's message a bit more closely he is conveying that the
>> script's runtime environment has no knowledge of the parameters - not that
>> they can't be used at all.
>>
>> They are just tokens that have already been 'realized' or replaced into
>> the content by the time the script/task runs. So the scripting environment
>> itself doesn't know that there were parameters used to generate the content
>> to run/execute and you can't meta-program based on them inside the script's
>> logic. (unlike environment variables)
>>
>> I believe this is in reference to the earlier script-based example you
>> gave which is a little confusing.
>>
>> Anyway, seems you have a way forward here for your core requirement.
>>
>> On Tue, 25 Jul 2023, 11:39 Joshua Franta,  wrote:
>>
>>> Jason, your knowledge here is off. Parameters can be used in scripts -
>>> see a previous email in this thread that shows how it works.
>>>
>>> On Mon, Jul 24, 2023, 4:11 PM Jason Smyth  wrote:
>>>
>>>> Hi Josh,
>>>>
>>>> I think there may be some confusion here regarding GoCD terminology and
>>>> common concepts.
>>>>
>>>> > i think the main source of confusion is that I thought parameters
>>>> could only be referred to in scripts!
>>>> > I didn't know you could refer to them inside of other configuration
>>>> properties!
>>>>
>>>> To the best of my knowledge, Parameters (GoCD concept) cannot be
>>>> referenced in scripts. You can call a script that uses parameters
>>>> (scripting concept), but as far as I know, GoCD Parameters are not
>>>> persisted in the Agent's runtime environment unless they are somehow passed
>>>> in via the Task definition. Are you sure you aren't thinking of Environment
>>>> Variables (GoCD concept)? Environment Variables can be defined in a few
>>>> different places in GoCD. As the name suggests, these values are persisted
>>>> in the Agent's runtime environment when a Task is executed.
>>>>
>>>> > I still have a question about how this works in examples using
>>>> templates.
>>>> > If we didn't define the pipeline parameter by default, how would
>>>> gocd interpret what I'm guessing would be a blank resource?
>>>>
>>>> If a Template references a Parameter then every Pipeline that uses that
>>>> Template _must_ define that Parameter. Depending on how the Parameter is
>>>> used in the Template, leaving the Parameter Value blank may be valid.
>>>>
>>>> In the case of using Parameters to define Resources, my testing shows
>>>> tha

[go-cd] Re: Release Announcement - 23.2.0

2023-07-24 Thread Chad Wilson
Hi folks - unfortunately a problem sneaked through our automated regression
testing in 23.2.0 that prevents UI navigation via Stage History to older
stage runs when you're already viewing the detail of an individual stage
(Stage Details view).

You can still navigate to these as normal from the Pipeline Activity view,
or from a specific job in the history.

Will look to release 23.3.0 with a fix for this shortly, but if you've
upgraded already and have noticed any other problems, please let us know.

https://github.com/gocd/gocd/milestone/82?closed=1

-Chad


On Sat, 22 Jul 2023, 16:38 Chad Wilson,  wrote:

> Hello everyone,
>
> A new release of GoCD <https://www.gocd.org/releases/#23-2-0> (23.2.0) is
> out.
>
> This release is mainly a minor maintenance release. As always, please
> remember to take a backup before upgrading.
>
> To know more about the features and bug fixes in this release, see the release
> notes <https://www.gocd.org/releases/#23-2-0> or head to the downloads
> page <https://www.gocd.org/download/> to try it. Feedback and ideas are
> always welcome - we appreciate the discussion on issues you are having, and
> how we can improve things.
>
> Cheers,
> Chad & Aravind
>

-- 
You received this message because you are subscribed to the Google Groups 
"go-cd" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to go-cd+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/go-cd/CAA1RwH9NDOsiMviu%2BKD8TbRReKp0kDm920A%3D1Ojs9sqGpUfS6g%40mail.gmail.com.


Re: [go-cd] shouldn't required resources also be at the pipeline level?

2023-07-24 Thread Chad Wilson
If you read Jason's message a bit more closely he is conveying that the
script's runtime environment has no knowledge of the parameters - not that
they can't be used at all.

They are just tokens that have already been 'realized' or replaced into the
content by the time the script/task runs. So the scripting environment
itself doesn't know that there were parameters used to generate the content
to run/execute and you can't meta-program based on them inside the script's
logic. (unlike environment variables)
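The distinction can be sketched in a few lines — `realize` below is a hypothetical stand-in for GoCD's config-time `#{param}` expansion, not actual GoCD code:

```python
import os
import re

def realize(task_command: str, params: dict) -> str:
    """Stand-in for GoCD's config-time expansion of #{name} tokens."""
    return re.sub(r"#\{(\w+)\}", lambda m: params[m.group(1)], task_command)

# The task definition may contain a parameter token...
task = "deploy.sh --target #{PIPE_RESOURCE_PARAM}"

# ...but it is replaced before the agent ever runs the task, so the
# running script sees only the literal result and cannot inspect the token:
print(realize(task, {"PIPE_RESOURCE_PARAM": "fast"}))  # deploy.sh --target fast

# An environment variable, by contrast, survives into the task's runtime
# and can be read (and branched on) by the script itself:
os.environ["GO_PIPELINE_NAME"] = "builder"
print(os.environ["GO_PIPELINE_NAME"])
```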

I believe this is in reference to the earlier script-based example you gave
which is a little confusing.

Anyway, seems you have a way forward here for your core requirement.

On Tue, 25 Jul 2023, 11:39 Joshua Franta,  wrote:

> Jason, your knowledge here is off. Parameters can be used in scripts, see
> a previous email in this thread that shows how it works.
>
> On Mon, Jul 24, 2023, 4:11 PM Jason Smyth  wrote:
>
>> Hi Josh,
>>
>> I think there may be some confusion here regarding GoCD terminology and
>> common concepts.
>>
>> > i think the main source of confusion is that I thought parameters
>> could only be referred to in scripts!
>> > I didn't know you could refer to them inside of other configuration
>> properties!
>>
>> To the best of my knowledge, Parameters (GoCD concept) cannot be
>> referenced in scripts. You can call a script that uses parameters
>> (scripting concept), but as far as I know, GoCD Parameters are not
>> persisted in the Agent's runtime environment unless they are somehow passed
>> in via the Task definition. Are you sure you aren't thinking of Environment
>> Variables (GoCD concept)? Environment Variables can be defined in a few
>> different places in GoCD. As the name suggests, these values are persisted
>> in the Agent's runtime environment when a Task is executed.
>>
>> > I still have a question about how this works in examples using
>> templates.
>> > If we didn't define the pipeline parameter by default, how would gocd
>> interpret what I'm guessing would be a blank resource?
>>
>> If a Template references a Parameter then every Pipeline that uses that
>> Template _must_ define that Parameter. Depending on how the Parameter is
>> used in the Template, leaving the Parameter Value blank may be valid.
>>
>> In the case of using Parameters to define Resources, my testing shows
>> that each Parameter must define a single, valid, Resource. That is, if you
>> want to specify multiple Parameterized Resources, you must use multiple
>> Parameters. You cannot, for example, provide a Parameter Value of "foo,
>> bar" to make your Pipeline's Job depend on the "foo" and "bar" Resources.
>> GoCD rejects the configuration as invalid if you try to save it. Similarly,
>> GoCD rejects the configuration as invalid if a Parameter is used in the
>> Resource field and you try to leave its Value blank.
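A config sketch of the one-resource-per-Parameter point — the names are hypothetical, and the fragments follow GoCD's XML config format as I understand it:

```xml
<!-- In the template's job: the resource field holds a parameter token. -->
<job name="build">
  <resources>
    <resource>#{PIPE_RESOURCE_PARAM}</resource>
  </resources>
</job>

<!-- Each pipeline using the template supplies one valid resource name.
     A value like "foo, bar" (or an empty value) is rejected on save. -->
<pipeline name="heavy-app" template="FAST_OR_SLOW_PIPE">
  <params>
    <param name="PIPE_RESOURCE_PARAM">fast</param>
  </params>
</pipeline>
```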
>>
>> Regarding your specific use case, you can solve it using either
>> Environments or Resources. The right solution depends on your requirements
>> and how you want to reason about your environment.
>>
>> The way I understand it, in the context of this discussion you have 2
>> groups of Agents (Agents1 and Agents2) and 2 groups of Pipelines
>> (PipelinesA and PipelinesB). The Pipelines in PipelinesA can run on any
>> Agent, but the Pipelines in PipelinesB must run on the Agents in Agents2.
>> We will ignore the fact that Pipelines can contain multiple Stages and
>> multiple Jobs and assume either that all of the Pipelines contain a single
>> Stage with a single Job, or that the scheduling requirements are the same
>> for all Jobs in a given Pipeline. You have also talked about Pipeline
>> priority.
>>
>> Based on this, I assume your requirements are one of the following:
>>
>> 1. Agents should be used to the full extent possible; the workload in
>> PipelinesB is heavier so those Pipelines must not run on Agents1, or
>> 2. Pipelines in PipelinesB have a higher priority than those in
>> PipelinesA; Agents in Agents2 should take Jobs from PipelinesA only if
>> there are no pending Jobs for PipelinesB.
>>
>> GoCD supports the first scenario. You can achieve this by assigning 2
>> Resources/Environments. Pipelines in PipelinesA get 1 Resource/Environment;
>> Pipelines in PipelinesB get the other. Agents in Agents1 get the PipelinesA
>> Resource/Environment; Agents in Agents2 get both.
>>
>> GoCD does not support the concept of priority, so scenario 2 is not
>> supported. The best you could accomplish would be to map each group of
>> Pipelines to

Re: Ignite data region off-heap allocation

2023-07-24 Thread Raymond Wilson
Thanks for the confirmation.

On Mon, Jul 24, 2023 at 4:40 PM Pavel Tupitsyn  wrote:

> > If this flag is true will Ignite proactively allocate and use all pages
> in a data region, rather than incrementally?
>
> LazyMemoryAllocation means whether memory for DataRegion will be allocated
> only when the first cache is created in that region (when true), or
> immediately (when false)
>
>
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/DataRegionConfiguration.html#isLazyMemoryAllocation--
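As a reference point, here is a sketch of where that flag sits in a Spring XML data-region configuration — the region name and the choice to disable lazy allocation are illustrative, not a recommendation:

```xml
<!-- Illustrative sketch: a named data region that allocates its memory
     immediately at node startup instead of waiting for the first cache. -->
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
  <property name="name" value="myDataRegion"/>
  <property name="lazyMemoryAllocation" value="false"/>
</bean>
```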
>
>
> On Wed, Jul 19, 2023 at 12:50 PM Raymond Wilson <
> raymond_wil...@trimble.com> wrote:
>
>> Just FYI, we have held off any memory pressure changes in the meantime
>> while we continue to investigate the memory issues we have.
>>
>> On Tue, 18 Jul 2023 at 9:07 AM, Raymond Wilson <
>> raymond_wil...@trimble.com> wrote:
>>
>>> Hi Pavel,
>>>
>>> This area is confusing. There is no indication that the memory pressure
>>> applies to any individual object or allocation, so there is clearly no
>>> association between memory pressure and any particular resource.
>>>
>>> I get your argument that .Net can 'see' allocated memory. What is
>>> unclear is whether it cares about actually allocated and used pages, or
>>> committed pages.
>>>
>>> I see there is a LazyMemoryAllocation (default: true) for data regions.
>>> Some data regions set this to false, eg:
>>>
>>> ^--   sysMemPlc region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>> ^--   metastoreMemPlc region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>> ^--   TxLog region [type=internal, persistence=true, lazyAlloc=false,
>>>
>>> The documentation is not clear on the effect of this flag other than to
>>> say it is for 'Lazy memory allocation'. If this flag is true will Ignite
>>> proactively allocate and use all pages in a data region, rather than
>>> incrementally?
>>>
>>> Thanks,
>>> Raymond.
>>>
>>>
>>> On Tue, Jul 11, 2023 at 10:55 PM Pavel Tupitsyn 
>>> wrote:
>>>
>>>> > I can’t see another way of letting .NET know that it can’t have
>>>> access to all the ‘free’ memory in the process
>>>>
>>>> You don't need to tell .NET how much memory is currently available. It
>>>> is the job of the OS. .NET can "see" the size of the unmanaged heap.
>>>>
>>>> To quote another explanation [1]:
>>>>
>>>> > The point of AddMemoryPressure is to tell the garbage collector that
>>>> there's a large amount of memory allocated with that object.
>>>> > If it's unmanaged, the garbage collector doesn't know about it; only
>>>> the managed portion.
>>>> > Since the managed portion is relatively small, the GC may let it pass
>>>> for garbage collection several times, essentially wasting memory that might
>>>> need to be freed.
>>>>
>>>> I really don't think AddMemoryPressure is the right thing to do in your
>>>> case.
>>>> If you run into OOM issues, then look into Ignite memory region
>>>> settings [2] and/or adjust application memory usage on the .NET side, so
>>>> that the sum of those is not bigger than available RAM.
>>>>
>>>> [1]
>>>> https://stackoverflow.com/questions/1149181/what-is-the-point-of-using-gc-addmemorypressure-with-an-unmanaged-resource
>>>> [2]
>>>> https://ignite.apache.org/docs/latest/memory-configuration/data-regions#configuring-default-data-region
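The second suggestion — sizing the Ignite memory region so that off-heap plus .NET managed memory fits in RAM — could look like this Spring XML sketch (the 4 GB cap is purely illustrative):

```xml
<!-- Illustrative sketch: cap the default data region so Ignite's off-heap
     usage plus the application's managed memory stays within available RAM. -->
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
  <property name="defaultDataRegionConfiguration">
    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
      <property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
    </bean>
  </property>
</bean>
```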
>>>>
>>>> On Tue, Jul 11, 2023 at 11:48 AM Raymond Wilson <
>>>> raymond_wil...@trimble.com> wrote:
>>>>
>>>>> How do Ignite .Net server nodes manage this memory issue in other
>>>>> projects?
>>>>>
>>>>> On Tue, Jul 11, 2023 at 5:32 PM Raymond Wilson <
>>>>> raymond_wil...@trimble.com> wrote:
>>>>>
>>>>>> Oops, commutes => committed
>>>>>>
>>>>>> On Tue, 11 Jul 2023 at 4:34 PM, Raymond Wilson <
>>>>>> raymond_wil...@trimble.com> wrote:
>>>>>>
>>>>>>> I can’t see another way of letting .NET know that it can’t have
>>>>>>> access to all the ‘free’ memory in the process when a large slab of 
>>>>>>> that is
>>>>>>> spoken for i

[Wikitech-ambassadors] Tech News 2023, week 30

2023-07-24 Thread Nick Wilson (Quiddity)
The latest technical newsletter is now available at
https://meta.wikimedia.org/wiki/Special:MyLanguage/Tech/News/2023/30. Below
is the English version.
You can help write the next newsletter: Whenever you see information about
Wikimedia technology that you think should be distributed more broadly, you
can add it to the next newsletter at
https://meta.wikimedia.org/wiki/Tech/News/Next .
More information on how to contribute is available. You can also contact me
directly.
As always, feedback (on- or off-list) is appreciated and encouraged.
——
Other languages: Bahasa Indonesia, Deutsch, English, Tiếng Việt, Türkçe,
español, français, italiano, norsk bokmål, polski, svenska, čeština,
русский, українська, עברית, العربية, فارسی, हिन्दी, বাংলা, ಕನ್ನಡ, 中文, 日本語


Latest *tech news* from the Wikimedia technical community. Please tell
other users about these changes. Not all changes will affect you.
Translations are available.

*Recent changes*

   - On July 18, the Wikimedia Foundation launched a survey about the
   technical decision-making process for people who do technical work that
   relies on software that is maintained by the Foundation or affiliates. If
   this applies to you, please take part in the survey. The survey will be
   open for three weeks, until August 7. You can find more information in
   the announcement e-mail on wikitech-l.

*Changes later this week*

   - The new version of MediaWiki will be on test wikis and MediaWiki.org
   from 25 July. It will be on non-Wikipedia wikis and some Wikipedias from
   26 July. It will be on all wikis from 27 July (calendar).

*Tech news prepared by Tech News writers and posted by bot • Contribute •
Translate • Get help • Give feedback • Subscribe or unsubscribe.*
___
Wikitech-ambassadors mailing list -- wikitech-ambassadors@lists.wikimedia.org
To unsubscribe send an email to wikitech-ambassadors-le...@lists.wikimedia.org


[Translators-l] Re: Ready for translation: Tech News #30 (2023)

2023-07-24 Thread Nick Wilson (Quiddity)
Thank you all for your help! It is deeply appreciated. The newsletter has
now been delivered (in 20 languages) to 1,076 pages.
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


[exim] Untainting help

2023-07-24 Thread Steve Wilson via Exim-users
I'm currently running Exim 4.92 with no taint issues, but the moment I
update to 4.96 I get the message below in the logs and messages bounce.
I understand tainting and not trusting third-party-entered data, but I'm
looking to fix this the right way; Google has presented a few hacks, and
that's not how I'd like to go.


1qJtZ6-0004kS-1z ** st...@swsystem.co.uk R=mysql_user 
T=local_dovecot_lda: Tainted arg 2 for local_dovecot_lda transport 
command: 'st...@swsystem.co.uk'


My understanding is that this comes from my transport (local_dovecot_lda).
Some documentation states I can use $domain_data and $local_part_data;
however, $local_part_data doesn't seem to be available in the transport.
Should I be doing a MySQL lookup for local_parts in the router, or is
there a better way to simplify my config?


Router:
mysql_user:
  driver    = accept
  domains   = +local_domains
  condition = ${lookup mysql{ \
                SELECT CONCAT(username,'@',domain) AS email \
                FROM user \
                WHERE username='${quote_mysql:$local_part}' \
                AND domain='${quote_mysql:$domain}' \
                AND SMTP_allowed='YES' \
              }{true}{false}}
  local_part_suffix = +* : -* : _*
  local_part_suffix_optional
  transport = ${if exists{/home/vpopmail/domains/${domain}/${local_part}/.mailfilter} \
              {local_mysql_maildrop} {local_dovecot_lda} }


Transport:
local_dovecot_lda:
  driver    = pipe
  path      = "/bin:/usr/bin:/usr/local/bin"
  environment       = "HOME=/home/vpopmail/domains/${quote_mysql:$domain}/${quote_mysql:$local_part}/;ORIG_LHS=${original_local_part};ORIG_RHS=${original_domain}"
  home_directory    = "/home/vpopmail/domains/${quote_mysql:$domain}/${quote_mysql:$local_part}/"
  current_directory = "/home/vpopmail/domains/${quote_mysql:$domain}/${quote_mysql:$local_part}/"
  command   = "/usr/libexec/dovecot/deliver -d ${quote_mysql:$local_part}@${quote_mysql:$domain}"
  log_output
  delivery_date_add
  envelope_to_add
  return_path_add
  message_suffix =
  temp_errors = 64 : 69 : 70 : 71 : 72 : 73 : 74 : 75 : 78
  user  = vpopmail
  group = vpopmail

local_domains is defined as:
domainlist local_domains = ${lookup mysql {\
    SELECT domain FROM user WHERE domain='${quote_mysql:$domain}' \
  UNION \
    SELECT domain FROM alias WHERE domain='${quote_mysql:$domain}' \
  UNION \
    SELECT domain FROM catchall WHERE domain='${quote_mysql:$domain}' \
  }}

Am I correct in thinking I should add a local_parts lookup to the router as
below, or is there a more elegant way to get the $*_data variables to the
transport?

local_parts = ${lookup mysql{ SELECT username \
    FROM user \
    WHERE username='${quote_mysql:$local_part}' \
    AND domain='${quote_mysql:$domain}' \
    AND SMTP_allowed='YES' }}
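For reference, that is the direction Exim's tainted-data handling points: when the local_parts option on a router is a lookup, a successful lookup stores its result in $local_part_data, which is untainted (just as domains = +local_domains already populates $domain_data). A sketch of how the router and transport could then fit together — untested, and simplified from the config above:

```
mysql_user:
  driver      = accept
  domains     = +local_domains
  # A successful lookup both accepts the local part and sets
  # $local_part_data to the (untainted) value returned by the query.
  local_parts = ${lookup mysql{ SELECT username FROM user \
                  WHERE username='${quote_mysql:$local_part}' \
                  AND domain='${quote_mysql:$domain}' \
                  AND SMTP_allowed='YES' }}
  transport   = local_dovecot_lda

local_dovecot_lda:
  driver  = pipe
  # $local_part_data and $domain_data are untainted, so they are safe
  # to use in the transport command where $local_part was rejected.
  command = "/usr/libexec/dovecot/deliver -d ${local_part_data}@${domain_data}"
  user    = vpopmail
  group   = vpopmail
```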

This config has been in place since 2010 with only minor updates. I've
spent hours trying to get my head around what needs doing and would
appreciate any advice.


Regards
Steve.


--
## subscription configuration (requires account):
##   https://lists.exim.org/mailman3/postorius/lists/exim-users.lists.exim.org/
## unsubscribe (doesn't require an account):
##   exim-users-unsubscr...@lists.exim.org
## Exim details at http://www.exim.org/
## Please use the Wiki with this list - http://wiki.exim.org/


Re: Cache write synchronization mode

2023-07-24 Thread Raymond Wilson
Hi Pavel,

I understand the differences between the sync modes in terms of when the
write returns. What I want to understand is if there are consistency risks
with the PrimarySync versus FullSync modes.

For example, if I have 4 nodes participating in the replicated cache (and
am using the default PrimarySync mode), then the write will return once the
primary node in the replicated cache has completed the write. At that point
if a hard failure occurs to one of the backup servers in the replicated
cache will the server that failed have an inconsistent (old) copy of that
element in the replicated cache when it restarts?

Raymond.
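For reference, the synchronization mode is chosen per cache; a Spring XML sketch (the cache name is illustrative — this shows only where the setting lives, not an answer to the consistency question):

```xml
<!-- Illustrative sketch: a replicated cache whose writes complete only
     after primary and backup nodes have all acknowledged the update. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
  <property name="name" value="myReplicatedCache"/>
  <property name="cacheMode" value="REPLICATED"/>
  <property name="writeSynchronizationMode" value="FULL_SYNC"/>
</bean>
```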


Re: [Servercert-wg] Participation Proposal for Revised SCWG Charter

2023-07-24 Thread Ben Wilson via Servercert-wg
Tim,

One problem we're trying to address is the potential for a great number of
“submarine voters”.  Such members may remain inactive for extended periods
of time and then surface only to vote for or against something they
suddenly are urged to support or oppose, without being aware of the issues.
This will skew and damage the decision-making process.

Another problem, which I don't think has been mentioned before, is the
ability of the CA/Browser Forum to adopt well-informed standards going
forward.  In other words, if something like what I suggest happens, then I can
see Certificate Consumers leaving the Forum and unilaterally setting very
separate and distinct rules. This will result in fragmentation,
inconsistency, and much more management overhead for CAs than the effort
needed to keep track of attendance, which is already being done by the
Forum.  (If you'd like, I can share with everyone the list of members who
have not voted or attended meetings in over two years.)

Ben

On Mon, Jul 24, 2023 at 11:41 AM Tim Hollebeek 
wrote:

> What is your argument in response to the point that any potential bad
> actors will be trivially able to satisfy the participation metrics?
>
>
>
> I’m very worried we’ll end up doing a lot of management and tracking work,
> without actually solving the problem.
>
>
>
> -Tim
>
>
>
> *From:* Ben Wilson 
> *Sent:* Monday, July 24, 2023 10:21 AM
> *To:* Ben Wilson ; CA/B Forum Server Certificate WG
> Public Discussion List 
> *Cc:* Tim Hollebeek 
> *Subject:* Re: [Servercert-wg] Participation Proposal for Revised SCWG
> Charter
>
>
>
> All,
>
> I have thought a lot about this, including various other formulas (e.g.
> market share) to come up with something reasonable, but I've come back to
> attendance as the key metric that we need to focus on. I just think that an
> attendance metric provides the only workable, measurable, and sound
> solution for determining the right to vote as a Certificate Consumer
> because it offers the following three elements:
>
>- Informed Decision-Making: Voting requires a comprehensive
>understanding of ongoing discussions and developments. Regular attendance
>provides members with the necessary context and knowledge to make
>well-informed decisions.
>- Commitment: Attendance is a tangible and measurable representation
>of a member's commitment to the Server Certificate WG and its objectives.
>It demonstrates a genuine interest in contributing to the development and
>improvement of the requirements.
>- Active Involvement: By prioritizing attendance, we encourage active
>involvement and discourage passive membership. Voting rights should be
>earned through consistent engagement, as this ensures that decisions are
>made by those who are genuinely invested in the outcomes.
>
> At this point, I'm going to re-draft a proposal for a revision to the
> Server Certificate WG Charter and present it on the public list (because an
> eventual revision of the Charter will have to take place at the Forum
> level).
>
> Thanks,
>
> Ben
>
>
>
> On Thu, Jul 13, 2023 at 9:45 AM Ben Wilson via Servercert-wg <
> servercert-wg@cabforum.org> wrote:
>
> Thanks, Tim.
>
>
>
> All,
>
>
>
> I will look closer at the distribution and use of software for browsing
> the internet securely, instead of participation metrics. There is at least
> one source, StatCounter (https://gs.statcounter.com/browser-market-share),
> that purports to measure use of browsing software, both globally and
> regionally. Would it be worthwhile to explore distribution by region and
> come up with a reasonable threshold?  Can we rely on StatCounter, or should
> we look elsewhere?
>
>
>
> Thanks,
>
>
>
> Ben
>
>
>
> On Wed, Jul 12, 2023 at 9:30 AM Tim Hollebeek via Servercert-wg <
> servercert-wg@cabforum.org> wrote:
>
> I have a meaningful comment.
>
>
>
> I don’t want to ever have to discuss or judge whether someone’s comment is
> “meaningful” or not, and I don’t think incentivizing people to post more
> comments than they otherwise would is helpful.
>
>
>
> I also think getting the chairs involved in any way in discussing whether
> a member representative did or did not have a medical condition during a
> particular time period is an extremely bad idea.
>
>
>
> Given that the original issue was trying to determine whether a
> certificate consumer is in fact a legitimate player in the ecosystem or
> not, I would suggest that exploring metrics like market share might be far
> more useful.  Metrics like participation are rather intrusive and onerous,
> except to those who are trying to game them, and those trying t

Re: [Servercert-wg] Participation Proposal for Revised SCWG Charter

2023-07-24 Thread Ben Wilson via Servercert-wg
All,

I have thought a lot about this, including various other formulas (e.g.
market share) to come up with something reasonable, but I've come back to
attendance as the key metric that we need to focus on. I just think that an
attendance metric provides the only workable, measurable, and sound
solution for determining the right to vote as a Certificate Consumer
because it offers the following three elements:

   - Informed Decision-Making: Voting requires a comprehensive
   understanding of ongoing discussions and developments. Regular attendance
   provides members with the necessary context and knowledge to make
   well-informed decisions.
   - Commitment: Attendance is a tangible and measurable representation of
   a member's commitment to the Server Certificate WG and its objectives. It
   demonstrates a genuine interest in contributing to the development and
   improvement of the requirements.
   - Active Involvement: By prioritizing attendance, we encourage active
   involvement and discourage passive membership. Voting rights should be
   earned through consistent engagement, as this ensures that decisions are
   made by those who are genuinely invested in the outcomes.

At this point, I'm going to re-draft a proposal for a revision to the
Server Certificate WG Charter and present it on the public list (because an
eventual revision of the Charter will have to take place at the Forum
level).

Thanks,

Ben


On Thu, Jul 13, 2023 at 9:45 AM Ben Wilson via Servercert-wg <
servercert-wg@cabforum.org> wrote:

> Thanks, Tim.
>
> All,
>
> I will look closer at the distribution and use of software for browsing
> the internet securely, instead of participation metrics. There is at least
> one source, StatCounter (https://gs.statcounter.com/browser-market-share),
> that purports to measure use of browsing software, both globally and
> regionally. Would it be worthwhile to explore distribution by region and
> come up with a reasonable threshold?  Can we rely on StatCounter, or should
> we look elsewhere?
>
> Thanks,
>
> Ben
>
> On Wed, Jul 12, 2023 at 9:30 AM Tim Hollebeek via Servercert-wg <
> servercert-wg@cabforum.org> wrote:
>
>> I have a meaningful comment.
>>
>>
>>
>> I don’t want to ever have to discuss or judge whether someone’s comment
>> is “meaningful” or not, and I don’t think incentivizing people to post more
>> comments than they otherwise would is helpful.
>>
>>
>>
>> I also think getting the chairs involved in any way in discussing whether
>> a member representative did or did not have a medical condition during a
>> particular time period is an extremely bad idea.
>>
>>
>>
>> Given that the original issue was trying to determine whether a
>> certificate consumer is in fact a legitimate player in the ecosystem or
>> not, I would suggest that exploring metrics like market share might be far
>> more useful.  Metrics like participation are rather intrusive and onerous,
>> except to those who are trying to game them, and those trying to game such
>> metrics will succeed with little or no effort.
>>
>>
>>
>> -Tim
>>
>>
>>
>> *From:* Servercert-wg  *On Behalf Of
>> *Roman Fischer via Servercert-wg
>> *Sent:* Wednesday, July 12, 2023 7:23 AM
>> *To:* CA/B Forum Server Certificate WG Public Discussion List <
>> servercert-wg@cabforum.org>
>> *Subject:* Re: [Servercert-wg] Participation Proposal for Revised SCWG
>> Charter
>>
>>
>>
>> Dear Ben,
>>
>>
>>
>> Mandatory participation has in my experience never resulted in more or
>> better discussions. People will dial into the telco and let it run in the
>> background to “earn the credits”.
>>
>>
>>
>> Also, what would happen after the 90 day suspension? Would the
>> organization be removed as a CA/B member?
>>
>>
>>
>> Rgds
>> Roman
>>
>>
>>
>> *From:* Servercert-wg  *On Behalf Of
>> *Ben Wilson via Servercert-wg
>> *Sent:* Freitag, 7. Juli 2023 21:59
>> *To:* CA/B Forum Server Certificate WG Public Discussion List <
>> servercert-wg@cabforum.org>
>> *Subject:* [Servercert-wg] Participation Proposal for Revised SCWG
>> Charter
>>
>>
>>
>> All,
>>
>>
>>
>> Here is a draft participation proposal for the SCWG to consider and
>> discuss for inclusion in a revised SCWG Charter.
>>
>>
>>
>> #.  Participation Requirements to Maintain Voting Privileges
>>
>>
>>
>> (a) Attendance.  The privilege to vote “Yes” or “No” on ballots is
>> suspended for 90 days if a Voting Member fails to mee

Re: [go-cd] shouldn't required resources also be at the pipeline level?

2023-07-24 Thread Chad Wilson
On Mon, Jul 24, 2023 at 8:44 PM Joshua Franta  wrote:

>
> chad thanks for your answer.
>
> i think the main source of confusion is that I thought parameters could
> only be referred to in scripts!
> I didn't know you could refer to them inside of other configuration
> properties!
> Is this documented?   Regardless that's super useful, there's probably
> some other things that can be cleaned up knowing that.
>

https://docs.gocd.org/current/configuration/admin_use_parameters_in_configuration.html#rules-around-usage-of-parameters



> I tried this on a pipeline w/out any template and it worked as described.
>  Just put the parameter reference in resource- UI accepts as long as
> parameter exists and works.
>
> I still have a question about how this works in examples using templates.
> If we didn't define the pipeline parameter by default, how would gocd
> interpret what I'm guessing would be a blank resource?
>
> eg we have
>
>1. a pipeline template called FAST_OR_SLOW_PIPE
>2. every pipeline implementing this template defines a parameter
>called  PIPE_RESOURCE_PARAM
>
> What happens if somebody only defines PIPE_RESOURCE_PARAM when the
> pipeline is FAST?
> If it's left as empty for ANY-aka-SLOW resources, will gocd interpret this
> as a blank resource requirement and fail?
> Or will it ignore blank resources?
>

I'm not sure - perhaps just try it empirically? It could either fail or see
it as blank, i.e. "no resource requirement" - I don't think there's a strong
case for either behaviour being more correct.

-Chad


> On Sun, Jul 23, 2023 at 10:04 AM Chad Wilson 
> wrote:
>
>> With that description, if you want to use *environments
>> <https://docs.gocd.org/current/introduction/concepts_in_go.html#environment>*
>> rather than resources
>> <https://docs.gocd.org/current/introduction/concepts_in_go.html#resources>
>> (and assuming you don't use environments for any other purpose), I would
>>
>> *1) Create 2 environments "fast" and "any"*
>>
>> *2) Map agents to environments*
>> agents on GROUPA = machines that have less beefy hardware
>> *- declare environments "any" when registered*
>>
>> agents on GROUPB = more expensive machines
>> *- declare both environments "fast" and "any" when registered*
>>
>> *3) When configuring your pipelines*
>>
>>1. have a couple of pipelines only run on the more expensive machines *<--
>>add these pipelines to "fast" environment*
>>2. have all other pipelines run in either group (next available
>>agent) *<-- add these pipelines to "any" environment*
>>
>> This should give you roughly the semantics you say you want, but note it
>> won't *prioritise* the GROUPB agents for use by the "couple of pipelines
>> only run on the more expensive machines", it will just ensure they never
>> run on the slower machines/agents. Something equivalent could also be done
>> with resources
>> <https://docs.gocd.org/current/introduction/concepts_in_go.html#resources>
>> .
>>
>> There is no way to "try another agent" from inside the actual job's
>> tasks. In this sense, the contents of tasks/scripts aren't relevant to
>> scheduling. The GoCD resources and environments have to be known at
>> schedule time. When you use pipeline parameters, they are realised at
>> configuration time: when you create a pipeline from a template, it will
>> force you to set the parameter values.
>>
>> To clarify, when you talked earlier about "a resource requirement" are
>> you *actually* referring to GoCD's concept of resources, or were you
>> talking in a generic sense? The answers are assuming you are talking about
>> GoCD resources
>> <https://docs.gocd.org/current/introduction/concepts_in_go.html#resources>
>> but now I am more confused by your shell script. *If you want to use
>> resources* (rather than environments) to affect scheduling, while still
>> avoiding duplication of your templates, we are suggesting you use a
>> parameter like *this*, not put it into some task content. You are
>> setting the parameterized value into the field that determines the job's
>> scheduling, not something that happens at execution time like a task. But
>> again, if your goal is to control scheduling at pipeline level, for all
>> jobs in a pipeline, you don't need to use resources, and can just use
>> environments as in my earlier example above.
>>
>> [image: image.png]
>>
>> -Chad
>>
>>
>> On Sun, Jul 23, 2023 a

Re: [go-cd] shouldn't required resources also be at the pipeline level?

2023-07-23 Thread Chad Wilson
With that description, if you want to use *environments
<https://docs.gocd.org/current/introduction/concepts_in_go.html#environment>*
rather than resources
<https://docs.gocd.org/current/introduction/concepts_in_go.html#resources>
(and assuming you don't use environments for any other purpose), I would

*1) Create 2 environments "fast" and "any"*

*2) Map agents to environments*
agents on GROUPA = machines that have less beefy hardware
*- declare environments "any" when registered*

agents on GROUPB = more expensive machines
*- declare both environments "fast" and "any" when registered*

*3) When configuring your pipelines*

   1. have a couple of pipelines only run on the more expensive machines *<--
   add these pipelines to "fast" environment*
   2. have all other pipelines run in either group (next available agent) *<--
   add these pipelines to "any" environment*

This should give you roughly the semantics you say you want, but note it
won't *prioritise* the GROUPB agents for use by the "couple of pipelines
only run on the more expensive machines", it will just ensure they never
run on the slower machines/agents. Something equivalent could also be done
with resources
<https://docs.gocd.org/current/introduction/concepts_in_go.html#resources>.

There is no way to "try another agent" from inside the actual job's tasks.
In this sense, the contents of tasks/scripts aren't relevant to scheduling.
The GoCD resources and environments have to be known at schedule time. When
you use pipeline parameters, they are realised at configuration time:
creating a pipeline from a template forces you to set the parameter values.
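As an illustration, in GoCD's YAML config-repo format this is what setting a
parameterized resource looks like (a sketch only — the pipeline name,
parameter name "agent-pool", and material URL are made up; check the YAML
config plugin docs for exact field spellings):

```yaml
format_version: 10
pipelines:
  priority-build:          # hypothetical pipeline name
    group: builds
    materials:
      repo:
        git: https://example.com/repo.git   # placeholder material
    parameters:
      agent-pool: fast     # set per pipeline; a sibling pipeline could use another value
    stages:
      - build:
          jobs:
            compile:
              resources:
                - "#{agent-pool}"   # the scheduler matches this against agent resources
              tasks:
                - exec:
                    command: make
```

With a template, the stages/jobs (including the `resources` field) would live
in the shared template, and only the `parameters` block would differ per
pipeline.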

To clarify, when you talked earlier about "a resource requirement" are you
*actually* referring to GoCD's concept of resources, or were you talking in
a generic sense? The answers are assuming you are talking about GoCD
resources
<https://docs.gocd.org/current/introduction/concepts_in_go.html#resources>
but now I am more confused by your shell script. *If you want to use
resources* (rather than environments) to affect scheduling, while still
avoiding duplication of your templates, we are suggesting you use a
parameter like *this*, not put it into some task content. You are setting
the parameterized value into the field that determines the job's
scheduling, not something that happens at execution time like a task. But
again, if your goal is to control scheduling at pipeline level, for all
jobs in a pipeline, you don't need to use resources, and can just use
environments as in my earlier example above.

[image: image.png]

-Chad


On Sun, Jul 23, 2023 at 8:21 PM Joshua Franta  wrote:

> appreciate so many responses.  i think we're a little apart so i'll take
> the suggestion to give our example:
>
> GROUPA = machines that have less beefy hardware
> GROUPB = more expensive machines
>
> we'd like to:
>
>1. have a couple of pipelines only run on the more expensive machines
>2. have all other pipelines run in either group (next available agent)
>
> perhaps this was not clear from my previous explanations, but a couple of
> people have suggested pipeline parameters.
>
> EXAMPLE
>
> a pipeline parameter is only going to be available to the job after the
> job has already been assigned an agent, right?
>
> so if i have a pipeline called 'Priority' w/a parameter  called "group-id"
> and the pipeline has a 'Job' that is a shell script:
>
> 
> #!/bin/sh
>
> agent_resource="$GO_AGENT_RESOURCE_VARIABLE"
>
> if ! echo "#{group-id}" | grep -q "$agent_resource"; then
>
>    echo "agent can't run #{group-id} pipelines"
>
>    # won't this make my pipeline fail when I want it to simply try
> another agent?
>    exit 1
> fi
>
> 
>
> or perhaps people saying this know of some environment variable that where
> we can request another agent?
>
> obviously pipeline parameters themselves don't do anything, so i'm confused
> how i can affect assignment in a job that requires an agent before it runs.
> this 2nd part is what i don't get above
>
> appreciate any clarifications or suggestions thx
>
>
> On Sat, Jul 22, 2023 at 9:58 AM Chad Wilson 
> wrote:
>
>>
>> On Sat, Jul 22, 2023 at 8:21 PM Joshua Franta 
>> wrote:
>>
>>> re: using environments rather than resources... environments can't be
>>> defined at the pipeline level either though?
>>>
>>
>> A pipeline is *assigned* to 0 or 1 environments (via the Admin  >
>> Environments UI if not using pipelines-as-code) - thus it's at the pipeline
>> level by definition. It defines a scheduling requirement for all jobs in
>>

Re: [go-cd] shouldn't required resources also be at the pipeline level?

2023-07-22 Thread Chad Wilson
On Sat, Jul 22, 2023 at 8:21 PM Joshua Franta  wrote:

> re: using environments rather than resources... environments can't be
> defined at the pipeline level either though?
>

A pipeline is *assigned* to 0 or 1 environments (via the Admin  >
Environments UI if not using pipelines-as-code) - thus it's at the pipeline
level by definition. It defines a scheduling requirement for all jobs in
that pipeline. Which seems what you asked for with "able to communicate a
resource requirement at the pipeline level" right?


or i guess it's more correct to say that using environments is a bit of a
> side-car feature, in that we use interact w/environments through a
> different prisim/ui/config (no biggie) but also seems it's mutually
> exclusive to maximizing overall usage of agents.for us if a given host
> can execute something (a pipeline, a job) it should.  and if it can't, it
> shouldn't.
> trying to force a hard divider can be useful for prod/qa staging, but it
> seems to limit just being able to have pipelines declare their needs.
> maybe i'm missing what you're saying but i don't think environments are
> functionally equivalent to resources?
>

I didn't imply they were functionally equivalent, but I did try to imply
they were a different mechanism of defining a requirement on a job's
scheduling, at the pipeline level. If a pipeline is assigned to an
"environment", its jobs must be scheduled on agents that also declare they
support that "environment". Similarly if a pipeline job declares a resource
requirement, the agent must also have that resource declared for it to be
assigned. This is a very similar, but different level of configuration of a
scheduling requirement, no?
https://docs.gocd.org/current/configuration/managing_environments.html

Anyway, perhaps I don't understand what you are trying to achieve. If you
are currently trying to "prioritise" pipelines by using resources you can
also "prioritise" pipelines by having pools of agents, say, dedicated to an
environment you call "high-priority". As I said, "Don't need to get hung up
on the name [environment]".



> we use template parameters extensively already.
> eg we even templatize further inside our own jobs by re-using scripts that
> interact with template parameters on most commonly used templates (eg our
> most popular template has maybe 10-15 pipelines).
> however this is more of a job specific thing since it's at the job level.
> if you're saying we could change every pipeline to read this at a pipeline
> level, that is a non-trivial change to every job.
>

You said you had many templates that varied only by the "resources" field
for jobs. If that is the stated problem then parameters are a possible
solution to remove duplication, no?


> that's ok but i guess my overall question tho would be that if a given job
> decided it couldn't execute the pipeline parameters... it has no way to
> pass the job to another agent?
>

That's the same problem you have currently if the resource is typoed or
wrong inside the template, no? If the resource requirement has no available
agents, then it can't be scheduled.


> in such an example it would just fail the job, no?   again maybe i'm not
> following but this seems to not allow the business/value level to declare
> minimum needs
> (environments seem like they are more about maximal requirements, but
> i'm no expert)
>

I'm not following what you're trying to say here, sorry.

Perhaps this would be easier if you gave a specific example of how you
achieve "have some pipelines that are given higher preferences for
agent/build resources" currently, rather than talking in abstract terms?

-Chad


On Sat, Jul 22, 2023 at 6:56 AM Chad Wilson  wrote:
>
>> Have you tried to use "environments" (or a mix of environments and
>> resources) to achieve what you are trying to?
>>
>> When scheduling jobs it's the combination of the resource and the
>> environment that are matched to an agent, but the relevant environment is
>> declared at the pipeline level like you refer to. Don't need to get hung up
>> on the name so much. Yes, you can have "environment variables" attached to
>> an environment and propagate those to all pipelines within it, but you
>> don't have to use them like that.
>>
>> Alternatively, to make the templates less duplicated and allow the
>> resource to flow from the pipeline *using* the template, you could try
>> using template parameters
>> <https://docs.gocd.org/current/configuration/admin_use_parameters_in_configuration.html>
>> in the resources field? e.g. #{job-resource-requirement}? If there are only a
>> small number of different resources used across the stages/jobs, you could
>> use

Re: [go-cd] shouldn't required resources also be at the pipeline level?

2023-07-22 Thread Chad Wilson
Have you tried to use "environments" (or a mix of environments and
resources) to achieve what you are trying to?

When scheduling jobs it's the combination of the resource and the
environment that are matched to an agent, but the relevant environment is
declared at the pipeline level like you refer to. Don't need to get hung up
on the name so much. Yes, you can have "environment variables" attached to
an environment and propagate those to all pipelines within it, but you
don't have to use them like that.

Alternatively, to make the templates less duplicated and allow the resource
to flow from the pipeline *using* the template, you could try using template
parameters
<https://docs.gocd.org/current/configuration/admin_use_parameters_in_configuration.html>
in the resources field? e.g. #{job-resource-requirement}? If there are only a
small number of different resources used across the stages/jobs, you could
use the parameters to "model" this I imagine.

-Chad

On Sat, Jul 22, 2023 at 6:54 PM Josh  wrote:

> QUESTION:
>
> Shouldn't we also be able to communicate a resource requirement at the
> pipeline level, and not just inside a single job?
>
> I get that it definitely needs to be at the job level since that's the
> smallest unit of work and some machines can't execute certain tasks.
> But at the value-stream/pipeline/business level, you also want to be able
> to have some pipelines compiling on preferred resources, no?
>
>
> is there a better way to accomplish this?
> or perhaps this already is possible and i'm missing it.
> i looked closely at the config since sometimes you can do something simple
> that is not possible inside the UI, but I'm not seeing it.
>
> To restate use case:  We have some pipelines that are given higher
> preferences for agent/build resources.   Wanting to do a lot more of this,
> but it's tricky because resources can only be defined at the job level (in
> the UI). Also we use a lot of templates, so having resources at job
> level means we end up having lots of almost identical templates that only
> vary by the resources used (which somewhat defeats the point of the
> templates and the value of gocd in this respect).
>
> hoping there is a config hack or maybe i'm missing something.
> also if this could be done in a plugin, any color there would be helpful
> (and i would make sure it's open sourced if need be).
>
> thx
>
> ps i keep using other ci/cd products and gocd is still one of the all
> around bests.
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "go-cd" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to go-cd+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/go-cd/a9a4ba2c-b1c9-4202-9408-3e2566929b59n%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"go-cd" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to go-cd+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/go-cd/CAA1RwH8zGo6mu0ss0jCCyw0D7Hw4JOwEwfcfNu20yqo0aRRdWw%40mail.gmail.com.


[go-cd] Release Announcement - 23.2.0

2023-07-22 Thread Chad Wilson
Hello everyone,

A new release of GoCD (23.2.0) is out.

This release is mainly a minor maintenance release. As always, please
remember to take a backup before upgrading.

To know more about the features and bug fixes in this release, see the release
notes or head to the downloads page to try it. Feedback and ideas are always
welcome - we appreciate the discussion on issues you are having, and how we
can improve things.

Cheers,
Chad & Aravind

-- 
You received this message because you are subscribed to the Google Groups 
"go-cd" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to go-cd+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/go-cd/CAA1RwH-7w%3D%3DFrVcrMT%3DKK0umoEOxCddgx0wcTNfdgh_A4xXn_g%40mail.gmail.com.


[Translators-l] Ready for translation: Tech News #30 (2023)

2023-07-20 Thread Nick Wilson (Quiddity)
The latest tech newsletter is ready for early translation:
https://meta.wikimedia.org/wiki/Tech/News/2023/30

Direct translation link:
https://meta.wikimedia.org/w/index.php?title=Special:Translate&group=page-Tech%2FNews%2F2023%2F30&action=page

We plan to send the newsletter on Monday afternoon (UTC), i.e. Monday
morning PT. The existing translations will be posted on the wikis in
that language. Deadlines:
https://meta.wikimedia.org/wiki/Tech/News/For_contributors#The_deadlines

There will be more edits by Friday noon UTC but the existing content should
generally remain fairly stable. I will let you know on Friday in any
case.

Let us know if you have any questions, comments or concerns. As
always, we appreciate your help and feedback.

(If you haven't translated Tech News previously, see this email:
https://lists.wikimedia.org/pipermail/translators-l/2017-January/003773.html)
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


Request for Input: CA Incident Reporting

2023-07-20 Thread 'Clint Wilson' via CCADB Public
All,

During the CA/Browser Forum Face-to-Face 59 meeting, several Root Store 
Programs expressed an interest in improving Web PKI incident reporting.

The CCADB Steering Committee is interested in this community’s recommendations 
on improving the standards applicable to and the overall quality of incident 
reports submitted by Certification Authority (CA) Owners. We aim to facilitate 
effective collaboration, foster transparency, and promote the sharing of best 
practices and lessons learned among CAs and the broader community.

Currently, some Root Store Programs require incident reports from CA Owners to 
address a list of items in a format detailed on ccadb.org  
[1]. While the CCADB format provides a framework for reporting, we would like 
to discuss ideas on how to improve the quality and usefulness of these reports.

We would like to make incident reports more useful and effective where they:

- Are consistent in quality, transparency, and format.
- Demonstrate thoroughness and depth of investigation and incident analysis,
  including for variants.
- Clearly identify the true root cause(s) while avoiding restating the issue.
- Provide sufficient detail that enables other CA Owners or members of the
  public to comprehend and, where relevant, implement an equivalent solution.
- Present a complete timeline of the incident, including the introduction of
  the root cause(s).
- Include specific, actionable, and timebound steps for resolving the issue(s)
  that contributed to the root cause(s).
- Are frequently updated when new information is found and steps for resolution
  are completed, delayed, or changed.
- Allow a reader to quickly understand what happened, the scope of the impact,
  and how the remediation will sufficiently prevent the root cause of the
  incident from recurring.

We appreciate, to state it lightly, members of this community and the general 
public who generate and review reports, offer their understanding of the 
situation and impact, and ask clarifying questions. 

Call to action: In the spirit of continuous improvement, we are requesting (and 
very much appreciate) this community’s suggestions for how CA incident 
reporting can be improved.

Not every suggestion will be implemented, but we will commit to reviewing all 
suggestions and collectively working towards an improved standard.

Thank you
-Clint, on behalf of the CCADB Steering Committee

[1] https://www.ccadb.org/cas/incident-report 

-- 
You received this message because you are subscribed to the Google Groups 
"CCADB Public" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to public+unsubscr...@ccadb.org.
To view this discussion on the web visit 
https://groups.google.com/a/ccadb.org/d/msgid/public/3B253FFF-4070-4F0E-95D2-166FAC01C5A7%40apple.com.


smime.p7s
Description: S/MIME cryptographic signature


Re: [cabfpub] Voting begins for Ballot Forum-18 v3 - Update CA/B Forum Bylaws to version 2.5

2023-07-19 Thread Clint Wilson via Public
Apple votes Yes on Forum-018.

> On Jul 13, 2023, at 1:43 AM, Dimitris Zacharopoulos (HARICA) via Public 
>  wrote:
> 
> This message begins the voting period for ballot Forum-18 v3.
> 
> Dimitris.
> 
> Purpose of the Ballot
> 
> The Forum has identified and discussed a number of improvements to be made to 
> the current version of the Bylaws to improve clarity and allow the Forum to 
> function more efficiently. Most of these changes are described in the “Issues 
> with Bylaws to be addressed 
> <https://docs.google.com/document/d/1EtrIy3F5cPge0_M-C8J6fe72KcVI8H5Q_2S6S31ynU0>”
>  document. Some preparatory discussions and reviews can be checked on GitHub 
> <https://github.com/cabforum/forum/pull/32>.
> Here is a list of major changes:
> 
> - Clarified that it is not always required to “READ” the antitrust statement
>   before each meeting and added the option of reading a "note-well".
> - Clarified where to send/post Chartered Working Group minutes.
> - Increased the number of days before automatically failing a ballot from 21
>   to 90 days.
> - Allow Chair or Vice-Chair to update links to other sections within a
>   document without a ballot.
> - Applied grammatical and other language improvements.
> - Clarified that Subcommittee minutes do not need to also be published on the
>   public web site.
> - Created a new member category called "Probationary Member", applicable to
>   both Certificate Issuer and Consumer categories, and separated "Associate
>   Members - Certificate Issuers" from the "Associate Member" category.
> - Clarified language for Associate Members for consistency with Probationary
>   Member for the ballot proposals and endorsing.
> - Removed the member category called "Root CA Issuer" and only kept the "CA
>   Issuer" category.
> - Added a step to check the authority of the signer during membership
>   applications.
> - Updated the Chartered Working Group template.
> - Added some language to the Code of Conduct.
> - Publishing private conversations without express permission is considered a
>   violation of the Code of Conduct.
> - Updated the elections language as agreed at F2F#58.
> The following motion has been proposed by Dimitris Zacharopoulos of HARICA 
> and endorsed by Ben Wilson of Mozilla and Paul van Brouwershaven of Entrust.
> 
> MOTION BEGINS
> 
> Amendment to the Bylaws: Replace the entire text of the Bylaws of the 
> CA/Browser Forum with the attached version (CA-Browser Forum Bylaws v2.5.pdf).
> NOTE: There are two redlines produced by GitHub
> 
> Bylaws-redline.pdf (attached)
> GitHub redline available at
> https://github.com/cabforum/forum/pull/32/files#diff-3c3a1aa55886ff217ac9c808f96a5e9a9582fc11
> MOTION ENDS
> 
> The procedure for this ballot is as follows:
> 
> Forum-18 v3 - Update CA/B Forum Bylaws to version 2.5
> Phase | Start time (10:00 UTC) | End time (10:00 UTC)
> Discussion (at least 7 days) | 4 July 2023 | 11 July 2023
> Expected Vote for approval (7 days) | 13 July 2023 | 20 July 2023
> ___
> Public mailing list
> Public@cabforum.org
> https://lists.cabforum.org/mailman/listinfo/public



smime.p7s
Description: S/MIME cryptographic signature
___
Public mailing list
Public@cabforum.org
https://lists.cabforum.org/mailman/listinfo/public


Re: Audit Reminder Email Summary - Intermediate Certificates

2023-07-19 Thread Kathleen Wilson
 Forwarded Message 
Subject: Summary of July 2023 Outdated Audit Statements for Intermediate 
Certs
Date: Tue, 18 Jul 2023 12:00:36 + (GMT)

None

-- 

-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/2d28f870-55f6-4467-b15e-d353d284715an%40mozilla.org.


Re: Audit Reminder Email Summary - Root Certificates

2023-07-19 Thread Kathleen Wilson
 Forwarded Message 
Subject: Summary of July 2023 Audit Reminder Emails
Date: Tue, 18 Jul 2023 12:00:52 + (GMT)

Mozilla: Audit Reminder
CA Owner: eMudhra Technologies Limited
Root Certificates:
   emSign Root CA - G1
   emSign ECC Root CA - G3
   emSign Root CA - C1
   emSign ECC Root CA - C3
Standard Audit: 
https://www.cpacanada.ca/GenericHandlers/CPACHandler.ashx?AttachmentID=27090b99-fb25-4a57-ad60-4b4c0866a62f
Standard Audit Period End Date: 2022-05-31
BR Audit: 
https://www.cpacanada.ca/GenericHandlers/CPACHandler.ashx?AttachmentID=0554309b-1346-4217-a45d-7eae7a87d803
BR Audit Period End Date: 2022-05-31
EV Audit: 
https://www.cpacanada.ca/GenericHandlers/CPACHandler.ashx?AttachmentID=cee5815c-0971-452c-a242-0b07d1be37a6
EV Audit Period End Date: 2022-05-31
CA Comments: null



Mozilla: Audit Reminder
CA Owner: Actalis
Root Certificates:
   Actalis Authentication Root CA
Standard Audit: 
https://www.bureauveritas.it/sites/g/files/zypfnx256/files/media/document/CAB-Forum_AAL_ACTALIS_E_V2.9%20Version1_2_sign_0.pdf
Standard Audit Period End Date: 2022-05-31
BR Audit: 
https://www.bureauveritas.it/sites/g/files/zypfnx256/files/media/document/CAB-Forum_AAL_ACTALIS_E_V2.9%20Version1_2_sign_0.pdf
BR Audit Period End Date: 2022-05-31
EV Audit: 
https://www.bureauveritas.it/sites/g/files/zypfnx256/files/media/document/CAB-Forum_AAL_ACTALIS_E_V2.9%20Version1_2_sign_0.pdf
EV Audit Period End Date: 2022-05-31
CA Comments: null



Mozilla: Audit Reminder
CA Owner: Chunghwa Telecom
Root Certificates:
   HiPKI Root CA - G1
   Chunghwa Telecom Co., Ltd. - ePKI Root Certification Authority
Standard Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=158fae2a-970c-48a7-ad3b-74b5b2cc033d
Standard Audit Period End Date: 2022-05-31
BR Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=cd77b124-5910-45d8-a3b4-d03f4bf6b7ff
BR Audit Period End Date: 2022-05-31
CA Comments: null



Mozilla: Overdue Audit Statements
CA Owner: GlobalSign nv-sa
Root Certificates:
   GlobalSign**
   GlobalSign Root CA**
   GlobalSign Root R46**
   GlobalSign Root E46**
   GlobalSign**
   GlobalSign Secure Mail Root R45**
   GlobalSign**
   GlobalSign Secure Mail Root E45**

** Audit Case in the Common CA Database is under review for this root 
certificate.

Standard Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=e502c9d3-90fa-486d-9006-f92e4b474b97
Standard Audit Period End Date: 2022-03-31
BR Audit: https://bugzilla.mozilla.org/attachment.cgi?id=9283592
BR Audit Period End Date: 2022-03-31
BR Audit:
BR Audit Period End Date:
EV Audit: https://bugzilla.mozilla.org/attachment.cgi?id=9283591
EV Audit Period End Date: 2022-03-31
EV Audit:
EV Audit Period End Date:
CA Comments: null



Mozilla: Audit Reminder
CA Owner: Government of Spain, Autoritat de Certificació de la Comunitat 
Valenciana (ACCV)
Root Certificates:
   ACCVRAIZ1
Standard Audit: 
https://www.cpacanada.ca/GenericHandlers/CPACHandler.ashx?AttachmentID=968b4d5c-8c28-432d-8914-9261d5780a4e
Standard Audit Period End Date: 2022-04-30
BR Audit: 
https://www.cpacanada.ca/GenericHandlers/CPACHandler.ashx?AttachmentID=d63a6979-d148-462d-8caa-adf739e9f3fd
BR Audit Period End Date: 2022-04-30
CA Comments: null



Mozilla: Audit Reminder
CA Owner: SECOM Trust Systems CO., LTD.
Root Certificates:
   SECOM Trust.net - Security Communication RootCA1
   Security Communication RootCA2
   Security Communication ECC RootCA1
   Security Communication RootCA3
Standard Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=a00e07ce-e497-4321-a9b8-d1e8025bd799
Standard Audit Period End Date: 2022-06-06
BR Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=b9a750f7-6f94-4451-aff5-08dd730eb4a6
BR Audit Period End Date: 2022-06-06
EV Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=0625f8a9-d690-469c-a3ee-5c1e4551e8c5
EV Audit Period End Date: 2022-06-06
CA Comments: null




-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/cddecfbc-1041-4cea-89c6-27005b402ae1n%40mozilla.org.


Re: MRSP 2.9: S/MIME BRs and Audits

2023-07-19 Thread Ben Wilson
All,

For comment and discussion, here is some draft language for replacement in MRSP
section 1.1 Scope
<https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#11-scope>
:

-- Begin MRSP Proposal --

This policy applies, as appropriate, to certificates matching any of the
following

…

3. end entity certificates that have at least one valid, unrevoked chain up
to such a CA certificate through intermediate certificates that are all in
scope and

   - an Extended Key Usage (EKU) extension that contains the
   anyExtendedKeyUsage or id-kp-serverAuth KeyPurposeId, or no EKU extension
   (i.e. a "server certificate"); or
   - an EKU extension of id-kp-emailProtection and an rfc822Name or an
   otherName of type id-on-SmtpUTF8Mailbox in the subjectAltName (i.e. an
   "email certificate").

-- End MRSP Proposal -
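As a reading aid (not normative — the function and constant names below are
illustrative, not from the policy), the proposed scope test for an end-entity
certificate that already chains to an in-scope CA could be sketched as:

```python
ANY_EKU = "anyExtendedKeyUsage"
SERVER_AUTH = "id-kp-serverAuth"
EMAIL_PROTECTION = "id-kp-emailProtection"


def mrsp_scope(ekus, san_types):
    """Classify a chained, unrevoked end-entity cert under the proposed text.

    ekus: KeyPurposeId names from the EKU extension (empty = no EKU extension).
    san_types: GeneralName types present in the subjectAltName.
    Returns "server", "email", or None (outside MRSP scope).
    """
    # anyExtendedKeyUsage or id-kp-serverAuth, or no EKU extension at all
    if not ekus or ANY_EKU in ekus or SERVER_AUTH in ekus:
        return "server"
    # id-kp-emailProtection *and* an rfc822Name or SmtpUTF8Mailbox SAN
    if EMAIL_PROTECTION in ekus and (
        "rfc822Name" in san_types or "id-on-SmtpUTF8Mailbox" in san_types
    ):
        return "email"
    return None
```

Note how, unlike the current section 1.1 text, an emailProtection certificate
with no rfc822Name or SmtpUTF8Mailbox SAN (e.g. a document-signing
certificate) falls out of scope under this sketch.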

This language would replace what is currently in MRSP section 1.1
<https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#11-scope>
:

-

3. end entity certificates that have at least one valid, unrevoked chain up
to such a CA certificate through intermediate certificates that are all in
scope, such end entity certificates having either:


   - an Extended Key Usage (EKU) extension that contains one or more of
   these KeyPurposeIds: anyExtendedKeyUsage, id-kp-serverAuth,
   id-kp-emailProtection; or
   - no EKU extension.

Thoughts?

Ben

On Wed, Jul 19, 2023 at 10:32 AM Ben Wilson  wrote:

> Hi Christophe,
> Thanks for pointing out this issue. I will work this into my edits on
> Github so that the scope of the Mozilla Root Store Policy for S/MIME
> certificates is narrowed. In other words, I'll add the language "and the
> inclusion of a rfc822Name or an otherName of type id-on-SmtpUTF8Mailbox in
> the subjectAltName extension" to the draft of version 2.9 that I'm working
> on so that an S/MIME certificate, for purposes of the MRSP, must have not
> only the emailProtection EKU, but also an RFC822 name or an otherName of
> type id-on-SmtpUTF8Mailbox in the SAN.
> Does that resolve your concern?
> Thanks,
> Ben
>
>
> On Thu, Jul 6, 2023 at 9:47 AM Christophe Bonjean <
> christophe.bonj...@globalsign.com> wrote:
>
>> Hi Ben and Kathleen,
>>
>>
>>
>> “Insofar as the *S/MIME* or TLS Baseline Requirements *attempt to define
>> their own scope*, the *scope of this policy (section 1.1) overrides that*.
>> CA operations relating to issuance of all S/MIME or TLS server certificates
>> in the scope of this policy SHALL conform to the S/MIME or TLS Baseline
>> Requirements, as applicable.”
>>
>>
>>
>> Section 1.1 of the MRSP states “[…], such end entity certificates having
>> either: an Extended Key Usage (EKU) extension that contains one or more of
>> these KeyPurposeIds: anyExtendedKeyUsage, id-kp-serverAuth,
>> *id-kp-emailProtection*; or [….]”
>>
>>
>>
>> Section 1.1 of the SBR states “An S/MIME Certificate for the purposes of
>> this document can be identified by the existence of an Extended Key Usage
>> (EKU) for id-kp-emailProtection (OID: 1.3.6.1.5.5.7.3.4) *and the
>> inclusion of a rfc822Name or an otherName of type id-on-SmtpUTF8Mailbox in
>> the subjectAltName extension*.”
>>
>>
>>
>> Is the intention of the Mozilla Root Store Policy update to apply the
>> SMIME BRs to all certificates with the EKU EmailProtection, including
>> certificates without an rfc822Name or an otherName, such as certificates
>> for document and pdf signing purposes?
>>
>>
>>
>> I recall these use cases being discussed in the working group and
>> intentionally out-scoping them from the SBRs to avoid unintended adverse
>> effects, so wonder how to interpret the proposed update to the MRSP.
>>
>>
>>
>> Kind regards,
>>
>>
>>
>> Christophe
>>
>>
>>
>> *From:* dev-security-policy@mozilla.org 
>> *On Behalf Of *Ben Wilson
>> *Sent:* Wednesday, June 14, 2023 12:54 AM
>> *To:* dev-secur...@mozilla.org 
>> *Subject:* MRSP 2.9: S/MIME BRs and Audits
>>
>>
>>
>> All,
>>
>> This email opens up discussion of our proposed resolution of GitHub
>> Issue #258 <https://github.com/mozilla/pkipolicy/issues/258> (SMIME
>> Baseline Requirements).
>>
>> We plan to add requirements to version 2.9 of the Mozilla Root Store
>> Policy <https://www.mozilla.org/projects/security/certs/policy/>
>> regarding the CA/Browser Forum’s S/MIME Baseline Requirements.
>>
>> We propose to update Mozilla’s Root Store Policy to require annual S/MIME
>> BR audits as follows.

Updated Version of CCADB Policy to v. 1.2.3

2023-07-19 Thread Ben Wilson
All,

The CCADB policy has been updated to Version 1.2.3. This minor version
increment represents a
change to Section 5.1.2 (“Webtrust”) because WebTrust now has a seal file
for Qualified Audits that is integrated with the CCADB. It is recommended
that if a CA obtains a qualified audit, then the CA Owner should obtain
this new type of seal file URL from WebTrust/CPA Canada and use that in the
CCADB. The benefits of this approach are (1) the PDF can be retrieved from
WebTrust/CPA Canada and processed using Automated Letter Validation (ALV)
and (2) because the authenticity of the audit letter is established, the
auditor does not need to be contacted separately to verify audit letter
authenticity.

The exact changes can be viewed here.


Please continue to contact supp...@ccadb.org for all other support needs
with the CCADB.

Thank you,

- Ben, on behalf of the CCADB Steering Committee

-- 
You received this message because you are subscribed to the Google Groups 
"CCADB Public" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to public+unsubscr...@ccadb.org.
To view this discussion on the web visit 
https://groups.google.com/a/ccadb.org/d/msgid/public/CA%2B1gtaYwNk%2BYPbpg2ZqgNNMV_M9qa4-HSSQka3hw%2B0nYttWRbg%40mail.gmail.com.


[Smcwg-public] Mozilla Wiki Page for S/MIME BR Transition Issues

2023-07-19 Thread Ben Wilson via Smcwg-public
All,

I have created a wiki page (https://wiki.mozilla.org/CA/Transition_SMIME_BRs)
to address miscellaneous issues that might arise for CAs in their
transition toward compliance with the CA/Browser Forum’s Baseline
Requirements for S/MIME Certificates (S/MIME BRs). (The wiki page is for
items that are not directly explained in the upcoming version 2.9 of the
Mozilla Root Store Policy.)

The first issue addressed in the wiki page relates to the re-issuance of
existing intermediate CAs used for issuing S/MIME certificates. Based on
language provided by Corey Bonnell, the wiki page explains how Mozilla
expects S/MIME CA re-issuance to occur.

We may add explanations about other items of concern to the wiki page in
the future, and if so, I’ll advise you accordingly.

Thanks,

Ben
___
Smcwg-public mailing list
Smcwg-public@cabforum.org
https://lists.cabforum.org/mailman/listinfo/smcwg-public


S/MIME BR Transition Wiki Page

2023-07-19 Thread Ben Wilson
All,

I have created a wiki page (https://wiki.mozilla.org/CA/Transition_SMIME_BRs)
to address miscellaneous issues that might arise for CAs in their
transition toward compliance with the CA/Browser Forum’s Baseline
Requirements for S/MIME Certificates (S/MIME BRs). (The wiki page is for
items that are not directly explained in the upcoming version 2.9 of the
Mozilla Root Store Policy.)

The first issue addressed in the wiki page relates to the re-issuance of
existing intermediate CAs used for issuing S/MIME certificates. Based on
language provided by Corey Bonnell, the wiki page explains how Mozilla
expects S/MIME CA re-issuance to occur.

We may add explanations about other items of concern to the wiki page in
the future, and if so, I’ll advise you accordingly.

Thanks,

Ben

-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/CA%2B1gtabWB1%3DYS6%2B9Yf2fE3ii-6mFvJhkP-LxmtZs3s4i4z8_pg%40mail.gmail.com.


Re: MRSP 2.9: S/MIME BRs and Audits

2023-07-19 Thread Ben Wilson
Hi Christophe,
Thanks for pointing out this issue. I will work this into my edits on
Github so that the scope of the Mozilla Root Store Policy for S/MIME
certificates is narrowed. In other words, I'll add the language "and the
inclusion of a rfc822Name or an otherName of type id-on-SmtpUTF8Mailbox in
the subjectAltName extension" to the draft of version 2.9 that I'm working
on so that an S/MIME certificate, for purposes of the MRSP, must have not
only the emailProtection EKU, but also an RFC822 name or an otherName of
type id-on-SmtpUTF8Mailbox in the SAN.
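The narrowed scope test described above can be sketched as a small predicate. This is an illustration only: the OID constant is the real id-kp-emailProtection KeyPurposeId, but the function name and its inputs are hypothetical, not Mozilla tooling.

```python
# Sketch of the narrowed MRSP S/MIME scope test described above.
# ID_KP_EMAIL_PROTECTION is the real id-kp-emailProtection OID; the
# function and its inputs are hypothetical illustrations.
ID_KP_EMAIL_PROTECTION = "1.3.6.1.5.5.7.3.4"

def in_mrsp_smime_scope(eku_oids, san_general_name_types):
    """A certificate is in S/MIME scope only if it carries the
    emailProtection EKU AND an rfc822Name or an otherName of type
    id-on-SmtpUTF8Mailbox in the subjectAltName extension."""
    has_eku = ID_KP_EMAIL_PROTECTION in set(eku_oids)
    has_email_san = bool({"rfc822Name", "SmtpUTF8Mailbox"}
                         & set(san_general_name_types))
    return has_eku and has_email_san

# A document-signing certificate with emailProtection but no email
# address in the SAN falls outside the narrowed scope:
print(in_mrsp_smime_scope([ID_KP_EMAIL_PROTECTION], ["dNSName"]))  # prints False
```

Under this sketch, Christophe's document/PDF-signing case (emailProtection EKU, no rfc822Name or SmtpUTF8Mailbox otherName) would be out of scope.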
Does that resolve your concern?
Thanks,
Ben


On Thu, Jul 6, 2023 at 9:47 AM Christophe Bonjean <
christophe.bonj...@globalsign.com> wrote:

> Hi Ben and Kathleen,
>
>
>
> “Insofar as the *S/MIME* or TLS Baseline Requirements *attempt to define
> their own scope*, the *scope of this policy (section 1.1) overrides that*.
> CA operations relating to issuance of all S/MIME or TLS server certificates
> in the scope of this policy SHALL conform to the S/MIME or TLS Baseline
> Requirements, as applicable.”
>
>
>
> Section 1.1 of the MRSP states “[…], such end entity certificates having
> either: an Extended Key Usage (EKU) extension that contains one or more of
> these KeyPurposeIds: anyExtendedKeyUsage, id-kp-serverAuth,
> *id-kp-emailProtection*; or [….]”
>
>
>
> Section 1.1 of the SBR states “An S/MIME Certificate for the purposes of
> this document can be identified by the existence of an Extended Key Usage
> (EKU) for id-kp-emailProtection (OID: 1.3.6.1.5.5.7.3.4) *and the
> inclusion of a rfc822Name or an otherName of type id-on-SmtpUTF8Mailbox in
> the subjectAltName extension*.”
>
>
>
> Is the intention of the Mozilla Root Store Policy update to apply the
> SMIME BRs to all certificates with the EKU EmailProtection, including
> certificates without an rfc822Name or an otherName, such as certificates
> for document and pdf signing purposes?
>
>
>
> I recall these use cases being discussed in the working group and
> intentionally out-scoping them from the SBRs to avoid unintended adverse
> effects, so wonder how to interpret the proposed update to the MRSP.
>
>
>
> Kind regards,
>
>
>
> Christophe
>
>
>
> *From:* dev-security-policy@mozilla.org  *On
> Behalf Of *Ben Wilson
> *Sent:* Wednesday, June 14, 2023 12:54 AM
> *To:* dev-secur...@mozilla.org 
> *Subject:* MRSP 2.9: S/MIME BRs and Audits
>
>
>
> All,
>
> This email opens up discussion of our proposed resolution of GitHub Issue
> #258 <https://github.com/mozilla/pkipolicy/issues/258> (SMIME Baseline
> Requirements).
>
> We plan to add requirements to version 2.9 of the Mozilla Root Store
> Policy <https://www.mozilla.org/projects/security/certs/policy/>
> regarding the CA/Browser Forum’s S/MIME Baseline Requirements.
>
> We propose to update Mozilla’s Root Store Policy to require annual S/MIME
> BR audits as follows.
>
>- Section 2.2, second bullet point modified to directly reference that
>email verification must be in accordance with section 3.2.2 of the S/MIME
>BRs
>- Section 2.3,
>
>
>- First paragraph – add the following sentence (as a second sentence):
>
> Certificates issued on or after September 1, 2023, that are capable of
> being used to digitally sign or encrypt email messages, and CA operations
> relating to the issuance of such certificates, MUST conform to the latest
> version of the CA/Browser Forum Baseline Requirements for the Issuance and
> Management of Publicly-Trusted S/MIME Certificates.
>
> - Change the remaining references of “Baseline Requirements” in this
> section to “S/MIME and TLS Baseline Requirements” to indicate that the
> statements apply to both.
>
>- Section 3.1.2
>
>
>- Add ETSI TS 119 411-6 as audit criteria
>   - Add WebTrust for CAs - S/MIME as audit criteria
>
>
>- Sections 3.2, 3.3, 5.2, 7.1
>
>
>- Change “Baseline Requirements” to “S/MIME and TLS Baseline
>   Requirements” to indicate that the statements apply to both.
>
>
>- Section 5.1 add a statement:  “The following curves are not
>prohibited, but are not currently supported:  P-521, Curve25519, and
>Curve448.”
>
>
>- And add a sentence:  “EdDSA keys MAY be included in certificates
>   that chain to a root certificate in our root program if the certificate
>   contains ‘id-kp-emailProtection` in the EKU extension. Otherwise, EdDSA
>   keys MUST NOT be included.”
>
>
>- Section 5.3.1
>
>
>- Move the following sentence from the end of the current second
>   paragraph up to its own stand-alone paragraph.
>
>
>- "The conformance

Re: [go-cd] Failed to find 'git' in path with Helm based installation

2023-07-19 Thread Chad Wilson
Yeah, the default images are Alpine based and not built for arm64/aarch64,
see this for nasty detail <https://github.com/gocd/gocd/issues/11355>.

I use Colima rather than Rancher, but it is horrifically slow under QEMU
emulation, and still weirdly unstable as you experienced (although I've
never seen that git error as a result). I never bothered with Rosetta, but
for some reason QEMU doesn't like the JVM very much on the GoCD images. It
is possibly a whole lot of problems mixed up together.

A *much higher performing and more stable* alternative for you will be to
add --set server.image.repository=gocd/gocd-server-centos-9 --set
agent.image.repository=gocd/gocd-agent-centos-9 or equivalent in values
overrides when you install/upgrade the chart. This switches to the
CentOS-based images which are a bit bigger, but perfectly stable and built
multi-arch including arm64. For the *agent,* all the non-alpine images
<https://www.gocd.org/download/#docker> (ubuntu, debian, centos) have been
built multi-arch since 23.1.0.
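For reference, the override Chad describes can be applied on install/upgrade roughly like this (a sketch only; the release name `gocd` and chart reference `gocd/gocd` are assumptions, substitute your own):

```shell
# Sketch: switch the GoCD Helm chart to the multi-arch CentOS-based images.
# Release name "gocd" and chart reference "gocd/gocd" are assumptions.
helm upgrade --install gocd gocd/gocd \
  --set server.image.repository=gocd/gocd-server-centos-9 \
  --set agent.image.repository=gocd/gocd-agent-centos-9
```

The same two values can equally be set in a values file passed with `-f` instead of `--set`.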

This is how I test/validate/develop locally, which is also on an Apple
Silicon Mac.

This probably could do with being better documented on the Helm chart
itself; PRs welcome.

-Chad

On Wed, Jul 19, 2023 at 11:42 PM 'Andreas Hubert' via go-cd <
go-cd@googlegroups.com> wrote:

> Okay, so it did not work in Rancher Desktop when I enabled Virtual
> Machine Emulation VZ with the Rosetta option enabled. It also did not
> work with Virtual Machine Emulation QEMU. But it finally works now with
> VZ with the Rosetta option unchecked.
>
> Thanks for the hint Chad!
>
> Andreas Hubert schrieb am Mittwoch, 19. Juli 2023 um 16:58:39 UTC+2:
>
>> > At a guess, is this perhaps a local cluster on an M1 Mac?
>> Good guess ;)
>>
>> When I check the logs from the pod, I get this error upon checking
>> connection for sample Material:
>> jvm 1| 2023-07-19 14:52:28,756  INFO [166@MessageListener for
>> ServerPingListener] p.c.g.c.e.k.c.g.c.e.KubernetesPlugin:72
>> [plugin-cd.go.contrib.elasticagent.kubernetes] - [refresh-pod-state] Pod
>> information successfully synced. All(Running/Pending) pod count is 0.
>> jvm 1| 2023-07-19 14:52:30,015 ERROR [124@MessageListener for
>> MaterialUpdateListener] ProcessManager:102 - [Command Line] Failed
>> executing [git clone --branch master --no-checkout
>> https://github.com/gocd-contrib/getting-started-repo
>> /go-working-dir/pipelines/flyweight/8ad0eaec-5e2d-4f61-bfd6-dc26f7f67818]
>> jvm 1| 2023-07-19 14:52:30,015 ERROR [124@MessageListener for
>> MaterialUpdateListener] ProcessManager:103 - [Command Line] Agent's
>> Environment Variables: {GOCD_APP_SERVER_SERVICE_PORT_HTTP=8153,
>> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin,
>> WRAPPER_JAVA_VERSION_MINOR=0,
>> WRAPPER_HOSTNAME=gocd-app-server-5c9dd5b56c-646pn, WRAPPER_BITS=64,
>> WRAPPER_VERSION=3.5.51, WRAPPER_BASE_NAME=wrapper,
>> GOCD_APP_SERVER_SERVICE_PORT=8153,
>> WRAPPER_HOST_NAME=gocd-app-server-5c9dd5b56c-646pn,
>> WRAPPER_JAVA_VENDOR=OpenJDK, PWD=/, KUBERNETES_PORT_443_TCP=tcp://
>> 10.43.0.1:443, LANGUAGE=en_US:en,
>> GOCD_PLUGIN_INSTALL_docker-registry-artifact-plugin=
>> https://github.com/gocd/docker-registry-artifact-plugin/releases/download/v1.3.1-485/docker-registry-artifact-plugin-1.3.1-485.jar,
>> WRAPPER_EDITION=Standard, GOCD_APP_SERVER_PORT_8153_TCP_PROTO=tcp,
>> LC_ALL=en_US.UTF-8, WRAPPER_JAVA_VERSION_REVISION=6,
>> WRAPPER_JAVA_VERSION=17.0.6, KUBERNETES_SERVICE_PORT_HTTPS=443, SHLVL=1,
>> WRAPPER_PID=115, WRAPPER_WORKING_DIR=/go-working-dir, WRAPPER_OS=linux,
>> KUBERNETES_PORT=tcp://10.43.0.1:443,
>> GOCD_APP_SERVER_SERVICE_HOST=10.43.100.193,
>> KUBERNETES_SERVICE_HOST=10.43.0.1, LANG=en_US.UTF-8,
>> WRAPPER_BIN_DIR=/go-server/wrapper,
>> WRAPPER_CONF_DIR=/go-server/wrapper-config, WRAPPER_LANG=en,
>> GOCD_APP_SERVER_PORT_8153_TCP=tcp://10.43.100.193:8153,
>> WRAPPER_FILE_SEPARATOR=/, WRAPPER_INIT_DIR=/,
>> KUBERNETES_PORT_443_TCP_ADDR=10.43.0.1,
>> GOCD_APP_SERVER_PORT_8153_TCP_ADDR=10.43.100.193,
>> GOCD_PLUGIN_INSTALL_kubernetes-elastic-agents=
>> https://github.com/gocd/kubernetes-elastic-agents/releases/download/v3.9.0-442/kubernetes-elastic-agent-3.9.0-442.jar,
>> GO_JAVA_HOME=/gocd-jre, WRAPPER_PATH_SEPARATOR=:,
>> KUBERNETES_PORT_443_TCP_PROTO=tcp, KUBERNETES_SERVICE_PORT=443,
>> GOCD_APP_SERVER_PORT=tcp://10.43.100.193:8153,
>> HOSTNAME=gocd-app-server-5c9dd5b56c-646pn, WRAPPER_JAVA_VERSION_MAJOR=17,
>> WRAPPER_RUN_MODE=console, WRAPPER_ARCH=x86,
>> GOCD_APP_SERVER_PORT_8153_TCP_PORT=8153, KUBERNETES_PORT_443_TCP_PORT=443,
>> HOME=/home/go}
>>
>> Which is weird, because if I just run those commands directly with git,
&

Re: [slurm-users] Unconfigured GPUs being allocated

2023-07-19 Thread Wilson, Steven M
I found that this is actually a known bug in Slurm so I'll note it here in case 
anyone comes across this thread in the future:
  https://bugs.schedmd.com/show_bug.cgi?id=10598

Steve

From: slurm-users  on behalf of Wilson, 
Steven M 
Sent: Tuesday, July 18, 2023 5:32 PM
To: slurm-users@lists.schedmd.com 
Subject: Re: [slurm-users] Unconfigured GPUs being allocated

Further testing and looking at the source code confirms what looks to me like a 
bug in Slurm. GPUs that are not configured in gres.conf are detected by slurmd 
in the system and discarded since they aren't found in gres.conf. That's fine 
except they should also be hidden through cgroup control so that they aren't 
visible along with allocated GPUs when a job is run. Slurm assumes that the job 
can only see the GPUs that it allocates to the job and sets the 
$CUDA_VISIBLE_DEVICES accordingly. Unfortunately, the job actually sees the 
allocated GPUs plus any unconfigured GPUs and $CUDA_VISIBLE_DEVICES may or may 
not happen to correspond to the GPU(s) allocated by Slurm.

I was hoping that I could write a Prolog script that would adjust 
$CUDA_VISIBLE_DEVICES to remove any unconfigured GPUs but any changes using 
"export CUDA_VISIBLE_DEVICES=..." don't seem to have an effect upon the actual 
environment of the job.

Steve

____
From: Wilson, Steven M 
Sent: Friday, July 14, 2023 4:10 PM
To: slurm-users@lists.schedmd.com 
Subject: Re: [slurm-users] Unconfigured GPUs being allocated

It's not so much whether a job may or may not access the GPU but rather which 
GPU(s) is(are) included in $CUDA_VISIBLE_DEVICES. That is what controls what 
our CUDA jobs can see and therefore use (within any cgroups constraints, of 
course). In my case, Slurm is sometimes setting $CUDA_VISIBLE_DEVICES to a GPU 
that is not in the Slurm configuration because it is intended only for driving 
the display and not GPU computations.

Thanks for your thoughts!

Steve

From: slurm-users  on behalf of 
Christopher Samuel 
Sent: Friday, July 14, 2023 1:57 PM
To: slurm-users@lists.schedmd.com 
Subject: Re: [slurm-users] Unconfigured GPUs being allocated



On 7/14/23 10:20 am, Wilson, Steven M wrote:

> I upgraded Slurm to 23.02.3 but I'm still running into the same problem.
> Unconfigured GPUs (those absent from gres.conf and slurm.conf) are still
> being made available to jobs so we end up with compute jobs being run on
> GPUs which should only be used

I think this is expected - it's not that Slurm is making them available,
it's that it's unaware of them and so doesn't control them in the way it
does for the GPUs it does know about. So you get the default behaviour
(any process can access them).

If you want to stop them being accessed from Slurm you'd need to find a
way to prevent that access via cgroups games or similar.

All the best,
Chris
--
Chris Samuel  :  http://www.csamuel.org/  :  Berkeley, CA, USA




Re: [go-cd] Failed to find 'git' in path with Helm based installation

2023-07-19 Thread Chad Wilson
The core error regarding git you are seeing is not directly related to the
agent not coming up, but they may have the same root cause.

What operating system, hardware architecture and Kubernetes variant are you
deploying the Helm chart to?

At a guess, is this perhaps a local cluster on an M1 Mac?

-Chad

On Wed, Jul 19, 2023 at 10:28 PM 'Andreas Hubert' via go-cd <
go-cd@googlegroups.com> wrote:

> Hi all!
> I just wanted to play and experiment a little bit with GoCD and tried to
> use the Helm chart for my own k8s cluster.
> But when I try to add Material or work with the sample Material and test
> connection, I get this error:
>
> Failed to find 'git' on your PATH. Please ensure 'git' is executable by
> the Go Server and on the Go Agents where this material will be used.
>
>
> If I check the resources in my namespace, it seems the agent is not coming
> up. Could this be related?
> NAME   READY   STATUSRESTARTS   AGE
> pod/gocd-app-server-5c9dd5b56c-646pn   1/1 Running   0  44m
>
> NAME  TYPE   CLUSTER-IP  EXTERNAL-IP   PORT(S)
>  AGE
> service/gocd-app-server   NodePort   10.43.100.193   
>  8153:30760/TCP   44m
>
> NAME  READY   UP-TO-DATE   AVAILABLE   AGE
> deployment.apps/gocd-app-agent0/0 00   44m
> deployment.apps/gocd-app-server   1/1 11   44m
>
> NAME DESIRED   CURRENT   READY
> AGE
> replicaset.apps/gocd-app-agent-54b5bdc7670 0 0
> 44m
> replicaset.apps/gocd-app-server-5c9dd5b56c   1 1 1
> 44m
>
> Thanks for any hint!
>
> --
> You received this message because you are subscribed to the Google Groups
> "go-cd" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to go-cd+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/go-cd/6ad3fb0c-a828-43fc-b103-e086cf7b293cn%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"go-cd" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to go-cd+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/go-cd/CAA1RwH8YuEqHEOJv_jSUknU4f98SEUZBPOrinH-7TXcWDgA40Q%40mail.gmail.com.


Re: Possible WAL corruption on running system during K8s update

2023-07-19 Thread Raymond Wilson
As a follow up to this:

We tried removing both those in the walstore and walarchive. Problem is
that somewhere there is a checkpoint that says its up to wal index
2414...yet we only have 2413...2412...etc

We need to find where it stores this checkpoint index and change it, it
seems.

On Wed, 19 Jul 2023 at 10:02 AM, Raymond Wilson 
wrote:

> Hi Alex,
>
> Here is the log from the Ignite startup. It's fairly short but shows
> everything I think:
>
> 2023-07-17 22:38:55,061 [1] DBG [ImmutableCacheComputeServer]   Starting
> Ignite.NET 2.15.0.23172
> 2023-07-17 22:38:55,065 [1] DBG [ImmutableCacheComputeServer]
> 2023-07-17 22:38:55,068 [1] DBG [ImmutableCacheComputeServer]
> 2023-07-17 22:38:55,070 [1] DBG [ImmutableCacheComputeServer]
> 2023-07-17 22:38:55,070 [1] DBG [ImmutableCacheComputeServer]
> 2023-07-17 22:38:55,073 [1] DBG [ImmutableCacheComputeServer]
> 2023-07-17 22:38:55,471 [1] DBG [ImmutableCacheComputeServer]   JVM
> started.
> 2023-07-17 22:38:56,340 [1] WRN [ImmutableCacheComputeServer]   Consistent
> ID is not set, it is recommended to set consistent ID for production
> clusters (use IgniteConfiguration.setConsistentId property)
> 2023-07-17 22:38:56,382 [1] INF [ImmutableCacheComputeServer]
> >>>__  
> >>>   /  _/ ___/ |/ /  _/_  __/ __/
> >>>  _/ // (7 7// /  / / / _/
> >>> /___/\___/_/|_/___/ /_/ /___/
> >>>
> >>> ver. 2.15.0#20230425-sha1:f98f7f35
> >>> 2023 Copyright(C) Apache Software Foundation
> >>>
> >>> Ignite documentation: https://ignite.apache.org
>
> 2023-07-17 22:38:56,383 [1] INF [ImmutableCacheComputeServer]   Config
> URL: n/a
> 2023-07-17 22:38:56,414 [1] INF [ImmutableCacheComputeServer]
> IgniteConfiguration [igniteInstanceName=TRex-Immutable, pubPoolSize=250,
> svcPoolSize=8, callbackPoolSize=8, stripedPoolSize=8, sysPoolSize=250,
> mgmtPoolSize=4, dataStreamerPoolSize=8, utilityCachePoolSize=8,
> utilityCacheKeepAliveTime=6, p2pPoolSize=2, qryPoolSize=8,
> buildIdxPoolSize=1, igniteHome=/trex/, igniteWorkDir=/persist/Immutable,
> mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6e46d9f4,
> nodeId=4e70ba5e-5829-4b2d-b349-6539918990b5, marsh=BinaryMarshaller [],
> marshLocJobs=false, p2pEnabled=false, netTimeout=5000,
> netCompressionLevel=1, sndRetryDelay=1000, sndRetryCnt=3,
> metricsHistSize=1, metricsUpdateFreq=2000,
> metricsExpTime=9223372036854775807, discoSpi=TcpDiscoverySpi
> [addrRslvr=null, addressFilter=null, sockTimeout=0, ackTimeout=0,
> marsh=null, reconCnt=10, reconDelay=2000, maxAckTimeout=60, soLinger=0,
> forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null,
> skipAddrsRandomization=false], segPlc=USE_FAILURE_HANDLER,
> segResolveAttempts=2, waitForSegOnStart=true, allResolversPassReq=true,
> segChkFreq=1, commSpi=TcpCommunicationSpi
> [connectGate=org.apache.ignite.spi.communication.tcp.internal.ConnectGateway@5bb3d42d,
> ctxInitLatch=java.util.concurrent.CountDownLatch@5bf61e67[Count = 1],
> stopping=false, clientPool=null, nioSrvWrapper=null, stateProvider=null],
> evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@2c1dc8e,
> colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [],
> indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@61019f59,
> addrRslvr=null,
> encryptionSpi=org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi@62e8f862,
> tracingSpi=org.apache.ignite.spi.tracing.NoopTracingSpi@26f3d90c,
> clientMode=false, rebalanceThreadPoolSize=1, rebalanceTimeout=1,
> rebalanceBatchesPrefetchCnt=3, rebalanceThrottle=0,
> rebalanceBatchSize=524288, txCfg=TransactionConfiguration
> [txSerEnabled=false, dfltIsolation=REPEATABLE_READ,
> dfltConcurrency=PESSIMISTIC, dfltTxTimeout=0,
> txTimeoutOnPartitionMapExchange=0, deadlockTimeout=1,
> pessimisticTxLogSize=0, pessimisticTxLogLinger=1, tmLookupClsName=null,
> txManagerFactory=null, useJtaSync=false], cacheSanityCheckEnabled=true,
> discoStartupDelay=6, deployMode=SHARED, p2pMissedCacheSize=100,
> locHost=null, timeSrvPortBase=31100, timeSrvPortRange=100,
> failureDetectionTimeout=6, sysWorkerBlockedTimeout=null,
> clientFailureDetectionTimeout=6, metricsLogFreq=3,
> connectorCfg=ConnectorConfiguration [jettyPath=null, host=null, port=11212,
> noDelay=true, directBuf=false, sndBufSize=32768, rcvBufSize=32768,
> idleQryCurTimeout=60, idleQryCurCheckFreq=6, sndQueueLimit=0,
> selectorCnt=2, idleTimeout=7000, sslEnabled=false, sslClientAuth=false,
> sslCtxFactory=null, sslFactory=null, portRange=100, threadPoolSize=8,
> msgInterceptor=null], odbcCfg=null, warmupClos=null,
> atomicCfg=AtomicConfiguration [seqReserveSize=1000, cacheMode=PARTITIONED,
> backups=

Re: Ignite data region off-heap allocation

2023-07-19 Thread Raymond Wilson
Just FYI, we have held off any memory pressure changes in the meantime
while we continue to investigate the memory issues we have.

On Tue, 18 Jul 2023 at 9:07 AM, Raymond Wilson 
wrote:

> Hi Pavel,
>
> This area is confusing. There is no indication that the memory pressure
> applies to any individual object or allocation, so there is clearly no
> association between memory pressure and any particular resource.
>
> I get your argument that .Net can 'see' allocated memory. What is unclear
> is whether it cares about actually allocated and used pages, or committed
> pages.
>
> I see there is a LazyMemoryAllocation (default: true) for data regions.
> Some data regions set this to false, eg:
>
> ^--   sysMemPlc region [type=internal, persistence=true,
> lazyAlloc=false,
> ^--   metastoreMemPlc region [type=internal, persistence=true,
> lazyAlloc=false,
> ^--   TxLog region [type=internal, persistence=true, lazyAlloc=false,
>
> The documentation is not clear on the effect of this flag other than to
> say it is for 'Lazy memory allocation'. If this flag is true, will Ignite
> proactively allocate and use all pages in a data region, rather than
> incrementally?
>
> Thanks,
> Raymond.
>
>
> On Tue, Jul 11, 2023 at 10:55 PM Pavel Tupitsyn 
> wrote:
>
>> > I can’t see another way of letting . Net know that it can’t have access
>> to all the ‘free’ memory in the process
>>
>> You don't need to tell .NET how much memory is currently available. It is
>> the job of the OS. .NET can "see" the size of the unmanaged heap.
>>
>> To quote another explanation [1]:
>>
>> > The point of AddMemoryPressure is to tell the garbage collector that
>> there's a large amount of memory allocated with that object.
>> > If it's unmanaged, the garbage collector doesn't know about it; only
>> the managed portion.
>> > Since the managed portion is relatively small, the GC may let it pass
>> for garbage collection several times, essentially wasting memory that might
>> need to be freed.
>>
>> I really don't think AddMemoryPressure is the right thing to do in your
>> case.
>> If you run into OOM issues, then look into Ignite memory region settings
>> [2] and/or adjust application memory usage on the .NET side, so that the
>> sum of those is not bigger than available RAM.
>>
>> [1]
>> https://stackoverflow.com/questions/1149181/what-is-the-point-of-using-gc-addmemorypressure-with-an-unmanaged-resource
>> [2]
>> https://ignite.apache.org/docs/latest/memory-configuration/data-regions#configuring-default-data-region
>>
>> On Tue, Jul 11, 2023 at 11:48 AM Raymond Wilson <
>> raymond_wil...@trimble.com> wrote:
>>
>>> How do Ignite .Net server nodes manage this memory issue in other
>>> projects?
>>>
>>> On Tue, Jul 11, 2023 at 5:32 PM Raymond Wilson <
>>> raymond_wil...@trimble.com> wrote:
>>>
>>>> Oops, commutes => committed
>>>>
>>>> On Tue, 11 Jul 2023 at 4:34 PM, Raymond Wilson <
>>>> raymond_wil...@trimble.com> wrote:
>>>>
>>>>> I can’t see another way of letting . Net know that it can’t have
>>>>> access to all the ‘free’ memory in the process when a large slab of that 
>>>>> is
>>>>> spoken for in terms of memory commutes to Ignite data regions.
>>>>>
>>>>> In the current setup, as time goes on and Ignite progressively fills
>>>>> the allocated cache ram then system behaviour changes and can result in 
>>>>> out
>>>>> of memory issues. I think I would prefer consistent system behaviour wrt 
>>>>> to
>>>>> allocated resources from the start.
>>>>>
>>>>> Raymond.
>>>>>
>>>>> On Tue, 11 Jul 2023 at 3:57 PM, Pavel Tupitsyn 
>>>>> wrote:
>>>>>
>>>>>> Are you sure this is necessary?
>>>>>>
>>>>>> GC.AddMemoryPressure documentation [1] states that this will "improve
>>>>>> performance only for types that exclusively depend on finalizers".
>>>>>>
>>>>>> [1]
>>>>>> https://learn.microsoft.com/en-us/dotnet/api/system.gc.addmemorypressure?view=net-7.0
>>>>>>
>>>>>> On Tue, Jul 11, 2023 at 1:02 AM Raymond Wilson <
>>>>>> raymond_wil...@trimble.com> wrote:
>>>>>>
>>>>>>> I'm making changes to add memory pressure to the GC to take into
>&

Re: [Servercert-wg] [secdir] Secdir last call review of draft-gutmann-testkeys-04

2023-07-18 Thread Clint Wilson via Servercert-wg
Hi Wayne,

This is helpful and much appreciated!

> On Jul 18, 2023, at 11:15 AM, Wayne Thayer  wrote:
> 
> Hi Clint,
> 
> Thank you for helping to unpack my concerns.
> 
> On Mon, Jul 17, 2023 at 2:28 PM Clint Wilson  <mailto:cli...@apple.com>> wrote:
>> Hi Wayne,
>> 
>> I’d like to better understand your worry and perhaps interpretation of BR 
>> 6.1.1.3(4) and 4.9.1.1(3,4,16). Just to restate for my benefit, the concern 
>> is that: IF we interpret Tim’s message regarding the testkeys draft as 
>> qualifying the keys present in the draft as “[All] CAs [subscribed to the 
>> Servercert-wg list being] made aware that [a future] Applicant’s Private Key 
>> has suffered a Key Compromise….” THEN, in a similar situation, any 
>> servercert-wg member could share any number of compromised keys here and, 
>> theoretically, bloat (with no upper bounds) the set of known compromised 
>> keys a CA has to retain and check in order to reject certificate requests as 
>> needed to meet the requirements of 6.1.1.3 WHILE also not necessarily 
>> increasing the meaningful security provided by the BRs. Is that correct?
>> As a concrete example (an extreme I could imagine), someone could generate, 
>> and potentially delete, 100 or 100,000,000,000 keypairs easily (for a value 
>> of “easily” most associated with effort rather than time or resources), 
>> share a CSV, or even just pointer to a repository/document, with the 
>> Servercert-wg, and (if interpreted per your worry) cause a bunch of keys 
>> never intended to be used for actual certificate issuance to be forever part 
>> of a set of keys which all CAs must check every received certificate request 
>> against.
>> 
> 
> The magnitude of the problem is not my primary concern, but that is something 
> to consider.

Agreed, though I do think it highlights that there may be multiple weaknesses 
in the current wording of the BRs related to this topic and likely some overlap 
with other “weak key” checking requirements. I apologize for the distinct lack 
of polish this is likely to have, but just to share some personal thoughts on 
the matter, the possible weaknesses that have become clear(er) to me in this 
discussion are:

1. Method and/or means of communication 
I think this is more concretely the primary concern you have at this point or 
at least was the primary point of your initial response.
The BRs only stipulate a required action (rejecting certificate requests and/or 
revoking issued certificates) based on the receipt (being made aware) of 
certain types of communication (proof that a key has been compromised). Trying 
to dig one level deeper and describing a scenario that, I believe, maps to the 
text of the BRs today: 
The sender of the information is not mentioned in these sections of the 
requirements, so the “message" can come from anyone or anywhere
But that message must be communicated to the CA 
(“the CA” being another term of some ambiguity in this specific context)
AND the communication must convey keys — typically just the public key 
component of a key pair whose private key is compromised
receiving private keys themselves qualifies, of course, but is absolutely not 
necessary and should be heavily discouraged (let’s please not go through that 
again…)
AND prove or demonstrate to the CA that the keys have indeed been compromised
I think, in the context of the “compromised keys” we’re talking about, the 
communication with the CA must supply specific keys, not broad classes of keys 
based on some contained attribute or something.
So, essentially, the BRs say that in the event of communication to the CA that 
demonstrates certain keys are compromised, the CA must both revoke any 
certificates they’ve issued which contain those keys and, from that point on, 
reject any certificate requests containing those keys. 
There’s sort of an interesting little interaction here where, hypothetically, a 
certificate request could have been received by a CA prior to the compromised 
key being reported, in which case it seems the CA could still issue the 
certificate, but would then need to revoke it within 24 hours… but I digress.
AFAICT, this is as far as the BRs set requirements on this topic, so it seems 
to me there are reasonably 2 ways this could play out on either a per-CA or 
BR-wide basis from here:
Further specification:
The BRs, act as a governing policy, to the CA’s policy(ies). The CA could 
further specify in their CPS (or other authoritative policy or practices 
document) that the communication referenced in this section must be directed in 
a specific way or to a specific target (i.e. taking “the CA” and mapping it to 
“this form on this website”, taking “made aware” and mapping it to “this report 
shared at this location”, or similar). It seems the CA can thus c

Cache write synchronization mode

2023-07-18 Thread Raymond Wilson
I have a query regarding the CacheWriteSynchronizationMode in
CacheConfiguration.

This enum is defined like this in the .Net client:

  public enum CacheWriteSynchronizationMode
  {
    /// <summary>
    /// Mode indicating that Ignite should wait for write or commit replies
    /// from all nodes. This behavior guarantees that whenever any of the
    /// atomic or transactional writes complete, all other participating
    /// nodes which cache the written data have been updated.
    /// </summary>
    FullSync,

    /// <summary>
    /// Flag indicating that Ignite will not wait for write or commit
    /// responses from participating nodes, which means that remote nodes
    /// may get their state updated a bit after any of the cache write
    /// methods complete, or after the transaction commit method completes.
    /// </summary>
    FullAsync,

    /// <summary>
    /// This flag only makes sense for PARTITIONED cache mode. When enabled,
    /// Ignite will wait for write or commit to complete on the primary
    /// node, but will not wait for backups to be updated.
    /// </summary>
    PrimarySync,
  }

We have some replicated caches (where cfg.CacheMode =
CacheMode.Replicated), but we don't specify the WriteSynchronizationMode.

I note in the comment for PrimarySync (the default) that this "only makes
sense" for Partitioned caches. Given we don't set this mode for our
replicated caches then they will be using the PrimarySync write
synchronization mode.

The core Ignite documentation does not distinguish between these
synchronization modes for replicated caches and strongly implies that all
three have equivalent consistency guarantees, but the code comment above
implies that replicated caches should use either FullSync or FullAsync to
ensure that all replica nodes receive the written value.
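If that reading is correct, the fix would presumably be to set the mode
explicitly rather than rely on the default. A minimal, untested sketch
using the Java CacheConfiguration API (the .NET CacheConfiguration exposes
an equivalent WriteSynchronizationMode property; the cache name is
illustrative):

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

// Explicitly request FULL_SYNC for a replicated cache instead of relying
// on the PRIMARY_SYNC default, so each write waits for all backup copies.
CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("myReplicatedCache");
cfg.setCacheMode(CacheMode.REPLICATED);
cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
```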

As background: I am investigating an issue in our system that could be
explained by replicated caches not holding consistent values, and I am
writing some triage tooling to prove whether that is the case by comparing
the stored values held by each of the replicated cache nodes. However, I'm
also doing some due diligence on our configuration and ran into this item.
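For what it's worth, the comparison step of that triage tooling reduces to
a small diff over per-node snapshots of the cache. A plain-Java sketch (the
class and method names are mine, purely illustrative, not from our
codebase):

```java
import java.util.*;

public class ReplicaDiff {
    // Return the keys whose values differ (or are missing) across the
    // per-node snapshots of a replicated cache.
    static Set<String> divergentKeys(Map<String, Map<String, String>> snapshotsByNode) {
        Set<String> allKeys = new HashSet<>();
        for (Map<String, String> snap : snapshotsByNode.values()) {
            allKeys.addAll(snap.keySet());
        }
        Set<String> divergent = new TreeSet<>();
        for (String key : allKeys) {
            Set<String> values = new HashSet<>();
            for (Map<String, String> snap : snapshotsByNode.values()) {
                values.add(snap.get(key)); // null marks a missing entry on that node
            }
            if (values.size() > 1) divergent.add(key);
        }
        return divergent;
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> nodes = new HashMap<>();
        nodes.put("node1", Map.of("a", "1", "b", "2"));
        nodes.put("node2", Map.of("a", "1", "b", "3")); // "b" diverges
        System.out.println(divergentKeys(nodes));       // prints [b]
    }
}
```

In practice the per-node snapshots would come from a local-only scan on
each node; the diff itself is the easy part.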

Thanks,
Raymond.


-- 
<http://www.trimble.com/>
Raymond Wilson
Trimble Distinguished Engineer, Civil Construction Software (CCS)
11 Birmingham Drive | Christchurch, New Zealand
raymond_wil...@trimble.com



Re: Possible WAL corruption on running system during K8s update

2023-07-18 Thread Raymond Wilson
ic.Servers.Compute.ImmutableCacheComputeServer]
Completed creation of new Ignite node: Exists = False, Factory available =
True
2023-07-17 22:39:03,458 [1] WRN
[VSS.TRex.GridFabric.Servers.Compute.ImmutableCacheComputeServer]   Unable
to obtain instance of TRex-Immutable at attempt:1
Unhandled exception: Apache.Ignite.Core.Common.IgniteException: Failed to
apply page delta
 ---> Apache.Ignite.Core.Common.JavaException: class
org.apache.ignite.IgniteException: Failed to apply page delta
at
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:1150)
at
org.apache.ignite.internal.processors.platform.PlatformAbstractBootstrap.start(PlatformAbstractBootstrap.java:48)
at
org.apache.ignite.internal.processors.platform.PlatformIgnition.start(PlatformIgnition.java:74)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to apply
page delta
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.lambda$performBinaryMemoryRestore$26(GridCacheDatabaseSharedManager.java:2289)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.lambda$stripedApplyPage$27(GridCacheDatabaseSharedManager.java:2346)
at
org.apache.ignite.internal.processors.cache.persistence.CacheStripedExecutor.lambda$submit$0(CacheStripedExecutor.java:75)
at
org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:637)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.IllegalStateException: Failed to get page IO instance
(page content is corrupted)
at
org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forVersion(IOVersions.java:85)
at
org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forPage(IOVersions.java:97)
at
org.apache.ignite.internal.pagemem.wal.record.delta.PagesListRemovePageRecord.applyDelta(PagesListRemovePageRecord.java:55)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.applyPageDelta(GridCacheDatabaseSharedManager.java:2401)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.lambda$performBinaryMemoryRestore$26(GridCacheDatabaseSharedManager.java:2282)
... 5 more
   at Apache.Ignite.Core.Impl.Unmanaged.Jni.Env.ExceptionCheck()
   at
Apache.Ignite.Core.Impl.Unmanaged.Jni.Env.CallStaticVoidMethod(GlobalRef
cls, IntPtr methodId, Int64* argsPtr)
   at Apache.Ignite.Core.Impl.Unmanaged.UnmanagedUtils.IgnitionStart(Env
env, String cfgPath, String gridName, Boolean clientMode, Boolean
userLogger, Int64 igniteId, Boolean redirectConsole)
   at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration cfg)
   --- End of inner exception stack trace ---
   at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration cfg)

Thanks,
Raymond.

On Wed, Jul 19, 2023 at 5:43 AM Raymond Wilson 
wrote:

> Hi Alex,
>
> We are using Ignite v2.15.
>
> I will track down the additional log information and reply on this thread.
>
> Raymond.
>
>
> On Wed, Jul 19, 2023 at 2:55 AM Alex Plehanov 
> wrote:
>
>> Hello,
>>
>> Which Ignite version do you use?
>> Please share exception details after "Exception during start processors,
>> node will be stopped and close connections" (there should be a reason in
>> the log, why the page delta can't be applied).
>>
>> вт, 18 июл. 2023 г. в 05:05, Raymond Wilson :
>>
>>> Hi,
>>>
>>> We run a dev/alpha stack of our application in Azure Kubernetes.
>>> Persistent storage is contained in Azure Files NAS storage volumes, one per
>>> server node.
>>>
>>> We ran an upgrade of Kubernetes today (from 1.24.9 to 1.26.3). During
>>> the update various pods were stopped and restarted as is normal for an
>>> update. This included nodes running the dev/alpha stack.
>>>
>>> At least one node (of a cluster of four server nodes in the cluster)
>>> failed to restart after the update, with the following logging:
>>>
>>>   2023-07-18 01:23:55.171 [1] INF Restoring checkpoint after logical
>>> recovery, will start physical recovery from back pointer: WALPointer
>>> [idx=2431, fileOff=209031823, len=29]
>>>  2023-07-18 01:23:55.205  [28] ERR Failed to apply page delta.
>>> rec=[PagesListRemovePageRecord [rmvdPageId=010100010057,
>>> pageId=010100010004, grpId=-1476359018, super=PageDeltaRecord
>>> [grpId=-1476359018, pageId=010100010004, super=WALRecord [size=41,
>>> chainSize=0, pos=WALPointer [idx=2431, fileOff=209169155, len=41],
>>> type=PAGES_LIST_REMOVE_PAGE
>>>  2023-07-18 01:23:55.217 [1] INF Cleanup cache stores [total=0,
>>> left=0, cleanFiles=false]

Re: [slurm-users] Unconfigured GPUs being allocated

2023-07-18 Thread Wilson, Steven M
Further testing and looking at the source code confirms what looks to me like a 
bug in Slurm. GPUs that are not configured in gres.conf are detected by slurmd 
in the system and discarded since they aren't found in gres.conf. That's fine 
except they should also be hidden through cgroup control so that they aren't 
visible along with allocated GPUs when a job is run. Slurm assumes that the job 
can only see the GPUs that it allocates to the job and sets the 
$CUDA_VISIBLE_DEVICES accordingly. Unfortunately, the job actually sees the 
allocated GPUs plus any unconfigured GPUs and $CUDA_VISIBLE_DEVICES may or may 
not happen to correspond to the GPU(s) allocated by Slurm.

I was hoping that I could write a Prolog script that would adjust 
$CUDA_VISIBLE_DEVICES to remove any unconfigured GPUs but any changes using 
"export CUDA_VISIBLE_DEVICES=..." don't seem to have an effect upon the actual 
environment of the job.
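The filtering itself is trivial; the hard part is injecting the result into
the job's environment (a plain `export` in a Prolog script runs in a
separate shell, which is presumably why it has no effect — if I recall
correctly, a TaskProlog that prints `export CUDA_VISIBLE_DEVICES=...` on
stdout is the supported way to modify the job environment, though that is
worth verifying). Just to pin down the intended logic, here is the
device-list filter sketched in Java; the managed-device set is purely
illustrative:

```java
import java.util.*;
import java.util.stream.*;

public class CudaDeviceFilter {
    // Keep only the device indices that Slurm actually manages (i.e. those
    // present in gres.conf), dropping any unconfigured display GPUs.
    static String filterVisibleDevices(String cudaVisibleDevices, Set<String> managedDevices) {
        if (cudaVisibleDevices == null || cudaVisibleDevices.isEmpty()) return "";
        return Arrays.stream(cudaVisibleDevices.split(","))
                     .filter(managedDevices::contains)
                     .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        // Job sees devices 0 and 2, but device 2 is an unconfigured display GPU.
        System.out.println(filterVisibleDevices("0,2", Set.of("0", "1"))); // prints 0
    }
}
```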

Steve

____
From: Wilson, Steven M 
Sent: Friday, July 14, 2023 4:10 PM
To: slurm-users@lists.schedmd.com 
Subject: Re: [slurm-users] Unconfigured GPUs being allocated

It's not so much whether a job may or may not access the GPU but rather which 
GPU(s) is(are) included in $CUDA_VISIBLE_DEVICES. That is what controls what 
our CUDA jobs can see and therefore use (within any cgroups constraints, of 
course). In my case, Slurm is sometimes setting $CUDA_VISIBLE_DEVICES to a GPU 
that is not in the Slurm configuration because it is intended only for driving 
the display and not GPU computations.

Thanks for your thoughts!

Steve

From: slurm-users  on behalf of 
Christopher Samuel 
Sent: Friday, July 14, 2023 1:57 PM
To: slurm-users@lists.schedmd.com 
Subject: Re: [slurm-users] Unconfigured GPUs being allocated



On 7/14/23 10:20 am, Wilson, Steven M wrote:

> I upgraded Slurm to 23.02.3 but I'm still running into the same problem.
> Unconfigured GPUs (those absent from gres.conf and slurm.conf) are still
> being made available to jobs so we end up with compute jobs being run on
> GPUs which should only be used

I think this is expected - it's not that Slurm is making them available,
it's that it's unaware of them and so doesn't control them in the way it
does for the GPUs it does know about. So you get the default behaviour
(any process can access them).

If you want to stop them being accessed from Slurm you'd need to find a
way to prevent that access via cgroups games or similar.

All the best,
Chris
--
Chris Samuel  :  http://www.csamuel.org/  :  Berkeley, CA, USA




Re: Possible WAL corruption on running system during K8s update

2023-07-18 Thread Raymond Wilson
Hi Alex,

We are using Ignite v2.15.

I will track down the additional log information and reply on this thread.

Raymond.


On Wed, Jul 19, 2023 at 2:55 AM Alex Plehanov 
wrote:

> Hello,
>
> Which Ignite version do you use?
> Please share exception details after "Exception during start processors,
> node will be stopped and close connections" (there should be a reason in
> the log, why the page delta can't be applied).
>
> вт, 18 июл. 2023 г. в 05:05, Raymond Wilson :
>
>> Hi,
>>
>> We run a dev/alpha stack of our application in Azure Kubernetes.
>> Persistent storage is contained in Azure Files NAS storage volumes, one per
>> server node.
>>
>> We ran an upgrade of Kubernetes today (from 1.24.9 to 1.26.3). During the
>> update various pods were stopped and restarted as is normal for an update.
>> This included nodes running the dev/alpha stack.
>>
>> At least one node (of a cluster of four server nodes in the cluster)
>> failed to restart after the update, with the following logging:
>>
>>   2023-07-18 01:23:55.171 [1] INF Restoring checkpoint after logical
>> recovery, will start physical recovery from back pointer: WALPointer
>> [idx=2431, fileOff=209031823, len=29]
>>  2023-07-18 01:23:55.205  [28] ERR Failed to apply page delta.
>> rec=[PagesListRemovePageRecord [rmvdPageId=010100010057,
>> pageId=010100010004, grpId=-1476359018, super=PageDeltaRecord
>> [grpId=-1476359018, pageId=010100010004, super=WALRecord [size=41,
>> chainSize=0, pos=WALPointer [idx=2431, fileOff=209169155, len=41],
>> type=PAGES_LIST_REMOVE_PAGE
>>  2023-07-18 01:23:55.217 [1] INF Cleanup cache stores [total=0,
>> left=0, cleanFiles=false]
>>  2023-07-18 01:23:55.218 [1] ERR Got exception while starting (will
>> rollback startup routine).
>>  2023-07-18 01:23:55.218 [1] ERR Exception during start processors,
>> node will be stopped and close connections
>>
>> I know Apache Ignite is very good at surviving 'Big Red Switch'
>> scenarios, and we have our data regions configured with the strictest
>> update protocol (full sync after each write), however it's possible the NAS
>> implementation does something different!
>>
>> I think if we delete the WAL files from the nodes that won't restart then
>> the node may be happy, though we will lose any updates since the last
>> checkpoint (but then, it has low use and checkpoints are every 30-45
>> seconds or so, so this won't be significant).
>>
>> Is this an error anyone else has noticed?
>> Has anyone else had similar issues with Azure Files when using strict
>> update/sync semantics?
>>
>> Thanks,
>> Raymond.
>>
>> --
>> <http://www.trimble.com/>
>> Raymond Wilson
>> Trimble Distinguished Engineer, Civil Construction Software (CCS)
>> 11 Birmingham Drive | Christchurch, New Zealand
>> raymond_wil...@trimble.com
>>
>>
>>
>

-- 
<http://www.trimble.com/>
Raymond Wilson
Trimble Distinguished Engineer, Civil Construction Software (CCS)
11 Birmingham Drive | Christchurch, New Zealand
raymond_wil...@trimble.com



Possible WAL corruption on running system during K8s update

2023-07-17 Thread Raymond Wilson
Hi,

We run a dev/alpha stack of our application in Azure Kubernetes. Persistent
storage is contained in Azure Files NAS storage volumes, one per server
node.

We ran an upgrade of Kubernetes today (from 1.24.9 to 1.26.3). During the
update various pods were stopped and restarted as is normal for an update.
This included nodes running the dev/alpha stack.

At least one node (of a cluster of four server nodes in the cluster) failed
to restart after the update, with the following logging:

  2023-07-18 01:23:55.171 [1] INF Restoring checkpoint after logical
recovery, will start physical recovery from back pointer: WALPointer
[idx=2431, fileOff=209031823, len=29]
 2023-07-18 01:23:55.205  [28] ERR Failed to apply page delta.
rec=[PagesListRemovePageRecord [rmvdPageId=010100010057,
pageId=010100010004, grpId=-1476359018, super=PageDeltaRecord
[grpId=-1476359018, pageId=010100010004, super=WALRecord [size=41,
chainSize=0, pos=WALPointer [idx=2431, fileOff=209169155, len=41],
type=PAGES_LIST_REMOVE_PAGE
 2023-07-18 01:23:55.217 [1] INF Cleanup cache stores [total=0, left=0,
cleanFiles=false]
 2023-07-18 01:23:55.218 [1] ERR Got exception while starting (will
rollback startup routine).
 2023-07-18 01:23:55.218 [1] ERR Exception during start processors, node
will be stopped and close connections

I know Apache Ignite is very good at surviving 'Big Red Switch' scenarios,
and we have our data regions configured with the strictest update protocol
(full sync after each write), however it's possible the NAS implementation
does something different!

I think if we delete the WAL files from the nodes that won't restart then
the node may be happy, though we will lose any updates since the last
checkpoint (but then, it has low use and checkpoints are every 30-45
seconds or so, so this won't be significant).

Is this an error anyone else has noticed?
Has anyone else had similar issues with Azure Files when using strict
update/sync semantics?

Thanks,
Raymond.

-- 
<http://www.trimble.com/>
Raymond Wilson
Trimble Distinguished Engineer, Civil Construction Software (CCS)
11 Birmingham Drive | Christchurch, New Zealand
raymond_wil...@trimble.com



[Translators-l] Re: Ready for translation: Tech News #29 (2023)

2023-07-17 Thread Nick Wilson (Quiddity)
Thank you all for your help! It is deeply appreciated. The newsletter has
now been delivered (in 21 languages) to 1,076 pages.
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


[Wikitech-ambassadors] Tech News 2023, week 29

2023-07-17 Thread Nick Wilson (Quiddity)
The latest technical newsletter is now available at
https://meta.wikimedia.org/wiki/Special:MyLanguage/Tech/News/2023/29. Below
is the English version.
You can help write the next newsletter: Whenever you see information about
Wikimedia technology that you think should be distributed more broadly, you
can add it to the next newsletter at
https://meta.wikimedia.org/wiki/Tech/News/Next .
More information on how to contribute is available. You can also contact me
directly.
As always, feedback (on- or off-list) is appreciated and encouraged.
——
Other languages: Bahasa Indonesia
, Deutsch
, English, Tiếng Việt
, Türkçe
, español
, français
, italiano
, norsk bokmål
, polski
, suomi
, svenska
, čeština
, русский
, українська
, עברית
, العربية
, فارسی
, हिन्दी
, বাংলা
, ಕನ್ನಡ
, 中文
, 日本語
, 한국어


Latest *tech news
* from the
Wikimedia technical community. Please tell other users about these changes.
Not all changes will affect you. Translations
 are
available.

*Recent changes*

   - We are now serving 1% of all global user traffic from Kubernetes
    (you can read more technical
   details ).
   We are planning to increment this percentage regularly. You can follow
   the progress of this work .

*Changes later this week*

   - The new version 
   of MediaWiki will be on test wikis and MediaWiki.org from 18 July. It will
   be on non-Wikipedia wikis and some Wikipedias from 19 July. It will be on
   all wikis from 20 July (calendar
   ).
   - MediaWiki system messages
   
   will now look for available local fallbacks, instead of always using the
   default fallback defined by software. This means wikis no longer need to
   override each language on the fallback chain
   

   separately. For example, English Wikipedia doesn't have to create en-ca
   and en-gb subpages with a transclusion of the base pages anymore. This
   makes it easier to maintain local overrides. [1]
   
   - The action=growthsetmentorstatus API will be deprecated with the new
   MediaWiki version. Bots or scripts calling that API should use the
   action=growthmanagementorlist API now. [2]
   

*Tech news prepared by Tech News writers and posted by bot •
Contribute • Translate • Get help • Give feedback • Subscribe or
unsubscribe.*
___
Wikitech-ambassadors mailing list -- wikitech-ambassadors@lists.wikimedia.org
To unsubscribe send an email to wikitech-ambassadors-le...@lists.wikimedia.org


Re: [Servercert-wg] [secdir] Secdir last call review of draft-gutmann-testkeys-04

2023-07-17 Thread Clint Wilson via Servercert-wg
Hi Wayne,

I’d like to better understand your worry and perhaps interpretation of BR 
6.1.1.3(4) and 4.9.1.1(3,4,16). Just to restate for my benefit, the concern is 
that: IF we interpret Tim’s message regarding the testkeys draft as qualifying 
the keys present in the draft as “[All] CAs [subscribed to the Servercert-wg 
list being] made aware that [a future] Applicant’s Private Key has suffered a 
Key Compromise….” THEN, in a similar situation, any servercert-wg member could 
share any number of compromised keys here and, theoretically, bloat (with no 
upper bounds) the set of known compromised keys a CA has to retain and check in 
order to reject certificate requests as needed to meet the requirements of 
6.1.1.3 WHILE also not necessarily increasing the meaningful security provided 
by the BRs. Is that correct?
As a concrete example (an extreme I could imagine), someone could generate, and 
potentially delete, 100 or 100,000,000,000 keypairs easily (for a value of 
“easily” most associated with effort rather than time or resources), share a 
CSV, or even just pointer to a repository/document, with the Servercert-wg, and 
(if interpreted per your worry) cause a bunch of keys never intended to be used 
for actual certificate issuance to be forever part of a set of keys which all 
CAs must check every received certificate request against.
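Mechanically, the per-request check against that set stays cheap no matter
how the set grows — e.g. a fingerprint lookup over a hash of the submitted
public key. A rough sketch of that idea (the SHA-256-over-SPKI fingerprint
choice is mine for illustration, not something the BRs specify):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Set;

public class CompromisedKeyCheck {
    // Hex-encoded SHA-256 of the DER-encoded SubjectPublicKeyInfo.
    static String spkiFingerprint(byte[] spkiDer) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(spkiDer);
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    // Reject the request iff its public key is in the known-compromised set.
    static boolean mustReject(byte[] spkiDer, Set<String> compromisedFingerprints) {
        return compromisedFingerprints.contains(spkiFingerprint(spkiDer));
    }

    public static void main(String[] args) {
        Set<String> blocklist = Set.of(spkiFingerprint("demo-key".getBytes()));
        System.out.println(mustReject("demo-key".getBytes(), blocklist)); // prints true
    }
}
```

So the unbounded-growth concern is really about the size and provenance of
the set a CA must retain, not the per-request lookup cost.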

Notable to this worry, I think, is that nothing about the language in the 
BRs today indicates to me that Tim’s message or the above, somewhat silly, 
scenario would not be interpreted to qualify as a reason to reject those 
associated keys. That is, if a CA subscribed to this mailing list and 
conforming to the BRs, issued a certificate to a key in the testkeys draft 
after July 4, 2023, it seems that the BRs would consider that a misissuance as 
there’s no limitation or specification regarding what (or whether) any specific 
bar is met in order to constitute “the CA [being] made aware”. 4.9.3 I think 
comes quite close, but stops short of saying something like “For the purposes 
of requirements in 4.9.1.1, 4.9.1.2, and 6.1.1.3, the CA MAY require a 
Certificate Problem Report to be submitted in order to constitute being made 
aware of reasons to reject certificate requests or revoke certificates.” which 
I think would remove the current ambiguity regarding what needs to happen in 
order for a CA to need to begin rejecting certificate requests for compromised 
keys. (Note, I’m not saying this change is a good or well-thought-out idea, 
just what came to mind as one option to increase clarity in a way that would 
address the worry raised.)

This is separate, in my mind, to any potential interpretation that would expect 
CAs to go out and look for compromised keys elsewhere. “Looking" implies to me 
a proactive effort, whereas “made aware” is much more passive and would 
seemingly include any receipt of information by the CA (or its official 
representatives?). More to the point, I don’t see any implication that CAs 
should be looking for compromised keys in the current BR text, which hopefully 
helps with part of the worry (though adding something like that as a 
requirement has been discussed before, iirc, especially in the context of 
pwnedkeys.com  and I could see that, and related topics, 
coming up again with 
https://www.ietf.org/archive/id/draft-mpalmer-key-compromise-attestation-00.txt).

While I don’t foresee near-term, major, and negative impact from my 
interpretation of the BRs, I do think we can maintain the intent of the 
requirement without leaving it as open as a rough analogue to a zip bomb. While 
I proposed something purely for illustration above, I’ve also filed 
https://github.com/cabforum/servercert/issues/442 to track this if there’s 
further interest in ensuring the BRs could address this worry.

As always, please let me know if I’ve missed some crucial detail or interaction 
here that’s led me to an erroneous conclusion on the topic. Cheers!
-Clint

> On Jul 7, 2023, at 3:13 PM, Wayne Thayer via Servercert-wg 
>  wrote:
> 
> Thanks for sharing this Tim.
> 
> I want to comment on the statement that CAs should blocklist the keys 
> published in this RFC. Doing that may very well be helpful to the CA and 
> their customers, but I do not believe it is a requirement set forth by the 
> CAB Forum or root store policy.
> 
> Prior discussions on this topic have not resulted in requirements beyond the 
> clarification of BR 6.1.1.3(4): "The CA has previously been made aware that 
> the Applicant’s Private Key has suffered a Key Compromise, such as through 
> the provisions of Section 4.9.1.1;". My worry is that we will begin 
> interpreting "has previously been made aware" as inclusive of keys published 
> in a RFC that Tim sent to the mailing list, without any bounds or guidance on 
> where else CAs must look for compromised keys (e.g. scanning online databases 
> and software packages). I don't necessarily intend to start a 

Re: Ignite data region off-heap allocation

2023-07-17 Thread Raymond Wilson
Hi Pavel,

This area is confusing. There is no indication that the memory pressure
applies to any individual object or allocation, so there is clearly no
association between memory pressure and any particular resource.

I get your argument that .Net can 'see' allocated memory. What is unclear
is whether it cares about actually allocated and used pages, or committed
pages.

I see there is a LazyMemoryAllocation (default: true) for data regions.
Some data regions set this to false, eg:

^--   sysMemPlc region [type=internal, persistence=true,
lazyAlloc=false,
^--   metastoreMemPlc region [type=internal, persistence=true,
lazyAlloc=false,
^--   TxLog region [type=internal, persistence=true, lazyAlloc=false,

The documentation is not clear on the effect of this flag other than to say
it is for 'Lazy memory allocation'. If this flag is true, will Ignite
proactively allocate and use all pages in a data region, rather than
incrementally?

Thanks,
Raymond.


On Tue, Jul 11, 2023 at 10:55 PM Pavel Tupitsyn 
wrote:

> > I can’t see another way of letting . Net know that it can’t have access
> to all the ‘free’ memory in the process
>
> You don't need to tell .NET how much memory is currently available. It is
> the job of the OS. .NET can "see" the size of the unmanaged heap.
>
> To quote another explanation [1]:
>
> > The point of AddMemoryPressure is to tell the garbage collector that
> there's a large amount of memory allocated with that object.
> > If it's unmanaged, the garbage collector doesn't know about it; only the
> managed portion.
> > Since the managed portion is relatively small, the GC may let it pass
> for garbage collection several times, essentially wasting memory that might
> need to be freed.
>
> I really don't think AddMemoryPressure is the right thing to do in your
> case.
> If you run into OOM issues, then look into Ignite memory region settings
> [2] and/or adjust application memory usage on the .NET side, so that the
> sum of those is not bigger than available RAM.
>
> [1]
> https://stackoverflow.com/questions/1149181/what-is-the-point-of-using-gc-addmemorypressure-with-an-unmanaged-resource
> [2]
> https://ignite.apache.org/docs/latest/memory-configuration/data-regions#configuring-default-data-region
>
> On Tue, Jul 11, 2023 at 11:48 AM Raymond Wilson <
> raymond_wil...@trimble.com> wrote:
>
>> How do Ignite .Net server nodes manage this memory issue in other
>> projects?
>>
>> On Tue, Jul 11, 2023 at 5:32 PM Raymond Wilson <
>> raymond_wil...@trimble.com> wrote:
>>
>>> Oops, commutes => committed
>>>
>>> On Tue, 11 Jul 2023 at 4:34 PM, Raymond Wilson <
>>> raymond_wil...@trimble.com> wrote:
>>>
>>>> I can’t see another way of letting . Net know that it can’t have access
>>>> to all the ‘free’ memory in the process when a large slab of that is spoken
>>>> for in terms of memory commutes to Ignite data regions.
>>>>
>>>> In the current setup, as time goes on and Ignite progressively fills
>>>> the allocated cache ram then system behaviour changes and can result in out
>>>> of memory issues. I think I would prefer consistent system behaviour wrt to
>>>> allocated resources from the start.
>>>>
>>>> Raymond.
>>>>
>>>> On Tue, 11 Jul 2023 at 3:57 PM, Pavel Tupitsyn 
>>>> wrote:
>>>>
>>>>> Are you sure this is necessary?
>>>>>
>>>>> GC.AddMemoryPressure documentation [1] states that this will "improve
>>>>> performance only for types that exclusively depend on finalizers".
>>>>>
>>>>> [1]
>>>>> https://learn.microsoft.com/en-us/dotnet/api/system.gc.addmemorypressure?view=net-7.0
>>>>>
>>>>> On Tue, Jul 11, 2023 at 1:02 AM Raymond Wilson <
>>>>> raymond_wil...@trimble.com> wrote:
>>>>>
>>>>>> I'm making changes to add memory pressure to the GC to take into
>>>>>> account memory committed to the Ignite data regions as this will be
>>>>>> unmanaged memory allocations from the perspective of the GC.
>>>>>>
>>>>>> I don't call seeing anything related to this for .Net clients in the
>>>>>> documentation. Are you aware of any?
>>>>>>
>>>>>> Raymond.
>>>>>>
>>>>>> On Mon, Jul 10, 2023 at 9:41 PM Raymond Wilson <
>>>>>> raymond_wil...@trimble.com> wrote:
>>>>>>
>>>>>>> Thanks Pavel, this makes sense.

Re: [Smcwg-public] CommonNames, Pseudonyms, GivenNames and Surnames

2023-07-17 Thread Clint Wilson via Smcwg-public
Hi Rob,

I think minimally filing an issue in https://github.com/cabforum/smime/issues 
would be a good thing to do to track this potential conflict.
FWIW, I also think the issue identified is indeed an issue (though probably not 
major) and your proposed updates seem reasonable to me as well.

Cheers,
-Clint

> On Jul 13, 2023, at 6:52 AM, Robert Lee via Smcwg-public 
>  wrote:
> 
> Dear all,
>  
> I’m emailing because I think some further clarification may be needed in 
> section 7.1.4.2.2(a) around commonNames as Personal Names or Pseudonyms 
> (capital ‘P’ based on SMC03 changes).
>  
> What I think is needed is to align some of the uses of commonNames with the 
> existing rules around if subject:pseudonym is present then 
> subject:givenName/subject:surname SHALL NOT be present and the vice versa 
> rule.  My understanding/assumption is that the pseudonym/givenName/surname 
> rules are in place to make an SMIME certificate a Pseudonym cert or a 
> Personal Name cert and not to be both at the same time (especially as putting 
> one’s name into the cert would dramatically reduce any privacy afforded by 
> using a Pseudonym).
>  
> However, the options for commonName in sponsor and individual validated 
> certificates don't entirely work with the above as currently you _could_ have 
> a subject:pseudonym and then put your Personal Name in the commonName which 
> doesn't track with my understanding/assumption of what the 
> pseudonym/givenName/surname rules are supposed to achieve.
>  
> I don’t think it’s a difficult thing to fix though.  Adding the following 
> lines to 7.1.4.2.2(a) should close this hole effectively enough:
>  
> “If the subject:commonName contains a Pseudonym, then the subject:givenName 
> and/or subject:surname attributes SHALL NOT be present.”
>  
> “If the subject:commonName contains a Personal Name, then the 
> subject:pseudonym attribute SHALL NOT be present.”
>  
> If people broadly agree with my suggestion then I’m happy to make a PR into 
> the BRs or somewhere else if, like SMC03, there’ll be a branch collecting 
> changes in someone’s fork of the document.
>  
> Best Regards,
> Rob
>  
> Dr. Robert Lee MEng PhD
> Senior Software Engineer with Cryptography SME
> www.globalsign.co.uk | www.globalsign.eu
> 
>  
> ___
> Smcwg-public mailing list
> Smcwg-public@cabforum.org 
> https://lists.cabforum.org/mailman/listinfo/smcwg-public





Re: Exceeding CCADB CA Logins for Current Term

2023-07-17 Thread Kathleen Wilson
Thank you to all of you who have so promptly responded to this request to 
reduce your CCADB logins until August 5. 

If you do need to make updates to the CCADB before August 5, we ask that 
you consolidate your CA's logins to one account one day per week.

I appreciate the email that I have received explaining your CCADB usage, 
summarized below.

Login to the CCADB to update data:
- Create a Case or respond to a comment on a Case
- Respond to a CCADB Survey
- Update data related to your CA, root certificates, intermediate 
certificates, documents, self-assessments.

For the CCADB usage listed above, would it be reasonable to ask CAs to 
consolidate their CCADB logins to one account one day per week? Two days 
per week? Other?
Or do you have other ideas about how to encourage the above behavior while 
managing our monthly login allowance?
We would like to document expected usage/behavior, and we may still 
increase our monthly login allowance based on the results of this 
discussion.

Reasons that CA POCs login to the CCADB more than one day per week on a 
regular basis:
- Check progress of a Case 
- Check for any new tasks listed in CCADB Home page
- Check details of intermediate certificate records
- Check for pending items requiring attention in CCADB

Would you (CA POCs) be interested in public reports that provide the 
information listed above for all CAs?

Or would you prefer to receive a regular email that provides the 
information listed above?
Opt-in? Daily? Weekly? Customizable?

Or would you prefer to only receive email when there are pending items 
requiring your attention in the CCADB?

Thanks,
Kathleen

-- 
You received this message because you are subscribed to the Google Groups 
"CCADB Public" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to public+unsubscr...@ccadb.org.
To view this discussion on the web visit 
https://groups.google.com/a/ccadb.org/d/msgid/public/41e48dd7-b919-4557-994b-f94e0aaee0a4n%40ccadb.org.


[Wikimedia-l] Re: Wiki Loves Folklore: Announcing Results and Project Statistics for 2023

2023-07-17 Thread Wilson Oluoha
Congratulations to the Winners.

Beautiful pictures all of them.

On Mon, Jul 17, 2023 at 7:38 AM Camelia Boban 
wrote:

> Thank you Joy for the update, happy to share it.
> All stunning photos, congratulazioni  to the photographers and the
> organizers.
>
> Camelia on behalf of the Wiki Loves Folklore Italy team
>
>
> Il dom 16 lug 2023, 17:50 Joy Agyepong  ha scritto:
>
>> Dear Community,
>>
>>
>> I hope this email finds you well. We are delighted to share with you the
>> exciting results and statistics of our recently concluded project. Wiki
>> Loves Folklore
>> 
>> aimed to celebrate the rich cultural heritage of folklore from around the
>> world by encouraging contributors to upload their media to Wikimedia
>> Commons.
>>
>>
>> The project, which ran from 1st February to 31st March, received an
>> overwhelming response, with a total of *38,027* submissions from *2,200*
>> dedicated uploaders representing *140 countries*.
>> It's been truly inspiring to witness the global enthusiasm and commitment
>> to preserving and sharing folklore through this initiative.
>>
>>
>> Today, we are thrilled to announce the *15 winning media entries*, which
>> were selected by our esteemed panel of judges. These entries captivated us
>> with their artistic expression, cultural significance, and ability to
>> showcase the diversity of folklore across different regions. We encourage
>> you to explore these exceptional contributions:
>>
>>
>> 1. [Winning Media Entry 1]
>> 
>>
>> 2. [Winning Media Entry 2]
>> 
>>
>> 3. [Winning Media Entry 3]
>> 
>>
>> 4. [Winning Media Entry 4]
>> 
>>
>> 5. [Winning Media Entry 5]
>> 
>>
>> 6. [Winning Media Entry 6]
>> 
>>
>> 7. [Winning Media Entry 7]
>> 
>>
>> 8. [Winning Media Entry 8]
>> 
>>
>> 9. [Winning Media Entry 9]
>> 
>>
>> 10. [Winning Media Entry 10]
>> 
>>
>> 11. [Winning Media Entry 11]
>> 
>>
>> 12. [Winning Media Entry 12]
>> 
>>
>> 13. [Winning Media Entry 13]
>> 
>>
>> 14. [Winning Media Entry 14]
>> 
>>
>> 15. [Winning Media Entry 15]
>> 
>>
>>
>> Additionally, we invite you to visit our project page
>> 
>> to view these outstanding entries and discover the remarkable folklore
>> traditions they represent.
>>
>>
>> *Wiki Loves Folklore is an international project that aims to collect and
>> preserve folklore-related media on Wikimedia Commons. By creating a
>> repository of diverse folklore traditions, we strive to raise awareness,
>> foster cultural exchange, and inspire future generations. This initiative
>> serves as a testament to the power of collaboration and the importance of
>> preserving our shared cultural heritage.*
>>
>>
>> This year's success is due to your tireless efforts and the contributions
>> of other participants, organizers, and volunteers. We are immensely
>> grateful for your dedication and passion for promoting cultural heritage
>> worldwide. Your support has made this project a resounding success, and we
>> look forward to your continued participation in future endeavors.
>>
>>
>> If you have any questions or would like further information, please do
>> not hesitate to reach out via supp...@wikilovesfolklore.org. Thank you
>> once again for your interest and support.
>>
>>
>> Cheers to a New Year!!
>>
>>
>> Warm regards,
>>
>>
>> Joy
>>
>> On behalf of the International team Wiki Loves Folklore
>> ___
>> Wikimedia-l mailing list -- wikimedia-l@lists.wikimedia.org, guidelines
>> at: https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and
>> 

[African Wikimedians] Re: [Wikimedia-l] Re: Wiki Loves Folklore: Announcing Results and Project Statistics for 2023

2023-07-17 Thread Wilson Oluoha
Congratulations to the Winners.

Beautiful pictures all of them.


Re: [go-cd] source code compile problem

2023-07-16 Thread Chad Wilson
Maybe. GoCD is built with Node 18 right now.

It still uses Webpack 4, which might not work correctly with Node 20.

-Chad
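For what it's worth, the Node major version can be gated before building; here is a minimal POSIX-shell sketch (the threshold and warning text are illustrative, not part of GoCD's tooling):

```shell
# Hypothetical helper: warn when the active Node.js major version is too
# new for GoCD's Webpack 4 build, which relies on --openssl-legacy-provider
# (rejected in NODE_OPTIONS since Node 20).
check_node_major() {
  ver="$1"            # e.g. the output of `node --version`, like "v20.3.0"
  major=${ver#v}      # strip the leading "v"
  major=${major%%.*}  # keep only the major component
  if [ "$major" -ge 20 ]; then
    echo "Node $major: too new for this build; use Node 18"
  else
    echo "Node $major: ok"
  fi
}

# The version reported in the error above:
check_node_major "v20.3.0"
```

In practice you would call `check_node_major "$(node --version)"` before running `./gradlew`.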

On Mon, 17 Jul 2023, 09:59 jianhua guo,  wrote:

> After running the commands "yarn cache clean" and "rm -rf
> server/src/main/webapp/WEB-INF/rails/node_modules", I compiled the project
> again and saw the following exception. Could the Node.js version be too
> high?
>
> > Task :server:compileAssetsWebpackDev FAILED
> [/Users/guojianhua/gtja/projects/gocd/server/src/main/webapp/WEB-INF/rails]$
> yarn run webpack-dev --env
> outputDir=/Users/guojianhua/gtja/projects/gocd/server/src/main/webapp/WEB-INF/rails/public/assets/webpack
> --env
> licenseReportFile=/Users/guojianhua/gtja/projects/gocd/server/target/reports/yarn-license/license-report.json
> yarn run v1.22.19
> $ cross-env NODE_OPTIONS=--openssl-legacy-provider webpack --config
> webpack/config/webpack.config.ts --color --mode=development --env
> outputDir=/Users/guojianhua/gtja/projects/gocd/server/src/main/webapp/WEB-INF/rails/public/assets/webpack
> --env
> licenseReportFile=/Users/guojianhua/gtja/projects/gocd/server/target/reports/yarn-license/license-report.json
> /opt/homebrew/Cellar/node/20.3.0_1/bin/node: --openssl-legacy-provider is
> not allowed in NODE_OPTIONS
> error Command failed with exit code 9.
>
> On Friday, July 14, 2023 at 15:06:32 UTC+8, wrote:
>
>> The commit is still at
>> https://github.com/dtabuenc/karma-html-reporter/commit/51ba3f91a6f19ef383c676431aa7f8c3fa73dab3
>> so it *should* work fine.
>>
>> Perhaps try yarn cache clean and rm -rf
>> server/src/main/webapp/WEB-INF/rails/node_modules and try again?
>>
>> If that doesn't work, what's your Yarn version? Do you have some
>> non-standard global git or Yarn configuration?
>>
>> -Chad
>>
>> On Fri, Jul 14, 2023 at 2:43 PM jianhua guo  wrote:
>>
>>> I use the following command './gradlew clean prepare' to build the gocd
>>> project. Failed with the following exception, can anyone help?
>>>
>>> error Couldn't find match for "51ba3f91a6f19ef383c676431aa7f8c3fa73dab3"
>>> in
>>> "refs/heads/dependabot/npm_and_yarn/grunt-1.5.3,refs/heads/dependabot/npm_and_yarn/hosted-git-info-2.8.9,refs/heads/dependabot/npm_and_yarn/minimatch-3.0.8,refs/heads/dependabot/npm_and_yarn/minimist-and-mkdirp-1.2.8,refs/heads/karma-0.11,refs/heads/master,refs/tags/0.1.2,refs/tags/v0.1.3,refs/tags/v0.2.2,refs/tags/v0.2.3"
> for "https://github.com/dtabuenc/karma-html-reporter.git".
>>> info Visit https://yarnpkg.com/en/docs/cli/install for documentation
>>> about this command.
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "go-cd" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to go-cd+un...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/go-cd/9a3d1fd9-4c6b-4eaf-a43a-c52986b0f038n%40googlegroups.com
>>> 
>>> .
>>>
>> --
> You received this message because you are subscribed to the Google Groups
> "go-cd" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to go-cd+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/go-cd/b7b17d17-10e8-4d2d-8ffc-3f1b32f13a9en%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"go-cd" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to go-cd+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/go-cd/CAA1RwH9D57sas_KV7k3H%2B7%3DHwUDn_Bc-tRGskKXD%2BV%3Dvdny4VQ%40mail.gmail.com.


[Translators-l] Re: Ready for translation: Tech News #29 (2023)

2023-07-14 Thread Nick Wilson (Quiddity)
On Thu, Jul 13, 2023 at 4:50 PM Nick Wilson (Quiddity) <
nwil...@wikimedia.org> wrote:

> The latest tech newsletter is ready for early translation:
> https://meta.wikimedia.org/wiki/Tech/News/2023/29
>
> Direct translation link:
>
> https://meta.wikimedia.org/w/index.php?title=Special:Translate&group=page-Tech%2FNews%2F2023%2F29&action=page
>

The text of the newsletter is now final.

Nothing has changed since yesterday.

There won't be any more changes; you can translate safely. Thanks!
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


Re: [slurm-users] Unconfigured GPUs being allocated

2023-07-14 Thread Wilson, Steven M
I haven't seen anything that allows for disabling a defined Gres device. It 
does seem to work if I define the GPUs that I don't want to use and then 
specifically submit jobs to the other GPUs using --gpu like 
"--gpu=gpu:rtx_2080_ti:1". I suppose if I set the GPU Type to be "COMPUTE" for 
the GPUs I want to use for computing and "UNUSED" for those that I don't, this 
scheme might work (e.g., --gpu=gpu:COMPUTE:3). But then every job submission 
would be required to have this option set. Not a very workable solution.
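Concretely, the Type-based scheme described above might be sketched as follows (the device paths and type names are assumptions about this node):

```
# slurm.conf (sketch): advertise both cards under distinct Type names
NodeName=oryx CoreSpecCount=2 CPUs=8 RealMemory=64000 Gres=gpu:RTX2080TI:1,gpu:GT710:1

# gres.conf (sketch): map each Type to its device file
Nodename=oryx Name=gpu Type=RTX2080TI File=/dev/nvidia1
Nodename=oryx Name=gpu Type=GT710     File=/dev/nvidia0

# Jobs would then have to target the compute card explicitly, e.g.:
#   sbatch --gres=gpu:RTX2080TI:1 job.sh
```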

Thanks!
Steve

From: slurm-users  on behalf of Feng 
Zhang 
Sent: Friday, July 14, 2023 3:09 PM
To: Slurm User Community List 
Subject: Re: [slurm-users] Unconfigured GPUs being allocated


Very interesting issue.

I am guessing there might be a workaround: SInce oryx has 2 gpus
instead, you can define both of them, but disable the GT 710? Does
Slurm support this?

Best,

Feng

Best,

Feng


On Tue, Jun 27, 2023 at 9:54 AM Wilson, Steven M  wrote:
>
> Hi,
>
> I manually configure the GPUs in our Slurm configuration (AutoDetect=off in 
> gres.conf) and everything works fine when all the GPUs in a node are 
> configured in gres.conf and available to Slurm.  But we have some nodes where 
> a GPU is reserved for running the display and is specifically not configured 
> in gres.conf.  In these cases, Slurm includes this unconfigured GPU and makes 
> it available to Slurm jobs.  Using a simple Slurm job that executes 
> "nvidia-smi -L", it will display the unconfigured GPU along with as many 
> configured GPUs as requested by the job.
>
> For example, in a node configured with this line in slurm.conf:
> NodeName=oryx CoreSpecCount=2 CPUs=8 RealMemory=64000 Gres=gpu:RTX2080TI:1
> and this line in gres.conf:
> Nodename=oryx Name=gpu Type=RTX2080TI File=/dev/nvidia1
> I will get the following results from a job running "nvidia-smi -L" that 
> requested a single GPU:
> GPU 0: NVIDIA GeForce GT 710 (UUID: 
> GPU-21fe15f0-d8b9-b39e-8ada-8c1c8fba8a1e)
> GPU 1: NVIDIA GeForce RTX 2080 Ti (UUID: 
> GPU-0dc4da58-5026-6173-1156-c4559a268bf5)
>
> But in another node that has all GPUs configured in Slurm like this in 
> slurm.conf:
> NodeName=beluga CoreSpecCount=1 CPUs=16 RealMemory=128500 
> Gres=gpu:TITANX:2
> and this line in gres.conf:
> Nodename=beluga Name=gpu Type=TITANX File=/dev/nvidia[0-1]
> I get the expected results from the job running "nvidia-smi -L" that 
> requested a single GPU:
> GPU 0: NVIDIA RTX A5500 (UUID: GPU-3754c069-799e-2027-9fbb-ff90e2e8e459)
>
> I'm running Slurm 22.05.5.
>
> Thanks in advance for any suggestions to help correct this problem!
>
> Steve



Re: [slurm-users] Unconfigured GPUs being allocated

2023-07-14 Thread Wilson, Steven M
It's not so much whether a job may or may not access the GPU but rather which 
GPU(s) is(are) included in $CUDA_VISIBLE_DEVICES. That is what controls what 
our CUDA jobs can see and therefore use (within any cgroups constraints, of 
course). In my case, Slurm is sometimes setting $CUDA_VISIBLE_DEVICES to a GPU 
that is not in the Slurm configuration because it is intended only for driving 
the display and not GPU computations.

Thanks for your thoughts!

Steve

From: slurm-users  on behalf of 
Christopher Samuel 
Sent: Friday, July 14, 2023 1:57 PM
To: slurm-users@lists.schedmd.com 
Subject: Re: [slurm-users] Unconfigured GPUs being allocated


On 7/14/23 10:20 am, Wilson, Steven M wrote:

> I upgraded Slurm to 23.02.3 but I'm still running into the same problem.
> Unconfigured GPUs (those absent from gres.conf and slurm.conf) are still
> being made available to jobs so we end up with compute jobs being run on
> GPUs which should only be used

I think this is expected - it's not that Slurm is making them available,
it's that it's unaware of them and so doesn't control them in the way it
does for the GPUs it does know about. So you get the default behaviour
(any process can access them).

If you want to stop them being accessed from Slurm you'd need to find a
way to prevent that access via cgroups games or similar.
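One common way to arrange that cgroup-based blocking is Slurm's device constraint; a sketch under assumptions about the Slurm version (releases differ in how devices Slurm does not manage are treated):

```
# slurm.conf (sketch): enable the cgroup task plugin
TaskPlugin=task/cgroup

# cgroup.conf (sketch): confine each job to the GRES devices it was
# allocated; device files not granted to the job are blocked by the
# cgroup devices controller.
ConstrainDevices=yes
```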

All the best,
Chris
--
Chris Samuel  :  http://www.csamuel.org/  :  Berkeley, CA, USA




Exceeding CCADB CA Logins for Current Term

2023-07-14 Thread Kathleen Wilson
All,

We have sent the following notice to each CA Primary Point of Contact (POC) 
who has been regularly and frequently logging into the CCADB.

We would like to have a discussion here about the information that CA 
primary POCs currently obtain by logging into the CCADB when they don’t 
need to update their CA’s data or create/update a Case.

As a Primary POC for your CA…

   - What information are you looking for when you log in to the CCADB when 
   you do not intend to update/create data or a case?
   - Would it be helpful for the CCADB to provide that information via 
   additional public reports?  
   - Would you want to opt-in to a regular email from the CCADB containing 
   certain information about your CA, so that you don’t need to log into the 
   CCADB as often?


== Notice to Primary POCs ==

Subject: [Action Requested] Exceeding CCADB CA Logins for Current Term 

Dear CA Primary POC,

With this message, we request that until August 5, you only log in to the 
CCADB if you need to make a data update, such as reporting a new 
intermediate certificate/revocation or updated audit, self assessment, and 
policy documents. If it is possible to consolidate multiple updates across 
one CCADB login session rather than spreading these logins and updates 
across multiple days, it would be appreciated.

Reason for this request:

The CCADB is a highly customized instance of Salesforce, and we have been 
notified by Salesforce that we are exceeding our CA user login numbers for 
this annual term, which ends on August 4.

CA users are "Salesforce Community" users, and information about Salesforce 
Communities Licenses may be found here:
https://developer.salesforce.com/blogs/developer-relations/2014/02/salesforce-communities-licenses#:~:text=What%20happens%20if%20I%20go%20over%20my%20monthly%20limit%20allocation%3F

Salesforce follows a yearly entitlement policy to determine overage for 
community user logins. We (CCADB Steering Committee member organizations) 
are currently paying for an annual allotment of community user logins, of 
which we only have about 100 left in our annual term, which resets on 
August 5.

As the CCADB usage continues to grow, we will (1) evaluate options for our 
next annual term with Salesforce, and (2) initiate further discussion in 
the CCADB public group (https://groups.google.com/a/ccadb.org/g/public) to 
better understand the CA community usage and login needs. 

==

Thanks,
Kathleen and the CCADB SC

-- 
You received this message because you are subscribed to the Google Groups 
"CCADB Public" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to public+unsubscr...@ccadb.org.
To view this discussion on the web visit 
https://groups.google.com/a/ccadb.org/d/msgid/public/c9e3d7ee-2a5a-4392-bf5e-40134c10d506n%40ccadb.org.


Re: [slurm-users] Unconfigured GPUs being allocated

2023-07-14 Thread Wilson, Steven M
I upgraded Slurm to 23.02.3 but I'm still running into the same problem. 
Unconfigured GPUs (those absent from gres.conf and slurm.conf) are still being 
made available to jobs so we end up with compute jobs being run on GPUs which 
should only be used

Any ideas?

Thanks,
Steve

From: Wilson, Steven M
Sent: Tuesday, June 27, 2023 9:50 AM
To: slurm-users@lists.schedmd.com 
Subject: Unconfigured GPUs being allocated

Hi,

I manually configure the GPUs in our Slurm configuration (AutoDetect=off in 
gres.conf) and everything works fine when all the GPUs in a node are configured 
in gres.conf and available to Slurm.  But we have some nodes where a GPU is 
reserved for running the display and is specifically not configured in 
gres.conf.  In these cases, Slurm includes this unconfigured GPU and makes it 
available to Slurm jobs.  Using a simple Slurm job that executes "nvidia-smi 
-L", it will display the unconfigured GPU along with as many configured GPUs as 
requested by the job.

For example, in a node configured with this line in slurm.conf:
NodeName=oryx CoreSpecCount=2 CPUs=8 RealMemory=64000 Gres=gpu:RTX2080TI:1
and this line in gres.conf:
Nodename=oryx Name=gpu Type=RTX2080TI File=/dev/nvidia1
I will get the following results from a job running "nvidia-smi -L" that 
requested a single GPU:
GPU 0: NVIDIA GeForce GT 710 (UUID: 
GPU-21fe15f0-d8b9-b39e-8ada-8c1c8fba8a1e)
GPU 1: NVIDIA GeForce RTX 2080 Ti (UUID: 
GPU-0dc4da58-5026-6173-1156-c4559a268bf5)

But in another node that has all GPUs configured in Slurm like this in 
slurm.conf:
NodeName=beluga CoreSpecCount=1 CPUs=16 RealMemory=128500 Gres=gpu:TITANX:2
and this line in gres.conf:
Nodename=beluga Name=gpu Type=TITANX File=/dev/nvidia[0-1]
I get the expected results from the job running "nvidia-smi -L" that requested 
a single GPU:
GPU 0: NVIDIA RTX A5500 (UUID: GPU-3754c069-799e-2027-9fbb-ff90e2e8e459)

I'm running Slurm 22.05.5.

Thanks in advance for any suggestions to help correct this problem!

Steve


Re: [go-cd] source code compile problem

2023-07-14 Thread Chad Wilson
The commit is still at
https://github.com/dtabuenc/karma-html-reporter/commit/51ba3f91a6f19ef383c676431aa7f8c3fa73dab3
so it *should* work fine.

Perhaps try yarn cache clean and rm -rf
server/src/main/webapp/WEB-INF/rails/node_modules and try again?

If that doesn't work, what's your Yarn version? Do you have some
non-standard global git or Yarn configuration?

-Chad

On Fri, Jul 14, 2023 at 2:43 PM jianhua guo  wrote:

> I use the following command './gradlew clean prepare' to build the gocd
> project. Failed with the following exception, can anyone help?
>
> error Couldn't find match for "51ba3f91a6f19ef383c676431aa7f8c3fa73dab3"
> in
> "refs/heads/dependabot/npm_and_yarn/grunt-1.5.3,refs/heads/dependabot/npm_and_yarn/hosted-git-info-2.8.9,refs/heads/dependabot/npm_and_yarn/minimatch-3.0.8,refs/heads/dependabot/npm_and_yarn/minimist-and-mkdirp-1.2.8,refs/heads/karma-0.11,refs/heads/master,refs/tags/0.1.2,refs/tags/v0.1.3,refs/tags/v0.2.2,refs/tags/v0.2.3"
> for "https://github.com/dtabuenc/karma-html-reporter.git".
> info Visit https://yarnpkg.com/en/docs/cli/install for documentation
> about this command.
>
> --
> You received this message because you are subscribed to the Google Groups
> "go-cd" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to go-cd+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/go-cd/9a3d1fd9-4c6b-4eaf-a43a-c52986b0f038n%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"go-cd" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to go-cd+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/go-cd/CAA1RwH_KE%2B1aWRB%2BS1y1fgYWKr6z0YNBCnFH2aaxrazj3jBhNg%40mail.gmail.com.


Re: IR Photography

2023-07-13 Thread mike wilson


> On 14/07/2023 01:29 Larry Colen  wrote:
> Last night I had a chance to come home to get my miata, I was hoping to have 
> time to cut it up into pieces to mail to anybody that might want some, 

That's very generous of you.  I'll have a think about what I can afford postage 
on.
--
Pentax-Discuss Mail List
To unsubscribe send an email to pdml-le...@pdml.net
to UNSUBSCRIBE from the PDML, please visit the link directly above and follow 
the directions.


[Translators-l] Ready for translation: Tech News #29 (2023)

2023-07-13 Thread Nick Wilson (Quiddity)
The latest tech newsletter is ready for early translation:
https://meta.wikimedia.org/wiki/Tech/News/2023/29

Direct translation link:
https://meta.wikimedia.org/w/index.php?title=Special:Translate&group=page-Tech%2FNews%2F2023%2F29&action=page

We plan to send the newsletter on Monday afternoon (UTC), i.e. Monday
morning PT. The existing translations will be posted on the wikis in
that language. Deadlines:
https://meta.wikimedia.org/wiki/Tech/News/For_contributors#The_deadlines

There will be more edits by Friday noon UTC but the existing content should
generally remain fairly stable. I will let you know on Friday in any
case.

Let us know if you have any questions, comments or concerns. As
always, we appreciate your help and feedback.

(If you haven't translated Tech News previously, see this email:
https://lists.wikimedia.org/pipermail/translators-l/2017-January/003773.html
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


MRSP 2.9: Issues 261, 263 and 267, Miscellaneous Clarifications and Corrections

2023-07-13 Thread Ben Wilson
All,

This email announces discussion of three more GitHub issues that we would
like to address in Version 2.9 of the Mozilla Root Store Policy (MRSP).

*#261 - Merge 5 and 5.1 in Section 2.1*


Currently, item 5.1 in section 2.1 of the MRSP applies to server
certificates issued on or after October 1, 2021, a date that is now in the
past.

The updated item 5 in section 2.1 would combine items 5 and 5.1 and remove
the date and state that CAs “verify each dNSName or IPAddress in a SAN or
commonName in server certificates in accordance with sections 3.2.2.4 and
3.2.2.5 of the CA/Browser Forum's Baseline Requirements at intervals of 398
days or less, and verify that all other information that is included in
server certificates remains current and correct at intervals of 825 days or
less”.
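For illustration only (the function and data model here are assumptions, not part of the policy), the proposed intervals amount to:

```python
from datetime import date, timedelta

# The proposed item 5: re-verify each dNSName/IPAddress/commonName value
# every 398 days or less, and all other server-certificate information
# every 825 days or less.
NAME_INTERVAL = timedelta(days=398)
OTHER_INTERVAL = timedelta(days=825)

def revalidation_due(last_verified: date, is_name_value: bool, today: date) -> bool:
    """Return True if the value must be re-verified before further issuance."""
    interval = NAME_INTERVAL if is_name_value else OTHER_INTERVAL
    return today - last_verified > interval
```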

*#263 - Clarify sentence prohibiting blank sections that also contain no
subsections in CPs and CPSes*

Currently, item 5 in MRSP section 3.3 says that CPs and CPSes must be
structured according to RFC 3647.  It has been argued that this is
ambiguous, for instance, because RFC 3647 has more than one numbered
outline.  Also, the third bullet says that CPs/CPSes must “contain no
sections that are blank and have no subsections”.  That language was not
intended to mean that a CP/CPS could not have any subsections.  Therefore,
item 5 in Section 3.3 should be clarified as follows:

“all CPs, CPSes, and combined CP/CPSes MUST be structured according to the
common outline set forth in section 6 of RFC 3647 (
https://datatracker.ietf.org/doc/html/rfc3647#section-6) and MUST:

* include at least every section and subsection defined in section 6 of RFC
3647;

* only use the words "No Stipulation" to mean that the particular document
imposes no requirements related to that section; and

* contain no sections that are entirely blank, having no text or
subsections”
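As a rough illustration (not a real validator; the required-section list is truncated here, and the "No Stipulation" rule is omitted), the first and third proposed rules could be checked mechanically:

```python
# Model a CP/CPS outline as {section number: section text}. A real check
# would list every section and subsection of RFC 3647 section 6.
REQUIRED = ["1", "1.1", "1.2"]

def check_outline(cps):
    problems = []
    # Rule 1: every required section/subsection must be present.
    for sec in REQUIRED:
        if sec not in cps:
            problems.append(f"missing section {sec}")
    # Rule 3: a section may be blank only if it has subsections.
    for sec, text in cps.items():
        has_subsections = any(other.startswith(sec + ".") for other in cps)
        if not text.strip() and not has_subsections:
            problems.append(f"section {sec} is blank with no subsections")
    return problems
```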

*#267 - Update WebTrust and ETSI audit criteria to current versions and
identifiers* 

WebTrust references would be updated to require that audits be performed in
accordance with the following versions of the WebTrust criteria:

- WebTrust Principles and Criteria for Certification Authorities –
Version 2.2.2 or later;

- WebTrust Principles and Criteria for Certification Authorities –
SSL Baseline with Network Security - Version 2.6 or later; and

- WebTrust Principles and Criteria for Certification Authorities -
Extended Validation SSL - Version 1.7.8 or later.

Please provide your comments and suggestions as responses in this thread.

Thanks,

Ben and Kathleen

-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/CA%2B1gtaap4wDHwF5RLEL5CRS5UJBX5BoX29wQcOq-%2BUyB56Qk6A%40mail.gmail.com.


Re: [Servercert-wg] Participation Proposal for Revised SCWG Charter

2023-07-13 Thread Ben Wilson via Servercert-wg
Thanks, Tim.

All,

I will look closer at the distribution and use of software for browsing the
internet securely, instead of participation metrics. There is at least one
source, StatCounter (https://gs.statcounter.com/browser-market-share), that
purports to measure use of browsing software, both globally and regionally.
Would it be worthwhile to explore distribution by region and come up with a
reasonable threshold?  Can we rely on StatCounter, or should we look
elsewhere?

Thanks,

Ben

On Wed, Jul 12, 2023 at 9:30 AM Tim Hollebeek via Servercert-wg <
servercert-wg@cabforum.org> wrote:

> I have a meaningful comment.
>
>
>
> I don’t want to ever have to discuss or judge whether someone’s comment is
> “meaningful” or not, and I don’t think incentivizing people to post more
> comments than they otherwise would is helpful.
>
>
>
> I also think getting the chairs involved in any way in discussing whether
> a member representative did or did not have a medical condition during a
> particular time period is an extremely bad idea.
>
>
>
> Given that the original issue was trying to determine whether a
> certificate consumer is in fact a legitimate player in the ecosystem or
> not, I would suggest that exploring metrics like market share might be far
> more useful.  Metrics like participation are rather intrusive and onerous,
> except to those who are trying to game them, and those trying to game such
> metrics will succeed with little or no effort.
>
>
>
> -Tim
>
>
>
> *From:* Servercert-wg  *On Behalf Of 
> *Roman
> Fischer via Servercert-wg
> *Sent:* Wednesday, July 12, 2023 7:23 AM
> *To:* CA/B Forum Server Certificate WG Public Discussion List <
> servercert-wg@cabforum.org>
> *Subject:* Re: [Servercert-wg] Participation Proposal for Revised SCWG
> Charter
>
>
>
> Dear Ben,
>
>
>
> Mandatory participation has in my experience never resulted in more or
> better discussions. People will dial into the telco and let it run in the
> background to “earn the credits”.
>
>
>
> Also, what would happen after the 90 day suspension? Would the
> organization be removed as a CA/B member?
>
>
>
> Rgds
> Roman
>
>
>
> *From:* Servercert-wg  *On Behalf Of *Ben
> Wilson via Servercert-wg
> *Sent:* Freitag, 7. Juli 2023 21:59
> *To:* CA/B Forum Server Certificate WG Public Discussion List <
> servercert-wg@cabforum.org>
> *Subject:* [Servercert-wg] Participation Proposal for Revised SCWG Charter
>
>
>
> All,
>
>
>
> Here is a draft participation proposal for the SCWG to consider and
> discuss for inclusion in a revised SCWG Charter.
>
>
>
> #.  Participation Requirements to Maintain Voting Privileges
>
>
>
> (a) Attendance.  The privilege to vote “Yes” or “No” on ballots is
> suspended for 90 days if a Voting Member fails to meet the following
> attendance requirement over any 365-day period:
>
>- 10% of SCWG meetings for Voting Members located in time zones offset
>by UTC +5 through UTC +12
>- 30% of SCWG meetings for Voting Members located in all other time
>zones
>
> (b) Meaningful Comments.  Posting a Meaningful Comment is an alternative
> means of meeting the attendance requirement in subsection (a). A Voting
> Member can earn an attendance credit to make up for each missed meeting by
> posting a Meaningful Comment to the SCWG Public Mail List. Each Meaningful
> Comment is equal to attending one (1) meeting.
>
>
>
> A Meaningful Comment is one that follows the Code of Conduct and provides
> relevant information to the SCWG, such as new information, an insight,
> suggestion, or perspective related to the Scope of the SCWG, or that
> proposes an improvement to the TLS Baseline Requirements or EV Guidelines.
> It can also be something that responds to or builds on the comments of
> others in a meaningful way, or that offers feedback, suggestions, or
> solutions to the issues or challenges raised by the topic of discussion.
>
>
>
> A Meaningful Comment should be both relevant (within the Scope of the
> SCWG or related to the discussion that is taking place on the mailing
> list) and well-supported (clear reasons why the Voting Representative
> believes what they believe and supported by facts, data, or other
> information.)
>
>
>
> (c) A Voting Member that has failed to meet the attendance requirement in
> subsection (a) above is considered an "Inactive Member".  Any Member who
> believes that any other Member is an Inactive Member may report that Member
> on the Forum's Management List by providing specific information about that
> Member's non-participation, and the SCWG Chair shall send written notice
> to the Inact

Re: [cabfpub] Voting begins for Ballot Forum-18 v3 - Update CA/B Forum Bylaws to version 2.5

2023-07-13 Thread Ben Wilson via Public
Mozilla votes "Yes" on Ballot Forum-018 v3

On Thu, Jul 13, 2023 at 2:43 AM Dimitris Zacharopoulos (HARICA) via Public <
public@cabforum.org> wrote:

> This message begins the voting period for ballot Forum-18 v3.
>
> Dimitris.
>
> Purpose of the Ballot
> The Forum has identified and discussed a number of improvements to be made
> to the current version of the Bylaws to improve clarity and allow the Forum
> to function more efficiently. Most of these changes are described in the 
> “Issues
> with Bylaws to be addressed
> <https://docs.google.com/document/d/1EtrIy3F5cPge0_M-C8J6fe72KcVI8H5Q_2S6S31ynU0>”
> document. Some preparatory discussions and reviews can be checked on
> GitHub <https://github.com/cabforum/forum/pull/32>.
>
> Here is a list of major changes:
>
>1. Clarified that it is not always required to “READ” the antitrust
>statement before each meeting and added the option of reading a 
> "note-well".
>2. Clarified where to send/post Chartered Working Group minutes.
>    3. Increased the number of days before a ballot automatically fails
>    from 21 to 90 days.
>    4. Allowed the Chair or Vice-Chair to update links to other sections
>    within a document without a ballot.
>5. Applied grammatical and other language improvements.
>6. Clarified that Subcommittee minutes do not need to also be
>published on the public web site.
>7. Created a new member category called "Probationary Member",
>applicable to both Certificate Issuer and Consumer categories, and
>separated "Associate Members - Certificate Issuers" from the "Associate
>Member" category.
>8. Clarified language for Associate Members for consistency with
>Probationary Member for the ballot proposals and endorsing.
>9. Removed the member category called "Root CA Issuer" and only kept
>the "CA Issuer" category.
>10. Added a step to check the authority of the signer during
>membership applications.
>11. Updated the Chartered Working Group template.
>12. Added some language to the Code of Conduct.
>13. Publishing private conversations without express permission is
>    considered a violation of the Code of Conduct.
>14. Updated the elections language as agreed at F2F#58.
>
> The following motion has been proposed by Dimitris Zacharopoulos of HARICA
> and endorsed by Ben Wilson of Mozilla and Paul van Brouwershaven of Entrust.
>
> MOTION BEGINS
> *Amendment to the Bylaws:* Replace the entire text of the Bylaws of the
> CA/Browser Forum with the attached version (CA-Browser Forum Bylaws
> v2.5.pdf).
>
> *NOTE:* There are two redlines produced by GitHub
>
>1. Bylaws-redline.pdf (attached)
>    2. GitHub redline available at
>    https://github.com/cabforum/forum/pull/32/files#diff-3c3a1aa55886ff217ac9c808f96a5e9a9582fc11
>
> MOTION ENDS
> The procedure for this ballot is as follows:
>
>
> *Forum-18 v3 - Update CA/B Forum Bylaws to version 2.5*
>
> Period | Start time (10:00 UTC) | End time (10:00 UTC)
> Discussion (at least 7 days) | 4 July 2023 | 11 July 2023
> Expected Vote for approval (7 days) | 13 July 2023 | 20 July 2023
> ___
> Public mailing list
> Public@cabforum.org
> https://lists.cabforum.org/mailman/listinfo/public
>
___
Public mailing list
Public@cabforum.org
https://lists.cabforum.org/mailman/listinfo/public


Re: [Servercert-wg] Voting Period Begins - Ballot SC-59 v2 "Weak Key Guidance"

2023-07-12 Thread Clint Wilson via Servercert-wg
Apple votes YES on Ballot SC-059.

> On Jul 6, 2023, at 9:17 AM, Tom Zermeno via Servercert-wg 
>  wrote:
> 
> Purpose of the Ballot SC-59
> 
> This ballot proposes updates to the Baseline Requirements for the Issuance 
> and Management of Publicly-Trusted Certificates related to the identification 
> and revocation of certificates with private keys that were generated in a 
> manner that may make them susceptible to easy decryption. It specifically 
> deals with Debian weak keys, ROCA, and Close Primes Vulnerability. 
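The "Close Primes Vulnerability" mentioned above can be illustrated with a Fermat-style factoring probe. This is an illustrative sketch only, not the ballot's required check: production weak-key screening (Debian blocklists, ROCA fingerprinting) relies on published tooling, but the sketch shows why close primes are fatal — a modulus n = a² − b² factors almost immediately:

```python
import math

def close_prime_factor(n, rounds=100):
    """Fermat-style probe for RSA moduli n = p*q whose primes are close.

    Starting from ceil(sqrt(n)), look for a such that a*a - n is a
    perfect square b*b; then n factors as (a - b) * (a + b).
    Returns (p, q) on success, None if the primes are not close.
    """
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    for _ in range(rounds):
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1
    return None
```

With close primes such as 10007 and 10009 the factorization falls out in the very first iterations, while a modulus with well-separated primes survives the probe.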
> 
> Notes:  
> 
> Thank you to the participants who voiced opinions and concerns about the 
> previous version of the ballot.  While there were many concerns about the 
> inclusion of the Debian weak keys checks, we have decided to leave the checks 
> in the ballot.  Our reasoning is that we wanted to strengthen the guidance 
> statements, to help CAs ensure compliant certificate generation.  Future 
> reviews of the BRs may cull the requirements as the needs of the 
> community dictate. 
> We believe that the requested date of November 15, 2023, will allow enough 
> time for Certificate Authorities to enact any changes to their systems to 
> ensure that they perform the weak key checks on all CSRs submitted for TLS 
> certificates. 
> The changes introduced in SC-59 do not conflict with any of the recent 
> ballots. As observed with other ballots in the past, minor administrative 
> updates must be made to the proposed ballot text before publication such that 
> the appropriate Version # and Change History are accurately represented 
> (e.g., to indicate these changes will be represented in Version 2.0.1).  
> The following motion has been proposed by Thomas Zermeno of SSL.com 
> <http://ssl.com/> and has been endorsed by Martijn Katerbarg of Sectigo and 
> Ben Wilson of Mozilla. 
> 
> - Motion Begins -  
> 
> This ballot modifies the “Baseline Requirements for the Issuance and 
> Management of Publicly-Trusted Certificates” (“Baseline Requirements”), based 
> on Version 2.0.0. 
> 
> MODIFY the Baseline Requirements as specified in the following Redline: 
> https://github.com/cabforum/servercert/compare/a0360b61e73476959220dc328e3b68d0224fa0b3...SSLcom:servercert:958e6ccac857b826fead6e4bd06d58f4fdd7fa7a
>   
> 
> - Motion Ends - 
> 
> The procedure for approval of this ballot is as follows:
> 
> Discussion (7 days) 
> 
> • Start time: 2023-06-26 22:00:00 UTC 
> 
> • End time: 2023-07-03 21:59:59 UTC 
> 
> Vote for approval (7 days) 
> 
>   •  Start Time:  2023-07-06 17:00:00
> 
>   •  End Time:   2023-07-13 16:59:59 
> 
>  
> ___
> Servercert-wg mailing list
> Servercert-wg@cabforum.org <mailto:Servercert-wg@cabforum.org>
> https://lists.cabforum.org/mailman/listinfo/servercert-wg



smime.p7s
Description: S/MIME cryptographic signature
___
Servercert-wg mailing list
Servercert-wg@cabforum.org
https://lists.cabforum.org/mailman/listinfo/servercert-wg


Re: [Servercert-wg] Voting Period Begins - Ballot SC-063 V4: “Make OCSP Optional, Require CRLs, and Incentivize Automation”

2023-07-12 Thread Clint Wilson via Servercert-wg
Apple votes YES on Ballot SC-063.

> On Jul 6, 2023, at 8:59 AM, Ryan Dickson via Servercert-wg 
>  wrote:
> 
> Purpose of Ballot SC-063
> This Ballot proposes updates to the Baseline Requirements for the Issuance 
> and Management of Publicly-Trusted Certificates related to making Online 
> Certificate Status Protocol (OCSP) services optional for CAs. This proposal 
> does not prohibit or otherwise restrict CAs who choose to continue supporting 
> OCSP from doing so. If CAs continue supporting OCSP, the same requirements 
> apply as they exist today.
> 
> Additionally, this proposal introduces changes related to CRL requirements 
> including:
> 
> CRLs must conform with the proposed profile.
> CAs must generate and publish either:
> - a full and complete CRL, or
> - a set of partitioned CRLs (sometimes called “sharded” CRLs) that, when
>   aggregated, represent the equivalent of a full and complete CRL.
> CAs issuing Subscriber Certificates must update and publish a new CRL:
> - within twenty-four (24) hours after recording a Certificate as revoked; and
> - otherwise:
>   - at least every seven (7) days if all Certificates include an Authority
>     Information Access extension with an id-ad-ocsp accessMethod (“AIA OCSP
>     pointer”), or
>   - at least every four (4) days in all other cases.
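Read as a scheduling rule, the CRL cadence quoted above could be sketched like this (a hypothetical helper, not ballot text — the function name and parameters are assumptions for illustration):

```python
from datetime import datetime, timedelta

def crl_publish_deadline(event_time, triggered_by_revocation,
                         all_certs_have_aia_ocsp):
    """Sketch of SC-063's proposed cadence: publish a new CRL within 24
    hours of recording a revocation; otherwise at least every 7 days
    when all certificates carry an AIA OCSP pointer, and at least every
    4 days in all other cases."""
    if triggered_by_revocation:
        return event_time + timedelta(hours=24)
    if all_certs_have_aia_ocsp:
        return event_time + timedelta(days=7)
    return event_time + timedelta(days=4)
```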
> 
> Finally, the proposal revisits the concept of a “short-lived” certificate,
> introduced in Ballot 153. As described in this ballot, short-lived
> certificates (sometimes called “short-term certificates” in ETSI
> specifications) are:
> 
> - optional. CAs will not be required to issue short-lived certificates. For
>   TLS certificates that do not meet the definition of a short-lived
>   certificate introduced in this proposed update, the current maximum
>   validity period of 398 days remains applicable.
> - constrained to an initial maximum validity period of ten (10) days. The
>   proposal stipulates that short-lived certificates issued on or after 15
>   March 2026 must not have a Validity Period greater than seven (7) days.
> - not required to contain a CRLDP or OCSP pointer and not required to be
>   revoked. The primary mechanism of certificate invalidation for these
>   short-lived certificates would be through certificate expiry. CAs may
>   optionally revoke short-lived certificates. The initial maximum
>   certificate validity is aligned with the existing maximum values for CRL
>   “nextUpdate” and OCSP response validity allowed by the BRs today.
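The validity schedule for short-lived certificates described above can likewise be sketched (the function name is hypothetical; the 10-day and 7-day caps and the 15 March 2026 cutover come from the quoted ballot text):

```python
from datetime import date, timedelta

def max_short_lived_validity(issued_on):
    """Maximum Validity Period for a short-lived certificate under the
    SC-063 proposal: 10 days initially, dropping to 7 days for
    certificates issued on or after 15 March 2026."""
    cutover = date(2026, 3, 15)
    return timedelta(days=7) if issued_on >= cutover else timedelta(days=10)
```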
> 
> Additional background, justification, and considerations are outlined here.
> 
> Proposal Revision History:
> 
> The set of updates resulting from the first round of discussion are
> presented here.
> The set of updates resulting from the second round of discussion are
> presented here.
> The set of updates resulting from the third round of discussion are
> presented here.
> 
> The following motion has been proposed by Ryan Dickson and Chris Clements of 
> Google (Chrome Root Program) and endorsed by Kiran Tummala of Microsoft and 
> Tim Callan of Sectigo.
> 
> 
> — Motion Begins —
> 
> This ballot modifies the “Baseline Requirements for the Issuance and 
> Management of Publicly-Trusted Certificates” (“Baseline Requirements”), based 
> on Version 2.0.0.
> 
> MODIFY the Baseline Requirements as specified in the following Redline: 
> https://github.com/cabforum/servercert/compare/a0360b61e73476959220dc328e3b68d0224fa0b3..b8a0453e59ff342779d5083f2f1f8b8b5930a66a
>  
> 
> 
> — Motion Ends —
> 
> This ballot proposes a Final Maintenance Guideline. The procedure for 
> approval of this ballot is as follows:
> 
> Discussion (13+ days)
> Start time: 2023-06-22 20:30:00 UTC
> End time: 2023-07-06 15:59:59 UTC
> 
> Vote for approval (7 days)
> Start time: 2023-07-06 16:00:00 UTC
> End time: 2023-07-13 16:00:00 UTC
> ___
> Servercert-wg mailing list
> Servercert-wg@cabforum.org
> https://lists.cabforum.org/mailman/listinfo/servercert-wg



smime.p7s
Description: S/MIME cryptographic signature
___
Servercert-wg mailing list
Servercert-wg@cabforum.org
https://lists.cabforum.org/mailman/listinfo/servercert-wg


[ovs-dev] [PATCH v5] python: Add async DNS support

2023-07-11 Thread Terry Wilson
This adds a Python version of the async DNS support added in:

771680d96 DNS: Add basic support for asynchronous DNS resolving

The above version uses the unbound C library, and this
implementation uses the SWIG-wrapped Python version of that.

In the event that the Python unbound library is not available,
a warning will be logged and the resolve() method will just
return None. For the case where inet_parse_active() is passed
an IP address, it will not try to resolve it, so existing
behavior should be preserved in the case that the unbound
library is unavailable.

Intentional differences from the C version are as follows:

  OVS_HOSTS_FILE environment variable can be set to override
  the system 'hosts' file. This is primarily to allow testing to
  be done without requiring network connectivity.

  Since resolution can still be done via hosts file lookup, DNS
  lookups are not disabled when resolv.conf cannot be loaded.

  The Python socket_util module has fallen behind its C equivalent.
  The bare minimum change was done to inet_parse_active() to support
  sync/async dns, as there is no equivalent to
  parse_sockaddr_components(), inet_parse_passive(), etc. A TODO
  was added to bring socket_util.py up to equivalency to the C
  version.
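The graceful-fallback behavior described above can be sketched as follows. This is a minimal illustration of the commit's approach, not the patch itself — the real `python/ovs/dns_resolve.py` also wires in resolv.conf handling, hosts-file lookup, and asynchronous resolution via the unbound context:

```python
import logging

try:
    import unbound  # provided by python3-unbound; may be absent
except ImportError:
    unbound = None

vlog = logging.getLogger("dns_resolve")

def resolve(name):
    """Return a resolved address for name, or None.

    If the SWIG-wrapped unbound bindings are unavailable, log a warning
    and return None, so inet_parse_active() callers fall back to
    treating the input as a literal IP address (preserving existing
    behavior, as the commit message describes).
    """
    if unbound is None:
        vlog.warning("python3-unbound not installed; cannot resolve %r", name)
        return None
    # A real lookup via an unbound context would go here; this stub
    # keeps the sketch self-contained.
    return None
```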

Signed-off-by: Terry Wilson 
---
 .github/workflows/build-and-test.yml|   4 +-
 Documentation/intro/install/general.rst |   4 +-
 Documentation/intro/install/rhel.rst|   2 +-
 Documentation/intro/install/windows.rst |   2 +-
 NEWS|   3 +
 debian/control.in   |   1 +
 m4/openvswitch.m4   |   8 +-
 python/TODO.rst |   7 +
 python/automake.mk  |   2 +
 python/ovs/dns_resolve.py   | 286 
 python/ovs/socket_util.py   |  21 +-
 python/ovs/stream.py|   2 +-
 python/ovs/tests/test_dns_resolve.py| 280 +++
 python/setup.py |   6 +-
 rhel/openvswitch-fedora.spec.in |   2 +-
 tests/vlog.at   |   2 +
 16 files changed, 615 insertions(+), 17 deletions(-)
 create mode 100644 python/ovs/dns_resolve.py
 create mode 100644 python/ovs/tests/test_dns_resolve.py

diff --git a/.github/workflows/build-and-test.yml 
b/.github/workflows/build-and-test.yml
index f66ab43b0..47d239f10 100644
--- a/.github/workflows/build-and-test.yml
+++ b/.github/workflows/build-and-test.yml
@@ -183,10 +183,10 @@ jobs:
   run:  sudo apt update || true
 - name: install common dependencies
   run:  sudo apt install -y ${{ env.dependencies }}
-- name: install libunbound libunwind
+- name: install libunbound libunwind python3-unbound
   # GitHub Actions doesn't have 32-bit versions of these libraries.
   if:   matrix.m32 == ''
-  run:  sudo apt install -y libunbound-dev libunwind-dev
+  run:  sudo apt install -y libunbound-dev libunwind-dev python3-unbound
 - name: install 32-bit libraries
   if:   matrix.m32 != ''
   run:  sudo apt install -y gcc-multilib
diff --git a/Documentation/intro/install/general.rst 
b/Documentation/intro/install/general.rst
index 42b5682fd..19e360d47 100644
--- a/Documentation/intro/install/general.rst
+++ b/Documentation/intro/install/general.rst
@@ -90,7 +90,7 @@ need the following software:
   If libcap-ng is installed, then Open vSwitch will automatically build with
   support for it.
 
-- Python 3.4 or later.
+- Python 3.6 or later.
 
 - Unbound library, from http://www.unbound.net, is optional but recommended if
   you want to enable ovs-vswitchd and other utilities to use DNS names when
@@ -208,7 +208,7 @@ simply install and run Open vSwitch you require the 
following software:
   from iproute2 (part of all major distributions and available at
   https://wiki.linuxfoundation.org/networking/iproute2).
 
-- Python 3.4 or later.
+- Python 3.6 or later.
 
 On Linux you should ensure that ``/dev/urandom`` exists. To support TAP
 devices, you must also ensure that ``/dev/net/tun`` exists.
diff --git a/Documentation/intro/install/rhel.rst 
b/Documentation/intro/install/rhel.rst
index d1fc42021..f2151d890 100644
--- a/Documentation/intro/install/rhel.rst
+++ b/Documentation/intro/install/rhel.rst
@@ -92,7 +92,7 @@ Once that is completed, remove the file ``/tmp/ovs.spec``.
 If python3-sphinx package is not available in your version of RHEL, you can
 install it via pip with 'pip install sphinx'.
 
-Open vSwitch requires python 3.4 or newer which is not available in older
+Open vSwitch requires python 3.6 or newer which is not available in older
 distributions. In the case of RHEL 6.x and its derivatives, one option is
 to install python34 from `EPEL`_.
 
diff --git a/Documentation/intro/install/windows.rst 
b/Documentation/intro/install/windows.rst
index 78f60f35a..fce099d5d 100644
--- a/Documentation/intro/install/windows.rst
+++ b/Documentation/intro/install/windows.rst

[ovs-dev] [PATCH v4] python: Add async DNS support

2023-07-11 Thread Terry Wilson
This adds a Python version of the async DNS support added in:

771680d96 DNS: Add basic support for asynchronous DNS resolving

The above version uses the unbound C library, and this
implementation uses the SWIG-wrapped Python version of that.

In the event that the Python unbound library is not available,
a warning will be logged and the resolve() method will just
return None. For the case where inet_parse_active() is passed
an IP address, it will not try to resolve it, so existing
behavior should be preserved in the case that the unbound
library is unavailable.

Intentional differences from the C version are as follows:

  OVS_HOSTS_FILE environment variable can be set to override
  the system 'hosts' file. This is primarily to allow testing to
  be done without requiring network connectivity.

  Since resolution can still be done via hosts file lookup, DNS
  lookups are not disabled when resolv.conf cannot be loaded.

  The Python socket_util module has fallen behind its C equivalent.
  The bare minimum change was done to inet_parse_active() to support
  sync/async dns, as there is no equivalent to
  parse_sockaddr_components(), inet_parse_passive(), etc. A TODO
  was added to bring socket_util.py up to equivalency to the C
  version.

Signed-off-by: Terry Wilson 
---
 .github/workflows/build-and-test.yml|   4 +-
 Documentation/intro/install/general.rst |   4 +-
 Documentation/intro/install/rhel.rst|   2 +-
 Documentation/intro/install/windows.rst |   2 +-
 NEWS|   3 +
 debian/control.in   |   1 +
 m4/openvswitch.m4   |   8 +-
 python/TODO.rst |   7 +
 python/automake.mk  |   2 +
 python/ovs/dns_resolve.py   | 286 
 python/ovs/socket_util.py   |  21 +-
 python/ovs/stream.py|   2 +-
 python/ovs/tests/test_dns_resolve.py| 280 +++
 python/setup.py |   6 +-
 rhel/openvswitch-fedora.spec.in |   2 +-
 tests/vlog.at   |   2 +
 16 files changed, 615 insertions(+), 17 deletions(-)
 create mode 100644 python/ovs/dns_resolve.py
 create mode 100644 python/ovs/tests/test_dns_resolve.py

diff --git a/.github/workflows/build-and-test.yml 
b/.github/workflows/build-and-test.yml
index f66ab43b0..47d239f10 100644
--- a/.github/workflows/build-and-test.yml
+++ b/.github/workflows/build-and-test.yml
@@ -183,10 +183,10 @@ jobs:
   run:  sudo apt update || true
 - name: install common dependencies
   run:  sudo apt install -y ${{ env.dependencies }}
-- name: install libunbound libunwind
+- name: install libunbound libunwind python3-unbound
   # GitHub Actions doesn't have 32-bit versions of these libraries.
   if:   matrix.m32 == ''
-  run:  sudo apt install -y libunbound-dev libunwind-dev
+  run:  sudo apt install -y libunbound-dev libunwind-dev python3-unbound
 - name: install 32-bit libraries
   if:   matrix.m32 != ''
   run:  sudo apt install -y gcc-multilib
diff --git a/Documentation/intro/install/general.rst 
b/Documentation/intro/install/general.rst
index 42b5682fd..19e360d47 100644
--- a/Documentation/intro/install/general.rst
+++ b/Documentation/intro/install/general.rst
@@ -90,7 +90,7 @@ need the following software:
   If libcap-ng is installed, then Open vSwitch will automatically build with
   support for it.
 
-- Python 3.4 or later.
+- Python 3.6 or later.
 
 - Unbound library, from http://www.unbound.net, is optional but recommended if
   you want to enable ovs-vswitchd and other utilities to use DNS names when
@@ -208,7 +208,7 @@ simply install and run Open vSwitch you require the 
following software:
   from iproute2 (part of all major distributions and available at
   https://wiki.linuxfoundation.org/networking/iproute2).
 
-- Python 3.4 or later.
+- Python 3.6 or later.
 
 On Linux you should ensure that ``/dev/urandom`` exists. To support TAP
 devices, you must also ensure that ``/dev/net/tun`` exists.
diff --git a/Documentation/intro/install/rhel.rst 
b/Documentation/intro/install/rhel.rst
index d1fc42021..f2151d890 100644
--- a/Documentation/intro/install/rhel.rst
+++ b/Documentation/intro/install/rhel.rst
@@ -92,7 +92,7 @@ Once that is completed, remove the file ``/tmp/ovs.spec``.
 If python3-sphinx package is not available in your version of RHEL, you can
 install it via pip with 'pip install sphinx'.
 
-Open vSwitch requires python 3.4 or newer which is not available in older
+Open vSwitch requires python 3.6 or newer which is not available in older
 distributions. In the case of RHEL 6.x and its derivatives, one option is
 to install python34 from `EPEL`_.
 
diff --git a/Documentation/intro/install/windows.rst 
b/Documentation/intro/install/windows.rst
index 78f60f35a..fce099d5d 100644
--- a/Documentation/intro/install/windows.rst
+++ b/Documentation/intro/install/windows.rst

Re: Review of e-Tugra's Inclusion in Mozilla’s Root Store

2023-07-11 Thread Ben Wilson
ome users' (non SSL) agreements are accessed by the
reporter” and “No sensitive information disclosure revealed in this issue”.
They also said, “no sensitive information is logged in the logging system”.
However, these statements were refuted by the fact that the contents of
568,647 emails and 251,230 documents were accessible, including access to
password resets and confirmation codes.

e-Tugra representatives first said there was “no connection” with CA
systems and then said that there was a "one-way flow".  Then they said that
their customer portal would allow reissuance for already-validated domain
validation certificates, but that the accessed application had no impact on
the process. This was contradictory because the password-reset function
could have been compromised with information in the log to change a user’s
password.

Root Removal

The responses from e-Tugra’s representatives demonstrate that this CA lacks
the expertise needed to operate a sufficiently secure environment for a
publicly trusted CA. As such, we will remove the following root
certificates in our next batch of changes to Mozilla’s root store:

E-Tugra Global Root CA RSA v3

SHA-1 Fingerprint: E9A85D2214521C5BAA0AB4BE246A238AC9BAE2A9

SHA-256 Fingerprint:
EF66B0B10A3CDB9F2E3648C76BD2AF18EAD2BFE6F117655E28C4060DA1A3F4C2

E-Tugra Global Root CA ECC v3

SHA-1 Fingerprint: 8A2FAF5753B1B0E6A104EC5B6A69716DF61CE284

SHA-256 Fingerprint:
873F4685FA7F563625252E6D36BCD7F16FC24951F264E47E1B954F4908CDCA13

Note: We will also be removing the expired “e-Tugra Certification
Authority” root certificate, as previously planned.

Follow-up

e-Tugra may continue to be a reseller to another CA operator with root
certificates in Mozilla’s root store, as long as that other CA operator
ensures correct systems, procedures, and processes in relation to reselling
certificates within their CA hierarchy, and all domain validation and
certificate issuance is performed by that other CA whose root certificates
are included in Mozilla’s root store.

e-Tugra may not become an externally-operated subordinate CA
<https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy#84-externally-operated-subordinate-cas>
chaining up to a root certificate in Mozilla’s root store, and in order for
e-Tugra representatives to apply to have new root certificates included in
Mozilla’s root store in the future, they would need to:

   -

   Provide sufficiently detailed artifacts and testing to demonstrate that
   they have implemented an infrastructure with processes in place to ensure
   that systems are rigorously secured and thoroughly hardened.
   -

   Create new root certificates within the properly configured/secured
   infrastructure.
   -

   Re-apply for inclusion in Mozilla’s root store, and go through Mozilla’s
   full root inclusion process
   <https://wiki.mozilla.org/CA/Application_Process>.


Thanks,

Ben and Kathleen

On Mon, Jun 5, 2023 at 11:36 AM Ben Wilson  wrote:

> Dear Mozilla Community,
>
> This email relates to the e-Tugra breach that was described in a blog
> post by Ian Carroll <https://ian.sh/etugra> and subsequent discussions
> here
> <https://groups.google.com/a/mozilla.org/g/dev-security-policy/c/yqALPG5PC4s/m/sIkv6eLJ>
> and in CCADB Public
> <https://groups.google.com/a/mozilla.org/g/dev-security-policy/c/yqALPG5PC4s/m/sIkv6eLJ>
> and Bugzilla <https://bugzilla.mozilla.org/show_bug.cgi?id=1801345>. We
> are grateful for the involvement of various individuals and
> organizations, particularly Ian Carroll and Google Chrome, who have
> contributed their expertise and resources while investigating this breach.
>
> We are now opening this discussion to help determine whether the e-Tugra
> root certificates should be removed from Mozilla’s Root Store. We will
> greatly appreciate your thoughtful and constructive feedback on this.
>
> Below are some questions for us to consider in this discussion.
>
> What were the main concerns raised by the community during the discussions
> that took place in Bugzilla Bug #1801345
> <https://bugzilla.mozilla.org/show_bug.cgi?id=1801345>? (this is not a
> complete list; details may be found in the bug)
>
>-
>
>Mr. Carroll indicated that he was able to log in and conduct
>reconnaissance on e-Tugra’s email and document storage systems, gaining
>access to customer PII.
>-
>
>   There were security holes in e-Tugra’s internal systems that
>   existed because access to internal resources was not adequately secured,
>   namely, default passwords on some administrative tools had not been
>   changed, and internal applications were left exposed without appropriate
>   access control mechanisms.
>   -
>
>   Statements by e-Tugra about the lack of impact were refuted by the
>  

MRSP 2.9: Issues #252 and #266 - Incident Reporting

2023-07-11 Thread Ben Wilson
All,

We are proposing to revise Mozilla Root Store Policy (MRSP) Section 2.4
(Incidents) to address GitHub Issue #252 and Issue #266.

*Issue #252 - Requirements for Reporting CA Security Incidents*

As noted in Issue #252, more guidance is needed for reporting security
incidents to Mozilla. I am drafting a wiki page that will outline what is a
reportable security incident and what a security incident report should
contain. Thus, MRSP section 2.4 will be amended to read something to the
effect, " 'Reportable Security Incident' means any security event, breach,
or compromise that has the potential to significantly impact the
confidentiality, integrity, or availability of CA infrastructure, CA
systems, or the trustworthiness of issued certificates.  A Reportable
Security Incident MUST be reported with a security incident report in
Bugzilla [link to Bugzilla security incident report template] as soon as
possible and no later than __ hours, as described in [wiki page].
Additionally, other important security incidents and compromises of a CA
operator's internal systems SHOULD be reported."

*Issue #266 – Update reference to https://www.ccadb.org/cas/incident-report*

Also, Issue #266 will be addressed by pointing to the CCADB's incident
report requirements.  The following language in MRSP section 2.4 will be
amended to read, "CA Operators must report incidents to Mozilla in the form
of an Incident Report that follows guidance provided on the CCADB website -
https://www.ccadb.org/cas/incident-report."

I look forward to your comments and suggestions regarding security incident
reporting.

Thanks,

Ben

-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/CA%2B1gtaYDSSBvzsU7HJhoDk-tFbrt4rJP3Vyep8MPvCY%3DA447vg%40mail.gmail.com.


Re: Ignite data region off-heap allocation

2023-07-11 Thread Raymond Wilson
How do Ignite .Net server nodes manage this memory issue in other
projects?

On Tue, Jul 11, 2023 at 5:32 PM Raymond Wilson 
wrote:

> Oops, commutes => committed
>
> On Tue, 11 Jul 2023 at 4:34 PM, Raymond Wilson 
> wrote:
>
>> I can’t see another way of letting . Net know that it can’t have access
>> to all the ‘free’ memory in the process when a large slab of that is spoken
>> for in terms of memory commutes to Ignite data regions.
>>
>> In the current setup, as time goes on and Ignite progressively fills the
>> allocated cache ram then system behaviour changes and can result in out of
>> memory issues. I think I would prefer consistent system behaviour wrt to
>> allocated resources from the start.
>>
>> Raymond.
>>
>> On Tue, 11 Jul 2023 at 3:57 PM, Pavel Tupitsyn 
>> wrote:
>>
>>> Are you sure this is necessary?
>>>
>>> GC.AddMemoryPressure documentation [1] states that this will "improve
>>> performance only for types that exclusively depend on finalizers".
>>>
>>> [1]
>>> https://learn.microsoft.com/en-us/dotnet/api/system.gc.addmemorypressure?view=net-7.0
>>>
>>> On Tue, Jul 11, 2023 at 1:02 AM Raymond Wilson <
>>> raymond_wil...@trimble.com> wrote:
>>>
>>>> I'm making changes to add memory pressure to the GC to take into
>>>> account memory committed to the Ignite data regions as this will be
>>>> unmanaged memory allocations from the perspective of the GC.
>>>>
>>>> I don't recall seeing anything related to this for .Net clients in the
>>>> documentation. Are you aware of any?
>>>>
>>>> Raymond.
>>>>
>>>> On Mon, Jul 10, 2023 at 9:41 PM Raymond Wilson <
>>>> raymond_wil...@trimble.com> wrote:
>>>>
>>>>> Thanks Pavel, this makes sense.
>>>>>
>>>>> Querying the .Net Process instance shows this as the difference
>>>>> between PagedMemorySize (includes committed) versus WorkingSet (includes
>>>>> uses/written to) size.
>>>>> Raymond.
>>>>>
>>>>>
>>>>
>>>> --
>>>> <http://www.trimble.com/>
>>>> Raymond Wilson
>>>> Trimble Distinguished Engineer, Civil Construction Software (CCS)
>>>> 11 Birmingham Drive | Christchurch, New Zealand
>>>> raymond_wil...@trimble.com
>>>>
>>>>
>>>>
>>> --
>> <http://www.trimble.com/>
>> Raymond Wilson
>> Trimble Distinguished Engineer, Civil Construction Software (CCS)
>> 11 Birmingham Drive | Christchurch, New Zealand
>> raymond_wil...@trimble.com
>>
>>
>>
> --
> <http://www.trimble.com/>
> Raymond Wilson
> Trimble Distinguished Engineer, Civil Construction Software (CCS)
> 11 Birmingham Drive | Christchurch, New Zealand
> raymond_wil...@trimble.com
>
>
>


-- 
<http://www.trimble.com/>
Raymond Wilson
Trimble Distinguished Engineer, Civil Construction Software (CCS)
11 Birmingham Drive | Christchurch, New Zealand
raymond_wil...@trimble.com



Re: Ignite data region off-heap allocation

2023-07-10 Thread Raymond Wilson
Oops, commutes => committed

On Tue, 11 Jul 2023 at 4:34 PM, Raymond Wilson 
wrote:

> I can’t see another way of letting . Net know that it can’t have access to
> all the ‘free’ memory in the process when a large slab of that is spoken
> for in terms of memory commutes to Ignite data regions.
>
> In the current setup, as time goes on and Ignite progressively fills the
> allocated cache ram then system behaviour changes and can result in out of
> memory issues. I think I would prefer consistent system behaviour wrt to
> allocated resources from the start.
>
> Raymond.
>
> On Tue, 11 Jul 2023 at 3:57 PM, Pavel Tupitsyn 
> wrote:
>
>> Are you sure this is necessary?
>>
>> GC.AddMemoryPressure documentation [1] states that this will "improve
>> performance only for types that exclusively depend on finalizers".
>>
>> [1]
>> https://learn.microsoft.com/en-us/dotnet/api/system.gc.addmemorypressure?view=net-7.0
>>
>> On Tue, Jul 11, 2023 at 1:02 AM Raymond Wilson <
>> raymond_wil...@trimble.com> wrote:
>>
>>> I'm making changes to add memory pressure to the GC to take into account
>>> memory committed to the Ignite data regions as this will be unmanaged
>>> memory allocations from the perspective of the GC.
>>>
>>> I don't recall seeing anything related to this for .Net clients in the
>>> documentation. Are you aware of any?
>>>
>>> Raymond.
>>>
>>> On Mon, Jul 10, 2023 at 9:41 PM Raymond Wilson <
>>> raymond_wil...@trimble.com> wrote:
>>>
>>>> Thanks Pavel, this makes sense.
>>>>
>>>> Querying the .Net Process instance shows this as the difference between
>>>> PagedMemorySize (includes committed) versus WorkingSet (includes
>>>> used/written-to) size.
>>>> Raymond.
>>>>
>>>>
>>>
>>> --
>>> <http://www.trimble.com/>
>>> Raymond Wilson
>>> Trimble Distinguished Engineer, Civil Construction Software (CCS)
>>> 11 Birmingham Drive | Christchurch, New Zealand
>>> raymond_wil...@trimble.com
>>>
>>>
>>> <https://worksos.trimble.com/?utm_source=Trimble_medium=emailsign_campaign=Launch>
>>>
>> --
> <http://www.trimble.com/>
> Raymond Wilson
> Trimble Distinguished Engineer, Civil Construction Software (CCS)
> 11 Birmingham Drive | Christchurch, New Zealand
> raymond_wil...@trimble.com
>
>
> <https://worksos.trimble.com/?utm_source=Trimble_medium=emailsign_campaign=Launch>
>
-- 
<http://www.trimble.com/>
Raymond Wilson
Trimble Distinguished Engineer, Civil Construction Software (CCS)
11 Birmingham Drive | Christchurch, New Zealand
raymond_wil...@trimble.com

<https://worksos.trimble.com/?utm_source=Trimble_medium=emailsign_campaign=Launch>


Re: Ignite data region off-heap allocation

2023-07-10 Thread Raymond Wilson
I can’t see another way of letting .Net know that it can’t have access to
all the ‘free’ memory in the process when a large slab of that is spoken
for in terms of memory commutes to Ignite data regions.

In the current setup, as time goes on and Ignite progressively fills the
allocated cache ram then system behaviour changes and can result in out of
memory issues. I think I would prefer consistent system behaviour wrt
allocated resources from the start.

Raymond.

On Tue, 11 Jul 2023 at 3:57 PM, Pavel Tupitsyn  wrote:

> Are you sure this is necessary?
>
> GC.AddMemoryPressure documentation [1] states that this will "improve
> performance only for types that exclusively depend on finalizers".
>
> [1]
> https://learn.microsoft.com/en-us/dotnet/api/system.gc.addmemorypressure?view=net-7.0
>
> On Tue, Jul 11, 2023 at 1:02 AM Raymond Wilson 
> wrote:
>
>> I'm making changes to add memory pressure to the GC to take into account
>> memory committed to the Ignite data regions as this will be unmanaged
>> memory allocations from the perspective of the GC.
>>
>> I don't recall seeing anything related to this for .Net clients in the
>> documentation. Are you aware of any?
>>
>> Raymond.
>>
>> On Mon, Jul 10, 2023 at 9:41 PM Raymond Wilson <
>> raymond_wil...@trimble.com> wrote:
>>
>>> Thanks Pavel, this makes sense.
>>>
>>> Querying the .Net Process instance shows this as the difference between
>>> PagedMemorySize (includes committed) versus WorkingSet (includes
>>> used/written-to) size.
>>> Raymond.
>>>
>>>
>>
>> --
>> <http://www.trimble.com/>
>> Raymond Wilson
>> Trimble Distinguished Engineer, Civil Construction Software (CCS)
>> 11 Birmingham Drive | Christchurch, New Zealand
>> raymond_wil...@trimble.com
>>
>>
>> <https://worksos.trimble.com/?utm_source=Trimble_medium=emailsign_campaign=Launch>
>>
> --
<http://www.trimble.com/>
Raymond Wilson
Trimble Distinguished Engineer, Civil Construction Software (CCS)
11 Birmingham Drive | Christchurch, New Zealand
raymond_wil...@trimble.com

<https://worksos.trimble.com/?utm_source=Trimble_medium=emailsign_campaign=Launch>


Re: Ignite data region off-heap allocation

2023-07-10 Thread Raymond Wilson
I'm making changes to add memory pressure to the GC to take into account
memory committed to the Ignite data regions as this will be unmanaged
memory allocations from the perspective of the GC.

I don't recall seeing anything related to this for .Net clients in the
documentation. Are you aware of any?

Raymond.

On Mon, Jul 10, 2023 at 9:41 PM Raymond Wilson 
wrote:

> Thanks Pavel, this makes sense.
>
> Querying the .Net Process instance shows this as the difference between
> PagedMemorySize (includes committed) versus WorkingSet (includes
> used/written-to) size.
> Raymond.
>
>

-- 
<http://www.trimble.com/>
Raymond Wilson
Trimble Distinguished Engineer, Civil Construction Software (CCS)
11 Birmingham Drive | Christchurch, New Zealand
raymond_wil...@trimble.com

<https://worksos.trimble.com/?utm_source=Trimble_medium=emailsign_campaign=Launch>


[Translators-l] Re: Ready for translation: Tech News #28 (2023)

2023-07-10 Thread Nick Wilson (Quiddity)
Thank you all for your help! It is deeply appreciated. The newsletter has
now been delivered (in 20 languages) to 1,074 pages.
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


[Wikitech-ambassadors] Tech News 2023, week 28

2023-07-10 Thread Nick Wilson (Quiddity)
The latest technical newsletter is now available at
https://meta.wikimedia.org/wiki/Special:MyLanguage/Tech/News/2023/28. Below
is the English version.
You can help write the next newsletter: Whenever you see information about
Wikimedia technology that you think should be distributed more broadly, you
can add it to the next newsletter at
https://meta.wikimedia.org/wiki/Tech/News/Next .
More information on how to contribute is available. You can also contact me
directly.
As always, feedback (on- or off-list) is appreciated and encouraged.
——
Other languages: Bahasa Indonesia, Deutsch, English, Tiếng Việt, Türkçe,
español, français, galego, italiano, norsk bokmål, polski, suomi, svenska,
čeština, русский, українська, עברית, فارسی, हिन्दी, বাংলা, ಕನ್ನಡ, 中文, 日本語, 한국어


Latest *tech news* from the Wikimedia technical community. Please tell other
users about these changes. Not all changes will affect you. Translations are
available.

*Recent changes*

   - The Section-level Image Suggestions feature has been deployed on seven
   Wikipedias (Portuguese, Russian, Indonesian, Catalan, Hungarian, Finnish
   and Norwegian Bokmål). The feature recommends images for articles on
   contributors' watchlists that are a good match for individual sections of
   those articles.
   - Global abuse filters have been enabled on all Wikimedia projects, except
   English and Japanese Wikipedias (which opted out). This change was made
   following a global request for comments. [1]
   - Special:BlockedExternalDomains is a new tool for administrators to help
   fight spam. It provides a clearer interface for blocking plain domains
   (and their subdomains), is more easily searchable, and is faster for the
   software to process for each edit on the wiki. It does not support regex
   (for complex cases), nor URL path-matching, nor the
   MediaWiki:Spam-whitelist, but otherwise it replaces most of the
   functionality of the existing MediaWiki:Spam-blacklist. There is a Python
   script to help migrate all simple domains into this tool, and more feature
   details, within the tool's documentation. It is available at all wikis
   except for Meta-wiki, Commons, and Wikidata. [2]
   - The WikiEditor extension was updated. It includes some of the most
   frequently used features of wikitext editing. In the past, many of its
   messages could only be translated by administrators, but now all regular
   translators on translatewiki can translate them. Please check the state of
   WikiEditor localization into your language, and if the "Completion" for
   your language shows anything less than 100%, please complete the
   translation. See a more detailed explanation.


Re: [go-cd] Go-Agent || CVE-2022-42889

2023-07-10 Thread Chad Wilson
Hiya

GoCD has been using commons-text 1.10 (with the issue you refer to fixed)
since GoCD 22.3.0:
https://github.com/gocd/gocd/commit/293022076385c48c9fb41485b5674fa2e69c29c1

The agent *bootstrapper* doesn't use commons-text at all, however the agent
jar which is dynamically downloaded from the server and matches the
server's version does use commons-text. You might want to double-check that
your server is running GoCD 22.3.0 or later.

-Chad

On Mon, Jul 10, 2023 at 11:06 PM Mai M. Khattab 
wrote:

> Hello There,
> Any idea if there is a remediation for (CVE-2022-42889 - Arbitrary
> code execution in Apache Commons Text · CVE-2022-42889 · GitHub Advisory
> Database   ) on
> (go-agent), please?
> I am using go-agent (v23.1) and I found it is using commons-text (v1.9)
> Regards,
>
> --
> You received this message because you are subscribed to the Google Groups
> "go-cd" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to go-cd+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/go-cd/29cd81fe-b404-41c8-8db4-260e1204d00cn%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"go-cd" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to go-cd+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/go-cd/CAA1RwH-MHbMNTohr%3DTODFWgg7CysPi5Y2Met-8%3D6rrjfV7id_g%40mail.gmail.com.
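Since the agent jar is fetched from the server at runtime, the commons-text version it actually carries can be confirmed by inspecting that jar. A hedged sketch; the `lib/` layout and Maven-style file naming are assumptions for illustration, not documented GoCD packaging:

```python
import re
import zipfile

def commons_text_versions(jar):
    """Return the commons-text versions found inside a jar/zip archive.

    `jar` may be a filesystem path or a file-like object. Entries are
    matched by the conventional Maven artifact name
    commons-text-<version>.jar.
    """
    pattern = re.compile(r"commons-text-(\d+(?:\.\d+)+)\.jar$")
    found = set()
    with zipfile.ZipFile(jar) as z:
        for name in z.namelist():
            m = pattern.search(name)
            if m:
                found.add(m.group(1))
    return sorted(found)
```

Run against the downloaded agent jar, any result below 1.10 would indicate the server is older than 22.3.0.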


Re: [ovs-dev] [PATCH v3] python: Add async DNS support

2023-07-10 Thread Terry Wilson
On Mon, Jul 10, 2023 at 10:32 AM Terry Wilson  wrote:
>
> I accidentally forgot to click reply-to-all.
>
> On Fri, Jun 30, 2023 at 10:27 AM Ilya Maximets  wrote:
> >
> > On 6/30/23 16:54, Adrian Moreno wrote:
> > >
> > >
> > > On 6/30/23 14:35, Ilya Maximets wrote:
> > >> On 6/30/23 14:23, Adrian Moreno wrote:
> > >>>
> > >>>
> > >>> On 6/30/23 13:40, Ilya Maximets wrote:
> > >>>> On 6/30/23 12:39, Adrian Moreno wrote:
> > >>>>>
> > >>>>>
> > >>>>> On 6/14/23 23:07, Terry Wilson wrote:
> > >>>>>> This adds a Python version of the async DNS support added in:
> > >>>>>>
> > >>>>>> 771680d96 DNS: Add basic support for asynchronous DNS resolving
> > >>>>>>
> > >>>>>> The above version uses the unbound C library, and this
> > >>>>>> implementation uses the SWIG-wrapped Python version of that.
> > >>>>>>
> > >>>>>> In the event that the Python unbound library is not available,
> > >>>>>> a warning will be logged and the resolve() method will just
> > >>>>>> return None. For the case where inet_parse_active() is passed
> > >>>>>> an IP address, it will not try to resolve it, so existing
> > >>>>>> behavior should be preserved in the case that the unbound
> > >>>>>> library is unavailable.
> > >>>>>>
> > >>>>>> Intentional differences from the C version are as follows:
> > >>>>>>
> > >>>>>>  OVS_HOSTS_FILE environment variable can be set to override
> > >>>>>>  the system 'hosts' file. This is primarily to allow testing to
> > >>>>>>  be done without requiring network connectivity.
> > >>>>>>
> > >>>>>>  Since resolution can still be done via hosts file lookup, DNS
> > >>>>>>  lookups are not disabled when resolv.conf cannot be loaded.
> > >>>>>>
> > >>>>>>  The Python socket_util module has fallen behind its C 
> > >>>>>> equivalent.
> > >>>>>>  The bare minimum change was done to inet_parse_active() to 
> > >>>>>> support
> > >>>>>>  sync/async dns, as there is no equivalent to
> > >>>>>>  parse_sockaddr_components(), inet_parse_passive(), etc. A TODO
> > >>>>>>  was added to bring socket_util.py up to equivalency to the C
> > >>>>>>  version.
> > >>>>>>
> > >>>>>> Signed-off-by: Terry Wilson 
> > >>>>>> ---
> > >>>>>> .github/workflows/build-and-test.yml|   4 +-
> > >>>>>> Documentation/intro/install/general.rst |   4 +-
> > >>>>>> Documentation/intro/install/rhel.rst|   2 +-
> > >>>>>> Documentation/intro/install/windows.rst |   2 +-
> > >>>>>> NEWS|   4 +-
> > >>>>>> debian/control.in   |   1 +
> > >>>>>> m4/openvswitch.m4   |   8 +-
> > >>>>>> python/TODO.rst |   7 +
> > >>>>>> python/automake.mk  |   2 +
> > >>>>>> python/ovs/dns_resolve.py   | 272 
> > >>>>>> +++
> > >>>>>> python/ovs/socket_util.py   |  21 +-
> > >>>>>> python/ovs/stream.py|   2 +-
> > >>>>>> python/ovs/tests/test_dns_resolve.py| 280 
> > >>>>>> 
> > >>>>>> python/setup.py |   6 +-
> > >>>>>> rhel/openvswitch-fedora.spec.in |   2 +-
> > >>>>>> tests/vlog.at   |   2 +
> > >>>>>> 16 files changed, 601 insertions(+), 18 deletions(-)
> > >>>>>> create mode 100644 python/ovs/dns_resolve.py
> > >>>>>> create mode 100644 python/ovs/tests/test_dns_resolve.py
> > &
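The fallback behaviour the commit message describes (hosts-file lookup always available; `resolve()` returning None when python-unbound is absent) can be sketched roughly as follows. The function name, signature, `hosts_file` parameter, and hosts-file parsing here are illustrative, not the actual `ovs.dns_resolve` API:

```python
import os

try:
    import unbound  # optional dependency (python3-unbound)
    _HAVE_UNBOUND = True
except ImportError:
    _HAVE_UNBOUND = False

def resolve(name, hosts_file=None):
    """Resolve a hostname to an address string, or return None.

    Hosts-file lookup works even without the unbound library, mirroring
    the patch's behaviour; OVS_HOSTS_FILE overrides the system hosts file
    (useful for testing without network connectivity).
    """
    hosts_file = hosts_file or os.environ.get("OVS_HOSTS_FILE", "/etc/hosts")
    try:
        with open(hosts_file) as f:
            for line in f:
                parts = line.split("#", 1)[0].split()
                if len(parts) >= 2 and name in parts[1:]:
                    return parts[0]
    except OSError:
        pass  # unreadable hosts file: fall through to DNS
    if not _HAVE_UNBOUND:
        return None  # as in the patch: log a warning once and give up
    # Sketch of an unbound lookup (A record by default).
    ctx = unbound.ub_ctx()
    status, result = ctx.resolve(name)
    if status == 0 and result.havedata:
        return result.data.address_list[0]
    return None
```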

Re: [ovs-dev] [PATCH v3] python: Add async DNS support

2023-07-10 Thread Terry Wilson
I accidentally forgot to click reply-to-all.

On Fri, Jun 30, 2023 at 10:27 AM Ilya Maximets  wrote:
>
> On 6/30/23 16:54, Adrian Moreno wrote:
> >
> >
> > On 6/30/23 14:35, Ilya Maximets wrote:
> >> On 6/30/23 14:23, Adrian Moreno wrote:
> >>>
> >>>
> >>> On 6/30/23 13:40, Ilya Maximets wrote:
> >>>> On 6/30/23 12:39, Adrian Moreno wrote:
> >>>>>
> >>>>>
> >>>>> On 6/14/23 23:07, Terry Wilson wrote:
> >>>>>> This adds a Python version of the async DNS support added in:
> >>>>>>
> >>>>>> 771680d96 DNS: Add basic support for asynchronous DNS resolving
> >>>>>>
> >>>>>> The above version uses the unbound C library, and this
> >>>>>> implementation uses the SWIG-wrapped Python version of that.
> >>>>>>
> >>>>>> In the event that the Python unbound library is not available,
> >>>>>> a warning will be logged and the resolve() method will just
> >>>>>> return None. For the case where inet_parse_active() is passed
> >>>>>> an IP address, it will not try to resolve it, so existing
> >>>>>> behavior should be preserved in the case that the unbound
> >>>>>> library is unavailable.
> >>>>>>
> >>>>>> Intentional differences from the C version are as follows:
> >>>>>>
> >>>>>>  OVS_HOSTS_FILE environment variable can be set to override
> >>>>>>  the system 'hosts' file. This is primarily to allow testing to
> >>>>>>  be done without requiring network connectivity.
> >>>>>>
> >>>>>>  Since resolution can still be done via hosts file lookup, DNS
> >>>>>>  lookups are not disabled when resolv.conf cannot be loaded.
> >>>>>>
> >>>>>>  The Python socket_util module has fallen behind its C equivalent.
> >>>>>>  The bare minimum change was done to inet_parse_active() to support
> >>>>>>  sync/async dns, as there is no equivalent to
> >>>>>>  parse_sockaddr_components(), inet_parse_passive(), etc. A TODO
> >>>>>>  was added to bring socket_util.py up to equivalency to the C
> >>>>>>  version.
> >>>>>>
> >>>>>> Signed-off-by: Terry Wilson 
> >>>>>> ---
> >>>>>> .github/workflows/build-and-test.yml|   4 +-
> >>>>>> Documentation/intro/install/general.rst |   4 +-
> >>>>>> Documentation/intro/install/rhel.rst|   2 +-
> >>>>>> Documentation/intro/install/windows.rst |   2 +-
> >>>>>> NEWS|   4 +-
> >>>>>> debian/control.in   |   1 +
> >>>>>> m4/openvswitch.m4   |   8 +-
> >>>>>> python/TODO.rst |   7 +
> >>>>>> python/automake.mk  |   2 +
> >>>>>> python/ovs/dns_resolve.py   | 272 
> >>>>>> +++
> >>>>>> python/ovs/socket_util.py   |  21 +-
> >>>>>> python/ovs/stream.py|   2 +-
> >>>>>> python/ovs/tests/test_dns_resolve.py| 280 
> >>>>>> 
> >>>>>> python/setup.py |   6 +-
> >>>>>> rhel/openvswitch-fedora.spec.in |   2 +-
> >>>>>> tests/vlog.at   |   2 +
> >>>>>> 16 files changed, 601 insertions(+), 18 deletions(-)
> >>>>>> create mode 100644 python/ovs/dns_resolve.py
> >>>>>> create mode 100644 python/ovs/tests/test_dns_resolve.py
> >>>>>>
> >>>>>> diff --git a/.github/workflows/build-and-test.yml 
> >>>>>> b/.github/workflows/build-and-test.yml
> >>>>>> index f66ab43b0..47d239f10 100644
> >>>>>> --- a/.github/workflows/build-and-test.yml
> >>>>>> +++ b/.github/workflows/build-and-test.yml
> >>>>>> @@ -183,10 +183,10 @@ jobs:
> >>>>>>   ru

Re: Ignite data region off-heap allocation

2023-07-10 Thread Raymond Wilson
Thanks Pavel, this makes sense.

Querying the .Net Process instance shows this as the difference between
PagedMemorySize (includes committed) versus WorkingSet (includes
used/written-to) size.
Raymond.


Re: Ignite data region off-heap allocation

2023-07-10 Thread Raymond Wilson
Hi Pavel,

I want to say this should be included in the ‘used’ memory for a process,
but perhaps that is not correct.

Raymond.

On Mon, 10 Jul 2023 at 5:07 PM, Pavel Tupitsyn  wrote:

> Hi Raymond,
>
> "allocated=94407MB" reported by Ignite is "committed" memory - requested
> from the OS, but not entirely used/touched.
>
>
> See
> -
> https://github.com/apache/ignite/blob/df685afb08e3c2297adb8fc6df435a7310e95e50/modules/core/src/main/java/org/apache/ignite/internal/IgniteKernal.java#L2369
> -
> https://serverfault.com/questions/1008584/committed-allocated-memory-in-linux-is-less-than-used-memory-how-is-that-possib
> -
> https://unix.stackexchange.com/questions/137773/is-inactive-memory-related-to-the-commited-but-unused
>
> On Sat, Jul 8, 2023 at 11:44 AM Raymond Wilson 
> wrote:
>
>> Hi,
>>
>> We have an Ignite node reporting off-heap data region allocation like
>> this in the logs:
>>
>> ^-- Off-heap memory [used=37077MB, free=60.81%, allocated=94407MB]
>>
>> The same process (.Net 7 running in a Kubernetes pod with 124Gb allocated
>> out of 128Gb available on the node), reports this level of managed memory
>> usage:
>>
>> Heartbeat: Total managed memory use: 43836.083Mb
>>
>> Clearly ~94Gb + ~44Gb (138Gb) is a lot more than both 128Gb and 124Gb
>>
>> The node in question has the initial and maximum allocation for the data
>> region as 94208Mb (plus the system data region etc), so I expect the Ignite
>> node to have allocated that much (which is indicated by the 94407Mb
>> allocated figure noted in the log line).
>>
>> However, the .Net CLR is reporting nearly 44Gb of managed RAM usage in
>> .Net, so something does not add up. Either .Net is lying about how much it
>> is using, or Ignite is lying about how much RAM it actually allocated.
>>
>> I feel I am missing something here!
>>
>> Thanks,
>> Raymond.
>>
>> --
>> <http://www.trimble.com/>
>> Raymond Wilson
>> Trimble Distinguished Engineer, Civil Construction Software (CCS)
>> 11 Birmingham Drive | Christchurch, New Zealand
>> raymond_wil...@trimble.com
>>
>>
>> <https://worksos.trimble.com/?utm_source=Trimble_medium=emailsign_campaign=Launch>
>>
> --
<http://www.trimble.com/>
Raymond Wilson
Trimble Distinguished Engineer, Civil Construction Software (CCS)
11 Birmingham Drive | Christchurch, New Zealand
raymond_wil...@trimble.com

<https://worksos.trimble.com/?utm_source=Trimble_medium=emailsign_campaign=Launch>
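The committed-versus-resident distinction Pavel points to can be observed directly on Linux by comparing VmSize and VmRSS in /proc. A minimal sketch; the field names are standard Linux /proc fields, and the helper below is editorial, not part of any Ignite API:

```python
def parse_vm_status(text):
    """Extract VmSize (committed virtual address space) and VmRSS
    (resident, i.e. actually-touched pages), in kB, from the content of
    /proc/<pid>/status. Ignite's "allocated" figure corresponds to
    committed memory, which can greatly exceed what is resident.
    """
    vals = {}
    for line in text.splitlines():
        if line.startswith(("VmSize:", "VmRSS:")):
            key, rest = line.split(":", 1)
            vals[key] = int(rest.split()[0])  # /proc reports these in kB
    return vals

# Usage on a live Linux system (hence hedged):
# with open("/proc/self/status") as f:
#     print(parse_vm_status(f.read()))
```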


Ignite data region off-heap allocation

2023-07-08 Thread Raymond Wilson
Hi,

We have an Ignite node reporting off-heap data region allocation like this
in the logs:

^-- Off-heap memory [used=37077MB, free=60.81%, allocated=94407MB]

The same process (.Net 7 running in a Kubernetes pod with 124Gb allocated
out of 128Gb available on the node), reports this level of managed memory
usage:

Heartbeat: Total managed memory use: 43836.083Mb

Clearly ~94Gb + ~44Gb (138Gb) is a lot more than both 128Gb and 124Gb

The node in question has the initial and maximum allocation for the data
region as 94208Mb (plus the system data region etc), so I expect the Ignite
node to have allocated that much (which is indicated by the 94407Mb
allocated figure noted in the log line).

However, the .Net CLR is reporting nearly 44Gb of managed RAM usage in
.Net, so something does not add up. Either .Net is lying about how much it
is using, or Ignite is lying about how much RAM it actually allocated.

I feel I am missing something here!

Thanks,
Raymond.

-- 
<http://www.trimble.com/>
Raymond Wilson
Trimble Distinguished Engineer, Civil Construction Software (CCS)
11 Birmingham Drive | Christchurch, New Zealand
raymond_wil...@trimble.com

<https://worksos.trimble.com/?utm_source=Trimble_medium=emailsign_campaign=Launch>
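Plugging the figures from this message into a back-of-the-envelope check shows the totals only conflict in *committed* terms; the resident sum still fits the pod limit (rough MB arithmetic, GB/GiB rounding ignored):

```python
data_region_committed_mb = 94_407      # Ignite "allocated": committed, not all touched
data_region_used_mb      = 37_077      # pages actually written ("used=37077MB")
managed_heap_mb          = 43_836      # .NET-reported managed memory
pod_limit_mb             = 124 * 1024  # 124Gb pod allocation

# Over the limit only on paper (committed address space)...
committed_total_mb = data_region_committed_mb + managed_heap_mb
# ...while resident memory, which the OOM killer acts on, still fits.
resident_total_mb = data_region_used_mb + managed_heap_mb
```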


[Translators-l] Re: Ready for translation: Tech News #28 (2023)

2023-07-07 Thread Nick Wilson (Quiddity)
On Thu, Jul 6, 2023 at 5:54 PM Nick Wilson (Quiddity) 
wrote:

> The latest tech newsletter is ready for early translation:
> https://meta.wikimedia.org/wiki/Tech/News/2023/28
>
> Direct translation link:
>
> https://meta.wikimedia.org/w/index.php?title=Special:Translate=page-Tech%2FNews%2F2023%2F28=page
>

The text of the newsletter is now final.

*Four items have been added* since yesterday.

There won't be any more changes; you can translate safely. Thanks!
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


Re: [EXTERNAL] Re: Slack Invite request

2023-07-07 Thread Wilson, Amy
Thanks Rawlin!

From: Rawlin Peters 
Date: Thursday, July 6, 2023 at 7:24 PM
To: dev@trafficcontrol.apache.org 
Subject: [EXTERNAL] Re: Slack Invite request
Invite sent!

- Rawlin

On Thu, Jul 6, 2023 at 6:06 AM Mukka Gangaprasad  wrote:
>
> Will someone please invite me to the ASF Slack so I can join the
> #traffic-control channel?
>
> Best regards,
> Gangaprasad.


[grpc-io] [Last Call] Come Speak at gRPConf 2023!

2023-07-07 Thread 'Terry Wilson' via grpc.io


Hello gRPC Community!


This is the last call to get your proposals in to speak at gRPConf 2023. 
The deadline is July 9th (this Sunday).


Submit your proposals and find more information and tips HERE!


Those not interested in speaking are invited to register now for just $50.


We hope to see you all there.


Thank you,

The gRPC team



-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/3f63a155-1c62-4b3d-bdbc-f27666073f63n%40googlegroups.com.


[Translators-l] Ready for translation: Tech News #28 (2023)

2023-07-06 Thread Nick Wilson (Quiddity)
The latest tech newsletter is ready for early translation:
https://meta.wikimedia.org/wiki/Tech/News/2023/28

Direct translation link:
https://meta.wikimedia.org/w/index.php?title=Special:Translate=page-Tech%2FNews%2F2023%2F28=page

We plan to send the newsletter on Monday afternoon (UTC), i.e. Monday
morning PT. The existing translations will be posted on the wikis in
that language. Deadlines:
https://meta.wikimedia.org/wiki/Tech/News/For_contributors#The_deadlines

There will be more edits by Friday noon UTC but the existing content should
generally remain fairly stable. I will let you know on Friday in any
case.

Let us know if you have any questions, comments or concerns. As
always, we appreciate your help and feedback.

(If you haven't translated Tech News previously, see this email:
https://lists.wikimedia.org/pipermail/translators-l/2017-January/003773.html
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


[Elecraft] MH2 Mic Problem

2023-07-06 Thread Wilson Lamb via Elecraft
Well, perseverance pays, sometimes. I realize this mic probably came from
China, but...
the connections to the end of the cable, in the mic housing, weren't well 
handled.
The connection of the black wire to the shield was made so near the end of the 
outer jacket that overheating other wires was very likely.
When all reheating had failed I had nothing to lose, so I cut back the cable 
jacket a half inch or so and, What Ho!
There  was a tiny holiday in the insulation of the yellow wire, the hot audio 
downstream of the white wire from the element.
Naturally, the open spot attracted the strands of the cable shield and they 
eventually made contact with the conductor in the yellow wire!
Could I have nicked the yellow wire? Of course, but the fact is the mic didn't 
work before and now does.
I've done nothing but separate the wires, as they come out of the cable, so I'm 
sticking to my guns and blaming the loss of a couple of hours of my precious 
time on careless work terminating the cable in the housing.
There really is no free lunch and sometimes we have to get to the bottom of 
things to know the truth.
Reminds me of when I found the filaments of a pair of 813s wired in series 
rather than in parallel!
Or the time I found a 2500V power supply in a HB linear hooked up BACKWARD.
Why those builders stopped at those crucial times I'll never know, but both 
amps are decently built and are now on the air.
73,
Wilson
W4BOH
__
Elecraft mailing list
Home: http://mailman.qth.net/mailman/listinfo/elecraft
Help: http://mailman.qth.net/mmfaq.htm
Post: mailto:Elecraft@mailman.qth.net

This list hosted by: http://www.qsl.net
Please help support this email list: http://www.qsl.net/donate.html
Message delivered to arch...@mail-archive.com 


[Elecraft] Mic Element

2023-07-06 Thread Wilson Lamb via Elecraft
Further troubleshooting has not found a problem, so I'm still thinking element.
When I was involved with cell phones, they cost a quarter!
I'll check the plant.
WL
__
Elecraft mailing list
Home: http://mailman.qth.net/mailman/listinfo/elecraft
Help: http://mailman.qth.net/mmfaq.htm
Post: mailto:Elecraft@mailman.qth.net

This list hosted by: http://www.qsl.net
Please help support this email list: http://www.qsl.net/donate.html
Message delivered to arch...@mail-archive.com 


RE: [neonixie-l] B7971 - better contrast

2023-07-06 Thread Michail Wilson
Same here.
I have 5x MOD_6 clocks (various versions).  2 with ‘antenna’ tubes. I notice no 
difference; however, I pride myself on having the antenna tubes.  Maybe it was 
just me, but having had hundreds (if not a thousand) of B7971 tubes, I have 
only run across the 13 antenna tubes  (yes, only 1 spare).

Michail

From: neonixie-l@googlegroups.com  On Behalf Of 
Nicholas Stock
Sent: Thursday, July 6, 2023 11:14 AM
To: neonixie-l@googlegroups.com
Subject: Re: [neonixie-l] B7971 - better contrast

I have a MOD_6 with pin-top tubes in them. Absolutely no visual difference to 
the non-pin ones when compared side by side.

Jeff, you should have bought more when you could 

I still rue the day when I thought a box of 100 IN14's at $99 was expensive.

Nick
Sent from my iPhone


On Jul 6, 2023, at 07:34, Robert G. Schaffrath 
mailto:robert.schaffr...@gmail.com>> wrote:
I have one of the pin top tubes 
(https://n2jtx.com/NixieClock/Used%20Burroughs%20B-7971.jpg). Unlike most of my 
other tubes, it is a Burroughs branded tube and not Ultronics. The only thing I 
ever noticed different about it was it seemed to have a deeper red glow than 
the Ultronics tubes which are more orange. Perhaps a different amount of 
mercury in it. Someday when I think about it I'll test it again.
On Wednesday, July 5, 2023 at 3:50:55 AM UTC-4 Jeff Walton wrote:
I have a number of these "rare pin-top" B-7971's and do not find any 
significant difference in contrast with the Ultronics versions of the 7971s .  
The pin-topped or "antenna top" construction is an early format that uses a 
wired backplane instead of the PCB style interconnect.  It's a nice tube that 
looks good as a group in any clock.  There is no practical difference in 
operation  or appearance and there are certainly some MOD-6 clock owners that 
have full sets of these tubes in operation will tell you that they are equally 
reliable.   They will have date codes in the mid-1960s.

It is uncommon to see a group like this available.   $250 to $275 each per tube 
seems to be the going price on ebay these days.  Overpriced but worth whatever 
someone is willing to pay.

My first tubes were $8/pair from Buckbee-Mears and Polypaks back in the early 
1970's and included the boards with each pair.

Jeff

 Original message 
From: Robert 
Date: 7/5/23 2:11 AM (GMT-06:00)
To: neoni...@googlegroups.com
Subject: [neonixie-l] B7971 - better contrast

Not my auction but I saw this 
https://www.ebay.co.uk/itm/266322565516?mkcid=16=1=711-127632-2357-0=v3sg3SvYRJa=4429486=U1YeAcPgQ5y=_ver=artemis=COPY

In the description it says “This sale is for lots of "Pin-Top" version of the 
B-7971. The rare "Pin-Top" versions are made with a darker background on the 
segments that provide better contrast.”

I have some of these but have never noticed this

Rob
--
You received this message because you are subscribed to the Google Groups 
"neonixie-l" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neonixie-l+...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/neonixie-l/A272F92E-A09A-48E8-A051-2C15C292A57F%40gmail.com.
--
You received this message because you are subscribed to the Google Groups 
"neonixie-l" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to 
neonixie-l+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/neonixie-l/7e3257e3-e81f-4f76-b97e-9d564641af74n%40googlegroups.com.
--
You received this message because you are subscribed to the Google Groups 
"neonixie-l" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to 
neonixie-l+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/neonixie-l/95CC668F-34AA-4719-AD2A-0464AC16EC72%40gmail.com.

-- 
You received this message because you are subscribed to the Google Groups 
"neonixie-l" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neonixie-l+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/neonixie-l/MW2PR0102MB343570459105352E7C9E32DB822CA%40MW2PR0102MB3435.prod.exchangelabs.com.


Public Discussion of TrustAsia CA Inclusion Request

2023-07-05 Thread Ben Wilson
All,

This email commences a six-week public discussion of TrustAsia’s request to
include the following certificates as publicly trusted root certificates in
one or more CCADB Root Store Member’s program. This discussion period is
scheduled to close on August 16, 2023.

The purpose of this public discussion process is to promote openness and
transparency. However, each Root Store makes its inclusion decisions
independently, on its own timelines, and based on its own inclusion
criteria. Successful completion of this public discussion process does not
guarantee any favorable action by any root store.

Anyone with concerns or questions is urged to raise them on this CCADB
Public list by replying directly in this discussion thread. Likewise, a
representative of the applicant must promptly respond directly in the
discussion thread to all questions that are posted.

CCADB Case Number:  0921
<https://ccadb.my.salesforce-sites.com/mozilla/PrintViewForCase?CaseNumber=0921>
Bugzilla:  1688854 <https://bugzilla.mozilla.org/show_bug.cgi?id=1688854>

Organization Background Information (listed in CCADB):

   - CA Owner Name: TrustAsia Technologies, Inc.
   - Website: https://www.trustasia.com/
   - Address: 3201 Building B. New Caohejing International Business Center,
     391 Guiping Rd, Shanghai, 200233 China
   - Problem Reporting Mechanism(s): rev...@trustasia.com
   - Organization Type: Private Corporation
   - Repository URL: https://repository.trustasia.com/

Certificates Requesting Inclusion:

   1. TrustAsia Global Root CA G3 (4096-bit RSA):
      - Certificate download links: CA Repository
        <https://repository.trustasia.com/repo/certs/rsa-g3/TrustAsiaGlobalRootCAG3.cer>,
        crt.sh
        <https://crt.sh/?sha256=E0D3226AEB1163C2E48FF9BE3B50B4C6431BE7BB1EACC5C36B5D5EC509039A08>
      - Use cases served/EKUs:
        - Server Authentication (TLS) 1.3.6.1.5.5.7.3.1
        - Client Authentication 1.3.6.1.5.5.7.3.2
        - Code Signing 1.3.6.1.5.5.7.3.3
        - Document Signing 1.3.6.1.4.1.311.10.3.12, 1.2.840.113583.1.1.5
        - Secure Email 1.3.6.1.5.5.7.3.4
        - Timestamping 1.3.6.1.5.5.7.3.8
      - Test websites:
        - Valid: https://ev-rsag3-valid.trustasia.com
        - Revoked: https://ev-rsag3-revoked.trustasia.com
        - Expired: https://ev-rsag3-expired.trustasia.com

   2. TrustAsia Global Root CA G4 (384-bit ECDSA):
      - Certificate download links: CA Repository
        <https://repository.trustasia.com/repo/certs/ecc-g4/TrustAsiaGlobalRootCAG4.cer>,
        crt.sh
        <https://crt.sh/?sha256=BE4B56CB5056C0136A526DF444508DAA36A0B54F42E4AC38F72AF470E479654C>
      - Use cases served/EKUs:
        - Server Authentication (TLS) 1.3.6.1.5.5.7.3.1
        - Client Authentication 1.3.6.1.5.5.7.3.2
        - Code Signing 1.3.6.1.5.5.7.3.3
        - Document Signing 1.3.6.1.4.1.311.10.3.12, 1.2.840.113583.1.1.5
        - Secure Email 1.3.6.1.5.5.7.3.4
        - Timestamping 1.3.6.1.5.5.7.3.8
      - Test websites:
        - Valid: https://ev-eccg4-valid.trustasia.com
        - Revoked: https://ev-eccg4-revoked.trustasia.com
        - Expired: https://ev-eccg4-expired.trustasia.com

Relevant Policy and Practices Documentation:

The following applies to both applicant root CAs:

   - https://repository.trustasia.com/repo/cps/TrustAsia-Global-CP-CPS_EN_V1.6.1.pdf


Most Recent Self-Assessment:

The following applies to both applicant root CAs:

   - https://bugzilla.mozilla.org/attachment.cgi?id=9308645 (completed
     12/16/2022)

Audit Statements:

   - Auditor: Anthony KAM and associates ltd. <http://akamcpa.com/> (enrolled
     through WebTrust
     <https://www.cpacanada.ca/en/business-and-accounting-resources/audit-and-assurance/overview-of-webtrust-services/licensed-webtrust-practitioners-international>)
   - Audit Criteria: WebTrust
   - Date of Audit Issuance: 10/17/2022
   - For Period Ending: 7/31/2022
   - Audit Statement(s):
     - Standard Audit
       <https://www.cpacanada.ca/GenericHandlers/CPACHandler.ashx?AttachmentID=8e00c66a-8d66-4185-94e9-a0ef6a6e82f1>
     - BR (SSL) Audit
       <https://www.cpacanada.ca/GenericHandlers/CPACHandler.ashx?AttachmentID=22fa052e-ee61-4134-9ae7-47e8570406e1>
     - EV SSL Audit
       <https://www.cpacanada.ca/GenericHandlers/CPACHandler.ashx?AttachmentID=5d02db36-be0f-45e1-9fb7-7c2af571301c>
     - BR (Code Signing) Audit
       <https://www.cpacanada.ca/GenericHandlers/CPACHandler.ashx?AttachmentID=21793782-d73e-4eac-b320-7307bc3e898f>

Risk-vs-Value Justification:

   - https://bugzilla.mozilla.org/attachment.cgi?id=9323860

Thank you,


Ben Wilson, on behalf of the CCADB Steering Committee

[grpc-io] 5 Days Left to Submit a Talk for gRPConf 2023

2023-07-05 Thread 'Terry Wilson' via grpc.io


Hello gRPC Community!


Time is running out to submit a proposal to speak at gRPConf 2023. The 
deadline is July 9th (this Sunday).


Submit your proposals and find more information HERE!

Spots are still available and topics might include:

   - gRPC in-production
   - User Stories + Case Studies
   - Implementation
   - Ecosystem + Tooling
   - Codelabs

Those not interested in speaking can register now for just $50.


We hope to see you all there!


Thank you,

The gRPC team

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/8dd065f0-0ea0-4baf-ab2b-5ab922880344n%40googlegroups.com.


MRSP 2.9: Issue #250: Clarify MRSP 5.3.2 to expressly include revoked CA certificates

2023-07-05 Thread Ben Wilson
All,

This email opens up discussion of our proposed resolution of GitHub Issue
#250.

Currently, MRSP section 5.3.2 (Intermediate CA Certificates must be
publicly disclosed and audited) requires that all types of intermediate CAs
capable of issuing server certificates and email certificates be disclosed
in the CCADB. (Other root stores may have their own requirements about
reporting other types of CAs – e.g. document-signing, code-signing, etc.,
so this discussion is not about CCADB disclosure for those types of CAs.)

Last year, we added language that required CCADB reporting of
name-constrained CAs with the following language, “Name-constrained CA
certificates that are technically capable of issuing working server or
email certificates that were exempt from disclosure in previous versions of
this policy MUST be disclosed in the CCADB prior to July 1, 2022.”  Our
intent at that time was that it also included CA certificates that have
been revoked but not yet expired. (One of several reasons for requiring
disclosure of revoked CAs is that we use this information for OneCRL.)
 However,
there was some confusion last year about this intention because a revoked
CA is not “technically capable of issuing working server or email
certificates.”  See discussion -
https://groups.google.com/a/mozilla.org/g/dev-security-policy/c/XM7hWqmqmPw/m/MEVlq7REAAAJ
The purpose of the proposal below is to clarify that revoked intermediate
CAs must be disclosed in the CCADB. Thus, “including such CA certificates
that are revoked but not yet expired” would be added to the first sentence
of MRSP section 5.3.2.  It is also proposed that we remove “prior to July
1, 2022” because that date has passed.

-MRSP Proposal Begin-

The operator of a CA certificate included in Mozilla’s root store MUST
publicly disclose in the CCADB all CA certificates they issue that chain up
to that CA certificate trusted in Mozilla’s root store that are technically
capable of issuing working server or email certificates, including such CA
certificates that are revoked but not yet expired and those CA certificates
that share the same key pair whether they are self-signed, doppelgänger,
reissued, cross-signed, or other roots. The CA operator with a certificate
included in Mozilla’s root store MUST disclose such CA certificate within
one week of certificate creation, and before any such CA is allowed to
issue certificates. Name-constrained CA certificates that are technically
capable of issuing working server or email certificates that were exempt
from disclosure in previous versions of this policy MUST also be disclosed
in the CCADB.

-MRSP Proposal End-

Please review this proposal and provide questions or comments here in this
thread.

Thanks,

Ben and Kathleen

-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/CA%2B1gtaamFDqNOzGa88%3D9oWrKtC4Ow%2BFDf5u91%3DJzQPq1NQOtfw%40mail.gmail.com.


Re: [ovs-discuss] Scaling OVN/Southbound

2023-07-05 Thread Terry Wilson via discuss
On Wed, Jul 5, 2023 at 9:59 AM Terry Wilson  wrote:
>
> On Fri, Jun 30, 2023 at 7:09 PM Han Zhou via discuss
>  wrote:
> >
> >
> >
> > On Wed, May 24, 2023 at 12:26 AM Felix Huettner via discuss 
> >  wrote:
> > >
> > > Hi Ilya,
> > >
> > > thank you for the detailed reply
> > >
> > > On Tue, May 23, 2023 at 05:25:49PM +0200, Ilya Maximets wrote:
> > > > On 5/23/23 15:59, Felix Hüttner via discuss wrote:
> > > > > Hi everyone,
> > > >
> > > > Hi, Felix.
> > > >
> > > > >
> > > > > we are currently running an OVN Deployment with 450 Nodes. We run a 3 
> > > > > node cluster for the northbound database and a 3 nodes cluster for 
> > > > > the southbound database.
> > > > > Between the southbound cluster and the ovn-controllers we have a 
> > > > > layer of 24 ovsdb relays.
> > > > > The setup is using TLS for all connections, however the TLS Server is 
> > > > > handled by a traefik reverseproxy to offload this from the ovsdb
> > > >
> > > > The very important part of the system description is what versions
> > > > of OVS and OVN are you using in this setup?  If it's not latest
> > > > 3.1 and 23.03, then it's hard to talk about what/if performance
> > > > improvements are actually needed.
> > > >
> > >
> > > We are currently running ovs 3.1 and ovn 22.12 (in the process of
> > > upgrading to 23.03). `monitor-all` is currently disabled, but we want to
> > > try that as well.
> > >
> > Hi Felix, did you try upgrading and enabling "monitor-all"? How does it 
> > look now?
> >
> > > > > Northd and Neutron is connecting directly to north- and southbound 
> > > > > databases without the relays.
> > > >
> > > > One of the big things that is annoying is that Neutron connects to
> > > > Southbound database at all.  There are some reasons to do that,
> > > > but ideally that should be avoided.  I know that in the past limiting
> > > > the number of metadata agents was one of the mitigation strategies
> > > > for scaling issues.  Also, why can't it connect to relays?  There
> > > > shouldn't be too many transactions flowing towards Southbound DB
> > > > from the Neutron.
> > > >
> > >
> > > Thanks for that suggestion, that definitely makes sense.
> > >
> > Does this make a big difference? How many Neutron - SB connections are 
> > there?
> > What rings a bell is that Neutron is using the python OVSDB library which 
> > hasn't implemented the fast-resync feature (if I remember correctly).
>
> python-ovs has supported monitor_cond_since since v2.17.0 (though
> there may have been a bug that was fixed in 2.17.1). If fast resync
> isn't happening, then it should be considered a bug. With that said, I
> remember when I looked it a year or two ago, ovsdb-server didn't
> really use fast resync/monitor_cond_since unless it was running in
> raft cluster mode (it would reply, but with the last-txn-id as 0
> IIRC?). Does the ovsdb-relay code actually return the last-txn-id? I
> can set up an environment and run some tests, but maybe someone else
> already knows.

Looks like ovsdb-relay does support last-txn-id now:
https://github.com/openvswitch/ovs/commit/a3e97b1af1bdcaa802c6caa9e73087df7077d2b1,
but only in v3.0+.

> > At the same time, there is the feature leader-transfer-for-snapshot, which 
> > automatically transfer leader whenever a snapshot is to be written, which 
> > would happen frequently if your environment is very active.
>
> I believe snapshot should only be happening "no less frequently than
> 24 hours, with snapshots if there are more than 100 log entries and
> the log size has doubled, but no more frequently than every 10 mins"
> or something pretty close to that. So it seems like once the system
> got up to its expected size, you would just see updates every 24 hours
> since you obviously can't double in size forever. But it's possible
> I'm reading that wrong.
>
> > When a leader transfer happens, if Neutron set the option "leader-only" 
> > (only connects to leader) to SB DB (could someone confirm?), then when the 
> > leader transfer happens, all Neutron workers would reconnect to the new 
> > leader. With fast-resync, like what's implemented in C IDL and Go, the 
> > client that has cached the data would only request the delta when 
> > reconnecting. But since the python lib do

[Elecraft] MH2 Mic

2023-07-05 Thread Wilson Lamb via Elecraft
My MH2 was dropped onto concrete and is now dead.
Is there an element available?
I don't see anything else that could be wrong.
Wilson
W4BOH
__
Elecraft mailing list
Home: http://mailman.qth.net/mailman/listinfo/elecraft
Help: http://mailman.qth.net/mmfaq.htm
Post: mailto:Elecraft@mailman.qth.net

This list hosted by: http://www.qsl.net
Please help support this email list: http://www.qsl.net/donate.html
Message delivered to arch...@mail-archive.com 


Re: [ovs-discuss] Scaling OVN/Southbound

2023-07-05 Thread Terry Wilson via discuss
On Fri, Jun 30, 2023 at 7:09 PM Han Zhou via discuss
 wrote:
>
>
>
> On Wed, May 24, 2023 at 12:26 AM Felix Huettner via discuss 
>  wrote:
> >
> > Hi Ilya,
> >
> > thank you for the detailed reply
> >
> > On Tue, May 23, 2023 at 05:25:49PM +0200, Ilya Maximets wrote:
> > > On 5/23/23 15:59, Felix Hüttner via discuss wrote:
> > > > Hi everyone,
> > >
> > > Hi, Felix.
> > >
> > > >
> > > > we are currently running an OVN Deployment with 450 Nodes. We run a 3 
> > > > node cluster for the northbound database and a 3 nodes cluster for the 
> > > > southbound database.
> > > > Between the southbound cluster and the ovn-controllers we have a layer 
> > > > of 24 ovsdb relays.
> > > > The setup is using TLS for all connections, however the TLS Server is 
> > > > handled by a traefik reverseproxy to offload this from the ovsdb
> > >
> > > The very important part of the system description is what versions
> > > of OVS and OVN are you using in this setup?  If it's not latest
> > > 3.1 and 23.03, then it's hard to talk about what/if performance
> > > improvements are actually needed.
> > >
> >
> > We are currently running ovs 3.1 and ovn 22.12 (in the process of
> > upgrading to 23.03). `monitor-all` is currently disabled, but we want to
> > try that as well.
> >
> Hi Felix, did you try upgrading and enabling "monitor-all"? How does it look 
> now?
>
> > > > Northd and Neutron is connecting directly to north- and southbound 
> > > > databases without the relays.
> > >
> > > One of the big things that is annoying is that Neutron connects to
> > > Southbound database at all.  There are some reasons to do that,
> > > but ideally that should be avoided.  I know that in the past limiting
> > > the number of metadata agents was one of the mitigation strategies
> > > for scaling issues.  Also, why can't it connect to relays?  There
> > > shouldn't be too many transactions flowing towards Southbound DB
> > > from the Neutron.
> > >
> >
> > Thanks for that suggestion, that definitely makes sense.
> >
> Does this make a big difference? How many Neutron - SB connections are there?
> What rings a bell is that Neutron is using the python OVSDB library which 
> hasn't implemented the fast-resync feature (if I remember correctly).

python-ovs has supported monitor_cond_since since v2.17.0 (though
there may have been a bug that was fixed in 2.17.1). If fast resync
isn't happening, then it should be considered a bug. With that said, I
remember when I looked it a year or two ago, ovsdb-server didn't
really use fast resync/monitor_cond_since unless it was running in
raft cluster mode (it would reply, but with the last-txn-id as 0
IIRC?). Does the ovsdb-relay code actually return the last-txn-id? I
can set up an environment and run some tests, but maybe someone else
already knows.
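
[Editor's sketch, for reference: the fast-resync handshake discussed above is the OVSDB "monitor_cond_since" JSON-RPC method. A minimal Python sketch of the request shape follows; the database name, table, condition, and last-txn-id values are purely illustrative, not taken from any real deployment.]

```python
import json
import uuid


def monitor_cond_since_request(db, monitor_id, requests, last_txn_id):
    """Build an OVSDB "monitor_cond_since" JSON-RPC request.

    On reconnect, a client that still has cached data sends the last
    transaction id it saw; a server that supports fast resync can then
    answer with only the delta instead of the full database contents.
    """
    return {
        "method": "monitor_cond_since",
        "params": [db, monitor_id, requests, last_txn_id],
        "id": str(uuid.uuid4()),
    }


# Illustrative values only: monitor Port_Binding with an empty condition.
req = monitor_cond_since_request(
    "OVN_Southbound",
    "sb-monitor",
    {"Port_Binding": [{"where": []}]},
    "00000000-0000-0000-0000-000000000000",
)
print(json.dumps(req, indent=2))
```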

> At the same time, there is the feature leader-transfer-for-snapshot, which 
> automatically transfer leader whenever a snapshot is to be written, which 
> would happen frequently if your environment is very active.

I believe snapshot should only be happening "no less frequently than
24 hours, with snapshots if there are more than 100 log entries and
the log size has doubled, but no more frequently than every 10 mins"
or something pretty close to that. So it seems like once the system
got up to its expected size, you would just see updates every 24 hours
since you obviously can't double in size forever. But it's possible
I'm reading that wrong.
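
[Editor's sketch: the snapshot conditions quoted above can be written as a small predicate. The thresholds are taken directly from that paragraph; this is an approximation of the described policy, not the actual ovsdb raft code.]

```python
def should_snapshot(now_s, last_snapshot_s, log_entries,
                    log_bytes, db_bytes_at_last_snapshot):
    """Approximate the snapshot policy described above: snapshot at
    least every 24 h; otherwise only when there are more than 100 log
    entries and the log has doubled relative to the database size at
    the last snapshot; and never more often than every 10 minutes."""
    elapsed = now_s - last_snapshot_s
    if elapsed < 10 * 60:        # no more frequently than every 10 min
        return False
    if elapsed >= 24 * 3600:     # no less frequently than every 24 h
        return True
    return log_entries > 100 and log_bytes >= 2 * db_bytes_at_last_snapshot


# A busy server one hour after its last snapshot, log doubled:
print(should_snapshot(3600, 0, 500, 2_000_000, 1_000_000))
```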

> When a leader transfer happens, if Neutron set the option "leader-only" (only 
> connects to leader) to SB DB (could someone confirm?), then when the leader 
> transfer happens, all Neutron workers would reconnect to the new leader. With 
> fast-resync, like what's implemented in C IDL and Go, the client that has 
> cached the data would only request the delta when reconnecting. But since the 
> python lib doesn't have this, the Neutron server would re-download full data 
> when reconnecting ...
> This is a speculation based on the information I have, and the assumptions 
> need to be confirmed.
>
> > > >
> > > > We needed to increase various timeouts on the ovsdb-server and client 
> > > > side to get this to a mostly stable state:
> > > > * inactivity probes of 60 seconds (for all connections between 
> > > > ovsdb-server, relay and clients)
> > > > * cluster election time of 50 seconds
> > > >
> > > > As long as none of the relays restarts the environment is quite stable.
> > > > However we see quite regularly the "Unreasonably long xxx ms poll 
> > > > interval" messages ranging from 1000ms up to 4ms.
> > >
> > > With latest versions of OVS/OVN the CPU usage on Southbound DB
> > > servers without relays in our weekly 500-node ovn-heater runs
> > > stays below 10% during the test phase.  No large poll intervals
> > > are getting registered.
> > >
> > > Do you have more details on under which circumstances these
> > > large poll intervals occur?
> > >
> >
> > It seems to mostly happen on the initial 

[Kernel-packages] [Bug 2025915] Re: package linux-headers-6.2.0-24-generic 6.2.0-24.24 failed to install/upgrade: installed linux-headers-6.2.0-24-generic package post-installation script subprocess r

2023-07-04 Thread Bradly Wilson
I've been able to boot into kernel 5.19.0-46-generic. Still unable to
get kernel 6.2.0-24-generic working. Doing sudo apt update I get the
following:

After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] 
Setting up linux-headers-6.2.0-24-generic (6.2.0-24.24) ...
/etc/kernel/header_postinst.d/dkms:
 * dkms: running auto installation service for kernel 6.2.0-24-generic
Sign command: /usr/bin/kmodsign
Binary update-secureboot-policy not found, modules won't be signed

Building module:
Cleaning build area...
make -j4 KERNELRELEASE=6.2.0-24-generic all 
KERNEL_SRC=/lib/modules/6.2.0-24-generic/build...(bad exit status: 2)
ERROR (dkms apport): binary package for anbox-ashmem: 1 not found
Error! Bad return status for module build on kernel: 6.2.0-24-generic (x86_64)
Consult /var/lib/dkms/anbox-ashmem/1/build/make.log for more information.
Sign command: /usr/bin/kmodsign
Binary update-secureboot-policy not found, modules won't be signed

Building module:
Cleaning build area...
make -j4 KERNELRELEASE=6.2.0-24-generic all 
KERNEL_SRC=/lib/modules/6.2.0-24-generic/build...(bad exit status: 2)
ERROR (dkms apport): binary package for anbox-binder: 1 not found
Error! Bad return status for module build on kernel: 6.2.0-24-generic (x86_64)
Consult /var/lib/dkms/anbox-binder/1/build/make.log for more information.
dkms autoinstall on 6.2.0-24-generic/x86_64 succeeded for virtualbox virtualbox
dkms autoinstall on 6.2.0-24-generic/x86_64 failed for anbox-ashmem(10) 
anbox-binder(10)
Error! One or more modules failed to install during autoinstall.
Refer to previous errors for more information.
   ...fail!
run-parts: /etc/kernel/header_postinst.d/dkms exited with return code 11
dpkg: error processing package linux-headers-6.2.0-24-generic (--configure):
 installed linux-headers-6.2.0-24-generic package post-installation script 
subprocess returned error exit status 1
Setting up linux-image-6.2.0-24-generic (6.2.0-24.24) ...
dpkg: dependency problems prevent configuration of linux-headers-generic:
 linux-headers-generic depends on linux-headers-6.2.0-24-generic; however:
  Package linux-headers-6.2.0-24-generic is not configured yet.

dpkg: error processing package linux-headers-generic (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of linux-generic:
 linux-generic depends on linux-headers-generic (= 6.2.0.24.24); however:
  Package linux-headers-generic is not configured yet.

dpkg: error processing package linux-generic (--configure):
 dependency problems - leaving unconfigured
No apport report written because the error message indicates its a followup 
error from a previous failure.

No apport report written because the error message indicates its a followup 
error from a previous failure.
Processing triggers for linux-image-6.2.0-24-generic (6.2.0-24.24) ...
/etc/kernel/postinst.d/dkms:
 * dkms: running auto installation service for kernel 6.2.0-24-generic
Sign command: /usr/bin/kmodsign
Binary update-secureboot-policy not found, modules won't be signed

Building module:
Cleaning build area...
make -j4 KERNELRELEASE=6.2.0-24-generic all 
KERNEL_SRC=/lib/modules/6.2.0-24-generic/build...(bad exit status: 2)
ERROR (dkms apport): binary package for anbox-ashmem: 1 not found
Error! Bad return status for module build on kernel: 6.2.0-24-generic (x86_64)
Consult /var/lib/dkms/anbox-ashmem/1/build/make.log for more information.
Sign command: /usr/bin/kmodsign
Binary update-secureboot-policy not found, modules won't be signed

Building module:
Cleaning build area...
make -j4 KERNELRELEASE=6.2.0-24-generic all 
KERNEL_SRC=/lib/modules/6.2.0-24-generic/build...(bad exit status: 2)
ERROR (dkms apport): binary package for anbox-binder: 1 not found
Error! Bad return status for module build on kernel: 6.2.0-24-generic (x86_64)
Consult /var/lib/dkms/anbox-binder/1/build/make.log for more information.
dkms autoinstall on 6.2.0-24-generic/x86_64 succeeded for virtualbox virtualbox
dkms autoinstall on 6.2.0-24-generic/x86_64 failed for anbox-ashmem(10) 
anbox-binder(10)
Error! One or more modules failed to install during autoinstall.
Refer to previous errors for more information.
   ...fail!
run-parts: /etc/kernel/postinst.d/dkms exited with return code 11
dpkg: error processing package linux-image-6.2.0-24-generic (--configure):
 installed linux-image-6.2.0-24-generic package post-installation script 
subprocess returned error exit status 1
No apport report written because MaxReports is reached already
Errors were encountered while processing:
 linux-headers-6.2.0-24-generic
 linux-headers-generic
 linux-generic
 linux-image-6.2.0-24-generic

-- 
You received this bug notification because you are a member of Kernel
Packages, which is 

[Kernel-packages] [Bug 2025915] [NEW] package linux-headers-6.2.0-24-generic 6.2.0-24.24 failed to install/upgrade: installed linux-headers-6.2.0-24-generic package post-installation script subprocess

2023-07-04 Thread Bradly Wilson
Public bug reported:

Upgrading to Ubuntu 23.04 I got
The upgrade will continue but the 'linux-image-6.2.0-24-generic' package may 
not be in a working state. Please consider submitting a bug report about it.

installed linux-image-6.2.0-24-generic package post-installation script
subprocess returned error exit status 1

Could not install the upgrades

The upgrade has aborted. Your system could be in an unusable state. A
recovery will run now (dpkg --configure -a).

ProblemType: Package
DistroRelease: Ubuntu 23.04
Package: linux-headers-6.2.0-24-generic 6.2.0-24.24
ProcVersionSignature: Ubuntu 5.19.0-46.47-generic 5.19.17
Uname: Linux 5.19.0-46-generic x86_64
ApportVersion: 2.26.1-0ubuntu2
Architecture: amd64
AudioDevicesInUse:
 USERPID ACCESS COMMAND
 /dev/snd/controlC0:  brad   1995 F wireplumber
 /dev/snd/seq:brad   1992 F pipewire
CRDA: N/A
CasperMD5CheckResult: unknown
Date: Tue Jul  4 14:21:28 2023
ErrorMessage: installed linux-headers-6.2.0-24-generic package 
post-installation script subprocess returned error exit status 1
InstallationDate: Installed on 2020-07-28 (1071 days ago)
InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Release amd64 (20200423)
MachineType: LENOVO 20JTS1GT1E
ProcFB: 0 i915drmfb
ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.19.0-46-generic 
root=UUID=102d7791-8e92-444e-b259-b45288c3edab ro quiet splash vt.handoff=7
PulseList: Error: command ['pacmd', 'list'] failed with exit code 1: No 
PulseAudio daemon running, or not running as session daemon.
Python3Details: /usr/bin/python3.11, Python 3.11.2, python3-minimal, 3.11.2-1
PythonDetails: N/A
RebootRequiredPkgs: Error: path contained symlinks.
RelatedPackageVersions: grub-pc 2.06-2ubuntu16
SourcePackage: linux
Title: package linux-headers-6.2.0-24-generic 6.2.0-24.24 failed to 
install/upgrade: installed linux-headers-6.2.0-24-generic package 
post-installation script subprocess returned error exit status 1
UpgradeStatus: Upgraded to lunar on 2023-07-04 (0 days ago)
dmi.bios.date: 02/26/2018
dmi.bios.release: 1.24
dmi.bios.vendor: LENOVO
dmi.bios.version: N1WET45W (1.24 )
dmi.board.asset.tag: Not Available
dmi.board.name: 20JTS1GT1E
dmi.board.vendor: LENOVO
dmi.board.version: SDK0J40697 WIN
dmi.chassis.asset.tag: J70046952
dmi.chassis.type: 10
dmi.chassis.vendor: LENOVO
dmi.chassis.version: None
dmi.ec.firmware.release: 1.19
dmi.modalias: 
dmi:bvnLENOVO:bvrN1WET45W(1.24):bd02/26/2018:br1.24:efr1.19:svnLENOVO:pn20JTS1GT1E:pvrThinkPadT470sW10DG:rvnLENOVO:rn20JTS1GT1E:rvrSDK0J40697WIN:cvnLENOVO:ct10:cvrNone:skuLENOVO_MT_20JT_BU_Think_FM_ThinkPadT470sW10DG:
dmi.product.family: ThinkPad T470s W10DG
dmi.product.name: 20JTS1GT1E
dmi.product.sku: LENOVO_MT_20JT_BU_Think_FM_ThinkPad T470s W10DG
dmi.product.version: ThinkPad T470s W10DG
dmi.sys.vendor: LENOVO

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-package lunar

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2025915

Title:
  package linux-headers-6.2.0-24-generic 6.2.0-24.24 failed to
  install/upgrade: installed linux-headers-6.2.0-24-generic package
  post-installation script subprocess returned error exit status 1

Status in linux package in Ubuntu:
  New

Bug description:
  Upgrading to Ubuntu 23.04 I got
  The upgrade will continue but the 'linux-image-6.2.0-24-generic' package may 
not be in a working state. Please consider submitting a bug report about it.

  installed linux-image-6.2.0-24-generic package post-installation
  script subprocess returned error exit status 1

  Could not install the upgrades

  The upgrade has aborted. Your system could be in an unusable state. A
  recovery will run now (dpkg --configure -a).

  ProblemType: Package
  DistroRelease: Ubuntu 23.04
  Package: linux-headers-6.2.0-24-generic 6.2.0-24.24
  ProcVersionSignature: Ubuntu 5.19.0-46.47-generic 5.19.17
  Uname: Linux 5.19.0-46-generic x86_64
  ApportVersion: 2.26.1-0ubuntu2
  Architecture: amd64
  AudioDevicesInUse:
   USERPID ACCESS COMMAND
   /dev/snd/controlC0:  brad   1995 F wireplumber
   /dev/snd/seq:brad   1992 F pipewire
  CRDA: N/A
  CasperMD5CheckResult: unknown
  Date: Tue Jul  4 14:21:28 2023
  ErrorMessage: installed linux-headers-6.2.0-24-generic package 
post-installation script subprocess returned error exit status 1
  InstallationDate: Installed on 2020-07-28 (1071 days ago)
  InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Release amd64 (20200423)
  MachineType: LENOVO 20JTS1GT1E
  ProcFB: 0 i915drmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.19.0-46-generic 
root=UUID=102d7791-8e92-444e-b259-b45288c3edab ro quiet splash vt.handoff=7
  PulseList: Error: command ['pacmd', 'list'] failed with exit code 1: No 
PulseAudio daemon running, or not running as session daemon.
  Python3Details: /usr/bin/python3.11, Python 3.11.2, 

[jira] [Created] (MASSEMBLY-993) Configuration option to specify artifact classifier.

2023-07-03 Thread Garret Wilson (Jira)
Garret Wilson created MASSEMBLY-993:
---

 Summary: Configuration option to specify artifact classifier.
 Key: MASSEMBLY-993
 URL: https://issues.apache.org/jira/browse/MASSEMBLY-993
 Project: Maven Assembly Plugin
  Issue Type: Improvement
Reporter: Garret Wilson


Please add a {{classifier}} option to the Maven Assembly Plugin, similar to the 
[Spring Boot Maven Plugin option of the same 
name|https://docs.spring.io/spring-boot/docs/current/maven-plugin/reference/htmlsingle/#goals-repackage-parameters-details-classifier],
 to allow a POM to explicitly indicate what classifier to append to the end of 
the artifact.

Generated Maven artifacts have an option of a "classifier" such as {{javadoc}} 
or {{sources}}. This is placed on the end of an artifact base filename, such as 
{{foo-1.2.3-javadoc.jar}} or {{foo-1.2.3-sources.jar}}. See the description of 
classifiers in the [Maven POM Reference|https://maven.apache.org/pom.html] for 
more details.

The Spring Boot Maven Plugin has a simple [configuration to set the 
classifier|https://docs.spring.io/spring-boot/docs/current/maven-plugin/reference/htmlsingle/#goals-repackage-parameters-details-classifier].
 Thus if I specify {{bar}} as my classifier, then Spring Boot Maven Plugin 
generates {{foo-1.2.3-bar.jar}}.
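
A side-by-side sketch of the two configurations (the {{classifier}} element 
under maven-assembly-plugin below is the *proposed* option and does not exist 
today; the Spring Boot form is the existing one):

```xml
<!-- Existing: Spring Boot Maven Plugin produces foo-1.2.3-bar.jar -->
<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <configuration>
    <classifier>bar</classifier>
  </configuration>
</plugin>

<!-- Proposed: the same option for the Assembly Plugin, overriding the
     assembly ID as the classifier regardless of the descriptor used -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <classifier>bar</classifier>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
</plugin>
```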

The Maven Assembly Plugin's only equivalent facility is indirect and arguably 
semantically incorrect. Instead of specifying a "classifier", the Assembly 
Plugin has a Boolean option to [append the assembly 
ID|https://maven.apache.org/plugins/maven-assembly-plugin/single-mojo.html#appendAssemblyId].
 The name of this option is a little unclear; what it's really saying is "use 
the assembly ID as the artifact classifier". Using the assembly ID as the 
classifier is not a bad default, but the problem is not allowing an option for 
an alternate classifier.

The drawback here is that the consumer of a public, published assembly 
descriptor _has no control over what the ID is defined within the descriptor_. 
There needs to be a way to specify the generated artifact classifier _at the 
point of plugin definition_, independent of what is defined in the descriptor. 
(Of course, if MASSEMBLY-992 were implemented, this would provide one 
workaround, although not solve the problem of consuming published artifact 
descriptors.)

As an example see https://github.com/symphoniacloud/lambda-packaging/issues/1 . 
Symphonia publishes a Maven Assembly Plugin descriptor, but the ID it uses (for 
reasons I outline in that ticket) is less than ideal. If the Assembly Plugin 
simply were to allow a {{classifier}} option like Spring Boot Maven Plugin 
does, the POM could simply choose whatever it wants, independent of the ID in 
the assembly descriptor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[Translators-l] Re: Ready for translation: Tech News #27 (2023)

2023-07-03 Thread Nick Wilson (Quiddity)
Thank you all for your help! It is deeply appreciated. The newsletter has
now been delivered (in 21 languages) to 1,072 pages.
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org

