CCADB Update: AllCerts Report Additions

2023-09-27 Thread 'Clint Wilson' via CCADB Public
TL;DR: The CCADB Steering Committee has updated the “All Certificate 
Information (root and intermediate) in CCADB” [1] (aka 
AllCertificateRecordsCSVFormat) report to include two additional columns: 
“Derived Trust Bits” and “Status of Root Cert”.

All,

The CCADB Steering Committee has received two problem statements from CAs 
regarding the value and reliability of the AllCertificateRecordsCSVFormat 
report. After discussion and design within the CCADB Steering Committee, an 
enhancement has been made to the report to address these problem statements.

Status of Root Cert
The first problem [2] identified an issue with accurately assessing the 
inclusion status of a given Intermediate Certificate in a Root Store using the 
details provided in the AllCertificateRecordsCSVFormat report. The identified 
solution was to add a new column which matches the content of the “Status of 
Root Cert” field in the CCADB. This field combines the status values from the 
separate Mozilla, Microsoft, Google Chrome, and Apple status fields, 
representing them as a single concatenated string, e.g. “Apple: Included; 
Google Chrome: Included; Microsoft: Included; Mozilla: Included”. This field 
pulls the individual status values from the Root Certificate record, so it is 
the same for all Intermediate Certificate records subordinate to a given Root 
Certificate record.
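The combined value is easy to split back into per-store statuses. A minimal Python sketch, assuming the semicolon-delimited “Store: Status” format shown in the example above (the function name is illustrative, not part of the CCADB tooling):

```python
def parse_root_cert_status(status: str) -> dict:
    """Split a combined 'Status of Root Cert' value, e.g.
    'Apple: Included; Mozilla: Included', into {store: status}."""
    result = {}
    for part in status.split(";"):
        part = part.strip()
        if not part:
            continue
        # Split on the first colon only, so store names keep their spaces.
        store, _, value = part.partition(":")
        result[store.strip()] = value.strip()
    return result

statuses = parse_root_cert_status(
    "Apple: Included; Google Chrome: Included; "
    "Microsoft: Included; Mozilla: Included"
)
print(statuses["Google Chrome"])  # prints "Included"
```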

The AllCertificateRecordsCSVFormat report includes several separate columns 
(e.g. ‘Mozilla Status’) that appear similar to the information provided in this 
new column. These Store-specific columns are used on both Root Certificate and 
Intermediate Certificate records. The new column pulls from the same 
information as the Store-specific columns do on Root Certificate records, so in 
this regard the new column is not net-new information. However, on Intermediate 
Certificate records this same field does not always match that of its parent 
Root Certificate record, creating some doubt as to the correct status of 
Intermediate Certificate records.

[Request] Related to this change, the CCADB Steering Committee would like to 
understand if there is any extant reliance on the Store-specific “Status” 
columns. We propose removing those in the future if they are not currently 
being relied upon.

Derived Trust Bits
The second problem is more straightforward: the current 
AllCertificateRecordsCSVFormat report does not include details 
regarding the “trust bits” which the CCADB has determined apply to a given Root 
or Intermediate Certificate record (represented within the CCADB in the 
“Derived Trust Bits” field). This information is helpful in determining a 
variety of expectations about the certificate, such as the applicable audit 
criteria or information disclosure requirements.
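Since the report is plain CSV, the new column can be consumed with a standard CSV reader. A hedged Python sketch: the two column names come from this announcement, but the sample rows, the other headers, and the semicolon delimiter inside “Derived Trust Bits” are illustrative assumptions, not the report's actual layout:

```python
import csv
import io

# Invented stand-in for the AllCertificateRecordsCSVFormat report;
# the real report has many more columns.
sample = """\
CA Owner,Certificate Name,Derived Trust Bits,Status of Root Cert
Example CA,Example Root,Server Authentication,Apple: Included; Mozilla: Included
Example CA,Example Email CA,Secure Email,Mozilla: Included
"""

def names_with_trust_bit(csv_text: str, bit: str) -> list:
    """Return certificate names whose 'Derived Trust Bits' field lists `bit`."""
    names = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        bits = [b.strip() for b in row["Derived Trust Bits"].split(";")]
        if bit in bits:
            names.append(row["Certificate Name"])
    return names

print(names_with_trust_bit(sample, "Secure Email"))  # prints ['Example Email CA']
```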

It may be important to note that the CCADB’s “Derived Trust Bits” do not, in 
all cases, match other similar data sources [3] which leverage this 
information. In some cases this is because the CCADB incorporates additional 
context and in other cases because the CCADB lacks additional context. We hope 
that this additional column will help us all to better understand where and how 
future improvements to the CCADB should be made.

This updated report has been deployed and is available for use now. If you have 
any concerns with these updates or encounter any issues, please let us know 
(preferably here, but supp...@ccadb.org works too).

Thank you
- Clint, on Behalf of the CCADB Steering Committee

[1] https://www.ccadb.org/resources
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1850031
[3] https://crt.sh/mozilla-disclosures

-- 
You received this message because you are subscribed to the Google Groups 
"CCADB Public" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to public+unsubscr...@ccadb.org.
To view this discussion on the web visit 
https://groups.google.com/a/ccadb.org/d/msgid/public/F57D6948-3F1A-46F4-9AD7-3763006BC3F8%40apple.com.


Improvements to Vulnerability Disclosure wiki page

2023-09-27 Thread Ben Wilson
All,
As mentioned in a previous email, I am soliciting feedback regarding the
Vulnerability Disclosure wiki page.
If you have any specific suggestions that we can use to enhance clarity or
to make the page more complete, please don't hesitate to share them, either
here or directly with me. Your feedback is instrumental in our commitment
to maintain a safe and secure online environment.
Thanks,
Ben

-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/CA%2B1gtabWhCfgKCOiH75pgtw1AQcNaKWjq%3Dq832p-pQbp5KrfyQ%40mail.gmail.com.


Re: MRSP 2.9: Survey Results - August 2023 CA Communication and Survey

2023-09-27 Thread Ben Wilson
…“Incident Reporting”, the guidance says that “information does
not need to be duplicated in the Reportable Vulnerability bug if it can be
provided in the public-facing incident report.” Given the overlap and link
between these processes, clarification is needed regarding the
interdependencies between Incident Reports and Reportable Vulnerability
bugs.

Mozilla Response:  As stated in the wiki page, the purpose of MRSP 2.4.1
and the “CA Security Vulnerability” Bugzilla component is to enable CAs to
provide Mozilla with “Information about security compromises that require
action from Mozilla” and “Security-sensitive information that needs to be
shared with Mozilla”. Thus, CAs need to be sure to mark the “CA Program
Security” checkbox in such Bugzilla reports.

We welcome specific suggestions about how to improve the text in the
Vulnerability Disclosure wiki page. For that purpose, I am sending out a
separate post to this list, and then we will update the wiki page and try
to clarify the use of these terms accordingly.

Thanks,

Ben and Kathleen

On Mon, Sep 18, 2023 at 10:01 AM Ben Wilson  wrote:

> All,
> The period for submitting survey responses has now concluded, and the
> results are in the sheet linked below (in my previous email).
> I will now summarize the comments and post them here.
> Thanks,
> Ben
>
> On Fri, Sep 8, 2023 at 2:12 PM Ben Wilson  wrote:
>
>> All,
>>
>> While survey responses are not due until Sept. 15th, here are the results
>> we've received thus far.
>>
>>
>> https://docs.google.com/spreadsheets/d/1xJ6VRs2R0tw3-QHoIRzIIO8MWWoqNs576KOxPKYsp3w/edit?usp=sharing
>>
>> Thanks,
>>
>> Ben
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/CA%2B1gtaZPf2-8jsqgNbTL%3DEqAOXC4DTDQhJgo0psNz_cvnVqSdg%40mail.gmail.com.


[nysbirds-l] Wave of US Birds in the UK from hurricane Lee

2023-09-26 Thread Jennifer Wilson-Pines
https://www.birdguides.com/articles/review-of-the-week/review-of-the-week-18-24-september-2023/

--

NYSbirds-L List Info:
http://www.NortheastBirding.com/NYSbirdsWELCOME.htm
http://www.NortheastBirding.com/NYSbirdsRULES.htm
http://www.NortheastBirding.com/NYSbirdsSubscribeConfigurationLeave.htm

ARCHIVES:
1) http://www.mail-archive.com/nysbirds-l@cornell.edu/maillist.html
2) http://www.surfbirds.com/birdingmail/Group/NYSBirds-L
3) http://birding.aba.org/maillist/NY01

Please submit your observations to eBird:
http://ebird.org/content/ebird/

--

[Translators-l] Re: Ready for translation: Tech News #39 (2023)

2023-09-26 Thread Nick Wilson (Quiddity)
Thank you all for your help! It is deeply appreciated. The newsletter has
now been delivered (in 19 languages) to 1,089 pages.
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


[Wikitech-ambassadors] Tech News 2023, week 39

2023-09-26 Thread Nick Wilson (Quiddity)
The latest technical newsletter is now available at
https://meta.wikimedia.org/wiki/Special:MyLanguage/Tech/News/2023/39. Below
is the English version.
You can help write the next newsletter: Whenever you see information about
Wikimedia technology that you think should be distributed more broadly, you
can add it to the next newsletter at
https://meta.wikimedia.org/wiki/Tech/News/Next .
More information on how to contribute is available. You can also contact me
directly.
As always, feedback (on- or off-list) is appreciated and encouraged.
——
Other languages: Bahasa Indonesia, Deutsch, English, Tiếng Việt, français,
italiano, norsk bokmål, polski, português do Brasil, suomi, svenska, čeština,
русский, українська, עברית, العربية, বাংলা, ಕನ್ನಡ, 中文, 日本語, ꯃꯤꯇꯩ ꯂꯣꯟ


Latest *tech news* from the Wikimedia technical community. Please tell other
users about these changes. Not all changes will affect you. Translations are
available.

*Recent changes*

   - The Vector 2022 skin will now remember the pinned/unpinned status for
   the Table of Contents for all logged-out users. [1]

*Changes later this week*

   - The new version of MediaWiki will be on test wikis and MediaWiki.org
   from 26 September. It will be on non-Wikipedia wikis and some Wikipedias
   from 27 September. It will be on all wikis from 28 September (calendar).
   - The ResourceLoader mediawiki.ui modules are now deprecated as part of
   the move to Vue.js and Codex. There is a guide for migrating from
   MediaWiki UI to Codex for any tools that use it. More details are
   available in the task, and your questions are welcome there.
   - Gadget definitions will have a new "namespaces" option. The option
   takes a list of namespace IDs. Gadgets that use this option will only
   load on pages in the given namespaces.

*Future changes*

   - New variables will be added to AbuseFilter: global_account_groups and
   global_account_editcount. They are available only when an account is
   being created. You can use them to prevent blocking automatic creation of
   accounts when users with many edits elsewhere visit your wiki for the
   first time. [2] [3]


*Meetings*

   - You can join the next meeting with the Wikipedia mobile apps teams.
   During the meeting, we will discuss the current features and future
   roadmap. The meeting will be on 27 October at 17:00 (UTC). See details
   and how to join.

*Tech news* prepared by Tech News writers and posted by bot •
Contribute • Translate • Get help • Give feedback • Subscribe or unsubscribe

[Callers] Re: Most difficult contras

2023-09-25 Thread Dale Wilson via Contra Callers
"Happy as a Cold Pig in Warm Mud" by Mike Boerschig doesn't seem like it
would be very difficult when you read the card, but it is amazing how many
creative ways it can go wrong.  There is usually at least one star for five
somewhere in the line at the same time there's a star for three going on
elsewhere.  I call it sometimes with the right crowd of experienced dancers
because it's fun to watch the recovery process.

Dale

On Mon, Sep 25, 2023 at 6:47 PM Jerome Grisanti via Contra Callers <
contracallers@lists.sharedweight.net> wrote:

> "Would You Do It for Twenty?" by Robert Cromartie. We have discussions
> about "glossary" dances, this one is a "kitchen sink" dance, as in
> "everything you can think of but the kitchen sink." Contra corners,
> petronella, diagonal hey, alternates between proper and improper.
>
> Maybe in a workshop, on a bet, hence the title.
>
> Jerome
>
> On Mon, Sep 25, 2023, 6:38 PM Michael Fuerst via Contra Callers <
> contracallers@lists.sharedweight.net> wrote:
>
>> What are the most difficult  contras (improper, proper, indecent or
>> becket) that you have danced,  have called, and remain  afraid to call?


-- 
Penultimatum:  Surrender now or next time I threaten you I'll really mean
it.
___
Contra Callers mailing list -- contracallers@lists.sharedweight.net
To unsubscribe send an email to contracallers-le...@lists.sharedweight.net


Re: [Servercert-wg] Proposed Revision of SCWG Charter

2023-09-25 Thread Ben Wilson via Servercert-wg
Thanks, Martijn and Aaron,

Aaron, I don't think I can add a CT-support requirement for Certificate
Consumers at this time, although we can take the issue up for further
conversation.

Martijn, So that the duration of the probationary period is kept to six
months, it might be better to eliminate the F2F attendance requirement. If
we keep it, then a probationary member might have to wait until the next
F2F (but certainly not a year).  How do people feel about this?

Also, I have received feedback regarding whether a Certificate Consumer
should be required to "maintain" a full list of CAs. (I think I didn't have
the term "maintain" in the GitHub draft of the charter, so I'm thinking
that we might eliminate the term from the proposal.) Similarly, I'm
concerned that a requirement to publish "how a CA can apply for inclusion
in its root store" might make it less likely for a ballot to pass. So,
instead of "maintaining" a (full) list, what if we left it just, "(4) its
membership-qualifying software product uses a list of CA certificates to
validate the chain of trust from a TLS certificate to a CA certificate in
such list"?  What are everyone's thoughts on this?

Thanks,

Ben

On Thu, Sep 14, 2023 at 9:23 AM Aaron Gable  wrote:

> Hi all,
>
> I have a very different proposal for a Certificate Consumer membership
> criterion. I have no objection to any of the currently-proposed criteria;
> this could easily be in addition to them. What if we added:
>
> > (c) Applicants that qualify as Certificate Consumers must supply the
> following additional information:
> > - URL for its list of CA certificates that its membership-qualifying
> software product uses to validate the chain of trust from a TLS certificate
> to a CA certificate in such list; and
> > *- URL for the Certificate Transparency log which it operates within
>  and which accepts all submissions for TLS
> certificates which chain up to any CA certificate in the list above*; and
>
> Frankly, the Certificate Transparency ecosystem is in peril at the moment.
> With the recent shutdown of Sectigo's Mammoth
> <https://groups.google.com/a/chromium.org/g/ct-policy/c/Ebj2hhe5QYA/m/Cl7IW33UAgAJ>
> log and retirement of DigiCert's Yeti
> <https://groups.google.com/a/chromium.org/g/ct-policy/c/PVbs0ZMVeCI/m/Hf8kwuuAAQAJ>
> and Nessie
> <https://groups.google.com/a/chromium.org/g/ct-policy/c/MXLJFHdHdFo>
> logs, the already-tiny handful of organizations
> <https://googlechrome.github.io/CertificateTransparency/log_list.html> 
> operating
> usable CT logs is feeling even smaller. So what if Certificate Consumers --
> the organizations which benefit most from a diverse and robust ecosystem of
> CT logs -- were required to bring their own to the table? Running a CT log
> is clearly non-trivial, so such a requirement would effectively demonstrate
> that potential Certificate Consumer members are serious about operating for
> the good of the ecosystem in the long term.
>
> Thanks,
> Aaron
>
> On Fri, Sep 1, 2023 at 1:42 AM Martijn Katerbarg via Servercert-wg <
> servercert-wg@cabforum.org> wrote:
>
>> Ben,
>>
>>
>>
>> This seems like a good option. I’d say maybe we need to increase the
>> six-month period to twelve, otherwise within a six-month period there may
>> only be one F2F. Requiring attendance (remote or in-person) when there’s
>> only one F2F in the time-span could be hard if there’s a case of bad timing.
>>
>>
>>
>> Additionally, I’d like to request the addition of a further criterion
>> (although it’s related to the “publish how it decides to add or remove a CA
>> certificate from its list” item). I’d like to request we add a requirement
>> to:
>>
>>
>>
>>- Publish how a CA can apply for inclusion in its root store
>>
>>
>>
>> With this addition, I’d be happy to endorse
>>
>>
>>
>> Regards,
>>
>> Martijn
>>
>>
>>
>> *From:* Servercert-wg  *On Behalf Of
>> *Ben Wilson via Servercert-wg
>> *Sent:* Thursday, 31 August 2023 00:50
>> *To:* CA/B Forum Server Certificate WG Public Discussion List <
>> servercert-wg@cabforum.org>
>> *Subject:* [Servercert-wg] Proposed Revision of SCWG Charter
>>
>>
>>
>> CAUTION: This email originated from outside of the organization. Do not
>> click links or open attachments unless you recognize the sender and know
>> the content is safe.
>>
>>
>>
>> All,
>>
>>
>>
>> Thanks for your suggestions and recommendations. I think we are much
>> closer to an acceptable revision of the Server Certificate Working Group…

[Translators-l] Re: Ready for translation: Tech News #39 (2023)

2023-09-25 Thread Nick Wilson (Quiddity)
There was a last-minute change that is quite important.

Please, if you could remove "*or templates*" from the existing translations
of this entry:
https://meta.wikimedia.org/w/index.php?title=Tech/News/2023/39=prev=25658492
That would be greatly appreciated.

Direct translation link:
https://meta.wikimedia.org/w/index.php?title=Special:Translate=page-Tech%2FNews%2F2023%2F39=page

I will delay sending this edition for ~10 hours to give some time.
I'm sorry that this error wasn't detected earlier. Thanks again.

On Fri, Sep 22, 2023 at 3:34 PM Nick Wilson (Quiddity) <
nwil...@wikimedia.org> wrote:

> On Thu, Sep 21, 2023 at 6:18 PM Nick Wilson (Quiddity) <
> nwil...@wikimedia.org> wrote:
>
>> The latest tech newsletter is ready for early translation:
>> https://meta.wikimedia.org/wiki/Tech/News/2023/39
>>
>> Direct translation link:
>>
>> https://meta.wikimedia.org/w/index.php?title=Special:Translate=page-Tech%2FNews%2F2023%2F39=page
>>
>
> The text of the newsletter is now final.
>
> *Two items have been added* since yesterday.
>
> There won't be any more changes; you can translate safely. Thanks!
>
>


-- 
Nick "Quiddity" Wilson (he/him)
Community Relations Specialist
Wikimedia Foundation


[Translators-l] Re: Ready for translation: Tech News #39 (2023)

2023-09-22 Thread Nick Wilson (Quiddity)
On Thu, Sep 21, 2023 at 6:18 PM Nick Wilson (Quiddity) <
nwil...@wikimedia.org> wrote:

> The latest tech newsletter is ready for early translation:
> https://meta.wikimedia.org/wiki/Tech/News/2023/39
>
> Direct translation link:
>
> https://meta.wikimedia.org/w/index.php?title=Special:Translate=page-Tech%2FNews%2F2023%2F39=page
>

The text of the newsletter is now final.

*Two items have been added* since yesterday.

There won't be any more changes; you can translate safely. Thanks!


Re: [elixir-core:11542] Support using brackets to access an index of a list

2023-09-22 Thread Ben Wilson
> Personally, I don't see the harm in supporting it.  If someone's going to 
> abuse it, they'll abuse Enum.at()

The harm isn't for people who do it intentionally; the harm is for 
people who do it unintentionally. Index-based array access is so 
common in certain languages that it's one of the first things newbies from 
those languages will try in Elixir, and if it works then they'll just 
proceed writing loops like:

for i <- 0..length(users) do
  IO.puts users[i].name
end

In doing so they'll have reinforced a non-idiomatic pattern, and failed to 
learn something crucial about the language. Of course they can always hit 
the docs, find Enum.at, and rewrite it, but if you google "Elixir access 
list by index" you get Stack Overflow and forum posts that help you realize 
that you probably don't want to do that. If [] just works, they won't google 
till much later.

- Ben

On Friday, September 22, 2023 at 4:28:47 AM UTC-4 ...Paul wrote:

> On Thu, Sep 21, 2023 at 7:19 PM 'Justin Wood' via elixir-lang-core <
> elixir-l...@googlegroups.com> wrote:
>
>> Languages that support it via square brackets: Rust, Ruby, Javascript, 
>> Python, C, Julia.
>>
>> All of these languages (other than maybe Julia? I have not used it at 
>> all.) are actually using arrays and not lists. It is fairly natural to have 
>> easy index based lookup for arrays. After all, it is meant to be a 
>> contiguous block of memory with each element having a known size. At least 
>> for Rust and C, other languages may be more loose in terms of 
>> implementation.
>>
>
> That's the problem.  Ruby, Python, Javascript are where a lot of Elixir 
> devs are coming from (or languages that Elixir devs are often "context 
> switching" with on a daily basis) and these all implement lists that are 
> not actually arrays in the conventional sense.  I'm pretty sure, in all 
> three cases, that a list is actually an object that maintains the list 
> elements -- that might be an actual array of pointers, or it might not.  
> It's definitely not a low-level single allocated block of memory, because 
> these are weakly typed languages and all three support mixed-type values (an 
> array in C requires every entry be a fixed size; indexed access is simply 
> multiplying the index by the size of each element and using that as an 
> offset to the start of the memory block -- one of the reasons you can 
> easily segfault by doing that).
>
> Lists in Elixir are linked lists, but we don't regularly refer to them 
> that way.  We don't explicitly reinforce this implementation by requiring 
> things like "LinkedList.new(1, 2, foo)"; we use a "conventional array 
> notation" like "mylist = [1, 2, foo]" -- just like you'd see in Python, 
> Javascript, Ruby.  So it's really not that much of a surprise, coming from 
> those languages, that you start off with that kind of a syntax, and you 
> then expect the language to implement square-bracket-indexing, just like 
> those languages do.  We have Enum.at() to do this, so it's not like it's an 
> impossibility to actually make this work.  The question then is why the 
> syntax doesn't just wrap that function and 'make it work'.  I'm not sure I 
> buy Jose's argument about preventing "non-idiomatic code".  If someone is 
> going to forget that Enum.map exists, they're going to write that loop, and 
> they'll just use Enum.at if they can't get the square brackets to work.
>
> This difference from how those languages have wrapped array-like functions 
> into their syntax has popped up recently with my team, we've noticed people 
> forgetting that the length of a List in Elixir isn't "free information" 
> like it is in Ruby (where the length of a list is actually an attribute of 
> the object representing the list), so we've had to remind people not to do 
> things like "assert length(foo) > 0" (fortunately, a lot of this 
> inefficiency has been limited to our unit tests!) -- because they're coming 
> from Ruby where the length of a list is always "known", so it doesn't even 
> occur to them (and I admit, I've been guilty of forgetting this 
> occasionally as well) that Enum.count() and length() both actively loop 
> over an argument to compute the length, and aren't just retrieving some 
> privately cached data.
>
> As an aside, the one oddity around Access' square-bracket-index 
> implementation that still throws me every once in a while is why you can't 
> use square brackets to index a field on a struct.  If you have a Map, you 
> can do foo[key] but on a struct, foo[key] throws an exception.  Where it 
> gets annoying is that the way to make this work on a struct is to actually 
> use Map.get -- a Map function!  If a struct is a Map, why can't Access just 
> treat structs like Maps in this situation?  But that's a completely 
> different discussion, and I digress.  ;)
>
> Personally, I don't see the harm in supporting it.  If someone's going to 
> abuse it, they'll abuse Enum.at() to make it work anyway, just like 

[Translators-l] Ready for translation: Tech News #39 (2023)

2023-09-21 Thread Nick Wilson (Quiddity)
The latest tech newsletter is ready for early translation:
https://meta.wikimedia.org/wiki/Tech/News/2023/39

Direct translation link:
https://meta.wikimedia.org/w/index.php?title=Special:Translate=page-Tech%2FNews%2F2023%2F39=page

We plan to send the newsletter on Monday afternoon (UTC), i.e. Monday
morning PT. The existing translations will be posted on the wikis in
that language. Deadlines:
https://meta.wikimedia.org/wiki/Tech/News/For_contributors#The_deadlines

There will be more edits by Friday noon UTC but the existing content should
generally remain fairly stable. I will let you know on Friday in any
case.

Let us know if you have any questions, comments or concerns. As
always, we appreciate your help and feedback.

(If you haven't translated Tech News previously, see this email:
https://lists.wikimedia.org/pipermail/translators-l/2017-January/003773.html)


Re: [Evergreen-general] Evergreen-general Digest, Vol 38, Issue 40

2023-09-21 Thread Staci Wilson via Evergreen-general
I am also interested in learning more about the Circ Training Roundtable idea.

Staci Wilson (She/Her) | Why Pronouns Matter
Executive Director, Office of Learning Support  
Schedule a Meeting 
Accommodations | CVCC Library | CVCC LAC 
Catawba Valley Community College 
2550 US Highway 70 SE, Hickory, NC 28602 
828.327.7000 x 4525 | www.cvcc.edu 


-Original Message-
From: Evergreen-general  On 
Behalf Of evergreen-general-requ...@list.evergreen-ils.org
Sent: Thursday, September 21, 2023 8:44 AM
To: evergreen-general@list.evergreen-ils.org
Subject: Evergreen-general Digest, Vol 38, Issue 40

CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you recognize the sender and know the content 
is safe.


Send Evergreen-general mailing list submissions to
evergreen-general@list.evergreen-ils.org

To subscribe or unsubscribe via the World Wide Web, visit
http://list.evergreen-ils.org/cgi-bin/mailman/listinfo/evergreen-general

or, via email, send a message with subject or body 'help' to
evergreen-general-requ...@list.evergreen-ils.org

You can reach the person managing the list at
evergreen-general-ow...@list.evergreen-ils.org

When replying, please edit your Subject line so it is more specific than "Re: 
Contents of Evergreen-general digest..."


Today's Topics:

   1. Re: Evergreen-general Digest, Vol 38, Issue 35 (Terri Moser)


--

Message: 1
Date: Thu, 21 Sep 2023 07:43:36 -0500
From: Terri Moser 
To: evergreen-general@list.evergreen-ils.org
Subject: Re: [Evergreen-general] Evergreen-general Digest, Vol 38,
Issue 35
Message-ID:

Content-Type: text/plain; charset="utf-8"

I am very interested in hearing more about the Circ Training Roundtable idea.
Terri Moser
Circulation Supervisor
Neosho Newton County Library
   "Never let a problem to be solved become more important than a 
person to be loved."
   -Thomas S. Monson


On Wed, Sep 20, 2023 at 8:10 AM <
evergreen-general-requ...@list.evergreen-ils.org> wrote:

> Today's Topics:
>
>1. Re: Staff Training Roundtable (Angela Simmons-Jones)
>2. Re: Staff Training Roundtable (Joan Kranich)
>
>
> --
>
> Message: 1
> Date: Wed, 20 Sep 2023 09:07:09 -0400
> From: Angela Simmons-Jones 
> To: Evergreen Discussion Group
> 
> Subject: Re: [Evergreen-general] Staff Training Roundtable
> Message-ID:
> <
> calcdntaqbnt6podf1m7sc-2wuchfg-+qj+e1ytetl7kjkg0...@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> I would love to be a part of this!
>
> Angela Simmons-Jones
> Assistant Business Manager
> Houston County Public Library System
> (478) 987-3050 ext 6
>
>
> On Tue, Sep 19, 2023 at 6:15 PM Jennifer Pringle via Evergreen-general 
> < evergreen-general@list.evergreen-ils.org> wrote:
>
> > Hi everyone,
> >
> >
> >
> > Is there any interest from people who train library staff on 
> > Evergreen to get together to talk specifically about training?
> >
> >
> >
> > I don’t think we need an official interest group but I think it 
> > would be useful to talk about what training people are doing, what’s 
> > working, what isn’t working, what resources people are creating for 
> > their libraries, what software is being used to create resources, etc.
> >
> >
> >
> > If there is interest I’d be happy to coordinate something for 2024.
> >
> >
> >
> > Jennifer
> >
> > --
> >
> > Jennifer Pringle (she/her)
> >
> > Co-op Support - Training Lead
> >
> > BC Libraries Cooperative
> >
> > Toll-free: 1-888-848-9250
> >
> > Email:jennifer.prin...@bc.libraries.coop
> >
> > Website: http://bc.libraries.coop
> >
> >
> >
> > Gratefully acknowledging that I live and work in the unceded 
> > Traditional Territory of

Re: Ownership change for Mozilla CA Certificate Policy module

2023-09-20 Thread Kathleen Wilson
The module ownership has been updated.
https://wiki.mozilla.org/Modules/All#Governance_Sub_Modules

Best Regards,
Kathleen


On Thursday, September 14, 2023 at 9:15:19 AM UTC-7 Kathleen Wilson wrote:

All, I posted the following in Mozilla’s governance group 
<https://groups.google.com/a/mozilla.org/g/governance>.

Please feel free to comment either here in MDSP 
<https://groups.google.com/a/mozilla.org/g/dev-security-policy> or in 
Mozilla’s governance group.

~~ 

All,

I plan to hand ownership of the “Mozilla CA Certificate Policy 
<https://wiki.mozilla.org/Modules/Activities#Mozilla_CA_Certificate_Policy>” 
module over to Ben Wilson next week. In his role at Mozilla, Ben has become 
responsible for most of the updates to the Mozilla Root Store Policy (MRSP) 
<http://www.mozilla.org/projects/security/certs/policy/>. Ben has led the 
discussions and release of 4 versions of the MRSP: versions 2.7.1 
<https://blog.mozilla.org/security/2021/04/26/mrsp-v-2-7-1/>, 2.8 
<https://blog.mozilla.org/security/2022/05/23/upgrading-mrsp-to-v-2-8/>, 
2.8.1 
<https://wiki.mozilla.org/CA/Communications#February_2023_CA_Communication>, 
and 2.9 
<https://blog.mozilla.org/security/2023/09/13/version-2-9-of-the-mozilla-root-store-policy/>.
 
For the past couple of years Ben has represented Mozilla on all 
Certification Authority (CA) compliance bugs 
<https://wiki.mozilla.org/CA/Incident_Dashboard> related to the enforcement 
of the MRSP and other policies governing CAs. Additionally, Ben continues 
to represent Mozilla in the CA/Browser Forum, fostering synergy between the 
CA/Browser Forum Baseline Requirements and the MRSP.

There are two modules related to Mozilla’s CA Program 
<https://wiki.mozilla.org/CA> which govern the default set of certificates 
in Network Security Services (NSS) and distributed in Mozilla’s software 
products. They are:

1) CA Certificates <https://wiki.mozilla.org/Modules/All#CA_Certificates>

Description: Determine which root certificates should be included in 
Mozilla software products, which trust bits should be set on them, and 
which of them should be enabled for EV treatment. Evaluate requests from 
Certification Authorities (CAs) for inclusion or removal of root 
certificates, and for updating trust bit settings or enabling EV treatment 
for already included root certificates.

Owner: Ben Wilson – no change

Peer(s): Kathleen Wilson – no change

2) Mozilla CA Certificate Policy 
<https://wiki.mozilla.org/Modules/All#Mozilla_CA_Certificate_Policy>

Description: Definition and enforcement of policies governing Certification 
Authorities, their root certificates included in Mozilla software products, 
and intermediate and end-entity certificates within those CA hierarchies.

Owner: Kathleen Wilson -- Proposed Owner: Ben Wilson

Peer(s): Ben Wilson – Proposed Peer(s): Kathleen Wilson

Best Regards,

Kathleen


~~


-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/a206106a-5f4c-4bc8-b3aa-63c64d962876n%40mozilla.org.


Re: Audit Reminder Email Summary - Intermediate Certificates

2023-09-19 Thread Kathleen Wilson
 Forwarded Message 
Subject: Summary of September 2023 Outdated Audit Statements for 
Intermediate Certs
Date: Tue, 19 Sep 2023 12:00:30 +0000 (GMT)

CA Owner: Government of The Netherlands, PKIoverheid (Logius)
   - Certificate Name: DigiCert QuoVadis PKIoverheid Organisatie Services 
CA - 2023
SHA-256 Fingerprint: 
6E25C0044C7EBB30D01A4CC3D5733D734D06CD296A6823E63527F4182D528351
Standard Audit Period End Date (mm/dd/yyyy): 05/31/2022

   - Certificate Name: DigiCert QuoVadis PKIoverheid Burger CA - 2023
SHA-256 Fingerprint: 
66388EE649CBE920FD949FA9B77E2AA45B5DEC4120B8FFAB371B0C9C5E38C1C1
Standard Audit Period End Date (mm/dd/yyyy): 05/31/2022

   - Certificate Name: DigiCert QuoVadis PKIoverheid Organisatie Persoon CA 
- 2023
SHA-256 Fingerprint: 
C8C77ECF368D73214D50D88384464339E6F8E59F34B47E39E7965F4E5787CF1D
Standard Audit Period End Date (mm/dd/yyyy): 05/31/2022



-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/9cd016f0-64d6-4ede-b737-20ba1a13faf9n%40mozilla.org.


Re: Audit Reminder Email Summary - Root Certificates

2023-09-19 Thread Kathleen Wilson
 Forwarded Message 
Subject: Summary of September 2023 Audit Reminder Emails
Date: Tue, 19 Sep 2023 12:00:34 +0000 (GMT)

Mozilla: Audit Reminder
CA Owner: Certainly LLC
Root Certificates:
   Certainly Root R1**
   Certainly Root E1**

** Audit Case in the Common CA Database is under review for this root 
certificate.

Standard Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=2f10768c-71e0-43a9-847f-0acd9263003d
Standard Audit Period End Date: 2022-06-30
BR Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=7d14b81b-029d-4bf9-bf60-6ab8c1c4f994
BR Audit Period End Date: 2022-06-30
CA Comments: null



Mozilla: Audit Reminder
CA Owner: SSL.com
Root Certificates:
   SSL.com TLS ECC Root CA 2022
   SSL.com EV Root Certification Authority RSA R2
   SSL.com EV Root Certification Authority ECC
   SSL.com Root Certification Authority ECC
   SSL.com Root Certification Authority RSA
   SSL.com TLS RSA Root CA 2022
   SSL.com Client RSA Root CA 2022
   SSL.com Client ECC Root CA 2022
Standard Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=f8559916-8dae-43bb-a3f3-2c8b27315707
Standard Audit Period End Date: 2022-06-30
Standard Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=4abb6f0c-a72d-40ca-90a1-1c40b140f75a
Standard Audit Period End Date: 2022-06-30
BR Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=46752846-7d64-424d-bdf4-9aba8564e584
BR Audit Period End Date: 2022-06-30
EV Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=adbc8c7c-930a-4dee-bcbf-b927ae2c5b3a
EV Audit Period End Date: 2022-06-30
EV Audit: 
EV Audit Period End Date: 
CA Comments: null



Mozilla: Audit Reminder
CA Owner: China Financial Certification Authority (CFCA)
Root Certificates:
   CFCA EV ROOT
Standard Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=34ed32d4-5088-485b-af4e-52a6354528b1
Standard Audit Period End Date: 2022-07-31
BR Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=b7433894-7c3e-4c77-9e30-30896e187d4d
BR Audit Period End Date: 2022-07-31
EV Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=3f1ae8a1-e5ac-4dea-ad68-9d1e1c522d65
EV Audit Period End Date: 2022-07-31
CA Comments: null



Mozilla: Audit Reminder
CA Owner: GoDaddy
Root Certificates:
   Go Daddy Class 2 CA**
   Go Daddy Root Certificate Authority - G2**
   Starfield Class 2 CA**
   Starfield Root Certificate Authority - G2**

** Audit Case in the Common CA Database is under review for this root 
certificate.

Standard Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=fa4da3e3-2d4d-44f2-9c2d-42bd626512c8
Standard Audit Period End Date: 2022-06-30
BR Audit: https://bug1742657.bmoattachments.org/attachment.cgi?id=9296656
BR Audit Period End Date: 2022-06-30
EV Audit: https://bug1742657.bmoattachments.org/attachment.cgi?id=9296653
EV Audit Period End Date: 2022-06-30
CA Comments: null



Mozilla: Audit Reminder
CA Owner: IdenTrust Services, LLC
Root Certificates:
   IdenTrust Commercial Root CA 1**
   IdenTrust Public Sector Root CA 1

** Audit Case in the Common CA Database is under review for this root 
certificate.

Standard Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=b3c8a7f0-de54-485a-8d21-8425ac95a204
Standard Audit Period End Date: 2022-06-30
BR Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=46eb594d-2873-4175-ae1a-a559445859dc
BR Audit Period End Date: 2022-06-30
EV Audit: 
https://www.cpacanada.ca/generichandlers/CPACHandler.ashx?attachmentid=d85974dd-0d91-4ca0-98f3-8785e212d29b
EV Audit Period End Date: 2022-06-30
CA Comments: null




-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/de95c325-c52d-4070-adbf-a8e317819995n%40mozilla.org.


[Translators-l] Re: Ready for translation: Tech News #38 (2023)

2023-09-18 Thread Nick Wilson (Quiddity)
Thank you all for your help! It is deeply appreciated. The newsletter has
now been delivered (in 20 languages) to 1,089 pages.
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


[Wikitech-ambassadors] Tech News 2023, week 38

2023-09-18 Thread Nick Wilson (Quiddity)
The latest technical newsletter is now available at
https://meta.wikimedia.org/wiki/Special:MyLanguage/Tech/News/2023/38. Below
is the English version.
You can help write the next newsletter: Whenever you see information about
Wikimedia technology that you think should be distributed more broadly, you
can add it to the next newsletter at
https://meta.wikimedia.org/wiki/Tech/News/Next .
More information on how to contribute is available. You can also contact me
directly.
As always, feedback (on- or off-list) is appreciated and encouraged.
——
Other languages: Afrikaans, Bahasa Indonesia, Deutsch, English, Hausa,
Tiếng Việt, Türkçe, français, italiano, norsk bokmål, polski, suomi,
svenska, čeština, русский, українська, עברית, العربية, हिन्दी, বাংলা,
ಕನ್ನಡ, 中文, 日本語


Latest *tech news* from the Wikimedia technical community. Please tell other
users about these changes. Not all changes will affect you. Translations are
available.

*Recent changes*

   - MediaWiki now has a stable interface policy for frontend code that
   more clearly defines how we deprecate MediaWiki code and wiki-based code
   (e.g. gadgets and user scripts). Thank you to everyone who contributed to
   the content and discussions. [1] [2]

*Changes later this week*

   - The new version of MediaWiki will be on test wikis and MediaWiki.org
   from 19 September. It will be on non-Wikipedia wikis and some Wikipedias
   from 20 September. It will be on all wikis from 21 September (calendar).
   - All wikis will be read-only for a few minutes on September 20. This is
   planned at 14:00 UTC. [3]
   - All wikis will have a link in the sidebar that provides a short URL of
   that page, using the Wikimedia URL Shortener. [4]

*Future changes*

   - The team investigating the Graph Extension posted a proposal for
   reenabling it and they need your input.

*Tech news prepared by Tech News writers and posted by bot • Contribute •
Translate • Get help • Give feedback • Subscribe or unsubscribe.*
___
Wikitech-ambassadors mailing list -- wikitech-ambassadors@lists.wikimedia.org
To unsubscribe send an email to wikitech-ambassadors-le...@lists.wikimedia.org


Re: MRSP 2.9: Survey Results - August 2023 CA Communication and Survey

2023-09-18 Thread Ben Wilson
All,
The period for submitting survey responses has now concluded, and the
results are in the sheet linked below (in my previous email).
I will now summarize the comments and post them here.
Thanks,
Ben

On Fri, Sep 8, 2023 at 2:12 PM Ben Wilson  wrote:

> All,
>
> While survey responses are not due until Sept. 15th, here are the results
> we've received thus far.
>
>
> https://docs.google.com/spreadsheets/d/1xJ6VRs2R0tw3-QHoIRzIIO8MWWoqNs576KOxPKYsp3w/edit?usp=sharing
>
> Thanks,
>
> Ben
>

-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/CA%2B1gtaaZ5NqQ2UQp8ixig2AG4R5fMcOULt5aDYV_Sk4DXAje4A%40mail.gmail.com.


Blog Post About Mozilla Root Store Policy Version 2.9

2023-09-18 Thread Ben Wilson
 All,

Recently, I posted on the Mozilla Security Blog a brief overview of updates
to the Mozilla Root Store Policy (v 2.9).  See
https://blog.mozilla.org/security/2023/09/13/version-2-9-of-the-mozilla-root-store-policy/

Ben

-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/CA%2B1gtaaHbaiOa7vVkXFYry89R1L%3DMvMA%2BWk-uA%3DQWG1q3A7ASA%40mail.gmail.com.


Mozilla Blog Post About Root Store Policy Version 2.9

2023-09-18 Thread Ben Wilson
All,

Recently, I posted on the Mozilla Security Blog a brief overview of updates
to the Mozilla Root Store Policy (v 2.9).  See
https://blog.mozilla.org/security/2023/09/13/version-2-9-of-the-mozilla-root-store-policy/

Ben

-- 
You received this message because you are subscribed to the Google Groups 
"CCADB Public" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to public+unsubscr...@ccadb.org.
To view this discussion on the web visit 
https://groups.google.com/a/ccadb.org/d/msgid/public/CA%2B1gtaakov0TkMuiHchAYs2ao9-Gog%3DurpAFD9eoR_HyCaebbg%40mail.gmail.com.


Re: SerpentOS departs from Dlang

2023-09-16 Thread Adam Wilson via Digitalmars-d-announce
On Saturday, 16 September 2023 at 12:34:24 UTC, Richard (Rikki) 
Andrew Cattermole wrote:
Although I do want a write barrier on each struct/class, to 
allow for cyclic handling especially for classes.


How dare you bring the High Heresy of write barriers into D! I 
thought that it was well understood that even mentioning Write 
Barriers is a mortal sin against the Church of @nogc.


Kidding aside. If you do this, you might as well turn them on 
everywhere. After that it's an easy stroll to a non-blocking 
moving GC, which would end most complaints about the GC (nobody 
complains about the .NET GC anymore).





[nysbirds-l] Re: [nysbirds-l] Bell’s Vireo Welwyn Preserve Glen Cove LI

2023-09-16 Thread Jennifer Wilson-Pines
Also be aware that Welwyn has tons of poison ivy, particularly along those
paths.
Jennifer

On Sat, Sep 16, 2023, 2:35 PM Andrew Baksh  wrote:

> I do not see this cross posted.
>
> This morning Ashley Pichon photographed what was later determined to be
> Bell’s Vireo at Welwyn Preserve. The following, is from Ashley with regards
> on how to navigate the area.
>
> “ For those of you who don’t know Welwyn, the easiest way is to take the
> paved path down to the Sound. Go right alongside the water. When you see
> the end of the sea wall on your left, take the path on your right.  It will
> lead you to a semi paved path. Make a right on that and cross the bridge.
> About 100 feet on the right side of the path is where it was. Wear tick
> clothing or at least socks over your pants. The paths are cut back but we
> have our share of them.”
>
> Ashley also provided a link to aid in finding the location which I am
> sharing here: https://maps.app.goo.gl/GZjyRuU1YTt6XWZh9?g_st=iw
>
> Good job by Zach Schwartz-Weinstein who was spot on with his assessment on
> the ID. Congratulations to Ashley on an excellent find and documentation.
>
> Good luck to all who twitch and please remember to cross post to the
> various birding reporting mechanisms.
>
> A blessed Rosh Hashanah to all who observe.
>
> 
> “Emancipate yourself from mental slavery, none but ourselves could free
> our mind.” ~ Bob Marley
>
> “Tenderness and Kindness are not signs of weakness and despair but
> manifestations of strength and resolution” ~ Khalil Gibran
>
> "I prefer to be true to myself, even at the hazard of incurring the
> ridicule of others, rather than to be false, and to incur my own
> abhorrence." ~ Frederick Douglass
>
> 風 Swift as the wind
> 林 Quiet as the forest
> 火 Conquer like the fire
> 山 Steady as the mountain
> Sun Tzu   *The Art of War*
> 
>
> (\__/)
> (= '.'=)
>
> (") _ (")
>
> Sent from somewhere in the field using my mobile device!
>
>
> Andrew Baksh
> www.birdingdude.blogspot.com
> --
> *NYSbirds-L List Info:*
> Welcome and Basics 
> Rules and Information 
> Subscribe, Configuration and Leave
> 
> *Archives:*
> The Mail Archive
> 
> Surfbirds 
> ABA 
> *Please submit your observations to **eBird*
> *!*
> --
>

--

NYSbirds-L List Info:
http://www.NortheastBirding.com/NYSbirdsWELCOME.htm
http://www.NortheastBirding.com/NYSbirdsRULES.htm
http://www.NortheastBirding.com/NYSbirdsSubscribeConfigurationLeave.htm

ARCHIVES:
1) http://www.mail-archive.com/nysbirds-l@cornell.edu/maillist.html
2) http://www.surfbirds.com/birdingmail/Group/NYSBirds-L
3) http://birding.aba.org/maillist/NY01

Please submit your observations to eBird:
http://ebird.org/content/ebird/

--

Re: SerpentOS departs from Dlang

2023-09-16 Thread Adam Wilson via Digitalmars-d-announce

On Friday, 15 September 2023 at 21:49:17 UTC, ryuukk_ wrote:


Ikey seems to still want to use D, so the main driving factor 
is the contributors, i wonder what are the exact reasons, 
pseudo memory safety can't be the only reason




I would guess that the following is the bigger problem:

"we don't quite have the resources to also be an upstream for the 
numerous D packages we'd need to create and maintain to get our 
works over the finish line."


This has long been a chicken-egg problem for D. We need more 
packages to attract more users, but we need more users building 
packages before we can attract more users.


DIP1000 is also a bit of a marketing problem. We kinda-sorta 
promise that someday you'll be able to build memory-safe programs 
without a GC. We need to either push through and get it done, or 
admit we're not actually going to get there and cut it.


I know there are ref-counted languages, so theoretically it should 
be workable, but in a language as complex as D there may be 
dragons on the edges. In any case, we should not be marketing 
something we can't actually do.


[Translators-l] Re: Ready for translation: Tech News #38 (2023)

2023-09-15 Thread Nick Wilson (Quiddity)
On Thu, Sep 14, 2023 at 4:23 PM Nick Wilson (Quiddity) <
nwil...@wikimedia.org> wrote:

> The latest tech newsletter is ready for early translation:
> https://meta.wikimedia.org/wiki/Tech/News/2023/38
>
> Direct translation link:
>
> https://meta.wikimedia.org/w/index.php?title=Special:Translate=page-Tech%2FNews%2F2023%2F38=page
>
>
The text of the newsletter is now final.

*Two items have been added* since yesterday.

There won't be any more changes; you can translate safely. Thanks!
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


[grpc-io] Re: StreamObservers in multi-thread Java application

2023-09-15 Thread 'Terry Wilson' via grpc.io
Yes, you should not have two threads calling onNext() on the same 
StreamObserver. If you do, you can get messages interleaved with each other 
and most likely just produce garbage. 

You need to make sure that your application acquires a lock before calling 
any of the methods on StreamObserver. Note that you CAN concurrently call 
the separate StreamObserver instances you have for incoming an outgoing 
messages. 
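A minimal sketch of the locking pattern described above. Note this uses a local stand-in interface rather than the real io.grpc.stub.StreamObserver (so the example is self-contained without the grpc-stub dependency); the wrapper class and names are illustrative, not part of the gRPC API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Stand-in for io.grpc.stub.StreamObserver<T> (same method shape).
interface StreamObserver<T> {
    void onNext(T value);
    void onCompleted();
}

// Serializes all calls to a delegate observer so that multiple producer
// threads can safely share one outbound stream.
class SynchronizedObserver<T> implements StreamObserver<T> {
    private final StreamObserver<T> delegate;
    private final Object lock = new Object();

    SynchronizedObserver(StreamObserver<T> delegate) { this.delegate = delegate; }

    @Override public void onNext(T value) {
        synchronized (lock) { delegate.onNext(value); }
    }

    @Override public void onCompleted() {
        synchronized (lock) { delegate.onCompleted(); }
    }
}

public class Main {
    // Four threads write 1000 messages through one synchronized observer;
    // without the lock, concurrent onNext() calls could interleave.
    public static List<Integer> run() throws InterruptedException {
        List<Integer> received = new ArrayList<>(); // guarded by the wrapper's lock
        StreamObserver<Integer> out = new SynchronizedObserver<>(new StreamObserver<Integer>() {
            @Override public void onNext(Integer v) { received.add(v); }
            @Override public void onCompleted() { }
        });
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 1000; i++) {
            final int v = i;
            pool.submit(() -> out.onNext(v));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run().size());
    }
}
```

With the real grpc-java API the same wrapper applies unchanged; only the interface import differs.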

On Wednesday, September 6, 2023 at 9:41:25 PM UTC-7 Quyen Pham Ngoc wrote:

> I use grpc in Java multithread application. The documentation says: "Since 
> individual StreamObservers are not thread-safe, if multiple threads will be 
> writing to a StreamObserver concurrently, the application must synchronize 
> calls." 
>
> Is it true that with every onNext request of each StreamObservers I have 
> to be thread-safe? What do I do when using StreamObserver in a multi-thread 
> application environment? What happens if thread-safety is not guaranteed?
>
> Sincerely thank you for your help
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/8129bb36-f694-4002-a224-a468dde883c7n%40googlegroups.com.


Re: [go-cd] Unable to run Microsoft Upgrade assistant

2023-09-15 Thread Chad Wilson
Which GoCD version are you using?

Can you share a screenshot from your task configuration? e.g something like
the below

[image: image.png]


Generally speaking you shouldn't need to do a "cmd /c" yourself, as GoCD
does this implicitly, and it's possible that doing so is causing
something to go wrong with the argument parsing/quoting. Just a guess
though.

Try configuring the task without it, e.g something like the below.

*command:* upgrade-assistant.exe
*arguments:*
upgrade
D:\Upgrade\GOPIPE\WEBAPP\SampleAPI\SampleEfAPI.csproj
--operation
Inplace
--targetFramework
net6.0
--non-interactive

-Chad

On Fri, Sep 15, 2023 at 5:58 PM Nitesh Kumar  wrote:

> Hi Team,
>
> Any help will be much appreciated. looking forward to hearing from you.
>
> On Thu, Sep 14, 2023 at 10:11 PM Nitesh Kumar 
> wrote:
>
>> Hi Aswanth,
>>
>> I have tried this option but no luck. Can you please suggest anything else?
>>
>> On Thu, Sep 14, 2023 at 4:05 AM 'Ashwanth Kumar' via go-cd <
>> go-cd@googlegroups.com> wrote:
>>
>>> Maybe try adding, "DOTNET_UPGRADEASSISTANT_TELEMETRY_OPTOUT" environment
>>> variable to "1"?
>>>
>>> Ref -
>>> https://learn.microsoft.com/en-us/dotnet/core/porting/upgrade-assistant-telemetry?tabs=console#disclosure
>>>
>>>
>>>
>>>
>>> On Wed, 13 Sept 2023 at 16:55, nitesh...@gmail.com <
>>> niteshcse...@gmail.com> wrote:
>>>
 Hi team,

 I am trying to run Microsoft upgrade assistant but its failing with
 below error
 Same command i am running through command prompt, it works as expected.

 I am running it in non-interactive mode so that Microsoft Upgrade
 Assistant doesn't wait for user input on the console. However, when run as a GoCD
 task it still waits for user input and fails.

 can you please help me with that


 [go] Task: cmd /c "upgrade-assistant.exe upgrade
 D:\Upgrade\GOPIPE\WEBAPP\SampleAPI\SampleEfAPI.csproj --operation Inplace
 --targetFramework net6.0 --non-interactive"

 Initializing and loading extensions...
 Telemetry
 Upgrade Assistant collects usage data in order to help us improve your
 experience. The data is collected by Microsoft and shared with the
 community.
 You can opt-out of telemetry by setting the
 DOTNET_UPGRADEASSISTANT_TELEMETRY_OPTOUT environment variable to '1' or
 'true'
 using your favorite shell.
 Read more about Upgrade Assistant telemetry:
 https://aka.ms/upgrade-assistant-telemetry
 Read more about .NET CLI Tools telemetry:
 https://aka.ms/dotnet-cli-telemetry
 Press any key to continue...
 System.InvalidOperationException: Cannot read keys when either
 application does
 not have a console or when console input has been redirected. Try
 Console.Read.
 at System.ConsolePal.ReadKey(Boolean intercept)
 at

 Microsoft.UpgradeAssistant.Cli.Startup.FirstUseStartup.StartupAsync(Cancellation
 Token cancellationToken) in

 D:\a\_work\1\s\src\Experiments\UpgradeAssistant\cli\Startup\FirstUseStartup.cs:l
 ine 37
 at

 Microsoft.UpgradeAssistant.Cli.Flow.Steps.Upgrade.StartupFlowStep.ValidateUserIn
 putAsync(IFlowContext context, CancellationToken cancellationToken) in

 D:\a\_work\1\s\src\Experiments\UpgradeAssistant\cli\Flow\Steps\Startup\StartupFl
 owStep .cs:line 34
 at Spectre.Console.Flow.FlowRunner.RunAsync(CancellationToken
 cancellationToken) in

 D:\a\_work\1\s\src\Experiments\UpgradeAssistant\spectre.flow\Flow\FlowRunner.cs:
 line 83

 --
 You received this message because you are subscribed to the Google
 Groups "go-cd" group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to go-cd+unsubscr...@googlegroups.com.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/go-cd/007cd4c6-639a-430f-bf49-53d6649b22d5n%40googlegroups.com
 
 .

>>>
>>>
>>> --
>>>
>>> Ashwanth Kumar / ashwanthkumar.in
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "go-cd" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to go-cd+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/go-cd/CAD9m7CyUa7YbuSBB4gNjytKd%2BxccqXJy5cHVRMf2RUqJ9gv47Q%40mail.gmail.com
>>> 
>>> .
>>>
>>
>>
>> --
>> Thanks
>>
>> Nitesh kumar
>>
>
>
> --
> Thanks
>
> Nitesh kumar
>
> --
> You received this message because you are subscribed to the Google Groups
> "go-cd" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to 

[Translators-l] Ready for translation: Tech News #38 (2023)

2023-09-14 Thread Nick Wilson (Quiddity)
The latest tech newsletter is ready for early translation:
https://meta.wikimedia.org/wiki/Tech/News/2023/38

Direct translation link:
https://meta.wikimedia.org/w/index.php?title=Special:Translate=page-Tech%2FNews%2F2023%2F38=page

We plan to send the newsletter on Monday afternoon (UTC), i.e. Monday
morning PT. The existing translations will be posted on the wikis in
that language. Deadlines:
https://meta.wikimedia.org/wiki/Tech/News/For_contributors#The_deadlines

There will be more edits by Friday noon UTC but the existing content should
generally remain fairly stable. I will let you know on Friday in any
case.

Let us know if you have any questions, comments or concerns. As
always, we appreciate your help and feedback.

(If you haven't translated Tech News previously, see this email:
https://lists.wikimedia.org/pipermail/translators-l/2017-January/003773.html
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


Re: [Servercert-wg] Draft ballot SCXX- Fall 2023 Clean-up

2023-09-14 Thread Clint Wilson via Servercert-wg
Hi Inigo,

These changes look good to me as well (though the rearranging of P-Label did 
get me for a second there, thinking it had been removed) and I’d be willing to 
endorse if needed.

Cheers,
-Clint


> On Sep 13, 2023, at 4:45 AM, Inigo Barreira via Servercert-wg 
>  wrote:
> 
> Hello all,
>  
> We are looking for feedback on the following draft ballot as well as 
> endorsers. 
> Thank you,
>  
> Purpose of Ballot SCXX: Fall 2023 Cleanup
> This ballot proposes updates to the Baseline Requirements for the Issuance 
> and Management of Publicly-Trusted Certificates related to the issues and 
> typos that have happened due to the different updates of the document. 
>  
> Notes: 
> The majority of these issues have been documented in GitHub and have 
> therefore been labeled as cleanup, and they have been the basis for this update.
> Some have been provided by emails to the CABF lists and included in this 
> reviewed version because they were typos.
>  
>  
>  
> The following motion has been proposed by Iñigo Barreira of Sectigo. And, 
> endorsed by  of  and  of .
>  
> — Motion Begins —
>  
> This ballot modifies the “Baseline Requirements for the Issuance and 
> Management of Publicly-Trusted Certificates” (“Baseline Requirements”), based 
> on Version 2.0.1.
>  
> MODIFY the Baseline Requirements as specified in the following Pull Request: 
> Fall 2023 clean up by barrini · Pull Request #460 · cabforum/servercert 
> (github.com) 
>  
> — Motion Ends —
>  
>  
> This ballot proposes a Final Maintenance Guideline. The procedure for 
> approval of this ballot is as follows:
>  
> Discussion (13+ days)
> • Start time: DD-MM-YYYY 12:00:00 UTC
> • End time: DD-MM-YYYY 12:00:00 UTC
>  
> Vote for approval (7 days)
> • Start time: DD-MM-YYYY 12:00:00 UTC
> • End time: DD-MM-YYYY 12:00:00 UTC
> ___
> Servercert-wg mailing list
> Servercert-wg@cabforum.org 
> https://lists.cabforum.org/mailman/listinfo/servercert-wg



smime.p7s
Description: S/MIME cryptographic signature
___
Servercert-wg mailing list
Servercert-wg@cabforum.org
https://lists.cabforum.org/mailman/listinfo/servercert-wg


RE: [neonixie-l] Re: An Introduction,

2023-09-14 Thread Michail Wilson
Good job.

I make similar projects for Price stats.

Michail Wilson
206-920-6312

From: neonixie-l@googlegroups.com  On Behalf Of 
Roman Spark
Sent: Thursday, September 14, 2023 9:08 AM
To: neonixie-l 
Subject: [neonixie-l] Re: An Introduction,

Hello,
My Nixie clocks can show not only the time but also any other information that 
is transmitted to them via the serial port. You connect the clock with a USB 
cable to your computer and send it whatever you need. I mainly use it to show 
the price of Bitcoin and the results of football matches online. However, in 
general, the possibilities are not limited to this; you can also show the 
temperature, pressure, ping, fps, and much, much more. So I thought it would be 
fun to do something new on two clocks at once. I've never done anything like 
this before.

The result is attached. Data is taken once a minute from 
https://theskylive.com/voyager1-info
Good luck with your project,
Roman
середа, 13 вересня 2023 р. о 11:25:43 UTC+3 Craig Garnett пише:
Thanks for all the info,

Currently I'm only using 2 chips (and opto isolators) to multiplex the display 
whereas going static looks like it will increase the complexity quite a bit.
I'll see how bright I can get the tubes without the current getting excessive.
I've also found some neat little PIR modules that can be easily incorporated 
into the design.

Craig
On Tuesday, 12 September 2023 at 14:41:43 UTC+1 Robert G. Schaffrath wrote:
On Tuesday, 12 September 2023 at 00:07:04 UTC+1 gregebert wrote:
I'm not a fan of multiplexing nixies because of the additional current that can 
lead to shorter lifespan.

Me neither as I can hear the whine of the vibrating segments in my old B-7971 
clock I built in 1979 that is multiplexed. As for shortened life, I do not know 
what other manufacturers did but the Rodan GR-111pa tubes I have were designed 
to be multiplexed. The "a" variant were for multiplexed use and the non-"a" 
direct drive from what I understand from the spec sheet. The board I pulled my 
GR-111pa's from was definitely designed for multiplex operation as all the tube 
segments were wired in parallel with the anodes separate. They do work fine as 
direct drive tubes. I assume they have a more robust design to stand up to the 
demands of multiplexing.
--
You received this message because you are subscribed to the Google Groups 
"neonixie-l" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to 
neonixie-l+unsubscr...@googlegroups.com<mailto:neonixie-l+unsubscr...@googlegroups.com>.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/neonixie-l/e4638ac5-4b65-4856-8caf-727b8f9c1721n%40googlegroups.com<https://groups.google.com/d/msgid/neonixie-l/e4638ac5-4b65-4856-8caf-727b8f9c1721n%40googlegroups.com?utm_medium=email_source=footer>.

-- 
You received this message because you are subscribed to the Google Groups 
"neonixie-l" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to neonixie-l+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/neonixie-l/MW2PR0102MB3435A2C6A59980003D3A2EF382F7A%40MW2PR0102MB3435.prod.exchangelabs.com.


[Callers] Re: New Terminology Question

2023-09-14 Thread Dale Wilson via Contra Callers
Jeff says:
I'll bite that bullet: callers generally shouldn't be calling a dolphin hey
at a regular evening dance.

And Amy says:

It always helps to read the room first. Got a bunch of beginners? Call
simpler dances, at least the first half. Explain them well. Don't call a
complicated move that will discourage them. You want them to return, right?
Baby steps, then walking, then jogging, then dolphin heys.

So I say:

Exactly right, Amy.  I always have a challenging dance on tap ready to call
toward the middle of the second half of the evening.   If there are too
many beginners (including our perpetual beginners) when the time comes, I
simply skip the challenging dance.  If the walk-thru doesn't go well, I'm
ready with an easy replacement.  But when it works our experienced dancers
love conquering a [small] challenge -- at least that's what they tell me
later.

Dale
___
Contra Callers mailing list -- contracallers@lists.sharedweight.net
To unsubscribe send an email to contracallers-le...@lists.sharedweight.net


Ownership change for Mozilla CA Certificate Policy module

2023-09-14 Thread Kathleen Wilson


All, I posted the following in Mozilla’s governance group 
<https://groups.google.com/a/mozilla.org/g/governance>.

Please feel free to comment either here in MDSP 
<https://groups.google.com/a/mozilla.org/g/dev-security-policy> or in 
Mozilla’s governance group.

~~ 

All,

I plan to hand ownership of the “Mozilla CA Certificate Policy 
<https://wiki.mozilla.org/Modules/Activities#Mozilla_CA_Certificate_Policy>” 
module over to Ben Wilson next week. In his role at Mozilla, Ben has become 
responsible for most of the updates to the Mozilla Root Store Policy (MRSP) 
<http://www.mozilla.org/projects/security/certs/policy/>. Ben has led the 
discussions and release of 4 versions of the MRSP: versions 2.7.1 
<https://blog.mozilla.org/security/2021/04/26/mrsp-v-2-7-1/>, 2.8 
<https://blog.mozilla.org/security/2022/05/23/upgrading-mrsp-to-v-2-8/>, 
2.8.1 
<https://wiki.mozilla.org/CA/Communications#February_2023_CA_Communication>, 
and 2.9 
<https://blog.mozilla.org/security/2023/09/13/version-2-9-of-the-mozilla-root-store-policy/>.
 
For the past couple of years Ben has represented Mozilla on all 
Certification Authority (CA) compliance bugs 
<https://wiki.mozilla.org/CA/Incident_Dashboard> related to the enforcement 
of the MRSP and other policies governing CAs. Additionally, Ben continues 
to represent Mozilla in the CA/Browser Forum, fostering synergy between the 
CA/Browser Forum Baseline Requirements and the MRSP.

There are two modules related to Mozilla’s CA Program 
<https://wiki.mozilla.org/CA> which govern the default set of certificates 
in Network Security Services (NSS) and distributed in Mozilla’s software 
products. They are:

1) CA Certificates <https://wiki.mozilla.org/Modules/All#CA_Certificates>

Description: Determine which root certificates should be included in 
Mozilla software products, which trust bits should be set on them, and 
which of them should be enabled for EV treatment. Evaluate requests from 
Certification Authorities (CAs) for inclusion or removal of root 
certificates, and for updating trust bit settings or enabling EV treatment 
for already included root certificates.

Owner: Ben Wilson – no change

Peer(s): Kathleen Wilson – no change

2) Mozilla CA Certificate Policy 
<https://wiki.mozilla.org/Modules/All#Mozilla_CA_Certificate_Policy>

Description: Definition and enforcement of policies governing Certification 
Authorities, their root certificates included in Mozilla software products, 
and intermediate and end-entity certificates within those CA hierarchies.

Owner: Kathleen Wilson – Proposed Owner: Ben Wilson

Peer(s): Ben Wilson – Proposed Peer(s): Kathleen Wilson

Best Regards,

Kathleen


~~


-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/08c6a57e-cde3-4f45-bffd-d909f068c740n%40mozilla.org.


[Callers] Re: New Terminology Question

2023-09-14 Thread Dale Wilson via Contra Callers
On Thu, Sep 14, 2023 at 3:11 AM Michael Fuerst via Contra Callers <
contracallers@lists.sharedweight.net> wrote:

> dancers should need to understand the names of a dozen or so basic figures
> (such as F, allemande, promenade, star, chain, right and left, circle,
> shoulders round, hey, and maybe several more)
>
Agreed, although I fear "several" might be more than you would expect.

> and that callers should need only basic figures to teach any dance.
>
I disagree with this.  How would you teach, say, a dolphin hey using only
basic figures?

Dale
___
Contra Callers mailing list -- contracallers@lists.sharedweight.net
To unsubscribe send an email to contracallers-le...@lists.sharedweight.net


[grpc-io] 1 Week Until gRPConf 2023 - Almost Sold Out!

2023-09-13 Thread 'Terry Wilson' via grpc.io


Hello gRPC Community!

We are one week away from gRPConf 2023 and tickets are close to selling 
out. If you haven’t had a chance to register yet, please do so soon. 
Tickets are still available for only $99.

We’re also excited to share that we have added a limited number of “Meet a 
Maintainer” Sessions. These sessions are a chance to have a one-on-one meeting 
with one of the maintainers of gRPC in place of watching one of the gRPConf 
talks. You can sign up by completing this form. 
<https://forms.gle/rtyGHeBDVZRmc8zw5>

All the details are below. We’re looking forward to seeing everyone on 
September 20th!

Register Here 
<https://events.linuxfoundation.org/grpc-conf/register/#grpc-conf-rates> 

September 20, 2023

Google Cloud Campus - Sunnyvale, CA

⏰Doors open at 8:30AM

Meet a Maintainer <https://forms.gle/rtyGHeBDVZRmc8zw5>

View the Schedule 
<https://events.linuxfoundation.org/grpc-conf/program/schedule/>


Thank you,

Terry Wilson and the gRPC Team

gRPConf 2023 Sunnyvale, CA | Register Now <https://youtu.be/pvI9S1O3Mk0>


-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/e3aea7ae-c6d1-4caf-8815-e82f3a0a5194n%40googlegroups.com.


[cabf_validation] Draft Minutes of Validation Subcommittee - Sept. 7, 2023

2023-09-11 Thread Ben Wilson via Validation
*Validation Subcommittee Meeting of September 7, 2023*

*Notewell: *

Read by Corey Bonnell

*Attendance: *

Aaron Gable - ISRG, Aaron Poulsen - Amazon Trust Services, Andrea Holland -
VikingCloud, Aneta Wojtczak - Microsoft, Antonis Eleftheriadis - HARICA,
Ben Wilson - Mozilla, Bhat Abhishek - eMudhra, Bruce Morton - Entrust,
Clint Wilson – Apple, Corey Bonnell - DigiCert, Corey Rasmussen - OATI,
Dimitris Zacharopoulos - HARICA, Doug Beattie - GlobalSign, Dustin
Hollenback - Microsoft, Gurleen Grewal - Google Trust Services, Inigo
Barreira - Sectigo, Joe Ramm - OATI, Johnny Reading - GoDaddy, Keshava
Nagaraju - eMudhra, Li-Chun Chen - Chunghwa Telecom, Martijn Katerbarg -
Sectigo, Michelle Coon - OATI, Nargis Mannan - VikingCloud, Nate Smith -
GoDaddy, Paul van Brouwershaven - Entrust, Q Misell (Speaker/Invited
Guest), Rebecca Kelley - Apple, Rollin Yu - TrustAsia, Roman Fischer -
SwissSign, Scott Rea - eMudhra, Tobias Josefowitz - Opera, Wayne Thayer -
Fastly, Wendy Brown – U.S. Federal PKI

*Previous Minutes:*

Minutes for the August 10th meeting prepared by Aneta Wojtczak were
circulated August 23rd, and they were approved.

Minutes for the August 24th meeting, prepared by Andrea Holland, were
circulated on September 6th and will be approved at the next meeting.

*Agenda Items:*

· Q Misell’s presentation on ACME for Onion/Tor

· Review of To-Do List from February 2023


*Q Misell’s presentation: “ACME for Onions” and CAA for Onion Domain Names*

See https://magicalcodewit.ch/cabf-2023-09-07-slides/

Q is working on defining a CAA extension for .onion domains.

See https://datatracker.ietf.org/doc/draft-ietf-acme-onion/ and
https://acmeforonions.org/

This will allow automated issuance of certificates to Tor hidden services
and make .onion domains act like the DNS from a WebPKI perspective.

Implementing with CAA provides consistency and reduces the risk of
misissuance.

Q reviewed how it works through the various layers of encrypted data.

.onion domains aren't in the DNS, so standard CAA records can't be used.
Instead, CAA records are encoded in the BIND zone file format in the second
layer hidden service descriptor.

A new field in the first layer hidden service descriptor signals that there
are CAA records in the second layer descriptor.
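
For concreteness, an illustrative sketch of what such embedded records could
look like (record syntax follows RFC 8659's zone-file CAA format; the exact
descriptor encoding is defined in draft-ietf-acme-onion, and the CA domain
here is a placeholder):

```
; CAA records carried in the second-layer hidden service descriptor,
; written in BIND zone-file syntax: flags, tag, value
caa 128 issue "example-ca.example"
caa 128 iodef "mailto:security@example.onion"
```

A CA reading the descriptor would interpret these the same way it interprets
DNS CAA records for a conventional domain.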


*Reviewed To-Do list from February 2023*

See
https://lists.cabforum.org/pipermail/validation/2023-February/001860.html

We discussed replacing "Applicant" with "Subscriber" in item 9 of section
4.9.1.1. Aaron G. expressed concerns about language in the parentheses
(i.e. "no longer legally permitted"). For example, it’s unclear what
happens when a registrant for a gTLD fails to renew its assignment with
ICANN.  How much detail do we want to get into in the parenthetical in this
section? And some examples don’t fall into the bucket of “no longer legally
permitted”.  Aaron was also concerned about why a certificate should have
to be revoked if the domain is still valid in the DNS. Aaron might file an
issue in GitHub, or Corey may file one for the overall issue.

We also discussed replacing “Applicant” or “Subscriber” with
“Applicant/Subscriber” in some places in section 9.6.3. Dimitris proposed
that we split up the requirements between those applicable to either
“Applicants” or “Subscribers”. Wayne asked that we clarify the renewal
scenario and whether the entity is an applicant. Is the relationship
transactional (on a per-certificate basis), or does it depend on the
relationship between the CA and the entity? (In the BR definition of
“Applicant” we say that they are an applicant even when they are renewing a
certificate.)  Aaron G. said that in the ACME protocol, a subscriber is
someone who has agreed to the subscriber agreement, which you do when you
create an account, and who has had a certificate issued to them – then you
are a subscriber forever more.  But also, when you are obtaining new
certificates over a ten-year period, you are both a subscriber and an
applicant because you are applying for a new certificate now. Ben was
concerned that we don’t have sufficient consensus on how these concepts
should be expressed, and therefore it was too early to address them in the
upcoming “Subscriber Agreement” ballot that he and Dustin are working on.
Corey suggested that this issue be added to the agenda for an upcoming
meeting, such as a server certificate working group meeting or the
face-to-face.

Meeting adjourned.
___
Validation mailing list
Validation@cabforum.org
https://lists.cabforum.org/mailman/listinfo/validation


[Wikitech-ambassadors] Tech News 2023, week 37

2023-09-11 Thread Nick Wilson (Quiddity)
The latest technical newsletter is now available at
https://meta.wikimedia.org/wiki/Special:MyLanguage/Tech/News/2023/37. Below
is the English version.
You can help write the next newsletter: Whenever you see information about
Wikimedia technology that you think should be distributed more broadly, you
can add it to the next newsletter at
https://meta.wikimedia.org/wiki/Tech/News/Next .
More information on how to contribute is available. You can also contact me
directly.
As always, feedback (on- or off-list) is appreciated and encouraged.
——
Other languages: Bahasa Indonesia, Bahasa Melayu, Deutsch, English, Hausa,
Tiếng Việt, Türkçe, français, italiano, magyar, norsk bokmål, polski,
svenska, čeština, русский, українська, עברית, العربية, हिन्दी, বাংলা, ಕನ್ನಡ,
中文, 日本語

Latest *tech news* from the Wikimedia technical community. Please tell other
users about these changes. Not all changes will affect you. Translations are
available.

*Recent changes*

   - ORES, the revision evaluation service, is now using a new open-source
   infrastructure on all wikis except for English Wikipedia and Wikidata.
   These two will follow this week. If you notice any unusual results from
   the Recent Changes filters that are related to ORES (for example,
   "Contribution quality predictions" and "User intent predictions"),
   please report them. [1]
   - When you are logged in on one Wikimedia wiki and visit a different
   Wikimedia wiki, the system tries to log you in there automatically. This
   has been unreliable for a long time. You can now visit the login page to
   make the system try extra hard. If you feel that made logging in better or
   worse than it used to be, your feedback is appreciated. [2]
   

*Changes later this week*

   - The new version of MediaWiki will be on test wikis and MediaWiki.org
   from 12 September. It will be on non-Wikipedia wikis and some Wikipedias
   from 13 September. It will be on all wikis from 14 September (calendar).
   - The Technical Decision-Making Forum Retrospective team invites anyone
   involved in the technical field of Wikimedia projects to sign up for and
   join one of their listening sessions on 13 September. Another date will
   be scheduled later. The goal is to improve the technical decision-making
   processes.
   - As part of the changes for the Better diff handling of paragraph splits
   wishlist proposal, the inline switch widget on diff pages is being rolled
   out this week to all wikis. The inline switch will allow viewers to toggle
   between a unified inline and a two-column diff wikitext format. [3]

*Future changes*

   - All wikis will be read-only for a few minutes on 20 September. This is
   planned for 14:00 UTC.
   
   More information will be published in Tech News and will also be posted on
   individual wikis in the coming weeks. [4]
   
   - The Enterprise API is 

[Translators-l] Re: Ready for translation: Tech News #37 (2023)

2023-09-11 Thread Nick Wilson (Quiddity)
Thank you all for your help! It is deeply appreciated. The newsletter has
now been delivered (in 19 languages) to 1,086 pages.
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


[Translators-l] Re: Ready for translation: Tech News #37 (2023)

2023-09-08 Thread Nick Wilson (Quiddity)
On Thu, Sep 7, 2023 at 3:58 PM Nick Wilson (Quiddity) 
wrote:

> The latest tech newsletter is ready for early translation:
> https://meta.wikimedia.org/wiki/Tech/News/2023/37
>
> Direct translation link:
>
> https://meta.wikimedia.org/w/index.php?title=Special:Translate=page-Tech%2FNews%2F2023%2F37=page
>

The text of the newsletter is now final.

Nothing has changed since yesterday.

There won't be any more changes; you can translate safely. Thanks!
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


MRSP 2.9: Survey Results - August 2023 CA Communication and Survey

2023-09-08 Thread Ben Wilson
All,

While survey responses are not due until Sept. 15th, here are the results
we've received thus far.

https://docs.google.com/spreadsheets/d/1xJ6VRs2R0tw3-QHoIRzIIO8MWWoqNs576KOxPKYsp3w/edit?usp=sharing

Thanks,

Ben

-- 
You received this message because you are subscribed to the Google Groups 
"dev-security-policy@mozilla.org" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dev-security-policy+unsubscr...@mozilla.org.
To view this discussion on the web visit 
https://groups.google.com/a/mozilla.org/d/msgid/dev-security-policy/CA%2B1gtaaKTKgX5efpPBqD4%2ByLM-5KKEd0PGgoV2KWAnb99rAZ7A%40mail.gmail.com.


[Translators-l] Ready for translation: Tech News #37 (2023)

2023-09-07 Thread Nick Wilson (Quiddity)
The latest tech newsletter is ready for early translation:
https://meta.wikimedia.org/wiki/Tech/News/2023/37

Direct translation link:
https://meta.wikimedia.org/w/index.php?title=Special:Translate=page-Tech%2FNews%2F2023%2F37=page

We plan to send the newsletter on Monday afternoon (UTC), i.e. Monday
morning PT. The existing translations will be posted on the wikis in
that language. Deadlines:
https://meta.wikimedia.org/wiki/Tech/News/For_contributors#The_deadlines

There will be more edits by Friday noon UTC but the existing content should
generally remain fairly stable. I will let you know on Friday in any
case.

Let us know if you have any questions, comments or concerns. As
always, we appreciate your help and feedback.

(If you haven't translated Tech News previously, see this email:
https://lists.wikimedia.org/pipermail/translators-l/2017-January/003773.html)
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


Graceful shutdown of DotNet Ignite nodes running in Kubernetes pods

2023-09-06 Thread Raymond Wilson
If you have an Apache Ignite deployment on Kubernetes with Linux containers
using the DotNet C# Ignite client, how do you trigger graceful shutdown of
the node?

Kubernetes emits a SIGTERM signal to the pod when it wants to remove it.
That signal is relayed to the process running in the pod identified in the
Docker configuration.

In our YAML file we start the Ignite node in the pod like this:

command: ["dotnet"]
args: ["SomeNode.dll"]

When it comes time to stop that Ignite node Kubernetes emits the SIGTERM to
the pod. It appears the 'dotnet' context catches the SIGTERM and it is not
relayed into the SomeNode.dll logic.

We have several means of catching the SIGTERM configured in our application
startup logic:

AppDomain.CurrentDomain.ProcessExit += (s, e) => SigTermHandler();
AssemblyLoadContext.Default.Unloading += ctx => SigTermHandler();
Console.CancelKeyPress += (s, e) => SigTermHandler();

However the SigTermHandler is never called in our application logic, which
means the node is then hard killed with a SIGKILL after the termination
grace period configured for the pod.

If you have a similar tool chain and deployment context, how are you
ensuring the Ignite node implementation gets the SIGTERM and shuts down
gracefully?

Thanks,
Raymond.

-- 
<http://www.trimble.com/>
Raymond Wilson
Trimble Distinguished Engineer, Civil Construction Software (CCS)
11 Birmingham Drive | Christchurch, New Zealand
raymond_wil...@trimble.com

<https://worksos.trimble.com/?utm_source=Trimble_medium=emailsign_campaign=Launch>


[grpc-io] Last Call: gRPConf 2023 Early Bird Registration Ends Today 

2023-09-06 Thread 'Terry Wilson' via grpc.io


Hello gRPC Community!


Today is the last day to take advantage of $50 Early Bird Registration for 
gRPConf 2023. Sign up today and join us for a full day of gRPC talks and 
discussion. Below are all the details you need to know!


✅Register Here 
<https://events.linuxfoundation.org/grpc-conf/register/#grpc-conf-rates>

September 20, 2023

Google Cloud Campus - Sunnyvale, CA

⏰Doors open at 8:30AM

View the Schedule 
<https://events.linuxfoundation.org/grpc-conf/program/schedule/>


We are only two weeks away from the event and pricing will increase to $99 
starting tomorrow.


Thank you,

Terry Wilson and the gRPC Team

gRPConf 2023 Sunnyvale, CA | Register Now <https://youtu.be/pvI9S1O3Mk0>


-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/25540447-c3dd-4b6d-b4bc-db62aa86eb67n%40googlegroups.com.


Re: [go-cd] GoCD Agent in Arch Linux container - Can't start any job

2023-09-06 Thread Chad Wilson
Hi Jacques - that looks like a bug of some kind in the way the work assigned
to the agent is being serialized by the server back to the agent. I have
never seen that before, though.

Could you report this at https://github.com/gocd/gocd/issues along with the
relevant versions of your server, agents, Java, etc.? Other information,
such as when this started happening and whether it happens for all jobs or
just a single git material, would also be useful.

I also wonder whether you have any special configuration of Java locales on
your server and/or agent machines/containers?
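
For what it's worth, here is a self-contained sketch of that kind of failure
mode (purely illustrative; it assumes the mismatch comes from `DateFormat`
locale data differing between the serializing and parsing JVMs, which is a
guess rather than a confirmed diagnosis, and the class name is made up):

```java
import java.text.DateFormat;
import java.text.ParseException;
import java.util.Date;
import java.util.Locale;

public class DateRoundTrip {
    public static void main(String[] args) throws ParseException {
        // Gson's default Date adapter delegates to DateFormat, whose output
        // depends on the JVM's locale data, e.g. "Sep 2, 2023, 11:09:23 AM".
        DateFormat fmt = DateFormat.getDateTimeInstance(
                DateFormat.MEDIUM, DateFormat.MEDIUM, Locale.US);
        Date now = new Date();
        String text = fmt.format(now);
        // Round-tripping works only when both JVMs agree on the exact
        // pattern; e.g. newer JDKs emit a narrow no-break space (U+202F)
        // before AM/PM, which an older JDK's parser rejects.
        Date back = fmt.parse(text);
        // Same JVM on both sides: parses fine (sub-second precision lost).
        System.out.println(Math.abs(back.getTime() - now.getTime()) < 1000);
    }
}
```

On a single JVM this prints `true`; the bug scenario would be the server
formatting under one JDK's locale data and the agent parsing under
another's.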

-Chad

On Wed, Sep 6, 2023 at 10:37 PM Jacques Progin 
wrote:

> Dear team,
>
> A pipeline has been triggered by a Git commit, and the GoCD server shows
> the first step as assigned to the agent, then waiting for agent again, then
> assigned again, in a never ending loop. But the console log stays empty.
> The agents view shows the agent as idle.
>
> The agent is running in an Arch Linux docker container, and the
> go-agent.log contains the following entries appearing in loop:
>
> 2023-09-06 14:03:35,976 ERROR [scheduler-1] AgentController:99 - [Agent
> Loop] Error occurred during loop:
> com.google.gson.JsonSyntaxException: Failed parsing 'Sep 2, 2023, 11:09:23
> AM' as Date; at path
> $.assignment.materialRevisions.revisions[0].modifications[0].modifiedTime
> at
> com.google.gson.internal.bind.DateTypeAdapter.deserializeToDate(DateTypeAdapter.java:90)
> at
> com.google.gson.internal.bind.DateTypeAdapter.read(DateTypeAdapter.java:75)
> at
> com.google.gson.internal.bind.DateTypeAdapter.read(DateTypeAdapter.java:46)
> at
> com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.readIntoField(ReflectiveTypeAdapterFactory.java:212)
> at
> com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$FieldReflectionAdapter.readField(ReflectiveTypeAdapterFactory.java:433)
> at
> com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:393)
> at
> com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.read(TypeAdapterRuntimeTypeWrapper.java:40)
> at
> com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:82)
> at
> com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:61)
> at
> com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.readIntoField(ReflectiveTypeAdapterFactory.java:212)
> at
> com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$FieldReflectionAdapter.readField(ReflectiveTypeAdapterFactory.java:433)
> at
> com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:393)
> at
> com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.read(TypeAdapterRuntimeTypeWrapper.java:40)
> at
> com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:82)
> at
> com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:61)
> at
> com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.readIntoField(ReflectiveTypeAdapterFactory.java:212)
> at
> com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$FieldReflectionAdapter.readField(ReflectiveTypeAdapterFactory.java:433)
> at
> com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:393)
> at
> com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.readIntoField(ReflectiveTypeAdapterFactory.java:212)
> at
> com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$FieldReflectionAdapter.readField(ReflectiveTypeAdapterFactory.java:433)
> at
> com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:393)
> at
> com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.readIntoField(ReflectiveTypeAdapterFactory.java:212)
> at
> com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$FieldReflectionAdapter.readField(ReflectiveTypeAdapterFactory.java:433)
> at
> com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:393)
> at com.google.gson.TypeAdapter.fromJsonTree(TypeAdapter.java:299)
> at
> com.thoughtworks.go.remote.adapter.RuntimeTypeAdapterFactory$1.read(RuntimeTypeAdapterFactory.java:258)
> at com.google.gson.TypeAdapter$1.read(TypeAdapter.java:204)
> at com.google.gson.Gson.fromJson(Gson.java:1227)
> at com.google.gson.Gson.fromJson(Gson.java:1137)
> at com.google.gson.Gson.fromJson(Gson.java:1047)
> at com.google.gson.Gson.fromJson(Gson.java:982)
> at
> com.thoughtworks.go.agent.RemotingClient.getWork(RemotingClient.java:77)
> at
> 

Re: Help packaging ArrayFire

2023-09-06 Thread B. Wilson
Adam Faiz  wrote:
> On 8/20/23 19:35, B. Wilson wrote:
> > Hello Guix,
> > 
> > Knee deep in CMake hell here and would appreciate a helping hand. ArrayFire
> > build is defeating me:
> > 
> > CMake Error at 
> > /gnu/store/ygab8v4ci9iklaykapq52bfsshpvi8pw-cmake-minimal-3.24.2/share/cmake-3.24/Modules/ExternalProject.cmake:3269
> >  (message):
> >   error: could not find git for fetch of af_forge-populate
> > Call Stack (most recent call first):
> >   
> > /gnu/store/ygab8v4ci9iklaykapq52bfsshpvi8pw-cmake-minimal-3.24.2/share/cmake-3.24/Modules/ExternalProject.cmake:4171
> >  (_ep_add_update_command)
> >   CMakeLists.txt:13 (ExternalProject_Add)
> > 
> > Apparently, some of the build dependencies get automatically cloned, but I'm
> > unable to make heads or tails of how to work around this. The
> > `af_forge-populate` makes it look like it's related to Forge, but "ArrayFire
> > also requires Forge but this is a submodule and will be checkout during
> > submodule initilization stage. AF_BUILD_FORGE cmake option has to be turned 
> > on
> > to build graphics support," so I'm stumped.
> > 
> > I need this soon for a project and am willing to pay someone to take this 
> > over.
> > 
> > Here are the official build instructions: 
> > https://github.com/arrayfire/arrayfire/wiki/Build-Instructions-for-Linux
> > 
> > In fact, there's a 2016 thread where Dennis Mungai claims to have 
> > successfully
> > gotten ArrayFire packaged on Guix: https://issues.guix.gnu.org/23055. 
> > However,
> > that appears to have never resulted in a patch.
> > 
> > Thoughts?
> > 
> I'm willing to work on this, it's a very interesting challenge.

Beautiful! Keep me posted, and let me know if there's anything I can help with.

Cheers,
B. Wilson



Re: Cache write synchonization with replicated caches

2023-09-04 Thread Raymond Wilson
As a follow up to this we have produced tooling which allows us to detect
and correct the problem. We are not entirely comfortable running control.sh
on production nodes (because, well, it's production :) ).

We have observed dozens of cases of this kind of corruption on two separate
Ignite grid instances. I believe we have seen enough occurrences of this
issue to indicate there is an undiscovered consistency bug in Ignite with
replicated caches (perhaps only when using the PrimarySync cache
synchronization mode, and possibly related to failure-mode handling when
Ignite nodes are terminated abruptly).
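
If the PrimarySync suspicion holds, one experiment worth trying on a test
grid (a sketch only, using Ignite's standard Spring XML cache configuration;
the cache name is a placeholder) is switching the affected replicated caches
to FULL_SYNC, which blocks each write until every backup has acknowledged it:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
  <property name="name" value="someReplicatedCache"/>
  <property name="cacheMode" value="REPLICATED"/>
  <!-- PRIMARY_SYNC (the default) completes once the primary copy is
       written; FULL_SYNC also waits for all backup acknowledgements. -->
  <property name="writeSynchronizationMode" value="FULL_SYNC"/>
</bean>
```

The trade-off is higher write latency, but if the missing backup copies
disappear under FULL_SYNC, that would localize the problem to PRIMARY_SYNC's
asynchronous backup propagation.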

We do not have a reproducer unfortunately.

Raymond.


On Tue, Aug 22, 2023 at 7:16 PM Николай Ижиков  wrote:

> Hello, Raymond.
>
> Usually, “experimental” means a feature that may change in the future.
> This statement usually relates to the feature’s public API.
>
> > Does this imply risk if run against a production environment grid?
>
> It depends.
> As for read repair, CHECK_ONLY is a read-only mode and can’t harm your data.
> Other modes that fix data inconsistency have been used in our production and
> there are no known issues.
>
>
> On Aug 22, 2023, at 03:12, Raymond Wilson wrote:
>
> Thanks for the pointer to the read repair facility added in Ignite 2.14.
>
> Unfortunately the .WithReadRepair() extension does not seem to be present
> in the Ignite C# client.
>
> This means we either need to use the experimental control.sh support, or
> improve our tooling to effectively do the same. I am curious why this is
> labelled as experimental? Does this imply risk if run against a production
> environment grid?
>
> Raymond.
>
>
> On Mon, Aug 21, 2023 at 5:50 PM Николай Ижиков 
> wrote:
>
>> Hello.
>>
>> I don’t know the cause of your issue.
>> But, we have feature to overcome it [1]
>>
>> Consistency repair can be run from control.sh.
>>
>> ```
>> ./bin/control.sh --enable-experimental
>> ...
>>   [EXPERIMENTAL]
>>   Check/Repair cache consistency using Read Repair approach:
>> control.(sh|bat) --consistency repair cache-name partition
>>
>> Parameters:
>>   cache-name  - Cache to be checked/repaired.
>>   partition   - Cache's partition to be checked/repaired.
>>
>>   [EXPERIMENTAL]
>>   Cache consistency check/repair operations status:
>> control.(sh|bat) --consistency status
>>
>>   [EXPERIMENTAL]
>>   Finalize partitions update counters:
>> control.(sh|bat) --consistency finalize
>> ```
>>
>> It seems the docs for the command are not complete.
>> It also accepts a strategy argument so you can manage your repair actions
>> more accurately.
>> Try to run:
>>
>> ```
>> ❯ ./bin/control.sh --enable-experimental --consistency repair --cache
>> default --strategy CHECK_ONLY --partitions 1,2,3,…your_partitions_list...
>> ```
>>
>> Available strategies with good description can be found in sources [2]
>>
>>
>> [1] https://ignite.apache.org/docs/latest/key-value-api/read-repair
>> [2]
>> https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/cache/ReadRepairStrategy.java
>>
>>
>>
>> On Aug 21, 2023, at 07:46, Raymond Wilson wrote:
>>
>> [Replying onto correct thread]
>>
>> As a follow up to this email, we are starting to collect evidence that
>> replicated caches within our Ignite grid are failing to replicate values in
>> a small number of cases.
>>
>> In the cases we observe so far, with a cluster of 4 nodes participating
>> in a replicated cache, only one node reports having the correct value for a
>> key, and the other three report having no value for that key.
>>
>> The documentation is pretty opinionated about the
>> CacheWriteSynchronizationMode not being impactful with respect to
>> consistency for replicated caches. As noted below, we use PrimarySync (the
>> default) for these caches, which would suggest a potential failure mode
>> preventing the backup copies obtaining their copy once the primary copy has
>> been written.
>>
>> We are continuing to investigate and would be interested in any
>> suggestions you may have as to the likely cause.
>>
>> Thanks,
>> Raymond.
>>
>> On Thu, Jul 27, 2023 at 12:38 PM Raymond Wilson <
>> raymond_wil...@trimble.com> wrote:
>>
>>> Hi,
>>>
>>> I have a query regarding data safety of replicated caches in the case of
>>> hard failure of the compute resource but where the storage resource is
>>> available when the node return

[Translators-l] Re: Ready for translation: Tech News #36 (2023)

2023-09-04 Thread Nick Wilson (Quiddity)
Thank you all for your help! It is deeply appreciated. The newsletter has
now been delivered (in 19 languages) to 1,086 pages.
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


[Wikitech-ambassadors] Tech News 2023, week 36

2023-09-04 Thread Nick Wilson (Quiddity)
The latest technical newsletter is now available at
https://meta.wikimedia.org/wiki/Special:MyLanguage/Tech/News/2023/36. Below
is the English version.
You can help write the next newsletter: Whenever you see information about
Wikimedia technology that you think should be distributed more broadly, you
can add it to the next newsletter at
https://meta.wikimedia.org/wiki/Tech/News/Next .
More information on how to contribute is available. You can also contact me
directly.
As always, feedback (on- or off-list) is appreciated and encouraged.
——
Other languages: Bahasa Indonesia, Deutsch, English, Tiếng Việt, Türkçe,
dansk, français, norsk bokmål, polski, suomi, svenska, čeština, русский,
українська, עברית, العربية, हिन्दी, বাংলা, ಕನ್ನಡ, 中文, 日本語


Latest *tech news* from the Wikimedia technical community. Please tell other
users about these changes. Not all changes will affect you. Translations are
available.

*Recent changes*

   - EditInSequence, a feature that allows users to edit pages faster on
   Wikisource, has been moved to a Beta Feature based on community feedback.
   To enable it, you can navigate to the beta features tab in Preferences.
   [1]
   - As part of the changes for the Generate Audio for IPA and Audio links
   that play on click wishlist proposals, the inline audio player mode of
   Phonos has been deployed to all projects. [2]
   - There is a new option for Administrators when they are changing the
   usergroups for a user, to add the user’s user page to their watchlist. This
   works both via Special:UserRights and via the API. [3]
   
   - One new wiki has been created:
  - a Wikipedia in Talysh (w:tly:) [4]
  

*Problems*

   - The LoginNotify extension had not been sending notifications since
   January. It has now been fixed, so
   going forward, you may see notifications for failed login attempts, and
   successful login attempts from a new device. [5]
   

*Changes later this week*

   - The new version of MediaWiki will be on test wikis and MediaWiki.org
   from 5 September. It will be on non-Wikipedia wikis and some Wikipedias
   from 6 September. It will be on all wikis from 7 September (calendar).
   - Starting on Wednesday, a new set of Wikipedias will get "Add a link"
   (Eastern Mari Wikipedia, Maori Wikipedia, Minangkabau Wikipedia,
   Macedonian Wikipedia, Malayalam Wikipedia, Mongolian Wikipedia, Marathi
   Wikipedia, Western Mari Wikipedia, Malay Wikipedia, Maltese Wikipedia,
   Mirandese Wikipedia, Erzya Wikipedia, Mazanderani Wikipedia, Nāhuatl
   Wikipedia, Neapolitan Wikipedia, Low German 

[African Wikimedians] [Wiki Loves Africa] 2024 Theme Suggestions

2023-09-04 Thread Wilson Oluoha
Dear Wikimedians,

The search for a theme for the *2024 Wiki Loves Africa* contest is still
ongoing.

Please click the link below to share your thoughts on what you think the
next Wiki Loves Africa theme should be:

WLA 2024 Theme Suggestions


Kindly recall that Wiki Loves Africa (www.wikilovesafrica.net) is an annual
photography competition that takes
place across Africa. It was designed to take back the visual narrative by
re-imagining contemporary society and cultural heritage across Africa through
the eyes of Africa’s photographers.

Wiki Loves Africa images have collectively - since metrics began in 2016 -
been viewed 1 billion times.

Started in 2014, the 9 editions so far have resulted in the contribution of
102,068 images to Wikipedia's media library, Wikimedia Commons, by over
10,380 photographers from 55 countries across the continent. The images
have a life beyond the competition: they are placed in articles on
Wikipedia, and have thus been viewed over 1 billion times since impact
monitoring began in 2016, with 25 million views in May 2022 alone.


   - 2014 WLA: *Cuisine*
   - 2015 WLA: *Cultural Fashion and Adornment*
   - 2016 WLA: *Music and Dance*
   - 2017 WLA: *People at Work*
   - 2019 WLA: *Play!*
   - 2020 WLA: *Africa on the Move*
   - 2021 WLA: *Health + Wellness*
   - 2022 WLA: *Home + Habitat*
   - 2023 WLA: *Climate and Weather*
   - *2024: ???*


Warm Regards


___
African-Wikimedians mailing list -- african-wikimedians@lists.wikimedia.org
To unsubscribe send an email to african-wikimedians-le...@lists.wikimedia.org


[Wikimedia-l] [Wiki Loves Africa] 2024 Theme Suggestions

2023-09-04 Thread Wilson Oluoha
___
Wikimedia-l mailing list -- wikimedia-l@lists.wikimedia.org, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and 
https://meta.wikimedia.org/wiki/Wikimedia-l
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikimedia-l@lists.wikimedia.org/message/S7GC26TETTT4BG6XDJZAB3CM25XODGXO/
To unsubscribe send an email to wikimedia-l-le...@lists.wikimedia.org

[jira] [Commented] (IGNITE-20299) Creating a cache with an unknown data region name causes total unrecoverable failure of the grid

2023-09-01 Thread Raymond Wilson (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17761417#comment-17761417
 ] 

Raymond Wilson commented on IGNITE-20299:
-

Thanks for confirming you can reproduce it.

In an earlier experiment I tried deleting the cache folder on my local system, 
and this did fix it (locally). Performing the same operation in our dev 
environment, which contained data, did not recover the grid.

I will see if I can enhance the reproducer further.
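The failure mode described in this thread suggests that cache creation is accepted before the configured data region is checked, leaving partial state (a cache folder) behind. A minimal, self-contained sketch of the up-front validation the reporter argues is missing (this is illustrative Python, not Ignite code; `Grid`, `get_or_create_cache`, and the region names are hypothetical):

```python
# Illustrative sketch: validate a cache's data region *before* registering
# the cache or touching persistence, so a bad request leaves no state behind.

class UnknownDataRegionError(ValueError):
    pass

class Grid:
    def __init__(self, data_regions):
        self.data_regions = set(data_regions)
        self.caches = {}

    def get_or_create_cache(self, name, data_region="default"):
        # Reject the configuration cleanly instead of failing the exchange
        # process after the cache has already been partially created.
        if data_region not in self.data_regions:
            raise UnknownDataRegionError(
                f"cache {name!r}: unknown data region {data_region!r}")
        return self.caches.setdefault(name, {"region": data_region})

grid = Grid(data_regions=["default", "persisted"])
grid.get_or_create_cache("good", "persisted")
try:
    grid.get_or_create_cache("bad", "does-not-exist")
except UnknownDataRegionError as e:
    print(e)
```

Because validation happens before any registration, the failed call leaves no trace of the "bad" cache, which is the property the restart scenario above shows Ignite currently lacks.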


> Creating a cache with an unknown data region name causes total unrecoverable 
> failure of the grid
> 
>
> Key: IGNITE-20299
> URL: https://issues.apache.org/jira/browse/IGNITE-20299
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.15
> Environment: Observed in:
> C# client and grid running on Linux in a container
> C# client and grid running on Windows
>  
>Reporter: Raymond Wilson
>Priority: Major
>
> Using the Ignite C# client.
>  
> Given a running grid, having a client (and perhaps server) node in the grid 
> attempt to create a cache using a DataRegionName that does not exist in the 
> grid causes immediate failure in the client node with the following log 
> output. 
>  
> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Completed 
> partition exchange [localNode=15122bd7-bf81-44e6-a548-e70dbd9334c0, 
> exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion 
> [topVer=15, minorTopVer=0], evt=NODE_FAILED, evtNode=TcpDiscoveryNode 
> [id=9d5ed68d-38bb-447d-aed5-189f52660716, 
> consistentId=9d5ed68d-38bb-447d-aed5-189f52660716, addrs=ArrayList 
> [127.0.0.1], sockAddrs=null, discPort=0, order=8, intOrder=8, 
> lastExchangeTime=1693112858024, loc=false, ver=2.15.0#20230425-sha1:f98f7f35, 
> isClient=true], rebalanced=false, done=true, newCrdFut=null], 
> topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Exchange timings 
> [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
> resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], stage="Waiting in 
> exchange queue" (14850 ms), stage="Exchange parameters initialization" (2 
> ms), stage="Determine exchange type" (3 ms), stage="Exchange done" (4 ms), 
> stage="Total time" (14859 ms)]
> 2023-08-27 17:08:48,522 [44] INF [ImmutableClientServer]   Exchange longest 
> local stages [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
> resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
> 2023-08-27 17:08:48,524 [44] INF [ImmutableClientServer]   Finished exchange 
> init [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], crd=false]
> 2023-08-27 17:08:48,525 [44] INF [ImmutableClientServer]   
> AffinityTopologyVersion [topVer=15, minorTopVer=0], evt=NODE_FAILED, 
> evtNode=9d5ed68d-38bb-447d-aed5-189f52660716, client=true]
> Unhandled exception: Apache.Ignite.Core.Cache.CacheException: class 
> org.apache.ignite.IgniteCheckedException: Failed to complete exchange process.
>  ---> Apache.Ignite.Core.Common.IgniteException: Failed to complete exchange 
> process.
>  ---> Apache.Ignite.Core.Common.JavaException: javax.cache.CacheException: 
> class org.apache.ignite.IgniteCheckedException: Failed to complete exchange 
> process.
>         at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1272)
>         at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:2278)
>         at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2242)
>         at 
> org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutObject(PlatformProcessorImpl.java:643)
>         at 
> org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to complete 
> exchange process.
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.createExchangeException(GridDhtPartitionsExchangeFuture.java:3709)
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendExchangeFailureMessage(GridDhtPartitionsExchangeFuture.java:3737)
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.G

[Yahoo-eng-team] [Bug 2033932] [NEW] Add support for OVN MAC_Binding aging

2023-09-01 Thread Terry Wilson
Public bug reported:

OVN added support for aging out MAC_Binding entries [1][2]. Without this
feature, the MAC_Bindings table can grow indefinitely.

[1] 
https://github.com/ovn-org/ovn/commit/1a947dd3073628d2f2655f46ee7d3db62ed15b55
[2] 
https://github.com/ovn-org/ovn/commit/cecac71c0e49f9bfb6595bc03a13f3f7644dd268
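The idea behind the feature is simple: entries that have not been refreshed within an age threshold are evicted, which bounds table growth. A self-contained sketch of that eviction logic (illustrative Python only; `MacBindingTable` and its method names are hypothetical, not OVN or Neutron code — in OVN itself the threshold is configured via a logical-router option introduced in the linked commits):

```python
import time

# Illustrative sketch: a MAC-binding table that evicts entries older than
# an age threshold, bounding its growth the way MAC_Binding aging does.

class MacBindingTable:
    def __init__(self, age_threshold_s):
        self.age_threshold_s = age_threshold_s
        self.bindings = {}  # ip -> (mac, last_seen timestamp)

    def learn(self, ip, mac, now=None):
        # Learning (or re-learning) a binding refreshes its timestamp.
        self.bindings[ip] = (mac, now if now is not None else time.monotonic())

    def expire(self, now=None):
        # Drop every binding not refreshed within the threshold.
        now = now if now is not None else time.monotonic()
        stale = [ip for ip, (_, seen) in self.bindings.items()
                 if now - seen > self.age_threshold_s]
        for ip in stale:
            del self.bindings[ip]
        return stale
```

Without the `expire` step, every distinct IP ever seen stays in `bindings` forever, which is exactly the indefinite growth the bug report describes.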

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2033932

Title:
  Add support for OVN MAC_Binding aging

Status in neutron:
  New

Bug description:
  OVN added support for aging out MAC_Binding entries [1][2]. Without
  this feature, the MAC_Bindings table can grow indefinitely.

  [1] 
https://github.com/ovn-org/ovn/commit/1a947dd3073628d2f2655f46ee7d3db62ed15b55
  [2] 
https://github.com/ovn-org/ovn/commit/cecac71c0e49f9bfb6595bc03a13f3f7644dd268

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2033932/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Translators-l] Re: Ready for translation: Tech News #36 (2023)

2023-09-01 Thread Nick Wilson (Quiddity)
On Thu, Aug 31, 2023 at 6:43 PM Nick Wilson (Quiddity) <
nwil...@wikimedia.org> wrote:

> The latest tech newsletter is ready for early translation:
> https://meta.wikimedia.org/wiki/Tech/News/2023/36
>
> Direct translation link:
>
> https://meta.wikimedia.org/w/index.php?title=Special:Translate&group=page-Tech%2FNews%2F2023%2F36&action=page
>

The text of the newsletter is now final.

*One item has been added* since yesterday.

There won't be any more changes; you can translate safely. Thanks!
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


[sig-policy] Re: New version: prop-148 Clarification - Leasing of Resources is not Acceptable

2023-08-31 Thread Paul Wilson
It's very good to have your feedback for the secretariat, thanks Tsurumaki-san.

Please feel free to let us know of any other improvements that we can help 
with, any time.

Paul.

Get BlueMail for Android
On Sep 1, 2023, at 13:16, Satoru Tsurumaki <stsur...@gmail.com> wrote:

Dear Colleagues,

I am Satoru Tsurumaki from Japan Open Policy Forum Steering Team.

I would like to express my gratitude to Policy-Sig Chair/Co-Chair and
the APNIC Secretariat.
In the past, policy proposals were posted to the SIG-Policy ML in batches
after the proposal deadline, four weeks before the OPM, but the Secretariat
now posts proposed policies to the SIG-Policy ML sequentially, without
waiting for the deadline. This has given us enough time to translate the
proposals and introduce them to our community for discussion.

Then I would like to share key feedback in our community for prop-148,
based on a meeting we organised on 30th Aug to discuss these proposals.

Many neutral opinions were expressed about this proposal.

(comment details)
 - I agree with the concept of prohibiting leases, but the proposal
   should not affect the party that has leased the IPv4 addresses from another party.

 - JPNIC has a text equivalent to prohibiting leasing in its guidelines.
   If this proposal becomes a consensus, it is necessary to consider
   including similar text in the address policy depending on its content.

Regards,

Satoru Tsurumaki / JPOPF Steering Team

On Sat, Aug 5, 2023 at 2:00, Shaila Sharmin wrote:

 Dear SIG members,

 A new version of the proposal "prop-148-v004: Clarification - Leasing of 
Resources is not Acceptable" has been sent to the Policy SIG for review.

 Information about earlier versions is available from:

 http://www.apnic.net/policy/proposals/prop-148

 You are encouraged to express your views on the proposal:

   - Do you support or oppose the proposal?
   - Is there anything in the proposal that is not clear?
   - What changes could be made to this proposal to make it more effective?

 Please find the text of the proposal below.

 Regards,
 Bertrand, Shaila, and Anupam
 APNIC Policy SIG Chairs





 prop-148-v004: Clarification - Leasing of Resources is not Acceptable




 Proposer: Jordi Palet Martinez (jordi.pa...@theipv6company.com)
Amrita Choudhury (amritachoudh...@ccaoi.in)
Fernando Frediani (fhfred...@gmail.com)


 1. Problem statement
 
 RIRs have been conceived to manage, allocate, and assign resources
 according to need, in such a way that a LIR/ISP has addresses to
 directly connect its customers based on justified need. Addresses are
 not, therefore, a property with which to trade or do business.

 When the justification of the need disappears or changes, for whatever
 reason, the expected course would be to return said addresses to the
 RIR; otherwise, according to Sections 4.1. (“The original basis of the
 delegation remains valid”) and 4.1.2. (“Made for a specific purpose that
 no longer exists, or based on information that is later found to be
 false or incomplete”) of the policy manual, APNIC is not required to
 renew the license. An alternative is to transfer these resources using
 the appropriate transfer policy.

 If the leasing of addresses is authorized, contrary to the original
 spirit of the policies and the very existence of the RIRs, the link
 between connectivity and addresses disappears, which also poses security
 problems, since, in the absence of connectivity, the resource holder who
 has received the license to use the addresses does not have immediate
 physical control to manage/filter them, which can cause damage to the
 entire community.

 Therefore, it should be made explicit in the Policies that the Internet
 Resources should not be leased “per se”, but only as part of a
 connectivity service, as it was documented with the original need
 justification.

 The existing APNIC policies are not explicit about this; however,
 current policies do not regard the leasing of addresses as acceptable
 if the addresses are not an integral part of a connectivity service.
 Specifically, the justification of the need would not be valid for those
 blocks of addresses whose purpose is not to directly connect customers
 of an LIR/ISP, and consequently the renewal of the annual license for
 the use of the addresses would not be valid either. Sections 3.2.6.
 (Address ownership), 3.2.7. (Address stockpiling), and 3.2.8.
 (Reservations not supported) of the policy manual are key to this
 issue, but an explicit clarification is required.

 2. Objective of policy change
 -
 Despite the fact that the intention in this regard underlies the entire
 Policy Manual text and is thus applied to justify the need for
 resources, this proposal makes this aspect explicit by adding the
 appropriate clarifying text.


 3. Situation in other regions
 

[Translators-l] Ready for translation: Tech News #36 (2023)

2023-08-31 Thread Nick Wilson (Quiddity)
The latest tech newsletter is ready for early translation:
https://meta.wikimedia.org/wiki/Tech/News/2023/36

Direct translation link:
https://meta.wikimedia.org/w/index.php?title=Special:Translate&group=page-Tech%2FNews%2F2023%2F36&action=page

We plan to send the newsletter on Monday afternoon (UTC), i.e. Monday
morning PT. The existing translations will be posted on the wikis in
that language. Deadlines:
https://meta.wikimedia.org/wiki/Tech/News/For_contributors#The_deadlines

There will be more edits by Friday noon UTC but the existing content should
generally remain fairly stable. I will let you know on Friday in any
case.

Let us know if you have any questions, comments or concerns. As
always, we appreciate your help and feedback.

(If you haven't translated Tech News previously, see this email:
https://lists.wikimedia.org/pipermail/translators-l/2017-January/003773.html)
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


Fwd: Public Discussion of CommScope CA Inclusion Request

2023-08-30 Thread Ben Wilson
Forwarding to the list because this message did not appear to post.

-- Forwarded message -
From: So, Nicol 
Date: Wed, Aug 30, 2023 at 6:10 PM
Subject: RE: Public Discussion of CommScope CA Inclusion Request
To: CCADB Public 
Cc: Ben Wilson 


On Monday, August 28, 2023 at 5:49 PM, Yuwei HAN (hanyuwei70) wrote:

> Given that there are so many TLS CAs, I don't see any necessity to add
> another TLS CA unless something new is provided by the CA.
> Can the CA explain what can be improved if accepted into the Mozilla Root Program?

Allowing broader participation by qualified service providers with diverse
industrial experiences is generally beneficial to users of CA services.
CommScope’s more than 25 years of experience servicing the PKI needs of
device manufacturers and service providers brings unique perspectives and
capabilities to the Mozilla root program in a world that is seeing an
explosion of connected devices.

CommScope has been delivering CA and device identity provisioning services
across 200+ OEM/ODM/repair locations spanning 30+ countries. Besides being
a device manufacturer itself, CommScope also serves customers and partners,
such as Motorola and Broadcom. Additionally, we have successfully provided
over-the-air solutions to more than a dozen leading domestic and
international service providers, such as Verizon and T-Mobile, enabling
them to upgrade device cryptographic identities and enhance security in
real-world deployments. The devices we touched include mobile phones,
wireless access points, mobile network base stations, home gateways, cable
modems, and set-top boxes. We are expanding our services to IoT devices. We
understand the challenges device manufacturers and service providers face
trying to meet the PKI requirements from industry consortia such as
CableLabs, WInnForum, and CSA. We have seen and have solved the problems
that arise in global manufacturing operations and post-deployment network
operations, such as duplicate device identities and network trust changes.
Over the years, we’ve securely installed over 6 billion sets of device
identity credentials through different channels at different stages of
device lifetime.

CommScope is more than just a standard CA. With its wealth of experience
dealing with device manufacturing, deployment and operation, we are also
well positioned to serve device manufacturers and operators of device
fleets, whose requirements are not the same as typical web site operators.

Sincerely,
Nicol So

-- 
You received this message because you are subscribed to the Google Groups 
"CCADB Public" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to public+unsubscr...@ccadb.org.
To view this discussion on the web visit 
https://groups.google.com/a/ccadb.org/d/msgid/public/CA%2B1gtaZgWiROh-5Y1Q7UZSjY0EWCAZWyh%2BiRKEOHV8i9m1CCUQ%40mail.gmail.com.


[Servercert-wg] Proposed Revision of SCWG Charter

2023-08-30 Thread Ben Wilson via Servercert-wg
All,

Thanks for your suggestions and recommendations. I think we are much closer
to an acceptable revision of the Server Certificate Working Group Charter.
Here is the current draft:
https://github.com/cabforum/forum/blob/BenWilson-SCWG-charter-1.3/SCWG-charter.md

We have decided that a participation/attendance requirement for ongoing
membership is currently too complicated to manage, but we believe it is
important that there be a probationary period of six months during which
all new CABF-voting applicants must attend at least 30% of the
teleconferences and at least the SCWG portion of one F2F (virtually or
in-person). See section 4(d) in the draft cited above. We believe that with
this limited scope, we can and should measure attendance to ensure that
prospective members are serious about participating in the Forum.

We no longer seek to require that a Certificate Consumer have any
particular size or user base (or that it meet other criteria that were
floated in recent emails); those criteria were likewise too complicated
to manage. However, in addition to the Certificate Consumer
requirements in the existing charter, we want a Certificate
Consumer to:

   - have public documentation stating that it requires Certificate Issuers
   to comply with the TLS Baseline Requirements;
   - maintain a list of CA certificates used to validate the chain of trust
   from a TLS certificate to a CA certificate in such list; and
   - publish how it decides to add or remove a CA certificate from its list.

I am looking for two endorsers of a FORUM ballot, so if the
above-referenced draft is generally acceptable, please contact me, and we
can work out any remaining details.

Thanks,

Ben


On Tue, Jul 25, 2023 at 11:07 PM Roman Fischer via Servercert-wg <
servercert-wg@cabforum.org> wrote:

> Dear Ben,
>
>
>
> I like your two new suggestions as they offer more lightweight mechanisms.
>
>
>
> One other idea (completely ad hoc and not really thought through) would be
> to change the charter to allow suspension of members from the SCWG by
> ballot. That way a ballot could be proposed, discussed, endorsed and voted
> on. And since the state of “suspended membership” is well defined
> (including the way back to full membership), this might offer the “accused”
> member enough possibility to counter the “allegations” made in the ballot.
> It would also make transparent who wants to suspend whom for what reasons…
>
>
>
> Kind regards
> Roman
>
>
>
> *From:* Ben Wilson 
> *Sent:* Dienstag, 25. Juli 2023 17:40
> *To:* Roman Fischer 
> *Cc:* CA/B Forum Server Certificate WG Public Discussion List <
> servercert-wg@cabforum.org>
> *Subject:* Re: [Servercert-wg] Participation Proposal for Revised SCWG
> Charter
>
>
>
> Thanks for your insights, Roman.
>
>
>
> I'm not yet convinced that the attendance approach would not be effective.
> Nevertheless, here are some other potential alternatives to discuss:
>
>
>
> 1 - require that a Certificate Consumer have a certain size userbase, or
> alternatively, that they be a Root Store member of the Common CA Database
> <https://www.ccadb.org/rootstores/how>, or
>
> 2 - require that a Certificate Consumer pay a membership fee to the
> CA/Browser Forum.
>
>
>
> Does anyone have any other ideas, proposals, or suggestions that we can
> discuss?
>
>
>
> The approaches listed above would be in addition to the following other
> requirements already proposed:
>
>
>
> The Certificate Consumer has public documentation stating that it requires
> Certification Authorities to comply with the CA/Browser Forum’s Baseline
> Requirements for the issuance and maintenance of TLS server
> certificates; its membership-qualifying software product uses a list of CA
> certificates to validate the chain of trust from a TLS certificate to a CA
> certificate in such list; and it publishes how it decides to add or remove
> a CA certificate from the root store used in its membership-qualifying
> software product.
>
>
>
> Thanks,
>
>
>
> Ben
>
>
>
> On Mon, Jul 24, 2023 at 10:48 PM Roman Fischer <
> roman.fisc...@swisssign.com> wrote:
>
> Dear Ben,
>
>
>
> As stated before, I’m against minimal attendance (or even participation –
> however you would measure that: number of words spoken or written?)
> requirements. I’ve seen in universities, in private associations, politics…
> that this simply doesn’t solve the problem. I totally agree with Tim: It
> will create administrative overhead and not solve the problem.
>
>
>
> IMHO non-participants taking part in the democratic process (i.e. voting)
> is just something we have to accept and factor in. It’s one end of the
> extreme spectrum. There might be 

[jira] [Comment Edited] (IGNITE-20299) Creating a cache with an unknown data region name causes total unrecoverable failure of the grid

2023-08-30 Thread Raymond Wilson (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17760136#comment-17760136
 ] 

Raymond Wilson edited comment on IGNITE-20299 at 8/30/23 6:31 PM:
--

I think persistence is an important aspect of this issue, as it is on restart 
that the grid complains that it (a) cannot start the incorrectly created cache 
(which raises the question of why the cache is still known about if its 
creation was unsuccessful) and (b) fails to initialise the persisted caches.

The cache folder for the incorrectly created cache is also constructed, which 
indicates that the grid has somehow accepted the cache as a valid new cache 
while at the same time throwing the exchange process exception. All of this 
indicates that validation of the parameters for the new cache is not enforcing 
the requirement that the data region be known.



was (Author: rpwilson):
I think persistence is an important for this issue as it is on restart that the 
grid complains that it cannot (a) start the incorrectly created cache (which 
raises the question as to why it is still known about if creation of it was 
unsuccessful) and (b) fails to initialise the persisted caches.

The cache folder for the incorrectly create cache is also constructed which 
indicates that the grid has somehow accepted the cache as a valid new cache 
while at the same time throwing the exchange process exception, all of which 
indicates the validation of the parameters for the new cache is not enforcing 
the requirement for the data region to be known.



[jira] [Updated] (IGNITE-20299) Creating a cache with an unknown data region name causes total unrecoverable failure of the grid

2023-08-30 Thread Raymond Wilson (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Wilson updated IGNITE-20299:

Description: 
Using the Ignite C# client.
 
Given a running grid, having a client (and perhaps server) node in the grid 
attempt to create a cache using a DataRegionName that does not exist in the 
grid causes immediate failure in the client node with the following log output. 
 
2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Completed partition 
exchange [localNode=15122bd7-bf81-44e6-a548-e70dbd9334c0, 
exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion 
[topVer=15, minorTopVer=0], evt=NODE_FAILED, evtNode=TcpDiscoveryNode 
[id=9d5ed68d-38bb-447d-aed5-189f52660716, 
consistentId=9d5ed68d-38bb-447d-aed5-189f52660716, addrs=ArrayList [127.0.0.1], 
sockAddrs=null, discPort=0, order=8, intOrder=8, 
lastExchangeTime=1693112858024, loc=false, ver=2.15.0#20230425-sha1:f98f7f35, 
isClient=true], rebalanced=false, done=true, newCrdFut=null], 
topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Exchange timings 
[startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], stage="Waiting in 
exchange queue" (14850 ms), stage="Exchange parameters initialization" (2 ms), 
stage="Determine exchange type" (3 ms), stage="Exchange done" (4 ms), 
stage="Total time" (14859 ms)]
2023-08-27 17:08:48,522 [44] INF [ImmutableClientServer]   Exchange longest 
local stages [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
2023-08-27 17:08:48,524 [44] INF [ImmutableClientServer]   Finished exchange 
init [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], crd=false]
2023-08-27 17:08:48,525 [44] INF [ImmutableClientServer]   
AffinityTopologyVersion [topVer=15, minorTopVer=0], evt=NODE_FAILED, 
evtNode=9d5ed68d-38bb-447d-aed5-189f52660716, client=true]
Unhandled exception: Apache.Ignite.Core.Cache.CacheException: class 
org.apache.ignite.IgniteCheckedException: Failed to complete exchange process.
 ---> Apache.Ignite.Core.Common.IgniteException: Failed to complete exchange 
process.
 ---> Apache.Ignite.Core.Common.JavaException: javax.cache.CacheException: 
class org.apache.ignite.IgniteCheckedException: Failed to complete exchange 
process.
        at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1272)
        at 
org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:2278)
        at 
org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2242)
        at 
org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutObject(PlatformProcessorImpl.java:643)
        at 
org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to complete 
exchange process.
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.createExchangeException(GridDhtPartitionsExchangeFuture.java:3709)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendExchangeFailureMessage(GridDhtPartitionsExchangeFuture.java:3737)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:3832)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:3813)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1796)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:1053)
        at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3348)
        at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3182)
        at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
        at java.base/java.lang.Thread.run(Thread.java:829)
        Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to 
initialize exchange locally [locNodeId=e9325b04-00fa-452e-9796-989b47b860ea]
                at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloa

[jira] [Commented] (IGNITE-20299) Creating a cache with an unknown data region name causes total unrecoverable failure of the grid

2023-08-29 Thread Raymond Wilson (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760164#comment-17760164
 ] 

Raymond Wilson commented on IGNITE-20299:
-

[~ptupitsyn]

I have managed to create a simple reproducer. Code below.

Running it once fails with the exchange future exception. Running it a second 
time, it fails to start the bad cache and does not run.


{noformat}
using System;
using System.Collections.Generic;
using System.IO;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Cache;
using Apache.Ignite.Core.Cache.Configuration;
using Apache.Ignite.Core.Configuration;

namespace BadCacheCreationReproducer;

static class Program
{
  // NOTE: generic type arguments were stripped by the mail archive;
  // <int, string> is restored here as a placeholder key/value type.
  public static ICache<int, string> cacheServer;

  static void Main()
  {
    // Make the server
    var cfgServer = new IgniteConfiguration
    {
      IgniteInstanceName = "Server",
      JvmMaxMemoryMb = 1024,
      JvmInitialMemoryMb = 512,
      DataStorageConfiguration = new DataStorageConfiguration
      {
        WalMode = WalMode.Fsync,
        PageSize = 4 * 1024,
        StoragePath = Path.Combine(@"c:\temp", "BadCacheCreationReproducer", "Persistence"),
        WalArchivePath = Path.Combine(@"c:\temp", "BadCacheCreationReproducer", "WalArchive"),
        WalPath = Path.Combine(@"c:\temp", "BadCacheCreationReproducer", "WalStore"),
        DefaultDataRegionConfiguration = new DataRegionConfiguration
        {
          Name = "Default",
          InitialSize = 64 * 1024 * 1024,
          MaxSize = 2L * 1024 * 1024 * 1024,
          PersistenceEnabled = true
        }
      },
      JvmOptions = new List<string> { "-DIGNITE_QUIET=false", "-Djava.net.preferIPv4Stack=true", "-XX:+UseG1GC" },
      WorkDirectory = Path.Combine(@"c:\temp", "BadCacheCreationReproducer")
    };

    var igniteServer = Ignition.Start(cfgServer);
    igniteServer.GetCluster().SetActive(true);

    // Attempt to create the bad cache: "NotDefault" is not a configured data region
    var cacheCfgServer = new CacheConfiguration
    {
      Name = "ABadCache",
      KeepBinaryInStore = true,
      CacheMode = CacheMode.Partitioned,
      DataRegionName = "NotDefault"
    };

    try
    {
      cacheServer = igniteServer.GetOrCreateCache<int, string>(cacheCfgServer);
    }
    catch (Exception ex)
    {
      Console.WriteLine($"Exception!: {ex}");
    }

    Console.WriteLine("Completed");
  }
}
{noformat}




Re: MRSP 2.9: Draft CA Communication and Survey

2023-08-29 Thread Ben Wilson
All,
This August 2023 CA Communication and Survey was sent out to CAs already in
our program last Tuesday (August 22) and to CAs applying for inclusion
yesterday (August 28). The deadline for responses is September 15. If any
CA in our program or applying to our program did not receive the CA
Communication and Survey, then please contact me directly, and I will
provide you with the link.
Thanks,
Ben

On Fri, Aug 18, 2023 at 4:20 PM Ben Wilson  wrote:

> All,
> Below for your review and comment is a draft CA Communication and Survey
> to be sent next week via the CCADB to all CA operators in Mozilla's root
> store.
> Thanks,
> Ben
> Mozilla CA Operator Survey - Respond By September 15, 2023
>
> Section 1:
> The purpose of this communication and survey is to ensure that CA
> operators are aware of and prepared to comply with recent changes to the
> Mozilla Root Store Policy (MRSP), v. 2.9, effective September 1, 2023.[1]
>
> CAs are expected to comply, without exception, with the MRSP, and to
> ensure ongoing compliance, CAs should carefully review this policy and the
> changes in MRSP v.2.9 from version 2.8.1.[2] These changes have been
> discussed on the Mozilla dev-security-policy list.[3] CAs that did not
> participate in such discussions or that have not yet reviewed those
> conversations should also read them to reduce the chance of confusion or
> misinterpretation.
>
> In accordance with MRSP § 4.2[4], CA operators will be required to respond
> to the questions on this form[5] on or before September 15, 2023.  The
> questions included in the survey are also available here [6].
>
> Results will be reviewed by Mozilla and may be shared publicly to inform
> us regarding these and future changes to the MRSP.
>
> For questions, concerns, or issues related to this survey, please email
> certifica...@mozilla.org.
>
> Thanks,
>
> Ben and Kathleen
>
> Mozilla CA Program
>
> References:
>
> [1]
> https://github.com/BenWilson-Mozilla/pkipolicy/blob/2.9/rootstore/policy.md
>
> [2]
> <https://github.com/mozilla/pkipolicy/compare/e8a3f55ea7565bc72e9f9e9ab3e57c993fb0812d..f82afed8a7df2598824804e84b6961f89b3969cd>
> https://github.com/mozilla/pkipolicy/compare/e8a3f55ea7565bc72e9f9e9ab3e57c993fb0812d..117054ecf1eff757cfebe40d7c952ce1e3fca920
>
> [3]
> https://groups.google.com/a/mozilla.org/g/dev-security-policy/search?q=%22mrsp%202.9%22
>
> [4]
> https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#42-surveys
>
> [5] Redacted
>
> [6] Redacted
> <https://docs.google.com/document/d/1ieXSt3rJyOSopJnDp4wFGSugpk6pt5pJFJ55rkpb6Ks/edit?usp=sharing>
>
> Section 2: CA Operator Information
>
> What is your name? _
>
> What is your email address? (i.e., how can we get in touch with you if we
> have questions related to a specific answer in your response?)
> _
>
> What is your organization's "CA Owner" name as listed in CCADB? (i.e.,
> this is the organization you are authorized to represent in CCADB). Please
> use the exact value as it appears in CCADB.
>
> _
> Section 3: Retirement of Older Root CA Certificates
>
> Background:
>
> According to MRSP § 7.4, root CA certificates enabled with the websites
> trust bit will have that bit removed when CA key material is 15 years old,
> and at 18 years from the CA key material generation date for a Root CA
> certificate with the email trust bit, a "Distrust for S/MIME After Date"
> will be set.
>
> As of July 1, 2012, most CAs were required to obtain an auditor-witnessed
> key generation ceremony report. If a CA operator cannot provide a key
> generation ceremony report for a root CA certificate, then Mozilla will use
> the “Valid From” date in the root CA certificate to establish the key
> material generation date.
>
> For transition purposes, root CA certificates in the Mozilla root store
> will be distrusted according to the schedule located at
> https://wiki.mozilla.org/CA/Root_CA_Lifecycles, which is subject to
> change if underlying algorithms become more susceptible to cryptanalytic
> attack or if other circumstances arise that make that schedule obsolete.
>
> We have the following questions or concerns about MRSP § 7.4 (Root CA
> Lifecycles) and/or the transition schedule posted on the wiki. (Either
> write "None" or describe below)
>
> _
> Section 4: Compliance with the CABF’s S/MIME BRs
>
> Background:
>
> Certificates issued on or after September 1, 2023, that are capable of
> being used to digitally sign or encrypt email messages, and CA operations
> relating to the issu

[jira] [Commented] (IGNITE-20299) Creating a cache with an unknown data region name causes total unrecoverable failure of the grid

2023-08-29 Thread Raymond Wilson (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760136#comment-17760136
 ] 

Raymond Wilson commented on IGNITE-20299:
-

I think persistence is an important factor in this issue, as it is on restart 
that the grid complains that it cannot (a) start the incorrectly created cache 
(which raises the question of why the cache is still known about if its 
creation was unsuccessful) and (b) initialise the persisted caches.

The cache folder for the incorrectly created cache is also constructed, which 
indicates that the grid has somehow accepted the cache as a valid new cache 
while at the same time throwing the exchange process exception. All of this 
suggests that validation of the parameters for the new cache does not enforce 
the requirement that the data region be known.
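
Until the grid itself enforces this, a client-side guard can reject an unknown 
region name before the exchange is ever triggered. A minimal sketch (not from 
the report; the helper name CheckDataRegionExists is illustrative, and it 
assumes the caller holds the same IgniteConfiguration used to start the node):

{noformat}
// Hypothetical guard: verify the region name against the node's
// DataStorageConfiguration before calling GetOrCreateCache.
static void CheckDataRegionExists(IgniteConfiguration igniteCfg, string regionName)
{
  var ds = igniteCfg.DataStorageConfiguration;
  if (ds == null)
    throw new ArgumentException("No DataStorageConfiguration present");

  if (regionName == ds.DefaultDataRegionConfiguration?.Name)
    return;

  if (ds.DataRegionConfigurations != null)
    foreach (var region in ds.DataRegionConfigurations)
      if (region.Name == regionName)
        return;

  throw new ArgumentException($"Data region '{regionName}' is not configured on this grid.");
}
{noformat}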



[jira] [Commented] (IGNITE-20299) Creating a cache with an unknown data region name causes total unrecoverable failure of the grid

2023-08-29 Thread Raymond Wilson (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760135#comment-17760135
 ] 

Raymond Wilson commented on IGNITE-20299:
-

Wrapping an exception trap around the GetOrCreateCache() call shows that an 
exception is thrown:

 "Apache.Ignite.Core.Cache.CacheException: class 
org.apache.ignite.IgniteCheckedException: Failed to complete exchange process.
 ---> Apache.Ignite.Core.Common.IgniteException: Failed to complete exchange 
process."

However, none of the information in the exception indicates that the "Requested 
DataRegion is not configured".
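
When the top-level message is this generic, walking the InnerException chain is 
the only way to capture whatever detail the client does surface. A minimal 
sketch (not from the report; ignite and cacheCfg are illustrative names):

{noformat}
try
{
  var cache = ignite.GetOrCreateCache<int, string>(cacheCfg);
}
catch (CacheException ex)
{
  // Log every level of the wrapped exception chain, then rethrow.
  for (Exception e = ex; e != null; e = e.InnerException)
    Console.WriteLine($"{e.GetType().Name}: {e.Message}");
  throw;
}
{noformat}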



[jira] [Commented] (IGNITE-20299) Creating a cache with an unknown data region name causes total unrecoverable failure of the grid

2023-08-29 Thread Raymond Wilson (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760119#comment-17760119
 ] 

Raymond Wilson commented on IGNITE-20299:
-

[~ptupitsyn]

As some additional background information, the process that executed the cache 
creation command hosts two separate Ignite client node JVMs, each of which is a 
client node of a separate Ignite grid.

The data region in question is a valid data region in one of the grids, but not 
in the one that was asked to create the cache.

The code creating the cache is pretty simple:

{noformat}
public SiteModelMetadataManager(StorageMutability mutability)
{
  // Obtain the ignite reference for the primary grid orientation of SiteModels
  // (generic type arguments were stripped by the mail archive)
  var ignite = DIContext.Obtain()?.Grid(mutability);

  _metaDataCache = ignite?.GetOrCreateCache(ConfigureCache());

  if (_metaDataCache == null)
    throw new TRexException($"Failed to get or create Ignite cache {TRexCaches.SiteModelMetadataCacheName()}, ignite reference is {ignite}");
}
{noformat}

We do not observe the message "Failed to get or create Ignite cache" in our 
logs, implying GetOrCreateCache threw an exception we are not capturing.

The configuration for the cache in question is set up by this code:


{noformat}
private CacheConfiguration ConfigureCache()
{
  return new CacheConfiguration
  {
Name = TRexCaches.SiteModelMetadataCacheName(),
KeepBinaryInStore = true,
CacheMode = CacheMode.Replicated,
DataRegionName = DataRegions.MUTABLE_NONSPATIAL_DATA_REGION
  };
}
{noformat}





[jira] [Commented] (IGNITE-20299) Creating a cache with an unknown data region name causes total unrecoverable failure of the grid

2023-08-29 Thread Raymond Wilson (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759863#comment-17759863
 ] 

Raymond Wilson commented on IGNITE-20299:
-

[~ptupitsyn]

I searched for the error you mentioned "Requested DataRegion is not 
configured", and I can see this emitted on each of the server nodes in the dev 
environment. 

The (long) log line is here:


{noformat}
Exchange future: GridDhtPartitionsExchangeFuture 
[firstDiscoEvt=DiscoveryCustomEvent [customMsg=ChangeGlobalStateMessage 
[id=e6c78a93a81-145135fd-0ac4-4457-8308-e9798dfa4ee6, 
reqId=ff5e4d1b-359d-4cda-94c3-8115da3b1dc1, 
initiatingNodeId=4d44108f-cd96-4953-94db-6365f998a91b, state=ACTIVE, 
baselineTopology=BaselineTopology [id=0, branchingHash=264269663, 
branchingType='Cluster activation', 
baselineNodes=[3fb67a8d-b805-4dbd-b400-a02077d827de, 
ce2ff0c2-46b7-4082-97ea-847514672d06, b9e51542-6d1c-4535-8a02-ced986e53878, 
46f8195d-000f-4d09-8ab9-3bd5246e9109]], forceChangeBaselineTopology=false, 
timestamp=1693184516920, forceDeactivation=true], 
affTopVer=AffinityTopologyVersion [topVer=12, minorTopVer=1], 
super=DiscoveryEvent [evtNode=TcpDiscoveryNode 
[id=4d44108f-cd96-4953-94db-6365f998a91b, 
consistentId=3fb67a8d-b805-4dbd-b400-a02077d827de, addrs=ArrayList [], 
sockAddrs=HashSet [], discPort=47500, order=5, intOrder=5, 
lastExchangeTime=1693184505010, loc=false, ver=2.15.0#20230425-sha1:f98f7f35, 
isClient=false], topVer=12, msgTemplate=null, 
span=org.apache.ignite.internal.processors.tracing.NoopSpan@1b32fd2b, 
nodeId8=b1708bc6, msg=null, type=DISCOVERY_CUSTOM_EVT, tstamp=1693184518126]], 
crd=TcpDiscoveryNode [id=b1708bc6-8798-4d16-b7ee-7cdf2535944d, 
consistentId=b9e51542-6d1c-4535-8a02-ced986e53878, addrs=ArrayList [], 
sockAddrs=HashSet [], discPort=47500, order=1, intOrder=1, 
lastExchangeTime=1693184529771, loc=true, ver=2.15.0#20230425-sha1:f98f7f35, 
isClient=false], exchId=GridDhtPartitionExchangeId 
[topVer=AffinityTopologyVersion [topVer=12, minorTopVer=1], 
discoEvt=DiscoveryCustomEvent [customMsg=ChangeGlobalStateMessage 
[id=e6c78a93a81-145135fd-0ac4-4457-8308-e9798dfa4ee6, 
reqId=ff5e4d1b-359d-4cda-94c3-8115da3b1dc1, 
initiatingNodeId=4d44108f-cd96-4953-94db-6365f998a91b, state=ACTIVE, 
baselineTopology=BaselineTopology [id=0, branchingHash=264269663, 
branchingType='Cluster activation', 
baselineNodes=[3fb67a8d-b805-4dbd-b400-a02077d827de, 
ce2ff0c2-46b7-4082-97ea-847514672d06, b9e51542-6d1c-4535-8a02-ced986e53878, 
46f8195d-000f-4d09-8ab9-3bd5246e9109]], forceChangeBaselineTopology=false, 
timestamp=1693184516920, forceDeactivation=true], 
affTopVer=AffinityTopologyVersion [topVer=12, minorTopVer=1], 
super=DiscoveryEvent [evtNode=TcpDiscoveryNode 
[id=4d44108f-cd96-4953-94db-6365f998a91b, 
consistentId=3fb67a8d-b805-4dbd-b400-a02077d827de, addrs=ArrayList [], 
sockAddrs=HashSet [], discPort=47500, order=5, intOrder=5, 
lastExchangeTime=1693184505010, loc=false, ver=2.15.0#20230425-sha1:f98f7f35, 
isClient=false], topVer=12, msgTemplate=null, 
span=org.apache.ignite.internal.processors.tracing.NoopSpan@1b32fd2b, 
nodeId8=b1708bc6, msg=null, type=DISCOVERY_CUSTOM_EVT, tstamp=1693184518126]], 
nodeId=4d44108f, evt=DISCOVERY_CUSTOM_EVT], added=true, exchangeType=ALL, 
initFut=GridFutureAdapter [ignoreInterrupts=false, state=DONE, res=true, 
hash=1834334095], init=true, lastVer=GridCacheVersion [topVer=0, 
order=1693184474107, nodeOrder=0, dataCenterId=0], 
partReleaseFut=PartitionReleaseFuture [topVer=AffinityTopologyVersion 
[topVer=12, minorTopVer=1], totalFutures=5, futures=[ExplicitLockReleaseFuture 
[topVer=AffinityTopologyVersion [topVer=12, minorTopVer=1], totalFutures=0, 
futures=[]], AtomicUpdateReleaseFuture [topVer=AffinityTopologyVersion 
[topVer=12, minorTopVer=1], totalFutures=0, futures=[]], 
DataStreamerReleaseFuture [topVer=AffinityTopologyVersion [topVer=12, 
minorTopVer=1], totalFutures=0, futures=[]], LocalTxReleaseFuture 
[topVer=AffinityTopologyVersion [topVer=12, minorTopVer=1], totalFutures=0, 
futures=[]], AllTxReleaseFuture [topVer=AffinityTopologyVersion [topVer=12, 
minorTopVer=1], totalFutures=1, futures=[RemoteTxReleaseFuture 
[topVer=AffinityTopologyVersion [topVer=12, minorTopVer=1], totalFutures=0, 
futures=[]], exchActions=ExchangeActions [startCaches=[ignite-sys-cache, 
SiteModelMetadata, SiteModelChangeBufferQueue, NonSpatial-Immutable, 
SiteModelChangeMaps, Spatial-SubGridSegment-Immutable, 
ProductionDataExistenceMap-Immutable, DesignTopologyExistenceMaps, 
SiteModels-Immutable, Spatial-SubGridDirectory-Immutable], stopCaches=null, 
startGrps=[DesignTopologyExistenceMaps, ignite-sys-cache, 
Spatial-SubGridSegment-Immutable, NonSpatial-Immutable, SiteModelChangeMaps, 
SiteModelMetadata, ProductionDataExistenceMap-Immutable, 
Spatial-SubGridDirectory-Immutable, SiteModelChangeBufferQueue, 
SiteModels-Immutable], stopGrps=[], reset

[jira] [Commented] (IGNITE-20299) Creating a cache with an unknown data region name causes total unrecoverable failure of the grid

2023-08-29 Thread Raymond Wilson (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759859#comment-17759859
 ] 

Raymond Wilson commented on IGNITE-20299:
-----------------------------------------

Just a quick note as to how I reproduce it: I create a new instance of our 
Ignite grid on my development machine (and as it is new it has no data within 
it). Running a node which then attempts to create the cache against a data 
region that does not exist causes the issue.

In terms of my local system, there is only a single server node and a small 
collection of client nodes.
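A minimal sketch of that reproduction using the Ignite.NET API (the cache and region names here are placeholders, not the ones from our system):

{noformat}
using Apache.Ignite.Core;
using Apache.Ignite.Core.Cache.Configuration;

public static class MissingRegionRepro
{
    public static void Main()
    {
        // Start a server node with only the default data region configured.
        using (var ignite = Ignition.Start(new IgniteConfiguration()))
        {
            // "MissingRegion" is deliberately not configured anywhere, so this
            // cache creation triggers the failed partition exchange described above.
            ignite.GetOrCreateCache<int, string>(new CacheConfiguration("TestCache")
            {
                DataRegionName = "MissingRegion"
            });
        }
    }
}
{noformat}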


> Creating a cache with an unknown data region name causes total unrecoverable 
> failure of the grid
> 
>
> Key: IGNITE-20299
> URL: https://issues.apache.org/jira/browse/IGNITE-20299
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.15
> Environment: Observed in:
> C# client and grid running on Linux in a container
> C# client and grid running on Windows
>  
>Reporter: Raymond Wilson
>Priority: Major
>
> Using the Ignite C# client.
>  
> Given a running grid, having a client (and perhaps server) node in the grid 
> attempt to create a cache using a DataRegionName that does not exist in the 
> grid causes immediate failure in the client node with the following log 
> output. 
>  
> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Completed 
> partition exchange [localNode=15122bd7-bf81-44e6-a548-e70dbd9334c0, 
> exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion 
> [topVer=15, minorTopVer=0], evt=NODE_FAILED, evtNode=TcpDiscoveryNode 
> [id=9d5ed68d-38bb-447d-aed5-189f52660716, 
> consistentId=9d5ed68d-38bb-447d-aed5-189f52660716, addrs=ArrayList 
> [127.0.0.1], sockAddrs=null, discPort=0, order=8, intOrder=8, 
> lastExchangeTime=1693112858024, loc=false, ver=2.15.0#20230425-sha1:f98f7f35, 
> isClient=true], rebalanced=false, done=true, newCrdFut=null], 
> topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Exchange timings 
> [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
> resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], stage="Waiting in 
> exchange queue" (14850 ms), stage="Exchange parameters initialization" (2 
> ms), stage="Determine exchange type" (3 ms), stage="Exchange done" (4 ms), 
> stage="Total time" (14859 ms)]
> 2023-08-27 17:08:48,522 [44] INF [ImmutableClientServer]   Exchange longest 
> local stages [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
> resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
> 2023-08-27 17:08:48,524 [44] INF [ImmutableClientServer]   Finished exchange 
> init [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], crd=false]
> 2023-08-27 17:08:48,525 [44] INF [ImmutableClientServer]   
> AffinityTopologyVersion [topVer=15, minorTopVer=0], evt=NODE_FAILED, 
> evtNode=9d5ed68d-38bb-447d-aed5-189f52660716, client=true]
> Unhandled exception: Apache.Ignite.Core.Cache.CacheException: class 
> org.apache.ignite.IgniteCheckedException: Failed to complete exchange process.
>  ---> Apache.Ignite.Core.Common.IgniteException: Failed to complete exchange 
> process.
>  ---> Apache.Ignite.Core.Common.JavaException: javax.cache.CacheException: 
> class org.apache.ignite.IgniteCheckedException: Failed to complete exchange 
> process.
>         at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1272)
>         at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:2278)
>         at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2242)
>         at 
> org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutObject(PlatformProcessorImpl.java:643)
>         at 
> org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to complete 
> exchange process.
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.createExchangeException(GridDhtPartitionsExchangeFuture.java:3709)
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExc

[jira] [Comment Edited] (IGNITE-20299) Creating a cache with an unknown data region name causes total unrecoverable failure of the grid

2023-08-29 Thread Raymond Wilson (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759846#comment-17759846
 ] 

Raymond Wilson edited comment on IGNITE-20299 at 8/29/23 7:00 AM:
--------------------------------------------------------------

[~ptupitsyn]

Yes, we are using persistence. 

This is our persistence XML file:


{noformat}
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/util
                           http://www.springframework.org/schema/util/spring-util.xsd">

  <!-- bean definitions elided in the original message -->

</beans>
{noformat}

Our configuration is mostly in code. Here is the primary configuration for the 
server nodes:


{noformat}
public void ConfigureTRexGrid(IgniteConfiguration cfg)
{
  cfg.IgniteInstanceName = TRexGrids.ImmutableGridName();
  cfg.JvmOptions = CommonJavaJVMOptions();

  var configStore = DIContext.Obtain();

  // Note: Set the PSN JVM heap minimum and maximum sizes to the maximum
  // defined JVM heap size for the node. This ensures the JVM always has
  // access to the heap promised to it, so it will never act to resize the
  // heap. This provides better performance and removes the chance of
  // surprises if the OS cannot allocate a larger heap block for some other
  // reason.
  cfg.JvmMaxMemoryMb = configStore.GetValueInt(PSNODE_IGNITE_JVM_MAX_HEAP_SIZE_MB, DEFAULT_IGNITE_JVM_MAX_HEAP_SIZE_MB);
  cfg.JvmInitialMemoryMb = configStore.GetValueInt(PSNODE_IGNITE_JVM_MAX_HEAP_SIZE_MB, DEFAULT_IGNITE_JVM_MAX_HEAP_SIZE_MB);

  cfg.UserAttributes = new Dictionary<string, object>
  {
    { "Owner", TRexGrids.ImmutableGridName() }
  };

  // Configure the Ignite persistence layer to store our data
  cfg.DataStorageConfiguration = new DataStorageConfiguration
  {
    WalMode = WalMode.Fsync,
    PageSize = IgniteDataRegionPageSize(),

    StoragePath = Path.Combine(TRexServerConfig.PersistentCacheStoreLocation, "Immutable", "Persistence"),
    WalPath = Path.Combine(TRexServerConfig.PersistentCacheStoreLocation, "Immutable", "WalStore"),
    WalArchivePath = Path.Combine(TRexServerConfig.PersistentCacheStoreLocation, "Immutable", "WalArchive"),

    // Set the WalSegmentSize to 512 MB to better support high write loads (can be set to a maximum of 2 GB)
    WalSegmentSize = 512 * 1024 * 1024,
    // Ensure there are 10 segments in the WAL archive at the defined segment size
    MaxWalArchiveSize = (long)10 * 512 * 1024 * 1024,

    CheckpointThreads = configStore.GetValueInt(IGNITE_NUMBER_OF_CHECKPOINTING_THREADS, DEFAULT_IGNITE_NUMBER_OF_CHECKPOINTING_THREADS),
    CheckpointFrequency = TimeSpan.FromSeconds(configStore.GetValueInt(IGNITE_CHECKPOINTING_INTERVAL_SECONDS, DEFAULT_IGNITE_CHECKPOINTING_INTERVAL_SECONDS)),

    DefaultDataRegionConfiguration = new DataRegionConfiguration
    {
      Name = DataRegions.DEFAULT_IMMUTABLE_DATA_REGION_NAME,
      InitialSize = configStore.GetValueLong(IMMUTABLE_DATA_REGION_INITIAL_SIZE_MB, DEFAULT_IMMUTABLE_DATA_REGION_INITIAL_SIZE_MB) * 1024 * 1024,
      MaxSize = configStore.GetValueLong(IMMUTABLE_DATA_REGION_MAX_SIZE_MB, DEFAULT_IMMUTABLE_DATA_REGION_MAX_SIZE_MB) * 1024 * 1024,

      PersistenceEnabled = true
    }
  };

  Log.LogInformation($"cfg.DataStorageConfiguration.StoragePath={cfg.DataStorageConfiguration.StoragePath}");
  Log.LogInformation($"cfg.DataStorageConfiguration.WalArchivePath={cfg.DataStorageConfiguration.WalArchivePath}");
  Log.LogInformation($"cfg.DataStorageConfiguration.WalPath={cfg.DataStorageConfiguration.WalPath}");

  if (!bool.TryParse(Environment.GetEnvironmentVariable("IS_KUBERNETES"), out var isKubernetes))
  {
    Log.LogWarning($"Failed to parse the value of the 'IS_KUBERNETES' environment variable as a bool. Value is {Environment.GetEnvironmentVariable("IS_KUBERNETES")}. Defaulting to true");
    isKubernetes = true; // TryParse leaves the out value false on failure; set it to match the logged default
  }

  cfg = isKubernetes ? SetKubernetesIgniteConfiguration(cfg) : SetLocalIgniteConfiguration(cfg);
  cfg.WorkDirectory = Path.Combine(TRexServerConfig.PersistentCacheStoreLocation, "Immutable");

  cfg.Logger = new TRexIgniteLogger(configStore, Logger.CreateLogger("ImmutableCacheComputeServer"));

  // Set an Ignite metrics heartbeat
  cfg.MetricsLogFrequency = new TimeSpan(0, 0, 0, configStore.GetValueInt(IGNITE_HEARTBEAT_FREQUENCY_SECONDS, DEFAULT_IGNITE_HEARTBEAT_FREQUENCY_SECONDS));

  cfg.PublicThrea

[jira] [Updated] (IGNITE-20299) Creating a cache with an unknown data region name causes total unrecoverable failure of the grid

2023-08-28 Thread Raymond Wilson (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Wilson updated IGNITE-20299:

Summary: Creating a cache with an unknown data region name causes total 
unrecoverable failure of the grid  (was: Creating a cache with an unknown data 
region name causes immediate, total, unrecoverable failure of the grid)


[jira] [Updated] (IGNITE-20299) Creating a cache with an unknown data region name causes immediate, total, unrecoverable failure of the grid

2023-08-28 Thread Raymond Wilson (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Wilson updated IGNITE-20299:

Summary: Creating a cache with an unknown data region name causes 
immediate, total, unrecoverable failure of the grid  (was: Creating a cache 
with an unknown data region name causes immediate unrecoverable failure)


[jira] [Updated] (IGNITE-20299) Creating a cache with an unknown data region name causes immediate unrecoverable failure

2023-08-28 Thread Raymond Wilson (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Wilson updated IGNITE-20299:

Description: 
Using the Ignite C# client.
 
Given a running grid, having a client (and perhaps server) node in the grid 
attempt to create a cache using a DataRegionName that does not exist in the 
grid causes immediate failure in the client node with the following log output. 
 
2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Completed partition 
exchange [localNode=15122bd7-bf81-44e6-a548-e70dbd9334c0, 
exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion 
[topVer=15, minorTopVer=0], evt=NODE_FAILED, evtNode=TcpDiscoveryNode 
[id=9d5ed68d-38bb-447d-aed5-189f52660716, 
consistentId=9d5ed68d-38bb-447d-aed5-189f52660716, addrs=ArrayList [127.0.0.1], 
sockAddrs=null, discPort=0, order=8, intOrder=8, 
lastExchangeTime=1693112858024, loc=false, ver=2.15.0#20230425-sha1:f98f7f35, 
isClient=true], rebalanced=false, done=true, newCrdFut=null], 
topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Exchange timings 
[startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], stage="Waiting in 
exchange queue" (14850 ms), stage="Exchange parameters initialization" (2 ms), 
stage="Determine exchange type" (3 ms), stage="Exchange done" (4 ms), 
stage="Total time" (14859 ms)]
2023-08-27 17:08:48,522 [44] INF [ImmutableClientServer]   Exchange longest 
local stages [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
2023-08-27 17:08:48,524 [44] INF [ImmutableClientServer]   Finished exchange 
init [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], crd=false]
2023-08-27 17:08:48,525 [44] INF [ImmutableClientServer]   
AffinityTopologyVersion [topVer=15, minorTopVer=0], evt=NODE_FAILED, 
evtNode=9d5ed68d-38bb-447d-aed5-189f52660716, client=true]
Unhandled exception: Apache.Ignite.Core.Cache.CacheException: class 
org.apache.ignite.IgniteCheckedException: Failed to complete exchange process.
 ---> Apache.Ignite.Core.Common.IgniteException: Failed to complete exchange 
process.
 ---> Apache.Ignite.Core.Common.JavaException: javax.cache.CacheException: 
class org.apache.ignite.IgniteCheckedException: Failed to complete exchange 
process.
        at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1272)
        at 
org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:2278)
        at 
org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2242)
        at 
org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutObject(PlatformProcessorImpl.java:643)
        at 
org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to complete 
exchange process.
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.createExchangeException(GridDhtPartitionsExchangeFuture.java:3709)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendExchangeFailureMessage(GridDhtPartitionsExchangeFuture.java:3737)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:3832)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:3813)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1796)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:1053)
        at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3348)
        at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3182)
        at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
        at java.base/java.lang.Thread.run(Thread.java:829)
        Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to 
initialize exchange locally [locNodeId=e9325b04-00fa-452e-9796-989b47b860ea]
                at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloa

        at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
        at java.base/java.lang.Thread.run(Thread.java:829)
        Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to 
initialize exchange locally [locNodeId=e9325b04-00fa-452e-9796-989b47b860ea]
                at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloa

[jira] [Updated] (IGNITE-20299) Creating a cache with an unknown data region name causes immediate unrecoverable failure

2023-08-28 Thread Raymond Wilson (Jira)


 [ https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raymond Wilson updated IGNITE-20299:

Description: 
Using the Ignite C# client.
 
Given a running grid, if a client (and perhaps a server) node attempts to 
create a cache using a DataRegionName that does not exist in the grid, the 
client node fails immediately with the following log output.
 
2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Completed partition 
exchange [localNode=15122bd7-bf81-44e6-a548-e70dbd9334c0, 
exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion 
[topVer=15, minorTopVer=0], evt=NODE_FAILED, evtNode=TcpDiscoveryNode 
[id=9d5ed68d-38bb-447d-aed5-189f52660716, 
consistentId=9d5ed68d-38bb-447d-aed5-189f52660716, addrs=ArrayList [127.0.0.1], 
sockAddrs=null, discPort=0, order=8, intOrder=8, 
lastExchangeTime=1693112858024, loc=false, ver=2.15.0#20230425-sha1:f98f7f35, 
isClient=true], rebalanced=false, done=true, newCrdFut=null], 
topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Exchange timings 
[startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], stage="Waiting in 
exchange queue" (14850 ms), stage="Exchange parameters initialization" (2 ms), 
stage="Determine exchange type" (3 ms), stage="Exchange done" (4 ms), 
stage="Total time" (14859 ms)]
2023-08-27 17:08:48,522 [44] INF [ImmutableClientServer]   Exchange longest 
local stages [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
2023-08-27 17:08:48,524 [44] INF [ImmutableClientServer]   Finished exchange 
init [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], crd=false]
2023-08-27 17:08:48,525 [44] INF [ImmutableClientServer]   
AffinityTopologyVersion [topVer=15, minorTopVer=0], evt=NODE_FAILED, 
evtNode=9d5ed68d-38bb-447d-aed5-189f52660716, client=true]
Unhandled exception: Apache.Ignite.Core.Cache.CacheException: class 
org.apache.ignite.IgniteCheckedException: Failed to complete exchange process.
 ---> Apache.Ignite.Core.Common.IgniteException: Failed to complete exchange 
process.
 ---> Apache.Ignite.Core.Common.JavaException: javax.cache.CacheException: 
class org.apache.ignite.IgniteCheckedException: Failed to complete exchange 
process.
        at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1272)
        at 
org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:2278)
        at 
org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2242)
        at 
org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutObject(PlatformProcessorImpl.java:643)
        at 
org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to complete 
exchange process.
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.createExchangeException(GridDhtPartitionsExchangeFuture.java:3709)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendExchangeFailureMessage(GridDhtPartitionsExchangeFuture.java:3737)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:3832)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:3813)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1796)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:1053)
        at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3348)
        at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3182)
        at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
        at java.base/java.lang.Thread.run(Thread.java:829)
        Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to 
initialize exchange locally [locNodeId=e9325b04-00fa-452e-9796-989b47b860ea]
                at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloa
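
The reproduction described in the issue can be sketched as follows. This is a minimal, illustrative C# program, not taken from the report: the cache name "badRegionCache" and region name "missingRegion" are invented for the example, and it assumes the Apache.Ignite NuGet package plus a server grid already running with only its default data region configured.

```csharp
using Apache.Ignite.Core;
using Apache.Ignite.Core.Cache.Configuration;

class Repro
{
    static void Main()
    {
        // Start a thick client node that joins the existing grid.
        var ignite = Ignition.Start(new IgniteConfiguration
        {
            ClientMode = true
        });

        // "missingRegion" is not configured on any server node. Per the
        // issue, creating a cache against it does not surface a simple
        // recoverable error; instead the partition exchange fails and the
        // client node dies with the CacheException shown in the log above.
        var cacheCfg = new CacheConfiguration("badRegionCache")
        {
            DataRegionName = "missingRegion"
        };

        ignite.GetOrCreateCache<int, string>(cacheCfg);
    }
}
```

Running this against a grid without the named region should reproduce the "Failed to complete exchange process" failure quoted above, assuming the behavior reported for Ignite 2.15.0.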

[startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], stage="Waiting in 
exchange queue" (14850 ms), stage="Exchange parameters initialization" (2 ms), 
stage="Determine exchange type" (3 ms), stage="Exchange done" (4 ms), 
stage="Total time" (14859 ms)]
2023-08-27 17:08:48,522 [44] INF [ImmutableClientServer]   Exchange longest 
local stages [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
2023-08-27 17:08:48,524 [44] INF [ImmutableClientServer]   Finished exchange 
init [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], crd=false]
2023-08-27 17:08:48,525 [44] INF [ImmutableClientServer]   
AffinityTopologyVersion [topVer=15, minorTopVer=0], evt=NODE_FAILED, 
evtNode=9d5ed68d-38bb-447d-aed5-189f52660716, client=true]
Unhandled exception: Apache.Ignite.Core.Cache.CacheException: class 
org.apache.ignite.IgniteCheckedException: Failed to complete exchange process.
 ---> Apache.Ignite.Core.Common.IgniteException: Failed to complete exchange 
process.
 ---> Apache.Ignite.Core.Common.JavaException: javax.cache.CacheException: 
class org.apache.ignite.IgniteCheckedException: Failed to complete exchange 
process.
        at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1272)
        at 
org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:2278)
        at 
org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2242)
        at 
org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutObject(PlatformProcessorImpl.java:643)
        at 
org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to complete 
exchange process.
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.createExchangeException(GridDhtPartitionsExchangeFuture.java:3709)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendExchangeFailureMessage(GridDhtPartitionsExchangeFuture.java:3737)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:3832)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:3813)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1796)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:1053)
        at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3348)
        at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3182)
        at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
        at java.base/java.lang.Thread.run(Thread.java:829)
        Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to 
initialize exchange locally [locNodeId=e9325b04-00fa-452e-9796-989b47b860ea]
                at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloa
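[Editor's note: the failure reported above comes down to a configuration value
(the data region name) that is only validated deep inside the partition
exchange, after the cache creation has already been proposed to the grid. A
minimal, hypothetical sketch of the kind of client-side pre-check that avoids
this, written in plain Python rather than the Ignite API; the CacheConfig
shape and region names are illustrative assumptions, not Ignite types:]

```python
# Hypothetical defensive pre-check: validate the requested data region name
# against the regions actually configured on the grid BEFORE asking Ignite to
# create the cache, so a typo fails locally instead of failing the exchange.
# Plain-Python sketch; CacheConfig and the region names are illustrative.
from dataclasses import dataclass
from typing import Optional


@dataclass
class CacheConfig:
    name: str
    data_region_name: Optional[str] = None  # None means "use the default region"


def validate_cache_config(cfg: CacheConfig, configured_regions: set) -> None:
    """Raise early, client-side, instead of letting the exchange process fail."""
    region = cfg.data_region_name
    if region is not None and region not in configured_regions:
        raise ValueError(
            f"Cache '{cfg.name}' references unknown data region '{region}'; "
            f"known regions: {sorted(configured_regions)}"
        )


regions = {"Default", "Persistent"}
validate_cache_config(CacheConfig("ok-cache", "Persistent"), regions)  # passes
try:
    validate_cache_config(CacheConfig("bad-cache", "NoSuchRegion"), regions)
except ValueError as e:
    print(e)
```

[The same check could live in application code that wraps
ICacheClient/GetOrCreateCache calls, keeping the misconfiguration from ever
reaching the exchange worker shown in the stack trace above.]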



[jira] [Updated] (IGNITE-20299) Creating a cache with an unknown data region name causes immediate unrecoverable failure

2023-08-28 Thread Raymond Wilson (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Wilson updated IGNITE-20299:

Environment: 
Observed in:

C# client and grid running on Linux in a container

C# client and grid running on Windows

 

  was:
Observed in:

C# client running on Linux in a container

C# client running on Windows

 


> Creating a cache with an unknown data region name causes immediate 
> unrecoverable failure
> 
>
> Key: IGNITE-20299
> URL: https://issues.apache.org/jira/browse/IGNITE-20299
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.15
> Environment: Observed in:
> C# client and grid running on Linux in a container
> C# client and grid running on Windows
>  
>Reporter: Raymond Wilson
>Priority: Major
>
> Using the Ignite C# client.
>  
> Given a running grid, having a client (and perhaps server) node in the grid 
> attempt to create a cache using a DataRegionName that does not exist in the 
> grid causes immediate failure in the client node with the following log 
> output. 

Re: Possible bug failing to create a cache on a running grid causing grid failure

2023-08-28 Thread Raymond Wilson
FYI, I raised Jira ticket https://issues.apache.org/jira/browse/IGNITE-20299
for this.


On Mon, Aug 28, 2023 at 3:42 PM Raymond Wilson 
wrote:

> We have tried the same renaming in the dev environment which has multiple
> server nodes impacted and contains some data (unlike the local grid I
> tested this on which had a single server node containing no data). This
> environment failed to restart after those changes.
>
> We are still looking into it and will try to delete the cache with the
> control.sh script, but if that is not feasible and there are no other ways
> to mitigate it, I would rank this as a hot-fix candidate, since a simple
> error on a customer's part is capable of causing complete loss of the grid.
>
> Raymond.
>
> On Sun, Aug 27, 2023 at 9:23 PM Raymond Wilson 
> wrote:
>
>> Looking at the cache-SiteModelMetaData folder in the persistence folder
>> for a server node shows a "cache_data" file 6kb in size. No other cache
>> folders contain this file.
>>
>> As an experiment I renamed this file to "cache_dataxxx". This appeared to
>> be sufficient to permit the grid to restart. Similarly renaming the cache
>> folder to "xxxcache-SiteModelMetaData" also permitted the grid to restart;
>> we will be testing this further to verify.
>>
>> Raymond.
>>
>>
>> On Sun, Aug 27, 2023 at 5:20 PM Raymond Wilson <
>> raymond_wil...@trimble.com> wrote:
>>
>>> I have reproduced the possible bug I reported in my earlier email.
>>>
>>> Given a running grid, having a client node in the grid attempt to create
>>> a cache using a DataRegionName that does not exist in the grid causes
>>> immediate failure in the client node with the following log output.
>>>
>>> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Completed
>>> partition exchange [localNode=15122bd7-bf81-44e6-a548-e70dbd9334c0,
>>> exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
>>> [topVer=15, minorTopVer=0], evt=NODE_FAILED, evtNode=TcpDiscoveryNode
>>> [id=9d5ed68d-38bb-447d-aed5-189f52660716,
>>> consistentId=9d5ed68d-38bb-447d-aed5-189f52660716, addrs=ArrayList
>>> [127.0.0.1], sockAddrs=null, discPort=0, order=8, intOrder=8,
>>> lastExchangeTime=1693112858024, loc=false,
>>> ver=2.15.0#20230425-sha1:f98f7f35, isClient=true], rebalanced=false,
>>> done=true, newCrdFut=null], topVer=AffinityTopologyVersion [topVer=15,
>>> minorTopVer=0]]
>>> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Exchange
>>> timings [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0],
>>> resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], stage="Waiting
>>> in exchange queue" (14850 ms), stage="Exchange parameters initialization"
>>> (2 ms), stage="Determine exchange type" (3 ms), stage="Exchange done" (4
>>> ms), stage="Total time" (14859 ms)]
>>> 2023-08-27 17:08:48,522 [44] INF [ImmutableClientServer]   Exchange
>>> longest local stages [startVer=AffinityTopologyVersion [topVer=15,
>>> minorTopVer=0], resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
>>> 2023-08-27 17:08:48,524 [44] INF [ImmutableClientServer]   Finished
>>> exchange init [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0],
>>> crd=false]
>>> 2023-08-27 17:08:48,525 [44] INF [ImmutableClientServer]
>>> AffinityTopologyVersion [topVer=15, minorTopVer=0], evt=NODE_FAILED,
>>> evtNode=9d5ed68d-38bb-447d-aed5-189f52660716, client=true]
>>> Unhandled exception: Apache.Ignite.Core.Cache.CacheException: class
>>> org.apache.ignite.IgniteCheckedException: Failed to complete exchange
>>> process.
>>>  ---> Apache.Ignite.Core.Common.IgniteException: Failed to complete
>>> exchange process.
>>>  ---> Apache.Ignite.Core.Common.JavaException:
>>> javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException:
>>> Failed to complete exchange process.
>>> at
>>> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1272)
>>> at
>>> org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:2278)
>>> at
>>> org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2242)
>>> at
>>> org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutOb
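[Editor's note: the recovery workaround described in this thread, renaming the
persisted cache folder so the node ignores the broken cache on restart, can be
scripted. A hedged sketch in Python; the directory layout follows the report
("cache-SiteModelMetaData" under the node's persistence folder), the function
name is illustrative, and this is destructive, so back up the folder first:]

```python
# Sketch of the manual workaround from the thread: rename cache-<name> to
# xxxcache-<name> inside a node's persistence directory so the node can
# restart without loading the broken cache. Paths follow the report; take a
# backup before doing this on a real node.
from pathlib import Path
from typing import Optional


def sideline_cache_folder(persistence_dir: Path, cache_name: str) -> Optional[Path]:
    """Rename cache-<name> to xxxcache-<name>; return the new path, or None
    if the cache folder does not exist under persistence_dir."""
    src = persistence_dir / f"cache-{cache_name}"
    if not src.is_dir():
        return None
    dst = persistence_dir / f"xxxcache-{cache_name}"
    src.rename(dst)
    return dst
```

[As the thread notes, removing the cache properly via control.sh is
preferable where it works; the rename is a last-resort way to get a node
started again.]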


[jira] [Updated] (IGNITE-20299) Creating a cache with an unknown data region name causes immediate unrecoverable failure

2023-08-28 Thread Raymond Wilson (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Wilson updated IGNITE-20299:

Component/s: (was: cassandra)

> Creating a cache with an unknown data region name causes immediate 
> unrecoverable failure
> 
>
> Key: IGNITE-20299
> URL: https://issues.apache.org/jira/browse/IGNITE-20299
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.15
> Environment: Observed in:
> C# client running on Linux in a container
> C# client running on Windows
>  
>Reporter: Raymond Wilson
>Priority: Major
>
> Using the Ignite C# client.
>  
> Given a running grid, having a client (and perhaps server) node in the grid 
> attempt to create a cache using a DataRegionName that does not exist in the 
> grid causes immediate failure in the client node with the following log 
> output. 
>  
> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Completed 
> partition exchange [localNode=15122bd7-bf81-44e6-a548-e70dbd9334c0, 
> exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion 
> [topVer=15, minorTopVer=0], evt=NODE_FAILED, evtNode=TcpDiscoveryNode 
> [id=9d5ed68d-38bb-447d-aed5-189f52660716, 
> consistentId=9d5ed68d-38bb-447d-aed5-189f52660716, addrs=ArrayList 
> [127.0.0.1], sockAddrs=null, discPort=0, order=8, intOrder=8, 
> lastExchangeTime=1693112858024, loc=false, ver=2.15.0#20230425-sha1:f98f7f35, 
> isClient=true], rebalanced=false, done=true, newCrdFut=null], 
> topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Exchange timings 
> [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
> resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], stage="Waiting in 
> exchange queue" (14850 ms), stage="Exchange parameters initialization" (2 
> ms), stage="Determine exchange type" (3 ms), stage="Exchange done" (4 ms), 
> stage="Total time" (14859 ms)]
> 2023-08-27 17:08:48,522 [44] INF [ImmutableClientServer]   Exchange longest 
> local stages [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
> resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
> 2023-08-27 17:08:48,524 [44] INF [ImmutableClientServer]   Finished exchange 
> init [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], crd=false]
> 2023-08-27 17:08:48,525 [44] INF [ImmutableClientServer]   
> AffinityTopologyVersion [topVer=15, minorTopVer=0], evt=NODE_FAILED, 
> evtNode=9d5ed68d-38bb-447d-aed5-189f52660716, client=true]
> Unhandled exception: Apache.Ignite.Core.Cache.CacheException: class 
> org.apache.ignite.IgniteCheckedException: Failed to complete exchange process.
>  ---> Apache.Ignite.Core.Common.IgniteException: Failed to complete exchange 
> process.
>  ---> Apache.Ignite.Core.Common.JavaException: javax.cache.CacheException: 
> class org.apache.ignite.IgniteCheckedException: Failed to complete exchange 
> process.
>         at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1272)
>         at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:2278)
>         at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2242)
>         at 
> org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutObject(PlatformProcessorImpl.java:643)
>         at 
> org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to complete 
> exchange process.
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.createExchangeException(GridDhtPartitionsExchangeFuture.java:3709)
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendExchangeFailureMessage(GridDhtPartitionsExchangeFuture.java:3737)
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:3832)
>         at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:3813)
>         at 
> org.apache.ignite.internal.processors.cach

[jira] [Created] (IGNITE-20299) Creating a cache with an unknown data region name causes immediate unrecoverable failure

2023-08-28 Thread Raymond Wilson (Jira)
Raymond Wilson created IGNITE-20299:
---

 Summary: Creating a cache with an unknown data region name causes 
immediate unrecoverable failure
 Key: IGNITE-20299
 URL: https://issues.apache.org/jira/browse/IGNITE-20299
 Project: Ignite
  Issue Type: Bug
  Components: cache, cassandra
Affects Versions: 2.15
 Environment: Observed in:

C# client running on Linux in a container

C# client running on Windows

 
Reporter: Raymond Wilson


Using the Ignite C# client.
 
Given a running grid, having a client (and perhaps server) node in the grid 
attempt to create a cache using a DataRegionName that does not exist in the 
grid causes immediate failure in the client node with the following log output. 
 
2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Completed partition 
exchange [localNode=15122bd7-bf81-44e6-a548-e70dbd9334c0, 
exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion 
[topVer=15, minorTopVer=0], evt=NODE_FAILED, evtNode=TcpDiscoveryNode 
[id=9d5ed68d-38bb-447d-aed5-189f52660716, 
consistentId=9d5ed68d-38bb-447d-aed5-189f52660716, addrs=ArrayList [127.0.0.1], 
sockAddrs=null, discPort=0, order=8, intOrder=8, 
lastExchangeTime=1693112858024, loc=false, ver=2.15.0#20230425-sha1:f98f7f35, 
isClient=true], rebalanced=false, done=true, newCrdFut=null], 
topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Exchange timings 
[startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], stage="Waiting in 
exchange queue" (14850 ms), stage="Exchange parameters initialization" (2 ms), 
stage="Determine exchange type" (3 ms), stage="Exchange done" (4 ms), 
stage="Total time" (14859 ms)]
2023-08-27 17:08:48,522 [44] INF [ImmutableClientServer]   Exchange longest 
local stages [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], 
resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
2023-08-27 17:08:48,524 [44] INF [ImmutableClientServer]   Finished exchange 
init [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], crd=false]
2023-08-27 17:08:48,525 [44] INF [ImmutableClientServer]   
AffinityTopologyVersion [topVer=15, minorTopVer=0], evt=NODE_FAILED, 
evtNode=9d5ed68d-38bb-447d-aed5-189f52660716, client=true]
Unhandled exception: Apache.Ignite.Core.Cache.CacheException: class 
org.apache.ignite.IgniteCheckedException: Failed to complete exchange process.
 ---> Apache.Ignite.Core.Common.IgniteException: Failed to complete exchange 
process.
 ---> Apache.Ignite.Core.Common.JavaException: javax.cache.CacheException: 
class org.apache.ignite.IgniteCheckedException: Failed to complete exchange 
process.
        at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1272)
        at 
org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:2278)
        at 
org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2242)
        at 
org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutObject(PlatformProcessorImpl.java:643)
        at 
org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to complete 
exchange process.
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.createExchangeException(GridDhtPartitionsExchangeFuture.java:3709)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendExchangeFailureMessage(GridDhtPartitionsExchangeFuture.java:3737)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:3832)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:3813)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1796)
        at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:1053)
        at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3348)
        at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3182)
        at 
org.apache.ignite

Public Discussion of CommScope CA Inclusion Request

2023-08-28 Thread Ben Wilson
All,

This email commences a six-week public discussion of CommScope’s request to
include the following four (4) certificates as publicly trusted root
certificates in one or more CCADB Root Store Member’s program. This
discussion period is scheduled to close on October 10, 2023.

The purpose of this public discussion process is to promote openness and
transparency. However, each Root Store makes its inclusion decisions
independently, on its own timelines, and based on its own inclusion
criteria. Successful completion of this public discussion process does not
guarantee any favorable action by any root store.

Anyone with concerns or questions is urged to raise them on this CCADB
Public list by replying directly in this discussion thread. Likewise, a
representative of the applicant must promptly respond directly in the
discussion thread to all questions that are posted.

CCADB Case Number: 0923
Bugzilla: 1673177

Organization Background Information (listed in CCADB):

   - CA Owner Name: CommScope
   - Website(s): https://www.pki-center.com/, https://cert.pkiworks.com/Public/Portal, https://www.commscope.com/
   - Address: 6450 Sequence Drive, San Diego CA 92121
   - Problem Reporting Mechanism(s): https://cert.pkiworks.com/Public/SecurityIncidentReport/ or email to #advanced-pki-policy-author...@commscope.com
   - Organization Type: CAs are owned by wholly owned subsidiaries of CommScope Holding Company, Inc., a NASDAQ-traded company
   - Repository URL: https://certificates.pkiworks.com/Public/Documents/

Certificates Requesting Inclusion:

   1. CommScope Public Trust RSA Root-01:
      - Certificate download links: (CA Repository, crt.sh)
      - Use cases served/EKUs:
         - Server Authentication (TLS) 1.3.6.1.5.5.7.3.1
         - Client Authentication 1.3.6.1.5.5.7.3.2
      - Test websites:
         - Valid: https://rsa-current.ca-1.test.pkiworks.com
         - Revoked: https://rsa-revoked.ca-1.test.pkiworks.com
         - Expired: https://rsa-expired.ca-1.test.pkiworks.com

   2. CommScope Public Trust RSA Root-02:
      - Certificate download links: (CA Repository, crt.sh)
      - Use cases served/EKUs:
         - Server Authentication (TLS) 1.3.6.1.5.5.7.3.1
         - Client Authentication 1.3.6.1.5.5.7.3.2
      - Test websites:
         - Valid: https://rsa-current.ca-2.test.pkiworks.com
         - Revoked: https://rsa-revoked.ca-2.test.pkiworks.com
         - Expired: https://rsa-expired.ca-2.test.pkiworks.com

   3. CommScope Public Trust ECC Root-01:
      - Certificate download links: (CA Repository, crt.sh)
      - Use cases served/EKUs:
         - Server Authentication (TLS) 1.3.6.1.5.5.7.3.1
         - Client Authentication 1.3.6.1.5.5.7.3.2
      - Test websites:
         - Valid: https://ecc-current.ca-1.test.pkiworks.com
         - Revoked: https://ecc-revoked.ca-1.test.pkiworks.com
         - Expired: https://ecc-expired.ca-1.test.pkiworks.com

   4. CommScope Public Trust ECC Root-02:
      - Certificate download links: (CA Repository, crt.sh)
      - Use cases served/EKUs:
         - Server Authentication (TLS) 1.3.6.1.5.5.7.3.1
         - Client Authentication 1.3.6.1.5.5.7.3.2
      - Test websites:
         - Valid: https://ecc-current.ca-2.test.pkiworks.com
         - Revoked: https://ecc-revoked.ca-2.test.pkiworks.com
         - Expired: https://ecc-expired.ca-2.test.pkiworks.com

Relevant Policy and Practices Documentation:

The following applies to all four (4) applicant root CAs:

   - https://certificates.pkiworks.com/Public/DownloadDocument/18 (CP/CPS v. 2.6 dated 2/10/2023)


Most Recent Self-Assessment:

The following applies to all four (4) applicant root CAs:

   - https://bugzilla.mozilla.org/attachment.cgi?id=9281545

Re: Possible bug failing to create a cache on a running grid causing grid failure

2023-08-27 Thread Raymond Wilson
We have tried the same renaming in the dev environment, which has multiple
impacted server nodes and contains some data (unlike the local grid I
tested this on, which had a single server node containing no data). This
environment failed to restart after those changes.

We are still looking into it and will try to delete the cache with the
Control.sh script. If that is not feasible and there is no other way to
mitigate it, I would rank this as a hot-fix candidate, since a simple
error on a customer's part is capable of causing complete data loss.

Raymond.

On Sun, Aug 27, 2023 at 9:23 PM Raymond Wilson 
wrote:

> Looking at the cache-SiteModelMetaData folder in the persistence folder
> for a server node shows a "cache_data" file 6kb in size. No other cache
> folders contain this file.
>
> As an experiment I renamed this file to "cache_dataxxx". This appeared to
> be sufficient to permit the grid to restart. Similarly renaming the cache
> folder to  "xxxcache-SiteModelMetaData" also permitted the grid to restart;
> we will be testing this further to verify.
>
> Raymond.
>
>
> On Sun, Aug 27, 2023 at 5:20 PM Raymond Wilson 
> wrote:
>
>> I have reproduced the possible bug I reported in my earlier email.
>>
>> Given a running grid, having a client node in the grid attempt to create
>> a cache using a DataRegionName that does not exist in the grid causes
>> immediate failure in the client node with the following log output.
>>
>> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Completed
>> partition exchange [localNode=15122bd7-bf81-44e6-a548-e70dbd9334c0,
>> exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
>> [topVer=15, minorTopVer=0], evt=NODE_FAILED, evtNode=TcpDiscoveryNode
>> [id=9d5ed68d-38bb-447d-aed5-189f52660716,
>> consistentId=9d5ed68d-38bb-447d-aed5-189f52660716, addrs=ArrayList
>> [127.0.0.1], sockAddrs=null, discPort=0, order=8, intOrder=8,
>> lastExchangeTime=1693112858024, loc=false,
>> ver=2.15.0#20230425-sha1:f98f7f35, isClient=true], rebalanced=false,
>> done=true, newCrdFut=null], topVer=AffinityTopologyVersion [topVer=15,
>> minorTopVer=0]]
>> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Exchange
>> timings [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0],
>> resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], stage="Waiting
>> in exchange queue" (14850 ms), stage="Exchange parameters initialization"
>> (2 ms), stage="Determine exchange type" (3 ms), stage="Exchange done" (4
>> ms), stage="Total time" (14859 ms)]
>> 2023-08-27 17:08:48,522 [44] INF [ImmutableClientServer]   Exchange
>> longest local stages [startVer=AffinityTopologyVersion [topVer=15,
>> minorTopVer=0], resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
>> 2023-08-27 17:08:48,524 [44] INF [ImmutableClientServer]   Finished
>> exchange init [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0],
>> crd=false]
>> 2023-08-27 17:08:48,525 [44] INF [ImmutableClientServer]
>> AffinityTopologyVersion [topVer=15, minorTopVer=0], evt=NODE_FAILED,
>> evtNode=9d5ed68d-38bb-447d-aed5-189f52660716, client=true]
>> Unhandled exception: Apache.Ignite.Core.Cache.CacheException: class
>> org.apache.ignite.IgniteCheckedException: Failed to complete exchange
>> process.
>>  ---> Apache.Ignite.Core.Common.IgniteException: Failed to complete
>> exchange process.
>>  ---> Apache.Ignite.Core.Common.JavaException:
>> javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException:
>> Failed to complete exchange process.
>> at
>> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1272)
>> at
>> org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:2278)
>> at
>> org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2242)
>> at
>> org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutObject(PlatformProcessorImpl.java:643)
>> at
>> org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
>> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to
>> complete exchange process.
>> at
>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.createExchangeException(GridDhtPartitionsExchangeFuture.java:3709)
>> at
>> org.apache.ignite.internal.processors.cache.distributed.dht.preloader

Re: Possible bug failing to create a cache on a running grid causing grid failure

2023-08-27 Thread Raymond Wilson
Looking at the cache-SiteModelMetaData folder in the persistence folder for
a server node shows a "cache_data" file 6kb in size. No other cache folders
contain this file.

As an experiment I renamed this file to "cache_dataxxx". This appeared to
be sufficient to permit the grid to restart. Similarly renaming the cache
folder to  "xxxcache-SiteModelMetaData" also permitted the grid to restart;
we will be testing this further to verify.

Raymond.


On Sun, Aug 27, 2023 at 5:20 PM Raymond Wilson 
wrote:

> I have reproduced the possible bug I reported in my earlier email.
>
> Given a running grid, having a client node in the grid attempt to create a
> cache using a DataRegionName that does not exist in the grid causes
> immediate failure in the client node with the following log output.
>
> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Completed
> partition exchange [localNode=15122bd7-bf81-44e6-a548-e70dbd9334c0,
> exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
> [topVer=15, minorTopVer=0], evt=NODE_FAILED, evtNode=TcpDiscoveryNode
> [id=9d5ed68d-38bb-447d-aed5-189f52660716,
> consistentId=9d5ed68d-38bb-447d-aed5-189f52660716, addrs=ArrayList
> [127.0.0.1], sockAddrs=null, discPort=0, order=8, intOrder=8,
> lastExchangeTime=1693112858024, loc=false,
> ver=2.15.0#20230425-sha1:f98f7f35, isClient=true], rebalanced=false,
> done=true, newCrdFut=null], topVer=AffinityTopologyVersion [topVer=15,
> minorTopVer=0]]
> 2023-08-27 17:08:48,520 [44] INF [ImmutableClientServer]   Exchange
> timings [startVer=AffinityTopologyVersion [topVer=15, minorTopVer=0],
> resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0], stage="Waiting
> in exchange queue" (14850 ms), stage="Exchange parameters initialization"
> (2 ms), stage="Determine exchange type" (3 ms), stage="Exchange done" (4
> ms), stage="Total time" (14859 ms)]
> 2023-08-27 17:08:48,522 [44] INF [ImmutableClientServer]   Exchange
> longest local stages [startVer=AffinityTopologyVersion [topVer=15,
> minorTopVer=0], resVer=AffinityTopologyVersion [topVer=15, minorTopVer=0]]
> 2023-08-27 17:08:48,524 [44] INF [ImmutableClientServer]   Finished
> exchange init [topVer=AffinityTopologyVersion [topVer=15, minorTopVer=0],
> crd=false]
> 2023-08-27 17:08:48,525 [44] INF [ImmutableClientServer]
> AffinityTopologyVersion [topVer=15, minorTopVer=0], evt=NODE_FAILED,
> evtNode=9d5ed68d-38bb-447d-aed5-189f52660716, client=true]
> Unhandled exception: Apache.Ignite.Core.Cache.CacheException: class
> org.apache.ignite.IgniteCheckedException: Failed to complete exchange
> process.
>  ---> Apache.Ignite.Core.Common.IgniteException: Failed to complete
> exchange process.
>  ---> Apache.Ignite.Core.Common.JavaException: javax.cache.CacheException:
> class org.apache.ignite.IgniteCheckedException: Failed to complete exchange
> process.
> at
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1272)
> at
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:2278)
> at
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2242)
> at
> org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.processInStreamOutObject(PlatformProcessorImpl.java:643)
> at
> org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to
> complete exchange process.
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.createExchangeException(GridDhtPartitionsExchangeFuture.java:3709)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendExchangeFailureMessage(GridDhtPartitionsExchangeFuture.java:3737)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:3832)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:3813)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1796)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:1053)
> at
> org.apache.ignit

Re: Possible bug failing to create a cache on a running grid causing grid failure

2023-08-26 Thread Raymond Wilson
ture.java:979)
... 4 more
Caused by: class org.apache.ignite.IgniteCheckedException:
Requested DataRegion is not configured: Default-Mutable
at
org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.dataRegion(IgniteCacheDatabaseSharedManager.java:896)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCacheGroup(GridCacheProcessor.java:2463)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.getOrCreateCacheGroupContext(GridCacheProcessor.java:2181)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheContext(GridCacheProcessor.java:1991)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1926)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$55a0e703$1(GridCacheProcessor.java:1801)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCachesIfPossible$16(GridCacheProcessor.java:1771)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1798)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCachesIfPossible(GridCacheProcessor.java:1769)
at
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processCacheStartRequests(CacheAffinitySharedManager.java:1000)
at
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:886)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:1472)
... 5 more

   at Apache.Ignite.Core.Impl.Unmanaged.Jni.Env.ExceptionCheck()
   at Apache.Ignite.Core.Impl.Unmanaged.Jni.Env.CallObjectMethod(GlobalRef
obj, IntPtr methodId, Int64* argsPtr)
   at
Apache.Ignite.Core.Impl.Unmanaged.UnmanagedUtils.TargetInStreamOutObject(GlobalRef
target, Int32 opType, Int64 inMemPtr)
   at Apache.Ignite.Core.Impl.PlatformJniTarget.InStreamOutObject(Int32
type, Action`1 writeAction)
   --- End of inner exception stack trace ---
   --- End of inner exception stack trace ---
   at Apache.Ignite.Core.Impl.PlatformJniTarget.InStreamOutObject(Int32
type, Action`1 writeAction)
   at Apache.Ignite.Core.Impl.PlatformTargetAdapter.DoOutOpObject(Int32
type, Action`1 action)
   at
Apache.Ignite.Core.Impl.Ignite.GetOrCreateCache[TK,TV](CacheConfiguration
configuration, NearCacheConfiguration nearConfiguration,
PlatformCacheConfiguration platformCacheConfiguration, Op op)
   at
Apache.Ignite.Core.Impl.Ignite.GetOrCreateCache[TK,TV](CacheConfiguration
configuration, NearCacheConfiguration nearConfiguration,
PlatformCacheConfiguration platformCacheConfiguration)
   at
Apache.Ignite.Core.Impl.Ignite.GetOrCreateCache[TK,TV](CacheConfiguration
configuration, NearCacheConfiguration nearConfiguration)
   at
Apache.Ignite.Core.Impl.Ignite.GetOrCreateCache[TK,TV](CacheConfiguration
configuration)


This failure causes issues in the server nodes in the grid, which now fail
to restart with errors such as the one below (for the incorrectly created
cache), repeated for every defined cache in the grid:

2023-08-27 17:11:36,882 [42] INF [ImmutableCacheComputeServer]   Can not
finish proxy initialization because proxy does not exist,
cacheName=SiteModelMetadata,
localNodeId=3d4a75e8-174d-4947-877e-e45784d8d08d
2

At this point the grid is now unusable.

To summarise: attempted creation of a cache with an unknown DataRegionName
causes immediate and unrecoverable failure of the entire grid.

Raymond.
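Until a fix lands, the trigger can be guarded against client-side. Below is a minimal Java sketch of a hypothetical pre-flight check (not part of Ignite's API) that rejects a cache configuration whose data region name is not among the regions known to be configured on the grid. In real code the configured names would be collected from the node's DataStorageConfiguration (the default region plus any DataRegionConfiguration entries) before getOrCreateCache is called.

```java
import java.util.Set;

public class DataRegionGuard {
    // Hypothetical pre-flight check: refuse to create a cache whose
    // DataRegionName is not among the regions configured on this grid.
    public static void requireKnownRegion(String requested, Set<String> configured) {
        // A null region name means "use the default region" and is always allowed.
        if (requested == null)
            return;
        if (!configured.contains(requested))
            throw new IllegalArgumentException(
                "Requested DataRegion is not configured: " + requested);
    }

    public static void main(String[] args) {
        Set<String> regions = Set.of("Default-Immutable");
        requireKnownRegion(null, regions);                  // ok: default region
        requireKnownRegion("Default-Immutable", regions);   // ok: configured
        try {
            requireKnownRegion("Default-Mutable", regions); // mirrors the failure in the logs
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Failing fast in the client before the cache-start request reaches the exchange avoids the coordinator-side "Requested DataRegion is not configured" exception seen in the stack traces above.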


On Fri, Aug 25, 2023 at 7:47 PM Raymond Wilson 
wrote:

> We believe we had some code on a dev environment attempt to create a cache
> that was intended for another Ignite.
>
> The creation of this cache would have failed (at least) because the data
> region referenced in the cache configuration does not exist on that
> environment.
>
> A subsequent restart of the environment some time later started failing to
> initialise nodes on which the failed cache would have been stored had it
> succeeded.
>
> The failing nodes report this in the log:
>
> 2023-08-25 04:20:24,540 [44] WRN [ImmutableCacheComputeServer]   Cache
> can not be started : cache=SiteModelMetadata
>
> 2023-08-25 04:20:11,265 [1] WRN [ImmutableCacheComputeServer]   WAL
> segment tail reached. [idx=414, isWorkDir=true,
> serVer=org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV2Serializer@c3719e5,
> actualFilePtr=WALPointer [idx=414, fileOff=452480679, len=0]]
>
> This error implies that (somehow) Ignite considers this to be a cache
> existing in 

[sig-policy] Re: prop-153-v001: Proposed changes to PDP

2023-08-26 Thread Paul Wilson
BTW Jordi, you can keep using email, if you prefer it, to participate in any
of the APNIC lists.

The whole point of Orbit is to offer a web interface to the same lists; but of 
course no messages should be lost in the translation.  If they are, that's a 
serious bug.

Paul.


Get BlueMail for Android
On Aug 25, 2023, at 17:25, "jordi.palet--- via SIG-policy" 
mailto:sig-policy@lists.apnic.net>> wrote:

I’m sorry to say so, but mailman was much more useful than the Orbit rubbish
system ...

When I try to find my message, it is NOT THERE. Only if I look at the very last
original email in that thread, where Shaila announced prop-153 on August 3, and
go down, am I able to find my email embedded.

How come?

The system must allow users to find individual messages, see the headers, etc.,
instead of making them waste 15 minutes to find an email ...

No wonder that I never got a reply to it …

this is the thread link, hopefully it works, but you need to read all the way through ...:
https://orbit.apnic.net/hyperkitty/list/sig-policy@lists.apnic.net/thread/IC2KP2LROGUVD4DWJL4YLCEJHMCUFSL6/?sort=date


Regards,
Jordi

@jordipalet



**
IPv4 is over
Are you ready for the new Internet ?
http://www.theipv6company.com
The IPv6 Company

This electronic message contains information which may be privileged or 
confidential. The information is intended to be for the exclusive use of the 
individual(s) named above and further non-explicitly authorized disclosure, 
copying, distribution or use of the contents of this information, even if 
partially, including attached files, is strictly prohibited and will be 
considered a criminal offense. If you are not the intended recipient be aware 
that any disclosure, copying, distribution or use of the contents of this 
information, even if partially, including attached files, is strictly 
prohibited, will be considered a criminal offense, so you must reply to the 
original sender to inform about this communication and delete it.





SIG-policy - https://mailman.apnic.net/sig-policy@lists.apnic.net/
To unsubscribe send an email to sig-policy-le...@lists.apnic.net

[sig-policy] Re: prop-153-v001: Proposed changes to PDP

2023-08-26 Thread Paul Wilson
Very sorry to hear about this Jordi.  It seems like a serious issue, so it'll 
be investigated urgently.

But I do hope Orbit isn't rubbish.  It's being used quite heavily and the 
feedback seems to be ok.  But it is new and definitely needs improvement, so if 
you can bear with it and report your issues, that's much appreciated.

Thanks,

Paul.


Get BlueMail for Android
On Aug 25, 2023, at 17:25, "jordi.palet--- via SIG-policy" 
mailto:sig-policy@lists.apnic.net>> wrote:

I’m sorry to say so, but mailman was much more useful that the Orbit rubbish 
system ...

When I try to find my message is NOT THERE. Only if I look to the very last 
original email on that thread, when Shaila announced prop-153, on August 3, and 
I go down, then I’m able to find my email embedded.

How come?

The system must allow to find individual messages, see the headers, etc., etc., 
instead of wasting 15 minutes to find my email ...

No wonder that I never got a reply to it …

this is the thread link, hopefully it works, but you need to read all thru ...:
https://orbit.apnic.net/hyperkitty/list/sig-policy@lists.apnic.net/thread/IC2KP2LROGUVD4DWJL4YLCEJHMCUFSL6/?sort=date


Regards,
Jordi

@jordipalet



**
IPv4 is over
Are you ready for the new Internet ?
http://www.theipv6company.com
The IPv6 Company

This electronic message contains information which may be privileged or 
confidential. The information is intended to be for the exclusive use of the 
individual(s) named above and further non-explicilty authorized disclosure, 
copying, distribution or use of the contents of this information, even if 
partially, including attached files, is strictly prohibited and will be 
considered a criminal offense. If you are not the intended recipient be aware 
that any disclosure, copying, distribution or use of the contents of this 
information, even if partially, including attached files, is strictly 
prohibited, will be considered a criminal offense, so you must reply to the 
original sender to inform about this communication and delete it.





SIG-policy - https://mailman.apnic.net/sig-policy@lists.apnic.net/
To unsubscribe send an email to sig-policy-le...@lists.apnic.net

[Translators-l] Re: Ready for translation: Tech News #35 (2023)

2023-08-25 Thread Nick Wilson (Quiddity)
On Fri, Aug 25, 2023 at 3:16 AM Nick Wilson (Quiddity) <
nwil...@wikimedia.org> wrote:

> The latest tech newsletter is ready for early translation:
> https://meta.wikimedia.org/wiki/Tech/News/2023/35
>
> Direct translation link:
>
> https://meta.wikimedia.org/w/index.php?title=Special:Translate=page-Tech%2FNews%2F2023%2F35=page
>

The text of the newsletter is now final.

*Three items have been added* since yesterday.

There won't be any more changes; you can translate safely. Thanks!
___
Translators-l mailing list -- translators-l@lists.wikimedia.org
To unsubscribe send an email to translators-l-le...@lists.wikimedia.org


Possible bug failing to create a cache on a running grid causing grid failure

2023-08-25 Thread Raymond Wilson
We believe we had some code on a dev environment attempt to create a cache
that was intended for another Ignite.

The creation of this cache would have failed (at least) because the data
region referenced in the cache configuration does not exist on that
environment.

A subsequent restart of the environment some time later started failing to
initialise nodes on which the failed cache would have been stored had it
succeeded.

The failing nodes report this in the log:

2023-08-25 04:20:24,540 [44] WRN [ImmutableCacheComputeServer]   Cache can
not be started : cache=SiteModelMetadata

2023-08-25 04:20:11,265 [1] WRN [ImmutableCacheComputeServer]   WAL segment
tail reached. [idx=414, isWorkDir=true,
serVer=org.apache.ignite.internal.processors.cache.persistence.wal.serializer.RecordV2Serializer@c3719e5,
actualFilePtr=WALPointer [idx=414, fileOff=452480679, len=0]]

This error implies that (somehow) Ignite considers this to be a cache
existing in the grid and is attempting to set it up.

Raymond.


[Translators-l] Ready for translation: Tech News #35 (2023)

2023-08-24 Thread Nick Wilson (Quiddity)
The latest tech newsletter is ready for early translation:
https://meta.wikimedia.org/wiki/Tech/News/2023/35

Direct translation link:
https://meta.wikimedia.org/w/index.php?title=Special:Translate&group=page-Tech%2FNews%2F2023%2F35&action=page

We plan to send the newsletter on Monday afternoon (UTC), i.e. Monday
morning PT. The existing translations will be posted on the wikis in
each language. Deadlines:
https://meta.wikimedia.org/wiki/Tech/News/For_contributors#The_deadlines

There will be more edits by Friday noon UTC but the existing content should
generally remain fairly stable. I will let you know on Friday in any
case.

Let us know if you have any questions, comments or concerns. As
always, we appreciate your help and feedback.

(If you haven't translated Tech News previously, see this email:
https://lists.wikimedia.org/pipermail/translators-l/2017-January/003773.html)


[CODE4LIB] Any cool new discovery projects out there?

2023-08-24 Thread Kristen Wilson
Hi everyone,

At NC State, we're working on assessing the state of our discovery
environment, and we're trying to take a look at new projects being done at
other libraries that we can learn from. Has anyone worked on or heard about
any cool new discovery projects recently? These could be things related to
library catalogs, library websites, linked data, search, etc.

Thanks!
Kristen

-- 
Kristen Wilson (she/her)
Discovery Systems Manager
NC State University Libraries
(919) 513-2118
kmbl...@ncsu.edu


Re: Salzburg to Prague

2023-08-23 Thread mike wilson
Google Maps says "Cycling not available" 8-) but the walking route has a total 
rise of just under 3,000 metres and a fall of just over that, taking three days. 
Good luck and have fun. Don't forget the Narodni Technicke Muzeum - not the one 
on Wilsonova street.

> On 24/08/2023 01:23 Mark Roberts  wrote:
> 
>  
> We're off to Austria tomorrow, doing a bicycle trip that starts from
> Salzburg and ends up in Prague. It shouldn't be a very strenuous trip,
> as they're saying 36 miles is the most we'll be covering in a day, so
> that ought to leave plenty of time for some photography.
> 
> Photos to follow!
--
Pentax-Discuss Mail List
To unsubscribe send an email to pdml-le...@pdml.net


Re: Fwd: PESO: Spring is Sprung

2023-08-22 Thread mike wilson
All three getting through fine here.

> On 22/08/2023 09:54 Alan C  wrote:
> 
>  
> Trying again
> 
>  Forwarded Message 
> Subject:  PESO: Spring is Sprung
> Date: Tue, 22 Aug 2023 08:06:46 +0200
> From: Alan C 
> To:   Pentax Discus Mail List 
> 
> 
> 
> An early morning snap of the Knoppiesdoorn (Knob-thorn - Acacia 
> nigrescens) tree on our corner. Always the first harbinger of Spring, 
> right on the Equinox. The Heuglin's Robin has been kicking up a racket 
> since 3am.
> 
> https://www.flickr.com/photos/wisselstroom/53132973417/
> 
> K5 & HD 55-300

Re: [go-cd] GoCD Pipeline Views - Can they be copied across users?

2023-08-22 Thread Chad Wilson
Yeah, they are in the DB keyed to the user's id:
https://github.com/gocd/gocd/blob/0f58107c851cf2df6ce7c6902eebde796dc1f742/db-support/db-migration/src/main/resources/db-migration-scripts/initial/create-schema.xml#L291-L303

While it comes with some risks, it probably wouldn't be super difficult to
update the views/filters (PIPELINESELECTIONS) to point to the new user IDs
for each user with an appropriate query? IIRC there should be max one row
per user (can't recall if it's 1-to-(0,1) or 1-to-1 from USERS).
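The suggested migration can be sketched with an in-memory SQLite database standing in for GoCD's DB. The table and column names below are heavily simplified stand-ins (the real schema is in the create-schema.xml linked above), and the username pattern follows the old/new naming Chris described; treat it as an illustration, not a ready-to-run query:

```python
import sqlite3

# Simplified stand-ins for GoCD's USERS and PIPELINESELECTIONS tables;
# the real schema (see the create-schema.xml link above) differs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE pipelineselections (
    id INTEGER PRIMARY KEY, userid INTEGER UNIQUE, filters TEXT);
INSERT INTO users VALUES (1, 'user.name'), (2, 'user.name@domain');
INSERT INTO pipelineselections VALUES (10, 1, '[{"name":"My view"}]');
""")

# Re-point each old user's view row at the matching new user id,
# assuming each new username is the old one plus an '@domain' suffix.
# The EXISTS guard skips rows that have no matching new user.
conn.execute("""
UPDATE pipelineselections
SET userid = (SELECT n.id FROM users n
              JOIN users o ON n.name = o.name || '@domain'
              WHERE o.id = pipelineselections.userid)
WHERE EXISTS (SELECT 1 FROM users o
              JOIN users n ON n.name = o.name || '@domain'
              WHERE o.id = pipelineselections.userid)
""")

row = conn.execute(
    "SELECT userid, filters FROM pipelineselections WHERE id = 10").fetchone()
print(row)  # the view row now belongs to the new user's id
```

Note the UNIQUE constraint on the user id column: if the new user has already saved their own views, a row for them already exists and this update would conflict, so you would need to decide which row wins first.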

Not aware of any specific export or share support. The UI uses an internal
API */go/api/internal/pipeline_selection* to retrieve (GET) and update
(PUT) the views in one big block as an array of "filters", which in theory
you could use to get a JSON representation of your individual views if you
can still login with the old username. If one is more savvy, one could then
also PUT the collection back to the same API to update views when
authenticated with the new username - but obviously this is
undocumented/unsupported and would require some browser "Inspect" digging
:-)

-Chad

On Tue, Aug 22, 2023 at 4:21 PM 'Chris Gillatt' via go-cd <
go-cd@googlegroups.com> wrote:

> I think that the user-configurable pipeline views in GoCD are stored
> against the user in the DB.  I'm pretty sure that without messing around
> with the DB, migrating views (or sharing or exporting them) would not be
> possible.  Could anyone confirm this please?
>
> The reason I ask is that we've migrated from one auth plugin to another,
> and as a result, all users now have a new username (old: user.name, new:
> user.name@domain).  This means the users have lost their views.  It's not
> a biggie, and moving from LDAP to OIDC auth is a much bigger win than
> keeping views, but thought I'd just ask the question.
>
> Cheers
> Chris
>

-- 
You received this message because you are subscribed to the Google Groups 
"go-cd" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to go-cd+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/go-cd/CAA1RwH8KaBor-GKKoVCQhWusc16nzfTQ1mg-LW-Gwm90PcTz0A%40mail.gmail.com.


Re: Cache write synchronization with replicated caches

2023-08-21 Thread Raymond Wilson
Thanks for the pointer to the read repair facility added in Ignite 2.14.

Unfortunately the .WithReadRepair() extension does not seem to be present
in the Ignite C# client.

This means we either need to use the experimental control.sh support, or
improve our tooling to effectively do the same. I am curious why this is
labelled as experimental. Does this imply risk if run against a production
environment grid?
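Pending read-repair support in the C# client, the "improve our tooling" route can be as simple as reading each suspect key through a client connection pinned to each node and diffing the results. A minimal, transport-agnostic sketch, where the per-node reader callables and the example key are hypothetical stand-ins for real per-node cache reads:

```python
def find_inconsistent_keys(keys, readers):
    """Compare the value each node reports for every key.

    `readers` maps a node name to a callable returning that node's value
    for a key (or None if the key is absent there). Returns
    {key: {node: value}} for keys where the nodes disagree.
    """
    inconsistent = {}
    for key in keys:
        values = {node: read(key) for node, read in readers.items()}
        if len(set(values.values())) > 1:
            inconsistent[key] = values
    return inconsistent

# Toy data mirroring the observed failure: one node holds the value,
# the three other replicas report nothing.
readers = {
    "node1": lambda k: "v1",
    "node2": lambda k: None,
    "node3": lambda k: None,
    "node4": lambda k: None,
}
print(find_inconsistent_keys(["SiteModelMetadata/42"], readers))
```

In practice the readers would be thin-client connections configured with one node address each, so that each read is served by that node's local copy of the replicated cache.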

Raymond.


On Mon, Aug 21, 2023 at 5:50 PM Николай Ижиков  wrote:

> Hello.
>
> I don’t know the cause of your issue.
> But, we have feature to overcome it [1]
>
> Consistency repair can be run from control.sh.
>
> ```
> ./bin/control.sh --enable-experimental
> ...
>   [EXPERIMENTAL]
>   Check/Repair cache consistency using Read Repair approach:
> control.(sh|bat) --consistency repair cache-name partition
>
> Parameters:
>   cache-name  - Cache to be checked/repaired.
>   partition   - Cache's partition to be checked/repaired.
>
>   [EXPERIMENTAL]
>   Cache consistency check/repair operations status:
> control.(sh|bat) --consistency status
>
>   [EXPERIMENTAL]
>   Finalize partitions update counters:
> control.(sh|bat) --consistency finalize
> ```
>
> It seems the docs for this command are not complete.
> It also accepts a strategy argument, so you can manage your repair actions
> more precisely.
> Try to run:
>
> ```
> ❯ ./bin/control.sh --enable-experimental --consistency repair --cache
> default --strategy CHECK_ONLY --partitions 1,2,3,…your_partitions_list...
> ```
>
> Available strategies with good description can be found in sources [2]
>
>
> [1] https://ignite.apache.org/docs/latest/key-value-api/read-repair
> [2]
> https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/cache/ReadRepairStrategy.java
>
>
>
> On 21 Aug 2023, at 07:46, Raymond Wilson wrote:
>
> [Replying onto correct thread]
>
> As a follow up to this email, we are starting to collect evidence that
> replicated caches within our Ignite grid are failing to replicate values in
> a small number of cases.
>
> In the cases we observe so far, with a cluster of 4 nodes participating in
> a replicated cache, only one node reports having the correct value for a
> key, and the other three report having no value for that key.
>
> The documentation is pretty opinionated about the
> CacheWriteSynchronizationMode not being impactful with respect to
> consistency for replicated caches. As noted below, we use PrimarySync (the
> default) for these caches, which would suggest a potential failure mode
> preventing the backup copies obtaining their copy once the primary copy has
> been written.
>
> We are continuing to investigate and would be interested in any
> suggestions you may have as to the likely cause.
>
> Thanks,
> Raymond.
>
> On Thu, Jul 27, 2023 at 12:38 PM Raymond Wilson <
> raymond_wil...@trimble.com> wrote:
>
>> Hi,
>>
>> I have a query regarding data safety of replicated caches in the case of
>> hard failure of the compute resource but where the storage resource is
>> available when the node returns.
>>
>> We are using Ignite 2.15 with the C# client.
>>
>> We have a number of these caches that have four nodes participating in
>> the replicated caches, all with the default PrimarySync write
>> synchronization mode. All data storage configurations are configured with
>> WalMode = WalMode.Fsync.
>>
>> We have logic performing writes against these caches which will continue
>> once the primary node for the replicated cache has written the data item.
>>
>> I am unsure of the guarantees made by Ignite at this point in the event
>> of failure. Specifically, hard/red-button failure of compute hardware
>> resources and/or abrupt (but recoverable) detachment of storage resources.
>>
>> Scenario one: Primary node returns "OK", then immediately fails (before
>> check point). When the primary node returns should I expect the replicated
>> value to be in the primary, and to appear in all other nodes too.
>>
>> Scenario two: Primary node returns "OK", then a secondary node
>> immediately fails (before achieving the write and so before any check
>> point). When the secondary node returns should I expect the replicated
>> value to be in the recovered secondary node?
>>
>> In relation to these scenarios, does setting the cache write
>> synchronization mode to FullSync improve the safety of the write, as all
>> nodes must acknowledge the write before it returns?
>>
>> If there is an improvement in write safety in this instance, does this
>> imply the Fsync WalMode write pathway has opportunities for data loss in
>> these failure situations?

Re: Cache write synchronization with replicated caches

2023-08-20 Thread Raymond Wilson
[Replying onto correct thread]

As a follow up to this email, we are starting to collect evidence that
replicated caches within our Ignite grid are failing to replicate values in
a small number of cases.

In the cases we observe so far, with a cluster of 4 nodes participating in
a replicated cache, only one node reports having the correct value for a
key, and the other three report having no value for that key.

The documentation is pretty opinionated about the
CacheWriteSynchronizationMode not being impactful with respect to
consistency for replicated caches. As noted below, we use PrimarySync (the
default) for these caches, which would suggest a potential failure mode
preventing the backup copies obtaining their copy once the primary copy has
been written.

We are continuing to investigate and would be interested in any
suggestions you may have as to the likely cause.

Thanks,
Raymond.

On Thu, Jul 27, 2023 at 12:38 PM Raymond Wilson 
wrote:

> Hi,
>
> I have a query regarding data safety of replicated caches in the case of
> hard failure of the compute resource but where the storage resource is
> available when the node returns.
>
> We are using Ignite 2.15 with the C# client.
>
> We have a number of these caches that have four nodes participating in the
> replicated caches, all with the default PrimarySync write synchronization
> mode. All data storage configurations are configured with WalMode =
> WalMode.Fsync.
>
> We have logic performing writes against these caches which will continue
> once the primary node for the replicated cache has written the data item.
>
> I am unsure of the guarantees made by Ignite at this point in the event of
> failure. Specifically, hard/red-button failure of compute hardware
> resources and/or abrupt (but recoverable) detachment of storage resources.
>
> Scenario one: Primary node returns "OK", then immediately fails (before
> check point). When the primary node returns should I expect the replicated
> value to be in the primary, and to appear in all other nodes too.
>
> Scenario two: Primary node returns "OK", then a secondary node immediately
> fails (before achieving the write and so before any check point). When the
> secondary node returns should I expect the replicated value to be in the
> recovered secondary node?
>
> In relation to these scenarios, does setting the cache write
> synchronization mode to FullSync improve the safety of the write, as all
> nodes must acknowledge the write before it returns?
>
> If there is an improvement in write safety in this instance, does this
> imply the Fsync WalMode write pathway has opportunities for data loss in
> these failure situations?
>
> Thanks,
> Raymond.
>
>
>
>


-- 
<http://www.trimble.com/>
Raymond Wilson
Trimble Distinguished Engineer, Civil Construction Software (CCS)
11 Birmingham Drive | Christchurch, New Zealand
raymond_wil...@trimble.com


