> > I'm ok with breaking trunk CI temporarily as long as failures are tracked
> > and triaged/addressed before the next release.
From the ticket, I understand it is meant for 5.0-rc.

I share this sentiment for whichever release we decide to ship it with:

> The failures should block release or we should not advertise we have those
> features at all, and the configuration should be named "experimental"
> rather than "latest".

> Is the community okay with committing the patch before all of these are
> addressed?

If we aim to fix everything before the next release, 5.0-rc, we can commit CASSANDRA-18753 after the fixes are applied. If we are not going to do all the fixes anytime soon, I prefer to commit and have the failures and the tickets open. Otherwise, I can guarantee that I, personally, will forget some of those failures and lose track of them in time... and I suspect I won't be the only one :-)

> # This version is provided for new users of Cassandra who want to get the
> # most out of their cluster and for users evaluating the technology.

From reading this thread, we do not recommend using it straight in production, but rather to experiment with it, gain trust, and then use it in production. Did I get that correctly? We need to confirm what it is and be sure it is clearly stated in the docs. (See the illustrative sketch of such an overlay at the end of this message.)

Announcing this new yaml file under the features section of NEWS.txt sounds reasonable to me. Or could we add a separate section at the top of NEWS.txt 5.0, dedicated only to the announcement of this new configuration file?

> Mick and Ekaterina (and everyone really) - any thoughts on what test
> coverage we should commit to for this new configuration? Acknowledging that
> we already have *a lot* of CI that we run.

I do not have an immediate answer. I see there is some proposed CI configuration in the ticket. As far as I can tell from a quick look, the suggestion is to replace unit-trie with unit-latest (which also exercises tries), and the additional new jobs would be the Python and Java DTests (no new upgrade tests).

Off the top of my head, we probably need a cost-benefit analysis, a risk analysis, and a discussion of the tradeoffs: burned resources vs. manpower, early detection vs. late discovery (or even production issues), experimental vs. production-ready, etc.

Now, this question can have different answers depending on whether this is an experimental config or one we recommend for production use.

I would expect new features to be enabled in this configuration and all tests to be run pre-commit with both the default and the new YAML files. Is this a correct assumption? Probably something to confirm with a note on the ML. The question is: do we have enough resources in Jenkins to facilitate all this testing post-commit?

> I think it is much more valuable to test those various configurations
> rather than test against j11 and j17 separately. I can see really little
> value in doing that.

Excellent point. I have been saying for some time that, IMHO, we can reduce pre-commit CI to at least:

1) Build with JDK 11
2) Build with JDK 17
3) Run tests with the JDK 11 build on the JDK 11 runtime
4) Run tests with the JDK 11 build on the JDK 17 runtime

Technically, that is also what we ship in 5.0 (except for 2), the JDK 17 build, which we should nevertheless keep in CI).

Does it make sense to reduce to what I mentioned in 1, 2, 3, 4 and instead add the suggested jobs with the new configuration from CASSANDRA-18753 to pre-commit? (A rough sketch of what such a matrix could look like follows below.)

Please correct me if I am wrong, but my understanding is that running the tests on the JDK 17 build is experimental in CI, so that we can gain confidence until the release in which we drop JDK 11. No? If that is correct, I do not see why we run those tests on every pre-commit rather than only on what we ship.
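To make that reduced matrix concrete, here is a minimal, purely illustrative sketch in generic YAML — the keys and job names are placeholders of my own, not our actual Jenkins or CircleCI definitions:

```yaml
# Illustrative only: a reduced pre-commit matrix following points 1-4 above.
# All keys and names here are placeholders, not real CI job definitions.
builds:
  - jdk: 11              # 1) build with JDK 11 (what we ship)
  - jdk: 17              # 2) build with JDK 17 (kept as a build sanity check)
test_matrix:
  - build_jdk: 11
    runtime_jdk: 11      # 3) tests on the JDK 11 build, JDK 11 runtime
  - build_jdk: 11
    runtime_jdk: 17      # 4) tests on the JDK 11 build, JDK 17 runtime
configs:
  - cassandra.yaml          # conservative defaults
  - cassandra_latest.yaml   # new features (CASSANDRA-18753)
```

The point of the sketch is that the capacity freed by dropping the duplicated per-JDK test runs could be spent on the configuration axis instead.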
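And as for the "latest" configuration file itself, here is a hypothetical sketch of the kind of overrides such an overlay might carry — placeholder keys and values for illustration only, not the actual contents of the file from CASSANDRA-18753:

```yaml
# Hypothetical sketch of a "latest" overlay -- NOT the actual
# cassandra_latest.yaml from CASSANDRA-18753. The idea: a single file
# that opts into the newest non-default features, so CI can exercise
# them alongside the conservative defaults in cassandra.yaml.
memtable:
  configurations:
    default:
      class_name: TrieMemtable       # e.g. trie-based memtables
sstable:
  selected_format: bti               # e.g. a newer sstable format
storage_compatibility_mode: NONE     # e.g. no pre-5.0 compatibility
```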
Best regards,
Ekaterina

On Wed, 14 Feb 2024 at 17:35, Štefan Miklošovič <stefan.mikloso...@gmail.com> wrote:

> I agree with Jacek, I don't quite understand why we are running the
> pipeline for j17 and j11 every time. I think this should be opt-in.
> Majority of the time, we are just refactoring and coding stuff for
> Cassandra where testing it for both jvms is just pointless and we _know_
> that it will be fine in 11 and 17 too because we do not do anything
> special. If we find some subsystems where testing on both jvms is
> crucial, we might do that; I just do not remember the last time that
> testing in both j17 and j11 suddenly uncovered some bug. Seems more
> like a hassle.
>
> We might then test the whole pipeline with a different config in
> basically the same time as we currently do.
>
> On Wed, Feb 14, 2024 at 9:32 PM Jacek Lewandowski <
> lewandowski.ja...@gmail.com> wrote:
>
>> On Wed, 14 Feb 2024 at 17:30, Josh McKenzie <jmcken...@apache.org> wrote:
>>
>>> When we have failing tests people do not spend the time to figure out if
>>> their logic caused a regression and merge, making things more unstable… so
>>> when we merge failing tests that leads to people merging even more failing
>>> tests...
>>>
>>> What's the counter position to this Jacek / Berenguer?
>>
>> For how long are we going to deceive ourselves? Are we shipping those
>> features or not? Perhaps it is also a good opportunity to distinguish
>> subsets of tests which make sense to run with a configuration matrix.
>>
>> If we don't add those tests to the pre-commit pipeline, "people do not
>> spend the time to figure out if their logic caused a regression and merge,
>> making things more unstable…"
>>
>> I think it is much more valuable to test those various configurations
>> rather than test against j11 and j17 separately. I can see really little
>> value in doing that.