Let me offer a different perspective, looking at this through the lens of
our day-to-day Cassandra operators — especially those who don’t have large
teams or deep internal investments (often just one or two people, who
represent the majority of the community).

Today, many of these operators are successfully running *Cassandra 4.1*,
but they frequently face some fundamental questions from their leadership
or stakeholders, such as:

   1. *“Why can’t we move to JDK 21? Everyone else in the company has
      already moved off JDK 11.”*
      - When can we move to JDK 21? → Likely with Cassandra 6.0, which may
        still be a couple of years away.
      - What prevents upgrading to Cassandra 6.0? → The release introduces
        several changes that aren’t yet widely adopted or extensively
        tested for upgrade/downgrade stability.
      - As a result, operators often feel cautious, and this uncertainty
        can unintentionally create a negative perception of Cassandra’s
        readiness and modernization pace.

   2. *“Why does Cassandra occasionally lose or resurrect data that was
      already written?”*
      - Answer: “We need to run repair regularly to ensure data
        consistency.” (A sketch of what that involves today follows this
        list.)
      - Follow-up: “Why don’t we run it automatically?” → Because
        automated repair is only available in Cassandra 6.0.
      - Next question: “Then why not upgrade to 6.0?” → Same concern as
        above — adoption risk and testing confidence.
      - Again, this situation can lead to doubt about Cassandra’s
        reliability, even though it’s fundamentally sound.

   3. *“Can we use new features like cross-shard transactions?”*
      - Response: These are available only in Cassandra 6.0.
      - This doesn’t create as much concern as the previous points, but it
        still reinforces the perception gap between versions.
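
To make the repair point in #2 concrete: on 4.1 today, “run repair
regularly” usually means the operator maintains their own scheduling glue
around nodetool. Below is a minimal sketch, assuming a cron-triggered
script and a hypothetical keyspace name — details vary, but the burden
sits with the team rather than the database:

    #!/usr/bin/env python3
    # Minimal sketch (not a recommendation) of the glue a small team ends
    # up owning to "run repair regularly" on Cassandra 4.1. The keyspace
    # name and schedule are hypothetical; run from cron on each node.
    import subprocess

    KEYSPACES = ["my_app_ks"]  # hypothetical keyspace name

    for ks in KEYSPACES:
        # -pr limits repair to this node's primary token ranges, so running
        # the same job on every node covers the cluster without duplicate
        # work.
        subprocess.run(["nodetool", "repair", "-pr", ks], check=True)

Even something this small is one more piece for a one- or two-person team
to own, monitor, and debug.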

In most cases, teams decide to remain on *Cassandra 4.1*, but the ongoing
conversations around modernization and stability can gradually *erode
confidence in the Apache Cassandra brand*, especially when compared with
managed or cloud-native alternatives.

For large operators with dedicated Cassandra teams, these challenges are
easier to manage — they can backport features or enhancements as needed.
But for the majority of smaller teams, that isn’t feasible.

As a community, it’s important that we continue to *strengthen Cassandra’s
reputation* by ensuring that the fundamentals — reliability, compatibility,
and operational ease — remain first-class.
Whatever steps we can take to make Cassandra easier to run and evolve
confidently, we should pursue together.

Jaydeep

On Fri, Oct 24, 2025 at 9:04 AM Dinesh Joshi <[email protected]> wrote:

> On Fri, Oct 24, 2025 at 2:44 AM Jeff Jirsa <[email protected]> wrote:
>
>> The outcome of the first round of discussion was “ok there’s no agreement
>> to do this and there’s not even agreement about why people run forks or if
>> it’s good that they maintain forks”
>>
>
> Well I guess we're talking about two different things here and it is
> totally my fault. Let me clarify.
>
> I am talking about the idea of backports. My general reading of both
> threads is that we all agree to a varying degree that backports are
> valuable. This is where I believe there is general agreement. If that is
> not the case, please point it out and I will happily stand corrected.
>
>
>> And then we’re back pushing for consensus.
>>
>> There’s no consensus here to speed up.
>>
>
> Maybe it is poor choice of words but my aim here was to help build shared
> understanding of the problem so we can talk about possible paths to a
> solution.
>
> Dinesh
>
>
