Looks like this is already logged at CASSANDRA-11305
<https://issues.apache.org/jira/browse/CASSANDRA-11305>. I will comment
there. I'd be interested if others have feedback.
On Wed, Jun 9, 2021 at 9:32 AM Jonathan Koppenhofer wrote:
> Thanks!
>
> I'll put in a Jira
Thanks!
I'll put in a Jira to make this configurable. Maybe submit a patch if I can
find time.
On Tue, Jun 8, 2021, 6:49 PM Erick Ramirez wrote:
> There's definitely a case for separation of duties. For example, admin
> roles who have DDL permissions should not have DML access. To achieve this,
Hi,
In a highly managed environment, "automatic granting" (
https://cassandra.apache.org/doc/latest/cql/security.html#automatic-granting)
may not always be desirable. Is there any way to turn it off? Or what
have people done to work around cases where they don't want it?
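For illustration, the kind of workaround I have in mind is to revoke the
auto-granted permissions right after an object is created. A rough,
untested sketch with the Java driver, run by an admin role (the keyspace,
table, and role names are made up):

    import com.datastax.oss.driver.api.core.CqlSession;

    public class RevokeAutoGrants {
        public static void main(String[] args) {
            try (CqlSession session = CqlSession.builder().build()) {
                // Automatic granting gives the creating role ALL
                // permissions on the new table...
                session.execute(
                    "CREATE TABLE app_ks.events (id uuid PRIMARY KEY, payload text)");
                // ...so strip the ones we don't want it to keep
                // (assuming "deploy_role" performed the create).
                session.execute("REVOKE MODIFY ON TABLE app_ks.events FROM deploy_role");
                session.execute("REVOKE SELECT ON TABLE app_ks.events FROM deploy_role");
            }
        }
    }

The window between the CREATE and the REVOKEs is exactly why a config
option to disable this would be nicer.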
Some use cases:
- We
In addition to Jeff's answer... Certificate hot reloading... For an
organization that uses annual certs :)
On Fri, May 7, 2021, 8:47 AM Durity, Sean R wrote:
> There is not enough 4.0 chatter here. What feature or fix of the 4.0
> release is most important for your use case(s)/environment? What
We actually feel the same; we have edge cases where downgrading CL was
useful. We did end up writing this in application logic, as our code was
pretty well abstracted and centralized enough to do so.
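A minimal sketch of the shape of that fallback, using the DataStax Java
driver 4.x (the query, levels, and structure here are illustrative, not
our actual code):

    import com.datastax.oss.driver.api.core.ConsistencyLevel;
    import com.datastax.oss.driver.api.core.CqlSession;
    import com.datastax.oss.driver.api.core.cql.ResultSet;
    import com.datastax.oss.driver.api.core.cql.SimpleStatement;
    import com.datastax.oss.driver.api.core.servererrors.UnavailableException;

    public final class Reads {
        // Try the read at QUORUM; if too few replicas are alive, retry
        // once at ONE and knowingly accept the weaker guarantee.
        public static ResultSet readWithFallback(CqlSession session, String cql) {
            SimpleStatement stmt = SimpleStatement.newInstance(cql)
                    .setConsistencyLevel(ConsistencyLevel.QUORUM);
            try {
                return session.execute(stmt);
            } catch (UnavailableException e) {
                return session.execute(
                        stmt.setConsistencyLevel(ConsistencyLevel.ONE));
            }
        }
    }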
I will also agree the driver seems overly prescriptive now, with limited
ability to override. Good for
, did you still face this issue on 3.0.15?
>
> Thanks
> Surbhi
>
> On Tue, 18 Feb 2020 at 17:40, Jonathan Koppenhofer wrote:
>
>> I believe we had something similar happen on 3.0.15 a while back. We had
>> a cluster that created mass havoc by creating MVs on a
I believe we had something similar happen on 3.0.15 a while back. We had a
cluster that created mass havoc by creating MVs on a large existing
dataset. We thought we had stabilized the cluster, but saw issues similar
to yours when we dropped the MVs.
We interpreted our errors to mean that we should
We do this without containers quite successfully. As a precaution, we...
- have a dedicated disk per instance.
- have lots of network bandwidth, but also throttle throughput defaults.
We also monitor the network closely.
- share CPU completely, and use Cassandra settings to limit CPU use
(concurrent threads, com
To clarify... you have 2 datacenters with DataStax, and you want to expand
to a third DC with open-source Cassandra? Is this a temporary migration
strategy? I would not want to run in this state for very long.
For DataStax, you should reach out to their support for questions. However,
speaking from
Not sure why repair is running, but we are also seeing the same merkle
tree issue in a mixed-version cluster in which we have intentionally
started a repair against 2 upgraded DCs. We are currently researching and
can post back if we find the issue, but would also appreciate it if someone
has a
Has anyone tried to do a DC switch as a means to migrate from DataStax to
OSS? This would be the safest route, as the ability to revert back to
DataStax is easy. However, I'm curious how the dse_system keyspace would be
replicated to OSS given its custom Everywhere strategy. You may have to
change that keyspace's replication strategy first.
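Purely as a hypothetical sketch (the DC names and replication factors are
placeholders, and I have not tried this against DSE), that change might
look like this from the Java driver, before adding the OSS DC:

    import com.datastax.oss.driver.api.core.CqlSession;

    public class MoveOffEverywhere {
        public static void main(String[] args) {
            try (CqlSession session = CqlSession.builder().build()) {
                // Hypothetical: replace the DSE-only EverywhereStrategy
                // with NetworkTopologyStrategy so OSS nodes can own the
                // keyspace. DC names and RFs are placeholders.
                session.execute(
                    "ALTER KEYSPACE dse_system WITH replication = "
                    + "{'class': 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '3'}");
            }
        }
    }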
We do multiple nodes per host as a standard practice. In our case, we never
put 2 nodes from a single cluster on the same host, though as mentioned
before, you could potentially get away with that if you properly use rack
awareness; just be careful of load.
We also do NOT use any other layer of s
I think it would also be interesting to hear how people are handling
automation (which to me is different than AI) and config management.
For us, it is a combo of custom Java workflows and SaltStack.
On Thu, Mar 28, 2019, 5:03 AM Kenneth Brotman wrote:
> I’m looking to get a better feel for how
1. PaaS model. We have a team responsible for the deployment and
self-service tooling, as well as serving as the SME for both development
and Cassandra operations. End users consume the service and are
responsible for app development and operations. Larger apps have separate
teams for this; smaller apps have a s
>> e and speed is also good with large amounts of data (100s of TB). This
>> process is also restartable, as it takes days to transfer such an
>> amount of data.
>>
>> Good luck
>>
>> On Tue, Dec 4, 2018 at 9:04 PM dinesh.jo...@yahoo.com.INVALID wrote:
Unfortunately, we found this to be a little tricky. We did migrations from
DSE 4.8 and 5.0 to OSS 3.0.x, so you may run into additional issues. I will
also say your best option may be to install a fresh cluster and stream the
data. This wasn't feasible for us at the size and scale in the time frame