Dear community,
is someone working on this ticket? This is clearly a performance regression,
and we are stuck on 3.11.6 and cannot upgrade to the latest version.
Regards,
Maxim.
On Mon, Feb 22, 2021 at 10:37 AM Ahmed Eljami
wrote:
> Hey,
> I have created the issue, here =>
> https://issues.apache.or
On Fri, Jul 30, 2021 at 7:21 PM Bowen Song wrote:
> Since you have only one node, sstableloader is unnecessary. Copying/moving
> the data directory back to the right place and restarting Cassandra, or
> running 'nodetool refresh', is sufficient. Do not restore the 'system'
> keyspace, but do restore the
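For reference, a minimal sketch of the 'nodetool refresh' variant described in
that quote, using placeholder keyspace/table names and the default data path
(all of these are assumptions, not details from the thread):

    # with the node running, copy the backed-up SSTables into the table's
    # data directory, then tell Cassandra to pick up the new files
    # (my_ks, my_table, <table_id> and the paths are placeholders)
    cp /backup/my_ks/my_table/*.db \
       /var/lib/cassandra/data/my_ks/my_table-<table_id>/
    nodetool refresh my_ks my_table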
Thanks for the quick answer.
> Do you ever intend to add nodes to this single node cluster? If not, I
> don't see the number of tokens matter at all.
>
I understand that, but I would like to have all environments with the same
settings.
> However, if you really want to change it and don't mind downtime,
Hi everyone,
I have several development servers with 1 node and num_tokens 256. In
preparation for testing 4.0, I would like to change num_tokens to 16.
Unfortunately I cannot add any additional nodes or an additional DC, but
I'm fine with downtime. The important part is that the data should be
preserved.
Wh
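For context, num_tokens itself is only a cassandra.yaml setting; a node picks
its tokens when it first bootstraps and keeps them in the system keyspace
afterwards, which is why the data has to be handled around such a change. A
minimal sketch of the setting with the value discussed above:

    # cassandra.yaml (sketch): only takes effect on a node that starts up
    # without previously assigned tokens
    num_tokens: 16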
Hi Sean,
thanks for the quick answer. I have applied your suggestion and tested it on
several environments; everything is working fine.
Other communication protected by SSL, such as server-to-server and
client-to-server, is working without problems as well.
Regards,
Maxim.
On Fri, Apr 30, 2021 at 3:1
Hi everyone,
I have Apache Cassandra 3.11.6 with SSL encryption, CentOS Linux release
7.9, and Python 2.7.5. The JDK and Python come from the operating system. I
updated the operating system today and with that I got a new JDK:
$ java -version
openjdk version "1.8.0_292"
OpenJDK Runtime Environment (
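For context only (this is not the specific fix discussed in the thread), SSL
for internode and client traffic in 3.11 is configured through the encryption
blocks in cassandra.yaml; a generic sketch with placeholder keystore paths and
passwords:

    # cassandra.yaml (sketch, placeholder paths and passwords)
    server_encryption_options:
        internode_encryption: all
        keystore: /etc/cassandra/conf/keystore.jks
        keystore_password: changeit
        truststore: /etc/cassandra/conf/truststore.jks
        truststore_password: changeit
    client_encryption_options:
        enabled: true
        keystore: /etc/cassandra/conf/keystore.jks
        keystore_password: changeit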
Hi everyone,
There are a lot of articles, and this question has probably been asked many
times already, but I'm still not 100% sure.
We have a table which we reload almost in full every night with a Spark job,
using consistency LOCAL_QUORUM and a record TTL of 7 days. This is to remove
some records if they are not p
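For context, if the nightly job uses the DataStax Spark Cassandra Connector
(an assumption, the writer is not named above), both of the settings mentioned
map directly to connector options; a sketch:

    # spark-submit flags (sketch, assuming the DataStax Spark Cassandra Connector)
    --conf spark.cassandra.output.consistency.level=LOCAL_QUORUM \
    --conf spark.cassandra.output.ttl=604800   # 7 days, in seconds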
Hi,
I'm not sure if this will help, but today I tried changing one node from
3.11.6 to 3.11.9. We are NOT using TWCS. Very heavy read pattern, almost no
writes, with a constant performance test load. The Cassandra 99th percentile
read latency increased significantly, but NOT on the node where I changed the
version (to
Hi Nico,
we wanted to upgrade to 3.11.8 (from .6), but now I'm very concerned, as our
load is mostly read-only and very latency-sensitive.
Did you figure out the reason for such behaviour with the new version?
Regards,
Maxim.
On Tue, Sep 15, 2020 at 5:24 AM Sagar Jambhulkar
wrote:
> That is od
Hi guys,
thanks a lot for the useful tips. I obviously underestimated the complexity
of such a change.
Thanks again,
Maxim.
>
Hi everyone,
with the discussion about reducing the default number of vnodes in version
4.0, I would like to ask what the optimal procedure would be to reduce the
number of vnodes in an existing 3.11.x cluster that was set up with the
default value of 256. The cluster has 2 DCs with 5 nodes each and RF=3.
There is one more restricti
Hi Alain,
thanks a lot for detailed answer.
> You can set values individually in a collection as you did above (and
> probably should do so to avoid massive tombstone creation), but you have
> to read the whole thing at once:
>
This, actually, is one of the design goals. At the moment I have t
Hi everyone,
I'm struggling to understand how I can query the TTL of a row in a collection
(Cassandra 3.11.4).
Here is my schema:
CREATE TYPE item (
    csn bigint,
    name text
);
CREATE TABLE products (
    product_id bigint PRIMARY KEY,
    items map>
);
And I'm creating records with TTL like this:
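Purely as an illustration, writing map entries with a TTL could look like the
sketch below, assuming 'items' is a non-frozen map keyed by bigint with frozen
item values (the exact map type is not shown above):

    -- sketch only, assumed schema: items map<bigint, frozen<item>>
    INSERT INTO products (product_id, items)
    VALUES (1, {1: {csn: 1, name: 'first'}})
    USING TTL 86400;

    -- an individual entry can also get its own TTL
    UPDATE products USING TTL 86400
    SET items[2] = {csn: 2, name: 'second'}
    WHERE product_id = 1;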
Hi Alex,
I'm using Cassandra Reaper as well. This could be
https://issues.apache.org/jira/browse/CASSANDRA-14332 as it was committed
in both versions.
Regards,
Maxim.
On Wed, Aug 29, 2018 at 2:14 PM Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:
> On Wed, Aug 29, 2018 at
> "The best way to predict the future is to invent it" Alan Kay
>
>
> On Wed, Aug 29, 2018 at 3:06 AM Maxim Parkachov
> wrote:
>
>> Hi everyone,
>>
>> a couple of days ago I upgraded Cassandra from 3.11.2 to 3.11.3 and I
>> see that repa
Hi everyone,
a couple of days ago I upgraded Cassandra from 3.11.2 to 3.11.3 and I see
that repair time has practically doubled. Does anyone else experience the
same regression?
Regards,
Maxim.
maybe you are setting
> nulls?
>
> Rahul
> On Aug 18, 2018, 11:16 PM -0700, Maxim Parkachov ,
> wrote:
>
> Hi Rahul,
>
> I'm already using LOCAL_QUORUM in the batch process and it runs every day.
> As far as I understand, because I'm overwriting the whole table with
ul Singh
wrote:
> Are you loading using a batch process? What's the frequency of the data
> ingest, and does it have to be very fast? If it is not too frequent and can
> be a little slower, you may consider a higher consistency to ensure data is
> on the replicas.
>
> Rahul
> On Aug 18,
Hi community,
I'm currently puzzled by the following challenge. I have a CF with a 7-day
TTL on all rows. Daily, there is a process which loads the current data with
a +7 day TTL, so records which were not present in the last 7 days of loads
expire. The amount of these expired records is very small, < 1%. I have d
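For illustration, the per-load TTL described above is just a USING TTL clause
on the load statements; a sketch with placeholder keyspace, table and column
names:

    -- sketch, placeholder names: every (re)loaded row gets a fresh 7-day TTL
    INSERT INTO my_ks.my_table (id, payload)
    VALUES (42, 'current value')
    USING TTL 604800;   -- 7 days in seconds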