In cassandra-topology.properties (the colons in the IPv6 address are
backslash-escaped, as Java properties files require):

> fe80\:0\:0\:0\:202\:b3ff\:fe1e\:8329=DC1:RAC3
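For context, a sketch of the two snitch configuration styles this entry could belong to (the DC1/RAC3 values are taken from the entry above; everything else is illustrative): GossipingPropertyFileSnitch reads a per-node cassandra-rackdc.properties, while the IP=DC:RACK form shown above is the cassandra-topology.properties format used by PropertyFileSnitch.

```properties
# cassandra-rackdc.properties (GossipingPropertyFileSnitch):
# each node declares only its own datacenter and rack.
dc=DC1
rack=RAC3

# cassandra-topology.properties (PropertyFileSnitch):
# one file listing every node; colons in IPv6 addresses must be
# backslash-escaped in Java properties files.
fe80\:0\:0\:0\:202\:b3ff\:fe1e\:8329=DC1:RAC3
default=DC1:RAC1
```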
> On Tuesday, June 19, 2018, 12:51:34 PM EDT, Durity, Sean R
> <sean_r_dur...@homedepot.com> wrote:
> You are correct that the cluster decides where data goes (based on the
> hash of the partition key). However, if you choose a "bad" partition key,
> you may not get good distribution of the data, because the hash is
> deterministic (it always goes to the same nodes/replicas). For example, if
> you have a partition key of a datetime, it is possible that there is more
> data written for a certain time period – thus a larger partition and an
> imbalance across the cluster. Choosing a good partition key is one of the
> most important decisions for a Cassandra table.
>
> Also, I have seen the use of racks in the topology cause an imbalance in
> the "first" node of the rack.
>
> To help you more, we would need the create table statement(s) for your
> keyspace and the topology of the cluster (like with nodetool status).
>
> Sean Durity
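The deterministic-hash point above can be sketched in a few lines of Python. This is illustrative only: Cassandra's Murmur3Partitioner is stood in for by md5, and the node count and key names are made up. The idea is the same, though: the same partition key always maps to the same replicas, so a bare datetime partition key funnels a whole time period into one partition, while a composite key with a bucket spreads it out.

```python
import hashlib

def replica_for(partition_key: str, num_nodes: int) -> int:
    # Deterministic hash: the same key always maps to the same node,
    # just as a real partitioner always yields the same token.
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % num_nodes

# Bad key: every event in the same hour shares one partition key,
# so the whole hour's writes land on a single replica set.
writes = [("2018-06-19T09:00", f"event-{i}") for i in range(1000)]
by_node = {}
for hour, _event in writes:
    node = replica_for(hour, 6)
    by_node[node] = by_node.get(node, 0) + 1
print(by_node)  # all 1000 writes map to a single node

# Better key: hour plus a bucket spreads the same writes around.
by_node2 = {}
for i, (hour, _event) in enumerate(writes):
    node = replica_for(f"{hour}:{i % 16}", 6)
    by_node2[node] = by_node2.get(node, 0) + 1
print(by_node2)  # writes spread across several nodes
```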
From: learner dba
Sent: Tuesday, June 19, 2018 9:50 AM
To: user@cassandra.apache.org
Subject: Re: RE: [EXTERNAL] Cluster is unbalanced
We do not choose the node where a partition will go. I thought it is the
snitch's role to choose replica nodes. Even the partition size does not vary
on our largest column family:

Percentile  SSTables  Write Latency  Read Latency  Partition Size  Cell Count
> If it was a partition key issue, we would see a similar number of partition
> keys across nodes. If we look closely, the number of keys across nodes
> varies a lot.

I'm not sure about that; is it possible you're writing more new partitions
to some nodes even though each node owns the same number of tokens?
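The distinction being raised here can be sketched with a toy simulation (illustrative Python, not Cassandra's real vnode allocation; md5 stands in for Murmur3, and the node count and `user-*` key names are invented): every node owns an exactly equal slice of the token ring, yet write counts per node still come out uneven when the workload's partition keys are skewed, for example because one key is hot.

```python
import hashlib
from collections import Counter

NUM_NODES = 6
RING = 2**32  # toy ring; a real Murmur3 ring is 2**64 wide

def token(key: str) -> int:
    # Deterministic key -> token mapping, standing in for Murmur3.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % RING

def owner(tok: int) -> int:
    # Each node owns an equal, contiguous slice of the toy ring,
    # i.e. every node "owns the same number of tokens".
    return tok * NUM_NODES // RING

# Skewed workload: one hot partition key receives 10% of all writes.
writes_per_node = Counter()
for i in range(10_000):
    key = "user-0" if i % 10 == 0 else f"user-{i % 50}"
    writes_per_node[owner(token(key))] += 1

# Token ownership is perfectly even; write counts per node are not.
print(dict(writes_per_node))
```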
Hi Sean,

Are you using any rack aware topology? --> we are using gossip file
What are your partition keys? --> Partition key is unique.

Is it possible that your partition keys do not divide up as cleanly as you
would like across the cluster?