Hello,
just as a small addition: the numbers also depend on the consistency level
used for reads. It will behave like that only if you read on local nodes.
If you do reads at ALL, QUORUM, EACH_QUORUM, etc., you also need to include
the read volume in the calculation.
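To make the point above concrete, here is a rough sketch (my own illustrative code, not anything from the Cassandra codebase) of how many replicas a read touches at each consistency level, which is what drives the extra read volume:

```python
# Illustrative sketch: replicas contacted per read for common
# consistency levels. rf_per_dc maps datacenter name -> replication factor.

def quorum(rf):
    # A quorum is a strict majority of the replicas.
    return rf // 2 + 1

def replicas_contacted(cl, rf_per_dc, local_dc):
    total_rf = sum(rf_per_dc.values())
    if cl in ("ONE", "LOCAL_ONE"):
        return 1
    if cl == "LOCAL_QUORUM":
        return quorum(rf_per_dc[local_dc])
    if cl == "QUORUM":
        return quorum(total_rf)   # majority of all replicas cluster-wide
    if cl == "EACH_QUORUM":
        return sum(quorum(rf) for rf in rf_per_dc.values())
    if cl == "ALL":
        return total_rf
    raise ValueError("unknown consistency level: %s" % cl)

# Example: two datacenters, RF 3 in each.
rfs = {"dc1": 3, "dc2": 3}
for cl in ("LOCAL_ONE", "LOCAL_QUORUM", "QUORUM", "EACH_QUORUM", "ALL"):
    print(cl, replicas_contacted(cl, rfs, "dc1"))
```

With RF 3 in two datacenters, a QUORUM or EACH_QUORUM read touches 4 replicas and an ALL read touches 6, versus 1 for LOCAL_ONE; that multiplier is the extra read volume to budget for.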
Regards,
Georg
On Wed., 15 Ja
We tried to tune sethintedhandoffthrottlekb to 100, 1024, and 10240, but
nothing helped.
Our hints related parameters are as below, if you don't find any parameter
below then it is not set in our environment and should be of the default
value.
max_hint_window_in_ms: 10800000 # 3 hours
hinted_handof
The high CPU is probably the hints getting replayed, slamming the write path.
Slowing it down with the hint throttle may help.
It's not instant.
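For reference, the hint throttle mentioned above is the hinted_handoff_throttle_in_kb setting in cassandra.yaml (the same value the nodetool sethintedhandoffthrottlekb command adjusts at runtime); the number below is an illustration, not a recommendation:

```
# cassandra.yaml -- maximum hint replay throttle in KB per second,
# per delivery thread (illustrative value; lower = gentler on the write path)
hinted_handoff_throttle_in_kb: 512
```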
> On Jan 27, 2020, at 6:05 PM, Erick Ramirez wrote:
>
>
> Increase the max_hint_window_in_ms setting in cassandra.yaml to more than
> 3 hours, perhaps 6 hours. If the issue still persists networking may need
> to be tested for bandwidth issues.
>
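For reference, the change suggested above is a one-line edit in cassandra.yaml; max_hint_window_in_ms is in milliseconds, so 6 hours looks like this:

```
# cassandra.yaml -- keep generating hints for a down node for up to 6 hours
max_hint_window_in_ms: 21600000    # 6 * 60 * 60 * 1000
```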
Just a note of warning about bumping up the hint window without
understanding the pros and cons. Be aware
You can increase the max number of open files on the new node. We find that
65K is too low for most production clusters, and you can bump it up to 100K
or 200K. We generally recommend 1 million but YMMV:
- nofile 1048576
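A sketch of what that looks like in /etc/security/limits.conf (the "cassandra" user name is an assumption; adjust it and the value for your install):

```
# /etc/security/limits.conf -- raise the open-file limit for the Cassandra user
cassandra  soft  nofile  1048576
cassandra  hard  nofile  1048576
```

Verify after re-login with `ulimit -n` as that user; systemd-managed installs may instead need LimitNOFILE in the service unit.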
On Tue, Jan 28, 2020 at 11:55 AM Eunsu Kim wrote:
> Hi experts
>
> I had a
Surbhi,
The hints could be getting accumulated for one or both of the following reasons:
- Some node is becoming unavailable very routinely, which is unlikely
- The hints are getting replayed very slowly due to network bandwidth issues,
  which is more likely
Increase the max_hint_window_in_ms set
Hi Leo,
The token assignment for each node in the cluster must be unique regardless
of the datacenter they are in. This is because the range of tokens
available to assign to nodes is per cluster. Token allocation is performed
per node at a global level. A datacenter helps define the way data is
re
Why we think it might be related to hints is because, if we truncate the
hints, the load on the nodes goes back to normal.
FYI, we had to run repair after truncating the hints.
Any thoughts?
On Mon, 27 Jan 2020 at 15:27, Deepak Vohra
wrote:
>
> Hints are a stopgap measure and not a fix to the underlying
Hi experts
I had a problem adding a new node.
The joining node in datacenterA stops streaming while joining, so it stays
in the UJ (Up/Joining) state. (datacenterB is fine.)
I try 'nodetool netstats' on a stopped node and it looks like this:
Mode: JOINING
Not sending any streams.
Read Repair Statistics:
Attempted: 0
Mis
Hints are a stopgap measure and not a fix to the underlying issue. Run a full
repair.
On Monday, January 27, 2020, 10:17:01 p.m. UTC, Surbhi Gupta
wrote:
Hi,
We are on open-source 3.11. We have an issue in one of the clusters where
lots of hints get piled up and they don't get applied
There isn't a tool that I'm aware of that's readily available to do that.
Your best bet is to run a regular repair.
But really, hints are just a side-issue of a much wider problem, which is
that the nodes are overloaded. Is your application getting hit with much
higher than expected traffic? The sc
Hi,
We are on open-source 3.11.
We have an issue in one of the clusters where lots of hints get piled up and
they don't get applied within the hinted handoff period (3 hours in our
case), and the load and CPU of the server go very high.
We see lots of messages in system.log and debug.log. Our read re
Odd. Have you seen this behavior? I ran a test last week, loaded snapshots
from 4 nodes to 4 nodes (RF 3 on both ends) and did not notice a spike.
That's not to say that it didn't happen, but I think I'd have noticed as I
was loading approx 250GB x 4 (although sequentially rather than 4x
sstableloa
I would suggest being aware of potential data size expansion. If you load (for
example) three copies of the data into a new cluster (because the RF of the
origin cluster is 3), each copy will also get written at the RF of the new
cluster (3 more times). So you could see data expansion of 9x the origin
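The arithmetic above can be sketched as follows (illustrative only: it assumes you stream one snapshot per origin replica, and that sstableloader writes each loaded copy to every destination replica):

```python
# Illustrative: loading every replica's snapshot from the origin cluster
# means each logical row is loaded rf_origin times, and the destination
# cluster then stores each loaded copy on rf_destination replicas.
def expansion_factor(rf_origin, rf_destination):
    return rf_origin * rf_destination

print(expansion_factor(3, 3))  # 3 copies loaded x RF 3 on the new cluster = 9
```

Loading a snapshot from only one node per token range (or running a cleanup/compaction-aware load) avoids the rf_origin multiplier.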
Hello
Concerning the original question, I agree with @eric_ramirez:
sstableloader is transparent with respect to token allocation.
Just for info @voytek, check this post out:
https://thelastpickle.com/blog/2019/02/21/set-up-a-cluster-with-even-token-distribution.html
You may be interested to know if yo