Yes, sstables were upgraded on each node.
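
For reference, a minimal sketch of that per-node step, assuming the default
data path (/var/lib/cassandra/data; adjust for your install):

    # rewrite sstables to the current on-disk format; -a also rewrites
    # sstables that are already on the newest version
    nodetool upgradesstables -a

    # confirm nothing pre-3.x remains: 2.0/2.1 files carry "jb"/"ka" format
    # markers in their names, while 3.11.4 writes "md" files
    find /var/lib/cassandra/data -type f \( -name '*-jb-*' -o -name '*-ka-*' \)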

On Thu, 2 May 2019, 13:39 Nick Hatfield <nick.hatfi...@metricly.com> wrote:

> Just curious, but did you make sure to run the sstable upgrade after you
> completed the move from 2.x to 3.x?
>
>
>
> *From:* Evgeny Inberg [mailto:evg...@gmail.com]
> *Sent:* Thursday, May 02, 2019 1:31 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: Cassandra taking very long to start and server under heavy
> load
>
>
>
> Using a single data disk.
>
> Also, it is performing mostly heavy read operations according to the
> metrics collected.
>
> On Wed, 1 May 2019, 20:14 Jeff Jirsa <jji...@gmail.com> wrote:
>
> Do you have multiple data disks?
>
> CASSANDRA-6696 changed behavior with multiple data disks to make it safer
> when one disk fails. It may be copying data to the right places on
> startup; can you see if sstables are being moved on disk?
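>
> A quick way to look, assuming the default data and log paths (adjust for
> your install): re-check the data directories while a node starts, and
> grep the debug log for disk-boundary activity.
>
>     # re-run this while a node starts; new or freshly modified Data
>     # files indicate sstables being rewritten or moved
>     find /var/lib/cassandra/data -name '*-Data.db' -mmin -5
>
>     # disk-boundary messages are logged at DEBUG
>     grep -i 'boundar' /var/log/cassandra/debug.log | tail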
>
> --
>
> Jeff Jirsa
>
>
>
>
> On May 1, 2019, at 6:04 AM, Evgeny Inberg <evg...@gmail.com> wrote:
>
> I have upgraded a Cassandra cluster from version 2.0.x to 3.11.4, going
> through 2.1.14.
>
> After the upgrade, I noticed that each node takes about 10-15 minutes to
> start, and the server is under very heavy load.
>
> I did some digging around and got a few leads from the debug log.
>
> Messages like:
>
> *Keyspace.java:351 - New replication settings for keyspace system_auth -
> invalidating disk boundary caches*
>
> *CompactionStrategyManager.java:380 - Recreating compaction strategy -
> disk boundaries are out of date for system_auth.roles.*
>
>
>
> This is repeating for all keyspaces.
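>
> A quick way to confirm that, assuming the default debug log location
> (adjust for your install), is counting the messages per table:
>
>     # the last field of each message names the keyspace.table
>     grep 'Recreating compaction strategy' /var/log/cassandra/debug.log |
>         awk '{print $NF}' | sort | uniq -c | sort -rn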
>
>
>
> Any suggestions on what to check, or what might cause this to happen on
> every start?
>
>
>
> Thanks!
>
>
