[jira] [Updated] (CASSANDRA-15280) Update help for nodetool tablehistograms

2019-08-14 Thread Chris Lohfink (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-15280:
--
 Severity: Low
   Complexity: Low Hanging Fruit
Discovered By: User Report
 Bug Category: Parent values: Code(13163), Level 1 values: Bug - Unclear Impact(13164)
  Component/s: Tool/nodetool
   Status: Open  (was: Triage Needed)

> Update help for nodetool tablehistograms
> -
>
> Key: CASSANDRA-15280
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15280
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/nodetool
>Reporter: DeepakVohra
>Priority: Normal
>
> CASSANDRA-14185 provisions _nodetool tablehistograms_ to print statistics for all 
> tables.
> Please also update the _help_ text to mention that histograms can be listed for all 
> tables. The latest 4.0 build still indicates that a specific table is required.
> {code:java}
> nodetool help tablehistograms
> NAME
>  nodetool tablehistograms - Print statistic histograms for a given table{code}
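
For orientation, here is a minimal sketch of where such a help-text change would land, assuming the description still comes from the airline {{@Command}} annotation on the nodetool command class; the package, class layout and wording below are assumptions, not the committed patch.

{code:java}
// Sketch only: a fragment of a nodetool command class inside the Cassandra source tree.
// It will not compile standalone because it extends NodeTool.NodeToolCmd.
package org.apache.cassandra.tools.nodetool;

import java.util.ArrayList;
import java.util.List;

import io.airlift.airline.Arguments;
import io.airlift.airline.Command;

// Updated description so `nodetool help tablehistograms` mentions the
// all-tables behaviour added by CASSANDRA-14185.
@Command(name = "tablehistograms",
         description = "Print statistic histograms for a given table, or for all tables if none is specified")
public class TableHistograms extends NodeTool.NodeToolCmd
{
    // With CASSANDRA-14185 the argument is optional: no argument means "all tables".
    @Arguments(usage = "[<keyspace> <table> | <keyspace.table>]",
               description = "The keyspace and table name (omit to print histograms for all tables)")
    private List<String> args = new ArrayList<>();
}
{code}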



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14185) Nodetool tablehistograms to print statistics for all the tables

2019-08-14 Thread DeepakVohra (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907693#comment-16907693
 ] 

DeepakVohra commented on CASSANDRA-14185:
-

Added CASSANDRA-15280, _Update help for nodetool tablehistograms_, to track updating the help text.

> Nodetool tablehistograms to print statistics for all the tables
> 
>
> Key: CASSANDRA-14185
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14185
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tool/nodetool
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Low
> Fix For: 4.0
>
>
> Currently {{nodetool tablehistograms}} always requires {{keyspacename.tablename}} as 
> an argument. If we want to print histograms for all tables, we have to run this 
> command for every table, one after another. 
> It would be nice to have this command dump histograms for all tables at once if no 
> argument is provided, similar to {{nodetool tablestats}} (roughly as sketched below).
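
The helper names in the sketch below ({{allTableHistogramTargets}}, {{printHistograms}}) are hypothetical stand-ins and not the API of the committed patch; the code only illustrates the no-argument branching the ticket asks for.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative only: when no argument is given, fall back to every keyspace/table,
// mirroring how `nodetool tablestats` behaves with no argument. Input validation omitted.
public class TableHistogramsSketch
{
    public void execute(List<String> args)
    {
        List<String[]> targets = new ArrayList<>();
        if (args.isEmpty())
        {
            // hypothetical helper: enumerate all (keyspace, table) pairs known to the node
            targets.addAll(allTableHistogramTargets());
        }
        else
        {
            // existing behaviour: a single "keyspace table" or "keyspace.table" argument
            String[] parts = args.size() == 2
                           ? new String[]{ args.get(0), args.get(1) }
                           : args.get(0).split("\\.", 2);
            targets.add(parts);
        }
        for (String[] target : targets)
            printHistograms(target[0], target[1]); // hypothetical per-table printer
    }

    private List<String[]> allTableHistogramTargets() { return new ArrayList<>(); } // stub
    private void printHistograms(String keyspace, String table) { }                 // stub
}
{code}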






[jira] [Created] (CASSANDRA-15280) Update help for nodetool tablehistograms

2019-08-14 Thread DeepakVohra (JIRA)
DeepakVohra created CASSANDRA-15280:
---

 Summary: Update help for nodetool tablehistograms
 Key: CASSANDRA-15280
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15280
 Project: Cassandra
  Issue Type: Bug
Reporter: DeepakVohra


CASSANDRA-14185 provisions _nodetool tablehistograms_ to print statistics for all tables.

Please also update the _help_ text to mention that histograms can be listed for all 
tables. The latest 4.0 build still indicates that a specific table is required.
{code:java}
nodetool help tablehistograms
NAME
 nodetool tablehistograms - Print statistic histograms for a given table{code}






[jira] [Updated] (CASSANDRA-15279) Remove overly conservative check breaking VirtualTable unit test

2019-08-14 Thread Chris Lohfink (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-15279:
--
  Fix Version/s: 4.0
Source Control Link: 
https://github.com/apache/cassandra/commit/fcb4d52403a3de893eb2813468a788b0c8fa6fc7
  Since Version: 4.0
 Status: Resolved  (was: Ready to Commit)
 Resolution: Fixed

> Remove overly conservative check breaking VirtualTable unit test
> 
>
> Key: CASSANDRA-15279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15279
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Normal
> Fix For: 4.0
>
>
> CASSANDRA-15194 introduced a check to make it easier to debug bad values 
> being passed to SimpleDataSet but it was too aggressive and actually blocks 
> valid cases which are shown in unit tests.






[cassandra] branch trunk updated: Remove bad virtual table result check

2019-08-14 Thread clohfink
This is an automated email from the ASF dual-hosted git repository.

clohfink pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/trunk by this push:
 new fcb4d52  Remove bad virtual table result check
fcb4d52 is described below

commit fcb4d52403a3de893eb2813468a788b0c8fa6fc7
Author: Chris Lohfink 
AuthorDate: Wed Aug 14 14:07:40 2019 -0500

Remove bad virtual table result check

Patch by Chris Lohfink; Reviewed by Blake Eggleston for CASSANDRA-15279
---
 src/java/org/apache/cassandra/db/virtual/SimpleDataSet.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/java/org/apache/cassandra/db/virtual/SimpleDataSet.java b/src/java/org/apache/cassandra/db/virtual/SimpleDataSet.java
index 6cead97..00acaed 100644
--- a/src/java/org/apache/cassandra/db/virtual/SimpleDataSet.java
+++ b/src/java/org/apache/cassandra/db/virtual/SimpleDataSet.java
@@ -73,7 +73,7 @@ public class SimpleDataSet extends AbstractVirtualTable.AbstractDataSet
     {
         if (null == currentRow)
             throw new IllegalStateException();
-        if (null == value || columnName == null)
+        if (null == columnName)
             throw new IllegalStateException(String.format("Invalid column: %s=%s for %s", columnName, value, currentRow));
         currentRow.add(columnName, value);
         return this;
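
For readers following the thread, a hedged illustration of the kind of call the stricter check rejected; the fluent {{row(...)}}/{{column(...)}} style matches how virtual tables populate a {{SimpleDataSet}}, but treat this as a sketch rather than a quote from the failing unit test.

{code:java}
// Fragment from inside a hypothetical VirtualTable implementation; `metadata` is assumed
// to be the table's TableMetadata. A null value with a non-null column name is legitimate
// (it should render as an empty/null cell) and is exactly what the old `null == value`
// check rejected.
SimpleDataSet result = new SimpleDataSet(metadata);
result.row("some-partition-key")
      .column("optional_column", null)   // valid: null value, non-null column name
      .column("other_column", 42L);
{code}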





[jira] [Updated] (CASSANDRA-15279) Remove overly conservative check breaking VirtualTable unit test

2019-08-14 Thread Blake Eggleston (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-15279:

Status: Ready to Commit  (was: Review In Progress)

+1

> Remove overly conservative check breaking VirtualTable unit test
> 
>
> Key: CASSANDRA-15279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15279
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Normal
>
> CASSANDRA-15194 introduced a check to make it easier to debug bad values 
> being passed to SimpleDataSet but it was too aggressive and actually blocks 
> valid cases which are shown in unit tests.






[jira] [Updated] (CASSANDRA-15279) Remove overly conservative check breaking VirtualTable unit test

2019-08-14 Thread Blake Eggleston (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-15279:

Reviewers: Blake Eggleston, Blake Eggleston  (was: Blake Eggleston)
   Status: Review In Progress  (was: Patch Available)
Reviewers: Blake Eggleston, Blake Eggleston

> Remove overly conservative check breaking VirtualTable unit test
> 
>
> Key: CASSANDRA-15279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15279
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Normal
>
> CASSANDRA-15194 introduced a check to make it easier to debug bad values 
> being passed to SimpleDataSet but it was too aggressive and actually blocks 
> valid cases which are shown in unit tests.






[jira] [Comment Edited] (CASSANDRA-15279) Remove overly conservative check breaking VirtualTable unit test

2019-08-14 Thread Chris Lohfink (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907507#comment-16907507
 ] 

Chris Lohfink edited comment on CASSANDRA-15279 at 8/14/19 6:06 PM:


[fixed unit tests|https://circleci.com/gh/clohfink/cassandra/592]


was (Author: cnlwsu):
[fixed unit tests|[https://circleci.com/gh/clohfink/cassandra/592]

> Remove overly conservative check breaking VirtualTable unit test
> 
>
> Key: CASSANDRA-15279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15279
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Normal
>
> CASSANDRA-15194 introduced a check to make it easier to debug bad values 
> being passed to SimpleDataSet but it was too aggressive and actually blocks 
> valid cases which are shown in unit tests.






[jira] [Commented] (CASSANDRA-15279) Remove overly conservative check breaking VirtualTable unit test

2019-08-14 Thread Chris Lohfink (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907507#comment-16907507
 ] 

Chris Lohfink commented on CASSANDRA-15279:
---

[fixed unit tests|[https://circleci.com/gh/clohfink/cassandra/592]

> Remove overly conservative check breaking VirtualTable unit test
> 
>
> Key: CASSANDRA-15279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15279
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Normal
>
> CASSANDRA-15194 introduced a check to make it easier to debug bad values 
> being passed to SimpleDataSet but it was too aggressive and actually blocks 
> valid cases which are shown in unit tests.






[jira] [Updated] (CASSANDRA-15279) Remove overly conservative check breaking VirtualTable unit test

2019-08-14 Thread Chris Lohfink (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-15279:
--
 Severity: Normal
   Complexity: Low Hanging Fruit
Discovered By: Unit Test
 Bug Category: Parent values: Correctness(12982)
  Component/s: Test/unit
   Status: Open  (was: Triage Needed)

> Remove overly conservative check breaking VirtualTable unit test
> 
>
> Key: CASSANDRA-15279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15279
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Normal
>
> CASSANDRA-15194 introduced a check to make it easier to debug bad values 
> being passed to SimpleDataSet but it was too aggressive and actually blocks 
> valid cases which are shown in unit tests.






[jira] [Updated] (CASSANDRA-15279) Remove overly conservative check breaking VirtualTable unit test

2019-08-14 Thread Chris Lohfink (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-15279:
--
Test and Documentation Plan: na
 Status: Patch Available  (was: In Progress)

> Remove overly conservative check breaking VirtualTable unit test
> 
>
> Key: CASSANDRA-15279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15279
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Normal
>
> CASSANDRA-15194 introduced a check to make it easier to debug bad values 
> being passed to SimpleDataSet but it was too aggressive and actually blocks 
> valid cases which are shown in unit tests.






[jira] [Created] (CASSANDRA-15279) Remove overly conservative check breaking VirtualTable unit test

2019-08-14 Thread Chris Lohfink (JIRA)
Chris Lohfink created CASSANDRA-15279:
-

 Summary: Remove overly conservative check breaking VirtualTable 
unit test
 Key: CASSANDRA-15279
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15279
 Project: Cassandra
  Issue Type: Bug
Reporter: Chris Lohfink
Assignee: Chris Lohfink


CASSANDRA-15194 introduced a check to make it easier to debug bad values being 
passed to SimpleDataSet but it was too aggressive and actually blocks valid 
cases which are shown in unit tests.






[jira] [Commented] (CASSANDRA-15170) Reduce the time needed to release in-JVM dtest cluster resources after close

2019-08-14 Thread Jon Meredith (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907479#comment-16907479
 ] 

Jon Meredith commented on CASSANDRA-15170:
--

I've updated each of the branches for the inconsistent use of 
timeouts/ExecutorUtils you pointed out, and also fixed my own nit of units -> 
unit.

The only thing I noticed while doing that was that trunk was missing a couple 
of synchronized modifiers in {{SharedExecutorPool}} on {{newExecutor}} and 
{{shutdownAndWait}} that had been added in the 3.x and earlier series. Paranoia 
made me include them, but I wanted to check whether you remember why they were 
added and whether there was a reason they were not merged up to trunk. A rough, 
purely hypothetical sketch of the kind of race such modifiers guard against is 
below.
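
The following is not the real {{SharedExecutorPool}} code, only a generic sketch of why lazily registering executors and shutting them down tends to need mutual exclusion; class and field names are invented for illustration.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Without `synchronized`, a thread in newExecutor() could pass the shutdown check and
// register a new executor after shutdownAndWait() has already drained the list,
// leaving a live thread behind (the kind of leak that delays class-loader GC).
public class SharedPoolSketch
{
    private final List<ExecutorService> executors = new ArrayList<>();
    private boolean isShutdown;

    public synchronized ExecutorService newExecutor()
    {
        if (isShutdown)
            throw new IllegalStateException("pool has been shut down");
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executors.add(executor);
        return executor;
    }

    public synchronized void shutdownAndWait(long timeout, TimeUnit unit) throws InterruptedException
    {
        isShutdown = true;
        for (ExecutorService executor : executors)
            executor.shutdown();
        for (ExecutorService executor : executors)
            executor.awaitTermination(timeout, unit);
    }
}
{code}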

> Reduce the time needed to release in-JVM dtest cluster resources after close
> 
>
> Key: CASSANDRA-15170
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15170
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/dtest
>Reporter: Jon Meredith
>Assignee: Jon Meredith
>Priority: Normal
>
> There are a few issues that slow the in-JVM dtests from reclaiming metaspace 
> once the cluster is closed.
> IsolatedExecutor issues the shutdown on a SingleExecutorThreadPool; sometimes 
> this thread was still running 10s after the dtest cluster was closed.  
> Instead, switch to a ThreadPoolExecutor with a core pool size of 0 so that 
> the thread executing the class loader close executes sooner.
> If an OutboundTcpConnection is waiting to connect() and the endpoint is not 
> answering, it has to wait for a timeout before it exits. Instead it should 
> check the isShutdown flag and terminate early if shutdown has been requested.
> In 3.0 and above, HintsCatalog.load uses java.nio.Files.list outside of a 
> try-with-resources construct and leaks a file handle for the directory.  This 
> doesn't matter for normal usage, but it leaks a file handle for each dtest 
> Instance created.
> On trunk, Netty global event executor threads are still running and delay GC 
> for the instance class loader. Sketches of the executor and Files.list fixes 
> appear below.
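
Hedged sketches of the executor and Files.list fixes mentioned above; these illustrate the general technique with plain JDK APIs, not the actual patch on the branches.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.stream.Stream;

public class DtestCleanupSketch
{
    // A core pool size of 0 lets the worker thread exit once it goes idle, so the task
    // that closes the instance class loader does not keep a thread (and its references)
    // alive long after the dtest cluster has been closed.
    static ThreadPoolExecutor shutdownExecutor()
    {
        return new ThreadPoolExecutor(0, 1, 10, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    }

    // Files.list returns a Stream that holds an open directory handle; try-with-resources
    // closes it, avoiding the per-Instance file-handle leak described for HintsCatalog.load.
    static long countFiles(Path dir) throws IOException
    {
        try (Stream<Path> files = Files.list(dir))
        {
            return files.count();
        }
    }

    public static void main(String[] args) throws IOException
    {
        shutdownExecutor().shutdown();
        System.out.println(countFiles(Paths.get(".")));
    }
}
{code}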






[jira] [Commented] (CASSANDRA-15251) " cassandra.service failed " while running command "service cassandra start"

2019-08-14 Thread ASHISH (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907407#comment-16907407
 ] 

ASHISH commented on CASSANDRA-15251:


$ sudo service cassandra start

> " cassandra.service failed " while running command "service cassandra start"
> 
>
> Key: CASSANDRA-15251
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15251
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build
>Reporter: Rajesh Vakcharla
>Assignee: ASHISH
>Priority: Normal
>
> Hi Team,
> I have installed the Cassandra RPM package on Red Hat Linux from the AWS console. 
> While trying to start Cassandra, I am getting the error shown below.
> [root@ip-172-31-41-5 ec2-user]# service cassandra start
> Starting cassandra (via systemctl): Job for cassandra.service 
> failed because the service did not take the steps required by its unit 
> configuration.
> See "systemctl status cassandra.service" and "journalctl -xe" 
> for details.
> [FAILED]
> We have been unable to find out the cause of the issue.






[jira] [Assigned] (CASSANDRA-15157) Missing commands in nodetool help

2019-08-14 Thread ASHISH (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASHISH reassigned CASSANDRA-15157:
--

Assignee: ASHISH

> Missing commands in nodetool help
> -
>
> Key: CASSANDRA-15157
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15157
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/nodetool
>Reporter: Yakir Gibraltar
>Assignee: ASHISH
>Priority: Normal
>
> Hi, how do I get *all* available commands of nodetool, like cfhistograms, 
> cfstats, etc.?
> "nodetool help" does not return all commands.
> for example:
> {code}
> [root@ctaz001 ~]# nodetool version
> ReleaseVersion: 3.11.4
> [root@ctaz001 ~]# nodetool help | grep cfh | wc -l
> 0
> [root@ctaz001 ~]# nodetool help
> usage: nodetool [(-p <port> | --port <port>)]
> [(-u <username> | --username <username>)]
> [(-pw <password> | --password <password>)]
> [(-pwf <passwordFilePath> | --password-file <passwordFilePath>)]
> [(-h <host> | --host <host>)] <command> [<args>]
> The most commonly used nodetool commands are:
> assassinate  Forcefully remove a dead node without 
> re-replicating any data.  Use as a last resort if you cannot removenode
> bootstrapMonitor/manage node's bootstrap process
> cleanup  Triggers the immediate cleanup of keys no 
> longer belonging to a node. By default, clean all keyspaces
> clearsnapshotRemove the snapshot with the given name from 
> the given keyspaces. If no snapshotName is specified we will remove all 
> snapshots
> compact  Force a (major) compaction on one or more 
> tables or user-defined compaction on given SSTables
> compactionhistoryPrint history of compaction
> compactionstats  Print statistics on compactions
> decommission Decommission the *node I am connecting to*
> describecluster  Print the name, snitch, partitioner and 
> schema version of a cluster
> describering Shows the token ranges info of a given 
> keyspace
> disableautocompactionDisable autocompaction for the given 
> keyspace and table
> disablebackupDisable incremental backup
> disablebinaryDisable native transport (binary protocol)
> disablegossipDisable gossip (effectively marking the node 
> down)
> disablehandoff   Disable storing hinted handoffs
> disablehintsfordcDisable hints for a data center
> disablethriftDisable thrift server
> drainDrain the node (stop accepting writes and 
> flush all tables)
> enableautocompaction Enable autocompaction for the given keyspace 
> and table
> enablebackup Enable incremental backup
> enablebinary Reenable native transport (binary protocol)
> enablegossip Reenable gossip
> enablehandoffReenable future hints storing on the current 
> node
> enablehintsfordc Enable hints for a data center that was 
> previously disabled
> enablethrift Reenable thrift server
> failuredetector  Shows the failure detector information for 
> the cluster
> flushFlush one or more tables
> garbagecollect   Remove deleted data from one or more tables
> gcstats  Print GC Statistics
> getcompactionthreshold   Print min and max compaction thresholds for 
> a given table
> getcompactionthroughput  Print the MB/s throughput cap for compaction 
> in the system
> getconcurrentcompactors  Get the number of concurrent compactors in 
> the system.
> getendpoints Print the end points that owns the key
> getinterdcstreamthroughput   Print the Mb/s throughput cap for 
> inter-datacenter streaming in the system
> getlogginglevels Get the runtime logging levels
> getsstables  Print the sstable filenames that own the key
> getstreamthroughput  Print the Mb/s throughput cap for streaming 
> in the system
> gettimeout   Print the timeout of the given type in ms
> gettraceprobability  Print the current trace probability value
> gossipinfo   Shows the gossip information for the cluster
> help Display help information
> info Print node information (uptime, load, ...)
> invalidatecountercache   Invalidate the counter cache
> invalidatekeycache   Invalidate the key cache
> invalidaterowcache   Invalidate the row cache
> join Join the ring
> listsnapshotsLis

[jira] [Commented] (CASSANDRA-15251) " cassandra.service failed " while running command "service cassandra start"

2019-08-14 Thread ASHISH (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907325#comment-16907325
 ] 

ASHISH commented on CASSANDRA-15251:


cd CASSANDRA_HOME/bin

service cassandra start

> " cassandra.service failed " while running command "service cassandra start"
> 
>
> Key: CASSANDRA-15251
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15251
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build
>Reporter: Rajesh Vakcharla
>Assignee: ASHISH
>Priority: Normal
>
> Hi Team,
> I have installed the Cassandra RPM package on Red Hat Linux from the AWS console. 
> While trying to start Cassandra, I am getting the error shown below.
> [root@ip-172-31-41-5 ec2-user]# service cassandra start
> Starting cassandra (via systemctl): Job for cassandra.service 
> failed because the service did not take the steps required by its unit 
> configuration.
> See "systemctl status cassandra.service" and "journalctl -xe" 
> for details.
> [FAILED]
> We have been unable to find out the cause of the issue.






[jira] [Commented] (CASSANDRA-15251) " cassandra.service failed " while running command "service cassandra start"

2019-08-14 Thread ASHISH (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907323#comment-16907323
 ] 

ASHISH commented on CASSANDRA-15251:


h3. Start Apache Cassandra on UNIX/Linux

To start Cassandra in the background:
 # Open a command prompt, and change to the following directory:
{{$ cd CASSANDRA_HOME/bin}}
 # Run the following command:
{{$ ./cassandra}}

To start Cassandra in the foreground, run the following command:
{{$ ./cassandra -f}}

> " cassandra.service failed " while running command "service cassandra start"
> 
>
> Key: CASSANDRA-15251
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15251
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build
>Reporter: Rajesh Vakcharla
>Assignee: ASHISH
>Priority: Normal
>
> Hi Team,
> I have installed the Cassandra RPM package on Red Hat Linux from the AWS console. 
> While trying to start Cassandra, I am getting the error shown below.
> [root@ip-172-31-41-5 ec2-user]# service cassandra start
> Starting cassandra (via systemctl): Job for cassandra.service 
> failed because the service did not take the steps required by its unit 
> configuration.
> See "systemctl status cassandra.service" and "journalctl -xe" 
> for details.
> [FAILED]
> We have been unable to find out the cause of the issue.






[jira] [Assigned] (CASSANDRA-15251) " cassandra.service failed " while running command "service cassandra start"

2019-08-14 Thread ASHISH (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASHISH reassigned CASSANDRA-15251:
--

Assignee: ASHISH

> " cassandra.service failed " while running command "service cassandra start"
> 
>
> Key: CASSANDRA-15251
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15251
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build
>Reporter: Rajesh Vakcharla
>Assignee: ASHISH
>Priority: Normal
>
> Hi Team,
> I have installed the Cassandra RPM package on Red Hat Linux from the AWS console. 
> While trying to start Cassandra, I am getting the error shown below.
> [root@ip-172-31-41-5 ec2-user]# service cassandra start
> Starting cassandra (via systemctl): Job for cassandra.service 
> failed because the service did not take the steps required by its unit 
> configuration.
> See "systemctl status cassandra.service" and "journalctl -xe" 
> for details.
> [FAILED]
> We have been unable to find out the cause of the issue.






[jira] [Assigned] (CASSANDRA-15128) Cassandra does not support openjdk version "1.8.0_202"

2019-08-14 Thread ASHISH (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASHISH reassigned CASSANDRA-15128:
--

Assignee: ASHISH

> Cassandra does not support openjdk version "1.8.0_202"
> --
>
> Key: CASSANDRA-15128
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15128
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build
>Reporter: Panneerselvam
>Assignee: ASHISH
>Priority: Normal
>
> I am trying to set up Apache Cassandra 3.11.4 on my Windows 8 
> system and am getting the error below while starting the Cassandra.bat file.
>  Software installed:
>  * Cassandra 3.11.4 
>  * Java 1.8 
>  * Python 2.7
> It started working after installing the HotSpot JDK 1.8.
> Is OpenJDK 1.8 not supported, or is the issue only with this particular 
> version (1.8.0_202)?
>  
>  
> {code:java}
> Exception (java.lang.ExceptionInInitializerError) encountered during startup: 
> null
> java.lang.ExceptionInInitializerError
>     at java.lang.J9VMInternals.ensureError(J9VMInternals.java:146)
>     at 
> java.lang.J9VMInternals.recordInitializationFailure(J9VMInternals.java:135)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfReferenceArray(ObjectSizes.java:79)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfArray(ObjectSizes.java:89)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOnHeapExcludingData(ObjectSizes.java:112)
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.unsharedHeapSizeExcludingData(AbstractBufferClusteringPrefix.java:70)
>     at 
> org.apache.cassandra.db.rows.BTreeRow.unsharedHeapSizeExcludingData(BTreeRow.java:450)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:336)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:295)
>     at 
> org.apache.cassandra.utils.btree.BTree.buildInternal(BTree.java:139)
>     at org.apache.cassandra.utils.btree.BTree.build(BTree.java:121)
>     at org.apache.cassandra.utils.btree.BTree.update(BTree.java:178)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:156)
>     at org.apache.cassandra.db.Memtable.put(Memtable.java:282)
>     at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1352)
>     at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:626)
>     at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:470)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:232)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:587)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:581)
>     at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:365)
>     at 
> org.apache.cassandra.db.SystemKeyspace.persistLocalMetadata(SystemKeyspace.java:520)
>     at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:221)
>     at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:620)
>     at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:732)
> Caused by: java.lang.NumberFormatException: For input string: "openj9-0"
>     at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>     at java.lang.Integer.parseInt(Integer.java:580)
>     at java.lang.Integer.parseInt(Integer.java:615)
>     at 
> org.github.jamm.MemoryLayoutSpecification.getEffectiveMemoryLayoutSpecification(MemoryLayoutSpecification.java:190)
>     at 
> org.github.jamm.MemoryLayoutSpecification.(MemoryLayoutSpecification.java:31)
> {code}
>  
>  
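
For what it's worth, the stack trace points at JVM version parsing rather than Cassandra itself; below is a minimal reproduction of that specific failure mode, assuming, as the trace suggests, that jamm feeds a JVM version component to {{Integer.parseInt}}.

{code:java}
// Standalone reproduction of the NumberFormatException in the trace above.
// "openj9-0" is taken verbatim from the exception message; on OpenJ9 builds the
// version string contains this non-numeric component.
public class VersionParseSketch
{
    public static void main(String[] args)
    {
        String component = "openj9-0";
        try
        {
            int parsed = Integer.parseInt(component);
            System.out.println("parsed: " + parsed);
        }
        catch (NumberFormatException e)
        {
            System.out.println(e); // java.lang.NumberFormatException: For input string: "openj9-0"
        }
    }
}
{code}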






[jira] [Commented] (CASSANDRA-15274) Multiple Corrupt datafiles across entire environment

2019-08-14 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907229#comment-16907229
 ] 

Benedict commented on CASSANDRA-15274:
--

bq. if they print their entire contents successfully there's already a 
reasonable chance that the data is not corrupted

This comment was alluding to that likelihood - but that we would instead fail 
to parse the data because of corruption of the stream, long before we printed 
any garbage out.  If we manage to print out, and we do this for every 
"corrupted" block (and there are many of them), it becomes very likely the 
files aren't truly corrupted.

> Multiple Corrupt datafiles across entire environment 
> -
>
> Key: CASSANDRA-15274
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15274
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Compaction
>Reporter: Phil O Conduin
>Priority: Normal
>
> Cassandra Version: 2.2.13
> PRE-PROD environment.
>  * 2 datacenters.
>  * 9 physical servers in each datacenter - (_Cisco UCS C220 M4 SFF_)
>  * 4 Cassandra instances on each server (cass_a, cass_b, cass_c, cass_d)
>  * 72 Cassandra instances across the 2 data centres, 36 in site A, 36 in site 
> B.
> We also have 2 Reaper Nodes we use for repair.  One reaper node in each 
> datacenter each running with its own Cassandra back end in a cluster together.
> OS Details [Red Hat Linux]
> cass_a@x 0 10:53:01 ~ $ uname -a
> Linux x 3.10.0-957.5.1.el7.x86_64 #1 SMP Wed Dec 19 10:46:58 EST 2018 x86_64 
> x86_64 x86_64 GNU/Linux
> cass_a@x 0 10:57:31 ~ $ cat /etc/*release
> NAME="Red Hat Enterprise Linux Server"
> VERSION="7.6 (Maipo)"
> ID="rhel"
> Storage Layout 
> cass_a@xx 0 10:46:28 ~ $ df -h
> Filesystem                         Size  Used Avail Use% Mounted on
> /dev/mapper/vg01-lv_root            20G  2.2G   18G  11% /
> devtmpfs                            63G     0   63G   0% /dev
> tmpfs                               63G     0   63G   0% /dev/shm
> tmpfs                               63G  4.1G   59G   7% /run
> tmpfs                               63G     0   63G   0% /sys/fs/cgroup
> >> 4 cassandra instances
> /dev/sdd                           1.5T  802G  688G  54% /data/ssd4
> /dev/sda                           1.5T  798G  692G  54% /data/ssd1
> /dev/sdb                           1.5T  681G  810G  46% /data/ssd2
> /dev/sdc                           1.5T  558G  932G  38% /data/ssd3
> Cassandra load is about 200GB and the rest of the space is snapshots
> CPU
> cass_a@x 127 10:58:47 ~ $ lscpu | grep -E '^Thread|^Core|^Socket|^CPU\('
> CPU(s):                64
> Thread(s) per core:    2
> Core(s) per socket:    16
> Socket(s):             2
> *Description of problem:*
> During repair of the cluster, we are seeing multiple corruptions in the log 
> files on a lot of instances.  There seems to be no pattern to the corruption. 
>  It seems that the repair job is finding all the corrupted files for us.  The 
> repair will hang on the node where the corrupted file is found.  To fix this 
> we remove/rename the datafile and bounce the Cassandra instance.  Our 
> hardware/OS team have stated there is no problem on their side.  I do not 
> believe it is the repair that is causing the corruption. 
>  
> So let me give you an example of a corrupted file and maybe someone might be 
> able to work through it with me?
> When this corrupted file was reported in the log it looks like it was the 
> repair that found it.
> $ journalctl -u cassmeta-cass_b.service --since "2019-08-07 22:25:00" --until 
> "2019-08-07 22:45:00"
> Aug 07 22:30:33 cassandra[34611]: INFO  21:30:33 Writing 
> Memtable-compactions_in_progress@830377457(0.008KiB serialized bytes, 1 ops, 
> 0%/0% of on/off-heap limit)
> Aug 07 22:30:33 cassandra[34611]: ERROR 21:30:33 Failed creating a merkle 
> tree for [repair #9587a200-b95a-11e9-8920-9f72868b8375 on KeyspaceMetadata/x, 
> (-1476350953672479093,-1474461
> Aug 07 22:30:33 cassandra[34611]: ERROR 21:30:33 Exception in thread 
> Thread[ValidationExecutor:825,1,main]
> Aug 07 22:30:33 cassandra[34611]: org.apache.cassandra.io.FSReadError: 
> org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: 
> /x/ssd2/data/KeyspaceMetadata/x-1e453cb0
> Aug 07 22:30:33 cassandra[34611]: at 
> org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:365)
>  ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:361) 
> ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:340)
>  ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at 
> org.apache.cassandra.db.composites.Abs

[jira] [Comment Edited] (CASSANDRA-15274) Multiple Corrupt datafiles across entire environment

2019-08-14 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907229#comment-16907229
 ] 

Benedict edited comment on CASSANDRA-15274 at 8/14/19 12:30 PM:


bq. if they print their entire contents successfully there's already a 
reasonable chance that the data is not corrupted

This comment was alluding to that likelihood - but that we would instead fail 
to parse the data because of corruption of the stream, long before we printed 
any garbage out.  If we manage to print out, and we do this for every 
"corrupted" block (and there are many of them), it becomes very likely (but not 
certain) that the files aren't truly corrupted.


was (Author: benedict):
bq. if they print their entire contents successfully there's already a 
reasonable chance that the data is not corrupted

This comment was alluding to that likelihood - but that we would instead fail 
to parse the data because of corruption of the stream, long before we printed 
any garbage out.  If we manage to print out, and we do this for every 
"corrupted" block (and there are many of them), it becomes very likely the 
files aren't truly corrupted.

> Multiple Corrupt datafiles across entire environment 
> -
>
> Key: CASSANDRA-15274
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15274
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Compaction
>Reporter: Phil O Conduin
>Priority: Normal
>
> Cassandra Version: 2.2.13
> PRE-PROD environment.
>  * 2 datacenters.
>  * 9 physical servers in each datacenter - (_Cisco UCS C220 M4 SFF_)
>  * 4 Cassandra instances on each server (cass_a, cass_b, cass_c, cass_d)
>  * 72 Cassandra instances across the 2 data centres, 36 in site A, 36 in site 
> B.
> We also have 2 Reaper Nodes we use for repair.  One reaper node in each 
> datacenter each running with its own Cassandra back end in a cluster together.
> OS Details [Red Hat Linux]
> cass_a@x 0 10:53:01 ~ $ uname -a
> Linux x 3.10.0-957.5.1.el7.x86_64 #1 SMP Wed Dec 19 10:46:58 EST 2018 x86_64 
> x86_64 x86_64 GNU/Linux
> cass_a@x 0 10:57:31 ~ $ cat /etc/*release
> NAME="Red Hat Enterprise Linux Server"
> VERSION="7.6 (Maipo)"
> ID="rhel"
> Storage Layout 
> cass_a@xx 0 10:46:28 ~ $ df -h
> Filesystem                         Size  Used Avail Use% Mounted on
> /dev/mapper/vg01-lv_root            20G  2.2G   18G  11% /
> devtmpfs                            63G     0   63G   0% /dev
> tmpfs                               63G     0   63G   0% /dev/shm
> tmpfs                               63G  4.1G   59G   7% /run
> tmpfs                               63G     0   63G   0% /sys/fs/cgroup
> >> 4 cassandra instances
> /dev/sdd                           1.5T  802G  688G  54% /data/ssd4
> /dev/sda                           1.5T  798G  692G  54% /data/ssd1
> /dev/sdb                           1.5T  681G  810G  46% /data/ssd2
> /dev/sdc                           1.5T  558G  932G  38% /data/ssd3
> Cassandra load is about 200GB and the rest of the space is snapshots
> CPU
> cass_a@x 127 10:58:47 ~ $ lscpu | grep -E '^Thread|^Core|^Socket|^CPU\('
> CPU(s):                64
> Thread(s) per core:    2
> Core(s) per socket:    16
> Socket(s):             2
> *Description of problem:*
> During repair of the cluster, we are seeing multiple corruptions in the log 
> files on a lot of instances.  There seems to be no pattern to the corruption. 
>  It seems that the repair job is finding all the corrupted files for us.  The 
> repair will hang on the node where the corrupted file is found.  To fix this 
> we remove/rename the datafile and bounce the Cassandra instance.  Our 
> hardware/OS team have stated there is no problem on their side.  I do not 
> believe it is the repair that is causing the corruption. 
>  
> So let me give you an example of a corrupted file and maybe someone might be 
> able to work through it with me?
> When this corrupted file was reported in the log it looks like it was the 
> repair that found it.
> $ journalctl -u cassmeta-cass_b.service --since "2019-08-07 22:25:00" --until 
> "2019-08-07 22:45:00"
> Aug 07 22:30:33 cassandra[34611]: INFO  21:30:33 Writing 
> Memtable-compactions_in_progress@830377457(0.008KiB serialized bytes, 1 ops, 
> 0%/0% of on/off-heap limit)
> Aug 07 22:30:33 cassandra[34611]: ERROR 21:30:33 Failed creating a merkle 
> tree for [repair #9587a200-b95a-11e9-8920-9f72868b8375 on KeyspaceMetadata/x, 
> (-1476350953672479093,-1474461
> Aug 07 22:30:33 cassandra[34611]: ERROR 21:30:33 Exception in thread 
> Thread[ValidationExecutor:825,1,main]
> Aug 07 22:30:33 cassandra[34611]: org.apache.cassandra.io.FSReadError: 
> org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: 
> /x/ssd2/data/KeyspaceMetadata/x-1e453cb0
> Aug 07 22:30:33 

[jira] [Commented] (CASSANDRA-15274) Multiple Corrupt datafiles across entire environment

2019-08-14 Thread Vladimir Vavro (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907212#comment-16907212
 ] 

Vladimir Vavro commented on CASSANDRA-15274:


If it is possible to export a suspicious sstable into JSON format, the challenge 
might be to verify whether the exported data are valid or corrupted. However, my 
understanding is that for the export tool the CRC check is optional but 
decompression is obviously not. If the binary data are corrupted before 
decompression, shouldn't we see binary garbage in the JSON output?
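
A generic illustration of the distinction being discussed (this uses plain {{java.util.zip}}, not Cassandra's actual compressed-read path): a checksum over the compressed chunk can flag corruption even when the corrupted bytes still decompress without an error, which is why a garbage-free export does not by itself prove the CRC would have passed.

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class CrcVsDecompressSketch
{
    public static void main(String[] args)
    {
        byte[] data = "some row data some row data some row data".getBytes(StandardCharsets.UTF_8);

        // Compress the chunk, as a writer would before flushing it to disk.
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        byte[] compressed = new byte[256];
        int length = deflater.deflate(compressed);

        // Checksum recorded over the compressed bytes at write time.
        CRC32 written = new CRC32();
        written.update(compressed, 0, length);

        // Simulate on-disk corruption: flip one bit inside the compressed chunk.
        compressed[length / 2] ^= 0x01;

        CRC32 reread = new CRC32();
        reread.update(compressed, 0, length);
        System.out.println("crc mismatch detected: " + (reread.getValue() != written.getValue()));

        // Decompression may still appear to succeed, or may throw, depending on where the flip landed.
        Inflater inflater = new Inflater();
        inflater.setInput(compressed, 0, length);
        byte[] out = new byte[data.length];
        try
        {
            inflater.inflate(out);
            System.out.println("inflate did not throw; output may silently be garbage");
        }
        catch (DataFormatException e)
        {
            System.out.println("inflate failed: " + e);
        }
    }
}
{code}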

> Multiple Corrupt datafiles across entire environment 
> -
>
> Key: CASSANDRA-15274
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15274
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Compaction
>Reporter: Phil O Conduin
>Priority: Normal
>
> Cassandra Version: 2.2.13
> PRE-PROD environment.
>  * 2 datacenters.
>  * 9 physical servers in each datacenter - (_Cisco UCS C220 M4 SFF_)
>  * 4 Cassandra instances on each server (cass_a, cass_b, cass_c, cass_d)
>  * 72 Cassandra instances across the 2 data centres, 36 in site A, 36 in site 
> B.
> We also have 2 Reaper Nodes we use for repair.  One reaper node in each 
> datacenter each running with its own Cassandra back end in a cluster together.
> OS Details [Red Hat Linux]
> cass_a@x 0 10:53:01 ~ $ uname -a
> Linux x 3.10.0-957.5.1.el7.x86_64 #1 SMP Wed Dec 19 10:46:58 EST 2018 x86_64 
> x86_64 x86_64 GNU/Linux
> cass_a@x 0 10:57:31 ~ $ cat /etc/*release
> NAME="Red Hat Enterprise Linux Server"
> VERSION="7.6 (Maipo)"
> ID="rhel"
> Storage Layout 
> cass_a@xx 0 10:46:28 ~ $ df -h
> Filesystem                         Size  Used Avail Use% Mounted on
> /dev/mapper/vg01-lv_root            20G  2.2G   18G  11% /
> devtmpfs                            63G     0   63G   0% /dev
> tmpfs                               63G     0   63G   0% /dev/shm
> tmpfs                               63G  4.1G   59G   7% /run
> tmpfs                               63G     0   63G   0% /sys/fs/cgroup
> >> 4 cassandra instances
> /dev/sdd                           1.5T  802G  688G  54% /data/ssd4
> /dev/sda                           1.5T  798G  692G  54% /data/ssd1
> /dev/sdb                           1.5T  681G  810G  46% /data/ssd2
> /dev/sdc                           1.5T  558G  932G  38% /data/ssd3
> Cassandra load is about 200GB and the rest of the space is snapshots
> CPU
> cass_a@x 127 10:58:47 ~ $ lscpu | grep -E '^Thread|^Core|^Socket|^CPU\('
> CPU(s):                64
> Thread(s) per core:    2
> Core(s) per socket:    16
> Socket(s):             2
> *Description of problem:*
> During repair of the cluster, we are seeing multiple corruptions in the log 
> files on a lot of instances.  There seems to be no pattern to the corruption. 
>  It seems that the repair job is finding all the corrupted files for us.  The 
> repair will hang on the node where the corrupted file is found.  To fix this 
> we remove/rename the datafile and bounce the Cassandra instance.  Our 
> hardware/OS team have stated there is no problem on their side.  I do not 
> believe it is the repair that is causing the corruption. 
>  
> So let me give you an example of a corrupted file and maybe someone might be 
> able to work through it with me?
> When this corrupted file was reported in the log it looks like it was the 
> repair that found it.
> $ journalctl -u cassmeta-cass_b.service --since "2019-08-07 22:25:00" --until 
> "2019-08-07 22:45:00"
> Aug 07 22:30:33 cassandra[34611]: INFO  21:30:33 Writing 
> Memtable-compactions_in_progress@830377457(0.008KiB serialized bytes, 1 ops, 
> 0%/0% of on/off-heap limit)
> Aug 07 22:30:33 cassandra[34611]: ERROR 21:30:33 Failed creating a merkle 
> tree for [repair #9587a200-b95a-11e9-8920-9f72868b8375 on KeyspaceMetadata/x, 
> (-1476350953672479093,-1474461
> Aug 07 22:30:33 cassandra[34611]: ERROR 21:30:33 Exception in thread 
> Thread[ValidationExecutor:825,1,main]
> Aug 07 22:30:33 cassandra[34611]: org.apache.cassandra.io.FSReadError: 
> org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: 
> /x/ssd2/data/KeyspaceMetadata/x-1e453cb0
> Aug 07 22:30:33 cassandra[34611]: at 
> org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:365)
>  ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:361) 
> ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:340)
>  ~[apache-cassandra-2.2.13.jar:2.2.13]
> Aug 07 22:30:33 cassandra[34611]: at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:382)
>  ~[apache-cassand