Re: Cassandra Daemon not coming up

2018-03-05 Thread mahesh rajamani
I did not add any users, and disk space was fine.



On Tue, Feb 27, 2018, 11:33 Rahul Singh <rahul.xavier.si...@gmail.com>
wrote:

> Were there any changes to the system, such as permissions, etc.? Did you add
> users / change the auth scheme?
>
> On Feb 27, 2018, 10:27 AM -0600, ZAIDI, ASAD A <az1...@att.com>, wrote:
>
> Can you check if you’ve enough disk space available ?
>
> ~Asad
>
>
>
> *From:* mahesh rajamani [mailto:rajamani.mah...@gmail.com]
> *Sent:* Tuesday, February 27, 2018 10:11 AM
> *To:* user@cassandra.apache.org
> *Subject:* Cassandra Daemon not coming up
>
>
>
> I am using Cassandra 3.0.9 version on a 12 node cluster. I have multiple
> nodes down after a restart. The Cassandra JVM is not coming up, with an
> assertion error as below. When running in debug mode, it fails while doing an
> operation on "resource_role_permissons_index" in the system_auth keyspace.
> Please let me know how to bring Cassandra back up from this state.
>
>
>
> Logs from system.log
>
> [startup log and AssertionError stack trace trimmed -- quoted in full in the
> original message below]
> --
>
> Regards,
> Mahesh Rajamani
>
>


Cassandra Daemon not coming up

2018-02-27 Thread mahesh rajamani
I am using Cassandra 3.0.9 version on a 12 node cluster. I have multiple
nodes down after a restart. The Cassandra JVM is not coming up, with an
assertion error as below. When running in debug mode, it fails while doing an
operation on "resource_role_permissons_index" in the system_auth keyspace.
Please let me know how to bring Cassandra back up from this state.

Logs from system.log

INFO  [main] 2018-02-27 15:43:24,005 ColumnFamilyStore.java:389 -
Initializing system_schema.columns

INFO  [main] 2018-02-27 15:43:24,012 ColumnFamilyStore.java:389 -
Initializing system_schema.triggers

INFO  [main] 2018-02-27 15:43:24,019 ColumnFamilyStore.java:389 -
Initializing system_schema.dropped_columns

INFO  [main] 2018-02-27 15:43:24,029 ColumnFamilyStore.java:389 -
Initializing system_schema.views

INFO  [main] 2018-02-27 15:43:24,038 ColumnFamilyStore.java:389 -
Initializing system_schema.types

INFO  [main] 2018-02-27 15:43:24,049 ColumnFamilyStore.java:389 -
Initializing system_schema.functions

INFO  [main] 2018-02-27 15:43:24,061 ColumnFamilyStore.java:389 -
Initializing system_schema.aggregates

INFO  [main] 2018-02-27 15:43:24,072 ColumnFamilyStore.java:389 -
Initializing system_schema.indexes

ERROR [main] 2018-02-27 15:43:24,127 CassandraDaemon.java:709 - Exception encountered during startup
java.lang.AssertionError: null
    at org.apache.cassandra.db.marshal.CompositeType.getInstance(CompositeType.java:103) ~[apache-cassandra-3.0.9.jar:3.0.9]
    at org.apache.cassandra.config.CFMetaData.rebuild(CFMetaData.java:311) ~[apache-cassandra-3.0.9.jar:3.0.9]
    at org.apache.cassandra.config.CFMetaData.<init>(CFMetaData.java:288) ~[apache-cassandra-3.0.9.jar:3.0.9]
    at org.apache.cassandra.config.CFMetaData.create(CFMetaData.java:366) ~[apache-cassandra-3.0.9.jar:3.0.9]
    at org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:954) ~[apache-cassandra-3.0.9.jar:3.0.9]
    at org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:928) ~[apache-cassandra-3.0.9.jar:3.0.9]
    at org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:891) ~[apache-cassandra-3.0.9.jar:3.0.9]
    at org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:868) ~[apache-cassandra-3.0.9.jar:3.0.9]
    at org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:856) ~[apache-cassandra-3.0.9.jar:3.0.9]
    at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:136) ~[apache-cassandra-3.0.9.jar:3.0.9]
    at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:126) ~[apache-cassandra-3.0.9.jar:3.0.9]
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:239) [apache-cassandra-3.0.9.jar:3.0.9]
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:568) [apache-cassandra-3.0.9.jar:3.0.9]
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:696) [apache-cassandra-3.0.9.jar:3.0.9]
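The failure is at the very first schema load, before any user tables come up. The toy sketch below mirrors only the shape of the failure at CompositeType.getInstance -- the exact precondition in the 3.0.9 source is an assumption here, inferred from the AssertionError above, not quoted from the code:

```python
# Toy model (assumption): rebuilding table metadata needs a non-empty list of
# comparator component types; a corrupt or empty schema row for a table such
# as resource_role_permissons_index would trip an assert of this shape.
def composite_type_get_instance(types):
    assert types is not None and len(types) > 0
    return tuple(types)

print(composite_type_get_instance(["UTF8Type"]))  # a normal schema row
try:
    composite_type_get_instance([])  # a corrupt/empty row, as during this startup
except AssertionError:
    print("startup aborts with AssertionError")
```

If that reading is right, the node cannot get past loading system_auth schema metadata, which matches the startup aborting immediately after the system_schema tables initialize.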

-- 
Regards,
Mahesh Rajamani


Cassandra rpm for 3.10

2017-04-03 Thread mahesh rajamani
Hi,

Can you please let me know where I can get the Cassandra 3.10 RPM? If it's not
available, instructions to build it would be helpful.


-- 
Regards,
Mahesh Rajamani


Issue after loading data using ssttable loader

2014-07-17 Thread mahesh rajamani
Hi,

I have an issue in my environment running Cassandra 2.0.5. It is built
with 9 nodes, with 3 nodes in each datacenter. After loading the data, I am
able to do a token range lookup or list in cassandra-cli, but when I do get
x[rowkey], the system hangs. A similar query in CQL shows the same
behavior.

I have 3 nodes in the source environment, configured as 3
datacenters with 1 node each. I did an export from the source environment and
imported it into the new environment with 9 nodes. The other difference is that
the source is configured with 256 vnodes and the destination environment with
32 vnodes.

Below is the exception I see in Cassandra.
ERROR [ReadStage:103] 2014-07-16 21:23:55,648 CassandraDaemon.java (line 192) Exception in thread Thread[ReadStage:103,5,main]
java.lang.AssertionError: Added column does not sort as the first column
    at org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:115)
    at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:116)
    at org.apache.cassandra.db.ColumnFamily.addIfRelevant(ColumnFamily.java:110)
    at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:205)
    at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
    at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
    at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
    at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
    at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
    at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1560)
    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1379)
    at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
    at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
    at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1396)
    at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)



-- 
Regards,
Mahesh Rajamani


Re: Index with same Name but different keyspace

2014-05-19 Thread mahesh rajamani
Sorry, I just realized the table names in the 2 schemas are slightly different,
but I am still not sure why I should not use the same index name across
different keyspaces. Below are the instructions to reproduce.


Created 2 keyspace using cassandra-cli


[default@unknown] create keyspace keyspace1 with placement_strategy =
'org.apache.cassandra.locator.SimpleStrategy' and
strategy_options={replication_factor:1};

[default@unknown] create keyspace keyspace2 with placement_strategy =
'org.apache.cassandra.locator.SimpleStrategy' and
strategy_options={replication_factor:1};


Create table index using cqlsh as below:


cqlsh> use keyspace1;

cqlsh:keyspace1> CREATE TABLE table1 (version text, flag boolean, primary key (version));

cqlsh:keyspace1> create index sversionindex on table1(flag);

cqlsh:keyspace1> use keyspace2;

cqlsh:keyspace2> CREATE TABLE table2 (version text, flag boolean, primary key (version));

cqlsh:keyspace2> create index sversionindex on table2(flag);

*Bad Request: Duplicate index name sversionindex*

*Thanks*
*Mahesh*
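Whatever the exact scoping rule in 2.0.5 turns out to be, a practical workaround for the repro above is simply to give each index a distinct name (the index names below are made up for illustration):

```
cqlsh:keyspace1> CREATE INDEX keyspace1_flag_idx ON table1 (flag);
cqlsh:keyspace2> CREATE INDEX keyspace2_flag_idx ON table2 (flag);
```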





On Sat, May 17, 2014 at 3:11 AM, Mark Reddy mark.re...@boxever.com wrote:

 Can you share your schema and the commands you are running?


 On Thu, May 15, 2014 at 7:54 PM, mahesh rajamani 
 rajamani.mah...@gmail.com wrote:

 Hi,

 I am using Cassandra 2.0.5 version. I am trying to set up 2 keyspaces with
 the same tables for different testing. While creating indexes on the tables,
 I realized I am not able to use the same index name though the tables are in
 different keyspaces. Is maintaining unique index names across keyspaces a
 must/feature?

 --
 Regards,
 Mahesh Rajamani





-- 
Regards,
Mahesh Rajamani


Index with same Name but different keyspace

2014-05-16 Thread mahesh rajamani
Hi,

I am using Cassandra 2.0.5 version. I am trying to set up 2 keyspaces with the
same tables for different testing. While creating indexes on the tables, I
realized I am not able to use the same index name though the tables are in
different keyspaces. Is maintaining unique index names across keyspaces a
must/feature?

-- 
Regards,
Mahesh Rajamani


Re: Expired column showing up

2014-02-18 Thread mahesh rajamani
I upgraded Cassandra to 2.0.5; these issues have not occurred so far.

Thanks
Mahesh


On Mon, Feb 17, 2014 at 1:43 PM, mahesh rajamani
rajamani.mah...@gmail.comwrote:

 Christian,

 There are 2 use cases which are failing, and both look to be a similar
 issue; both happen in column families set with TTL.

 Case 1) I manage an index for specific data as a single row in a column
 family. I set TTL to 1 second if the data needs to be removed from the index
 row. Under some scenarios, get and count for the row key give different
 column counts. In the application, if I do a get I get the correct set of
 columns (expired columns don't return), but if I do a slice query and read
 100 columns at a time, the columns set with TTL return. I am not able to
 understand what is starting this issue.

 Case 2) I have a column family for managing locks. I insert a column with a
 default TTL of 15 seconds. If the transaction completes earlier, I remove
 the column by again setting the TTL to 1 second.

 In this case, when running flush, the flush hangs with the following
 assertion exception.

 ERROR [FlushWriter:1] 2014-02-17 11:49:29,349 CassandraDaemon.java (line 187) Exception in thread Thread[FlushWriter:1,5,main]
 java.lang.AssertionError
     at org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:198)
     at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:186)
     at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:360)
     at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:315)
     at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
     at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:722)


 Thanks
 Mahesh



 On Mon, Feb 17, 2014 at 12:43 PM, horschi hors...@gmail.com wrote:

 Hi Mahesh,

 the problem is that every column is only tombstoned for as long as the
 original column was valid.

 So if the last update was only valid for 1 sec, then the tombstone will
 also be valid for 1 second! If the previous was valid for a longer time,
 then this old value might reappear.

 Maybe you can explain why you are doing this?

 kind regards,
 Christian



 On Mon, Feb 17, 2014 at 6:18 PM, mahesh rajamani 
 rajamani.mah...@gmail.com wrote:

 Christain,

 Yes. Is it a problem?  Can you explain what happens in this scenario?

 Thanks
 Mahesh


 On Fri, Feb 14, 2014 at 3:07 PM, horschi hors...@gmail.com wrote:

 Hi Mahesh,

 is it possible you are creating columns with a long TTL, then update
 these columns with a smaller TTL?

 kind regards,
 Christian


 On Fri, Feb 14, 2014 at 3:45 PM, mahesh rajamani 
 rajamani.mah...@gmail.com wrote:

 Hi,

 I am using Cassandra 2.0.2 version. On a wide row (approx. 1
 columns), I expire a few columns by setting TTL to 1 second. At times these
 columns show up during slice queries.

 When I have this issue, running count and get commands for that row
 using cassandra-cli gives different column counts.

 But once I run flush and compact, the issue goes away and expired
 columns don't show up.

 Can someone provide some help on this issue?

 --
 Regards,
 Mahesh Rajamani





 --
 Regards,
 Mahesh Rajamani





 --
 Regards,
 Mahesh Rajamani




-- 
Regards,
Mahesh Rajamani


Re: Expired column showing up

2014-02-17 Thread mahesh rajamani
Christain,

Yes. Is it a problem?  Can you explain what happens in this scenario?

Thanks
Mahesh


On Fri, Feb 14, 2014 at 3:07 PM, horschi hors...@gmail.com wrote:

 Hi Mahesh,

 is it possible you are creating columns with a long TTL, then update these
 columns with a smaller TTL?

 kind regards,
 Christian


 On Fri, Feb 14, 2014 at 3:45 PM, mahesh rajamani 
 rajamani.mah...@gmail.com wrote:

 Hi,

 I am using Cassandra 2.0.2 version. On a wide row (approx. 1
 columns), I expire a few columns by setting TTL to 1 second. At times these
 columns show up during slice queries.

 When I have this issue, running count and get commands for that row using
 cassandra-cli gives different column counts.

 But once I run flush and compact, the issue goes away and expired columns
 don't show up.

 Can someone provide some help on this issue?

 --
 Regards,
 Mahesh Rajamani





-- 
Regards,
Mahesh Rajamani


Re: Expired column showing up

2014-02-17 Thread mahesh rajamani
Christian,

There are 2 use cases which are failing, and both look to be a similar
issue; both happen in column families set with TTL.

Case 1) I manage an index for specific data as a single row in a column
family. I set TTL to 1 second if the data needs to be removed from the index
row. Under some scenarios, get and count for the row key give different
column counts. In the application, if I do a get I get the correct set of
columns (expired columns don't return), but if I do a slice query and read
100 columns at a time, the columns set with TTL return. I am not able to
understand what is starting this issue.

Case 2) I have a column family for managing locks. I insert a column with a
default TTL of 15 seconds. If the transaction completes earlier, I remove the
column by again setting the TTL to 1 second.

In this case, when running flush, the flush hangs with the following
assertion exception.

ERROR [FlushWriter:1] 2014-02-17 11:49:29,349 CassandraDaemon.java (line 187) Exception in thread Thread[FlushWriter:1,5,main]
java.lang.AssertionError
    at org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:198)
    at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:186)
    at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:360)
    at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:315)
    at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)


Thanks
Mahesh



On Mon, Feb 17, 2014 at 12:43 PM, horschi hors...@gmail.com wrote:

 Hi Mahesh,

 the problem is that every column is only tombstoned for as long as the
 original column was valid.

 So if the last update was only valid for 1 sec, then the tombstone will
 also be valid for 1 second! If the previous was valid for a longer time,
 then this old value might reappear.

 Maybe you can explain why you are doing this?

 kind regards,
 Christian



 On Mon, Feb 17, 2014 at 6:18 PM, mahesh rajamani 
 rajamani.mah...@gmail.com wrote:

 Christain,

 Yes. Is it a problem?  Can you explain what happens in this scenario?

 Thanks
 Mahesh


 On Fri, Feb 14, 2014 at 3:07 PM, horschi hors...@gmail.com wrote:

 Hi Mahesh,

 is it possible you are creating columns with a long TTL, then update
 these columns with a smaller TTL?

 kind regards,
 Christian


 On Fri, Feb 14, 2014 at 3:45 PM, mahesh rajamani 
 rajamani.mah...@gmail.com wrote:

 Hi,

 I am using Cassandra 2.0.2 version. On a wide row (approx. 1
 columns), I expire a few columns by setting TTL to 1 second. At times these
 columns show up during slice queries.

 When I have this issue, running count and get commands for that row
 using cassandra-cli gives different column counts.

 But once I run flush and compact, the issue goes away and expired
 columns don't show up.

 Can someone provide some help on this issue?

 --
 Regards,
 Mahesh Rajamani





 --
 Regards,
 Mahesh Rajamani





-- 
Regards,
Mahesh Rajamani
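The tombstone explanation quoted above can be sketched as a toy model. This is not Cassandra internals, just the shape of the argument: each cell has a write timestamp and an expiry, a read returns the newest-by-timestamp cell still within its lifetime, and (as a simplification) an expired cell is treated as already purged, so it can no longer shadow an older, longer-lived value:

```python
# Toy model of "the tombstone is only valid as long as the overwrite's TTL":
# once the short-TTL overwrite expires and is purged, the read falls back to
# the older cell, which is still within its own (longer) TTL.
def read(cells, now):
    """Return the value of the newest unexpired cell, or None."""
    live = [c for c in cells if c["expires_at"] > now]
    return max(live, key=lambda c: c["ts"])["value"] if live else None

cells = [
    {"ts": 0,  "value": "old", "expires_at": 1000},  # written with a long TTL
    {"ts": 10, "value": "new", "expires_at": 11},    # overwrite with TTL = 1s
]

print(read(cells, now=10))  # "new": the short-TTL overwrite wins while live
print(read(cells, now=12))  # "old": the overwrite expired, old value reappears
```

This is exactly the reappearance described in the thread: using a 1-second TTL as a "delete" only shadows the previous value briefly, unlike a real deletion whose tombstone lives for gc_grace.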


Expired column showing up

2014-02-14 Thread mahesh rajamani
Hi,

I am using Cassandra 2.0.2 version. On a wide row (approx. 1 columns),
I expire a few columns by setting TTL to 1 second. At times these columns show
up during slice queries.

When I have this issue, running count and get commands for that row using
cassandra-cli gives different column counts.

But once I run flush and compact, the issue goes away and expired columns
don't show up.

Can someone provide some help on this issue?

-- 
Regards,
Mahesh Rajamani


Thrift CAS usage

2014-02-12 Thread mahesh rajamani
Hi,

I am using CAS feature through thrift cas api.

I am able to set the expected column with some value and use CAS through the
Thrift API. But I am not sure what I should set as the expected column list to
achieve an IF NOT EXISTS condition for a column.

Can someone help me on this?
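From what I recall of the Thrift cas() call, passing an empty expected column list asserts that no live columns exist for that row, which is the "not exists" case -- but treat that as an assumption to verify against your client library. For comparison, CQL expresses the same condition directly (the table and column names below are made up):

```
INSERT INTO locks (lock_name, owner) VALUES ('my_lock', 'node1') IF NOT EXISTS;
```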

-- 
Regards,
Mahesh Rajamani


Creating custom secondary index in Cassandra.

2013-11-08 Thread mahesh rajamani
Hello,

I am looking for some additional details about how to create a custom
secondary index.
I see CQL documentation where we can provide our own implementation of a
secondary index using the syntax:

CREATE CUSTOM INDEX ON users (email) USING 'path.to.the.IndexClass';

But there is no information about the interface that needs to be
implemented. Can someone give more information/references on how to use this?
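For what it's worth, in the 2.x-era code base custom index implementations lived under the org.apache.cassandra.db.index package: roughly, you extend one of its abstract classes and pass the fully-qualified class name to CREATE CUSTOM INDEX. The skeleton below is pseudocode from memory, not compilable against any specific version -- every class and method name in it is an assumption to check against your Cassandra source tree:

```
// Pseudocode sketch only -- names approximate the 2.x
// org.apache.cassandra.db.index package and may differ in your version.
public class MyIndex extends PerColumnSecondaryIndex {
    public void init() { /* open whatever backing storage the index uses */ }
    public void insert(ByteBuffer rowKey, Column col) { /* index the column */ }
    public void delete(ByteBuffer rowKey, Column col) { /* un-index it */ }
    public void validateOptions() { /* check options from CREATE CUSTOM INDEX */ }
    // ...plus the searcher plumbing that serves reads against the index
}
```

The most reliable reference is the in-tree implementations (the built-in keys/composites index classes in the same package), since the interface was internal and changed between releases.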

-- 
Regards,
Mahesh Rajamani