Re: Bootstrapping new node isn't pulling schema from cluster

2015-04-19 Thread Eric Stevens
Is it one of your seed nodes, or does it otherwise have itself as a seed?
A node will not bootstrap if it is in its own seeds list.
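A quick way to sanity-check this is to compare the node's own addresses against the seeds it is configured with. A minimal sketch of that check (the addresses below are illustrative, not from this thread; a real check would read them out of cassandra.yaml):

```python
def is_own_seed(node_addresses, seeds):
    """Return True if any of this node's own addresses appears in its seeds list."""
    seed_set = {s.strip() for s in seeds}
    return any(addr in seed_set for addr in node_addresses)

# Illustrative values; in practice take these from listen_address and
# the seed_provider "seeds" parameter in cassandra.yaml.
seeds = "10.0.0.1, 10.0.0.2".split(",")
print(is_own_seed(["10.0.0.3"], seeds))  # False: this node can bootstrap
print(is_own_seed(["10.0.0.2"], seeds))  # True: node is its own seed, won't bootstrap
```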
On Apr 18, 2015 2:53 PM, Bill Miller bmil...@inthinc.com wrote:

 I upgraded a 5 node cluster from 1.2.5 to 1.2.9, ran upgradesstables, and
 installed Oracle Java without issues. Then I tried upgrading one node to
 2.0.14, which my Hector client (I need to move away from it) didn't like,
 so I rolled it back to 1.2.9.  Unfortunately I didn't snapshot, so I cleared
 all of that node's data and attempted to bootstrap it back into the cluster.
 When I do that it sets up the system keyspace, talks to the other nodes,
 and output.log says "Startup completed! Now serving reads" without any
 errors.  This is immediately followed by:

 java.lang.AssertionError: Unknown keyspace note_qa
 at org.apache.cassandra.db.Table.init(Table.java:262

 and then lots of errors when it can't find column families:

 org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find
 cfId=5213a16b-a648-3cb5-9006-8f6bf9315009
 at
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:184)

 The other keyspaces/column families are never created.

 The other four nodes are running fine and nodetool shows the new node as
 UP when it's in this state.

 I attached the log.  I had server debugging on.





Re: RepairException on C* 2.1.3

2015-04-19 Thread Marcus Eriksson
The issue here is that getPosition returns null.

I think this was fixed in
https://issues.apache.org/jira/browse/CASSANDRA-8750

On Fri, Apr 17, 2015 at 10:55 PM, Robert Coli rc...@eventbrite.com wrote:

 On Fri, Apr 17, 2015 at 11:40 AM, Mark Greene green...@gmail.com wrote:

 I'm receiving an exception when I run a repair process via: 'nodetool
 repair -par keyspace'


 This JIRA claims it was fixed in 2.1.3, but I believe I have heard at least
 one other report that it isn't:

 https://issues.apache.org/jira/browse/CASSANDRA-8211

 If I were you, I would:

 a) file a JIRA at http://issues.apache.org
 b) reply to the list telling us the URL of your issue

 =Rob




timeout creating table

2015-04-19 Thread Jimmy Lin
hi,
we have some unit tests that run in parallel, creating temporary keyspaces
and tables and then dropping them after the tests are done.

From time to time, our create table statement runs into an "All host(s) for
query failed... Timeout during read" error from the DataStax driver.

We later turned on tracing and captured the output below. Between the =====
markers, between the Native-Transport-Requests thread and the
MigrationStage thread, there is a gap of about 16 seconds.

Any idea what Cassandra was doing for those 16 seconds? We can work around
it by increasing our DataStax driver timeout value, but is wondering if there
is a better way to solve this?

thanks



 tracing --


5872bf70-e6e2-11e4-823d-93572f3db015 | 58730d97-e6e2-11e4-823d-93572f3db015 | Key cache hit for sstable 95588 | 127.0.0.1 | 1592 | Native-Transport-Requests:102
5872bf70-e6e2-11e4-823d-93572f3db015 | 58730d98-e6e2-11e4-823d-93572f3db015 | Seeking to partition beginning in data file | 127.0.0.1 | 1593 | Native-Transport-Requests:102
5872bf70-e6e2-11e4-823d-93572f3db015 | 58730d99-e6e2-11e4-823d-93572f3db015 | Merging data from memtables and 3 sstables | 127.0.0.1 | 1595 | Native-Transport-Requests:102

=====
5872bf70-e6e2-11e4-823d-93572f3db015 | 58730d9a-e6e2-11e4-823d-93572f3db015 | Read 3 live and 0 tombstoned cells | 127.0.0.1 | 1610 | Native-Transport-Requests:102
5872bf70-e6e2-11e4-823d-93572f3db015 | 62364a40-e6e2-11e4-823d-93572f3db015 | Executing seq scan across 1 sstables for (min(-9223372036854775808), min(-9223372036854775808)] | 127.0.0.1 | 16381594 | MigrationStage:1
=====

5872bf70-e6e2-11e4-823d-93572f3db015 | 62364a41-e6e2-11e4-823d-93572f3db015 | Seeking to partition beginning in data file | 127.0.0.1 | 16381782 | MigrationStage:1
5872bf70-e6e2-11e4-823d-93572f3db015 | 62364a42-e6e2-11e4-823d-93572f3db015 | Read 0 live and 0 tombstoned cells | 127.0.0.1 | 16381787 | MigrationStage:1
5872bf70-e6e2-11e4-823d-93572f3db015 | 62364a43-e6e2-11e4-823d-93572f3db015 | Seeking to partition beginning in data file | 127.0.0.1 | 16381789 | MigrationStage:1
5872bf70-e6e2-11e4-823d-93572f3db015 | 62364a44-e6e2-11e4-823d-93572f3db015 | Read 0 live and 0 tombstoned cells | 127.0.0.1 | 16381791 | MigrationStage:1
5872bf70-e6e2-11e4-823d-93572f3db015 | 62364a45-e6e2-11e4-823d-93572f3db015 | Seeking to partition beginning in data file | 127.0.0.1 | 16381792 | MigrationStage:1
5872bf70-e6e2-11e4-823d-93572f3db015 | 62364a46-e6e2-11e4-823d-93572f3db015 | Read 0 live and 0 tombstoned cells | 127.0.0.1 | 16381794 | MigrationStage:1
...
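The roughly 16-second gap can be read directly off the source_elapsed column in the trace above (microseconds since the trace started): the last Native-Transport-Requests event is at 1610 µs and the first MigrationStage event is at 16381594 µs. A quick sketch of the arithmetic:

```python
# source_elapsed values (in microseconds) taken from the trace above
last_ntr_event_us = 1610              # "Read 3 live and 0 tombstoned cells"
first_migration_event_us = 16381594   # "Executing seq scan across 1 sstables..."

# Difference between the two events, converted to seconds
gap_seconds = (first_migration_event_us - last_ntr_event_us) / 1_000_000
print(f"gap: {gap_seconds:.2f} s")  # gap: 16.38 s
```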


COPY command to export a table to CSV file

2015-04-19 Thread Neha Trivedi
Hello all,

We are getting an OutOfMemoryError on one of the nodes, and the node goes
down, when we run the COPY export command to get all the data from a table.


Regards
Neha




ERROR [ReadStage:532074] 2015-04-09 01:04:00,603 CassandraDaemon.java (line 199) Exception in thread Thread[ReadStage:532074,5,main]
java.lang.OutOfMemoryError: Java heap space
        at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:347)
        at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
        at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
        at org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:124)
        at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:85)
        at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
        at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
        at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
        at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
        at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
        at org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:82)
        at org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:59)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
        at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
        at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
        at org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:200)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
        at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:185)
        at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
        at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
        at org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:101)
        at org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:75)
        at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
        at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)


Re: COPY command to export a table to CSV file

2015-04-19 Thread Kiran mk
Seems like this is related to Java heap memory.

What is the count of records in the column family?

What is the Cassandra version?
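If server heap is the limiting factor, one common pattern when scripting an export is to stream the table out in pages rather than materializing it all at once (cqlsh's COPY pages internally as well). A pure-Python sketch of the chunked-write pattern; the rows_in_pages generator is a stand-in for a paged driver query with a fetch size, not an actual Cassandra call:

```python
import csv
import io

def rows_in_pages(all_rows, page_size):
    """Stand-in for a paged driver query: yields rows one page at a time."""
    for i in range(0, len(all_rows), page_size):
        yield all_rows[i:i + page_size]

def export_to_csv(pages, out):
    """Write each page as it arrives, so memory use is bounded by one page."""
    writer = csv.writer(out)
    count = 0
    for page in pages:
        writer.writerows(page)
        count += len(page)
    return count

# Illustrative data standing in for table rows
rows = [(i, f"value-{i}") for i in range(10)]
buf = io.StringIO()
n = export_to_csv(rows_in_pages(rows, page_size=3), buf)
print(n)  # 10
```

The same shape works with a real driver result set, since paged results arrive as an iterable of rows rather than one large list.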

Best Regards,
Kiran.M.K.

On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi nehajtriv...@gmail.com
wrote:

 Hello all,

 We are getting an OutOfMemoryError on one of the nodes, and the node goes
 down, when we run the COPY export command to get all the data from a table.


 Regards
 Neha









-- 
Best Regards,
Kiran.M.K.


Re: Bootstrapping new node isn't pulling schema from cluster

2015-04-19 Thread Anuj Wadehra
As Eric said, make sure the node is not present in its own seed list. Also make
sure that the auto_bootstrap property is not explicitly set to false in the
node's cassandra.yaml. If that doesn't work, you can also try removing the node
using nodetool removenode and then adding it back. Removenode will cause some
additional streaming.
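For reference, the two cassandra.yaml settings mentioned above might look like this (addresses are illustrative; auto_bootstrap defaults to true and is often absent from the file entirely):

```yaml
# cassandra.yaml excerpt -- values are illustrative
auto_bootstrap: true            # must not be false for the node to bootstrap
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # must NOT include this node's own address if it should bootstrap
      - seeds: "10.0.0.1,10.0.0.2"
```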


By the way, what problems did you face in Hector when you upgraded to 2.0.14? We
are also planning to do that very soon.


Thanks

Anuj Wadehra


From: Eric Stevens migh...@gmail.com
Date: Sun, 19 Apr, 2015 at 6:47 pm
Subject: Re: Bootstrapping new node isn't pulling schema from cluster
