Re: unrepairable sstable data rows

2011-04-11 Thread Jonathan Colby
Thanks for the answer Aaron. 

There are Data, Index, Filter, and Statistics files associated with SSTables.   
What files must be physically moved/deleted? 

I tried just moving the Data file and Cassandra would not start. I see this 
exception:

 WARN [WrapperSimpleAppMain] 2011-04-11 12:04:23,239 ColumnFamilyStore.java 
(line 493) Removing orphans for /var/lib/cassandra/data/DFS/main-f-5: [Data.db]
ERROR [WrapperSimpleAppMain] 2011-04-11 12:04:23,240 
AbstractCassandraDaemon.java (line 333) Exception encountered during startup.
java.lang.AssertionError: attempted to delete non-existing file main-f-5-Data.db
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:46)
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:41) 
   at 
org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:498)
at 
org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:153)

On Apr 11, 2011, at 2:14 AM, aaron morton wrote:

 But if you wanted to get fresh data on the node, a simple approach is to 
 delete/move just the SSTable that is causing problems then run a repair. That 
 should reduce the amount of data that needs to be moved. 



Re: unrepairable sstable data rows

2011-04-11 Thread Sylvain Lebresne
Remove the main-f-5-{Index|Filter|Statistics}.db files. They make no sense
without a Data file, and Cassandra always makes sure it removes them before
the Data file (which is why it gets confused if it finds one of those files
without a Data file).
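
For example, to sideline every component of that sstable before restarting
(illustrative paths based on the log above; adjust the data directory and
generation number to your own):

mkdir -p /var/lib/cassandra/quarantine
mv /var/lib/cassandra/data/DFS/main-f-232-Data.db \
   /var/lib/cassandra/data/DFS/main-f-232-Index.db \
   /var/lib/cassandra/data/DFS/main-f-232-Filter.db \
   /var/lib/cassandra/data/DFS/main-f-232-Statistics.db \
   /var/lib/cassandra/quarantine/

and then run nodetool repair once the node is back up.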

Note that your error was with the sstable main-f-232-Data.db, so it would
probably have been enough to remove only main-f-232* (whereas it seems you
have also removed main-f-5-Data.db). I fear it's probably too late (it will
just be potentially much more data to repair than necessary).

Out of curiosity, what version of Cassandra are you running?

--
Sylvain

On Mon, Apr 11, 2011 at 12:08 PM, Jonathan Colby
jonathan.co...@gmail.com wrote:
 Thanks for the answer Aaron.

 There are Data, Index, Filter, and Statistics files associated with SSTables. 
   What files must be physically moved/deleted?

 I tried just moving the Data file and Cassandra would not start. I see this 
 exception:

  WARN [WrapperSimpleAppMain] 2011-04-11 12:04:23,239 ColumnFamilyStore.java 
 (line 493) Removing orphans for /var/lib/cassandra/data/DFS/main-f-5: 
 [Data.db]
 ERROR [WrapperSimpleAppMain] 2011-04-11 12:04:23,240 
 AbstractCassandraDaemon.java (line 333) Exception encountered during startup.
 java.lang.AssertionError: attempted to delete non-existing file 
 main-f-5-Data.db
        at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:46)
        at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:41)   
      at 
 org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:498)
        at 
 org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:153)

 On Apr 11, 2011, at 2:14 AM, aaron morton wrote:

 But if you wanted to get fresh data on the node, a simple approach is to 
 delete/move just the SSTable that is causing problems then run a repair. 
 That should reduce the amount of data that needs to be moved.




Cassandra constantly removes a node which no longer exists

2011-04-11 Thread ruslan usifov
Hello

I use cassandra 0.7.4. After reconfiguring the cluster, on one node I
constantly see the following log:

INFO [GossipStage:1] 2011-04-11 17:14:13,514 StorageService.java (line 865)
Removing token 56713727820156410577229101238628035242 for /10.32.59.202
INFO [ScheduledTasks:1] 2011-04-11 17:14:13,514 HintedHandOffManager.java
(line 210) Deleting any stored hints for 10.32.59.202


But node 10.32.59.202 doesn't exist any more. How can I prevent this?


Read time get worse during dynamic snitch reset

2011-04-11 Thread shimi
I finally upgraded 0.6.x to 0.7.4.  The nodes are running with the new
version for several days across 2 data centers.
I noticed that the read time on some of the nodes increases by 50-60x every
ten minutes.
There was no indication in the logs of anything happening at the same
time. The only thing that I know of that runs every 10 minutes is
the dynamic snitch reset.
So I changed dynamic_snitch_reset_interval_in_ms to 20 minutes and now I
have the problem once every 20 minutes.

I am running all nodes with:
replica_placement_strategy:
org.apache.cassandra.locator.NetworkTopologyStrategy
  strategy_options:
DC1 : 2
DC2 : 2
  replication_factor: 4

(DC1 and DC2 are taken from the ips)
Is anyone familiar with this kind of behavior?

Shimi


exceptions during bootstrap 0.7.4

2011-04-11 Thread Jonathan Colby
Seeing these exceptions on a node during the bootstrap phase of a move.
Cassandra 0.7.4.  Anyone able to shed more light on what may be causing this?

btw - the move was done to assign a new token; the decommission phase seemed to
have gone ok.  Bootstrapping is still in progress (I hope).

 INFO [CompactionExecutor:1] 2011-04-11 16:26:25,583 SSTableReader.java (line 
154) Opening /var/lib/cassandra/data/DFS/main-f-249
 INFO [CompactionExecutor:1] 2011-04-11 16:27:21,067 SSTableReader.java (line 
154) Opening /var/lib/cassandra/data/DFS/main-f-250
 INFO [CompactionExecutor:1] 2011-04-11 16:28:01,745 SSTableReader.java (line 
154) Opening /var/lib/cassandra/data/DFS/main-f-251
 INFO [CompactionExecutor:1] 2011-04-11 16:36:21,320 SSTableReader.java (line 
154) Opening /var/lib/cassandra/data/DFS/main-f-252
 INFO [CompactionExecutor:1] 2011-04-11 16:36:33,485 SSTableReader.java (line 
154) Opening /var/lib/cassandra/data/DFS/main-f-253
ERROR [CompactionExecutor:1] 2011-04-11 16:36:34,368 
AbstractCassandraDaemon.java (line 112) Fatal exception in thread 
Thread[CompactionExecutor:1,1,main]
java.io.EOFException
at 
org.apache.cassandra.io.sstable.IndexHelper.skipIndex(IndexHelper.java:65)
at 
org.apache.cassandra.io.sstable.SSTableWriter$Builder.build(SSTableWriter.java:315)
at 
org.apache.cassandra.db.CompactionManager$9.call(CompactionManager.java:942)
at 
org.apache.cassandra.db.CompactionManager$9.call(CompactionManager.java:935)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
ERROR [Thread-329] 2011-04-11 16:36:34,369 AbstractCassandraDaemon.java (line 
112) Fatal exception in thread Thread[Thread-329,5,main]
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
java.io.EOFException
at 
org.apache.cassandra.streaming.StreamInSession.closeIfFinished(StreamInSession.java:151)
at 
org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:63)
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:91)
Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at 
org.apache.cassandra.streaming.StreamInSession.closeIfFinished(StreamInSession.java:135)
... 2 more
Caused by: java.io.EOFException
at 
org.apache.cassandra.io.sstable.IndexHelper.skipIndex(IndexHelper.java:65)
at 
org.apache.cassandra.io.sstable.SSTableWriter$Builder.build(SSTableWriter.java:315)
at 
org.apache.cassandra.db.CompactionManager$9.call(CompactionManager.java:942)
at 
org.apache.cassandra.db.CompactionManager$9.call(CompactionManager.java:935)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
 INFO [CompactionExecutor:1] 2011-04-11 16:36:37,317 SSTableReader.java (line 
154) Opening /var/lib/cassandra/data/DFS/main-f-255
 INFO [CompactionExecutor:1] 2011-04-11 16:36:37,426 SSTableReader.java (line 
154) Opening /var/lib/cassandra/data/DFS/main-f-256
ERROR [CompactionExecutor:1] 2011-04-11 16:36:38,290 
AbstractCassandraDaemon.java (line 112) Fatal exception in thread 
Thread[CompactionExecutor:1,1,main]
java.io.EOFException
at 
org.apache.cassandra.io.sstable.IndexHelper.skipIndex(IndexHelper.java:65)
at 
org.apache.cassandra.io.sstable.SSTableWriter$Builder.build(SSTableWriter.java:315)
at 
org.apache.cassandra.db.CompactionManager$9.call(CompactionManager.java:942)
at 
org.apache.cassandra.db.CompactionManager$9.call(CompactionManager.java:935)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)



Cassandra Database Modeling

2011-04-11 Thread Shalom
I would like to save statistics on 10,000,000 (ten million) pairs of
particles: how they relate to one another at any given point in time.

So suppose that within a total experiment time of T1..T1000 (assume that T1
is when the experiment starts, and T1000 is the time when the experiment
ends) I would like, per each pair of particles, to measure the relationship
between every Tn -- T(n+1) interval:

T1..T2 (this is the first interval)

T2..T3

T3..T4

..

..

T9,999,999..T10,000,000 (this is the last interval)

For each such particle pair (there are 10,000,000 pairs) I would like to
save some figures (such as distance, angle, etc.) for each interval of
[ Tn..T(n+1) ]

Once saved, the query I will be using to retrieve this data is as follows:
give me all particle pairs on time interval [ Tn..T(n+1) ] where the
distance between the two particles is smaller than X and the angle between
the two particles is greater than Y. Meaning, the query will always take
place for all particle pairs on a certain interval of time.

How would you model this in Cassandra, so that the writes/reads are
optimized? Given the database size involved, can you recommend a suitable
solution? (I have been pointed to both MongoDB and Cassandra.)

I should mention that the data does change often -- we run many such
experiments (different particle sets / thousands of experiments) and would
need very decent read/write performance.

Is Cassandra suitable for this type of work?


--
View this message in context: 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Cassandra-Database-Modeling-tp6261778p6261778.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
Nabble.com.


Analysing hotspot gc logs

2011-04-11 Thread Chris Burroughs
To avoid taking my own thread [1] off on a tangent: does anyone have a
recommendation for a tool to do graphical analysis (i.e. make useful graphs)
of hotspot gc logs?  Google searches have turned up several results
along the lines of "go try this zip file" [2].

[1] http://www.mail-archive.com/user@cassandra.apache.org/msg12134.html

[2]
http://mail.openjdk.java.net/pipermail/hotspot-gc-use/2009-August/000420.html


Re: Analysing hotspot gc logs

2011-04-11 Thread Ryan King
On Mon, Apr 11, 2011 at 10:35 AM, Chris Burroughs
chris.burrou...@gmail.com wrote:
 To avoid taking my own thread [1] off on a tangent: does anyone have a
 recommendation for a tool to do graphical analysis (i.e. make useful graphs)
 of hotspot gc logs?  Google searches have turned up several results
 along the lines of "go try this zip file" [2].

 [1] http://www.mail-archive.com/user@cassandra.apache.org/msg12134.html

 [2]
 http://mail.openjdk.java.net/pipermail/hotspot-gc-use/2009-August/000420.html


We use this to pipe the data into ganglia:
https://github.com/jkalucki/jvm-gc-stats YMMV

-ryan


problems getting started with Cassandra Ruby

2011-04-11 Thread Mark Lilback
I'm trying to connect to Cassandra from a Ruby script. I'm using rvm, and made 
a clean install of Ruby 1.9.2 and then did gem install cassandra. When I run 
a script that just contains require 'cassandra/0.7', I get the output below. 
Any suggestion on what I need to do to get rid of these warnings?


/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/server/nonblocking_server.rb:80:
 warning: `&' interpreted as argument prefix
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method redefined; discarding old skip
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/protocol/base_protocol.rb:235:
 warning: previous definition of skip was here
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method redefined; discarding old write_message_end
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/protocol/base_protocol.rb:57:
 warning: previous definition of write_message_end was here
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method redefined; discarding old write_struct_begin
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/protocol/base_protocol.rb:59:
 warning: previous definition of write_struct_begin was here
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method redefined; discarding old write_struct_end
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/protocol/base_protocol.rb:63:
 warning: previous definition of write_struct_end was here
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method redefined; discarding old write_field_end
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/protocol/base_protocol.rb:69:
 warning: previous definition of write_field_end was here
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method redefined; discarding old write_map_end
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/protocol/base_protocol.rb:79:
 warning: previous definition of write_map_end was here
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method redefined; discarding old write_list_end
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/protocol/base_protocol.rb:85:
 warning: previous definition of write_list_end was here
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method redefined; discarding old write_set_end
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/protocol/base_protocol.rb:91:
 warning: previous definition of write_set_end was here
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method redefined; discarding old read_message_end
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/protocol/base_protocol.rb:125:
 warning: previous definition of read_message_end was here
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method redefined; discarding old read_struct_begin
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/protocol/base_protocol.rb:127:
 warning: previous definition of read_struct_begin was here
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method redefined; discarding old read_struct_end
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/protocol/base_protocol.rb:131:
 warning: previous definition of read_struct_end was here
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method redefined; discarding old read_field_end
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/protocol/base_protocol.rb:137:
 warning: previous definition of read_field_end was here
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method redefined; discarding old read_map_end
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/protocol/base_protocol.rb:143:
 warning: previous definition of read_map_end was here
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method redefined; discarding old read_list_end
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/protocol/base_protocol.rb:149:
 warning: previous definition of read_list_end was here
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method redefined; discarding old read_set_end
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift/protocol/base_protocol.rb:155:
 warning: previous definition of read_set_end was here
/Users/admin/.rvm/gems/ruby-1.9.2-p180/gems/thrift-0.5.0/lib/thrift_native.bundle:
 warning: method 

Timeout during stress test

2011-04-11 Thread mcasandra
I am running a stress test using hector. In the client logs I see:

me.prettyprint.hector.api.exceptions.HTimedOutException: TimedOutException()
at
me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:32)
at
me.prettyprint.cassandra.service.HColumnFamilyImpl$1.execute(HColumnFamilyImpl.java:256)
at
me.prettyprint.cassandra.service.HColumnFamilyImpl$1.execute(HColumnFamilyImpl.java:227)
at
me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:101)
at
me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:221)
at
me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
at
me.prettyprint.cassandra.service.HColumnFamilyImpl.doExecuteSlice(HColumnFamilyImpl.java:227)
at
me.prettyprint.cassandra.service.HColumnFamilyImpl.getColumns(HColumnFamilyImpl.java:139)
at
com.riptano.cassandra.stress.SliceCommand.call(SliceCommand.java:48)
at
com.riptano.cassandra.stress.SliceCommand.call(SliceCommand.java:20)
at
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: TimedOutException()
at
org.apache.cassandra.thrift.Cassandra$get_slice_result.read(Cassandra.java:7174)
at
org.apache.cassandra.thrift.Cassandra$Client.recv_get_slice(Cassandra.java:540)
at
org.apache.cassandra.thrift.Cassandra$Client.get_slice(Cassandra.java:512)
at
me.prettyprint.cassandra.service.HColumnFamilyImpl$1.execute(HColumnFamilyImpl.java:236)


But I don't see anything in cassandra logs.

--
View this message in context: 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Timeout-during-stress-test-tp6262430p6262430.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
Nabble.com.


Remove call vs. delete mutation

2011-04-11 Thread Josep Blanquer
All,

 From a thrift client perspective using Cassandra, there are currently
2 options for deleting keys/columns/subcolumns:

1- One can use the remove call: which only takes a column path so
you can only delete 'one thing' at a time (an entire key, an entire
supercolumn, a column or a subcolumn)
2- A delete mutation: which is more flexible, as it allows deleting a
list of columns and even a slice range of them within a single call.

The question I have is: is there a noticeable difference in
performance between issuing a remove call and a mutation with a single
delete? In other words, why would I use the remove call if it's much
less flexible than the mutation?

...or another way to put it: is the remove call just there for
backwards compatibility and will be superseded by the delete mutations
in the future?

 Cheers,

Josep M.


help! seed node needs to be replaced

2011-04-11 Thread Jonathan Colby

My seed node (1 of 4), which has the wraparound range (token 0), needs to be
replaced.


Should I bootstrap the node with a new IP, then add it back as a seed?   

Should I run remove token on another node to take over the range?

Re: help! seed node needs to be replaced

2011-04-11 Thread Jonathan Colby
I shut down cassandra, deleted (with a backup) the contents of the data
directory and did a nodetool move 0.  It seems to be populating the node
with its range of data.  Hope that was a good idea.

On Apr 11, 2011, at 10:38 PM, Jonathan Colby wrote:

 
 My seed node (1 of 4), which has the wraparound range (token 0), needs to be
 replaced.
 
 
 Should I bootstrap the node with a new IP, then add it back as a seed?   
 
 Should I run remove token on another node to take over the range?



Re: unrepairable sstable data rows

2011-04-11 Thread aaron morton
FYI, I was chatting with Dominic Williams on IRC yesterday; he had a 0.7.4
install with the same problem -- see the error stack here: http://pastebin.com/YasPtEYj

He has not run nodetool scrub, but I think the 0.7.4 install had been there a
while, so the data file may have been fresh.

Aaron

On 11 Apr 2011, at 22:30, Sylvain Lebresne wrote:

 Remove the main-f-5-{Index|Filter|Statistics}.db files. They make no sense
 without a Data file, and Cassandra always makes sure it removes them before
 the Data file (which is why it gets confused if it finds one of those files
 without a Data file).

 Note that your error was with the sstable main-f-232-Data.db, so it would
 probably have been enough to remove only main-f-232* (whereas it seems you
 have also removed main-f-5-Data.db). I fear it's probably too late (it will
 just be potentially much more data to repair than necessary).
 
 Out of curiosity, what version of Cassandra are you running ?
 
 --
 Sylvain
 
 On Mon, Apr 11, 2011 at 12:08 PM, Jonathan Colby
 jonathan.co...@gmail.com wrote:
 Thanks for the answer Aaron.
 
 There are Data, Index, Filter, and Statistics files associated with 
 SSTables.   What files must be physically moved/deleted?
 
 I tried just moving the Data file and Cassandra would not start. I see this 
 exception:
 
  WARN [WrapperSimpleAppMain] 2011-04-11 12:04:23,239 ColumnFamilyStore.java 
 (line 493) Removing orphans for /var/lib/cassandra/data/DFS/main-f-5: 
 [Data.db]
 ERROR [WrapperSimpleAppMain] 2011-04-11 12:04:23,240 
 AbstractCassandraDaemon.java (line 333) Exception encountered during startup.
 java.lang.AssertionError: attempted to delete non-existing file 
 main-f-5-Data.db
at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:46)
at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:41)  
   at 
 org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:498)
at 
 org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:153)
 
 On Apr 11, 2011, at 2:14 AM, aaron morton wrote:
 
 But if you wanted to get fresh data on the node, a simple approach is to 
 delete/move just the SSTable that is causing problems then run a repair. 
 That should reduce the amount of data that needs to be moved.
 
 



Re: Cassandra constantly removes a node which no longer exists

2011-04-11 Thread aaron morton
In JConsole go to o.a.c.db.HintedHandoffManager and try the
deleteHintsForEndpoint operation.
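
If you prefer to script it, here is a minimal JMX sketch in Java (assuming
the default 0.7 JMX port 8080 and that the MBean is registered as
org.apache.cassandra.db:type=HintedHandoffManager -- check JConsole for the
exact name on your build):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class DeleteHints {
    public static void main(String[] args) throws Exception {
        // connect to the node's JMX port (8080 by default in 0.7)
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:8080/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
        ObjectName hhm = new ObjectName(
                "org.apache.cassandra.db:type=HintedHandoffManager");
        // drop any hints stored for the departed endpoint
        mbs.invoke(hhm, "deleteHintsForEndpoint",
                new Object[]{ "10.32.59.202" },
                new String[]{ "java.lang.String" });
        jmxc.close();
    }
}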

This is also called when a token is removed from the ring, or when a node is
decommissioned.

What process did you use to reconfigure the cluster?

Aaron

On 12 Apr 2011, at 01:15, ruslan usifov wrote:

 Hello
 
 I use cassandra 0.7.4. After reconfiguring the cluster, on one node I
 constantly see the following log:
 
 INFO [GossipStage:1] 2011-04-11 17:14:13,514 StorageService.java (line 865) 
 Removing token 56713727820156410577229101238628035242 for /10.32.59.202
 INFO [ScheduledTasks:1] 2011-04-11 17:14:13,514 HintedHandOffManager.java 
 (line 210) Deleting any stored hints for 10.32.59.202
 
 
 But node 10.32.59.202 doesn't exist any more. How can I prevent this?
 



Re: Read time get worse during dynamic snitch reset

2011-04-11 Thread aaron morton
The reset interval clears the latency tracked for each node so a bad node will 
be read from again. The scores for each node are then updated every 100ms 
(default) using the last 100 responses from a node. 
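
For reference, the relevant cassandra.yaml settings (0.7 names; the values
shown are what I believe the stock defaults to be):

dynamic_snitch: true
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000  # 10 minutes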

How long does the bad performance last for?

What CL are you reading at ? At Quorum with RF 4 the read request will be sent 
to 3 nodes, ordered by proximity and wellness according to the dynamic snitch. 
(for background recent discussion on dynamic snitch 
http://www.mail-archive.com/user@cassandra.apache.org/msg12089.html)

You can take a look at the weights and timings used by the DynamicSnitch in 
JConsole under o.a.c.db.DynamicSnitchEndpoint . Also at DEBUG log level you 
will be able to see which nodes the request is sent to. 
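
e.g. something like this in log4j-server.properties (the logger names are my
assumption of the relevant classes):

log4j.logger.org.apache.cassandra.service.StorageProxy=DEBUG
log4j.logger.org.apache.cassandra.locator.DynamicEndpointSnitch=DEBUG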

My guess is the DynamicSnitch is doing the right thing, and the slowdown is a
node with a problem getting back into the list of nodes used for your reads.
It's then moved down the list as its bad performance is noticed.

Hope that helps
Aaron
 

On 12 Apr 2011, at 01:28, shimi wrote:

 I finally upgraded 0.6.x to 0.7.4.  The nodes are running with the new 
 version for several days across 2 data centers.
 I noticed that the read time on some of the nodes increases by 50-60x every
 ten minutes.
 There was no indication in the logs of anything happening at the same
 time. The only thing that I know of that runs every 10 minutes is the
 dynamic snitch reset.
 So I changed dynamic_snitch_reset_interval_in_ms to 20 minutes and now I have
 the problem once every 20 minutes.
 
 I am running all nodes with:
 replica_placement_strategy: 
 org.apache.cassandra.locator.NetworkTopologyStrategy
   strategy_options:
 DC1 : 2
 DC2 : 2
   replication_factor: 4
 
 (DC1 and DC2 are taken from the ips)
 Is anyone familiar with this kind of behavior?
 
 Shimi
 



Re: help! seed node needs to be replaced

2011-04-11 Thread aaron morton
Is this the node that had the earlier EOF error during bootstrap ? 

Aaron

On 12 Apr 2011, at 08:42, Jonathan Colby wrote:

 I shut down cassandra, deleted (with a backup) the contents of the data
 directory and did a nodetool move 0.  It seems to be populating the node
 with its range of data.  Hope that was a good idea.
 
 On Apr 11, 2011, at 10:38 PM, Jonathan Colby wrote:
 
 
 My seed node (1 of 4), which has the wraparound range (token 0), needs to be
 replaced.
 
 
 Should I bootstrap the node with a new IP, then add it back as a seed?   
 
 Should I run remove token on another node to take over the range?
 



Re: help! seed node needs to be replaced

2011-04-11 Thread Jonathan Colby
Yes.  This node has repeatedly given problems while reading various sstables.
So I decided to start with a fresh data dir, relying on the fact that with
RF=3 the data can be retrieved from the rest of the cluster.

Since this is a seed node, I am a little unsure how to proceed.  From
everything I've read, bootstrapping a seed is not a good idea.  One idea I had
was to change the IP, bootstrap, and change the IP back.  But I just tried
nodetool move 0, hoping it might work.


On Apr 11, 2011, at 11:31 PM, aaron morton wrote:

 Is this the node that had the earlier EOF error during bootstrap ? 
 
 Aaron
 
 On 12 Apr 2011, at 08:42, Jonathan Colby wrote:
 
 I shut down cassandra, deleted (with a backup) the contents of the data
 directory and did a nodetool move 0.  It seems to be populating the node
 with its range of data.  Hope that was a good idea.
 
 On Apr 11, 2011, at 10:38 PM, Jonathan Colby wrote:
 
 
 My seed node (1 of 4), which has the wraparound range (token 0), needs to be
 replaced.
 
 
 Should I bootstrap the node with a new IP, then add it back as a seed?   
 
 Should I run remove token on another node to take over the range?
 
 



Re: Timeout during stress test

2011-04-11 Thread mcasandra
I see this occurring often when all cassandra nodes all of a sudden show a CPU
spike. All reads fail for about 2 minutes. GC.log and system.log don't reveal
much.

The only thing I notice is that when I restart nodes there are tons of files
that get deleted. cfstats from one of the nodes looks like this:

nodetool -h `hostname` tpstats
Pool Name                    Active   Pending  Completed
ReadStage                        27        27      21491
RequestResponseStage  0 0 201641
MutationStage 0 0 236513
ReadRepairStage   0 0   7222
GossipStage   0 0  31498
AntiEntropyStage  0 0  0
MigrationStage0 0  0
MemtablePostFlusher   0 0324
StreamStage   0 0  0
FlushWriter   0 0324
FILEUTILS-DELETE-POOL 0 0   1220
MiscStage 0 0  0
FlushSorter   0 0  0
InternalResponseStage 0 0  0
HintedHandoff 1 3  9

--


Keyspace: StressKeyspace
Read Count: 21957
Read Latency: 46.91765058978913 ms.
Write Count: 222104
Write Latency: 0.008302124230090408 ms.
Pending Tasks: 0
Column Family: StressStandard
SSTable count: 286
Space used (live): 377916657941
Space used (total): 377916657941
Memtable Columns Count: 362
Memtable Data Size: 164403613
Memtable Switch Count: 326
Read Count: 21958
Read Latency: 631.464 ms.
Write Count: 222104
Write Latency: 0.007 ms.
Pending Tasks: 0
Key cache capacity: 100
Key cache size: 22007
Key cache hit rate: 0.002453626459907744
Row cache: disabled
Compacted row minimum size: 87
Compacted row maximum size: 5839588
Compacted row mean size: 552698




--
View this message in context: 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Timeout-during-stress-test-tp6262430p6263087.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
Nabble.com.


Re: Cassandra Database Modeling

2011-04-11 Thread aaron morton
The tricky part here is the level of flexibility you want for the querying. In 
general you will want to denormalise to support the read queries.  

If your queries are not interactive you may be able to use Hadoop / Pig / Hive,
e.g. http://www.datastax.com/products/brisk, in which case you can probably have
a simpler data model where you spend less effort supporting the queries. But it
sounds like you need interactive queries as part of the experiment.

You could store the data per pair in a standard CF (let's call it the pair CF)
as follows:

- key: experiment_id.time_interval
- column name: pair_id
- column value: distance, angle, and other data packed together as JSON or some
other format

This would support a basic record of what happened, for each time interval you 
can get the list of all pairs and read their data. 

To support your spatial queries you could use two standard standard CFs as 
follows:

distance CF:
- key: experiment_id.time_interval
- column name: zero_padded_distance.pair_id
- column value: empty or the angle

angle CF:
- key: experiment_id.time_interval
- column name: zero_padded_angle.pair_id
- column value: empty or the distance

(two pairs can have the same distance and/or angle in same time slice)

Here we are using the column name as a compound value, and I am assuming the
names can be byte ordered. So for distance the column name looks something like
000500.123456789. You would then use the Bytes comparator (or similar) for the
columns.
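
A possible 0.7 cassandra-cli definition for the two CFs (keyspace and CF names
are examples only):

create column family distance with comparator = BytesType;
create column family angle with comparator = BytesType;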

To find all of the particles for experiment 2 at t5 where distance is < 100 you
would use a get_slice (see http://wiki.apache.org/cassandra/API or your higher
level client docs) against the key 2.5 with a SliceRange starting at
000000.0 and finishing at 000100.9. Once you have this list of
columns you can either filter client side for the angle or issue another query
for the particles inside the angle range. Then join the two results client side
using the pair_id returned in the column names.
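
As a rough Hector (Java) sketch of that slice -- the cluster/keyspace names,
serializers and padding width are my assumptions, not part of the model above:

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.ColumnSlice;
import me.prettyprint.hector.api.beans.HColumn;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.query.QueryResult;
import me.prettyprint.hector.api.query.SliceQuery;

public class DistanceSlice {
    public static void main(String[] args) {
        Cluster cluster = HFactory.getOrCreateCluster("test", "localhost:9160");
        Keyspace ks = HFactory.createKeyspace("Particles", cluster);
        StringSerializer ss = StringSerializer.get();

        // all pairs in experiment 2, interval t5, with distance < 100
        SliceQuery<String, String, String> q =
                HFactory.createSliceQuery(ks, ss, ss, ss);
        q.setColumnFamily("distance");
        q.setKey("2.5");
        q.setRange("000000.0", "000100.9", false, 10000);

        QueryResult<ColumnSlice<String, String>> r = q.execute();
        for (HColumn<String, String> c : r.get().getColumns()) {
            // column name is zero_padded_distance.pair_id
            String pairId = c.getName().split("\\.", 2)[1];
            // filter on angle client side, or look pairId up in the angle CF
        }
    }
}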

By using the same key for all 3 CFs, all the data for a time slice will be
stored on the same nodes. You can potentially spread this around by using
slightly different keys so they may hash to different areas of the cluster,
e.g. experiment_id.time_interval.distance

Data volume is not a concern, and it's not possible to talk about performance
until you have an idea of the workload and required throughput. But writes are
fast, and I think your reads would be fast as well, as the row data for distance
and angle will not change, so caches will be useful.
 

Hope that helps. 
Aaron

On 12 Apr 2011, at 03:01, Shalom wrote:

 I would like to save statistics on 10,000,000 (ten million) pairs of
 particles: how they relate to one another at any given point in time.
 
 So suppose that within a total experiment time of T1..T1000 (assume that T1
 is when the experiment starts, and T1000 is the time when the experiment
 ends) I would like, per each pair of particles, to measure the relationship
 between every Tn -- T(n+1) interval:
 
 T1..T2 (this is the first interval)
 
 T2..T3
 
 T3..T4
 
 ..
 
 ..
 
 T9,999,999..T10,000,000 (this is the last interval)
 
 For each such particle pair (there are 10,000,000 pairs) I would like to
 save some figures (such as distance, angle, etc.) for each interval of
 [ Tn..T(n+1) ]
 
 Once saved, the query I will be using to retrieve this data is as follows:
 give me all particle pairs on time interval [ Tn..T(n+1) ] where the
 distance between the two particles is smaller than X and the angle between
 the two particles is greater than Y. Meaning, the query will always take
 place for all particle pairs on a certain interval of time.
 
 How would you model this in Cassandra, so that the writes/reads are
 optimized? Given the database size involved, can you recommend a suitable
 solution? (I have been pointed to both MongoDB and Cassandra.)
 
 I should mention that the data does change often -- we run many such
 experiments (different particle sets / thousands of experiments) and would
 need very decent read/write performance.

 Is Cassandra suitable for this type of work?
 
 
 --
 View this message in context: 
 http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Cassandra-Database-Modeling-tp6261778p6261778.html
 Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
 Nabble.com.



Re: Remove call vs. delete mutation

2011-04-11 Thread aaron morton
AFAIK both follow the same path internally. 
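
In raw Thrift terms the two look roughly like this (a sketch against the
0.7-era generated bindings; the CF, key and column names are made up):

import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.cassandra.thrift.*;

public class DeleteTwoWays {
    // assumes an open, keyspace-bound Cassandra.Client
    static void deleteOneColumn(Cassandra.Client client) throws Exception {
        ByteBuffer key = ByteBuffer.wrap("row1".getBytes("UTF-8"));
        ByteBuffer col = ByteBuffer.wrap("col1".getBytes("UTF-8"));
        long ts = System.currentTimeMillis() * 1000;

        // 1) remove: one column path per call
        ColumnPath path = new ColumnPath("Standard1");
        path.setColumn(col);
        client.remove(key, path, ts, ConsistencyLevel.QUORUM);

        // 2) the same single delete expressed as a batch_mutate Deletion
        SlicePredicate pred = new SlicePredicate();
        pred.addToColumn_names(col);
        Deletion del = new Deletion();
        del.setTimestamp(ts);
        del.setPredicate(pred);
        Mutation m = new Mutation();
        m.setDeletion(del);

        Map<String, List<Mutation>> byCf = new HashMap<String, List<Mutation>>();
        byCf.put("Standard1", Arrays.asList(m));
        Map<ByteBuffer, Map<String, List<Mutation>>> mutations =
                new HashMap<ByteBuffer, Map<String, List<Mutation>>>();
        mutations.put(key, byCf);
        client.batch_mutate(mutations, ConsistencyLevel.QUORUM);
    }
}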

Aaron

On 12 Apr 2011, at 06:47, Josep Blanquer wrote:

 All,
 
 From a thrift client perspective using Cassandra, there are currently
 2 options for deleting keys/columns/subcolumns:
 
 1- One can use the remove call: which only takes a column path so
 you can only delete 'one thing' at a time (an entire key, an entire
 supercolumn, a column or a subcolumn)
 2- A delete mutation: which is more flexible, as it allows deleting a
 list of columns and even a slice range of them within a single call.
 
 The question I have is: is there a noticeable difference in
 performance between issuing a remove call and a mutation with a single
 delete? In other words, why would I use the remove call if it's much
 less flexible than the mutation?
 
 ...or another way to put it: is the remove call just there for
 backwards compatibility and will be superseded by the delete mutations
 in the future?
 
 Cheers,
 
 Josep M.



Re: Timeout during stress test

2011-04-11 Thread aaron morton
TimedOutException means the cluster could not perform the request in 
rpc_timeout time. The client should retry as the problem may be transitory. 
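
A minimal client-side retry sketch (Java/Hector; the Callable helper and the
backoff policy are just one way to do it):

import java.util.concurrent.Callable;
import me.prettyprint.hector.api.exceptions.HTimedOutException;

public class Retry {
    // run a read, retrying up to 3 times on timeout with linear backoff
    static <T> T withRetry(Callable<T> read) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return read.call();
            } catch (HTimedOutException e) {
                if (attempt >= 3) throw e;      // give up
                Thread.sleep(100L * attempt);   // back off before retrying
            }
        }
    }
}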

In this case read performance may have slowed down due to the number of
sstables (286). It's hard to tell without knowing what the workload is.

Aaron

On 12 Apr 2011, at 09:56, mcasandra wrote:

 I see this occurring often when all cassandra nodes all of a sudden show a CPU
 spike. All reads fail for about 2 minutes. GC.log and system.log don't reveal
 much.

 The only thing I notice is that when I restart nodes there are tons of files
 that get deleted. cfstats from one of the nodes looks like this:
 
 nodetool -h `hostname` tpstats
 Pool Name                    Active   Pending  Completed
 ReadStage                        27        27      21491
 RequestResponseStage  0 0 201641
 MutationStage 0 0 236513
 ReadRepairStage   0 0   7222
 GossipStage   0 0  31498
 AntiEntropyStage  0 0  0
 MigrationStage0 0  0
 MemtablePostFlusher   0 0324
 StreamStage   0 0  0
 FlushWriter   0 0324
 FILEUTILS-DELETE-POOL 0 0   1220
 MiscStage 0 0  0
 FlushSorter   0 0  0
 InternalResponseStage 0 0  0
 HintedHandoff 1 3  9
 
 --
 
 
 Keyspace: StressKeyspace
Read Count: 21957
Read Latency: 46.91765058978913 ms.
Write Count: 222104
Write Latency: 0.008302124230090408 ms.
Pending Tasks: 0
Column Family: StressStandard
SSTable count: 286
Space used (live): 377916657941
Space used (total): 377916657941
Memtable Columns Count: 362
Memtable Data Size: 164403613
Memtable Switch Count: 326
Read Count: 21958
Read Latency: 631.464 ms.
Write Count: 222104
Write Latency: 0.007 ms.
Pending Tasks: 0
Key cache capacity: 100
Key cache size: 22007
Key cache hit rate: 0.002453626459907744
Row cache: disabled
Compacted row minimum size: 87
Compacted row maximum size: 5839588
Compacted row mean size: 552698
 
 
 
 
 --
 View this message in context: 
 http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Timeout-during-stress-test-tp6262430p6263087.html
 Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
 Nabble.com.



Re: Timeout during stress test

2011-04-11 Thread mcasandra
It looks like hector did retry on all the nodes and failed. Does this then
mean cassandra is down for clients in this scenario? That would be bad.

--
View this message in context: 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Timeout-during-stress-test-tp6262430p6263270.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
Nabble.com.


Lot of pending tasks for writes

2011-04-11 Thread mcasandra
I am running a stress test and on one of the nodes I see:

[root@dsdb5 ~]# nodetool -h `hostname` tpstats
Pool Name                    Active   Pending  Completed
ReadStage 0 0   2495
RequestResponseStage  0 0 242202
MutationStage                    48       521     287850
ReadRepairStage   0 0799
GossipStage   0 0  10639
AntiEntropyStage  0 0  0
MigrationStage0 0202
MemtablePostFlusher   1 2   1047
StreamStage   0 0  0
FlushWriter   1 1   1047
FILEUTILS-DELETE-POOL 0 0   2048
MiscStage 0 0  0
FlushSorter   0 0  0
InternalResponseStage 0 0  0
HintedHandoff 1 3  5

and cfstats

Keyspace: StressKeyspace
Read Count: 2494
Read Latency: 4987.431669206095 ms.
Write Count: 281705
Write Latency: 0.017631469090005503 ms.
Pending Tasks: 49
Column Family: StressStandard
SSTable count: 882
Space used (live): 139589196497
Space used (total): 139589196497
Memtable Columns Count: 6
Memtable Data Size: 14204955
Memtable Switch Count: 1932
Read Count: 2494
Read Latency: 5921.633 ms.
Write Count: 282522
Write Latency: 0.017 ms.
Pending Tasks: 32
Key cache capacity: 100
Key cache size: 1198
Key cache hit rate: 0.0013596193065941536
Row cache: disabled
Compacted row minimum size: 219343
Compacted row maximum size: 5839588
Compacted row mean size: 557125

I am just running a simple test on a 6 node cassandra cluster with 4 GB heap,
96 GB RAM and 12 cores per host. I am inserting 1M rows with an avg col size
of 250k. I keep getting Dropped mutation messages in the logs. Not sure how
to troubleshoot or tune this.

Can someone please help?

--
View this message in context: 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Lot-of-pending-tasks-for-writes-tp6263462p6263462.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
Nabble.com.


Re: Timeout during stress test

2011-04-11 Thread aaron morton
It means the cluster is currently overloaded and unable to complete requests in 
time at the CL specified. 

Aaron

On 12 Apr 2011, at 11:18, mcasandra wrote:

 It looks like hector did retry on all the nodes and failed. Does this then
 mean cassandra is down for clients in this scenario? That would be bad.
 
 --
 View this message in context: 
 http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Timeout-during-stress-test-tp6262430p6263270.html
 Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
 Nabble.com.



Re: Timeout during stress test

2011-04-11 Thread mcasandra
But I don't understand the reason for the overload. It was doing a simple read
with 12 threads reading 5 rows. Avg CPU only 20%, no GC issues that I see. I
would expect cassandra to be able to process more with 6 nodes, 12 cores, 96
GB RAM and 4 GB heap.

--
View this message in context: 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Timeout-during-stress-test-tp6262430p6263470.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
Nabble.com.


Re: Timeout during stress test

2011-04-11 Thread aaron morton
You'll need to provide more information; from the TP stats, the read stage could
not keep up. If the node is not CPU bound then it is probably IO bound.


What sort of read?
How many columns was it asking for ? 
How many columns do the rows have ?
Was the test asking for different rows ?
How many requests per second did it get up to?
What do the io stats look like ? 
What does nodetool cfhistograms say ?
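
(For reference, cfhistograms is invoked per column family, e.g.
nodetool -h `hostname` cfhistograms StressKeyspace StressStandard
using the keyspace/CF names from your cfstats output; it shows recent latency,
row size and sstables-per-read distributions.)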

Aaron

On 12 Apr 2011, at 13:02, mcasandra wrote:

 But I don't understand the reason for the overload. It was doing a simple read
 with 12 threads reading 5 rows. Avg CPU only 20%, no GC issues that I see. I
 would expect cassandra to be able to process more with 6 nodes, 12 cores, 96
 GB RAM and 4 GB heap.
 
 --
 View this message in context: 
 http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Timeout-during-stress-test-tp6262430p6263470.html
 Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
 Nabble.com.



unsubscribe

2011-04-11 Thread Denis Kirpichenkov




Re: unsubscribe

2011-04-11 Thread daryl smith


 


Re: unsubscribe

2011-04-11 Thread aaron morton
http://wiki.apache.org/cassandra/FAQ#unsubscribe

On 12 Apr 2011, at 14:43, Denis Kirpichenkov wrote:

 
 



Re: Cassandra Database Modeling

2011-04-11 Thread csharpplusproject
Hi Aaron,

Yes, of course it helps, I am starting to get a flavor of Cassandra --
thank you very much!

First of all, by 'interactive' queries, are you referring to 'real-time'
queries? (meaning, where experiment data is 'streaming', data needs to
be stored, and following that the query needs to be run in real time)?

Looking at the design of the particle pairs:

- key: experiment_id.time_interval 
- column name: pair_id 
- column value: distance, angle, other data packed together as JSON or
some other format

A couple of questions:

(1) Will a query such as pairID[ experiment_id.time_interval ]
basically return an array of all pairIDs for the experiment, where each
item is a 'packed' JSON?
(2) Would it be possible, rather than returning the whole JSON object
for every pairID, to get (say) only the distance?
(3) Would it be possible to easily update certain 'pairIDs' with new
values (for example, update pairIDs = {2389, 93434} with new distance
values)? 

Looking at the design of the distance CF (for example):

this is VERY INTERESTING. Basically you are suggesting a design that
will save the actual distance between each pair of particles, and will
allow queries where we can find all pairIDs (for an experiment, on a
time_interval) that meet a certain distance criterion. VERY, VERY
INTERESTING!

A couple of questions:

(1) Will a query such as distanceCF[ experiment_id.time_interval ]
basically return an array of all 'zero_padded_distance.pair_id' elements
for the experiment?
(2) In such a case, I will get (presumably) a python list where every
item is a string (and I will need to process it)?
(3) Given the fact that we're doing a slice on millions of columns (?),
any idea how fast such an operation would be?


Just to make sure I understand, is it true that in both situations the
query complexity is basically O(1), since it's simply a hash?


Thank you for all of your help!

Shalom.

-Original Message-
From: aaron morton aa...@thelastpickle.com
Reply-to: user@cassandra.apache.org
To: user@cassandra.apache.org
Subject: Re: Cassandra Database Modeling
Date: Tue, 12 Apr 2011 10:43:42 +1200

The tricky part here is the level of flexibility you want for the
querying. In general you will want to denormalise to support the read
queries.  


If your queries are not interactive you may be able to use Hadoop /
Pig / Hive, e.g. http://www.datastax.com/products/brisk, in which case you
can probably have a simpler data model where you spend less effort
supporting the queries. But it sounds like you need interactive queries
as part of the experiment.


You could store the data per pair in a standard CF (let's call it the
pair CF) as follows:


- key: experiment_id.time_interval
- column name: pair_id
- column value: distance, angle, and other data packed together as JSON or
some other format


This would support a basic record of what happened, for each time
interval you can get the list of all pairs and read their data. 


To support your spatial queries you could use two standard standard CFs
as follows:


distance CF:
- key: experiment_id.time_interval
- column name: zero_padded_distance.pair_id
- column value: empty or the angle


angle CF:
- key: experiment_id.time_interval
- column name: zero_padded_angle.pair_id
- column value: empty or the distance


(two pairs can have the same distance and/or angle in same time slice)


Here we are using the column name as a compound value, and I am assuming
they can be byte ordered. So for distance the column name looks
something like 000500.123456789. You would then use the Bytes comparator
(or similar) for the columns.  


To find all of the particles for experiment 2 at t5 where distance is <
100 you would use a get_slice
(see http://wiki.apache.org/cassandra/API or your higher level client
docs) against the key 2.5 with a SliceRange starting at
000000.0 and finishing at 000100.9. Once you have this
list of columns you can either filter client side for the angle or issue
another query for the particles inside the angle range. Then join the
two results client side using the pair_id returned in the column names. 


By using the same key for all 3 CFs, all the data for a time slice will
be stored on the same nodes. You can potentially spread this around by
using slightly different keys so they may hash to different areas of the
cluster, e.g. experiment_id.time_interval.distance


Data volume is not a concern, and it's not possible to talk about
performance until you have an idea of the workload and required
throughput. But writes are fast, and I think your reads would be fast as
well, as the row data for distance and angle will not change, so caches
will be useful. 
 


Hope that helps. 
Aaron


On 12 Apr 2011, at 03:01, Shalom wrote:

 I would like to save statistics on 10,000,000 (ten million) pairs of
 particles: how they relate to one another at any given point in time.
 
 So suppose that within a total experiment time of T1..T1000 (assume
 that T1
 is 

Re: Timeout during stress test

2011-04-11 Thread Terje Marthinussen
I notice you have pending hinted handoffs.

Look for errors related to that. We have seen occasional corruption in the
hinted handoff sstables.

If you are stressing the system to its limits, you may also consider playing
more with the number of read/write threads (concurrent_reads/writes),
as well as rate limiting the number of requests each node can get
(throttle limit).
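
For reference, the relevant 0.7 cassandra.yaml settings (the values shown are
the stock defaults / the rules of thumb from the sample file; tune to your
hardware):

concurrent_reads: 32   # rule of thumb: 16 * number of drives
concurrent_writes: 32  # rule of thumb: 8 * number of cores
request_scheduler: org.apache.cassandra.scheduler.RoundRobinScheduler
request_scheduler_options:
    throttle_limit: 80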

We have seen similar issues when sending large numbers of requests to a
cluster (read/write threads running out, timeouts, nodes marked as down).

Terje


On Tue, Apr 12, 2011 at 9:56 AM, aaron morton aa...@thelastpickle.com wrote:

 It means the cluster is currently overloaded and unable to complete
 requests in time at the CL specified.

 Aaron

 On 12 Apr 2011, at 11:18, mcasandra wrote:

  It looks like hector did retry on all the nodes and failed. Does this
 then
  mean cassandra is down for clients in this scenario? That would be bad.
 
  --
  View this message in context:
 http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Timeout-during-stress-test-tp6262430p6263270.html
  Sent from the cassandra-u...@incubator.apache.org mailing list archive
 at Nabble.com.




Re: Timeout during stress test

2011-04-11 Thread mcasandra

aaron morton wrote:
 
 You'll need to provide more information; from the TP stats, the read stage
 could not keep up. If the node is not CPU bound then it is probably IO
 bound.
 
 
 What sort of read?
 How many columns was it asking for ? 
 How many columns do the rows have ?
 Was the test asking for different rows ?
 How many requests per second did it get up to?
 What do the io stats look like ? 
 What does nodetool cfhistograms say ?
 
It's a simple read of 1M rows with one column of avg size 200K. Got around
70 req per sec.

Not sure how to interpret the iostat output with things happening async in
cassandra. Can you give a little description of how to interpret it?

I have posted the output of cfstats. Does cfhistograms provide better info?


--
View this message in context: 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Timeout-during-stress-test-tp6262430p6263859.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
Nabble.com.