[jira] [Commented] (CASSANDRA-7103) Very poor performance with simple setup

2014-04-29 Thread Martin Bligh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13984223#comment-13984223
 ] 

Martin Bligh commented on CASSANDRA-7103:
-

OK, that explains some stuff ... will take it to user list as you suggest. I 
was under the impression that the row key was the whole of the primary key, not 
just the partition key, as described here, for instance:

http://wiki.apache.org/cassandra/DataModel
"The first component of a table's primary key is the partition key; within a 
partition, rows are clustered by the remaining columns of the PK."

In which case, that description seems very misleading.
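
A minimal CQL sketch of the distinction (hypothetical table and column names, 
not from this ticket): only the first PRIMARY KEY component is the partition 
key - the old "row key" - and the remaining components are clustering columns 
within that partition.

{code}
CREATE TABLE readings (
  device_id text,   -- partition key: the storage "row key", decides placement
  ts timestamp,     -- clustering column: orders CQL rows inside the partition
  value int,
  PRIMARY KEY (device_id, ts)   -- device_id = partition key, ts = clustering
);
{code}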

 Very poor performance with simple setup
 ---

 Key: CASSANDRA-7103
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7103
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Fedora 19 (also happens on Ubuntu), Cassandra 2.0.7. dsc 
 standard install
Reporter: Martin Bligh

 Single node (this is just development, 32GB 20 core server), single disk 
 array.
 Create the following table:
 {code}
 CREATE TABLE reut (
   time_order bigint,
   time_start bigint,
   ack_us map<int, int>,
   gc_strategy map<text, int>,
   gc_strategy_symbol map<text, int>,
   gc_symbol map<text, int>,
   ge_strategy map<text, int>,
   ge_strategy_symbol map<text, int>,
   ge_symbol map<text, int>,
   go_strategy map<text, int>,
   go_strategy_symbol map<text, int>,
   go_symbol map<text, int>,
   message_type map<text, int>,
   PRIMARY KEY (time_order, time_start)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='99.0PERCENTILE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={};
 {code}
 Now I just insert data into it (using python driver, async insert, prepared 
 insert statement). Each row only fills out one of the gc_*, go_*, or ge_* 
 columns, and there's something like 20-100 entries per map column, 
 occasionally 1000, but it's nothing huge. 
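 The actual statement isn't shown in the report; a prepared INSERT against 
 this schema would look roughly like the following (an illustrative sketch 
 only), with one row's values bound per async execution:
 {code}
 -- hypothetical prepared statement; each execution binds one row and fills
 -- only one family of map columns (gc_*, ge_*, or go_*)
 INSERT INTO reut (time_order, time_start, gc_strategy, gc_symbol, message_type)
 VALUES (?, ?, ?, ?, ?);
 {code}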
 First run 685 inserts in 1.004860 seconds (681.687053 Operations/s).
 OK, not great, but that's fine.
 Now throw 50,000 rows at it.
 Now run the first run again, and it takes 53s to do the same insert of 685 
 rows - I'm getting about 10 rows per second. 
 It's not IO bound - iostat 1 shows quiescent for 9 seconds, then ~640KB 
 write, then sleeps again - seems like the fflush sync.
 Run nodetool flush, and performance goes back to what it was before.
 Not sure why this gets so slow - I think it just builds huge commit logs and 
 memtables, but never writes out to the data/ directory with sstables because 
 I only have one table? That doesn't seem like a good situation. 
 Worse ... if you let the python driver just throw stuff at it async (I think 
 this allows up to 128 requests, if I understand the underlying protocol), then 
 it gets so slow that a single write takes over 10s and times out. Seems to 
 be some sort of synchronization problem in Java ... if I limit the concurrent 
 async requests to the left column below, I get the number of seconds elapsed 
 on the right:
 1: 103 seconds
 2: 63 seconds
 8: 53 seconds
 16: 53 seconds
 32: 66 seconds
 64: so slow it explodes in timeouts on write (over 10s each).
 I guess there's some thundering herd type locking issue in whatever Java 
 primitive you are using to lock concurrent access to a single table. I know 
 some of the Java concurrent.* stuff has this issue. So for the other tests 
 above, I was limiting async writes to 16 pending.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7103) Very poor performance with simple setup

2014-04-29 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13984390#comment-13984390
 ] 

Benedict commented on CASSANDRA-7103:
-

Yes, the terminology is definitely a bit confusing. I even mixed it up myself 
in my first response to you. This is a result of a major transition in 
functionality/terminology, and the old terminology lingers (especially here on 
JIRA).

storage row = data stored against a given partition key
cql row = data stored against a given primary key
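
To make that concrete against this ticket's schema (a sketch, not from the 
ticket itself): with PRIMARY KEY (time_order, time_start), time_order alone is 
the partition key, so every insert sharing a time_order value is appended to 
the same storage row, while each distinct (time_order, time_start) pair is one 
cql row inside it.

{code}
-- storage row: all data stored under one time_order value (the partition key)
-- cql row:     one (time_order, time_start) combination within that partition
-- a typical query touches one storage row and slices cql rows out of it:
SELECT * FROM reut WHERE time_order = ? AND time_start >= ? AND time_start < ?;
{code}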





[jira] [Commented] (CASSANDRA-7103) Very poor performance with simple setup

2014-04-29 Thread Martin Bligh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13984424#comment-13984424
 ] 

Martin Bligh commented on CASSANDRA-7103:
-

FWIW, in case anyone trips across this later:

changing PRIMARY KEY (time_order, time_start) to PRIMARY KEY ((time_order, 
time_start)) fixed the insert side of it - i.e. using both columns for the 
storage row key. And this: http://www.datastax.com/dev/blog/whats-new-in-cql-3-0 
explained it in more detail, and now I can at least see what they were trying 
to say in the wiki ;-) It stops me from being able to use the < and > operators, 
but that's a different problem ...

Thanks for the explanation.
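
For reference, a sketch of what that change does (assuming the rest of the 
CREATE TABLE stays exactly as in the description):

{code}
-- original: time_order is the partition key and time_start a clustering
-- column, so rows sharing a time_order value all accumulate in one partition:
--   PRIMARY KEY (time_order, time_start)

-- changed: the extra parentheses make (time_order, time_start) a composite
-- partition key, so each row gets its own partition:
--   PRIMARY KEY ((time_order, time_start))

-- the trade-off: range predicates on time_start are no longer allowed, e.g.
-- this works with the original key but not with the composite partition key:
SELECT * FROM reut WHERE time_order = ? AND time_start > ?;
{code}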



[jira] [Commented] (CASSANDRA-7103) Very poor performance with simple setup

2014-04-28 Thread Martin Bligh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13983567#comment-13983567
 ] 

Martin Bligh commented on CASSANDRA-7103:
-

BTW, I'm aware that this is
(a) a single node
(b) half my time_order entries are zero (which I don't think matters as it's a 
single node anyway), so my partition key doesn't have much variance
(c) disk is not performant (but according to iostat we're barely writing to 
it, so I don't think this matters).
(d) I'm writing to one table
(e) I'm using a single writer.

So I'm creating a hotspot of some form. But really:

1. I think it should be able to handle more than 700 writes a second to one 
table.
2. It shouldn't degrade to about 10 writes per second.

Sure, I could throw masses of hardware at it, and make it scale a bit better, 
but ... unless it can perform better than this on a single table, single node, 
I don't see how it'd perform in any reasonable fashion on a larger cluster.

