Counter Column
Hi, if the nodes of a Cassandra ring are in different timezones, could that affect counter columns, since they depend on timestamps? Thanks, Ajay
Is compound index a planned feature in 3.0?
A compound index in MongoDB is really useful for queries that involve filtering/sorting on multiple columns. I was wondering whether Cassandra 3.0 is supposed to implement this feature. When I read through JIRA, I only found features like CASSANDRA-6048 https://issues.apache.org/jira/browse/CASSANDRA-6048, which allows using multiple single-column indexes in a query by joining predicates. A compound index is more query-driven and is closer to the current application-maintained index table: it may provide better performance than a single-column index and can greatly simplify index maintenance during updates compared to an index table. Any ideas? Ziju
How many tombstones for deleted CQL row?
Hi, I am considering tuning the tombstone warn/error threshold. Just making sure: if I INSERT one (CQL) row populating all six columns and then DELETE the inserted row, will Cassandra write 1 range tombstone or seven tombstones (one per column plus the row marker)? Thanks, Jens
Re: How many tombstones for deleted CQL row?
If you issue DELETE FROM my_table WHERE partition_key = xxx, Cassandra will create a single row tombstone, not one tombstone per column, fortunately.
On Fri, Dec 26, 2014 at 10:50 AM, Jens Rantil jens.ran...@tink.se wrote:
> Hi, I am considering tuning the tombstone warn/error threshold. Just making sure: if I INSERT one (CQL) row populating all six columns and then DELETE the inserted row, will Cassandra write 1 range tombstone or seven tombstones (one per column plus the row marker)? Thanks, Jens
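For illustration, here is a CQL sketch of the difference; the table and column names are hypothetical, not taken from the thread:

```sql
CREATE TABLE my_table (
    partition_key int PRIMARY KEY,
    c1 text, c2 text, c3 text,
    c4 text, c5 text, c6 text
);

-- Deleting the whole row writes one row-level tombstone:
DELETE FROM my_table WHERE partition_key = 1;

-- By contrast, deleting named columns writes one cell tombstone each:
DELETE c1, c2, c3 FROM my_table WHERE partition_key = 1;
```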
Re: Is compound index a planned feature in 3.0?
Many index-related JIRAs are open for 3.x:
Global indices: https://issues.apache.org/jira/browse/CASSANDRA-6477
Functional index: https://issues.apache.org/jira/browse/CASSANDRA-7458
Partial index: https://issues.apache.org/jira/browse/CASSANDRA-7391
On Fri, Dec 26, 2014 at 10:49 AM, ziju feng pkdog...@gmail.com wrote:
> A compound index in MongoDB is really useful for queries that involve filtering/sorting on multiple columns. I was wondering whether Cassandra 3.0 is supposed to implement this feature. When I read through JIRA, I only found features like CASSANDRA-6048 https://issues.apache.org/jira/browse/CASSANDRA-6048, which allows using multiple single-column indexes in a query by joining predicates. A compound index is more query-driven and is closer to the current application-maintained index table: it may provide better performance than a single-column index and can greatly simplify index maintenance during updates compared to an index table. Any ideas? Ziju
Re: Is compound index a planned feature in 3.0?
The global index JIRA actually mentions compound indexes, but it seems there is no JIRA created for this feature? Anyway, I think I should wait for 3.0 and see what it brings for indexing. Thanks.
On Fri, Dec 26, 2014 at 6:09 PM, DuyHai Doan doanduy...@gmail.com wrote:
> Many index-related JIRAs are open for 3.x:
> Global indices: https://issues.apache.org/jira/browse/CASSANDRA-6477
> Functional index: https://issues.apache.org/jira/browse/CASSANDRA-7458
> Partial index: https://issues.apache.org/jira/browse/CASSANDRA-7391
> On Fri, Dec 26, 2014 at 10:49 AM, ziju feng pkdog...@gmail.com wrote:
>> A compound index in MongoDB is really useful for queries that involve filtering/sorting on multiple columns. I was wondering whether Cassandra 3.0 is supposed to implement this feature. When I read through JIRA, I only found features like CASSANDRA-6048 https://issues.apache.org/jira/browse/CASSANDRA-6048, which allows using multiple single-column indexes in a query by joining predicates. A compound index is more query-driven and is closer to the current application-maintained index table: it may provide better performance than a single-column index and can greatly simplify index maintenance during updates compared to an index table. Any ideas? Ziju
Re: How many tombstones for deleted CQL row?
Great. Also, if I issue DELETE FROM my_table WHERE partition_key = xxx AND compound_key = yyy, I understand only a single tombstone will be created?
On Fri, Dec 26, 2014 at 10:59 AM, DuyHai Doan doanduy...@gmail.com wrote:
> If you issue DELETE FROM my_table WHERE partition_key = xxx, Cassandra will create a single row tombstone, not one tombstone per column, fortunately.
> On Fri, Dec 26, 2014 at 10:50 AM, Jens Rantil jens.ran...@tink.se wrote:
>> Hi, I am considering tuning the tombstone warn/error threshold. Just making sure: if I INSERT one (CQL) row populating all six columns and then DELETE the inserted row, will Cassandra write 1 range tombstone or seven tombstones (one per column plus the row marker)? Thanks, Jens
Why is reading a row so much slower than reading a column?
Hi, all: In my cf, each row has two columns: one column is a timestamp (64-bit), the other is data of roughly 500 KB. When I read the whole row, the QPS is about 30; when I read just the data column, the QPS is about 500. Why does adding such a small column to the read slow it down so much? Thanks.
Re: Why is reading a row so much slower than reading a column?
What do your CQL queries look like?
-- Jack Krupansky
On Fri, Dec 26, 2014 at 8:00 AM, yhq...@sina.com wrote:
> Hi, all: In my cf, each row has two columns: one column is a timestamp (64-bit), the other is data of roughly 500 KB. When I read the whole row, the QPS is about 30; when I read just the data column, the QPS is about 500. Why does adding such a small column to the read slow it down so much? Thanks.
Re: Counter Column
Timestamps are timezone-independent. This is a property of timestamps, not a property of Cassandra: a given moment is the same timestamp everywhere in the world. To display it in a human-readable form, you then need to know which timezone you want to represent the timestamp in; that is the information needed to convert it to local time.
On Fri, Dec 26, 2014 at 2:05 AM, Ajay ajay.ga...@gmail.com wrote:
> Hi, if the nodes of a Cassandra ring are in different timezones, could that affect counter columns, since they depend on timestamps? Thanks, Ajay
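The point that an epoch timestamp names the same instant everywhere can be sketched in Python with the standard library alone (the UTC+9 offset below is just an illustrative zone):

```python
from datetime import datetime, timezone, timedelta

# One instant, expressed in UTC.
instant_utc = datetime(2014, 12, 26, 10, 0, 0, tzinfo=timezone.utc)

# The same instant, re-expressed in a UTC+9 zone (e.g. Tokyo).
instant_tokyo = instant_utc.astimezone(timezone(timedelta(hours=9)))

# The wall-clock rendering differs...
print(instant_utc.isoformat())    # 2014-12-26T10:00:00+00:00
print(instant_tokyo.isoformat())  # 2014-12-26T19:00:00+09:00

# ...but the underlying epoch timestamp is identical.
assert instant_utc.timestamp() == instant_tokyo.timestamp() == 1419588000.0
```

So as long as the nodes' clocks are synchronized (e.g. via NTP), their configured local timezones do not matter.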
Re: Why is reading a row so much slower than reading a column?
I would suggest enabling tracing in cqlsh and seeing what it has to say. There are many things that could cause this, but I'm thinking in particular you may have a lot of tombstones that get scanned when you read the whole row and are skipped when you read just one column.
On Fri, Dec 26, 2014 at 6:05 AM, Jack Krupansky jack.krupan...@gmail.com wrote:
> What do your CQL queries look like?
> -- Jack Krupansky
> On Fri, Dec 26, 2014 at 8:00 AM, yhq...@sina.com wrote:
>> Hi, all: In my cf, each row has two columns: one column is a timestamp (64-bit), the other is data of roughly 500 KB. When I read the whole row, the QPS is about 30; when I read just the data column, the QPS is about 500. Why does adding such a small column to the read slow it down so much? Thanks.
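In case it helps, a minimal cqlsh session for this looks like the sketch below; the table and key names are placeholders, not from the thread:

```
cqlsh> TRACING ON;
cqlsh> SELECT ts, data FROM my_cf WHERE key = 'k1';
-- The trace printed after the result shows per-step latencies and
-- may report how many live vs. tombstone cells were read, which is
-- exactly the signal to compare between the two queries.
```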
any API to load large data from web into Cassandra
Hello, I am new and did not find an answer after a brief search. Please help. Thanks! J
Re: any API to load large data from web into Cassandra
Take a look at sstableloader. We use it to load 30+M rows into Cassandra. The DataStax documentation is a good start.
-- Keith Sterling, Head of Software
E: keith.sterl...@first-utility.com P: +44 7771 597 630 W: first-utility.com A: Opus 40 Business Park, Haywood Road, Warwick CV34 5AH
On Fri, Dec 26, 2014 at 7:59 PM, Joanne Contact joannenetw...@gmail.com wrote:
> Hello, I am new and did not find an answer after a brief search. Please help. Thanks! J
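A minimal invocation sketch (the host and path are placeholders; sstableloader expects a directory laid out as keyspace/table containing the SSTable files to stream):

```
# -d takes one or more initial contact points in the target cluster
sstableloader -d 192.168.1.10 /path/to/mykeyspace/mytable
```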
Fwd: Re: Why is reading a row so much slower than reading a column?
I use the Thrift interface to query the data.
> What do your CQL queries look like?
> -- Jack Krupansky
> On Fri, Dec 26, 2014 at 8:00 AM, yhq...@sina.com wrote:
>> Hi, all: In my cf, each row has two columns: one column is a timestamp (64-bit), the other is data of roughly 500 KB. When I read the whole row, the QPS is about 30; when I read just the data column, the QPS is about 500. Why does adding such a small column to the read slow it down so much? Thanks.
any code to load large data from web into Cassandra
Thank you. I did not express my question clearly. I wonder if there is sample code to load data from any website into Cassandra. For example, this webpage http://datatomix.com/?p=84 seems to use Python and tweepy to fetch data from the Twitter API in JSON format and then load it into Cassandra, but tweepy is specific to the Twitter API. Is there code that works for any website? By the way, I am not yet familiar with Python, so the answer need not be limited to Python. Thanks!
On Fri, Dec 26, 2014 at 12:46 PM, Keith Sterling keith.sterl...@first-utility.com wrote:
> Take a look at sstableloader. We use it to load 30+M rows into Cassandra. The DataStax documentation is a good start.
> -- Keith Sterling, Head of Software
> E: keith.sterl...@first-utility.com P: +44 7771 597 630 W: first-utility.com A: Opus 40 Business Park, Haywood Road, Warwick CV34 5AH
> On Fri, Dec 26, 2014 at 7:59 PM, Joanne Contact joannenetw...@gmail.com wrote:
>> Hello, I am new and did not find an answer after a brief search. Please help. Thanks! J