I need to sort data on a frequently updated column, such as the like count of an
item. The common way to get sorted data in Cassandra is to make the column being
sorted on a clustering key. However, whenever such a column is updated, we have
to delete the row with the old value and insert a new one.
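For context, here is a minimal sketch of that delete-then-insert pattern with the DataStax Java driver; the keyspace, table, and column names are made up for illustration, not taken from the original message:

import com.datastax.driver.core.*;
import java.util.UUID;

public class ResortOnLikeChange {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        // Hypothetical table: items ordered by like_count within a category,
        // so the like count has to be part of the clustering key.
        session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        session.execute("CREATE TABLE IF NOT EXISTS demo.items_by_likes ("
                + "category text, like_count int, item_id uuid, title text, "
                + "PRIMARY KEY (category, like_count, item_id)) "
                + "WITH CLUSTERING ORDER BY (like_count DESC)");

        UUID itemId = UUID.randomUUID();
        session.execute("INSERT INTO demo.items_by_likes (category, like_count, item_id, title) "
                + "VALUES ('cats', 10, " + itemId + ", 'A cat')");

        // Clustering values cannot be updated in place: bumping the like count
        // means deleting the row under the old value and inserting a new one.
        session.execute("DELETE FROM demo.items_by_likes WHERE category = 'cats' "
                + "AND like_count = 10 AND item_id = " + itemId);
        session.execute("INSERT INTO demo.items_by_likes (category, like_count, item_id, title) "
                + "VALUES ('cats', 11, " + itemId + ", 'A cat')");

        cluster.close();
    }
}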
Compound indexes in MongoDB are really useful for queries that involve
filtering/sorting on multiple columns. I was wondering whether Cassandra 3.0 is
supposed to implement this feature.
When I read through JIRA, I only found features like CASSANDRA-6048; the
index-related tickets opened for 3.x are:
Global indices: https://issues.apache.org/jira/browse/CASSANDRA-6477
Functional index: https://issues.apache.org/jira/browse/CASSANDRA-7458
Partial index: https://issues.apache.org/jira/browse/CASSANDRA-7391
On Fri, Dec 26, 2014 at 10:49 AM, ziju feng pkdog...@gmail.com wrote:
I was wondering if there is a plan to allow creating counter columns and
standard columns in the same table.
Here is my use case:
I want to use a counter to count how many users like a given item in my
application. The like count needs to be returned along with the item's details
in queries. To support this I currently have to keep the counter separate from
the non-counter columns, because the semantics of counters are
increment/decrement only (thus NOT idempotent) and they require some special
handling compared to other C* columns.
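To make the constraint concrete, here is a hedged sketch of the usual workaround today: a dedicated counter table next to a regular details table. All names here are hypothetical, not from the original message:

import com.datastax.driver.core.*;
import java.util.UUID;

public class SeparateCounterTable {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo"); // assumes keyspace 'demo' exists

        // A counter table's non-key columns must all be counters.
        session.execute("CREATE TABLE IF NOT EXISTS item_likes ("
                + "item_id uuid PRIMARY KEY, like_count counter)");
        session.execute("CREATE TABLE IF NOT EXISTS item_details ("
                + "item_id uuid PRIMARY KEY, title text, description text)");

        UUID itemId = UUID.randomUUID();
        session.execute("INSERT INTO item_details (item_id, title) VALUES (" + itemId + ", 'A cat')");

        // Counter semantics are increment/decrement only (not idempotent),
        // so a retry after a timeout may double-count.
        session.execute("UPDATE item_likes SET like_count = like_count + 1 WHERE item_id = " + itemId);

        // Returning the count with the item details takes a second read.
        Row likes = session.execute("SELECT like_count FROM item_likes WHERE item_id = " + itemId).one();
        Row details = session.execute("SELECT title FROM item_details WHERE item_id = " + itemId).one();
        System.out.println(details.getString("title") + " has " + likes.getLong("like_count") + " likes");

        cluster.close();
    }
}

Because the counter lives in its own table, returning the like count together with the item details always costs an extra read or a denormalized copy.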
On Mon, Dec 22, 2014 at 11:33 AM, ziju feng pkdog...@gmail.com wrote:
I was wondering if there is a plan to allow
on lots of data from lots
of different query paths
https://github.com/datastax/spark-cassandra-connector.
On Mon, Dec 22, 2014 at 9:22 PM, ziju feng pkdog...@gmail.com wrote:
I just skimmed through JIRA
https://issues.apache.org/jira/browse/CASSANDRA-4775 and it seems
there has been some effort
Hi all,
I was wondering if there is any plan to support syncing changes
automatically between an entity table and tables that contain denormalized
data on the server side?
I think many use cases in Cassandra require some level of denormalization.
However, there is currently little support for keeping the denormalized copies in sync.
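For illustration, this is roughly what applications do today entirely on the client side, which is the gap being described: write the entity table and the denormalized table together in a logged batch. Table and column names below are made up:

import com.datastax.driver.core.*;
import com.datastax.driver.core.utils.UUIDs;
import java.util.UUID;

public class DenormalizedWrite {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo"); // assumes keyspace 'demo' exists

        session.execute("CREATE TABLE IF NOT EXISTS users (user_id uuid PRIMARY KEY, name text)");
        session.execute("CREATE TABLE IF NOT EXISTS comments_by_item ("
                + "item_id uuid, comment_id timeuuid, user_id uuid, user_name text, body text, "
                + "PRIMARY KEY (item_id, comment_id))");

        UUID userId = UUID.randomUUID();
        UUID itemId = UUID.randomUUID();
        UUID commentId = UUIDs.timeBased();
        session.execute("INSERT INTO users (user_id, name) VALUES (" + userId + ", 'Old Name')");
        session.execute("INSERT INTO comments_by_item (item_id, comment_id, user_id, user_name, body) "
                + "VALUES (" + itemId + ", " + commentId + ", " + userId + ", 'Old Name', 'hi')");

        // Renaming the user means rewriting every denormalized copy of the name.
        // A logged batch keeps the writes atomic, but the application has to know
        // every table holding a copy; the sync logic is entirely client-side.
        BatchStatement batch = new BatchStatement(BatchStatement.Type.LOGGED);
        batch.add(new SimpleStatement("UPDATE users SET name = 'New Name' WHERE user_id = " + userId));
        batch.add(new SimpleStatement("UPDATE comments_by_item SET user_name = 'New Name' "
                + "WHERE item_id = " + itemId + " AND comment_id = " + commentId));
        session.execute(batch);

        cluster.close();
    }
}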
Hi,
I found that the WRITETIME function on a counter column returns the date/time in
milliseconds instead of microseconds, which is not mentioned in the documentation
http://www.datastax.com/documentation/cql/3.1/cql/cql_using/use_writetime.html.
It would be helpful to clarify the difference in the document.
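A quick way to see the difference, assuming hypothetical page_views (counter) and page_meta (regular) tables; on the versions discussed in this thread the counter write time reportedly comes back in milliseconds while the regular column reports microseconds:

import com.datastax.driver.core.*;

public class CounterWritetime {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo"); // assumes keyspace 'demo' exists

        session.execute("CREATE TABLE IF NOT EXISTS page_views (page text PRIMARY KEY, views counter)");
        session.execute("CREATE TABLE IF NOT EXISTS page_meta (page text PRIMARY KEY, title text)");

        session.execute("UPDATE page_views SET views = views + 1 WHERE page = 'home'");
        session.execute("INSERT INTO page_meta (page, title) VALUES ('home', 'Home')");

        // WRITETIME on a regular column is microseconds since the epoch;
        // on a counter column (per the report above) it comes back in milliseconds.
        long counterTs = session.execute(
                "SELECT WRITETIME(views) FROM page_views WHERE page = 'home'").one().getLong(0);
        long regularTs = session.execute(
                "SELECT WRITETIME(title) FROM page_meta WHERE page = 'home'").one().getLong(0);
        System.out.println("counter writetime: " + counterTs); // roughly 13 digits (ms)
        System.out.println("regular writetime: " + regularTs); // roughly 16 digits (us)

        cluster.close();
    }
}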
the "Specifying rows returned using LIMIT" section. Perhaps the documentation
needs some updates to clarify what applies to the drivers and what applies to
cqlsh?
On Wed, Jun 25, 2014 at 12:21 AM, Sylvain Lebresne sylv...@datastax.com wrote:
On Tue, Jun 24, 2014 at 1:03 AM, ziju feng pkdog
Does that mean the iterator will give me all the data instead of 10,000 rows?
On Mon, Jun 23, 2014 at 10:20 PM, DuyHai Doan doanduy...@gmail.com wrote:
With the Java Driver, set the fetchSize and use ResultSet.iterator
On 24 June 2014 at 01:04, ziju feng pkdog...@gmail.com wrote:
Hi All,
I have a wide-row table and I want to iterate through all rows under a
specific partition key. The table may contain around one million rows per
partition.
I was wondering if the default 10,000-row LIMIT applies to automatic
pagination in C* 2.0 (I'm using the DataStax driver). If so, what
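Following the fetchSize/ResultSet.iterator suggestion above, here is a minimal sketch with the DataStax Java driver 2.x; the driver pages through the whole partition transparently, so fetchSize is a page size per round trip rather than a LIMIT. Keyspace and table names are made up:

import com.datastax.driver.core.*;

public class IteratePartition {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo"); // assumes keyspace 'demo' exists

        // Hypothetical wide-row table: ~1M rows under a single partition key.
        session.execute("CREATE TABLE IF NOT EXISTS events_by_source ("
                + "source text, event_id timeuuid, payload text, "
                + "PRIMARY KEY (source, event_id))");

        // fetchSize controls how many rows come back per round trip.
        Statement query = new SimpleStatement(
                "SELECT event_id, payload FROM events_by_source WHERE source = 'sensor-1'")
                .setFetchSize(1000);

        long count = 0;
        for (Row row : session.execute(query)) { // iterating fetches further pages as needed
            count++;
        }
        System.out.println("rows read: " + count);

        cluster.close();
    }
}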
Hi All,
I was wondering if there is a planned feature in Cassandra to return the
current counter value after an update statement?
Our project is using counter columns to count, and since counter columns
cannot reside in the same table as regular columns, we have to
denormalize the counter value.
I was thinking of using the counter type in a separate pin counter table and, when I
need to update the like count, using read-after-write to get the
current value and timestamp and then denormalizing it into the pin's detail table and
board tables.
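Here is a hedged sketch of that read-after-write idea (table names are hypothetical): bump the counter, read the new value back, then write the snapshot into the detail table with an explicit timestamp so a delayed write carrying an older snapshot cannot overwrite a newer one:

import com.datastax.driver.core.*;
import java.util.UUID;

public class DenormalizeLikeCount {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo"); // assumes keyspace 'demo' exists

        session.execute("CREATE TABLE IF NOT EXISTS pin_likes (pin_id uuid PRIMARY KEY, likes counter)");
        session.execute("CREATE TABLE IF NOT EXISTS pin_details ("
                + "pin_id uuid PRIMARY KEY, description text, like_count bigint)");

        UUID pinId = UUID.randomUUID();

        // 1. Increment the counter (the only write counters support).
        session.execute("UPDATE pin_likes SET likes = likes + 1 WHERE pin_id = " + pinId);

        // 2. Read the current value back.
        long likes = session.execute(
                "SELECT likes FROM pin_likes WHERE pin_id = " + pinId).one().getLong("likes");

        // 3. Denormalize the snapshot, pinning the write timestamp so that a
        //    delayed write carrying an older snapshot loses to a newer one.
        long tsMicros = System.currentTimeMillis() * 1000;
        session.execute("UPDATE pin_details USING TIMESTAMP " + tsMicros
                + " SET like_count = " + likes + " WHERE pin_id = " + pinId);

        cluster.close();
    }
}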
Is it a viable solution in this case?
Thanks
Hello,
I'm working on data modeling for a Pinterest-like project. There are
basically two main concepts: Pin and Board, just like Pinterest, where a Pin
is an item containing an image, a description, and some other information such
as a like count, and each Board should contain a sorted list of Pins.
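For concreteness, one possible (assumed, not the poster's actual) pair of tables for this model: a pins detail table plus a pins_by_board table whose clustering order gives each board a sorted list of pins:

import com.datastax.driver.core.*;

public class PinBoardSchema {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo"); // assumes keyspace 'demo' exists

        // One row per pin with its details.
        session.execute("CREATE TABLE IF NOT EXISTS pins ("
                + "pin_id uuid PRIMARY KEY, image_url text, description text, like_count bigint)");

        // Pins denormalized per board, clustered so a board reads back as a
        // sorted list (here: newest first by a timeuuid position).
        session.execute("CREATE TABLE IF NOT EXISTS pins_by_board ("
                + "board_id uuid, position timeuuid, pin_id uuid, image_url text, description text, "
                + "like_count bigint, PRIMARY KEY (board_id, position)) "
                + "WITH CLUSTERING ORDER BY (position DESC)");

        cluster.close();
    }
}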
Thanks for your answer, I really like the update-frequency vs. read-frequency way of
thinking.
A related question is whether it is a good idea to denormalize the read-heavy
part of the data while keeping other, less frequently accessed data normalized?
Our app will have a limited number of system-managed boards
Hi all,
Is there any way to guarantee that a counter's value in materialized views,
which could be other column families with different row keys holding the
counter's value de-normalized, stays in sync with the value in its counter column
family?
Since a batch can only work as either a non-counter or a counter batch, the
counter update and the denormalized updates cannot be combined into one atomic batch.
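To illustrate the limitation, a sketch showing that the counter increment and the denormalized non-counter writes must go into separate batches, so there is no single atomic unit covering both (all table names are assumed):

import com.datastax.driver.core.*;
import java.util.UUID;

public class SplitBatches {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo"); // assumes keyspace 'demo' exists

        session.execute("CREATE TABLE IF NOT EXISTS pin_likes (pin_id uuid PRIMARY KEY, likes counter)");
        session.execute("CREATE TABLE IF NOT EXISTS pin_details (pin_id uuid PRIMARY KEY, like_count bigint)");
        session.execute("CREATE TABLE IF NOT EXISTS pins_by_board ("
                + "board_id uuid, pin_id uuid, like_count bigint, PRIMARY KEY (board_id, pin_id))");

        UUID pinId = UUID.randomUUID();
        UUID boardId = UUID.randomUUID();

        // Counter mutations may only appear in a COUNTER batch.
        BatchStatement counterBatch = new BatchStatement(BatchStatement.Type.COUNTER);
        counterBatch.add(new SimpleStatement(
                "UPDATE pin_likes SET likes = likes + 1 WHERE pin_id = " + pinId));
        session.execute(counterBatch);

        // The denormalized copies go in a separate LOGGED batch, so the counter
        // and its materialized values are never updated atomically together.
        long likes = session.execute(
                "SELECT likes FROM pin_likes WHERE pin_id = " + pinId).one().getLong("likes");
        BatchStatement loggedBatch = new BatchStatement(BatchStatement.Type.LOGGED);
        loggedBatch.add(new SimpleStatement(
                "UPDATE pin_details SET like_count = " + likes + " WHERE pin_id = " + pinId));
        loggedBatch.add(new SimpleStatement(
                "UPDATE pins_by_board SET like_count = " + likes
                + " WHERE board_id = " + boardId + " AND pin_id = " + pinId));
        session.execute(loggedBatch);

        cluster.close();
    }
}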