here's a question--for these date-based tables (i.e., a table per
day/week/month/whatever), how are they queried? If I keep 60 days' worth of
auditing data, for example, I'd need to query all 60 tables--can I do that
smoothly? Or do I have to have 60 different select statements? Is there a
way
on a specific object ID and region. Optionally, users can
restrict their query to a specific date range, which the above data model
provides.
However, we generate quite a bit of data, and we want a convenient way to
get rid of the oldest data. Since our system scales with the time of year,
we might get 50GB
. PrestoDB http://prestodb.io/
Thanks
Bobby
On May 30, 2014, at 12:09 PM, cbert...@libero.it wrote:
Hello,
I have a working cluster of Cassandra that performs very well on a high
traffic web application.
Now I need to build a backend web application to query Cassandra on many
I'm sure this is a CQL 101 question, but:
Is it possible to add MULTIPLE Rows/Columns to a single Partition in a
single CQL 3 Query / Call?
Need:
I'm trying to find the most efficient way to add multiple time series events
to a table in a single call.
Whilst most time series
Sent: Sunday, May 25, 2014 9:36 AM
To: user@cassandra.apache.org
Subject: Possible to Add multiple columns in one query ?
with different partition keys can be sent to a
coordinator node that owns that partition key, which could be any of multiple
nodes when RF > 1.
-- Jack Krupansky
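For reference, same-partition inserts like the ones discussed above can be grouped into one CQL batch; a minimal sketch, assuming a time series table named events with columns (series, ts, value) that are not from this thread:

```sql
-- UNLOGGED is appropriate when all statements target the same partition
BEGIN UNLOGGED BATCH
  INSERT INTO events (series, ts, value) VALUES ('sensor-1', now(), 10);
  INSERT INTO events (series, ts, value) VALUES ('sensor-1', now(), 11);
  INSERT INTO events (series, ts, value) VALUES ('sensor-1', now(), 12);
APPLY BATCH;
```

Because every statement shares the partition key 'sensor-1', the whole batch is routed to one replica set and applied as a single mutation.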
From: Mark Farnan
Sent: Sunday, May 25, 2014 9:36 AM
To: user@cassandra.apache.org
Subject: Possible to Add multiple columns in one query
Calling execute the second time runs the query a second time, and it looks like
the query mutates instance state during the pagination.
What happens if you only call execute() once ?
Cheers
Aaron
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder Principal Consultant
Apache
I think there are several issues in your schema and queries.
First, the schema can't efficiently return the single newest post for every
author. It can efficiently return the newest N posts for a particular
author.
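The per-author query the reply describes might look like this (a sketch against the posts table quoted in this thread; the LIMIT value is arbitrary):

```sql
-- newest N posts for one particular author
SELECT * FROM posts
WHERE author = 'john'
ORDER BY created_at DESC
LIMIT 10;
```

The same effect can be had without ORDER BY at query time by declaring WITH CLUSTERING ORDER BY (created_at DESC) on the table.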
On Fri, May 16, 2014 at 11:53 PM, 後藤 泰陽 matope@gmail.com wrote:
But I
to query the first 1 columns for each partitioning key in CQL3.
For example:
create table posts(
author ascii,
created_at timeuuid,
entry text,
primary key(author,created_at)
);
insert into posts(author,created_at,entry) values
('john',minTimeuuid('2013-02-02 10:00
Jonathan Lacefield
Solutions Architect, DataStax
(404) 822 3487
http://www.linkedin.com/in/jlacefield
http://www.datastax.com/cassandrasummit14
Hi, All,
I use astyanax 1.56.48 + Cassandra 2.0.6 in my test code and do a query
like this:
query = keyspace.prepareQuery(..).getKey(...)
.autoPaginate(true)
.withColumnRange(new RangeBuilder().setLimit(pageSize).build());
ColumnList<IndexColumnName> result;
result = query.execute
Hi all,
I'm using Cassandra 2.0.6 and I have 8 nodes. I'm doing some tests using the
operations below:
disable gossip of node A;
check the status with nodetool on another node; node A is Down now;
use cqlsh to connect to an Up node and create a table;
enable gossip of node A;
check the status; all nodes
This just happened, is this fixed in 2.0.7?
cqlsh:tap> select * from setting;
Bad Request: unconfigured columnfamily setting
cqlsh:tap> select * from settings;
name | value
------+------
Hi.
I am using Cassandra version 2.0.6. There is a case where a select query
returns the wrong value if the DESC option is used. My test procedure is as follows:
--
cqlsh:test> CREATE TABLE mytable (key int, range int, PRIMARY KEY (key,
range));
cqlsh:test> INSERT INTO mytable (key, range) VALUES
Consider filing a JIRA. CQL is the standard interface to Cassandra;
everything is heavily tested.
On Thursday, March 13, 2014, Katsutoshi Nagaoka nagapad.0...@gmail.com
wrote:
Hi.
I am using Cassandra version 2.0.6. There is a case where a select query
returns the wrong value if the DESC option is used. My
Hi, experts,
I need to query all columns of a row in a column family that meet some
conditions (see below). The columns are composite columns and have the following
format:
component1component2component3... where componentN has String type.
What I want to do is find all the columns that meet
Can anyone suggest how to query on a blob column via CQL3? I get a bad request
error saying it cannot parse the data. I want to look up on the key column, which
is defined as blob.
But I am able to look up the data via the opscenter data explorer. Are there
conversion functions I need to use?
Sent from my Galaxy
Did you try http://cassandra.apache.org/doc/cql3/CQL.html#blobFun ?
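The blob conversion functions linked above allow queries along these lines (a sketch; the table and column names are assumed, not from this thread):

```sql
-- assuming: CREATE TABLE items (key blob PRIMARY KEY, data text);

-- convert a text value to blob for the lookup
SELECT * FROM items WHERE key = textAsBlob('some-key');

-- or use a hex blob literal directly
SELECT * FROM items WHERE key = 0x736f6d652d6b6579;
```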
On 2/28/14, 9:14, Senthil, Athinanthny X. -ND wrote:
Anyone can suggest how to query on blob column via CQL3. I get bad
request error saying cannot parse data. I want to lookup on key column
which is defined as blob.
But I
Hi Steve,
It looks like it will be pretty easy for us to do some testing with the
new client version. I'm going to give it a shot and keep my fingers
crossed.
Thanks again,
Chap
On 5 Feb 2014, at 18:10, Steven A Robenalt wrote:
Hi Chap,
If you have the ability to test the 2.0.0rc2
reads/sec according to OpsCenter.
The failures don't seem to be related to any changes in load. A single
query repeated from CQLSH (about once a second or so) will fail
approximately one out of ten times. I do see an increase in the average
read latency around the time of the failure, though it's unclear if that's
from the single failed request or if others are affected. This seems to happen
Hi Steve,
Thanks for the reply. After all that information in my initial message, I
still forgot one of the most important bits. We're running Cassandra
2.0.3 with the 1.0.4 version of the DataStax driver. I'd seen mention
of those timeouts under earlier 2.x versions and really hoped they
Hi Chap,
If you have the ability to test the 2.0.0rc2 driver, I would recommend
doing so, even from a dedicated test client or a JUnit test case. There are
other benefits to the change, such as being able to use BatchStatements,
aside from possible impact on your read timeouts.
Steve
On Wed,
Hi ,
I have a 4 node cassandra cluster with one node marked as seed node. When i
checked the data directory of seed node , it has two folders
/keyspace/columnfamily.
But sstable db files are not available.the folder is empty.The db files are
available in remaining nodes.
I want to know
I'm guessing it's just a coincidence. As far as I know, seeds have nothing
to do with where the data should be located.
I think there could be a couple of reasons why you wouldn't see SSTables in a
specific column family folder; these are some of them:
- You're using a few distinct keys which none of
'Store1';*
THIS QUERY DOES NOT WORK. I get an RPC timeout error and the server logs show
an IndexOutOfBound exception (http://pastebin.com/f7qmRc0R)
Debugging code for this query, I get SliceQueryFilter [reversed=false,
slices=[[, ]], count=2147483647, toGroup = 1]; because of that it throws
Hi all,
I've spent a lot of time finding a bug in my system, but it turns out that the
problem is in Cassandra.
Here is how to reproduce.
=
CREATE KEYSPACE IF NOT EXISTS test_set WITH REPLICATION = { 'class' :
'SimpleStrategy', 'replication_factor' : 1 };
USE test_set;
CREATE TABLE IF
Vladimir,
Thanks. What version of Cassandra?
-Roger
From: Vladimir Prudnikov [mailto:v.prudni...@gmail.com]
Sent: Monday, January 13, 2014 11:57 AM
To: user
Subject: Problem inserting set when query contains IF NOT EXISTS.
Hi all,
I've spent a lot of time finding a bug in my system, but it turns
You CAN only supply some of the components for a slice.
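In CQL3 terms, "only some of the components" means restricting a prefix of the clustering columns; a sketch (the schema is an assumption, not from this thread):

```sql
CREATE TABLE wide (
  key text,
  c1 text,   -- first component of the composite column name
  c2 int,    -- second component
  value text,
  PRIMARY KEY (key, c1, c2)
);

-- allowed: restrict a prefix of the clustering columns (c1 only)
SELECT * FROM wide WHERE key = '123' AND c1 = 'username';

-- not allowed: restricting c2 while leaving c1 unrestricted
-- (that is the "only one component needs to match" case being asked about)
```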
On Fri, Dec 20, 2013 at 2:13 PM, Josh Dzielak j...@keen.io wrote:
Is there a way to include *multiple* column names in a slice query where
only one component of the composite column name key needs to match?
For example
Is there a way to include *multiple* column names in a slice query where
only one component of the composite column name key needs to match?
For example, if this was a single row -
username:0 | username:1 | city:0 | city:1 | other:0|
other:1
, range=1)]
The below output is desired
key=123 -- A:1, A:2 [Get first 2 composite cols for prefix 'A']
B:1, B:2, B:3 [Get first 3 composite cols for prefix 'B']
C:1 [Get the first composite col for prefix 'C']
I see that this is akin to a range-of-range query
in ('global_props', 'test_bucket'), but that gives the error in the subject.
An interesting thing: if I query for the text column then the query works,
while it does not work for the map column. Please check the two queries at
the bottom: http://pastie.org/private/fcygmm891hgg4ugyjhtjg.
Should this be modelled in a different way
.
Is there any way I can validate the count of the data loaded?
Thanks Rob, that helps!
On Fri, Oct 25, 2013 at 7:34 PM, Robert Coli rc...@eventbrite.com wrote:
On Fri, Oct 25, 2013 at 2:47 PM, srmore comom...@gmail.com wrote:
I don't know whether this is possible but was just curious, can you query
for the data in the remote datacenter with a CL.ONE
I don't know whether this is possible but was just curious, can you query
for the data in the remote datacenter with a CL.ONE ?
There could be a case where one might not have a QUORUM and would like to
read the most recent data, which includes the data from the other
datacenter. AFAIK, to reliably
On Fri, Oct 25, 2013 at 2:47 PM, srmore comom...@gmail.com wrote:
I don't know whether this is possible but was just curious, can you query
for the data in the remote datacenter with a CL.ONE ?
A coordinator at CL.ONE picks which replica(s) to query based in large part
on the dynamic snitch
Hi all, when using the java-driver I see this error on the client, for
reads (as well as for writes).
Many of the ops succeed, however I do see a significant amount of errors.
com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra
timeout during write query at consistency ONE (1
Thanks for the reply. Isn't the addColumn(IColumn col) method in the writer
private though?
Yes, but I thought you had it in your examples; it was included for completeness.
Use the official overloads.
Cheers
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder Principal
org.apache.cassandra.thrift.Column column; // initialize this with name,
value, timestamp, TTL
This is the wrong object to use.
One overload of addColumn() accepts IColumn, which is from
org.apache.cassandra.db. The thrift classes are only used for the thrift API.
What is the difference
Thanks for the reply. Isn't the addColumn(IColumn col) method in the writer
private though? I know what to do now in order to construct a column with a
TTL. Thanks.
On Sep 26, 2013 9:00 PM, Aaron Morton aa...@thelastpickle.com wrote:
org.apache.cassandra.thrift.Column column; // initialize
Can someone answer this doubt reg. SSTableSimpleWriter? I'd asked about
this earlier but it probably got missed. Apologies for repeating the question
(with minor additions):
Let's say I've initialized a *SSTableSimpleWriter* instance and a new
column with TTL set:
*SSTableSimpleWriter writer = new SSTableSimpleWriter( ... /* params here
*/);*
*Column column;*
What is the difference between calling *writer.addColumn()* on the column's
name and value, and
I would like to use a select count query.
It worked in Cassandra 1.2.9, but there is a situation in which it does not
work in Cassandra 2.0.0: if some row is deleted, the select count query seems
to return the wrong value.
Did anything change in Cassandra 2.0.0, or have I made a mistake
:52 PM
*To:* user@cassandra.apache.org
*Subject:* Re: Read query slows down when a node goes down
What is your replication factor? Do you have a multi-DC deployment? Also, are
you using vnodes?
On Sun, Sep 15, 2013 at 7:54 AM, Parag Patel parag.pa...@fusionts.com
wrote
...@gmail.com]
*Sent:* Monday, September 16, 2013 1:10 PM
*To:* user@cassandra.apache.org
*Subject:* Re: Read query slows down when a node goes down
For how long do the read latencies go up once a machine is down? It
takes a configurable amount of time for machines to detect
Hi,
We have a six node cluster running DataStax Community Edition 1.2.9. From our
app, we use the Netflix Astyanax library to read and write records into our
cluster. We read and write with QUORUM. We're experiencing an issue where,
when a node goes down, we see our read queries slowing down
What is your replication factor? Do you have a multi-DC deployment? Also, are
you using vnodes?
On Sun, Sep 15, 2013 at 7:54 AM, Parag Patel parag.pa...@fusionts.com wrote:
Hi,
We have a six node cluster running DataStax Community Edition 1.2.9. From
our app, we use the Netflix
Hi,
I have a data modelling question. I'm modelling for a use case where an
object can have multiple facets and each facet can have multiple revisions, and
the query pattern looks like: get the latest 'n' revisions for all facets of an
object (n=1,2,3). With a table like below:
create table
SlicePredicate only supports "N" columns. So you need to query one facet at a
time, OR you can query m columns such that it returns n revisions. You may need
intelligence to increase or decrease m columns heuristically.
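In CQL3, the query-one-facet-at-a-time approach might look like this (a sketch; table and column names are assumptions, not from this thread):

```sql
CREATE TABLE facet_revisions (
  object_id text,
  facet text,
  revision timeuuid,
  data text,
  PRIMARY KEY ((object_id, facet), revision)
) WITH CLUSTERING ORDER BY (revision DESC);

-- latest n revisions (here n = 3) for one facet of one object;
-- repeat per facet to cover "all facets"
SELECT * FROM facet_revisions
WHERE object_id = 'obj1' AND facet = 'facetA'
LIMIT 3;
```

Making (object_id, facet) the partition key keeps each facet's revisions in one wide row sorted newest-first, so the per-facet LIMIT is a cheap sequential read.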
From: ravi prasad
Sent: Tuesday, July 30, 2013 8:11 PM
Too bad Rainbird isn't open sourced yet!
It's been 2 years; I would not hold your breath.
I remembered there are two open source time series projects out there:
https://github.com/deanhiller/databus
https://github.com/Pardot/Rhombus
Cheers
-
Aaron Morton
Cassandra Consultant
New
For background on rollup analytics:
Twitter Rainbird
http://www.slideshare.net/kevinweil/rainbird-realtime-analytics-at-twitter-strata-2011
Acunu http://www.acunu.com/
Cheers
-
Aaron Morton
Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On
Thanks Aaron.
Too bad Rainbird isn't open sourced yet!
On Tue, Jul 23, 2013 at 4:48 AM, aaron morton aa...@thelastpickle.comwrote:
For background on rollup analytics:
Twitter Rainbird
http://www.slideshare.net/kevinweil/rainbird-realtime-analytics-at-twitter-strata-2011
Acunu
This can be done easily.
Use a normal column family to store the sequence of events, where the key is a
session ID identifying one user interaction with a website, column names
are TimeUUID values, and the column value is the id of the event (do not write
something like user added product to shopping cart, something
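In CQL3 terms, the layout described above might be sketched like this (table and column names are assumptions, not from this thread):

```sql
CREATE TABLE session_events (
  session_id uuid,     -- one user interaction with the website
  ts timeuuid,         -- the TimeUUID column name
  event_id int,        -- id of the event type, not a free-form description
  PRIMARY KEY (session_id, ts)
);

-- all events of one session, in time order
SELECT ts, event_id FROM session_events
WHERE session_id = 00000000-0000-0000-0000-000000000001;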
.
If you consider that it also depends on the number of nodes in the cluster, the
memory available, and the number of rows and columns the query needs, the
problem of how to optimally divide a request becomes quite complex.
It sounds like you are targeting single read thread performance.
If you
Would cassandra be a good choice for creating a funnel analytics type
product similar to mixpanel?
e.g. You create a set of events and store them in cassandra for things
like:
event#1 user visited product page
event#2 user added product to shopping cart
event#3 user clicked on checkout page
and higher overhead.
If you consider that it also depends on the number of nodes in the cluster, the
memory available, and the number of rows and columns the query needs, the
problem of how to optimally divide a request becomes quite complex.
Do these numbers make sense to you?
Cheers
2013/7/17 aaron
Hi Rob,
of course, we could issue multiple requests, but then we should consider
the optimal way to split the query into smaller ones. Moreover, we
should choose how many sub-queries to run in parallel.
In our tests, we found there's a significant performance difference
between various
This would seem to conflict with the advice to only use secondary indexes on
fields with low cardinality, not high cardinality. I guess low cardinality is
good, as long as it isn't /too/ low?
My concern is seeing people in the wild create secondary indexes with low
cardinality that generate
On 17/07/2013, at 8:24 PM, cesare cugnasco cesare.cugna...@gmail.com wrote:
Hi Rob,
of course, we could issue multiple requests, but then we should consider
the optimal way to split the query into smaller ones. Moreover, we
should choose how many sub-queries
when
increasing the amount of rows and columns required in a single query.
Where does this limit come from?
Taking a quick look at the code, it seems the entry point is stressed
because it has to keep all the responses in memory. Only after it has
received all the responses from the nodes
On Tue, Jul 16, 2013 at 4:46 AM, cesare cugnasco
cesare.cugna...@gmail.comwrote:
We are working on porting some life science applications to Cassandra,
but we have to deal with its limits managing huge queries. Our queries are
usually multiget_slice ones: many rows with many columns each.
Couple of questions about the test setup:
- are you running the tests in parallel (via threadCount in surefire
or failsafe, for example)?
- is the instance of cassandra per-class or per-jvm? (or is fork=true?)
On Sun, Jul 14, 2013 at 5:52 PM, Tristan Seligmann
mithra...@mithrandi.net wrote:
On
Aaron Morton can confirm, but I think one problem could be that creating an
index on a field with a small number of possible values is not good.
Yes.
In cassandra each value in the index becomes a single row in the internal
secondary index CF. You will end up with a huge row for all the values
On Mon, Jul 15, 2013 at 12:26 AM, aaron morton aa...@thelastpickle.comwrote:
Aaron Morton can confirm, but I think one problem could be that creating
an index on a field with a small number of possible values is not good.
Yes.
In cassandra each value in the index becomes a single row in the
On Fri, Jul 12, 2013 at 10:38 AM, aaron morton aa...@thelastpickle.comwrote:
CREATE INDEX ON conv_msgdata_by_participant_cql(msgReadFlag);
On general this is a bad idea in Cassandra (also in a relational DB IMHO).
You will get poor performance from it.
Could you elaborate on why this is
Aaron Morton can confirm, but I think one problem could be that creating an
index on a field with a small number of possible values is not good.
Regards,
Shahab
On Sat, Jul 13, 2013 at 9:14 AM, Tristan Seligmann
mithra...@mithrandi.netwrote:
On Fri, Jul 12, 2013 at 10:38 AM, aaron morton
Hi,
I've been tearing my hair out trying to figure out why this
query fails. In fact, it only fails on machines with slower CPUs and after
having previously run some other junit tests. I'm running junits against an
embedded Cassandra server, which works well in pretty much all other cases,
but this one is flaky
,
-Tony
From: Robert Coli rc...@eventbrite.com
To: Tony Anecito adanec...@yahoo.com
Cc: user@cassandra.apache.org user@cassandra.apache.org
Sent: Monday, July 8, 2013 4:27 PM
Subject: Re: Cassandra intermittant with query in 1.2.5...
On Mon, Jul 8, 2013 at 2:39 PM, Tony Anecito adanec...@yahoo.com
On Sat, Jul 6, 2013 at 10:57 PM, Tony Anecito adanec...@yahoo.com wrote:
I better understand the issue now with the secondary index query and Cassandra
1.2.5 not returning rows.
I did some more testing of the issue mentioned below and discovered a very
repeatable sequence, as follows:
1
@cassandra.apache.org; Tony Anecito adanec...@yahoo.com
Sent: Monday, July 8, 2013 3:08 PM
Subject: Re: Cassandra intermittant with query in 1.2.5...
On Sat, Jul 6, 2013 at 10:57 PM, Tony Anecito adanec...@yahoo.com wrote:
I better understand the issue now with the secondary index query and Cassandra
1.2.5
with query in 1.2.5...
Thanks Robert, I will do that. I already filed a question with the initial
info via the forum, seeing if I was doing something wrong. I did see a reference
to the issue but it was not repeatable. I am thinking there is a very serious
bug that would worry all
On Mon, Jul 8, 2013 at 2:39 PM, Tony Anecito adanec...@yahoo.com wrote:
I filed the issue in JIRA.
For those playing along at home :
https://issues.apache.org/jira/browse/CASSANDRA-5732
=Rob
Hi All,
Had problems with prepared statements not working with the Datastax driver and
JDBC driver. I discovered that when changing indexes you need to change column
family caching from All to None and then back again. The problem is that
sometimes the CLI crashes when you do that and you need to run it again. I
Hi All,
I better understand the issue now with the secondary index query and Cassandra
1.2.5 not returning rows.
I did some more testing of the issue mentioned below and discovered a very
repeatable sequence, as follows:
1. Starting state: query running with caching off for a Column
I've been running cassandra a while, and have used the PHP api and
cassandra-cli, but never gave cqlsh a shot.
I'm not quite getting it. My most simple CF is a dumping ground for
testing things created as:
create column family stats;
I was putting random stats I was computing in it. All keys,
Hey,
I created a table with a wide row. Querying the wide row after removing the
entries and flushing the table becomes very slow. I am aware of the impact
of tombstones, but it seems there is a deadlock which prevents the
query from completing.
step by step:
1. creating the keyspace
text,
ts timeuuid,
key1 text,
value int,
PRIMARY KEY (counter, ts)
)
That way *counter* will be your partitioning key, and all the rows that
have the same *counter* value will be clustered (stored as a single wide
row sorted by the *ts* value). In this scenario the query:
where
To: user@cassandra.apache.org
Subject: Re: timeuuid and cql3 query
It's my understanding that if the first part of the primary key
has low cardinality, you will struggle with cluster balance
I'm experimenting with a data model that will need to ingest a lot of data that
must be queryable by time. In the example below, I want to be able to
run a query like select * from count3 where counter = 'test' and ts >
minTimeuuid('2013-06-18 16:23:00') and ts < minTimeuuid('2013-06-18
be your clustering key. That will cause cql rows to be stored in sorted
order by the ts column (for a given value of counter) and allow you to do
the kind of query you're looking for.
--
Tyler Hobbs
DataStax http://datastax.com/
Date: Wednesday, June 19, 2013 11:00 AM
To: user@cassandra.apache.org
Subject: Re: timeuuid and cql3 query
On Wed, Jun 19, 2013 at 8:08 AM, Ryan, Brent br...@cvent.com wrote:
CREATE
: Wednesday, June 19, 2013 12:47 PM
To: user@cassandra.apache.org
Subject: Re: timeuuid and cql3 query
Tyler,
You're recommending this schema instead, correct?
CREATE TABLE count3 (
counter text,
ts timeuuid
Date: Wednesday, June 19, 2013 12:56 PM
To: user@cassandra.apache.org
Subject: Re: timeuuid and cql3 query
Here's an example of that not working:
cqlsh:Test> desc table count4;
CREATE TABLE count4 (
ts timeuuid,
counter text,
key1 text,
value int,
PRIMARY KEY (ts, counter)
) WITH
bloom_filter_fp_chance=0.01 AND
caching='KEYS_ONLY' AND
comment
sorted by the
ts value). In this scenario the query:
where counter = 'test' and ts > minTimeuuid('2013-06-18 16:23:00') and ts <
minTimeuuid('2013-06-18 16:24:00');
would actually be a sequential read on a wide row on a single node.
--
Francisco Andrades Grassi
www.bigjocker.com
@bigjocker
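Putting the fragments of this thread together, the schema and range query under discussion would look like this (a sketch assembled from the quoted pieces; the truncated column list is assumed to match the count3 fragments):

```sql
CREATE TABLE count3 (
  counter text,
  ts timeuuid,
  key1 text,
  value int,
  PRIMARY KEY (counter, ts)   -- counter partitions, ts clusters in sorted order
);

-- a sequential read over one wide row on a single node
SELECT * FROM count3
WHERE counter = 'test'
  AND ts > minTimeuuid('2013-06-18 16:23:00')
  AND ts < minTimeuuid('2013-06-18 16:24:00');
```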
Hi - We have a dynamic CF which has a key and multiple columns which get added
dynamically. For example:
Key_1, Column1, Column2, Column3, ...
Key_2, Column1, Column2, Column3, ...
Now I want to get all columns after Column3. How do we query that? The
ColumnSliceIterator in hector allows
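For comparison, the CQL3 equivalent of this slice treats the dynamic column name as a clustering column (a sketch; table and column names are assumptions, not from this thread):

```sql
CREATE TABLE dynamic_cf (
  key text,
  column_name text,   -- the dynamically added column's name
  value text,
  PRIMARY KEY (key, column_name)
);

-- all columns after Column3 for Key_1 (a slice with a start, no finish)
SELECT * FROM dynamic_cf
WHERE key = 'Key_1' AND column_name > 'Column3';
```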
hi
Can somebody tell me whether it is possible to do a multi-condition query on
cassandra, like Select * from columnfamily where name='foo' and age='21' and
timestamp = 'unixtimestamp';
Please give me some guidance for these kinds of queries.
Thank you
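One way to support a fixed multi-condition query like this in CQL3 is to make the filtered fields part of the primary key (a sketch; table and column names are assumptions, and the alternative would be a secondary index per filtered column):

```sql
CREATE TABLE users_by_name (
  name text,
  age text,
  ts bigint,        -- the unix timestamp being matched
  data text,
  PRIMARY KEY (name, age, ts)
);

-- all three conditions are primary-key restrictions, so no filtering is needed
SELECT * FROM users_by_name
WHERE name = 'foo' AND age = '21' AND ts = 1371660180;
```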