I figured out the problem. The DELETE query only works if the column used in
the WHERE clause is also the first column used to define the PRIMARY KEY.
-Thomas
From: wang liang [mailto:wla...@gmail.com]
Sent: Monday, October 22, 2012 1:31 AM
To: user@cassandra.apache.org
Subject: Re: DELETE
Ryabin, Thomas thomas.rya...@mckesson.com wrote:
I have a column family called "books", and am trying to delete all rows where the "title" column is equal to "hatchet". This is the query I am using:
DELETE FROM books WHERE title = 'hatchet';
This query is failing with this error:
Bad Request: PRIMARY KEY part title found in SET part
I am using Cassandra 1.1 and CQL 3.0. What could be the problem?
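In CQL 3 a DELETE must restrict the partition key, the first column of the PRIMARY KEY, which matches what Thomas observed. A minimal sketch of the distinction (the isbn column and sample values are assumptions, not from the original schema):

```cql
-- Hypothetical original schema: title is not the partition key,
-- so DELETE ... WHERE title = ... is rejected.
CREATE TABLE books (
    isbn text PRIMARY KEY,
    title text,
    author text
);

-- Works: the WHERE clause names the partition key.
DELETE FROM books WHERE isbn = '0-689-80882-2';

-- To delete by title, title must lead the PRIMARY KEY:
CREATE TABLE books_by_title (
    title text,
    isbn text,
    PRIMARY KEY (title, isbn)
);
DELETE FROM books_by_title WHERE title = 'hatchet';
```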
row indexing/compound primary key approach.
-Vivek
On Tue, Oct 9, 2012 at 6:20 PM, Hiller, Dean dean.hil...@nrel.gov wrote:
Another option may be PlayOrm for you and it's scalable-SQL. We queried
one million rows for 100 results in just 60ms (and it does joins). Query
CL = QUORUM.
Dean
Obviously, you don't want a partition with billions of rows, as the B-tree starts to get a bit large. In both cases, you can have as many partitions as you like: billions, trillions.
PlayOrm is just doing a range scan on your behalf. If you do a complex query like left join trade.account where account.isActive = true and trade.numShares > 50, it is doing a range scan on a few indices, but it does so in batches and eventually will do lookahead as well.
Try making user_name a primary key in combination with some other unique column and see if the results improve.
-Rishabh
From: Vivek Mishra [mailto:mishra.v...@gmail.com]
Sent: Friday, October 05, 2012 2:35 PM
To: user@cassandra.apache.org
Subject: Query over secondary indexes
I have a column family User which has an indexed column user_name. My schema has only around 0.1 million records, and user_name is duplicated across all rows.
Now when I am trying to retrieve it as:
get User where user_name = 'Vivek'
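Rishabh's suggestion can be sketched in CQL 3 roughly as follows (the table name and the columns other than user_name are assumptions):

```cql
-- user_name as the partition key, with a unique column as the
-- clustering key, instead of a secondary index on a heavily
-- duplicated, low-cardinality column.
CREATE TABLE users_by_name (
    user_name text,
    user_id uuid,
    email text,
    PRIMARY KEY (user_name, user_id)
);

-- The lookup becomes a single-partition read:
SELECT * FROM users_by_name WHERE user_name = 'Vivek';
```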
Hi Dean,
Thank you for your reply, I appreciate the help. I managed to get my data
model into Cassandra and already inserted data and ran the query, but I don't
yet have enough data to do correct benchmarking. I'm now trying to load a
huge amount of data using SSTableSimpleUnsortedWriter because doing
Good evening,
I have a quite simple data model. Pseudo CQL code:
create table bars(
    timeframe int,
    date Date,
    info1 double,
    info2 double,
    ...
    primary key( timeframe, date )
)
My most important query is (which might be the only one actually):
select * from bars where timeframe = X and date > Y
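With PRIMARY KEY (timeframe, date), rows within a timeframe partition are stored sorted by date, so this query is a single contiguous slice. A concrete sketch (the timeframe and date values are made up):

```cql
-- Latest bars first, capped to one page:
SELECT * FROM bars
 WHERE timeframe = 60 AND date > '2012-10-01'
 ORDER BY date DESC
 LIMIT 100;
```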
Subject: Simple data model for 1 simple range query?
Wouldn't that return files from directories '/tmp1', '/tmp2', for example?
I believe so.
I thought the goal was to return files and subdirectories recursively inside
'/tmp'.
I'm not sure what the purpose of the query was.
The query will return inodes where the file path starts
On Sep 18, 2012, at 3:06 AM, aaron morton aa...@thelastpickle.com wrote:
select filename from inode where filename > '/tmp' and filename < '/tmq' and
sentinel = 'x';
This is the way the query works internally. Multiget is simply a collection
of independent gets.
The multiget() is more efficient, but I'm having trouble trying to limit the
size of the data returned in order to not crash the cassandra node.
Often less is more. I would only ask for a few 10
On Sep 17, 2012, at 3:04 AM, aaron morton aa...@thelastpickle.com wrote:
I have a schema that represents a filesystem and one example of a Super CF
is:
This may help with some ideas
http://www.datastax.com/dev/blog/cassandra-file-system-design
Could you explain the usage of the sentinel?
Could you explain the usage of the sentinel?
Queries that use a secondary index must include an equality clause. That's what the
sentinel is there for:
select filename from inode where filename > '/tmp' and filename < '/tmq' and
sentinel = 'x';
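A sketch of the sentinel pattern being described (the schema and index name are assumptions, and a range over the filename row key like this also presupposes an order-preserving partitioner):

```cql
-- Every inode row stores the same sentinel value; a secondary index
-- on it supplies the required equality clause, and the filename
-- range rides along with it.
CREATE TABLE inode (
    filename text PRIMARY KEY,
    sentinel text
);
CREATE INDEX inode_sentinel_idx ON inode (sentinel);

SELECT filename FROM inode
 WHERE sentinel = 'x'
   AND filename > '/tmp' AND filename < '/tmq';
```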
Cheers
-
Aaron Morton
Freelance Developer
all the sub columns have to be
read into memory.
So if I set column_count = 1, as I have now, but fetch 1000 dirs (rows)
and each one happens to have 1 file (columns), the dataset is 1000x1.
This is the way the query works internally. Multiget is simply a collection
I may be missing something, but it looks like you pass multiple keys but
only a singular SlicePredicate
My bad.
I was probably thinking multiple gets but wrote multigets.
If Collections don't help maybe you need to support both query types using
separate CF's. Or a secondary index
query per dir, or a multiget for
all needed dirs. The multiget() is more efficient, but I'm having trouble
trying to limit the size of the data returned in order to not crash the
cassandra node.
I'm using the pycassa client lib, and until now I have been doing per-directory
get()s specifying
pondering a query where it seems like
SuperColumns might be better suited.
Consider a CF with SuperColumn layout as follows
t = {
k1: {
s1: { c1:v1, c2:v2 },
s2: { c1:v3, c2:v4 },
s3: { c1:v5, c2:v6}
...
}
...
}
Which might be modeled in CQL3:
CREATE TABLE t (
k
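The CQL3 modeling above is cut off; presumably it continues along these lines (a sketch; the column types are assumptions):

```cql
-- A supercolumn family t = { k1: { s1: { c1:v1, ... }, ... } } mapped
-- to a CQL3 table with a compound primary key: one CQL row per
-- (key, supercolumn, subcolumn) triple.
CREATE TABLE t (
    k text,
    s text,   -- supercolumn name
    c text,   -- subcolumn name
    v text,   -- value
    PRIMARY KEY (k, s, c)
);

-- "Give me supercolumn s1 of row k1":
SELECT c, v FROM t WHERE k = 'k1' AND s = 's1';
```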
There is another trick here. On the PlayOrm open source project, we need to do
a sparse query for a join, so we send out 100 async requests, cache up
the java Future objects, and return the first needed result back without
waiting for the others. With the S-SQL in PlayOrm, we have
I'm modeling a new application and considering the use of SuperColumn vs.
Composite Column paradigms. I understand that SuperColumns are discouraged
in new development, but I'm pondering a query where it seems like
SuperColumns might be better suited.
Consider a CF with SuperColumn layout
Thank you very much Aaron. Information you provided is very helpful.
Have a great Weekend!!!
swat.vikas
From: aaron morton aa...@thelastpickle.com
To: user@cassandra.apache.org
Sent: Thursday, August 16, 2012 6:29 PM
Subject: Re: wild card on query
Hi,
I am trying to run query on cassandra cluster with predicate on row key.
I have column family called Users and rows with row key like
projectid_userid_photos. Each user within a project can have rows like
projectid_userid_blog, projectid_userid_status and so on.
I want to retrieve all the photos from all the users of a certain project. My
SQL-like query would be select projectid * photos from Users. How can I run
this kind of row key predicate while executing a query on Cassandra?
You cannot / should not do that using the data model you have. (i.e. you
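One alternative model that avoids the wildcard row-key scan (a sketch; all names here are assumptions, not from the original schema):

```cql
-- Instead of packing project/user/type into the row key, make them
-- separate columns of a compound primary key.
CREATE TABLE user_content (
    project_id text,
    content_type text,   -- 'photos', 'blog', 'status', ...
    user_id text,
    data text,
    PRIMARY KEY (project_id, content_type, user_id)
);

-- "All photos from all users of a project" is one partition slice:
SELECT * FROM user_content
 WHERE project_id = 'project1' AND content_type = 'photos';
```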
Hello,
Given a row like this
key1 = (A:A:C), (A:A:B), (B:A:C), (B:C:D)
Is there a way to create a slice query that returns all columns where the
_second_ component is A? That is, I would like to get back the following
columns by asking for columns where component[0] = * and component[1
Is there a way to create a slice query that returns all columns where the
_second_ component is A?
No.
You can only get a contiguous slice of columns.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 16/08/2012, at 7:21 AM, Mike Hugo m
columns in the following page:
https://github.com/Netflix/astyanax/wiki/Examples
What I would like to query for is that last session events of each user.
(So it's like a group-by query.) Can I get this information in a single
query and would it be an efficient way to do it (regarding the schema
Sorry for the confusion created. I need to store emails registered
just for a single application, so my data model would fit
into just a single row. But is storing a hundred million columns (col
name size = 8 bytes; col value size = 4 bytes) in a single row a good idea?
I am very much
What if I spread these columns across 20 rows? Then I have to
query each of these 20 rows for 500 columns, but still this seems a
better solution than one row for all cols, or a separate row for each
email id, doesn't it?
On Fri, Jul 27, 2012 at 11:36 AM, Aklin_81 asdk...@gmail.com wrote
In general I believe wide rows (many cols ) are preferable to skinny rows
(many rows) so that you can get all the information in 1 go,
One can store 2 billion cols in a row.
However, on what basis would you store the 500 email ids in 1 row? What
can be the row key?
For e.g. If the query you want
You should probably try to break the one-row scheme into a
2*Number_of_nodes rows scheme. This should ensure proper distribution
of rows and still allow you to query from a few fixed number of rows.
How you do it depends on how you are going to choose your 200-500 columns
during reading (try having them
, Jul 23, 2012 at 3:40 PM, rohit bhatia rohit2...@gmail.com wrote:
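The 2*number_of_nodes bucketing being described could be sketched like this (16 buckets for a hypothetical 8-node cluster; the table and column names are assumptions):

```cql
-- The row key gains a bucket component derived from the column name
-- (e.g. bucket = id1 % 16), spreading one huge row across a fixed,
-- small number of rows.
CREATE TABLE id_map (
    bucket int,
    id1 bigint,
    id2 int,
    PRIMARY KEY (bucket, id1)
);

-- Reading 500 ids then touches at most 16 partitions; per bucket:
SELECT id2 FROM id_map
 WHERE bucket = 3 AND id1 IN (1001, 1007, 1042);
```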
On Mon, Jul 23, 2012 at 10:07 AM, Ertio Lew ertio...@gmail.com wrote:
My major concern is that is it too bad retrieving 300-500 rows (each for a
single column) in a single read query that I should store all these(around
a hundred million) columns in a single row?
You could create multiple
Actually these columns are 1 for each entity in my application I need to
query at any time columns for a list of 300-500 entities in one go.
On Mon, Jul 23, 2012 at 10:53 AM, Ertio Lew ertio...@gmail.com wrote:
Actually these columns are 1 for each entity in my application I need to
query at any time columns for a list of 300-500 entities in one go.
Can you describe your situation with small example?
such that I can efficiently read columns for at least 300-500 users
in a single read query.
Is the query timebased or userid based? How do you determine which users to
read first? Do you read all of them or few of them? What's the query
criteria?
It would be helpful to understand exactly how your
I want to read columns for a randomly selected list of userIds(completely
random). I fetch the data using userIds(which would be used as column names
in case of single row or as rowkeys incase of 1 row for each user) for a
selected list of users. Assume that the application knows the list of
On Mon, Jul 23, 2012 at 11:16 AM, Ertio Lew ertio...@gmail.com wrote:
I want to read columns for a randomly selected list of userIds(completely
random). I fetch the data using userIds(which would be used as column names
in case of single row or as rowkeys incase of 1 row for each user) for a
I want to store hundreds of millions of columns (containing id1 to id2
mappings) in the DB at any single time, retrieve a set of about 200-500
columns based on the column names(id1) if they are in single row or using
rowkeys if each column is stored in a unique row.
If I put them in a single
When executing a query like:
get events WHERE Firm=434550 AND ds_timestamp >= 1341955958200 AND
ds_timestamp <= 1341955958200;
what the 2ndary index implementation will do is:
1) it queries the index for Firm for the row with key 434550 (because
that's the only one restricted by an equal clause
, value=1341955958200, timestamp=1341955980651020)
If I run the following query:
get events WHERE Firm=434550 AND ds_timestamp >= 1341955958200 AND
ds_timestamp <= 1341955958200;
(which in theory should return the same 1-row result)
It runs for around 12 seconds,
And I get
Ah, it's a Hector query question.
You may have better luck on the Hector email list. Or if you can turn on debug
logging on the server and grab the query, that would be handy.
The first thing that stands out is that (in cassandra) comparison operations
are not used in a slice range.
Cheers
I think in this case that's just Hector's way of setting the EOC byte for a
component. My guess is that the composite isn't being structured correctly
through Hector, as well.
On Tue, Jul 10, 2012 at 4:40 AM, aaron morton aa...@thelastpickle.comwrote:
The first thing that stands out is that
I have tested this extensively, and EOC has a huge issue in terms of
usability of CompositeTypes in Cassandra.
As an example: If you have 2 Composite Columns such as A:B:C and A:D:C.
And if you do search on A:B as start and end Composite Components, it
will return D as well. Because it returns all
On Tue, Jul 10, 2012 at 2:20 PM, Sunit Randhawa sunit.randh...@gmail.comwrote:
I have tested this extensively and EOC has huge issue in terms of
usability of CompositeTypes in Cassandra.
As an example: If you have 2 Composite Columns such as A:B:C and A:D:C.
And if you do search on A:B as
Hello,
I have 2 Columns for a 'RowKey' as below:
#1: set CF['RowKey']['1000']='A=1,B=2';
#2: set CF['RowKey']['1000:C1']='A=2,B=3';
#2 has the Composite Column and #1 does not.
Now when I execute the Composite Slice query by 1000 and C1, I do get both the columns above.
I am hoping to get #2 only, since I am specifically providing C1 as Start and Finish Composite Range with Composite.ComponentEquality.EQUAL.
I am not sure if this is by design.
Thanks
http://code.google.com/a/apache-extras.org/p/cassandra-dbapi2/
Query substitution
Use named parameters and a dictionary of names and values.
cursor.execute("SELECT column FROM CF WHERE name=:name", dict(name="Foo"))
That may be a problem with the python driver (cassandra-dbapi2) and
you'd want
: 200
in_memory_compaction_limit_in_mb: 16 (from 64MB)
Key cache = 1
Row cache = 0
Could someone please help me on this.
Thanks
/Roshan
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Cassandra-1-0-6-data-flush-query-tp7580733.html
Sent
Hello,
CQL BATCH is good for INSERT/UPDATE performance, but it cannot use bind
variables, leaving it exposed to SQL injection.
Is there a plan to make CQL BATCH support bind variables in the near future?
e.g.
http://code.google.com/a/apache-extras.org/p/cassandra-dbapi2/
Nothing has changed in the server, try the Hector user group.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 19/06/2012, at 12:02 PM, Edward Sargisson wrote:
Hi all,
Was there a change of behaviour in multiget_slice query in Cassandra or
Hector between 0.7 and 1.1 when dealing with a key that doesn't exist?
We've just upgraded and our in memory unit test is failing (although
just on my machine). The test code is looking for a key that doesn't
version 1.0.3 we could get a Quote in around 2ms. After update we are looking
at a time of at least 2-3 seconds.
The way we query the quote is using a REVERSED SuperSliceQuery with
start=now, end=00:00:00.000 (beginning of day) LIMITED to 1.
Our investigation leads us
...@datastax.com]
Sent: Thursday, June 14, 2012 8:20 AM
To: user@cassandra.apache.org
Cc: cassandra-u...@incubator.apache.org; Schlueter, Kevin
Subject: Re: Cassandra upgrade to 1.1.1 resulted in slow query issue
That does look fishy.
Would you mind opening a ticket on jira
(https://issues.apache.org/jira
version 1.0.3 we could get a Quote in around 2ms. After
update we are looking at time of at least 2-3 seconds.
The way we query the quote is using a REVERSED SuperSliceQuery with start=now,
end=00:00:00.000 (beginning of day) LIMITED to 1.
Our investigation leads us to suspect that, since upgrade
Hi All,
I am using the Hector client for Cassandra. I wanted to know how to create a
keyspace and column family using APIs to read and write data,
or do I have to create the keyspace and column family manually using the
command line interface?
Regards
Arshad
Hi,
the Javadoc (or source code) of the me.prettyprint.hector.api.factory.HFactory
class contains all the examples to create keyspaces and column families.
To create a keyspace:
String testKeyspace = "testKeyspace";
KeyspaceDefinition newKeyspace =
Hi,
After creating the keyspace successfully, now I want to know how to read and
write data using the APIs.
Regards
Arshad
From: Filippo Diotalevi [fili...@ntoklo.com]
Sent: Wednesday, June 06, 2012 2:27 PM
To: user@cassandra.apache.org
Subject: Re: Query
Hi all,
I wanted to know how to read and write data using Cassandra APIs. Is there
any link to a sample program?
Regards
Arshad
If you are using Java try out Kundera or Hector, both are good and have good
documentation available.
From: MOHD ARSHAD SALEEM [mailto:marshadsal...@tataelxsi.co.in]
Sent: Monday, June 04, 2012 2:37 AM
To: user@cassandra.apache.org
Subject: Query
On Mon, Jun 4, 2012 at 7:36 PM, MOHD ARSHAD SALEEM
marshadsal...@tataelxsi.co.in wrote:
Hi all,
I wanted to know how to read and write data using cassandra API's . is
there any link related to sample program .
I did a Proof of Concept using a Python client, PyCassa (
Hi
I am trying to learn Cassandra and I have one doubt. I am using the Thrift API,
to count the number of row keys I am using KeyRange to specify the row keys. To
count all of them, I specify the start and end as new byte[0]. But the count
is set to 100 by default. How do I use this method to
The default count is 100; set this to some max value, but this won't guarantee
the actual count. Something like paging can help in counting: get the last key
as the start in a second query, end as null, count as some value. But this will
pull data to the client, whereas we only need the count.
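The paging idea above, expressed as CQL 3 token paging (a sketch; the Users table and key values are assumptions):

```cql
-- First page:
SELECT key FROM Users LIMIT 100;

-- Next pages: feed the last key seen back in as the new start, and
-- keep a running count client-side until a page comes back short.
SELECT key FROM Users
 WHERE token(key) > token('last_key_of_previous_page')
 LIMIT 100;
```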
Other solution may
You should read multiple batches specifying last key received from
previous batch as first key for next one.
For large databases I'd recommend a statistical approach (if it's
feasible). With the random partitioner it works well.
Don't read the whole db. Knowing whole keyspace you can read
In general read queries run on multiple nodes, but each node computes the
complete result for the query.
There is no support for aggregate queries.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 20/05/2012, at 6:49 PM, Majid Azimi wrote
hi guys,
I'm going to build a warehouse with Cassandra. There are a lot of range and
aggregate queries.
Does Cassandra support parallel query processing?(both on single box and
cluster)
2012/5/11 Dave Brosius dbros...@mebigfatguy.com
Inequalities on secondary indices are always done in memory, so without at
least one
Sorry for asking that, but why is it necessary to always have at least one EQ comparison?
[default@Keyspace1] get test where birth_year > 1985;
No indexed columns present in index clause with operator EQ
It obliges one to have a dummy indexed column to do this query:
[default@Keyspace1] get test where tag=sea and birth_year > 1985;
---
RowKey: sam
=> (column=birth_year, value=1988, timestamp=1336742346059000)
Cassandra like that?
There's also the possibility to do it in parallel in another CF, with latitude in
rows, that will be sorted, so an indexed query can give us the right
latitude range, and then just query with longitude and
What do you think of that?
thanks
2012/5/11 Dave Brosius dbros...@mebigfatguy.com
where
solr_query='body:%22sixty%20eight%20million%20nine%20hundred%20forty%20three%20thousand%20four%20hundred%20twenty%20four%22'
;
count
---
0
I have exactly one row matching this string that I can retrieve
through direct solr query.
Thanks.
me (each row is a 24 hr time
bucket).
These are the results I got using the CompositeQueryIterator (with a
modified max of 100.000 cols returned per slice) taken from the Composite
query tutorial at
http://www.datastax.com/dev/blog/introduction-to-composite-columns-part-1(code
is at
https
Can you post the details of the queries you are running, including the
methodology of the tests ?
(Here is the methodology I used to time queries previously
http://thelastpickle.com/2011/07/04/Cassandra-Query-Plans/)
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
Hi guys,
I am consistently seeing a 20% improvement in query retrieval times if I
use the composite comparator Timestamp:ID instead of ID:Timestamp where
Timestamp=Long and ID=~100 character strings. I am retrieving all columns
(~1 million) from a single row. Why is this happening?
Cheers,
Alex
Subject: composite query performance depends on component ordering
When you do a query, there's a lot of comparison happening between
what's queried and the column names. But the composite comparator is
lazy in that when it compares two names, if the first components are
not equal, it doesn't have to compare the second one. So what's likely
happening
:
org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
and the following cli query:
get tk_counters['d:9eff24f7-949f-487b-a566-0dedd07656ce'];
returns:
=> (counter=no, value=1)
=> (counter=yes, value=2)
Returned 2 results.
*Tamar Fraenkel *
Senior Software Engineer, TOK Media
ta...@tok-media.com
Tel: +972 2
', ColumnFamily='WStandard') to relieve memory pressure
Could someone please explain why I am still getting GC warnings like the above.
Many thanks.
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Cassndra-1-0-6-GC-query-tp7323457p7323457.html
Sent
As a configuration issue, I haven't enable the heap dump directory.
Is there another way to find the cause to this and identify possible
configuration changes?
Thanks.
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Cassndra-1-0-6-GC-query
--
Ben Coverston
DataStax -- The Apache Cassandra Company
Hello everyone,
I'm working on a application that uses Cassandra and has a geolocation
component.
I was wondering beside the slides and video at
http://www.readwriteweb.com/cloud/2011/02/video-simplegeo-cassandra.php that
simplegeo published regarding their strategy if anyone has implemented
often want things ordered by distance from centroid and
the query is no longer a bounding radius query - rather, it's a kNN with a
radius constraint). In any case, geohash is a reasonable starting point, at
least.
The neighbors problem is clearly explained here:
https://github.com/davetroy/geohash
I'm not sure about your first 2 questions. The third might be an exception:
check your Cassandra logs.
About the like-thing: there's no such query possibility in Cassandra / CQL.
You can take a look at Hadoop / Hive to tackle those problems.
2012/2/16 Roshan codeva...@gmail.com
Hi
I am using