Hello,
I am unclear on why deleting a row in Cassandra does not delete the row key.
Is an empty row never deleted from the column family?
It would be of great help if someone could elaborate on this.
Thanks,
Anuya
No.
You could build a custom secondary index where "column=value" is the key and
startKey and endKey are column names. Then call get_count() with a
SlicePredicate that specifies the startKey and endKey as the start and finish
column names.
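The custom index Aaron describes can be modeled in plain Python to show how the count falls out of a column-name slice (everything here is illustrative; `get_count` below merely stands in for the Thrift call of the same name):

```python
# Pure-Python model of the custom secondary index described above.
# The index row key is "column=value"; its column names are the row keys
# of the matching rows, kept in sorted order.
index = {
    'city=Paris': ['key001', 'key007', 'key042'],
}

def get_count(index_row, start, finish):
    # Mimics get_count() with a SlicePredicate bounded by (start, finish):
    # count the column names that fall inside the slice.
    return sum(1 for name in index[index_row] if start <= name <= finish)
```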
Aaron
On 5 Apr 2011, at 01:45, Donal Zang wrote:
Can we do count like this?
/count [:] where = /
--
Donal Zang
Computing Center, IHEP
19B YuquanLu, Shijingshan District,Beijing, 100049
zan...@ihep.ac.cn
86 010 8823 6018
Hi!
I would like to query, with Hector, all the rows having a specific column
defined.
For example:
- CF is a column family
- rows 1 and 3 contain columns A and B
- rows 2 and 4 contain column A only
As the result of a query on "column A", I would like to get rows 1-4, with
column A included.
I checked out #2212 and was able to reproduce the problem.
Thanks for investigating this and putting together a good script to
reproduce!
- Tyler
>> start_date = columns[0]
>> end_date = columns[7] + "z"  # add "z" to trigger the problem
>>
>> if reversed:
>>     temp = start_date
>>     start_date = end_date
>>     end_date = temp
>>
>> print "start_date =", start_date, "end_date =", end_date, "reversed =", reversed
>>
>> for it in cf.get_range(start=A_KEY, finish=A_KEY,
>>                        column_reversed=reversed, column_count=1,
>>                        column_start=start_date, column_finish=end_date):
>>     for d in it[1].iteritems():
wrote:
> First some terminology: when you say range slice, do you mean getting multiple
> rows? Or do you mean get_slice, where you return multiple super columns from
> one row?
>
> Your example looks like you want to get multiple super columns from one row.
> In which
ranges you specify for the query are not correct
when using ASCII ordering for column names, e.g.,
20031210 < 20031210022059/190209-20031210-4476885-s/z is true
20031210022059/190209-20031210-4476885-s/z < 20031210 is not true
Try appending the highest-value ASCII character to the end of 20
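The two comparisons above can be checked directly, since Python compares strings byte-wise just like Cassandra's ASCII column ordering:

```python
# Byte-wise string comparison reproduces ASCII column-name ordering.
assert "20031210" < "20031210022059/190209-20031210-4476885-s/z"
assert not ("20031210022059/190209-20031210-4476885-s/z" < "20031210")

# A prefix always sorts before any longer name that extends it, which is
# why appending a high-valued character to the finish bound widens the slice.
assert "20031210" + "~" > "20031210022059/190209-20031210-4476885-s/z"
```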
/" (+ suffix) ][attribute] = value
For example,
Order[ "100" ][ "20031210022059/190209-20031210-4476885-s/" ]
is a super column.
Because we want to scan them in the latest-first order, range slice
query with reversed order is used. (Partitioner is
ByteOrderedPartitione
>> Is this under consideration for future releases ? or being thought
>> about!?
>>
>>
>>
>> On Thu, Feb 10, 2011 at 12:56 AM, Jonathan Ellis
>> wrote:
>>> Currently there is not.
>>>
>>> On Wed, Feb 9, 2011 at 12:04 PM, Ertio Lew wrote:
>>> Is there any way to specify on per query basis(like we specify the
>>> Consistency level), what rows be cached while you're reading them,
>>> from a row_cache enabled CF. I believe, this could lead to much more
>>> efficient use of the cac
Is this under consideration for future releases, or being thought about?
On Thu, Feb 10, 2011 at 12:56 AM, Jonathan Ellis wrote:
> Currently there is not.
>
> On Wed, Feb 9, 2011 at 12:04 PM, Ertio Lew wrote:
>> Is there any way to specify on per query basis(like
Currently there is not.
On Wed, Feb 9, 2011 at 12:04 PM, Ertio Lew wrote:
> Is there any way to specify on per query basis(like we specify the
> Consistency level), what rows be cached while you're reading them,
> from a row_cache enabled CF. I believe, this could lead to much mo
Is there any way to specify, on a per-query basis (like we specify the
consistency level), which rows should be cached while you're reading them
from a row_cache-enabled CF? I believe this could lead to much more
efficient use of the cache space (if you use the same data for different
features/parts in
> 1. Some way to send requests for keys whose token fall between 0-25 to B and
> never to C even though C will have the data due to it being replica of B.
If your data set is large, be mindful of the fact that this will cause
C to be completely cold in terms of caches. I.e., when B does go down,
C
On Wed, Jan 5, 2011 at 3:37 AM, Narendra Sharma
wrote:
> What I am looking for is:
> 1. Some way to send requests for keys whose token fall between 0-25 to B and
> never to C even though C will have the data due to it being replica of B.
> 2. Only when B is down or not reachable, the request shoul
Hi,
We are working on defining the ring topology for our cluster. One of the
plans under discussion is to have RF=2 and perform read/write operations
with CL=ONE. I know this could be an issue since it doesn't satisfy R + W >
RF. This will work if we can always force the clients to go to the first
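The R + W > RF rule mentioned above reduces to a one-line check (a sketch; the function name and the example consistency levels are illustrative):

```python
def guarantees_overlap(r, w, rf):
    # Reads are guaranteed to see the latest write only if every read set
    # intersects every write set, i.e. R + W > RF.
    return r + w > rf

# RF=2 with CL=ONE on both reads and writes does not satisfy the rule,
# so stale reads are possible:
assert not guarantees_overlap(1, 1, 2)
# Writing at W=2 (all replicas) while reading at R=1 would satisfy it:
assert guarantees_overlap(1, 2, 2)
```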
Hi list,
I have recently started working with Cassandra. Through my modest efforts I
got hold of the ClientOnlyExample.java file (this file comes along with the
Cassandra code checkout). I am using this client because it does not make
socket calls to Cassandra.
As per the comment written in the file, this file d
Thanks,
>> Naren
>>
>>
>> On Mon, Dec 27, 2010 at 8:35 AM, Roshan Dawrani
>> wrote:
>>
>>> I had seen RangeSlicesQuery, but I didn't notice that I could also give a
>>> key range there.
>>>
>>> How does a KeyRange work? D
I had seen RangeSlicesQuery, but I didn't notice that I could also give a
key range there.
How does a KeyRange work? Doesn't it need some sort from the partitioner -
whether that is order preserving or not?
I couldn't be sure of a query that was based on order of the rows in th
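To the KeyRange question: with a hash-based partitioner, a key-range scan walks keys in token order, not lexical key order. A toy illustration (MD5 stands in for RandomPartitioner's hashing; the keys are made up):

```python
import hashlib

keys = ['a', 'b', 'c', 'd']

def token(key):
    # Stand-in for RandomPartitioner: token = MD5(key) as an integer.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

# A key-range scan visits rows in this order, which generally differs
# from lexical key order unless the partitioner is order-preserving.
token_order = sorted(keys, key=token)
```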
an entity and other is an index table for having the TimeUUID sorted row
> keys.
>
> I am able to query the TimeUUID columns under the super column fine. But
> now I need to go to main CF and get the data and I want the rows in the same
> time order as the keys.
>
> I am using M
Hi,
I have the following 2 column families: one is used to store full rows
for an entity, and the other is an index table holding the TimeUUID-sorted
row keys.
I am able to query the TimeUUID columns under the super column fine. But now
I need to go to the main CF and get the data, and I want the
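The reordering step this thread is after can be sketched in plain Python (all names and values are illustrative): a multiget against the main CF comes back keyed by row key, so the TimeUUID order from the index row has to be re-imposed by the client:

```python
# Row keys in the order the TimeUUID-sorted index row yields them.
index_order = ['k3', 'k1', 'k2']

# What a multiget against the main CF might return: a mapping keyed by
# row key, with no useful ordering of its own.
fetched = {'k1': 'row-1', 'k2': 'row-2', 'k3': 'row-3'}

# Re-impose the index's time order on the fetched rows.
ordered_rows = [fetched[k] for k in index_order if k in fetched]
```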
Short answer: no.
Longer answer: not yet.
On Thu, Dec 16, 2010 at 1:59 AM, Joshua Partogi wrote:
> Hi all,
>
> I really like the second index feature that has been added to 0.7 release.
> My question is, Is it possible to query using wildcards in cassandra 0.7?
>
> Thank
Hi all,
I really like the secondary index feature that has been added to the 0.7
release. My question is: is it possible to query using wildcards in
Cassandra 0.7?
Thanks for the insights.
Kind regards,
Joshua.
--
http://twitter.com/jpartogi <http://twitter.com/scrum8>
Hi,
We are using EmbeddedCassandraService inside JUnit tests (@BeforeClass); the
tests run fine with no issues. The code to start Cassandra is something like
the following:
BUT the issue is that when I try to get the data using cassandra-cli, I am
not getting any results. The data cleanup happens o
Tables model. And one thing that we can do with BigTable is
> query data using GQL. I tried looking for information about query language
> that is built on top of cassandra and ends with no luck. The only way we can
> query over data is either using Range query or Hadoop Map-Reduce. CMI
Hi,
I am still new to Cassandra, and from what I know so far Cassandra is based
on Google's BigTable model. One thing that we can do with BigTable is
query data using GQL. I tried looking for information about a query language
built on top of Cassandra and ended up with no luck. The only
should I
be doing instead?
On Mon, Nov 15, 2010 at 5:34 PM, Jonathan Ellis wrote:
> TimedOutException means the host that your client is talking to sent
> the request to another machine, which threw the logged exception and
> thus did not reply.
>
> You're doing an illega
TimedOutException means the host that your client is talking to sent
the request to another machine, which threw the logged exception and
thus did not reply.
You're doing an illegal query; token-based queries have to be on
non-wrapping ranges (left token < right token), or a wrapping
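The legality rule quoted above (non-wrapping, or the full-ring wrap on the minimum token) reduces to a small predicate (a sketch; the function name and token values are illustrative):

```python
MIN_TOKEN = 0  # illustrative minimum token for the ring

def is_legal_token_range(left, right, min_token=MIN_TOKEN):
    # Legal: non-wrapping (left token < right token), or the wrapping
    # full-ring range (min_token, min_token). Anything else is rejected.
    return left < right or (left == right == min_token)
```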
Hi
Problem:
Calling client.get_range_slices() using tokens (not keys) fails with a
TimedOutException, which I think is misleading (read on).
Server: works with a 6.5 server, but not with 6.6 or 6.8.
Client: I have tried both 6.5 and 6.6.
I am getting a TimedOutException when I do a get_ran
It's working as written, but I think you're right that it makes more
sense to fail the expression when the column doesn't exist.
On Thu, Nov 11, 2010 at 7:04 AM, Ching-Cheng Chen
wrote:
> Not sure if this the intended behavior of the indexed query.
>
> I created a colum
Not sure if this is the intended behavior of the indexed query.
I created a column family and added index on column A,B,C.
Now I insert three rows.
row1 : A=123, B=456, C=789
row2 : A=123, C=789
row3 : A=123, B=789, C=789
Now if I perform an indexed query for A=123 and B=456, both row1 and row2
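The behavior under discussion, and the fix Jonathan agrees with, can be shown against an in-memory model of the three rows (the `matches` helper is hypothetical; it implements the proposed semantics in which a missing column fails the expression):

```python
rows = {
    'row1': {'A': 123, 'B': 456, 'C': 789},
    'row2': {'A': 123, 'C': 789},            # no column B at all
    'row3': {'A': 123, 'B': 789, 'C': 789},
}

def matches(row, exprs):
    # Proposed semantics: an expression on a column the row lacks fails,
    # so row2 is excluded from an A=123 AND B=456 query.
    return all(col in row and row[col] == val for col, val in exprs.items())

hits = sorted(k for k, r in rows.items() if matches(r, {'A': 123, 'B': 456}))
```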
can you tar.gz the filter/index/data files for this sstable and attach
it to a ticket so we can debug?
if you can't make the data public you can send it to me off list and I
can have a look.
On Wed, Oct 6, 2010 at 11:37 AM, Narendra Sharma
wrote:
> Has any one used sstable2json on 0.6.5 and noti
Has anyone used sstable2json on 0.6.5 and noticed the issue I described in
my email below? This doesn't look like a data corruption issue, as
sstablekeys shows the keys.
Thanks,
Naren
On Tue, Oct 5, 2010 at 8:09 PM, Narendra Sharma
wrote:
> 0.6.5
>
> -Naren
>
>
> On Tue, Oct 5, 2010 at 6:56 PM, J
0.6.5
-Naren
On Tue, Oct 5, 2010 at 6:56 PM, Jonathan Ellis wrote:
> Version?
>
> On Tue, Oct 5, 2010 at 7:28 PM, Narendra Sharma
> wrote:
> > Hi,
> >
> > I am using sstable2json to extract row data for debugging some
> application
> > issue. I first ran sstablekeys to find the list of keys in
Version?
On Tue, Oct 5, 2010 at 7:28 PM, Narendra Sharma
wrote:
> Hi,
>
> I am using sstable2json to extract row data for debugging some application
> issue. I first ran sstablekeys to find the list of keys in the sstable. Then
> I use the key to fetch row from sstable. The sstable is from Lucand
Hi,
I am using sstable2json to extract row data for debugging an application
issue. I first ran sstablekeys to find the list of keys in the sstable. Then
I use a key to fetch the row from the sstable. The sstable is from a
Lucandra deployment. I get the following:
-bash-3.2$ ./sstablekeys Documents-37-Data
if Name_Address(name, address) is just an index, we can redirect
> the query to ID_Address(Id, address) , Name_ID(name, id) without the cost of
> maintenance.
> Does it make sense?
>
> Alvin
>
>
> 2010/9/16 Rock, Paul
> Alvin - assuming I understand what you'
the query to ID_Address(*Id*, address) , Name_ID(*name*, id)
without the cost of maintenance.
Does it make sense?
Alvin
2010/9/16 Rock, Paul
> Alvin - assuming I understand what you're after correctly, why not make a
> CF Name_Address(name, address). Modifying the Cassandra method
n index to join two CFs.
First, we see this index as a CF/SCF. The difference is I don't materialise it.
Assume we have two tables:
ID_Address(Id, address) , Name_ID(name, id)
Then, the index is: Name_Address(name, address)
When the application tries to query on Name_Address, the value of
Hello,
I am going to build an index to join two CFs.
First, we see this index as a CF/SCF. The difference is I don't materialise
it.
Assume we have two tables:
ID_Address(*Id*, address) , Name_ID(*name*, id)
Then, the index is: Name_Address(*name*, address)
When the application tries to que
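Alvin's redirection can be sketched with two in-memory lookups standing in for the two CFs (all names and values here are illustrative):

```python
# Stand-ins for the two column families.
name_id = {'alice': '1', 'bob': '2'}         # Name_ID(name -> id)
id_address = {'1': 'addr-1', '2': 'addr-2'}  # ID_Address(id -> address)

def name_address(name):
    # Answer a Name_Address query without materialising the index:
    # one read against Name_ID, then one against ID_Address.
    return id_address[name_id[name]]
```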
I'm wondering what the performance considerations are on Join-like queries.
I have a ColumnFamily that holds millions of records (not unusual as I
understand) and I want to work on them using Pig and Hadoop. Until now we
always fetched all rows in Cassandra and just filtered and worked on them.
Th
eaving out a column from an update
> >> doesn't delete it, you need to use the remove method for that.
> >>
> >> On Tue, Jul 6, 2010 at 7:41 AM, Moses Dinakaran
> >> wrote:
> >> > Hi All,
> >> >
> >> > I have a query rel
On Wed, Jul 7, 2010 at 12:16 AM, Jonathan Ellis wrote:
>>
>> insert is insert-or-update. leaving out a column from an update
>> doesn't delete it, you need to use the remove method for that.
>>
>> On Tue, Jul 6, 2010 at 7:41 AM, Moses Dinakaran
>> wr
apache.org
Subject: Re: Query on delete a column inside a super column
Hi,
Thanks for the reply,
The remove method
$cassandraInstance->remove('cache_pages_key_hash', 'hash_1')
will remove the whole key, but I don't want to do that; I need to remove
one column ins
this case.
Regards,
Moses.
On Wed, Jul 7, 2010 at 12:16 AM, Jonathan Ellis wrote:
> insert is insert-or-update. leaving out a column from an update
> doesn't delete it, you need to use the remove method for that.
>
> On Tue, Jul 6, 2010 at 7:41 AM, Moses Dinakaran
> wrot
insert is insert-or-update. Leaving out a column from an update
doesn't delete it; you need to use the remove method for that.
On Tue, Jul 6, 2010 at 7:41 AM, Moses Dinakaran
wrote:
> Hi All,
>
> I have a query related to deleting a column inside a super column
>
> The follo
Hi All,
I have a query related to deleting a column inside a super column
The following is my cassandra schema
[cache_pages_key_hash] => Array
    (
        [hash_1] => Array
            (
                [1] => 4c330e95195f9
                [2] => 4
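The distinction the thread settles on can be modeled in plain Python (the second column's value is illustrative, since it is truncated above; in pycassa the equivalent call would be roughly `remove(key, columns=[...], super_column=...)`, but treat that signature as an assumption):

```python
# In-memory model of the schema above: key -> super column -> sub-columns.
data = {
    'cache_pages_key_hash': {
        'hash_1': {'1': '4c330e95195f9', '2': '4c330e95195fa'},
    },
}

def remove_subcolumn(key, super_column, column):
    # Delete a single column inside a super column, leaving its siblings,
    # rather than removing the whole key.
    del data[key][super_column][column]

remove_subcolumn('cache_pages_key_hash', 'hash_1', '1')
```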
Hi Mike,
AFAIK Cassandra queries only on keys and not on column names; please verify.
On Tue, May 11, 2010 at 11:06 AM, Mike Malone wrote:
>
>
> On Mon, May 10, 2010 at 9:00 PM, Shuge Lee wrote:
>>
>> Hi all:
>> How to write WHERE ... LIKE query ?
>>
On Mon, May 10, 2010 at 9:00 PM, Shuge Lee wrote:
> Hi all:
>
> How to write WHERE ... LIKE query ?
> For examples(described in Python):
>
> Schema:
>
> # columnfamily name
> resources = [
># key
> 'foo': {
> # columns and value
Hi all:
How to write a WHERE ... LIKE query?
For example (described in Python):
Schema:
# columnfamily name
resources = [
# key
'foo': {
# columns and value
'url': 'foo.com',
'pushlier': 'foo',
},
'oof
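There is no native WHERE ... LIKE in this era of Cassandra, so a common workaround is to slice rows back and filter client-side. A plain-Python sketch over a dict shaped like the schema above (the 'oof' row's values are illustrative, since they are truncated in the original):

```python
resources = {
    'foo': {'url': 'foo.com', 'pushlier': 'foo'},
    'oof': {'url': 'oof.com', 'pushlier': 'oof'},
}

def where_like(cf, column, substring):
    # Emulate WHERE column LIKE '%substring%' by filtering fetched rows
    # on the client; rows missing the column never match.
    return [key for key, cols in sorted(cf.items())
            if substring in cols.get(column, '')]
```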
solve the
following cases:
1. Ability to fetch entries by applying a few filters ( like
show me only likes from a given user). This would include
range query to support pagination. So this would mean
indices on a few columns like the feed id, feed type etc.
2
searching?
>
> --
> View this message in context:
>
> http://n2.nabble.com/Lucandra-or-some-way-to-query-tp4900727p4905149.html
> Sent from the cassandra-u...@incubator.apache.org mailing list
> archive at Nabble.com.
>
>
I think Lucandra is a really great idea, but since it needs an
order-preserving partitioner, does that mean there may be some 'hot spots'
during searching?
ture, if my website grows.
>
> Thenks for your answer Eric!
>
> Jesús.
>
>
> 2010/4/14 Eric Evans
>
> On Wed, 2010-04-14 at 06:45 -0300, Jesus Ibanez wrote:
>> > Option 1 - insert data in all different ways I need in order to be
>> > able to query
> > Option 1 - insert data in all different ways I need in order to be
> > able to query?
>
> Rolling your own indexes is fairly common with Cassandra.
>
> > Option 2 - implement Lucandra? Can you link me to a blog or an article
> > that guides me on how to implement Lu
On Wed, 2010-04-14 at 06:45 -0300, Jesus Ibanez wrote:
> Option 1 - insert data in all different ways I need in order to be
> able to query?
Rolling your own indexes is fairly common with Cassandra.
> Option 2 - implement Lucandra? Can you link me to a blog or an article
> that guid
=> (column=userID, value=123, timestamp=xx)
=> (column=userID, value=456, timestamp=xx)
=> (column=userID, value=789, timestamp=xx)
This works, but is very hard to maintain in the future, and the amount of
data increases exponentially. You can do it for some data, but if I have to
do it f
tself.
Once I am done with its initial design I'll share it with you guys. If at
any moment I feel that it won't work, I will update you in that case as well.
On Sun, Apr 11, 2010 at 3:58 PM, Mark Robson wrote:
> On 11 April 2010 07:59, Lucifer Dignified wrote:
>
>> For a very simple qu
On 11 April 2010 07:59, Lucifer Dignified wrote:
> For a very simple query wherin we need to check username and password I
> think keeping incremental numeric id as key and keeping the name and value
> same in the column family should work.
>
It is highly unlikely that your app
Hi
I've been thinking of using Cassandra for our existing application, which
has a very complex RDBMS schema as of now, and we need to make a lot of
queries using joins and WHERE clauses.
Whereas we can eliminate joins by using duplicate entries, it's still hard
to query Cassandra. I have thought of
Hi All:
I am thinking a more precise query in Cassandra:
Could we have a query API like this:
List> get_slice_condition(String keyspace, List
keys, ColumnParent column_parent, Map
queryConditions, int consistency_level)
So we could use this API to query more precise data like age colum
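Client-side, the proposed get_slice_condition() amounts to a multiget followed by a condition filter. A plain-Python sketch (the function mirrors the proposed API; the row data and the 'age' condition in the test are illustrative):

```python
def get_slice_condition(rows, keys, conditions):
    # Fetch the requested keys, then keep only the rows whose columns
    # satisfy every (column, value) condition.
    out = {}
    for key in keys:
        cols = rows.get(key, {})
        if all(cols.get(col) == val for col, val in conditions.items()):
            out[key] = cols
    return out
```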
I'm not doing schema migration, but I suspect my lack of experience
and understanding of column-based data is clouding the issue. What I
have is 2 pieces of information, let's call them LH and RH and a
single long value representing the link between them, S. The data
needs to be ordered by S, so
On Wed, Mar 24, 2010 at 4:57 AM, Colin Vipurs wrote:
...
> ColumnFamily {
> 'key1' {
> 'SuperColumn1' {
> 'Column1' :
> 'Column2' :
> }
> 'SuperColumn2' {
> 'Column3' :
> }
> }
> 'key2' {
> 'SuperColumn1' {
> 'Column1' :
> }
>
On Wed, Mar 24, 2010 at 4:57 AM, Colin Vipurs wrote:
> Could I get all keys/supercolumns where 'Column1' exists? fyi I'm
> using the Hector Java client for my work.
Not server-side, no.
-Jonathan
Hi all,
I've just started playing with Cassandra and investigating if it's
useful for us, so please be gentle when I ask silly questions :).
When using super columns, is it possible to perform a slice operation to
pull out all SC's/Keys that match a specific/range of column names?
Putting it more c