> Regarding recv_get_range_slices(): my Cassandra client code always throws
> org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.BAD_SEQUENCE_ID,
> "get_range_slices failed: out of sequence response");
>
> Full source code here:
> https://raw.github.com/apache/cassandra/cassandra-1.0.12/interface/th
Hi,
Regarding recv_get_range_slices(): my Cassandra client code always throws
org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.BAD_SEQUENCE_ID,
"get_range_slices failed: out of sequence response");
Full source code here
https://raw.github.
Hi all,
I'm using YCSB to test Cassandra's performance on key-range gets. I have
installed YCSB on one node and the latest Cassandra server on another node.
Using one thread, I insert 10GB of uniformly random keys into Cassandra using
YCSB, while performing range gets (get_range_slices) (every
I'm not sure what those log messages are from. But….
> UnknownException: [host=192.168.2.13(192.168.2.13):9160, latency=11(31),
> attempts=1] SchemaDisagreementException()
Sounds a bit like:
http://wiki.apache.org/cassandra/FAQ#schema_disagreement
Cheers
-
Aaron Morton
Freelance Cassandra Developer
Hi All,
I am facing a problem while setting up my database. The error mentioned below
appears every time I try to set up the DB. I am unable to understand why these
errors are occurring; it was working fine previously, so I guess it is some
connection-related issue.
UnknownException: [host=192.168.2.1
After an upgrade to cassandra-1.0 any get_range_slices gives me:
>>>
>>> java.lang.OutOfMemoryError: Java heap space
>>> at
>>> org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:93)
>>>
On Mon, Oct 31, 2011 at 11:41 AM, Mick Semb Wever wrote:
> On Mon, 2011-10-31 at 10:08 +0100, Sylvain Lebresne wrote:
>> you can
>> trigger a "user defined compaction" through JMX on each of the sstable
>> you want to rebuild.
>
> May i ask how?
> Everything i see from NodeProbe to StorageProxy is
On Mon, Oct 31, 2011 at 11:35 AM, Mick Semb Wever wrote:
> On Mon, 2011-10-31 at 10:08 +0100, Sylvain Lebresne wrote:
>> >> I set chunk_length_kb to 16 as my rows are very skinny (typically 100b)
>> >
>> >
>> > I see now this was a bad choice.
>> > The read pattern of these rows is always in bulk
On Mon, 2011-10-31 at 10:08 +0100, Sylvain Lebresne wrote:
> you can
> trigger a "user defined compaction" through JMX on each of the sstable
> you want to rebuild.
May i ask how?
Everything i see from NodeProbe to StorageProxy is ks and cf based.
~mck
--
“Anyone who lives within their means s
On Mon, 2011-10-31 at 10:08 +0100, Sylvain Lebresne wrote:
> >> I set chunk_length_kb to 16 as my rows are very skinny (typically 100b)
> >
> >
> > I see now this was a bad choice.
> > The read pattern of these rows is always in bulk so the chunk_length
> > could have been much higher so to reduce
On Mon, Oct 31, 2011 at 9:07 AM, Mick Semb Wever wrote:
> On Mon, 2011-10-31 at 08:00 +0100, Mick Semb Wever wrote:
>> After an upgrade to cassandra-1.0 any get_range_slices gives me:
>>
>> java.lang.OutOfMemoryError: Java h
On Mon, 2011-10-31 at 08:00 +0100, Mick Semb Wever wrote:
> After an upgrade to cassandra-1.0 any get_range_slices gives me:
>
> java.lang.OutOfMemoryError: Java heap space
> at
> org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java
After an upgrade to cassandra-1.0 any get_range_slices gives me:
java.lang.OutOfMemoryError: Java heap space
at
org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:93)
at
org.apache.cassandra.io.compress.CompressionMetadata
Hey guys,
We are designing our data model for our app and this question came up.
Let's say that I have a large number of rows (say 1M) and just one column
family. Each row contains either columns (A, B, C) or (X, Y, Z). I want to
run a get_range_slices query to fetch columns (A, B, C).
Does
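For context on the semantics being asked about: get_range_slices returns every row in the key range, and the SlicePredicate filters columns within each row, not rows. So rows holding only (X, Y, Z) still come back, just with an empty column list. A minimal pure-Java sketch of that behaviour (hypothetical data and names, no Thrift involved):

```java
import java.util.*;

public class RangeSliceSemantics {
    // Simulate get_range_slices with a column-name SlicePredicate:
    // every row in the range is returned; only the named columns survive.
    static Map<String, Map<String, String>> rangeSlice(
            SortedMap<String, Map<String, String>> rows, Set<String> wanted) {
        Map<String, Map<String, String>> result = new LinkedHashMap<>();
        for (Map.Entry<String, Map<String, String>> e : rows.entrySet()) {
            Map<String, String> cols = new LinkedHashMap<>(e.getValue());
            cols.keySet().retainAll(wanted);   // column filter, not row filter
            result.put(e.getKey(), cols);      // row kept even if cols is empty
        }
        return result;
    }

    public static void main(String[] args) {
        SortedMap<String, Map<String, String>> rows = new TreeMap<>();
        rows.put("row1", new LinkedHashMap<>(Map.of("A", "1", "B", "2", "C", "3")));
        rows.put("row2", new LinkedHashMap<>(Map.of("X", "7", "Y", "8", "Z", "9")));
        // row2 still appears, with no columns
        System.out.println(rangeSlice(rows, Set.of("A", "B", "C")));
    }
}
```

Checking for an empty column list on the client side is then the way to skip the (X, Y, Z) rows.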
Hi,
we have a 4-node Cassandra (version 0.8.1) cluster with 2 CFs inside. While
the first CF is working properly (read/store), a get_range_slices query on
the second CF returns an NPE.
Any idea why this happens? Maybe some known bug, fixed in 0.8.3?
ERROR [pool-2-thread-51] 2011-08-25 15:02:04,360
The count you specify is the worst case, so if you can't even allocate
a List to handle it, you shouldn't be specifying such a high count.
Better to find that out immediately than when your data set grows in
production.
On Mon, Aug 15, 2011 at 8:15 AM, Patrik Modesto
wrote:
> On Mon, Aug 15, 2011 a
On Mon, Aug 15, 2011 at 15:09, Jonathan Ellis wrote:
> On Mon, Aug 15, 2011 at 7:13 AM, Patrik Modesto
> wrote:
>> PS: while reading the email before I'd send it, I've noticed the
>> keyRange.count =... is it possible that Cassandra is preallocating
>> some internal data according to the KeyRange.cou
On Mon, Aug 15, 2011 at 7:13 AM, Patrik Modesto
wrote:
> PS: while reading the email before I'd send it, I've noticed the
> keyRange.count =... is it possible that Cassandra is preallocating
> some internal data according to the KeyRange.count parameter?
That's exactly what it does.
--
Jonathan Ell
Hi,
on our dev cluster of 4 cassandra nodes 0.7.8 I'm suddenly getting:
ERROR 13:40:50,848 Internal error processing get_range_slices
java.lang.OutOfMemoryError: Java heap space
at java.util.ArrayList.<init>(ArrayList.java:112)
011, at 5:00 PM, Yang wrote:
>
>> our keyspace is really not that big,
>> about 1million rows, each about 500 bytes
>>
>> but doing a get_range_slices() on the entire key range gives OOM
>> errors (I bumped up the -Xmx arg now, still trying, but
>> giving such a la
0 bytes
>
> but doing a get_range_slices() on the entire key range gives OOM
> errors (I bumped up the -Xmx arg now, still trying, but
> giving such a large chunk of data in one RPC call is still bad), so
> that leaves me the option to return the entire ks "page by page"
our keyspace is really not that big,
about 1million rows, each about 500 bytes
but doing a get_range_slices() on the entire key range gives OOM
errors (I bumped up the -Xmx arg now, still trying, but
giving such a large chunk of data in one RPC call is still bad), so
that leaves me the option to
wrote:
> On Fri, Jun 24, 2011 at 10:21 AM, karim abbouh wrote:
> > i want the get_range_slices() function to return records sorted (ordered) by the
> > key (rowId) used during insertion.
> > is it possible?
>
> You will have to use the OrderPreservingPartitioner. This i
2011 12h40
Subject: Re: Re: Re: get_range_slices result
First thing is you really should upgrade from 0.6, the current release is 0.8.
Info on time uuid's
http://wiki.apache.org/cassandra/FAQ#working_with_timeuuid_in_java
If you are using a higher level client like Hector or Pelops it will
.apache.org"
> Sent: Monday 27 June 2011 17h59
> Subject: Re: Re: get_range_slices result
>
> i used TimeUUIDType as type in storage-conf.xml file
>
>
> and i used it as comparator in my java code,
> but in the execution i get exception :
> Erreur --java.io.U
can i have an example of using TimeUUIDType as a comparator in client
java code?
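As a rough illustration of what the FAQ link above covers: a time-based (version 1) UUID can be assembled by hand with plain java.util.UUID. This is only a sketch; the clockSeq and node arguments here are fake placeholder constants, and a real client should use a TimeUUID library as the FAQ suggests:

```java
import java.util.UUID;

public class TimeUuid {
    // Offset between the UUID epoch (1582-10-15) and the Unix epoch, in ms.
    private static final long UUID_EPOCH_OFFSET_MS = 12219292800000L;

    // Build a version-1 (time-based) UUID from a Unix timestamp.
    // clockSeq and node are placeholder values for illustration only.
    static UUID timeUuid(long unixMillis, long clockSeq, long node) {
        long t = (unixMillis + UUID_EPOCH_OFFSET_MS) * 10000; // 100ns intervals
        long msb = ((t & 0xFFFFFFFFL) << 32)        // time_low
                 | (((t >> 32) & 0xFFFFL) << 16)    // time_mid
                 | 0x1000L                          // version 1
                 | ((t >> 48) & 0x0FFFL);           // time_hi
        long lsb = 0x8000000000000000L              // RFC 4122 variant bits
                 | ((clockSeq & 0x3FFFL) << 48)
                 | (node & 0xFFFFFFFFFFFFL);
        return new UUID(msb, lsb);
    }

    public static void main(String[] args) {
        UUID u = timeUuid(System.currentTimeMillis(), 0x1234, 0xCAFEBABEL);
        System.out.println(u + " version=" + u.version()); // version=1
    }
}
```

Such UUIDs sort by their embedded timestamp under TimeUUIDType, which is the whole point of using it as a comparator.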
From: karim abbouh
To: "user@cassandra.apache.org"
Sent: Monday 27 June 2011 17h59
Subject: Re: Re: get_range_slices result
i used TimeUUIDType as type
@cassandra.apache.org
Cc: karim abbouh
Sent: Friday 24 June 2011 11h25
Subject: Re: Re: get_range_slices result
You can get the best of both worlds by repeating the key in a column,
and creating a secondary index on that column.
On Fri, Jun 24, 2011 at 1:16 PM, Sylvain Lebresne wrote:
>
You can get the best of both worlds by repeating the key in a column,
and creating a secondary index on that column.
On Fri, Jun 24, 2011 at 1:16 PM, Sylvain Lebresne wrote:
> On Fri, Jun 24, 2011 at 10:21 AM, karim abbouh wrote:
>> i want get_range_slices() function returns recor
On Fri, Jun 24, 2011 at 10:21 AM, karim abbouh wrote:
> i want the get_range_slices() function to return records sorted (ordered) by the
> key (rowId) used during insertion.
> is it possible?
You will have to use the OrderPreservingPartitioner. This is not
without inconvenience, however
i want the get_range_slices() function to return records sorted (ordered) by the
key (rowId) used during insertion.
is it possible?
From: aaron morton
To: user@cassandra.apache.org
Sent: Thursday 23 June 2011 20h30
Subject: Re: get_range_slices result
Not sure
Not sure what your question is.
Does this help ? http://wiki.apache.org/cassandra/FAQ#range_rp
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 23 Jun 2011, at 21:59, karim abbouh wrote:
> how can get_range_slices() funct
how can the get_range_slices() function return keys in sorted order?
BR
>
> -- error --
>
> org.apache.thrift.TApplicationException: Internal error processing
> get_range_slices
> at
> org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
> at
> org.apache.cassandra.thrift.Cassandra$Clien
Nvm. Found the answer in the FAQ :P It is normal.
Thx,
Jason
On Fri, Mar 25, 2011 at 1:24 AM, Jason Harvey wrote:
> I am running a get_range_slices on one of my larger CFs. I am then
> running a 'get' call on each of those keys. I have run into 50 or so
> keys that were re
I am running a get_range_slices on one of my larger CFs. I am then
running a 'get' call on each of those keys. I have run into 50 or so
keys that were returned in the range, but get a NotFound when called
against 'get'.
I repeated the range call to ensure they weren't
What are you using for the SlicePredicate with get_range_slices() ? What sort
of performance are you getting for each request (client and server side)?
Even if you are asking for zero columns, there is still a lot of work to be
done when performing a range scan. e.g. Each SSTable must be
Hey all,
I'm trying to get a list of all the rows from a column family using
get_range_slices retrieving no actual columns. I expected this operation to be
pretty quick, but it seems to take a while (5-node 0.7.0 cluster takes 20 min
to page through 60k keys 1000 at a time). It&
re efficient expression
of the former.
On Fri, Feb 4, 2011 at 2:26 AM, Patrik Modesto wrote:
> Hi!
>
> I'm getting tombstones from get_range_slices(). I know that's normal.
> But is there a way to know that a key is tombstone? I know tombstone
> has no columns but I can cr
Hi!
I'm getting tombstones from get_range_slices(). I know that's normal.
But is there a way to know that a key is tombstone? I know tombstone
has no columns but I can create a row without any columns that would
look like a tombstone in get_range_slices().
Regards,
Patrik
>>
>> On Tue, Jan 25, 2011 at 2:59 PM, Nick Santini wrote:
>>
>>> Hi,
>>> I'm trying a test scenario where I create 100 rows in a CF, then
>>> use get_range_slices to get all the rows, and I get 100 rows, so far so good
>>> then after the test I de
Jan 25, 2011 at 2:59 PM, Nick Santini wrote:
>
>> Hi,
>> I'm trying a test scenario where I create 100 rows in a CF, then
>> use get_range_slices to get all the rows, and I get 100 rows, so far so good
>> then after the test I delete the rows using "remove"
Yes. See this http://wiki.apache.org/cassandra/FAQ#range_ghosts
-Naren
On Tue, Jan 25, 2011 at 2:59 PM, Nick Santini wrote:
> Hi,
> I'm trying a test scenario where I create 100 rows in a CF, then
> use get_range_slices to get all the rows, and I get 100 rows, so far so good
>
Hi,
I'm trying a test scenario where I create 100 rows in a CF, then
use get_range_slices to get all the rows, and I get 100 rows, so far so good
then after the test I delete the rows using "remove" but without a column or
super column, this deletes the row, I can confirm that cos
or set the end key to "com.googlf"
On 12 January 2011 02:49, Aaron Morton wrote:
> If you were using OPP and get_range_slices then set the start_key to be
> "com.google" and the end_key to be "". Get slices of say 1,000 (use the
> last key read as t
ou were using OPP and get_range_slices then set the start_key to be
> "com.google" and the end_key to be "". Get slices of say 1,000 (use the
> last key read as the next start_key) and when you see the first key that does
> not start with com.google stop making calls.
> I
If you were using OPP and get_range_slices then set the start_key to be "com.google" and the end_key to be "". Get slices of say 1,000 (use the last key read as the next start_key) and when you see the first key that does not start with com.google stop making calls. If you mov
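The "com.googlf" trick mentioned above is simply "bump the last character of the prefix" to form an exclusive upper bound for the scan. A sketch in plain Java (the helper name is made up):

```java
public class PrefixBound {
    // Smallest string greater than every key that starts with `prefix`,
    // formed by incrementing the prefix's last character.
    // Assumes the last character is not Character.MAX_VALUE.
    static String endKeyFor(String prefix) {
        int last = prefix.length() - 1;
        char bumped = (char) (prefix.charAt(last) + 1);
        return prefix.substring(0, last) + bumped;
    }

    public static void main(String[] args) {
        // With OPP: scan from start_key = "com.google" up to this end key.
        System.out.println(endKeyFor("com.google")); // com.googlf
    }
}
```

Every key with the prefix sorts strictly below the bumped string, so the scan can also stop as soon as a returned key is >= that bound.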
On Wed, Jan 12, 2011 at 7:41 AM, Koert Kuipers <
koert.kuip...@diamondnotch.com> wrote:
> Ok I see get_range_slice is really only useful for paging with RP...
>
> So if I were using OPP (which I am not) and I wanted all keys starting with
> "com.google", what should my start_key and end_key be?
>
t: Tuesday, January 11, 2011 9:02 PM
To: user
Subject: Re: how to do a get_range_slices where all keys start with same string
http://wiki.apache.org/cassandra/FAQ#range_rp
also, start==end==x means "give me back exactly row x, if it exists."
IF you were using OPP you'd need end=y.
On Tue
http://wiki.apache.org/cassandra/FAQ#range_rp
also, start==end==x means "give me back exactly row x, if it exists."
IF you were using OPP you'd need end=y.
On Tue, Jan 11, 2011 at 7:45 PM, Koert Kuipers
wrote:
> I would like to do a get_range_slices for all keys (which are str
7:45 PM, Koert Kuipers <
koert.kuip...@diamondnotch.com> wrote:
> I would like to do a get_range_slices for all keys (which are strings)
> that start with the same substring x (for example “com.google”). How do I do
> that?
>
> start_key = x and end_key = x doesn’t seem to do the job…
>
> thanks koert
>
>
>
I would like to do a get_range_slices for all keys (which are strings) that
start with the same substring x (for example "com.google"). How do I do that?
start_key = x and end_key = x doesn't seem to do the job...
thanks koert
yes, it looks like the workaround of using an initial token of 1 works.
thanks,
-mike
On Dec 23, 2010, at 3:47 PM, Jonathan Ellis wrote:
> On Thu, Dec 23, 2010 at 3:00 PM, mike dooley wrote:
>> DEBUG [pool-1-thread-4] 2010-12-23 12:54:26,958 StorageProxy.java (line 597)
>> restricted ranges fo
On Thu, Dec 23, 2010 at 3:00 PM, mike dooley wrote:
> DEBUG [pool-1-thread-4] 2010-12-23 12:54:26,958 StorageProxy.java (line 597)
> restricted ranges for query [0,0] are [[0,0]]
This is the bug. It's not going to query the remote node unless
85070591730234615865843651857942052864 is part of the
thanks, -mike
On Dec 22, 2010, at 1:42 PM, Jonathan Ellis wrote: what do you see in the logs during the list command at debug level?
On Tue, Dec 21, 2010 at 5:01 PM, mike dooley <doo...@apple.com> wrote: hi, i am using version 0.7-rc2 and pelops-c642967 from java. when i try to export all the data in a column family i don't get all of the data that was insert
was inserted. i suspect that this points to an underlying problem with
> the get_range_slices method.
> i can reproduce the problem just using the command line interface
> as follows:
> 1) create a 2 node cluster using the default cassandra.yml with these
> changes:
> * set
using version 0.7-rc2 and pelops-c642967 from java. when i try
> to export all the data in a column family i don't get all of the data that
> was inserted. i suspect that this points to an underlying problem with
> the get_range_slices method.
>
> i can reproduce the p
hi,
i am using version 0.7-rc2 and pelops-c642967 from java. when i try
to export all the data in a column family i don't get all of the data that
was inserted. i suspect that this points to an underlying problem with
the get_range_slices method.
i can reproduce the problem just usin
Looks like the patch that introduced that bug was added in 0.6.6 and wasn't
fixed until 0.6.8 so yes I'd say that is your problem with get_range_slices.
Is there a reason you can't update?
For nodetool ring, if every node in your cluster is not showing one of the
nodes in the ring,
Is this https://issues.apache.org/jira/browse/CASSANDRA-1722 related?
From: Rajat Chopra [mailto:rcho...@makara.com]
Sent: Wednesday, December 15, 2010 9:45 PM
To: user@cassandra.apache.org
Subject: get_range_slices does not work properly
Hi!
Using v0.6.6, I have a 16 node cluster.
One
Hi!
Using v0.6.6, I have a 16 node cluster.
One column family has 16 keys (corresponding to node number) but only 9 get
listed with get_range_slices with a predicate and a key_range with empty start
and end.
When I do a get_slice with one of the keys that I know is there (but not listed
by
No, it's not the columns, but the rows.
These are the keys of the rows.
--
zangds
2010-11-05
-
From: Stu Hood
Date: 2010-11-05 17:07:43
To: user
Cc:
Subject: RE: how does get_range_slices work?
What column comparator/type are you using? Remember that if you are using
BytesType/UTF8Type, columns will be sorted lexicographically.
-Original Message-
From: "zangds"
Sent: Friday, November 5, 2010 8:53am
To: "user"
Subject: how does get_range_slices work?
H
Hi,
I have a question about the get_range_slices() function and key sorting.
I inserted some columns with keys of '1', '2', '7' and '10'.
When I use get_range_slices to get columns between '1' and '10', I got none.
But When I g
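The surprise in this thread is consistent with lexicographic ordering, under which '10' sorts before '2'. A quick illustration of how string keys compare:

```java
import java.util.*;

public class KeyOrder {
    // Lexicographic (BytesType/UTF8Type-style) ordering of string keys.
    static List<String> sorted(List<String> keys) {
        List<String> copy = new ArrayList<>(keys);
        Collections.sort(copy);
        return copy;
    }

    public static void main(String[] args) {
        System.out.println(sorted(List.of("1", "2", "7", "10"))); // [1, 10, 2, 7]
        // So a range from "1" to "10" contains no key strictly between them;
        // "2" and "7" both sort after "10".
    }
}
```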
get_slice vs get_range_slices.
> The reason for this question is that we are using some code that uses
> get_range_slices. We have option of forcing it to use count=1 with
> get_range_slices or change the code to use get_slice.
>
> What would you recommend? What will be the net gain on the Ca
Thanks Jonathan.
Another related question is if I need to fetch only 1 row then what will be
the difference between the performance of get_slice vs get_range_slices.
The reason for this question is that we are using some code that uses
get_range_slices. We have option of forcing it to use count=1
get_range_slices never does "searching."
the performance of those two predicates is equivalent, assuming a row
"start key" actually exists.
On Thu, Oct 14, 2010 at 1:09 PM, Narendra Sharma
wrote:
> Hi,
>
> I am using Cassandra 0.6.5. Our application uses the get_
Hi,
I am using Cassandra 0.6.5. Our application uses the get_range_slices to get
rows in the given range.
Could someone please explain how get_range_slices works internally, especially when
a count parameter (value = 1) is also specified in the SlicePredicate? Does
Cassandra first search all in the
Steps to reproduce, using the Keyspace1.Super1 CF:
> * insert three super columns, bar1, bar2, and bar3, under the same key
> * delete bar1
> * insert bar1 again
> * run a get_range_slices on Super1, with start=bar1, finish=bar3, and count=1
> * I expected only bar1 to be returned, but both
y
* delete bar1
* insert bar1 again
* run a get_range_slices on Super1, with start=bar1, finish=bar3, and count=1
* I expected only bar1 to be returned, but both bar1 and bar2 are
returned. bar3 isn't, though. so count is somewhat respected.
I've filed a jira with a test sc
: OrderPreservingPartitioner for get_range_slices
My experience for the last question is ... it depends. If you have NO
changes to the store (which I would argue could be abnormal, it's not in a
production environment allowing writes) ... then you can do a full
range/key scan and get no repeats. Fa
each row just once?
>
> Thanks.
>
> 2010/9/15 Janne Jalkanen
>
>
>> Correct. You can use get_range_slices with RandomPartitioner too, BUT the
>> iteration order is non-predictable, that is, you will not know in which
>> order you get the rows (RandomPartitioner would p
And what about uniqueness? Can we be sure that we get each row just once?
Thanks.
2010/9/15 Janne Jalkanen
>
> Correct. You can use get_range_slices with RandomPartitioner too, BUT the
> iteration order is non-predictable, that is, you will not know in which
> order you
Correct. You can use get_range_slices with RandomPartitioner too, BUT
the iteration order is non-predictable, that is, you will not know in
which order you get the rows (RandomPartitioner would probably better
be called ObscurePartitioner - it ain't random, but it's as good as if
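One way to see that ordering concretely: RandomPartitioner places rows by the MD5 token of the key, not by the key itself, so iteration order looks arbitrary but is fixed for a given key set. A rough sketch (the token computation here is simplified to "MD5 digest as a non-negative big integer", which is close to, but not exactly, what RandomPartitioner does):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.*;

public class TokenOrder {
    // Simplified stand-in for a RandomPartitioner token.
    static BigInteger token(String key) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(key.getBytes(StandardCharsets.UTF_8));
            return new BigInteger(1, d); // digest as non-negative big integer
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // MD5 is always available
        }
    }

    // The order get_range_slices would walk these keys in: by token, not by key.
    static List<String> tokenOrder(List<String> keys) {
        List<String> sorted = new ArrayList<>(keys);
        sorted.sort(Comparator.comparing(TokenOrder::token));
        return sorted;
    }

    public static void main(String[] args) {
        // Deterministic, but not the lexical order [a, b, c].
        System.out.println(tokenOrder(List.of("a", "b", "c")));
    }
}
```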
Hi All,
I was under the impression that in order to query with get_range_slices one
has to have a OrderPreservingPartitioner.
Can we do get_range_slices with RandomPartitioner also? I can distinctly
remember I read that (OrderPreservingPartitioner for get_range_slices) in the
Cassandra wiki, but now
to play
with it soon.
Aaron
On 29 Jul, 2010, at 01:51 PM, Ken Matsumoto wrote:
Hi all,
Is there any better way to retrieve data from Cassandra than using
get_range_slices?
Now I'm going to port some programs using MySQL to Cassandra. The
program query is like
below:
"select * from T
o retrieve data from Cassandra than using
get_range_slices?
Now I'm going to port some programs using MySQL to Cassandra. The
program query is like
below:
"select * from Table_A where date > 1/1/2008 and date < 12/31/2009 and
locationID = 1"
The result of the query will have
Hi all,
Is there any better way to retrieve data from Cassandra than using
get_range_slices?
Now I'm going to port some programs using MySQL to Cassandra. The
program query is like
below:
"select * from Table_A where date > 1/1/2008 and date < 12/31/2009 and
locationID = 1
This is a bug. If you can give us data to reproduce with we can fix it faster.
On Wed, Jul 14, 2010 at 10:29 AM, shimi wrote:
> I wrote a code that iterate on all the rows by using get_range_slices.
> for the first call I use KeyRange from "" to "".
> for all the
I wrote code that iterates over all the rows using get_range_slices.
for the first call I use KeyRange from "" to "".
for all the others I use from to "".
I always get the same rows that I got in the previous iteration. I tried
changing the batch size but I still
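For reference, the usual shape of a full-table iteration is: use the last key of each page as the next page's start key and drop that duplicated first row, rather than re-issuing the same KeyRange. A pure-Java simulation of that loop over a sorted store standing in for get_range_slices (no Cassandra involved):

```java
import java.util.*;

public class RangePager {
    // One "get_range_slices" page: up to `count` keys starting at startKey (inclusive).
    static List<String> page(NavigableMap<String, String> store, String startKey, int count) {
        List<String> keys = new ArrayList<>();
        for (String k : store.tailMap(startKey, true).keySet()) {
            if (keys.size() == count) break;
            keys.add(k);
        }
        return keys;
    }

    // Iterate everything: reuse the last key of each page as the next start
    // key and skip it, instead of re-issuing the same KeyRange every time.
    static List<String> allKeys(NavigableMap<String, String> store, int batch) {
        List<String> all = new ArrayList<>();
        String start = "";        // "" stands in for "start of the range"
        boolean first = true;
        while (true) {
            List<String> page = page(store, start, batch);
            if (!first && !page.isEmpty()) page.remove(0); // drop duplicated start key
            if (page.isEmpty()) break;
            all.addAll(page);
            start = page.get(page.size() - 1);
            first = false;
        }
        return all;
    }

    public static void main(String[] args) {
        NavigableMap<String, String> store = new TreeMap<>();
        for (int i = 0; i < 10; i++) store.put("key" + i, "v");
        System.out.println(allKeys(store, 3)); // each key exactly once
    }
}
```

With RandomPartitioner the server pages in token order rather than key order, but the same "continue from the last key seen" loop applies.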
Try dropping to CL.ONE and see if you only get one copy. If that
> fixes it, I'd suggest searching JIRA.
> Mike
>
> On Thu, Jul 8, 2010 at 6:40 PM, Jonathan Shook wrote:
>>
>> Should I ever expect multiples of the same key (with non-empty column
>> sets) from the same
s of the same key (with non-empty column
> sets) from the same get_range_slices call?
> I've verified that the column data is identical byte-for-byte, as
> well, including column timestamps?
>
Should I ever expect multiples of the same key (with non-empty column
sets) from the same get_range_slices call?
I've verified that the column data is identical byte-for-byte, as
well, including column timestamps?
missing data for a few hours, it's the weird behaviour of
> get_range_slices that's bothering me. I added some logging to
> ColumnFamilyRecordReader to see what's going on:
>
> Split startToken=67160993471237854630929198835217410155,
> endToken=68643623863384825230116928
I don't mind missing data for a few hours, it's the weird behaviour of
get_range_slices that's bothering me. I added some logging to
ColumnFamilyRecordReader to see what's going on:
Split startToken=67160993471237854630929198835217410155,
endToken=686436238633848252
k to make sure. We're running Cassandra 0.6.2.
On Mon, Jun 21, 2010 at 9:59 PM, Joost Ouwerkerk wrote:
> Greg, can you describe the steps we took to decommission the nodes?
the nodes?
>
>
> -- Forwarded message --
> From: Rob Coli
> Date: Mon, Jun 21, 2010 at 8:08 PM
> Subject: Re: get_range_slices confused about token ranges after
> decommissioning a node
> To: user@cassandra.apache.org
>
>
> On 6/21/10 4:57 PM, Joost Ouwerk
On 6/21/10 4:57 PM, Joost Ouwerkerk wrote:
We're seeing very strange behaviour after decommissioning a node: when
requesting a get_range_slices with a KeyRange by token, we are getting
back tokens that are out of range.
What sequence of actions did you take to "decommission"
We're seeing very strange behaviour after decommissioning a node: when
requesting a get_range_slices with a KeyRange by token, we are getting back
tokens that are out of range.
As a result, ColumnFamilyRecordReader gets confused, since it uses the last
token from the result set to set the
On 2010-06-10 22:03, Dop Sun wrote:
Hi,
As documented in http://wiki.apache.org/cassandra/API, the start and end
of the key range for get_range_slices are both inclusive.
As discussed in this thread:
http://groups.google.com/group/jassandra-user/browse_thread/thread/c2e56453cde067d3,
there is a case that us
Thanks for your quick and detailed explanation of the key scan. This is really
helpful!
Dop
From: Philip Stanhope [mailto:pstanh...@wimba.com]
Sent: Thursday, June 10, 2010 10:40 PM
To: user@cassandra.apache.org
Subject: Re: keyrange for get_range_slices
No ... and I personally don't
y the SlicePredicate. A keyscan can
easily turn into a "dump the entire datastore" if you aren't careful.
On Jun 10, 2010, at 10:03 AM, Dop Sun wrote:
> Hi,
>
> As documented in the http://wiki.apache.org/cassandra/API, the key range for
> get_range_slices are both inclu
Hi,
As documented in http://wiki.apache.org/cassandra/API, the start and end of
the key range for get_range_slices are both inclusive.
As discussed in this thread:
http://groups.google.com/group/jassandra-user/browse_thread/thread/c2e56453cde067d3,
there is a case where a user wants to discover all keys
On Tue, May 4, 2010 at 4:17 PM, aaron wrote:
> I was noticing cases under the random partitioner where keys I expected to
> be returned
> were not. Can you give a little advice on the expected behaviour of
> get_range_slices
> with the RP and I'll try to write a JUni
Thanks Jonathan.
After looking at the Lucandra code I realized my confusion has to do with
get_range_slices
and the RandomPartitioner. When I switched to the OPP I got the expected
behaviour.
I was noticing cases under the random partitioner where keys I expected to
be returned
were not
Util.range returns a Range object which is end-exclusive. (You want
"Bounds" for end-inclusive.)
On Sun, May 2, 2010 at 7:19 AM, aaron morton wrote:
> He there, I'm still getting odd behavior with get_range_slices. I've created
> a JUNIT test that illustrates the ca