OK, so it would be better to break up those large rows, inserting rows keyed
on row+monthId or row+week, with all the corresponding columns inside. That
will drastically reduce row size, but to retrieve results overlapping weeks
or months I have to do a multiget, which is less simple than a single get.
thx for
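The bucketing idea above can be sketched as a small helper that computes the
row keys covering a time range, ready to hand to a multiget. The key format
("entity:yyyy-MM") and names here are hypothetical, just to illustrate the
row-per-month layout:

```java
import java.util.ArrayList;
import java.util.List;

public class BucketedKeys {
    // Build the list of month-bucketed row keys ("entity:yyyy-MM") needed
    // to cover the inclusive range [startYear-startMonth, endYear-endMonth].
    // Each key would be one row holding that month's columns.
    static List<String> monthKeys(String entity, int startYear, int startMonth,
                                  int endYear, int endMonth) {
        List<String> keys = new ArrayList<>();
        int y = startYear, m = startMonth;
        while (y < endYear || (y == endYear && m <= endMonth)) {
            keys.add(String.format("%s:%04d-%02d", entity, y, m));
            if (++m > 12) { m = 1; y++; }
        }
        return keys;
    }

    public static void main(String[] args) {
        // A query spanning a year boundary needs a multiget over 4 row keys.
        System.out.println(monthKeys("sensor1", 2011, 11, 2012, 2));
    }
}
```

A range query then becomes one multiget over these keys instead of one get,
which is the trade-off described above.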
Are you sure all your settings are correct? If so, then please follow these
steps:
./nodetool disablethrift
./nodetool disablegossip
./nodetool drain
Stop the service, then delete all the data, saved_caches and commitlog
files, and restart the service.
Repeat these steps on all the nodes. I hope
Check out the "rows_cached" CF attribute.
On 06/18/2012 06:01 PM, Oleg Dulin wrote:
> Dear distinguished colleagues:
>
> I don't want all of my CFs cached, but one in particular I do.
>
> How can I configure that ?
>
> Thanks,
> Oleg
>
Hi all,
Was there a change of behaviour in multiget_slice query in Cassandra or
Hector between 0.7 and 1.1 when dealing with a key that doesn't exist?
We've just upgraded and our in-memory unit test is failing (although
only on my machine). The test code is looking for a key that doesn't
exist
On Mon, 18 Jun 2012 11:57:17 -0700, Gurpreet Singh wrote:
> Thanks for all the information Holger.
>
> Will do the JVM updates; kernel updates will be slow to come by. I see
> that with disk access mode standard, the performance is stable and better
> than in mmap mode, so I will probably stick t
I found a fix for this one, or rather a workaround.
I changed rpc_server_type in cassandra.yaml from hsha to sync, and the
error went away. I guess there is some issue with the Thrift non-blocking
server.
Thanks
Gurpreet
On Wed, May 16, 2012 at 7:04 PM, Gurpreet Singh wrote:
> Thanks Aaron. w
I did all you said. No errors and warnings.
On Mon, Jun 18, 2012 at 2:31 PM, aaron morton wrote:
> Did you set the cluster name to be the same ?
>
> Check the logs on the machines for errors or warnings.
>
> Finally check that each node can telnet to port 7000 on the others.
>
> Cheers
>
> --
Did you set the cluster name to be the same ?
Check the logs on the machines for errors or warnings.
Finally check that each node can telnet to port 7000 on the others.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 19/06/2012, at 6:29
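The telnet check in the steps above can also be done programmatically. A
minimal sketch (the class and method names are mine, not from any Cassandra
API): open a TCP connection to the gossip port with a timeout, the same thing
`telnet <host> 7000` verifies.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port succeeds within
    // timeoutMs, false otherwise -- equivalent to a quick telnet test.
    static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Check whether a peer's storage (gossip) port is reachable.
        System.out.println(canConnect("10.0.0.2", 7000, 2000));
    }
}
```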
It's not an exact science. Some general guidelines though:
* A row normally represents an entity
* Rows wider than the thrift_max_message_length_in_mb (16MB) cannot be
retrieved in a single call
* Wide rows (in the tens of MB) can make repair do more work than is
needed.
* Rows wider than
I am new to Cassandra and am setting up a cluster for the first time with
1.1.1. There are three nodes; one acts as the seed node, and all three have
that node's IP address as their seed. I have set the listen address to each
node's own address and the rpc address to 0.0.0.0. I turned the trace on on
a
For DSE specific questions try http://www.datastax.com/support-forums/
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 19/06/2012, at 1:09 AM, Abhijit Chanda wrote:
> Hi All,
>
> I am using datastax enterprise editions 2.1 latest feature s
To what extent will having potentially large rows (many columns, sorted by
timestamp, geohash, or ...) be harmful for a multi-node ring?
I guess a row can only be read/written on one node; if so, it's more likely
to fail (than having one row per timestamp ...).
Thanks for any explanations.
On Mon, Jun 18, 2012 at 8:53 AM, mich.hph wrote:
> Dear all!
> In my cluster, I found every key needs 192 bytes in the key cache. So I want
> to know what determines the memory used by the key cache, and how to
> calculate the value.
>
According to
http://cassandra-user-incubator-apache-org.3065146.n
Okay, we investigated the problem and found its source in package
org.apache.cassandra.io.sstable, in class Descriptor:

public static Pair fromFilename(File directory, String name)
{
    // tokenize the filename
    StringTokenizer st = new StringTokenizer(name, String.valueOf(separator));
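For reference, the tokenization that fromFilename performs can be reproduced
in isolation. The sample file name below is illustrative, assuming '-' is the
separator used in SSTable names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

public class FilenameTokens {
    // Split an SSTable file name on '-', the same way the excerpt above
    // does with String.valueOf(separator).
    static List<String> tokenize(String name) {
        List<String> tokens = new ArrayList<>();
        StringTokenizer st = new StringTokenizer(name, "-");
        while (st.hasMoreTokens()) {
            tokens.add(st.nextToken());
        }
        return tokens;
    }

    public static void main(String[] args) {
        // prints [Keyspace1, Standard1, hd, 5, Data.db]
        System.out.println(tokenize("Keyspace1-Standard1-hd-5-Data.db"));
    }
}
```

Note that StringTokenizer silently collapses adjacent separators, which is one
place parsing of unexpected file names can go wrong.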
Dear all!
In my cluster, I found every key needs 192 bytes in the key cache. So I want to
know what determines the memory used by the key cache, and how to calculate the
value.
Thanks in advance.
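Given the observed ~192 bytes per entry, a back-of-the-envelope estimate of
total key-cache memory is just entries times bytes per entry. This sketch
takes the per-entry figure as an input (the real cost is the key bytes plus
the cached position and JVM object overhead, which vary by key length):

```java
public class KeyCacheEstimate {
    // Rough estimate of key-cache memory: number of cached entries times
    // the measured bytes per entry (192 in the poster's cluster).
    static long estimateBytes(long cachedKeys, long bytesPerEntry) {
        return cachedKeys * bytesPerEntry;
    }

    public static void main(String[] args) {
        // e.g. 1 million cached keys at 192 bytes each ~= 183 MB
        System.out.println(estimateBytes(1_000_000L, 192L) / (1024.0 * 1024.0));
    }
}
```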
We have used 7u3 in production long enough with no problems.
7u4 requires a larger minimum stack size, 160KB vs 128KB, but 160KB is still
not enough for Cassandra; 192KB is better but needs more testing.
https://issues.apache.org/jira/browse/CASSANDRA-4275 suggests 256KB.
Best regards / Pagarbiai
V
Hello!
Is it safe to use Java 1.7 with Cassandra 1.0.x? The reason I want to do
that is that Java 1.7 adds options to rotate the GC log:
http://bugs.sun.com/bugdatabase/view_bug.do;jsessionid=ff824681055961e1f62393b68deb5?bug_id=6941923
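For reference, the rotation options that bug tracks are the HotSpot
UseGCLogFileRotation family of flags; added to the JVM options they would look
something like this (the path, file count, and size below are illustrative):

```
-Xloggc:/var/log/cassandra/gc.log
-XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=10
-XX:GCLogFileSize=10M
```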
Hi
After I enabled the key cache and row cache, the problem went away. I guess
it is because we have lots of data in SSTables, and it takes more time,
memory and CPU to search the data.
BRs
//Tang Weiqiang
2012/6/18 aaron morton
> It is also strange that although no data in Cassandra can fulfill the
> que