Setting system_memory_in_mb to 16 GB means the Cassandra heap size you are
using is 4 GB.
If you meant to use a 16 GB heap, you should uncomment the line
#MAX_HEAP_SIZE="4G"
and set
MAX_HEAP_SIZE="16G"
You should uncomment the HEAP_NEWSIZE setting as well; I would leave it at
the default value.
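Put together, the edited section of cassandra-env.sh would look roughly like this (a sketch; the HEAP_NEWSIZE value shown is just the commonly shipped default, on the rule of thumb of about 100 MB per physical CPU core):

```sh
# cassandra-env.sh -- both settings uncommented; sizes are examples
MAX_HEAP_SIZE="16G"
HEAP_NEWSIZE="800M"   # shipped default; ~100 MB per physical core is the usual guideline
```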
"Cassandra is a highly scalable, eventually consistent, distributed, structured
key-value store" http://wiki.apache.org/cassandra/
It is intended for lookups by key. It does offer other querying options,
but it really shines when querying by key.
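To make "querying by key" concrete, here is a minimal hypothetical example (the table and UUID are invented for illustration, not from the wiki):

```sql
-- Hypothetical table: reads by partition key are Cassandra's sweet spot.
CREATE TABLE users (
    user_id UUID PRIMARY KEY,
    name TEXT
);

-- A single-partition read, routed directly to the replicas owning the key:
SELECT name FROM users WHERE user_id = 62c36092-82a1-3a00-93d1-46196ee77204;
```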
Not all databases offer the same functionality. Both a k
Hi all,
We have a cluster of 3 nodes with RF 3 (version 2.1.2). We created a
table that holds daily parsed logs. Due to a lack of understanding we
created a few indexes on this table. It contains billions of rows. Since
indexes also need compaction, they are impacting the servers' performance.
So we hav
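For reference, dropping a secondary index is a single CQL statement (the index name below is hypothetical):

```sql
-- Removes the index and its compaction load; the base table is untouched.
DROP INDEX IF EXISTS logs_status_idx;
```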
On Mon, Jul 20, 2015 at 6:20 PM, Christophe Schmitz <
christo...@instaclustr.com> wrote:
> I am running a 6 node cluster on 2.1.7 ...
>
Sounds similar to :
https://issues.apache.org/jira/browse/CASSANDRA-9577 or maybe
https://issues.apache.org/jira/browse/CASSANDRA-9056 or
https://issues.apache.o
Hi Erick,
In cassandra-env.sh, system_memory_in_mb was set to 2 GB; I changed it to
16 GB, but I still get the same issue. Below are my complete system.log
after changing cassandra-env.sh, and the new cassandra-env.sh.
https://gist.githubusercontent.com/cdwijayarathna/5e7e69c62ac09b45490b/raw/f7
Hi,
I have a simple (perhaps stupid) question.
If I want to *search* data in Cassandra,
how can I find, in a text field, all records
that start with 'Cas'?
(in SQL I would do: select * from table where field like 'Cas%')
I know that this is not directly possible.
- But how is it possible?
- Do nobo
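One common workaround, assuming the field can be made a clustering column under some partition key (the schema below is a sketch, not from the original mail): a prefix match like 'Cas%' becomes a slice from 'Cas' up to, but excluding, 'Cat'.

```sql
-- Sketch: bucket is a dummy partition key so that name can be a clustering column.
CREATE TABLE names_by_prefix (
    bucket INT,
    name TEXT,
    PRIMARY KEY (bucket, name)
);

-- Equivalent of LIKE 'Cas%': 'Cat' is the smallest string above every 'Cas...' value.
SELECT name FROM names_by_prefix
WHERE bucket = 0 AND name >= 'Cas' AND name < 'Cat';
```

Note that a single fixed bucket puts all names in one partition, which does not scale; a real schema would shard the bucket somehow.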
If last_modified is a clustering column, it needs a partitioning column, which
is what date is for (although I should have named it day, and I also forgot to
add the order by desc clause). This is essentially what I came up with. Still
not liking how easy it is to get duplicates.
On Jul 21, 201
Thanks for your reply.
Yes, I am sure all nodes are running the same version.
On second thought, I think my gossip problem is due to intense GC activity,
to the point that the node cannot even complete a gossip handshake!
Regards,
Dominique
[@@ THALES GROUP INTERNAL @@]
From: Carlos Rolo [mailto:r...@pythia
The time series doesn’t provide the access pattern I’m looking for. No way to
query recently-modified documents.
On Jul 21, 2015, at 9:13 AM, Carlos Alonso
mailto:i...@mrcalonso.com>> wrote:
Hi Robert,
What about modelling it as a time series?
CREATE TABLE document (
docId UUID,
doc TEXT,
That error should only occur when there is a mismatch between the seed
node's version and the new node's version. Are you sure all your nodes
are running the same version?
Regards,
Carlos Juzarte Rolo
Cassandra Consultant
Pythian - Love your data
rolo@pythian | Twitter: cjrolo | Linkedin: *linkedin.c
Hi Amlan,
We have the same problem with Cassandra 2.1.5.
I have no lead (yet) to follow.
Did you find the root cause of this problem?
Thanks.
Regards,
Dominique
[@@ THALES GROUP INTERNAL @@]
From: Amlan Roy [mailto:amlan@cleartrip.com]
Sent: Wednesday, July 1, 2015 12:46
To: user@cassandra.apache.o
Keep the original document base table, but then the query table should have
the PK as last_modified, docId, with last_modified descending, so that a
query can get the n most recently modified documents.
Yes, you still need to manually delete the old entry for the document in
the query table if dup
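A sketch of such a query table, with the day bucket as the partitioning column (all names and values below are assumed for illustration, not taken from the original mail):

```sql
CREATE TABLE doc_by_last_modified (
    day TEXT,                 -- e.g. '2015-07-21': bounds the partition
    last_modified TIMESTAMP,
    docId UUID,
    PRIMARY KEY (day, last_modified, docId)
) WITH CLUSTERING ORDER BY (last_modified DESC, docId ASC);

-- The n most recently modified documents for a given day:
SELECT docId FROM doc_by_last_modified WHERE day = '2015-07-21' LIMIT 10;

-- On re-modification, the stale row must be deleted by its old timestamp:
DELETE FROM doc_by_last_modified
WHERE day = '2015-07-21'
  AND last_modified = '2015-07-21 09:00:00+0000'
  AND docId = 62c36092-82a1-3a00-93d1-46196ee77204;
```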
Hi Robert,
What about modelling it as a time serie?
CREATE TABLE document (
    docId UUID,
    doc TEXT,
    last_modified TIMESTAMP,
    PRIMARY KEY (docId, last_modified)
) WITH CLUSTERING ORDER BY (last_modified DESC);
This way, the latest modification will always be the first record in
the row,
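With that schema, the latest version of a document is a single-row read (the UUID is invented for illustration):

```sql
-- Newest modification first, so LIMIT 1 returns the current version.
SELECT doc, last_modified
FROM document
WHERE docId = 62c36092-82a1-3a00-93d1-46196ee77204
LIMIT 1;
```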
I'm relatively new to data modeling in Cassandra, but perhaps instead of
date and last_modified in your primary key for doc_by_last_modified, just
use the docId. This way, you can update the last_modified and date
fields against the docId and it removes the duplicate issue and obviates
the need
Yup... it seems like it's the GC's fault.
gc logs
2015-07-21T14:19:54.336+: 2876133.270: Total time for which
application threads were stopped: 0.0832030 seconds
2015-07-21T14:19:55.739+: 2876134.673: Total time for which
application threads were stopped: 0.0806960 seconds
2015-07-21T14:19:57.14