are you really recommending I throw 4 years of work out and completely
rewrite code that works and has been tested?
Our codebase was about 3 years old, and we finished migrating it to CQL not
that long ago. It can definitely be frustrating to have to touch stable
code to modernize it. Our
If you're concerned about impacting production performance, the steps of
compacting and sstable2json will almost certainly also cause performance
problems if performed on the same hardware. You won't get away from a
production performance impact as long as you're using production hardware.
If
If you are doing only writes and no reads, then 'cold_reads_to_omit' is
probably preventing your cluster from crossing the threshold at which it
decides it needs to engage in compaction. Setting it to 0.0 should fix this,
but remember that you tuned it, so you can revert it to the default later.
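For anyone following along, 'cold_reads_to_omit' is a size-tiered compaction subproperty set per table in CQL; a minimal sketch, with a hypothetical keyspace/table name:

```sql
-- Hypothetical table; telling STCS not to omit any "cold" SSTables
-- (cold_reads_to_omit = 0.0) lets minor compaction consider all of them.
ALTER TABLE my_ks.my_table
WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'cold_reads_to_omit': '0.0'
};
```

Reverting later is the same statement with the subproperty set back to its default (0.05, if I recall correctly).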
Your are right,currently My cluster only write ,when My cluster build after
two month ,I will change to the default thresholds.
Thanks for your reply.
--
曹志富
Mobile: 18611121927
Email: caozf.zh...@gmail.com
Weibo: http://weibo.com/boliza/
2015-01-26 22:40 GMT+08:00 Eric
Hi everyone,
I wanted to know if someone has feedback on using the geohash algorithm with
Cassandra.
I will have to create a nearby-search functionality soon, and I would really
like to do it with Cassandra for its scalability; otherwise the smart choice
would apparently be MongoDB.
Can Cassandra be
Hi Alain;
The requirements are impossible to meet, since you are expected to have
predictable and deterministic tests while you need recent data (at most 1
week old). Reason: you cannot have a replicable result set when the data
varies on a weekly basis.
To obtain a replicable test
Using Cassandra triggers is generally a fairly dangerous proposition, and
not recommended. It's probably a better idea to load your search data with a
separate process.
On Mon, Jan 26, 2015 at 11:42 AM, Brian Sam-Bodden bsbod...@integrallis.com
wrote:
I did a little experiment
Thanks Eric! I just played a bit with the triggers trying to mimic the
concept of a transaction but they do seem to be a pretty poor substitute
and potentially an unnecessary feature if I have my replication scheme
set up correctly.
On Mon, Jan 26, 2015 at 12:02 PM, Eric Stevens migh...@gmail.com
My understanding is consistent with Alain's: there's no way to force a
tombstone-only compaction; your only option is major compaction. If you're
using size-tiered, that comes with its own drawbacks.
I wonder if there's a technical limitation that prevents introducing a
shadowed data cleanup
I don't know much about geohash except for very casual conversations, but
from what I know, it seems like you should be able to put a geohash into a
clustering key and do range searches on that - not sure if that meets your
use case or not.
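Eric's suggestion above (a geohash in a clustering column, range-scanned by prefix) can be sketched in a few lines of Python. This is an illustration, not code from the thread; the CQL fragment in the comments assumes a hypothetical `geohash` clustering column:

```python
# Minimal geohash encoder: interleaves longitude/latitude bits and
# base32-encodes them, so nearby points share a common string prefix.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=8):
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    bits = []
    even = True  # geohash bit-interleaving starts with longitude
    while len(bits) < precision * 5:
        rng, val = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2.0
        if val >= mid:
            bits.append(1)
            rng[0] = mid
        else:
            bits.append(0)
            rng[1] = mid
        even = not even
    # Pack every 5 bits into one base32 character.
    return "".join(
        BASE32[int("".join(map(str, bits[i:i + 5])), 2)]
        for i in range(0, len(bits), 5)
    )

def geohash_upper_bound(prefix):
    """Exclusive upper bound for a prefix range scan (assumes the
    last character is not 'z')."""
    return prefix[:-1] + BASE32[BASE32.index(prefix[-1]) + 1]

# Well-known reference point: (57.64911, 10.40744) -> "u4pruydqqvj".
h = geohash_encode(57.64911, 10.40744, 11)
# A range scan over a clustering column could then use, e.g.:
#   ... WHERE geohash >= 'u4pru' AND geohash < 'u4prv'
lo, hi = h[:5], geohash_upper_bound(h[:5])
```

Because a geohash is prefix-stable, truncating it to fewer characters just widens the bounding cell, which is what makes the clustering-key range query work.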
On Mon, Jan 26, 2015 at 10:29 AM, SEGALIS Morgan
On Mon, Jan 26, 2015 at 9:29 AM, SEGALIS Morgan msega...@gmail.com wrote:
That's actually GREAT news!! + Solr will give a lot of features to
Cassandra!
But while waiting for this huge feature (one wanted by a lot of users, I
guess)
What's the news here?
Datastax commercial edition of
I don't have directly relevant advice, especially WRT getting a meaningful
and coherent subset of your production data - that's probably too closely
coupled with your business logic. Perhaps you can run a testing cluster
with a default TTL on all your tables of ~2 weeks, feeding it with real
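The default-TTL idea can be expressed per table in CQL; a sketch with a hypothetical table name (1209600 seconds is 14 days):

```sql
-- Every write to this table expires after ~2 weeks unless the
-- statement specifies its own TTL.
ALTER TABLE my_ks.my_table WITH default_time_to_live = 1209600;
```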
I did a little experiment with Geohash/Geocells
https://github.com/integrallis/geomodel (a rough port of a Python geohash
library) and used Cassandra in a demo with public-schools data, here:
https://github.com/integrallis/geomodel-cassandra-demo
My conclusion is that you can use the Geohash
That's actually GREAT news!! + Solr will give a lot of features to
Cassandra!
But while waiting for this huge feature (one wanted by a lot of users, I
guess)
I guess that prefix search will also be useful for geohash...
2015-01-26 18:12 GMT+01:00 Eric Stevens migh...@gmail.com:
We're
The news I guess would be to have it for free. I do not use commercial
versions. I wasn't aware of it though.
On Monday, 26 January 2015, Robert Coli rc...@eventbrite.com wrote:
On Mon, Jan 26, 2015 at 9:29 AM, SEGALIS Morgan msega...@gmail.com
Hello,
You'll find this useful:
http://www.slideshare.net/mobile/mmalone/working-with-dimensional-data-in-distributed-hash-tables
It's how SimpleGeo used geohashing and Cassandra for geolocation.
On Mon, 26 Jan 2015 15:48 SEGALIS Morgan msega...@gmail.com wrote:
Hi everyone,
I wanted to know
I read the CQL table properties again. This property can control
compaction. Right now my C* cluster only writes, without any reads.
2015-01-26 17:41 GMT+08:00 Roland Etzenhammer
Parth,
So are you saying that I should query cassandra right away?
Well, don't take my word for it, but it definitely sounds like a simpler
approach.
If yes, like I mentioned, I have to run this during traffic hours. Isn't
there a possibility then that my traffic to the DB may
Yes, I use Cassandra 2.1.2 and the JDK is 1.7.0_71. I will try your solution.
Thank you, Roland Etzenhammer!!!
2015-01-26 17:41 GMT+08:00 Roland Etzenhammer r.etzenham...@t-online.de:
Hi,
Hi guys,
We currently use a CI with tests based on Docker containers.
We have a dockerized C* service. Yet we have an issue, since we would like
two things that are hard to achieve:
- a fixed data set, to have predictable and deterministic tests (that we can
repeat at any time with the same result)
- a recent
Hi guys,
We migrated a cluster to 2.0.11 (from 1.2.18) via a rolling upgrade.
Now any time I restart a node, it needs about 30 minutes to start (350 GB
average).
I enabled the debug level and see that we have a lot of INDEX LOAD TIME
entries lasting 200+ seconds. It reminds me of when I switched
Did you disable auto compaction through nodetool?
disableautocompaction: Disable autocompaction for the given keyspace
and column family
Jason
On Mon, Jan 26, 2015 at 11:34 AM, 曹志富 cao.zh...@gmail.com wrote:
Hi everybody:
I have 18 nodes using Cassandra 2.1.2. Every node has 4 cores, 32
Hi Parth,
I’ll take your questions in order:
1. Have a look at the compaction subproperties for STCS:
http://datastax.com/documentation/cql/3.1/cql/cql_reference/compactSubprop.html
2. Why not talk to Cassandra when generating the report? It will be waaay
faster (and easier!);
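As an illustration of point 1, STCS subproperties go inside the table's compaction map; a sketch with hypothetical names and values:

```sql
-- Hypothetical table; tombstone_threshold triggers single-SSTable
-- compaction once the estimated droppable-tombstone ratio exceeds 20%.
ALTER TABLE my_ks.my_table
WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'min_threshold': '4',
    'max_threshold': '32',
    'tombstone_threshold': '0.2'
};
```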
No. To confirm this, I have run this command on all my nodes: bin/nodetool
enableautocompaction
2015-01-26 16:49 GMT+08:00 Jason Wee peich...@gmail.com:
Did you disable auto compaction
Hi,
are you running 2.1.2 by any chance? I had this problem recently and there
were two threads here about it. The problem was that my test cluster had
almost no reads and did not compact SSTables.
The reason for me was that those minor compactions did not get triggered
since there were almost no
I don't think that such a thing exists, as SSTables are immutable. You
compact an SSTable entirely or you don't. Minor compactions will eventually
evict tombstones. If that is too slow, AFAIK the better solution is a major
compaction.
C*heers,
Alain
2015-01-23 0:00 GMT+01:00 Ravi Agrawal
Thank you both for your precious advice!
2015-01-26 12:30 GMT+01:00 mck m...@apache.org:
However I guess it can be easily changed?
that's correct.
--
Morgan SEGALIS
I think you can't, as in previous versions; you might want to look at streams
(nodetool netstats) and validation compactions (nodetool compactionstats).
I won't go into the details, as this has already been answered many times
since the 0.x versions of Cassandra.
The only new thing I was able to find
However I guess it can be easily changed? Doesn't it?
2015-01-25 20:18 GMT+01:00 mck m...@apache.org:
NetworkTopologyStrategy gives you a better horizon and more flexibility
as you scale out, at least once you've gone past small-cluster problems
like wanting RF=3 in a 4-node, two-DC cluster.
However I guess it can be easily changed?
that's correct.
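For reference, the replication strategy mck describes is declared per keyspace and can indeed be altered afterwards; a minimal sketch with hypothetical keyspace and data-center names:

```sql
-- Hypothetical keyspace with RF=3 in each of two data centers.
CREATE KEYSPACE my_ks WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc1': 3,
    'dc2': 3
};

-- Changing it later is just an ALTER, followed by a repair
-- (nodetool repair) so replicas match the new layout.
ALTER KEYSPACE my_ks WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc1': 3,
    'dc2': 2
};
```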
Where we differ is that I feel the coverage of existing Thrift use cases
isn't 100%. That may be right or wrong, but it is my impression.
Here's my problem: either CQL covers all existing Thrift use cases or it
does not (in which case the unsupported use cases should be pointed out).
It's a
On Fri, Jan 23, 2015 at 5:28 PM, Ken Hancock ken.hanc...@schange.com
wrote:
I have some Thrift column families that were created with BytesType. All
the data written to the keys/columns/values were simple strings.
In cassandra-cli, I can correct these to UTF8Type (I believe both
UTF8Type
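A sketch of the kind of cassandra-cli statement involved (the column family name is hypothetical; validation classes are metadata-only, so this should be safe for data that is already valid UTF-8, though the on-disk comparator itself generally cannot be changed after creation):

```
UPDATE COLUMN FAMILY my_cf
WITH key_validation_class = UTF8Type
AND default_validation_class = UTF8Type;
```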