Hello,
We have only one CF, as follows:
CREATE TABLE t1 (id bigint, ts timestamp, definition text,
  PRIMARY KEY (id, ts))
  WITH CLUSTERING ORDER BY (ts DESC) AND gc_grace_seconds = 0
  AND compaction = {'class': 'DateTieredCompactionStrategy',
    'timestamp_resolution': 'SECONDS', 'base_time_seconds': '20',
    'max_sstabl
Adding Java driver forum.
We would like to know more about this as well.
-
Ajay
On Wed, Apr 8, 2015 at 8:15 PM, Jack Krupansky
wrote:
> Just a couple of quick comments:
>
> 1. The driver is supposed to be doing availability and load balancing
> already.
> 2. If your cluster is lightly loaded, it isn't necessary to be so precise
> with load balancing.
Hi,
What are the guidelines on when to use STCS, DTCS, or LCS? The most
reliable way is to test with each of them and find the best fit, but are
there any guidelines or best practices (from experience) on which one to
use when?
Thanks
Ajay
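For reference, trying out a different strategy on an existing table only takes an ALTER TABLE, so each candidate can be benchmarked in turn. A sketch, reusing the t1 table from the earlier message; the option values here are illustrative, not recommendations:

```sql
-- Switch t1 to leveled compaction for a test run.
ALTER TABLE t1 WITH compaction = {
  'class': 'LeveledCompactionStrategy',
  'sstable_size_in_mb': '160'
};

-- Or back to the default size-tiered strategy.
ALTER TABLE t1 WITH compaction = {
  'class': 'SizeTieredCompactionStrategy'
};
```

Note that changing the strategy triggers recompaction of existing SSTables, so measure after the node settles.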
Yikes, 18 TB/node is a very bad idea.
I don't like to go over 2-3 TB per node personally, and you have to be
careful with JBOD. See one of Ellis's latest posts on this and the
suggested use of LVM. It is a reversal of his previous position re JBOD.
--
Colin
+1 612 859 6129
Skype colin.p.clark
> On Apr 8, 2015, at
Because the schema is stored in the system tables, you may be seeing some
of the recent issues with compaction, particularly:
https://issues.apache.org/jira/browse/CASSANDRA-8635
Upgrade to 2.1.4 and see if it helps (note: there is still
https://issues.apache.org/jira/browse/CASSANDRA-8860 fixed i
We have a 3-node Cassandra 2.1.2 development cluster on which we frequently do
a lot of keyspace creation and deletion. After a couple of days, we observed
that keyspace creation started taking on the order of 4-5 minutes.
Does anyone have any idea what might be causing this?
FYI, if I run repair, cleanup and
I can certainly sympathize if you have IT staff/management who will
willingly spring for some disk drives, but not for full machines, even if
they are relatively commodity boxes. Seems penny-wise and pound-foolish to
me, but management has their own priorities, plus there is the pre-existing
Oracle
On Wed, Apr 8, 2015 at 1:27 AM, Serega Sheypak
wrote:
> and I set OrderPreservingPartitioner as a partitioner for the table
>
As a general statement, you almost certainly do not want to use the
OrderPreservingPartitioner for any purpose.
It should probably be called the
DontUseThisIfYouWantMost
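For Serega's "events" table specifically, the usual alternative to OrderPreservingPartitioner is to restructure the key so that range scans run on a clustering column instead of the partitioner. A sketch under the default Murmur3Partitioner; the attr column types are assumptions, since the original message omits them:

```sql
-- Partition on (ymd, user_id), cluster on ts, instead of putting all
-- three columns in the partition key.
CREATE TABLE events (
    ymd int,
    user_id uuid,
    ts timestamp,
    attr_1 text,   -- type assumed; not given in the original
    attr_2 text,   -- type assumed; not given in the original
    PRIMARY KEY ((ymd, user_id), ts)
);

-- A time-range query within one day/user partition is now valid:
SELECT * FROM events
 WHERE ymd = 20150410
   AND user_id = 123e4567-e89b-12d3-a456-426655440000
   AND ts >= '2015-04-10 00:00:00'
   AND ts <  '2015-04-10 12:00:00';
```

The trade-off is that queries must always supply the full partition key (ymd and user_id); scanning across days means issuing one query per ymd value.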
First off, I agree that the preferred path is adding nodes, but it is
possible.
> Can C* handle up to 18 TB data size per node with this amount of RAM?
Depends on how deep in the weeds you want to get tuning and testing. See
below.
>
> Is it feasible to increase the disk size by mounting a new (
http://docs.datastax.com/en/cql/3.0/cql/cql_reference/copy_r.html
This only works through cqlsh.
On Wed, Apr 8, 2015 at 1:48 PM, Divya Divs wrote:
> hi
> Please tell me the cqlsh commands for importing .csv file datasets into
> Cassandra. Please help me get started. I am using Windows.
>
--
- mich
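The COPY command described at the doc link above looks like the following; the keyspace, table, column names, and file path are placeholders:

```sql
-- Run inside cqlsh (COPY is a cqlsh command, not a server-side CQL
-- statement). HEADER = true skips the first line of the CSV.
COPY myks.mytable (id, name, value)
FROM 'C:\data\mydata.csv' WITH HEADER = true;
```

On Windows, start cqlsh from the Cassandra bin directory and run the command at its prompt.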
hi
Please tell me the cqlsh commands for importing .csv file datasets into
Cassandra. Please help me get started. I am using Windows.
Agreed with Jack. Cassandra is a database meant to scale horizontally by
adding nodes, and what you're describing is vertical scale.
Aside from the vertical scale issue, unless you're running a very specific
workload (time series data w/ Date Tiered Compaction) and you REALLY know
what you're doi
Just a couple of quick comments:
1. The driver is supposed to be doing availability and load balancing
already.
2. If your cluster is lightly loaded, it isn't necessary to be so precise
with load balancing.
3. If your cluster is heavily loaded, it won't help. Solution is to expand
your cluster so
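To make point 1 concrete: in the DataStax Java driver 2.x era this thread dates from, the default load-balancing policy already wraps DC-aware round-robin in token awareness, so configuring it explicitly, as in the sketch below, is usually redundant. The contact point and DC name are made-up placeholders:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

// Sketch: spelling out the availability/load-balancing behaviour the
// driver applies by default. "127.0.0.1" and "DC1" are placeholders.
Cluster cluster = Cluster.builder()
        .addContactPoint("127.0.0.1")
        .withLoadBalancingPolicy(
                new TokenAwarePolicy(new DCAwareRoundRobinPolicy("DC1")))
        .build();
```

Token awareness routes each request to a replica that owns the data, and the DC-aware child policy keeps traffic in the local datacenter with round-robin across its nodes.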
Hi all,
we are thinking about how best to proceed with availability testing of
Cassandra nodes. It is becoming more and more apparent that this is a rather
complex task. We thought that we should try to read from and write to a
"monitoring" keyspace on each Cassandra node, using a unique value with a low
TTL. This
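The probe keyspace described above might look something like this; all names and the RF/TTL values are assumptions for illustration:

```sql
-- One probe row per node, written with a short TTL so stale entries
-- expire on their own and a successful read also verifies freshness.
CREATE KEYSPACE IF NOT EXISTS monitoring
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

CREATE TABLE IF NOT EXISTS monitoring.probe (
    node_ip inet PRIMARY KEY,
    probe_value uuid,
    written_at timestamp
);

-- Probe write: the unique value expires after 30 seconds.
INSERT INTO monitoring.probe (node_ip, probe_value, written_at)
VALUES ('10.0.0.1', uuid(), dateof(now())) USING TTL 30;
```

Reading the row back at consistency level ONE from each node in turn, and comparing the unique value, distinguishes "node up and serving" from "row merely replicated".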
The preferred pattern for scaling data with Cassandra is to add nodes.
Growing the disk on each node is an anti-pattern. The key strength of
Cassandra is that it is a DISTRIBUTED database, so always keep your eye on
distributing your data.
But if you do need to grow disk, be sure to grow RAM and C
I run a 10-node Cassandra cluster in production. 99% writes; 1% reads, 0%
deletes. The nodes have 32 GB RAM; C* runs with an 8 GB heap. Each node has an
SSD for commitlog and 2x4 TB spinning disks for data (sstables). The schema
uses key caching only. C* version is 2.1.2.
It can be predicted that the
Hi, imagine I have a table "events"
with fields:
ymd int
user_id uuid
ts timestamp
attr_1
attr_2
with primary key ((ymd, user_id, ts))
and I set OrderPreservingPartitioner as a partitioner for the table
ymd is the int representation of the day: 20150410, 20150411, etc.
Can I select from table usin
To elaborate a bit on what Marcin said:
* Once a node starts to believe that a few other nodes are down, it seems
to stay that way for a very long time (hours). I'm not even sure it will
recover without a restart.
* I've tried to stop then start gossip with nodetool on the node that
thinks several