Short question afterwards:
I have read in the documentation that after a major compaction, minor
compactions are no longer triggered automatically.
Does this mean that I have to run nodetool compact regularly? Or is there a
way to get back to automatic minor compactions?
Thx,
Br,
Hello,
while trying out cassandra I read about the steps necessary to replace a
dead node. In my test cluster I used a setup using num_tokens instead of
initial_tokens. How do I replace a dead node in this scenario?
Thanks,
Jan
you could consider enabling leveled compaction:
http://www.datastax.com/dev/blog/leveled-compaction-in-apache-cassandra
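As a sketch only (the column family name and sstable size below are placeholders, and option names vary between versions), the switch in that era's CQL could look like:

```sql
-- Hypothetical column family; moves it from size-tiered to leveled
-- compaction, so small continuous compactions replace the major one.
ALTER TABLE mycf
  WITH compaction_strategy_class = 'LeveledCompactionStrategy'
  AND compaction_strategy_options:sstable_size_in_mb = 10;
```

With leveled compaction there is no concept of a "major" compaction locking out minor ones, which sidesteps the original problem.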
On Tue, Mar 5, 2013 at 9:46 AM, Matthias Zeilinger
matthias.zeilin...@bwinparty.com wrote:
Short question afterwards:
I have read in the documentation, that after a major
Check out
http://techblog.netflix.com/2012/07/benchmarking-high-performance-io-with.html
Netflix used Cassandra with SSDs and were able to drop their memcache layer.
Mind you they were not using it purely as an in memory KV store.
Ben
Instaclustr | www.instaclustr.com | @instaclustr
On
Without looking into details too closely, I'd say you're probably hitting
https://issues.apache.org/jira/browse/CASSANDRA-5292 (since you use
NTS+propertyFileSnitch+a DC name in caps).
Long story short, CREATE KEYSPACE interprets your DC-TORONTO as
dc-toronto, which then probably doesn't match
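If that is the cause, a workaround sketch in CQL 3 (keyspace name and replication factor are placeholders) is to pass the data center name as a quoted string so its case survives:

```sql
-- The DC key is a string literal here, so 'DC-TORONTO' keeps its case
-- and can match the name defined in the snitch's property file.
CREATE KEYSPACE myks
  WITH replication = {'class': 'NetworkTopologyStrategy',
                      'DC-TORONTO': 3};
```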
Any clue on this ?
2013/2/25 Alain RODRIGUEZ arodr...@gmail.com
Hi,
I am having issues after decommissioning 3 nodes one by one of my 1.1.6 C*
cluster (RF=3):
On the c.164 node, which was added a week after removing the 3 nodes,
with gossipinfo I have:
/a.135
RPC_ADDRESS:0.0.0.0
Hi Sylvain,
thanks for the fast answer. I have updated the keyspace definition and
cassandra-topology.properties on all 3 nodes and restarted each node.
Both problems are still reproducible. I'm not able to read my writes, and
the selects also show the same data as in my previous email.
for write and
Thank you, I was able to solve this one.
If I try:
SELECT * FROM CompositeUser WHERE userId='mevivs' LIMIT 100 ALLOW
FILTERING
it works. Somehow I got confused by
http://www.datastax.com/docs/1.2/cql_cli/cql/SELECT, which states:
SELECT select_expression
FROM
Try assassinate from JMX?
http://nartax.com/2012/09/assassinate-cassandra-node/
I finally used this solution... It always solves the problems of ghost
nodes :D. Last time I had unreachable nodes while describing cluster in CLI
(as described in the link) and I used the jmx
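A sketch of that JMX call with jmxterm (the jar path, JMX port and ghost node IP below are placeholders):

```shell
# Connect to the node's JMX port and assassinate the ghost endpoint.
java -jar jmxterm.jar -l localhost:7199 <<'EOF'
bean org.apache.cassandra.net:type=Gossiper
run unsafeAssassinateEndpoint 10.0.0.12
EOF
```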
Somebody in the group, please confirm whether this is an issue with the
SELECT syntax documentation that needs to be rectified.
-Vivek
On Tue, Mar 5, 2013 at 5:31 PM, Vivek Mishra mishra.v...@gmail.com wrote:
Thank you i am able to solve this one.
If i am trying as :
SELECT * FROM CompositeUser WHERE userId='mevivs' LIMIT
This is not an issue of Cassandra. In particular
http://cassandra.apache.org/doc/cql3/CQL.html#selectStmt is up to date.
It is an issue of the datastax documentation however. I'll see with them
that this gets resolved.
On Tue, Mar 5, 2013 at 3:26 PM, Vivek Mishra mishra.v...@gmail.com wrote:
Thanks!
Dean
On 3/4/13 7:12 PM, Wei Zhu wz1...@yahoo.com wrote:
We have 200G and ended up going with 10M. The compaction after repair takes
a day to finish. Try to run a repair and see how it goes.
-Wei
- Original Message -
From: Dean Hiller dean.hil...@nrel.gov
To:
So, I have added more logging to the test app (comments inline). For
some reason I'm losing updates. In a for loop I'm executing upload,
read writetime, download blob. Executed 10 times... See iterations 2
and 3
1. initialize session
0[main] INFO
Our upgradesstables completed but I still see *he-593-Data.db files and such,
which from testing seem to be the 1.1.x sstable names. I see the new
*ib-695-Data.db files.
Can I safely delete the *-he-* files now? I was expecting Cassandra to delete
them when it was done but maybe they are there
Details are here https://issues.apache.org/jira/browse/CASSANDRA-3271
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 4/03/2013, at 8:04 AM, Jason Wee peich...@gmail.com wrote:
version 1.0.8
Just curious, what is
Hinted Handoff works well. But it's an optimisation with certain safety
valves, configuration and throttling, which means it is still not considered
the way to ensure on-disk consistency.
In general, if a node restarts or drops mutations HH should get the message
there eventually. In
Was probably this https://issues.apache.org/jira/browse/CASSANDRA-4597
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 4/03/2013, at 2:05 PM, Hiller, Dean dean.hil...@nrel.gov wrote:
I was reading
That ticket says it was fixed in 1.1.5 and we are on 1.2.2. We upgraded from
1.1.4 to 1.2.2, ran upgradesstables and watched filenames change from *-he-*.db
to *-id-*.db, then changed compaction strategies and still had this issue. Is
it the fact we came from 1.1.4? Ours was a very simple 4
Well, no one says my assertion is false, so it is probably true.
Going further, what would be the steps to migrate from STCS to LCS? Are
there any precautions to take when doing it on C* 1.1.6 (like removing commit
logs, since drain is broken)?
Any insight or link on this procedure would be
+1. We are trying to figure that all out too.
I don't know if it helps but we finally upgraded to 1.2.2 which is supposed to
have better LCS support from what I understand. We did lots of QA testing and
jumped from 1.1.4. Rolling restart did not work at all in QA so we went with
take the
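On the 1.1-era cassandra-cli, the STCS-to-LCS switch can be sketched as follows (the keyspace, column family and sstable size are placeholders):

```
use myks;
update column family mycf
  with compaction_strategy = 'LeveledCompactionStrategy'
  and compaction_strategy_options = {sstable_size_in_mb: 10};
```

After the change, existing size-tiered sstables are gradually reorganized into levels by the compaction process.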
According to this:
https://issues.apache.org/jira/browse/CASSANDRA-5029
Bloom filter is still on by default for LCS in 1.2.X
Thanks.
-Wei
From: Hiller, Dean dean.hil...@nrel.gov
To: user@cassandra.apache.org user@cassandra.apache.org
Sent: Monday, March 4,
Great to know! 0.1 though is still a heck of a lot of savings compared to
0.01 (size-tiered) when using the bloom filter calculator.
Thanks for the info,
Dean
From: Wei Zhu wz1...@yahoo.commailto:wz1...@yahoo.com
Reply-To: user@cassandra.apache.orgmailto:user@cassandra.apache.org
I happened to notice some bizarre timestamps coming out of the
cassandra-cli. Example:
[default@XXX] get CF['e2b753aa33b13e74e5e803d787b06000'];
= (column=c35ef420-c37a-11e0-ac88-09b2f4397c6a, value=XXX,
timestamp=2013042719)
= (column=c3845ea0-c37a-11e0-8f6f-09b2f4397c6a, value=XXX,
There is no exact spec on timestamps; the convention is microseconds from
epoch, but you are free to use anything you want. To update a column you
only need a timestamp higher than the original.
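For example (a sketch; the table, values and the timestamp itself are hypothetical), CQL lets the client set the timestamp explicitly:

```sql
-- Explicit microseconds-since-epoch timestamp; a later write must use
-- a higher value to supersede this one.
INSERT INTO users (userid, name) VALUES ('mevivs', 'Vivek')
  USING TIMESTAMP 1362500000000000;
```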
On Tue, Mar 5, 2013 at 1:55 PM, Hiller, Dean dean.hil...@nrel.gov wrote:
Yes, clients can write timestamps in
To answer my own question here, we tested this out in QA and then ran in
production with no issues
Step 1. Upgrade to 1.2.2
Step 2. Start up all nodes
It works great. There was no need to run upgradesstables. That said, we
are doing a rolling upgradesstables on every node in production right
Otherwise, it means that conflict resolution strongly depends on a global
sequence id (the timestamp), which needs to be provided by the client?
Yes.
If you have an area of your data model that has a high degree of concurrency
C* may not be the right match.
In 1.1 we have atomic updates so clients see
The advantage of HH is that it reduces the probability of a DigestMismatch when
using a CL ONE. A DigestMismatch means the read has to run a second time
before returning to the client.
- No risk of hinted-handoffs building up
- No risk of hinted-handoffs flooding a node that just came up
AFAIK you just fire up the new one and let nature take its course :)
http://www.datastax.com/docs/1.2/operations/add_replace_nodes#replace-node
i.e. you do not need to use -Dcassandra.replace_token.
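Under that assumption, the replacement node's setup is just a matter of configuration (a sketch; the token count is illustrative):

```
# cassandra.yaml on the replacement node
num_tokens: 256
# leave initial_token unset, point seeds at the live nodes,
# then start the node and let it bootstrap normally
```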
Hope that helps.
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
If you have a data model with long lived and frequently updated rows, you can
get around the all fragments problem by running a user defined compaction.
Look for the CompactionManagerMbean on the JMX API
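A sketch of triggering it with jmxterm (the jar path, keyspace and sstable file name are placeholders; the operation lives on the CompactionManager MBean mentioned above):

```shell
# Compact one specific sstable so old row fragments get merged.
java -jar jmxterm.jar -l localhost:7199 <<'EOF'
bean org.apache.cassandra.db:type=CompactionManager
run forceUserDefinedCompaction myks myks-mycf-he-593-Data.db
EOF
```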
Don't forget you can test things
http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-1-live-traffic-sampling
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 5/03/2013, at 7:37 AM, Hiller, Dean
Hello Aaron,
thanks for your reply.
Found it just an hour ago on my own; yesterday I accidentally looked at
the 1.0 docs. Right now my replacement node is streaming from the others,
then more testing can follow.
Thanks again,
Jan
he-593-Data.db
Search the logs to see if there are any messages for the sstable.
Can I safely delete the *-he-* files now?
Maybe, maybe not.
Personally I would not.
Others can say yes.
Restart and see if it gets opened in the logs. If it's still there you can
either run another upgrade
bah, think I got confused by looking at the version in the email you linked to.
if the update CF call is not working, and this is QA, run it with DEBUG
logging and file a bug here https://issues.apache.org/jira/browse/CASSANDRA
Thanks
-
Aaron Morton
Freelance Cassandra
Short answer is no.
Medium answer is yes, but you won't like it.
Medium-to-long answer: to remove data with a high timestamp you need to
delete it with a higher timestamp, and make sure it is purged in compaction
by reducing gc_grace.
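As a sketch (the table name, row key and timestamp values are hypothetical):

```sql
-- Delete with a timestamp higher than the offending write, then lower
-- gc_grace_seconds so the tombstone is purged at the next compaction.
DELETE FROM events USING TIMESTAMP 9999999999999999
  WHERE id = 'bad-row';
ALTER TABLE events WITH gc_grace_seconds = 3600;
```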
But, if this is the schema it's probably best to take
Note that CQL 3 in 1.1 is compatible with CQL 3 in 1.2. Also you do not have
to use CQL 3, you can still use the cassandra-cli to create CF's.
The syntax you use to populate it depends on the client you are using.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New
Thanks @aaron for the rectification
On Wed, Mar 6, 2013 at 1:17 PM, aaron morton aa...@thelastpickle.comwrote:
Note that CQL 3 in 1.1 is compatible with CQL 3 in 1.2. Also you do not
have to use CQL 3, you can still use the cassandra-cli to create CF's.
The syntax you use to populate it