When I upgraded my system from 1.2.x to 2.0.x there was a simple hint:
never upgrade until the target release has at least 5 in the third
position; versions before x.x.5 are unstable and aren't ready for
production use. I don't know if it's still true, but be careful ;)
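The x.x.5 rule of thumb above can be expressed as a tiny check (this helper is purely illustrative, my own naming; it is not anything Cassandra ships):

```python
def production_ready(version: str, min_patch: int = 5) -> bool:
    """Heuristic from this thread: only x.y.5+ patch releases are
    considered stable enough for production use."""
    patch = int(version.split(".")[2])
    return patch >= min_patch

print(production_ready("2.0.5"))  # True
print(production_ready("2.0.2"))  # False
```

Of course this is just folklore encoded as code; whether a given release is production-ready depends on the actual changelog.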
Regards
Olek
2014-09-17
IMHO, delete-then-insert will take twice as much disk space as a
single insert, but after compaction the difference will disappear.
This was true in versions prior to 2.0, and it should still work this
way. But maybe someone will correct me if I'm wrong.
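A toy model of the claim above (this only illustrates last-write-wins merging; it is not Cassandra's actual SSTable or compaction machinery, and all names are mine):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cell:
    key: str
    value: Optional[str]  # None stands in for a tombstone
    timestamp: int

# delete-then-insert writes two entries on disk before compaction runs
delete_then_insert = [Cell("k1", None, 100), Cell("k1", "new", 200)]
# a plain overwrite writes just one
single_insert = [Cell("k1", "new", 200)]

def compact(cells):
    """Keep only the newest entry per key (last-write-wins merge)."""
    newest = {}
    for cell in cells:
        if cell.key not in newest or cell.timestamp > newest[cell.key].timestamp:
            newest[cell.key] = cell
    return list(newest.values())

print(len(delete_then_insert), "entries vs", len(single_insert))
print(compact(delete_then_insert) == single_insert)  # True after compaction
```

In real Cassandra the tombstone must also survive until `gc_grace_seconds` expires before compaction can actually drop it, so the space is reclaimed later than this sketch suggests.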
Cheers,
Olek
2014-09-10 18:30 GMT+02:00
after compaction you have only one line in the data file; at first there were two.
I hope my post is correct ;)
regards,
Olek
2014-09-10 18:56 GMT+02:00 Michal Budzyn michalbud...@gmail.com:
Would the factor before compaction always be 2?
On Wed, Sep 10, 2014 at 6:38 PM, olek.stas...@gmail.com
olek.stas...@gmail.com wrote:
IMHO
no real benefit I can think of for doing the delete first.
On Sep 10, 2014, at 2:25 PM, olek.stas...@gmail.com wrote:
I think so.
This is how I see it:
at the very beginning you have a line like this in the data file:
{key: [col_name, col_value, date_of_last_change]} // something similar,
I don't remember exactly
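The per-cell layout sketched above (column name, value, last-change timestamp) can be mocked up like this; the field order is the poster's recollection and the dict shape is my own illustration, not Cassandra's real storage format:

```python
import json

# one "line" in the data file: key -> [column name, value, last-change timestamp]
live_cell = {"key1": ["col_name", "col_value", 1410366000000]}

# a later delete does not rewrite that line in place; it appends a newer
# entry carrying a tombstone marker instead of a value
tombstone = {"key1": ["col_name", None, 1410366100000]}

print(json.dumps(live_cell))
print(json.dumps(tombstone))
```

Reads reconcile the two by timestamp, which is why the row "disappears" as soon as the tombstone is the newest entry.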
Bump one more time, could anybody help me?
regards
Olek
2014-03-19 16:44 GMT+01:00 olek.stas...@gmail.com olek.stas...@gmail.com:
Bump, could anyone comment on this behaviour? Is it correct, or should I
create a Jira task for these problems?
regards
Olek
2014-03-18 16:49 GMT+01:00 olek.stas...@gmail.com olek.stas...@gmail.com:
Oh, one more question: what should the configuration for storing the
system_traces keyspace be? Should
help me: how can I safely add a new DC to the cluster?
Regards
Aleksander
2014-03-14 18:28 GMT+01:00 olek.stas...@gmail.com olek.stas...@gmail.com:
OK, I'll do this during the weekend; I'll give you feedback on Monday.
Regards
Aleksander
14 Mar 2014 18:15 Robert Coli rc...@eventbrite.com
Oh, one more question: what should the configuration for storing the
system_traces keyspace be? Should it be replicated or stored locally?
Regards
Olek
2014-03-18 16:47 GMT+01:00 olek.stas...@gmail.com olek.stas...@gmail.com:
OK, I've dropped all system keyspaces, rebuilt the cluster and recovered the
schema
system_traces? Should it be
removed and recreated? What data is it holding?
best regards
Aleksander
2014-03-14 0:14 GMT+01:00 Robert Coli rc...@eventbrite.com:
On Thu, Mar 13, 2014 at 1:20 PM, olek.stas...@gmail.com
olek.stas...@gmail.com wrote:
Huh,
you mean a JSON dump?
If you're using
14 Mar 2014 18:15 Robert Coli rc...@eventbrite.com wrote:
On Fri, Mar 14, 2014 at 12:40 AM, olek.stas...@gmail.com
olek.stas...@gmail.com wrote:
OK, I see, so the data files stay in place, I have
Bump, are there any solutions to bring my cluster back to schema consistency?
I have a 6-node cluster with exactly six versions of the schema; how do I deal with it?
regards
Aleksander
2014-03-11 14:36 GMT+01:00 olek.stas...@gmail.com olek.stas...@gmail.com:
Didn't help :)
thanks and regards
Aleksander
Huh,
you mean a JSON dump?
Regards
Aleksander
2014-03-13 18:59 GMT+01:00 Robert Coli rc...@eventbrite.com:
On Thu, Mar 13, 2014 at 2:05 AM, olek.stas...@gmail.com
olek.stas...@gmail.com wrote:
Hi All,
I've faced an issue with Cassandra 2.0.5.
I have a 6-node cluster with the random partitioner, still using tokens
instead of vnodes.
Because we're changing hardware, we decided to migrate the cluster to 6 new
machines and to switch partitioning to vnodes rather than
token-based.
I've followed
(if
it is caused by CASSANDRA-6700 then you are in luck: it is fixed in 2.0.6).
Best wishes, Duncan.
On 11/03/14 13:30, olek.stas...@gmail.com wrote:
Didn't help :)
thanks and regards
Aleksander
2014-03-11 14:14 GMT+01:00 Duncan Sands duncan.sa...@gmail.com:
On 11/03/14 14:00, olek.stas...@gmail.com wrote:
I plan to install 2.0.6 as soon as it is available in the DataStax RPM
repo.
But how do I deal with schema inconsistency at such a scale
, olek.stas...@gmail.com
olek.stas...@gmail.com wrote:
No, I ran repair after upgradesstables. In fact it was about 4
weeks after, because of a bug:
If you only did a repair after you upgraded SSTables, when did you have an
opportunity to hit:
https://issues.apache.org/jira/browse
Seems good. I'll discuss it with the data owners and we'll choose the best method.
Best regards,
Aleksander
4 Feb 2014 19:40 Robert Coli rc...@eventbrite.com wrote:
On Tue, Feb 4, 2014 at 12:21 AM, olek.stas...@gmail.com
olek.stas...@gmail.com wrote:
I don't know what the real cause of my
Hi All,
We've faced a very similar effect after upgrading from 1.1.7 to 2.0 (via
1.2.10). Probably after upgradesstables (but it's only a guess,
because we noticed the problem a few weeks later), some rows became
tombstoned. They just disappeared from query results. After
investigation I noticed,
Aleksander
PS. I like your link, Rob, I'll pin it over my desk ;) In Oracle there
was a rule: never deploy an RDBMS before release 2 ;)
2014-02-03 Robert Coli rc...@eventbrite.com:
On Mon, Feb 3, 2014 at 12:51 AM, olek.stas...@gmail.com
olek.stas...@gmail.com wrote:
We've faced very similar effect
2014-02-03 Robert Coli rc...@eventbrite.com:
On Mon, Feb 3, 2014 at 1:02 PM, olek.stas...@gmail.com
olek.stas...@gmail.com wrote:
Today I've noticed that the oldest files with broken values appear during
repair (we do a repair once a week on each node). Maybe it's the repair
operation, which
Yes, as I wrote in my first e-mail. When I removed the key cache file,
Cassandra started without further problems.
regards
Olek
2013/11/13 Robert Coli rc...@eventbrite.com:
On Wed, Nov 13, 2013 at 12:35 AM, Tom van den Berge t...@drillster.com
wrote:
I'm having the same problem, after upgrading
Hello,
I'm facing bug https://issues.apache.org/jira/browse/CASSANDRA-6277.
After migrating to 2.0.2 I can't perform repair on my cluster (six
nodes). Repair on the biggest CF breaks with the error described in the Jira.
I know that there is probably a fix in the repository, but it's not
included in any
Hello,
I'm facing an OOM while reading the key cache.
The cluster configuration is as follows:
- 6 machines with 8 GB RAM each and three 150 GB disks each
- default heap configuration
- default key cache configuration
- the biggest keyspace is about 500 GB (RF: 2, so in fact there is
250 GB of raw data).
After upgrading
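The sizing arithmetic in the message above can be sanity-checked like this (the RF and keyspace size come from the message; the helper function is just my illustration):

```python
def raw_data_size_gb(keyspace_size_gb: float, replication_factor: int) -> float:
    """On-disk keyspace size counts every replica, so the unique
    ("raw") data volume is the total divided by the replication factor."""
    return keyspace_size_gb / replication_factor

print(raw_data_size_gb(500, 2))  # 250.0
```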