Hello,
When I execute cqlsh -e "SELECT statement ..", it gives the output with a
pipe ('|') separator. Is there any way I can change this default delimiter in
the output of cqlsh -e "SELECT statement .."?
Thanks & Regards, Hari
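As far as I know cqlsh has no switch to change the separator of plain SELECT
output itself; a common workaround (a sketch, assuming a keyspace/table ks.t with
columns col1 and col2, a hypothetical output path /tmp/t.csv, and data that
contains no '|') is either to post-process the output or to use COPY, which does
accept a DELIMITER option:

  cqlsh -e "SELECT col1, col2 FROM ks.t;" | sed 's/ *| */,/g'
  cqlsh -e "COPY ks.t (col1, col2) TO '/tmp/t.csv' WITH DELIMITER = ';' AND HEADER = true;"

The sed variant is crude (it also rewrites any '|' inside your data), while COPY
gives you a proper delimited export.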
Hi, folks: I am planning to upgrade our production from dsc 2.0.16 to 2.1.18
for 2 DCs (20 nodes each, 600GB per node). A few questions: 1) What happens when
doing a rolling upgrade? Let's say we upgrade only one node to the new version;
before upgrading sstables, the data coming in will stay in the node an
Hi, folks: I am planning to upgrade our production from dsc 2.0.16 to 2.1.18
for 2 DCs (20 nodes each, 600GB per node). A few questions: 1) What happens when
doing a rolling upgrade? Let's say we only upgrade one node to the new version;
before upgrading sstables, the data coming in will stay in the node and
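The usual per-node sequence for a rolling upgrade looks roughly like this (a
sketch, assuming a package install; service names and config paths vary with
your setup):

  nodetool drain            # flush memtables and stop accepting writes on this node
  sudo service cassandra stop
  # upgrade the binaries to 2.1.18 and reconcile any cassandra.yaml changes
  sudo service cassandra start
  nodetool upgradesstables  # can be deferred and throttled; old-format sstables stay readable

Writes arriving at an upgraded node are simply stored in the new sstable format
while the remaining old-format files stay readable; repairs and other streaming
operations are best postponed until every node in both DCs is on 2.1.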
Thanks Kurt.
We had one sstable from a cf of ours. I am actually running a repair on
that cf now and then plan to try and join the additional nodes as you
suggest. I deleted the corrupt opscenter sstables as well but will not
bother repairing those before adding capacity.
Been keeping an eye acr
Hi Asad,
The post flush task frees up allocated commit log segments.
Apart from commit log segment allocation, the post flush task "synchronises
custom secondary indexes and provides ordering guarantees for futures on
switchMemtable/flush
etc, which expect to be able to wait until the flush (and
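If you are watching this pool from the command line, it shows up in nodetool
tpstats (a sketch; the pool is reported as MemtablePostFlush on 2.1+):

  nodetool tpstats | grep -i -E 'pool|postflush'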
On 14 Aug. 2017 00:59, "Brian Spindler" wrote:
Do you think with the setup I've described I'd be ok doing that now to
recover this node?
The node died trying to run the scrub; I've restarted it but I'm not sure
it's going to get past a scrub/repair, which is why I deleted the other
files as a bru
Do you think with the setup I've described I'd be ok doing that now to
recover this node?
The node died trying to run the scrub; I've restarted it but I'm not sure
it's going to get past a scrub/repair, which is why I deleted the other
files as a brute force method. I think I might have to do the
Running repairs when you have corrupt sstables can spread the corruption.
In 2.1.15, corruption is almost certainly from something like a bad disk or bad
RAM.
One way to deal with corruption is to stop the node and replace it (with
-Dcassandra.replace_address) so you restream data from neighbors.
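A minimal sketch of that replacement flow (assuming a package install where JVM
options live in cassandra-env.sh, and that the node keeps its own IP):

  # with the node stopped and its data, commitlog and saved_caches directories cleared,
  # add this to cassandra-env.sh (and remove it again once streaming has finished):
  JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=<address_of_the_node_being_replaced>"

  sudo service cassandra start   # the node then bootstraps by restreaming from its neighbors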
Hi Jeff, I ran the scrub online and that didn't help. I went ahead and
stopped the node, deleted all the corrupted data files --*.db
files and planned on running a repair when it came back online.
Unrelated, I believe, but now another CF is corrupted!
org.apache.cassandra.io.sstable.CorruptSSTableExc
Hi Vlad,
Are you by any chance inserting null values? If so, you will create
tombstones. The workaround (Cassandra >= 2.2) is to leave those values unset on
your bound statement (see https://issues.apache.org/jira/browse/CASSANDRA-7304).
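With the Python driver that looks roughly like this (a sketch, assuming driver
3.x and a hypothetical table ks.t(id, a, b)):

  from cassandra.cluster import Cluster
  from cassandra.query import UNSET_VALUE

  session = Cluster(['127.0.0.1']).connect()
  prepared = session.prepare(
      "INSERT INTO ks.t (id, a, b) VALUES (?, ?, ?) IF NOT EXISTS")
  # binding None writes a null (i.e. a tombstone) for that column;
  # binding UNSET_VALUE leaves the column untouched instead
  session.execute(prepared, (1, 'some value', UNSET_VALUE))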
Cheers,
Christophe
On 13 August 2017 at 20:48, Vlad wrote:
> Hi,
>
> I
Hi,
I insert about 45000 rows into an empty table in Python using prepared statements
and IF NOT EXISTS. While reading after the insert I get warnings like: Server
warning: Read 5000 live rows and 33191 tombstone cells for query SELECT * FROM
... LIMIT 5000 (see tombstone_warn_threshold)
How it can happe