> Docker logs just outputs stderr and stdout. It doesn't show anything more
> than what I put in the top email
>
> On Mon, May 18, 2020 at 7:42 PM James Shaw wrote:
>
>> docker logs ... see any error in docker/container logs ?
>>
>> On Mon, May 18, 2020 at 10:27 PM Robert Snakard wrote:
docker logs ... do you see any error in the docker/container logs?
On Mon, May 18, 2020 at 10:27 PM Robert Snakard wrote:
> # cat /var/log/cassandra/system.log
> => cat: system.log: No such file or directory
>
> I've also checked other possible locations. Since this is occurring before
> startup, no logs
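For locating the logs in a containerized deployment, a minimal sketch (assuming the container is named cassandra; names, images, and log paths vary):
docker logs --tail 200 cassandra --- stdout/stderr captured by Docker, usually mirrors system.log
docker exec cassandra ls /var/log/cassandra /opt/cassandra/logs --- check the common log locations inside the container
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' cassandra --- exit code and error if the container died before startup finished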
Do you mean that you want to fix the sstable corruption error and don't mind
losing the test data? You may run nodetool scrub
or nodetool upgradesstables -a (-a means
re-write even sstables already on the current version).
Thanks,
James
On Mon, May 18, 2020 at 12:54 PM Leena Ghatpande wrote:
> Running cassandra 3.7
> ou
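A minimal sketch with hypothetical names (scrub drops unreadable rows, so only use it if the data is disposable or can be repaired afterwards):
nodetool scrub my_ks my_table --- rewrites the table's sstables, skipping corrupt rows
nodetool repair my_ks my_table --- optionally re-sync the scrubbed data from the other replicas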
Surbhi:
I don't think you can truncate a materialized view.
What exact error did you get? If you think it is the same as that bug, then you may
try to avoid the condition that triggers it. It mentions pending hints, so you may
let all hints be applied, then try dropping the view.
Thanks,
James
On Fri, May 15, 20
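A possible sequence, as a sketch (assuming a hypothetical view my_ks.my_view; hint-related pool names vary by version):
nodetool tpstats --- wait until the hint dispatch pools show 0 pending/active
nodetool statushandoff --- confirm hinted handoff state
cqlsh -e "DROP MATERIALIZED VIEW my_ks.my_view;" --- drop the view once hints have been applied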
atic
> backups, etc, then an operator makes it a lot easier to manage.
> Check it out on GitHub: https://github.com/instaclustr/cassandra-operator
> The project is approaching MVP status, and we would certainly appreciate
> any feedback or contributions.
>
> Regards,
> Adam
>
>
Hi there:
What are the latest good tools for monitoring open source Cassandra?
I was used to the DataStax OpsCenter tool, which made all tasks quite easy. Now, on a
new project with open source Cassandra in Kubernetes containers/Docker and logs in
Splunk, it feels very challenging.
The most wanted metrics are read / write
Most wanted metrics are read / write
You may go to the OS level and delete the files. That's what I did before. The truncate
action frequently fails on some remote nodes in a heavy-transaction
env.
Thanks,
James
On Thu, Aug 23, 2018 at 8:54 PM, Rahul Singh wrote:
> David ,
>
> What CL do you set when running this command?
>
> Rahul Sing
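If you do go the OS-level route, a rough sketch per node (assuming the default data directory /var/lib/cassandra/data and hypothetical my_ks / my_table; paths vary by install):
nodetool snapshot my_ks --- safety copy first; snapshots live in a subdirectory and are not touched below
sudo service cassandra stop --- stop the node so no sstables are in use
find /var/lib/cassandra/data/my_ks/my_table-*/ -maxdepth 1 -type f -delete --- removes the sstable files, leaves snapshots/ intact
sudo service cassandra start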
can you run this:
select associate_degree, writetime( associate_degree ) from user_data where
Thanks,
James
On Wed, Aug 22, 2018 at 7:13 PM, James Shaw wrote:
> can you run this:
> select writetime( associate_degree ) from user_data where
> see what are writetime
>
>
can you run this:
select writetime( associate_degree ) from user_data where
see what the writetime values are
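For example, with hypothetical key values (the actual WHERE clause depends on the table's primary key); writetime() returns the microsecond timestamp of the column's last write, which shows which write won:
cqlsh -e "SELECT associate_degree, writetime(associate_degree) FROM user_data WHERE userid = 'u1' AND secondaryid = 's1';"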
On Wed, Aug 22, 2018 at 7:03 PM, James Shaw wrote:
> interesting. what are insert statement and select statement ?
>
> Thanks,
>
> James
>
> On Wed, Aug 22, 2018 at 6:55
Interesting. What are the insert statement and select statement?
Thanks,
James
On Wed, Aug 22, 2018 at 6:55 PM, Gosar M wrote:
> CREATE TABLE user_data (
> "userid" text,
> "secondaryid" text,
> "tDate" timestamp,
> "tid3" text,
> "sid4" text,
> "pid5" text,
> associat
In a heavy-transaction PROD env it is risky, considering C* has a lot of
bugs.
The DDL has to be replicated to all nodes; use nodetool describecluster to
check that the schema version is the same on all nodes. If not, you may restart the node
to which the DDL was not replicated.
In newer versions, DDL is all or nothing, you ma
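For example (output layout varies by version):
nodetool describecluster --- under "Schema versions:", every node IP should appear beneath a single UUID
If a node is listed under a different UUID, one option is to restart it, or run nodetool resetlocalschema on it so it re-fetches the schema.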
Considerations:
whether the row size is large or not
whether there are a lot of updates - an update is actually an insert
whether it is read heavy or not
overall read performance
If the row size is large, you may consider a table user_detail and add an id column in
all tables.
On the application side, merge/join by id.
But you pay a read price: a 2nd query to user_de
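A rough sketch of the split, with hypothetical column names (the real layout depends on your query patterns):
cqlsh -e "CREATE TABLE user_detail (id uuid PRIMARY KEY, large_payload text);"
The application reads the main table first and only issues the 2nd query to user_detail by id when the large columns are actually needed.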
nodetool compactionstats --- see which table is compacting
nodetool cfstats keyspace_name.table_name --- check partition size,
tombstones
go to the data file directories: look at the data file sizes and timestamps ---
compaction writes to a new temp file with _tmplink...,
use sstablemetadata ...
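As a concrete example of the sstablemetadata check (a sketch with a hypothetical path; file names depend on the sstable format version):
sstablemetadata /var/lib/cassandra/data/my_ks/my_table-*/mc-1234-big-Data.db | grep -i droppable --- an "Estimated droppable tombstones" value well above 0 on an old, large file means tombstones are stuck waiting for compaction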
Does your application really need a counter? Just an option.
Thanks,
James
On Mon, Jul 23, 2018 at 10:57 AM, learner dba wrote:
> Thanks a lot Ben. This makes sense but feel bad that we don't have a
> solution yet. We can try consistency level one but that will be against
> general rule for ha
Other concerns:
there is no replication between 2.1 and 3.11; writes are stored in hints, and the hints are replayed
when the remote node is on the same version. You have to run repair if you exceed the hint window. If you
read at quorum 2/3, you will get an error.
In case you roll back to 2.1, it cannot read the new 3.11 data files, but during an
online rolling upgrade, some
nodetool repair -pr on every node --- covers all ranges of data.
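For example, run on each node in turn (a sketch; add keyspace/table names to limit the scope):
nodetool repair -pr --- repairs only this node's primary ranges, so running it node by node covers the whole ring exactly once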
On Sun, Jul 1, 2018 at 7:03 AM, Riccardo Ferrari wrote:
> Hi list,
>
> After long time of operation we come to the need of growing our cluster.
> This cluster was born on 2.X and almos 2 years ago migrated to 3.0.6 ( I
> kno
You may use: nodetool upgradesstables -a keyspace_name table_name
It will re-write this table's sstable files to the current version. While
re-writing, it will evict droppable tombstones (expired + gc_grace_seconds
(default 10 days)); if a partition crosses different files, those tombstones will still be
kept, but most
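For example, with hypothetical names (this rewrites every sstable of the table, so expect extra disk and compaction load while it runs):
nodetool upgradesstables -a my_ks my_table --- rewrite all sstables, evicting droppable tombstones along the way
nodetool compactionstats --- watch the rewrite progress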
I think it's case by case, depending on whether you are chasing read performance or write
performance, or both.
Ours are used for an application where read requests are 10 times higher than writes;
the application wants read performance and doesn't care about writes. We use 4 SSDs
of 380 GB each per node (total 1.5T a node), read la
Per my testing, repair does not help.
Repair builds Merkle trees to compare data; it only writes to a new file where
there is a difference, which ends up as a very, very small file (of course, that means most
data is already synced).
On Fri, Mar 9, 2018 at 10:31 PM, Madhu B wrote:
> Yasir,
> I think you need to run full repair in
Ours have a similar issue and I am working to solve it this weekend.
Our case is because STCS makes one huge table's sstable file bigger and
bigger after compaction (this is the nature of STCS compaction, nothing wrong);
even though almost all data has a 30-day TTL, tombstones are not evicted since the largest
file is wai
If it were me, I would go with 1 table; it is just too much labor to manage many tables,
and there is also the question of how reliable switching tables would be.
Regarding tombstones, you may try some ways to fight them (see the sketch after this list):
keep a reasonable partition size (a big partition with many tombstones will be a
problem);
avoid querying tombstones as much as possible, in app
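To check partition size and tombstones per read, a quick sketch (hypothetical keyspace/table names):
nodetool cfstats my_ks.my_table --- look at "Compacted partition maximum bytes" and "Average tombstones per slice (last five minutes)"
nodetool tablehistograms my_ks my_table --- partition size and cell count percentiles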
I see leveled compaction is used; if it is in the last level, it will have to stay until the
next compaction on that level happens, and then it will be gone, right?
On Thu, Feb 1, 2018 at 2:33 AM, Bo Finnerup Madsen wrote:
> Hi,
>
> We are running a small 9 node Cassandra v2.1.17 cluster. The cluster
> generally runs fine, but we