Re: TWCS and autocompaction

2018-01-16 Thread Cogumelos Maravilha
in recent time windows, but not the opposite. > Cheers, > On Tue, 16 Jan 2018 at 12:07, Cogumelos Maravilha <cogumelosmaravi...@sapo.pt> wrote: > Hi list, > My settings: > AND compaction = {'class': >

TWCS and autocompaction

2018-01-16 Thread Cogumelos Maravilha
Hi list, My settings: AND compaction = {'class': 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy', 'compaction_window_size': '4', 'compaction_window_unit': 'HOURS', 'enabled': 'true', 'max_threshold': '64', 'min_threshold': '2', 'tombstone_compaction_interval': '15000', 'tombston
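With 4-hour windows, the number of SSTable buckets TWCS keeps alive is roughly the table's TTL divided by the window size. A minimal sketch of that arithmetic (the 30-day TTL is an assumed value for illustration, not taken from the thread):

```python
import math

def twcs_window_count(ttl_hours, window_hours):
    # TWCS keeps roughly one compacted SSTable per time window,
    # so live windows ~= TTL / window size (rounded up).
    return math.ceil(ttl_hours / window_hours)

# 30-day TTL (assumed) with the 4-hour windows from the settings above
windows = twcs_window_count(30 * 24, 4)
```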

version 3.11.1 number_of_keys_estimate is missing

2017-10-11 Thread Cogumelos Maravilha
Hi list, After upgrading from 3.11.0 to 3.11.1 I've noticed in nodetool tablestats that the number_of_keys_estimate is missing. How can I get this value now? Thanks in advance. - To unsubscribe, e-mail: user-unsubscr...@cassandr

decommission mode with background compactors running

2017-10-04 Thread Cogumelos Maravilha
Hi list, I've decommissioned a node, but checking in the background with nodetool status I saw there were 4 compactions running while the SSTables were simultaneously being sent to other nodes. Is this safe, or should we disable all background processes before decommissioning, like: nodetool disableautocompaction nodet

Re: From SimpleStrategy to DCs approach

2017-09-16 Thread Cogumelos Maravilha
If zone a or b goes dark I want to keep my cluster alive with QUORUM on reading. That's why I imagine solving this with another node in a different location. Thanks On 15-09-2017 22:32, kurt greaves wrote: > You can add a tiny node with 3 tokens. It will own a very small amount > of data and be

Re: From SimpleStrategy to DCs approach

2017-09-15 Thread Cogumelos Maravilha
eping only 1 DC. The whole DC per > rack thing isn't necessary and will make your clients overly complicated. > > On 5 Sep. 2017 21:01, "Cogumelos Maravilha" <cogumelosmaravi...@sapo.pt> wrote: > > Hi list, > > CREATE KEYSPACE test

Re: truncate table in C* 3.11.0

2017-09-07 Thread Cogumelos Maravilha
arterolo > <http://linkedin.com/in/carlosjuzarterolo> > Mobile: +351 918 918 100 > www.pythian.com <http://www.pythian.com/> > > On Thu, Sep 7, 2017 at 10:07 AM, Cogumelos Maravilha <cogumelosmaravi...@sapo.pt> wrote: > > Hi list, &

truncate table in C* 3.11.0

2017-09-07 Thread Cogumelos Maravilha
Hi list, Using cqlsh: consistency all; select count(*) from table1; 219871 truncate table1; select count(*) from table1; 219947 There is a consumer reading data from Kafka and inserting into C* but the rate is around 50 inserts/minute. Cheers --

Re: C* 3 node issue -Urgent

2017-09-06 Thread Cogumelos Maravilha
After inserting a new node we should: ALTER KEYSPACE system_auth WITH REPLICATION = { 'class' : ... 'replication_factor' : x }; x = number of nodes in the DC. The default user and password should work: -u cassandra -p cassandra Cheers. On 23-08-2017 11:14, kurt greaves wrote: > The cassandra user requ
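The advice above (set system_auth's replication factor to the node count of the DC) can be sketched as a small statement builder; NetworkTopologyStrategy, the helper name, and the DC names are illustrative assumptions, and the resulting statement would be run in cqlsh:

```python
def system_auth_alter(dc_sizes):
    # Build the ALTER KEYSPACE statement, one replica per node in each DC
    # (dc_sizes maps datacenter name -> number of nodes in that DC).
    opts = ", ".join(f"'{dc}': {n}" for dc, n in sorted(dc_sizes.items()))
    return ("ALTER KEYSPACE system_auth WITH REPLICATION = "
            f"{{'class': 'NetworkTopologyStrategy', {opts}}};")

# Hypothetical 3-node DC named dc1
stmt = system_auth_alter({"dc1": 3})
```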

From SimpleStrategy to DCs approach

2017-09-05 Thread Cogumelos Maravilha
Hi list, CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '2'} AND durable_writes = true; I'm using C* 3.11.0 with 8 nodes at aws, 4 nodes at zone a and the other 4 nodes at zone b. The idea is to keep the cluster alive if zone a or b goes dark and keep Q
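The zone-failure concern in this thread comes down to quorum arithmetic; a minimal sketch under assumed replica placements (the per-zone layouts are examples, not the poster's actual topology):

```python
def quorum(rf):
    # QUORUM needs a strict majority of replicas
    return rf // 2 + 1

def quorum_survives_zone_loss(replicas_per_zone):
    # Worst case: the zone holding the most replicas of a partition goes dark.
    rf = sum(replicas_per_zone)
    return rf - max(replicas_per_zone) >= quorum(rf)

# RF=2 split across zones a and b: losing either zone breaks QUORUM reads
two_zones = quorum_survives_zone_loss([1, 1])
# A third replica in another location: one zone can go dark and QUORUM holds
three_zones = quorum_survives_zone_loss([1, 1, 1])
```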

Adding a new node with the double of disk space

2017-08-17 Thread Cogumelos Maravilha
Hi all, I need to add a new node to my cluster but this time the new node will have the double of disk space comparing to the other nodes. I'm using the default vnodes (num_tokens: 256). To fully use the disk space in the new node I just have to configure num_tokens: 512? Thanks in advance. -
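With randomly allocated vnodes, a node's expected ownership is proportional to its share of the cluster-wide token count, so doubling num_tokens roughly doubles the data the new node receives. A sketch of that proportion (the 5-node layout is an assumed example):

```python
def expected_ownership(num_tokens_per_node):
    # Each node's expected share of the ring is its token count
    # divided by the cluster-wide total (uniform-random vnodes).
    total = sum(num_tokens_per_node)
    return [t / total for t in num_tokens_per_node]

# Four existing nodes at 256 tokens plus one new node at 512:
shares = expected_ownership([256, 256, 256, 256, 512])
```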

Re: Deflate compressor

2017-07-08 Thread Cogumelos Maravilha
'2', 'unchecked_tombstone_compaction': 'true'} AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.DeflateCompressor'} Is this approach enough? Thanks. On 07/06/2017 06:27 PM, Jeff Jirsa wrote: > > On 2017-07-06 01:37 (-0700),

Re: Deflate compressor

2017-07-06 Thread Cogumelos Maravilha
on = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.DeflateCompressor'} There are some days that I have exactly 24 SSTables: ls -alFh *Data*|grep 'Jul 3'|wc 24 Others no: ls -alFh *Data*|grep 'Jul 2'|wc

Deflate compressor

2017-07-01 Thread Cogumelos Maravilha
Hi list, Is there a way to set Deflate level of compression? Brotli sounds good but unstable. I just need more compression ratio. I'm using C* 3.11.0 Cheers. - To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org F

Re: Node replacement strategy with AWS EBS

2017-06-13 Thread Cogumelos Maravilha
Simplest way of all: if you are using RF>=2, simply terminate the old instance and create a new one. Cheers. On 13-06-2017 18:01, Rutvij Bhatt wrote: > Nevermind, I misunderstood the first link. In this case, the > replacement would just be leaving the listen_address as is (to > InetAddress.getLoc

Re: Cassandra Server 3.10 unable to Start after crash - commitlog needs to be removed

2017-06-01 Thread Cogumelos Maravilha
You can also manually delete the corrupt log file. Just check its name in the logs. You may or may not lose some data, of course! Cheers On 01-06-2017 20:01, Peter Reilly wrote: > Please, how do you do this? > > Peter > > > On Fri, May 19, 2017 at 7:13 PM, Varun Gupta > wro

Re: EC2 instance recommendations

2017-05-23 Thread Cogumelos Maravilha
Exactly. On 23-05-2017 23:55, Gopal, Dhruva wrote: > > By that do you mean it’s like bootstrapping a node if it fails or is > shutdown and with a RF that is 2 or higher, data will get replicated > when it’s brought up? > > > > *From: *Cogumelos Maravilha > *Date:

Re: EC2 instance recommendations

2017-05-23 Thread Cogumelos Maravilha
Yes, we can only reboot. But using RF=2 or higher it's only a fresh node restart. EBS is a network-attached disk. Spinning disk or SSD is almost the same. It's better to take the "risk" and use type i instances. Cheers. On 23-05-2017 21:39, sfesc...@gmail.com wrote: > I think this is overstating

Re: InternalResponseStage low on some nodes

2017-05-23 Thread Cogumelos Maravilha
This is really atypical. What about nodetool compactionstats? Crontab jobs on each node, like nodetool repair, etc.? Also, security: do these 2 nodes have the same ports open? Same configuration, same JVM params? Is nodetool ring normal? Cheers. On 23-05-2017 20:11, Andrew Jorgensen wrote: > Hello, >

Re: Slowness in C* cluster after implementing multiple network interface configuration.

2017-05-23 Thread Cogumelos Maravilha
Hi, I never used version 2.0.x but I think port 7000 isn't enough. Try enabling: 7000 inter-node, 7001 SSL inter-node, 9042 CQL, 9160 Thrift (still enabled in that version). **In cassandra.yaml, add the property “broadcast_address” = local ipv4 **In cassandra.yaml, change “listen_address” to privat

Re: Bottleneck for small inserts?

2017-05-23 Thread Cogumelos Maravilha
Hi, Change to durable_writes = false And please post the results. Thanks. On 05/22/2017 10:08 PM, Jonathan Haddad wrote: > How many CPUs are you using for interrupts? > > http://www.alexonlinux.com/smp-affinity-and-proper-interrupt-handling-in-linux > > Have you tried making a flame graph

Re: Is it safe to upgrade 2.2.6 to 3.0.13?

2017-05-20 Thread Cogumelos Maravilha
It's better to wait for 3.0.14 https://issues.apache.org/jira/browse/CASSANDRA/fixforversion/12340362/?selectedTab=com.atlassian.jira.jira-projects-plugin:version-summary-panel Cheers. On 05/20/2017 11:31 AM, Stefano Ortolani wrote: > Hi Varun, > > can you elaborate a bit more? I have seen a schem

Re: Nodes stopping

2017-05-11 Thread Cogumelos Maravilha
218 > For some context, I'm trying to get regular repairs going but am > having issues with it. > > > On May 11 2017, at 2:10 pm, Cogumelos Maravilha > wrote: > > Can you grep ERROR system.log > > > On 11-05-2017 21:52, Daniel Steuernol wrote: >>

Re: Nodes stopping

2017-05-11 Thread Cogumelos Maravilha
Can you grep ERROR system.log On 11-05-2017 21:52, Daniel Steuernol wrote: > There is nothing in the system log about it being drained or shutdown, > I'm not sure how else it would be pre-empted. No one else on the team > is on the servers and I haven't been shutting them down. There also is > no

Try version 3.11

2017-05-06 Thread Cogumelos Maravilha
Hi all, deb http://www.apache.org/dist/cassandra/debian 310x main deb http://www.apache.org/dist/cassandra/debian 311x main deb http://www.apache.org/dist/cassandra/debian sid main deb http://www.apache.org/dist/cassandra/debian unstable main Is there a way to try C* version 3.11 binary before re

Re: Totally unbalanced cluster

2017-05-05 Thread Cogumelos Maravilha
arn can put you in some undesirable situations. That's > why I keep mentioning some blog posts, talks or documentations that I > think could be helpful to know Apache Cassandra internals and > processes a bit more. > > C*heers, > --- > Alain Ro

Re: DTCS to TWCS

2017-05-04 Thread Cogumelos Maravilha
Hi, Take a look to https://issues.apache.org/jira/browse/CASSANDRA-13038 Regards On 04-05-2017 18:22, vasu gunja wrote: > Hi All, > > We are currently on C* 2.1.13 version and we are using DTCS for our > tables. > We planning to move to TWCS. > > My questions > From which versions TWCS is avai

Re: Totally unbalanced cluster

2017-05-04 Thread Cogumelos Maravilha
lect.html. > What problem are you trying to solve here? Your data uses TTLs and > TWCS, so expired SSTables should be going away without any issue. > > 46 19 * * * root nodetool clearsnapshot > > > Again? What for? > > 50 23 * * * rootno

Totally unbalanced cluster

2017-05-04 Thread Cogumelos Maravilha
Hi all, I'm using C* 3.10. CREATE KEYSPACE mykeyspace WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '2'} AND durable_writes = false; CREATE TABLE mykeyspace.data ( id bigint PRIMARY KEY, kafka text ) WITH bloom_filter_fp_chance = 0.5 AND caching = {'keys': 'AL

Re: Node always dieing

2017-04-11 Thread Cogumelos Maravilha
"system_auth" not my table. On 04/11/2017 07:12 AM, Oskar Kjellin wrote: > You changed to 6 nodes because you were running out of disk? But you > still replicate 100% to all so you don't gain anything > > > > On 10 Apr 2017, at 13:48, Cogumelos Maravilha

Re: Node always dieing

2017-04-10 Thread Cogumelos Maravilha
ou have all six of your nodes as seeds? is it possible that > the last one you added used itself as the seed and is isolated? > > On Thu, Apr 6, 2017 at 6:48 AM, Cogumelos Maravilha <cogumelosmaravi...@sapo.pt> wrote: > > Yes C* is running as cassandra: > >

Re: Node always dieing

2017-04-07 Thread Cogumelos Maravilha
6/2017 06:13 PM, Carlos Rolo wrote: > i3 are having those issues more than the other instances it seems. Not > the first report I heard about. > Regards, > Carlos Juzarte Rolo > Cassandra Consultant / Datastax Certified Architect / Cassandra MVP > > Pythian - Love your data >

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
Yes, but this time I'm going to give lots of time between killing and pickup. Thanks a lot. On 04/06/2017 05:31 PM, Avi Kivity wrote: > > Your disk is bad. Kill that instance and hope someone else gets it. > > > On 04/06/2017 07:27 PM, Cogumelos Maravilha wrote:

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
> Is there anything in dmesg? > > > On 04/06/2017 07:25 PM, Cogumelos Maravilha wrote: >> >> Now dies and restart (systemd) without logging why >> >> system.log >> >> INFO [Native-Transport-Requests-2] 2017-04-06 16:06:55,362 >> AuthCache.java:172

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
On 04/06/2017 04:18 PM, Cogumelos Maravilha wrote: > find /mnt/cassandra/ \! -user cassandra > nothing > > I've found some "strange" solutions on Internet > chmod -R 2777 /tmp > chmod -R 2775 cassandra folder > > Lets give some time to see the result >

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
dra` run. Checking only the > top level directory ownership is insufficient, since root could own > files/dirs created below the top level. Find all files not owned by user > cassandra: `find /mnt/cassandra/ \! -user cassandra` > > Just another thought. > > -- Michael On 04/06/2

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
wrote: > There was some issue with the i3 instances and Cassandra. Did you had > this cluster running always on i3? > > On Apr 6, 2017 13:06, "Cogumelos Maravilha" <cogumelosmaravi...@sapo.pt> wrote: > > Limit Soft Limit

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
0 Max realtime timeout unlimited unlimited us Please find something wrong there! Thanks. On 04/06/2017 11:50 AM, benjamin roth wrote: > Limits: You should check them in /proc/$pid/limits > > 2017-04-06 12:48 GMT+02:00 Cogumelos

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
you checked the effective limits of a running CS process? > Is CS run as Cassandra? Just to rule out missing file perms. > > > On 06.04.2017 12:24, "Cogumelos Maravilha" <cogumelosmaravi...@sapo.pt> wrote: > > From cassandra.yaml: >

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
ropriate limit for max open files. Running > out of open files can also be a reason for the IO error. > > 2017-04-06 11:34 GMT+02:00 Cogumelos Maravilha <cogumelosmaravi...@sapo.pt>: > > Hi list, > > I'm using C* 3.10 in a 6 nodes cluster RF=2.

Node always dieing

2017-04-06 Thread Cogumelos Maravilha
Hi list, I'm using C* 3.10 in a 6 nodes cluster RF=2. All instances type i3.xlarge (AWS) with 32GB, 2 cores and SSD LVM XFS formated 885G. I have one node that is always dieing and I don't understand why. Can anyone give me some hints please. All nodes using the same configuration. Thanks in adva

How to add a node with zero downtime

2017-03-21 Thread Cogumelos Maravilha
Hi list, I'm using C* 3.10; authenticator: PasswordAuthenticator and authorizer: CassandraAuthorizer When adding a node and before |nodetool repair system_auth| finished all my clients die with: cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'10.100.100.19': Authentica

Re: Count(*) is not working

2017-02-16 Thread Cogumelos Maravilha
Selvam Raman wrote: > I am using cassandra 3.9. > > Primary Key: > id text; > > On Thu, Feb 16, 2017 at 12:25 PM, Cogumelos Maravilha <cogumelosmaravi...@sapo.pt> wrote: > > C* version please and partition key. > > > On 02/16/2017 12:18 PM, Se

Re: Count(*) is not working

2017-02-16 Thread Cogumelos Maravilha
C* version please and partition key. On 02/16/2017 12:18 PM, Selvam Raman wrote: > Hi, > > I want to know the total records count in table. > > I fired the below query: >select count(*) from tablename; > > and i have got the below output > > Read 100 live rows and 1423 tombstone cells for
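A common workaround for full-table counts that drown in tombstones or time out is to split the Murmur3 token ring into subranges and sum per-range counts. The range-splitting arithmetic alone, with the driver calls deliberately left out, can be sketched as:

```python
MIN_TOKEN = -2**63       # Murmur3Partitioner minimum token
MAX_TOKEN = 2**63 - 1    # Murmur3Partitioner maximum token

def token_subranges(n):
    # Split the full token range into n contiguous (start, end) pieces;
    # each piece can then be counted separately with
    #   SELECT count(*) FROM t WHERE token(pk) >= start AND token(pk) <= end
    span = (MAX_TOKEN - MIN_TOKEN) // n
    ranges, start = [], MIN_TOKEN
    for i in range(n):
        end = MAX_TOKEN if i == n - 1 else start + span - 1
        ranges.append((start, end))
        start = end + 1
    return ranges
```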

Re: inconsistent results

2017-02-14 Thread Cogumelos Maravilha
A little Python code would also help to debug: query = SimpleStatement( consistency_level=ConsistencyLevel.ANY) On 14-02-2017 21:43, Josh England wrote: > I'll try the repair. Using quorum tends to lead to too many > timeout problems though. :( > > -JE > > > On Tue, Feb 1

Extract big data to file

2017-02-08 Thread Cogumelos Maravilha
Hi list, My database stores data from Kafka. Using C* 3.0.10. In my cluster I'm using: AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'} The result of extracting one day of data uncompressed is around 360G. I've found these approaches: echo "SELECT kafka fr
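Whichever client does the querying, a 360G day argues for streaming rows straight into a compressed file rather than buffering them. A minimal sketch, assuming `rows` is any iterable of already-fetched text rows (the actual cassandra-driver paging is elided):

```python
import gzip

def dump_rows(rows, path):
    # Stream rows one at a time into a gzip file so a full day's
    # extract never has to fit in memory; returns the row count.
    count = 0
    with gzip.open(path, "wt", encoding="utf-8") as out:
        for row in rows:
            out.write(row)
            out.write("\n")
            count += 1
    return count
```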

Re: Global TTL vs Insert TTL

2017-02-01 Thread Cogumelos Maravilha
s needed, you can > enable TWCS as any other default compaction strategy. > > C*heers, > --- > Alain Rodriguez - @arodream - al...@thelastpickle.com > <mailto:al...@thelastpickle.com> > France > > The Last Pickle - Apache

Re: Global TTL vs Insert TTL

2017-01-31 Thread Cogumelos Maravilha
g/2016/12/08/TWCS-part1.html > http://thelastpickle.com/blog/2017/01/10/twcs-part2.html > > C*heers, > --- > Alain Rodriguez - @arodream - al...@thelastpickle.com > <mailto:al...@thelastpickle.com> > France > > The Last Pickle - Apac

Global TTL vs Insert TTL

2017-01-31 Thread Cogumelos Maravilha
Hi I'm just wondering which option is fastest: Global: create table xxx (... AND default_time_to_live = XXX; and UPDATE xxx USING TTL XXX; Line by line: INSERT INTO xxx (... USING TTL xxx; Is there an overhead using the line-by-line option, or wasted disk space?
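Functionally the two options converge: a table-level default_time_to_live is just the TTL applied when an insert doesn't carry its own USING TTL, and with equal values both yield the same expiry instant. A sketch of that resolution rule (function and parameter names are illustrative):

```python
def effective_expiry(write_ts, table_default_ttl, insert_ttl=None):
    # Per-insert USING TTL wins; otherwise the table's
    # default_time_to_live applies; 0/None means no expiry.
    ttl = insert_ttl if insert_ttl is not None else table_default_ttl
    return write_ts + ttl if ttl else None

# Same TTL either way -> same expiration timestamp
via_default = effective_expiry(1000, table_default_ttl=86400)
via_insert = effective_expiry(1000, table_default_ttl=0, insert_ttl=86400)
```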

Kill queries

2017-01-23 Thread Cogumelos Maravilha
Hi, I'm using cqlsh --request-timeout=1 but because I have more than 600,000,000 rows it sometimes blocks and I kill cqlsh. But what about the query still running in Cassandra? How can I check that? Thanks in advance.

Re: Is this normal!?

2017-01-11 Thread Cogumelos Maravilha
Nodetool repair always lists lots of data and never stays repaired, I think. Cheers On 01/11/2017 02:15 PM, Hannu Kröger wrote: > Just to understand: > > What exactly is the problem? > > Cheers, > Hannu > >> On 11 Jan 2017, at 16.07, Cogumelos Maravilha

Is this normal!?

2017-01-11 Thread Cogumelos Maravilha
Cassandra 3.9. nodetool status Datacenter: dc1 === Status=Up/Down |/ State=Normal/Leaving/Joining/Moving -- Address Load Tokens Owns (effective) Host ID Rack UN 10.0.120.145 1.21 MiB 256 49.5% da6683cd-c3cf-4c14