RE: Why "select count("*) from .." hangs ?

2014-03-25 Thread Pieter Callewaert
Hi Shalab, Are you using anything in the WHERE clause of the query? If not, you are doing a full scan of your data. In iteration 8 it will scan 1 500 000 entries, and the default timeout value is pretty low. If you do select count(*) from traffic_by_day where segment_id = 1 and day = 1 it sho
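
A minimal sketch of the two kinds of count, piped through cqlsh; the keyspace name is a placeholder, the table and column names come from the query quoted above, and it assumes segment_id and day form the partition key:

    # Unbounded count: the coordinator walks every partition of the table,
    # which easily exceeds the default range request timeout on ~1 500 000 rows.
    echo "SELECT count(*) FROM my_keyspace.traffic_by_day;" | cqlsh

    # Count restricted to a single partition: only that partition is read,
    # so it returns well within the default timeouts.
    echo "SELECT count(*) FROM my_keyspace.traffic_by_day WHERE segment_id = 1 AND day = 1;" | cqlsh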

RE: Getting into Too many open files issues

2013-11-07 Thread Pieter Callewaert
(Virtual Machines) Do we need to increase the nofile limits to more than 32768? On Thu, Nov 7, 2013 at 4:55 PM, Pieter Callewaert <pieter.callewa...@be-mobile.be> wrote: Hi Murthy, Did you do a package install (.deb?) or did you download the tar? If the latter, you have to

RE: Getting into Too many open files issues

2013-11-07 Thread Pieter Callewaert
000 files. (can be found in /etc/init.d/cassandra, FD_LIMIT). However, with 2.0.x I had to raise it to 1 000 000 because 100 000 was too low. Kind regards, Pieter Callewaert From: Murthy Chelankuri [mailto:kmurt...@gmail.com] Sent: Thursday 7 November 2013 12:15 To: user
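
For reference, a sketch of how to check and raise the limit on a node; the FD_LIMIT location comes from the thread, the other commands and paths are illustrative:

    # Limit the running Cassandra process is actually subject to
    grep "open files" /proc/$(pgrep -f CassandraDaemon | head -n1)/limits

    # Number of files it currently has open
    lsof -p $(pgrep -f CassandraDaemon | head -n1) | wc -l

    # Package installs: raise FD_LIMIT in /etc/init.d/cassandra.
    # Tarball installs: raise nofile in /etc/security/limits.conf, e.g.
    #   cassandra  -  nofile  100000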

RE: OpsCenter not connecting to Cluster

2013-10-29 Thread Pieter Callewaert
owing how to reproduce. I know the people at DataStax are now investigating this, but no fix yet... Kind regards, Pieter Callewaert -Original Message- From: Nigel LEACH [mailto:nigel.le...@uk.bnpparibas.com] Sent: Tuesday 29 October 2013 18:24 To: user@cassandra.apache.org Subject: OpsCente

RE: Too many open files (Cassandra 2.0.1)

2013-10-29 Thread Pieter Callewaert
ppens. -It's not socket related. -Using Oracle Java(TM) SE Runtime Environment (build 1.7.0_40-b43) -Using multiple data directories (maybe related?) I'm stuck at the moment; I don't know if I should try DEBUG logging because it will be too much information. Kind regard
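
A rough way to check the "not socket related" observation and see what is actually holding the descriptors; the commands are a sketch, not from the thread:

    PID=$(pgrep -f CassandraDaemon | head -n1)
    # Open sstable data files vs. open TCP sockets
    lsof -p "$PID" | grep -c "Data.db"
    lsof -p "$PID" | grep -c "TCP"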

Too many open files (Cassandra 2.0.1)

2013-10-29 Thread Pieter Callewaert
at java.lang.Thread.run(Thread.java:724) Several minutes later I get Too many open files. Specs: 12-node cluster with Ubuntu 12.04 LTS, Cassandra 2.0.1 (datastax packages), using JBOD of 2 disks. JNA enabled. Any suggestions? Kind regards, Pieter Callewaert

RE: default_time_to_live

2013-10-01 Thread Pieter Callewaert
Thanks, it works perfectly with ALTER TABLE. Silly that I didn't think of this. Maybe I overlooked it, but perhaps this should be added to the docs. Really a great feature! Kind regards, Pieter Callewaert Web & IT engineer Web:
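
A sketch of the ALTER TABLE approach referred to above; keyspace, table and TTL value are placeholders, not taken from the thread:

    # Set a default TTL (in seconds) on an existing table; rows written
    # afterwards without an explicit TTL expire after one day.
    echo "ALTER TABLE my_keyspace.my_table WITH default_time_to_live = 86400;" | cqlsh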

default_time_to_live

2013-10-01 Thread Pieter Callewaert
I doing something wrong? Or is this a bug? Kind regards, Pieter Callewaert Web & IT engineer Web: www.be-mobile.be Email: pieter.callewa...@be-mobile.be Tel: + 32 9 330 51 80

RE: cryptic exception in Hadoop/Cassandra job

2013-01-30 Thread Pieter Callewaert
@cassandra.apache.org Subject: Re: cryptic exception in Hadoop/Cassandra job Cassandra 1.1.5, using BulkOutputFormat Brian On Jan 30, 2013, at 7:39 AM, Pieter Callewaert wrote: > Hi Brian, > > Which version of Cassandra are you using? And are you using the BOF to write > to Cassandr

RE: cryptic exception in Hadoop/Cassandra job

2013-01-30 Thread Pieter Callewaert
Hi Brian, Which version of Cassandra are you using? And are you using the BOF to write to Cassandra? Kind regards, Pieter -Original Message- From: Brian Jeltema [mailto:brian.jelt...@digitalenvoy.net] Sent: Wednesday 30 January 2013 13:20 To: user@cassandra.apache.org Subject: cryptic e

RE: idea drive layout - 4 drives + RAID question

2012-10-30 Thread Pieter Callewaert
We also have 4-disk nodes, and we use the following layout: 2 x OS + commitlog in RAID 1, 2 x data disks in RAID 0. This gives us the advantage that we never have to reinstall the node when a drive crashes. Kind regards, Pieter From: Ran User [mailto:ranuse...@gmail.com] Sent: Tuesday 30 October 2012 4:3
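
A sketch of the cassandra.yaml side of that layout; the mount points and the package-install path are assumptions:

    # Inspect the directory settings (Debian/Ubuntu package path assumed)
    grep -A1 -E "^(data_file_directories|commitlog_directory)" /etc/cassandra/cassandra.yaml
    # Typical values for the layout described above:
    #   data_file_directories:
    #       - /mnt/data/cassandra                         # 2 x data disks in RAID 0
    #   commitlog_directory: /var/lib/cassandra/commitlog # on the OS RAID 1 pair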

RE: frequent node up/downs

2012-07-02 Thread Pieter Callewaert
Hi, Had the same problem this morning; it seems related to the leap second bug. Rebooting the nodes fixed it for me, but there seems to be a fix without rebooting the server as well. Kind regards, Pieter From: feedly team [mailto:feedly...@gmail.com] Sent: Monday 2 July 2012 17:09 To: user@cassandra

RE: forceUserDefinedCompaction in 1.1.0

2012-07-02 Thread Pieter Callewaert
Hi, While I was typing my mail I had the idea to try it with the new directory layout. It seems you have to change the parameter settings from 1.0 to 1.1. In 1.0: Param 1: Param 2: In 1.1: Param 1: Param 2: / Don't know if this is a bug or a breaking change? Kind regards, Pieter Calle

forceUserDefinedCompaction in 1.1.0

2012-07-02 Thread Pieter Callewaert
still active. Does this have something to do with the new directory structure in 1.1? Or have the parameters of the function changed? Kind regards, Pieter Callewaert

RE: supercolumns with TTL columns not being compacted correctly

2012-05-23 Thread Pieter Callewaert
? Kind regards, Pieter Callewaert From: Yuki Morishita [mailto:mor.y...@gmail.com] Sent: Tuesday 22 May 2012 16:21 To: user@cassandra.apache.org Subject: Re: supercolumns with TTL columns not being compacted correctly Data will not be deleted when those keys appear in other sstables outside of

RE: supercolumns with TTL columns not being compacted correctly

2012-05-22 Thread Pieter Callewaert
gc_grace is 0, but the data from the sstable is still being written to the new one, while I am 100% sure all the data is invalid. Kind regards, Pieter Callewaert From: samal [mailto:samalgo...@gmail.com] Sent: Tuesday 22 May 2012 14:33 To: user@cassandra.apache.org Subject: Re: supercolumns with TTL

supercolumns with TTL columns not being compacted correctly

2012-05-22 Thread Pieter Callewaert
andra cassandra 3.9G May 22 14:12 /data/MapData007/HOS-tmp-hc-196898-Data.db The sstable is being copied 1-on-1 to a new one. What am I missing here? TTL works perfectly, but is it causing a problem because it is in a super column, and so is never deleted from disk? Kind regards, Pieter Callewaert

RE: 1.1 not removing commit log files?

2012-05-21 Thread Pieter Callewaert
Hi, In 1.1 the commitlog files are pre-allocated as files of 128 MB. (https://issues.apache.org/jira/browse/CASSANDRA-3411) This should, however, not exceed your commitlog size limit in cassandra.yaml: commitlog_total_space_in_mb: 4096 Kind regards, Pieter Callewaert From: Bryce Godfrey
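
A quick way to see both halves of this on a running node; the paths assume a package install:

    # The cap the pre-allocated segments have to stay under
    grep commitlog_total_space_in_mb /etc/cassandra/cassandra.yaml

    # The 128 MB pre-allocated segment files themselves
    ls -lh /var/lib/cassandra/commitlog | head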

RE: sstableloader 1.1 won't stream

2012-05-18 Thread Pieter Callewaert
Hi, Sorry to say I didn't look further into this. I'm using CentOS 6.2 now for the loader without any problems. Kind regards, Pieter Callewaert -Original Message- From: sj.climber [mailto:sj.clim...@gmail.com] Sent: Friday 18 May 2012 3:56 To: cassandra-u...@incubator.apache.o

RE: sstableloader 1.1 won't stream

2012-05-10 Thread Pieter Callewaert
the cassandra.yaml, or is it completely independent? Kind regards -Original Message- From: Pieter Callewaert [mailto:pieter.callewa...@be-mobile.be] Sent: Wednesday 9 May 2012 17:41 To: user@cassandra.apache.org Subject: RE: sstableloader 1.1 won’t stream I don’t see any entr

RE: sstableloader 1.1 won't stream

2012-05-09 Thread Pieter Callewaert
(CentOS release 5.8 (Final)) not running Cassandra to a 3-node Cassandra cluster. All running 1.1. My next step will be to try to use sstableloader on one of the nodes from the cluster, to see if that works... If anyone has any other ideas, please share. Kind regards, Pieter Callewaert -

RE: sstableloader 1.1 won't stream

2012-05-08 Thread Pieter Callewaert
won't stream You may want to upgrade all your nodes to 1.1. The streaming process connect to every living nodes of the cluster (you can explicitely diable some nodes), so all nodes need to speak 1.1. 2012/5/7 Pieter Callewaert : > Hi, > > > > I’m trying to upgrade our

sstableloader 1.1 won't stream

2012-05-07 Thread Pieter Callewaert
/10.10.10.102 0/1 (0)] [/10.10.10.100 0/1 (0)] [/10.10.10.101 0/1 (0)] [total: 0 - 0MB/s (avg: 0MB/s)] ... Does anyone have any idea what I'm doing wrong? Kind regards, Pieter Callewaert
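
For context, a sketch of an sstableloader invocation; the path is a placeholder following the keyspace/column-family directory convention, the host is one of the IPs in the output above, and the -d flag naming the initial contact node is the form used by current releases, which may differ slightly in 1.1:

    # Stream the sstables under MyKeyspace/MyColumnFamily into the cluster,
    # using one live node as the initial contact point.
    sstableloader -d 10.10.10.100 /path/to/MyKeyspace/MyColumnFamily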