gc_grace_seconds to 0?

2013-10-16 Thread Arindam Barua

We don't do any deletes in our cluster, but we do set TTLs of 8 days on most of 
the columns. After reading a bunch of earlier threads, I have concluded that I 
can safely set gc_grace_seconds to 0 and not have to worry about expired 
columns coming back to life. However, I wanted to know if there is any other 
downside to setting gc_grace_seconds to 0. E.g., I saw a mention that the TTL 
of hints is set to gc_grace_seconds.
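For concreteness, the per-table change being considered would look something like the following; the keyspace and table names here are hypothetical, and this is only safe for workloads that use TTLs and never issue explicit DELETEs:

```sql
-- Hypothetical keyspace/table; only consider this when the table
-- receives TTL'd writes and no explicit DELETEs, so there are no
-- tombstones that need the grace period to propagate via repair.
ALTER TABLE myks.events WITH gc_grace_seconds = 0;
```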

Thanks,
Arindam


Re: Cassandra Agent

2013-10-16 Thread Sean McCully

On Wednesday, October 16, 2013 08:10:10 AM Romain HARDOUIN wrote:
> > Can you be more specific?
> 
> I noticed you're a Stacker ;)
> So I mean DBaaS like OpenStack Trove.
> Cassandra support is in progress (targeted for Icehouse release):
> https://blueprints.launchpad.net/trove/+spec/cassandra-db-support
I thought that's what you might be referring to, though I haven't actually 
spent any time with Trove. Now that you've pointed it out, I'll at least be 
aware of it.



-- 
Sean McCully



Opscenter 3.2.2 (?) jmx auth issues

2013-10-16 Thread Sven Stark
Hi guys,

we have secured C* JMX with a username/password. We upgraded our OpsCenter from
3.0.2 to 3.2.2 last week and noticed that the agents could no longer
connect:

ERROR [jmx-metrics-4] 2013-10-17 00:45:54,437 Error getting general metrics
java.lang.SecurityException: Authentication failed! Credentials required
at
com.sun.jmx.remote.security.JMXPluggableAuthenticator.authenticationFailure(JMXPluggableAuthenticator.java:193)
at
com.sun.jmx.remote.security.JMXPluggableAuthenticator.authenticate(JMXPluggableAuthenticator.java:145)

even though the correct credentials were present in
/etc/opscenter/clusters/foo-cluster.conf:

[jmx]
username = secret
password = verysecret
port = 20001

Checks with other JMX-based tools (nodetool, jmxtrans) confirm that the JMX
setup is correct.

Downgrading Opscenter to 3.0.2 immediately resolved the issue. Could
anybody confirm whether that's a known bug?


Cheers,
Sven


Re: Automated backup and restore of Cassandra 2.0

2013-10-16 Thread Robert Coli
On Wed, Oct 16, 2013 at 11:04 AM, David Laube  wrote:

> I would like to handle this either in-house or with open source software
> instead of going down the Datastax route. Can anyone suggest some methods
> of accomplishing this goal in a straight-forward way? Thanks!
>

https://github.com/synack/tablesnap

Tablesnap has an associated tool (contributed by kad when @Eventbrite, go
us!) called tableslurp, used on the restore side.

=Rob
PS - There is also a librato fork with some changes/improvements. synack
maintains tablesnap and is quite good about merging sane pull
requests, so I run the mainline one. :D


Automated backup and restore of Cassandra 2.0

2013-10-16 Thread David Laube
Hi All,

I was wondering if anyone on the list could make some recommendations as to how 
you are currently backing up and restoring your ring in an automated manner. We 
are deploying Cassandra 2.0 with Chef and as part of our release process, we 
test our application in a full staging environment before going to production. 
From what I understand about the snapshot/restore process, this requires that 
we restore X number of individual node snapshots to X number of nodes in a 
separate staging-ring. I would like to handle this either in-house or with open 
source software instead of going down the Datastax route. Can anyone suggest 
some methods of accomplishing this goal in a straight-forward way? Thanks!

Best regards,
-David Laube

Re: DELETE does not delete :)

2013-10-16 Thread Nate McCall
This is almost a guaranteed sign that the clocks are off in your cluster.
If you run the select query a couple of times in a row right after
deletion, do you see the data appear again?
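One way to look for skew is to compare the write timestamps Cassandra recorded for the same row on each node. Using the table definitions that appear later in this thread (the key value below is hypothetical), a cqlsh sketch might look like:

```sql
-- Run in cqlsh against each node in turn and compare the results.
-- WRITETIME returns the microsecond timestamp stored with the cell;
-- large differences between replicas for the same write suggest
-- the coordinators' clocks are drifting apart.
CONSISTENCY ONE;
SELECT key, WRITETIME(entity_status)
  FROM bof.bookingfile
 WHERE key = 'some-key';
```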


On Wed, Oct 16, 2013 at 12:12 AM, Alexander Shutyaev wrote:

> Hi all,
>
> Unfortunately, we still have a problem. I've modified my code, so that it
> explicitly sets the consistency level to *QUORUM* for each query.
> However, we found a few cases where the record was deleted on only *1
> node of 3*. In these cases the *delete* query executed OK, and the
> *select* query that we ran right after the delete returned *0* rows.
> Later, when we ran a global daily check, the *select* returned *1* row.
> How can that be? What could we be missing?
>
>
> 2013/10/7 Jon Haddad 
>
>> I haven't used VMWare but it seems odd that it would lock up the ntp
>> port.  Try "ps aux | grep ntp" to see if ntpd is already running.
>>
>> On Oct 7, 2013, at 12:23 AM, Alexander Shutyaev 
>> wrote:
>>
>> Hi Michał,
>>
>> I didn't notice your message at first... Well, this seems like a real cause
>> candidate. I'll add an explicit consistency level of QUORUM and see if that
>> helps. Thanks
>>
>>
>> 2013/10/7 Alexander Shutyaev 
>>
>>> Hi Nick,
>>>
>>> Thanks for the note! We have our Cassandra instances installed on virtual
>>> hosts in VMWare, and clock synchronization is handled by the latter, so
>>> I can't use ntpdate (it says the NTP socket is in use). Is there any way to
>>> check whether the clocks are really synchronized? My best attempt was using
>>> three shell windows with the commands already typed, requiring only
>>> clicking on a window and hitting enter. The results varied by 100-200
>>> msec, which I guess is about the time it takes me to click and press enter :)
>>>
>>> Thanks in advance,
>>> Alexander
>>>
>>>
>>> 2013/10/7 Nikolay Mihaylov 
>>>
 Hi

 my two cents - before doing anything else, make sure the clocks are
 synchronized to the millisecond.
 NTP will do that for you.

 Nick.


 On Mon, Oct 7, 2013 at 9:02 AM, Alexander Shutyaev 
 wrote:

> Hi all,
>
> We have encountered the following problem with cassandra.
>
> * We use *cassandra v2.0.0* from *Datastax* community repo.
>
> * We have *3 nodes* in a cluster, all of them are seed providers.
>
> * We have a *single keyspace* with *replication factor = 3*:
>
> CREATE KEYSPACE bof WITH replication = {
>   'class': 'SimpleStrategy',
>   'replication_factor': '3'
> };
>
> * We use *Datastax Java CQL Driver v1.0.3* in our application.
>
> * We have not modified any *consistency settings* in our app, so I
> assume we have the *default QUORUM* (2 out of 3 in our case)
> consistency *for reads and writes*.
>
> * We have 400+ tables which can be divided into two groups (*main* and
> *uids*). All tables in a group share the same definition; they vary
> only by name. The sample definitions are:
>
> CREATE TABLE bookingfile (
>   key text,
>   entity_created timestamp,
>   entity_createdby text,
>   entity_entitytype text,
>   entity_modified timestamp,
>   entity_modifiedby text,
>   entity_status text,
>   entity_uid text,
>   entity_updatepolicy text,
>   version_created timestamp,
>   version_createdby text,
>   version_data blob,
>   version_dataformat text,
>   version_datasource text,
>   version_modified timestamp,
>   version_modifiedby text,
>   version_uid text,
>   version_versionnotes text,
>   version_versionnumber int,
>   versionscount int,
>   PRIMARY KEY (key)
> ) WITH
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   index_interval=128 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   default_time_to_live=0 AND
>   speculative_retry='NONE' AND
>   memtable_flush_period_in_ms=0 AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'LZ4Compressor'};
>
> CREATE TABLE bookingfile_uids (
>   date text,
>   timeanduid text,
>   deleted boolean,
>   PRIMARY KEY (date, timeanduid)
> ) WITH
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   index_interval=128 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   default_time_to_live=0 AND
>   speculative_retry='NONE' AND
>   memtable_flush_period_in_ms=0 AND

Re: DELETE does not delete :)

2013-10-16 Thread Daniel Chia
What is gc_grace_seconds set to for the CF you're deleting from?

Thanks,
Daniel


On Tue, Oct 15, 2013 at 10:12 PM, Alexander Shutyaev wrote:

> Hi all,
>
> Unfortunately, we still have a problem. I've modified my code, so that it
> explicitly sets the consistency level to *QUORUM* for each query.
> However, we found a few cases where the record was deleted on only *1
> node of 3*. In these cases the *delete* query executed OK, and the
> *select* query that we ran right after the delete returned *0* rows.
> Later, when we ran a global daily check, the *select* returned *1* row.
> How can that be? What could we be missing?
>
>
> [remainder of quoted thread trimmed; it duplicates the quotes in the previous message]

Data stored using libQtCassandra not being displayed in the database...

2013-10-16 Thread Krishna Chaitanya
Hello,
 I am currently working on a project in which I store
netflow packets from a netflow collector into shared memory, read the
packets from there, and store them into Cassandra. When I read the data and
print it, the output is something like this:
Collecting new packet---numpkts:59read_index:36024unread_bucket:1

The packet is:.��R_T�n�aї�
HQ���.�X.�ⓣ�
d2a���.��.�����
d2
d3���.�.
Value is:" "

When I try to append this to a QByteArray (in libQtCassandra), store it
into the database, and then read the value back, all I get is 'Value is: "
"' (the last line in the output above). I just want to store the whole
packet as a single string for performance reasons. I'm having problems
storing any datatype other than char*; if I try to retrieve any
other datatype, it shows only " ".
So I have to cast every data type to a char*. But here the packets are
stored in a character array, and it is still not displaying the required data,
just "". Can someone help me? Thanks in advance.
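One likely culprit: raw netflow packets are arbitrary bytes, including NUL bytes, so treating them as char* strings or text will truncate or mangle them at the first NUL. Storing the payload in a blob column sidesteps that. A minimal sketch, with hypothetical keyspace/table/column names:

```sql
-- Hypothetical schema: keep the raw packet bytes in a blob column
-- rather than text, so embedded NUL bytes and non-UTF-8 sequences
-- survive the round-trip through the database.
CREATE TABLE netflow.packets (
  id timeuuid PRIMARY KEY,
  raw blob
);
```

On the client side, the bytes would then need to be written with a binary-safe constructor (e.g. one that takes an explicit length) rather than a NUL-terminated string conversion.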

-- 
Regards,
BNSK.