Re: Timeouts but returned consistency level is invalid

2015-01-30 Thread Philip Thompson
Jan is incorrect. Keyspaces do not have consistency levels set on them.
Consistency Levels are always set by the client. You are almost certainly
running into https://issues.apache.org/jira/browse/CASSANDRA-7947 which is
fixed in 2.1.3 and 2.0.12.
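
For example, with the DataStax Python driver you can pin the level explicitly
per statement instead of relying on a library default. A minimal sketch; the
contact point, keyspace and table names below are placeholders:

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    cluster = Cluster(['127.0.0.1'])          # placeholder contact point
    session = cluster.connect('my_keyspace')  # placeholder keyspace

    # Request LOCAL_QUORUM explicitly for this statement rather than
    # relying on whatever default the library applies.
    query = SimpleStatement(
        "SELECT * FROM my_table WHERE id = %s",  # placeholder table
        consistency_level=ConsistencyLevel.LOCAL_QUORUM)
    rows = session.execute(query, (42,))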

On Fri, Jan 30, 2015 at 8:37 AM, Michał Łowicki mlowi...@gmail.com wrote:

 Hi Jan,

 I'm using only one keyspace. Even if it defaults to ONE, why is ALL
 sometimes returned?

 On Fri, Jan 30, 2015 at 2:28 PM, Jan cne...@yahoo.com wrote:

 HI Michal;

 The consistency level defaults to ONE for all write and read operations.
 However consistency level is also set for the keyspace.

 Could it be possible that your queries are spanning multiple keyspaces
 which bear different levels of consistency ?

 cheers
 Jan

 C* Architect


   On Friday, January 30, 2015 1:36 AM, Michał Łowicki mlowi...@gmail.com
 wrote:


 Hi,

 We're using C* 2.1.2, django-cassandra-engine which in turn uses
 cqlengine. LOCAL_QUORUM is set as the default consistency level. From time to
 time we get timeouts while talking to the database but, strangely, the
 returned consistency level is not LOCAL_QUORUM:

 code=1200 [Coordinator node timed out waiting for replica nodes' responses] 
 message=Operation timed out - received only 3 responses. 
 info={'received_responses': 3, 'required_responses': 4, 'consistency': 'ALL'}


 code=1200 [Coordinator node timed out waiting for replica nodes' responses] 
 message=Operation timed out - received only 1 responses. 
 info={'received_responses': 1, 'required_responses': 2, 'consistency': 
 'LOCAL_QUORUM'}


 code=1100 [Coordinator node timed out waiting for replica nodes' responses] 
 message=Operation timed out - received only 0 responses. 
 info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}


 Any idea why it might happen?

 --
 BR,
 Michał Łowicki





 --
 BR,
 Michał Łowicki



RE: Tombstone gc after gc grace seconds

2015-01-30 Thread Ravi Agrawal
I did a small test. I wrote data to 4 different column family. 30MB of data.
256 rowkeys and 100K columns on an average.
And then deleted all data from all of them.


1.   Md_normal - created with the default compaction parameters and
gc_grace_seconds of 5 seconds. Data was written and then deleted. Compaction was
run using nodetool compact <keyspace> <columnfamily> - I see the full data on
disk, but cannot query columns (since the data was deleted - consistent
behavior) and cannot query rows in cqlsh. Hits timeout.

2.   Md_test - created with the following compaction parameters -
compaction={'tombstone_threshold': '0.01', 'class':
'SizeTieredCompactionStrategy'} and gc_grace_seconds of 5 seconds. Disk size
is reduced, and I am able to query rows, which return 0.

3.   Md_test2 - created with the following compaction parameters -
compaction={'tombstone_threshold': '0.0', 'class':
'SizeTieredCompactionStrategy'}. Disk size is reduced; not able to query rows
using cqlsh. Hits timeout.

4.   Md_forcecompact - created with the compaction parameters
compaction={'unchecked_tombstone_compaction': 'true', 'class':
'SizeTieredCompactionStrategy'} and gc_grace_seconds of 5 seconds. Data was
written and then deleted. I see the full data on disk, but cannot query any
data using mddbreader and cannot query rows in cqlsh. Hits timeout.

The next day the sizes were -
30M   ./md_forcecompact
4.0K  ./md_test
304K  ./md_test2
30M   ./md_normal

To give a feel for the data we have -
8000 row keys per day, with columns added throughout the day; 300K columns on
average per row key.
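
For reference, a rough sketch of setting such per-table options through the
Python driver (the contact point and the ks.md_test name are placeholders, not
the exact setup used in this test):

    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])   # placeholder contact point
    session = cluster.connect()

    # Mirror the aggressive tombstone settings from the test above;
    # the keyspace/table name is hypothetical.
    session.execute("""
        ALTER TABLE ks.md_test
        WITH compaction = {'class': 'SizeTieredCompactionStrategy',
                           'tombstone_threshold': '0.01',
                           'unchecked_tombstone_compaction': 'true'}
        AND gc_grace_seconds = 5
    """)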



From: Alain RODRIGUEZ [mailto:arodr...@gmail.com]
Sent: Friday, January 30, 2015 4:26 AM
To: user@cassandra.apache.org
Subject: Re: Tombstone gc after gc grace seconds

The point is that all the parts or fragments of the row need to be in the 
SSTables implied in the compaction for C* to be able to evict the row 
effectively.

My understanding of those parameters is that they will trigger a compaction on
any SSTable that exceeds this ratio. This will work properly if you never
update a row (by modifying a value or adding a column). If your workflow is
something like "write once per partition key", this parameter will do the job.

If you have fragments, you might trigger this compaction for nothing. In the
case of frequently updated rows (like when using wide rows / time series) your
only way to get rid of tombstones is a major compaction.

That's how I understand this.

Hope this helps,

C*heers,

Alain

2015-01-30 1:29 GMT+01:00 Mohammed Guller 
moham...@glassbeam.com:
Ravi -

It may help.

What version are you running? Do you know if minor compaction is getting 
triggered at all? One way to check would be to see how many SSTables the data 
directory has.

Mohammed

From: Ravi Agrawal 
[mailto:ragra...@clearpoolgroup.com]
Sent: Thursday, January 29, 2015 1:29 PM
To: user@cassandra.apache.org
Subject: RE: Tombstone gc after gc grace seconds

Hi,
I saw there are 2 more interesting parameters -

a.   tombstone_threshold - A ratio of garbage-collectable tombstones to all 
contained columns, which if exceeded by the SSTable triggers compaction (with 
no other SSTables) for the purpose of purging the tombstones. Default value - 
0.2

b.  unchecked_tombstone_compaction - True enables more aggressive than 
normal tombstone compactions. A single SSTable tombstone compaction runs 
without checking the likelihood of success. Cassandra 2.0.9 and later.
Could I use these to get what I want?
The problem I am encountering is that even long after gc_grace_seconds I see no
reduction in disk space until I run compaction manually. I was thinking of
setting tombstone_threshold close to 0 and unchecked_tombstone_compaction to true.
Also, we are not running nodetool repair on a weekly basis as of now.

From: Eric Stevens [mailto:migh...@gmail.com]
Sent: Monday, January 26, 2015 12:11 PM
To: user@cassandra.apache.org
Subject: Re: Tombstone gc after gc grace seconds

My understanding is consistent with Alain's, there's no way to force a 
tombstone-only compaction, your only option is major compaction.  If you're 
using size tiered, that comes with its own drawbacks.

I wonder if there's a technical limitation that prevents introducing a shadowed 
data cleanup style operation (overwritten data, including deletes, plus 
tombstones past their gc grace period); or maybe even couple it directly with 
cleanup since most of the work (rewriting old SSTables) would be identical.  I 
can't think of something off the top of my head, but it would be so useful that 
it seems like there's got to be something I'm missing.

On Mon, Jan 26, 2015 at 4:15 AM, Alain RODRIGUEZ 
arodr...@gmail.com wrote:
I don't think that such a thing exists as SSTables are immutable. You compact 
it entirely or you don't. Minor compaction will eventually evict tombstones. If 

Re: Unable to create a keyspace

2015-01-30 Thread Adam Holmberg
I would first ask if you could upgrade to the latest version of Cassandra
2.1.x (presently 2.1.2).

If the issue still occurs consistently, it would be interesting to turn up
logging on the client side and see if something is causing the client to
disconnect during the metadata refresh following the schema change. If this
yields further information, please raise the issue on the driver's user
mailing list.
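
For instance, if the client is the DataStax Python driver, raising the
driver's log level is usually enough to see connection drops and metadata
refresh activity (for the Java driver, the equivalent is raising the
com.datastax.driver loggers through your SLF4J backend). A minimal sketch:

    import logging

    # Send driver logs to stderr and raise verbosity so disconnects and
    # schema/metadata refresh problems become visible.
    logging.basicConfig(level=logging.DEBUG)
    logging.getLogger('cassandra').setLevel(logging.DEBUG)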

Adam Holmberg

On Wed, Jan 28, 2015 at 8:19 PM, Saurabh Sethi saurabh_se...@symantec.com
wrote:

 I have a 3 node Cassandra 2.1.0 cluster and I am using datastax 2.1.4
 driver to create a keyspace followed by creating a column family within
 that keyspace from my unit test.

 But I do not see the keyspace getting created and the code for creating
 column family fails because it cannot find the keyspace. I see the
 following in the system.log file:

 INFO  [SharedPool-Worker-1] 2015-01-28 17:59:08,472
 MigrationManager.java:229 - Create new Keyspace:
 KSMetaData{name=testmaxcolumnskeyspace, strategyClass=SimpleStrategy,
 strategyOptions={replication_factor=1}, cfMetaData={}, durableWrites=true,
 userTypes=org.apache.cassandra.config.UTMetaData@370ad1d3}
 INFO  [MigrationStage:1] 2015-01-28 17:59:08,476
 ColumnFamilyStore.java:856 - Enqueuing flush of schema_keyspaces: 512 (0%)
 on-heap, 0 (0%) off-heap
 INFO  [MemtableFlushWriter:22] 2015-01-28 17:59:08,477 Memtable.java:326 -
 Writing Memtable-schema_keyspaces@1664717092(138 serialized bytes, 3 ops,
 0%/0% of on/off-heap limit)
 INFO  [MemtableFlushWriter:22] 2015-01-28 17:59:08,486 Memtable.java:360 -
 Completed flushing
 /usr/share/apache-cassandra-2.1.0/bin/../data/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-118-Data.db
 (175 bytes) for commitlog position ReplayPosition(segmentId=1422485457803,
 position=10514)

 This issue doesn’t always happen. My test runs fine sometimes, but once it
 gets into this state, it remains there for a while and I can consistently
 reproduce it.

 Also, when this issue happens for the first time, I also see the following
 error message in system.log file:

 ERROR [SharedPool-Worker-1] 2015-01-28 15:08:24,286 ErrorMessage.java:218 - 
 Unexpected exception during request
 java.io.IOException: Connection reset by peer
 at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.8.0_05]
 at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
 ~[na:1.8.0_05]
 at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
 ~[na:1.8.0_05]
 at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.8.0_05]
 at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:375) 
 ~[na:1.8.0_05]
 at 
 io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311)
  ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
 at 
 io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:878) 
 ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
 at 
 io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:225)
  ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
 at 
 io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:114)
  ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
 at 
 io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:507) 
 ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
 at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:464)
  ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
 at 
 io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378) 
 ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
 at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350) 
 ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
 at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
  ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_05]


 Does anyone have any idea what might be going on here?

 Thanks,
 Saurabh



Re: Timeouts but returned consistency level is invalid

2015-01-30 Thread Michał Łowicki
Thanks Philip. This explains why I see ALL. Any idea why sometimes ONE is 
returned?

—
Michał

On Fri, Jan 30, 2015 at 4:18 PM, Philip Thompson
philip.thomp...@datastax.com wrote:

 Jan is incorrect. Keyspaces do not have consistency levels set on them.
 Consistency Levels are always set by the client. You are almost certainly
 running into https://issues.apache.org/jira/browse/CASSANDRA-7947 which is
 fixed in 2.1.3 and 2.0.12.
 On Fri, Jan 30, 2015 at 8:37 AM, Michał Łowicki mlowi...@gmail.com wrote:
 Hi Jan,

 I'm using only one keyspace. Even if it defaults to ONE, why is ALL
 sometimes returned?

 On Fri, Jan 30, 2015 at 2:28 PM, Jan cne...@yahoo.com wrote:

 HI Michal;

 The consistency level defaults to ONE for all write and read operations.
 However consistency level is also set for the keyspace.

 Could it be possible that your queries are spanning multiple keyspaces
 which bear different levels of consistency ?

 cheers
 Jan

 C* Architect


   On Friday, January 30, 2015 1:36 AM, Michał Łowicki mlowi...@gmail.com
 wrote:


 Hi,

 We're using C* 2.1.2, django-cassandra-engine which in turn uses
 cqlengine. LOCAL_QUORUM is set as the default consistency level. From time to
 time we get timeouts while talking to the database but, strangely, the
 returned consistency level is not LOCAL_QUORUM:

 code=1200 [Coordinator node timed out waiting for replica nodes' responses] 
 message=Operation timed out - received only 3 responses. 
 info={'received_responses': 3, 'required_responses': 4, 'consistency': 
 'ALL'}


 code=1200 [Coordinator node timed out waiting for replica nodes' responses] 
 message=Operation timed out - received only 1 responses. 
 info={'received_responses': 1, 'required_responses': 2, 'consistency': 
 'LOCAL_QUORUM'}


 code=1100 [Coordinator node timed out waiting for replica nodes' responses] 
 message=Operation timed out - received only 0 responses. 
 info={'received_responses': 0, 'required_responses': 1, 'consistency': 
 'ONE'}


 Any idea why it might happen?

 --
 BR,
 Michał Łowicki





 --
 BR,
 Michał Łowicki


Re: FW: How to use cqlsh to access Cassandra DB if the client_encryption_options is enabled

2015-01-30 Thread Adam Holmberg
Assuming the truststore you are referencing is the same one the server is
using, it's probably in the wrong format. You will need to export the cert
into a PEM format for use in the (Python) cqlsh client. If exporting from
the java keystore format, use

keytool -exportcert <source keystore, pass, etc.> -rfc -file <output file>

If you have the crt file, you should be able to accomplish the same using
openssl:

openssl x509 -in <input crt file> -inform DER -out <output file> -outform PEM

Then, you should refer to that PEM file in your command. Alternatively, you
can specify a path to the file (along with other options) in your cqlshrc
file.
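
For example, a minimal [ssl] section in cqlshrc might look like the following
(the path is a placeholder; see the sample file linked below for the full set
of options):

    [ssl]
    certfile = /path/to/node_cert.pem
    validate = true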

References:
How cqlsh picks up ssl options
https://github.com/apache/cassandra/blob/cassandra-2.1/pylib/cqlshlib/sslhandling.py
Example cqlshrc file
https://github.com/apache/cassandra/blob/cassandra-2.1/conf/cqlshrc.sample

Adam Holmberg

On Wed, Jan 28, 2015 at 1:08 AM, Lu, Boying boying...@emc.com wrote:

 Hi, All,



 Does anyone know the answer?



 Thanks a lot



 Boying





 *From:* Lu, Boying
 *Sent:* January 6, 2015 11:21
 *To:* user@cassandra.apache.org
 *Subject:* How to use cqlsh to access Cassandra DB if the
 client_encryption_options is enabled



 Hi, All,



 I turned on the client_encryption_options like this:

 client_encryption_options:

 enabled: true

 keystore:  path-to-my-keystore-file

 keystore_password:  my-keystore-password

 truststore: path-to-my-truststore-file

 truststore_password:  my-truststore-password

 …



 I can use the following cassandra-cli command to access the DB:

 cassandra-cli -ts path-to-my-truststore-file -tspw my-truststore-password
 -tf org.apache.cassandra.thrift.SSLTransportFactory



 But when I tried to access the DB with cqlsh like this:

 SSL_CERTFILE=path-to-my-truststore cqlsh -t
 cqlshlib.ssl.ssl_transport_factory



 I got the following error:

 Connection error: Could not connect to localhost:9160: [Errno 0]
 _ssl.c:332: error::lib(0):func(0):reason(0)



 I guess the reason may be that I didn’t provide the truststore password,
 but cqlsh doesn’t provide such an option.



 Does anyone know how to resolve this issue?



 Thanks



 Boying





Re: Should one expect to see hints being stored/delivered occasionally?

2015-01-30 Thread Vasileios Vlachos

Thanks for your reply Rob, I am back to this after a while...

I am not sure if this is different in 1.2.18, but I remember from older
versions that GC pauses would only be logged in the system.log if
their duration was >= 200ms. Also, when hints are detected, we cannot
correlate them with GC pauses. We are thinking of tweaking the GC logging
settings in the cassandra-env file, but we are unsure as to which ones
are going to be heavy for the server and which ones are safer to modify.
Would you be able to advise on this?
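
The kind of thing we are considering is the standard HotSpot GC logging
flags in cassandra-env.sh, along these lines (the log path is just an
example for our layout; similar lines usually ship commented out in that file):

    # Lightweight GC logging
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
    JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"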


The hints issue we seem to have is not catastrophic, in the sense that
it is not causing serious or obvious problems to the clients, but it makes us
feel rather uncomfortable about the overall cluster health because, as
you said, it is a warning sign that something is wrong. It doesn't happen
very often either, but I don't think that makes the situation any
better. Apart from increasing the GC logging, I don't see any other way
of debugging this further.


Thanks for your input,

Vasilis

On 20/01/15 22:53, Robert Coli wrote:
On Sat, Jan 17, 2015 at 3:32 PM, Vasileios Vlachos 
vasileiosvlac...@gmail.com wrote:


Is there any other occasion that hints are stored and then being
sent in a cluster, other than network or other temporary or
permanent failure? Could it be that the client responsible for
establishing a connection is causing this? We use the Datastax C#
driver for connecting to the cluster and we run C* 1.2.18 on
Ubuntu 12.04.


Other than restarting nodes manually (which I consider a temporary 
failure for the purposes of this question), no. Seeing hints being 
stored and delivered outside of this context is a warning sign that 
something may be wrong with your cluster.


Probably what is happening is that you have stop-the-world GCs long 
enough to trigger queueing of hints via timeouts during these GCs.


=Rob


Re: Unable to create a keyspace

2015-01-30 Thread Asit KAUSHIK
Saurabh, a vague suggestion: when you are dropping, can you wait for some time
to let the change propagate to the other nodes? Also, I see a replication
factor of 1 but you have 3 nodes.
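
One way to check that the schema change has actually propagated, rather than
just sleeping, is to compare the schema versions the nodes report. A rough
sketch with the Python driver (adapt it to whichever driver the test uses;
the contact point is a placeholder):

    from cassandra.cluster import Cluster

    cluster = Cluster(['node1ip'])    # placeholder contact point
    session = cluster.connect()

    # Each node reports the schema version it currently holds; when every
    # row shows the same UUID, the change has propagated.
    local = list(session.execute("SELECT schema_version FROM system.local"))
    peers = list(session.execute("SELECT peer, schema_version FROM system.peers"))
    versions = {local[0].schema_version} | {p.schema_version for p in peers}
    print("schema agreement:", len(versions) == 1)
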
On Jan 31, 2015 6:28 AM, Saurabh Sethi saurabh_se...@symantec.com wrote:

 Thanks Adam. I upgraded to 2.1.2 but am still seeing this issue. Let me go into
 a bit more detail as to what I am seeing – When I create a keyspace for the
 first time in a 3 node cluster, it works fine but if I drop the keyspace
 and try to recreate it, I see that the node received the request to create
 it but it didn’t delegate the request to other nodes because of which the
 keyspace didn’t get created on other two nodes.

 But the request for creating a table does get propagated to other nodes
 and since they can’t find that keyspace they throw an exception.

 I have 2 nodes as seeds, so the seeds property in cassandra.yaml for all 3
 nodes looks like – seeds: node1ip,node3ip

 Following is what I see in the system.log file when I enable DEBUG mode:

 DEBUG [SharedPool-Worker-3] 2015-01-30 16:39:37,363 Message.java:437 -
 Received: STARTUP {CQL_VERSION=3.0.0}, v=3
 DEBUG [SharedPool-Worker-3] 2015-01-30 16:39:37,364 Message.java:452 -
 Responding: READY, v=3
 DEBUG [SharedPool-Worker-2] 2015-01-30 16:39:37,365 Message.java:437 -
 Received: QUERY select cluster_name from system.local, v=3
 DEBUG [SharedPool-Worker-2] 2015-01-30 16:39:37,366 StorageProxy.java:1534
 - Estimated result rows per range: 20.057144; requested rows: 2147483647,
 ranges.size(): 1; concurrent range requests: 1
 DEBUG [SharedPool-Worker-2] 2015-01-30 16:39:37,369 Tracing.java:157 -
 request complete
 DEBUG [SharedPool-Worker-2] 2015-01-30 16:39:37,369 Message.java:452 -
 Responding: ROWS [cluster_name(system, local),
 org.apache.cassandra.db.marshal.UTF8Type]
  | CASSANDRAEDPDEVCLUSTER
 ---, v=3
 DEBUG [SharedPool-Worker-1] 2015-01-30 16:39:37,386 Message.java:437 -
 Received: QUERY CREATE KEYSPACE IF NOT EXISTS TestMaxColumnsKeySpace WITH
 REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };, v=3
 INFO  [SharedPool-Worker-1] 2015-01-30 16:39:37,387
 MigrationManager.java:229 - Create new Keyspace:
 KSMetaData{name=testmaxcolumnskeyspace, strategyClass=SimpleStrategy,
 strategyOptions={replication_factor=1}, cfMetaData={}, durableWrites=true,
 userTypes=org.apache.cassandra.config.UTMetaData@5485ec68}
 DEBUG [MigrationStage:1] 2015-01-30 16:39:37,388 FileCacheService.java:150
 - Estimated memory usage is 1182141 compared to actual usage 262698
 DEBUG [MigrationStage:1] 2015-01-30 16:39:37,389 FileCacheService.java:102
 - Evicting cold readers for
 /usr/share/apache-cassandra-2.1.2/bin/../data/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-3-Data.db
 DEBUG [MigrationStage:1] 2015-01-30 16:39:37,390 FileCacheService.java:150
 - Estimated memory usage is 1182141 compared to actual usage 394047
 DEBUG [MigrationStage:1] 2015-01-30 16:39:37,390 FileCacheService.java:102
 - Evicting cold readers for
 /usr/share/apache-cassandra-2.1.2/bin/../data/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-1-Data.db
 DEBUG [MigrationStage:1] 2015-01-30 16:39:37,390 FileCacheService.java:150
 - Estimated memory usage is 1182141 compared to actual usage 525396
 INFO  [MigrationStage:1] 2015-01-30 16:39:37,392
 ColumnFamilyStore.java:840 - Enqueuing flush of schema_keyspaces: 496 (0%)
 on-heap, 0 (0%) off-heap
 DEBUG [MigrationStage:1] 2015-01-30 16:39:37,392
 ColumnFamilyStore.java:166 - scheduling flush in 360 ms
 INFO  [MemtableFlushWriter:10] 2015-01-30 16:39:37,394 Memtable.java:325 -
 Writing Memtable-schema_keyspaces@922450609(138 serialized bytes, 3 ops,
 0%/0% of on/off-heap limit)
 DEBUG [MemtableFlushWriter:10] 2015-01-30 16:39:37,410 FileUtils.java:161
 - Renaming
 bin/../data/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-tmp-ka-10-Statistics.db
 to
 bin/../data/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-10-Statistics.db
 DEBUG [MemtableFlushWriter:10] 2015-01-30 16:39:37,410 FileUtils.java:161
 - Renaming
 bin/../data/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-tmp-ka-10-Filter.db
 to
 bin/../data/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-10-Filter.db
 DEBUG [MemtableFlushWriter:10] 2015-01-30 16:39:37,410 FileUtils.java:161
 - Renaming
 bin/../data/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-tmp-ka-10-TOC.txt
 to
 bin/../data/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-10-TOC.txt
 DEBUG [MemtableFlushWriter:10] 2015-01-30 16:39:37,411 FileUtils.java:161
 - Renaming
 bin/../data/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-tmp-ka-10-Digest.sha1
 to
 

Cassandra 2.0.11 with stargate-core read writes are slow

2015-01-30 Thread Asit KAUSHIK
Hi all,
We are testing our logging application on a 3-node cluster; each system is a
virtual machine with 4 cores and 8GB RAM running Red Hat Enterprise. My
question is in 3 parts:
1) Am I using the right hardware? As of now I am testing, say, 10 record
reads.
2) I am using Stargate-core for full text search - is there any slowness
observed because of that?
3) How can I simulate the write load? I created an application which creates,
say, 20 threads; in each thread I open a cluster connection and session,
insert 1000 records, and close the connection. This takes a lot of time -
please suggest if I am missing something.
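
On (3), the usual pattern is to create one Cluster/Session for the whole
process and share it across threads, using prepared statements and
asynchronous writes, rather than opening and closing a connection per thread.
A rough sketch of that shape with the Python driver (the keyspace, table and
column names are made up for illustration):

    import threading
    from cassandra.cluster import Cluster

    cluster = Cluster(['10.0.0.1'])        # placeholder contact point
    session = cluster.connect('logs_ks')   # hypothetical keyspace

    # Prepare once; reuse the single session from every worker thread.
    insert = session.prepare(
        "INSERT INTO events (id, ts, payload) VALUES (?, ?, ?)")

    def worker(start):
        futures = [session.execute_async(insert, (i, i, 'x' * 100))
                   for i in range(start, start + 1000)]
        for f in futures:                  # block so failures are surfaced
            f.result()

    threads = [threading.Thread(target=worker, args=(n * 1000,))
               for n in range(20)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    cluster.shutdown()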


Re: Unable to create a keyspace

2015-01-30 Thread Saurabh Sethi
I repeat the test after verifying that the keyspace has been dropped.

Also, I do not want any replication as of now, which is why the replication
factor is 1. But I don’t think that should be the reason for the command not
propagating to the other nodes.

One more thing that I observed just now is that if I create the keyspace from
node 1 and node 2, it doesn’t get created, but if I try to create it from node
3, it does get created. Node 1 and node 2 are VMs hosted on the same physical
hardware; not sure if that has anything to do with it.

Thanks,
Saurabh
From: Asit KAUSHIK asitkaushikno...@gmail.com
Reply-To: user@cassandra.apache.org
Date: Friday, January 30, 2015 at 5:33 PM
To: user@cassandra.apache.org
Subject: Re: Unable to create a keyspace


Saurabh, a vague suggestion: when you are dropping, can you wait for some time to
let the change propagate to the other nodes? Also, I see a replication factor of 1
but you have 3 nodes.

On Jan 31, 2015 6:28 AM, Saurabh Sethi 
saurabh_se...@symantec.com wrote:
Thanks Adam. I upgraded to 2.1.2 but am still seeing this issue. Let me go into a 
bit more detail as to what I am seeing – When I create a keyspace for the first 
time in a 3 node cluster, it works fine but if I drop the keyspace and try to 
recreate it, I see that the node received the request to create it but it 
didn’t delegate the request to other nodes because of which the keyspace didn’t 
get created on other two nodes.

But the request for creating a table does get propagated to other nodes and 
since they can’t find that keyspace they throw an exception.

I have 2 nodes as seeds, so the seeds property in cassandra.yaml for all 3 
nodes looks like – seeds: node1ip,node3ip

Following is what I see in the system.log file when I enable DEBUG mode:

DEBUG [SharedPool-Worker-3] 2015-01-30 16:39:37,363 Message.java:437 - 
Received: STARTUP {CQL_VERSION=3.0.0}, v=3
DEBUG [SharedPool-Worker-3] 2015-01-30 16:39:37,364 Message.java:452 - 
Responding: READY, v=3
DEBUG [SharedPool-Worker-2] 2015-01-30 16:39:37,365 Message.java:437 - 
Received: QUERY select cluster_name from system.local, v=3
DEBUG [SharedPool-Worker-2] 2015-01-30 16:39:37,366 StorageProxy.java:1534 - 
Estimated result rows per range: 20.057144; requested rows: 2147483647, 
ranges.size(): 1; concurrent range requests: 1
DEBUG [SharedPool-Worker-2] 2015-01-30 16:39:37,369 Tracing.java:157 - request 
complete
DEBUG [SharedPool-Worker-2] 2015-01-30 16:39:37,369 Message.java:452 - 
Responding: ROWS [cluster_name(system, local), 
org.apache.cassandra.db.marshal.UTF8Type]
 | CASSANDRAEDPDEVCLUSTER
---, v=3
DEBUG [SharedPool-Worker-1] 2015-01-30 16:39:37,386 Message.java:437 - 
Received: QUERY CREATE KEYSPACE IF NOT EXISTS TestMaxColumnsKeySpace WITH 
REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };, v=3
INFO  [SharedPool-Worker-1] 2015-01-30 16:39:37,387 MigrationManager.java:229 - 
Create new Keyspace: KSMetaData{name=testmaxcolumnskeyspace, 
strategyClass=SimpleStrategy, strategyOptions={replication_factor=1}, 
cfMetaData={}, durableWrites=true, 
userTypes=org.apache.cassandra.config.UTMetaData@5485ec68}
DEBUG [MigrationStage:1] 2015-01-30 16:39:37,388 FileCacheService.java:150 - 
Estimated memory usage is 1182141 compared to actual usage 262698
DEBUG [MigrationStage:1] 2015-01-30 16:39:37,389 FileCacheService.java:102 - 
Evicting cold readers for 
/usr/share/apache-cassandra-2.1.2/bin/../data/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-3-Data.db
DEBUG [MigrationStage:1] 2015-01-30 16:39:37,390 FileCacheService.java:150 - 
Estimated memory usage is 1182141 compared to actual usage 394047
DEBUG [MigrationStage:1] 2015-01-30 16:39:37,390 FileCacheService.java:102 - 
Evicting cold readers for 
/usr/share/apache-cassandra-2.1.2/bin/../data/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-1-Data.db
DEBUG [MigrationStage:1] 2015-01-30 16:39:37,390 FileCacheService.java:150 - 
Estimated memory usage is 1182141 compared to actual usage 525396
INFO  [MigrationStage:1] 2015-01-30 16:39:37,392 ColumnFamilyStore.java:840 - 
Enqueuing flush of schema_keyspaces: 496 (0%) on-heap, 0 (0%) off-heap
DEBUG [MigrationStage:1] 2015-01-30 16:39:37,392 ColumnFamilyStore.java:166 - 
scheduling flush in 360 ms
INFO  [MemtableFlushWriter:10] 2015-01-30 16:39:37,394 Memtable.java:325 - 
Writing Memtable-schema_keyspaces@922450609(138 serialized bytes, 3 ops, 0%/0% 
of on/off-heap limit)
DEBUG [MemtableFlushWriter:10] 2015-01-30 16:39:37,410 FileUtils.java:161 - 
Renaming 
bin/../data/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-tmp-ka-10-Statistics.db
 to 

dblink between oracle and cassandra

2015-01-30 Thread Rahul Bhardwaj
Hi All,


I want to make a dblink between Oracle 11g and a Cassandra cluster.

Is there any way, or any alternative, to do the same? Please help.



Regards:
Rahul Bhardwaj



Timeouts but returned consistency level is invalid

2015-01-30 Thread Michał Łowicki
Hi,

We're using C* 2.1.2, django-cassandra-engine which in turn uses cqlengine.
LOCAL_QUORUM is set as the default consistency level. From time to time we get
timeouts while talking to the database but, strangely, the returned
consistency level is not LOCAL_QUORUM:

code=1200 [Coordinator node timed out waiting for replica nodes'
responses] message=Operation timed out - received only 3 responses.
info={'received_responses': 3, 'required_responses': 4, 'consistency':
'ALL'}


code=1200 [Coordinator node timed out waiting for replica nodes'
responses] message=Operation timed out - received only 1 responses.
info={'received_responses': 1, 'required_responses': 2, 'consistency':
'LOCAL_QUORUM'}


code=1100 [Coordinator node timed out waiting for replica nodes'
responses] message=Operation timed out - received only 0 responses.
info={'received_responses': 0, 'required_responses': 1, 'consistency':
'ONE'}


Any idea why it might happen?

-- 
BR,
Michał Łowicki


Re: Tombstone gc after gc grace seconds

2015-01-30 Thread Alain RODRIGUEZ
The point is that all the parts or fragments of the row need to be in
the SSTables implied in the compaction for C* to be able to evict the row
effectively.

My understanding of those parameters is that they will trigger a compaction
on any SSTable that exceeds this ratio. This will work properly if you never
update a row (by modifying a value or adding a column). If your workflow
is something like "write once per partition key", this parameter will do
the job.

If you have fragments, you might trigger this compaction for nothing. In
the case of frequently updated rows (like when using wide rows / time
series) your only way to get rid of tombstones is a major compaction.

That's how I understand this.

Hope this helps,

C*heers,

Alain

2015-01-30 1:29 GMT+01:00 Mohammed Guller moham...@glassbeam.com:

  Ravi -



 It may help.



 What version are you running? Do you know if minor compaction is getting
 triggered at all? One way to check would be to see how many SSTables the data
 directory has.



 Mohammed



 *From:* Ravi Agrawal [mailto:ragra...@clearpoolgroup.com]
 *Sent:* Thursday, January 29, 2015 1:29 PM
 *To:* user@cassandra.apache.org
 *Subject:* RE: Tombstone gc after gc grace seconds



 Hi,

 I saw there are 2 more interesting parameters -

 a.   tombstone_threshold - A ratio of garbage-collectable tombstones
 to all contained columns, which if exceeded by the SSTable triggers
 compaction (with no other SSTables) for the purpose of purging the
 tombstones. Default value - 0.2

 b.  unchecked_tombstone_compaction - True enables more aggressive
 than normal tombstone compactions. A single SSTable tombstone compaction
 runs without checking the likelihood of success. Cassandra 2.0.9 and later.

 Could I use these to get what I want?

 The problem I am encountering is that even long after gc_grace_seconds I see no
 reduction in disk space until I run compaction manually. I was thinking of
 setting tombstone_threshold close to 0 and unchecked_tombstone_compaction to true.

 Also, we are not running nodetool repair on a weekly basis as of now.



 *From:* Eric Stevens [mailto:migh...@gmail.com migh...@gmail.com]
 *Sent:* Monday, January 26, 2015 12:11 PM
 *To:* user@cassandra.apache.org
 *Subject:* Re: Tombstone gc after gc grace seconds



 My understanding is consistent with Alain's, there's no way to force a
 tombstone-only compaction, your only option is major compaction.  If you're
 using size tiered, that comes with its own drawbacks.



 I wonder if there's a technical limitation that prevents introducing a
 shadowed data cleanup style operation (overwritten data, including deletes,
 plus tombstones past their gc grace period); or maybe even couple it
 directly with cleanup since most of the work (rewriting old SSTables) would
 be identical.  I can't think of something off the top of my head, but it
 would be so useful that it seems like there's got to be something I'm
 missing.



 On Mon, Jan 26, 2015 at 4:15 AM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

  I don't think that such a thing exists as SSTables are immutable. You
 compact it entirely or you don't. Minor compaction will eventually evict
 tombstones. If it is too slow, AFAIK, the better solution is a major
 compaction.



 C*heers,



 Alain



 2015-01-23 0:00 GMT+01:00 Ravi Agrawal ragra...@clearpoolgroup.com:

  Hi,

 I want to trigger just a tombstone compaction after gc_grace_seconds has
 passed, not a full nodetool compact <keyspace> <column family>.

 Anyway I can do that?



 Thanks











Re: Timeouts but returned consistency level is invalid

2015-01-30 Thread Michał Łowicki
Hi Jan,

I'm using only one keyspace. Even if it defaults to ONE, why is ALL
sometimes returned?

On Fri, Jan 30, 2015 at 2:28 PM, Jan cne...@yahoo.com wrote:

 HI Michal;

 The consistency level defaults to ONE for all write and read operations.
 However consistency level is also set for the keyspace.

 Could it be possible that your queries are spanning multiple keyspaces
 which bear different levels of consistency ?

 cheers
 Jan

 C* Architect


   On Friday, January 30, 2015 1:36 AM, Michał Łowicki mlowi...@gmail.com
 wrote:


 Hi,

 We're using C* 2.1.2, django-cassandra-engine which in turn uses
 cqlengine. LOCAL_QUORUM is set as the default consistency level. From time to
 time we get timeouts while talking to the database but, strangely, the
 returned consistency level is not LOCAL_QUORUM:

 code=1200 [Coordinator node timed out waiting for replica nodes' responses] 
 message=Operation timed out - received only 3 responses. 
 info={'received_responses': 3, 'required_responses': 4, 'consistency': 'ALL'}


 code=1200 [Coordinator node timed out waiting for replica nodes' responses] 
 message=Operation timed out - received only 1 responses. 
 info={'received_responses': 1, 'required_responses': 2, 'consistency': 
 'LOCAL_QUORUM'}


 code=1100 [Coordinator node timed out waiting for replica nodes' responses] 
 message=Operation timed out - received only 0 responses. 
 info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}


 Any idea why it might happen?

 --
 BR,
 Michał Łowicki





-- 
BR,
Michał Łowicki


Re: Timeouts but returned consistency level is invalid

2015-01-30 Thread Jan
HI Michal; 
The consistency level defaults to ONE for all write and read operations.
However consistency level is also set for the keyspace. 
Could it be possible that your queries are spanning multiple keyspaces which 
bear different levels of consistency ?  
cheers
Jan
C* Architect 

 On Friday, January 30, 2015 1:36 AM, Michał Łowicki mlowi...@gmail.com 
wrote:
   

 Hi,
We're using C* 2.1.2, django-cassandra-engine which in turn uses cqlengine. 
LOCAL_QUORUM is set as the default consistency level. From time to time we get
timeouts while talking to the database but, strangely, the returned consistency
level is not LOCAL_QUORUM:
code=1200 [Coordinator node timed out waiting for replica nodes' responses] 
message=Operation timed out - received only 3 responses. 
info={'received_responses': 3, 'required_responses': 4, 'consistency': 'ALL'}
code=1200 [Coordinator node timed out waiting for replica nodes' responses] 
message=Operation timed out - received only 1 responses. 
info={'received_responses': 1, 'required_responses': 2, 'consistency': 
'LOCAL_QUORUM'}
code=1100 [Coordinator node timed out waiting for replica nodes' responses] 
message=Operation timed out - received only 0 responses. 
info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
Any idea why it might happen?
-- 
BR,
Michał Łowicki