Re: sstable_compression for system tables

2013-05-03 Thread John Sanda
The root cause was as I described. System tables were created while running OpenJDK, so their files were written to disk using snappy compression. Cassandra was later restarted with IBM Java. With the IBM JRE on a 32-bit arch, the native snappy library is not found; consequently, Cassandra is not able to r…
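
A quick way to check whether a given JRE can load the native snappy library, outside of Cassandra, is a tiny probe against snappy-java (a sketch; it assumes the snappy-java jar that ships with Cassandra 1.2 is on the classpath):

    // SnappyCheck.java - minimal probe; compress() forces the native library to load
    import org.xerial.snappy.Snappy;

    public class SnappyCheck {
        public static void main(String[] args) throws Exception {
            // Throws if the native snappy library cannot be loaded for this JRE/arch
            byte[] out = Snappy.compress("hello snappy".getBytes("UTF-8"));
            System.out.println("Native snappy loaded; compressed to " + out.length + " bytes");
        }
    }

Running it under each installed JRE (e.g. java -cp snappy-java-*.jar:. SnappyCheck) should reproduce the load failure without starting Cassandra.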

Re: sstable_compression for system tables

2013-05-03 Thread Robert Coli
On Fri, May 3, 2013 at 11:07 AM, John Sanda wrote: > The machine where this error occurred had both OpenJDK and IBM's Java > installed. The only way I have been able to reproduce is by installing > Cassandra with OpenJDK, shutting it down, then starting it back up with IBM > Java. Maybe the root c…

Re: SSTables not opened on new cluster

2013-05-03 Thread Philippe
Unfortunately not, I've moved on to trying to add the nodes to the current cluster and then decommission the "old" ones. But even that is not working; this is the strangest of things: while trying to add a new node, I - set its token to an existing value+1 - ensure the yaml (cluster name, partiti…
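
For reference, these are the cassandra.yaml settings that must line up before a new node will join and stream (a sketch; names, addresses, and the token are illustrative):

    # cassandra.yaml on the joining node
    cluster_name: 'MyCluster'        # must match the existing cluster exactly
    partitioner: org.apache.cassandra.dht.RandomPartitioner   # must match
    initial_token: 85070591730234615865843651857942052865     # existing token + 1, per the approach above
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "10.0.0.1,10.0.0.2"   # live nodes in the existing cluster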

Re: How much heap does Cassandra 1.1.11 really need?

2013-05-03 Thread Oleg Dulin
What constitutes an "extreme write"? On 2013-05-03 15:45:33, Edward Capriolo said: If your writes are so extreme that memtables are flushing all the time, the best you can do is turn off all caches, do bloom filters off heap, and then instruct cassandra to use large portions of the heap…

Cassandra on Joyent

2013-05-03 Thread Shahryar Sedghi
Hi, I was wondering if anyone has used or evaluated Cassandra on Joyent (either SmartOS or Linux). Price/performance, data transfer, and availability are so promising that I was wondering if it is too good to be true. Thanks in advance, Shahryar

Re: cql query

2013-05-03 Thread Jabbar Azam
Sorry Sri, I've never used Hector. However, it's straightforward in Astyanax; there are examples on the github page. On 3 May 2013 18:50, "Sri Ramya" wrote: > Can you tell me how to do this in Hector? Can you give me some example? > > On Fri, May 3, 2013 at 10:29 AM, Sri Ramya wrote: > >> than…
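
For the archive, a minimal Hector CQL query looks roughly like this (a sketch from memory of Hector's me.prettyprint API, not taken from the thread; cluster, keyspace, and query are hypothetical):

    import me.prettyprint.cassandra.model.CqlQuery;
    import me.prettyprint.cassandra.model.CqlRows;
    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.query.QueryResult;

    public class HectorCqlExample {
        public static void main(String[] args) {
            // Connect to a (hypothetical) local node over thrift
            Cluster cluster = HFactory.getOrCreateCluster("TestCluster", "127.0.0.1:9160");
            Keyspace keyspace = HFactory.createKeyspace("MyKeyspace", cluster);

            // CqlQuery wraps a raw CQL string; serializers decode key/name/value
            StringSerializer se = StringSerializer.get();
            CqlQuery<String, String, String> query =
                    new CqlQuery<String, String, String>(keyspace, se, se, se);
            query.setQuery("SELECT * FROM users WHERE KEY = 'jsmith'");

            QueryResult<CqlRows<String, String, String>> result = query.execute();
            System.out.println(result.get());
        }
    }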

Re: sstable_compression for system tables

2013-05-03 Thread John Sanda
The machine where this error occurred had both OpenJDK and IBM's Java installed. The only way I have been able to reproduce is by installing Cassandra with OpenJDK, shutting it down, then starting it back up with IBM Java. Snappy compression is enabled with OpenJDK, so SSTables, including for system…

Re: cql query

2013-05-03 Thread Sri Ramya
Can you tell me how to do this in Hector? Can you give me some example? On Fri, May 3, 2013 at 10:29 AM, Sri Ramya wrote: > thank you very much. i will try and let you know whether it's working or not > > On Thu, May 2, 2013 at 7:04 PM, Jabbar Azam wrote: >> Hello Sri, >> >> As far as I kn…

Re: Cassandra multi-datacenter

2013-05-03 Thread Daning Wang
Thanks Jabbar and Aaron. Aaron - for broadcast_address, it looks like it only works with EC2MultiRegionSnitch, but in our case we will have one data center in a colo and one in EC2 (sorry, did not make that clear; we'd like to replicate data from the colo to EC2). So can we still use broadcast_address?
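
broadcast_address is a plain cassandra.yaml setting and is not tied to EC2MultiRegionSnitch; for a colo-plus-EC2 layout, the EC2 side would look something like this (a sketch; all addresses are made up):

    # cassandra.yaml on an EC2 node
    listen_address: 10.0.1.5          # private interface for intra-EC2 traffic
    broadcast_address: 54.210.11.22   # public address the colo nodes gossip to
    endpoint_snitch: GossipingPropertyFileSnitch

The colo nodes could leave broadcast_address unset (it defaults to listen_address), with each node naming its DC and rack in cassandra-rackdc.properties; whether the colo can actually reach the EC2 public addresses through firewalls/NAT is a separate question.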

Re: sstable_compression for system tables

2013-05-03 Thread John Sanda
I am still trying to sort this out. When I run with Oracle's JRE, it does in fact look like compression is enabled for system tables. cqlsh> DESCRIBE TABLE system.schema_columnfamilies; CREATE TABLE schema_columnfamilies (keyspace_name text, columnfamily_name text, bloom_filter_fp_chance…

Re: How much heap does Cassandra 1.1.11 really need?

2013-05-03 Thread Edward Capriolo
If your writes are so extreme that memtables are flushing all the time, the best you can do is turn off all caches, move bloom filters off heap, and then instruct Cassandra to use large portions of the heap as memtables. On Fri, May 3, 2013 at 11:40 AM, Bryan Talbot wrote: > It's true that a 16GB h…
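
Translated into 1.1-era cassandra.yaml terms, that advice is roughly the following (a sketch; the sizes are illustrative, not a recommendation):

    # cassandra.yaml
    key_cache_size_in_mb: 0            # disable key cache
    row_cache_size_in_mb: 0            # disable row cache
    memtable_total_space_in_mb: 6144   # give memtables a large slice of the heap

Bloom filters moved off-heap in 1.2; on earlier versions their footprint is tuned per column family via bloom_filter_fp_chance.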

Re: How much heap does Cassandra 1.1.11 really need?

2013-05-03 Thread Bryan Talbot
It's true that a 16GB heap is generally not a good idea; however, it's not clear from the data provided what problem you're trying to solve. What is it that you don't like about the default settings? -Bryan On Fri, May 3, 2013 at 4:27 AM, Oleg Dulin wrote: > Here is my question. It can't pos…

Hadoop jobs and data locality

2013-05-03 Thread cscetbon.ext
Hi, I'm using Pig to calculate the sum of a column from a columnfamily (a scan of all rows), and I've read that input data locality is supported (http://wiki.apache.org/cassandra/HadoopSupport). However, when I execute my Pig script, Hadoop assigns only one mapper to the task and not one mapper on…
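
One thing worth ruling out when only a single mapper appears is the input split size, which ColumnFamilyInputFormat reads from the job configuration; a sketch in Pig (keyspace and column family names are hypothetical):

    SET cassandra.input.split.size 16384;  -- smaller splits mean more mappers
    rows = LOAD 'cassandra://MyKeyspace/MyCF'
           USING org.apache.cassandra.hadoop.pig.CassandraStorage();

Note also that locality only kicks in when Hadoop task trackers run on the Cassandra nodes themselves; otherwise every split is remote no matter how many mappers are assigned.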

Re: sstable_compression for system tables

2013-05-03 Thread Edward Capriolo
I did not know the system tables were compressed. That seems like an odd decision; you would think that the system tables are small and would not benefit much from compression. Is it a static object that requires initialization even though it is not used? On Fri, May 3, 2013 at…

sstable_compression for system tables

2013-05-03 Thread John Sanda
Is there a way to change the sstable_compression for system tables? I am trying to deploy Cassandra 1.2.2 on a platform with IBM Java and a 32-bit arch where the snappy-java native library fails to load. The error I get looks like: ERROR [SSTableBatchOpen:1] 2013-05-02 14:42:42,485 CassandraDaemon.j…
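
For comparison, on a user table the codec can be changed from CQL3 (a sketch; keyspace and table are hypothetical, and whether the system keyspace accepts ALTER at all is exactly the open question here):

    ALTER TABLE mykeyspace.mytable
        WITH compression = {'sstable_compression': ''};  -- empty string disables compression
    -- then rewrite the existing files outside cqlsh:
    --   nodetool upgradesstables mykeyspace mytable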

local_quorum

2013-05-03 Thread Kanwar Sangha
Hi - I have 2 data centres (DC1 and DC2) and I have local_quorum set as the CL for reads. Say the RF = 2 (so 2 copies in each DC). If both nodes which own the data in DC1 are down and I do a read with CL "local_quorum", will I get an error back to the application? Or will Ca…
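
For reference, the arithmetic behind LOCAL_QUORUM for this layout:

    local quorum = floor(RF_local / 2) + 1 = floor(2 / 2) + 1 = 2

So with RF = 2 per DC, LOCAL_QUORUM needs both replicas in the coordinator's DC; with both DC1 replicas down, the read should fail with an UnavailableException rather than silently falling back to DC2.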

Error on Range queries

2013-05-03 Thread himanshu.joshi
Hi, I have created a 2-node test cluster in Cassandra version 1.2.3 with SimpleStrategy, replication factor 2, and ByteOrderedPartitioner (so as to get range query functionality). When I use a range query on a secondary index in cqlsh, I get the error: "Bad Request: No in…
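
For context: 1.2-era CQL3 only accepts a range predicate on an indexed column when some indexed column is also pinned with an equality. A sketch with a hypothetical table and columns:

    CREATE INDEX users_state_idx ON users (state);
    CREATE INDEX users_birth_year_idx ON users (birth_year);

    -- accepted: equality on one indexed column, range on another
    SELECT * FROM users WHERE state = 'TX' AND birth_year > 1950;

    -- rejected: a range alone on a secondary index
    SELECT * FROM users WHERE birth_year > 1950;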

1.1.9 -> 1.1.11 rpm upgrade issue

2013-05-03 Thread William Oberman
I get this: Running rpm_check_debug ERROR with rpm_check_debug vs depsolve: apache-cassandra11 conflicts with apache-cassandra11-1.1.11-1.noarch I'm using CentOS. Problem with my OS, or problem with the package? (And how can it conflict with itself??) will
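
Two quick checks that usually narrow this down (a sketch; package-cleanup comes from yum-utils):

    rpm -qa | grep -i cassandra    # is more than one cassandra package installed?
    package-cleanup --dupes        # list duplicate entries in the rpm database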

Re: Slow retrieval using secondary indexes

2013-05-03 Thread Francisco Nogueira Calmon Sobral
Thanks! The creation of the new CF worked pretty well and fast! Unfortunately, I was unable to trace the request made using secondary indexes: cqlsh:Sessions> select * from "Items" where key = '687474703a2f2f6573706f7'; key | mahoutItemid …
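
One possibility, since request tracing is new in 1.2: cqlsh can capture a per-query trace (a sketch reusing the query from the post):

    cqlsh:Sessions> TRACING ON;
    cqlsh:Sessions> select * from "Items" where key = '687474703a2f2f6573706f7';

With tracing on, cqlsh prints the coordinator's trace events and elapsed times after the result rows, which should show where the secondary-index lookup spends its time.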

How much heap does Cassandra 1.1.11 really need?

2013-05-03 Thread Oleg Dulin
Here is my question. It can't possibly be a good setup to use a 16 GB heap, but this is the best I can do. Setting it to the default never worked well for me, and setting it to 8 GB doesn't work well either: it can't keep up with flushing memtables. It is possible that someone at some point may have…
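
For reference, the heap is pinned in conf/cassandra-env.sh; the values below mirror the 16 GB setup described above and are not a recommendation:

    # conf/cassandra-env.sh
    MAX_HEAP_SIZE="16G"
    HEAP_NEWSIZE="1600M"   # young generation; commonly sized around 100 MB per core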

RE: How does a healthy node look like?

2013-05-03 Thread Steppacher Ralf
Sure, I can do that. My main concern is write latency and the write timeouts we are experiencing. Read latency is secondary, as long as we do not introduce timeouts on read and do not exceed our sampling intervals (see below). We are running Cassandra 1.2.1 on Ubuntu 12.04 with JDK 1.7.0_17 (64…
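
For readers following along, two stock commands expose the numbers being discussed (a sketch; interpret against your own baseline):

    nodetool tpstats    # pending/blocked stages; a MutationStage backlog hints at write pressure
    nodetool cfstats    # per-column-family read and write latencies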

RE: Repair session failed

2013-05-03 Thread Christopher Wirt
Hi Aaron, We're running 1.2.4, so with vnodes. We ran scrub but saw the issue again when repairing. nodetool status:
Datacenter: DC01
===============
Status=Up/Down |/ State=Normal/Leaving/Joining/Moving
--  Address      Load  Tokens  Owns  Host ID  Rack
UN  10.70.48.23 …
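
For anyone replaying this from the archive, the sequence under discussion is roughly (a sketch; keyspace and column family arguments are optional):

    nodetool scrub          # rewrite sstables, as already tried here
    nodetool repair -pr     # repair only this node's primary ranges; run on each node in turn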