Re: Embedded Cassandra Performance

2014-04-16 Thread Sávio Teles
n up to date alone (bug fixes and awesome new feature X!) > would make embedding worth it only for edge scenarios. I would recommend > against it. > > --- > Chris Lohfink > > On Apr 16, 2014, at 10:13 AM, Sávio Teles > wrote: > > Is it advisable to run the embedded Cassa

Re: Embedded Cassandra Performance

2014-04-16 Thread Sávio Teles
Is it advisable to run the embedded Cassandra in production? 2014-04-16 12:08 GMT-03:00 Sávio Teles : > I'm running a cluster with Cassandra and my app embedded. > > Regarding performance, is it better to run embedded Cassandra? > > What are the implications of running a

Embedded Cassandra Performance

2014-04-16 Thread Sávio Teles
I'm running a cluster with Cassandra and my app embedded. Regarding performance, is it better to run embedded Cassandra? What are the implications of running an embedded Cassandra? Thanks -- Best regards, Sávio S. Teles de Oliveira voice: +55 62 9136 6996 http://br.linkedin.com/in/savioteles Me

Re: Cass 2.0.0: Extensive memory allocation when row_cache enabled

2013-11-05 Thread Sávio Teles
We have the same problem. 2013/11/5 Jiri Horky > Hi there, > > we are seeing extensive memory allocation leading to quite long and > frequent GC pauses when using row cache. This is on a Cassandra 2.0.0 > cluster with the JNA 4.0 library and the following settings: > > key_cache_size_in_mb: 300 > key_ca

Re: Why Solandra stores Solr data in Cassandra ? Isn't solr complete solution ?

2013-09-30 Thread Sávio Teles
> Solr's index sitting on a single machine, even if that single machine can > vertically scale, is a single point of failure. > And what about SolrCloud? 2013/9/30 Ken Hancock > Yes. > > > On Mon, Sep 30, 2013 at 1:57 PM, Andrey Ilinykh wrote: > >> >> Also, be aware that while Cassandra has knobs

Re: List retrieve performance

2013-09-03 Thread Sávio Teles
The list is "null". 2013/9/3 Baskar Duraikannu > I don't know of any. I would check the size of LIST. If it is taking long, > it could be just that disk read is taking long. > > -- > Date: Sat, 31 Aug 2013 16:35:22 -0300 > Subject: List retrieve performance > From: s

Re: Timeout Exception with row_cache enabled

2013-09-02 Thread Sávio Teles
http://www.mail-archive.com/user@cassandra.apache.org/msg31693.html > > tl;dr - it depends completely on use case. Small static rows work best. > > > > On Mon, Sep 2, 2013 at 2:05 PM, Sávio Teles > wrote: > >> I'm running Cassandra 1.2.4 and when I enable the row_cach

Timeout Exception with row_cache enabled

2013-09-02 Thread Sávio Teles
I'm running Cassandra 1.2.4 and when I enable the row_cache, the system throws TimeoutException and garbage collection doesn't stop. When I disable it, the query returns in 700ms. Configuration: - row_cache_size_in_mb: 256 - row_cache_save_period: 0 - # row_cache_keys_to_save: 10

Re: How to perform range queries efficiently?

2013-09-02 Thread Sávio Teles
introduction-to-composite-columns-part-1 > > Hope it helps. > > > -Vivek > > > On Wed, Aug 28, 2013 at 5:49 PM, Sávio Teles > wrote: > >> I can populate again. We are modelling the data yet! Tks. >> >> >> 2013/8/28 Vivek Mishra >> >>

Re: Low Row Cache Request

2013-08-31 Thread Sávio Teles
Yes, it is! I've fixed the problem. I missed setting the caching property to 'ALL' when creating the column family. 2013/8/31 Jonathan Haddad > 9/12 = .75 > > It's a rate, not a percentage. > > > On Sat, Aug 31, 2013 at 2:21 PM, Sávio Teles > wrote: > &
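For reference, a minimal CQL sketch of that fix; the table name my_table and its columns are hypothetical, since the preview above does not show the real schema:

    -- enable row caching when creating the column family
    CREATE TABLE my_table (
        id text PRIMARY KEY,
        value text
    ) WITH caching = 'ALL';

    -- or on an existing column family
    ALTER TABLE my_table WITH caching = 'ALL';

Without this, the table stays at the KEYS_ONLY default, which is consistent with the very low Row Cache Requests figure reported below.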

Low Row Cache Request

2013-08-31 Thread Sávio Teles
I'm running one Cassandra node (version 1.2.6) and I enabled the row cache with 1GB. But looking at the Cassandra metrics in JConsole, Row Cache Requests are very low after a high number of queries (about 12 requests). RowCache metrics: Capacity: 1GB, Entries: 3, HitRate: 0.75, Hits

List retrieve performance

2013-08-31 Thread Sávio Teles
I have a column family with this conf: CREATE TABLE geoms ( geom_key text PRIMARY KEY, part_geom list, the_geom text ) WITH bloom_filter_fp_chance=0.01 AND caching='KEYS_ONLY' AND comment='' AND dclocal_read_repair_chance=0.00 AND gc_grace_seconds=864000 AND read_repair_c

Re: How to perform range queries efficiently?

2013-08-28 Thread Sávio Teles
I can populate it again. We are still modelling the data! Thanks. 2013/8/28 Vivek Mishra > Just saw that you already have data populated, so i guess modifying for > composite key may not work for you. > > -Vivek > > > On Tue, Aug 27, 2013 at 11:55 PM, Sávio Teles > wr

Re: How to perform range queries efficiently?

2013-08-27 Thread Sávio Teles
ry indexes will be passed on each node in > ring. > > -Vivek > > > On Tue, Aug 27, 2013 at 11:11 PM, Sávio Teles > wrote: > >> Use a database that is designed for efficient range queries? ;D >>> >> >> Is there no way to do this with Cassandra? Like

Re: How to perform range queries efficiently?

2013-08-27 Thread Sávio Teles
> > Use a database that is designed for efficient range queries? ;D > Is there no way to do this with Cassandra? Like using Hive, Solr... 2013/8/27 Robert Coli > On Fri, Aug 23, 2013 at 5:53 AM, Sávio Teles > wrote: > >> I need to perform range queries efficiently.

Re: How to perform range queries efficiently?

2013-08-27 Thread Sávio Teles
> Do you have indexes defined? > > What is a "long time" exactly? > > Alain > On 23 Aug 2013 14:53, "Sávio Teles" > wrote: > > I need to perform range queries efficiently. I have a table like: >> >> users >> --- >> user_

Re: How to perform range queries efficiently?

2013-08-26 Thread Sávio Teles
Oops, inverted index*! 2013/8/26 Sávio Teles > Do I have to use a revert index to optimize range query operation? > > > 2013/8/23 Sávio Teles > >> I need to perform range queries efficiently. I have a table like: >> >> users >> --- >> user_id

Re: How to perform range queries efficiently?

2013-08-26 Thread Sávio Teles
Do I have to use a revert index to optimize range query operation? 2013/8/23 Sávio Teles > I need to perform range queries efficiently. I have a table like: > > users > --- > user_id | age | gender | salary | ... > > The attribute user_id is the PRIMARY KEY. > > Exam

How to perform range queries efficiently?

2013-08-23 Thread Sávio Teles
I need to perform range queries efficiently. I have a table like: users --- user_id | age | gender | salary | ... The attribute user_id is the PRIMARY KEY. Example query: select * from users where user_id = 'x' and age > y and age < z and salary > a and salary < b and gender = 'M';
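The later replies in this thread point to composite columns for this. A minimal CQL sketch of that idea, assuming a hypothetical table users_by_age and a hypothetical grouping column bucket used as the partition key (with the default partitioner the partition key itself cannot be range-scanned, so the range-queried attribute moves into the clustering part of the key):

    CREATE TABLE users_by_age (
        bucket  text,     -- hypothetical grouping key (e.g. a region) that forms the partition
        age     int,
        user_id text,
        gender  text,
        salary  bigint,
        PRIMARY KEY (bucket, age, user_id)  -- age is a clustering column, stored in sorted order
    );

    -- range scan over age, served by the clustering order within one partition
    SELECT * FROM users_by_age WHERE bucket = 'south' AND age > 25 AND age < 40;

The salary range and gender filter from the original query would still need either a second table keyed the same way or client-side filtering; this sketch only covers the age range.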

Re: cassandra 1.2.5- virtual nodes (num_token) pros/cons?

2013-07-25 Thread Sávio Teles
It is very useful for improving the app's performance. For example, if you have a machine with X capacity, you can set num_tokens=256. If you add a machine to your cluster with 2X capacity, you can set num_tokens=512, so this new machine will automatically receive twice the load. Moreover, you
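A minimal cassandra.yaml sketch of that sizing idea; the 256 and 512 values are only the illustration from the paragraph above, not recommendations:

    # cassandra.yaml on the existing node (capacity X)
    num_tokens: 256

    # cassandra.yaml on the new node with roughly twice the capacity (2X)
    num_tokens: 512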

Re: Strange cassandra-stress results with 2.0.0 beta1

2013-07-25 Thread Sávio Teles
Some bugs were fixed in 2.0.0-beta2 by the C* developers. Try it! 2013/7/22 Andrew Cobley > I've been noticing some strange cassandra-stress results with 2.0.0 beta > 1. I've set up a single node on a Mac (4 GB RAM, 2.8 GHz Core 2 Duo) and > installed 2.0.0 beta1. > > When I run ./cassandra-stress

Re: Cassandra with vnode and ByteOrderedPartition

2013-07-05 Thread Sávio Teles
Ok! Thanks! 2013/7/3 Richard Low > On 3 July 2013 22:18, Sávio Teles wrote: > >> We were able to implement ByteOrderedPartition on Cassandra 1.1 and >> insert an object in a specific machine. >> >> However, with Cassandra 1.2 and VNodes we can't implement VN

Re: Cassandra with vnode and ByteOrderedPartition

2013-07-03 Thread Sávio Teles
We were able to use the ByteOrderedPartitioner on Cassandra 1.1 and insert an object on a specific machine. However, with Cassandra 1.2 and vnodes we can't combine vnodes with the ByteOrderedPartitioner to insert an object on a specific machine. 2013/7/3 Richard Low > On 3 July 2013 21:0

Cassandra with vnode and ByteOrderedPartition

2013-07-03 Thread Sávio Teles
We're using the ByteOrderedPartitioner to programmatically choose the machine on which an object will be inserted. How can I use the ByteOrderedPartitioner with vnodes on Cassandra 1.2? -- Best regards, Sávio S. Teles de Oliveira voice: +55 62 9136 6996 http://br.linkedin.com/in/savioteles MSc student in
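For context, a sketch of the cassandra.yaml knobs this maps to in 1.2 (all values illustrative): running the ByteOrderedPartitioner with one explicit token per node instead of vnodes keeps placement under manual control:

    partitioner: org.apache.cassandra.dht.ByteOrderedPartitioner
    num_tokens: 1            # a single token per node instead of vnodes
    initial_token: 0a0000    # illustrative hex key prefix marking where this node's range starts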

Embedded Cassandra 1.2

2013-07-03 Thread Sávio Teles
We are using Cassandra 1.2 embedded in a production environment. We are having some issues with these lines: SocketAddress socket = remoteSocket.get(); assert socket != null; ThriftClientState cState = activeSocketSessions.get(socket); The connection is maintained by the remoteSocket thread. How

Thrift key

2013-03-22 Thread Sávio Teles
I'm running a Cassandra 1.1.10 cluster with the ByteOrderedPartitioner. I'm generating a key to force an object to be stored on a specific machine. When I used org.apache.cassandra.thrift.CassandraServer to store the object, the object was stored on the correct machine. When I used Thrift, the key is ch
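With the ByteOrderedPartitioner the raw key bytes decide the token, so the key has to reach the server exactly as generated. A minimal raw-Thrift sketch of such an insert, assuming an already-connected Cassandra.Client; the keyspace, column family and key prefix below are hypothetical:

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.cassandra.thrift.Column;
    import org.apache.cassandra.thrift.ColumnParent;
    import org.apache.cassandra.thrift.ConsistencyLevel;

    public class BopInsert {
        // Inserts 'payload' under a key whose raw bytes place it on the desired node.
        static void insertOnTargetNode(Cassandra.Client client, byte[] keyPrefix, byte[] payload)
                throws Exception {
            ByteBuffer key = ByteBuffer.wrap(keyPrefix);               // raw bytes decide the token under BOP
            ColumnParent parent = new ColumnParent("objects");         // hypothetical column family
            Column column = new Column(ByteBuffer.wrap("data".getBytes(StandardCharsets.UTF_8)));
            column.setValue(ByteBuffer.wrap(payload));
            column.setTimestamp(System.currentTimeMillis() * 1000);    // microsecond timestamps by convention
            client.set_keyspace("ks");                                 // hypothetical keyspace
            client.insert(key, parent, column, ConsistencyLevel.ONE);  // the key is sent to the server unchanged
        }
    }

If a higher-level client rewrites or re-serializes the key before sending it, the bytes (and therefore the token) change, which matches the behaviour described above.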

Re: Row cache off-heap ?

2013-03-11 Thread Sávio Teles
I have the same problem! 2013/3/11 Alain RODRIGUEZ > I can add that I have JNA correctly loaded, from the logs: "JNA mlockall > successful" > > > 2013/3/11 Alain RODRIGUEZ > >> Any clue on this ? >> >> Row cache well configured could avoid us a lot of disk reads, and IO >> is definitely our bottl

Unavailable Exception with Chunked Object Store

2013-01-23 Thread Sávio Teles
We are using ChunkedStorage, described in https://github.com/Netflix/astyanax/wiki/Chunked-Object-Store, to store large objects (about 40 MB). We have set the chunk size to 1 MB. But when this code is called, UnavailableException is thrown. Does anyone have any idea? Thanks in advan
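For reference, a minimal sketch of the write path from that wiki page, assuming an already-built Keyspace and a column family named cf_chunk (both hypothetical here); the 1 MB chunk size matches the setting mentioned above:

    import java.io.ByteArrayInputStream;
    import com.netflix.astyanax.Keyspace;
    import com.netflix.astyanax.recipes.storage.CassandraChunkedStorageProvider;
    import com.netflix.astyanax.recipes.storage.ChunkedStorage;
    import com.netflix.astyanax.recipes.storage.ChunkedStorageProvider;
    import com.netflix.astyanax.recipes.storage.ObjectMetadata;

    public class ChunkedWrite {
        // Splits 'blob' into 1 MB chunks and writes them under 'objectName'.
        static ObjectMetadata storeObject(Keyspace keyspace, String objectName, byte[] blob) throws Exception {
            // The column family "cf_chunk" is hypothetical; it must already exist in the keyspace.
            ChunkedStorageProvider provider = new CassandraChunkedStorageProvider(keyspace, "cf_chunk");
            return ChunkedStorage.newWriter(provider, objectName, new ByteArrayInputStream(blob))
                    .withChunkSize(1024 * 1024)   // 1 MB chunks, as in the report above
                    .call();
        }
    }

UnavailableException usually means the coordinator could not reach enough replicas for the requested consistency level, so checking replication factor and node availability is a reasonable first step.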

Re: How to store large columns?

2013-01-22 Thread Sávio Teles
ute the write/read load across the cluster as keys will >> (hopefully) be distributed on different nodes. This helps to avoid hot >> spots. >> >> Hope this helps, >> >> -Jason Brown >> Netflix >> -- >> *From:* Sávio

Re: How to store large columns?

2013-01-22 Thread Sávio Teles
ibuted on different nodes. This helps to avoid hot > spots. > > Hope this helps, > > -Jason Brown > Netflix > ------ > *From:* Sávio Teles [savio.te...@lupa.inf.ufg.br] > *Sent:* Monday, January 21, 2013 9:51 AM > *To:* user@cassandra.apache.or

Re: How to store large columns?

2013-01-21 Thread Sávio Teles
Astyanax splits large objects into multiple keys. Is it a good idea? Is it better to split into multiple columns? Thanks 2013/1/21 Sávio Teles > > Thanks Keith Wright. > > > 2013/1/21 Keith Wright > >> This may be helpful: >> https://github.com/Netflix/asty

Re: How to store large columns?

2013-01-21 Thread Sávio Teles
Thanks Keith Wright. 2013/1/21 Keith Wright > This may be helpful: > https://github.com/Netflix/astyanax/wiki/Chunked-Object-Store > > From: Vegard Berget > Reply-To: "user@cassandra.apache.org" , Vegard > Berget > Date: Monday, January 21, 2013 8:35 AM > To: "user@cassandra.apache.org" > Sub

How to store large columns?

2013-01-21 Thread Sávio Teles
We wish to store a column in a row with size larger than thrift_framed_transport_size_in_mb. But Thrift has a maximum frame size configured by thrift_framed_transport_size_in_mb in cassandra.yaml. So, how do we store columns with size larger than thrift_framed_transport_size_in_mb? Increasing this va
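For reference, the cassandra.yaml limits involved; the numbers are illustrative rather than recommendations, and raising them only postpones the problem for very large objects, which is why the replies above move toward chunking:

    # cassandra.yaml (illustrative values)
    thrift_framed_transport_size_in_mb: 15
    thrift_max_message_length_in_mb: 16    # should stay larger than the frame size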

Re: Astyanax returns empty row

2013-01-16 Thread Sávio Teles
We have multiple clients reading the same row key. It makes no sense for it to fail on one machine. When we use Thrift, Cassandra always returns the correct result. 2013/1/16 Sávio Teles > I ran the tests with only one machine, so CL_ONE is not the problem. > Am I right? -- Atencios

Re: Astyanax returns empty row

2013-01-16 Thread Sávio Teles
I ran the tests with only one machine, so CL_ONE is not the problem. Am I right? 2013/1/15 Hiller, Dean > What is your consistency level set to? If you set it to CL_ONE, you could > get different results, or is your database constant and unchanging? > > Dean > > From: Sáv

Astyanax returns empty row

2013-01-15 Thread Sávio Teles
I'm currently using Astyanax 1.56.21 to retrieve an entire row. My code: ColumnList result = keyspace.prepareQuery(cf_name).getKey(key).execute().getResult(); But sometimes Astyanax returns an empty row for a specific key. For example, on the first attempt Astyanax returns an empty row for a
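Following the consistency-level discussion later in this thread, here is a minimal sketch of the same read with an explicit consistency level; CF_NAME, its serializers and the generic types are assumptions, since the preview above does not show them:

    import com.netflix.astyanax.Keyspace;
    import com.netflix.astyanax.model.ColumnFamily;
    import com.netflix.astyanax.model.ColumnList;
    import com.netflix.astyanax.model.ConsistencyLevel;
    import com.netflix.astyanax.serializers.StringSerializer;

    public class RowRead {
        // Hypothetical column family with String row keys and String column names.
        static final ColumnFamily<String, String> CF_NAME =
                new ColumnFamily<String, String>("my_cf", StringSerializer.get(), StringSerializer.get());

        static ColumnList<String> readRow(Keyspace keyspace, String key) throws Exception {
            return keyspace.prepareQuery(CF_NAME)
                    .setConsistencyLevel(ConsistencyLevel.CL_QUORUM)  // read from a quorum instead of a single replica
                    .getKey(key)
                    .execute()
                    .getResult();
        }
    }

If the row still comes back empty at CL_QUORUM on a single-node test, the consistency level is unlikely to be the cause, which matches the single-machine observation in the replies.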

Re: Force data to a specific node

2013-01-03 Thread Sávio Teles
Hi Dominique, I have the same problem! I would like to place an object on a specific node because I'm working on a spatial application. How should I choose the K1 part to force a given object to go to a node? 2013/1/3 DE VITO Dominique > Hi Everton, > > AFAIK, the pb is not f