n up to date alone (bug fixes and awesome new feature X!)
> would make embedding worth it only for edge scenarios. I would recommend
> against it.
>
> ---
> Chris Lohfink
>
> On Apr 16, 2014, at 10:13 AM, Sávio Teles
> wrote:
>
Is it advisable to run the embedded Cassandra in production?
2014-04-16 12:08 GMT-03:00 Sávio Teles :
I'm running a cluster with Cassandra and my app embedded.
Regarding performance, is it better to run Cassandra embedded?
What are the implications of running an embedded Cassandra?
Thanks
--
Best regards,
Sávio S. Teles de Oliveira
voice: +55 62 9136 6996
http://br.linkedin.com/in/savioteles
Me
We have the same problem.
2013/11/5 Jiri Horky
> Hi there,
>
> we are seeing extensive memory allocation leading to quite long and
> frequent GC pauses when using the row cache. This is on a Cassandra 2.0.0
> cluster with the JNA 4.0 library and the following settings:
>
> key_cache_size_in_mb: 300
> key_ca
> Solr's index sitting on a single machine, even if that single machine can
> vertically scale, is a single point of failure.
>
What about SolrCloud?
2013/9/30 Ken Hancock
> Yes.
>
>
> On Mon, Sep 30, 2013 at 1:57 PM, Andrey Ilinykh wrote:
>
>>
>> Also, be aware that while Cassandra has knobs
The list is "null".
2013/9/3 Baskar Duraikannu
> I don't know of any. I would check the size of LIST. If it is taking long,
> it could be just that disk read is taking long.
>
> --
> Date: Sat, 31 Aug 2013 16:35:22 -0300
> Subject: List retrieve performance
> From: s
http://www.mail-archive.com/user@cassandra.apache.org/msg31693.html
>
> tl;dr - it depends completely on use case. Small static rows work best.
>
>
>
> On Mon, Sep 2, 2013 at 2:05 PM, Sávio Teles
> wrote:
>
I'm running Cassandra 1.2.4 and when I enable the row_cache, the system
throws TimeoutException and garbage collection doesn't stop.
When I disable it, the query returns in 700ms.
Configuration:
- row_cache_size_in_mb: 256
- row_cache_save_period: 0
- # row_cache_keys_to_save: 10
introduction-to-composite-columns-part-1
>
> Hope it helps.
>
>
> -Vivek
>
>
> On Wed, Aug 28, 2013 at 5:49 PM, Sávio Teles
> wrote:
>
>> I can populate again. We are still modelling the data! Thanks.
>>
>>
>> 2013/8/28 Vivek Mishra
>>
>>
Yes, it is! I've fixed the problem. I missed setting the caching property
to 'ALL' when creating the column family.
2013/8/31 Jonathan Haddad
> 9/12 = .75
>
> It's a rate, not a percentage.
>
>
> On Sat, Aug 31, 2013 at 2:21 PM, Sávio Teles
> wrote:
>
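The arithmetic above is just hits divided by requests; converting the rate to a percentage is a separate step. A minimal illustration (the numbers are the ones from this thread):

```python
# Row cache hit rate as exposed over JMX: a ratio in [0, 1], not a percentage.
hits = 9
requests = 12

hit_rate = hits / requests       # the value JConsole shows
hit_percentage = hit_rate * 100  # only if you want a percentage

print(hit_rate, hit_percentage)  # 0.75 75.0
```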
I'm running one Cassandra node (version 1.2.6) and I enabled the row
cache with 1GB.
But looking at the Cassandra metrics in JConsole, Row Cache Requests are
very low after a high number of queries (about 12 requests).
RowCache metrics:
Capacity: 1GB
Entries: 3
HitRate: 0.75
Hits
I have a column family with this conf:
CREATE TABLE geoms (
geom_key text PRIMARY KEY,
part_geom list,
the_geom text
) WITH
bloom_filter_fp_chance=0.01 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.00 AND
gc_grace_seconds=864000 AND
read_repair_c
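The schema above sets caching='KEYS_ONLY', which would explain an empty row cache: only keys are cached, not rows. A sketch of the change (Cassandra 1.2 CQL3 syntax; applied to the `geoms` table from the snippet above):

```sql
-- 'ALL' caches both keys and rows; the 1.2 default is 'KEYS_ONLY'.
ALTER TABLE geoms WITH caching = 'ALL';
```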
I can populate again. We are still modelling the data! Thanks.
2013/8/28 Vivek Mishra
> Just saw that you already have data populated, so i guess modifying for
> composite key may not work for you.
>
> -Vivek
>
>
> On Tue, Aug 27, 2013 at 11:55 PM, Sávio Teles > wr
ry indexes will be passed on each node in
> ring.
>
> -Vivek
>
>
> On Tue, Aug 27, 2013 at 11:11 PM, Sávio Teles > wrote:
>
>> Use a database that is designed for efficient range queries? ;D
>>>
>>
>> Is there no way to do this with Cassandra? Like
>
> Use a database that is designed for efficient range queries? ;D
>
Is there no way to do this with Cassandra? Like using Hive, Solr...
2013/8/27 Robert Coli
> On Fri, Aug 23, 2013 at 5:53 AM, Sávio Teles
> wrote:
>
>> I need to perform range query efficiently.
> Do you have indexes defined ?
>
> What is a "long time" exactly ?
>
> Alain
> On 23 August 2013 at 14:53, "Sávio Teles" wrote:
>
> I need to perform range query efficiently. I have the table like:
>>
>> users
>> ---
>> user_
Oops, inverted index*!
2013/8/26 Sávio Teles
> Do I Have to use revert index to optimize range query operation?
>
>
> 2013/8/23 Sávio Teles
>
>> I need to perform range query efficiently. I have the table like:
>>
>> users
>> ---
>> user_id
Do I Have to use revert index to optimize range query operation?
2013/8/23 Sávio Teles
> I need to perform range query efficiently. I have the table like:
>
> users
> ---
> user_id | age | gender | salary | ...
>
> The attr user_id is the PRIMARY KEY.
>
> Exam
I need to perform range query efficiently. I have the table like:
users
---
user_id | age | gender | salary | ...
The attr user_id is the PRIMARY KEY.
Example of querying:
select * from users where user_id = 'x' and age > y and age < z and
salary > a and salary < b and gender = 'M';
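Cassandra can only serve a query like this efficiently with suitable indexes or data modelling; the inverted-index idea discussed later in this thread can be sketched client-side. A toy illustration of the principle (plain Python, not Cassandra code; all names are made up):

```python
import bisect
from collections import namedtuple

User = namedtuple("User", "user_id age gender salary")

users = [
    User("u1", 25, "M", 3000),
    User("u2", 31, "F", 5200),
    User("u3", 28, "M", 4100),
    User("u4", 45, "M", 9000),
]

# Inverted index on age: sorted (age, user_id) pairs, so a range
# predicate becomes two binary searches instead of a full scan.
age_index = sorted((u.age, u.user_id) for u in users)

def ids_with_age_between(lo, hi):
    """Return user_ids with lo < age < hi (exclusive bounds, as in the query)."""
    left = bisect.bisect_right(age_index, (lo, "\uffff"))
    right = bisect.bisect_left(age_index, (hi, ""))
    return [uid for _, uid in age_index[left:right]]

print(ids_with_age_between(24, 30))  # ['u1', 'u3']
```

The same trick applies per attribute (salary, gender); intersecting the resulting id sets answers the combined predicate.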
It is very useful for improving the app's performance.
For example, if you have a machine with capacity X, you can set
num_tokens=256. If you add a machine to your cluster with capacity 2X, you
can set num_tokens=512.
So this new machine will automatically receive twice the load.
Moreover, you
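In cassandra.yaml terms that looks like the fragment below (the setting is num_tokens; the values are only illustrative):

```yaml
# On the machine with capacity X:
num_tokens: 256

# On the machine with capacity 2X (set in that node's own cassandra.yaml):
# num_tokens: 512
```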
Some bugs were fixed in 2.0.0-beta2 by the C* developers. Try it!
2013/7/22 Andrew Cobley
> I've been noticing some strange cassandra-stress results with 2.0.0 beta
> 1. I've set up a single node on a Mac (4 GB RAM, 2.8 GHz Core 2 Duo) and
> installed 2.0.0 beta1.
>
> When I run ./cassandra-stress
Ok! Thanks!
2013/7/3 Richard Low
> On 3 July 2013 22:18, Sávio Teles wrote:
>
We were able to use the ByteOrderedPartitioner on Cassandra 1.1 and insert
an object on a specific machine.
However, with Cassandra 1.2 and vnodes we can't use the
ByteOrderedPartitioner to insert an object on a specific machine.
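To illustrate why byte-ordered placement works without vnodes: each node owns one contiguous range of key bytes, so crafting a key prefix picks the node. With vnodes each node owns many small scattered ranges, so a single prefix no longer maps to one machine. A toy model of the non-vnode case (not Cassandra code; tokens and node names are made up):

```python
import bisect

# Toy byte-ordered ring: each entry is (start_key_prefix, node).
# A key is owned by the node whose range contains it.
ring = [(b"a", "node1"), (b"h", "node2"), (b"p", "node3")]
starts = [start for start, _ in ring]

def owner(key: bytes) -> str:
    """Node owning `key` under byte-ordered placement (keys below b'a' wrap)."""
    i = bisect.bisect_right(starts, key) - 1
    return ring[i][1]  # i == -1 wraps around to the last node

# Prefixing a key with anything in b'h'..b'o' forces it onto node2:
print(owner(b"h123"), owner(b"k456"))  # node2 node2
```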
2013/7/3 Richard Low
> On 3 July 2013 21:0
We're using the ByteOrderedPartitioner to programmatically choose the
machine on which an object will be inserted.
How can I use the ByteOrderedPartitioner with vnodes on Cassandra 1.2?
We are using Cassandra 1.2 embedded in a production environment.
We are having some issues with these lines:
SocketAddress socket = remoteSocket.get();
assert socket != null;
ThriftClientState cState = activeSocketSessions.get(socket);
The connection is maintained by the remoteSocket thread. How
I'm running a Cassandra 1.1.10 cluster with a ByteOrderedPartitioner. I'm
generating a key to force an object to be stored on a specific machine.
When I used org.apache.cassandra.thrift.CassandraServer to store the
object, it was stored on the correct machine. When I used Thrift, the
key is ch
I have the same problem!
2013/3/11 Alain RODRIGUEZ
> I can add that I have JNA correctly loaded, from the logs: "JNA mlockall
> successful"
>
>
> 2013/3/11 Alain RODRIGUEZ
>
>> Any clue on this ?
>>
>> Row cache well configured could avoid us a lot of disk read, and IO
>> is definitely our bottl
We are using the ChunkedStorage described in
https://github.com/Netflix/astyanax/wiki/Chunked-Object-Store to store
large objects (about 40 MB). We have defined the chunk size as 1 MB. But
when this code is called, an UnavailableException is thrown. Does
anyone have any idea?
Thanks in advan
ute the write/read load across the cluster as keys will
>> (hopefully) be distributed on different nodes. This helps to avoid hot
>> spots.
>>
>> Hope this helps,
>>
>> -Jason Brown
>> Netflix
>> --
>> *From:* Sávio
ibuted on different nodes. This helps to avoid hot
> spots.
>
> Hope this helps,
>
> -Jason Brown
> Netflix
> ------
> *From:* Sávio Teles [savio.te...@lupa.inf.ufg.br]
> *Sent:* Monday, January 21, 2013 9:51 AM
> *To:* user@cassandra.apache.or
Astyanax splits large objects into multiple keys. Is it a good idea? Is
it better to split into multiple columns?
Thanks
2013/1/21 Sávio Teles
>
> Thanks Keith Wright.
>
>
> 2013/1/21 Keith Wright
>
>> This may be helpful:
>> https://github.com/Netflix/asty
Thanks Keith Wright.
2013/1/21 Keith Wright
> This may be helpful:
> https://github.com/Netflix/astyanax/wiki/Chunked-Object-Store
>
> From: Vegard Berget
> Reply-To: "user@cassandra.apache.org" , Vegard
> Berget
> Date: Monday, January 21, 2013 8:35 AM
> To: "user@cassandra.apache.org"
> Sub
We wish to store a column in a row with size larger than
thrift_framed_transport_size_in_mb. But Thrift has a maximum frame size
configured by thrift_framed_transport_size_in_mb in cassandra.yaml.
So, how can we store columns with size larger than
thrift_framed_transport_size_in_mb? Increasing this va
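The usual workaround, and what Astyanax's ChunkedStorage does per the link earlier in the thread, is to split the object client-side into pieces smaller than the frame size and store each piece under its own key/column. A toy sketch of just the splitting logic (plain Python, not the Astyanax API; key format is made up):

```python
CHUNK_SIZE = 1 * 1024 * 1024  # 1 MB, well under the default Thrift frame limit

def split_into_chunks(key: str, blob: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield (chunk_key, chunk_bytes) pairs for storing a large object."""
    for i in range(0, len(blob), chunk_size):
        yield f"{key}:{i // chunk_size}", blob[i:i + chunk_size]

def join_chunks(chunks):
    """Reassemble chunks fetched back in order."""
    return b"".join(data for _, data in chunks)

blob = b"x" * (3 * CHUNK_SIZE + 5)           # an object a bit over 3 MB
chunks = list(split_into_chunks("geom42", blob))
print(len(chunks))                           # 4
assert join_chunks(chunks) == blob
```

Each write and read then stays under the frame size; only the reassembly step sees the full object.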
We have multiple clients reading the same row key. It makes no sense to
fail on one machine. When we use Thrift, Cassandra always returns the
correct result.
2013/1/16 Sávio Teles
> I ran the tests with only one machine, so the CL_ONE is not the problem.
> Am i right?
I ran the tests with only one machine, so CL_ONE is not the problem. Am I
right?
2013/1/15 Hiller, Dean
> What is your consistency level set to? If you set it to CL_ONE, you could
> get different results or is your database constant and unchanging?
>
> Dean
>
> From: Sáv
I'm currently using Astyanax 1.56.21 to retrieve an entire row. My code:
ColumnList result = keyspace.prepareQuery(cf_name)
    .getKey(key)
    .execute().getResult();
But sometimes Astyanax returns an empty row for a specific key. For
example, on the first attempt Astyanax returns an empty row for a
Hi Dominique,
I have the same problem! I would like to place an object on a specific node
because I'm working on a spatial application. How should I choose the K1
part to force a given object to go to a node?
2013/1/3 DE VITO Dominique
> Hi Everton,
>
> AFAIK, the problem is not f