The short answer is yes, we are looking into adding streaming of
results to solve that problem
(https://issues.apache.org/jira/browse/CASSANDRA-4415).
--
Sylvain
On Tue, Jul 24, 2012 at 6:51 PM, Josep Blanquer wrote:
> Thanks Sylvain,
>
> The main argument for this is pagination. Let me try to e
Does anyone have any idea?
We tried:
updating to 1.1.2
DiskAccessMode standard, indexAccessMode standard
row_cache_size_in_mb: 0
key_cache_size_in_mb: 0
Our next try will be to change
SerializingCacheProvider to ConcurrentLinkedHashCacheProvider.
Any other proposals are welcome.
On 07/04/2012 02:13 PM, Tho
There are Big Data and NoSQL tracks where Cassandra talks would be appropriate.
-- Forwarded message --
From: Nick Burch
Date: Thu, Jul 19, 2012 at 1:14 PM
Subject: Call for Papers for ApacheCon Europe 2012 now open!
To: committ...@apache.org
Hi All
We're pleased to announce t
aaron morton thelastpickle.com> writes:
>
> The cluster is running into GC problems and this is slowing it down under the
> stress test. When it slows down one or more of the nodes is failing to perform
> the write within rpc_timeout. This causes the coordinator of the write to
> raise the Time
On Mon, Jul 23, 2012 at 10:24 PM, Eran Chinthaka Withana
wrote:
> Thanks Brandon for the answer (and I didn't know driftx = Brandon Williams.
> Thanks for your awesome support in Cassandra IRC)
Thanks :)
> Increasing CL is tricky for us for now, as our RF on that datacenter is 2
> and CL is set
Thanks Sylvain,
The main argument for this is pagination. Let me try to explain the use
cases, and compare it to an RDBMS for better illustration:
1- Right now, Cassandra doesn't stream the results, so large result sets
are a royal pain in the neck to deal with. I.e., if I have a range_slice,
or eve
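For illustration, here is a minimal sketch of the manual paging this forces on
clients today, using the raw Thrift API (the keyspace, column family, and page
size are made up for the example): each page's last key is reused as the start
key of the next KeyRange, and the overlapping first row of every subsequent
page is skipped.

import java.nio.ByteBuffer;
import java.util.List;
import org.apache.cassandra.thrift.*;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class RangeSlicePager {
    public static void main(String[] args) throws Exception {
        TFramedTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
        transport.open();
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        client.set_keyspace("my_keyspace");                   // hypothetical keyspace

        ColumnParent parent = new ColumnParent("my_cf");       // hypothetical column family
        SlicePredicate predicate = new SlicePredicate();
        // up to 100 columns per row, in natural order
        predicate.setSlice_range(new SliceRange(
                ByteBuffer.allocate(0), ByteBuffer.allocate(0), false, 100));

        ByteBuffer startKey = ByteBuffer.allocate(0);          // empty = start of the ring
        boolean firstPage = true;

        while (true) {
            KeyRange range = new KeyRange();
            range.setStart_key(startKey);
            range.setEnd_key(ByteBuffer.allocate(0));
            range.setCount(101);                               // page size + 1 for the overlap

            List<KeySlice> page = client.get_range_slices(
                    parent, predicate, range, ConsistencyLevel.ONE);
            if (page.isEmpty()) break;

            for (int i = 0; i < page.size(); i++) {
                // the first row of every page after the first duplicates the
                // previous page's last row, so skip it
                if (!firstPage && i == 0) continue;
                KeySlice row = page.get(i);
                // ... process row.key and row.columns ...
            }

            if (page.size() < 101) break;                      // last page reached
            startKey = page.get(page.size() - 1).key;          // resume from the last key
            firstPage = false;
        }
        transport.close();
    }
}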
I am guessing you already asked if they could give you three 100MB files
instead, so you could parallelize the operation? Or maybe your task
doesn't lend itself well to that.
Dean
On Tue, Jul 24, 2012 at 10:01 AM, Pushpalanka Jayawardhana <
pushpalankaj...@gmail.com> wrote:
> Hi all,
>
> I am d
Hi all,
I am dealing with a scenario where I receive a .csv file every 10 minutes,
averaging 300MB. I need to update a Cassandra cluster according to the data
received from the .csv file, after some processing functions.
The current approach is keeping a HashMap in memory, updating it f
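As a rough illustration of an alternative to holding everything in memory (this
is not the poster's actual code, and the keyspace, column family, and CSV
layout are assumptions), a loader can read the CSV line by line and flush
mutations in fixed-size batches via the Thrift batch_mutate call, so the whole
300MB file never has to be resident at once:

import java.io.BufferedReader;
import java.io.FileReader;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.cassandra.thrift.*;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class CsvBatchLoader {
    private static ByteBuffer utf8(String s) throws Exception {
        return ByteBuffer.wrap(s.getBytes("UTF-8"));
    }

    public static void main(String[] args) throws Exception {
        TFramedTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
        transport.open();
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        client.set_keyspace("my_keyspace");                    // hypothetical keyspace

        Map<ByteBuffer, Map<String, List<Mutation>>> batch =
                new HashMap<ByteBuffer, Map<String, List<Mutation>>>();
        long timestamp = System.currentTimeMillis() * 1000;    // microseconds
        int rows = 0;

        BufferedReader reader = new BufferedReader(new FileReader(args[0]));
        String line;
        while ((line = reader.readLine()) != null) {
            String[] f = line.split(",");                       // assumed layout: key,column,value

            Column col = new Column(utf8(f[1]));
            col.setValue(utf8(f[2]));
            col.setTimestamp(timestamp);

            ColumnOrSuperColumn cosc = new ColumnOrSuperColumn();
            cosc.setColumn(col);
            Mutation m = new Mutation();
            m.setColumn_or_supercolumn(cosc);

            ByteBuffer key = utf8(f[0]);
            Map<String, List<Mutation>> byCf = batch.get(key);
            if (byCf == null) {
                byCf = new HashMap<String, List<Mutation>>();
                byCf.put("my_cf", new ArrayList<Mutation>());    // hypothetical column family
                batch.put(key, byCf);
            }
            byCf.get("my_cf").add(m);

            if (++rows % 1000 == 0) {                           // flush every 1000 rows
                client.batch_mutate(batch, ConsistencyLevel.QUORUM);
                batch.clear();
            }
        }
        reader.close();

        if (!batch.isEmpty()) {
            client.batch_mutate(batch, ConsistencyLevel.QUORUM);
        }
        transport.close();
    }
}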
On Mon, Jul 23, 2012 at 1:25 PM, Mike Heffner wrote:
> Hi,
>
> We are migrating from a 0.8.8 ring to a 1.1.2 ring and we are noticing
> missing data post-migration. We use pre-built/configured AMIs so our
> preferred route is to leave our existing production 0.8.8 untouched and
> bring up a paral
writes:
> De : Pierre-Yves Ritschard [mailto:p...@spootnik.org]
>> Snapshot and restores are great for point in time recovery. There's no
>> particular side-effect if you're willing to accept the downtime.
>
> Are you sure? The system KS has no book-keeping about the KSs/CFs?
> For instance, s
De : Pierre-Yves Ritschard [mailto:p...@spootnik.org]
> Snapshot and restores are great for point in time recovery. There's no
> particular side-effect if you're willing to accept the downtime.
Are you sure? The system KS has no book-keeping about the KSs/CFs?
For instance, schema changes, etc?
writes:
> One of the scenarios I have to take into account for a small Cassandra
> cluster (N=4) is restoring the data back in time. I will have full backups
> for 15 days, and it's possible that I will need to restore, let's say, the
> data from 10 days ago (don't ask, I'm not going in
One of the scenarios I have to take into account for a small Cassandra cluster
(N=4) is restoring the data back in time. I will have full backups for 15 days,
and it's possible that I will need to restore, let's say, the data from 10 days
ago (don't ask, I'm not going into the details why).
Hey,
Mutations taking longer than rpc_timeout will be dropped because the
coordinator won't keep waiting for the replicas, and a TimedOutException will
be returned to the client if the write doesn't reach the consistency level
[1].
In the case of counters though, since counter mutations aren't idempotent, the
client is in a worse spot: blindly retrying a timed-out counter write risks
applying the increment twice.
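To make the idempotency point concrete, here is a minimal sketch against the
raw Thrift API (the keyspace, column family, and row/column names are made up):
a plain column write that times out can simply be retried, because re-applying
the same column with the same timestamp changes nothing, whereas wrapping a
counter add() call in the same retry loop could double-count.

import java.nio.ByteBuffer;
import org.apache.cassandra.thrift.*;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class RetryOnTimeout {
    public static void main(String[] args) throws Exception {
        TFramedTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
        transport.open();
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        client.set_keyspace("my_keyspace");                     // hypothetical keyspace

        ByteBuffer key = ByteBuffer.wrap("row1".getBytes("UTF-8"));
        ColumnParent parent = new ColumnParent("my_cf");         // hypothetical column family
        Column col = new Column(ByteBuffer.wrap("name".getBytes("UTF-8")));
        col.setValue(ByteBuffer.wrap("value".getBytes("UTF-8")));
        col.setTimestamp(System.currentTimeMillis() * 1000);     // fixed timestamp: retries are no-ops

        int attempts = 0;
        while (true) {
            try {
                client.insert(key, parent, col, ConsistencyLevel.QUORUM);
                break;                                            // acknowledged at QUORUM
            } catch (TimedOutException e) {
                // The coordinator stopped waiting within rpc_timeout; the write may
                // or may not have been applied on some replicas. Retrying is safe
                // here because rewriting the same column/timestamp is idempotent.
                // A counter add() retried the same way could be applied twice.
                if (++attempts >= 3) throw e;
            }
        }
        transport.close();
    }
}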
On Tue, Jul 24, 2012 at 12:09 AM, Josep Blanquer
wrote:
> is there some way to express that in CQL3? something logically equivalent to
>
> SELECT * FROM bug_test WHERE a:b:c:d:e > 1:1:1:1:2??
No, there isn't. Not currently at least. But feel free of course to
open a ticket/request on
https:/
Greetings.
We have a very strange problem: it seems that sometimes our keyspaces become
unmodifiable.
user@server:~$ cqlsh -3 -k goh_master cassandra1
Connected to GOH Cluster at cassandra1:9160.
[cqlsh 2.2.0 | Cassandra 1.1.2 | CQL spec 3.0.0 | Thrift protocol 19.32.0]
Use HELP for help.
cqlsh:
You are better off using Sun Java 6 to run Cassandra. In the past
there were issues reported on 7. Can you try running it on Sun Java 6?
kind regards
Joost
On Tue, Jul 24, 2012 at 10:04 AM, Nikolay Kоvshov wrote:
> 48 G of Ram on that machine, swap is not used. I will disable swap at all
> jus
I ran sar only recently, after your advice, and did not see any huge GCs on
that server.
At 08:14 there was a GC lasting 4.5 seconds; that's not five minutes of course,
but it is still quite an unpleasant value.
I'm still waiting for big GC values and will provide the corresponding sar logs.
07:25:01 PM pgp
48 GB of RAM on that machine, swap is not used. I will disable swap altogether
just in case.
I have 4 Cassandra processes (parts of 4 different clusters), each allocated 8
GB of heap and using 4 GB of it.
>java -version
java version "1.7.0"
Java(TM) SE Runtime Environment (build 1.7.0-b147)
Java HotSpot(TM) 64-