Hello,
I have a base Cassandra 1.1.7 install in two data centers with 3 nodes each,
using a PropertyFileSnitch as outlined below. When I run nodetool ring, I see
a very uneven load. Any idea what could be going on? I have not added or
removed any nodes, or changed the replication scheme or count.
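For what it's worth, uneven load in a two-DC ring is often a token assignment
problem rather than a data problem: with RandomPartitioner, each DC's tokens
should be evenly spaced around the ring, with the second DC's tokens offset
slightly so the ranges interleave. A minimal sketch of computing such tokens
(assuming RandomPartitioner and the 3-nodes-per-DC layout above; not from the
original thread):

    import java.math.BigInteger;

    public class TokenCalc {
        public static void main(String[] args) {
            // RandomPartitioner's token range is 0 .. 2^127
            BigInteger range = BigInteger.valueOf(2).pow(127);
            int nodesPerDc = 3;
            for (int i = 0; i < nodesPerDc; i++) {
                BigInteger t = range.multiply(BigInteger.valueOf(i))
                                    .divide(BigInteger.valueOf(nodesPerDc));
                System.out.println("DC1 node " + i + ": " + t);
                // Offset the second data center by 1 so its ranges interleave
                System.out.println("DC2 node " + i + ": " + t.add(BigInteger.ONE));
            }
        }
    }

If the initial_token values don't follow a pattern like this, nodetool ring
can report very skewed ownership even though nothing was added or removed.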
Well, I am not sure how parallel multiget actually is. Someone said it sends
requests to the different nodes in parallel, but that on each node the reads
are executed sequentially. I have not looked into the source code yet. Does
anyone know for sure?
I am using Hector; I just copied the Thrift definition from
What's wrong with multiget? Parallel reads from multiple disks perform well,
so it is usually a good thing.
Also, something looks wrong: since you pass a list of keys, I would expect
the result to be a Map<key, List<column>>, with one entry per key. Are you
sure you have that correct? If you set the range count to 100, it should be
100 columns for each key.
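For reference, the Thrift-generated Java method should look roughly like this
(reproduced from memory from the 1.x cassandra.thrift, so treat it as a
sketch); note that it returns one column list per key:

    Map<ByteBuffer, List<ColumnOrSuperColumn>> multiget_slice(
            List<ByteBuffer> keys,
            ColumnParent column_parent,
            SlicePredicate predicate,
            ConsistencyLevel consistency_level)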
I know it's probably not a good idea to use multiget, but for my use case it's
the only choice.
I have a question regarding the SlicePredicate argument of multiget_slice.
The SlicePredicate takes a slice_range, which takes a start, a finish, and a
count. I suppose the start and finish will apply to each individual key?
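In case it helps, here is a minimal Hector sketch of what I understand the
semantics to be. The cluster, keyspace, column family, and keys are made-up
names, and the count passed to setRange() applies per key:

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.beans.Row;
    import me.prettyprint.hector.api.beans.Rows;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.query.MultigetSliceQuery;
    import me.prettyprint.hector.api.query.QueryResult;

    public class MultigetExample {
        public static void main(String[] args) {
            Cluster cluster = HFactory.getOrCreateCluster("TestCluster", "localhost:9160");
            Keyspace keyspace = HFactory.createKeyspace("Keyspace1", cluster);
            StringSerializer se = StringSerializer.get();

            MultigetSliceQuery<String, String, String> query =
                    HFactory.createMultigetSliceQuery(keyspace, se, se, se);
            query.setColumnFamily("MyCF");
            query.setKeys("key1", "key2", "key3");
            // Empty start/finish mean "unbounded"; the count is applied per
            // row, so each key returns at most 100 columns.
            query.setRange("", "", false, 100);

            QueryResult<Rows<String, String, String>> result = query.execute();
            for (Row<String, String, String> row : result.get()) {
                System.out.println(row.getKey() + ": "
                        + row.getColumnSlice().getColumns().size() + " columns");
            }
        }
    }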
On Thu, Dec 6, 2012 at 7:36 PM, aaron morton wrote:
> So for memory-mapped files, compaction could do a madvise SEQUENTIAL instead
> of the current DONTNEED flag after detecting appropriate OS versions. Will
> this help?
>
>
> AFAIK Compaction does use memory mapped file access.
The history:
https://
My two cents... I know this thread is a bit old, but the fact that odd-sized
SSTables (usually large ones) will hang around for a while can be very
troublesome for disk space and planning. Our data in Cassandra is temporal and
is being deleted constantly. We have seen space usage in the 1+ TB range when …
Assume you need to work with QUORUM in a non-vnode scenario. If two adjacent
nodes in the ring are down, some number of quorum operations will fail with
UnavailableException (or TimeoutException right after the failures). This is
because, for a given range of tokens, quorum will be impossible to achieve.
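To make the arithmetic concrete, a minimal sketch (assuming RF=3; not code
from the thread):

    public class QuorumMath {
        // Quorum is a strict majority of the replicas: RF/2 + 1
        static int quorum(int replicationFactor) {
            return replicationFactor / 2 + 1;
        }

        public static void main(String[] args) {
            int rf = 3;
            int liveReplicas = 1; // two adjacent replica nodes are down
            System.out.println("quorum needed = " + quorum(rf));          // 2
            System.out.println("available = " + (liveReplicas >= quorum(rf))); // false
        }
    }

With two adjacent nodes down, any token range whose replica set contains both
of them has only 1 of 3 replicas live, so quorum operations on that range fail.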
On Mon, Dec 10, 2012 at 12:36 PM, Abhijit Chanda wrote:
> Hi All,
>
> I have a column family whose structure is:
>
> CREATE TABLE practice (
>   id text,
>   name text,
>   addr text,
>   pin text,
>   PRIMARY KEY (id, name)
> ) WITH
>   comment='' AND
>   caching='KEYS_ONLY' AND
>   read_repair_chance=…
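The question is cut off in the digest, but one common point of confusion with
this kind of schema: with PRIMARY KEY (id, name), id is the partition key and
name is a clustering column, so a CQL query must restrict id with equality
before it can filter on name. A hypothetical sketch using Hector's CQL support
(reproduced from memory; the values are made up):

    import me.prettyprint.cassandra.model.CqlQuery;
    import me.prettyprint.cassandra.model.CqlRows;
    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.query.QueryResult;

    public class PracticeQuery {
        static void query(Keyspace keyspace) {
            StringSerializer se = StringSerializer.get();
            CqlQuery<String, String, String> cql =
                    new CqlQuery<String, String, String>(keyspace, se, se, se);
            // The partition key (id) must be restricted with equality before
            // the clustering column (name) can appear in the WHERE clause.
            cql.setQuery("SELECT addr, pin FROM practice WHERE id = 'u1' AND name = 'bob'");
            QueryResult<CqlRows<String, String, String>> rows = cql.execute();
            System.out.println(rows.get().getCount() + " row(s)");
        }
    }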
Hi Tyler,
You're right, the math does assume independence, which is unlikely to be
accurate. But if you do have correlated failure modes, e.g. shared power,
racks, or DCs, then you can still use Cassandra's rack-aware and DC-aware
features to ensure replicas are spread around so your cluster can survive
correlated failures.
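To make the independence assumption concrete, a back-of-the-envelope sketch:
assume RF=3 and that each replica is down independently with probability p.
Quorum needs 2 of 3 replicas, so the chance of losing quorum for a given
replica set is

    P = C(3,2) * p^2 * (1 - p) + p^3 = 3p^2 - 2p^3

With p = 0.01 that works out to roughly 3e-4. A shared rack or power failure
that takes out two replicas at once makes the real number far higher, which is
why spreading replicas across racks and DCs with NetworkTopologyStrategy
matters so much.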