Jack did a superb job of explaining all of your issues, and his last
sentence seems to fit your needs (and my experience) very well. The only
other point I would add is to ascertain whether the use patterns commend
microservices to abstract away data locality, even if the initial
deployment is a noop to
What is your replication factor?
Any idea how much data has to be processed by the query?
With so few nodes (3) in each DC, even with replication = 1, you are
probably not getting much inter-node data transfer on a local quorum, until
of course you do cross data centers and at least one full c
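For reference, a minimal sketch of how per-DC replication is declared in cqlsh; the
keyspace, table, and DC names here are made up for illustration, not taken from this
thread:

    -- One replication factor per data center
    CREATE KEYSPACE app_ks
      WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'DC1': 3,   -- three replicas in the first data center
        'DC2': 3    -- three replicas in the second data center
      };

    -- A LOCAL_QUORUM read then only involves replicas in the coordinator's own DC
    CONSISTENCY LOCAL_QUORUM;
    SELECT * FROM app_ks.some_table WHERE id = 1;

With replication = 1 per DC there is only one local replica, so a LOCAL_QUORUM read
is effectively a single-node read.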
Why? Then there are two places to maintain, or to get JIRA'ed for a discrepancy.
On Mar 30, 2015 4:46 PM, "Robert Coli" wrote:
> On Mon, Mar 30, 2015 at 1:38 AM, Pierre wrote:
>
>> Does anyone know if there is more complete and up-to-date documentation
>> about the SSTable file structure (data, inde
Fascinating. Both the MySQL front end and this delightful inverted search
solution. Your creativity makes me wonder what other delights your query
solutions might expose!
sent from my mobile
Daemeon C.M. Reiydelle
USA 415.501.0198
London +44.0.20.8144.9872
On Mar 27, 2015 7:08 PM, "Robert Wille"
data node? Because the master node is down, the table cannot be created on the data
> nodes.
>
> Regards,
>
> Peter
>
> *From:* daemeon reiydelle [mailto:daeme...@gmail.com]
> *Sent:* March 17, 2015 13:38
> *To:* user@cassandra.apache.org; Saladi Naidu
>
y” strategy for “Table_test”,
> but the system keyspace is Cassandra's internal keyspace, and its strategy is
> LocalStrategy.
> So my question is: how do I guarantee that “Table_test” is created on all the
> nodes before any R/W operations?
>
> Thanks.
>
> Peter
>
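One way to check whether a schema change such as creating “Table_test” has reached
every node is to compare schema versions across the cluster; a sketch from cqlsh,
assuming the system tables of a 2.x cluster (nodetool describecluster reports the
same information):

    -- Schema version as seen by the node you are connected to
    SELECT schema_version FROM system.local;

    -- Schema versions reported by every other node
    SELECT peer, schema_version FROM system.peers;

When all rows show the same schema_version UUID, the schema has propagated and reads
and writes against the new table should behave the same on any node.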
If you want to guarantee that the data is written to all nodes before the
code returns, then yes, you have to use "consistency all". Otherwise there
is a small risk of outdated data being served if a node stays offline longer
than the hint window.
Somewhat looser options that can assure multiple copi
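For illustration, a minimal sketch of both options from cqlsh (the keyspace and table
names are made up); the same levels can be set per statement in any of the drivers:

    -- Every replica must acknowledge the write before it returns
    CONSISTENCY ALL;
    INSERT INTO app_ks.some_table (id, value) VALUES (1, 'x');

    -- A looser option: only a quorum of the replicas in the local DC
    CONSISTENCY LOCAL_QUORUM;
    INSERT INTO app_ks.some_table (id, value) VALUES (2, 'y');

Keep in mind that CONSISTENCY ALL trades away availability: a single replica being
down fails the write.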
If your cluster is typical, your most critical resource is your network
bandwidth. If that is the case, I would not do the split you are
proposing. One issue with large MTUs is that they are often split at the
switch fabric. Switches are not generally known for having processors that
are idle, so
What is the replication factor? Could you be serving stale data from a node that
was not properly replicated (hint window exceeded while a node was down)?
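A quick way to answer the first question from cqlsh (the keyspace name is made up
for illustration):

    -- Shows the CREATE KEYSPACE statement, including the strategy and per-DC RFs,
    -- e.g. {'class': 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '3'}
    DESCRIBE KEYSPACE app_ks;

The hint window is max_hint_window_in_ms in cassandra.yaml (three hours by default);
hints for a node that stays down longer than that are dropped, and only a repair
brings it fully back in sync.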
On Wed, Mar 4, 2015 at 11:03 AM, Jens Rantil wrote:
> Frens,
>
> What consistency are you querying with? Could be you are simply receiving
> resu
Are you finding a correlation between the shards on the OOM DC1 nodes and
the OOM DC2 nodes? Does your monitoring tool indicate that the DC1 nodes
are using significantly more CPU (and memory) than the nodes that are NOT
failing? I am leading you down the path to suspect that your sharding is
givin
I think you may have a vicious circle of errors: because your data is not
properly replicated to the neighbour, it is not replicating to the
secondary data center (yeah, obvious). I would suspect the GC errors are
(also obviously) the result of a backlog of compactions that take out the
neighbour (
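If a compaction backlog is the suspect, one quick check is to look at recent
compaction activity from cqlsh; a sketch, assuming a 2.x cluster where the
system.compaction_history table is available (nodetool compactionstats shows the
live pending count):

    -- Recent compactions: when they ran and how much data they merged
    SELECT keyspace_name, columnfamily_name, compacted_at, bytes_in, bytes_out
    FROM system.compaction_history
    LIMIT 20;

Long gaps in compacted_at on the struggling nodes, or very large bytes_in values,
would support the theory that compactions are falling behind and feeding the GC
pressure.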