Re: 3PP: C++14/17 standard support

2019-03-29 Thread Dinesh Joshi
The driver is actually maintained by DataStax, not the Apache Cassandra community.
Please look at the documentation here: https://github.com/datastax/cpp-driver
Hopefully someone from their driver team can confirm.
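
For what it's worth, the driver exposes a plain C API (cassandra.h), so compiling
your own code as C++14/17 should mostly be a build-flag question. A minimal
connectivity sketch, assuming the driver is installed and a node listens on
127.0.0.1 (contact point is an assumption for illustration):

// connect_test.cpp -- e.g. g++ -std=c++17 connect_test.cpp -lcassandra
#include <cassandra.h>
#include <cstdio>

int main() {
  CassCluster* cluster = cass_cluster_new();
  CassSession* session = cass_session_new();
  cass_cluster_set_contact_points(cluster, "127.0.0.1");  // assumed contact point

  CassFuture* connect_future = cass_session_connect(session, cluster);
  if (cass_future_error_code(connect_future) == CASS_OK) {
    std::printf("Connected.\n");
  } else {
    // On failure, print the driver's error message.
    const char* message;
    size_t message_length;
    cass_future_error_message(connect_future, &message, &message_length);
    std::fprintf(stderr, "Connect error: %.*s\n", (int)message_length, message);
  }

  cass_future_free(connect_future);
  cass_session_free(session);
  cass_cluster_free(cluster);
  return 0;
}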

Dinesh

> On Mar 29, 2019, at 2:54 AM, Deepti Sharma S wrote:
> 
> Hi Team,
>  
> We are planning to migrate our code base to the C++14/17 standard (GCC C++
> compiler), and we link the Cassandra version 2.6 C++ client libraries in
> our module.
> Would you please confirm whether these libraries support the C++14/17
> standard?


3PP: C++14/17 standard support

2019-03-29 Thread Deepti Sharma S
Hi Team,

We are planning to migrate our code base to the C++14/17 standard (GCC C++
compiler), and we link the Cassandra version 2.6 C++ client libraries in our
module.
Would you please confirm whether these libraries support the C++14/17 standard?
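
To make the question concrete: after the migration we need something like the
following (file and module names hypothetical) to compile and link cleanly:

g++ -std=c++17 -c our_module.cpp -o our_module.o   # our code, built as C++17
g++ our_module.o -lcassandra -o our_module         # linked against the C++ driver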


DEEPTI SHARMA
Specialist
ITIL 2011 Foundation Certified
BDGS, R&D

Ericsson
3rd Floor, ASF Insignia - Block B Kings Canyon,
Gwal Pahari, Gurgaon, Haryana 122 003, India
Phone 0124-6243000
deepti.s.sha...@ericsson.com
www.ericsson.com




Multi-DC replication and hinted handoff

2019-03-29 Thread Jens Fischer
Hi,

I have a Cassandra setup with multiple data centres. The vast majority of 
writes are LOCAL_ONE writes to data centre DC-A. One node (let's call this node 
A1) in DC-A has accumulated large amounts of hint files (~100 GB). In the logs 
of this node I see lots of messages like the following:

INFO  [HintsDispatcher:26] 2019-03-28 01:49:25,217 
HintsDispatchExecutor.java:289 - Finished hinted handoff of file 
db485ac6-8acd-4241-9e21-7a2b540459de-1553419324363-1.hints to endpoint 
/10.10.2.55: db485ac6-8acd-4241-9e21-7a2b540459de

The node 10.10.2.55 is in DC-B; let's call this node B1. There is no indication 
whatsoever that B1 was down: nothing in our monitoring, nothing in the logs of 
B1, nothing in the logs of A1. Are there any other situations where hints to B1 
are stored at A1, other than A1's failure detection marking B1 as down? For 
example, could the reason for the hints be that B1 is overloaded and cannot 
handle the intake from A1? Or that the network connection between DC-A and 
DC-B is too slow?
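
For reference, these are the cassandra.yaml settings I have found that govern
hint creation and dispatch (defaults shown, as I understand them; they may vary
by version):

hinted_handoff_enabled: true
max_hint_window_in_ms: 10800000       # 3 hours: stop collecting hints for a node down longer than this
hinted_handoff_throttle_in_kb: 1024   # throttle per delivery thread
write_request_timeout_in_ms: 2000     # a replica missing this deadline gets a hint, even if never marked down

If that last point is right, a slow or overloaded B1 could accumulate hints on
A1 without ever being seen as down.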

While researching this I also found the following information on Stack Overflow 
from Ben Slater regarding hints and multi-dc replication:

Another factor here is the consistency level you are using - a LOCAL_* 
consistency level will only require writes to be written to the local DC for 
the operation to be considered a success (and hints will be stored for 
replication to the other DC).
(…)
The hints are the records of writes that have been made in one DC that are not 
yet replicated to the other DC (or even nodes within a DC). I think your 
options to avoid them are: (1) write with ALL or QUORUM (not LOCAL_*) 
consistency - this will slow down your writes but will ensure writes go into 
both DCs before the op completes (2) Don't replicate the data to the second DC 
(by setting the replication factor to 0 for the second DC in the keyspace 
definition) (3) Increase the capacity of the second DC so it can keep up with 
the writes (4) Slow down your writes so the second DC can keep up.

Source: https://stackoverflow.com/a/37382726
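
As an aside, if I understand option (2) from the quote correctly, it would look
something like this (keyspace name and replication factors hypothetical):

ALTER KEYSPACE my_keyspace
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC-A': 3, 'DC-B': 0};
-- or simply omit DC-B from the replication map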

Ben's answer reads like hints are used for “normal” (async) replication between 
data centres, i.e. hints could show up without any nodes being down whatsoever. 
This could explain what I am seeing. Does anyone know more about this? Does that 
mean I will see hints even if I disable hinted handoff?
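
If it comes to that, I would test with the nodetool handoff commands (as
documented):

nodetool statushandoff    # is hinted handoff currently enabled?
nodetool disablehandoff   # stop storing new hints
nodetool truncatehints    # delete hints already on disk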

Any pointers or help are greatly appreciated!

Thanks in advance
Jens



Managing directors: Christoph Ostermann (CEO), Oliver Koch, Steffen Schneider, 
Hermann Schweizer.
District court Kempten/Allgäu, registration number 10655, tax number 
127/137/50792, VAT ID DE272208908


Re: Cassandra Possible read/write race condition in LOCAL_ONE?

2019-03-29 Thread Jacques-Henri Berthemet
If you use LOCAL_ONE for both the write and the read and you have RF > 1, the 
two operations can go to different replicas, and the replica serving the read 
may not have the data yet.
Try LOCAL_QUORUM instead; as usual, check your clocks as well.
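
With RF = 3, LOCAL_QUORUM on both sides gives W + R = 2 + 2 > 3, so the read is
guaranteed to reach at least one replica that acknowledged the write. If your
script happens to use the DataStax C++ driver, it is one call per statement (a
sketch; query and schema are hypothetical, and other drivers have an equivalent
option):

// Read at LOCAL_QUORUM so it overlaps a LOCAL_QUORUM write.
CassStatement* statement =
    cass_statement_new("SELECT * FROM table_A WHERE id = ?", 1);
cass_statement_set_consistency(statement, CASS_CONSISTENCY_LOCAL_QUORUM);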

From: Jeff Jirsa 
Reply-To: "user@cassandra.apache.org" 
Date: Thursday 28 March 2019 at 23:29
To: cassandra 
Subject: Re: Cassandra Possible read/write race condition in LOCAL_ONE?

Yes it can race; if you don't want to race, you'd want to use SERIAL or 
LOCAL_SERIAL.
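
In cqlsh terms that would be something like this (table and column names
hypothetical):

-- write through Paxos with a lightweight transaction
INSERT INTO table_A (id, val) VALUES (123, 'x') IF NOT EXISTS;

-- read at serial consistency so in-flight Paxos writes are observed
CONSISTENCY SERIAL;
SELECT val FROM table_A WHERE id = 123;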

On Thu, Mar 28, 2019 at 3:04 PM Richard Xin wrote:
Hi,
Our Cassandra consistency level is currently set to LOCAL_ONE. We have a script 
that does the following:
1) insert one record into table_A
2) select the last inserted record from table_A and do something ...

Steps #1 and #2 run sequentially without pause, and I assume both run in the 
same DC.

We are facing sporadic issues where step #2 does not see the data inserted by 
step #1.
Is it possible that LOCAL_ONE allows a race condition such that #2 might not 
see the data inserted in step #1?

Thanks in advance!
Richard