Anyways, thanks for your reply.
On Thu, Apr 28, 2016 at 1:59 PM, Hannu Kröger wrote:
> Ok, then I don’t understand the problem.
>
> Hannu
Ok, then I don’t understand the problem.
Hannu
> On 28 Apr 2016, at 11:19, Siddharth Verma wrote:
Hi Hannu,
Had the issue been caused by the read, the insert and delete statements
would have been erroneous.
"I saw the stdout from the web-ui of spark, and the query along with true
was printed for both the queries."
The statements were correct as seen on the UI.
Thanks,
Siddharth Verma
On Thu, Apr 28, 2016, Hannu Kröger wrote:
Hi,
could it be a consistency level issue? If you use ONE for both reads and
writes, it may be that you sometimes don't read back what you have written.
See:
https://docs.datastax.com/en/cassandra/2.0/cassandra/dml/dml_config_consistency_c.html
Br,
Hannu
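To illustrate Hannu's point numerically: a read is only guaranteed to see a prior write when the read and write replica counts together exceed the replication factor (R + W > RF). A minimal sketch (the function names here are mine for illustration, not a Cassandra API):

```python
def quorum(rf):
    """Number of replicas required for QUORUM at a given replication factor."""
    return rf // 2 + 1

def read_your_writes(read_replicas, write_replicas, rf):
    """Strong (read-your-writes) consistency holds when the replicas
    touched by the read plus those touched by the write exceed RF,
    so the two sets must overlap in at least one replica."""
    return read_replicas + write_replicas > rf

rf = 3
print(read_your_writes(1, 1, rf))                    # ONE/ONE at RF=3: False
print(read_your_writes(quorum(rf), quorum(rf), rf))  # QUORUM/QUORUM: True
```

With ONE for both operations at RF=3, 1 + 1 = 2 ≤ 3, so the read may hit a replica the write never reached; QUORUM on both sides (2 + 2 > 3) closes that gap.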
2016-04-27 20:41 GMT+03:00 Siddharth Verma:
Edit:
1. The dc2 node has been removed; nodetool status now shows only active nodes.
2. Repair has been run on all nodes.
3. Cassandra has been restarted.
Still, this doesn't solve the problem.
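For reference, the three steps above roughly correspond to the following commands (a sketch only; the host ID and service name are placeholders for your environment):

```shell
# Sketch of the cleanup steps above; <host-id> is the dead node's
# host ID as shown by `nodetool status`.
nodetool removenode <host-id>    # remove the dead dc2 node from the ring
nodetool status                  # confirm only live nodes remain
nodetool repair -pr              # run on each node in turn
sudo service cassandra restart   # restart each node
```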
On Thu, Apr 28, 2016 at 9:00 AM, Siddharth Verma <
verma.siddha...@snapdeal.com> wrote:
Hi, in case this info is useful:
We are using two DCs:
dc1 - 3 nodes
dc2 - 1 node
However, dc2 has been down for 3-4 weeks, and we haven't removed it yet.
The Spark slaves run on the same machines as the Cassandra nodes, with two
slave instances per node. The Spark master is on a separate machine.
If anyone could