Re: After restarting Cassandra, some of the nodes are missing in nodetool status

2019-09-09 Thread Nandakishore Tokala
OK, I will check. Can you please be more specific? For example, what should I
check about the token ranges, and if I find a mismatch, what should I do next?
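
For reference, the token-range check suggested in the reply quoted below can
be run from any live node with standard nodetool and cqlsh commands; this is
only a minimal sketch, and <keyspace> is a placeholder:

  nodetool ring                                       # token-to-endpoint map for the whole cluster
  nodetool describering <keyspace>                    # token ranges with their replica endpoints
  cqlsh -e "SELECT peer, tokens FROM system.peers;"   # tokens this node has stored for its peers

If two hosts ever report the same tokens, that is the kind of overlap to look
for.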

On Mon, Sep 9, 2019 at 3:28 PM Dhanunjaya Tokala 
wrote:

> I would suggest you check the token ranges of all the nodes and see if
> any of them have the same token ranges.
>
> Regards,
> Dhanunjaya Tokala
>
> On Mon, Sep 9, 2019 at 6:25 PM Nandakishore Tokala <
> nandakishore.tok...@gmail.com> wrote:
>
>> @dhanunjayatok...@gmail.com  No, we did not
>> remove those nodes, and it is not exactly the same nodes that go missing
>> on each restart (each restart shows a different set of missing nodes).
>>
>> @jji...@gmail.com  yes, it is a typo; the version is 3.11.0
>>
>>
>> Thanks
>>
>> On Mon, Sep 9, 2019 at 12:38 PM Jeff Jirsa  wrote:
>>
>>> Is 3.1.0 a typo? Is it really 3.1 or 3.11.0?
>>>
>>>
>>> On Mon, Sep 9, 2019 at 10:51 AM Nandakishore Tokala <
>>> nandakishore.tok...@gmail.com> wrote:
>>>
>>>> Hi All,
>>>>
>>>> We are running Apache Cassandra 3.1.0 on AWS across multiple regions,
>>>> with roughly 200 nodes.
>>>> After restarting a node, I see that some of the nodes are missing
>>>> from nodetool status (not UN or DN; they are completely missing). After a
>>>> couple of restarts, they show up again.
>>>>
>>>> Please let me know if I am missing something on the configuration side.
>>>>
>>>>
>>>> Thanks
>>>> Nanda
>>>>
>>>
>>
>> --
>> Thanks & Regards,
>> Nanda Kishore
>>
>

-- 
Thanks & Regards,
Nanda Kishore


Re: After restarting Cassandra, some of the nodes are missing in nodetool status

2019-09-09 Thread Nandakishore Tokala
@dhanunjayatok...@gmail.com  No, we did not
remove those nodes, and it is not exactly the same nodes that go missing
on each restart (each restart shows a different set of missing nodes).

@jji...@gmail.com  yes, it is a typo; the version is 3.11.0


Thanks

On Mon, Sep 9, 2019 at 12:38 PM Jeff Jirsa  wrote:

> Is 3.1.0 a typo? Is it really 3.1 or 3.11.0?
>
>
> On Mon, Sep 9, 2019 at 10:51 AM Nandakishore Tokala <
> nandakishore.tok...@gmail.com> wrote:
>
>> Hi All,
>>
>> We are running Apache Cassandra 3.1.0 on AWS across multiple regions,
>> with roughly 200 nodes.
>> After restarting a node, I see that some of the nodes are missing
>> from nodetool status (not UN or DN; they are completely missing). After a
>> couple of restarts, they show up again.
>>
>> Please let me know if I am missing something on the configuration side.
>>
>>
>> Thanks
>> Nanda
>>
>

-- 
Thanks & Regards,
Nanda Kishore


After restarting Cassandra, some of the nodes are missing in nodetool status

2019-09-09 Thread Nandakishore Tokala
Hi All,

We are running Apache Cassandra 3.1.0 on AWS across multiple regions, with
roughly 200 nodes.
After restarting a node, I see that some of the nodes are missing from
nodetool status (not UN or DN; they are completely missing). After a
couple of restarts, they show up again.

Please let me know if I am missing something on the configuration side.


Thanks
Nanda


Re: Nodetool status is not showing all the nodes on some of the nodes in the cluster

2019-08-09 Thread Nandakishore Tokala
After a rolling restart, we are seeing all the nodes in the cluster, but I am
still seeing "TOKENS: not present" in gossipinfo:

/96.xx.xx.xx
  generation:1565342394
  heartbeat:30093
  LOAD:30083:8.7263163447E10
  SCHEMA:14:684b2142-327b-30a0-bcaf-b906208bd9b4
  DC:8:Xx-EAST-xx
  RACK:10:xx-xx-xxx
  RELEASE_VERSION:4:3.11.0
  INTERNAL_IP:6:96.xx.xx.xx.
  RPC_ADDRESS:3:96.xx.xx.xx
  NET_VERSION:1:11
  HOST_ID:2:05131b7e-4ad4-4daf-b5ad-d8de69b0455f
  RPC_READY:32:true
  TOKENS: not present


Can anyone point me to documentation for each field in gossipinfo? Also, why
would we see "TOKENS: not present" for some of the nodes?
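
For cross-checking, the token information can also be read outside of
gossipinfo; a minimal sketch, assuming nodetool and cqlsh are available on the
node being inspected:

  nodetool info --tokens                              # tokens owned by the local node
  cqlsh -e "SELECT tokens FROM system.local;"         # tokens this node has persisted for itself
  cqlsh -e "SELECT peer, tokens FROM system.peers;"   # tokens this node has recorded for its peers

A node that shows "TOKENS: not present" in another node's gossipinfo output
may still list its own tokens locally, so comparing the two views helps narrow
down where the information is missing.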

Thanks
Nanda



On Sun, Jul 14, 2019 at 11:59 PM Nandakishore Tokala <
nandakishore.tok...@gmail.com> wrote:

> Hi All,
>
> I have two questions.
>
> First one:
> My cluster is a 200-node cluster; we recently joined 4 new datacenters to
> an existing 48-node cluster.
> On some of the nodes, nodetool status is reporting 197 or 198 nodes (the
> count varies from 198 to 202). Has anyone faced the same issue?
>
> Second one:
>
> What does TOKENS represent in the nodetool gossipinfo output?
>
> /XX.87.XX.XX
>   generation:1561848796
>   heartbeat:1374022
>   STATUS:21:NORMAL,-1118741779833099361
>   LOAD:1374005:3.5356469372E10
>   SCHEMA:1364914:62ecdb21-4490-3aab-91ff-d1cf4f716fed
>   DC:8:XX-XX-DC
>   RACK:10:XX-XX-RAC
>   RELEASE_VERSION:4:3.11.0
>   INTERNAL_IP:6:XX.XX.XX.169
>   RPC_ADDRESS:3:XX.XX.XX.169
>   NET_VERSION:1:11
>   HOST_ID:2:1bd319cf-8493-43a6-a98a-d6e7e0d4968d
>   RPC_READY:32:true
>   *TOKENS:20:*
>
>
> *Thanks*
>
>
>
>
>
>
> --
> Thanks & Regards,
> Nanda Kishore
>


-- 
Thanks & Regards,
Nanda Kishore


Nodetool status is not showing all the nodes on some of the nodes in the cluster

2019-07-15 Thread Nandakishore Tokala
Hi All,

I have two questions.

First one:
My cluster is a 200-node cluster; we recently joined 4 new datacenters to
an existing 48-node cluster.
On some of the nodes, nodetool status is reporting 197 or 198 nodes (the
count varies from 198 to 202). Has anyone faced the same issue?

Second one:

What does TOKENS represent in the nodetool gossipinfo output?

/XX.87.XX.XX
  generation:1561848796
  heartbeat:1374022
  STATUS:21:NORMAL,-1118741779833099361
  LOAD:1374005:3.5356469372E10
  SCHEMA:1364914:62ecdb21-4490-3aab-91ff-d1cf4f716fed
  DC:8:XX-XX-DC
  RACK:10:XX-XX-RAC
  RELEASE_VERSION:4:3.11.0
  INTERNAL_IP:6:XX.XX.XX.169
  RPC_ADDRESS:3:XX.XX.XX.169
  NET_VERSION:1:11
  HOST_ID:2:1bd319cf-8493-43a6-a98a-d6e7e0d4968d
  RPC_READY:32:true
  *TOKENS:20:*


*Thanks*






-- 
Thanks & Regards,
Nanda Kishore


Re: Cassandra tools are missing

2019-06-24 Thread Nandakishore Tokala
yum install cassandra-tools
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
comcast-neto-io
| 2.6 kB  00:00:00
comcast-neto-io-x86_64
 | 2.6 kB  00:00:00
No package cassandra-tools available.
Error: Nothing to do

It is giving the error above.
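
One likely cause is that the repository in use (an internal mirror here) does
not carry the cassandra-tools package. A minimal sketch of adding the Apache
Cassandra 3.11 yum repository, assuming the host can reach apache.org; the
URLs below were the documented locations at the time and may differ for
mirrors or later releases:

  # /etc/yum.repos.d/cassandra.repo
  [cassandra]
  name=Apache Cassandra
  baseurl=https://www.apache.org/dist/cassandra/redhat/311x/
  gpgcheck=1
  repo_gpgcheck=1
  gpgkey=https://www.apache.org/dist/cassandra/KEYS

  # then retry the install:
  yum install cassandra-tools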

On Mon, Jun 24, 2019 at 8:11 PM Michael Shuler 
wrote:

> `yum install cassandra-tools`
>
> You should also upgrade to 3.11.4 when you can, there are a number of
> important bug fixes since 3.11.0.
>
> Kind regards,
> Michael
>
> On 6/24/19 10:04 PM, Nandakishore Tokala wrote:
> > Hi,
> >
> > We installed cassandra-3.11.0 on CentOS 7, and I see only the tools below:
> > sstableloader   sstablescrub   sstableupgrade   sstableutil
> > sstableverify
> >
> >
> > I am missing a lot of the other tools. Can anyone help me get the other
> > tools?
> >
> > --
> > Thanks & Regards,
> > Nanda Kishore
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>

-- 
Thanks & Regards,
Nanda Kishore


Cassandra tools are missing

2019-06-24 Thread Nandakishore Tokala
Hi,

We installed cassandra-3.11.0 on CentOS 7, and I see only the tools below:
sstableloader   sstablescrub   sstableupgrade   sstableutil
sstableverify


I am missing a lot of the other tools. Can anyone help me get the other tools?

-- 
Thanks & Regards,
Nanda Kishore


Ideal duration for max_hint_window_in_ms

2019-05-24 Thread Nandakishore Tokala
Hi All,

What is the impact of increasing max_hint_window_in_ms? We are seeing nodes
go down, and sometimes we are not able to bring them back up within 3 hours;
during the subsequent repair we see a lot of data being streamed because the
node was down.

We are planning to increase max_hint_window_in_ms so that there is less
streaming during repair. Is there any drawback to increasing
max_hint_window_in_ms, and what is a reasonable value for it (6 hours,
12 hours, 24 hours)?
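
For reference, the setting lives in cassandra.yaml and is expressed in
milliseconds; a sketch of what raising it could look like (3 hours, i.e.
10800000 ms, is the shipped default, and the 12-hour value below is only an
example). The trade-off is that coordinators hold hints on disk, and replay
more of them, for as long as the window stays open:

  # cassandra.yaml (commonly /etc/cassandra/conf/cassandra.yaml on package installs)
  # default: 10800000 (3 hours); 43200000 = 12 hours, shown only as an example
  max_hint_window_in_ms: 43200000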

Thanks
Nanda


Merging two clusters into one without any downtime

2019-03-25 Thread Nandakishore Tokala
Please let me know the best practices for combining 2 different clusters into
one without any downtime.

Thanks & Regards,
Nanda Kishore


Migrating from DSE 5.1.2 to open source Cassandra

2018-12-04 Thread Nandakishore Tokala
Hi All,

We are migrating from DSE to open source Cassandra. If anyone has recently
migrated, can you please share your experience, the steps you followed, and
the challenges you faced?

We want to migrate to the compatible open source version. Can you give us
the version number (including the minor version) that matches DSE 5.1.2?

DSE 5.1: production-certified 3.10 + enhancements; 3.4 + enhancements
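
A quick way to confirm what a given node actually reports, and therefore what
to match on the open source side; a minimal sketch, assuming nodetool and
cqlsh are available on the node:

  nodetool version                                                    # ReleaseVersion as reported by the node
  cqlsh -e "SELECT release_version, cql_version FROM system.local;"   # version details stored locally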

-- 
Thanks & Regards,
Nanda Kishore


Re: Schema version mismatch

2018-06-07 Thread Nandakishore Tokala
FYI

This is the info I gathered after researching schema versions.

What is the schema version in Cassandra, and for which changes does the
schema version change?

The schema version is a UUID used as a baseline version of the schema. It
tracks whether a schema migration is required. The schema version changes
whenever a schema change is made to the cluster, such as adding or dropping
tables/columns/keyspaces, i.e. CREATE, DROP, ALTER, etc.

Why do we see schema version mismatches at times other than upgrades?

It can be seen when schema changes (DDL statements) are issued concurrently,
when a node is down while schema changes are being made,
when NTP is not in sync,
or when the nodes in the cluster are running mixed Cassandra versions.
Cassandra schema updates assume that schema changes are done one at a time.
If you make multiple changes at the same time, you can cause some nodes to
end up with a different schema than others.

Can we resolve a schema version mismatch without restarting the Cassandra nodes?

When you only have one node in disagreement, the easiest way is simply to
run nodetool resetlocalschema, which will make that node forget its schema
and request it again from the other nodes.

When you have more than one node in disagreement, it becomes more
difficult, because you can't control which node will respond to the schema
request, so one of the nodes with the wrong schema could send it back to
the node where you ran resetlocalschema, and then it will still be in
disagreement with the majority.

In this case, you need to shut down all the nodes that have the incorrect
schema, then start them up one node at a time and after each node comes up,
run nodetool resetlocalschema. Check that the schema is now in agreement
for all the nodes that are currently running, and then repeat the process
for each remaining node that has the incorrect schema.
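
A minimal sketch of checking agreement before and after each of these steps,
assuming nodetool and cqlsh are on the PATH of the node in question:

  nodetool describecluster                              # lists each schema version and the hosts holding it
  cqlsh -e "SELECT schema_version FROM system.local;"   # the version this node currently holds
  nodetool resetlocalschema                             # drop the local schema and re-fetch it from peers

Re-running nodetool describecluster after each resetlocalschema shows whether
the node has converged on the majority version.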

On Thu, Jun 7, 2018 at 4:54 PM, Nandakishore Tokala <
nandakishore.tok...@gmail.com> wrote:

> Thanks Justin
>
> On Thu, Jun 7, 2018 at 4:27 PM, Justin Cameron 
> wrote:
>
>> You may run into schema mismatch issues if you make multiple schema
>> alterations in a very short period of time (e.g. if you're programmatically
>> modifying tables 50x a second). You'll be better off making schema changes
>> in advance. If you need to make dynamic changes then you could check the
>> version to make sure that the previous change has applied before applying
>> the next one.
>>
>> AFAIK the only way to resolve schema mismatch is a rolling restart
>>
>> On Fri, 8 Jun 2018 at 03:03 Nandakishore Tokala <
>> nandakishore.tok...@gmail.com> wrote:
>>
>>> What is the schema version in Cassandra, and for which changes does the
>>> schema version change?
>>>
>>> Why do we see schema version mismatches at times other than upgrades?
>>> Can we resolve a schema version mismatch without restarting the
>>> Cassandra nodes?
>>>
>>>
>>>
>>>
>>> --
>>> Thanks & Regards,
>>> Nanda Kishore
>>>
>> --
>>
>>
>> *Justin Cameron*Senior Software Engineer
>>
>>
>> <https://www.instaclustr.com/>
>>
>>
>> This email has been sent on behalf of Instaclustr Pty. Limited
>> (Australia) and Instaclustr Inc (USA).
>>
>> This email and any attachments may contain confidential and legally
>> privileged information.  If you are not the intended recipient, do not copy
>> or disclose its content, but please reply to this email immediately and
>> highlight the error to the sender and then immediately delete the message.
>>
>
>
>
> --
> Thanks & Regards,
> Nanda Kishore
>



-- 
Thanks & Regards,
Nanda Kishore


Re: Schema version mismatch

2018-06-07 Thread Nandakishore Tokala
Thanks Justin

On Thu, Jun 7, 2018 at 4:27 PM, Justin Cameron 
wrote:

> You may run into schema mismatch issues if you make multiple schema
> alterations in a very short period of time (e.g. if you're programmatically
> modifying tables 50x a second). You'll be better off making schema changes
> in advance. If you need to make dynamic changes then you could check the
> version to make sure that the previous change has applied before applying
> the next one.
>
> AFAIK the only way to resolve schema mismatch is a rolling restart
>
> On Fri, 8 Jun 2018 at 03:03 Nandakishore Tokala <
> nandakishore.tok...@gmail.com> wrote:
>
>> What is the schema version in Cassandra, and for which changes does the
>> schema version change?
>>
>> Why do we see schema version mismatches at times other than upgrades?
>> Can we resolve a schema version mismatch without restarting the
>> Cassandra nodes?
>>
>>
>>
>>
>> --
>> Thanks & Regards,
>> Nanda Kishore
>>
> --
>
>
> *Justin Cameron*Senior Software Engineer
>
>
> <https://www.instaclustr.com/>
>
>
> This email has been sent on behalf of Instaclustr Pty. Limited (Australia)
> and Instaclustr Inc (USA).
>
> This email and any attachments may contain confidential and legally
> privileged information.  If you are not the intended recipient, do not copy
> or disclose its content, but please reply to this email immediately and
> highlight the error to the sender and then immediately delete the message.
>



-- 
Thanks & Regards,
Nanda Kishore


Schema version mismatch

2018-06-07 Thread Nandakishore Tokala
What is the schema version in Cassandra, and for which changes does the
schema version change?

Why do we see schema version mismatches at times other than upgrades?
Can we resolve a schema version mismatch without restarting the Cassandra nodes?



-- 
Thanks & Regards,
Nanda Kishore


Re: Repair fails for unknown reason

2018-01-03 Thread Nandakishore Tokala
Hi Hannu,

I think some of the repair sessions are hanging. Please restart all the nodes
in the cluster and then start the repair again.
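
Once the nodes are back up, a minimal sketch of re-running and watching the
repair; <keyspace> is a placeholder, and the flags match the full,
primary-range repair described in the quoted report:

  nodetool repair --full -pr <keyspace>   # full (non-incremental), primary-range repair
  nodetool compactionstats                # validation (Merkle tree) builds appear here while it runs
  nodetool netstats                       # streaming sessions triggered by the repair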


Thanks
Nanda

On Wed, Jan 3, 2018 at 9:35 AM, Hannu Kröger  wrote:

> Additional notes:
>
> 1) If I run the repair just on those tables, it works fine
> 2) Those tables are empty
>
> Hannu
>
> > On 3 Jan 2018, at 18:23, Hannu Kröger  wrote:
> >
> > Hello,
> >
> > Situation is as follows:
> >
> > Repair was started on node X on this keyspace with --full --pr. Repair
> fails on node Y.
> >
> > Node Y has debug logging on (DEBUG on org.apache.cassandra) and I’m
> looking at the debug.log. I see the following messages related to this
> repair request:
> >
> > ---
> > DEBUG [AntiEntropyStage:1] 2018-01-02 17:52:12,530
> RepairMessageVerbHandler.java:114 - Validating 
> ValidationRequest{gcBefore=1511473932}
> org.apache.cassandra.repair.messages.ValidationRequest@5a17430c
> > DEBUG [ValidationExecutor:4] 2018-01-02 17:52:12,531
> StorageService.java:3321 - Forcing flush on keyspace mykeyspace, CF mytable
> > DEBUG [MemtablePostFlush:54] 2018-01-02 17:52:12,531
> ColumnFamilyStore.java:954 - forceFlush requested but everything is clean
> in mytable
> > ERROR [ValidationExecutor:4] 2018-01-02 17:52:12,532 Validator.java:268
> - Failed creating a merkle tree for [repair 
> #1df000a0-effa-11e7-8361-b7c9edfbfc33
> on mykeyspace/mytable, [(6917529027641081856,-9223372036854775808]]], /
> 123.123.123.123 (see log for details)
> > ---
> >
> > then the same about another table and after that which indicates that
> repair “master” has told to abort basically, right?
> >
> > ---
> > DEBUG [AntiEntropyStage:1] 2018-01-02 17:52:12,563
> RepairMessageVerbHandler.java:142 - Got anticompaction request
> AnticompactionRequest{parentRepairSession=1de949e0-effa-11e7-8361-b7c9edfbfc33}
> org.apache.cassandra.repair.messages.AnticompactionRequest@5dc8be
> > ea
> > ERROR [AntiEntropyStage:1] 2018-01-02 17:52:12,563
> RepairMessageVerbHandler.java:168 - Got error, removing parent repair
> session
> > ERROR [AntiEntropyStage:1] 2018-01-02 17:52:12,564
> CassandraDaemon.java:228 - Exception in thread Thread[AntiEntropyStage:1,5,
> main]
> > java.lang.RuntimeException: java.lang.RuntimeException: Parent repair
> session with id = 1de949e0-effa-11e7-8361-b7c9edfbfc33 has failed.
> >at org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(
> RepairMessageVerbHandler.java:171) ~[apache-cassandra-3.11.0.jar:3.11.0]
> >at 
> > org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> >at 
> > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> ~[na:1.8.0_111]
> >at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ~[na:1.8.0_111]
> >at 
> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> ~[na:1.8.0_111]
> >at 
> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [na:1.8.0_111]
> >at org.apache.cassandra.concurrent.NamedThreadFactory.
> lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
> [apache-cassandra-3.11.0.jar:3.11.0]
> >at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111]
> > Caused by: java.lang.RuntimeException: Parent repair session with id =
> 1de949e0-effa-11e7-8361-b7c9edfbfc33 has failed.
> >at org.apache.cassandra.service.ActiveRepairService.
> getParentRepairSession(ActiveRepairService.java:409)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> >at org.apache.cassandra.service.ActiveRepairService.
> doAntiCompaction(ActiveRepairService.java:444)
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> >at org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(
> RepairMessageVerbHandler.java:143) ~[apache-cassandra-3.11.0.jar:3.11.0]
> >... 7 common frames omitted
> > ---
> >
> > But that is almost all there is in the log, and I don’t really see what
> the original problem here is.
> >
> > Cassandra flushes the table to start building the Merkle tree, and in the
> next millisecond it already fails the repair, but without a proper exception
> or error logging about the problem.
> >
> > Cassandra version is 3.11.0.
> >
> > Any ideas?
> >
> > Cheers,
> > Hannu
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


-- 
Thanks & Regards,
Nanda Kishore