Re: "Not enough replicas available for query" after reboot
Yes, that works with consistency ALL. I restarted one of the Cassandra instances, and it seems to be working again now. I'm not sure what happened.

On 4 February 2016 at 23:48, Peddi, Praveen wrote:
> Are you able to run queries using cqlsh with consistency ALL?
Re: "Not enough replicas available for query" after reboot
Are you able to run queries using cqlsh with consistency ALL?

On Feb 4, 2016, at 6:32 PM, Flavien Charlon wrote:
> No, there was no other change. I did run "apt-get upgrade" before rebooting, but Cassandra has not been upgraded.
Re: "Not enough replicas available for query" after reboot
No, there was no other change. I did run "apt-get upgrade" before rebooting, but Cassandra has not been upgraded.

On 4 February 2016 at 22:48, Bryan Cheng wrote:
> Hey Flavien!
>
> Did your reboot come with any other changes (schema, configuration, topology, version)?
Re: "Not enough replicas available for query" after reboot
Hey Flavien!

Did your reboot come with any other changes (schema, configuration, topology, version)?

On Thu, Feb 4, 2016 at 2:06 PM, Flavien Charlon wrote:
> I'm using the C# driver 2.5.2. I did try to restart the client application, but that didn't make any difference; I still get the same error after the restart.
Re: "Not enough replicas available for query" after reboot
I'm using the C# driver 2.5.2. I did try to restart the client application, but that didn't make any difference; I still get the same error after the restart.

On 4 February 2016 at 21:54, sean_r_dur...@homedepot.com wrote:
> What client are you using?
>
> It is possible that the client saw the nodes down and has kept them marked that way (without retrying). Depending on the client, you may have options to set in RetryPolicy, FailoverPolicy, etc. A bounce of the client will probably fix the problem for now.
RE: "Not enough replicas available for query" after reboot
What client are you using?

It is possible that the client saw the nodes down and has kept them marked that way (without retrying). Depending on the client, you may have options to set in RetryPolicy, FailoverPolicy, etc. A bounce of the client will probably fix the problem for now.

Sean Durity

From: Flavien Charlon [mailto:flavien.char...@gmail.com]
Sent: Thursday, February 04, 2016 4:06 PM
To: user@cassandra.apache.org
Subject: Re: "Not enough replicas available for query" after reboot

> Yes, all three nodes see all three nodes as UN.
>
> Also, connecting from a local Cassandra machine using cqlsh, I can run the same query just fine (with QUORUM consistency level).

The information in this Internet Email is confidential and may be legally privileged. It is intended solely for the addressee. Access to this Email by anyone else is unauthorized. If you are not the intended recipient, any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it, is prohibited and may be unlawful. When addressed to our clients any opinions or advice contained in this Email are subject to the terms and conditions expressed in any applicable governing The Home Depot terms of business or client engagement letter. The Home Depot disclaims all responsibility and liability for the accuracy and content of this attachment and for any damages or losses arising from any inaccuracies, errors, viruses, e.g., worms, trojan horses, etc., or other items of a destructive nature, which may be contained in this attachment and shall not be liable for direct, indirect, consequential or special damages in connection with this e-mail message or its attachment.
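[Editor's note] Sean's point about clients caching host state can be illustrated with a toy model. This is purely illustrative Python, not the actual DataStax C# driver API: a client that latches hosts as "down" during a rolling reboot and never re-probes them will keep failing QUORUM queries even after the cluster is healthy, until a retry/reconnection policy (or a client bounce) resets its view.

```python
# Toy illustration (hypothetical; NOT the DataStax C# driver API):
# a client that marks hosts down and never retries keeps raising
# an "Unavailable"-style error after the nodes come back.

class ToyClient:
    def __init__(self, hosts):
        # Client-side view of host liveness, updated only on failure.
        self.host_up = {h: True for h in hosts}

    def mark_down(self, host):
        self.host_up[host] = False

    def alive(self):
        return sum(self.host_up.values())

    def query_quorum(self, required=2):
        # Refuses the query based on its *cached* view of the cluster.
        if self.alive() < required:
            raise RuntimeError(
                f"Not enough replicas ({required} required but only "
                f"{self.alive()} alive)")
        return "ok"

client = ToyClient(["n1", "n2", "n3"])
client.mark_down("n1")   # hosts seen down during the rolling reboot
client.mark_down("n2")

# The cluster is actually healthy again, but the client never re-probed:
try:
    client.query_quorum()
except RuntimeError as e:
    print(e)

# A reconnection/retry policy -- or bouncing the client -- resets the state:
client.host_up = {h: True for h in client.host_up}
print(client.query_quorum())  # -> ok
```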
Re: "Not enough replicas available for query" after reboot
Yes, all three nodes see all three nodes as UN.

Also, connecting from a local Cassandra machine using cqlsh, I can run the same query just fine (with QUORUM consistency level).

On 4 February 2016 at 21:02, Robert Coli wrote:
> Do *all* nodes see each other as UP/UN?
Re: "Not enough replicas available for query" after reboot
On Thu, Feb 4, 2016 at 12:53 PM, Flavien Charlon wrote:

> My cluster was running fine. I rebooted all three nodes (one by one), and now all nodes are back up and running. "nodetool status" shows UP for all three nodes on all three nodes:
>
> --  Address      Load       Tokens  Owns  Host ID                               Rack
> UN  xx.xx.xx.xx  331.84 GB  1       ?     d3d3a79b-9ca5-43f9-88c4-c3c7f08ca538  RAC1
> UN  xx.xx.xx.xx  317.2 GB   1       ?     de7917ed-0de9-434d-be88-bc91eb4f8713  RAC1
> UN  xx.xx.xx.xx  291.61 GB  1       ?     b489c970-68db-44a7-90c6-be734b41475f  RAC1
>
> However, now the client application fails to run queries on the cluster with:
>
> Cassandra.UnavailableException: Not enough replicas available for query at consistency Quorum (2 required but only 1 alive)

Do *all* nodes see each other as UP/UN?

=Rob
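[Editor's note] The "2 required" figure in the error comes from standard Cassandra quorum arithmetic; a quick sketch (not code from the thread) of how it falls out of the replication factor:

```python
# Standard Cassandra quorum arithmetic (editor's sketch, not from the
# thread): QUORUM requires a majority of replicas, floor(RF / 2) + 1.

def quorum(replication_factor):
    """Replicas that must respond for consistency level QUORUM."""
    return replication_factor // 2 + 1

# With RF=3, QUORUM needs 2 replicas -- matching the error above,
# "2 required but only 1 alive".
print(quorum(3))  # -> 2
print(quorum(5))  # -> 3
```

So a three-node RF=3 cluster tolerates exactly one replica being seen as down; if the client believes two are down, every QUORUM query fails.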
RE: Not enough replicas???
Sweet! That worked! THANK YOU!

Stephen Thompson
Wells Fargo Corporation
Internet Authentication & Fraud Prevention
704.427.3137 (W) | 704.807.3431 (C)

This message may contain confidential and/or privileged information, and is intended for the use of the addressee only. If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose, or take any action based on this message or any information herein. If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message. Thank you for your cooperation.

From: Tyler Hobbs [mailto:ty...@datastax.com]
Sent: Monday, February 04, 2013 1:43 PM
To: user@cassandra.apache.org
Subject: Re: Not enough replicas???

> Sorry, to be more precise, the name of the datacenter is just the string "28", not "DC28".
Re: Not enough replicas???
Sorry, to be more precise, the name of the datacenter is just the string "28", not "DC28".

On Mon, Feb 4, 2013 at 12:07 PM, stephen.m.thomp...@wellsfargo.com wrote:
> Thanks Tyler ... so I created my keyspace to explicitly indicate the datacenter and replication, as follows:
>
> create keyspace KEYSPACE_NAME
>   with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
>   and strategy_options={DC28:2};
>
> And yet I still get the exact same error message:
>
> me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be enough replicas present to handle consistency level.
RE: Not enough replicas???
Thanks Tyler ... so I created my keyspace to explicitly indicate the datacenter and replication, as follows: create keyspace KEYSPACE_NAME with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy' and strategy_options={DC28:2}; And yet I still get the exact same error message: me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be enough replicas present to handle consistency level. It certainly is showing that it took my change: [default@KEYSPACE_NAME] describe; Keyspace: KEYSPACE_NAME: Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy Durable Writes: true Options: [DC28:2] Looking at the ring [root@Config3482VM1 apache-cassandra-1.2.0]# bin/nodetool -h localhost ring Datacenter: 28 == Replicas: 0 Address RackStatus State LoadOwns Token 9187343239835811839 10.28.205.126 205 Up Normal 95.89 KB0.00% -9187343239835811840 10.28.205.126 205 Up Normal 95.89 KB0.00% -9151314442816847872 10.28.205.126 205 Up Normal 95.89 KB0.00% -9115285645797883904 ( HUGE SNIP ) 10.28.205.127 205 Up Normal 84.63 KB0.00% 9115285645797883903 10.28.205.127 205 Up Normal 84.63 KB0.00% 9151314442816847871 10.28.205.127 205 Up Normal 84.63 KB0.00% 9187343239835811839 So both boxes are showing up in the ring. Thank you guys SO MUCH for helping me figure this stuff out. From: Tyler Hobbs [mailto:ty...@datastax.com] Sent: Monday, February 04, 2013 11:17 AM To: user@cassandra.apache.org Subject: Re: Not enough replicas??? RackInferringSnitch determines each node's DC and rack by looking at the second and third octets in its IP address (http://www.datastax.com/docs/1.0/cluster_architecture/replication#rackinferringsnitch), so your nodes are in DC "28". Your replication strategy says to put one replica in DC "datacenter1", but doesn't mention DC "28" at all, so you don't have any replicas for your keyspace. On Mon, Feb 4, 2013 at 7:55 AM, mailto:stephen.m.thomp...@wellsfargo.com>> wrote: Hi Edward - thanks for responding. 
The keyspace could not have been created more simply: create keyspace KEYSPACE_NAME; According to the help, this should have created a replication factor of 1: Keyspace Attributes (all are optional): - placement_strategy: Class used to determine how replicas are distributed among nodes. Defaults to NetworkTopologyStrategy with one datacenter defined with a replication factor of 1 ("[datacenter1:1]"). Steve -Original Message- From: Edward Capriolo [mailto:edlinuxg...@gmail.com<mailto:edlinuxg...@gmail.com>] Sent: Friday, February 01, 2013 5:49 PM To: user@cassandra.apache.org<mailto:user@cassandra.apache.org> Subject: Re: Not enough replicas??? Please include the information on how your keyspace was created. This may indicate you set the replication factor to 3, when you only have 1 node, or some similar condition. On Fri, Feb 1, 2013 at 4:57 PM, mailto:stephen.m.thomp...@wellsfargo.com>> wrote: > I need to offer my profound thanks to this community which has been so > helpful in trying to figure this system out. > > > > I've setup a simple ring with two nodes and I'm trying to insert data > to them. I get failures 100% with this error: > > > > me.prettyprint.hector.api.exceptions.HUnavailableException: : May not > be enough replicas present to handle consistency level. > > > > I'm not doing anything fancy - this is just from setting up the > cluster following the basic instructions from datastax for a simple > one data center cluster. My config is basically the default except > for the changes they discuss (except that I have configured for my IP > addresses... my two boxes are > .126 and .127) > > > > cluster_name: 'MyDemoCluster' > > num_tokens: 256 > > seed_provider: > > - class_name: org.apache.cassandra.locator.SimpleSeedProvider > > parameters: > > - seeds: "10.28.205.126" > > listen_address: 10.28.205.126 > > rpc_address: 0.0.0.0 > > endpoint_snitch: RackInferringSnitch > > > > Nodetool shows both nodes active in the ring, status = up, state = normal. 
> For the CF:
>
>     ColumnFamily: SystemEvent
>       Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
>       Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
Re: Not enough replicas???
RackInferringSnitch determines each node's DC and rack by looking at the second and third octets in its IP address (http://www.datastax.com/docs/1.0/cluster_architecture/replication#rackinferringsnitch), so your nodes are in DC "28". Your replication strategy says to put one replica in DC "datacenter1", but doesn't mention DC "28" at all, so you don't have any replicas for your keyspace.

On Mon, Feb 4, 2013 at 7:55 AM, <stephen.m.thomp...@wellsfargo.com> wrote:
> Hi Edward - thanks for responding. The keyspace could not have been
> created more simply:
>
>     create keyspace KEYSPACE_NAME;
>
> According to the help, this should have created a replication factor of 1:
>
>     Keyspace Attributes (all are optional):
>     - placement_strategy: Class used to determine how replicas
>       are distributed among nodes. Defaults to NetworkTopologyStrategy with
>       one datacenter defined with a replication factor of 1 ("[datacenter1:1]").
>
> Steve
>
> -----Original Message-----
> From: Edward Capriolo [mailto:edlinuxg...@gmail.com]
> Sent: Friday, February 01, 2013 5:49 PM
> To: user@cassandra.apache.org
> Subject: Re: Not enough replicas???
>
> Please include the information on how your keyspace was created. This may
> indicate you set the replication factor to 3, when you only have 1 node, or
> some similar condition.
>
> On Fri, Feb 1, 2013 at 4:57 PM, <stephen.m.thomp...@wellsfargo.com> wrote:
> > I need to offer my profound thanks to this community which has been so
> > helpful in trying to figure this system out.
> >
> > I've setup a simple ring with two nodes and I'm trying to insert data
> > to them. I get failures 100% with this error:
> >
> >     me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
> >     be enough replicas present to handle consistency level.
> >
> > I'm not doing anything fancy - this is just from setting up the
> > cluster following the basic instructions from datastax for a simple
> > one data center cluster. My config is basically the default except
> > for the changes they discuss (except that I have configured for my IP
> > addresses... my two boxes are .126 and .127)
> >
> >     cluster_name: 'MyDemoCluster'
> >     num_tokens: 256
> >     seed_provider:
> >       - class_name: org.apache.cassandra.locator.SimpleSeedProvider
> >         parameters:
> >           - seeds: "10.28.205.126"
> >     listen_address: 10.28.205.126
> >     rpc_address: 0.0.0.0
> >     endpoint_snitch: RackInferringSnitch
> >
> > Nodetool shows both nodes active in the ring, status = up, state = normal.
> >
> > For the CF:
> >
> >     ColumnFamily: SystemEvent
> >       Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
> >       Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
> >       Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
> >       GC grace seconds: 864000
> >       Compaction min/max thresholds: 4/32
> >       Read repair chance: 0.1
> >       DC Local Read repair chance: 0.0
> >       Replicate on write: true
> >       Caching: KEYS_ONLY
> >       Bloom Filter FP chance: default
> >       Built indexes: [SystemEvent.IdxName]
> >       Column Metadata:
> >         Column Name: eventTimeStamp
> >           Validation Class: org.apache.cassandra.db.marshal.DateType
> >         Column Name: name
> >           Validation Class: org.apache.cassandra.db.marshal.UTF8Type
> >           Index Name: IdxName
> >           Index Type: KEYS
> >       Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
> >       Compression Options:
> >         sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
> >
> > Any ideas?

--
Tyler Hobbs
DataStax <http://datastax.com/>
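Tyler's description of RackInferringSnitch can be sketched in a few lines. This is an illustrative model of the octet rule only, not Cassandra's actual Java implementation:

```python
def infer_dc_and_rack(ip):
    """Model of RackInferringSnitch: for an address a.B.C.d,
    the DC name is octet B and the rack name is octet C."""
    octets = ip.split(".")
    return octets[1], octets[2]

# Both nodes from the thread land in DC "28", rack "205",
# which is why a strategy keyed "datacenter1" (or "DC28") matches nothing:
print(infer_dc_and_rack("10.28.205.126"))  # -> ('28', '205')
print(infer_dc_and_rack("10.28.205.127"))  # -> ('28', '205')
```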
RE: Not enough replicas???
Hi Edward - thanks for responding. The keyspace could not have been created more simply:

    create keyspace KEYSPACE_NAME;

According to the help, this should have created a replication factor of 1:

    Keyspace Attributes (all are optional):
    - placement_strategy: Class used to determine how replicas
      are distributed among nodes. Defaults to NetworkTopologyStrategy with
      one datacenter defined with a replication factor of 1 ("[datacenter1:1]").

Steve

-----Original Message-----
From: Edward Capriolo [mailto:edlinuxg...@gmail.com]
Sent: Friday, February 01, 2013 5:49 PM
To: user@cassandra.apache.org
Subject: Re: Not enough replicas???

Please include the information on how your keyspace was created. This may indicate you set the replication factor to 3, when you only have 1 node, or some similar condition.

On Fri, Feb 1, 2013 at 4:57 PM, <stephen.m.thomp...@wellsfargo.com> wrote:
> I need to offer my profound thanks to this community which has been so
> helpful in trying to figure this system out.
>
> I've setup a simple ring with two nodes and I'm trying to insert data
> to them. I get failures 100% with this error:
>
>     me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
>     be enough replicas present to handle consistency level.
>
> I'm not doing anything fancy - this is just from setting up the
> cluster following the basic instructions from datastax for a simple
> one data center cluster. My config is basically the default except
> for the changes they discuss (except that I have configured for my IP
> addresses... my two boxes are .126 and .127)
>
>     cluster_name: 'MyDemoCluster'
>     num_tokens: 256
>     seed_provider:
>       - class_name: org.apache.cassandra.locator.SimpleSeedProvider
>         parameters:
>           - seeds: "10.28.205.126"
>     listen_address: 10.28.205.126
>     rpc_address: 0.0.0.0
>     endpoint_snitch: RackInferringSnitch
>
> Nodetool shows both nodes active in the ring, status = up, state = normal.
>
> For the CF:
>
>     ColumnFamily: SystemEvent
>       Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
>       Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
>       Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
>       GC grace seconds: 864000
>       Compaction min/max thresholds: 4/32
>       Read repair chance: 0.1
>       DC Local Read repair chance: 0.0
>       Replicate on write: true
>       Caching: KEYS_ONLY
>       Bloom Filter FP chance: default
>       Built indexes: [SystemEvent.IdxName]
>       Column Metadata:
>         Column Name: eventTimeStamp
>           Validation Class: org.apache.cassandra.db.marshal.DateType
>         Column Name: name
>           Validation Class: org.apache.cassandra.db.marshal.UTF8Type
>           Index Name: IdxName
>           Index Type: KEYS
>       Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
>       Compression Options:
>         sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
>
> Any ideas?
Re: Not enough replicas???
Please include the information on how your keyspace was created. This may indicate you set the replication factor to 3, when you only have 1 node, or some similar condition.

On Fri, Feb 1, 2013 at 4:57 PM, wrote:
> I need to offer my profound thanks to this community which has been so
> helpful in trying to figure this system out.
>
> I've setup a simple ring with two nodes and I'm trying to insert data to
> them. I get failures 100% with this error:
>
>     me.prettyprint.hector.api.exceptions.HUnavailableException: : May not be
>     enough replicas present to handle consistency level.
>
> I'm not doing anything fancy - this is just from setting up the cluster
> following the basic instructions from datastax for a simple one data center
> cluster. My config is basically the default except for the changes they
> discuss (except that I have configured for my IP addresses... my two boxes are
> .126 and .127)
>
>     cluster_name: 'MyDemoCluster'
>     num_tokens: 256
>     seed_provider:
>       - class_name: org.apache.cassandra.locator.SimpleSeedProvider
>         parameters:
>           - seeds: "10.28.205.126"
>     listen_address: 10.28.205.126
>     rpc_address: 0.0.0.0
>     endpoint_snitch: RackInferringSnitch
>
> Nodetool shows both nodes active in the ring, status = up, state = normal.
>
> For the CF:
>
>     ColumnFamily: SystemEvent
>       Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
>       Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
>       Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
>       GC grace seconds: 864000
>       Compaction min/max thresholds: 4/32
>       Read repair chance: 0.1
>       DC Local Read repair chance: 0.0
>       Replicate on write: true
>       Caching: KEYS_ONLY
>       Bloom Filter FP chance: default
>       Built indexes: [SystemEvent.IdxName]
>       Column Metadata:
>         Column Name: eventTimeStamp
>           Validation Class: org.apache.cassandra.db.marshal.DateType
>         Column Name: name
>           Validation Class: org.apache.cassandra.db.marshal.UTF8Type
>           Index Name: IdxName
>           Index Type: KEYS
>       Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
>       Compression Options:
>         sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
>
> Any ideas?
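The HUnavailableException seen throughout this thread follows from a simple precondition: before performing an operation, the coordinator must see enough live replicas to satisfy the requested consistency level. A rough model of that check (illustrative only, not Hector's or Cassandra's actual code; QUORUM modeled as floor(rf/2)+1):

```python
def replicas_required(consistency, rf):
    """Simplified mapping of consistency level to required live replicas."""
    if consistency == "ONE":
        return 1
    if consistency == "QUORUM":
        return rf // 2 + 1
    if consistency == "ALL":
        return rf
    raise ValueError(f"unknown consistency level: {consistency}")

def check_available(consistency, rf, live_replicas):
    """Raise if too few replicas are alive, mirroring the Unavailable error."""
    needed = replicas_required(consistency, rf)
    if live_replicas < needed:
        raise RuntimeError(
            "May not be enough replicas present to handle consistency level "
            f"(need {needed}, have {live_replicas})")

# With the misnamed DC in this thread, the keyspace effectively has 0 replicas,
# so even consistency ONE cannot be satisfied and every request fails:
check_available("QUORUM", rf=3, live_replicas=2)  # a quorum of 3 is 2 -> OK
```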