Re: Adding New Nodes/Data Center to an existing Cluster.

2015-09-01 Thread Neha Trivedi
Sachin,
I hope you are not using Cassandra 2.2 in production?
regards
Neha

On Tue, Sep 1, 2015 at 11:20 PM, Sebastian Estevez <
sebastian.este...@datastax.com> wrote:

> DSE 4.7 ships with Cassandra 2.1 for stability.
>
> All the best,
>
>
>
> Sebastián Estévez
>
> Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com
>
>
>
> On Tue, Sep 1, 2015 at 12:53 PM, Sachin Nikam <skni...@gmail.com> wrote:
>
>> @Neha,
>> We are using DSE 4.7 & Cassandra 2.2
>>
>> @Alain,
>> I will check with our OPS team about repair vs rebuild and get back to
>> you.
>> Regards
>> Sachin
>>
>> On Tue, Sep 1, 2015 at 5:59 AM, Alain RODRIGUEZ <arodr...@gmail.com>
>> wrote:
>>
>>> Hi Sachin,
>>>
>>> You are speaking about a repair, but the proper command to do this is
>>> "rebuild".
>>>
>>> Did you try adding your DC this way:
>>> http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_dc_to_cluster_t.html
>>>  ?
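>>>
>>> In short, a minimal sketch of that documented flow (keyspace and DC
>>> names here are placeholders):
>>>
>>>     ALTER KEYSPACE myks WITH REPLICATION =
>>>       {'class' : 'NetworkTopologyStrategy', 'DC1' : 3, 'DC2' : 3};
>>>
>>> then, on each node of the new DC only:
>>>
>>>     nodetool rebuild DC1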
>>>
>>>
>>> 2015-09-01 5:32 GMT+02:00 Neha Trivedi <nehajtriv...@gmail.com>:
>>>
>>>> Hi,
>>>> Can you specify which version of Cassandra you are using?
>>>> Can you provide the error stack?
>>>>
>>>> regards
>>>> Neha
>>>>
>>>> On Tue, Sep 1, 2015 at 2:56 AM, Sebastian Estevez <
>>>> sebastian.este...@datastax.com> wrote:
>>>>
>>>>> or https://issues.apache.org/jira/browse/CASSANDRA-8611 perhaps
>>>>>
>>>>> All the best,
>>>>>
>>>>>
>>>>>
>>>>> Sebastián Estévez
>>>>>
>>>>> Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Aug 31, 2015 at 5:24 PM, Eric Evans <eev...@wikimedia.org>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>> On Mon, Aug 31, 2015 at 1:32 PM, Sachin Nikam <skni...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> When we add 3 more nodes in Data Center B, the repair tool starts
>>>>>>> syncing the data between 2 data centers and then gives up after ~2 days.
>>>>>>>
>>>>>>> Has anybody run into a similar issue before? If so, what is the
>>>>>>> solution?
>>>>>>>
>>>>>>
>>>>>> https://issues.apache.org/jira/browse/CASSANDRA-9624, maybe?
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Eric Evans
>>>>>> eev...@wikimedia.org
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>


Re: Adding New Nodes/Data Center to an existing Cluster.

2015-08-31 Thread Neha Trivedi
Hi,
Can you specify which version of Cassandra you are using?
Can you provide the error stack?

regards
Neha

On Tue, Sep 1, 2015 at 2:56 AM, Sebastian Estevez <
sebastian.este...@datastax.com> wrote:

> or https://issues.apache.org/jira/browse/CASSANDRA-8611 perhaps
>
> All the best,
>
>
>
> Sebastián Estévez
>
> Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com
>
>
>
> On Mon, Aug 31, 2015 at 5:24 PM, Eric Evans  wrote:
>
>>
>> On Mon, Aug 31, 2015 at 1:32 PM, Sachin Nikam  wrote:
>>
>>> When we add 3 more nodes in Data Center B, the repair tool starts
>>> syncing the data between 2 data centers and then gives up after ~2 days.
>>>
>>> Has anybody run into a similar issue before? If so, what is the solution?
>>>
>>
>> https://issues.apache.org/jira/browse/CASSANDRA-9624, maybe?
>>
>>
>> --
>> Eric Evans
>> eev...@wikimedia.org
>>
>
>


Re: Error while adding a new node.

2015-07-02 Thread Neha Trivedi
any help?

On Thu, Jul 2, 2015 at 6:18 AM, Neha Trivedi nehajtriv...@gmail.com wrote:

 also:
 root@cas03:~# sudo service cassandra start
 root@cas03:~# lsof -n | grep java | wc -l
 5315
 root@cas03:~# lsof -n | grep java | wc -l
 977317
 root@cas03:~# lsof -n | grep java | wc -l
 880240
 root@cas03:~# lsof -n | grep java | wc -l
 882402


 On Wed, Jul 1, 2015 at 6:31 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 One of the column families has an SSTable count as follows:
 SSTable count: 98506

 Can it be because of the 2.1.3 version of Cassandra?
 I found this : https://issues.apache.org/jira/browse/CASSANDRA-8964

 regards
 Neha


 On Wed, Jul 1, 2015 at 5:40 PM, Jason Wee peich...@gmail.com wrote:

 nodetool cfstats?

 On Wed, Jul 1, 2015 at 8:08 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Hey..
 nodetool compactionstats
 pending tasks: 0

 no pending tasks.

 Don't have OpsCenter. How do I monitor SSTables?


 On Wed, Jul 1, 2015 at 4:28 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 You also might want to check if you have compactions pending
 (Opscenter / nodetool compactionstats).

 Also you can monitor the number of sstables.

 C*heers

 Alain

 2015-07-01 11:53 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Thanks, I will check it out.
 I increased the ulimit to 10, but I am still getting the same error,
 just after a while.
 regards
 Neha


 On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

 C*heers,

 Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Arun,
 I am logging on to Server as root and running (sudo service
 cassandra start)

 regards
 Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi 
 nehajtriv...@gmail.com wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the
 ulimit for the user.

  If you are starting the cassandra daemon using user cassandra,
 increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com
 wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the
 following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Index.db (Too many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different versions of Cassandra (2.1.3 and
 2.1.7)?
 
  regards
  N
 
Re: [MASSMAIL]Re: Error while adding a new node.

2015-07-02 Thread Neha Trivedi
Thanks for the reply!
I will update to 2.1.7 and check it out.

On Thu, Jul 2, 2015 at 6:59 PM, Carlos Rolo r...@pythian.com wrote:

 Marcos, you should also avoid 2.1.5 and 2.1.6 because of
 https://issues.apache.org/jira/browse/CASSANDRA-9549

 I know (and I often don't recommend the latest versions; I'm still recommending
 the 2.0.x series unless someone is already on 2.1.x), but given the above bug,
 2.1.7 is the best option.

 Regards,

 Carlos Juzarte Rolo
 Cassandra Consultant

 Pythian - Love your data

 rolo@pythian | Twitter: cjrolo | LinkedIn: linkedin.com/in/carlosjuzarterolo
 Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
 www.pythian.com

 On Thu, Jul 2, 2015 at 3:20 PM, Marcos Ortiz mlor...@uci.cu wrote:

  The recommended version to use is 2.1.5 because, as you said, Carlos,
 2.1.6 and 2.1.7 are too new to be considered
 stable.

 On 02/07/15 08:55, Carlos Rolo wrote:

  Indeed you should upgrade to 2.1.7.

  And then report if you are still facing problems. Versions up to 2.1.5
 (in the 2.1.x series) are not considered stable.

Regards,

  Carlos Juzarte Rolo
 Cassandra Consultant

 Pythian - Love your data

  rolo@pythian | Twitter: cjrolo | LinkedIn: linkedin.com/in/carlosjuzarterolo
 Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
 www.pythian.com

 On Thu, Jul 2, 2015 at 11:40 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 any help?

 On Thu, Jul 2, 2015 at 6:18 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 also:
 root@cas03:~# sudo service cassandra start
 root@cas03:~# lsof -n | grep java | wc -l
 5315
 root@cas03:~# lsof -n | grep java | wc -l
 977317
 root@cas03:~# lsof -n | grep java | wc -l
 880240
 root@cas03:~# lsof -n | grep java | wc -l
 882402


 On Wed, Jul 1, 2015 at 6:31 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

   One of the column families has an SSTable count as follows:
  SSTable count: 98506

  Can it be because of the 2.1.3 version of Cassandra?
  I found this : https://issues.apache.org/jira/browse/CASSANDRA-8964

  regards
  Neha


 On Wed, Jul 1, 2015 at 5:40 PM, Jason Wee peich...@gmail.com wrote:

 nodetool cfstats?

 On Wed, Jul 1, 2015 at 8:08 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

  Hey..
 nodetool compactionstats
 pending tasks: 0

  no pending tasks.

  Don't have OpsCenter. How do I monitor SSTables?


 On Wed, Jul 1, 2015 at 4:28 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 You also might want to check if you have compactions pending
 (Opscenter / nodetool compactionstats).

  Also you can monitor the number of sstables.

  C*heers

  Alain

 2015-07-01 11:53 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

   Thanks, I will check it out.
  I increased the ulimit to 10, but I am still getting the same
 error, just after a while.
  regards
  Neha


 On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ 
 arodr...@gmail.com wrote:

  Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

  C*heers,

  Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

   Arun,
  I am logging on to Server as root and running (sudo service
 cassandra start)

  regards
  Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi 
 nehajtriv...@gmail.com wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com
 wrote:

 Looks like you have too many open files issue. Increase the
 ulimit for the user.

  If you are starting the cassandra daemon using user
 cassandra, increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi 
 nehajtriv...@gmail.com wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get
 the following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be 
 unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Index.db (Too many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different versions of Cassandra (2.1.3
 and 2.1.7)?
 
  regards
  N
 
 --




 --
 Marcos Ortiz http://about.me/marcosortiz, Sr. Product Manager (Data
 Infrastructure) at UCI
 @marcosluis2186 http://twitter.com/marcosluis2186


Re: Error while adding a new node.

2015-07-01 Thread Neha Trivedi
One of the column families has an SSTable count as follows:
SSTable count: 98506

Can it be because of the 2.1.3 version of Cassandra?
I found this : https://issues.apache.org/jira/browse/CASSANDRA-8964

regards
Neha


On Wed, Jul 1, 2015 at 5:40 PM, Jason Wee peich...@gmail.com wrote:

 nodetool cfstats?

 On Wed, Jul 1, 2015 at 8:08 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Hey..
 nodetool compactionstats
 pending tasks: 0

 no pending tasks.

  Don't have OpsCenter. How do I monitor SSTables?


 On Wed, Jul 1, 2015 at 4:28 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 You also might want to check if you have compactions pending (Opscenter
 / nodetool compactionstats).

 Also you can monitor the number of sstables.

 C*heers

 Alain

 2015-07-01 11:53 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

  Thanks, I will check it out.
  I increased the ulimit to 10, but I am still getting the same error, just
  after a while.
 regards
 Neha


 On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

 C*heers,

 Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Arun,
 I am logging on to Server as root and running (sudo service cassandra
 start)

 regards
 Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi nehajtriv...@gmail.com
  wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the ulimit
 for the user.

  If you are starting the cassandra daemon using user cassandra,
 increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com
 wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the
 following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db
 (Too many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
   Is it because of the different versions of Cassandra (2.1.3 and
  2.1.7)?
 
  regards
  N
 

Re: Error while adding a new node.

2015-07-01 Thread Neha Trivedi
Hey..
nodetool compactionstats
pending tasks: 0

no pending tasks.

Don't have OpsCenter. How do I monitor SSTables?
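
Without OpsCenter, the per-table SSTable count can be read straight from
nodetool (the keyspace name is a placeholder; depending on the version the
block label is "Column Family:" or "Table:"):

    nodetool cfstats mykeyspace | grep -E 'Column Family:|Table:|SSTable count'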


On Wed, Jul 1, 2015 at 4:28 PM, Alain RODRIGUEZ arodr...@gmail.com wrote:

 You also might want to check if you have compactions pending (Opscenter /
 nodetool compactionstats).

 Also you can monitor the number of sstables.

 C*heers

 Alain

 2015-07-01 11:53 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

  Thanks, I will check it out.
  I increased the ulimit to 10, but I am still getting the same error, just
  after a while.
 regards
 Neha


 On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

 C*heers,

 Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Arun,
 I am logging on to Server as root and running (sudo service cassandra
 start)

 regards
 Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the ulimit
 for the user.

  If you are starting the cassandra daemon using user cassandra,
 increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com
 wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the
 following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db
 (Too many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different versions of Cassandra (2.1.3 and
  2.1.7)?
 
  regards
  N
 

Re: Error while adding a new node.

2015-07-01 Thread Neha Trivedi
Thanks, I will check it out.
I increased the ulimit to 10, but I am still getting the same error, just
after a while.
regards
Neha


On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ arodr...@gmail.com wrote:

 Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

 C*heers,

 Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Arun,
 I am logging on to Server as root and running (sudo service cassandra
 start)

 regards
 Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the ulimit for
 the user.

  If you are starting the cassandra daemon using user cassandra,
 increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com
 wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the
 following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db (Too
 many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different versions of Cassandra (2.1.3 and 2.1.7)?
 
  regards
  N
 

Re: Error while adding a new node.

2015-07-01 Thread Neha Trivedi
also:
root@cas03:~# sudo service cassandra start
root@cas03:~# lsof -n | grep java | wc -l
5315
root@cas03:~# lsof -n | grep java | wc -l
977317
root@cas03:~# lsof -n | grep java | wc -l
880240
root@cas03:~# lsof -n | grep java | wc -l
882402
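
(A cross-check on those numbers: lsof output also includes memory-mapped
files and other non-descriptor entries, so grep | wc -l can overcount. The
kernel's own view of the running JVM, assuming a single Cassandra process,
is:)

    pid=$(pgrep -f CassandraDaemon)
    grep 'Max open files' /proc/$pid/limits
    ls /proc/$pid/fd | wc -l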


On Wed, Jul 1, 2015 at 6:31 PM, Neha Trivedi nehajtriv...@gmail.com wrote:

  One of the column families has an SSTable count as follows:
  SSTable count: 98506

  Can it be because of the 2.1.3 version of Cassandra?
 I found this : https://issues.apache.org/jira/browse/CASSANDRA-8964

 regards
 Neha


 On Wed, Jul 1, 2015 at 5:40 PM, Jason Wee peich...@gmail.com wrote:

 nodetool cfstats?

 On Wed, Jul 1, 2015 at 8:08 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Hey..
 nodetool compactionstats
 pending tasks: 0

 no pending tasks.

  Don't have OpsCenter. How do I monitor SSTables?


 On Wed, Jul 1, 2015 at 4:28 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 You also might want to check if you have compactions pending (Opscenter
 / nodetool compactionstats).

 Also you can monitor the number of sstables.

 C*heers

 Alain

 2015-07-01 11:53 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

  Thanks, I will check it out.
  I increased the ulimit to 10, but I am still getting the same error, just
  after a while.
 regards
 Neha


 On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ arodr...@gmail.com
 wrote:

 Just check the process owner to be sure (top, htop, ps, ...)


 http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installRecommendSettings.html#reference_ds_sxl_gf3_2k__user-resource-limits

 C*heers,

 Alain

 2015-07-01 7:33 GMT+02:00 Neha Trivedi nehajtriv...@gmail.com:

 Arun,
 I am logging on to Server as root and running (sudo service
 cassandra start)

 regards
 Neha

 On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi 
 nehajtriv...@gmail.com wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the ulimit
 for the user.

  If you are starting the cassandra daemon using user cassandra,
 increase the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com
 wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the
 following error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db
 (Too many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different versions of Cassandra (2.1.3 and
  2.1.7)?
 
  regards
  N
 

Re: Error while adding a new node.

2015-06-30 Thread Neha Trivedi
Thanks, Arun! I will try and get back!

On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the ulimit for the
 user.

  If you are starting the cassandra daemon using user cassandra, increase
 the ulimit for that user.
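
 For package installs the usual file is /etc/security/limits.d/cassandra.conf;
 the DataStax recommended production settings (per the
 installRecommendSettings page linked elsewhere in this thread) are along
 these lines:

     cassandra - memlock unlimited
     cassandra - nofile  100000
     cassandra - nproc   32768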


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the following
 error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516 JVMStabilityInspector.java:94
 - JVM state determined to be unstable.  Exiting forcefully due to:
  java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db (Too
 many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different versions of Cassandra (2.1.3 and 2.1.7)?
 
  regards
  N
 
Error while adding a new node.

2015-06-30 Thread Neha Trivedi
Hello,
I have a 4 node cluster with SimpleSnitch.
Cassandra :  Cassandra 2.1.3

I am trying to add a new node (cassandra 2.1.7) and I get the following
error.

ERROR [STREAM-IN-] 2015-06-30 05:13:48,516 JVMStabilityInspector.java:94 -
JVM state determined to be unstable.  Exiting forcefully due to:
java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db (Too many
open files)

I increased the MAX_HEAP_SIZE then I get :
ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
CassandraDaemon.java:223 - Exception in thread
Thread[CompactionExecutor:9,1,main]
java.lang.RuntimeException: java.io.FileNotFoundException:
/var/lib/cassandra/data/-Data.db (Too many open files)
at
org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
~[apache-cassandra-2.1.7.jar:2.1.7]

Is it because of the different versions of Cassandra (2.1.3 and 2.1.7)?

regards
N


Re: Error while adding a new node.

2015-06-30 Thread Neha Trivedi
Arun,
I am logging on to the server as root and running (sudo service cassandra start).

regards
Neha

On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi nehajtriv...@gmail.com
wrote:

 Thanks Arun ! I will try and get back !

 On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:

 Looks like you have too many open files issue. Increase the ulimit for
 the user.

  If you are starting the cassandra daemon using user cassandra, increase
 the ulimit for that user.


  On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com wrote:
 
  Hello,
  I have a 4 node cluster with SimpleSnitch.
  Cassandra :  Cassandra 2.1.3
 
  I am trying to add a new node (cassandra 2.1.7) and I get the following
 error.
 
  ERROR [STREAM-IN-] 2015-06-30 05:13:48,516
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.
 Exiting forcefully due to:
  java.io.FileNotFoundException: /var/lib/cassandra/data/-Index.db (Too
 many open files)
 
  I increased the MAX_HEAP_SIZE then I get :
  ERROR [CompactionExecutor:9] 2015-06-30 23:31:44,792
 CassandraDaemon.java:223 - Exception in thread
 Thread[CompactionExecutor:9,1,main]
  java.lang.RuntimeException: java.io.FileNotFoundException:
 /var/lib/cassandra/data/-Data.db (Too many open files)
  at
 org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52)
 ~[apache-cassandra-2.1.7.jar:2.1.7]
 
  Is it because of the different versions of Cassandra (2.1.3 and 2.1.7)?
 
  regards
  N
 
Data Modeling for 2.1 Cassandra

2015-04-30 Thread Neha Trivedi
Helle all,
I was wondering which of the three data models described below is better in
terms of performance. Option #3 seems good.

*#1. log with 3 indexes*

CREATE TABLE log (
    id int PRIMARY KEY,
    first_name set<text>,
    last_name set<text>,
    dob set<text>
);
CREATE INDEX log_firstname_index ON test.log (first_name);
CREATE INDEX log_lastname_index ON test.log (last_name);
CREATE INDEX log_dob_index ON test.log (dob);
INSERT INTO log (id, first_name, last_name) VALUES (3, {'rob'}, {'abbate'});
INSERT INTO log (id, first_name, last_name) VALUES (4, {'neha'}, {'dave'});
SELECT id FROM log WHERE first_name CONTAINS 'rob';
SELECT id FROM log WHERE last_name CONTAINS 'abbate';

*#2. log with UDT*

CREATE TYPE test.user_profile (
    first_name text,
    last_name text,
    dob text
);

CREATE TABLE test.log_udt1 (
    id int PRIMARY KEY,
    userinfo set<frozen<user_profile>>
);
CREATE INDEX log_udt1__index ON test.log_udt1 (userinfo);
INSERT INTO log_udt1 (id, userinfo) VALUES (1, {{first_name: 'rob', last_name: 'abb', dob: 'dob'}});
INSERT INTO log_udt1 (id, userinfo) VALUES (2, {{first_name: 'neha', last_name: 'dave', dob: 'dob1'}});

SELECT * FROM log_udt1 WHERE userinfo CONTAINS {first_name: 'rob', last_name: 'abb', dob: 'dob'};

This will not support a per-field query like: select id from log_udt1 where
first_name contains 'rob'; (individual UDT fields cannot be indexed here).

*#3. log with a different table for each field*


CREATE TABLE log_fname (
    id int PRIMARY KEY,
    first_name set<text>
);
CREATE INDEX log_firstname_index ON test.log_fname (first_name);
CREATE TABLE log_lname (
    id int PRIMARY KEY,
    last_name set<text>
);
CREATE INDEX log_lastname_index ON test.log_lname (last_name);
CREATE TABLE log_dob (
    id int PRIMARY KEY,
    dob set<text>
);
CREATE INDEX log_dob_index ON test.log_dob (dob);

INSERT INTO log_fname (id, first_name) VALUES (3, {'rob'});
INSERT INTO log_lname (id, last_name) VALUES (4, {'dave'});
SELECT id FROM log_fname WHERE first_name CONTAINS 'rob';
SELECT id FROM log_lname WHERE last_name CONTAINS 'abbate';


Regards
Neha


Re: Best Practice to add a node in a Cluster

2015-04-28 Thread Neha Trivedi
Interesting, Eric!
Not sure if this would be allowed: alter the keyspace to RF=3 and then add a
node.

On Tue, Apr 28, 2015 at 8:54 PM, Eric Stevens migh...@gmail.com wrote:

 I would double check in a test cluster (or with a tool like CCM to
 set up a local throwaway cluster), but for this *specific* use case
 (going from RF == node count to RF == node count at a higher number) you should
 be able to have a simpler path.  Set RF=3 before you add your new node,
 then add the new node.  It will bootstrap all data from the other two
 nodes, then your job is done.

 You shouldn't have to run repair (which you normally have to do after
 increasing RF in order to make sure all nodes have their data - the nodes
 already have all their data), and you shouldn't have to run cleanup (which
 you normally have to do after increasing node count to instruct the old
 nodes to forget data for which they are no longer responsible).  The data
 responsibility hasn't changed for any node, all nodes are still responsible
 for all data.

 On Mon, Apr 27, 2015 at 9:19 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

  Thanks, Arun!

 On Tue, Apr 28, 2015 at 9:44 AM, arun sirimalla arunsi...@gmail.com
 wrote:

 Hi Neha,


 After you add the node to the cluster, run nodetool cleanup on all nodes.
 Next running repair on each node will replicate the data. Make sure you
 run the repair on one node at a time, because repair is an expensive
 process (Utilizes high CPU).




 On Mon, Apr 27, 2015 at 8:36 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Thanks Eric and Matt :) !!

 Yes the purpose is to improve reliability.
 Right now, from our driver we are querying using degradePolicy for
 reliability.



  *For changing the keyspace to RF=3, the procedure is as under:*
  1. Add a new node to the cluster (the new node is not in the seed list).
  2. ALTER KEYSPACE system_auth WITH REPLICATION =
     {'class' : 'NetworkTopologyStrategy', 'dc1' : 3};
  3. On each affected node, run nodetool repair
     (http://docs.datastax.com/en/cassandra/1.2/cassandra/tools/toolsNodetool_r.html).
  4. Wait until repair completes on a node, then move to the next node.


 Any other things to take care?

 Thanks
 Regards
 neha


 On Mon, Apr 27, 2015 at 9:45 PM, Eric Stevens migh...@gmail.com
 wrote:

 It depends on why you're adding a new node.  If you're running out of
 disk space or IO capacity in your 2 node cluster, then changing RF to 3
 will not improve either condition - you'd still be writing all data to all
 three nodes.

 However if you're looking to improve reliability, a 2 node RF=2
 cluster cannot have either node offline without losing quorum, while a 3
 node RF=3 cluster can have one node offline and still be able to achieve
 quorum.  RF=3 is a common replication factor because of this 
 characteristic.

 Make sure your new node is not in its own seeds list, or it will not
 bootstrap (it will come online immediately and start serving requests).

 On Mon, Apr 27, 2015 at 8:46 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Hi
  We have a 2-node cluster with RF=2. We are planning to add a new node.

  Should we change RF to 3 in the schema?
  Or just add a new node with the same RF=2?

 Any other Best Practice that we need to take care?

 Thanks
 regards
 Neha






 --
 Arun
 Senior Hadoop/Cassandra Engineer
 Cloudwick

 Champion of Big Data (Cloudera)

 http://www.cloudera.com/content/dev-center/en/home/champions-of-big-data.html

 2014 Data Impact Award Winner (Cloudera)

 http://www.cloudera.com/content/cloudera/en/campaign/data-impact-awards.html






Re: Best Practice to add a node in a Cluster

2015-04-27 Thread Neha Trivedi
Thanks Eric and Matt :) !!

Yes, the purpose is to improve reliability.
Right now, from our driver we are querying using degradePolicy for
reliability.



*For changing the keyspace to RF=3, the procedure is as under:*
1. Add a new node to the cluster (the new node is not in the seed list).
2. ALTER KEYSPACE system_auth WITH REPLICATION =
   {'class' : 'NetworkTopologyStrategy', 'dc1' : 3};
3. On each affected node, run nodetool repair
   (http://docs.datastax.com/en/cassandra/1.2/cassandra/tools/toolsNodetool_r.html).
4. Wait until repair completes on a node, then move to the next node.
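
Putting steps 2-4 together as commands (host names are placeholders; run
repair on one node at a time, since repair is expensive):

    cqlsh host1 -e "ALTER KEYSPACE system_auth WITH REPLICATION =
      {'class' : 'NetworkTopologyStrategy', 'dc1' : 3};"
    for h in host1 host2 host3; do nodetool -h "$h" repair system_auth; done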


Any other things to take care?

Thanks
Regards
neha


On Mon, Apr 27, 2015 at 9:45 PM, Eric Stevens migh...@gmail.com wrote:

 It depends on why you're adding a new node.  If you're running out of disk
 space or IO capacity in your 2 node cluster, then changing RF to 3 will not
 improve either condition - you'd still be writing all data to all three
 nodes.

 However if you're looking to improve reliability, a 2 node RF=2 cluster
 cannot have either node offline without losing quorum, while a 3 node RF=3
 cluster can have one node offline and still be able to achieve quorum.
 RF=3 is a common replication factor because of this characteristic.

 Make sure your new node is not in its own seeds list, or it will not
 bootstrap (it will come online immediately and start serving requests).

 On Mon, Apr 27, 2015 at 8:46 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Hi
  We have a 2-node cluster with RF=2. We are planning to add a new node.

  Should we change RF to 3 in the schema?
  Or just add a new node with the same RF=2?

 Any other Best Practice that we need to take care?

 Thanks
 regards
 Neha





Re: Best Practice to add a node in a Cluster

2015-04-27 Thread Neha Trivedi
Thanks, Arun!

On Tue, Apr 28, 2015 at 9:44 AM, arun sirimalla arunsi...@gmail.com wrote:

 Hi Neha,


 After you add the node to the cluster, run nodetool cleanup on all nodes.
 Next running repair on each node will replicate the data. Make sure you
 run the repair on one node at a time, because repair is an expensive
 process (Utilizes high CPU).




 On Mon, Apr 27, 2015 at 8:36 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Thanks Eric and Matt :) !!

 Yes the purpose is to improve reliability.
 Right now, from our driver we are querying using degradePolicy for
 reliability.



  *For changing the keyspace to RF=3, the procedure is as under:*
  1. Add a new node to the cluster (the new node is not in the seed list).
  2. ALTER KEYSPACE system_auth WITH REPLICATION =
     {'class' : 'NetworkTopologyStrategy', 'dc1' : 3};
  3. On each affected node, run nodetool repair
     (http://docs.datastax.com/en/cassandra/1.2/cassandra/tools/toolsNodetool_r.html).
  4. Wait until repair completes on a node, then move to the next node.


 Any other things to take care?

 Thanks
 Regards
 neha


 On Mon, Apr 27, 2015 at 9:45 PM, Eric Stevens migh...@gmail.com wrote:

 It depends on why you're adding a new node.  If you're running out of
 disk space or IO capacity in your 2 node cluster, then changing RF to 3
 will not improve either condition - you'd still be writing all data to all
 three nodes.

 However if you're looking to improve reliability, a 2 node RF=2 cluster
 cannot have either node offline without losing quorum, while a 3 node RF=3
 cluster can have one node offline and still be able to achieve quorum.
 RF=3 is a common replication factor because of this characteristic.

 Make sure your new node is not in its own seeds list, or it will not
 bootstrap (it will come online immediately and start serving requests).

 On Mon, Apr 27, 2015 at 8:46 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Hi
  We have a 2-node cluster with RF=2. We are planning to add a new node.

  Should we change RF to 3 in the schema?
  Or just add a new node with the same RF=2?

 Any other Best Practice that we need to take care?

 Thanks
 regards
 Neha






 --
 Arun
 Senior Hadoop/Cassandra Engineer
 Cloudwick

 Champion of Big Data (Cloudera)

 http://www.cloudera.com/content/dev-center/en/home/champions-of-big-data.html

 2014 Data Impact Award Winner (Cloudera)

 http://www.cloudera.com/content/cloudera/en/campaign/data-impact-awards.html




Best Practice to add a node in a Cluster

2015-04-27 Thread Neha Trivedi
Hi
We have a 2-node cluster with RF=2. We are planning to add a new node.

Should we change RF to 3 in the schema?
Or just add a new node with the same RF=2?

Any other Best Practice that we need to take care?

Thanks
regards
Neha


Re: COPY command to export a table to CSV file

2015-04-20 Thread Neha Trivedi
Thanks Sebastian, I will try it out.
But I am also curious why the COPY command is failing with an
OutOfMemory error.

regards
Neha

On Tue, Apr 21, 2015 at 4:35 AM, Sebastian Estevez 
sebastian.este...@datastax.com wrote:

 Blobs are ByteBuffers; it calls getBytes().toString():


 https://github.com/brianmhess/cassandra-loader/blob/master/src/main/java/com/datastax/loader/parser/ByteBufferParser.java#L35

 All the best,



 Sebastián Estévez

 Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com



 On Mon, Apr 20, 2015 at 5:47 PM, Serega Sheypak serega.shey...@gmail.com
 wrote:

 Hi, what happens if the unloader meets a blob field?

 2015-04-20 23:43 GMT+02:00 Sebastian Estevez 
 sebastian.este...@datastax.com:

 Try Brian's cassandra-unloader
 https://github.com/brianmhess/cassandra-loader#cassandra-unloader

 All the best,



 Sebastián Estévez

 Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com



 On Mon, Apr 20, 2015 at 12:31 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Does the nproc,nofile,memlock settings in
 /etc/security/limits.d/cassandra.conf are set to optimum value ?
 it's all default.

 What is the consistency level ?
  CL = Quorum

 Is there any other way to export a table to CSV?

 regards
 Neha

 On Mon, Apr 20, 2015 at 12:21 PM, Kiran mk coolkiran2...@gmail.com
 wrote:

 Hi,

 Thanks for the info,

 Does the nproc,nofile,memlock settings in
 /etc/security/limits.d/cassandra.conf are set to optimum value ?

 What is the consistency level ?

  Best Regards,
 Kiran.M.K.


 On Mon, Apr 20, 2015 at 11:55 AM, Neha Trivedi nehajtriv...@gmail.com
  wrote:

 hi,

 What is the count of records in the column-family ?
   We have about 38,000 Rows in the column-family for which we are
 trying to export
 What  is the Cassandra Version ?
  We are using Cassandra 2.0.11

 MAX_HEAP_SIZE and HEAP_NEWSIZE is the default .
 The Server is 8 GB.

 regards
 Neha

 On Mon, Apr 20, 2015 at 11:39 AM, Kiran mk coolkiran2...@gmail.com
 wrote:

 Hi,

 check  the MAX_HEAP_SIZE configuration in cassandra-env.sh
 environment file

 Also HEAP_NEWSIZE ?

 What is the Consistency Level you are using ?

  Best Regards,
 Kiran.M.K.

 On Mon, Apr 20, 2015 at 11:13 AM, Kiran mk coolkiran2...@gmail.com
 wrote:

  Seems like this is related to Java heap memory.

 What is the count of records in the column-family ?

 What  is the Cassandra Version ?

 Best Regards,
 Kiran.M.K.

 On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi 
 nehajtriv...@gmail.com wrote:

 Hello all,

 We are getting the OutOfMemoryError on one of the Node and the
 Node is down, when we run the export command to get all the data from 
 a
 table.


 Regards
 Neha




 ERROR [ReadStage:532074] 2015-04-09 01:04:00,603
 CassandraDaemon.java (line 199) Exception in thread
 Thread[ReadStage:532074,5,main]
 java.lang.OutOfMemoryError: Java heap space
 at
 org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:347)
 at
 org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
 at
 org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
 at
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:124)
 at
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:85)
 at
 org.apache.cassandra.db.Column$1

Re: COPY command to export a table to CSV file

2015-04-20 Thread Neha Trivedi
Values in /etc/security/limits.d/cassandra.conf

# Provided by the cassandra package
cassandra  -  memlock  unlimited
cassandra  -  nofile   10
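
A quick way to confirm those limits actually apply to the cassandra user's
sessions (assuming PAM limits are in effect) is:

    sudo -u cassandra bash -lc 'ulimit -n -l'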


On Mon, Apr 20, 2015 at 12:21 PM, Kiran mk coolkiran2...@gmail.com wrote:

 Hi,

 Thanks for the info,

 Does the nproc,nofile,memlock settings in
 /etc/security/limits.d/cassandra.conf are set to optimum value ?

 What is the consistency level ?

  Best Regards,
 Kiran.M.K.


 On Mon, Apr 20, 2015 at 11:55 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 hi,

 What is the count of records in the column-family ?
   We have about 38,000 Rows in the column-family for which we are
 trying to export
 What  is the Cassandra Version ?
  We are using Cassandra 2.0.11

 MAX_HEAP_SIZE and HEAP_NEWSIZE is the default .
 The Server is 8 GB.

 regards
 Neha

 On Mon, Apr 20, 2015 at 11:39 AM, Kiran mk coolkiran2...@gmail.com
 wrote:

 Hi,

 check  the MAX_HEAP_SIZE configuration in cassandra-env.sh environment
 file

 Also HEAP_NEWSIZE ?

 What is the Consistency Level you are using ?

  Best Regards,
 Kiran.M.K.

 On Mon, Apr 20, 2015 at 11:13 AM, Kiran mk coolkiran2...@gmail.com
 wrote:

  Seems like this is related to Java heap memory.

 What is the count of records in the column-family ?

 What  is the Cassandra Version ?

 Best Regards,
 Kiran.M.K.

 On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Hello all,

  We are getting an OutOfMemoryError on one of the nodes, and the node goes
  down when we run the export command to get all the data from a table.


 Regards
 Neha




 ERROR [ReadStage:532074] 2015-04-09 01:04:00,603 CassandraDaemon.java
 (line 199) Exception in thread Thread[ReadStage:532074,5,main]
 java.lang.OutOfMemoryError: Java heap space
 at
 org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:347)
 at
 org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
 at
 org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
 at
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:124)
 at
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:85)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
 at
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
 at
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
 at
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
 at
 org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:82)
 at
 org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:59)
 at
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at
 org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
 at
 org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
 at
 org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:200)
 at
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:185)
 at
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at
 org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:101)
 at
 org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:75)
 at
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
 at
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
 at
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)





 --
 Best Regards,
 Kiran.M.K.




 --
 Best Regards,
 Kiran.M.K.





 --
 Best Regards,
 Kiran.M.K.



Re: COPY command to export a table to CSV file

2015-04-20 Thread Neha Trivedi
Are the nproc, nofile, and memlock settings in
/etc/security/limits.d/cassandra.conf set to optimum values?
It's all default.

What is the consistency level?
CL = Quorum

Is there any other way to export a table to CSV?
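
For reference, the plain cqlsh export that has been failing is just (keyspace,
table, and path are placeholders):

    COPY mykeyspace.mytable TO '/tmp/mytable.csv' WITH HEADER = true;

Brian Hess's cassandra-loader project (suggested in the parallel thread)
ships a cassandra-unloader that is an alternative for large tables.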

regards
Neha

On Mon, Apr 20, 2015 at 12:21 PM, Kiran mk coolkiran2...@gmail.com wrote:

 Hi,

 Thanks for the info,

 Does the nproc,nofile,memlock settings in
 /etc/security/limits.d/cassandra.conf are set to optimum value ?

 What is the consistency level ?

  Best Regards,
 Kiran.M.K.


 On Mon, Apr 20, 2015 at 11:55 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 hi,

 What is the count of records in the column-family ?
   We have about 38,000 Rows in the column-family for which we are
 trying to export
 What  is the Cassandra Version ?
  We are using Cassandra 2.0.11

 MAX_HEAP_SIZE and HEAP_NEWSIZE is the default .
 The Server is 8 GB.

 regards
 Neha

 On Mon, Apr 20, 2015 at 11:39 AM, Kiran mk coolkiran2...@gmail.com
 wrote:

 Hi,

 check  the MAX_HEAP_SIZE configuration in cassandra-env.sh environment
 file

 Also HEAP_NEWSIZE ?

 What is the Consistency Level you are using ?

  Best Regards,
 Kiran.M.K.

 On Mon, Apr 20, 2015 at 11:13 AM, Kiran mk coolkiran2...@gmail.com
 wrote:

  Seems like this is related to Java heap memory.

 What is the count of records in the column-family ?

 What  is the Cassandra Version ?

 Best Regards,
 Kiran.M.K.

 On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Hello all,

  We are getting an OutOfMemoryError on one of the nodes, and the node goes
  down when we run the export command to get all the data from a table.


 Regards
 Neha




 ERROR [ReadStage:532074] 2015-04-09 01:04:00,603 CassandraDaemon.java
 (line 199) Exception in thread Thread[ReadStage:532074,5,main]
 java.lang.OutOfMemoryError: Java heap space
 at
 org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:347)
 at
 org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
 at
 org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
 at
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:124)
 at
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:85)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
 at
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
 at
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
 at
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
 at
 org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:82)
 at
 org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:59)
 at
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at
 org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
 at
 org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
 at
 org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:200)
 at
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:185)
 at
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at
 org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:101)
 at
 org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:75)
 at
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
 at
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
 at
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)





 --
 Best Regards,
 Kiran.M.K.




 --
 Best Regards

Re: COPY command to export a table to CSV file

2015-04-20 Thread Neha Trivedi
hi,

What is the count of records in the column-family?
  We have about 38,000 rows in the column-family we are
trying to export.
What is the Cassandra version?
  We are using Cassandra 2.0.11.

MAX_HEAP_SIZE and HEAP_NEWSIZE are the defaults.
The server has 8 GB.

regards
Neha
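
For an 8 GB server, setting the values explicitly in cassandra-env.sh would
look something like this (illustrative numbers following the usual guidance
of heap = min(1/2 RAM, 8 GB) and new gen = 100 MB per CPU core; not taken
from this thread):

    MAX_HEAP_SIZE="4G"
    HEAP_NEWSIZE="400M"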

On Mon, Apr 20, 2015 at 11:39 AM, Kiran mk coolkiran2...@gmail.com wrote:

 Hi,

 check  the MAX_HEAP_SIZE configuration in cassandra-env.sh environment
 file

 Also HEAP_NEWSIZE ?

 What is the Consistency Level you are using ?

  Best Regards,
 Kiran.M.K.

 On Mon, Apr 20, 2015 at 11:13 AM, Kiran mk coolkiran2...@gmail.com
 wrote:

  Seems like this is related to Java heap memory.

 What is the count of records in the column-family ?

 What  is the Cassandra Version ?

 Best Regards,
 Kiran.M.K.

 On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Hello all,

  We are getting an OutOfMemoryError on one of the nodes, and the node goes
  down when we run the export command to get all the data from a table.


 Regards
 Neha




 ERROR [ReadStage:532074] 2015-04-09 01:04:00,603 CassandraDaemon.java (line 199) Exception in thread Thread[ReadStage:532074,5,main]
 java.lang.OutOfMemoryError: Java heap space
 at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:347)
 at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
 at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
 at org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:124)
 at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:85)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
 at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
 at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
 at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
 at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
 at org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:82)
 at org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:59)
 at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
 at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
 at org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:200)
 at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:185)
 at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:101)
 at org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:75)
 at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
 at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
 at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)





 --
 Best Regards,
 Kiran.M.K.




 --
 Best Regards,
 Kiran.M.K.
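
Since the OOM here is triggered by one COPY over the whole table, a hedged
workaround is to export in pieces rather than in a single pass. A minimal
sketch in cqlsh, assuming a keyspace "ks", table "cf", and partition key "id"
(all names hypothetical), on the default Murmur3Partitioner whose tokens span
-2^63 .. 2^63-1:

  -- the full export, the form that OOMed a node:
  COPY ks.cf TO '/tmp/cf.csv' WITH HEADER = true;

  -- chunked alternative: walk the token range in slices and dump each slice
  SELECT * FROM ks.cf WHERE token(id) >= -9223372036854775808 AND token(id) < 0;
  SELECT * FROM ks.cf WHERE token(id) >= 0 AND token(id) <= 9223372036854775807;

Only two slices are shown for brevity; in practice you would script many
smaller ranges so that no single read materializes too much on one node.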



COPY command to export a table to CSV file

2015-04-19 Thread Neha Trivedi
Hello all,

We are getting an OutOfMemoryError on one of the nodes, and the node goes
down when we run the export command to get all the data from a table.


Regards
Neha






Re: Fwd: ReadTimeoutException in Cassandra 2.0.11

2015-01-23 Thread Neha Trivedi
Thanks a lot Steve. I suspected the same. I will definitely read.

regards
Neha

On Fri, Jan 23, 2015 at 11:22 PM, Steve Robenalt sroben...@highwire.org
wrote:

 Hi Neha,

 As far as I'm aware, 4GB of RAM is a bit underpowered for Cassandra even
 if there are no other processes on the same server (i.e. Tomcat and
 ActiveMQ). There are some general guidelines at
 http://wiki.apache.org/cassandra/CassandraHardware which should help you
 out. You may not need all of the recommendations therein if you are simply
 evaluating Cassandra with a test workload, but I suspect you will at least
 need more RAM and a dedicated machine to get beyond the read timeouts.
 Also, when using spinning disks, the 2 drive recommendation is important
 because Cassandra uses two very different access patterns for its data
 storage and the disk can be thrashed quite a bit, particularly if you end
 up using enough memory that virtual memory on the host becomes a factor.

 There's a lot of good information on both the Apache Cassandra site and on
 Planet Cassandra about performance and tuning if you want to know more.

 Hope that helps,
 Steve




 On Thu, Jan 22, 2015 at 7:28 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Hello Everyone,
 Thanks very much for the input.

 Here is my System info.
 1. I have a single-node cluster (for testing).
 2. I have 4 GB of memory on the server and am trying to process 200 MB.
 (1 GB is allocated to Tomcat7, 1 GB to Cassandra, and 1 GB to ActiveMQ; an
 nltk server is also running.)
 3. We are using the 2.0.3 driver. (This is one thing I can change and try.)
 4. 64.4 GB HDD.
 5. Memory and CPU information attached.

 Regards
 Neha





Re: Fwd: ReadTimeoutException in Cassandra 2.0.11

2015-01-22 Thread Neha Trivedi
Hello Everyone,
Thanks very much for the input.

Here is my System info.
1. I have a single-node cluster (for testing).
2. I have 4 GB of memory on the server and am trying to process 200 MB.
(1 GB is allocated to Tomcat7, 1 GB to Cassandra, and 1 GB to ActiveMQ; an
nltk server is also running.)
3. We are using the 2.0.3 driver. (This is one thing I can change and try.)
4. 64.4 GB HDD.
5. Memory and CPU information attached.

Regards
Neha

On Fri, Jan 23, 2015 at 6:50 AM, Steve Robenalt sroben...@highwire.org
wrote:

 I agree with Rob. You shouldn't need to change the read timeout.

 We had similar issues with intermittent ReadTimeoutExceptions for a while
 when we ran Cassandra on underpowered nodes on AWS. We've also seen them
 when executing unconstrained queries with very large ResultSets (because it
 takes longer than the timeout to return results). If you can share more
 details about the hardware environment you are running your cluster on,
 there are many on the list who can tell you if they are underpowered or not
 (CPUs, memory, and disk/storage config are all important factors).

 You might also try running a newer version of the Java Driver (the later
 2.0.x drivers should all work with Cassandra 2.0.3), and I would also
 suggest moving to a newer (2.0.x) version of Cassandra if you have the
 option to do so. We had to move to Cassandra 2.0.5 some time ago from 2.0.3
 for an issue unrelated to the read timeouts.

 Steve


 On Thu, Jan 22, 2015 at 4:48 PM, Robert Coli rc...@eventbrite.com wrote:

 On Thu, Jan 22, 2015 at 4:19 PM, Asit KAUSHIK asitkaushikno...@gmail.com
  wrote:

 There are some values for read timeout in the cassandra.yaml file; the
 default value is 3 ms. Changing it to a bigger value resolved our issue.

 Having to increase this value is often a strong signal you are Doing It
 Wrong. FWIW!

 =Rob
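
For reference, the timeouts being discussed live in cassandra.yaml, and in
the 2.0 line the read-side defaults are actually on the order of seconds:

  read_request_timeout_in_ms: 5000     # single-partition reads
  range_request_timeout_in_ms: 10000   # range scans

Per Rob's caveat above, raising these usually papers over an undersized node
or an unconstrained query rather than fixing it.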




cat /proc/meminfo 
MemTotal:3838832 kB
MemFree:  176128 kB
Buffers:  172680 kB
Cached:   829556 kB
SwapCached: 8164 kB
Active:  1327288 kB
Inactive: 806532 kB
Active(anon): 697764 kB
Inactive(anon):   458364 kB
Active(file): 629524 kB
Inactive(file):   348168 kB
Unevictable: 1396340 kB
Mlocked: 1396340 kB
SwapTotal:   4194300 kB
SwapFree:4033800 kB
Dirty:40 kB
Writeback: 0 kB
AnonPages:   2524080 kB
Mapped:38052 kB
Shmem:   296 kB
Slab:  83824 kB
SReclaimable:  70124 kB
SUnreclaim:13700 kB
KernelStack:3008 kB
PageTables:11532 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit: 6113716 kB
Committed_AS:4239512 kB
VmallocTotal:   34359738367 kB
VmallocUsed:8784 kB
VmallocChunk:   34359711939 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total:   0
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
DirectMap4k: 3940352 kB
DirectMap2M:   0 kB

cat /proc/cpuinfo 
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 62
model name  : Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
stepping: 4
microcode   : 0x415
cpu MHz : 2500.060
cache size  : 25600 KB
physical id : 1
siblings: 1
core id : 1
cpu cores   : 1
apicid  : 35
initial apicid  : 35
fpu : yes
fpu_exception   : yes
cpuid level : 13
wp  : yes
flags   : fpu de tsc msr pae cx8 apic sep cmov pat clflush mmx fxsr sse 
sse2 ss ht syscall nx lm constant_tsc rep_good nopl pni pclmulqdq ssse3 cx16 
sse4_1 sse4_2 popcnt tsc_deadline_timer aes rdrand hypervisor lahf_lm ida arat 
epb pln pts dtherm fsgsbase erms
bogomips: 5000.12
clflush size: 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:


Fwd: ReadTimeoutException in Cassandra 2.0.11

2015-01-21 Thread Neha Trivedi
Hello All,
I am trying to process a 200 MB file and am getting the following error. We
are using apache-cassandra-2.0.3.jar.

com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout
during read query at consistency ONE (1 responses were required but only 0
replica responded)

1. Is it due to memory?
2. Is it related to the driver?

Initially, when I was trying a 15 MB file, it threw the same exception, but
after that it started working.


thanks
regards
neha


Re: Cassandra nodes in VirtualBox

2015-01-05 Thread Neha Trivedi
Hi Ajay,
1. You should have at least 2 seed nodes; that will help when Node1 (your
only seed node) is down.
2. Check that you are using the internal IP address for listen_address and
rpc_address.
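
A minimal sketch of the cassandra.yaml lines both points refer to; the
192.168.56.x addresses are placeholders for the VMs' internal (host-only)
IPs:

  # on every node -- list two seed nodes, per point 1:
  seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
            - seeds: "192.168.56.101,192.168.56.102"

  # on each node -- its own internal address, per point 2:
  listen_address: 192.168.56.102
  rpc_address: 192.168.56.102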




On Mon, Jan 5, 2015 at 2:07 PM, Ajay ajay.ga...@gmail.com wrote:

 Hi,

 I did the Cassandra cluster set up as below:

 Node 1 : Seed Node
 Node 2
 Node 3
 Node 4

 All 4 nodes are VirtualBox VMs with Ubuntu 14.10. I have set
 listen_address and rpc_address to the inet address, with SimpleSnitch.

 When I start Node2 after Node1 is started, I get
 java.lang.RuntimeException: Unable to gossip with any seeds.

 What could be the reason?

 Thanks
 Ajay



Re: Stable cassandra build for production usage

2015-01-01 Thread Neha Trivedi
Use 2.0.11 for production

On Wed, Dec 31, 2014 at 11:50 PM, Robert Coli rc...@eventbrite.com wrote:

 On Wed, Dec 31, 2014 at 8:38 AM, Ajay ajay.ga...@gmail.com wrote:

 For my research and learning I am using Cassandra 2.1.2, but I see a couple
 of mail threads going about issues in 2.1.2. So what is the stable or
 popular build for production in the Cassandra 2.x series?

 https://engineering.eventbrite.com/what-version-of-cassandra-should-i-run/

 =Rob



Re: Cassandra Maintenance Best practices

2014-12-16 Thread Neha Trivedi
Hi Jonathan,
QUORUM = (sum_of_replication_factors / 2) + 1; for us, quorum = (2/2) + 1 = 2.

The default CL is ONE, and we have RF=2 with two nodes in the cluster. (I am
a little confused: what is my read CL and what is my write CL?)

So does it mean that every WRITE will be written to both nodes?

And for every READ, will it read from both nodes and give the result back to
the client?

Will DowngradingConsistencyRetryPolicy downgrade the CL if a node is down?

Regards

Neha

On Wed, Dec 10, 2014 at 1:00 PM, Jonathan Haddad j...@jonhaddad.com wrote:

 I did a presentation on diagnosing performance problems in production at
 the US & Euro summits, in which I covered quite a few tools & preventative
 measures you should know when running a production cluster. You may find
 it useful:
 http://rustyrazorblade.com/2014/09/cassandra-summit-recap-diagnosing-problems-in-production/

 On OpsCenter - I recommend it. It gives you a nice dashboard. I don't
 think it's completely comprehensive (but no tool really is), but it gets
 you 90% of the way there.

 It's a good idea to run repairs, especially if you're doing deletes or
 querying at CL=ONE.  I assume you're not using quorum, because on RF=2
 that's the same as CL=ALL.

 I recommend at least RF=3 because if you lose 1 server, you're on the edge
 of data loss.


 On Tue Dec 09 2014 at 7:19:32 PM Neha Trivedi nehajtriv...@gmail.com
 wrote:

 Hi,
 We have a two-node cluster configuration in production with RF=2.

 Which means that the data is written to both nodes. It's been
 running for about a month now and has a good amount of data.

 Questions:
 1. What are the best practices for maintenance?
 2. Is OpsCenter required to be installed, or can I manage with the nodetool
 utility?
 3. Is it necessary to run repair weekly?

 thanks
 regards
 Neha
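
On the repair question above, a minimal sketch of one staggered weekly
schedule, assuming a keyspace named "ks" and a packaged install (both
hypothetical); -pr repairs only each node's primary ranges so the work is
not duplicated across replicas:

  # /etc/cron.d/cassandra-repair on node 1 (use a different day per node)
  0 2 * * 0  cassandra  nodetool repair -pr ks

Whatever the schedule, each node should be repaired at least once per
gc_grace_seconds (10 days by default) if deletes are in play.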




Re: Cassandra Maintenance Best practices

2014-12-16 Thread Neha Trivedi
Thanks Ryan.
So, as Jonathan recommended, we should have RF=3 with three nodes.
Then quorum = 2, so CL = 2 (i.e. I need the CL set to QUORUM), and I will not
need the downgrading retry policy in case one of my nodes goes down.

I can dynamically add a new node to my cluster.
Can I change my RF to 3 dynamically without affecting my nodes?

regards
Neha
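
The arithmetic being worked out here, for reference: with RF=3, quorum =
floor(3/2) + 1 = 2, so QUORUM reads and writes keep succeeding with one node
down. In cqlsh the session-level consistency can be set directly (a sketch,
not tied to any keyspace):

  cqlsh> CONSISTENCY QUORUM;   -- applies to subsequent reads/writes in this session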

On Tue, Dec 16, 2014 at 10:32 PM, Ryan Svihla rsvi...@datastax.com wrote:


 CL QUORUM with RF=2 is equivalent to ALL: writes will require
 acknowledgement from both nodes, and reads will be from both nodes.

 CL ONE will write to both replicas, but return success as soon as the
 first one responds; reads will be from one node (the load balancing
 strategy determines which one).

 FWIW, I've come around to disliking the downgrading retry policy. I now
 feel that if I'm using downgrading, I'm effectively going to be using that
 downgraded policy most of the time under server stress, so in practice the
 reduced consistency is the effective consistency I'm asking for from my
 writes and reads.







Re: Cassandra Maintenance Best practices

2014-12-16 Thread Neha Trivedi
Thanks Ryan. We will get a new node and add it to the cluster. I will mail
if I have any questions regarding the same.

On Tue, Dec 16, 2014 at 10:52 PM, Ryan Svihla rsvi...@datastax.com wrote:

 You'll have to run repair, and that will involve some load and streaming,
 but this is a normal use case for Cassandra, and your cluster should be
 sized load-wise to allow repair and bootstrapping of new nodes; otherwise,
 when you're overwhelmed, you won't be able to add more nodes easily.

 If you need to reduce the cost of streaming to the existing cluster, just
 set the streaming throughput on your existing nodes to a lower number like
 50 or 25.
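
On the RF change itself, a minimal sketch, assuming a keyspace "ks" on
SimpleStrategy (name and strategy are hypothetical):

  -- in cqlsh: bump the replication factor
  ALTER KEYSPACE ks
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

  # then, on each node, throttle streaming if needed and build the replicas
  nodetool setstreamthroughput 25
  nodetool repair ks

The ALTER takes effect immediately for new writes; existing data only gains
its third replica once repair completes.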





Re: Cassandra Maintenance Best practices

2014-12-15 Thread Neha Trivedi
Thanks very much Jonathan !!





Cassandra Maintenance Best practices

2014-12-09 Thread Neha Trivedi
Hi,
We have a two-node cluster configuration in production with RF=2.

Which means that the data is written to both nodes. It's been running
for about a month now and has a good amount of data.

Questions:
1. What are the best practices for maintenance?
2. Is OpsCenter required to be installed, or can I manage with the nodetool
utility?
3. Is it necessary to run repair weekly?

thanks
regards
Neha


Re: Cassandra add a node and remove a node

2014-12-02 Thread Neha Trivedi
Thanks Jens and Robert !!!

On Wed, Dec 3, 2014 at 2:20 AM, Robert Coli rc...@eventbrite.com wrote:

 On Mon, Dec 1, 2014 at 7:10 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 No, the old node is not defective. We just want to separate out that
 server for testing, and add a new node. (The present cluster has two nodes
 and RF=2.)


 If you currently have two nodes and RF=2, you must add the new node before
 removing the old node, so that you always have at least two nodes in the
 cluster.

 =Rob




Re: Cassandra add a node and remove a node

2014-12-01 Thread Neha Trivedi
No, the old node is not defective. We just want to separate out that server
for testing, and add a new node. (The present cluster has two nodes and
RF=2.)

thanks

On Tue, Dec 2, 2014 at 12:04 AM, Robert Coli rc...@eventbrite.com wrote:

 On Sun, Nov 30, 2014 at 10:15 PM, Neha Trivedi nehajtriv...@gmail.com
 wrote:

 I need to Add new Node and remove existing node.


 What is the purpose of this action? Is the old node defective, and being
 replaced 1:1 with the new node?

 =Rob




Cassandra add a node and remove a node

2014-11-30 Thread Neha Trivedi
Hi,
I need to Add new Node and remove existing node.

Should I first remove the existing node and then add the new one, or add the
new node first and then remove the existing one? Which practice is better,
and what do I need to take care of?

regards
Neha
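
A minimal sketch of the sequence Rob recommends upthread (add the new node
first, then retire the old one); the packaged install and service name are
illustrative assumptions:

  # 1. on the new node: configure cluster_name and seeds in cassandra.yaml,
  # then start it
  sudo service cassandra start   # node bootstraps and streams its data in
  nodetool status                # wait until the new node shows UN (Up/Normal)

  # 2. on the old, still-live node being pulled out for testing:
  nodetool decommission          # streams its ranges to the remaining replicas

decommission is the right call for a live node; removenode is only for nodes
that are already dead.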