Re: Out of Memory Error While Opening SSTables on Startup

2015-03-19 Thread Jan
Paul Nickerson,
Curious, did you get a solution to your problem?
Regards,
Jan



 On Tuesday, February 10, 2015 5:48 PM, Flavien Charlon wrote:

 I already experienced the same problem (hundreds of thousands of SSTables) 
with Cassandra 2.1.2. It seems to appear when running an incremental repair 
while there is a medium to high insert load on the cluster. The repair goes in 
a bad state and starts creating way more SSTables than it should (even when 
there should be nothing to repair).
On 10 February 2015 at 15:46, Eric Stevens  wrote:

This kind of recovery is definitely not my strong point, so feedback on this 
approach would certainly be welcome.
As I understand it, if you really want to keep that data, you ought to be able 
to mv it out of the way to get your node online, then move those files back in 
several thousand at a time, nodetool refresh OpsCenter rollups60 && nodetool 
compact OpsCenter rollups60; rinse and repeat.  This should let you 
incrementally restore the data in that keyspace without putting so many 
sstables in there that it OOMs your cluster again.
On Tue, Feb 10, 2015 at 3:38 PM, Chris Lohfink  wrote:

yeah... probably just 2.1.2 things and not compactions.  Still probably want to 
do something about the 1.6 million files though.  It may be worth just 
mv/rm'ing the 60 sec rollup data unless you are really attached to it.
Chris
On Tue, Feb 10, 2015 at 4:04 PM, Paul Nickerson  wrote:

I was having trouble with snapshots failing while trying to repair that table 
(http://www.mail-archive.com/user@cassandra.apache.org/msg40686.html). I have a 
repair running on it now, and it seems to be going successfully this time. I am 
going to wait for that to finish, then try a manual nodetool compact. If that 
goes successfully, then would it be safe to chalk the lack of compaction on 
this table in the past up to 2.1.2 problems?

 ~ Paul Nickerson
On Tue, Feb 10, 2015 at 3:34 PM, Chris Lohfink  wrote:

Your cluster is probably having issues with compactions (with STCS you should 
never have this many).  I would probably punt with OpsCenter/rollups60. Turn 
the node off and move all of the sstables off to a different directory for 
backup (or just rm them if you really don't care about 1-minute metrics), then 
turn the server back on.
Once you get your cluster running again, go back and investigate why compactions 
stopped. My guess is you hit an exception in the past that killed your 
CompactionExecutor, and things just built up slowly until you got to this point.
Chris
On Tue, Feb 10, 2015 at 2:15 PM, Paul Nickerson  wrote:

Thank you Rob. I tried a 12 GiB heap size, and still crashed out. There are 
1,617,289 files under OpsCenter/rollups60.
Once I downgraded Cassandra to 2.1.1 (apt-get install cassandra=2.1.1), I was 
able to start up Cassandra OK with the default heap size formula.
Now my cluster is running multiple versions of Cassandra. I think I will 
downgrade the rest to 2.1.1.
 ~ Paul Nickerson
On Tue, Feb 10, 2015 at 2:05 PM, Robert Coli  wrote:

On Tue, Feb 10, 2015 at 11:02 AM, Paul Nickerson  wrote:

I am getting an out of memory error when I try to start Cassandra on one of my 
nodes. Cassandra will run for a minute, and then exit without outputting any 
error in the log file. It is happening while SSTableReader is opening a couple 
hundred thousand things.
... 
Does anyone know how I might get Cassandra on this node running again? I'm not 
very familiar with correctly tuning Java memory parameters, and I'm not sure if 
that's the right solution in this case anyway.

Try running 2.1.1, and/or increasing heap size beyond 8gb.
Are there actually that many SSTables on disk?
=Rob

Re: Out of Memory Error While Opening SSTables on Startup

2015-02-10 Thread Flavien Charlon
I already experienced the same problem (hundreds of thousands of SSTables)
with Cassandra 2.1.2. It seems to appear when running an incremental repair
while there is a medium to high insert load on the cluster. The repair goes
in a bad state and starts creating way more SSTables than it should (even
when there should be nothing to repair).
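
A minimal sketch of the interim workaround, assuming Cassandra 2.1 where
incremental repair is still opt-in; the -inc flag and the keyspace name are
assumptions, not a recommendation:

    # run a plain full repair per node while the incremental-repair behaviour is investigated
    nodetool repair my_keyspace
    # the incremental form that coincided with the sstable explosion would be:
    # nodetool repair -inc my_keyspace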

On 10 February 2015 at 15:46, Eric Stevens  wrote:

> This kind of recovery is definitely not my strong point, so feedback on
> this approach would certainly be welcome.
>
> As I understand it, if you really want to keep that data, you ought to be
> able to mv it out of the way to get your node online, then move those files
> in a several thousand at a time, nodetool refresh OpsCenter rollups60 &&
> nodetool compact OpsCenter rollups60; rinse and repeat.  This should let
> you incrementally restore the data in that keyspace without putting so many
> sstables in there that it ooms your cluster again.
>
> On Tue, Feb 10, 2015 at 3:38 PM, Chris Lohfink 
> wrote:
>
>> yeah... probably just 2.1.2 things and not compactions.  Still probably
>> want to do something about the 1.6 million files though.  It may be worth
>> just mv/rm'ing to 60 sec rollup data though unless really attached to it.
>>
>> Chris
>>
>> On Tue, Feb 10, 2015 at 4:04 PM, Paul Nickerson  wrote:
>>
>>> I was having trouble with snapshots failing while trying to repair that
>>> table (
>>> http://www.mail-archive.com/user@cassandra.apache.org/msg40686.html). I
>>> have a repair running on it now, and it seems to be going successfully this
>>> time. I am going to wait for that to finish, then try a manual nodetool
>>> compact. If that goes successfully, then would it be safe to chalk the lack
>>> of compaction on this table in the past up to 2.1.2 problems?
>>>
>>>
>>>  ~ Paul Nickerson
>>>
>>> On Tue, Feb 10, 2015 at 3:34 PM, Chris Lohfink 
>>> wrote:
>>>
 Your cluster is probably having issues with compactions (with STCS you
 should never have this many).  I would probably punt with
 OpsCenter/rollups60. Turn the node off and move all of the sstables off to
 a different directory for backup (or just rm if you really don't care about
 1 minute metrics), than turn the server back on.

 Once you get your cluster running again go back and investigate why
 compactions stopped, my guess is you hit an exception in past that killed
 your CompactionExecutor and things just built up slowly until you got to
 this point.

 Chris

 On Tue, Feb 10, 2015 at 2:15 PM, Paul Nickerson 
 wrote:

> Thank you Rob. I tried a 12 GiB heap size, and still crashed out.
> There are 1,617,289 files under OpsCenter/rollups60.
>
> Once I downgraded Cassandra to 2.1.1 (apt-get install
> cassandra=2.1.1), I was able to start up Cassandra OK with the default 
> heap
> size formula.
>
> Now my cluster is running multiple versions of Cassandra. I think I
> will downgrade the rest to 2.1.1.
>
>  ~ Paul Nickerson
>
> On Tue, Feb 10, 2015 at 2:05 PM, Robert Coli 
> wrote:
>
>> On Tue, Feb 10, 2015 at 11:02 AM, Paul Nickerson 
>> wrote:
>>
>>> I am getting an out of memory error why I try to start Cassandra on
>>> one of my nodes. Cassandra will run for a minute, and then exit without
>>> outputting any error in the log file. It is happening while 
>>> SSTableReader
>>> is opening a couple hundred thousand things.
>>>
>> ...
>>
>>> Does anyone know how I might get Cassandra on this node running
>>> again? I'm not very familiar with correctly tuning Java memory 
>>> parameters,
>>> and I'm not sure if that's the right solution in this case anyway.
>>>
>>
>> Try running 2.1.1, and/or increasing heap size beyond 8gb.
>>
>> Are there actually that many SSTables on disk?
>>
>> =Rob
>>
>>
>
>

>>>
>>
>


Re: Out of Memory Error While Opening SSTables on Startup

2015-02-10 Thread Eric Stevens
This kind of recovery is definitely not my strong point, so feedback on
this approach would certainly be welcome.

As I understand it, if you really want to keep that data, you ought to be
able to mv it out of the way to get your node online, then move those files
back in several thousand at a time, nodetool refresh OpsCenter rollups60 &&
nodetool compact OpsCenter rollups60; rinse and repeat.  This should let
you incrementally restore the data in that keyspace without putting so many
sstables in there that it OOMs your cluster again.
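
A rough shell sketch of that loop, assuming the default data directory layout
and that the sstables were first moved to /backup/rollups60; the paths, the
batch size, and the table-ID suffix on the directory are all assumptions:

    # assumes exactly one matching table directory (2.1 appends a table ID suffix)
    DATA_DIR=$(echo /var/lib/cassandra/data/OpsCenter/rollups60*)
    cd /backup/rollups60
    # move sstable files back a few thousand at a time, then load and compact them
    while [ -n "$(ls -A)" ]; do
        ls | head -5000 | xargs -I{} mv {} "$DATA_DIR/"
        nodetool refresh OpsCenter rollups60
        nodetool compact OpsCenter rollups60
    done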

On Tue, Feb 10, 2015 at 3:38 PM, Chris Lohfink  wrote:

> yeah... probably just 2.1.2 things and not compactions.  Still probably
> want to do something about the 1.6 million files though.  It may be worth
> just mv/rm'ing to 60 sec rollup data though unless really attached to it.
>
> Chris
>
> On Tue, Feb 10, 2015 at 4:04 PM, Paul Nickerson  wrote:
>
>> I was having trouble with snapshots failing while trying to repair that
>> table (
>> http://www.mail-archive.com/user@cassandra.apache.org/msg40686.html). I
>> have a repair running on it now, and it seems to be going successfully this
>> time. I am going to wait for that to finish, then try a manual nodetool
>> compact. If that goes successfully, then would it be safe to chalk the lack
>> of compaction on this table in the past up to 2.1.2 problems?
>>
>>
>>  ~ Paul Nickerson
>>
>> On Tue, Feb 10, 2015 at 3:34 PM, Chris Lohfink 
>> wrote:
>>
>>> Your cluster is probably having issues with compactions (with STCS you
>>> should never have this many).  I would probably punt with
>>> OpsCenter/rollups60. Turn the node off and move all of the sstables off to
>>> a different directory for backup (or just rm if you really don't care about
>>> 1 minute metrics), than turn the server back on.
>>>
>>> Once you get your cluster running again go back and investigate why
>>> compactions stopped, my guess is you hit an exception in past that killed
>>> your CompactionExecutor and things just built up slowly until you got to
>>> this point.
>>>
>>> Chris
>>>
>>> On Tue, Feb 10, 2015 at 2:15 PM, Paul Nickerson 
>>> wrote:
>>>
 Thank you Rob. I tried a 12 GiB heap size, and still crashed out. There
 are 1,617,289 files under OpsCenter/rollups60.

 Once I downgraded Cassandra to 2.1.1 (apt-get install cassandra=2.1.1),
 I was able to start up Cassandra OK with the default heap size formula.

 Now my cluster is running multiple versions of Cassandra. I think I
 will downgrade the rest to 2.1.1.

  ~ Paul Nickerson

 On Tue, Feb 10, 2015 at 2:05 PM, Robert Coli 
 wrote:

> On Tue, Feb 10, 2015 at 11:02 AM, Paul Nickerson 
> wrote:
>
>> I am getting an out of memory error why I try to start Cassandra on
>> one of my nodes. Cassandra will run for a minute, and then exit without
>> outputting any error in the log file. It is happening while SSTableReader
>> is opening a couple hundred thousand things.
>>
> ...
>
>> Does anyone know how I might get Cassandra on this node running
>> again? I'm not very familiar with correctly tuning Java memory 
>> parameters,
>> and I'm not sure if that's the right solution in this case anyway.
>>
>
> Try running 2.1.1, and/or increasing heap size beyond 8gb.
>
> Are there actually that many SSTables on disk?
>
> =Rob
>
>


>>>
>>
>


Re: Out of Memory Error While Opening SSTables on Startup

2015-02-10 Thread Chris Lohfink
yeah... probably just 2.1.2 things and not compactions.  Still probably
want to do something about the 1.6 million files though.  It may be worth
just mv/rm'ing the 60 sec rollup data unless you are really attached to it.

Chris

On Tue, Feb 10, 2015 at 4:04 PM, Paul Nickerson  wrote:

> I was having trouble with snapshots failing while trying to repair that
> table (http://www.mail-archive.com/user@cassandra.apache.org/msg40686.html).
> I have a repair running on it now, and it seems to be going successfully
> this time. I am going to wait for that to finish, then try a
> manual nodetool compact. If that goes successfully, then would it be safe
> to chalk the lack of compaction on this table in the past up to 2.1.2
> problems?
>
>
>  ~ Paul Nickerson
>
> On Tue, Feb 10, 2015 at 3:34 PM, Chris Lohfink 
> wrote:
>
>> Your cluster is probably having issues with compactions (with STCS you
>> should never have this many).  I would probably punt with
>> OpsCenter/rollups60. Turn the node off and move all of the sstables off to
>> a different directory for backup (or just rm if you really don't care about
>> 1 minute metrics), than turn the server back on.
>>
>> Once you get your cluster running again go back and investigate why
>> compactions stopped, my guess is you hit an exception in past that killed
>> your CompactionExecutor and things just built up slowly until you got to
>> this point.
>>
>> Chris
>>
>> On Tue, Feb 10, 2015 at 2:15 PM, Paul Nickerson  wrote:
>>
>>> Thank you Rob. I tried a 12 GiB heap size, and still crashed out. There
>>> are 1,617,289 files under OpsCenter/rollups60.
>>>
>>> Once I downgraded Cassandra to 2.1.1 (apt-get install cassandra=2.1.1),
>>> I was able to start up Cassandra OK with the default heap size formula.
>>>
>>> Now my cluster is running multiple versions of Cassandra. I think I will
>>> downgrade the rest to 2.1.1.
>>>
>>>  ~ Paul Nickerson
>>>
>>> On Tue, Feb 10, 2015 at 2:05 PM, Robert Coli 
>>> wrote:
>>>
 On Tue, Feb 10, 2015 at 11:02 AM, Paul Nickerson 
 wrote:

> I am getting an out of memory error why I try to start Cassandra on
> one of my nodes. Cassandra will run for a minute, and then exit without
> outputting any error in the log file. It is happening while SSTableReader
> is opening a couple hundred thousand things.
>
 ...

> Does anyone know how I might get Cassandra on this node running again?
> I'm not very familiar with correctly tuning Java memory parameters, and 
> I'm
> not sure if that's the right solution in this case anyway.
>

 Try running 2.1.1, and/or increasing heap size beyond 8gb.

 Are there actually that many SSTables on disk?

 =Rob


>>>
>>>
>>
>


Re: Out of Memory Error While Opening SSTables on Startup

2015-02-10 Thread Paul Nickerson
I was having trouble with snapshots failing while trying to repair that
table (http://www.mail-archive.com/user@cassandra.apache.org/msg40686.html).
I have a repair running on it now, and it seems to be going successfully
this time. I am going to wait for that to finish, then try a
manual nodetool compact. If that goes successfully, then would it be safe
to chalk the lack of compaction on this table in the past up to 2.1.2
problems?


 ~ Paul Nickerson

On Tue, Feb 10, 2015 at 3:34 PM, Chris Lohfink  wrote:

> Your cluster is probably having issues with compactions (with STCS you
> should never have this many).  I would probably punt with
> OpsCenter/rollups60. Turn the node off and move all of the sstables off to
> a different directory for backup (or just rm if you really don't care about
> 1 minute metrics), than turn the server back on.
>
> Once you get your cluster running again go back and investigate why
> compactions stopped, my guess is you hit an exception in past that killed
> your CompactionExecutor and things just built up slowly until you got to
> this point.
>
> Chris
>
> On Tue, Feb 10, 2015 at 2:15 PM, Paul Nickerson  wrote:
>
>> Thank you Rob. I tried a 12 GiB heap size, and still crashed out. There
>> are 1,617,289 files under OpsCenter/rollups60.
>>
>> Once I downgraded Cassandra to 2.1.1 (apt-get install cassandra=2.1.1), I
>> was able to start up Cassandra OK with the default heap size formula.
>>
>> Now my cluster is running multiple versions of Cassandra. I think I will
>> downgrade the rest to 2.1.1.
>>
>>  ~ Paul Nickerson
>>
>> On Tue, Feb 10, 2015 at 2:05 PM, Robert Coli 
>> wrote:
>>
>>> On Tue, Feb 10, 2015 at 11:02 AM, Paul Nickerson 
>>> wrote:
>>>
 I am getting an out of memory error why I try to start Cassandra on one
 of my nodes. Cassandra will run for a minute, and then exit without
 outputting any error in the log file. It is happening while SSTableReader
 is opening a couple hundred thousand things.

>>> ...
>>>
 Does anyone know how I might get Cassandra on this node running again?
 I'm not very familiar with correctly tuning Java memory parameters, and I'm
 not sure if that's the right solution in this case anyway.

>>>
>>> Try running 2.1.1, and/or increasing heap size beyond 8gb.
>>>
>>> Are there actually that many SSTables on disk?
>>>
>>> =Rob
>>>
>>>
>>
>>
>


Re: Out of Memory Error While Opening SSTables on Startup

2015-02-10 Thread Chris Lohfink
Your cluster is probably having issues with compactions (with STCS you
should never have this many).  I would probably punt with
OpsCenter/rollups60. Turn the node off and move all of the sstables off to
a different directory for backup (or just rm them if you really don't care
about 1-minute metrics), then turn the server back on.
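
A minimal sketch of that punt, assuming the Debian package layout and the
default data directory; the service name, the paths, and the table-ID suffix
are assumptions:

    sudo service cassandra stop
    mkdir -p /backup
    # keep the 1-minute rollups as a backup, or rm -rf them if they are disposable
    mv /var/lib/cassandra/data/OpsCenter/rollups60* /backup/
    sudo service cassandra start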

Once you get your cluster running again, go back and investigate why
compactions stopped. My guess is you hit an exception in the past that killed
your CompactionExecutor, and things just built up slowly until you got to
this point.
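
A couple of quick checks for that investigation, with the log path and format
assumed (a sketch, not an exhaustive list):

    # are compactions running or queued at all?
    nodetool compactionstats
    nodetool tpstats
    # look for errors thrown on the compaction threads
    grep "ERROR \[CompactionExecutor" /var/log/cassandra/system.log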

Chris

On Tue, Feb 10, 2015 at 2:15 PM, Paul Nickerson  wrote:

> Thank you Rob. I tried a 12 GiB heap size, and still crashed out. There
> are 1,617,289 files under OpsCenter/rollups60.
>
> Once I downgraded Cassandra to 2.1.1 (apt-get install cassandra=2.1.1), I
> was able to start up Cassandra OK with the default heap size formula.
>
> Now my cluster is running multiple versions of Cassandra. I think I will
> downgrade the rest to 2.1.1.
>
>  ~ Paul Nickerson
>
> On Tue, Feb 10, 2015 at 2:05 PM, Robert Coli  wrote:
>
>> On Tue, Feb 10, 2015 at 11:02 AM, Paul Nickerson 
>> wrote:
>>
>>> I am getting an out of memory error why I try to start Cassandra on one
>>> of my nodes. Cassandra will run for a minute, and then exit without
>>> outputting any error in the log file. It is happening while SSTableReader
>>> is opening a couple hundred thousand things.
>>>
>> ...
>>
>>> Does anyone know how I might get Cassandra on this node running again?
>>> I'm not very familiar with correctly tuning Java memory parameters, and I'm
>>> not sure if that's the right solution in this case anyway.
>>>
>>
>> Try running 2.1.1, and/or increasing heap size beyond 8gb.
>>
>> Are there actually that many SSTables on disk?
>>
>> =Rob
>>
>>
>
>


Re: Out of Memory Error While Opening SSTables on Startup

2015-02-10 Thread Paul Nickerson
Thank you Rob. I tried a 12 GiB heap size, and still crashed out. There
are 1,617,289 files under OpsCenter/rollups60.

Once I downgraded Cassandra to 2.1.1 (apt-get install cassandra=2.1.1), I
was able to start up Cassandra OK with the default heap size formula.

Now my cluster is running multiple versions of Cassandra. I think I will
downgrade the rest to 2.1.1.

 ~ Paul Nickerson
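
For anyone in the same state, a small sketch of those two steps, with the data
directory path assumed:

    # confirm the sstable explosion by counting files under the table directory
    find /var/lib/cassandra/data/OpsCenter/rollups60* -type f | wc -l
    # pin the package back to 2.1.1 on Debian/Ubuntu
    sudo apt-get install cassandra=2.1.1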

On Tue, Feb 10, 2015 at 2:05 PM, Robert Coli  wrote:

> On Tue, Feb 10, 2015 at 11:02 AM, Paul Nickerson  wrote:
>
>> I am getting an out of memory error why I try to start Cassandra on one
>> of my nodes. Cassandra will run for a minute, and then exit without
>> outputting any error in the log file. It is happening while SSTableReader
>> is opening a couple hundred thousand things.
>>
> ...
>
>> Does anyone know how I might get Cassandra on this node running again?
>> I'm not very familiar with correctly tuning Java memory parameters, and I'm
>> not sure if that's the right solution in this case anyway.
>>
>
> Try running 2.1.1, and/or increasing heap size beyond 8gb.
>
> Are there actually that many SSTables on disk?
>
> =Rob
>
>


Re: Out of Memory Error While Opening SSTables on Startup

2015-02-10 Thread Robert Coli
On Tue, Feb 10, 2015 at 11:02 AM, Paul Nickerson  wrote:

> I am getting an out of memory error why I try to start Cassandra on one of
> my nodes. Cassandra will run for a minute, and then exit without outputting
> any error in the log file. It is happening while SSTableReader is opening a
> couple hundred thousand things.
>
...

> Does anyone know how I might get Cassandra on this node running again? I'm
> not very familiar with correctly tuning Java memory parameters, and I'm not
> sure if that's the right solution in this case anyway.
>

Try running 2.1.1, and/or increasing heap size beyond 8gb.

Are there actually that many SSTables on disk?

=Rob
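
A minimal sketch of overriding the default heap calculation, assuming the
package's /etc/cassandra/cassandra-env.sh; the values are illustrative, not a
recommendation:

    # cassandra-env.sh: set both, then restart the node
    MAX_HEAP_SIZE="12G"
    HEAP_NEWSIZE="1200M"   # often sized around 100 MB per CPU core; an assumption here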


RE: Out of memory error

2012-06-10 Thread Prakrati Agrawal
The version is 1.1.0

Prakrati Agrawal | Developer - Big Data(I&D)| 9731648376 | www.mu-sigma.com

From: Dave Brosius [mailto:dbros...@mebigfatguy.com]
Sent: Monday, June 11, 2012 10:07 AM
To: user@cassandra.apache.org
Subject: Re: Out of memory error

What version of Cassandra?

might be related to https://issues.apache.org/jira/browse/CASSANDRA-4098



On 06/11/2012 12:07 AM, Prakrati Agrawal wrote:
Sorry

I ran list columnFamilyName; and it threw this error.

Thanks and Regards
Prakrati

From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Saturday, June 09, 2012 12:18 AM
To: user@cassandra.apache.org<mailto:user@cassandra.apache.org>
Subject: Re: Out of memory error

When you ask a question please include the query or function call you have 
made, and any other information that would help someone understand what you are 
trying to do.

Also, please list things you have already tried to work around the problem.

Cheers

-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 8/06/2012, at 9:04 PM, Prakrati Agrawal wrote:



Dear all,

When I try to list the entire data in my column family I get the following 
error:

Using default limit of 100
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at 
org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:140)
at 
org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at 
org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at 
org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at 
org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at 
org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at 
org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at 
org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:683)
at 
org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:667)
at 
org.apache.cassandra.cli.CliClient.executeList(CliClient.java:1373)
at 
org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:264)
at 
org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)

Please help me

Thanks and Regards
Prakrati

Re: Out of memory error

2012-06-10 Thread Dave Brosius

What version of Cassandra?

might be related to https://issues.apache.org/jira/browse/CASSANDRA-4098



On 06/11/2012 12:07 AM, Prakrati Agrawal wrote:


Sorry

I ran list columnFamilyName; and it threw this error.

Thanks and Regards

Prakrati

From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Saturday, June 09, 2012 12:18 AM
To: user@cassandra.apache.org
Subject: Re: Out of memory error

When you ask a question please include the query or function call you
have made, and any other information that would help someone understand
what you are trying to do.

Also, please list things you have already tried to work around the
problem.


Cheers

-

Aaron Morton

Freelance Developer

@aaronmorton

http://www.thelastpickle.com

On 8/06/2012, at 9:04 PM, Prakrati Agrawal wrote:



Dear all,

When I try to list the entire data in my column family I get the 
following error:


Using default limit of 100

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

at 
org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:140)


at 
org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)


at 
org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)


at 
org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)


at 
org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)


at 
org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)


at 
org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)


at 
org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:683)


at 
org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:667)


at 
org.apache.cassandra.cli.CliClient.executeList(CliClient.java:1373)


at 
org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:264)


at 
org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)


at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)

Please help me

Thanks and Regards

Prakrati







RE: Out of memory error

2012-06-10 Thread Prakrati Agrawal
Sorry

I ran list columnFamilyName; and it threw this error.

Thanks and Regards
Prakrati
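
One way to reduce how much the CLI pulls into its heap at once is to page with
an explicit row limit rather than listing everything; whether that is enough
depends on row width. A sketch, with the keyspace name and the exact 1.1-era
cassandra-cli options treated as assumptions:

    # run a small, bounded list instead of a full dump
    cat > /tmp/list_page.cli <<'EOF'
    use MyKeyspace;
    list columnFamilyName limit 10;
    EOF
    cassandra-cli -h localhost -f /tmp/list_page.cli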

From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Saturday, June 09, 2012 12:18 AM
To: user@cassandra.apache.org
Subject: Re: Out of memory error

When you ask a question please include the query or function call you have 
made, and any other information that would help someone understand what you are 
trying to do.

Also, please list things you have already tried to work around the problem.

Cheers

-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 8/06/2012, at 9:04 PM, Prakrati Agrawal wrote:


Dear all,

When I try to list the entire data in my column family I get the following 
error:

Using default limit of 100
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at 
org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:140)
at 
org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at 
org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at 
org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at 
org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at 
org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at 
org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at 
org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:683)
at 
org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:667)
at 
org.apache.cassandra.cli.CliClient.executeList(CliClient.java:1373)
at 
org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:264)
at 
org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)

Please help me

Thanks and Regards
Prakrati






Re: Out of memory error

2012-06-08 Thread aaron morton
When you ask a question please include the query or function call you have 
made, and any other information that would help someone understand what you are 
trying to do.

Also, please list things you have already tried to work around the problem. 

Cheers

-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 8/06/2012, at 9:04 PM, Prakrati Agrawal wrote:

> Dear all,
>  
> When I try to list the entire data in my column family I get the following 
> error:
>  
> Using default limit of 100
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:140)
> at 
> org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
> at 
> org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
> at 
> org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
> at 
> org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
> at 
> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
> at 
> org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
> at 
> org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:683)
> at 
> org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:667)
> at 
> org.apache.cassandra.cli.CliClient.executeList(CliClient.java:1373)
> at 
> org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:264)
> at 
> org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)
> at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)
>  
> Please help me
>  
> Thanks and Regards
> Prakrati
>  
>  
> 
> This email message may contain proprietary, private and confidential 
> information. The information transmitted is intended only for the person(s) 
> or entities to which it is addressed. Any review, retransmission, 
> dissemination or other use of, or taking of any action in reliance upon, this 
> information by persons or entities other than the intended recipient is 
> prohibited and may be illegal. If you received this in error, please contact 
> the sender and delete the message from your system.
> 
> Mu Sigma takes all reasonable steps to ensure that its electronic 
> communications are free from viruses. However, given Internet accessibility, 
> the Company cannot accept liability for any virus introduced by this e-mail 
> or any attachment and you are advised to use up-to-date virus checking 
> software.



Re: Out of memory error

2012-06-08 Thread shashwat shriparv
Check this slide,

http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera

Regards

∞
Shashwat Shriparv


On Fri, Jun 8, 2012 at 2:34 PM, Prakrati Agrawal <
prakrati.agra...@mu-sigma.com> wrote:

>  Dear all,
>
> When I try to list the entire data in my column family I get the following
> error: 
>
> Using default limit of 100
>
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
>
> at
> org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:140)
> 
>
> at
> org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
> 
>
> at
> org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>
> at
> org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
> 
>
> at
> org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
> 
>
> at
> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
> 
>
> at
> org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
>
> at
> org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:683)
> 
>
> at
> org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:667)
> 
>
> at
> org.apache.cassandra.cli.CliClient.executeList(CliClient.java:1373)
>
> at
> org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:264)
> 
>
> at
> org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)
> 
>
> at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)
> 
>
> Please help me
>
> Thanks and Regards
>
> Prakrati
>



-- 


∞
Shashwat Shriparv


Re: Out of memory error in cassandra

2011-07-12 Thread Jonathan Ellis
Then you'll want to use MAT to analyze the dump the JVM gave you of
the heap at OOM time.  (http://www.eclipse.org/mat/)
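
If the JVM is not already configured to produce that dump, a minimal sketch of
the relevant HotSpot options, added to the JVM options in cassandra-env.sh
(the dump path is a placeholder):

    # write a heap dump that MAT can open the next time an OOM occurs
    JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
    JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=/var/lib/cassandra/heapdump"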

On Tue, Jul 12, 2011 at 3:22 PM, Anurag Gujral  wrote:
> Hi Jonathan,
>     Thanks for  your mail. But no-one of the things
> mentioned in the link pertains to OOM error I we are seeing.
> thanks
> Anurag
>
> On Tue, Jul 12, 2011 at 10:42 AM, Jonathan Ellis  wrote:
>>
>> Have you seen
>> http://www.datastax.com/docs/0.8/troubleshooting/index#nodes-are-dying-with-oom-errors
>> ?
>>
>> On Mon, Jul 11, 2011 at 1:55 PM, Anurag Gujral 
>> wrote:
>> > Hi All,
>> >    I am getting following error from cassandra:
>> > ERROR [ReadStage:23] 2011-07-10 17:19:18,300
>> > DebuggableThreadPoolExecutor.java (line 103) Error in ThreadPoolExecutor
>> > java.lang.OutOfMemoryError: Java heap space
>> >     at
>> >
>> > org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:49)
>> >     at
>> >
>> > org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:30)
>> >     at
>> >
>> > org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:117)
>> >     at
>> >
>> > org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:94)
>> >     at
>> >
>> > org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:107)
>> >     at
>> >
>> > org.apache.cassandra.db.columniterator.SSTableNamesIterator.(SSTableNamesIterator.java:72)
>> >     at
>> >
>> > org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:59)
>> >     at
>> >
>> > org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
>> >     at
>> >
>> > org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1311)
>> >     at
>> >
>> > org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1203)
>> >     at
>> >
>> > org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1131)
>> >     at org.apache.cassandra.db.Table.getRow(Table.java:333)
>> >     at
>> >
>> > org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:60)
>> >     at
>> > org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:69)
>> >     at
>> >
>> > org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:72)
>> >     at
>> >
>> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>> >     at
>> >
>> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>> >     at java.lang.Thread.run(Thread.java:636)
>> >  INFO [ScheduledTasks:1] 2011-07-10 17:19:18,306 StatusLogger.java (line
>> > 66)
>> > RequestResponseStage  0 0
>> > ERROR [ReadStage:23] 2011-07-10 17:19:18,306
>> > AbstractCassandraDaemon.java
>> > (line 114) Fatal exception in thread Thread[ReadStage:23,5,main]
>> > java.lang.OutOfMemoryError: Java heap space
>> >     at
>> >
>> > org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:49)
>> >     at
>> >
>> > org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:30)
>> >     at
>> >
>> > org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:117)
>> >     at
>> >
>> > org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:94)
>> >     at
>> >
>> > org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:107)
>> >     at
>> >
>> > org.apache.cassandra.db.columniterator.SSTableNamesIterator.(SSTableNamesIterator.java:72)
>> >     at
>> >
>> > org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:59)
>> >     at
>> >
>> > org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
>> >     at
>> >
>> > org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1311)
>> >     at
>> >
>> > org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1203)
>> >     at
>> >
>> > org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1131)
>> >
>> >
>> > Can someone please help debug this? The maximum heap size is 28G .
>> >
>> > I am not sure why cassandra is giving Out of memory error here.
>> >
>> > Thanks
>> > Anurag
>> >
>>
>>
>>
>> --
>> Jonathan Ellis
>> Project Chair, Apache Cassandra
>> co-founder of DataStax, the source for professional Cassandra support
>> http://www.datastax.com
>
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com


Re: Out of memory error in cassandra

2011-07-12 Thread Anurag Gujral
Hi Jonathan,
Thanks for your mail, but none of the things mentioned in the link pertain to
the OOM error we are seeing.
Thanks
Anurag

On Tue, Jul 12, 2011 at 10:42 AM, Jonathan Ellis  wrote:

> Have you seen
> http://www.datastax.com/docs/0.8/troubleshooting/index#nodes-are-dying-with-oom-errors
> ?
>
> On Mon, Jul 11, 2011 at 1:55 PM, Anurag Gujral 
> wrote:
> > Hi All,
> >I am getting following error from cassandra:
> > ERROR [ReadStage:23] 2011-07-10 17:19:18,300
> > DebuggableThreadPoolExecutor.java (line 103) Error in ThreadPoolExecutor
> > java.lang.OutOfMemoryError: Java heap space
> > at
> >
> org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:49)
> > at
> >
> org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:30)
> > at
> >
> org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:117)
> > at
> >
> org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:94)
> > at
> >
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:107)
> > at
> >
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.(SSTableNamesIterator.java:72)
> > at
> >
> org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:59)
> > at
> >
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
> > at
> >
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1311)
> > at
> >
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1203)
> > at
> >
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1131)
> > at org.apache.cassandra.db.Table.getRow(Table.java:333)
> > at
> >
> org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:60)
> > at
> > org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:69)
> > at
> >
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:72)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> > at java.lang.Thread.run(Thread.java:636)
> >  INFO [ScheduledTasks:1] 2011-07-10 17:19:18,306 StatusLogger.java (line
> 66)
> > RequestResponseStage  0 0
> > ERROR [ReadStage:23] 2011-07-10 17:19:18,306 AbstractCassandraDaemon.java
> > (line 114) Fatal exception in thread Thread[ReadStage:23,5,main]
> > java.lang.OutOfMemoryError: Java heap space
> > at
> >
> org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:49)
> > at
> >
> org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:30)
> > at
> >
> org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:117)
> > at
> >
> org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:94)
> > at
> >
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:107)
> > at
> >
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.(SSTableNamesIterator.java:72)
> > at
> >
> org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:59)
> > at
> >
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
> > at
> >
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1311)
> > at
> >
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1203)
> > at
> >
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1131)
> >
> >
> > Can someone please help debug this? The maximum heap size is 28G .
> >
> > I am not sure why cassandra is giving Out of memory error here.
> >
> > Thanks
> > Anurag
> >
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of DataStax, the source for professional Cassandra support
> http://www.datastax.com
>


Re: Out of memory error in cassandra

2011-07-12 Thread Jonathan Ellis
Have you seen 
http://www.datastax.com/docs/0.8/troubleshooting/index#nodes-are-dying-with-oom-errors
?

On Mon, Jul 11, 2011 at 1:55 PM, Anurag Gujral  wrote:
> Hi All,
>    I am getting following error from cassandra:
> ERROR [ReadStage:23] 2011-07-10 17:19:18,300
> DebuggableThreadPoolExecutor.java (line 103) Error in ThreadPoolExecutor
> java.lang.OutOfMemoryError: Java heap space
>     at
> org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:49)
>     at
> org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:30)
>     at
> org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:117)
>     at
> org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:94)
>     at
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:107)
>     at
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.(SSTableNamesIterator.java:72)
>     at
> org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:59)
>     at
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
>     at
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1311)
>     at
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1203)
>     at
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1131)
>     at org.apache.cassandra.db.Table.getRow(Table.java:333)
>     at
> org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:60)
>     at
> org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:69)
>     at
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:72)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>     at java.lang.Thread.run(Thread.java:636)
>  INFO [ScheduledTasks:1] 2011-07-10 17:19:18,306 StatusLogger.java (line 66)
> RequestResponseStage  0 0
> ERROR [ReadStage:23] 2011-07-10 17:19:18,306 AbstractCassandraDaemon.java
> (line 114) Fatal exception in thread Thread[ReadStage:23,5,main]
> java.lang.OutOfMemoryError: Java heap space
>     at
> org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:49)
>     at
> org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:30)
>     at
> org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:117)
>     at
> org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:94)
>     at
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:107)
>     at
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.(SSTableNamesIterator.java:72)
>     at
> org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:59)
>     at
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
>     at
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1311)
>     at
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1203)
>     at
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1131)
>
>
> Can someone please help debug this? The maximum heap size is 28G .
>
> I am not sure why cassandra is giving Out of memory error here.
>
> Thanks
> Anurag
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com


Re: Out of memory error in cassandra

2011-07-11 Thread Jeffrey Kesselman
Are you on a 64-bit VM?  A 32-bit VM will basically ignore any setting over
2GB.
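
A quick way to confirm which JVM the node actually runs (the output wording
varies by vendor and version; this is a sketch):

    java -version
    # a 64-bit HotSpot JVM reports a line like:
    #   Java HotSpot(TM) 64-Bit Server VM (build ..., mixed mode)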

On Mon, Jul 11, 2011 at 4:55 PM, Anurag Gujral wrote:

> Hi All,
>I am getting following error from cassandra:
> ERROR [ReadStage:23] 2011-07-10 17:19:18,300
> DebuggableThreadPoolExecutor.java (line 103) Error in ThreadPoolExecutor
> java.lang.OutOfMemoryError: Java heap space
> at
> org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:49)
> at
> org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:30)
> at
> org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:117)
> at
> org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:94)
> at
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:107)
> at
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.(SSTableNamesIterator.java:72)
> at
> org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:59)
> at
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
> at
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1311)
> at
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1203)
> at
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1131)
> at org.apache.cassandra.db.Table.getRow(Table.java:333)
> at
> org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:60)
> at
> org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:69)
> at
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:72)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:636)
>  INFO [ScheduledTasks:1] 2011-07-10 17:19:18,306 StatusLogger.java (line
> 66) RequestResponseStage  0 0
> ERROR [ReadStage:23] 2011-07-10 17:19:18,306 AbstractCassandraDaemon.java
> (line 114) Fatal exception in thread Thread[ReadStage:23,5,main]
> java.lang.OutOfMemoryError: Java heap space
> at
> org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:49)
> at
> org.apache.cassandra.utils.BloomFilterSerializer.deserialize(BloomFilterSerializer.java:30)
> at
> org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:117)
> at
> org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:94)
> at
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:107)
> at
> org.apache.cassandra.db.columniterator.SSTableNamesIterator.(SSTableNamesIterator.java:72)
> at
> org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:59)
> at
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
> at
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1311)
> at
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1203)
> at
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1131)
>
>
> Can someone please help debug this? The maximum heap size is 28G .
>
> I am not sure why cassandra is giving Out of memory error here.
>
> Thanks
> Anurag
>



-- 
It's always darkest just before you are eaten by a grue.