Paul Nickerson,
curious, did you get a solution to your problem?
Regards, Jan
On Tuesday, February 10, 2015 5:48 PM, Flavien Charlon wrote:
I already experienced the same problem (hundreds of thousands of SSTables)
with Cassandra 2.1.2. It seems to appear when running an incremental repair
while there is a medium to high insert load on the cluster. The repair goes
in a bad state and starts creating way more SSTables than it should (eve
This kind of recovery is definitely not my strong point, so feedback on
this approach would certainly be welcome.
As I understand it, if you really want to keep that data, you ought to be
able to mv it out of the way to get your node online, then move those files
back in several thousand at a time, n
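A minimal sketch of that batch-move recovery, written as a small helper; the function name and the example paths are placeholders, not taken from the thread:

```shell
# Move files back into the data directory a batch at a time, so the
# node is not asked to open over a million sstables at once.
move_batch() {
    src=$1; dst=$2; n=$3
    # Take the first $n regular files from $src and move them into $dst.
    find "$src" -maxdepth 1 -type f | head -n "$n" | while read -r f; do
        mv "$f" "$dst/"
    done
}

# Example only (adjust paths for your install):
# move_batch /var/lib/cassandra/data.bak /var/lib/cassandra/data/OpsCenter/rollups60 5000
```

Repeat the call (with compactions catching up in between) until the backup directory is drained.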
yeah... probably just 2.1.2 things and not compactions. Still probably
want to do something about the 1.6 million files though. It may be worth
just mv/rm'ing to 60 sec rollup data though unless really attached to it.
Chris
On Tue, Feb 10, 2015 at 4:04 PM, Paul Nickerson wrote:
I was having trouble with snapshots failing while trying to repair that
table (http://www.mail-archive.com/user@cassandra.apache.org/msg40686.html).
I have a repair running on it now, and it seems to be going successfully
this time. I am going to wait for that to finish, then try a manual nodetool
Your cluster is probably having issues with compactions (with STCS you
should never have this many). I would probably punt with
OpsCenter/rollups60. Turn the node off and move all of the sstables off to
a different directory for backup (or just rm if you really don't care about
1 minute metrics),
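A sketch of that punt as a small helper, so the paths stay explicit; the Debian-style paths in the example are assumptions, not from the thread, and the node should be stopped first:

```shell
# Set a keyspace's sstable files aside into a backup directory before
# restarting the node (or rm them outright if the 1-minute rollup
# metrics are expendable).
backup_sstables() {
    src=$1; dst=$2
    mkdir -p "$dst"
    # Move every regular file at the top of the keyspace directory.
    find "$src" -maxdepth 1 -type f -exec mv {} "$dst/" \;
}

# Example, assuming a Debian package layout:
# backup_sstables /var/lib/cassandra/data/OpsCenter/rollups60 /var/backups/rollups60
```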
Thank you Rob. I tried a 12 GiB heap size, and still crashed out. There
are 1,617,289 files under OpsCenter/rollups60.
Once I downgraded Cassandra to 2.1.1 (apt-get install cassandra=2.1.1), I
was able to start up Cassandra OK with the default heap size formula.
Now my cluster is running multiple
On Tue, Feb 10, 2015 at 11:02 AM, Paul Nickerson wrote:
> I am getting an out of memory error when I try to start Cassandra on one of
> my nodes. Cassandra will run for a minute, and then exit without outputting
> any error in the log file. It is happening while SSTableReader is opening a
> couple
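For reference, a file count like the 1,617,289 reported above can be checked with a one-liner; the rollups60 path is from the thread, but the data directory location varies by install:

```shell
# Count the files under a keyspace/table directory.
count_files() {
    find "$1" -type f | wc -l
}

# e.g. count_files /var/lib/cassandra/data/OpsCenter/rollups60
```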
The version is 1.1.0
Prakrati Agrawal | Developer - Big Data(I&D)| 9731648376 | www.mu-sigma.com
From: Dave Brosius [mailto:dbros...@mebigfatguy.com]
Sent: Monday, June 11, 2012 10:07 AM
To: user@cassandra.apache.org
Subject: Re: Out of memory error
What version of Cassandra?
might be rel
Sorry
I ran list columnFamilyName; and it threw this error.
Thanks and Regards
Prakrati
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Saturday, June 09, 2012 12:18 AM
To: user@cassandra.apache.org
Subject: Re: Out of memory error
When you ask a question please include the query or function call you have
made, and any other information that would help someone understand what you are
trying to do.
Also, please list things you have already tried to work around the problem.
Cheers
-
Aaron Morton
Freelance
Check this slide,
http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera
Regards
∞
Shashwat Shriparv
On Fri, Jun 8, 2012 at 2:34 PM, Prakrati Agrawal <
prakrati.agra...@mu-sigma.com> wrote:
> Dear all,
>
>
> When I try to list the entire data in my colum
Then you'll want to use MAT to analyze the dump the JVM gave you of
the heap at OOM time. (http://www.eclipse.org/mat/)
On Tue, Jul 12, 2011 at 3:22 PM, Anurag Gujral wrote:
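For MAT to have anything to open, the JVM must be told to write a heap dump on OOM. A hedged cassandra-env.sh fragment: the flags are standard HotSpot options, but the dump path is an assumption to adjust for your node.

```shell
# Append to the JVM options (e.g. in cassandra-env.sh) so the JVM
# writes an .hprof file for MAT when it runs out of memory.
JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=/var/lib/cassandra/java_oom.hprof"
```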
Hi Jonathan,
Thanks for your mail, but none of the things
mentioned in the link pertains to the OOM error we are seeing.
thanks
Anurag
On Tue, Jul 12, 2011 at 10:42 AM, Jonathan Ellis wrote:
Have you seen
http://www.datastax.com/docs/0.8/troubleshooting/index#nodes-are-dying-with-oom-errors
?
On Mon, Jul 11, 2011 at 1:55 PM, Anurag Gujral wrote:
> Hi All,
> I am getting the following error from cassandra:
> ERROR [ReadStage:23] 2011-07-10 17:19:18,300
> DebuggableThreadPoolEx
Are you on a 64 bit VM? A 32 bit VM will basically ignore any setting over
2GB.
On Mon, Jul 11, 2011 at 4:55 PM, Anurag Gujral wrote:
> Hi All,
> I am getting the following error from cassandra:
> ERROR [ReadStage:23] 2011-07-10 17:19:18,300
> DebuggableThreadPoolExecutor.java (line 103) E