This means you need to raise the nproc limit (max user processes) for the user you run Cassandra as.
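On CentOS 6 a common culprit is /etc/security/limits.d/90-nproc.conf, which caps non-root users at 1024 processes, and every Java thread counts against that cap. A rough sketch of the fix, assuming the daemon runs as a "cassandra" user (check with ps; the drop-in file name, user name and limit value below are just examples, adjust to your setup):

    # see what the running JVM is actually allowed (assumes a single matching PID)
    cat /proc/$(pgrep -f CassandraDaemon)/limits | grep -i processes

    # e.g. /etc/security/limits.d/cassandra.conf (hypothetical file name)
    cassandra  soft  nproc  32768
    cassandra  hard  nproc  32768

Restart Cassandra (or log the user out and back in) so the new limit is picked up. Since "unable to create new native thread" means the OS refused to spawn a thread rather than the heap filling up, raising MAX_HEAP_SIZE won't make a difference here.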

On Mon, Jun 25, 2012 at 8:48 AM, Oli Schacher <cassan...@lists.wgwh.ch> wrote:

> Hi list
>
> I have a small cassandra cluster consisting of three nodes. Every few
> weeks the whole cluster goes down at the same time. All nodes show:
>
> java.lang.OutOfMemoryError: unable to create new native thread
>        at java.lang.Thread.start0(Native Method)
>        at java.lang.Thread.start(Thread.java:691)
>        at
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:943)
>        at
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1336)
>        at
> org.apache.cassandra.thrift.CustomTThreadPoolServer.serve(CustomTThreadPoolServer.java:104)
>        at
> org.apache.cassandra.thrift.CassandraDaemon$ThriftServer.run(CassandraDaemon.java:214)
>
> There are no other log messages shortly before the crash.
>
> I don't have much experience with cassandra, so I probably forgot to
> configure an important memory parameter. But before I screw things up
> even more, I hope someone on the list can point me in the right
> direction.
>
> Hardware:
> Each Node runs on two Intel Xeon CPU E5645  @ 2.40GHz (6 physical cores
> per CPU, 12 total), 12 Gig memory
>
> Software:
> DataStax Cassandra 1.1, on CentOS 6
>
> Clients:
> 10 linux servers, all of them connecting using pycassa. total of 10-30
> writes / sec
>
> I haven't changed any memory settings from the defaults, except for
> uncommenting
> MAX_HEAP_SIZE="4G"
> HEAP_NEWSIZE="800M"
> in cassandra-env.sh; this hasn't made a difference, though.
>
> Any hints would be appreciated.
>
> Thanks,
> Oli
>
>
>


-- 
http://twitter.com/tjake
