Sorry, I forgot the following:

I am using a single node with replication factor 1 and the random
partitioner, and I am doing a multiget_slice with 10 keys.  Yesterday,
in the old setup, I only used get_slice [1].

Maybe Cassandra opens files in parallel for all the keys?
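
For reference, the failing call now looks roughly like the sketch
below.  This is a minimal reconstruction against the 0.5 Thrift
interface, not my exact code: the keys and slice parameters are
placeholders, and the package and field names are from memory, so
treat them as assumptions.  Keyspace Archive and column family
MessageIndex are taken from the stack trace quoted below.

  import java.util.Arrays;
  import java.util.List;
  import java.util.Map;

  import org.apache.cassandra.service.Cassandra;
  import org.apache.cassandra.service.ColumnOrSuperColumn;
  import org.apache.cassandra.service.ColumnParent;
  import org.apache.cassandra.service.ConsistencyLevel;
  import org.apache.cassandra.service.SlicePredicate;
  import org.apache.cassandra.service.SliceRange;
  import org.apache.thrift.protocol.TBinaryProtocol;
  import org.apache.thrift.transport.TSocket;

  public class MultigetTest {
      public static void main(String[] args) throws Exception {
          TSocket transport = new TSocket("localhost", 9160);
          Cassandra.Client client =
                  new Cassandra.Client(new TBinaryProtocol(transport));
          transport.open();

          // Slice over all columns of each row
          // (empty start/finish = unbounded range).
          SliceRange range = new SliceRange();
          range.start = new byte[0];
          range.finish = new byte[0];
          range.reversed = false;
          range.count = 1000;
          SlicePredicate predicate = new SlicePredicate();
          predicate.slice_range = range;

          ColumnParent parent = new ColumnParent();
          parent.column_family = "MessageIndex";

          // Ten keys in one call; the server may touch every SSTable of
          // the column family once per key, so a single multiget can
          // open many files at once.
          List<String> keys = Arrays.asList("k0", "k1", "k2", "k3", "k4",
                                            "k5", "k6", "k7", "k8", "k9");

          Map<String, List<ColumnOrSuperColumn>> rows =
                  client.multiget_slice("Archive", keys, parent,
                                        predicate, ConsistencyLevel.ONE);
          System.out.println("got " + rows.size() + " rows");

          transport.close();
      }
  }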

Thanks,
  Martin

[1] The switch to version 0.5.0 (pre) was prompted by an NPE that the
    older version threw on multiget_slice.

> -----Original Message-----
> From: Dr. Martin Grabmüller [mailto:martin.grabmuel...@eleven.de] 
> Sent: Friday, January 22, 2010 11:27 AM
> To: cassandra-user@incubator.apache.org
> Subject: Too many open files
> 
> Hello all,
> 
> I am using Cassandra for storing mail data, and after
> filling my test installation with data over the night,
> I got "Too many open files" errors.  I checked JIRA for
> known bugs, but found nothing that matched my setup.
> 
> I am using version 0.5.0 from http://people.apache.org/~eevans/
> and the exception looks like this:
> 
> ERROR - Fatal exception in thread Thread[ROW-READ-STAGE:33,5,main]
> java.lang.RuntimeException: java.io.FileNotFoundException:
> /mnt/data000/cassandra/data/Archive/MessageIndex-844-Data.db
> (Too many open files)
>         at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:117)
>         at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:38)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:636)
> Caused by: java.io.FileNotFoundException:
> /mnt/data000/cassandra/data/Archive/MessageIndex-844-Data.db
> (Too many open files)
>         at java.io.RandomAccessFile.open(Native Method)
>         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
>         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:118)
>         at org.apache.cassandra.io.BufferedRandomAccessFile.<init>(BufferedRandomAccessFile.java:142)
>         at org.apache.cassandra.db.filter.SSTableSliceIterator$ColumnGroupReader.<init>(SSTableSliceIterator.java:123)
>         at org.apache.cassandra.db.filter.SSTableSliceIterator.<init>(SSTableSliceIterator.java:58)
>         at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:63)
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamilyInternal(ColumnFamilyStore.java:1245)
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1203)
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1172)
>         at org.apache.cassandra.db.Table.getRow(Table.java:422)
>         at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:59)
>         at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:79)
>         ... 4 more
> 
> As can be seen from the trace, the exception happens when trying to
> read from Cassandra.
> 
> I have 789 data files in my data directory (plus the same number
> of index and filter files).
> 
> The obvious call to lsof did not give me any insight (with 2271 being
> my Cassandra instance's pid):
> 
>   (env)cassan...@archive00001:~$ lsof -p 2271|wc -l
>   101
> 
> Maybe the per-process file limit is reached while scanning all the
> data files?  With 789 SSTables of three files each, a read that
> touches everything could need on the order of 2400 descriptors,
> which would exceed a typical default limit of 1024.
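> 
> One thing I will try next is reading the JVM's descriptor counters
> over JMX from the java.lang:type=OperatingSystem bean.  A minimal
> sketch: the OpenFileDescriptorCount/MaxFileDescriptorCount
> attributes are Sun/Oracle-JVM specific, and the JMX port below is
> an assumption (whatever the node's cassandra.in.sh configures):
> 
>   import javax.management.MBeanServerConnection;
>   import javax.management.ObjectName;
>   import javax.management.remote.JMXConnector;
>   import javax.management.remote.JMXConnectorFactory;
>   import javax.management.remote.JMXServiceURL;
> 
>   public class FdCheck {
>       public static void main(String[] args) throws Exception {
>           // Port 8080 is an assumption; adjust to the JMX port the
>           // node actually exposes.
>           JMXServiceURL url = new JMXServiceURL(
>                   "service:jmx:rmi:///jndi/rmi://localhost:8080/jmxrmi");
>           JMXConnector jmxc = JMXConnectorFactory.connect(url, null);
>           MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
> 
>           // Open vs. maximum file descriptors of the Cassandra JVM.
>           ObjectName os = new ObjectName("java.lang:type=OperatingSystem");
>           Object open = mbs.getAttribute(os, "OpenFileDescriptorCount");
>           Object max = mbs.getAttribute(os, "MaxFileDescriptorCount");
>           System.out.println("open fds: " + open + " / max: " + max);
> 
>           jmxc.close();
>       }
>   }
> 
> That should show whether the count actually climbs to the limit
> during a multiget, which the lsof snapshot above apparently missed.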
> 
> Nevertheless, yesterday, before upgrading, Cassandra was happily
> handling more than 8000 data files in its data directory.  That was
> with the development snapshot
> apache-cassandra-incubating-2010-01-06_12-47-49 from the snapshot
> download page.
> 
> I am looking for advice on how to debug this.
> 
> Thanks,
>   Martin
> 
