If you are using ext3 there is a hard limit of about 32K on the number
of subdirectories in a directory; ext4 has a much higher limit (I can't
remember exactly). So it's true that having many files is not a problem
for the file system, though your VFS cache could be less efficient since
you would have a higher inode-to-data ratio.
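(A quick illustration of that inode-to-data point, not from the original thread: a directory full of small files spends many inodes per byte of data, which is what makes the VFS cache work harder. The helper name below is mine.)

```python
import os
import tempfile

def inode_data_ratio(path):
    """Return (file_count, total_bytes, avg_bytes_per_file) for the
    regular files directly inside `path` -- a rough proxy for the
    inode-to-data ratio: many small files mean more inodes per byte."""
    sizes = [e.stat().st_size for e in os.scandir(path) if e.is_file()]
    total = sum(sizes)
    count = len(sizes)
    return count, total, (total / count if count else 0.0)

# Demo on a throwaway directory holding 5 files of 100 bytes each.
d = tempfile.mkdtemp()
for i in range(5):
    with open(os.path.join(d, f"f{i}"), "wb") as f:
        f.write(b"x" * 100)
print(inode_data_ratio(d))  # (5, 500, 100.0)
```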

Edward

On Mon, Sep 24, 2012 at 7:03 PM, Aaron Turner <synfina...@gmail.com> wrote:
> On Mon, Sep 24, 2012 at 10:02 AM, Віталій Тимчишин <tiv...@gmail.com> wrote:
>> Why so?
>> What are pluses and minuses?
>> As for me, I am looking at the number of files in a directory.
>> 700GB/512MB*5 (files per SSTable) = ~7000 files, which is OK in my view.
>> 700GB/5MB*5 = ~700000 files, which is too many for a single directory,
>> too much memory used for SSTable metadata, and too huge a compaction
>> queue (which leads to strange pauses, I suppose because the compactor
>> is deciding what to compact next), ...
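(Vitalii's file-count arithmetic can be reproduced in a couple of lines; the five-files-per-SSTable factor is his assumption from the post.)

```python
data_mb = 700 * 1024        # ~700GB per node, in MB
files_per_sstable = 5       # assumption from the post (Data, Index, etc.)

# 512MB sstables: 700GB / 512MB * 5
print(data_mb // 512 * files_per_sstable)   # 7000
# 5MB sstables: 700GB / 5MB * 5
print(data_mb // 5 * files_per_sstable)     # 716800, i.e. ~700000
```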
>
>
> Not sure why a lot of files is a problem... modern filesystems deal
> with that pretty well.
>
> Really large sstables mean that compactions take a lot more disk IO
> and time to complete.  Remember, Leveled Compaction is more disk IO
> intensive, so using large sstables makes that even worse.  This is a
> big reason why the default is 5MB.  Also, each level is 10x the size
> of the previous level, and for leveled compaction you need free space
> worth 10x the sstable size to do compactions.  So now you need 5GB of
> free disk, vs 50MB of free disk.
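(Sketching Aaron's free-space arithmetic: the 10x fan-out is the per-level growth factor he describes, and the helper name is mine.)

```python
def free_space_needed_mb(sstable_size_mb, fan_out=10):
    """Rough free-disk requirement for a leveled compaction:
    up to `fan_out` sstables can be merged at once, so you need
    that many sstables' worth of headroom."""
    return sstable_size_mb * fan_out

print(free_space_needed_mb(512))  # 5120 MB, i.e. ~5GB
print(free_space_needed_mb(5))    # 50 MB
```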
>
> Also, if you're doing deletes in those CFs, that old, deleted data is
> going to stick around a LOT longer with 512MB files, because it can't
> get purged until you have 10x 512MB files to compact to level 2.  And
> if it isn't purged then, each subsequent level is 10x bigger, so you
> end up waiting a LOT longer to actually remove that data from disk.
>
> Now, if you're using SSDs then larger sstables are probably doable,
> but even then I'd guesstimate 50MB is far more reasonable than 512MB.
>
> -Aaron
>
>
>> 2012/9/23 Aaron Turner <synfina...@gmail.com>
>>>
>>> On Sun, Sep 23, 2012 at 8:18 PM, Віталій Тимчишин <tiv...@gmail.com>
>>> wrote:
>>> > If you think about space, use Leveled compaction! It will not only
>>> > let you use more of your disk, but will also shrink your data much
>>> > faster in case of updates. Size-tiered compaction can leave you using
>>> > 3x-4x more space than there is live data. Consider the following
>>> > simplified scenario from our setup:
>>> > 1) The data is updated weekly
>>> > 2) Each week a large SSTable is written (say, 300GB) after full update
>>> > processing.
>>> > 3) In 3 weeks you will have 1.2TB of data in 3 large SSTables.
>>> > 4) Only after the 4th week will they all be compacted into one 300GB
>>> > SSTable.
>>> >
>>> > Leveled compaction has tamed space usage for us. Note that you should
>>> > set sstable_size_in_mb to a reasonably high value (it is 512 for us,
>>> > with ~700GB per node) to prevent creating a lot of small files.
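(For anyone wanting to try this: the sstable size for Leveled compaction is a per-table compaction option. A sketch in CQL, where `ks.cf` is a placeholder for your keyspace and table; older Cassandra versions set the same option through cassandra-cli instead.)

```cql
ALTER TABLE ks.cf
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 512};
```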
>>>
>>> 512MB per sstable?  Wow, that's freaking huge.  From my conversations
>>> with various developers, 5-10MB seems far more reasonable.  I guess it
>>> really depends on your usage patterns, but that seems excessive to me,
>>> especially as sstables are promoted.
>>>
>>
>> --
>> Best regards,
>>  Vitalii Tymchyshyn
>
>
>
> --
> Aaron Turner
> http://synfin.net/         Twitter: @synfinatic
> http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & 
> Windows
> Those who would give up essential Liberty, to purchase a little temporary
> Safety, deserve neither Liberty nor Safety.
>     -- Benjamin Franklin
> "carpe diem quam minimum credula postero"
