Thanks Kai,

One approach discussed in that post is disabling slab allocation.
What are the consequences, besides lower GC performance?
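For context, in C* 2.1 slab allocation is controlled by the memtable allocation strategy in cassandra.yaml; a sketch of the relevant setting (option names as shipped in the 2.1 default config, values here chosen for illustration):

```yaml
# cassandra.yaml (C* 2.1) -- memtable allocation strategy.
# heap_buffers           : on-heap slab allocation (default)
# unslabbed_heap_buffers : on-heap, slab allocation disabled
# offheap_buffers        : cell values held in off-heap buffers
# offheap_objects        : cells fully off-heap
memtable_allocation_type: unslabbed_heap_buffers

# Space caps for memtables; defaults derive from heap size if unset.
memtable_heap_space_in_mb: 2048
memtable_offheap_space_in_mb: 2048
```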


Kai Wang <dep...@gmail.com> wrote on Wed, Apr 6, 2016 at 5:40 AM:

> Once in a while the question of table count comes up on this list. The
> most recent thread is
> https://groups.google.com/forum/#!topic/nosql-databases/IblAhiLUXdk
>
> In short, C* is not designed to scale with the table count. For one thing,
> each table/CF has a fixed memory footprint on *ALL* nodes. The consensus
> is that you shouldn't have more than a few hundred tables.
>
> On Mon, Apr 4, 2016 at 10:17 AM, jason zhao yang <
> zhaoyangsingap...@gmail.com> wrote:
>
>> Hi,
>>
>> This is Jason.
>>
>> Currently I am using C* 2.1.10, and I want to ask: what is the optimal
>> number of tables to create in one cluster?
>>
>> My use case is that I will prepare a keyspace for each of my tenants, and
>> every tenant will create the tables they need. Assuming each tenant
>> creates 50 tables with a normal workload (half read, half write), how
>> many tenants can I support in one cluster?
>>
>> I know there are a few issues related to a large number of tables:
>> * frequent GC
>> * frequent flushes due to insufficient memory
>> * high latency when modifying table schemas
>> * large numbers of tombstones generated when creating tables
>>
>> Are there any other issues with a large number of tables? Using a 32GB
>> instance, I can easily create 4000 tables with off-heap memtables.
>>
>> By the way, is this table limitation addressed in 3.x?
>>
>> Thank you very much.
>>
>>
>
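For a rough sense of the per-node cost behind the "a few hundred tables" advice, here is a back-of-envelope sketch. The ~1 MB fixed heap overhead per table is an assumed ballpark (it varies by version and configuration), and the helper name is my own:

```python
# Back-of-envelope estimate of the fixed per-node heap cost of table
# metadata/memtables in the multi-tenant scheme described above.
# ASSUMPTION: ~1 MB of heap per table -- a rough ballpark, not measured.
PER_TABLE_OVERHEAD_MB = 1
TABLES_PER_TENANT = 50

def estimated_heap_mb(tenants,
                      tables_per_tenant=TABLES_PER_TENANT,
                      per_table_mb=PER_TABLE_OVERHEAD_MB):
    """Fixed heap cost (MB, per node) for the given number of tenants."""
    return tenants * tables_per_tenant * per_table_mb

# e.g. 80 tenants -> 4000 tables -> roughly 4000 MB of heap per node,
# a large slice of a typical 8 GB heap before any actual data arrives.
print(estimated_heap_mb(80))
```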