On 9/21/2015 1:31 AM, David Laight wrote:
From: Santosh Shilimkar
Sent: 20 September 2015 00:05
Even with a per-bucket locking scheme, on a massively parallel
system with active RDS sockets that can number in the tens of
thousands, the rds_bind_lookup() workload is significant because
of the small hash-table size.

Testing showed a modest but still worthwhile reduction in
rds_bind_lookup() time with larger hash tables:

        Hashtable       Baseline(1k)    Delta
        2048:           8.28%           -2.45%
        4096:           8.28%           -4.60%
        8192:           8.28%           -6.46%
        16384:          8.28%           -6.75%

Based on the data, we set 8K as the bind hash-table size.
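The scheme described above can be sketched in userspace as a fixed power-of-two array of buckets, each with its own lock so lookups on different buckets never contend. This is an illustrative model only: the identifiers (bind_hash, BIND_HASH_SIZE, struct bind_bucket) and the multiplicative hash are assumptions, not the actual net/rds/bind.c code, which uses a kernel hlist and spinlock per bucket.

```c
/* Minimal userspace sketch of a fixed-size bind hash table with
 * per-bucket locks.  Names and the hash function are illustrative,
 * not the real net/rds/bind.c implementation. */
#include <pthread.h>
#include <stdint.h>

#define BIND_HASH_SIZE 8192            /* 8K buckets, as chosen above */

struct bind_bucket {
	pthread_mutex_t lock;          /* per-bucket lock: lookups on
	                                * different buckets don't contend */
	void *head;                    /* chain of bound sockets */
};

static struct bind_bucket bind_hash_table[BIND_HASH_SIZE];

/* Map a port number to a bucket index; the mask works because
 * BIND_HASH_SIZE is a power of two. */
static unsigned int bind_hash(uint16_t port)
{
	uint32_t h = port * 2654435761u;   /* Knuth multiplicative hash */
	return h & (BIND_HASH_SIZE - 1);
}
```

With more buckets the average chain length drops, which is where the lookup-time reduction in the table above comes from.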

Can't you use one of the dynamically sizing hash tables?
8k hash table entries is OTT for a lot of systems.
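The memory cost behind that objection is easy to put a number on. The sketch below is a back-of-the-envelope calculation under assumed layout: one chain head plus one userspace mutex per bucket (the in-kernel bucket would use an hlist head and a spinlock, which are smaller).

```c
/* Rough static footprint of a fixed 8K-bucket bind table.
 * The bucket layout here is illustrative, not the kernel's. */
#include <pthread.h>
#include <stddef.h>

struct bucket {
	pthread_mutex_t lock;
	void *head;
};

static size_t table_bytes(unsigned int nbuckets)
{
	return (size_t)nbuckets * sizeof(struct bucket);
}
```

On a 64-bit glibc system this works out to a few hundred kilobytes for 8192 buckets, allocated whether or not any socket is bound: trivial on a database server, but noticeable on a small system, which is the point being made.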

Do you know of an example in the Linux kernel that uses one? What I
certainly don't want is the overhead of re-sizing kicking in, whenever
that happens, on live systems running multiple databases.
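The resizing overhead in question is the rehash: growing a hash table means re-inserting every live entry, an O(n) burst of work at whatever moment the load factor crosses the threshold. (The kernel's rhashtable amortises this incrementally, which is the usual answer to the concern, but the raw cost is easy to see in a toy model.) Everything below is an illustrative open-addressing sketch, not kernel code.

```c
/* Toy growable hash table: open addressing, doubles when load
 * factor exceeds 0.75.  toy_grow() is the "overhead of re-sizing"
 * referred to above: every entry is re-hashed into the new array. */
#include <stdint.h>
#include <stdlib.h>

struct toy_table {
	unsigned int nbuckets;   /* current size, power of two */
	unsigned int nentries;
	uint32_t *slots;         /* 0 = empty; stores key + 1 */
};

static void toy_init(struct toy_table *t)
{
	t->nbuckets = 8;
	t->nentries = 0;
	t->slots = calloc(t->nbuckets, sizeof(*t->slots));
}

static unsigned int toy_hash(uint32_t key, unsigned int nbuckets)
{
	return (key * 2654435761u) & (nbuckets - 1);
}

static void toy_insert(struct toy_table *t, uint32_t key);

static void toy_grow(struct toy_table *t)
{
	unsigned int old_n = t->nbuckets;
	uint32_t *old = t->slots;

	t->nbuckets = old_n * 2;
	t->slots = calloc(t->nbuckets, sizeof(*t->slots));
	t->nentries = 0;
	/* The expensive part: every existing entry is re-hashed. */
	for (unsigned int i = 0; i < old_n; i++)
		if (old[i])
			toy_insert(t, old[i] - 1);
	free(old);
}

static void toy_insert(struct toy_table *t, uint32_t key)
{
	if (t->nentries * 4 >= t->nbuckets * 3)  /* load > 0.75 */
		toy_grow(t);
	unsigned int i = toy_hash(key, t->nbuckets);
	while (t->slots[i])
		i = (i + 1) & (t->nbuckets - 1);
	t->slots[i] = key + 1;
	t->nentries++;
}

static int toy_lookup(const struct toy_table *t, uint32_t key)
{
	unsigned int i = toy_hash(key, t->nbuckets);
	while (t->slots[i]) {
		if (t->slots[i] == key + 1)
			return 1;
		i = (i + 1) & (t->nbuckets - 1);
	}
	return 0;
}
```

Sizing the table up front to 8K trades that unpredictable rehash pause for a fixed, known memory cost, which is the trade-off being defended here.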

Memory is certainly not an issue on the systems where RDS
has been deployed. I certainly don't want to over-use
memory, but given the systems where RDS is used and the
number of connections they need to handle, a bigger table
is needed.

Regards,
Santosh
