On Thu, Feb 20, 2014 at 2:26 PM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> On Thu, Feb 20, 2014 at 6:24 AM, Haribabu Kommi
> <kommi.harib...@gmail.com> wrote:
> > On Thu, Feb 20, 2014 at 11:38 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> >> > I want to propose a new feature called "priority table" or "cache
> >> > table".
> >> > This is same as regular table except the pages of these tables are
> >> > having high priority than normal tables. These tables are very
> >> > useful, where a faster query processing on some particular tables
> >> > is expected.
> >>
> >> Why exactly does the existing LRU behavior of shared buffers not do
> >> what you need?
> >
> > Lets assume a database having 3 tables, which are accessed regularly.
> > The user is expecting a faster query results on one table.
> > Because of LRU behavior which is not happening some times.
>
> I think this will not be a problem for regularly accessed tables (pages),
> as per current algorithm they will get more priority before getting
> flushed out of shared buffer cache.
> Have you come across any such case where regularly accessed pages
> get lower priority than non-regularly accessed pages?

Because of the other regularly accessed tables, the table that is expected
to return results faster sometimes gets delayed, since its pages can still
be evicted from shared buffers.

> However it might be required for cases where user wants to control
> such behaviour and pass such hints through table level option or some
> other way to indicate that he wants more priority for certain tables
> irrespective of their usage w.r.t other tables.
>
> Now I think here important thing to find out is how much helpful it is
> for users or why do they want to control such behaviour even when
> Database already takes care of such thing based on access pattern.

Yes, it is useful in cases where the application always expects faster
results from a table, whether that table is accessed regularly or not.

Regards,
Hari Babu
Fujitsu Australia
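For readers following the argument above: PostgreSQL's shared-buffer replacement is a clock sweep over usage counts rather than a strict LRU, which is why regularly accessed pages tend to survive while rarely accessed ones are evicted once their count decays to zero. The following is a toy Python model of that behaviour, not the actual C implementation; the class, slot layout, and page names are invented for illustration, and the cap of 5 mirrors PostgreSQL's BM_MAX_USAGE_COUNT.

```python
# Toy clock-sweep buffer cache with per-slot usage counts, loosely
# modelled on PostgreSQL shared-buffer replacement: a hit bumps the
# (capped) usage count, and on a miss the clock hand decrements counts
# until it finds a slot at zero to evict. Simplified sketch only.

MAX_USAGE_COUNT = 5  # mirrors PostgreSQL's BM_MAX_USAGE_COUNT

class ClockSweepCache:
    def __init__(self, nslots):
        self.slots = [None] * nslots   # cached page id, or None if free
        self.usage = [0] * nslots      # per-slot usage count
        self.hand = 0                  # clock hand position

    def access(self, page):
        # Hit: bump the usage count, capped.
        for i, p in enumerate(self.slots):
            if p == page:
                self.usage[i] = min(self.usage[i] + 1, MAX_USAGE_COUNT)
                return
        # Miss: sweep, decrementing counts, until a zero-count slot
        # (or a free slot) is found, then place the page there.
        while True:
            i = self.hand
            self.hand = (self.hand + 1) % len(self.slots)
            if self.slots[i] is None or self.usage[i] == 0:
                self.slots[i] = page
                self.usage[i] = 1
                return
            self.usage[i] -= 1

cache = ClockSweepCache(4)
# Pages of two regularly accessed tables keep their counts pinned at
# the cap...
for _ in range(10):
    for page in ("t1_p0", "t1_p1", "t2_p0", "t2_p1"):
        cache.access(page)
# ...so a single access to another table must sweep several full
# revolutions before it can evict one of them.
cache.access("t3_p0")
```

This illustrates both sides of the discussion: regularly accessed pages do get extra protection (the hand must decrement their counts to zero before evicting them), but a table that is accessed only occasionally, however important its response time, gets no such protection, which is the gap a table-level priority hint would aim to fill.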