This patch has been broken and waiting on its author since early December, so
I've marked it as returned with feedback. Feel free to resubmit an
updated version to a future commitfest.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, T
On Wed, Nov 06, 2019 at 02:55:30AM +, ideriha.take...@fujitsu.com wrote:
> Thank you for the reply.
Latest patch does not apply. Please send a rebase. Patch moved to
next CF, waiting on author.
--
Michael
>From: Konstantin Knizhnik [mailto:k.knizh...@postgrespro.ru]
>If the assumption that the working set of a backend (the set of tables
>accessed by this session) is small enough to fit in the backend's memory
>is true, then a global meta cache is not needed at all: it is enough to
>limit the size of the local cache and
On 09.10.2019 9:06, ideriha.take...@fujitsu.com wrote:
Hi, Konstantin
From: Konstantin Knizhnik [mailto:k.knizh...@postgrespro.ru]
I do not completely understand from your description when we are going
to evict entries from the local cache?
Just once the transaction is committed? I think it will be m
Hi, Konstantin
>>From: Konstantin Knizhnik [mailto:k.knizh...@postgrespro.ru]
>>I do not completely understand from your description when we are going
>>to evict entries from the local cache?
>>Just once the transaction is committed? I think it will be more efficient
>>to also specify a memory threshold for
Hi, Alvaro
>
>The last patch we got here (a prototype) was almost a year ago. There was
>substantial discussion about it, but no new version of the patch has been
>posted. Are
>we getting a proper patch soon, or did we give up on the approach entirely?
I'm sorry for the late response. I starte
Hi, Konstantin
I'm very sorry for the late response and thank you for your feedback.
(I re-sent this email because my email address changed and the mail couldn't
be delivered to hackers.)
>From: Konstantin Knizhnik [mailto:k.knizh...@postgrespro.ru]
>
>Takeshi-san,
>
>I am sorry for late response - I just wa
The last patch we got here (a prototype) was almost a year ago. There
was substantial discussion about it, but no new version of the patch has
been posted. Are we getting a proper patch soon, or did we give up on
the approach entirely?
--
Álvaro Herrera    https://www.2ndQuadrant.co
Takeshi-san,
I am sorry for the late response - I was just waiting for a new version of
the patch from you to review.
I read your last proposal and it seems to be very reasonable.
From my point of view we cannot reach an acceptable level of performance
if we do not have a local cache at all.
So, as you propose
Hi, everyone.
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>My current thoughts:
>- Each catcache has (maybe partial) HeapTupleHeader
>- put every catcache on shared memory and no local catcache
>- but catcache for aborted tuple is not put on shared memory
>- Hash table exists p
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>[TL; DR]
>The basic idea is the following 4 points:
>A. User can choose which database to put a cache (relation and catalog) on
>shared memory and how much memory is used
>B. Caches of committed data are on the shared memory. Caches o
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>Do you have any thoughts?
>
Hi, I updated my idea, hoping get some feedback.
[TL; DR]
The basic idea is the following 4 points:
A. User can choose which database to put a cache (relation and catalog) on
shared memory and how much memory
On Mon, Nov 26, 2018 at 12:12:09PM +, Ideriha, Takeshi wrote:
> I'm trying to handle this allocation stuff in another thread
> [1] in a broader way.
Based on the latest updates of this thread, this is waiting for
review, so moved to next CF.
--
Michael
Hi,
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>Sent: Wednesday, October 3, 2018 3:18 PM
>At this moment this patch only allocates the catalog cache header and
>CatCache data in the shared memory area.
I'm trying to handle this allocation stuff in another thread [1] in a
Hi,
Thank you for the discussion a while ago.
I'm afraid I haven't replied to everyone.
To move this development forward I attached a PoC patch.
I introduced a GUC called shared_catacache_mem to specify
how much memory is supposed to be allocated in the shared memory area.
It defaults to zero,
Hi, Konstantin
>Hi,
>I really think that we need to move to global caches (and especially
>catalog caches) in Postgres.
>Modern NUMA servers may have hundreds of cores, and to be able to utilize
>all of them we may need to start a large number (hundreds) of backends.
>Memory overhead of local ca
On 2018/07/05 23:00, Robert Haas wrote:
> With respect to partitioning specifically, it seems like we might be
> able to come up with some way of planning that doesn't need a full
> relcache entry for every partition, particularly if there are no
> partition-local objects (indexes, triggers, etc.).
>-Original Message-
>From: se...@rielau.com [mailto:se...@rielau.com]
>Sent: Wednesday, June 27, 2018 2:04 AM
>To: Ideriha, Takeshi/出利葉 健 ; pgsql-hackers
>
>Subject: RE: Global shared meta cache
>
>Takeshi-san,
>
>
>>My customer created hundreds of tho
Hi,
On 2018-07-05 10:00:13 -0400, Robert Haas wrote:
> I think we need to take a little bit broader view of this problem.
> For instance, maybe we could have backend-local caches that are kept
> relatively small, and then a larger shared cache that can hold more
> entries.
I think it's pretty muc
On 05.07.2018 17:00, Robert Haas wrote:
On Mon, Jul 2, 2018 at 5:59 AM, Konstantin Knizhnik
wrote:
But I am not sure that just using an RW lock will be enough to replace the
local cache with a global one.
I'm pretty sure it won't. In fact, no matter what kind of locking you
use, it's bound to cost somethi
On Mon, Jul 2, 2018 at 5:59 AM, Konstantin Knizhnik
wrote:
> But I am not sure that just using an RW lock will be enough to replace the
> local cache with a global one.
I'm pretty sure it won't. In fact, no matter what kind of locking you
use, it's bound to cost something. There is no such thing as a free
lunc
>-Original Message-
>From: AJG [mailto:ay...@gera.co.nz]
>Sent: Wednesday, June 27, 2018 3:21 AM
>To: pgsql-hack...@postgresql.org
>Subject: Re: Global shared meta cache
>
>Ideriha, Takeshi wrote
>> 2) benchmarked 3 times for each conditions and got t
On 26.06.2018 09:48, Ideriha, Takeshi wrote:
Hi, hackers!
My customer created hundreds of thousands of partition tables and tried to
select data from hundreds of applications,
which resulted in enormous memory consumption because it consumed the
local memory per backend multiplied by the number of backends (
From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
> 1) Initialized with pgbench -i -s10
...
>pgbench -c48 -T60 -Msimple | 4956 | 4965 | 95%
The scaling factor should be much greater than the number of clients.
Otherwise, multiple clients would conflict on the same ro
Ideriha, Takeshi wrote
> 2) benchmarked 3 times for each conditions and got the average result of
> TPS.
> |master branch | prototype | proto/master (%)
>
>pgben
Hi,
On 2018-06-26 06:48:28 +, Ideriha, Takeshi wrote:
> > I think it would be interested for somebody to build a prototype here
> > that ignores all the problems but the first and uses some
> > straightforward, relatively unoptimized locking strategy for the first
> > problem. Then benchmark i
Takeshi-san,
>My customer created hundreds of thousands of partition tables and tried to
>select data from hundreds of applications,
>which resulted in enormous memory consumption because it consumed the
>local memory per backend multiplied by the number of backends
>(ex. 100 backends X 1GB = 100GB).
>Relation