On Sun, Oct 1, 2017 at 8:36 PM, Daniel Gustafsson wrote:
>> On 18 Aug 2017, at 13:39, Claudio Freire wrote:
>>
>> On Fri, Apr 7, 2017 at 10:51 PM, Claudio Freire
>> wrote:
>>> Indeed they do, and that's what motivated this patch. But I'd need
>>> TB-sized tables to set up something like that. I
> On 18 Aug 2017, at 13:39, Claudio Freire wrote:
>
> On Fri, Apr 7, 2017 at 10:51 PM, Claudio Freire
> wrote:
>> Indeed they do, and that's what motivated this patch. But I'd need
>> TB-sized tables to set up something like that. I don't have the
>> hardware or time available to do that (vacuu
On Fri, Apr 7, 2017 at 10:51 PM, Claudio Freire wrote:
> Indeed they do, and that's what motivated this patch. But I'd need
> TB-sized tables to set up something like that. I don't have the
> hardware or time available to do that (vacuum on bloated TB-sized
> tables can take days in my experience)
On Wed, Jul 12, 2017 at 1:29 PM, Claudio Freire wrote:
> On Wed, Jul 12, 2017 at 1:08 PM, Claudio Freire
> wrote:
>> On Wed, Jul 12, 2017 at 11:48 AM, Alexey Chernyshov
>> wrote:
>>> Thank you for the patch and benchmark results, I have a couple remarks.
>>> Firstly, padding in DeadTuplesSegmen
On Wed, Jul 12, 2017 at 1:08 PM, Claudio Freire wrote:
> On Wed, Jul 12, 2017 at 11:48 AM, Alexey Chernyshov
> wrote:
>> Thank you for the patch and benchmark results, I have a couple remarks.
>> Firstly, padding in DeadTuplesSegment
>>
>> typedef struct DeadTuplesSegment
>>
>> {
>>
>> ItemPo
On Wed, Jul 12, 2017 at 11:48 AM, Alexey Chernyshov
wrote:
> Thank you for the patch and benchmark results, I have a couple remarks.
> Firstly, padding in DeadTuplesSegment
>
> typedef struct DeadTuplesSegment
>
> {
>
> ItemPointerData last_dead_tuple;    /* Copy of the last dead tuple
> (unse
Thank you for the patch and benchmark results, I have a couple remarks.
Firstly, padding in DeadTuplesSegment
typedef struct DeadTuplesSegment
{
ItemPointerData last_dead_tuple;    /* Copy of the last dead tuple
(unset
* until the segment is fully
Resending without the .tar.bz2 that got blocked
Sorry for the delay, I had extended vacations that kept me away from
my test rigs, and afterward testing took, literally, a few weeks.
I built a more thorough test script that produced some interesting
results. Will attach the results.
For now, to t
On Fri, Apr 21, 2017 at 6:24 AM, Claudio Freire wrote:
> On Wed, Apr 12, 2017 at 4:35 PM, Robert Haas wrote:
>> On Tue, Apr 11, 2017 at 4:38 PM, Claudio Freire
>> wrote:
>>> In essence, the patch as it is proposed, doesn't *need* a binary
>>> search, because the segment list can only grow up to
On Mon, Apr 24, 2017 at 3:57 PM, Claudio Freire wrote:
> I wouldn't fret over the slight slowdown vs the old patch, it could be
> noise (the script only completed a single run at scale 400).
Yeah, seems fine.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Com
On Sun, Apr 23, 2017 at 12:41 PM, Robert Haas wrote:
>> That's after inlining the compare on both the linear and sequential
>> code, and it seems it lets the compiler optimize the binary search to
>> the point where it outperforms the sequential search.
>>
>> That's not the case when the compare i
On Thu, Apr 20, 2017 at 5:24 PM, Claudio Freire wrote:
>> What's not clear to me is how sensitive the performance of vacuum is
>> to the number of cycles used here. For a large index, the number of
>> searches will presumably be quite large, so it does seem worth
>> worrying about performance. B
On Wed, Apr 12, 2017 at 4:35 PM, Robert Haas wrote:
> On Tue, Apr 11, 2017 at 4:38 PM, Claudio Freire
> wrote:
>> In essence, the patch as it is proposed, doesn't *need* a binary
>> search, because the segment list can only grow up to 15 segments at
>> its biggest, and that's a size small enough
On Tue, Apr 11, 2017 at 4:38 PM, Claudio Freire wrote:
> In essence, the patch as it is proposed, doesn't *need* a binary
> search, because the segment list can only grow up to 15 segments at
> its biggest, and that's a size small enough that linear search will
> outperform (or at least perform as
On Tue, Apr 11, 2017 at 4:17 PM, Robert Haas wrote:
> On Tue, Apr 11, 2017 at 2:59 PM, Claudio Freire
> wrote:
>> On Tue, Apr 11, 2017 at 3:53 PM, Robert Haas wrote:
>>> 1TB / 8kB per page * 60 tuples/page * 20% * 6 bytes/tuple = 9216MB of
>>> maintenance_work_mem
>>>
>>> So we'll allocate 128M
On Tue, Apr 11, 2017 at 2:59 PM, Claudio Freire wrote:
> On Tue, Apr 11, 2017 at 3:53 PM, Robert Haas wrote:
>> 1TB / 8kB per page * 60 tuples/page * 20% * 6 bytes/tuple = 9216MB of
>> maintenance_work_mem
>>
>> So we'll allocate 128MB+256MB+512MB+1GB+2GB+4GB which won't be quite
>> enough so we'
On Tue, Apr 11, 2017 at 3:59 PM, Claudio Freire wrote:
> On Tue, Apr 11, 2017 at 3:53 PM, Robert Haas wrote:
>> 1TB / 8kB per page * 60 tuples/page * 20% * 6 bytes/tuple = 9216MB of
>> maintenance_work_mem
>>
>> So we'll allocate 128MB+256MB+512MB+1GB+2GB+4GB which won't be quite
>> enough so we'
On Tue, Apr 11, 2017 at 3:53 PM, Robert Haas wrote:
> 1TB / 8kB per page * 60 tuples/page * 20% * 6 bytes/tuple = 9216MB of
> maintenance_work_mem
>
> So we'll allocate 128MB+256MB+512MB+1GB+2GB+4GB which won't be quite
> enough so we'll allocate another 8GB, for a total of 16256MB, but more
> tha
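To make Robert's arithmetic concrete, here is a small standalone C sketch (not from the patch; the 8kB pages, 60 tuples/page, 20% dead and 6 bytes/TID figures are just the assumptions quoted above) that reproduces both the 9216MB requirement and the 16256MB total reached by doubling segment sizes:

#include <stdio.h>

int
main(void)
{
    /* Assumptions from the quoted example: 1TB heap, 8kB pages,
     * 60 tuples per page, 20% of them dead, 6 bytes per TID. */
    double  pages = (1024.0 * 1024 * 1024 * 1024) / 8192;  /* 134,217,728 */
    double  dead_tuples = pages * 60 * 0.20;               /* ~1.61 billion */
    double  needed_mb = dead_tuples * 6 / (1024 * 1024);   /* 9216 MB */

    /* Exponentially growing segments: 128MB, 256MB, 512MB, ... */
    double  seg_mb = 128, total_mb = 0;

    while (total_mb < needed_mb)
    {
        total_mb += seg_mb;
        seg_mb *= 2;
    }
    printf("needed %.0f MB, allocated %.0f MB\n", needed_mb, total_mb);
    return 0;
}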
On Fri, Apr 7, 2017 at 9:12 PM, Andres Freund wrote:
>> Why do you say exponential growth fragments memory? AFAIK, all those
>> allocations are well beyond the point where malloc starts mmaping
>> memory, so each of those segments should be a mmap segment,
>> independently freeable.
>
> Not all pl
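For context on the mmap point, a minimal glibc-only illustration (nothing from the patch): glibc serves allocations above M_MMAP_THRESHOLD via mmap, so each such chunk is handed back to the OS the moment it is freed, which is the behavior Claudio is relying on and which, as Andres notes, not every platform's allocator provides.

#include <malloc.h>
#include <stdlib.h>

int
main(void)
{
    /* glibc only: make allocations of 1MB and up come from mmap */
    mallopt(M_MMAP_THRESHOLD, 1024 * 1024);

    void   *seg = malloc(128L * 1024 * 1024);   /* an mmap'd chunk */

    free(seg);      /* unmapped, i.e. returned to the OS, right here */
    return 0;
}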
On 4/7/17 10:19 PM, Claudio Freire wrote:
>
> I rebased the early free patch (patch 3) to apply on top of the v9
> patch 2 (it needed some changes). I recognize the early free patch
> didn't get nearly as much scrutiny, so I'm fine with committing only 2
> if that one's ready to go but 3 isn't.
>
On Fri, Apr 7, 2017 at 10:06 PM, Claudio Freire wrote:
>>> >> + if (seg->num_dead_tuples >= seg->max_dead_tuples)
>>> >> + {
>>> >> + /*
>>> >> + * The segment is overflowing, so we must allocate a new segment.
>>> >> +
On Fri, Apr 7, 2017 at 10:12 PM, Andres Freund wrote:
> On 2017-04-07 22:06:13 -0300, Claudio Freire wrote:
>> On Fri, Apr 7, 2017 at 9:56 PM, Andres Freund wrote:
>> > Hi,
>> >
>> >
>> > On 2017-04-07 19:43:39 -0300, Claudio Freire wrote:
>> >> On Fri, Apr 7, 2017 at 5:05 PM, Andres Freund wrot
On 2017-04-07 22:06:13 -0300, Claudio Freire wrote:
> On Fri, Apr 7, 2017 at 9:56 PM, Andres Freund wrote:
> > Hi,
> >
> >
> > On 2017-04-07 19:43:39 -0300, Claudio Freire wrote:
> >> On Fri, Apr 7, 2017 at 5:05 PM, Andres Freund wrote:
> >> > Hi,
> >> >
> >> > I've *not* read the history of this
On Fri, Apr 7, 2017 at 9:56 PM, Andres Freund wrote:
> Hi,
>
>
> On 2017-04-07 19:43:39 -0300, Claudio Freire wrote:
>> On Fri, Apr 7, 2017 at 5:05 PM, Andres Freund wrote:
>> > Hi,
>> >
>> > I've *not* read the history of this thread. So I really might be
>> > missing some context.
>> >
>> >
>>
Hi,
On 2017-04-07 19:43:39 -0300, Claudio Freire wrote:
> On Fri, Apr 7, 2017 at 5:05 PM, Andres Freund wrote:
> > Hi,
> >
> > I've *not* read the history of this thread. So I really might be
> > missing some context.
> >
> >
> >> From e37d29c26210a0f23cd2e9fe18a264312fecd383 Mon Sep 17 00:00:0
On Fri, Apr 7, 2017 at 5:05 PM, Andres Freund wrote:
> Hi,
>
> I've *not* read the history of this thread. So I really might be
> missing some context.
>
>
>> From e37d29c26210a0f23cd2e9fe18a264312fecd383 Mon Sep 17 00:00:00 2001
>> From: Claudio Freire
>> Date: Mon, 12 Sep 2016 23:36:42 -0300
>
On Fri, Apr 7, 2017 at 7:43 PM, Claudio Freire wrote:
>>> + * Lookup in that structure proceeds sequentially in the list of segments,
>>> + * and with a binary search within each segment. Since segment's size grows
>>> + * exponentially, this retains O(N log N) lookup complexity.
>>
>> N log N is
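A minimal sketch of the lookup scheme described above, with plain integers standing in for ItemPointerData and made-up type and field names (illustrative only, not the patch's code): walk the segment list linearly, use each segment's last TID as the upper bound, rule out cheaply by the lower bound, then binary-search within the single candidate segment.

#include <stdbool.h>
#include <stddef.h>

typedef struct Segment
{
    size_t      ntuples;    /* number of TIDs stored in this segment */
    long       *tids;       /* sorted TIDs, encoded as plain integers here */
    long        last_tid;   /* copy of tids[ntuples - 1] */
} Segment;

static bool
tid_is_dead(const Segment *segs, size_t nsegs, long key)
{
    for (size_t i = 0; i < nsegs; i++)
    {
        const Segment *seg = &segs[i];  /* segments assumed non-empty */
        size_t      lo = 0,
                    hi = seg->ntuples;

        if (key > seg->last_tid)
            continue;           /* past this segment: look at the next one */
        if (key < seg->tids[0])
            return false;       /* quick rule-out by lower bound */

        while (lo < hi)         /* binary search within the segment */
        {
            size_t      mid = lo + (hi - lo) / 2;

            if (seg->tids[mid] < key)
                lo = mid + 1;
            else
                hi = mid;
        }
        return seg->tids[lo] == key;    /* lo < ntuples since key <= last_tid */
    }
    return false;               /* beyond the last segment */
}

With segment sizes growing exponentially the list stays short, so the linear part costs only a handful of comparisons per lookup.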
Hi,
I've *not* read the history of this thread. So I really might be
missing some context.
> From e37d29c26210a0f23cd2e9fe18a264312fecd383 Mon Sep 17 00:00:00 2001
> From: Claudio Freire
> Date: Mon, 12 Sep 2016 23:36:42 -0300
> Subject: [PATCH] Vacuum: allow using more than 1GB work mem
>
>
On Wed, Feb 1, 2017 at 7:55 PM, Claudio Freire wrote:
> On Wed, Feb 1, 2017 at 6:13 PM, Masahiko Sawada wrote:
>> On Wed, Feb 1, 2017 at 10:02 PM, Claudio Freire
>> wrote:
>>> On Wed, Feb 1, 2017 at 5:47 PM, Masahiko Sawada
>>> wrote:
Thank you for updating the patch.
Whole pat
On Wed, Feb 1, 2017 at 11:55 PM, Claudio Freire wrote:
> On Wed, Feb 1, 2017 at 6:13 PM, Masahiko Sawada wrote:
>> On Wed, Feb 1, 2017 at 10:02 PM, Claudio Freire
>> wrote:
>>> On Wed, Feb 1, 2017 at 5:47 PM, Masahiko Sawada
>>> wrote:
Thank you for updating the patch.
Whole pa
On Wed, Feb 1, 2017 at 6:13 PM, Masahiko Sawada wrote:
> On Wed, Feb 1, 2017 at 10:02 PM, Claudio Freire
> wrote:
>> On Wed, Feb 1, 2017 at 5:47 PM, Masahiko Sawada
>> wrote:
>>> Thank you for updating the patch.
>>>
>>> Whole patch looks good to me except for the following one comment.
>>> Th
On Wed, Feb 1, 2017 at 10:02 PM, Claudio Freire wrote:
> On Wed, Feb 1, 2017 at 5:47 PM, Masahiko Sawada wrote:
>> Thank you for updating the patch.
>>
>> Whole patch looks good to me except for the following one comment.
>> This is the final comment from me.
>>
>> /*
>> * lazy_tid_reaped() --
On Wed, Feb 1, 2017 at 5:47 PM, Masahiko Sawada wrote:
> Thank you for updating the patch.
>
> Whole patch looks good to me except for the following one comment.
> This is the final comment from me.
>
> /*
> * lazy_tid_reaped() -- is a particular tid deletable?
> *
> * This has the right
On Tue, Jan 31, 2017 at 3:05 AM, Claudio Freire wrote:
> On Mon, Jan 30, 2017 at 5:51 AM, Masahiko Sawada
> wrote:
>>
>> * We are willing to use at most maintenance_work_mem (or perhaps
>> * autovacuum_work_mem) memory space to keep track of dead tuples. We
>> * initially allocate an ar
On Tue, Jan 31, 2017 at 11:05 AM, Claudio Freire wrote:
> Updated and rebased v7 attached.
Moved to CF 2017-03.
--
Michael
On Mon, Jan 30, 2017 at 5:51 AM, Masahiko Sawada wrote:
>
> * We are willing to use at most maintenance_work_mem (or perhaps
> * autovacuum_work_mem) memory space to keep track of dead tuples. We
> * initially allocate an array of TIDs of that size, with an upper limit that
> * depends o
On Thu, Jan 26, 2017 at 5:11 AM, Claudio Freire wrote:
> On Wed, Jan 25, 2017 at 1:54 PM, Masahiko Sawada
> wrote:
>> Thank you for updating the patch!
>>
>> + /*
>> +* Quickly rule out by lower bound (should happen a lot) Upper bound was
>> +* already checked by segmen
On Wed, Jan 25, 2017 at 1:54 PM, Masahiko Sawada wrote:
> Thank you for updating the patch!
>
> + /*
> +* Quickly rule out by lower bound (should happen a lot) Upper bound was
> +* already checked by segment search
> +*/
> + if (vac_cmp_itemptr((void *) itemp
On Tue, Jan 24, 2017 at 1:49 AM, Claudio Freire wrote:
> On Fri, Jan 20, 2017 at 6:24 AM, Masahiko Sawada
> wrote:
>> On Thu, Jan 19, 2017 at 8:31 PM, Claudio Freire
>> wrote:
>>> On Thu, Jan 19, 2017 at 6:33 AM, Anastasia Lubennikova
>>> wrote:
28.12.2016 23:43, Claudio Freire:
>>>
On Fri, Jan 20, 2017 at 6:24 AM, Masahiko Sawada wrote:
> On Thu, Jan 19, 2017 at 8:31 PM, Claudio Freire
> wrote:
>> On Thu, Jan 19, 2017 at 6:33 AM, Anastasia Lubennikova
>> wrote:
>>> 28.12.2016 23:43, Claudio Freire:
>>>
>>> Attached v4 patches with the requested fixes.
>>>
>>>
>>> Sorry fo
I think this patch no longer applies because of conflicts with the one I
just pushed. Please rebase.
Thanks
--
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
I pushed this patch after rewriting it rather completely. I added
tracing notices to inspect the blocks it was prefetching and observed
that the original coding was failing to prefetch the final streak of
blocks in the table, which is an important oversight considering that it
may very well be tha
Alvaro Herrera wrote:
> There was no discussion whatsoever of the "prefetch" patch in this
> thread; and as far as I can see, nobody even mentioned such an idea in
> the thread. This prefetch patch appeared out of the blue and there was
> no discussion about it that I can see. Now I was about to
You posted two patches with this preamble:
Claudio Freire wrote:
> Attached is the raw output of the test, the script used to create it,
> and just in case the patch set used. I believe it's the same as the
> last one I posted, just rebased.
There was no discussion whatsoever of the "prefetch" p
On Thu, Jan 19, 2017 at 8:31 PM, Claudio Freire wrote:
> On Thu, Jan 19, 2017 at 6:33 AM, Anastasia Lubennikova
> wrote:
>> 28.12.2016 23:43, Claudio Freire:
>>
>> Attached v4 patches with the requested fixes.
>>
>>
>> Sorry for being late, but the tests took a lot of time.
>
> I know. Takes me s
On Thu, Jan 19, 2017 at 6:33 AM, Anastasia Lubennikova
wrote:
> 28.12.2016 23:43, Claudio Freire:
>
> Attached v4 patches with the requested fixes.
>
>
> Sorry for being late, but the tests took a lot of time.
I know. Takes me several days to run my test scripts once.
> create table t1 as select
28.12.2016 23:43, Claudio Freire:
Attached v4 patches with the requested fixes.
Sorry for being late, but the tests took a lot of time.
create table t1 as select i, md5(random()::text) from
generate_series(0,4) as i;
create index md5_idx ON t1(md5);
update t1 set md5 = md5((random()
On Wed, Dec 28, 2016 at 3:41 PM, Claudio Freire wrote:
>> Anyway, I found the problem that had caused segfault.
>>
>> for (segindex = 0; segindex <= vacrelstats->dead_tuples.last_seg; tupindex =
>> 0, segindex++)
>> {
>> DeadTuplesSegment *seg =
>> &(vacrelstats->dead_tuples.dead_tuples[segind
On Wed, Dec 28, 2016 at 10:26 AM, Anastasia Lubennikova
wrote:
> 27.12.2016 20:14, Claudio Freire:
>
> On Tue, Dec 27, 2016 at 10:41 AM, Anastasia Lubennikova
> wrote:
>
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0 0x006941e7 in lazy_vacuum_heap (onerel=0x1ec2360,
>
27.12.2016 20:14, Claudio Freire:
On Tue, Dec 27, 2016 at 10:41 AM, Anastasia Lubennikova
wrote:
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x006941e7 in lazy_vacuum_heap (onerel=0x1ec2360,
vacrelstats=0x1ef6e00) at vacuumlazy.c:1417
1417    tblk =
Item
27.12.2016 16:54, Alvaro Herrera:
Anastasia Lubennikova wrote:
I ran configure using following set of flags:
./configure --enable-tap-tests --enable-cassert --enable-debug
--enable-depend CFLAGS="-O0 -g3 -fno-omit-frame-pointer"
And then ran make check. Here is the stacktrace:
Program termin
On Tue, Dec 27, 2016 at 10:41 AM, Anastasia Lubennikova
wrote:
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0 0x006941e7 in lazy_vacuum_heap (onerel=0x1ec2360,
> vacrelstats=0x1ef6e00) at vacuumlazy.c:1417
> 1417    tblk =
> ItemPointerGetBlockNumber(&seg->
On Tue, Dec 27, 2016 at 10:54 AM, Alvaro Herrera
wrote:
> Anastasia Lubennikova wrote:
>
>> I ran configure using following set of flags:
>> ./configure --enable-tap-tests --enable-cassert --enable-debug
>> --enable-depend CFLAGS="-O0 -g3 -fno-omit-frame-pointer"
>> And then ran make check. Here
Anastasia Lubennikova wrote:
> I ran configure using following set of flags:
> ./configure --enable-tap-tests --enable-cassert --enable-debug
> --enable-depend CFLAGS="-O0 -g3 -fno-omit-frame-pointer"
> And then ran make check. Here is the stacktrace:
>
> Program terminated with signal SIGSEGV,
23.12.2016 22:54, Claudio Freire:
On Fri, Dec 23, 2016 at 1:39 PM, Anastasia Lubennikova
wrote:
I found the reason. I configured postgres with CFLAGS="-O0" and it causes
a segfault on initdb.
It works fine and passes tests with default configure flags, but I'm pretty
sure that we should fix segfa
On Fri, Dec 23, 2016 at 1:39 PM, Anastasia Lubennikova
wrote:
>> On Thu, Dec 22, 2016 at 12:22 PM, Claudio Freire
>> wrote:
>>>
>>> On Thu, Dec 22, 2016 at 12:15 PM, Anastasia Lubennikova
>>> wrote:
The following review has been posted through the commitfest application:
make inst
22.12.2016 21:18, Claudio Freire:
On Thu, Dec 22, 2016 at 12:22 PM, Claudio Freire wrote:
On Thu, Dec 22, 2016 at 12:15 PM, Anastasia Lubennikova
wrote:
The following review has been posted through the commitfest application:
make installcheck-world: tested, failed
Implements feature:
On Thu, Dec 22, 2016 at 12:22 PM, Claudio Freire wrote:
> On Thu, Dec 22, 2016 at 12:15 PM, Anastasia Lubennikova
> wrote:
>> The following review has been posted through the commitfest application:
>> make installcheck-world: tested, failed
>> Implements feature: not tested
>> Spec compli
On Thu, Dec 22, 2016 at 12:15 PM, Anastasia Lubennikova
wrote:
> The following review has been posted through the commitfest application:
> make installcheck-world: tested, failed
> Implements feature: not tested
> Spec compliant: not tested
> Documentation: not tested
The following review has been posted through the commitfest application:
make installcheck-world: tested, failed
Implements feature: not tested
Spec compliant: not tested
Documentation: not tested
Hi,
I haven't read through the thread yet, just tried to apply the patch
On Tue, Nov 22, 2016 at 4:53 AM, Claudio Freire
wrote:
> On Mon, Nov 21, 2016 at 2:15 PM, Masahiko Sawada
> wrote:
> > On Fri, Nov 18, 2016 at 6:54 AM, Claudio Freire
> wrote:
> >> On Thu, Nov 17, 2016 at 6:34 PM, Robert Haas
> wrote:
> >>> On Thu, Nov 17, 2016 at 1:42 PM, Claudio Freire <
> k
On Mon, Nov 21, 2016 at 2:15 PM, Masahiko Sawada wrote:
> On Fri, Nov 18, 2016 at 6:54 AM, Claudio Freire
> wrote:
>> On Thu, Nov 17, 2016 at 6:34 PM, Robert Haas wrote:
>>> On Thu, Nov 17, 2016 at 1:42 PM, Claudio Freire
>>> wrote:
Attached is patch 0002 with pgindent applied over it
>>
On Fri, Nov 18, 2016 at 6:54 AM, Claudio Freire wrote:
> On Thu, Nov 17, 2016 at 6:34 PM, Robert Haas wrote:
>> On Thu, Nov 17, 2016 at 1:42 PM, Claudio Freire
>> wrote:
>>> Attached is patch 0002 with pgindent applied over it
>>>
>>> I don't think there's any other formatting issue, but feel f
On Thu, Nov 17, 2016 at 6:34 PM, Robert Haas wrote:
> On Thu, Nov 17, 2016 at 1:42 PM, Claudio Freire
> wrote:
>> Attached is patch 0002 with pgindent applied over it
>>
>> I don't think there's any other formatting issue, but feel free to
>> point a finger to it if I missed any
>
> Hmm, I had i
On Thu, Nov 17, 2016 at 1:42 PM, Claudio Freire wrote:
> Attached is patch 0002 with pgindent applied over it
>
> I don't think there's any other formatting issue, but feel free to
> point a finger to it if I missed any
Hmm, I had imagined making all of the segments the same size rather
than havi
On Thu, Nov 17, 2016 at 2:51 PM, Claudio Freire wrote:
> On Thu, Nov 17, 2016 at 2:34 PM, Masahiko Sawada
> wrote:
>> I glanced at the patches but the both patches don't obey the coding
>> style of PostgreSQL.
>> Please refer to [1].
>>
>> [1]
>> http://wiki.postgresql.org/wiki/Developer_FAQ#Wh
On Thu, Nov 17, 2016 at 2:34 PM, Masahiko Sawada wrote:
> I glanced at the patches but the both patches don't obey the coding
> style of PostgreSQL.
> Please refer to [1].
>
> [1]
> http://wiki.postgresql.org/wiki/Developer_FAQ#What.27s_the_formatting_style_used_in_PostgreSQL_source_code.3F.
I t
On Thu, Oct 27, 2016 at 5:25 AM, Claudio Freire wrote:
> On Thu, Sep 15, 2016 at 1:16 PM, Claudio Freire
> wrote:
>> On Wed, Sep 14, 2016 at 12:24 PM, Claudio Freire
>> wrote:
>>> On Wed, Sep 14, 2016 at 12:17 PM, Robert Haas wrote:
I am kind of doubtful about this whole line of inv
On Fri, Sep 16, 2016 at 9:47 AM, Pavan Deolasee
wrote:
> On Fri, Sep 16, 2016 at 7:03 PM, Robert Haas wrote:
>> On Thu, Sep 15, 2016 at 11:39 PM, Pavan Deolasee
>> wrote:
>> > But I actually wonder if we are over engineering things and
>> > overestimating
>> > cost of memmove etc. How about this
On Fri, Sep 16, 2016 at 7:03 PM, Robert Haas wrote:
> On Thu, Sep 15, 2016 at 11:39 PM, Pavan Deolasee
> wrote:
> > But I actually wonder if we are over engineering things and overestimating
> > cost of memmove etc. How about this simpler approach:
>
> Don't forget that you need to handle the
On Thu, Sep 15, 2016 at 11:39 PM, Pavan Deolasee
wrote:
> But I actually wonder if we are over engineering things and overestimating
> cost of memmove etc. How about this simpler approach:
Don't forget that you need to handle the case where
maintenance_work_mem is quite small.
--
Robert Haas
En
On Fri, Sep 16, 2016 at 9:09 AM, Pavan Deolasee
wrote:
>
> I also realised that we can compact the TID array in step (b) above
> because we only need to store 17 bits for block numbers (we already know
> which 1GB segment they belong to). Given that usable offsets are also just
> 13 bits, TID arr
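A sketch of the packing Pavan describes (the encoding below is purely illustrative and only follows the bit counts quoted above): with the 1GB segment known implicitly, the block-within-segment number needs 17 bits and the offset 13, so a TID fits in one 32-bit word instead of a 6-byte ItemPointerData.

#include <stdint.h>

/* 17 bits of block-within-1GB-segment plus 13 bits of offset: 30 bits. */
static inline uint32_t
pack_tid(uint32_t block_in_segment, uint32_t offset)
{
    return (block_in_segment << 13) | (offset & 0x1FFF);
}

static inline void
unpack_tid(uint32_t packed, uint32_t *block_in_segment, uint32_t *offset)
{
    *block_in_segment = packed >> 13;
    *offset = packed & 0x1FFF;
}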
On Fri, Sep 16, 2016 at 12:24 AM, Claudio Freire
wrote:
> On Thu, Sep 15, 2016 at 3:48 PM, Tomas Vondra
> wrote:
> > For example, we always allocate the TID array as large as we can fit into
> > m_w_m, but maybe we don't need to wait with switching to the bitmap until
> > filling the whole array
On Thu, Sep 15, 2016 at 3:48 PM, Tomas Vondra
wrote:
> For example, we always allocate the TID array as large as we can fit into
> m_w_m, but maybe we don't need to wait with switching to the bitmap until
> filling the whole array - we could wait as long as the bitmap fits into the
> remaining par
On 09/15/2016 06:40 PM, Robert Haas wrote:
On Thu, Sep 15, 2016 at 12:22 PM, Tom Lane wrote:
Tomas Vondra writes:
On 09/14/2016 07:57 PM, Tom Lane wrote:
People who are vacuuming because they are out of disk space will be very
very unhappy with that solution.
The people are usually runn
On Thu, Sep 15, 2016 at 12:22 PM, Tom Lane wrote:
> Tomas Vondra writes:
>> On 09/14/2016 07:57 PM, Tom Lane wrote:
>>> People who are vacuuming because they are out of disk space will be very
>>> very unhappy with that solution.
>
>> The people are usually running out of space for data, while th
Tomas Vondra writes:
> On 09/14/2016 07:57 PM, Tom Lane wrote:
>> People who are vacuuming because they are out of disk space will be very
>> very unhappy with that solution.
> The people are usually running out of space for data, while these files
> would be temporary files placed wherever temp
On Thu, Sep 15, 2016 at 12:50 PM, Tomas Vondra
wrote:
> On 09/14/2016 07:57 PM, Tom Lane wrote:
>>
>> Pavan Deolasee writes:
>>>
>>> On Wed, Sep 14, 2016 at 10:53 PM, Alvaro Herrera
>>>
>>> wrote:
One thing not quite clear to me is how do we create the bitmap
representation starti
On 09/14/2016 05:17 PM, Robert Haas wrote:
I am kind of doubtful about this whole line of investigation because
we're basically trying pretty hard to fix something that I'm not sure
is broken. I do agree that, all other things being equal, the TID
lookups will probably be faster with a bitma
On 09/14/2016 07:57 PM, Tom Lane wrote:
Pavan Deolasee writes:
On Wed, Sep 14, 2016 at 10:53 PM, Alvaro Herrera
wrote:
One thing not quite clear to me is how do we create the bitmap
representation starting from the array representation in midflight
without using twice as much memory transie
On Thu, Sep 15, 2016 at 2:40 AM, Simon Riggs wrote:
> On 14 September 2016 at 11:19, Pavan Deolasee
> wrote:
>
>>> In
>>> theory we could even start with the list of TIDs and switch to the
>>> bitmap if the TID list becomes larger than the bitmap would have been,
>>> but I don't know if it's wo
On Wed, Sep 14, 2016 at 1:23 PM, Alvaro Herrera
wrote:
> Robert Haas wrote:
>> Actually, I think that probably *is* worthwhile, specifically because
>> it might let us avoid multiple index scans in cases where we currently
>> require them. Right now, our default maintenance_work_mem value is
>> 6
Pavan Deolasee writes:
> On Wed, Sep 14, 2016 at 10:53 PM, Alvaro Herrera
> wrote:
>> One thing not quite clear to me is how do we create the bitmap
>> representation starting from the array representation in midflight
>> without using twice as much memory transiently. Are we going to write
>> t
On Wed, Sep 14, 2016 at 10:53 PM, Alvaro Herrera
wrote:
>
>
> One thing not quite clear to me is how do we create the bitmap
> representation starting from the array representation in midflight
> without using twice as much memory transiently. Are we going to write
> the array to a temp file, fr
On 14 September 2016 at 11:19, Pavan Deolasee wrote:
>> In
>> theory we could even start with the list of TIDs and switch to the
>> bitmap if the TID list becomes larger than the bitmap would have been,
>> but I don't know if it's worth the effort.
>>
>
> Yes, that works too. Or may be even bett
Robert Haas wrote:
> Actually, I think that probably *is* worthwhile, specifically because
> it might let us avoid multiple index scans in cases where we currently
> require them. Right now, our default maintenance_work_mem value is
> 64MB, which is enough to hold a little over ten million tuples
On Wed, Sep 14, 2016 at 12:17 PM, Robert Haas wrote:
> For instance, one idea to grow memory usage incrementally would be to
> store dead tuple information separately for each 1GB segment of the
> relation. So we have an array of dead-tuple-representation objects,
> one for every 1GB of the relat
On Wed, Sep 14, 2016 at 8:47 PM, Robert Haas wrote:
>
>
> I am kind of doubtful about this whole line of investigation because
> we're basically trying pretty hard to fix something that I'm not sure
> is broken. I do agree that, all other things being equal, the TID
> lookups will probably be
On Sep 14, 2016 5:18 PM, "Robert Haas" wrote:
>
> On Wed, Sep 14, 2016 at 8:16 AM, Pavan Deolasee
> wrote:
> > Ah, thanks. So MaxHeapTuplesPerPage sets the upper boundary for the per page
> > bitmap size. That's about 36 bytes for 8K page. IOW if on an average there
> > are 6 or more dead tuples p
On Wed, Sep 14, 2016 at 12:17 PM, Robert Haas wrote:
>
> I am kind of doubtful about this whole line of investigation because
> we're basically trying pretty hard to fix something that I'm not sure
> is broken. I do agree that, all other things being equal, the TID
> lookups will probably be fa
On Wed, Sep 14, 2016 at 8:16 AM, Pavan Deolasee
wrote:
> Ah, thanks. So MaxHeapTuplesPerPage sets the upper boundary for the per page
> bitmap size. That's about 36 bytes for 8K page. IOW if on an average there
> are 6 or more dead tuples per page, bitmap will outperform the current
> representatio
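The figures in that quote can be checked directly (assuming 8kB pages, where MaxHeapTuplesPerPage is 291 and an ItemPointerData is 6 bytes):

#include <stdio.h>

int
main(void)
{
    int     max_tuples = 291;       /* MaxHeapTuplesPerPage on 8kB pages */
    double  bitmap_bytes = max_tuples / 8.0;    /* ~36.4 bytes per page */
    int     tid_bytes = 6;          /* sizeof(ItemPointerData) */

    /* The per-page bitmap costs about as much as six TIDs, so pages with
     * roughly six or more dead tuples favor the bitmap representation. */
    printf("bitmap: %.1f bytes/page, i.e. %.1f TIDs' worth\n",
           bitmap_bytes, bitmap_bytes / tid_bytes);
    return 0;
}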
On Wed, Sep 14, 2016 at 5:32 PM, Robert Haas wrote:
> On Wed, Sep 14, 2016 at 5:45 AM, Pavan Deolasee
> wrote:
> > Another interesting bit about these small tables is that the largest used
> > offset for these tables never went beyond 291 which is the value of
> > MaxHeapTuplesPerPage. I don't k
On Wed, Sep 14, 2016 at 5:45 AM, Pavan Deolasee
wrote:
> Another interesting bit about these small tables is that the largest used
> offset for these tables never went beyond 291 which is the value of
> MaxHeapTuplesPerPage. I don't know if there is something that prevents
> inserting more than M
On Wed, Sep 14, 2016 at 8:47 AM, Pavan Deolasee
wrote:
>
>>
> Sawada-san offered to reimplement the patch based on what I proposed
> upthread. In the new scheme of things, we will allocate a fixed size bitmap
> of length 2D bits per page where D is average page density of live + dead
> tuples. (T
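Taken literally, that sizing rule works out as below (a sketch only; the function name and parameters are invented for illustration, and D is whatever average tuple density the caller has measured):

#include <stddef.h>
#include <stdint.h>

/* 2*D bits per heap page, where D is the average number of live plus dead
 * tuples per page, rounded up to whole bytes per page. */
static size_t
bitmap_bytes_for_relation(uint64_t rel_pages, double avg_tuples_per_page)
{
    size_t  bytes_per_page = (size_t) ((2.0 * avg_tuples_per_page + 7.0) / 8.0);

    return (size_t) rel_pages * bytes_per_page;
}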
On Wed, Sep 14, 2016 at 12:21 AM, Robert Haas wrote:
> On Fri, Sep 9, 2016 at 3:04 AM, Masahiko Sawada
> wrote:
> > Attached PoC patch changes the representation of dead tuple locations
> > to the hashmap having tuple bitmap.
> > The one hashmap entry consists of the block number and the TID bit
On Tue, Sep 13, 2016 at 4:06 PM, Robert Haas wrote:
> On Tue, Sep 13, 2016 at 2:59 PM, Claudio Freire
> wrote:
>> I've finished writing that patch, I'm in the process of testing its CPU
>> impact.
>>
>> First test seemed to hint at a 40% increase in CPU usage, which seems
>> rather steep compar
On Tue, Sep 13, 2016 at 2:59 PM, Claudio Freire wrote:
> I've finished writing that patch, I'm in the process of testing its CPU
> impact.
>
> First test seemed to hint at a 40% increase in CPU usage, which seems
> rather steep compared to what I expected, so I'm trying to rule out
> some methodo
On Tue, Sep 13, 2016 at 3:51 PM, Robert Haas wrote:
> On Fri, Sep 9, 2016 at 3:04 AM, Masahiko Sawada wrote:
>> Attached PoC patch changes the representation of dead tuple locations
>> to the hashmap having tuple bitmap.
>> The one hashmap entry consists of the block number and the TID bitmap
>>
On Tue, Sep 13, 2016 at 11:51 AM, Robert Haas wrote:
> I think it's probably wrong to worry that an array-of-arrays is going
> to be meaningfully slower than a single array here. It's basically
> costing you some small number of additional memory references per
> tuple, which I suspect isn't all