> It seems plausible to fix the first one, but how would you fix the
> second one? You either allow SET ROLE (which you need, to support the
> pooler changing authorization), or you don't. There doesn't seem to be
> a usable middleground.
Well, this is why such a pooler would *have* to be built
> Please explain more precisely what is wrong with SET SESSION
> AUTHORIZATION / SET ROLE.
1) Session GUCs do not change with a SET ROLE (this is a TODO I haven't
had any time to work on)
2) Users can always issue their own SET ROLE and then "hack into" other
users' data.
At some point Hackers should look at pg vs MySQL multi-tenancy but it
is way tangential today.
My understanding is that our schemas work like MySQL databases; and
our databases are an even higher level of isolation. No?
That's correct. Drizzle is looking at implementing a feature like our
On 12/06/2010 09:38 AM, Tom Lane wrote:
Another issue that would require some thought is what algorithm the
postmaster uses for deciding to spawn new children. But that doesn't
sound like a potential showstopper.
We'd probably want a couple of different ones, optimized for different
connectio
Robert Haas writes:
> One possible way to make an improvement in this area would be to
> move the responsibility for accepting connections out of the
> postmaster. Instead, you'd have a group of children that would all
> call accept() on the socket, and the OS would arbitrarily pick one to
> r
On Sat, Dec 4, 2010 at 8:04 PM, Jeff Janes wrote:
> But who would be doing the passing? For the postmaster to be doing
> that would probably go against the minimalist design. It would have
> to keep track of which backend is available, and which db and user it
> is primed for. Perhaps a feature
On Sun, Dec 5, 2010 at 2:45 PM, Rob Wultsch wrote:
> I think you have read a bit more into what I have said than is
> correct. MySQL can deal with thousands of users and separate schemas
> on commodity hardware. There are many design decisions (some
> questionable) that have made MySQL much bette
On Sun, Dec 5, 2010 at 12:45 PM, Rob Wultsch wrote:
> One thing I would suggest the PG community keep in mind while
> talking about built-in connection process caching is that it is a very
> nice feature that memory leaks caused by a connection do not persist
> and continue growing forever.
* no coordination of restarts/configuration changes between the cluster
and the pooler
* you have two pieces of config files to configure your pooling settings
(having all that available say in a catalog in pg would be awesome)
* you lose all of the advanced authentication features of pg (becaus
Robert Haas wrote:
> Jeff Janes wrote:
>> Oracle's backend start up time seems to be way higher than PG's.
> Interesting. How about MySQL and SQL Server?
My recollection of Sybase ASE is that establishing a connection
doesn't start a backend or even a thread. It establishes a network
conn
On Wednesday 01 December 2010 15:20:32 Robert Haas wrote:
> On Tue, Nov 30, 2010 at 11:32 PM, Jeff Janes wrote:
> > On 11/28/10, Robert Haas wrote:
> >> To some degree we're a
> >> victim of our own flexible and extensible architecture here, but I
> >> find it pretty unsatisfying to just say, OK,
On mån, 2010-11-29 at 13:10 -0500, Tom Lane wrote:
> Rolling in calloc in place of
> malloc/memset made no particular difference either, which says that
> Fedora 13's glibc does not have any optimization for that case as I'd
> hoped.
glibc's calloc is either mmap of /dev/zero or malloc followed by
Robert Haas wrote:
> In a close race, I don't think we should get bogged down in
> micro-optimization here, both because micro-optimizations may not gain
> much and because what works well on one platform may not do much at
> all on another. The more general issue here is what to do about our
> hi
On Mon, Nov 29, 2010 at 12:50 PM, Tom Lane wrote:
> (On the last two machines I had to cut the array size to 256MB to avoid
> swapping.)
You weren't kidding about that "not so recent" part. :-)
>> This makes me pretty
>> pessimistic about the chances of a meaningful speedup here.
>
> Yeah, this
Jeff Janes writes:
> Are you sure you haven't just moved the page-fault time to a part of
> the code where it still exists, but just isn't being captured and
> reported?
I'm a bit suspicious about that too. Another thing to keep in mind
is that Robert's original program doesn't guarantee that th
Robert Haas writes:
> I guess the word "run" is misleading (I wrote the program in 5
> minutes); it's just zeroing the same chunk twice and measuring the
> times. The difference is presumably the page fault overhead, which
> implies that faulting is two-thirds of the overhead on MacOS X and
> thr
On Mon, Nov 29, 2010 at 12:24 PM, Andres Freund wrote:
> Hm. A quick test shows that its quite a bit faster if you allocate memory
> with:
> size_t s = 512*1024*1024;
> char *bss = mmap(0, s, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_POPULATE|
> MAP_ANONYMOUS, -1, 0);
Numbers?
Robert Haas writes:
>> Well, the lack of extensible XLOG support is definitely a big handicap
>> to building a *production* index AM as an add-on. But it's not such a
>> handicap for development.
>
> Realistically, it's hard for me to imagine that anyone would go to the
> trouble of building it a
Robert Haas writes:
> Yeah, very true. What's a bit frustrating about the whole thing is
> that we spend a lot of time pulling data into the caches that's
> basically static and never likely to change anywhere, ever.
True. I wonder if we could do something like the relcache init file
for the ca
Greg Stark writes:
> On Mon, Nov 29, 2010 at 12:33 AM, Tom Lane wrote:
>> Another question that would be worth asking here is whether the
>> hand-baked MemSet macro still outruns memset on modern architectures.
>> I think it's been quite a few years since that was last tested.
> I know glibc has
On Mon, Nov 29, 2010 at 12:33 AM, Tom Lane wrote:
> The most portable way to do that would be to use calloc instead of malloc,
> and hope that libc is smart enough to provide freshly-mapped space.
> It would be good to look and see whether glibc actually does so,
> of course. If not we might end u
BTW, this might be premature to mention pending some tests about mapping
versus zeroing overhead, but it strikes me that there's more than one
way to skin a cat. I still think the idea of statically allocated space
sucks. But what if we rearranged things so that palloc0 doesn't consist
of palloc-
Robert Haas writes:
> One possible way to get a real speedup here would be to look for ways
> to trim the number of catcaches.
BTW, it's not going to help to remove catcaches that have a small
initial size, as the pg_am cache certainly does. If the bucket zeroing
cost is really something to mini
Robert Haas writes:
> After our recent conversation
> about KNNGIST, it occurred to me to wonder whether there's really any
> point in pretending that a user can usefully add an AM, both due to
> hard-wired planner knowledge and due to lack of any sort of extensible
> XLOG support. If not, we cou
Robert Haas writes:
> The more general issue here is what to do about our
> high backend startup costs. Beyond trying to recycle backends for new
> connections, as I've previously proposed and with all the problems it
> entails, the only thing that looks promising here is to try to somehow
> cut do
On Sat, Nov 27, 2010 at 11:18 PM, Bruce Momjian wrote:
> Not sure that information moves us forward. If the postmaster cleared
> the memory, we would have COW in the child and probably be even slower.
Well, we can determine the answers to these questions empirically. I
think some more scrutiny
Robert Haas wrote:
> >> In fact, it wouldn't be that hard to relax the "known at compile time"
> >> constraint either. We could just declare:
> >>
> >> char lotsa_zero_bytes[NUM_ZERO_BYTES_WE_NEED];
> >>
> >> ...and then peel off chunks.
> > Won't this just cause loads of additional pagefaults aft
Robert Haas writes:
> I don't see anything for BUS OUTSTANDING. For CACHE and MISS I have
> some options:
> DATA_CACHE_MISSES: (counter: all)
> L3_CACHE_MISSES: (counter: all)
Those two look promising, though I can't claim to be an expert.
regards, tom lane
Robert Haas writes:
> On Wed, Nov 24, 2010 at 4:05 PM, Tom Lane wrote:
>> (You might be able to confirm or disprove this theory if you ask
>> oprofile to count memory access stalls instead of CPU clock cycles...)
> I don't see an event for that.
You probably want something involving cache misse
On Wed, Nov 24, 2010 at 4:05 PM, Tom Lane wrote:
> (You might be able to confirm or disprove this theory if you ask
> oprofile to count memory access stalls instead of CPU clock cycles...)
I don't see an event for that.
# opcontrol --list-events | grep STALL
INSTRUCTION_FETCH_STALL: (counter: al
Robert Haas writes:
> On Nov 24, 2010, at 4:05 PM, Andres Freund wrote:
>> Yes, but only once. Also scrubbing a page is faster than copying it... (and
>> there were patches floating around to do that in advance, not sure if they
>> got
>> integrated into mainline linux)
> I'm not following -
On Nov 24, 2010, at 4:05 PM, Andres Freund wrote:
>>>
>>> Won't this just cause loads of additional pagefaults after fork() when
>>> those pages are used the first time and then a second time when first
>>> written to (to copy it)?
>>
>> Aren't we incurring those page faults anyway, for whatever
Robert Haas writes:
> On Wed, Nov 24, 2010 at 3:53 PM, Andres Freund wrote:
>>> The idea I had was to go the other way and say, hey, if these hash
>>> tables can't be expanded anyway, let's put them on the BSS instead of
>>> heap-allocating them.
>> Won't this just cause loads of additional page
Robert Haas writes:
> Full results, and call graph, attached. The first obvious fact is
> that most of the memset overhead appears to be coming from
> InitCatCache.
AFAICT that must be the palloc0 calls that are zeroing out (mostly)
the hash bucket headers. I don't see any real way to make that
Robert Haas writes:
> Revised patch attached.
The asserts in AtProcExit_LocalBuffers are a bit pointless since
you forgot to remove the code that forcibly zeroes LocalRefCount[]...
otherwise +1.
regards, tom lane
Gerhard Heift writes:
> On Wed, Nov 24, 2010 at 01:20:36PM -0500, Robert Haas wrote:
>> I tried configuring oprofile with --callgraph=10 and then running
>> oprofile with -c, but it gives kooky looking output I can't interpret.
> Have a look at the wiki:
> http://wiki.postgresql.org/wiki/Profilin
On Wednesday 24 November 2010 19:01:32 Robert Haas wrote:
> Somehow I don't think I'm going to get much further with this without
> figuring out how to get oprofile to cough up a call graph.
I think to do that sensibly you need CFLAGS="-O2 -fno-omit-frame-pointer"...
On Wed, Nov 24, 2010 at 1:06 PM, Tom Lane wrote:
> Robert Haas writes:
>> OK, patch attached.
>
> Two comments:
Revised patch attached.
I tried configuring oprofile with --callgraph=10 and then running
oprofile with -c, but it gives kooky looking output I can't interpret.
For example:
6
Robert Haas writes:
> OK, patch attached.
Two comments:
1. A comment would help, something like "Assert we released all buffer pins".
2. AtProcExit_LocalBuffers should be redone the same way, for
consistency (it likely won't make any performance difference).
Note the comment for AtProcExit_Loca
Robert Haas writes:
> On Wed, Nov 24, 2010 at 10:25 AM, Tom Lane wrote:
>> Or make it execute only in assert-enabled mode, perhaps.
> But making the check execute only in assert-enabled mode
> doesn't seem right, since the check actually acts to mask other coding
> errors, rather than reveal the
On Wed, Nov 24, 2010 at 10:25 AM, Tom Lane wrote:
> Robert Haas writes:
>> The first optimization that occurred to me was "remove the loop
>> altogether".
>
> Or make it execute only in assert-enabled mode, perhaps.
>
> This check had some use back in the bad old days, but the ResourceOwner
> mec
Robert Haas writes:
> On Wed, Nov 24, 2010 at 2:10 AM, Heikki Linnakangas
> wrote:
>> Micro-optimizing that search for the non-zero value helps a little bit
>> (attached). Reduces the percentage shown by oprofile from about 16% to 12%
>> on my laptop.
That "micro-optimization" looks to me like y
On Wed, Nov 24, 2010 at 2:10 AM, Heikki Linnakangas
wrote:
>> Anything we can do about this? That's a lot of overhead, and it'd be
>> a lot worse on a big machine with 8GB of shared_buffers.
>
> Micro-optimizing that search for the non-zero value helps a little bit
> (attached). Reduces the perce
Per previous threats, I spent some time tonight running oprofile
(using the directions Tom Lane was foolish enough to provide me back
in May). I took testlibpq.c and hacked it up to just connect to the
server and then disconnect in a tight loop without doing anything
useful, hoping to measure the