On Thu, Nov 15, 2012 at 7:05 PM, Jeff Janes wrote:
> On Wed, Nov 14, 2012 at 11:49 AM, Tom Lane wrote:
>> Jeff Janes writes:
>>> On Thu, Nov 8, 2012 at 9:50 PM, Tom Lane wrote:
>>>> There are at least three ways we could whack that mole: ...
>>>>
>
On Fri, Nov 23, 2012 at 7:22 PM, Bruce Momjian wrote:
> On Mon, Nov 19, 2012 at 12:11:26PM -0800, Jeff Janes wrote:
>
>>
>> Yes, it is with synchronous_commit=off. (or if it wasn't originally,
>> it is now, with the same result)
>>
>> Applying your fsy
dn't it be more useful to
report the amount of work that vacuum is specialized for doing for this
number? I don't see the utility at all of reporting what it is
currently reporting.
Am I overlooking something?
Cheers,
Jeff
oup by checksum % for various
primes, and not seeing any skew), and I didn't see any worrying
patterns.
Regards,
Jeff Davis
replace-tli-with-checksums-20121125.patch.gz
Description: GNU Zip compressed data
checksums-20121125.patch.gz
Description: GNU Zip compressed data
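The skew check mentioned above (grouping by checksum modulo various primes) can be sketched roughly as follows. This is a generic illustration only: it uses CRC32 as a stand-in checksum, not PostgreSQL's actual page-checksum algorithm, and the input data is synthetic.

```python
# Rough sketch of a checksum-skew check: bucket checksums by
# "checksum % p" for several primes and verify no residue class is
# wildly over-represented. CRC32 is a stand-in here, not PostgreSQL's
# page-checksum algorithm.
from collections import Counter
import zlib

def bucket_counts(checksums, p):
    """Count how many checksums fall into each residue class mod p."""
    return Counter(c % p for c in checksums)

checksums = [zlib.crc32(i.to_bytes(4, "little")) for i in range(10000)]
for p in (7, 11, 13, 251):
    counts = bucket_counts(checksums, p)
    expected = 10000 / p
    # a badly skewed checksum would pile up in a few buckets
    assert max(counts.values()) < 2 * expected
```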
On Wed, 2012-11-21 at 18:25 -0800, Jeff Davis wrote:
> Follow up to discussion:
> http://archives.postgresql.org/pgsql-hackers/2012-11/msg00817.php
>
> I worked out a patch that replaces PD_ALL_VISIBLE with calls to
> visibilitymap_test. It rips out a lot of complexity, with a net
've mentioned, and see how
the numbers turn out. I'm worried that I'll need more than 4 cores to
show anything though, so perhaps someone with a many-core box would be
interested to test this out?
Regards,
Jeff Davis
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
there is no
danger of seeing the state of the bit before it was last cleared.
Callers that don't have the data buffer yet, such as an index only scan
or a VACUUM that is skipping pages, must handle the concurrency as
appropriate."
Regards,
Jeff Davis
ld explicitly opt out of
the overloading feature at DDL time (somewhat like what Simon suggested
in another reply). E.g. CREATE {UNIQUE|OVERLOADED} FUNCTION ...
I'm not proposing that; in general I am very wary of changes to the type
system. I'm just saying that, if we do have special rules
'm just saying we should be careful of
these situations and not make them more likely than necessary.
Regards,
Jeff Davis
ion
about what types the user intended and which function they intended to
call. In such an extensible system, that worries me on several fronts.
That being said, I'm not outright in opposition to the idea of making
improvements like this, I just think we should do so cautiously.
Regards,
s that it seems like it solves a pretty high percentage
> of the problem cases without requiring any explicit user action.
What user action are you concerned about? If we (eventually) made the
non-overloaded case the default, would that resolve your concerns?
Regards,
Jeff Davis
still categorizing functions into
"overloaded" and "non-overloaded", but you are doing so at runtime based
on the current contents of the catalog.
Regards,
Jeff Davis
I don't have the numbers at hand, but if my relcache patch is
accepted, then "-1" stops being faster.
-1 gets rid of the AtEOXact relcache N^2 behavior, but at the cost of
invoking a different N^2, that one in the stats system.
Cheers,
Jeff
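The quadratic behavior being traded off here can be illustrated generically (this is not PostgreSQL code): per-item cleanup that rescans or reshuffles a list each time costs O(N^2) total, while tracking the affected entries directly keeps end-of-transaction work linear.

```python
# Generic illustration of the N^2-cleanup pattern (not PostgreSQL code):
# popping from the front of a list shifts every remaining element, so
# total work grows quadratically; a set pops in O(1) per item.

def quadratic_cleanup(items):
    pending = list(items)
    work = 0
    while pending:
        pending.pop(0)          # shifts all remaining elements left
        work += len(pending) + 1
    return work

def linear_cleanup(items):
    pending = set(items)
    work = 0
    while pending:
        pending.pop()           # O(1) removal
        work += 1
    return work

assert quadratic_cleanup(range(100)) == 5050   # 100 + 99 + ... + 1
assert linear_cleanup(range(100)) == 100
```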
usually wouldn't show up. I didn't see any difference
between patch #1 and patch #3.
On the delete test I detected no difference between #1 and #2 at all.
I think someone with access to a larger box may need to test this. Or,
if someone has a more specific suggestion about how I can
. There was some
discussion about subtransactions, but those problems only seemed to
appear when the CREATE and the INSERT/COPY happened in different
subtransactions.
So, I guess my question is, why the partial revert?
Regards,
Jeff Davis
s gone. It's not throwing a compiler warning for some reason.
Regards,
Jeff Davis
testing to show whether this is
> insane or not.
My understanding is that Greg Smith is already working on tests here, so
I will wait for his results.
> But this looks in good shape for commit otherwise.
Great!
For now, I rebased the patches against master, and did some very minor
cle
On Tue, 2012-12-04 at 10:15 +, Simon Riggs wrote:
> On 4 December 2012 01:34, Jeff Davis wrote:
> >
> I assume that refers to the discussion here:
> >
> > http://archives.postgresql.org/pgsql-hackers/2012-02/msg01177.php
> >
> > But that was quite a
On Tue, 2012-12-04 at 01:03 -0800, Jeff Davis wrote:
> > 3. I think we need an explicit test of this feature (as you describe
> > above), rather than manual testing. corruptiontester?
>
> I agree, but I'm not 100% sure how to proceed. I'll look at Kevin's
s is urgent. The error-message issue in 9.1.6 and
9.2.2 is merely annoying, while the early-opening one in 9.2.0 and
9.2.1 seems fundamentally unsafe.
Cheers,
Jeff
On Fri, 2012-11-30 at 13:16 -0800, Jeff Davis wrote:
> I tried for quite a while to show any kind of performance difference
> between checking the VM and checking PD_ALL_VISIBLE on a 12-core box (24
> if you count HT).
>
> Three patches in question:
> 1. Current unpatched m
ditions enough.
Anyway, the partial revert looks good, and the commit message seems
appropriate (essentially, the code got ahead of the discussion).
Regards,
Jeff Davis
a problem, because the other session won't see the tuple in
pg_class until the creating transaction commits, at which point the rows
have committed, too (because this would only kick in when the rows are
loaded in the same transaction as the CREATE).
So, yes, it's like TRUNCATE in th
> which doesn't guarantee that all rows are frozen at the end of it...
Also, if the set of conditions changes in the future, we would have a
problem if that caused new errors to appear.
I think a WARNING might make more sense than a NOTICE, but I don't have
a strong opinion about that.
if the
transaction doesn't commit, it would still be possible to clean out the
dead tuples with a VACUUM, because no information has really been lost
in the process. So there may yet be some kind of safe protocol to set
these even during a load into an existing table...
Regards,
Jeff Da
>
> Or to not dump invalid indexes at all in --binary-upgrade mode.
+1
Jeff Davis
able is created and loaded in the same transaction, but there
may be some more sophisticated approaches, as well.
Regards,
Jeff Davis
hat problem.
That being said, it's a reasonable position, and I am fine with either
approach.
Regards,
Jeff Davis
that we change the semantics
of existing commands, unless it's to improve the serializability of DDL.
Regards,
Jeff Davis
On Thu, 2012-12-06 at 22:34 -0500, Stephen Frost wrote:
> * Jeff Davis (pg...@j-davis.com) wrote:
> > That is documented in the committed patch -- it's a trade, basically
> > saying that you lose isolation but avoid extra writes. It seems
> > reasonable that the
o me you need considerable expertise to figure out how to do
optimal recovery (i.e. losing the least transactions) in this
situation, and that that expertise cannot be automated. Do you trust
a partial file from a good hard drive, or a complete file from a
partially melted pg_xlog?
Cheers,
Jeff
; CF page.
Should active discussion on the hackers list prevent someone from
doing a review? I know I am reluctant to review a patch when it seems
it is still being actively redesigned/debated by others.
Maybe a new status of "needs design consensus" would be useful.
Cheers,
Jeff
"constant value" in the comments in several places. Would
"query value" or "search key" be better?
Regards,
Jeff Davis
e same transaction
anyway).
Perhaps we can take some partial steps toward MVCC-safe access? For
instance, how about trying to recheck the xmin of a pg_class tuple when
starting a scan?
Regards,
Jeff Davis
cating this use case.
In particular, I didn't get a response to:
http://archives.postgresql.org/message-id/1354055056.1766.50.camel@sussancws0025
For what it's worth, I'm glad that people like you are pushing on these
usability issues, because it can be hard for insiders to see them
som
t looks like we still haven't reached consensus on the design here. Are
we still discussing, or should this be moved to the next CF?
Regards,
Jeff Davis
On Tue, 2012-11-27 at 14:24 -0800, Jeff Davis wrote:
> On Tue, 2012-11-27 at 15:41 -0500, Robert Haas wrote:
> > I can't quite see how a non-overloaded flag would work, unless we get
> > rid of schemas.
>
> It may work to pick the first schema in the search path that h
Ds are
hidden in the description.
Marking "ready for committer".
Regards,
Jeff Davis
*** a/doc/src/sgml/catalogs.sgml
--- b/doc/src/sgml/catalogs.sgml
***
*** 427,432
--- 427,439
+ oid
+ oid
+
+ Row identifier
mes showed mixed results,
however, and I didn't dig in much further because you've already done
significant testing.
Marking this one ready again.
Regards,
Jeff Davis
gist_choose_bloat-0.5A.patch.gz
Description: GNU Zip compressed data
't much like the option name "randomization". It's not clear
> what's been randomized. I'd prefer something like
> "distribute_on_equal_penalty", although that's really long. Better ideas?
I agree that "randomization" is vague, but I can'
for
corrupting the data as a test? Or should it be part of a
corruption-testing harness (pg_corruptiontester?), that introduces the
corruption and then verifies that it's properly detected?
Regards,
Jeff Davis
in your version need more improvement so I can
understand.
I found it easier to reason in terms of horizontal and vertical lines,
and which quadrants they crossed, and then work out the edge cases. I'm
not sure if that reasoning was correct, but it seemed to make more sense
to me.
R
On Sun, Dec 16, 2012 at 8:38 AM, Tomas Vondra wrote:
> On 8.12.2012 03:08, Jeff Janes wrote:
>>
>> It seems to me you need considerable expertise to figure out how to do
>> optimal recovery (i.e. losing the least transactions) in this
>> situation, and that that experti
d then just skip over the page if it's
corrupt, depending on a GUC. That would at least allow sequential scans
to (partially) work, which might be good enough for some data recovery
situations. If a catalog index is corrupted, that could just be rebuilt.
Haven't thought about the details,
[moved to hackers]
On Wednesday, December 5, 2012, Tom Lane wrote:
> Jeff Janes writes:
> > I now see where the cost is coming from. In commit 21a39de5809 (first
> > appearing in 9.2) the "fudge factor" cost estimate for large indexes
> > was increased by about
s corrupt"
> and give some thought to what the user will do next. Indexes are a
> good case, because we can/should report the block error, mark the
> index as invalid and then hint that it should be rebuilt.
Agreed; this applies to any derived data.
I don't think it will be very
during
replication, so we should not replicate corrupt blocks (of course,
that's not implemented yet, so it's still a concern for now).
And we can also have ways to do background/offline checksum verification
with a separate utility.
Regards,
Jeff Davis
[moved to hackers]
On Wednesday, December 5, 2012, Tom Lane wrote:
> Jeff Janes writes:
> > I now see where the cost is coming from. In commit 21a39de5809 (first
> > appearing in 9.2) the "fudge factor" cost estimate for large indexes
> > was increased by abo
On Tue, Dec 18, 2012 at 5:05 PM, Jeff Janes wrote:
Sorry for the malformed and duplicate post. I was not trying to be
emphatic; I was testing out gmail offline. Clearly the test didn't go
too well.
Jeff
On Tue, 2012-12-04 at 01:03 -0800, Jeff Davis wrote:
> > 4. We need some general performance testing to show whether this is
> > insane or not.
I ran a few tests.
Test 1 - find worst-case overhead for the checksum calculation on write:
fsync = off
bgwriter_lru_
e are no vacuums, I agree.
>
If the table is randomly updated over its entire size, then pretty much
every block will be not-all-visible (and so disqualified from IOS) before
you hit the default 20% vacuum threshold. I wonder if there ought not be
another vac threshold, based on vm density rather than estimated obsolete
tuple density.
Cheers,
Jeff
s at a size where not even all the index fit in RAM, to maximize
the benefit of not having to visit the table.
However, the boost started going away due to vm clearance long before
autovacuum kicked in at default settings.
Cheers,
Jeff
trick?
You could say that benchmarks should run long enough to average out such
changes, but needing to run a benchmark that long can make some kinds of
work (like git bisect) unrealistic rather than merely tedious.
Cheers,
Jeff
tion, the
plancache changes slowed it down by 300%, and this patch doesn't change
that. But that seems to be down to the insertion getting planned
repeatedly, because it decides the custom plan is cheaper than the generic
plan. Whatever savings the custom plan may have are clearly less than the
cost of doing the planning repeatedly.
Cheers,
Jeff
On Wednesday, January 2, 2013, Tom Lane wrote:
> Jeff Janes writes:
> > Using a RULE-based partitioning instead with row by row insertion, the
> > plancache changes slowed it down by 300%, and this patch doesn't change
> > that. But that seems to be down to the
On Saturday, January 5, 2013, Tom Lane wrote:
> Jeff Janes writes:
> > [moved to hackers]
> > On Wednesday, December 5, 2012, Tom Lane wrote:
> >> Hm. To tell you the truth, in October I'd completely forgotten about
> >> the January patch, and was
On Wed, Jan 9, 2013 at 3:59 AM, Simon Riggs wrote:
> On 9 November 2012 18:50, Jeff Janes wrote:
>
>> quadratic behavior in the resource owner/lock table
>
> I didn't want to let that particular phrase go by without saying
> "exactly what behaviour is that?"
On Wednesday, January 9, 2013, Simon Riggs wrote:
> On 23 November 2012 22:34, Jeff Janes wrote:
>
> > I got rid of need_eoxact_work entirely and replaced it with a short
> > list that fulfills the functions of indicating that work is needed,
> > and suggesting w
On Tue, 2012-12-04 at 01:03 -0800, Jeff Davis wrote:
> For now, I rebased the patches against master, and did some very minor
> cleanup.
I think there is a problem here when setting PD_ALL_VISIBLE. I thought I
had analyzed that before, but upon review, it doesn't look rig
had
many sequential scans going on simultaneously, or was it 100+
different tables each with one sequential scan going on? (You said
different big datasets, but I don't know if these are in different
tables, or in common tables with a column to distinguish them.)
Cheers,
Jeff
BufferDirty/SetBufferCommitInfoNeedsSave, to separate the
concept that it may need a WAL record from the concept that actually
dirtying the page is optional.
Another idea is to make the WAL action for visibilitymap_set have
another item in the chain pointing to the heap buffer, and bump the heap
LSN.
Reg
no other concerns.
I guess the src/tutorial directory could participate in regression tests,
in which case this problem would have been detected when introduced, but I
don't think I can demand that you invent regression tests for a feature you
are just fixing rather than creating.
Thanks for the patch,
Jeff
discuss it. I think that
parallel execution is huge and probably more likely for 9.5 (10.0?) than
9.4 for the general case (maybe some special cases for 9.4, like index
builds). Yet the single biggest risk I see to the future of the project is
the lack of parallel execution.
Cheers,
Jeff
omething more transparent than that.
Cheers,
Jeff
't find it. But it still doesn't give you CPU parallelism. The nice
thing about CPU parallelism is that it usually brings some amount of IO
parallelism for free, while the reverse less likely to be so.
Cheers,
Jeff
On Tuesday, January 15, 2013, Josh Kupershmidt wrote:
> On Tue, Jan 15, 2013 at 6:35 PM, Jeff Janes wrote:
>
> > Do you propose back-patching this? You could argue that this is a bug in
> > 9.1 and 9.2. Before that, they generate deprecation warnings, but
), so if the PD_ALL_VISIBLE patch is committed first
then it will make reviewing this patch easier. Regardless, the second
patch to be committed will need to be rebased on top of the first.
Regards,
Jeff Davis
replace-tli-with-checksums-20130116.patch.gz
Description: GNU Zip compressed
Rebased patch attached. No significant changes.
Regards,
Jeff Davis
rm-pd-all-visible-20130116.patch.gz
Description: GNU Zip compressed data
> offset, byte value, and what sort of operation to do: overwrite, AND,
> OR, XOR. I like XOR here because you can fix it just by running the
> program again.
Oh, good idea.
Regards,
Jeff Davis
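The appeal of XOR here is that it is self-inverse: applying the same mask at the same offset twice restores the original byte. A minimal sketch of such a corruption helper (hypothetical, not the actual tool discussed on-list) might look like:

```python
# Hypothetical byte-corruption helper for checksum testing: apply one
# of overwrite/AND/OR/XOR at a given offset. XOR is self-inverse, so
# running the identical command a second time repairs the damage.

def corrupt(data: bytes, offset: int, value: int, op: str = "xor") -> bytes:
    buf = bytearray(data)
    if op == "overwrite":
        buf[offset] = value
    elif op == "and":
        buf[offset] &= value
    elif op == "or":
        buf[offset] |= value
    elif op == "xor":
        buf[offset] ^= value    # applying the same XOR twice undoes it
    else:
        raise ValueError(f"unknown op: {op}")
    return bytes(buf)

page = bytes(range(256))
bad = corrupt(page, 10, 0xFF, "xor")
assert bad != page
assert corrupt(bad, 10, 0xFF, "xor") == page   # XOR restored the original
```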
's to the same table.
>
I think that is rather over-stating it. Even with unindexed untriggered
tables, I can get some benefit from doing hand-rolled parallel COPY before
the extension lock becomes an issue, at least on some machines. And with
triggered or indexed tables, all the more so.
Cheers,
Jeff
ation. But that would be a very simple WAL
routine, rather than the complex one that exists without the patch.
I suppose we could even try to bump the LSN without writing WAL somehow,
but it doesn't seem worth reasoning through that (setting a VM bit is
rare enough).
Regards,
Jeff D
On Thu, 2013-01-17 at 10:39 -0800, Jeff Davis wrote:
> On Thu, 2013-01-17 at 15:25 +0530, Pavan Deolasee wrote:
> > Now that I look at the patch, I wonder if there is another fundamental
> > issue with the patch. Since the patch removes WAL logging for the VM
> > set opera
ng the VM page pinned. I think the bottleneck is elsewhere;
although I am keeping the page pinned in this patch to prevent it from
becoming a bottleneck.
Regards,
Jeff Davis
lem if it ever came up.
However, the tests I did didn't show any problem there. The tests were
somewhat noisy, so perhaps I was doing something wrong, but it didn't
appear that looking at the VM page for every update was a bottleneck.
Regards,
Jeff Davis
'd need lots of backends accessing lots
> of different, small tables. That's not the use case we usually
> benchmark, but I think there are a fair number of people doing things
> like that - for example, imagine a hosting provider or web application
> with many databases or sch
On Thu, 2013-01-17 at 14:53 -0800, Jeff Davis wrote:
> Test plan:
>
> 1. Take current patch (without "skip VM check for small tables"
> optimization mentioned above).
> 2. Create 500 tables each about 1MB.
> 3. VACUUM them all.
> 4. Start 500 connections (
On Thu, 2013-01-17 at 21:03 +0100, Stefan Keller wrote:
> Hi Jeff
> I'm perhaps really late in this discussion but I just was made aware
> of that via the tweet from Josh Berkus about "PostgreSQL 9.3: Current
> Feature Status"
>
> What is the reason to digg
ke use of it. But that work is done at the datatype level, and PostGIS
only has a couple data types, so I don't think it will be a lot of work.
Regards,
Jeff Davis
bly not going to be a novice reviewer who does that.
Sometimes this type of high-level summary review does happen, at the senior
person's whim, but is not a formal part of the commit fest process.
What I don't know is how much work it takes for one of those senior people
to make one of those summary judgments, compared to how much it takes for
them to just do an entire review from scratch.
Cheers,
Jeff
two ideas for the back-branches? And to HEAD,
in case the more invasive patch doesn't make it in?
I have a preliminary nit-pick on the big patch. It generates a compiler
warning:
vacuumlazy.c: In function ‘lazy_scan_heap’:
vacuumlazy.c:445:9: warning: variable ‘prev_dead_count’ set but not used
[-Wunused-but-set-variable]
Thanks,
Jeff
On Sunday, January 20, 2013, Simon Riggs wrote:
> On 20 January 2013 18:42, Robert Haas wrote:
> > On Sat, Jan 19, 2013 at 5:21 PM, Jeff Janes wrote:
> >> Sometime this type of high-level summary review does happen, at the
> >> senior
On Sunday, January 20, 2013, Stephen Frost wrote:
> * Jeff Janes (jeff.ja...@gmail.com) wrote:
>
> > By making the list over-flowable, we fix a demonstrated pathological
> > workload (restore of huge schemas); we impose no detectable penalty to
> > normal workloads; and
On Sun, 2013-01-20 at 22:19 -0500, Tom Lane wrote:
> Robert Haas writes:
> > On Fri, Jan 18, 2013 at 3:31 AM, Jeff Davis wrote:
> >> So, I attached a new version of the patch that doesn't look at the VM
> >> for tables with fewer than 32 pages. That's the onl
On Mon, 2013-01-21 at 11:27 +0530, Pavan Deolasee wrote:
> I tend to agree. When I looked at the patch, I thought since its
> removing a WAL record (and associated redo logic), it has some
> additional value. But that was kind of broken (sorry, I haven't looked
> at the latest pa
table test results. That's a little questionable for
performance (except in those cases where few penalties are identical
anyway), but could plausibly be useful for a crash report or something.
Regards,
Jeff Davis
on't show signs either way.
But yes, I see that others are not interested in the benefits offered by
the patch, which is why I'm giving up on it. If people are concerned
about the costs, then I can fix those; but there's nothing I can do to
increase the benefits.
Regards,
On Mon, 2013-01-21 at 12:00 +0200, Heikki Linnakangas wrote:
> On 21.01.2013 11:10, Jeff Davis wrote:
> > That confuses me. The testing was to show it didn't hurt other workloads
> > (like scans or inserts/updates/deletes); so the best possible result is
> > that they d
The docs on bgworker twice refer to "HOT standby". I don't think the
"hot" needs emphasis in either case, and if it does, making it look like
an acronym (one already used for something else) is probably not the way
to do it.
Patch to HEAD attached.
Cheers,
Jeff
just for this purpose,
the other should be able to use its existing one.)
Cheers,
Jeff
On Thu, Jan 24, 2013 at 1:28 AM, Pavan Deolasee
wrote:
> Hi Jeff,
>
> On Thu, Jan 24, 2013 at 2:41 PM, Jeff Janes wrote:
>>
>> lazy_vacuum_page now pins and unpins the vmbuffer for each page it marks
>> all-visible, which seems like a lot of needless traffic since the
t easier to use mingw than MSVC for someone used to
building on Linux? (mingw certainly does not seem to have the
advantage of being fast!)
Would you like to put this somewhere on wiki.postgresql.org, or would
you mind if I do so?
Thanks for the primer,
Jeff
On Wed, 2013-01-16 at 17:38 -0800, Jeff Davis wrote:
> New version of checksums patch.
And another new version of both patches.
Changes:
* Rebased.
* Rename SetBufferCommitInfoNeedsSave to MarkBufferDirtyHint. Now that
it's being used more places, it makes sense to give it a more gene
too late to change one's
mind. So the problem can be immediately fixed and retried.
Except, is there perhaps some way for the user to decide to promote
WARNINGs to ERRORs on for a given command/transaction?
Cheers,
Jeff
terialize (cost=0.00..2334.00 rows=10 width=4)
-> Seq Scan on foo2 (cost=0.00..1443.00 rows=10 width=4)
Cheers,
Jeff
'm missing?
By the way, the approach I took was to add the heap buffer to the WAL
chain of the XLOG_HEAP2_VISIBLE wal record when doing log_heap_visible.
It seemed simpler to understand than trying to add a bunch of options to
MarkBufferDirty.
Regards,
Jeff Davis
with_hash_value and found that most
of the time is spent on shared hash tables, not private ones. And the
time attributed to it for the shared hash tables mostly seems to be
due to the time it takes to fight cache lines away from other CPUs. I
suspect the same thing is true of LWLockAcquire.
Chee
On Wed, Nov 24, 2010 at 2:34 PM, Robert Haas wrote:
> On Wed, Nov 24, 2010 at 5:33 PM, Jeff Janes wrote:
>>
>> I've played a bit with hash_search_with_hash_value and found that most
>> of the time is spent on shared hash tables, not private ones. And the
>> time
'll just ask.
If the attacking client just waits a few milliseconds for a response
and then drops the socket, opening a new one, will the server-side
walking-dead process continue to be charged against max_connections
until its sleep expires?
Cheers,
Jeff
On Sun, Nov 28, 2010 at 5:38 AM, Robert Haas wrote:
> On Sat, Nov 27, 2010 at 2:44 PM, Jeff Janes wrote:
>> I haven't thought of a way to test this, so I guess I'll just ask.
>> If the attacking client just waits a few milliseconds for a response
>> and then drop
On Sun, Nov 28, 2010 at 3:57 PM, Robert Haas wrote:
> On Sun, Nov 28, 2010 at 5:41 PM, Jeff Janes wrote:
>> On Sun, Nov 28, 2010 at 5:38 AM, Robert Haas wrote:
>>> On Sat, Nov 27, 2010 at 2:44 PM, Jeff Janes wrote:
>>
>>>> I haven't thought of a w