On Fri, Apr 12, 2013 at 9:52 PM, Andres Freund wrote:
> On 2013-04-12 12:14:24 -0400, Tom Lane wrote:
> > Andrew Dunstan writes:
> > > On 04/12/2013 10:15 AM, Tom Lane wrote:
> > >> There's 0 chance of making that work, because the two databases wouldn't
> > >> have the same notions of committe
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Stephen Frost writes:
> > I'm certainly curious about those, but I'm also very interested in the
> > possibility of making NTUP_PER_BUCKET much smaller, or perhaps variable
> > depending on the work_mem setting.
>
> Not sure about that. That would make th
Heikki Linnakangas writes:
> Looking closer at where that memory is spent, a lot of it goes into the
> FmgrInfo structs in RelationAmInfo. But some of them are outright unused
> (ambuild, ambuildempty, amcostestimate, amoptions), and a few others
> are called so seldom that they hardly n
On Sat, Apr 13, 2013 at 12:38 AM, Jeff Davis wrote:
> On Fri, 2013-04-12 at 23:03 +0300, Heikki Linnakangas wrote:
>> I think this is a bad idea. It complicates the WAL format significantly.
>> Simon's patch didn't include the changes to recovery to validate the
>> checksum, but I suspect it would
On Fri, 2013-04-12 at 23:03 +0300, Heikki Linnakangas wrote:
> I think this is a bad idea. It complicates the WAL format significantly.
> Simon's patch didn't include the changes to recovery to validate the
> checksum, but I suspect it would be complicated. And it reduces the
> error-detection c
On Fri, 2013-04-12 at 21:28 +0200, Andres Freund wrote:
> That means we will have to do the verification for this in
> ValidXLogRecord() *not* in RestoreBkpBlock or somesuch. Otherwise we
> won't always recognize the end of WAL correctly.
> And I am a bit wary of reducing the likelihood of noticing
On Fri, 2013-04-12 at 15:21 -0400, Bruce Momjian wrote:
> > * When we do PageSetChecksumInplace(), we need to be 100% sure that the
> > hole is empty; otherwise the checksum will fail when we re-expand it. It
> > might be worth a memset beforehand just to be sure.
>
> Do we write the page holes to
On 12 April 2013 21:03, Heikki Linnakangas wrote:
> No, the patch has to compute the 16-bit checksum for the page when the
> full-page image is added to the WAL record. There would otherwise be no need
> to calculate the page checksum at that point, but only later when the page
> is written out f
On 12.04.2013 22:31, Bruce Momjian wrote:
On Fri, Apr 12, 2013 at 09:28:42PM +0200, Andres Freund wrote:
Only point worth discussing is that this change would make backup blocks be
covered by a 16-bit checksum, not the CRC-32 it is now. i.e. the record
header is covered by a CRC32 but the backup
While debugging an out-of-memory situation, I noticed that the relcache
entry of each index consumes 2k of memory. From a memory dump:
...
pg_database_datname_index: 2048 total in 1 blocks; 712 free (0 chunks);
1336 used
pg_trigger_tgrelid_tgname_index: 2048 total in 1 blocks; 576 free
On 2013-04-12 15:31:36 -0400, Bruce Momjian wrote:
> On Fri, Apr 12, 2013 at 09:28:42PM +0200, Andres Freund wrote:
> > > Only point worth discussing is that this change would make backup blocks
> > > be covered by a 16-bit checksum, not the CRC-32 it is now. i.e. the
> > > record header is
On Fri, Apr 12, 2013 at 3:26 PM, Bruce Momjian wrote:
> On Fri, Apr 12, 2013 at 01:34:49PM -0400, Gurjeet Singh wrote:
> > Can you also improve the output when it dies upon failure to fetch
> > something?
> > Currently the only error message it emits is "fetching xyz", and leaves
> > the user con
On Fri, Apr 12, 2013 at 09:28:42PM +0200, Andres Freund wrote:
> > Only point worth discussing is that this change would make backup blocks be
> > covered by a 16-bit checksum, not the CRC-32 it is now. i.e. the record
> > header is covered by a CRC32 but the backup blocks only by 16-bit.
>
> That
On 2013-04-11 20:12:59 +0100, Simon Riggs wrote:
> On 11 April 2013 04:27, Jeff Davis wrote:
>
> > On Wed, 2013-04-10 at 20:17 +0100, Simon Riggs wrote:
> >
> > > OK, so we have a single combined "calculate a checksum for a block"
> > > function. That uses Jeff's zeroing trick and Ants' bulk-orie
On Fri, Apr 12, 2013 at 01:34:49PM -0400, Gurjeet Singh wrote:
> Can you also improve the output when it dies upon failure to fetch something?
> Currently the only error message it emits is "fetching xyz", and leaves the
> user confused as to what really the problem was. The only indication of a
>
On Fri, Apr 12, 2013 at 12:07:36PM -0700, Jeff Davis wrote:
> > (Attached patch is discussion only. Checking checksum in recovery
> > isn't coded at all.)
>
> I like it.
>
> A few points:
>
> * Given that setting the checksum is unconditional in a backup block, do
> we want to zero the checksum
Kevin Grittner writes:
> For now what I'm suggesting is generating statistics in all the
> cases it did before, plus the case where it starts truncation but
> does not complete it. The fact that before this patch there were
> cases where the autovacuum worker was killed, resulting in not
> genera
On Thu, 2013-04-11 at 20:12 +0100, Simon Riggs wrote:
> So, if we apply a patch like the one attached, we then end up with the
> WAL checksum using the page checksum as an integral part of its
> calculation. (There is no increase in code inside WALInsertLock,
> nothing at all touched in that area)
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Stephen Frost writes:
> > The big win here over a binary COPY is pulling through the indexes as-is
> > as well- without having to rebuild them.
[... lots of reasons this is hard ...]
I agree that it's quite a bit more difficult, to the point that logical
>> A problem regarding validation of the sepgsql-regtest policy module
>> originates from the semodule command, which requires root privilege to
>> list installed policy modules. So, I avoided using this command
>> in the test_sepgsql script.
>> However, I have an idea that does not raise script fail e
2013/4/12 Robert Haas :
> On Fri, Apr 12, 2013 at 10:42 AM, Alvaro Herrera
> wrote:
>> Robert Haas wrote:
>>> On Mon, Apr 8, 2013 at 12:28 PM, Kohei KaiGai wrote:
>>
>>> > Also, the attached function-execute-permission patch is a rebased
>>> > version. I rethought its event name should be OAT_
[some relevant dropped bits of the thread restored]
Tom Lane wrote:
> Kevin Grittner writes:
>> Tom Lane wrote:
>>> Kevin Grittner writes:
Jeff Janes wrote:
I propose to do the following:
(1) Restore the prior behavior of the VACUUM command. This
was only ever intended
Stephen Frost writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> I suppose it would still be faster than a COPY transfer, but I'm not
>> sure it'd be enough faster to justify the work and the additional
>> portability hits you'd be taking.
> The big win here over a binary COPY is pulling through
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Well, it wouldn't be that hard to replace XIDs with FrozenXID or
> InvalidXID as appropriate, if you had access to the source database's
> clog while you did the copying. It just wouldn't be very fast.
If you're doing that in a streaming method, it strikes
Tom Lane wrote:
> Are you saying you intend to revert that whole concept? That'd be
> okay with me, I think. Otherwise we need some thought about how to
> inform the stats collector what's really happening.
Maybe what we need is to consider table truncation as a separate
activity from vacuum
Kevin Grittner writes:
> Tom Lane wrote:
>> I think that the minimum appropriate fix here is to revert the hunk
>> I quoted, ie take out the suppression of stats reporting and analysis.
> I'm not sure I understand -- are you proposing that is all we do
> for both the VACUUM command and autovacuu
On Fri, Apr 12, 2013 at 11:44 AM, Bruce Momjian wrote:
> On Tue, Feb 19, 2013 at 04:50:45PM -0500, Gurjeet Singh wrote:
> > Please find attached the patch for some cleanup and fix bit rot in
> > the pgindent script.
> >
> > There were a few problems with the script.
> >
> > .) It failed to use the
Looks like psql> vacuum (verbose, analyze) is not reflected in
pg_stat_user_tables either, in some cases. In this scenario I run the
command, and it outputs all the deleted pages etc. (unlike the vacuumdb -avz
analyze that seemed to be skipped in the log), but it does not update
pg_stat_user_tables.
On Wed, Apr 10, 2013 at 11:19:56AM -0700, Jeff Davis wrote:
> On Wed, 2013-04-10 at 11:01 +0300, Ants Aasma wrote:
> > I think we should first deal with using it for page checksums and if
> > future versions want to reuse some of the code for WAL checksums then
> > we can rearrange the code.
>
> S
Andres Freund writes:
> On 2013-04-12 13:09:02 -0400, Tom Lane wrote:
>> However, we're still thinking too small. I've been wondering whether we
>> couldn't entirely remove the dirty, awful kluges that were installed in
>> the lock manager to kill autovacuum when somebody blocked behind it.
>> Th
Tom Lane wrote:
> Kevin Grittner writes:
>> OK, will review that to confirm; but assuming that's right, and
>> nobody else is already working on a fix, I propose to do the
>> following:
>
>> (1) Restore the prior behavior of the VACUUM command. This was
>> only ever intended to be a fix for a se
A programming language environment whose parser and interpreter are
written in plpgsql. A proof-of-concept prototype has been tested. An
object-oriented programming language implemented with a self-describing
Entity-Attribute-Value model that stores objects and object metadata
descriptions. The environ
On 2013-04-12 13:09:02 -0400, Tom Lane wrote:
> Kevin Grittner writes:
> > OK, will review that to confirm; but assuming that's right, and
> > nobody else is already working on a fix, I propose to do the
> > following:
>
> > (1) Restore the prior behavior of the VACUUM command. This was
> > only
Kevin Grittner writes:
> OK, will review that to confirm; but assuming that's right, and
> nobody else is already working on a fix, I propose to do the
> following:
> (1) Restore the prior behavior of the VACUUM command. This was
> only ever intended to be a fix for a serious autovacuum problem
On Fri, Apr 12, 2013 at 10:22:38PM +0530, Pavan Deolasee wrote:
>
> On Fri, Apr 12, 2013 at 9:44 PM, Tom Lane wrote:
>
> Andrew Dunstan writes:
> > On 04/12/2013 10:15 AM, Tom Lane wrote:
> >> There's 0 chance of making that work, because the two databases wouldn't
> >>
Robert Haas writes:
> The hunk that changes the messages might need some thought so that it
> doesn't cause a translation regression. But in general I see no
> reason not to do this before we release beta1. It seems safe enough,
> and changes that reduce the need for packagers to carry private
>
On Fri, Apr 12, 2013 at 9:44 PM, Tom Lane wrote:
> Andrew Dunstan writes:
> > On 04/12/2013 10:15 AM, Tom Lane wrote:
> >> There's 0 chance of making that work, because the two databases wouldn't
> >> have the same notions of committed XIDs.
>
> > Yeah. Trying to think way outside the box, could
On 2013-04-12 12:14:24 -0400, Tom Lane wrote:
> Andrew Dunstan writes:
> > On 04/12/2013 10:15 AM, Tom Lane wrote:
> >> There's 0 chance of making that work, because the two databases wouldn't
> >> have the same notions of committed XIDs.
>
> > Yeah. Trying to think way outside the box, could we
Andrew Dunstan writes:
> On 04/12/2013 10:15 AM, Tom Lane wrote:
>> There's 0 chance of making that work, because the two databases wouldn't
>> have the same notions of committed XIDs.
> Yeah. Trying to think way outside the box, could we invent some sort of
> fixup mechanism that could be appli
Andrew Dunstan wrote:
>
> On 04/12/2013 10:15 AM, Tom Lane wrote:
> >Sameer Thakur writes:
> >>The proposed tool tries to make migration faster for tables and indices
> >>only by copying their binary data files.
> >There's 0 chance of making that work, because the two databases wouldn't
> >hav
Jeff Janes wrote:
>> If we're going to have the message, we should make it useful.
>> My biggest question here is not whether we should add this info,
>> but whether it should be DEBUG instead of LOG
> I like it being LOG. If it were DEBUG, I don't think anyone
> would be likely to see it when
"Dickson S. Guedes" writes:
> In my tests, after ANALYZE _bug_header and _bug_line, the query plan
> changes and the query results return as expected. Is there a chance that
> things aren't too bad?
No, it just means that in this particular scenario, the bug only
manifests if a nestloop plan is chosen
On 04/12/2013 10:15 AM, Tom Lane wrote:
Sameer Thakur writes:
The proposed tool tries to make migration faster for tables and indices
only by copying their binary data files.
There's 0 chance of making that work, because the two databases wouldn't
have the same notions of committed XIDs.
Y
On Tue, Feb 19, 2013 at 04:50:45PM -0500, Gurjeet Singh wrote:
> Please find attached the patch for some cleanup and fix bit rot in pgindent
> script.
>
> There were a few problems with the script.
>
> .) It failed to use the $ENV{PGENTAB} even if it was set.
> .) The file it tries to download fr
On Fri, 2013-04-12 at 10:58 -0400, Tom Lane wrote:
> Robert Haas writes:
> > On Thu, Apr 11, 2013 at 1:25 PM, Tom Lane wrote:
> >> The plan I'm considering is to get this written and committed to HEAD
> >> in the next week, so that it can go out in 9.3beta1. After the patch
> >> has survived
Robert Haas wrote:
> On Tue, Apr 9, 2013 at 8:08 AM, Christoph Berg wrote:
> > Debian has been patching pg_regress for years because our default unix
> > socket directory is /var/run/postgresql, but that is not writable by
> > the build user at build time. This used to be a pretty ugly "make-
>
Scott Marlowe wrote:
> Does this behavior only affect the 9.2 branch? Or was it ported
> to 9.1 or 9.0 or 8.4 as well?
After leaving it on master for a while to see if anyone reported
problems in development, I back-patched as far as 9.0 in time for
the 9.2.3 (and related) patches. Prior to tha
On 2013-04-12 20:44:02 +0530, Robins Tharakan wrote:
> Hi,
>
> While creating regression tests for EXPLAIN I am (somehow) unable to get
> (TIMING) option to work with EXPLAIN!
>
> I must be doing something stupid but all these options below just didn't
> work. Could someone point me to the right
Hi,
While creating regression tests for EXPLAIN I am (somehow) unable to get
(TIMING) option to work with EXPLAIN!
I must be doing something stupid but all these options below just didn't
work. Could someone point me to the right direction?
mpf2=# explain (TIMING) SELECT 1;
ERROR: EXPLAIN optio
On Tue, Apr 9, 2013 at 8:08 AM, Christoph Berg wrote:
> Debian has been patching pg_regress for years because our default unix
> socket directory is /var/run/postgresql, but that is not writable by
> the build user at build time. This used to be a pretty ugly "make-
> patch-make check-unpatch-make
Robert Haas writes:
> On Thu, Apr 11, 2013 at 1:25 PM, Tom Lane wrote:
>> The plan I'm considering is to get this written and committed to HEAD
>> in the next week, so that it can go out in 9.3beta1. After the patch
>> has survived a reasonable amount of beta testing, I'd be more comfortable
>>
Does this behavior only affect the 9.2 branch? Or was it ported to 9.1 or
9.0 or 8.4 as well?
On Thu, Apr 11, 2013 at 7:48 PM, Kevin Grittner wrote:
> Tom Lane wrote:
>
> > However I've got to say that both of those side-effects of
> > exclusive-lock abandonment seem absolutely brain dead now
On Fri, Apr 12, 2013 at 10:42 AM, Alvaro Herrera
wrote:
> Robert Haas wrote:
>> On Mon, Apr 8, 2013 at 12:28 PM, Kohei KaiGai wrote:
>
>> > Also, the attached function-execute-permission patch is a rebased
>> > version. I rethought its event name should be OAT_FUNCTION_EXECUTE,
>> > rather tha
On Thu, Apr 11, 2013 at 1:25 PM, Tom Lane wrote:
> This idea needs more fleshing out, but it's seeming awfully attractive
> right now. The big problem with it is that it's going to be a more
> invasive patch than I feel terribly comfortable about back-patching.
> However, I'm not sure there's muc
Robert Haas wrote:
> On Mon, Apr 8, 2013 at 12:28 PM, Kohei KaiGai wrote:
> > Also, the attached function-execute-permission patch is a rebased
> > version. I rethought its event name should be OAT_FUNCTION_EXECUTE,
> > rather than OAT_FUNCTION_EXEC according to the manner without
> > abbrevia
Sameer Thakur writes:
> The proposed tool tries to make migration faster for tables and indices
> only by copying their binary data files.
There's 0 chance of making that work, because the two databases wouldn't
have the same notions of committed XIDs. You apparently don't
understand what you re
On further review this particular server skipped from 9.2.2 to 9.2.4. This
is my most busy and downtime sensitive server and I was waiting on a
maintenance window to patch to 9.2.3 when 9.2.4 dropped and bumped up the
urgency. However, I have 3 other less busy production servers that were
all run
On Mon, Apr 8, 2013 at 12:28 PM, Kohei KaiGai wrote:
> Thanks. I found two obvious wording issues here; please see the smaller
> one of the attached patches. I didn't fix up the manner of using "XXX" in
> source code comments.
Committed.
> Also, the attached function-execute-permission patch is a reba
Re: To PostgreSQL Hackers 2013-04-09 <20130409120807.gd26...@msgid.df7cb.de>
If the patch looks too intrusive at this stage of the release, it
would be enough if the last chunk got included, which should really be
painless:
> diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regres
On 04/11/2013 12:17 AM, Tom Lane wrote:
Alvaro Herrera writes:
Hannu Krosing wrote:
Natural solution to this seems to move most of pg_dump functionality
into backend as functions, so we have pg_dump_xxx() for everything
we want to dump plus a topological sort function for getting the
objects i
On 2013-04-12 16:58:44 +0530, Pavan Deolasee wrote:
> On Fri, Apr 12, 2013 at 4:29 PM, Andres Freund wrote:
>
> >
> >
> > I don't think that holds true at all. If you look at pg_stat_bgwriter in
> > any remotely busy cluster with a hot data set over shared_buffers you'll
> > notice that a large pe
On Fri, Apr 12, 2013 at 4:29 PM, Andres Freund wrote:
>
>
> I don't think that holds true at all. If you look at pg_stat_bgwriter in
> any remotely busy cluster with a hot data set over shared_buffers you'll
> notice that a large percentage of writes will have been done by backends
> themselves.
>
On 2013-04-12 11:18:01 +0530, Pavan Deolasee wrote:
> On Thu, Apr 11, 2013 at 8:39 PM, Ants Aasma wrote:
>
> > On Thu, Apr 11, 2013 at 5:33 PM, Hannu Krosing
> > wrote:
> > > On 04/11/2013 03:52 PM, Ants Aasma wrote:
> > >>
> > >> On Thu, Apr 11, 2013 at 4:25 PM, Hannu Krosing
> > >> wrote:
> >
On 2013-04-12 02:29:01 +0900, Fujii Masao wrote:
> On Thu, Apr 11, 2013 at 10:25 PM, Hannu Krosing wrote:
> >
> > You just shut down the old master and let the standby catch
> > up (takes a few microseconds ;) ) before you promote it.
> >
> > After this you can start up the former master with reco
On 04/11/2013 11:48 PM, Andrew Dunstan wrote:
It could be interesting to have a library that would output database
metadata in some machine readable and manipulatable format such as
JSON or XML. One thing that's annoying about the text output pg_dump
produces is that it's not at all structured,
Michael Paquier writes:
> I recall discussions about reverse engineering of a parsed query tree in
> the event trigger threads but nothing has been committed I think. Also, you
Yes. The name used in there was "Normalized Command String".
> need to consider that implementing such reverse engineer
Tom Lane writes:
> Alvaro Herrera writes:
>> This idea doesn't work because of back-patch considerations (i.e. we
>> would not be able to create the functions in back branches, and so this
>> new style of pg_dump would only work with future server versions). So
That is a policy question, not a
Tom Lane writes:
> Yeah, I was just looking at the IfSupported variant. In the structure
> I just suggested (separate ProcessSlowUtility function), we could make
> that work by having switch cases for some statements in both functions,
I've done it the way you propose here, and then in the Slow
Hello,
The current process of transferring data files from one cluster to another
by using pg_dump and pg_restore is time consuming.
The proposed tool tries to make migration faster for tables and indices
only by copying their binary data files. This is like pg_upgrade but used
for migration of t
On 04/11/2013 07:29 PM, Fujii Masao wrote:
On Thu, Apr 11, 2013 at 10:25 PM, Hannu Krosing wrote:
You just shut down the old master and let the standby catch
up (takes a few microseconds ;) ) before you promote it.
After this you can start up the former master with recovery.conf
and it will fo