Hi
I've now set up a warm-standby machine using WAL archiving. The
restore_command on the warm-standby machine loops until the WAL segment
requested by postgres appears, instead of returning 1. Additionally,
restore_command checks for two special flag files, abort and
take_online. If take_online
Merlin Moncure wrote:
On 7/10/06, Florian G. Pflug [EMAIL PROTECTED] wrote:
This method seems to work, but it is neither particularly fool-proof nor
administrator friendly. It's not possible e.g. to reboot the slave
without postgres aborting the recovery, and therefore processing all WALs
Hi
For my warm-standby cluster I'm now saving the currently used WAL using
rsync, to avoid losing data from a few hours (or days) ago, when there is
little traffic and thus the WAL isn't rotated. For online backups, the
problem is even worse, because a backup might be unusable even hours
A.M. wrote:
On Fri, July 14, 2006 11:20 am, Florian G. Pflug wrote:
Hi
For my warm-standby cluster I'm now saving the currently used WAL using
rsync, to avoid losing data from a few hours (or days) ago, when there is
little traffic and thus the WAL isn't rotated. For online backups
Martijn van Oosterhout wrote:
On Fri, Jul 14, 2006 at 05:36:58PM +0200, Florian G. Pflug wrote:
That was the idea - providing pg_rotate_wal(), which would guarantee that
the WAL is rotated at least once if called. Thinking further about this,
for a first proof of concept, it'd be enough
Simon Riggs wrote:
On Fri, 2006-07-14 at 12:09 -0400, Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
I've now thought about how to fix that without doing that rather crude
rsync-pg_xlog-hack.
I've read through the code, and learned that wal-segments are expected to have
Gavin Sherry wrote:
On Mon, 24 Jul 2006, Golden Liu wrote:
begin;
declare foo cursor for select * from bar for update;
fetch foo;
update bar set abc='def' where current of foo;
fetch foo;
delete from bar where current of foo;
commit;
No one has stepped up to do this for 8.2 so unfortunately
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Couldn't this be emulated by doing
begin;
declare foo cursor for select * from bar for update;
fetch foo into v_foo ;
update bar set abc='def' where ctid = v_foo.ctid;
That wouldn't follow the expected semantics if there's
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
How could there be a concurrent update of the _same_ row, when
I do select * from bar *for update*.
AFAICT the spec doesn't require one to have written FOR UPDATE
in order to use WHERE CURRENT OF. (In effect, they expect FOR UPDATE
Bruce Momjian wrote:
Why is this better than:
#if _MSC_VER == 1400
Surely this will not be true if _MSC_VER is undefined?
I experienced injustice and the reason of in OSX for it.
What was the problem with OSX? Did it throw a warning if you did an
equality test on an undefined symbol?
Tom Lane wrote:
Peter Eisentraut [EMAIL PROTECTED] writes:
Tom Lane wrote:
Peter's not said exactly how he plans to deal with
this, but I suppose it'll round off one way or the other ...
It'll get truncated by integer division. I wouldn't mind if someone
proposed a patch to create a
Peter Eisentraut wrote:
Florian G. Pflug wrote:
Rounding up would have the advantage that you could just specify 0
in the config file, and have postgres use the smallest value
possible.
In most algebras, dividing zero by something is still zero, so there'd
be no need to round anything.
I
Tom Lane wrote:
Susanne Ebrecht [EMAIL PROTECTED] writes:
... We could provide the mixed update syntax and leave the
typed row value expression for the next release. Do you agree?
I don't really see the point --- the patch won't provide any new
functionality in anything like its current form,
Albe Laurenz wrote:
Tim Allen wrote:
Patch included to implement xlog switching, using an xlog record
processing instruction and forcibly moving xlog pointers.
1. Happens automatically on pg_stop_backup()
Oh - so it will not be possible to do an online backup
_without_ forcing a WAL switch
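For context, the online-backup procedure being discussed can be sketched in SQL. This is a minimal sketch assuming the 8.x-era backup control functions; the label 'nightly' is an arbitrary illustration:

```sql
-- Begin an online base backup ('nightly' is an arbitrary label)
SELECT pg_start_backup('nightly');
-- ... copy the data directory with an external tool (tar, rsync, ...) ...
-- End the backup; the patch under discussion makes this also force
-- a switch to a new WAL segment
SELECT pg_stop_backup();
```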
Tom Lane wrote:
Rod Taylor [EMAIL PROTECTED] writes:
A simple way of doing this might be to use a minimum cost number?
But you don't have any cost numbers until after you've done the plan.
Couldn't this work similar to geqo_effort? The planner could
try planning the query using only cheap
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
But you don't have any cost numbers until after you've done the plan.
Couldn't this work similar to geqo_effort? The planner could
try planning the query using only cheap algorithms, and if
the cost exceeds
Hi
Since the discussion about how to force a specific plan has
come up, I thought I'd post an idea I had for this a while ago.
It's not really well thought out yet, but anyway.
When the topic of optimizer hints comes up, people often suggest
that there should be a way to force postgres to use a
Tom Lane wrote:
Martijn van Oosterhout kleptog@svana.org writes:
ISTM that the easiest way would be to introduce a sort of predicate
like so:
SELECT * FROM foo, bar WHERE pg_selectivity(foo.a = bar.a, 0.1);
The one saving grace of Florian's proposal was that you could go hack
the
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Imagine a complex, autogenerated query which looks something like this
select
from t1
join t2 on ...
join t3 on ...
join t4 on ...
...
...
where
big, complicated expression derived from some user input.
This big, complicated
Peter Eisentraut wrote:
Arturo Pérez wrote:
The DBA therefore pokes the right information into the planner's
statistical tables (or, perhaps, a more human-manageable one that gets
compiled into the planner's stats).
I think we're perfectly capable of producing a system that can collect
the
Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
Revised patch enclosed, now believed to be production ready. This
implements regular log switching using the archive_timeout GUC.
Further patch enclosed implementing these changes plus the record type
version of pg_xlogfile_name_offset()
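A minimal sketch of how these pieces fit together, assuming the 8.2-era functions pg_switch_xlog() and the record-returning pg_xlogfile_name_offset():

```sql
-- Force a switch to a new WAL segment, then translate the returned
-- WAL location into an archive file name and byte offset
SELECT file_name, file_offset
FROM pg_xlogfile_name_offset(pg_switch_xlog());
```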
[EMAIL PROTECTED] wrote:
This is what I mean by after thought. PostgreSQL is designed for
32-bit processors. Which is fine. I'm not complaining. The question
was whether there is an interest in pursuing 64-bit specific
optimizations. In the PostgreSQL code, a quick check points me only to
has
Jim C. Nasby wrote:
On Thu, Sep 28, 2006 at 03:07:39PM -0700, David Wheeler wrote:
PostgreSQLers,
I just ran into an issue where a client thought that autovacuum was
running but it wasn't. This is because it's not fatal when autovacuum
is on but stats_start_collector and/or stats_row_level
Tom Lane wrote:
Stephen Frost [EMAIL PROTECTED] writes:
* Tom Lane ([EMAIL PROTECTED]) wrote:
It looks like it should work to have just one polymorphic aggregate
definition, eg, array_accum(anyelement) returns anyarray.
I was hoping to do that, but since it's an aggregate the ffunc format
Heikki Linnakangas wrote:
BTW, we haven't talked about how to acquire a snapshot in the slave.
You'll somehow need to know which transactions have not yet
committed, but will in the future. In the master, we keep track of
in-progress transactions in the ProcArray, so I suppose we'll need to
do
Simon Riggs wrote:
On Sat, 2008-09-13 at 10:48 +0100, Florian G. Pflug wrote:
The main idea was to invert the meaning of the xid array in the snapshot
struct - instead of storing all the xids between xmin and xmax that are
to be considered in-progress, the array contained all the xids
xmin
Simon Riggs wrote:
On Sat, 2008-09-13 at 10:48 +0100, Florian G. Pflug wrote:
The current read-only snapshot (where current means the corresponding
state on the master at the time the last replayed WAL record was
generated) was maintained in shared memory. Its xmin field was
continually
Heikki Linnakangas wrote:
Joachim Wieland wrote:
On Thu, Nov 19, 2009 at 4:12 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Yes, I have been thinking about that also. So what should happen
when you prepare a transaction that has sent a NOTIFY before?
From the user's point of view, nothing should
Tom Lane wrote:
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
A better approach is to do something similar to what we do now: at
prepare, just store the notifications in the state file like we do
already. In notify_twophase_postcommit(), copy the messages to the
shared queue.
Tom Lane wrote:
Florian G. Pflug f...@phlo.org writes:
Tom Lane wrote:
This is still ignoring the complaint: you are creating a clear
risk that COMMIT PREPARED will fail.
I'd see no problem with COMMIT PREPARED failing, as long as it
was possible to retry the COMMIT PREPARED at a later time
Hi
It seems that pl/pgsql ignores the DEFAULT value of domains for local
variables. With the following definitions in place
create domain myint as int default 0;
create or replace function myint() returns myint as $body$
declare
v_result myint;
begin
return v_result;
end;
$body$ language
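The snippet above is cut off by the archive; here is a self-contained reconstruction (the trailing language clause and the final SELECT are assumptions, and the NULL result is the behavior the report describes):

```sql
create domain myint as int default 0;

create or replace function myint() returns myint as $body$
declare
    v_result myint;   -- per the report, initialized to NULL, not to 0
begin
    return v_result;
end;
$body$ language plpgsql;

-- As reported, this returns NULL instead of the domain default 0
select myint();
```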
Hi
While trying to come up with a patch to handle domain DEFAULTs in
plpgsql I've stumbled across the following behavior regarding domain
DEFAULTs and prepared statements.
session 1: create domain myint as int default 0 ;
session 1: create table mytable (i myint) ;
session 2: prepare ins as
Robert Haas wrote:
On Thu, Nov 19, 2009 at 9:06 PM, Florian G. Pflug f...@phlo.org wrote:
I've tried to create a patch, but didn't see how I'd convert the result
from get_typedefault() (a Node*, presumably the parse tree corresponding
to the default expression?) into a plan that I could store
Tom Lane wrote:
Florian G. Pflug f...@phlo.org writes:
It seems that pl/pgsql ignores the DEFAULT value of domains for
local variables.
The plpgsql documentation seems entirely clear on this:
The DEFAULT clause, if given, specifies the initial value assigned to
the variable when the block
Tom Lane wrote:
Josh Berkus j...@agliodbs.com writes:
(2) this change, while very useful, does change what had been a
simple rule (All variables are NULL unless specifically set
otherwise) into a conditional one (All variables are NULL unless
set otherwise OR unless they are declared as domain
Gurjeet Singh wrote:
On Sat, Nov 21, 2009 at 7:26 AM, Josh Berkus j...@agliodbs.com wrote:
However, there are some other issues to be resolved:
(1) what should be the interaction of DEFAULT parameters and domains
with defaults?
The function's DEFAULT parameter
Hi
I'm currently investigating how much work it'd be to implement arrays of
domains since I have a client who might be interested in sponsoring that
work.
The comments around the code handling ALTER DOMAIN ADD CONSTRAINT are
pretty clear about the lack of proper locking in that code - altering
Florian G. Pflug wrote:
I do, however, suspect that ALTER TABLE is plagued by similar
problems. Currently, during the rewrite phase of ALTER TABLE,
find_composite_type_dependencies is used to verify that the table's
row type (or any type directly or indirectly depending on that type
Dan Eloff wrote:
At the lower levels in PG, reading from the disk into cache, and
writing from the cache to the disk is always done in pages.
Why does PG work this way? Is it any slower to write whole pages
rather than just the region of the page that changed? Conversely, is
it faster? From
Tom Lane wrote:
One possibility would be to make it possible to issue SETs that behave
as if set in a startup packet - imho it's an implementation detail that
SET currently is used.
I think there's a good deal of merit in this, and it wouldn't be hard
at all to implement, seeing that we
Hi
HEAD fails to compile in 64-bit mode on Mac OS X 10.6 with gcc 4.2 and
-Werror.
What happens is that INT64_FORMAT gets defined as %ld (which is
correct - long and unsigned long are 64 bits wide on x86_64), but
the check for a working 64-bit int fails, causing INT64_IS_BUSTED to get
defined
On 15.12.09 16:02, Tom Lane wrote:
Florian G. Pflug f...@phlo.org writes:
configure fails to recognize long as a working 64-bit type
because the does_int64_work configure test produces a warning due to
a missing return value declaration for main() and a missing
prototype for does_int64_work().
On 15.12.09 15:52, Tom Lane wrote:
to...@tuxteam.de writes:
(and as Andrew Dunstan pointed out off-list: I was wrong with my
bold assertion that one can squeeze infinitely many (arbitrary
length) strings between two given. This is not always the case).
Really? If the string length is
On 15.12.09 23:38, Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
So to summarize, this is just a bad idea. Creating a less obscure
way to use -Werror might be worthwhile, though.
I suppose we could add --with-Werror but it seems pretty
specialized to me. A more appropriate
Hi
I've completed a (first) working version of an extension that allows
easier introspection of composite types from SQL and PL/pgSQL.
The original proposal and ensuing discussion can be found here:
http://archives.postgresql.org/pgsql-hackers/2009-11/msg00695.php
The extension can be found on:
On 28.12.09 18:54, Kevin Grittner wrote:
To give some idea of the scope of development, Michael Cahill added
SSI to InnoDB by modifying 250 lines of code and adding 450 lines of
code; however, InnoDB already had the S2PL option and the prototype
implementation isn't as sophisticated as I feel
On 11.04.10 20:47, Robert Haas wrote:
On Sun, Apr 11, 2010 at 10:26 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Robert Haas wrote:
2010/4/10 Andrew Dunstan and...@dunslane.net:
Heikki Linnakangas wrote:
1. Keep the materialized view up-to-date when the base tables
Kenneth Marshall wrote:
We use DSPAM as one of our anti-spam options. Its UPDATE pattern is to
increment a spam counter or a not-spam counter while keeping the user and
token information the same. This would benefit from this optimization.
Currently we are forced to use MySQL with MyISAM tables
Simon Riggs wrote:
On Wed, 2007-03-28 at 22:24 +0530, Pavan Deolasee wrote:
Just when I thought we had nailed down CREATE INDEX, I realized
that there is something more to worry about. The problem is with the
HOT chains created by our own transaction which is creating the index.
We thought it will be
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Couldn't you store the creating transaction's xid in pg_index, and
let other transactions check that against their snapshot like they
would for any tuple's xmin or xmax?
What snapshot? I keep having to remind people that system
Pavan Deolasee wrote:
On 3/28/07, Tom Lane [EMAIL PROTECTED] wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Couldn't you store the creating transaction's xid in pg_index, and
let other transactions check that against their snapshot like they
would for any tuple's xmin or xmax?
What
Pavan Deolasee wrote:
In this specific context, this particular case is easy to handle because
we are only concerned about the serializable transactions started before
CREATE INDEX commits. If PREPARE can see the new index, it
implies that the CI transaction is committed. So the transaction
Pavan Deolasee wrote:
On 3/29/07, Florian G. Pflug [EMAIL PROTECTED] wrote:
Yes, but the non-index plan PREPARE generated will be used until the end
of the session, not only until the end of the transaction.
Frankly I don't know how this works, but are you sure that the plan will
be used until
Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
ISTM that the run-another-transaction-afterwards idea is the only one
that does everything I think we need. I really do wish we could put in a
wait, like CIC, but I just think it will break existing programs.
Actually, there's a
Simon Riggs wrote:
On Thu, 2007-03-29 at 17:27 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
ISTM that the run-another-transaction-afterwards idea is the only one
that does everything I think we need. I really do wish we could put in a
wait, like CIC, but I just think it will
Tom Lane wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
How about storing the snapshot which we used during planning in
CachedPlanSource, if at least one index was seen unusable because
its CREATE INDEX transaction was seen as in-progress ?
I'm getting tired of repeating this, but: the
Tom Lane wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
What I am suggesting is to use ActiveSnapshot (actually
Florian's idea) to decide whether the transaction that created
index was still running when we started. Isn't it the case that
some snapshot will be active when we plan ?
I do not
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
I do not think you can assume that the plan won't be used later with
some older snapshot.
So maybe we'd need to use the SerializableSnapshot created at the start
of each transaction for this check
Pavan Deolasee wrote:
On 3/30/07, Florian G. Pflug [EMAIL PROTECTED] wrote:
My idea was to store a list of xid's together with the cached plan that
are assumed to be uncommitted according to the IndexSnapshot. The query
is replanned if upon execution the IndexSnapshot assumes that one
Simon Riggs wrote:
On Fri, 2007-03-30 at 16:34 -0400, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
2. pg_stop_backup() should wait until all archive files are safely
archived before returning
Not sure I agree with that one. If it fails, you can't tell whether the
action is done and
Hi
Does anyone know if pgsnmpd is still actively developed?
The last version (0.1b1) is about 15 months old.
greetings, Florian Pflug
Nikolay Samokhvalov wrote:
On 4/10/07, Tom Lane [EMAIL PROTECTED] wrote:
Nikolay Samokhvalov [EMAIL PROTECTED] writes:
I remember several cases when people (e.g. me :-) ) were spending some
time trying to find an error in some pl/pgsql function and the reason
lay in incorrect work with
Neil Conway wrote:
On Tue, 2007-04-10 at 18:28 +0200, Peter Eisentraut wrote:
The problem is that most of the standard methods are platform dependent, as
they require MAC addresses or a good random source, for instance.
http://archives.postgresql.org/pgsql-patches/2007-01/msg00392.php
ISTM
Hi
I'm very excited that my project for implementing read-only queries
on PITR slaves was accepted for GSoC, and I'm now trying to work
out what tools I'll use for that job.
I'd like to be able to create some sort of branches and tags for
my own work (only inside my local repository of course).
Joshua D. Drake wrote:
Alexey Klyukin wrote:
Alvaro Herrera wrote:
But if you have a checked out tree, does it work to do an update after
the tree has been regenerated? As far as I know, the repo is generated
completely every few hours, so it wouldn't surprise me that the checked
out copy is
Hi
When I try to build CVS HEAD on OSX 10.4, compiling
src/interfaces/ecpg/preproc/preproc.c fails with:
In file included from preproc.y:6951:
pgc.l:3:20: error: config.h: No such file or directory
In file included from pgc.l:28,
from preproc.y:6951:
preproc.h:996: error:
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
When I try to build CVS HEAD on OSX 10.4, compiling
src/interfaces/ecpg/preproc/preproc.c fails with:
...
If I delete pgc.c, it is rebuilt automatically, and then
preproc.c compiles just fine.
...
I'm using gcc 4.0.1, flex 2.5.4
Alvaro Herrera wrote:
Ah, it seems the SVN repo just got its first user ;-) Congratulations.
Ask Joshua to send you a Command Prompt tee shirt, maybe he is excited
enough.
I hope the fact that I use the SVN repo just to get the changes into
git doesn't reduce my chances of getting that
Alvaro Herrera wrote:
Florian G. Pflug wrote:
Alvaro Herrera wrote:
Ah, it seems the SVN repo just got its first user ;-) Congratulations.
Ask Joshua to send you a Command Prompt tee shirt, maybe he is excited
enough.
I hope the fact that I use the SVN repo just to get the changes into
git
Joshua D. Drake wrote:
http://projects.commandprompt.com/public/pgsql/browser
or do the anonymous checkout with:
svn co http://projects.commandprompt.com/public/pgsql/repo/
But if you have a checked out tree, does it work to do an update after
the tree has been regenerated? As far as I
Martin Langhoff wrote:
Hi Florian,
I am right now running an rsync of the Pg CVS repo to my work machine to
get a git import underway. I'm rather keen on seeing your cool PITR Pg
project go well and I have some git+cvs fu I can apply here (being one
of the git-cvsimport maintainers) ;-)
Cool -
Aidan Van Dyk wrote:
Martin Langhoff wrote:
Well, now that more than one of us are working with git on PostgreSQL...
I've had a repo conversion running for a while... I've only got it to what
I consider stable last week:
http://repo.or.cz/w/PostgreSQL.git
Zoltan Boszormenyi wrote:
Tom Lane wrote:
Zoltan Boszormenyi [EMAIL PROTECTED] writes:
Also, the current grammar is made to give a syntax error
if you say colname type GENERATED BY DEFAULT AS ( expr ).
But it makes the grammar unbalanced, and gives me:
bison -y -d gram.y
conflicts: 2
Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
These files are generated (from gram.y, pgc.l and preproc.y
respectively) and are not present in the CVS repo, though I think they
have been at some point.
It's strange that other generated files (that have also been in the repo
in the
Tom Lane wrote:
Mark Kirkwood [EMAIL PROTECTED] writes:
Tom Lane wrote:
The current documentation for RESET exhibits a certain lack of, um,
intellectual cohesiveness:
Synopsis
RESET configuration_parameter
RESET ALL
RESET { PLANS | SESSION | TEMP | TEMPORARY }
Maybe DISCARD for the plans
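For reference, this naming discussion led to a separate DISCARD command; a hedged sketch of that syntax, assuming the form it shipped with in 8.3:

```sql
DISCARD PLANS;  -- drop all cached query plans
DISCARD TEMP;   -- drop all temporary tables in the session
DISCARD ALL;    -- reset the session to its initial state
```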
Martin Langhoff wrote:
Aidan Van Dyk wrote:
And remember the warning I gave that my conversion is *not* a direct CVS
import - I intentionally *unexpand* all Keywords before stuffing them
into GIT so that merging and branching can ignore all the Keyword
conflicts...
My import is unexpanding
Hi
I believe I have discovered the following problem in pgsql 8.2 and HEAD,
concerning warm-standbys using WAL log shipping.
The problem is that after a crash, the master might complete incomplete
actions via rm_cleanup() - but since it won't wal-log those changes,
the slave won't know about
Simon Riggs wrote:
On Thu, 2007-04-19 at 22:37 +0200, Florian G. Pflug wrote:
The problem is that after a crash, the master might complete incomplete
actions via rm_cleanup() - but since it won't wal-log those changes,
the slave won't know about this. This will at least prevent the creation
Martin Langhoff wrote:
So - if you are committed to providing your gateway long term to
Florian, I'm happy to drop my gateway in favour of yours.
(Florian, before basing your code on either you should get a checkout of
Aidan's and mine and check that the tips of the branches you are working
Aidan Van Dyk wrote:
* Florian G. Pflug [EMAIL PROTECTED] [070430 08:58]:
It seems as if git pulls all revisions of all files during the pull -
which it shouldn't do as far as I understand things - it should only
pull those objects referenced by some head, no?
Git pulls full history
Zdenek Kotala wrote:
I did not find forensics in my translator, and it is mentioned in the
Oxford dictionary, but the explanation is not clear to me. I agree with
Bruce that it is not a good name. What about the short form of
diagnostic, diag?
Doesn't forensics basically mean to find the cause of something
*after* it
Richard Huxton wrote:
Richard Huxton wrote:
Heikki Linnakangas wrote:
The problem is that the new tuple version is checked only against the
condition in the update rule, id=OLD.id, but not the condition in the
original update clause, dt='a'.
Yeah, that's confusing :(.
Bit more than just
Richard Huxton wrote:
Hiroshi Inoue wrote:
Florian G. Pflug wrote:
I think there should be a big, fat warning that self-referential
updates have highly non-obvious behaviour in read-committed mode,
and should be avoided.
It seems pretty difficult for PostgreSQL rule system to avoid
Andrew Dunstan wrote:
What would making a branch actually do for you? The only advantage I can
see is that it will give you a way of checkpointing your files. As I
remarked upthread, I occasionally use RCS for that. But mostly I don't
actually bother. I don't see how you can do it reasonably
Simon Riggs wrote:
On Mon, 2007-05-28 at 19:56 -0400, Bruce Momjian wrote:
Added to TODO:
* Fix self-referential UPDATEs seeing inconsistent row versions in
read-committed mode
http://archives.postgresql.org/pgsql-hackers/2007-05/msg00507.php
I'm sorry guys but I don't agree this is a
Hi
I'm currently working on splitting StartupXLog into smaller
parts, because I need to reuse some of the parts for concurrent
wal recovery (for my GSoC project)
The function recoveryStopsHere in xlog.c checks if we should
stop recovery due to the values of recovery_target_xid and
Matthew T. O'Connor wrote:
Florian G. Pflug wrote:
Work done so far:
-
.) Don't start autovacuum and bgwriter.
Do table stats used by the planner get replicated on a PITR slave? I
assume so, but if not, you would need autovac to do analyzes.
Yes - everything that gets WAL
Jeff Davis wrote:
On Wed, 2007-06-06 at 16:11 +0200, Florian G. Pflug wrote:
.) Since the slave needs to track a snapshot in shared memory, it cannot
resize that snapshot to accommodate however many concurrent transactions
might have been running on the master. My current plan
Simon Riggs wrote:
On Wed, 2007-06-06 at 16:11 +0200, Florian G. Pflug wrote:
.) Added a new GUC operational_mode, which can be set to either
readwrite or readonly. If it is set to readwrite (the default),
postgres behaves as usual. All the following changes are only
in effect
Heikki Linnakangas wrote:
Florian G. Pflug wrote:
Jeff Davis wrote:
Are you referring to the size of the xip array being a problem? Would it
help to tie the size of the xip array to max_connections? I understand
that max_connections might be greater on the master, but maybe something
similar
Jeff Davis wrote:
On Wed, 2007-06-06 at 22:36 +0100, Simon Riggs wrote:
.) Transactions are assigned a dummy xid ReadOnlyTransactionId, that
is considered to be later than any other xid.
So you are bumping FirstNormalTransactionId up by one for this?
You're assuming then that we will
Joshua D. Drake wrote:
Take the following:
INFO: analyzing pg_catalog.pg_authid
INFO: pg_authid: scanned 1 of 1 pages, containing 5 live rows and 0
dead rows; 5 rows in sample, 5 estimated total rows
The above is completely redundant. Why not just say:
INFO: pg_authid: scanned 1 of 1
Dann Corbit wrote:
-Original Message-
From: Hannu Krosing [mailto:[EMAIL PROTECTED]
Since libpq function PQfsize returns -2 for all constant character
strings in SQL statements ... What is the proper procedure to determine
the length of a constant character column after query execution
Gregory Stark wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
I agree. XML seems like a fairly natural fit for this. Just as people should
not try to shoehorn everything into XML, neither should they try to shoehorn
everything into a relational format either.
Now all we need is an XML schema
Heikki Linnakangas wrote:
Jim C. Nasby wrote:
On Thu, Jun 07, 2007 at 10:16:25AM -0400, Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
Thinking about this whole idea a bit more, it occurred to me that the
current approach to write all, then fsync all is really a historical
PFC wrote:
On Fri, 22 Jun 2007 16:43:00 +0200, Bruce Momjian [EMAIL PROTECTED] wrote:
Simon Riggs wrote:
On Fri, 2007-06-22 at 14:29 +0100, Gregory Stark wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
Tom Lane wrote:
untrustworthy disk hardware, for instance. I'd much rather use names
Richard Huxton wrote:
Bruce Momjian wrote:
Tom Lane wrote:
What's wrong with synchronous_commit? It's accurate and simple.
That is fine too.
My concern would be that it can be read two ways:
1. When you commit, sync (something or other - unspecified)
2. Synchronise commits (to each other?
Michael Paesold wrote:
Alvaro Herrera wrote:
So what you are proposing above amounts to setting scale factor = 0.05.
The threshold is unimportant -- in the case of a big table it matters
not if it's 0 or 1000, it will be almost irrelevant in calculations. In
the case of small tables, then the
Tom Lane wrote:
[ back to dealing with this patch, finally ]
Florian G. Pflug [EMAIL PROTECTED] writes:
While creating the patch, I've been thinking about whether it might be
worthwhile to note that we just did recovery in the ShutdownCheckpoint
(or create a new checkpoint type RecoveryCheckpoint
1 - 100 of 303 matches