On Wed, May 28, 2008 at 7:11 PM, Mike [EMAIL PROTECTED] wrote:
Can somebody point to the most logical place in the code to intercept the
WAL writes? (just a rough direction would be enough) - or if this doesn't
make sense at all, another suggestion on where to get the data?
XLogInsert
Great - I'll take a look at that code.
On Wed, 2008-05-28 at 22:45 +0100, Simon Riggs wrote:
On Wed, 2008-05-28 at 16:28 -0400, Tom Lane wrote:
Gregory Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
This is expected to take lots of memory because each row-requiring-check
generates an entry in the pending
On Wed, 2008-05-28 at 19:11 -0400, Mike wrote:
Can somebody point to the most logical place in the code to intercept
the WAL writes? (just a rough direction would be enough)- or if this
doesn't make sense at all, another suggestion on where to get the
data?
I don't think that intercepting
On Thu, May 29, 2008 at 2:11 AM, Douglas McNaught [EMAIL PROTECTED] wrote:
On Wed, May 28, 2008 at 7:05 PM, Sabbiolina [EMAIL PROTECTED] wrote:
Hello, in my particular case I need to configure Postgres to handle only a
few concurrent connections, but I need it to be blazingly fast, so I
On Wed, May 28, 2008 at 4:10 PM, Tom Lane [EMAIL PROTECTED] wrote:
If you've got any bug fixes you've been working on, now is a good time
to get them finished up and sent in...
Has the s/\x09//g patch for psql from Bruce and you been
backported to 8.3? I didn't see it on pgsql-committers.
On Thu, May 29, 2008 at 4:26 AM, Sabbiolina [EMAIL PROTECTED] wrote:
On Thu, May 29, 2008 at 2:11 AM, Douglas McNaught [EMAIL PROTECTED] wrote:
On Wed, May 28, 2008 at 7:05 PM, Sabbiolina [EMAIL PROTECTED] wrote:
Hello, in my particular case I need to configure Postgres to handle only a
Tom Lane wrote:
Jorgen Austvik - Sun Norway [EMAIL PROTECTED] writes:
we would like to be able to use and ship pg_regress and the PostgreSQL
test suite independently of the PostgreSQL build environment, for
testing and maybe even as a separate package to be built and shipped
with the OS for
On Thu, May 29, 2008 at 01:05:22AM +0200, Sabbiolina wrote:
I have 4 Gigs of RAM, how do I force Postgres to use a higher part of such
memory in order to cache more indexes, queries and so on?
PG relies on the operating system to cache most disk accesses. Looking
at the amount of memory a
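For what it's worth, the two settings usually involved here are shared_buffers
and effective_cache_size; the values below are only illustrative, not
recommendations for any particular workload:

```
# postgresql.conf -- illustrative values for a machine with 4 GB of RAM
shared_buffers = 512MB          # PostgreSQL's own page cache
effective_cache_size = 3GB      # planner hint: how much the OS likely caches
```

Note that effective_cache_size allocates nothing; it only tells the planner
how much OS-level caching to assume when costing index scans.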
Guillaume Smet [EMAIL PROTECTED] writes:
On Wed, May 28, 2008 at 4:10 PM, Tom Lane [EMAIL PROTECTED] wrote:
If you've got any bug fixes you've been working on, now is a good time
to get them finished up and sent in...
Has the s/\x09//g patch for psql from Bruce and you been
backported to
Sabbiolina wrote:
On Thu, May 29, 2008 at 2:11 AM, Douglas McNaught [EMAIL PROTECTED] wrote:
On Wed, May 28, 2008 at 7:05 PM, Sabbiolina [EMAIL PROTECTED] wrote:
Hello, in my particular case I need to configure Postgres to
On Thu, May 29, 2008 at 10:19:46AM -0400, Justin wrote:
To my understanding PostgreSQL only caches queries and results in memory
for that specific connection. So when that connection is closed, those
cached results are cleared out. So cached indexes and queries are for
that connection
On Thu, May 29, 2008 at 10:19 AM, Justin [EMAIL PROTECTED] wrote:
To my understanding PostgreSQL only caches queries and results in memory for
that specific connection. So when that connection is closed, those cached
results are cleared out. So cached indexes and queries are for that
The Postgres core team met at PGCon to discuss a few issues, the largest
of which is the need for simple, built-in replication for PostgreSQL.
Historically the project policy has been to avoid putting replication
into core PostgreSQL, so as to leave room for development of competing
solutions,
Tom Lane wrote:
Guillaume Smet [EMAIL PROTECTED] writes:
On Wed, May 28, 2008 at 4:10 PM, Tom Lane [EMAIL PROTECTED] wrote:
If you've got any bug fixes you've been working on, now is a good time
to get them finished up and sent in...
Has the s/\x09//g patch for psql from Bruce and
On Thu, May 29, 2008 at 4:14 PM, Tom Lane [EMAIL PROTECTED] wrote:
No, nothing's been done about that AFAIK. What's the consensus,
do we want to change that behavior in 8.3.2?
IIRC, no one voted against backpatching it after Alvaro and you agreed
with doing so.
Archives link:
On 5/29/08, Tom Lane [EMAIL PROTECTED] wrote:
The Postgres core team met at PGCon to discuss a few issues, the largest
of which is the need for simple, built-in replication for PostgreSQL.
Historically the project policy has been to avoid putting replication
into core PostgreSQL, so as to
Bruce Momjian [EMAIL PROTECTED] writes:
Tom Lane wrote:
No, nothing's been done about that AFAIK. What's the consensus,
do we want to change that behavior in 8.3.2?
I think everyone but me wanted it backpatched, so let's do it. I have
posted both patches but I am unable to apply your
On Thu, May 29, 2008 at 10:12:55AM -0400, Tom Lane wrote:
The Postgres core team met at PGCon to discuss a few issues, the
largest of which is the need for simple, built-in replication for
PostgreSQL. Historically the project policy has been to avoid
putting replication into core PostgreSQL,
Radek Strnad wrote:
[snip]
I'm thinking of dividing the problem into two parts - in beginning
pg_collation will contain two functions. One will have hard-coded rules
for these basic collations (SQL_CHARACTER, GRAPHIC_IRV, LATIN1, ISO8BIT,
UCS_BASIC). It will compare each string character
On Thu, May 29, 2008 at 4:58 PM, Tom Lane [EMAIL PROTECTED] wrote:
IIRC I made a few cosmetic cleanups along with the actual bug fix.
I'll take a look this afternoon and put it in.
Thanks.
--
Guillaume
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to
On Thu, 2008-05-29 at 08:21 -0700, David Fetter wrote:
On Thu, May 29, 2008 at 10:12:55AM -0400, Tom Lane wrote:
This part is a deal-killer. It's a giant up-hill slog to sell warm
standby to those in charge of making resources available because the
warm standby machine consumes SA time,
Merlin Moncure wrote:
On Thu, May 29, 2008 at 10:19 AM, Justin [EMAIL PROTECTED] wrote:
To my understanding PostgreSQL only caches queries and results in memory for
that specific connection. So when that connection is closed, those cached
results are cleared out. So cached indexes and
Marko,
But Tom's mail gave me impression core wants to wait until we get perfect
read-only slave implementation so we wait with it until 8.6, which does
not seem sensible. If we can do slightly inefficient (but simple)
implementation
right now, I see no reason to reject it, we can always
On Thu, May 29, 2008 at 4:45 PM, Justin [EMAIL PROTECTED] wrote:
Then what is the purpose of shared buffers if nothing is being reused? Is it
only used to keep track of locks, changes, and what is being spooled to the
kernel?
It caches disk pages (and holds other data structures), not query
On Thu, May 29, 2008 at 08:46:22AM -0700, Joshua D. Drake wrote:
On Thu, 2008-05-29 at 08:21 -0700, David Fetter wrote:
This part is a deal-killer. It's a giant up-hill slog to sell
warm standby to those in charge of making resources available
because the warm standby machine consumes SA
On Thu, May 29, 2008 at 11:46 AM, Joshua D. Drake [EMAIL PROTECTED] wrote:
The only question I have is... what does this give us that PITR doesn't
give us?
I think the idea is that WAL records would be shipped (possibly via
socket) and applied as they're generated, rather than on a
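For anyone unfamiliar with the file-by-file mechanism being contrasted here,
8.3-style warm standby is driven by archive_command on the master and (for
example) the contrib pg_standby program on the standby. Paths below are
illustrative:

```
# Master, postgresql.conf -- file-by-file WAL shipping
archive_mode = on
archive_command = 'cp %p /mnt/standby_archive/%f'

# Standby, recovery.conf -- replay each 16 MB WAL segment as it arrives
restore_command = 'pg_standby /mnt/standby_archive %f %p'
```

The lag of this scheme is bounded by the WAL segment size, which is what
streaming individual records over a socket would eliminate.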
On Thu, 29 May 2008, Justin wrote:
I'm confused trying to figure out how caches are being used and moved
through the PostgreSQL backend.
The shared_buffers cache holds blocks from the database files. That's it.
If you want some more information about how that actually works head to
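If you want to see that directly, the pg_buffercache contrib module exposes
the buffer cache one page per row. A rough query (for simplicity this skips
the database and tablespace filters):

```sql
-- Requires the pg_buffercache contrib module to be installed.
-- Counts cached 8 kB pages per relation.
SELECT c.relname, count(*) AS buffered_pages
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
GROUP BY c.relname
ORDER BY buffered_pages DESC
LIMIT 10;
```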
Josh Berkus wrote:
Marko,
But Tom's mail gave me impression core wants to wait until we get perfect
read-only slave implementation so we wait with it until 8.6, which does
not seem sensible. If we can do slightly inefficient (but simple)
implementation
right now, I see no reason to
On Thu, May 29, 2008 at 4:48 PM, Douglas McNaught [EMAIL PROTECTED] wrote:
On Thu, May 29, 2008 at 11:46 AM, Joshua D. Drake [EMAIL PROTECTED] wrote:
The only question I have is... what does this give us that PITR doesn't
give us?
I think the idea is that WAL records would be shipped
On Thu, May 29, 2008 at 4:52 PM, Dave Page [EMAIL PROTECTED] wrote:
On Thu, May 29, 2008 at 4:45 PM, Justin [EMAIL PROTECTED] wrote:
Then what is the purpose of shared buffers if nothing is being reused? Is it
only used to keep track of locks, changes, and what is being spooled to the
On Thursday 29 May 2008 09:54:03 am Marko Kreen wrote:
On 5/29/08, Tom Lane [EMAIL PROTECTED] wrote:
The Postgres core team met at PGCon to discuss a few issues, the largest
of which is the need for simple, built-in replication for PostgreSQL.
Historically the project policy has been to
On 5/29/08, David Fetter [EMAIL PROTECTED] wrote:
On Thu, May 29, 2008 at 10:12:55AM -0400, Tom Lane wrote:
Ideally this would be coupled with the ability to execute read-only
queries on the slave servers, but we see technical difficulties that
might prevent that from being completed
* Josh Berkus [EMAIL PROTECTED] [080529 11:52]:
Marko,
But Tom's mail gave me impression core wants to wait until we get perfect
read-only slave implementation so we wait with it until 8.6, which does
not seem sensible. If we can do slightly inefficient (but simple)
implementation
right
Joshua D. Drake wrote:
On Thu, 2008-05-29 at 08:21 -0700, David Fetter wrote:
On Thu, May 29, 2008 at 10:12:55AM -0400, Tom Lane wrote:
This part is a deal-killer. It's a giant up-hill slog to sell warm
standby to those in charge of making resources available because the
warm standby
On Thu, May 29, 2008 at 11:58:31AM -0400, Bruce Momjian wrote:
Josh Berkus wrote:
Publishing the XIDs back to the master is one possibility. We
also looked at using spillover segments for vacuumed rows, but
that seemed even less viable.
I'm also thinking, for *async replication*, that
David Fetter wrote:
This part is a deal-killer. It's a giant up-hill slog to sell warm
standby to those in charge of making resources available because the
warm standby machine consumes SA time, bandwidth, power, rack space,
etc., but provides no tangible benefit, and this feature would have
David Fetter wrote:
On Thu, May 29, 2008 at 11:58:31AM -0400, Bruce Momjian wrote:
Josh Berkus wrote:
Publishing the XIDs back to the master is one possibility. We
also looked at using spillover segments for vacuumed rows, but
that seemed even less viable.
I'm also thinking,
* Dave Page [EMAIL PROTECTED] [080529 12:03]:
On Thu, May 29, 2008 at 4:48 PM, Douglas McNaught [EMAIL PROTECTED] wrote:
I think the idea is that WAL records would be shipped (possibly via
socket) and applied as they're generated, rather than on a
file-by-file basis. At least that's what
Bruce,
Another idea I discussed with Tom is having the slave _delay_ applying
WAL files until all slave snapshots are ready.
Well, again, that only works for async mode. I personally think that's
the correct solution for async. But for synch mode, I think we need to
push the xids back to
Dave Page wrote:
On Thu, May 29, 2008 at 4:48 PM, Douglas McNaught [EMAIL PROTECTED] wrote:
On Thu, May 29, 2008 at 11:46 AM, Joshua D. Drake [EMAIL PROTECTED] wrote:
The only question I have is... what does this give us that PITR doesn't
give us?
I think the idea is that
On 5/29/08, Joshua D. Drake [EMAIL PROTECTED] wrote:
On Thu, 2008-05-29 at 08:21 -0700, David Fetter wrote:
On Thu, May 29, 2008 at 10:12:55AM -0400, Tom Lane wrote:
This part is a deal-killer. It's a giant up-hill slog to sell warm
standby to those in charge of making resources
Josh Berkus wrote:
Bruce,
Another idea I discussed with Tom is having the slave _delay_ applying
WAL files until all slave snapshots are ready.
Well, again, that only works for async mode. I personally think that's
the correct solution for async. But for synch mode, I think we
On May 29, 2008, at 9:12 AM, David Fetter wrote:
On Thu, May 29, 2008 at 11:58:31AM -0400, Bruce Momjian wrote:
Josh Berkus wrote:
Publishing the XIDs back to the master is one possibility. We
also looked at using spillover segments for vacuumed rows, but
that seemed even less viable.
I'm
Tom Lane wrote:
In practice, simple asynchronous single-master-multiple-slave
replication covers a respectable fraction of use cases, so we have
concluded that we should allow such a feature to be included in the core
project. We emphasize that this is not meant to prevent continued
development
On Thu, 2008-05-29 at 09:10 -0700, Josh Berkus wrote:
Joshua D. Drake wrote:
The only question I have is... what does this give us that PITR doesn't
give us?
Since people seem to be unclear on what we're proposing:
8.4 Synchronous Warm Standby: makes PostgreSQL more suitable for HA
On 5/29/08, Aidan Van Dyk [EMAIL PROTECTED] wrote:
* Dave Page [EMAIL PROTECTED] [080529 12:03]:
On Thu, May 29, 2008 at 4:48 PM, Douglas McNaught [EMAIL PROTECTED] wrote:
I think the idea is that WAL records would be shipped (possibly via
socket) and applied as they're generated, rather
* Marko Kreen [EMAIL PROTECTED] [080529 12:27]:
I don't think that's a problem. If the user runs its server at the
limit of write-bandwidth, that's its problem.
IOW, with synchronous replication, we _want_ the server to lag behind
slaves.
About the single-threading problem - afaik, the
Andrew Dunstan [EMAIL PROTECTED] writes:
Dave Page wrote:
Yes, we're talking real-time streaming (synchronous) log shipping.
That's not what Tom's email said, AIUI.
Sorry, I was a bit sloppy about that. If we go with a WAL-shipping
solution it would be pretty easy to support both synchronous
While running a database-wide analyze on 8.3.1, I received the following:
ERROR: duplicate key value violates unique constraint
pg_statistic_relid_att_index
I've only seen this once before, and because I don't have time to look
at the code at the moment, I figured I'd post it as a heads-up.
--
David Fetter wrote:
This part is a deal-killer. It's a giant up-hill slog to sell warm
standby to those in charge of making resources available because the
warm standby machine consumes SA time, bandwidth, power, rack space,
etc., but provides no tangible benefit, and this feature would have
David Fetter [EMAIL PROTECTED] writes:
On Thu, May 29, 2008 at 08:46:22AM -0700, Joshua D. Drake wrote:
The only question I have is... what does this give us that PITR
doesn't give us?
It looks like a wrapper for PITR to me, so the gain would be ease of
use.
A couple of points about that:
On Thu, 29 May 2008, David Fetter wrote:
It's a giant up-hill slog to sell warm standby to those in charge of
making resources available because the warm standby machine consumes SA
time, bandwidth, power, rack space, etc., but provides no tangible
benefit, and this feature would have exactly
On Thu, 2008-05-29 at 09:18 -0700, Josh Berkus wrote:
Bruce,
Another idea I discussed with Tom is having the slave _delay_ applying
WAL files until all slave snapshots are ready.
Well, again, that only works for async mode.
It depends on what we mean by synchronous. Do we mean the
Josh,
What does this give us that Solaris Cluster, RedHat Cluster, DRBD etc..
doesn't give us?
Actually, these solutions all have some serious drawbacks, not the least
of which is difficult administration (I speak from bitter personal
experience). Also, most of them require installation
On Thu, May 29, 2008 at 12:11:21PM -0400, Brian Hurt wrote:
Being able to do read-only queries makes this feature more valuable in more
situations, but I disagree that it's a deal-breaker.
Your managers are apparently more enlightened than some. ;-)
A
--
Andrew Sullivan
[EMAIL PROTECTED]
On Thu, May 29, 2008 at 07:20:37PM +0300, Marko Kreen wrote:
So you can do lossless failover. Currently there is no good
solution for this.
Indeed. Getting lossless failover would be excellent.
I understand David's worry (having had those arguments more times than
I care to admit), but if
On Thu, May 29, 2008 at 02:13:26PM -0400, Andrew Sullivan wrote:
On Thu, May 29, 2008 at 12:11:21PM -0400, Brian Hurt wrote:
Being able to do read-only queries makes this feature more
valuable in more situations, but I disagree that it's a
deal-breaker.
Your managers are apparently more
in this case too. So each slave just needs to report its own longest
open tx as open to master. Yes, it bloats master but no way around it.
Slaves should not report it every time or every transaction. Vacuum on master
will ask them before doing a real work.
--
Teodor Sigaev
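The xmin bookkeeping being discussed is easy to sketch. This is a toy
illustration only, not PostgreSQL code: the function name is invented, and
real XIDs are 32-bit counters that wrap around, which this ignores.

```python
# Toy sketch of the vacuum horizon when slaves report their oldest open XIDs.
# Illustrative only: real transaction IDs wrap around and need modular compares.

def vacuum_horizon(master_oldest_xid, slave_oldest_xids):
    """Oldest transaction ID that vacuum must still treat as running.

    The master may only remove row versions invisible to every open
    transaction, including those last reported by the slaves.
    """
    return min([master_oldest_xid] + list(slave_oldest_xids))

# Master's oldest open XID is 1500; slaves last reported 1200 and 1700.
# Vacuum must keep anything a transaction as old as XID 1200 could still see.
print(vacuum_horizon(1500, [1200, 1700]))  # -> 1200
```

This is why a long-lived transaction on any slave bloats the master: the
horizon can only advance once that slave reports a newer oldest XID.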
On Thu, May 29, 2008 at 12:19 PM, Andrew Dunstan [EMAIL PROTECTED] wrote:
That's not what Tom's email said, AIUI. Synchronous replication surely
means that the master and slave always have the same set of transactions
applied. Streaming synchronous. But streaming log shipping will allow us
to
Hi everyone,
First of all, I'm absolutely delighted that the PG community is thinking
seriously about replication.
Second, having a solid, easy-to-use database availability solution that works
more or less out of the box would be an enormous benefit to customers.
Availability is the single
On Thu, May 29, 2008 at 10:19 AM, Justin [EMAIL PROTECTED] wrote:
Quoting you: "Also, postgresql doesn't as a rule cache 'results and queries'."
Then what is the purpose of shared buffers if nothing is being reused? Is it
only used to keep track of locks, changes, and what is being spooled to the
On Thu, May 29, 2008 at 3:05 PM, Robert Hodges
[EMAIL PROTECTED] wrote:
Third, you can't stop with just this feature. (This is the BUT part of the
post.) The use cases not covered by this feature are actually pretty
large. Here are a few that concern me:
1.) Partial replication.
2.) WAN
Andrew Sullivan wrote:
On Thu, May 29, 2008 at 12:11:21PM -0400, Brian Hurt wrote:
Being able to do read-only queries makes this feature more valuable in more
situations, but I disagree that it's a deal-breaker.
Your managers are apparently more enlightened than some. ;-)
A
No
On 5/29/08, Teodor Sigaev [EMAIL PROTECTED] wrote:
in this case too. So each slave just needs to report its own longest
open tx as open to master. Yes, it bloats master but no way around it.
Slaves should not report it every time or every transaction. Vacuum on
master will ask them
On Thu, May 29, 2008 at 12:05:18PM -0700, Robert Hodges wrote:
people are starting to get religion on this issue I would strongly
advocate a parallel effort to put in a change-set extraction API
that would allow construction of comprehensive master/slave
replication.
You know, I gave a
On 5/29/08, Tom Lane [EMAIL PROTECTED] wrote:
* The proposed approach is trying to get to real replication
incrementally. Getting rid of the loss window involved in file-by-file
log shipping is step one, and I suspect that step two is going to be
fixing performance issues in WAL replay to
David Fetter wrote:
Either one of these would be great, but something that involves
machines that stay useless most of the time is just not going to work.
Lots of people do use warm standby already anyway, just not based on
mechanisms built into PostgreSQL. So defining away this need is
Jeff Davis wrote:
It depends on what we mean by synchronous. Do we mean the WAL record
has made it to the disk on the slave system, or the WAL record has
been applied on the slave system?
DRBD, which is a common warm standby solution for PostgreSQL at the moment,
provides various levels of
Merlin Moncure wrote:
Read only slave is the #1 most anticipated feature in the
circles I run with.
Do these circles not know about slony and londiste?
I've got a BSD system I work on that is jailed and has low shared memory
settings. So low that I cannot even initdb to create a test 8.3.1 database.
It tries to create template1 with shared_buffers of 50 and max_connections
of 13. Is there any
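In case it helps anyone else hitting this: the usual workarounds are either
raising the SysV shared memory limits visible to the jail, or handing the
new cluster even smaller settings. The numbers below are examples only:

```
# /etc/sysctl.conf on the FreeBSD host (the jail must be allowed SysV IPC)
kern.ipc.shmmax=134217728    # max segment size in bytes
kern.ipc.shmall=32768        # max shared memory, in pages

# or shrink postgresql.conf further once initdb succeeds:
shared_buffers = 32          # without a unit, 8.x reads this as 8 kB pages
max_connections = 5
```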
On 5/29/08, Andrew Sullivan [EMAIL PROTECTED] wrote:
On Thu, May 29, 2008 at 12:05:18PM -0700, Robert Hodges wrote:
people are starting to get religion on this issue I would strongly
advocate a parallel effort to put in a change-set extraction API
that would allow construction of
Joshua D. Drake wrote:
What does this give us that Solaris Cluster, RedHat Cluster, DRBD etc..
doesn't give us?
I personally think that DRBD is a fine solution. But it only runs on Linux.
And Solaris Cluster isn't the same as DRBD.
Andrew Sullivan [EMAIL PROTECTED] writes:
On Thu, May 29, 2008 at 12:05:18PM -0700, Robert Hodges wrote:
people are starting to get religion on this issue I would strongly
advocate a parallel effort to put in a change-set extraction API
that would allow construction of comprehensive
Andrew Sullivan wrote:
The big missing piece is lossless failover. People are currently
doing it with DRBD, various clustering things, etc., and those are
complicated to set up and maintain.
Well, we'll see at the end of this (we hope) how a setup procedure of DRBD vs.
PG warm standby works
Mathias Brossard wrote:
From what I gather from those slides it seems to me that the NTT solution
is synchronous not asynchronous. In my opinion it's even better, but I do
understand that others might prefer asynchronous. I'm going to speculate,
but I would think it should be possible
Tom Lane wrote:
We believe that the most appropriate base technology for this is
probably real-time WAL log shipping, as was demoed by NTT OSS at PGCon.
Now how do we get our hands on their code?
On Thu, May 29, 2008 at 3:59 PM, Peter Eisentraut [EMAIL PROTECTED] wrote:
Merlin Moncure wrote:
Read only slave is the #1 most anticipated feature in the
circles I run with.
Do these circles not know about slony and londiste?
Sure.
For various reasons mentioned elsewhere on this thread, a
Robert,
1.) Partial replication.
2.) WAN replication.
3.) Bi-directional replication. (Yes, this is evil but there are
problems where it is indispensable.)
4.) Upgrade support. Aside from database upgrade (how would this ever
really work between versions?), it would not support
On Thu, May 29, 2008 at 09:54:03PM +0200, Peter Eisentraut wrote:
David Fetter wrote:
Either one of these would be great, but something that involves
machines that stay useless most of the time is just not going to
work.
Lots of people do use warm standby already anyway, just not based
David Fetter [EMAIL PROTECTED] writes:
On Thu, May 29, 2008 at 09:54:03PM +0200, Peter Eisentraut wrote:
I think the consensus in the core team was that having synchronous
log shipping in 8.4 would already be a worthwhile feature by itself.
If that was in fact the consensus of the core team,
Greg Sabino Mullane wrote:
I've got a BSD system I work on that is jailed and has low shared
memory settings. So low that I cannot even initdb to create a test
8.3.1 database. It tries to create template1 with shared_buffers of
50 and
On Thu, May 29, 2008 at 04:44:19PM -0400, Tom Lane wrote:
David Fetter [EMAIL PROTECTED] writes:
On Thu, May 29, 2008 at 09:54:03PM +0200, Peter Eisentraut wrote:
I think the consensus in the core team was that having
synchronous log shipping in 8.4 would already be a worthwhile
feature
David Fetter wrote:
What is your justification for denigrating this plan with that? Or
are you merely complaining because we know we won't be all the way
there in 8.4?
Again, just my humble opinion, but given the stated goal, which I
agree with, I'd say it's worth holding up 8.4 until
On Thu, May 29, 2008 at 11:05:09PM +0300, Marko Kreen wrote:
There is this tiny matter of replicating schema changes asynchronously,
but I suspect nobody actually cares.
I know that Slony's users call this their number one irritant, so I
have my doubts nobody cares. But maybe nobody cares
David,
I think the consensus in the core team was that having synchronous
log shipping in 8.4 would already be a worthwhile feature by itself.
If that was in fact the consensus of the core team,
It is.
and what I've been
seeing from several core members in this thread makes that idea
On Thu, May 29, 2008 at 04:54:04PM -0400, Bruce Momjian wrote:
David Fetter wrote:
What is your justification for denigrating this plan with that?
Or are you merely complaining because we know we won't be all
the way there in 8.4?
Again, just my humble opinion, but given the stated
Bruce Momjian [EMAIL PROTECTED] writes:
David Fetter wrote:
Again, just my humble opinion, but given the stated goal, which I
agree with, I'd say it's worth holding up 8.4 until some kind of
out-of-the-box replication advances that goal, where Yet Another
Toolkit Suitable For People Who Are
On Thu, May 29, 2008 at 01:55:42PM -0700, Josh Berkus wrote:
David,
I think the consensus in the core team was that having synchronous
log shipping in 8.4 would already be a worthwhile feature by itself.
If that was in fact the consensus of the core team,
It is.
and what I've been
David,
I think having master-slave replication in the core using WAL is a
*great* thing to do, doable, a good path to go on, etc., and I think
it's worth holding up 8.4 until we have at least one actual
out-of-the-box version of same.
Ah, ok. Well, I can tell you that the core team is also
On Thu, May 29, 2008 at 01:39:29PM -0700, David Fetter wrote:
I think the consensus in the core team was that having synchronous
log shipping in 8.4 would already be a worthwhile feature by itself.
If that was in fact the consensus of the core team, and what I've been
seeing from several
Peter Eisentraut wrote:
Mathias Brossard wrote:
From what I gather from those slides it seems to me that the NTT solution
is synchronous not asynchronous. In my opinion it's even better, but I do
understand that others might prefer asynchronous. I'm going to speculate,
but I would think it
On Thursday 29 May 2008 12:13:20 Bruce Momjian wrote:
David Fetter wrote:
On Thu, May 29, 2008 at 11:58:31AM -0400, Bruce Momjian wrote:
Josh Berkus wrote:
Publishing the XIDs back to the master is one possibility. We
also looked at using spillover segments for vacuumed rows, but
On Thu, 2008-05-29 at 17:42 -0400, Robert Treat wrote:
I would have thought the read only piece would have been more important than
the synchronous piece. In my experience readable slaves is the big selling
point in both Oracle and MySQL's implementations, and people are not nearly
as
Robert Treat [EMAIL PROTECTED] writes:
I would have thought the read only piece would have been more important than
the synchronous piece. In my experience readable slaves is the big selling
point in both Oracle and MySQL's implementations, and people are not nearly
as concerned if there is
On 5/29/08, Andrew Sullivan [EMAIL PROTECTED] wrote:
On Thu, May 29, 2008 at 11:05:09PM +0300, Marko Kreen wrote:
There is this tiny matter of replicating schema changes asynchronously,
but I suspect nobody actually cares.
I know that Slony's users call this their number one irritant, so
On Wed, May 28, 2008 at 8:30 PM, Mike [EMAIL PROTECTED] wrote:
When you say a bit of decoding, is that because the data written to the
logs
is after the query parser/planner? Or because it's written in several
chunks? Or?
Because that's the actual recovery record. There is no SQL text, just
Hi,
I'd first want to applaud core decision: having bare PostgreSQL
propose a reliable and simple to set-up synchronous replication
solution is an excellent perspective! ...
On 29 May 2008 at 23:42, Robert Treat wrote:
I would have thought the
Mike [EMAIL PROTECTED] writes:
Is there another place in the code, I can get access to the statements (or
statement like information), after a transaction commit?
No.
Bear in mind that what you have decided to do amounts to rolling your
own replication system. This is a Hard Problem. I would
On Wed, May 28, 2008 at 6:26 PM, in message
[EMAIL PROTECTED],
Florian G. Pflug [EMAIL PROTECTED] wrote:
I think we should put some randomness into the decision,
to spread the IO caused by hit-bit updates after a batch load.
Currently we have a policy of doing a VACUUM FREEZE ANALYZE on a
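For readers following along, the batch-load pattern being referred to looks
roughly like this (the table and path are made up for illustration):

```sql
-- After a bulk load, freeze the new tuples in one pass so their hint bits
-- and frozen XIDs are written now, rather than lazily by later readers.
COPY staging_table FROM '/tmp/load.csv' WITH CSV;
VACUUM FREEZE ANALYZE staging_table;
```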
Dimitri Fontaine [EMAIL PROTECTED] writes:
While at it, would it be possible for the simple part of the core
team statement to include automatic failover?
No, I think it would be a useless expenditure of energy. Failover
includes a lot of things that are not within our purview: switching
IP