On Mon, Dec 6, 2010 at 7:22 PM, Tom Lane wrote:
> Josh Berkus writes:
> >> However, if you were doing something like parallel pg_dump you could
> >> just run the parent and child instances all against the slave, so the
> >> pg_dump scenario doesn't seem to offer much of a supporting use-case for
> >> worrying about this. When would you really need to be able to do it?
On Fri, Dec 24, 2010 at 06:37:26PM -0500, Andrew Dunstan wrote:
> On 12/24/2010 06:26 PM, Aidan Van Dyk wrote:
> >On Fri, Dec 24, 2010 at 2:48 PM, Joshua D. Drake wrote:
> >
> >>I would have to agree here. The idea that we have to search email
> >>is bad enough (issue/bug/feature tracker anyone?) but to have someone
> >>say, search the archives? That is just plain rude and anti-community.
On Dec 24, 2010, at 10:52 AM, Bruce Momjian wrote:
> Agreed. Perhaps we need an anti-TODO that lists things we don't want in
> more detail. The TODO has that for a few items, but scaling things up
> there will be cumbersome.
I don't really think that'd be much better. What might be of some value
On Fri, 2010-12-24 at 18:26 -0500, Aidan Van Dyk wrote:
> On Fri, Dec 24, 2010 at 2:48 PM, Joshua D. Drake wrote:
>
> > I would have to agree here. The idea that we have to search email is bad
> > enough (issue/bug/feature tracker anyone?) but to have someone say,
> > search the archives? That is just plain rude and anti-community.
On 12/24/2010 06:26 PM, Aidan Van Dyk wrote:
On Fri, Dec 24, 2010 at 2:48 PM, Joshua D. Drake wrote:
I would have to agree here. The idea that we have to search email is bad
enough (issue/bug/feature tracker anyone?) but to have someone say,
search the archives? That is just plain rude and anti-community.
On Fri, Dec 24, 2010 at 2:48 PM, Joshua D. Drake wrote:
> I would have to agree here. The idea that we have to search email is bad
> enough (issue/bug/feature tracker anyone?) but to have someone say,
> search the archives? That is just plain rude and anti-community.
Saying "search the bugtracke
> anwhile is X.
>
> Agreed. Perhaps we need an anti-TODO that lists things we don't want in
> more detail. The TODO has that for a few items, but scaling things up
> there will be cumbersome.
>
Well there is a problem with this too. A good example is hints. A lot of
the community wants hints.
Robert Haas wrote:
> I actually think that the phrase "this has been discussed before and
> rejected" should be permanently removed from our list of excuses for
> rejecting a patch. Or if we must use that excuse, then I think a link
> to the relevant discussion is a must, and the relevant discussion
On Tue, Dec 14, 2010 at 7:06 PM, Koichi Suzuki wrote:
> Thank you very much for your advice. Indeed, I'm considering changing
> the license to the PostgreSQL license. It may take a bit more time,
> though...
You wouldn't necessarily need to relicense all of Postgres-XC
(although that would be cool, too,
Robert;
Thank you very much for your advice. Indeed, I'm considering changing
the license to the PostgreSQL license. It may take a bit more time,
though...
--
Koichi Suzuki
2010/12/15 Robert Haas:
> On Tue, Dec 7, 2010 at 3:23 AM, Koichi Suzuki wrote:
>> This is what Postgres-XC is doing between a coordinator and a
>> datanode. Coordinator may correspond to poolers/loadbalancers.
On Tue, Dec 7, 2010 at 3:23 AM, Koichi Suzuki wrote:
> This is what Postgres-XC is doing between a coordinator and a
> datanode. Coordinator may correspond to poolers/loadbalancers.
> Does anyone think it makes sense to extract XC implementation of
> snapshot shipping to PostgreSQL itself?
Per
On 12/07/2010 09:23 AM, Koichi Suzuki wrote:
> This is what Postgres-XC is doing between a coordinator and a
> datanode. Coordinator may correspond to poolers/loadbalancers.
> Does anyone think it makes sense to extract XC implementation of
> snapshot shipping to PostgreSQL itself?
well if there
This is what Postgres-XC is doing between a coordinator and a
datanode. Coordinator may correspond to poolers/loadbalancers.
Does anyone think it makes sense to extract XC implementation of
snapshot shipping to PostgreSQL itself?
Cheers;
--
Koichi Suzuki
2010/12/7 Stefan Kaltenbrunner:
> On 12/07/2010 01:22 AM, Tom Lane wrote:
>> Josh Berkus writes:
However, if you were doing something like parallel pg_dump you could
just run the parent and child instances all against the slave, so the
pg_dump scenario doesn't seem to offer much of a supporting use-case for
worrying about this. When would you really need to be able to do it?
On 12/07/2010 01:22 AM, Tom Lane wrote:
> Josh Berkus writes:
>>> However, if you were doing something like parallel pg_dump you could
>>> just run the parent and child instances all against the slave, so the
>>> pg_dump scenario doesn't seem to offer much of a supporting use-case for
>>> worrying about this. When would you really need to be able to do it?
We may need other means to ensure that the snapshot is available on
the slave. It could be a bit too early to use the snapshot on the
slave depending upon the delay of WAL replay.
--
Koichi Suzuki
2010/12/7 Tom Lane:
> marcin mank writes:
>> On Sun, Dec 5, 2010 at 7:28 PM, Tom Lane wrote:
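To make the replay-delay caveat above concrete, here is a minimal sketch using
the standby-monitoring functions that already exist in this era of PostgreSQL;
a snapshot captured on the primary is only safe to use on a standby once replay
has passed the WAL position that was current when the snapshot was taken:

    -- On the primary, at the moment the snapshot is captured:
    SELECT pg_current_xlog_location();       -- e.g. '0/3000148'

    -- On the standby, before trying to use that snapshot:
    SELECT pg_last_xlog_replay_location();   -- must have reached at least the
                                             -- primary's value above, otherwise
                                             -- the snapshot may reference XIDs
                                             -- the standby has not replayed yet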
Josh Berkus writes:
>> However, if you were doing something like parallel pg_dump you could
>> just run the parent and child instances all against the slave, so the
>> pg_dump scenario doesn't seem to offer much of a supporting use-case for
>> worrying about this. When would you really need to be able to do it?
> However, if you were doing something like parallel pg_dump you could
> just run the parent and child instances all against the slave, so the
> pg_dump scenario doesn't seem to offer much of a supporting use-case for
> worrying about this. When would you really need to be able to do it?
If you
marcin mank writes:
> On Sun, Dec 5, 2010 at 7:28 PM, Tom Lane wrote:
>> IIRC, in old discussions of this problem we first considered allowing
>> clients to pull down an explicit representation of their snapshot (which
>> actually is an existing feature now, txid_current_snapshot()) and then
>> upload that again to become
On 06.12.2010 21:48, marcin mank wrote:
On Sun, Dec 5, 2010 at 7:28 PM, Tom Lane wrote:
IIRC, in old discussions of this problem we first considered allowing
clients to pull down an explicit representation of their snapshot (which
actually is an existing feature now, txid_current_snapshot()) and then
upload that again to become
On Sun, Dec 5, 2010 at 7:28 PM, Tom Lane wrote:
> IIRC, in old discussions of this problem we first considered allowing
> clients to pull down an explicit representation of their snapshot (which
> actually is an existing feature now, txid_current_snapshot()) and then
> upload that again to become
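For illustration, a rough sketch of the pass-it-through-the-client idea: the
"pull down" half uses the existing txid_current_snapshot() function, while the
"upload" half (pg_adopt_snapshot below) is purely hypothetical and is exactly
the part whose sanity checking is being debated here:

    -- Connection 1 (the transaction whose snapshot should be shared):
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT txid_current_snapshot();   -- e.g. '1000:1005:1001,1003'
                                      -- (xmin:xmax:list of in-progress xids)

    -- Connection 2 (hypothetical; no such function exists at this point):
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT pg_adopt_snapshot('1000:1005:1001,1003');
    -- The server would have to validate these values thoroughly before
    -- installing them as this transaction's snapshot.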
Tom Lane wrote:
> "Kevin Grittner" writes:
>> Tom Lane wrote:
>>> No. See subtransactions.
>
>> Subtransactions are included in snapshots?
>
> Sure, see GetSnapshotData(). You could avoid it by setting
> suboverflowed, but that comes at a nontrivial performance cost.
Yeah, sorry for blurting
"Kevin Grittner" writes:
> Tom Lane wrote:
>> No. See subtransactions.
> Subtransactions are included in snapshots?
Sure, see GetSnapshotData(). You could avoid it by setting
suboverflowed, but that comes at a nontrivial performance cost.
regards, tom lane
Tom Lane wrote:
> "Kevin Grittner" writes:
>> Surely you can predict that any snapshot is no larger than a fairly
>> small fixed portion plus sizeof(TransactionId) * MaxBackends?
>
> No. See subtransactions.
Subtransactions are included in snapshots?
-Kevin
"Kevin Grittner" writes:
> Tom Lane wrote:
>>> I'm still not convinced that using shared memory is a bad way to
>>> pass these around. Surely we're not talking about large numbers
>>> of them. What am I missing here?
>>
>> They're not of a very predictable size.
> Surely you can predict that any snapshot is no larger than a fairly
> small fixed portion plus sizeof(TransactionId) * MaxBackends?
Tom Lane wrote:
>> I'm still not convinced that using shared memory is a bad way to
>> pass these around. Surely we're not talking about large numbers
>> of them. What am I missing here?
>
> They're not of a very predictable size.
Surely you can predict that any snapshot is no larger than a fairly
small fixed portion plus sizeof(TransactionId) * MaxBackends?
On 12/06/2010 12:28 PM, Tom Lane wrote:
Andrew Dunstan writes:
Yeah. I'm still not convinced that using shared memory is a bad way to
pass these around. Surely we're not talking about large numbers of them.
What am I missing here?
They're not of a very predictable size.
Ah. Ok.
cheers
Andrew Dunstan writes:
> Why not just say give me the snapshot currently held by process ?
There's not a unique snapshot held by a particular process. Also, we
don't want to expend the overhead to fully publish every snapshot.
I think it's really necessary that the "sending" process take som
Andrew Dunstan writes:
> Yeah. I'm still not convinced that using shared memory is a bad way to
> pass these around. Surely we're not talking about large numbers of them.
> What am I missing here?
They're not of a very predictable size.
Robert's idea of publish() returning a temp file identifier
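A sketch of the interface shape being floated in these messages, with entirely
hypothetical function names (nothing below exists; it only illustrates
"publish returns an identifier, the other backend hands it back"):

    -- Backend A materializes its snapshot into a temp file, getting a token:
    SELECT pg_publish_snapshot();               -- hypothetical; returns e.g. 'snap-1234'

    -- Backend B presents the token and adopts the published snapshot:
    SELECT pg_subscribe_snapshot('snap-1234');  -- hypothetical

    -- Keeping the published snapshot in a file rather than in shared memory
    -- sidesteps the unpredictable-size problem (subtransactions) noted above.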
On 12/06/2010 10:40 AM, Tom Lane wrote:
Robert Haas writes:
On Mon, Dec 6, 2010 at 9:45 AM, Heikki Linnakangas wrote:
Well, then you need some sort of cross-backend communication, which is
always a bit clumsy.
A temp file seems quite sufficient, and not at all difficult.
"Not at all diff
On Mon, Dec 6, 2010 at 10:40 AM, Tom Lane wrote:
> Robert Haas writes:
>> On Mon, Dec 6, 2010 at 9:45 AM, Heikki Linnakangas wrote:
>>> Well, then you need some sort of cross-backend communication, which is
>>> always a bit clumsy.
>
>> A temp file seems quite sufficient, and not at all difficult.
Robert Haas writes:
> On Mon, Dec 6, 2010 at 9:45 AM, Heikki Linnakangas wrote:
>> Well, then you need some sort of cross-backend communication, which is
>> always a bit clumsy.
> A temp file seems quite sufficient, and not at all difficult.
"Not at all difficult" is nonsense. To do that, yo
On Mon, Dec 6, 2010 at 10:35 AM, Andrew Dunstan wrote:
> On 12/06/2010 10:22 AM, Robert Haas wrote:
>>
>> On Mon, Dec 6, 2010 at 9:58 AM, Heikki Linnakangas wrote:
>>>
>>> On 06.12.2010 15:53, Robert Haas wrote:
>>>> I guess. It still seems far too much like exposing the server's guts
>>>> for my taste.
On 12/06/2010 10:22 AM, Robert Haas wrote:
On Mon, Dec 6, 2010 at 9:58 AM, Heikki Linnakangas wrote:
On 06.12.2010 15:53, Robert Haas wrote:
I guess. It still seems far too much like exposing the server's guts
for my taste. It might not be as bad as the expression tree stuff,
but there's nothing particularly good about it either.
On Mon, Dec 6, 2010 at 9:58 AM, Heikki Linnakangas wrote:
> On 06.12.2010 15:53, Robert Haas wrote:
>>
>> I guess. It still seems far too much like exposing the server's guts
>> for my taste. It might not be as bad as the expression tree stuff,
>> but there's nothing particularly good about it either.
On 06.12.2010 15:53, Robert Haas wrote:
I guess. It still seems far too much like exposing the server's guts
for my taste. It might not be as bad as the expression tree stuff,
but there's nothing particularly good about it either.
Note that we already have txid_current_snapshot() function, wh
On Mon, Dec 6, 2010 at 9:45 AM, Heikki Linnakangas wrote:
> On 06.12.2010 14:57, Robert Haas wrote:
>>
>> On Mon, Dec 6, 2010 at 2:29 AM, Heikki Linnakangas wrote:
>>>
>>> The client doesn't need to know anything about the snapshot blob that the
>> server gives it. It just needs to pass it back to the server through the
>> other connection.
On 06.12.2010 14:57, Robert Haas wrote:
On Mon, Dec 6, 2010 at 2:29 AM, Heikki Linnakangas wrote:
The client doesn't need to know anything about the snapshot blob that the
server gives it. It just needs to pass it back to the server through the
other connection. To the client, it's just an opaque blob.
On Mon, Dec 6, 2010 at 2:29 AM, Heikki Linnakangas wrote:
> On 06.12.2010 02:55, Robert Haas wrote:
>>
>> On Sun, Dec 5, 2010 at 1:28 PM, Tom Lane wrote:
>>>
>>> I'm wondering if we should reconsider the pass-it-through-the-client
>>> approach, because if we could make that work it would be more general and
>>> it wouldn't need any special privileges.
On 06.12.2010 02:55, Robert Haas wrote:
On Sun, Dec 5, 2010 at 1:28 PM, Tom Lane wrote:
I'm wondering if we should reconsider the pass-it-through-the-client
approach, because if we could make that work it would be more general and
it wouldn't need any special privileges. The trick seems to be to apply
sufficient sanity testing
Thank you Joachim;
Yes, and the current patch requires that the original (publisher)
transaction stay alive to prevent RecentXmin from being updated.
I hope this restriction is acceptable if publishing/subscribing is
provided via functions, not statements.
Cheers;
--
Koichi Suzuki
2010/12/6 Joachim Wieland:
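To spell out the restriction Koichi describes, using the same hypothetical
publish/subscribe functions sketched earlier in this thread:

    -- Connection 1 (publisher):
    BEGIN;
    SELECT pg_publish_snapshot();               -- hypothetical; returns 'snap-1234'
    -- It must stay inside this transaction: if it committed or aborted now,
    -- nothing would hold back RecentXmin and the global xmin horizon, and rows
    -- visible only to the published snapshot could be vacuumed away.

    -- Connection 2 (subscriber), while connection 1 is still open:
    BEGIN;
    SELECT pg_subscribe_snapshot('snap-1234');  -- hypothetical
    -- Only after this call may the publisher safely end its transaction.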
On Sun, Dec 5, 2010 at 9:27 PM, Robert Haas wrote:
> On Sun, Dec 5, 2010 at 9:04 PM, Andrew Dunstan wrote:
>> Why not just say give me the snapshot currently held by process ?
>>
>> And please, not temp files if possible.
>
> As far as I'm aware, the full snapshot doesn't normally exist in
> shared memory, hence the need for publication of some sort.
On Sun, Dec 5, 2010 at 9:04 PM, Andrew Dunstan wrote:
> Why not just say give me the snapshot currently held by process ?
>
> And please, not temp files if possible.
As far as I'm aware, the full snapshot doesn't normally exist in
shared memory, hence the need for publication of some sort. W
On 12/05/2010 08:55 PM, Robert Haas wrote:
On Sun, Dec 5, 2010 at 1:28 PM, Tom Lane wrote:
I'm wondering if we should reconsider the pass-it-through-the-client
approach, because if we could make that work it would be more general and
it wouldn't need any special privileges. The trick seems to be to apply
sufficient sanity testing
On Sun, Dec 5, 2010 at 1:28 PM, Tom Lane wrote:
> I'm wondering if we should reconsider the pass-it-through-the-client
> approach, because if we could make that work it would be more general and
> it wouldn't need any special privileges. The trick seems to be to apply
> sufficient sanity testing
Greg Smith writes:
> In addition, Joachim submitted a synchronized snapshot patch that looks
> to me like it slipped through the cracks without being fully explored.
> ...
> The way I read that thread, there were two objections:
> 1) This mechanism isn't general enough for all use-cases outside
Joachim Wieland wrote:
Regarding snapshot cloning and dump consistency, I brought this up
already several months ago and asked if the feature is considered
useful even without snapshot cloning.
In addition, Joachim submitted a synchronized snapshot patch that looks
to me like it slipped through the cracks without being fully explored.
On 12/03/2010 12:17 PM, Alvaro Herrera wrote:
Excerpts from Robert Haas's message of vie dic 03 13:56:32 -0300 2010:
I know the use cases are limited, but I think it's still useful on its own.
I don't understand what's so difficult about starting with the snapshot
cloning patch. AFAIR it's already been written anyway, no?
Excerpts from Robert Haas's message of vie dic 03 13:56:32 -0300 2010:
> I know the use cases are limited, but I think it's still useful on its own.
I don't understand what's so difficult about starting with the snapshot
cloning patch. AFAIR it's already been written anyway, no?
--
Álvaro Herrera
On Fri, Dec 3, 2010 at 11:40 AM, Andrew Dunstan wrote:
>
>
> On 12/03/2010 11:23 AM, Robert Haas wrote:
>>
>> On Fri, Dec 3, 2010 at 8:02 AM, Andrew Dunstan wrote:
>>>
>>> I think Josh Berkus' comments in the thread you mentioned are correct:
>>>
Actually, I'd say that there's a broad set of cases of people who want
to do a parallel pg_dump while their system is active.
On 12/03/2010 11:23 AM, Robert Haas wrote:
On Fri, Dec 3, 2010 at 8:02 AM, Andrew Dunstan wrote:
I think Josh Berkus' comments in the thread you mentioned are correct:
Actually, I'd say that there's a broad set of cases of people who want
to do a parallel pg_dump while their system is active.
On Fri, Dec 3, 2010 at 8:02 AM, Andrew Dunstan wrote:
> I think Josh Berkus' comments in the thread you mentioned are correct:
>
>> Actually, I'd say that there's a broad set of cases of people who want
>> to do a parallel pg_dump while their system is active. Parallel pg_dump
>> on a stopped system
On 12/02/2010 11:44 PM, Joachim Wieland wrote:
On Thu, Dec 2, 2010 at 9:33 PM, Tom Lane wrote:
In particular, this issue *has* been discussed before, and there was a
consensus that preserving dump consistency was a requirement. I don't
think that Joachim gets to bypass that decision just by submitting a
patch that ignores it.
On Thu, Dec 2, 2010 at 9:33 PM, Tom Lane wrote:
> Andrew Dunstan writes:
>> Umm, nobody has attributed ridiculousness to anyone. Please don't put
>> words in my mouth. But I think this is a perfectly reasonable discussion
>> to have. Nobody gets to come along and get the features they want
>> without some sort of consensus, not me, not you, not Joachim.
On Thu, Dec 2, 2010 at 9:33 PM, Tom Lane wrote:
> In particular, this issue *has* been discussed before, and there was a
> consensus that preserving dump consistency was a requirement. I don't
> think that Joachim gets to bypass that decision just by submitting a
> patch that ignores it.
I am no
On 12/02/2010 09:41 PM, Tom Lane wrote:
Andrew Dunstan writes:
On 12/02/2010 09:09 PM, Tom Lane wrote:
Now, process 3 is blocked behind process 2 is blocked behind process 1
which is waiting for 3 to complete. Can you say "undetectable deadlock"?
Hmm. Yeah. Maybe we could get around it if we prefork the workers
Andrew Dunstan writes:
> On 12/02/2010 09:09 PM, Tom Lane wrote:
>> Now, process 3 is blocked behind process 2 is blocked behind process 1
>> which is waiting for 3 to complete. Can you say "undetectable deadlock"?
> Hmm. Yeah. Maybe we could get around it if we prefork the workers and
> they a
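Tom's scenario, spelled out as three concurrent sessions (the table name t is
only for illustration):

    -- Session 1 (pg_dump parent): takes and holds AccessShareLock on t.
    BEGIN;
    LOCK TABLE t IN ACCESS SHARE MODE;

    -- Session 2 (some other client): needs AccessExclusiveLock, so it queues
    -- behind session 1's lock.
    ALTER TABLE t ADD COLUMN extra integer;

    -- Session 3 (pg_dump worker): its AccessShareLock request conflicts with
    -- the already-queued exclusive request, so it waits behind session 2.
    BEGIN;
    SELECT * FROM t;

    -- Session 1 is meanwhile waiting outside the database for session 3 to
    -- finish its piece of the dump. The cycle 1 -> 3 -> 2 -> 1 contains one
    -- wait the lock manager cannot see, so the deadlock detector never fires.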
Andrew Dunstan writes:
> Umm, nobody has attributed ridiculousness to anyone. Please don't put
> words in my mouth. But I think this is a perfectly reasonable discussion
> to have. Nobody gets to come along and get the features they want
> without some sort of consensus, not me, not you, not Joachim.
On 12/02/2010 09:09 PM, Tom Lane wrote:
Andrew Dunstan writes:
On 12/02/2010 05:32 PM, Tom Lane wrote:
(I'm not actually convinced that snapshot cloning is the only problem
here; locking could be an issue too, if there are concurrent processes
trying to take locks that will conflict with pg_dump's.
Andrew Dunstan writes:
> On 12/02/2010 05:32 PM, Tom Lane wrote:
>> (I'm not actually convinced that snapshot cloning is the only problem
>> here; locking could be an issue too, if there are concurrent processes
>> trying to take locks that will conflict with pg_dump's. But the
>> snapshot issue
On Dec 2, 2010, at 8:11 PM, Andrew Dunstan wrote:
> Umm, nobody has attributed ridiculousness to anyone. Please don't put words
> in my mouth. But I think this is a perfectly reasonable discussion to have.
> Nobody gets to come along and get the features they want without some sort of
> consensus, not me, not you, not Joachim.
On 12/02/2010 07:48 PM, Robert Haas wrote:
On Thu, Dec 2, 2010 at 7:21 PM, Andrew Dunstan wrote:
In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables. I fail to understand
On Thu, Dec 2, 2010 at 7:21 PM, Andrew Dunstan wrote:
> In the past, proposals for this have always been rejected on the grounds
> that it's impossible to assure a consistent dump if different
> connections are used to read different tables. I fail to understand
> why that consideration
On 12/02/2010 07:13 PM, Robert Haas wrote:
On Thu, Dec 2, 2010 at 5:32 PM, Tom Lane wrote:
Andrew Dunstan writes:
On 12/02/2010 05:01 PM, Tom Lane wrote:
In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables.
On Thu, Dec 2, 2010 at 5:32 PM, Tom Lane wrote:
> Andrew Dunstan writes:
>> On 12/02/2010 05:01 PM, Tom Lane wrote:
>>> In the past, proposals for this have always been rejected on the grounds
>>> that it's impossible to assure a consistent dump if different
>>> connections are used to read different tables.
On 12/02/2010 05:32 PM, Tom Lane wrote:
Andrew Dunstan writes:
On 12/02/2010 05:01 PM, Tom Lane wrote:
In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables. I fail to understand
Dimitri Fontaine wrote:
> Heikki Linnakangas writes:
> > I don't see the point of the sort-by-relpages code. The order the objects
> > are dumped should be irrelevant, as long as you obey the restrictions
> > dictated by dependencies. Or is it only needed for the multiple-target-dirs
> > feature?
Andrew Dunstan writes:
> On 12/02/2010 05:01 PM, Tom Lane wrote:
>> In the past, proposals for this have always been rejected on the grounds
>> that it's impossible to assure a consistent dump if different
>> connections are used to read different tables. I fail to understand
>> why that consideration
On 12/02/2010 05:01 PM, Tom Lane wrote:
Heikki Linnakangas writes:
That's a big patch..
Not nearly big enough :-(
In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables.
Heikki Linnakangas writes:
> That's a big patch..
Not nearly big enough :-(
In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables. I fail to understand
why that consideration
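For readers new to this objection, the inconsistency is easy to produce with
two dump connections and one concurrent writer (tables a and b are illustrative):

    -- Connection 1 starts dumping table a:
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT * FROM a;      -- first query takes snapshot S1; still sees row 1 in a

    -- A concurrent client then moves that row:
    BEGIN; DELETE FROM a WHERE id = 1; INSERT INTO b VALUES (1); COMMIT;

    -- Connection 2, started a moment later, dumps table b:
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT * FROM b;      -- takes its own, later snapshot S2; sees row 1 in b

    -- The combined dump contains row 1 in both a and b, a state the database
    -- was never actually in. Sharing one snapshot across all dump connections
    -- is what rules this out.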
On Thu, Dec 2, 2010 at 12:56 PM, Josh Berkus wrote:
> Now, if only I could think of some way to write a parallel dump to a set of
> pipes, I'd be in heaven.
What exactly are you trying to accomplish with the pipes?
Joachim
>> Now, if only I could think of some way to write a parallel dump to a
>> set of pipes, I'd be in heaven.
>
> The only way I can see that working sanely would be to have a program
> gathering stuff at the other end of the pipes, and ensuring it was all
> coherent. That would be a huge growth in
On 12/02/2010 12:56 PM, Josh Berkus wrote:
On 12/02/2010 05:50 AM, Dimitri Fontaine wrote:
So the use case of parallel dump in a bunch of user-given locations
would be to use different mount points (disk subsystems) at the same
time. Not sure how relevant it is.
I think it will complicate this feature unnecessarily for 9.1.
On 12/02/2010 05:50 AM, Dimitri Fontaine wrote:
So the use case of parallel dump in a bunch of user-given locations
would be to use different mount points (disk subsystems) at the same
time. Not sure how relevant it is.
I think it will complicate this feature unnecessarily for 9.1.
Personally,
Joachim Wieland writes:
> A guy called Dimitri Fontaine actually proposed the
> several-directories feature here and other people liked the idea.
Hehe :)
Reading that now, it could be that I didn't know at the time that, given
a powerful enough disk subsystem, there's no way to saturate it with only one
On Thu, Dec 2, 2010 at 6:19 AM, Heikki Linnakangas wrote:
> I don't see the point of the sort-by-relpages code. The order the objects
> are dumped should be irrelevant, as long as you obey the restrictions
> dictated by dependencies. Or is it only needed for the multiple-target-dirs
> feature? Frankly I don't see the point of
Heikki Linnakangas writes:
> I don't see the point of the sort-by-relpages code. The order the objects
> are dumped should be irrelevant, as long as you obey the restrictions
> dictated by dependencies. Or is it only needed for the multiple-target-dirs
> feature? Frankly I don't see the point of t
On 02.12.2010 07:39, Joachim Wieland wrote:
On Sun, Nov 14, 2010 at 6:52 PM, Joachim Wieland wrote:
You would add a regular parallel dump with
$ pg_dump -j 4 -Fd -f out.dir dbname
So this is an updated series of patches for my parallel pg_dump WIP
patch. Most importantly it now runs on Windows.