On Tue, Sep 26, 2017 at 10:36 AM, Vaishnavi Prabakaran
wrote:
> Hi,
>
> On Wed, Sep 13, 2017 at 9:59 AM, Daniel Gustafsson wrote:
>>
>>
>> I’m not entirely sure why this was flagged as “Waiting for Author” by the
>> automatic run, the patch applies
Hi,
On Wed, Sep 13, 2017 at 9:59 AM, Daniel Gustafsson wrote:
>
> I’m not entirely sure why this was flagged as “Waiting for Author” by the
> automatic run, the patch applies for me and builds so resetting back to
> “Needs
> review”.
>
>
This patch applies and builds cleanly and
> On 30 May 2017, at 19:55, Peter Eisentraut
> wrote:
>
> On 5/29/17 22:56, Noah Misch wrote:
>> On Fri, May 19, 2017 at 11:33:48AM +0900, Masahiko Sawada wrote:
>>> On Wed, Apr 12, 2017 at 5:31 AM, Simon Riggs wrote:
Looks like a
On 09/06/17 18:33, Omar Kilani wrote:
> Is there anything people using float datetimes can do, short of a
> pg_dumpall | pg_restore, to make the update less painful?
>
> We have several TB of data still using float datetimes and I'm trying
> to figure out how we can move to 10 (currently on 9.6.x)
Omar Kilani writes:
> Is there anything people using float datetimes can do, short of a
> pg_dumpall | pg_restore, to make the update less painful?
Um, not really. You may be stuck on 9.6 until you can spare the effort
to convert. The physical representations of timestamps
Hi,
I know I'm 7 months late to this, but only just read the beta 4 release notes.
Is there anything people using float datetimes can do, short of a
pg_dumpall | pg_restore, to make the update less painful?
We have several TB of data still using float datetimes and I'm trying
to figure out how we
On 1 June 2017 at 09:23, Andres Freund wrote:
> Even if we decide this is necessary, I *strongly* suggest trying to get
> the existing standby decoding etc wrapped up before starting something
> nontrival afresh.
Speaking of such, I had a thought about how to sync logical
On 2017-05-31 21:27:56 -0400, Stephen Frost wrote:
> Craig,
>
> * Craig Ringer (cr...@2ndquadrant.com) wrote:
> > TL;DR: replication origins track LSN without timeline. This is
> > ambiguous when physical failover is present since /
> > can now represent more than one state due to
On 1 June 2017 at 09:36, Andres Freund wrote:
> On 2017-05-31 21:33:26 -0400, Stephen Frost wrote:
>> > This only starts becoming an issue once logical replication slots can
>> > exist on replicas and be maintained to follow the master's slot state.
>> > Which is incomplete in
Craig,
* Craig Ringer (cr...@2ndquadrant.com) wrote:
> On 1 June 2017 at 09:27, Stephen Frost wrote:
> > * Craig Ringer (cr...@2ndquadrant.com) wrote:
> >> TL;DR: replication origins track LSN without timeline. This is
> >> ambiguous when physical failover is present since
Andres,
* Andres Freund (and...@anarazel.de) wrote:
> On 2017-05-31 21:33:26 -0400, Stephen Frost wrote:
> > > This only starts becoming an issue once logical replication slots can
> > > exist on replicas and be maintained to follow the master's slot state.
> > > Which is incomplete in Pg10 (not
On 2017-05-31 21:33:26 -0400, Stephen Frost wrote:
> > This only starts becoming an issue once logical replication slots can
> > exist on replicas and be maintained to follow the master's slot state.
> > Which is incomplete in Pg10 (not exposed to users) but I plan to
> > finish getting in for
On 1 June 2017 at 09:23, Andres Freund wrote:
> Hi,
>
> On 2017-06-01 09:12:04 +0800, Craig Ringer wrote:
>> TL;DR: replication origins track LSN without timeline. This is
>> ambiguous when physical failover is present since /
>> can now represent more than one
Andres,
* Andres Freund (and...@anarazel.de) wrote:
> On 2017-05-31 21:27:56 -0400, Stephen Frost wrote:
> > Uh, TL;DR, wow? Why isn't this something which needs to be addressed
> > before PG10 can be released?
>
> Huh? Slots aren't present on replicas, ergo there's no way for the whole
> issue
On 1 June 2017 at 09:27, Stephen Frost wrote:
> Craig,
>
> * Craig Ringer (cr...@2ndquadrant.com) wrote:
>> TL;DR: replication origins track LSN without timeline. This is
>> ambiguous when physical failover is present since /
>> can now represent more than one
Craig,
* Craig Ringer (cr...@2ndquadrant.com) wrote:
> TL;DR: replication origins track LSN without timeline. This is
> ambiguous when physical failover is present since /
> can now represent more than one state due to timeline forks with
> promotions. Replication origins should
Hi,
On 2017-06-01 09:12:04 +0800, Craig Ringer wrote:
> TL;DR: replication origins track LSN without timeline. This is
> ambiguous when physical failover is present since /
> can now represent more than one state due to timeline forks with
> promotions. Replication origins should
Hi all
TL;DR: replication origins track LSN without timeline. This is
ambiguous when physical failover is present since /
can now represent more than one state due to timeline forks with
promotions. Replication origins should track timelines so we can tell
the difference, I
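To make the ambiguity concrete (a sketch only, not PostgreSQL's actual replication-origin code, and the `timeline` field is the hypothetical addition being proposed): after a promotion, timelines 1 and 2 share history up to the fork point, so a bare LSN can name two different states, while a (timeline, LSN) pair cannot.

```python
# Sketch: why a bare LSN is ambiguous after a timeline fork.
# OriginState.timeline is hypothetical; origins today record only the LSN.
from typing import NamedTuple

class OriginState(NamedTuple):
    timeline: int
    lsn: int

fork_lsn = 0x1_6B37_4D50          # arbitrary example LSN at the fork point

old_primary = OriginState(timeline=1, lsn=fork_lsn)
promoted    = OriginState(timeline=2, lsn=fork_lsn)  # diverged history

assert old_primary.lsn == promoted.lsn   # bare LSN: indistinguishable
assert old_primary != promoted           # (timeline, LSN): unambiguous
```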
On 5/29/17 22:56, Noah Misch wrote:
> On Fri, May 19, 2017 at 11:33:48AM +0900, Masahiko Sawada wrote:
>> On Wed, Apr 12, 2017 at 5:31 AM, Simon Riggs wrote:
>>> Looks like a bug that we should fix in PG10, with backpatch to 9.4 (or
>>> as far as it goes).
>>>
>>>
On Fri, May 19, 2017 at 11:33:48AM +0900, Masahiko Sawada wrote:
> On Wed, Apr 12, 2017 at 5:31 AM, Simon Riggs wrote:
> > On 22 March 2017 at 02:50, Masahiko Sawada wrote:
> >
> >> When using logical replication, I ran into a situation where the
>
On Wed, Apr 12, 2017 at 5:31 AM, Simon Riggs wrote:
> On 22 March 2017 at 02:50, Masahiko Sawada wrote:
>
>> When using logical replication, I ran into a situation where the
>> pg_stat_replication.state is not updated until a WAL record is sent
>>
On 22 March 2017 at 02:50, Masahiko Sawada wrote:
> When using logical replication, I ran into a situation where the
> pg_stat_replication.state is not updated until a WAL record is sent
> after startup. For example, I set up logical replication with 2
> subscribers
Hi all,
When using logical replication, I ran into a situation where the
pg_stat_replication.state is not updated until a WAL record is sent
after startup. For example, I set up logical replication with 2
subscribers and restarted the publisher server, but I see the following
status for a while
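A quick way to observe the symptom described above (column names as in the pg_stat_replication view; the comment describes what one might see, not a captured log):

```sql
-- On the publisher, shortly after restarting it:
SELECT pid, application_name, state
FROM pg_stat_replication;
-- state can linger in 'startup'/'catchup' instead of 'streaming'
-- until the next WAL record is sent to the walsender
```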
On 2017-02-27 17:00:23 -0800, Joshua D. Drake wrote:
> On 02/22/2017 02:45 PM, Tom Lane wrote:
> > Andres Freund writes:
> > > On 2017-02-22 08:43:28 -0500, Tom Lane wrote:
> > > > (To be concrete, I'm suggesting dropping --disable-integer-datetimes
> > > > in HEAD, and just
On 02/22/2017 02:45 PM, Tom Lane wrote:
Andres Freund writes:
On 2017-02-22 08:43:28 -0500, Tom Lane wrote:
(To be concrete, I'm suggesting dropping --disable-integer-datetimes
in HEAD, and just agreeing that in the back branches, use of replication
protocol with
On Mon, Feb 20, 2017 at 09:07:33AM -0500, Tom Lane wrote:
> The question to be asked is whether there is still anybody out there
> using float timestamps. I'm starting to get dubious about it myself.
> Certainly, no packager that I'm aware of has shipped a float-timestamp
> build since we
Andres Freund writes:
> On 2017-02-22 08:43:28 -0500, Tom Lane wrote:
>> (To be concrete, I'm suggesting dropping --disable-integer-datetimes
>> in HEAD, and just agreeing that in the back branches, use of replication
>> protocol with float-timestamp servers is not supported
On Mon, Feb 20, 2017 at 11:58:12AM +0100, Petr Jelinek wrote:
> On 20/02/17 08:03, Andres Freund wrote:
> > On 2017-02-19 10:49:29 -0500, Tom Lane wrote:
> >> Robert Haas writes:
> >>> On Sun, Feb 19, 2017 at 3:31 AM, Tom Lane wrote:
> Thoughts?
Andrew Dunstan writes:
> On 02/22/2017 10:21 AM, Jim Nasby wrote:
>> Only in the catalog though, not the datums, right? I would think you
>> could just change the oid in the catalog the same as you would for a
>> table column.
> No, in the datums.
Yeah, I don't
On 02/22/2017 10:21 AM, Jim Nasby wrote:
> On 2/22/17 9:12 AM, Andres Freund wrote:
>>> That would allow an in-place upgrade of
>>> a really large cluster. A user would still need to modify their code
>>> to use
>>> the new type.
>>>
>>> Put another way: add ability for pg_upgrade to change the
On 2/22/17 9:12 AM, Andres Freund wrote:
That would allow an in-place upgrade of
a really large cluster. A user would still need to modify their code to use
the new type.
Put another way: add ability for pg_upgrade to change the type of a field.
There might be other uses for that as well.
Type
On 2017-02-22 09:06:38 -0600, Jim Nasby wrote:
> On 2/22/17 7:56 AM, Andres Freund wrote:
> > It sounded more like Jim suggested a full blown SQL type, given that he
> > replied to my concern about the possible need for a deprecation period
> > due to pg_upgrade concerns. To be useful for that,
On 2/22/17 7:56 AM, Andres Freund wrote:
On 2017-02-22 08:43:28 -0500, Tom Lane wrote:
Andres Freund writes:
On 2017-02-22 00:10:35 -0600, Jim Nasby wrote:
I wonder if a separate "floatstamp" data type might fit the bill there. It
might not be completely seamless, but it
Stephen Frost writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> While I'm generally not one to vote for dropping backwards-compatibility
>> features, I have to say that I find #4 the most attractive of these
>> options. It would result in getting rid of boatloads of
Tom, all,
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> While I'm generally not one to vote for dropping backwards-compatibility
> features, I have to say that I find #4 the most attractive of these
> options. It would result in getting rid of boatloads of under-tested
> code, whereas #2 would really
On 2017-02-22 08:43:28 -0500, Tom Lane wrote:
> Andres Freund writes:
> > On 2017-02-22 00:10:35 -0600, Jim Nasby wrote:
> >> I wonder if a separate "floatstamp" data type might fit the bill there. It
> >> might not be completely seamless, but it would be binary compatible.
>
Andres Freund writes:
> On 2017-02-22 00:10:35 -0600, Jim Nasby wrote:
>> I wonder if a separate "floatstamp" data type might fit the bill there. It
>> might not be completely seamless, but it would be binary compatible.
> I don't really see what'd that solve.
Seems to me
On 2017-02-22 00:10:35 -0600, Jim Nasby wrote:
> On 2/20/17 5:04 AM, Andres Freund wrote:
> > On 2017-02-20 11:58:12 +0100, Petr Jelinek wrote:
> > > That being said, I did wonder myself if we should just deprecate float
> > > timestamps as well.
> >
> > I think we need a proper deprecation
On 2/20/17 5:04 AM, Andres Freund wrote:
On 2017-02-20 11:58:12 +0100, Petr Jelinek wrote:
That being said, I did wonder myself if we should just deprecate float
timestamps as well.
I think we need a proper deprecation period for that, given that the
conversion away will be painful for
On 2/21/17 4:52 PM, James Cloos wrote:
"TL" == Tom Lane writes:
TL> The question to be asked is whether there is still anybody out there
TL> using float timestamps.
Gentoo's ebuild includes:
$(use_enable !pg_legacytimestamp integer-datetimes) \
FWIW, last time I
> "TL" == Tom Lane writes:
TL> The question to be asked is whether there is still anybody out there
TL> using float timestamps.
Gentoo's ebuild includes:
$(use_enable !pg_legacytimestamp integer-datetimes) \
meaning that by default --enable-integer-datetimes is
Robert Haas writes:
> On Mon, Feb 20, 2017 at 7:37 PM, Tom Lane wrote:
>> The question to be asked is whether there is still anybody out there
>> using float timestamps. I'm starting to get dubious about it myself.
> I'm wondering if it has any effect
On Mon, Feb 20, 2017 at 7:37 PM, Tom Lane wrote:
> The question to be asked is whether there is still anybody out there
> using float timestamps. I'm starting to get dubious about it myself.
> Certainly, no packager that I'm aware of has shipped a float-timestamp
> build
Petr Jelinek writes:
> It's definitely not hard, we already have
> IntegerTimestampToTimestampTz() which does the opposite conversion anyway.
It's not the functions that are hard, it's making sure that you have used
them in the correct places, and declared relevant
Andres Freund writes:
> On 2017-02-20 11:58:12 +0100, Petr Jelinek wrote:
>> That being said, I did wonder myself if we should just deprecate float
>> timestamps as well.
> I think we need a proper deprecation period for that, given that the
> conversion away will be painful
On 20/02/17 12:04, Andres Freund wrote:
> On 2017-02-20 11:58:12 +0100, Petr Jelinek wrote:
>> That being said, I did wonder myself if we should just deprecate float
>> timestamps as well.
>
> I think we need a proper deprecation period for that, given that the
> conversion away will be painful
On 2017-02-20 11:58:12 +0100, Petr Jelinek wrote:
> That being said, I did wonder myself if we should just deprecate float
> timestamps as well.
I think we need a proper deprecation period for that, given that the
conversion away will be painful for pg_upgrade using people with big
clusters. So
On 20/02/17 08:03, Andres Freund wrote:
> On 2017-02-19 10:49:29 -0500, Tom Lane wrote:
>> Robert Haas writes:
>>> On Sun, Feb 19, 2017 at 3:31 AM, Tom Lane wrote:
Thoughts? Should we double down on trying to make this work according
to the
On 2017-02-19 10:49:29 -0500, Tom Lane wrote:
> Robert Haas writes:
> > On Sun, Feb 19, 2017 at 3:31 AM, Tom Lane wrote:
> >> Thoughts? Should we double down on trying to make this work according
> >> to the "all integer timestamps" protocol specs, or
On Sun, Feb 19, 2017 at 9:19 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Sun, Feb 19, 2017 at 3:31 AM, Tom Lane wrote:
>>> Thoughts? Should we double down on trying to make this work according
>>> to the "all integer
Robert Haas writes:
> On Sun, Feb 19, 2017 at 3:31 AM, Tom Lane wrote:
>> Thoughts? Should we double down on trying to make this work according
>> to the "all integer timestamps" protocol specs, or cut our losses and
>> change the specs?
> I vote for
On Sun, Feb 19, 2017 at 3:31 AM, Tom Lane wrote:
> Thoughts? Should we double down on trying to make this work according
> to the "all integer timestamps" protocol specs, or cut our losses and
> change the specs?
I vote for doubling down. It's bad enough that we have so
Both the streaming replication and logical replication areas of the code
are, approximately, utterly broken when !HAVE_INT64_TIMESTAMPS. (The fact
that "make check-world" passes anyway is an indictment of the quality of
the regression tests.)
I started poking around in this area after Thomas
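Background on why the float representation is troublesome for a wire protocol: a float-timestamp build stores seconds since 2000-01-01 in a C double, while an integer build stores microseconds since 2000-01-01 in an int64. Microsecond values far from the epoch cannot round-trip through the double form. A self-contained sketch (pure Python, no PostgreSQL code; the example dates are illustrative):

```python
# Integer timestamps: int64 microseconds since 2000-01-01, always exact.
# Float timestamps: double seconds since 2000-01-01; once the magnitude is
# large enough that one ulp exceeds 1 microsecond, round trips fail.
def float_roundtrip(us: int) -> int:
    seconds = us / 1e6           # what a float-timestamp build would store
    return round(seconds * 1e6)  # back to integer microseconds

near = 536_000_000_000_001        # ~year 2016: survives the round trip
far = 17_179_869_184_000_001      # ~year 2544: double ulp > 1 microsecond

assert float_roundtrip(near) == near
assert float_roundtrip(far) != far
```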
On 13 January 2017 at 10:17, Ants Aasma wrote:
> On 5 Jan 2017 2:54 a.m., "Craig Ringer" wrote:
>> Ants, do you think you'll have a chance to convert your shell script
>> test into a TAP test in src/test/recovery?
>>
>> Simon has said he would like to
On Wed, Jan 11, 2017 at 2:09 AM, Michael Paquier
wrote:
> On Wed, Jan 11, 2017 at 10:06 AM, David Steele
> wrote:
> > On 1/10/17 3:06 PM, Stephen Frost wrote:
> >> * Magnus Hagander (mag...@hagander.net) wrote:
> >>> On Tue, Jan 10, 2017 at 8:03
On 5 Jan 2017 2:54 a.m., "Craig Ringer" wrote:
> Ants, do you think you'll have a chance to convert your shell script
> test into a TAP test in src/test/recovery?
>
> Simon has said he would like to commit this fix. I'd personally be
> happier if it had a test to go with
On Wed, Jan 11, 2017 at 10:06 AM, David Steele wrote:
> On 1/10/17 3:06 PM, Stephen Frost wrote:
>> * Magnus Hagander (mag...@hagander.net) wrote:
>>> On Tue, Jan 10, 2017 at 8:03 PM, Robert Haas wrote:
>
I may be outvoted, but I'm still not in
On 1/10/17 3:06 PM, Stephen Frost wrote:
> * Magnus Hagander (mag...@hagander.net) wrote:
>> On Tue, Jan 10, 2017 at 8:03 PM, Robert Haas wrote:
>>> I may be outvoted, but I'm still not in favor of changing the default
>>> wal_level. That caters only to people who lack
Greetings,
* Magnus Hagander (mag...@hagander.net) wrote:
> On Tue, Jan 10, 2017 at 8:03 PM, Robert Haas wrote:
>
> > On Mon, Jan 9, 2017 at 11:02 AM, Peter Eisentraut
> > wrote:
> > > On 1/9/17 7:44 AM, Magnus Hagander wrote:
> > >> So
On Tue, Jan 10, 2017 at 8:03 PM, Robert Haas wrote:
> On Mon, Jan 9, 2017 at 11:02 AM, Peter Eisentraut
> wrote:
> > On 1/9/17 7:44 AM, Magnus Hagander wrote:
> >> So based on that, I suggest we go ahead and make the change to make both
>
On Mon, Jan 9, 2017 at 11:02 AM, Peter Eisentraut
wrote:
> On 1/9/17 7:44 AM, Magnus Hagander wrote:
>> So based on that, I suggest we go ahead and make the change to make both
>> the values 10 by default. And that we do that now, because that lets us
>> get it
On 1/9/17 7:44 AM, Magnus Hagander wrote:
> So based on that, I suggest we go ahead and make the change to make both
> the values 10 by default. And that we do that now, because that lets us
> get it out through more testing on different platforms, so that we catch
> issues earlier on if they do
On Sun, Jan 8, 2017 at 2:19 AM, Jim Nasby wrote:
> On 1/5/17 2:50 PM, Tomas Vondra wrote:
>
>> Ultimately, the question is whether the number of people running into
>> "Hey, I can't take pg_basebackup or setup a standby with the default
>> config!" is higher or lower
On Sat, Jan 7, 2017 at 7:57 PM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
> On 1/7/17 6:23 AM, Magnus Hagander wrote:
> > In the build farm, I have found 6 critters that do not end up with
> the
> > 100/128MB setting: sidewinder, curculio, coypu, brolga, lorikeet,
> >
On 1/5/17 2:50 PM, Tomas Vondra wrote:
Ultimately, the question is whether the number of people running into
"Hey, I can't take pg_basebackup or setup a standby with the default
config!" is higher or lower than number of people running into "Hey,
CREATE TABLE + COPY is slower now!"
I'm betting
On 1/7/17 6:23 AM, Magnus Hagander wrote:
> In the build farm, I have found 6 critters that do not end up with the
> 100/128MB setting: sidewinder, curculio, coypu, brolga, lorikeet,
> opossum. I wonder what limitations initdb is bumping against.
>
>
> Since you lookeda t the data
On Sat, Jan 7, 2017 at 1:27 AM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
> On 1/5/17 12:01 PM, Andres Freund wrote:
> > On 2017-01-05 08:38:32 -0500, Peter Eisentraut wrote:
> >> I also suggest making the defaults for both 20 instead of 10. That
> >> leaves enough room that
On 1/5/17 12:01 PM, Andres Freund wrote:
> On 2017-01-05 08:38:32 -0500, Peter Eisentraut wrote:
>> I also suggest making the defaults for both 20 instead of 10. That
>> leaves enough room that almost nobody ever has to change them, whereas
>> 10 can be a bit tight for some not-outrageous
On 1/5/17 4:56 PM, Michael Banck wrote:
>> You can't actually change the other two without changing wal_level.
> That actually goes both ways: I recently saw a server not start because we
> were experimenting with temporarily setting wal_level to minimal for
> initial bulk loading, but did not
On Mon, Jan 02, 2017 at 10:21:41AM +0100, Magnus Hagander wrote:
> On Mon, Jan 2, 2017 at 10:17 AM, Simon Riggs wrote:
> > On 31 December 2016 at 15:00, Magnus Hagander wrote:
> > > max_wal_senders=10
> > > max_replication_slots=20
[...]
> > >
On 01/05/2017 05:37 PM, Stephen Frost wrote:
Tomas,
* Tomas Vondra (tomas.von...@2ndquadrant.com) wrote:
On 01/05/2017 02:23 PM, Magnus Hagander wrote:
It's easy enough to construct a benchmark specifically to show the
difference, but of any actual "normal workload" for it. Typically the
On 5 Jan 2017 2:54 a.m., "Craig Ringer" wrote:
On 2 January 2017 at 22:24, Craig Ringer wrote:
>
>
> On 2 Jan. 2017 20:20, "Simon Riggs" wrote:
>
> On 21 December 2016 at 13:23, Simon Riggs wrote:
>
>>
On 2017-01-05 09:12:49 -0800, Andres Freund wrote:
> On 2017-01-05 18:08:36 +0100, Magnus Hagander wrote:
> > On Thu, Jan 5, 2017 at 6:01 PM, Andres Freund wrote:
> >
> > > On 2017-01-05 08:38:32 -0500, Peter Eisentraut wrote:
> > > > I also suggest making the defaults for
On 2017-01-05 18:08:36 +0100, Magnus Hagander wrote:
> On Thu, Jan 5, 2017 at 6:01 PM, Andres Freund wrote:
>
> > On 2017-01-05 08:38:32 -0500, Peter Eisentraut wrote:
> > > I also suggest making the defaults for both 20 instead of 10. That
> > > leaves enough room that
On Thu, Jan 5, 2017 at 6:01 PM, Andres Freund wrote:
> On 2017-01-05 08:38:32 -0500, Peter Eisentraut wrote:
> > I also suggest making the defaults for both 20 instead of 10. That
> > leaves enough room that almost nobody ever has to change them, whereas
> > 10 can be a bit
On 2017-01-05 08:38:32 -0500, Peter Eisentraut wrote:
> I also suggest making the defaults for both 20 instead of 10. That
> leaves enough room that almost nobody ever has to change them, whereas
> 10 can be a bit tight for some not-outrageous installations (8 standbys
> plus backup?).
I'm
Tomas,
* Tomas Vondra (tomas.von...@2ndquadrant.com) wrote:
> On 01/05/2017 02:23 PM, Magnus Hagander wrote:
> >It's easy enough to construct a benchmark specifically to show the
> >difference, but of any actual "normal workload" for it. Typically the
> >optimization applies to things like bulk
On 01/05/2017 02:23 PM, Magnus Hagander wrote:
On Thu, Jan 5, 2017 at 12:44 AM, Tomas Vondra
> wrote:
On 01/03/2017 11:56 PM, Tomas Vondra wrote:
Hi,
...
I'll push results for larger ones once those
On 1/4/17 2:44 PM, Peter Eisentraut wrote:
> On 1/4/17 9:46 AM, Magnus Hagander wrote:
>> How about we default max_replication_slots to -1, which means to use the
>> same value as max_wal_senders?
>
>> But you don't necessarily want to adjust them together, do you? They are
>> both capped
On Thu, Jan 5, 2017 at 12:44 AM, Tomas Vondra
wrote:
> On 01/03/2017 11:56 PM, Tomas Vondra wrote:
>
>> Hi,
>>
>> ...
>
>> I'll push results for larger ones once those tests complete (possibly
>> tomorrow).
>>
>>
> I just pushed additional results (from the
On 01/03/2017 11:56 PM, Tomas Vondra wrote:
Hi,
...
I'll push results for larger ones once those tests complete (possibly
tomorrow).
I just pushed additional results (from the additional scales) to the git
repositories. On the larger (16/32-cores) machine with 2x e5-2620, the
results
On 2 January 2017 at 22:24, Craig Ringer wrote:
>
>
> On 2 Jan. 2017 20:20, "Simon Riggs" wrote:
>
> On 21 December 2016 at 13:23, Simon Riggs wrote:
>
>> Fix it up and I'll commit. Thanks for the report.
>
> I was hoping for
On 3 January 2017 at 12:34, Michael Paquier wrote:
> On Mon, Jan 2, 2017 at 10:55 PM, Simon Riggs wrote:
>> In the hope of making things better in 10.0, I remove my objection. If
>> people want to use wal_level = minimal they can restart their
On 1/4/17 9:46 AM, Magnus Hagander wrote:
> How about we default max_replication_slots to -1, which means to use the
> same value as max_wal_senders?
> But you don't necessarily want to adjust them together, do you? They are
> both capped by max_connections, but I don't think they have
On 12/31/16 10:00 AM, Magnus Hagander wrote:
> max_wal_senders=10
> max_replication_slots=20
How about we default max_replication_slots to -1, which means to use the
same value as max_wal_senders?
I think this would address the needs of 99% of users. If we do like you
suggest, there are going
On Wed, Jan 4, 2017 at 3:43 PM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
> On 12/31/16 10:00 AM, Magnus Hagander wrote:
> > max_wal_senders=10
> > max_replication_slots=20
>
> How about we default max_replication_slots to -1, which means to use the
> same value as
On 01/03/2017 01:34 PM, Michael Paquier wrote:
On Mon, Jan 2, 2017 at 10:55 PM, Simon Riggs wrote:
In the hope of making things better in 10.0, I remove my objection.
If people want to use wal_level = minimal they can restart their
server and they can find that out in
Hi,
On 12/31/2016 04:00 PM, Magnus Hagander wrote:
Cycling back to this topic again, but this time at the beginning of a CF.
Here's an actual patch to change:
wal_level=replica
max_wal_senders=10
max_replication_slots=20
Based on feedback from last year
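For context, the proposal above amounts to the following change of postgresql.conf defaults (left column is the 9.6 defaults; an illustrative excerpt, not the patch itself):

```
# 9.6 default                   # proposed PG10 default
wal_level = minimal             wal_level = replica
max_wal_senders = 0             max_wal_senders = 10
max_replication_slots = 0       max_replication_slots = 20
```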
On Mon, Jan 2, 2017 at 10:55 PM, Simon Riggs wrote:
> In the hope of making things better in 10.0, I remove my objection. If
> people want to use wal_level = minimal they can restart their server
> and they can find that out in the release notes.
>
> Should we set wal_level
On 2 Jan. 2017 20:20, "Simon Riggs" wrote:
On 21 December 2016 at 13:23, Simon Riggs wrote:
> Fix it up and I'll commit. Thanks for the report.
I was hoping for some more effort from Ants to correct this.
I'll commit Craig's new tests for hs
On 2 January 2017 at 09:39, Magnus Hagander wrote:
> The conclusion has been that our defaults should really allow people to take
> backups of their systems, and they currently don't.
>
> Making things run faster is tuning, and people should expect to do that if
> they need
On 21 December 2016 at 13:23, Simon Riggs wrote:
> Fix it up and I'll commit. Thanks for the report.
I was hoping for some more effort from Ants to correct this.
I'll commit Craig's new tests for hs feedback before this, so it can
go in with a Tap test, so I imagine
On 2 January 2017 at 09:48, Simon Riggs wrote:
> I'm willing to assist in a project to allow changing wal_level online
> in this release. Please let's follow that path.
wal_level looks like one of the easier ones to change without a server restart
There are actions to
On 2017-01-02 10:31:28 +, Simon Riggs wrote:
> We must listen to feedback, not just try to blast through it.
Not agreeing with your priorities isn't "blasting through feedback".
On 2 January 2017 at 10:13, Andres Freund wrote:
> On 2017-01-02 11:05:05 +0100, Magnus Hagander wrote:
>> My claim here is that a lot *fewer* people have come to expect this
>> performance optimization, than would (quite reasonably) expect that backups
>> should work on a
On 2017-01-02 11:05:05 +0100, Magnus Hagander wrote:
> My claim here is that a lot *fewer* people have come to expect this
> performance optimization, than would (quite reasonably) expect that backups
> should work on a system without taking it down for restart to reconfigure
> it to support that.
On Mon, Jan 2, 2017 at 10:48 AM, Simon Riggs wrote:
> On 2 January 2017 at 09:39, Magnus Hagander wrote:
>
> > Please do submit a patch for it.
>
> The way this is supposed to go is someone submits a patch and they
> receive feedback, then act on that
On 2 January 2017 at 09:39, Magnus Hagander wrote:
> Please do submit a patch for it.
The way this is supposed to go is someone submits a patch and they
receive feedback, then act on that feedback. If I was able to get away
with deflecting all review comments with a simple
On Mon, Jan 2, 2017 at 10:32 AM, Simon Riggs wrote:
> On 2 January 2017 at 09:21, Magnus Hagander wrote:
> >
> >
> > On Mon, Jan 2, 2017 at 10:17 AM, Simon Riggs
> wrote:
> >>
> >> On 31 December 2016 at 15:00, Magnus Hagander
On 2 January 2017 at 09:21, Magnus Hagander wrote:
>
>
> On Mon, Jan 2, 2017 at 10:17 AM, Simon Riggs wrote:
>>
>> On 31 December 2016 at 15:00, Magnus Hagander wrote:
>> > Cycling back to this topic again, but this time at the