Re: [GENERAL] Joining 16 tables seems slow
hi

2017-09-12 8:45 GMT+02:00 Frank Millman:

> Hi all
>
> I am using 9.4.4 on Fedora 22.
>
> I am experimenting with optimising a SQL statement. One version uses 4
> LEFT JOINs and a 5-way CASE statement in the body. The second moves the
> filtering into the JOIN section, and I end up with 16 LEFT JOINs and no
> CASE statements.
>
> My test involves selecting a single row. Both versions work. The first
> version takes 0.06 seconds. The second takes 0.23 seconds. On further
> experimentation, the time for the second one seems to be taken in setting
> up the joins, because if I omit selecting anything from the joined tables,
> it still takes 0.23 seconds.
>
> Exactly the same exercise on SQL Server results in 0.06 seconds for both
> versions.
>
> I realise that, if I were selecting a large number of rows, 0.23 seconds
> is trivial and the overall result could be different. But still, it seems odd.
>
> Is this normal, or should I investigate further?

Please send the result of EXPLAIN ANALYZE. You can also experimentally try increasing from_collapse_limit to some higher number -- 14, maybe 16.

regards
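The suggestion above can be tried per-session before changing anything globally; a minimal sketch (the query itself is the poster's and is not shown here):

```sql
-- Raise the planner's join-reordering limits for this session only.
-- With 16 LEFT JOINs, the default from_collapse_limit and
-- join_collapse_limit of 8 stop the planner from considering the
-- full join-order search space, which can produce a slower plan.
SET from_collapse_limit = 16;
SET join_collapse_limit = 16;

-- Then re-run the slow statement under EXPLAIN ANALYZE and compare
-- the plan and timing against the 4-join version:
-- EXPLAIN ANALYZE <the 16-join SELECT>;
```

Note that higher collapse limits also increase planning time, so this is a trade-off worth measuring rather than a pure win.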
[GENERAL] Joining 16 tables seems slow
Hi all

I am using 9.4.4 on Fedora 22.

I am experimenting with optimising a SQL statement. One version uses 4 LEFT JOINs and a 5-way CASE statement in the body. The second moves the filtering into the JOIN section, and I end up with 16 LEFT JOINs and no CASE statements.

My test involves selecting a single row. Both versions work. The first version takes 0.06 seconds. The second takes 0.23 seconds. On further experimentation, the time for the second one seems to be taken in setting up the joins, because if I omit selecting anything from the joined tables, it still takes 0.23 seconds.

Exactly the same exercise on SQL Server results in 0.06 seconds for both versions.

I realise that, if I were selecting a large number of rows, 0.23 seconds is trivial and the overall result could be different. But still, it seems odd.

Is this normal, or should I investigate further?

Frank Millman
[GENERAL] Perl script is killed by SIGPIPE
Dear All,

We have a Perl script that opens a DB connection and performs an INSERT operation. When a duplicate entry comes in on the query, Postgres forcefully kills the process itself, and an "unexpected EOF on client connection" error appears in the Postgres log. This issue does not occur every time.

We have found that the child script is killed by signal 13 (SIGPIPE). When a duplicate-key-violation error occurs, the script is killed, but not every time.

Thanks in advance.

Yogesh
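One way to sidestep the duplicate-key error entirely is to make the INSERT tolerate conflicts on the server side, so the client never receives the error at all. A sketch, assuming PostgreSQL 9.5 or later (the poster's version is not stated) and a hypothetical table:

```sql
-- Hypothetical table; its primary key is what raises the
-- "duplicate key value violates unique constraint" error.
CREATE TABLE events (
    event_id integer PRIMARY KEY,
    payload  text
);

-- Instead of a plain INSERT that errors on a duplicate key,
-- silently skip the conflicting row (available from 9.5):
INSERT INTO events (event_id, payload)
VALUES (1, 'first')
ON CONFLICT (event_id) DO NOTHING;
```

Independently of this, SIGPIPE on the client usually means the script wrote to a connection the server had already closed, so it is also worth checking the Perl side's DBI error handling (e.g. trapping the duplicate-key error instead of letting the handle die mid-write).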
Re: [GENERAL] contrecord is requested
On Tue, Sep 12, 2017 at 5:27 AM, Scott Marlowe wrote:
> So we have a db we're trying to rewind and get synced to the master.
> pg_rewind says it doesn't need rewinding, and when we try to bring it
> up, it gets this error:
>
> "contrecord is requested by 2E7/4028"
>
> And fails to get back up.

The consistency recovery point defined by pg_rewind was not reached then.

> Is this a known issue?

Never heard of this one.

> Possible bug in the continuation record code?

That looks like a possibility. This complaint is about the fact that the XLOG page header cannot be found where it should be. You may want to check your WAL segments.
--
Michael

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] pg_ident mapping Kerberos Usernames
On Sun, Sep 10, 2017 at 4:31 PM, wrote:
> GSSAPI is the authentication mechanism of choice, and it's working fine.
>
> Here is what I'm trying to accomplish:
> 'user1' == 'user1' and 'us...@a.domain.tld' == 'user1'.
>
> From reading the docs, this is done via the pg_ident.conf file, and from
> reading the logs, there is a problem with my map.
>
> Hmm... Interesting thought.
> *testing*
> It sort of works. Setting the maps below maps the users straight across:
> 'user1' == 'user1' and 'us...@a.domain.tld' == 'us...@a.domain.tld', so
> it's partially working.
>
> pg_ident.conf:
> testnet /^([0-9A-Za-z_-]+)@A\.DOMAIN\.TLD$ \1
> testnet /^([0-9A-Za-z_-]+)@A\.DOMAIN\.TLD$ \1...@a.domain.tld

I think you want this:

testnet /(.*) \1
testnet /^([0-9A-Za-z_-]+)@A\.DOMAIN\.TLD$ \1
testnet /^([0-9A-Za-z_-]+) \1...@a.domain.tld

But since your pg_hba has include_realm=1, I don't know how you are getting the realmless "system user" names in the first place, so the last line really shouldn't be necessary.

Cheers,
Jeff
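For reference, a minimal pair of config fragments showing how the map name ties pg_hba.conf to pg_ident.conf; the realm, subnet, and map name are taken from the thread, and the comments describe the intended behaviour rather than a tested setup:

```
# pg_hba.conf -- map= names the pg_ident map to apply to GSSAPI logins:
host    all    all    192.168.1.0/24    gss    include_realm=1 map=testnet krb_realm=A.DOMAIN.TLD

# pg_ident.conf -- fields are: map-name  system-username  database-username.
# A leading "/" makes the system-username a regex; \1 substitutes the
# first capture group. This strips the realm, so the Kerberos principal
# user1@A.DOMAIN.TLD is allowed to log in as database role "user1":
testnet    /^([0-9A-Za-z_-]+)@A\.DOMAIN\.TLD$    \1
```

With include_realm=1, the system user name presented to the map always carries the @REALM suffix, which is why the realm-stripping regex form is the one that matters here.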
[GENERAL] contrecord is requested
So we have a db we're trying to rewind and get synced to the master. pg_rewind says it doesn't need rewinding, and when we try to bring it up, it gets this error:

"contrecord is requested by 2E7/4028"

And fails to get back up. Is this a known issue? Possible bug in the continuation record code? The only references I can find for it are in the xlogreader code.
Re: [GENERAL] Needing verification on instructions for streaming replication
I see. Thanks Jeff.

On Mon, Sep 11, 2017 at 2:38 PM, Jeff Janes wrote:
> On Mon, Sep 11, 2017 at 7:02 AM, Dave Florek wrote:
>> Hi,
>>
>> I'm trying to setup streaming replication on a cluster of postgresql
>> databases and I followed the instructions outlined here
>> (https://wiki.postgresql.org/wiki/Streaming_Replication) but I'm stuck on
>> the archive and restore commands. In reading the comments from the archive
>> command, WAL archiving needs a shared directory between the primary and
>> standby machines in order to replicate. Is this something actually
>> required? If so, does postgresql have any recommendations on where this
>> shared directory ought to be hosted?
>
> You can use streaming replication without any WAL archive at all. If the
> only reason you want a WAL archive is so that the standby can fall behind
> the primary without losing the ability to catch back up again, then you can
> use a replication slot to make the primary hold onto WAL files until the
> standby is done with them. That is going to be simpler than using a WAL
> archive, provided the master has enough room to keep them around should the
> standby fail for a while.
>
> If you want a WAL archive for other reasons, then the choice of how to
> host it would depend on those reasons.
>
> Cheers,
>
> Jeff
Re: [GENERAL] pg_restore error -- missing pg table
--- t...@sss.pgh.pa.us wrote:
From: Tom Lane
To: ""
cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_restore error -- missing pg table
Date: Mon, 11 Sep 2017 13:43:16 -0400

"" writes:
> /opt/pgsql-9.5/bin/pg_restore -v --clean --create --dbname=nms --schema=public .
> ...
> pg_restore: [archiver (db)] could not execute query: ERROR: function
> _pg_sv_column_array(oid, smallint[]) does not exist
> LINE 6: _pg_sv_column_array(k1.conrelid, k1.conkey) AS fk_column...
> HINT: No function matches the given name and argument types. You might need
> to add explicit type casts.

Hm, so what is _pg_sv_column_array()? There's no built-in function by that name. The most likely theory is that pg_dump didn't dump that function, or it did but pg_restore isn't restoring it, perhaps because of the --schema restriction. I'm not sure why the function name isn't showing up as schema-qualified, though, if it isn't in the public schema.

regards, tom lane

===

Ah, so I made the assumption that because it started with "pg_" it was a built-in, and that was false. :(

Tracking that down, it seems it came from the extension "pgtap", so I'll work on dealing with that. It seems that pgtap loaded itself into public (instead of the common schema, as it should have). I have no idea why things aren't in the right order, though.

Thanks for the pointer! I think that will allow us to get it all going again.

Kevin
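One way to deal with pgTAP having landed in public -- so that a --schema=public dump/restore no longer depends on its helper objects like pg_all_foreign_keys and _pg_sv_column_array -- is to move the extension into the shared schema. This is a sketch, assuming the pgtap extension is present on the server and that placing it in the existing "common" schema fits the layout described above:

```sql
-- Fresh install: put pgTAP in the shared schema from the start,
-- so dropping or selectively restoring public leaves it intact.
CREATE EXTENSION pgtap SCHEMA common;

-- If it is already installed in public, pgTAP is marked relocatable,
-- so its objects can be moved in one statement:
ALTER EXTENSION pgtap SET SCHEMA common;
```

After moving it, a fresh pg_dump of the public schema should no longer contain the pgTAP view whose dependency on _pg_sv_column_array broke the restore.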
Re: [GENERAL] Needing verification on instructions for streaming replication
On Mon, Sep 11, 2017 at 7:02 AM, Dave Florek wrote:
> Hi,
>
> I'm trying to setup streaming replication on a cluster of postgresql
> databases and I followed the instructions outlined here
> (https://wiki.postgresql.org/wiki/Streaming_Replication) but I'm stuck on
> the archive and restore commands. In reading the comments from the archive
> command, WAL archiving needs a shared directory between the primary and
> standby machines in order to replicate. Is this something actually
> required? If so, does postgresql have any recommendations on where this
> shared directory ought to be hosted?

You can use streaming replication without any WAL archive at all. If the only reason you want a WAL archive is so that the standby can fall behind the primary without losing the ability to catch back up again, then you can use a replication slot to make the primary hold onto WAL files until the standby is done with them. That is going to be simpler than using a WAL archive, provided the master has enough room to keep them around should the standby fail for a while.

If you want a WAL archive for other reasons, then the choice of how to host it would depend on those reasons.

Cheers,

Jeff
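The slot-based setup Jeff describes takes only a couple of steps on a 9.4+ primary/standby pair; a minimal sketch with illustrative names (slot name, host, and replication user are not from the thread):

```sql
-- On the primary (requires max_replication_slots > 0 in
-- postgresql.conf): create a physical replication slot. The primary
-- will retain WAL segments until this slot's consumer has read them.
SELECT pg_create_physical_replication_slot('standby1');

-- The standby then names the slot in its recovery.conf
-- (recovery.conf applies to the pre-10 releases discussed here):
--   standby_mode      = 'on'
--   primary_conninfo  = 'host=primary-host port=5432 user=replicator'
--   primary_slot_name = 'standby1'
```

The caveat from the message above applies directly: if the standby goes away for a long time, the slot keeps the primary from recycling WAL, so pg_xlog on the primary must have room for the backlog.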
Re: [GENERAL] pg_restore error -- missing pg table
"" writes:
> /opt/pgsql-9.5/bin/pg_restore -v --clean --create --dbname=nms --schema=public .
> ...
> pg_restore: [archiver (db)] could not execute query: ERROR: function
> _pg_sv_column_array(oid, smallint[]) does not exist
> LINE 6: _pg_sv_column_array(k1.conrelid, k1.conkey) AS fk_column...
> HINT: No function matches the given name and argument types. You might need
> to add explicit type casts.

Hm, so what is _pg_sv_column_array()? There's no built-in function by that name. The most likely theory is that pg_dump didn't dump that function, or it did but pg_restore isn't restoring it, perhaps because of the --schema restriction. I'm not sure why the function name isn't showing up as schema-qualified, though, if it isn't in the public schema.

regards, tom lane
Re: [GENERAL] pg_ident mapping Kerberos Usernames
On 09/11/2017 08:51 AM, rob stone wrote:
>> Hi Rob,
>>
>> How would that work? I was under the impression the first column was
>> for socket type and limited to local, host, hostssl, and hostnossl?
>>
>> Thunderbird's config has been fixed, so here is the line from
>> pg_hba.conf without the formatting issues:
>>
>> host all all 192.168.1.0/24 gss include_realm=1 map=testnet krb_realm=A.DOMAIN.TLD
>>
>> Thanks,
>> Ryan
>
> Hello Ryan,
>
> I'm probably incorrect about this as I don't use pg_ident, but my
> understanding is that each line in pg_ident consists of three fields:
>
> mask-name external-credentials internal-credentials
>
> so that the external log-on is converted to its Postgres log-on, and then
> the mask-name is used to find a line in pg_hba.conf to verify that the
> external credentials were submitted from an allowable IP address.
>
> Maybe somebody more knowledgeable than myself could provide a better example.
>
> Cheers,
> Rob

Hi Rob,

That's my understanding of the syntax and workflow from the reading and small experiments I've done as well. There's two of us, so we're on our way to a consensus. :)

Thanks!
Ryan
[GENERAL] pg_restore error -- missing pg table
We've been doing backups and restores with many tests and uses and never had a problem until now. Sadly, I'm not sure where to even begin looking to figure out what to do, so any pointers would be very much appreciated!

The vital stats:
Pg: 9.5.1 (this will get upgraded to 10.0 when it's available)
OS: CentOS 5.9 (the upgrade to 6.x is scheduled for real soon now)

For a little background, our DB is broken into 3 schemas: public, logging, common. We do that because logging data can get quite large and is not the data that is needed if something goes wrong and we need to restore -- that data sits in public. The common schema holds the few functions that both need, so a drop of public or logging doesn't hurt the other. I like to think this is all pretty straightforward with no surprises.

We do backups like this (simplified):

cd $EXP
rm -f $EXP/*
$PGPATH/pg_dump --clean --create --format=d --jobs=2 --file=$EXP --dbname=nms public
/bin/tar -czf $TARNAME .

We do the restore like this (simplified):

cd $EXP
rm -f $EXP/*
tar -xzf $TARNAME
# stop services & users
# rename schema in case we need to restore
# create an empty schema to restore into
$PGPATH/pg_restore $VERBOSE --jobs=2 --dbname=nms public .
# check for errors and restore saved schema if required

The error we're getting, with supporting SQL, looks like:

DROP SCHEMA IF EXISTS savepublic CASCADE;
ALTER SCHEMA public RENAME TO savepublic;
CREATE SCHEMA public AUTHORIZATION nmsroot;
/opt/pgsql-9.5/bin/pg_restore -v --clean --create --dbname=nms --schema=public .
...
pg_restore: processing item 446 VIEW pg_all_foreign_keys
pg_restore: creating VIEW "public.pg_all_foreign_keys"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 446; 1259 136598 VIEW pg_all_foreign_keys nmsroot
pg_restore: [archiver (db)] could not execute query: ERROR: function _pg_sv_column_array(oid, smallint[]) does not exist
LINE 6: _pg_sv_column_array(k1.conrelid, k1.conkey) AS fk_column...
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
Command was: CREATE VIEW pg_all_foreign_keys AS SELECT n1.nspname AS fk_schema_name, c1.relname AS fk_table_name, k1.conname AS ...
pg_restore: [archiver (db)] could not execute query: ERROR: relation "pg_all_foreign_keys" does not exist
Command was: ALTER TABLE pg_all_foreign_keys OWNER TO nmsroot;
pg_restore: processing item 6841 ACL pg_all_foreign_keys
...
pg_restore: setting owner and privileges for ACL "public.pg_all_foreign_keys"
pg_restore: [archiver (db)] Error from TOC entry 6841; 0 0 ACL pg_all_foreign_keys nmsroot
pg_restore: [archiver (db)] could not execute query: ERROR: relation "pg_all_foreign_keys" does not exist
Command was: REVOKE ALL ON TABLE pg_all_foreign_keys FROM PUBLIC; REVOKE ALL ON TABLE pg_all_foreign_keys FROM nmsroot; GRANT ALL ON TABL...
...
WARNING: errors ignored on restore: 3
DROP SCHEMA IF EXISTS public CASCADE;
ALTER SCHEMA savepublic RENAME TO public;
Error: Problem with pg_restore, reverted to saved database copy.

I don't understand why pg_all_foreign_keys is having issues here, nor even what to start investigating. To the best of my knowledge, the server never ran out of disk space, so it should be a complete backup. None of the files from the backup are corrupt that I can tell. The server and database both seem fine (other than the missing data that was accidentally dropped and is forcing this restore).

What am I missing and what do I need to investigate? Has anyone else ever seen this before, and if so, what did you do to fix it?

Thanks!
Kevin
[GENERAL] Needing verification on instructions for streaming replication
Hi, I'm trying to setup streaming replication on a cluster of postgresql databases and I followed the instructions outlined here ( https://wiki.postgresql.org/wiki/Streaming_Replication) but I'm stuck on the archive and restore commands. In reading the comments from the archive command, WAL archiving needs a shared directory between the primary and standby machines in order to replicate. Is this something actually required? If so, does postgresql have any recommendations on where this shared directory ought to be hosted? Thanks,
Re: [GENERAL] pg_ident mapping Kerberos Usernames
> Hi Rob,
>
> How would that work? I was under the impression the first column was
> for socket type and limited to local, host, hostssl, and hostnossl?
>
> Thunderbird's config has been fixed, so here is the line from
> pg_hba.conf without the formatting issues:
>
> host all all 192.168.1.0/24 gss include_realm=1 map=testnet krb_realm=A.DOMAIN.TLD
>
> Thanks,
> Ryan

Hello Ryan,

I'm probably incorrect about this as I don't use pg_ident, but my understanding is that each line in pg_ident consists of three fields:

mask-name external-credentials internal-credentials

so that the external log-on is converted to its Postgres log-on, and then the mask-name is used to find a line in pg_hba.conf to verify that the external credentials were submitted from an allowable IP address.

Maybe somebody more knowledgeable than myself could provide a better example.

Cheers,
Rob
[GENERAL] Final pg_dumpall should happen in Single-User-Mode
I made a stupid mistake. I ran the final pg_dumpall without switching to single-user mode first. With "final" I mean that the dump was to be transferred to a different machine; after the dump, the old machine was shut down forever.

My mistake was that I stopped the services, but one process was still alive, and this process inserted into the database after pg_dumpall had started.

I guess I am not the first one who made this mistake. What do you think? Wouldn't it be polite to add a note about this topic to the docs? Here are the docs for pg_dumpall: https://www.postgresql.org/docs/current/static/app-pg-dumpall.html

How could the wording look? I am not a native speaker. Or do you have a better/different idea?

Regards,
Thomas Güttler

--
Thomas Guettler http://www.thomas-guettler.de/
I am looking for feedback: https://github.com/guettli/programming-guidelines
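One way to catch this situation without resorting to single-user mode is to check for (and, if needed, terminate) leftover sessions immediately before the final dump. A sketch, assuming superuser access; run it from the same psql session that will stay open until pg_dumpall starts:

```sql
-- List every session other than your own; if this returns rows,
-- some client is still connected and could still write:
SELECT pid, usename, application_name, state
FROM pg_stat_activity
WHERE pid <> pg_backend_pid();

-- If stragglers remain after stopping the services, terminate them
-- before starting pg_dumpall:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE pid <> pg_backend_pid();
```

This does not prevent a terminated service from reconnecting, so for a truly final dump it is still worth locking new connections out (for example via a restrictive pg_hba.conf) before running pg_dumpall.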