Re: [GENERAL] not aborting transactions on failed select
On 11 September 2013 07:53, Sergey Shelukhin <ser...@hortonworks.com> wrote:

> The query suffers from the auto-lower-casing of unquoted table names, which is not ANSI compliant. Technically we could add quotes (and stay ANSI), but then MySQL would break without explicitly setting it to use ANSI mode, so it's a lose-lose situation if we do not want to have DB-specific code.

Whether identifiers get folded to lower case or to upper case (which IS what the standard prescribes) should not matter to you. If you're using case-sensitive table names, the standard says to quote your identifiers. It would appear that MySQL is doing something non-standards-compliant that's biting you here.

I haven't used MySQL in 10 years, but if memory serves me correctly, it tends to use the literal file name of the table, which would be case-sensitive on a case-sensitive file system. I remember the hilarity of the same issue when we moved a MySQL database from a file system that was not case-sensitive (NTFS) to one that was (ext2) - we had to rewrite all our queries because MySQL could no longer find the tables in them. That was when we wrote off MySQL for the project we were working on back then.

From your remark it isn't entirely clear what your problem is; it would appear that your ORM is generating case-sensitive table names and that your query outside the ORM assumes that it doesn't need to quote those identifiers? Perhaps your ORM can be told not to do that?

--
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.
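To make the folding behavior concrete, here is a small sketch of how PostgreSQL treats unquoted versus quoted identifiers (the table name is illustrative, not from the thread):

```sql
-- PostgreSQL folds unquoted identifiers to lower case, so a table created
-- with a quoted mixed-case name can only be reached by quoting it again:
CREATE TABLE "MyTable" (id integer);

SELECT * FROM MyTable;    -- ERROR: relation "mytable" does not exist
SELECT * FROM "MyTable";  -- works

-- quote_ident() is a handy built-in when generating SQL: it adds quotes
-- only when the identifier would otherwise be folded or is a keyword.
SELECT quote_ident('MyTable');  -- returns "MyTable"
SELECT quote_ident('mytable');  -- returns mytable (no quotes needed)
```

Note that the SQL standard folds to upper case while PostgreSQL folds to lower case, but the practical rule is the same either way: quote identifiers whose case matters.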
[GENERAL] problem in installation of postgresql-jdbc
Hi All,

I am trying to install postgresql-jdbc but am facing a Java error. Sun Java is already installed, but postgresql-jdbc is installing OpenJDK. Please let me know how to resolve this, and if you require any other details.

[root@172-24-1-54 ~]# java -version
java version "1.6.0_33"
Java(TM) SE Runtime Environment (build 1.6.0_33-b04)
Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03, mixed mode)

[root@172-24-1-54 ~]# yum install postgresql-jdbc.noarch
Dependencies Resolved
===
 Package              Arch    Version                    Repository  Size
===
Installing:
 postgresql-jdbc      noarch  8.4.701-8.el6              redhat63   402 k
Installing for dependencies:
 flac                 x86_64  1.2.1-6.1.el6              redhat63   243 k
 giflib               x86_64  4.1.6-3.1.el6              redhat63    37 k
 java-1.6.0-openjdk   x86_64  1:1.6.0.0-1.45.1.11.1.el6  redhat63    25 M
 jline                noarch  0.9.94-0.8.el6             redhat63    86 k
 jpackage-utils       noarch  1.7.5-3.12.el6             redhat63    59 k
 libasyncns           x86_64  0.8-1.1.el6                redhat63    24 k
 libsndfile           x86_64  1.0.20-5.el6               redhat63   233 k
 pulseaudio-libs      x86_64  0.9.21-13.el6              redhat63   462 k
 rhino                noarch  1.7-0.7.r2.2.el6           redhat63   778 k
 tzdata-java          noarch  2012c-1.el6                redhat63   154 k

Transaction Summary
===
Install 11 Package(s)

--
ViVek Raghuwanshi
Mobile - +91-09595950504
Skype - vivek_raghuwanshi
IRC vivekraghuwanshi
http://vivekraghuwanshi.wordpress.com/
http://in.linkedin.com/in/vivekraghuwanshi
Re: [GENERAL] problem in installation of postgresql-jdbc
Vivek Singh Raghuwanshi wrote:
> I am trying to install postgresql-jdbc but facing java error.

It would be helpful to know which error you are facing.

Yours,
Laurenz Albe

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Re: [pgeu-general] [GENERAL] Call for design: PostgreSQL mugs
patrick keshishian wrote:
> One more cute idea that came to me last night. Here is a very poor attempt at it by yours truly; keep in mind I'm not a graphics artist. This image is for illustration purposes only!
>
> http://sidster.com/gallery/2013/09/10/elephant_paw.sml.jpg
>
> Can you picture a bunch of these on a meeting table? If that image does not scream Stampede!, I don't know what does. =)

+1

Yours,
Laurenz Albe
Re: [GENERAL] pg_largeobjects
On Wed, Sep 11, 2013 at 10:19 AM, James Sewell <james.sew...@lisasoft.com> wrote:

> Hello all,
> I have a table which makes use of pg_largeobjects. I am inserting rows into the table as user1. If I connect to the database as user2 I can SELECT data, but cannot open the large object for reading (user1 can do this). I don't want to set lo_compat_privileges, as then user3 (who can't SELECT from the services tables) would be able to read the large object.

GRANT SELECT, UPDATE ON LARGE OBJECT <oid> TO user2;

Will this work...

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/
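Raghavendra's GRANT needs a concrete large-object OID, since large-object ACLs (available in 9.0+) are per object, not per table. A sketch with illustrative OID, column, and role names (none of these are from the thread):

```sql
-- Grant read access on one specific large object to user2:
GRANT SELECT ON LARGE OBJECT 16394 TO user2;

-- To cover every large object referenced by a table, one hypothetical
-- approach is to generate the statements from the referencing oid column
-- (format() is available in 9.1+), then execute the resulting output:
SELECT format('GRANT SELECT ON LARGE OBJECT %s TO user2;', doc_oid)
FROM services;
```

This keeps user3 locked out, because each large object's ACL is granted individually rather than via the lo_compat_privileges escape hatch.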
Re: [GENERAL] problem in installation of postgresql-jdbc
Java version 1.6.0_33 is already installed, but after firing "yum install postgresql-jdbc", java-1.6.0-openjdk is also going to be installed as a dependency.

On Wed, Sep 11, 2013 at 1:53 PM, Albe Laurenz <laurenz.a...@wien.gv.at> wrote:
> Vivek Singh Raghuwanshi wrote:
>> I am trying to install postgresql-jdbc but facing java error.
>
> It would be helpful to know which error you are facing.
>
> Yours,
> Laurenz Albe

--
ViVek Raghuwanshi
Mobile - +91-09595950504
Skype - vivek_raghuwanshi
IRC vivekraghuwanshi
http://vivekraghuwanshi.wordpress.com/
http://in.linkedin.com/in/vivekraghuwanshi
Re: [GENERAL] problem in installation of postgresql-jdbc
On 9/11/2013 1:47 AM, Vivek Singh Raghuwanshi wrote:
> java version 1.6.0_33 is already installed but after firing yum install postgresql-jdbc java-1.6.0-openjdk is also going to installed as dependencies.

We never install the RPM of the JDBC driver, as it's just a single .jar file... we just bundle the proper .jar file with our applications.

--
john r pierce                                      37N 122W
somewhere on the middle of the left coast
Re: [GENERAL] pg_largeobjects
James Sewell <james.sew...@lisasoft.com> wrote:
> is there any reason to use pg_largeobjects if I am storing data under 1GB which doesn't require random reads any more?

If individual large objects might need to be referenced from multiple locations, it gives you an easy way to do that without needing to create a new table with id and document columns (and possibly a comment column).

It gives you the ability to stream documents in rather than including them in a SQL statement, which can reduce stress on RAM.

It gives you the ability to set the security for individual documents, although it sounds like that is not something you find useful.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
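The "referenced from multiple locations" pattern Kevin describes can be sketched like this (table and file names are illustrative; server-side lo_import requires superuser, while psql's client-side \lo_import streams from the client under any role):

```sql
-- A table whose rows point into pg_largeobject by OID:
CREATE TABLE documents (
    id      serial PRIMARY KEY,
    name    text,
    doc_oid oid          -- references a large object
);

-- Import a file as a large object and record its OID:
INSERT INTO documents (name, doc_oid)
VALUES ('report', lo_import('/tmp/report.pdf'));

-- Any number of rows (in any table) can reference the same doc_oid.
-- When the last reference goes away, reclaim the storage explicitly:
-- SELECT lo_unlink(doc_oid) FROM documents WHERE name = 'report';
```

Unlike a bytea column, the large object is not copied per referencing row, and clients can read or write it in chunks via the lo_* functions instead of materializing the whole value in one statement.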
Re: [GENERAL] Call for design: PostgreSQL mugs
On Wed, 04 Sep 2013 00:08:52 +0200 Andreas 'ads' Scherbaum <adsm...@wars-nicht.de> wrote:

> PostgreSQL folks!
>
> We are looking for the next big thing. Actually, it's a bit smaller: a new design for mugs. So far we had big blue elephants, small blue elephants, frosty elephants, white SQL code on black mugs ... Now it's time to design something new.
>
> What's in it for you? Fame, of course - we will announce the designer of the next mug along with the mugs themselves. Plus 4 mugs for you and your friends.
>
> Do you have a cool idea? Please let us know. Either reply here or send an email to pgeu-bo...@postgresql.org.

CREATE OR REFILL mug (
    id_owner   INTEGER PRIMARY KEY,
    owner_name VARCHAR(40),
    CONSTRAINT fk_content FOREIGN KEY (content)
        REFERENCES content (content_id)
) WITH ( OIDS=FALSE );

Thanks,
---
Eduardo Morras <emorr...@yahoo.es>
Re: [GENERAL] not aborting transactions on failed select
Folding to upper case is according to the standard, folding to lower case is not, so in our case we could expect it to work if we had upper-case tables... in fact, it should work whether the FS is case-sensitive or not: in MySQL, tables are created and used upper-case. Due to the presence of a large number of historical installations, renaming all the tables is not viable.

Again, as I said, this is not the problem here; we'll figure out how to deal with the queries in this case. Probably put in quotes and switch MySQL to ANSI mode within the session - some testing would be needed to see how it affects the ORM. But the problem is that even when the queries work, the fall-back to the ORM should always work against future potential breakage... and it doesn't.

On Wed, Sep 11, 2013 at 12:57 AM, Alban Hertroys <haram...@gmail.com> wrote:
> On 11 September 2013 07:53, Sergey Shelukhin <ser...@hortonworks.com> wrote:
>> The query suffers from the auto-lower-casing of unquoted table names, which is not ANSI compliant. Technically we could add quotes (and stay ANSI), but then MySQL would break without explicitly setting it to use ANSI mode, so it's a lose-lose situation if we do not want to have DB-specific code.
>
> Whether identifiers get folded to lower case or to upper case (which IS according to the standard) should not matter to you. If you're using case sensitive table names, the standard says to quote your identifiers. It would appear that MySQL is doing something non-standards-compliant that's biting you here.
>
> [remainder of quoted message snipped]
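The "switch MySQL to ANSI mode within the session" plan Sergey mentions can be sketched as below; this is a per-session setting, so the portable quoted SQL works without changing the server's global configuration (table name is illustrative):

```sql
-- MySQL only: make double quotes delimit identifiers, as in standard SQL.
-- ('ANSI' also works and enables a broader set of standard behaviors.)
SET SESSION sql_mode = 'ANSI_QUOTES';

-- After this, the same quoted statement is valid on both MySQL and
-- PostgreSQL, preserving the identifier's case:
SELECT * FROM "MyTable";
```

Running the SET once right after connecting (e.g. in the connection pool's init statement) keeps the application free of per-query DB-specific code.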
Re: [GENERAL] not aborting transactions on failed select
Sergey Shelukhin wrote:
> Due to presence of a large number of historical installations {doing such and such} is not viable.

Yeah, PostgreSQL faces this same issue.

If you intend to stay here long, and we hope you do (welcome, by the way), it is customary to bottom-post on these lists.

One other thought is that exception generation and handling is expensive. It probably is a fair trade-off - since the ORM is already slow, the added overhead of the failing optimization query shouldn't be that noticeable. Failure means either bad code or incorrect pre-conditions, both of which should be explicitly checked and reported on. If those checks fail, the system can auto-configure to use the backup (ORM) channel instead of the primary (optimized) channel.

David J.

--
View this message in context: http://postgresql.1045698.n5.nabble.com/not-aborting-transactions-on-failed-select-tp5770387p5770478.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
Re: [GENERAL] invalid frontend message type 136
On 09/10/2013 11:30 AM, David Johnston wrote:
>> Steve Crawford wrote:
>> Sorry, I should have included that. The error occurred when an older client running 8.3.7 (I know, ferreting out and finishing upgrades on clients with old libraries is in progress) on CentOS 5.3 (32-bit). Of all the machines connecting to the server, this one is pretty lightly loaded and typically creates a new connection to the server somewhat over 10,000 times/week. This is the first instance of the error. These machines/connections are buried behind firewalls in an automated system and this particular connection is password-less. The server is PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit.
>
> Which client (not just the version): psql; jdbc; .net; libpq?

Unfortunately I'm not sure. There are many interrelated processes that communicate with the server, and since the connection bailed on connect there isn't much info to tell me which one. So it could be psql, code in one of several C programs, or connections from Perl or another scripting program. But they are all using the same libraries.

> I am not surprised that an old client talking to a new server would cause a protocol error to be issued.

Fair enough. But this is *one* error from among millions of successful connections in a farm of similar servers. This server happens to be one from an older series that is in line for updates, but it is not the only one running this version. The nature of our operation, along with some very specific requirements forced by specialized underlying hardware, requires careful hand-rolling of updates to the machines.

> I'm doubtful you will get much help on the mailing lists for something like this ...

Actually, all I want to know is:

1) A definition of the error - I can research further from there. PostgreSQL has by far the most outstanding documentation of any open-source (or possibly closed-source) product I use, but it still lacks a comprehensive (i.e. any) glossary of log/error messages.

2) An idea of whether or not to worry. If it is an odd and harmless transient (dealt with automatically by our error-checking/validation/recovery code) then I'll ignore it. But if it's something that could be an early indication of lurking corruption or pending failure then I need to investigate aggressively.

Cheers,
Steve
[GENERAL] autovacuum out of memory errors
Hi,

Starting this morning I have been getting out of memory errors from the postgres autovacuum process. I have searched through previous similar questions but have not found anything other than suggestions to reduce either 'shared_buffers' or 'maintenance_work_mem'; these seem very instance-specific, though, and I would like a better understanding of both the root cause and the solution before making changes to my postgres setup.

We have a single postgres cluster installed from the Debian repository hosting 36 databases, the details of which are below. The table being vacuumed at the time is about 19GB in size.

Errors from log:

TopMemoryContext: 232408 total in 13 blocks; 11792 free (8 chunks); 220616 used
TopTransactionContext: 24576 total in 2 blocks; 23312 free (25 chunks); 1264 used
Type information cache: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 used
Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
TOAST to main relid map: 24576 total in 2 blocks; 11872 free (5 chunks); 12704 used
AV worker: 24576 total in 2 blocks; 19816 free (7 chunks); 4760 used
Autovacuum Portal: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
Vacuum: 8192 total in 1 blocks; 8080 free (0 chunks); 112 used
smgr relation table: 24576 total in 2 blocks; 13920 free (4 chunks); 10656 used
TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 32 used
Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
PortalMemory: 0 total in 0 blocks; 0 free (0 chunks); 0 used
Relcache by OID: 24576 total in 2 blocks; 13872 free (3 chunks); 10704 used
CacheMemoryContext: 817840 total in 20 blocks; 171584 free (1 chunks); 646256 used
pbl_product_ix: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
pbl_maxallocation_idx: 2048 total in 1 blocks; 440 free (0 chunks); 1608 used
pbl_endtimenotz_ix: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
pbl_endtimegmt_ix: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
pbl_begintimenotz_ix: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
pbl_begintimegmt_ix: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
pbl_allocated_ix: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
productbookinglevel_product_id_key: 2048 total in 1 blocks; 656 free (0 chunks); 1392 used
productbookinglevel_pkey: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
pg_index_indrelid_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
pg_opclass_am_name_nsp_index: 3072 total in 2 blocks; 1448 free (4 chunks); 1624 used
pg_foreign_data_wrapper_name_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328 used
pg_enum_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328 used
pg_class_relname_nsp_index: 3072 total in 2 blocks; 1648 free (2 chunks); 1424 used
pg_foreign_server_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328 used
pg_statistic_relid_att_index: 3072 total in 2 blocks; 1648 free (2 chunks); 1424 used
pg_cast_source_target_index: 3072 total in 2 blocks; 1600 free (2 chunks); 1472 used
pg_language_name_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328 used
pg_authid_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks); 1376 used
pg_amop_fam_strat_index: 3072 total in 2 blocks; 1384 free (2 chunks); 1688 used
pg_index_indexrelid_index: 3072 total in 2 blocks; 1696 free (2 chunks); 1376 used
pg_ts_template_tmplname_index: 3072 total in 2 blocks; 1648 free (2 chunks); 1424 used
pg_ts_config_map_index: 3072 total in 2 blocks; 1496 free (4 chunks); 1576 used
pg_opclass_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks); 1376 used
pg_foreign_data_wrapper_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328 used
pg_auth_members_member_role_index: 3072 total in 2 blocks; 1648 free (2 chunks); 1424 used
pg_ts_dict_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328 used
pg_conversion_default_index: 3072 total in 2 blocks; 1432 free (3 chunks); 1640 used
pg_operator_oprname_l_r_n_index: 3072 total in 2 blocks; 1432 free (3 chunks); 1640 used
pg_trigger_tgrelid_tgname_index: 3072 total in 2 blocks; 1600 free (2 chunks); 1472 used
pg_enum_typid_label_index: 3072 total in 2 blocks; 1648 free (2 chunks); 1424 used
pg_ts_config_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328 used
pg_user_mapping_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328 used
pg_opfamily_am_name_nsp_index: 3072 total in 2 blocks; 1496 free (4 chunks); 1576 used
pg_type_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks); 1376 used
pg_aggregate_fnoid_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328 used
pg_constraint_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328 used
pg_rewrite_rel_rulename_index: 3072 total in 2 blocks; 1648 free (2 chunks); 1424 used
[GENERAL] PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2
Hi,

*I ran into a major problem when I tried to import a backup from 9.1 into a 9.3 PG.*

I just installed PG 9.3 on a new Linux box. Then I wanted to import a plain-text dump of a DB that was created by pg_dump of PG 9.1. There are a lot of views that have joins to a subquery in the from-clause, something like:

SELECT ... some columns ...
FROM maintable AS m
JOIN someflag AS f ON m.flag_1_id = f.id
LEFT JOIN (
    child_table AS c
    JOIN someotherflag AS f ON c.flag_2_id = f.id
) AS x ON m.id = x.main_id

This works with PG 9.1 and PG 9.2, but PG 9.3 complains:

ERROR: table name "f" specified more than once

*Are there no separate namespaces for subqueries anymore in PG 9.3?*

Do I have to change ALL those views in the old PG 9.1 server before I can import the backup into 9.3, or is there another way to work around this issue?
Re: [GENERAL] PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2
On 09/11/2013 09:53 AM, Andreas wrote:
> Hi,
> I ran into a major problem when I tried to import a backup from 9.1 into a 9.3 PG.
> [description of the failing view import snipped]
> This works with PG 9.1 and PG 9.2 but PG 9.3 complains:
> ERROR: table name "f" specified more than once

My guess is you are seeing this:

http://www.postgresql.org/docs/9.3/interactive/release-9-3.html

"Improve view/rule printing code to handle cases where referenced tables are renamed, or columns are renamed, added, or dropped (Tom Lane)

Table and column renamings can produce cases where, if we merely substitute the new name into the original text of a rule or view, the result is ambiguous. This change fixes the rule-dumping code to insert manufactured table and column aliases when needed to preserve the original semantics."

You would be advised to use the 9.3 version of pg_dump to dump the 9.1 database.

--
Adrian Klaver
adrian.kla...@gmail.com
[GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2
Andreas-3-2 wrote:
> [description of the failing view import snipped]

Not running 9.3 yet, so I cannot test this myself. Ignore the pg_dump and the view: does (should) the above query work if executed in psql?

David J.

--
View this message in context: http://postgresql.1045698.n5.nabble.com/PG-9-3-complains-about-specified-more-than-once-Those-views-worked-in-PG-9-1-9-2-tp5770489p5770510.html
Re: [GENERAL] autovacuum out of memory errors
tim truman <mitram...@gmail.com> wrote:

> [memory usage map]

There doesn't seem to be any memory context using an unusually large amount of RAM.

> 522f9128.1151 ERROR: out of memory
> 522f9128.1151 DETAIL: Failed on request of size 336150396.
> 522f9128.1151 CONTEXT: automatic vacuum of table "client.public.product"

> SELECT version();
> PostgreSQL 8.4.11 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit

http://www.postgresql.org/support/versioning/

> maintenance_work_mem | 1GB
> max_connections      | 1000
> shared_buffers       | 11GB
> wal_buffers          | 8MB
> work_mem             | 8MB

> free -m
>              total    used    free  shared  buffers  cached
> Mem:         48396   48139     257       0      173   44771
> -/+ buffers/cache:    3194   45201
> Swap:          956      20     936

Clearly the free command was not run while the database service was running. Can we see what that looks like when it is running and handling roughly the same workload that has been causing problems (including any non-database processes which have been running on the machine during failures)?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
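For reference, the mitigations the original poster found suggested elsewhere amount to shrinking the per-operation memory budget. A sketch of how one might inspect and then trial-reduce the settings (the replacement values below are illustrative, not recommendations for this particular system):

```sql
-- Current settings can be inspected from any session:
SHOW maintenance_work_mem;   -- 1GB in the reported configuration
SHOW shared_buffers;         -- 11GB in the reported configuration

-- Changes go in postgresql.conf (maintenance_work_mem takes effect on
-- reload; shared_buffers requires a restart), for example:
--   maintenance_work_mem = 256MB   -- caps the vacuum dead-tuple array
--   shared_buffers = 8GB           -- leaves more RAM for backend allocations
```

On 8.4 autovacuum workers size their dead-tuple array from maintenance_work_mem, so an oversized value there is the usual first suspect when the failing allocation (~320MB here) comes from an autovacuum context.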
[GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2
Adrian Klaver-3 wrote:
> My guess you are seeing this:
> http://www.postgresql.org/docs/9.3/interactive/release-9-3.html
> [release-note text snipped]
> You would be advised to use the 9.3 version of pg_dump to dump the 9.1 database.

Maybe; but the supplied query does not seem to be ambiguous, and the dump phase has already completed. pg_restore simply issues a CREATE VIEW and does not perform interpolation of the contents. If the select statement is indeed correctly represented, then I could very well see creating one like that by hand and inserting it as part of an external database schema installation (i.e., not via pg_restore) and would expect it to work. According to this, such a scenario should also fail with the same message.

David J.

--
View this message in context: http://postgresql.1045698.n5.nabble.com/PG-9-3-complains-about-specified-more-than-once-Those-views-worked-in-PG-9-1-9-2-tp5770489p5770512.html
Re: [GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2
On 11.09.2013 21:02, David Johnston wrote:
> Not running 9.3 yet so cannot test this myself. Ignore the pg_dump and view. Does (should) the above query work if executed in psql?

No, it doesn't work in the psql of the PG 9.3 I got from EnterpriseDB.

I checked like this: open pgAdmin 1.18 of PG 9.3 and open connections to both servers - to the remote 9.1 and the 9.3 here in my LAN. Take the SQL definition of one of the problematic views out of the 9.1 server as pgAdmin displays it. Paste this into a SQL editor window of the 9.3 server and execute it. The creation of the view is rejected.

When I try to run the query within the view directly, PG 9.3 balks too. It doesn't accept the reused alias within the subquery, even though this internal alias doesn't even reference the same table as the one outside the subquery. It appears as if there is just one global namespace for the whole query that spans the names within the subquery, too.

If this is the case then I can't switch to PG 9.3 at all, because I would have to re-evaluate every query my application throws at the DB, and not only those 70 views that get rejected during the initial import of the SQL dump.
[GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2
Andreas-3-2 wrote:
> No, it doesn't work in the psql of the PG 9.3 I got from EnterpriseDB.

Can you please create a minimal self-contained query that exhibits this behavior and file a bug report?

I quickly cobbled this together - works on 9.0.x:

WITH tblA (a_id, a_val) AS ( VALUES (1::integer, 'one-a'::varchar) )
   , tblB (b_id, b_val) AS ( VALUES (1::integer, 'one-b'::varchar) )
   , tblC (c_id, c_val) AS ( VALUES (1::integer, 'one-c'::varchar) )
SELECT *
FROM ( tblA AS dup_tbl JOIN tblB ON (dup_tbl.a_id = tblB.b_id) ) AS refD
JOIN tblC AS dup_tbl ON (dup_tbl.c_id = refD.a_id)

If it fails on 9.3 it would do.

David J.

--
View this message in context: http://postgresql.1045698.n5.nabble.com/PG-9-3-complains-about-specified-more-than-once-Those-views-worked-in-PG-9-1-9-2-tp5770489p5770519.html
[GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2
Try these too, please:

WITH tblA (a_id, a_val) AS ( VALUES (1::integer, 'one-1'::varchar) )
   , tblB (b_id, b_val) AS ( VALUES (1::integer, 'one-b'::varchar) )
   , tblC (c_id, c_val) AS ( VALUES (1::integer, 'one-c'::varchar) )
SELECT *
FROM ( tblA JOIN tblB ON (tblA.a_id = tblB.b_id) ) AS refD
JOIN tblA ON (tblA.a_id = refD.a_id)
;

WITH tblA (a_id, a_val) AS ( VALUES (1::integer, 'one-1'::varchar) )
   , tblB (b_id, b_val) AS ( VALUES (1::integer, 'one-b'::varchar) )
   , tblC (c_id, c_val) AS ( VALUES (1::integer, 'one-c'::varchar) )
SELECT *
FROM ( tblA JOIN tblB ON (tblA.a_id = tblB.b_id) ) AS refD
JOIN tblA ON (tblA.a_id = tblB.a_id)
; --this one should fail trying to reference tblB

The reference to tblA is used inside refD and outside of it as well, but no table aliases are used.

Thanks,
David J.

--
View this message in context: http://postgresql.1045698.n5.nabble.com/PG-9-3-complains-about-specified-more-than-once-Those-views-worked-in-PG-9-1-9-2-tp5770489p5770520.html
Re: [GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2
Hi David,

your 3 examples work as you expected. That is, 1+2 work and 3 throws an error.

I tried to figure out an example and found something peculiar. The issue arises when there is another join in the subquery after the one with the reused table alias. There is no error without this following join.

Look at this rather chunky sample. If I join flag_2 before flag_1 it works. It won't with flag_2 after flag_1. The query works as soon as the reused alias joins last in the subquery. If there are 2 reused aliases then the query won't work at all without renaming one alias.

The error shows with pgAdmin as well as psql (9.3). So probably it's not a namespace issue but the query parser screws up. :(

I have never filed a bug report. :} Could you give a hand?

drop table if exists sub_tab;
drop table if exists main_tab;
drop table if exists flag_1;
drop table if exists flag_2;

create temporary table flag_1 ( flag_1_id integer primary key, flag_1_t text );
insert into flag_1 values ( 0, null ), ( 1, 'a' ), ( 2, 'b' ), ( 3, 'c' );

create temporary table flag_2 ( flag_2_id integer primary key, flag_2_t text );
insert into flag_2 values ( 0, null ), ( 1, 'a' ), ( 2, 'b' ), ( 3, 'c' );

create temporary table main_tab (
    main_id integer primary key,
    main_t text,
    flag_1_id integer references flag_1 ( flag_1_id )
);
insert into main_tab values ( 1, 'Main 1', 1 ), ( 2, 'Main 2', 2 ), ( 3, 'Main 3', 3 );

create temporary table sub_tab (
    sub_id integer primary key,
    sub_t text,
    main_id integer references main_tab ( main_id ),
    flag_1_id integer references flag_1 ( flag_1_id ),
    flag_2_id integer references flag_2 ( flag_2_id )
);
insert into sub_tab values ( 1, 'Sub 1', 1, 3, 1 ), ( 2, 'Sub 2', 2, 2, 2 ), ( 3, 'Sub 3', 3, 1, 3 );

select m.main_id, m.main_t, f.flag_1_t, x.sub_id, x.sub_t, x.flag_1_t, x.flag_2_t
from main_tab as m
join flag_1 as f using ( flag_1_id )
left join (
    sub_tab as s
    join flag_2 as f2 using ( flag_2_id )  -- that way it works
    join flag_1 as f  using ( flag_1_id )
--  join flag_2 as f2 using ( flag_2_id ) -- that way it doesn't work
) as x using ( main_id );
Re: [GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2
Just another addition... If I remove the aliases for the tables flag_1 and flag_2 the problem still comes up. So one either has to mind the order of the joins or use unique aliases. It's really an issue, as there are bound to be some queries in sql functions, or some that get assembled dynamically in my application. Those won't get executed in the initial import of the db dump but will come up any time later, when the query gets used for the first time. This is a no-go. :(

regards
Andreas

Am 12.09.2013 00:33, schrieb Andreas:
[GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2
Andreas-3-2 wrote:
> I never placed a bug-report. :} Could you give a hand?

Sure. PostgreSQL homepage (postgresql.org), Report a Bug link on the right-hand side of the page. Fill in the form. SEND THIS: the self-contained test case from your message.
[GENERAL] Risk of set system wise statement_timeout
I have been reading a few posted comments about the risk to autovacuum in older Postgres, like version 8. I am currently running 9.2.4. We have a need to terminate any query running longer than 2 hours; most of our queries should finish within 15 minutes. We don't have a very large amount of changes in the system and we run autovacuum daily, so autovacuum runs on the larger tables should be fast. In my situation, setting statement_timeout = '2h' (7200000, since the value is in milliseconds when given without units) system-wide seems to carry a very low risk of causing autovacuum to fail. Does anyone have a reason not to do it, or any workaround to decrease the risk of failed autovacuum?

--
Best regards,
Alex Lai
OMI SIPS DBA
ADNET Systems, Inc.
m...@sesda3.com
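If the worry is maintenance sessions getting killed, one narrower option is to scope the timeout to the application instead of the whole cluster. This is only a sketch; app_user and mydb are hypothetical names standing in for your own role and database:

```sql
-- Limit only the application's role, not every session in the cluster.
ALTER ROLE app_user SET statement_timeout = '2h';

-- Or limit every session in one database:
ALTER DATABASE mydb SET statement_timeout = '2h';

-- A manual maintenance session can still opt out for itself:
SET statement_timeout = 0;
```

Autovacuum workers reportedly force statement_timeout to 0 internally, but scoping the setting this way avoids having to rely on that.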
[GENERAL] Odd behavior in functions w/ anyarray anyelement
Greetings, I'm trying to track down some undocumented (or perhaps not well documented) behavior I'm encountering with custom functions (in plpgsql) that use anyelement and anyarray as arguments and/or return types. I arrived at this point when I was attempting to write the function ANYARRAY_REMOVE(anyarray, anyelement), which returned anyarray. This function would succeed when calling:

SELECT ANYARRAY_REMOVE(ARRAY[1,2,3], 2);

... but would fail when calling:

SELECT ANYARRAY_REMOVE(ARRAY[1,2,3], ARRAY[1,2]);

... with the error: function anyarray_remove(integer[], integer[]) does not exist.

From that point I wrote a bunch of simple anyarray/anyelement-related functions to better understand how these pseudo-types behave, which has left me more confused than when I started. Here are those functions, queries to exercise them, and what I would expect each query to return or throw vs. what actually happens. Nothing I have read at these URLs documents this behavior:
- http://forums.devshed.com/postgresql-help-21/what-s-anyarray-isn-t-it-the-same-as-array-148195.html
- http://www.postgresql.org/message-id/44649bb2.50...@tada.se
- http://www.postgresql.org/docs/9.1/static/extend-type-system.html

Queries tested on:
- Windows 2003 R2 (64-bit), PostgreSQL 9.1.0
- Ubuntu Linux 12.04 LTS (64-bit), PostgreSQL 9.2.4

DROP FUNCTION IF EXISTS anyel_anyel(anyelement);
CREATE OR REPLACE FUNCTION anyel_anyel(anyelement) RETURNS anyelement AS
$BODY$
BEGIN
    RETURN $1;
END;
$BODY$ LANGUAGE plpgsql;

-- Works as expected. Should accept TEXT, should output TEXT to match input data-type.
-- Expected: Returns 'hiho' as TEXT
-- Actual:   Returns 'hiho' as TEXT
SELECT anyel_anyel('hiho'::TEXT);

-- Works as expected. Should accept INTEGER, should output INTEGER to match input data-type.
-- Expected: Returns 1 as INTEGER
-- Actual:   Returns 1 as INTEGER
SELECT anyel_anyel(1::INTEGER);

-- Works as expected. Should accept TEXT[], should output TEXT[] to match input data-type.
-- Expected: Returns {one,two} as TEXT[]
-- Actual:   Returns {one,two} as TEXT[]
SELECT anyel_anyel(ARRAY['one', 'two']::TEXT[]);

-- Works as expected. Should accept INTEGER[], should output INTEGER[] to match input data-type.
-- Expected: Returns {1,2,3} as INTEGER[]
-- Actual:   Returns {1,2,3} as INTEGER[]
SELECT anyel_anyel(ARRAY[1,2,3]::INTEGER[]);

DROP FUNCTION IF EXISTS anyar_anyar(anyarray);
CREATE OR REPLACE FUNCTION anyar_anyar(anyarray) RETURNS anyarray AS
$BODY$
BEGIN
    RETURN $1;
END;
$BODY$ LANGUAGE plpgsql;

-- Works as expected. Should not accept TEXT because it is not an array.
-- Expected: ERROR: function anyar_anyar(text) does not exist
-- Actual:   ERROR: function anyar_anyar(text) does not exist
SELECT anyar_anyar('hiho'::TEXT);

-- Works as expected. Should not accept INTEGER because it is not an array.
-- Expected: ERROR: function anyar_anyar(integer) does not exist
-- Actual:   ERROR: function anyar_anyar(integer) does not exist
SELECT anyar_anyar(1::INTEGER);

-- Works as expected. Should accept TEXT[], should output TEXT[].
-- Expected: Returns {one,two} as TEXT[]
-- Actual:   Returns {one,two} as TEXT[]
SELECT anyar_anyar(ARRAY['one', 'two']::TEXT[]);

-- Works as expected. Should accept INTEGER[], should output INTEGER[].
-- Expected: Returns {1,2,3} as INTEGER[]
-- Actual:   Returns {1,2,3} as INTEGER[]
SELECT anyar_anyar(ARRAY[1,2,3]::INTEGER[]);

DROP FUNCTION IF EXISTS anyar_anyel(anyarray);
CREATE OR REPLACE FUNCTION anyar_anyel(anyarray) RETURNS anyelement AS
$BODY$
BEGIN
    RETURN $1;
END;
$BODY$ LANGUAGE plpgsql;

-- Works as expected. Should not accept TEXT because it is not an array.
-- Expected: ERROR: function anyar_anyel(text) does not exist
-- Actual:   ERROR: function anyar_anyel(text) does not exist
SELECT anyar_anyel('hiho'::TEXT);

-- Works as expected. Should not accept INTEGER because it is not an array.
-- Expected: ERROR: function anyar_anyel(integer) does not exist
-- Actual:   ERROR: function anyar_anyel(integer) does not exist
SELECT anyar_anyel(1::INTEGER);

-- Does not work as expected. Should accept TEXT[], should output TEXT[] to match input data-type.
-- Expected: Returns {one,two} as TEXT[]
-- Actual:   Returns {one,two} as TEXT
SELECT anyar_anyel(ARRAY['one', 'two']::TEXT[]);

-- Does not work as expected. Should accept INTEGER[], should output INTEGER[] to match input data-type.
-- Expected:
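As for the original ANYARRAY_REMOVE goal, a workaround for the scalar case is to unnest and re-aggregate. The sketch below is my own (the function name and semantics are not built-ins); it removes all occurrences of elem, and array_agg over unnest preserves input order in practice. The failing array-argument call would need a separate (anyarray, anyarray) overload, since anyelement has to resolve to the array's element type:

```sql
CREATE OR REPLACE FUNCTION anyarray_remove_sketch(arr anyarray, elem anyelement)
RETURNS anyarray AS $$
    -- Keep every element that is not equal to elem; IS DISTINCT FROM
    -- also keeps NULL elements of the array instead of dropping them.
    SELECT array_agg(x)
    FROM unnest(arr) AS t(x)
    WHERE x IS DISTINCT FROM elem;
$$ LANGUAGE sql;

-- SELECT anyarray_remove_sketch(ARRAY[1,2,3], 2);
```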
[GENERAL] ZFS snapshots - to use pg_start_backup() and pg_stop_backup() or no?
I was trying to figure out how to get the following syntax to work:

echo "select pg_start_backup('zfs_snapshot'); \\! zfs snapshot zroot/zpgsql@test; \\ select pg_stop_backup();" | psql postgres

The above command successfully starts the backup and creates the snapshot, but then fails to stop the backup. I've tried various combinations of \ and \\ here with different whitespace and I just can't seem to find a combination that works. I don't understand the proper use of \\ (described as the separator metacommand).

However, in my research I noted that a bunch of people seem to not even bother with pg_start_backup/pg_stop_backup, and I guess they aren't that worried about the crash-recovery process if they need to perform a restore. I also note that the start/stop backup functions are omitted from the File System Level Backup page: http://www.postgresql.org/docs/9.2/static/backup-file.html

Are pg_start_backup() and pg_stop_backup() even necessary? It would be nice to understand the proper syntax for the psql pipe regardless of whether it is necessary in this case.

Thanks,
Greg Haase
Re: [GENERAL] ZFS snapshots - to use pg_start_backup() and pg_stop_backup() or no?
On Sep 11, 2013, at 4:29 PM, Gregory Haase haa...@onefreevoice.com wrote:
> I was trying to figure out how to get the following syntax to work:
> echo "select pg_start_backup('zfs_snapshot'); \\! zfs snapshot zroot/zpgsql@test; \\ select pg_stop_backup();" | psql postgres

I do:

psql -c "select pg_start_backup('whatever')"
zfs snapshot pool/fs@sn
psql -c "select pg_stop_backup()"

That way there is no need to shell out from psql :)

> The above command successfully starts the backup and creates the snapshot but then fails to stop the backup. I've tried various combinations of \ and \\ here with different whitespace and I just can't seem to find a combination that works. I don't understand the proper use of \\ (described as the separator metacommand).

Keep in mind that echo \\ will actually only echo '\', because \ is a shell escape as well...

> However, in my research, I noted that a bunch of people seem to just not even bother with pg_start_backup/pg_stop_backup and I guess aren't that worried about the crash recovery process if they need to perform a restore. I also note the omission of the start/stop backup functions from the File System Level Backup page: http://www.postgresql.org/docs/9.2/static/backup-file.html Are pg_start_backup() and pg_stop_backup() even necessary?

If all of your Postgres files are part of *the same* consistent snapshot (i.e. are on one FS that gets snapshotted), then the start/stop backup should not be necessary. It will just look like a server crash instead. pg_start_backup is used when you do not have filesystem snapshotting available, and is described in detail on the next manual page: http://www.postgresql.org/docs/9.2/static/continuous-archiving.html
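Wrapped up as a script, that three-command sequence might look like the sketch below. The dataset name, label, and connection details are placeholders for your own setup, and the trap tries to ensure pg_stop_backup() runs even if the snapshot command fails:

```shell
#!/bin/sh
set -e  # abort if pg_start_backup fails, so we never snapshot an unprepared cluster

psql -U postgres -c "select pg_start_backup('zfs_snapshot')"

# From here on, always end the backup on exit, even if zfs snapshot errors out.
trap 'psql -U postgres -c "select pg_stop_backup()"' EXIT

zfs snapshot zroot/zpgsql@$(date +%Y%m%d-%H%M%S)
```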
Re: [GENERAL] ZFS snapshots - to use pg_start_backup() and pg_stop_backup() or no?
Good point on not needing to shell out. I think my process was a mental holdover from the fact that MySQL releases 'flush tables with read lock' on client disconnect.

Typically, how fast is crash recovery for a ~1TB database with heavy OLTP load? Are we talking several seconds, several minutes, several hours?

Thanks,
-G

On Wed, Sep 11, 2013 at 4:46 PM, Steven Schlansker ste...@likeness.com wrote:
> If all of your Postgres files are part of *the same* consistent snapshot (i.e. are on one FS that gets snapshotted), then the start/stop backup should not be necessary. It will just look like a server crash instead.
Re: [GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2
I took your sample and modified it a bit.

ERROR: table name tblb specified more than once

Switch tblB and tblC in the subquery and it works.

WITH tblA (a_id, a_val) AS ( VALUES (1::integer, 'one-1'::varchar) )
   , tblB (b_id, b_val) AS ( VALUES (1::integer, 'one-b'::varchar) )
   , tblC (c_id, c_val) AS ( VALUES (1::integer, 'one-c'::varchar) )
select *
from tblA
join tblB on tblA.a_id = tblB.b_id
join ( tblB
       join tblC on tblC.c_id = tblB.b_id
     ) as x on tblA.a_id = x.c_id;

Am 12.09.2013 00:52, schrieb David Johnston:
> Andreas-3-2 wrote
>> I never placed a bug-report. :} Could you give a hand?
> Sure. PostgreSQL homepage (postgresql.org) Report a Bug link on right-hand side of page. Fill in the form.
Re: [GENERAL] pg_largeobjects
Hey,

This does work, but as I'm using DEFAULT PRIVILEGES to give access to tables, it becomes a (the only) step which can't be done at schema-creation time and has to be done at data-insertion time.

It feels to me that ALTER DEFAULT PRIVILEGES should be extended to support large objects (either by default from the table permissions or as a new GRANT option). Thoughts on this?

Cheers,
James Sewell, PostgreSQL Team Lead / Solutions Architect
Level 2, 50 Queen St, Melbourne VIC 3000
P (03) 8370 8000  W www.lisasoft.com  F (03) 8370 8000

On Wed, Sep 11, 2013 at 6:40 PM, Raghavendra raghavendra@enterprisedb.com wrote:
> On Wed, Sep 11, 2013 at 10:19 AM, James Sewell james.sew...@lisasoft.com wrote:
>> Hello all, I have a table which makes use of pg_largeobjects. I am inserting rows into the table as user1. If I connect to the database as user2 I can SELECT data, but can not open the large object for reading (user1 can do this). I don't want to set lo_compat_privileges, as then user3 (who can't SELECT from the services tables) would be able to read the large object.
>
> GRANT SELECT, UPDATE ON LARGE OBJECT to user2;
>
> Will this work?
>
> Regards, Raghavendra, EnterpriseDB Corporation, Blog: http://raghavt.blogspot.com/
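For reference, the per-object form of that grant names a specific large object OID, which is exactly why it can only happen at data-insertion time. A sketch, where 12345 stands in for the loid value actually stored in the table:

```sql
-- Grant read and write on one specific large object to user2.
-- 12345 is a placeholder for the OID of the large object in question.
GRANT SELECT, UPDATE ON LARGE OBJECT 12345 TO user2;
```

Because the grant is per OID, it has to be issued for each new large object as it is created, which is the pain point that ALTER DEFAULT PRIVILEGES support would remove.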
[GENERAL] fsync and wal_sync_method
Hi,

I am using pg 9.2.4 and Ubuntu 12.04 to set up a pg-Master/pg-Slave pair with streaming replication. archive_command is enabled, and rsync is used on pg-Master to save all its archived WAL files to a 3rd machine for backup purposes. By default, both fsync and wal_sync_method are commented out in postgresql.conf:

archive_command = '(cp %p /usr/pg_arcxlog/%f && rsync -aq %p pg@pg_slave:/usr/pg_arcxlog/%f)'
#fsync = on                 # turns forced synchronization on or off
#wal_sync_method = fsync    # the default is the first option

Thus I think the latest WAL might not be flushed to disk from time to time on pg-Master, and therefore the WAL saved on the 3rd server might not contain the latest WAL. In order to avoid possible data loss, should fsync be turned on in pg-Master and pg-Slave? If yes, what would be the best settings of wal_sync_method for these two servers respectively? Please advise.

regards
Re: [GENERAL] Odd behavior in functions w/ anyarray anyelement
Joshua Burns wrote:
> CREATE OR REPLACE FUNCTION anyel_anyar(anyelement) RETURNS anyarray AS
> $BODY$
> BEGIN
>     RETURN $1;
> END;
> $BODY$ LANGUAGE plpgsql;

Two possible interpretations:
1) must return an array of whatever type is supplied; this is (apparently) the defined behavior
2) must return an array whose base type is the same as the base type of the input

The first interpretation seems the most useful. Your last two examples for this function should indeed fail. That said, I am not really sure why they have to. In theory RETURNS anyarray, if paired with an array anyelement, could output/require an array with one additional dimension compared to the input. In your example you should expect something like:

3) ARRAY[ARRAY['one','two']::text[]]::text[][]

I guess this could be considered a third interpretation. So the bigger question is: should PostgreSQL really care? Option #2 then makes the fewest assumptions: the base types must match AND the output must be some form of array. And the biggest question is whether there are use-cases for the more complex/flexible behavior, so that someone may be enticed to implement it and consider the backward compatibility concerns.

Regardless, hopefully this aids your understanding.

David J.
Re: [GENERAL] Odd behavior in functions w/ anyarray anyelement
Joshua Burns wrote:
> DROP FUNCTION IF EXISTS anyar_anyel(anyarray);
> CREATE OR REPLACE FUNCTION anyar_anyel(anyarray) RETURNS anyelement AS
> $BODY$
> BEGIN
>     RETURN $1;
> END;
> $BODY$ LANGUAGE plpgsql;

Similar to my comment on anyelement-to-anyarray: the original goal here is to reduce dimensionality. In the simple case an array with a base type is required as input, and the output is a scalar having the same base type. For both of these the most common usage is to go between 0-dim. and 1-dim. In theory this limited behavior should probably be restricted to anynonarray-to-anyarray and vice versa, while the anyelement variations can allow the more permissive/flexible check of identical base types.

Again, hopefully this is educational. I'm going from the described behavior and my general understanding of how things work; I am not familiar with the source code nor the design considerations that preceded its creation. I'm not sure any real conclusions/goals can be drawn absent something more specific than example queries. The behavior described (up/down-grading between 1-dim arrays and scalar values) explains away all your unexpected results. But your expectations are not unreasonable, just not commonly used/needed in practice; or if they are, someone else has a solution/work-around I am not familiar with.

David J.
Re: [GENERAL] fsync and wal_sync_method
On Wed, Sep 11, 2013 at 6:11 PM, ascot.m...@gmail.com ascot.m...@gmail.com wrote:
> In order to avoid possible data loss, should fsync be turned on in pg-Master and pg-Slave? If yes, what would be the best settings of wal_sync_method for these two servers respectively?

I think you are confusing fsync with switching WAL segments for archiving purposes. Here's a link to the docs to get you started: http://www.postgresql.org/docs/9.2/static/runtime-config-wal.html#GUC-ARCHIVE-TIMEOUT

fsync is enabled unless you disable it; the fact that it is commented out means that it is set to its default (on). wal_sync_method at its default is probably fine, but if you are interested, look at the pg_test_fsync tool: http://www.postgresql.org/docs/9.2/static/pgtestfsync.html
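Put differently, the commented-out lines already describe the effective configuration. If the concern is how stale the rsynced archive can get, archive_timeout is the relevant knob. A sketch (the 300-second value is an arbitrary example, not a recommendation, and the wal_sync_method default varies by platform):

```
# postgresql.conf (sketch)
fsync = on                    # already the default; a commented-out line means the same
#wal_sync_method = fdatasync  # platform-dependent default; measure with pg_test_fsync first
archive_timeout = 300         # force a WAL segment switch at least every 5 minutes
```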
[GENERAL] Best way to populate nested composite type from JSON
Hi everyone;

I have been starting to experiment with the JSON accessors in 9.3. I immediately found that my preferred use, populating nested composite types, is not supported. Also, of course, one cannot manipulate JSON nodes, which leads to the question of how best to do this. I had some ideas:

1. A custom C function utilizing the internal functions PostgreSQL offers for JSON parsing (probably looking at json_populate_record() as a base)
2. Utilizing hstore as an intermediate representation along with json_each()
3. Perhaps using something like pl/perlu or the like

This leads to a few questions, including whether it would be worthwhile to look at handling nested types in json_populate_record(), and whether anyone is working on such a thing currently.

--
Best Wishes,
Chris Travers
Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in. http://www.efficito.com/learn_more.shtml
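For what it's worth, idea 2 already covers the flat (non-nested) case. The sketch below assumes the hstore extension is installed and uses a hypothetical composite type person; the nested case is exactly what this approach cannot express, which is why the question stands:

```sql
CREATE EXTENSION IF NOT EXISTS hstore;
CREATE TYPE person AS (name text, age int);

-- json_each_text flattens the object into key/value text pairs,
-- hstore(keys, values) rebuilds a map, and populate_record (from the
-- hstore extension) casts each value to the matching field type.
SELECT populate_record(null::person,
                       hstore(array_agg(key), array_agg(value)))
FROM json_each_text('{"name": "Ann", "age": 42}'::json);
```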
Re: [GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2
Here is a minimal query that demonstrates the problem. In 9.1 it works:

chris=# select * FROM current_user u join (current_user u cross join current_user v) x on true;
   u   |   u   |   v
-------+-------+-------
 chris | chris | chris
(1 row)

On 9.3 it fails:

ERROR: table name "u" specified more than once

It may be a silly example but it works.

On Wed, Sep 11, 2013 at 5:07 PM, Andreas maps...@gmx.net wrote:
> I took your sample and modified it a bit.
> ERROR: table name tblb specified more than once
> Switch tblB and tblC in the subquery and it works.

--
Best Wishes,
Chris Travers
Efficito: Hosted Accounting and ERP.
Robust and Flexible. No vendor lock-in. http://www.efficito.com/learn_more.shtml