Re: [GENERAL] Migrating from MySQL
Heyho! On Sunday 27 June 2010 08:22:09 RP Khare wrote: I downloaded PostgreSQL Plus Advanced Server from the EnterpriseDB website. Should I go with the original community PostgreSQL edition or the EnterpriseDB edition? If you work on a Linux/BSD/... OS distribution with a sane package manager, I'd always try to run the version that comes with the distribution: you won't have to care about security updates and the like; they'll just come in with the OS updates. Of course, on Windows (and commercial Unices?), you don't have that choice. cheers -- vbi -- Could this mail be a fake? (Answer: No! - http://fortytwo.ch/gpg/intro)
Re: [GENERAL] Scheduling backup
use cron? On 6/30/2010 10:37 AM, RP Khare wrote: Is there any way to schedule PGSQL database backups? I want to take hourly dumps of my production database. Rohit Prakash -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Scheduling backup
In response to RP Khare: Is there any way to schedule PGSQL database backups? I want to take hourly dumps of my production database. You can use the OS scheduler, for instance cron on UNIX-like systems. Regards, Andreas -- Andreas Kretschmer Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: - Header) GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99
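To make the cron suggestion concrete, an hourly crontab entry might look like the sketch below. The database name mydb, the backup directory, and the use of a ~/.pgpass file for non-interactive authentication are all assumptions, not details from the original post.

```
# m h dom mon dow  command
# Run at minute 0 of every hour; one compressed custom-format dump per hour.
# Note: % is special in crontab lines and must be escaped as \%.
0 * * * * pg_dump -U postgres -Fc mydb > /var/backups/pg/mydb-$(date +\%Y\%m\%d\%H).dump
```

At one dump per hour these accumulate quickly; a second cron job such as `find /var/backups/pg -name '*.dump' -mtime +7 -delete` can prune old ones.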
Re: [GENERAL] Scheduling backup
On 30/06/10 8:37 AM, RP Khare wrote: Is there any way to schedule PGSQL database backups? I want to take hourly dumps of my production database. Rohit Prakash You can use crontab (Linux) or Scheduled Tasks (Windows). Alternatively, you can use pgAgent for scheduling jobs. For more information on pgAgent, please refer to the following link: http://www.pgadmin.org/docs/1.4/pgagent.html -- Thanks Regards, Vibhor Kumar. EnterpriseDB Corporation The Enterprise Postgres Company Office: 732-331-1300 Ext-2022 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com Follow us on Twitter: http://www.twitter.com/enterprisedb
Fwd: [GENERAL] Scheduling backup
Rohit, yes, there is. - Click on the Start icon (XP) or Windows icon (W7) to bring up your Programs menu - click on Control Panel - click on the Administration icon - DOUBLE-CLICK on Scheduled Tasks - click on a new scheduled task; in the Add Scheduled Task wizard you can add the commands for running pg_dump - to learn more about the PostgreSQL pg_dump utility, please read up on http://developer.postgresql.org/pgdocs/postgres/app-pgdump.html best wishes, Harald On Wed, Jun 30, 2010 at 09:37, RP Khare passionate_program...@hotmail.com wrote: Is there any way to schedule PGSQL database backups? I want to take hourly dumps of my production database. Rohit Prakash -- GHUM Harald Massa persuadere et programmare Harald Armin Massa Spielberger Straße 49 70435 Stuttgart 0173/9409607 no fx, no carrier pigeon - Using PostgreSQL is mostly about sleeping well at night.
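For scripted setups, the equivalent task can also be created from the Windows command line with schtasks. This is an untested sketch: the install path, the database name mydb, and the backup location are assumptions, and the password should come from %APPDATA%\postgresql\pgpass.conf rather than the command line.

```
schtasks /create /sc hourly /tn "pg_hourly_dump" /tr "\"C:\Program Files\PostgreSQL\8.4\bin\pg_dump.exe\" -U postgres -Fc -f C:\backups\mydb.dump mydb"
```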
Re: [GENERAL] Scheduling backup
On 06/30/10 12:37 AM, RP Khare wrote: Is there any way to schedule PGSQL database backups? I want to take hourly dumps of my production database. If those are your backup requirements, you should take a look at PITR, Point-In-Time Recovery, where you take just an occasional full database dump, then archive WAL logs, which record the changes to the database minute by minute. These logs can be used to play back the database transactions to any specific point in time.
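As a sketch of what enabling WAL archiving looked like on the 8.x releases under discussion (the archive directory is a placeholder; check the version-specific documentation before relying on this):

```
# postgresql.conf -- continuous archiving (8.x era)
archive_mode = on                                   # requires a server restart
archive_command = 'cp %p /var/lib/pg_archive/%f'    # %p = WAL file path, %f = file name
```

A base backup (pg_start_backup(), filesystem copy, pg_stop_backup()) plus the archived WAL segments is then enough to restore the cluster to any point in time between backups.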
[GENERAL] pgpool2 + slony 1 in master/slave mode with no load balance and no select replication - slaves handle all reads
In my current configuration all queries are sent to the master. I want write queries to be sent to the master, and reads to the slave. I have one pgpool set up with two PostgreSQL servers in master/slave replication with Slony-I. Can this be done with one pgpool, or must I use two pgpools for writes and reads? Here is the pgpool.conf:

# pgpool-II configuration file sample
# $Header: /cvsroot/pgpool/pgpool-II/pgpool.conf.sample,v 1.32 2010/01/31 02:22:24 t-ishii Exp $

# Host name or IP address to listen on: '*' for all, '' for no TCP/IP connections
listen_addresses = 'localhost'

# Port number for pgpool
port =

# Port number for pgpool communication manager
pcp_port = 9898

# Unix domain socket path. (The Debian package defaults to /var/run/postgresql.)
socket_dir = '/tmp'

# Unix domain socket path for pgpool communication manager.
# (Debian package defaults to /var/run/postgresql)
pcp_socket_dir = '/tmp'

# Unix domain socket path for the backend. Debian package defaults to /var/run/postgresql!
backend_socket_dir = '/tmp'

# pgpool communication manager timeout. 0 means no timeout, but strongly not recommended!
pcp_timeout = 10

# number of pre-forked child processes
num_init_children = 32

# Number of connection pools allowed for a child process
max_pool = 4

# If idle for this many seconds, child exits. 0 means no timeout.
child_life_time = 300

# If idle for this many seconds, connection to PostgreSQL closes.
# 0 means no timeout.
connection_life_time = 0

# If child_max_connections connections were received, child exits.
# 0 means no exit.
child_max_connections = 0

# If client_idle_limit is n (n > 0), the client is forced to be disconnected
# after n seconds idle (even inside an explicit transaction!).
# 0 means no disconnect.
client_idle_limit = 0

# Maximum time in seconds to complete client authentication.
# 0 means no timeout.
authentication_timeout = 60

# Logging directory
logdir = '/tmp'

# pid file name
pid_file_name = '/var/run/pgpool/pgpool.pid'

# Replication mode
replication_mode = false

# Load balancing mode, i.e., all SELECTs are load balanced.
# This is ignored if replication_mode is false.
load_balance_mode = false

# If there's a data mismatch between master and secondary,
# start degeneration to stop replication mode
replication_stop_on_mismatch = false

# If true, replicate SELECT statements when load balancing is disabled.
# If false, they are only sent to the master node.
replicate_select = false

# Semicolon-separated list of queries to be issued at the end of a session
reset_query_list = 'ABORT; DISCARD ALL'
# For 8.2 or older this should be as follows.
#reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT'

# If true, print a timestamp on each log line.
print_timestamp = true

# If true, operate in master/slave mode.
master_slave_mode = true

# If true, cache connection pool.
connection_cache = true

# Health check timeout. 0 means no timeout.
health_check_timeout = 20

# Health check period. 0 means no health check.
health_check_period = 10

# Health check user
health_check_user = 'postgres'

# Execute command on failover.
# special values: %d = node id, %h = host name, %p = port number,
#                 %D = database cluster path, %m = new master node id,
#                 %M = old master node id, %% = '%' character
failover_command = 'echo host:%h, new master id:%m, old master id:%M port number:%p >> /tmp/failover.log'

# Execute command on failback. Same special values as above.
#failback_command = 'echo host:%h, new master id:%m, old master id:%M port number:%p >> /tmp/failback.log'

# If true, trigger failover when writing to the backend communication socket
# fails. This is the same behavior as pgpool-II 2.2.x or earlier.
# If set to false, pgpool will report an error and disconnect the session.
fail_over_on_backend_error = true

# If true, automatically lock a table with INSERT statements to keep SERIAL
# data consistency. If the data does not have a SERIAL data type, no lock
# will be issued. An /*INSERT LOCK*/ comment has the same effect.
# A /*NO INSERT LOCK*/ comment disables the effect.
insert_lock = true

# If true, ignore leading white space of each query while pgpool judges
# whether the query is a SELECT so that it can be load balanced. This is
# useful for certain APIs such as DBI/DBD which are known to add an extra
# leading white space.
ignore_leading_white_space = true

# If true, print all statements to the log. Like the log_statement option
# in PostgreSQL, this allows for observing queries without engaging in full
# debugging.
log_statement = false

# If true, print all statements to the log. Similar to
[GENERAL] Problem with temporary tables
Hello. I am trying to use temporary tables inside a stored procedure, but I get a rather puzzling error. I am currently using PostgreSQL 8.2.7 and this is my stored procedure:

CREATE OR REPLACE FUNCTION test() RETURNS bigint AS $$
DECLARE
    v_oid bigint;
BEGIN
    -- create tmp-table used to map old-id to new-id
    CREATE TEMPORARY TABLE idmap (oldid bigint PRIMARY KEY, type bigint, newid bigint) ON COMMIT DROP;
    SELECT INTO v_oid oid FROM pg_class WHERE relname = 'idmap';
    RAISE NOTICE 'OOID of idmap %', v_oid;
    INSERT INTO idmap (oldid, type, newid) VALUES(1, 1, 1);
    RETURN 1;
END;
$$ LANGUAGE plpgsql;

The first time I invoke the stored procedure, everything goes fine:

# begin;select test();commit;
BEGIN
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "idmap_pkey" for table "idmap"
CONTEXT: SQL statement "CREATE TEMPORARY TABLE idmap (oldid bigint PRIMARY KEY, type bigint, newid bigint) ON COMMIT DROP"
PL/pgSQL function "test" line 9 at SQL statement
NOTICE: OOID of idmap 475391180
 test
------
    1
(1 row)
COMMIT

The second time I invoke the stored procedure, I get an error:

# begin;select test();commit;
BEGIN
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "idmap_pkey" for table "idmap"
CONTEXT: SQL statement "CREATE TEMPORARY TABLE idmap (oldid bigint PRIMARY KEY, type bigint, newid bigint) ON COMMIT DROP"
PL/pgSQL function "test" line 9 at SQL statement
NOTICE: OOID of idmap 475391184
ERROR: relation with OID 475391180 does not exist
CONTEXT: SQL statement "INSERT INTO idmap (oldid, type, newid) VALUES(1, 1, 1)"
PL/pgSQL function "test" line 16 at SQL statement
ROLLBACK

The strange part is that the OID reported in the second invocation's error is the one from the first invocation! Am I doing something wrong or is this a bug? Thanks!
Re: [GENERAL] Problem with temporary tables
In response to Andrea Lombardoni: The strange part is that the second time, the OID reported in the error is the one from the first invocation! Am I doing something wrong or is this a bug? The plan is cached; to avoid this problem, use dynamic SQL. In your case: EXECUTE 'CREATE TEMPORARY TABLE idmap ...' Regards, Andreas -- Andreas Kretschmer Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: - Header) GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99
Re: [GENERAL] Problem with temporary tables
Am I doing something wrong or is this a bug? The plan is cached; to avoid this problem, use dynamic SQL. In your case: EXECUTE 'CREATE TEMPORARY TABLE idmap ...' Nice idea, but the problem persists, see the log below. I am beginning to mentally place this into the 'bug' area :)

CREATE OR REPLACE FUNCTION test() RETURNS bigint AS $$
DECLARE
    v_oid bigint;
BEGIN
    -- create tmp-table used to map old-id to new-id
    -- type: 1=skill 3=function
    EXECUTE 'CREATE TEMPORARY TABLE idmap (oldid bigint PRIMARY KEY, type bigint, newid bigint) ON COMMIT DROP';
    SELECT INTO v_oid oid FROM pg_class WHERE relname = 'idmap';
    RAISE NOTICE 'OOID of idmap %', v_oid;
    -- add id mapping (type=1)
    INSERT INTO idmap (oldid, type, newid) VALUES(1, 1, 1);
    RETURN 1;
END;
$$ LANGUAGE plpgsql;

# begin;select test();commit;
BEGIN
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "idmap_pkey" for table "idmap"
CONTEXT: SQL statement "CREATE TEMPORARY TABLE idmap (oldid bigint PRIMARY KEY, type bigint, newid bigint) ON COMMIT DROP"
PL/pgSQL function "test" line 9 at execute statement
NOTICE: OOID of idmap 475391188
 test
------
    1
(1 row)
COMMIT

# begin;select test();commit;
BEGIN
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "idmap_pkey" for table "idmap"
CONTEXT: SQL statement "CREATE TEMPORARY TABLE idmap (oldid bigint PRIMARY KEY, type bigint, newid bigint) ON COMMIT DROP"
PL/pgSQL function "test" line 9 at execute statement
NOTICE: OOID of idmap 475391192
ERROR: relation with OID 475391188 does not exist
CONTEXT: SQL statement "INSERT INTO idmap (oldid, type, newid) VALUES(1, 1, 1)"
PL/pgSQL function "test" line 16 at SQL statement
ROLLBACK
Re: [GENERAL] Problem with temporary tables
On Wednesday 30 June 2010 6:21:44 am Andrea Lombardoni wrote: Nice idea, but the problem persists, see log below. I am beginning to mentally place this into the 'bug' area :) [function and session log snipped] You need to use EXECUTE for the INSERT statement as well, per the error: CONTEXT: SQL statement "INSERT INTO idmap (oldid, type, newid) VALUES(1, 1, 1)" PL/pgSQL function "test" line 16 at SQL statement -- Adrian Klaver adrian.kla...@gmail.com
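Putting the two suggestions together, a version of test() that should survive repeated invocation on 8.2 routes both the CREATE and the INSERT through EXECUTE, so neither statement gets a cached plan pinned to the dropped table's OID. This is a sketch based on the function in this thread, not re-tested against 8.2:

```sql
CREATE OR REPLACE FUNCTION test() RETURNS bigint AS $$
DECLARE
    v_oid bigint;
BEGIN
    -- Dynamic SQL bypasses plpgsql's plan cache, which on 8.2 would
    -- otherwise keep referencing the dropped table's old OID.
    EXECUTE 'CREATE TEMPORARY TABLE idmap (oldid bigint PRIMARY KEY, type bigint, newid bigint) ON COMMIT DROP';
    SELECT INTO v_oid oid FROM pg_class WHERE relname = 'idmap';
    RAISE NOTICE 'OID of idmap %', v_oid;
    -- The INSERT must be dynamic too, or its cached plan pins the old OID.
    EXECUTE 'INSERT INTO idmap (oldid, type, newid) VALUES (1, 1, 1)';
    RETURN 1;
END;
$$ LANGUAGE plpgsql;
```

The same applies to any later SELECT, UPDATE, or DELETE against the temporary table inside the function: on 8.2, each of them has to go through EXECUTE as well.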
Re: [GENERAL] Problem with temporary tables
You need to use EXECUTE for the INSERT statement as well per error: CONTEXT: SQL statement INSERT INTO idmap (oldid, type, newid) VALUES(1, 1, 1) PL/pgSQL function test line 16 at SQL statement Thanks, this works and solves my problem. Still, I find this behaviour to be rather quirky. Ideally the generated query plan should notice such cases and either report an error or use the version of the temporary table currently 'live'. At least this quirk should be highlighted both in the plpgsql documentation page http://www.postgresql.org/docs/8.2/interactive/plpgsql-overview.html and linked from the CREATE TABLE page http://www.postgresql.org/docs/8.2/interactive/sql-createtable.html I will propose these changes in the appropriate mailing lists. Have a nice day!
Re: [GENERAL] Problem with temporary tables
On Wed, Jun 30, 2010 at 2:41 PM, Andrea Lombardoni and...@lombardoni.ch wrote: You need to use EXECUTE for the INSERT statement as well per error. Thanks, this works and solves my problem. Still, I find this behaviour to be rather quirky. It is fixed in 8.3. So you are out of luck with 8.2 on this one I'm afraid; gotta go with EXECUTE. -- GJ
Re: [GENERAL] Problem with temporary tables
Hello, in PostgreSQL 8.2 and older you have to respect one rule: never drop the temp table. You don't need to; after the session ends, all temp tables are removed automatically. You can execute an initialisation part like:

CREATE OR REPLACE FUNCTION check_tab() RETURNS void AS $$
BEGIN
    BEGIN
        TRUNCATE TABLE foo;
    EXCEPTION WHEN others THEN
        CREATE TEMP TABLE foo(a int);
    END;
    RETURN;
END;
$$ LANGUAGE plpgsql;

This problem is solved from 8.3. 2010/6/30 Andrea Lombardoni and...@lombardoni.ch: Am I doing something wrong or is this a bug? Thanks!
[GENERAL] Postgres table contents versioning
Is there an equivalent of svn/git etc. for the data in a database's tables? Can I set something up so that I can see what was in the table two days/months etc. ago? I realize that in the case of rapidly changing hundred-million-row tables this presents an impossible problem. The best kludge I can think of is copying the tables to a directory and git-ing the directory. Thanks, John
Re: [GENERAL] Problem with temporary tables
On Wednesday 30 June 2010 6:41:18 am Andrea Lombardoni wrote: Thanks, this works and solves my problem. Still, I find this behaviour to be rather quirky. Ideally the generated query plan should notice such cases and either report an error or use the version of the temporary table currently 'live'. In the temporary table case it does, in versions 8.3 and later. From the release notes for 8.3: "Automatically re-plan cached queries when table definitions change or statistics are updated (Tom)" Previously PL/pgSQL functions that referenced temporary tables would fail if the temporary table was dropped and recreated between function invocations, unless EXECUTE was used. This improvement fixes that problem and many related issues. At least this quirk should be highlighted both in the plpgsql documentation page http://www.postgresql.org/docs/8.2/interactive/plpgsql-overview.html and linked from the CREATE TABLE page http://www.postgresql.org/docs/8.2/interactive/sql-createtable.html It is covered here: http://www.postgresql.org/docs/8.2/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN I will propose these changes in the appropriate mailing lists. Have a nice day! -- Adrian Klaver adrian.kla...@gmail.com
Re: [GENERAL] Postgres table contents versioning
In response to John Gage: Is there an equivalent of svn/git etc. for the data in a database's tables? Can I set something up so that I can see what was in the table two days/months etc. ago? You can use tablelog:

15:53 akretschmer ??tablelog
15:53 pg_docbot_adz For information about 'tablelog' see:
15:53 pg_docbot_adz http://andreas.scherbaum.la/blog/archives/100-Log-Table-Changes-in-PostgreSQL-with-tablelog.html
15:53 pg_docbot_adz http://pgfoundry.org/projects/emaj/
15:53 pg_docbot_adz http://pgfoundry.org/projects/tablelog/

-- Andreas Kretschmer Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: - Header) GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99
Re: [GENERAL] Postgres table contents versioning
John Gage jsmg...@numericable.fr wrote: Is there an equivalent of svn/git etc. for the data in a database's tables? Can I set something up so that I can see what was in the table two days/months etc. ago? I realize that in the case of rapidly changing hundred-million-row tables this presents an impossible problem. The best kludge I can think of is copying the tables to a directory and git-ing the directory. If you're looking at this from a disaster recovery point of view, you should read up on PostgreSQL's PITR capabilities. If you need the information in your application, you should google for temporal databases to see how to amend your table structures. Tim
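Amending the table structures along temporal lines can be as simple as an audit trigger that files away every superseded row version. The sketch below is illustrative only: the items table, its columns, and all names are hypothetical, not something from this thread.

```sql
-- A table whose history we want to keep, plus a history side table.
CREATE TABLE items (id int PRIMARY KEY, val text);
CREATE TABLE items_history (
    id         int,
    val        text,
    changed_at timestamptz NOT NULL DEFAULT now(),
    op         char(1)     NOT NULL   -- 'U' = update, 'D' = delete
);

-- Copy the old row version into the history table before it disappears.
CREATE OR REPLACE FUNCTION items_audit() RETURNS trigger AS $$
BEGIN
    INSERT INTO items_history (id, val, op)
    VALUES (OLD.id, OLD.val,
            CASE TG_OP WHEN 'UPDATE' THEN 'U' ELSE 'D' END);
    IF TG_OP = 'DELETE' THEN
        RETURN OLD;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_audit_trg
    BEFORE UPDATE OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE items_audit();
```

"What was in the table two days ago" is then answerable: a row's state at time T is its current state if it hasn't changed since T, otherwise the earliest history entry recorded after T.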
Re: [GENERAL] alter table schema, default sequences stay the same
Sim Zacks s...@compulab.co.il writes: I haven't consumed enough caffeine today to recall the details, but I think you could have ended up with default expressions like the above if the database had been dumped and reloaded from 8.0 or earlier. nextval(regclass) was introduced in 8.1 precisely to solve this type of problem. My first database was an 8.0 and at some point we upgraded to 8.2, so that must have caused the "corruption". Yeah, that would fit. The upgrade process was intentionally designed to work that way, because the pre-8.1 behavior of DEFAULT nextval('foo') was that it always referred to the sequence currently named foo, because 'foo' was just a text constant. We didn't want to risk breaking things for anyone who was depending on that behavior. But now you have to explicitly put in ::text if that's how you want it to behave --- otherwise you get a regclass constant, which will track the original sequence's identity despite renamings. regards, tom lane
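A small illustration of the difference Tom describes, with invented sequence and table names:

```sql
CREATE SEQUENCE foo;

-- regclass constant: bound to the sequence object itself, survives renames.
CREATE TABLE t1 (id int DEFAULT nextval('foo'));        -- stored as nextval('foo'::regclass)

-- text constant: the name is looked up at each call, pre-8.1 style.
CREATE TABLE t2 (id int DEFAULT nextval('foo'::text));

ALTER SEQUENCE foo RENAME TO bar;
-- t1's default now displays as nextval('bar'::regclass) and keeps working;
-- t2's default still looks up a sequence named "foo" at call time and
-- fails until some sequence by that name exists again.
```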
Re: [GENERAL] Problem with temporary tables
On Wed, Jun 30, 2010 at 9:51 AM, Adrian Klaver adrian.kla...@gmail.com wrote: In the temporary table case it does, in versions 8.3 and later. From the release notes for 8.3: "Automatically re-plan cached queries when table definitions change or statistics are updated (Tom)" Previously PL/pgSQL functions that referenced temporary tables would fail if the temporary table was dropped and recreated between function invocations, unless EXECUTE was used. This improvement fixes that problem and many related issues.

Even in 8.3+ it's still better not to drop tables between function calls if it can reasonably be avoided. Temp tables are a headache generally and I try to use them as little as possible. Since I generally like being pedantic, I'll give you a workaround you didn't ask for. :-) For cases where 'on commit drop' (transaction-local data) might be used and arrays are not appropriate for local function storage, I use a persistent table:

create table i_dislike_on_commit_drop (
    tx_time timestamptz default now(),
    pid int default pg_backend_pid(),
    col1 text,
    col2 text
    [etc]
);

create index idocd_idx on i_dislike_on_commit_drop(tx_time, pid);

For scratch inserts, you just:

insert into i_dislike_on_commit_drop(col1, col2) values (...);

For selects, updates and deletes, should you need them, be sure to add WHERE ... AND (tx_time, pid) = (now(), pg_backend_pid()). Just make sure to delete the records once in a while (you can freely delete them all; in-process records won't get hit). You might be tempted to use a view to wrap the table and filter out records; I don't think that's a good idea (it requires writing nasty insert/update rules). merlin
Re: [GENERAL] Backend Crash v8.4.2
On Tue, Jun 29, 2010 at 9:34 AM, Tom Lane t...@sss.pgh.pa.us wrote: Kelly Burkhart kelly.burkh...@gmail.com writes: The crash left a core file, does the stack trace indicate anything crucial?

(gdb) where
#0 0x0068d884 in SearchCatCacheList ()
#1 0x0001 in ?? ()
#2 0x00bbcbe0 in ?? ()
#3 0x7f3b3a86a580 in ?? ()
#4 0x72ddbea20068dae0 in ?? ()
#5 0x7fff78faa720 in ?? ()
#6 0x in ?? ()
Current language: auto
The current source language is "auto"; currently asm.

That's pretty much useless unless you can install debug symbols and try again. I will say though that this is probably a new bug --- I don't recall seeing anything crashing in SearchCatCacheList recently. I had our system people install the debug symbols and I get the same stack trace. I believe the symbols are indeed installed; yesterday when I started gdb I saw a bunch of lines like this:

Missing separate debuginfo for /usr/lib64/libssl.so.0.9.8
Try: zypper install -C "debuginfo(build-id)=c1d9e2a7e013149b5acc4d3580724d4827f5827c"

I don't see that now. Can anyone provide some guidance on how I can go about discovering the cause? Please try to create a reproducible test case. One thing you can get to start from is the query that was being executed --- try this in gdb: p debug_query_string I was able to see the query:

select sd.close, s.minimum_trade_increment
from symbol_daily sd, symbol s
where s.symbol_name = sd.symbol_name
and s.exchange_name = sd.exchange_name
and sd.symbol_name = $1
and sd.trading_dt = last_trading_dt()

It's a well-established query done probably several times each morning. I don't know how to create a reproducible test case as I can't determine anything that we did yesterday that was any different from any other day. -K
Re: [GENERAL] Filtering by tags
No one with any response on this? -- a. Anders Steinlein wrote: What's the recommended way of storing tags in a database, and then filtering based on the existence, or *non*-existence, of those tags on some entities? Our application stores contacts, where each contact may have any number of tags. We do this with the tables contacts, contacts_tags and tags. We also have segments, which define filters on contacts based on specific tags they must have and/or must *not* have. This is defined by the tables segments and segments_tags. (See bottom of post for table definitions.) Finding contacts matching a given segment which has BOTH positive (required tags) and negative (non-existing tags) requirements is easy enough (simplified):

SELECT segmentid, email
FROM segments_tags st
INNER JOIN contacts_tags ct USING (tagname)
INNER JOIN contacts USING (email)
WHERE st.tagtype = 1
GROUP BY 1, 2
HAVING COUNT(*) = (SELECT COUNT(*) FROM segments_tags
                   WHERE segmentid = st.segmentid AND tagtype = 1)
EXCEPT
SELECT segmentid, email
FROM segments_tags st
INNER JOIN contacts_tags ct USING (tagname)
INNER JOIN contacts USING (email)
WHERE st.tagtype = 0;

However, segments which ONLY contain negative requirements (that's tagtype = 0) don't work, for obvious reasons. Is there a way to make this work with a single query for both cases? Possibly using CTEs (which I'm not very familiar with)?
Table definitions:

            Table "public.contacts"
    Column     |            Type             |          Modifiers
---------------+-----------------------------+------------------------------
 email         | email                       | not null
 name          | text                        |
 status        | character(1)                | not null default 'a'::bpchar
 statuschanged | timestamp without time zone |
Indexes:
    "contacts_pkey" PRIMARY KEY, btree (email)

   Table "public.contacts_tags"
 Column  | Type  | Modifiers
---------+-------+-----------
 email   | email | not null
 tagname | text  | not null
Indexes:
    "contacts_tags_pkey" PRIMARY KEY, btree (email, tagname)
    "contacts_tags_tagname" btree (tagname)
Foreign-key constraints:
    "contacts_tags_email_fkey" FOREIGN KEY (email) REFERENCES contacts(email) ON UPDATE CASCADE ON DELETE CASCADE
    "contacts_tags_tagname_fkey" FOREIGN KEY (tagname) REFERENCES tags(tagname) ON UPDATE CASCADE ON DELETE CASCADE

              Table "public.tags"
  Column   |            Type             |       Modifiers
-----------+-----------------------------+------------------------
 tagname   | text                        | not null
 createdat | timestamp without time zone | not null default now()
Indexes:
    "tags_pkey" PRIMARY KEY, btree (tagname)

                 Table "public.segments"
   Column    |            Type             |                           Modifiers
-------------+-----------------------------+---------------------------------------------------------------
 segmentid   | integer                     | not null default nextval('segments_segmentid_seq'::regclass)
 segmentname | text                        | not null
 createdat   | timestamp without time zone | not null default now()
Indexes:
    "segments_pkey" PRIMARY KEY, btree (segmentid)

   Table "public.segments_tags"
  Column   |  Type   | Modifiers
-----------+---------+-----------
 segmentid | integer | not null
 tagname   | text    | not null
 tagtype   | integer | not null
Indexes:
    "segments_tags_pkey" PRIMARY KEY, btree (segmentid, tagname)
Foreign-key constraints:
    "segments_tags_segmentid_fkey" FOREIGN KEY (segmentid) REFERENCES segments(segmentid) ON UPDATE RESTRICT ON DELETE CASCADE
    "segments_tags_tagname_fkey" FOREIGN KEY (tagname) REFERENCES tags(tagname) ON UPDATE CASCADE ON DELETE CASCADE

Regards, -- a.
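One way to handle both kinds of segments, including negative-only ones, in a single query is double-negated NOT EXISTS ("no required tag is missing, and no forbidden tag is present"). This is an untested sketch against the schema above, not something from the thread, and it scans every (segment, contact) pair, so performance on large data sets would need checking:

```sql
SELECT s.segmentid, c.email
FROM segments s
CROSS JOIN contacts c
WHERE NOT EXISTS (                  -- no required tag (tagtype = 1) is missing
        SELECT 1 FROM segments_tags st
        WHERE st.segmentid = s.segmentid AND st.tagtype = 1
          AND NOT EXISTS (SELECT 1 FROM contacts_tags ct
                          WHERE ct.email = c.email
                            AND ct.tagname = st.tagname))
  AND NOT EXISTS (                  -- no forbidden tag (tagtype = 0) is present
        SELECT 1 FROM segments_tags st
        JOIN contacts_tags ct ON ct.tagname = st.tagname
        WHERE st.segmentid = s.segmentid AND st.tagtype = 0
          AND ct.email = c.email);
```

When a segment has no positive tags at all, the first NOT EXISTS is vacuously true, so negative-only segments fall out of the same query instead of needing a separate case.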
Re: [GENERAL] Backend Crash v8.4.2
Kelly Burkhart kelly.burkh...@gmail.com writes: I had our system people install the debug symbols and I get the same stack trace. I believe the symbols are indeed installed, yesterday when I started gdb I saw a bunch of lines like this: Missing separate debuginfo for /usr/lib64/libssl.so.0.9.8 Try: zypper install -C debuginfo(build-id)=c1d9e2a7e013149b5acc4d3580724d4827f5827c I don't see that now. That sounds like you have symbols now for the system libraries, but not postgresql itself. It's a well established query done probably several times each morning. I don't know how to create a reproducible test case as I can't determine anything that we did yesterday that was any different from any other day. Best guess from here is that you managed to run into some sort of cache-reload bug; those are very sensitive to concurrent operations since you only see them when a shared cache inval event happens at just the wrong time. I would recommend an update to 8.4.4 since we did stomp two or three critters of that ilk in the last few months, but I can't really guarantee that we found the one that bit you. While you're at it, please try to make sure you install a non-symbol- stripped version of 8.4.4. If it does happen again, at least you'll be prepared to collect more data. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Postgresql partitioning - single hot table or distributed
Hi Vick, Currently we aren't deleting anything due to business requirements, though at some point we will have to start deleting some data. I suspect when we do it won't be as simple as just dropping the oldest data; some customers will have data that we want to keep permanently, while others will be ok to drop after 30 days. Inserting directly into the specific partition is interesting, but if you're going to go down that route then aren't you starting to implement the partitioning yourself in application code? In that case, what benefit does keeping the Postgresql partitioning in place actually give you? Thanks for the reply. Sam On 30 June 2010 02:39, Vick Khera vi...@khera.org wrote: On Tue, Jun 29, 2010 at 4:00 PM, sam mulube sam.mul...@gmail.com wrote: Alternatively we wondered about partitioning by the server_id foreign key, using for example the modulo of the foreign key id. This would give us a finite number of partitions (rather than the potentially unbounded date option), and would likely cause writes to be much more evenly distributed between the partitions. Do you ever delete? If so, what is the delete pattern? I have several large tables split into 100 partitions which keeps them in the O(10million) row size range each, and that has proven to be ideal for our use. We insert into the partition directly, and select from it whenever possible avoiding the constraint exclusion step. The only time we get bad performance is when doing a join that cannot run constraint exclusion, and that turns out to be pretty expensive. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Filtering by tags
Anders Steinlein and...@steinlein.no wrote: No one with any response on this? [...] Insert a LEFT JOIN in the first subquery? Tim (too lazy to test :-)) -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Backend Crash v8.4.2
On Wed, Jun 30, 2010 at 11:07 AM, Tom Lane t...@sss.pgh.pa.us wrote: Best guess from here is that you managed to run into some sort of cache-reload bug; those are very sensitive to concurrent operations since you only see them when a shared cache inval event happens at just the wrong time. I would recommend an update to 8.4.4 since we did stomp two or three critters of that ilk in the last few months, but I can't really guarantee that we found the one that bit you. While you're at it, please try to make sure you install a non-symbol-stripped version of 8.4.4. If it does happen again, at least you'll be prepared to collect more data. We'll plan on upgrading. RE: stripped symbols, I assume you mean configuring with --enable-debug specified; I see from my config.log that I did not specify that flag. I just built with debug symbols on a non-production machine and the stack trace is different. I assume it's completely invalid because symbol addresses from different builds are not guaranteed to line up. Correct? Or is this helpful?

Program terminated with signal 11, Segmentation fault.
#0  0x0068d884 in RelationCacheInitializePhase2 () at relcache.c:2588
2588            LOAD_CRIT_INDEX(IndexRelidIndexId);
(gdb) where
#0  0x0068d884 in RelationCacheInitializePhase2 () at relcache.c:2588
#1  0x in ?? ()
(gdb)

Thanks, -K -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Filtering by tags
On Wed, Jun 30, 2010 at 05:54:51PM +0200, Anders Steinlein wrote: No one with any response on this? Fun problem, how about:

SELECT x.email, x.segmentid
FROM (SELECT c.email, t.segmentid, t.tagname, t.tagtype
      FROM contacts c, segments_tags t) x
LEFT JOIN contacts_tags t USING (email, tagname)
GROUP BY x.email, x.segmentid
HAVING NOT bool_or((x.tagtype = 0) <> (t.tagname IS NULL));

The HAVING clause is a little obscure, but could also be written:

HAVING COUNT(CASE WHEN x.tagtype = 0 AND t.tagname IS NOT NULL THEN 1 END) = 0
   AND COUNT(CASE WHEN x.tagtype = 1 AND t.tagname IS NULL THEN 1 END) = 0;

It works by keeping count of the number of bad tags; i.e. if the tag type is zero then expect the tag entry not to be found, and the reverse if the tag type is one. Because of the cross join in the inner select this is going to be *slow*, so you may want to limit things a bit by only working with one contact or segment type at a time. Hope that gives you a few ideas! -- Sam http://samason.me.uk/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
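Following up on my own suggestion: the HAVING condition can be sanity-checked without a Postgres instance. Below is a self-contained sketch using Python's stdlib sqlite3 as a stand-in (sqlite has no bool_or, so it uses the equivalent count-of-bad-pairs form; the tiny data set is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE contacts (email TEXT PRIMARY KEY);
CREATE TABLE contacts_tags (email TEXT, tagname TEXT);
CREATE TABLE segments_tags (segmentid INT, tagname TEXT, tagtype INT);

INSERT INTO contacts VALUES ('a@x'), ('b@x');
INSERT INTO contacts_tags VALUES ('a@x', 'vip');   -- only a@x carries the tag
INSERT INTO segments_tags VALUES (1, 'vip', 0);    -- segment 1: must NOT have 'vip'
""")

rows = con.execute("""
SELECT x.email, x.segmentid
FROM (SELECT c.email, s.segmentid, s.tagname, s.tagtype
      FROM contacts c, segments_tags s) x
LEFT JOIN contacts_tags t USING (email, tagname)
GROUP BY x.email, x.segmentid
-- a "bad" pair is a forbidden tag that is present, or a required tag that is missing
HAVING COUNT(CASE WHEN x.tagtype = 0 AND t.tagname IS NOT NULL THEN 1 END) = 0
   AND COUNT(CASE WHEN x.tagtype = 1 AND t.tagname IS NULL THEN 1 END) = 0
""").fetchall()

print(rows)  # [('b@x', 1)]
```

Only b@x survives the purely negative segment, which is exactly the case the original EXCEPT-based query couldn't handle.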
Re: [GENERAL] dropdb weirdness
Silly ideas, but is dropdb confusing the postgres user on the host and a database named postgres? (does the 1st database the command was run on still exist?) Does it do it right if the -U and -W switches are used? Steve On 29 June 2010 22:38, Geoffrey li...@serioustechnology.com wrote: Tom Lane wrote: Adrian Klaver adrian.kla...@gmail.com writes: On Tuesday 29 June 2010 1:04:27 pm Geoffrey wrote: dropdb: could not connect to database postgres: FATAL: database postgres does not exist Why is it not 'seeing' the database name I'm passing to it? Why is it trying to drop a database named postgres?? It needs to connect to the database cluster to run the DROP DATABASE command and is trying to use the system database postgres. Did you drop the postgres database? Does the user you are connecting as have the permissions to postgres? does not exist is not a permissions problem ;-) What I'm wondering is if this indicates use of 8.1 or later dropdb script against a pre-8.1 server. Before 8.1 there wasn't a postgres database by default, and dropdb would instead try to connect to template1. You can work around this by forcing dropdb to connect to an existing database name, but it'd probably be better to keep your client tools in sync with the server version. regards, tom lane I know the version of dropdb is 8.3.6. There SHOULD be only one version of postgres installed on this machine, but I will verify that tomorrow. This is a standard RHEL workstation running on a laptop. The weird thing about this is, I've used this script on three other machines just fine. Further, it worked on another database on this same machine, but two others failed with this same error. I'm instructing the user how to run the script remotely, so I don't have eyes on what she's doing. She says she is running it as the postgres user. I don't have access to her cluster, so I can't verify if the postgres database is there, although I would expect it is as all these machines were set up the same way. 
She's offline for the day, so I'll catch up with her tomorrow and ask her to list her databases in that cluster. Thanks to all for the feedback. -- Until later, Geoffrey I predict future happiness for America if they can prevent the government from wasting the labors of the people under the pretense of taking care of them. - Thomas Jefferson -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Backend Crash v8.4.2
Kelly Burkhart kelly.burkh...@gmail.com writes: RE: stripped symbols, I assume you mean configuring with --enable-debug specified, I see from my config.log that I did not specify that flag. Ah, if you built it yourself, that explains why your sysadmins' installation of symbol packages didn't help. If you're building with gcc, --enable-debug is pretty much always a good idea: it doesn't cost anything but some extra disk space. With some other compilers --enable-debug disables optimization and hence isn't a good idea for production builds. I just built with debug symbols on a non-production machine and the stack trace is different. I assume it's completely invalid because symbol addresses from different builds are not guaranteed to line up. Correct? Or is this helpful? Again, depends if it's gcc. If so, and everything is identical between this machine and the one where you did the original build, this'd probably work. Program terminated with signal 11, Segmentation fault. #0 0x0068d884 in RelationCacheInitializePhase2 () at relcache.c:2588 2588 LOAD_CRIT_INDEX(IndexRelidIndexId); That looks interesting, indeed. I don't think I want to trust it entirely because of the likelihood that there's some difference between this build and the original; but if it's not too far off from reality then it places the failure in relcache.c rather than SearchCatCacheList. And that makes sense because we have indeed fixed several cache-related bugs in relcache over the past six months or so. At this point I'd *strongly* encourage you to update to 8.4.4. And please do build with --enable-debug in future, if you're using gcc. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] reloading dump produces errors
I thought you could use 8.3.* tools against any 8.3.* database, is this not correct? I'm getting the following errors: pg_dumpall -g -p 5436 -h matrix server version: 8.3.7; pg_dumpall version: 8.3.6 aborting because of version mismatch (Use the -i option to proceed anyway.) Would using the '-i' option be safe in this case? -- Until later, Geoffrey I predict future happiness for America if they can prevent the government from wasting the labors of the people under the pretense of taking care of them. - Thomas Jefferson -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] reloading dump produces errors
On Wed, Jun 30, 2010 at 2:16 PM, Geoffrey li...@serioustechnology.com wrote: I thought you could use 8.3.* tools against any 8.3.* database, is this not correct? I'm getting the following errors: pg_dumpall -g -p 5436 -h matrix server version: 8.3.7; pg_dumpall version: 8.3.6 aborting because of version mismatch (Use the -i option to proceed anyway.) Would using the '-i' option be safe in this case? Chances are it will be fine. I don't think the developers will guarantee anything except doing a dump from a version >= your server version, though. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Postgresql partitioning - single hot table or distributed
On Wed, Jun 30, 2010 at 10:20 AM, sam mulube sam.mul...@gmail.com wrote: Inserting directly into the specific partition is interesting, but if you're going to go down that route then aren't you starting to implement the partitioning yourself in application code. In that case what benefit does keeping the Postgresql partitioning in place actually give you? Your benefit comes in very fast lookups when constraint exclusion (or an index query) is working for you. You also get simplified syntax for doing joins against the entire data set. If however you always only search on the ID, then there is really not much benefit other than having smaller indexes. Smaller indexes are great when you have to re-index. Instead of taking say 15 minutes to reindex the one big table, you take 10-15 seconds per partition, which means your applications can move along with only a few seconds' delay while you reindex, instead of being blocked. For one of my tables I could have gone the trigger method for routing inserts because those are onsies-twosies, but for the main table it is much more efficient to compute the partition up front, since I do a *lot* of inserts in a big batch. The drawback to partitioning by an ID number using modulo is that for constraint exclusion to work you have to actually add something like AND (my_id % 42) = 0 to match the constraint. The exclusion is not done by executing the constraint, but by proving the constraint will hold true for the WHERE clause. My reasoning was if I'm going to add that to my select queries, I might as well just pick the table by doing that arithmetic in my application up front. It is faster than having the DB do the constraint proof for each of the 100 partitions. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
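The "pick the table in the application up front" routing can be a one-line helper. A sketch in Python, assuming a hypothetical naming scheme mytable_0 .. mytable_99 and the 100 partitions mentioned above:

```python
NUM_PARTITIONS = 100  # must match the CHECK constraints on the child tables

def partition_for(my_id: int, base: str = "mytable") -> str:
    """Return the child-table name an id routes to under modulo partitioning."""
    return f"{base}_{my_id % NUM_PARTITIONS}"

# An insert can then target the child table directly, skipping trigger-based
# routing, e.g. build: INSERT INTO <partition_for(some_id)> (...) VALUES (...)
print(partition_for(1234))  # mytable_34
```

The table name must be interpolated into the SQL text (identifiers can't be bound parameters), so the helper should only ever be fed trusted integer ids.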
[GENERAL] php and connection
Hi, in a PHP application working on Postgres, a new connection to the database is normally made per request. This can potentially cause too much overhead, so I've got some questions:

- is the overhead really noticeable?
- could this be solved using persistent connections, or do persistent connections in php and postgres not work properly?
- could this be solved using something like pgpool?

regards Szymon
Re: [GENERAL] php and connection
On Wed, 2010-06-30 at 20:42 +0200, Szymon Guz wrote: Hi, in a PHP application working on Postgres normally the new connection to the database is made per request. This can potentially cause too big overhead, so I've got some questions: - is the overhead really noticeable? It can be. - could this be solved using persistent connections, or the persistent connections in php and postgres don't work properly? Don't use them. - could this be solved using something like pgpool? Yes, using a connection pooler will solve the problem. I prefer pgbouncer. Joshua D. Drake regards Szymon -- PostgreSQL.org Major Contributor Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579 Consulting, Training, Support, Custom Development, Engineering -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
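For reference, a minimal pgbouncer.ini along the lines Joshua suggests might look like the following; host, paths, and pool sizes are placeholders to adapt, and transaction pooling is one common choice for request-per-connection PHP apps:

```ini
[databases]
; clients connect to "mydb" on pgbouncer, which forwards to the real server
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 200
default_pool_size = 20
```

The application then points its connection string at port 6432 instead of 5432; no PHP code changes are needed beyond that.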
[GENERAL] LINE 1: IDENTIFY_SYSTEM error infinitum
Can anyone see why I keep getting the below IDENTIFY_SYSTEM error in my logs when I start my replication database? I use postgres primary as my prefix in the syslog and postgres replication as the replication one. Jun 30 14:10:25 postgres primary[19617]: [2-1] LOG: connection received: host=192.168.2.8 port=55138 Jun 30 14:10:25 postgres primary[19617]: [3-1] LOG: connection authorized: user=foobar database=postgres Jun 30 14:10:25 postgres primary[19617]: [4-1] ERROR: syntax error at or near IDENTIFY_SYSTEM at character 1 Jun 30 14:10:25 postgres primary[19617]: [4-2] STATEMENT: IDENTIFY_SYSTEM Jun 30 14:10:25 postgres replication[19616]: [1-1] FATAL: could not receive database system identifier and timeline ID from the primary server: ERROR: syntax error at or near IDENTIFY_SYSTEM Jun 30 14:10:25 postgres replication[19616]: [1-2] LINE 1: IDENTIFY_SYSTEM Jun 30 14:10:25 postgres replication[19616]: [1-3] ^ Jun 30 14:10:25 postgres replication[19616]: [1-4] Jun 30 14:10:25 postgres primary[19617]: [5-1] LOG: On primary's postgresql.conf I have this: port = 5432 wal_level = archive# minimal, archive, or hot_standby archive_mode = on max_wal_senders = 1# max number of walsender processes #hot_standby = off On the replication database's postgresql.conf I have this: port = 5433 wal_level = minimal archive_mode = off #max_wal_senders = 0# max number of walsender processes hot_standby = off -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] php and connection
On 30 June 2010 19:43, Joshua D. Drake j...@commandprompt.com wrote: On Wed, 2010-06-30 at 20:42 +0200, Szymon Guz wrote: Hi, in a PHP application working on Postgres normally the new connection to the database is made per request. This can potentially cause too big overhead, so I've got some questions: - is the overhead really noticeable? It can be. - could this be solved using persistent connections, or the persistent connections in php and postgres don't work properly? Don't use them. - could this be solved using something like pgpool? Yes, using a connection pooler will solve the problem. I prefer pgbouncer. +1 I can't really add to that. Thom -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] php and connection
2010/6/30 Thom Brown thombr...@gmail.com On 30 June 2010 19:43, Joshua D. Drake j...@commandprompt.com wrote: On Wed, 2010-06-30 at 20:42 +0200, Szymon Guz wrote: Hi, in a PHP application working on Postgres normally the new connection to the database is made per request. This can potentially cause too big overhead, so I've got some questions: - is the overhead really noticeable? It can be. - could this be solved using persistent connections, or the persistent connections in php and postgres don't work properly? Don't use them. - could this be solved using something like pgpool? Yes, using a connection pooler will solve the problem. I prefer pgbouncer. +1 I can't really add to that. Thom Thanks for the answers that ensured me in what I was thinking about. regards Szymon
[GENERAL] Find users that have ALL categories
Is this the most efficient way to write this query? I'd like to get a list of users that have the categories 1, 2, and 3.

SELECT user_id
FROM users_categories
WHERE category_id IN (1,2,3)
GROUP BY user_id
HAVING COUNT(*) = 3

users_categories (user_id, category_id):
1 | 1
1 | 2
1 | 3
2 | 1
2 | 2
3 | 1
4 | 1
4 | 2
4 | 3

The result should produce users 1 and 4. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
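The GROUP BY/HAVING approach above is the standard "relational division" idiom; its one caveat is that COUNT(*) = 3 assumes (user_id, category_id) pairs are unique, so COUNT(DISTINCT category_id) is the safer form. A quick check of the pattern, using Python's stdlib sqlite3 as a stand-in for Postgres:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users_categories (user_id INT, category_id INT);
INSERT INTO users_categories VALUES
  (1,1),(1,2),(1,3),(2,1),(2,2),(3,1),(4,1),(4,2),(4,3);
""")

rows = con.execute("""
SELECT user_id
FROM users_categories
WHERE category_id IN (1, 2, 3)
GROUP BY user_id
HAVING COUNT(DISTINCT category_id) = 3   -- robust against duplicate rows
ORDER BY user_id
""").fetchall()

print(rows)  # [(1,), (4,)]
```

Users 1 and 4 come back, matching the expected result in the question.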
[GENERAL] Looking for multithreaded build of libpq for Windows
I found that the libpq.lib that comes with the Postgres installer (8.4.4) is not thread-safe. Where can I find a thread-safe build? -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Looking for multithreaded build of libpq for Windows
On Tue, Jun 29, 2010 at 8:28 PM, MD mingdeng2...@gmail.com wrote: I found out the libpq.lib come with Postgres installer (8.4.4) is not thread-safe. Where can I find a thread-safe build? It's always thread-safe on Windows. -- Dave Page EnterpriseDB UK: http://www.enterprisedb.com The Enterprise Postgres Company -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] DBI::Oracle problems
On Wed, Jun 30, 2010 at 10:10:02AM +1000, Howard Rogers wrote: I am stumped, despite working on this for a week! I am trying to create a 64-bit postgresql 8.4 database server which can retrieve data from various 64-bit Oracle 10gR2 and 11gR2 databases. Try downloading the latest version of DBI-Link using the Download Source link at http://github.com/davidfetter/DBI-Link There is also a low-traffic mailing list for the project, where questions like this are more on point :) Cheers, David. -- David Fetter da...@fetter.org http://fetter.org/ Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter Skype: davidfetter XMPP: david.fet...@gmail.com iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics Remember to vote! Consider donating to Postgres: http://www.postgresql.org/about/donate -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Looking for multithreaded build of libpq for Windows
On Jun 29, 3:28 pm, MD mingdeng2...@gmail.com wrote: I found that the libpq.lib that comes with the Postgres installer (8.4.4) is not thread-safe. Where can I find a thread-safe build? Anyone know? -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] LINE 1: IDENTIFY_SYSTEM error infinitum
Zoid z...@the-lounge.us writes: Can anyone see why I keep getting the below IDENTIFY_SYSTEM error in my logs when I start my replication database process? Are you sure the primary is 9.0? It sure looks like you're getting a plain backend connection instead of a walsender, which is what I'd expect to happen if the server were too old to recognize the replication parameter in the connection request. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] LINE 1: IDENTIFY_SYSTEM error infinitum
Tom, Actually, I'm using postgresql beta2 and my replication process is connecting to my primary (or backend) at 5432 via the connect_info line of the recovery.conf file. I had already seen that the error is identical if I merely connect directly to the primary and issue the command IDENTIFY_SYSTEM, so it seems to me the primary (or backend) is not treating the IDENTIFY_SYSTEM command as Streaming Replication Protocol, but I'm not sure why it would anyway, as I've connected to the same port psql does. Tom Lane wrote: Zoid z...@the-lounge.us writes: Can anyone see why I keep getting the below IDENTIFY_SYSTEM error in my logs when I start my replication database? Are you sure the primary is 9.0? It sure looks like you're getting a plain backend connection instead of a walsender, which is what I'd expect to happen if the server were too old to recognize the replication parameter in the connection request. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Execute permission for functions
Hi there, I'm a newbie to postgresql and I have some problems working with its permissions. For security purposes, I want my application service account to have only execute permission on the functions I created. So what I did is: Create a group testgroup (not super user). Create a user testuser belonging to testgroup. Now I want to give function execute permission to testgroup: first I grant the schema: GRANT USAGE ON SCHEMA aaa TO testgroup; second I grant the execute permission to the group: GRANT EXECUTE ON FUNCTION aaa.testFunction(integer) TO testgroup; I thought that was it and logged in with testuser. But I found I couldn't execute the function; it gave me an error like: ERROR: permission denied for relation Tablename (the function selects from that Tablename). Then I tried giving testuser the select permission on that Tablename, and testuser could then execute the function. So my conclusion: it seems it's useless to give execute permission to a group. The group will have to have all the base table select/insert/delete etc. permissions in order to execute a function, depending on what's in the function. Then I wonder why Postgresql works like that? There's no way for me to lock down all base table access. Or is there something I missed? Please help me if you have any idea about locking down base table access and only giving function execute access to a group. Thanks a lot! -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Execute permission for functions
On 06/30/2010 02:09 PM, mirthcyy wrote: Hi there, I'm a newbie to postgresql and I have some problems working with its permissions. For security purpose, I want that my application service account only has execution permissions to the functions I created. so what I did is: Create a group testgroup (not super user) Create a user testuser belongs to testgroup Now I want to give function execute permission to testgroup: first I grant the schema: GRANT USAGE ON SCHEMA aaa TO testgroup; second I grant the execution permission to the group: GRANT EXECUTE ON FUNCTION aaa.testFunction(integer) TO testgroup; I thought that's it and logged in with testuser. But I found I couldn't execute the function and gave me error like: ERROR: permission denied for relation Tablename (the function select from that Tablename) Then I tried to give that testuser the select permission to that Tablename, testuser can then execute that function. so conclusion: it seems it's useless to give execution permission to a group. The group will have to have all the base table select/insert/delete etc permissions in order to execute function depending on what's in the function. Then I wonder why Postgresql is working like that? There's no way for me to lock down all base table access. Or is there something I miss? Please help me if you have any idea about locking down base table access and only give function execution access to a group. Thanks a lot! From here: http://www.postgresql.org/docs/8.4/interactive/sql-createfunction.html Look at SECURITY DEFINER. -- Adrian Klaver adrian.kla...@gmail.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] LINE 1: IDENTIFY_SYSTEM error infinitum
Zoid z...@the-lounge.us writes: Actually, I'm using postgresql beta2 and my replication process is connectiong to my primary (or backend) at 5432 via the connect_info line of the recovery.conf file. I had already saw that the error is identical if I merely connect directly to the primary and issue the command IDENTITY_SYSTEM, so it seems to me the primary (or backend) is not treating the IDENTITY_SYSTEM command as Streaming Replication Protocol, but I'm not sure why it would anyway as I've connected to thesame port psql does. The port's not the issue --- replication connections are supposed to use the same port. Try something like psql -h primary-server replication=1 regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Cannot open table in new database
We've set up a SQL database for the first time and get an error reported back to our application from the ODBC session object when we try to open one of the tables.

[42p01][7]ERROR Relation SqlAnal does not exist; table not found!

Here is my database outline in the Admin tool:

Servers (1)
  Concept Patterns (localhost:5432)
    Databases (1)
      Postgres
        casts
        languages
        schemas
          Public
            Tables (2)
              DocHeader
              SqlAnal
        Replication
    Tablespaces (2)
      pg_default
      pg_global
    Group Roles
    Login Roles (1)
      Concept

We can access the list of tables from the ODBC driver, which shows the above tables. We've obviously done something wrong, but we don't know where to look. The Help button is not helpful at all. Any ideas on what we are doing incorrectly? Cheers Phil Jackson -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Cannot open table in new database
On 07/01/2010 09:46 AM, Phil Jackson wrote: We've set up a Sql database for the first time and get an error reported back to our application from the ODBC session object when we try to open one of the tables. [42p01][7]ERROR Relation SqlAnal does not exist; table not found! Here is my database outline in the Admin tool Servers(1) Concept Patterns (localhost:5432) Databases(1) Postgres casts languages schemas Public - - Tables(2) DocHeader SqlAnal Replication Tablespaces(2) pg_default pg_global Group Roles Login Roles(1) Concept We can access the list of tables from the ODBC driver which shows the above tables. We've obviously done something wrong but we don't know where to look. The Help button is not helpful at all. Any ideas on what we are doing incorrectly? Cheers Phil Jackson Using mixed case :) See here for gotchas of using mixed case: http://www.postgresql.org/docs/8.4/interactive/sql-syntax-lexical.html 4.1.1. Identifiers and Key Words -- Adrian Klaver adrian.kla...@gmail.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] loading many queries from a file
Hi, I've got a file with many SQL queries, and also some function definitions and so on. I'd like to load it into the database, but using some library like JDBC/ODBC/DBI, not the obvious psql. Do you know how I could load all those queries? Usually only one query can be executed at a time; I saw that psql parses the file looking for the semicolon. Is there any other way than parsing the file and loading the queries one by one? regards Szymon
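One note on the above: with Postgres the simple-query protocol accepts several semicolon-separated statements in one call, so drivers built on it (psycopg2's cursor.execute, for example, or JDBC's Statement.execute with the pg driver) will typically run a whole file's contents in a single round trip. That also sidesteps splitting on semicolons yourself, which is fragile because semicolons can legally appear inside string literals and dollar-quoted function bodies. A sketch of the idea, using the stdlib sqlite3 module (whose executescript plays the same "hand the driver the whole script" role) so it runs stand-alone:

```python
import sqlite3

# In practice this would be open("queries.sql").read()
sql_file_contents = """
CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO t (name) VALUES ('alpha');
INSERT INTO t (name) VALUES ('beta');
"""

con = sqlite3.connect(":memory:")
# Let the driver handle statement separation instead of splitting on ';'
con.executescript(sql_file_contents)

count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 2
```

With psycopg2 against Postgres the equivalent is simply cur.execute(sql_file_contents) followed by a commit; only the result of the batch as a whole comes back, which is usually fine for a load script.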
[GENERAL] Problems building from source
Hi all, I recently had some problems with a pre-built version of PostgreSQL, so I decided to try to build my own copy from source, but have run into even more problems. I downloaded a copy of the source, unpacked it into a directory, and had a quick look at the possible configure options (./configure --help) just to see if anything specific caught my eye. The only thing I saw was --with-gnu-ld. So I ran configure in the postgresql src root directory as ./configure --with-gnu-ld --prefix=/mingw. I got as far as the zlib checks, when configure told me that I didn't have zlib installed (a quick look at my files confirmed this), so I went to www.zlib.net and downloaded version 1.2.5 (this appears to be the latest), then built and installed it. Running configure again (with the same options) gave me a wonderful error:

configure: error: zlib version is too old
Use --without-zlib to disable zlib support.

How can the latest version be too old? Seeing this, I uninstalled that version and went in search of other versions. The versions I found may not have been newer (they may have been older) but I tried them anyway, all with the same results; apparently all versions were too old. The other versions I found were on the MinGW/MSYS download site (http://sourceforge.net/projects/mingw/files/); I tried the most recent versions listed there. Faced with that, I then changed my configure line to ./configure --prefix=/mingw --with-gnu-ld --without-zlib (who needs zlib anyway, right?). Now I was thinking that surely this would be the end of it and I would soon have a fully built version of PostgreSQL... sadly not. I now get this error:

checking for atexit... yes
checking test program... failed
configure: error: Could not execute a simple test program.
This may be a problem related to locating shared libraries.
Check the file 'config.log' for the exact reason.

And now I have no idea. I had a look at config.log and found this:
configure:21835: checking for atexit
configure:21891: gcc -qlanglvl=extc89 -o conftest.exe -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv -I./src/include/port/win32 -DEXEC_BACKEND -Wl,--allow-multiple-definition conftest.c >&5
gcc.exe: unrecognized option '-qlanglvl=extc89'
configure:21898: $? = 0
configure:21920: result: yes
configure:23234: checking test program
configure:23254: gcc -qlanglvl=extc89 -o conftest.exe -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv -I./src/include/port/win32 -DEXEC_BACKEND -Wl,--allow-multiple-definition conftest.c >&5
gcc.exe: unrecognized option '-qlanglvl=extc89'
configure:23258: $? = 0
configure:23264: ./conftest.exe
./configure: line 23266: ./conftest.exe: Invalid argument
configure:23268: $? = 126
configure: program exited with status 126
configure: failed program was:

Having a look at line 23266 of configure, I'm lost (I never could understand those things). Can anyone tell me what is going on here? I have built other libraries/packages with this toolchain (wxWidgets, xerces-c, boost), so I don't think it is broken. Some system info:

OS: Windows 7 64-bit
MSYS ver: 1.0.14
MinGW ver: 4.5.0
Make ver: GNU make 3.81

If it matters: my IDE is Eclipse Galileo and I am compiling my project using the MinGW toolchain. If you need any more info just let me know. Regards Bidski
[GENERAL] Can't EXTRACT number of months from an INTERVAL
I need to read a timestamp from the database and turn it into an integer describing how many months ago the event happened, rounding downward. The events are guaranteed to be in the past. To start with, I tried subtracting a sample timestamp as would be found in the DB from my benchmark date:

uatrackingdb=# select timestamp '2010-06-26 00:00:00' - timestamp '2008-11-07 00:00:00';
 ?column?
----------
 596 days
(1 row)

I get a result in just days; no years or months cited. I don't understand that. When I try to extract the months part of this value, I get:

uatrackingdb=# select extract('months' from (select timestamp '2010-06-26 00:00:00' - timestamp '2008-11-07 00:00:00'));
 date_part
-----------
         0
(1 row)

It doesn't matter whether I use explicitly cited timestamps as I've shown here or select a value from a timestamp field in the DB; the results are the same. Using DATEs instead of TIMESTAMPs just makes things worse; I can't even get my expressions to parse correctly. Attempting to coerce the 596 days into an INTERVAL gets me nowhere. Can anyone please advise me on how to proceed? Topher Eliot christopher.el...@nagrastar.com +01 303 706-5766
Re: [GENERAL] Cannot open table in new database
Hi Adrian. The link says that identifier and key word names are case insensitive. But I have renamed the source table in lowercase, and this gets me one step further. I'll carry on and see what happens next. Cheers Phil Jackson

On 6/30/2010 3:18 PM, Adrian Klaver wrote:
> On 07/01/2010 09:46 AM, Phil Jackson wrote:
>> [42P01][7] ERROR: relation "SqlAnal" does not exist; table not found!
>> [snip]
> Using mixed case :) See here for gotchas of using mixed case: http://www.postgresql.org/docs/8.4/interactive/sql-syntax-lexical.html 4.1.1. Identifiers and Key Words
[GENERAL] left outer join fails because column .. does not exist in left table?
I have a JOIN error that is rather opaque... at least to me. I've used other JOIN queries on this project which seem very similar to this one, which looks like:

SELECT S.subjectid, STY.studyabrv, labID, boxnumber, wellrow, wellcolumn
FROM DNASample D, IBG_Studies STY, Subjects S, ibg_projects P
LEFT OUTER JOIN ibg_ps_join IPJ USING (dnasampleid)
WHERE D.subjectidkey = S.id
  AND STY.studyindex = D.studyindex
  AND IPJ.projects_index = P.ibg_projects_index
ORDER BY studyabrv, boxnumber, wellcolumn, wellrow

But when I run it I get this error:

ERROR: column "dnasampleid" specified in USING clause does not exist in left table

I am rather mystified by this, since this field is definitely in the dnasample table, as the primary key. Nor do I see how to start debugging such an error. Any suggestions appreciated... --Rick
Re: [GENERAL] Postgres table contents versioning
jsmg...@numericable.fr (John Gage) writes: Is there an equivalent of svn/git etc. for the data in a database's tables? Can I set something up so that I can see what was in the table two days/months etc. ago? I realize that in the case of rapidly changing hundred million row tables this presents an impossible problem. The best kludge I can think of is copying the tables to a directory and git-ing the directory. There's a whole set of literature on the notion of temporal data. Richard Snodgrass' book is rather good. http://www.cs.arizona.edu/people/rts/ The typical approach involves adding a timestamp or two to tables to indicate when the data is considered valid. That's rather different from Git :-). -- output = (cbbrowne @ gmail.com) http://linuxdatabases.info/info/languages.html HEADLINE: Suicidal twin kills sister by mistake! -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
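The "timestamp or two" approach can be sketched concretely. Below is an illustrative Python/SQLite mock-up of a valid-time table; the table and column names are invented for this example, and SQLite merely stands in for any SQL database. The key idea: an update closes the current row's validity window instead of overwriting it, so you can ask what the table said at any past moment.

```python
import sqlite3

# In-memory database for the sketch; a real setup would persist this.
conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE price_history (
        item       TEXT,
        price      REAL,
        valid_from TEXT,
        valid_to   TEXT   -- NULL means "still current"
    )""")

def set_price(item, price, when):
    # Close the currently-valid row (if any), then insert the new value.
    conn.execute("UPDATE price_history SET valid_to = ? "
                 "WHERE item = ? AND valid_to IS NULL", (when, item))
    conn.execute("INSERT INTO price_history VALUES (?, ?, ?, NULL)",
                 (item, price, when))

def price_as_of(item, when):
    # Find the row whose validity window covers the requested moment.
    row = conn.execute(
        "SELECT price FROM price_history WHERE item = ? "
        "AND valid_from <= ? AND (valid_to IS NULL OR valid_to > ?)",
        (item, when, when)).fetchone()
    return row[0] if row else None

set_price('widget', 9.99, '2010-06-01')
set_price('widget', 12.50, '2010-07-01')
print(price_as_of('widget', '2010-06-15'))  # 9.99
print(price_as_of('widget', '2010-07-15'))  # 12.5
```

Unlike the git-the-dump kludge, history stays queryable with ordinary SQL, though every "update" grows the table, which is the trade-off the temporal-data literature explores.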
Re: [GENERAL] left outer join fails because column .. does not exist in left table?
On Wed, Jun 30, 2010 at 7:01 PM, rick.ca...@colorado.edu wrote:
> ERROR: column "dnasampleid" specified in USING clause does not exist in left table
> I am rather mystified by this, since this field is definitely in the dnasample table, as the primary key.
> [snip]

Capitalization, maybe? pgsql folds to lower case.
Re: [GENERAL] Can't EXTRACT number of months from an INTERVAL
On Jun 30, 2010, at 18:45, Eliot, Christopher wrote:
> I need to read a timestamp from the database and turn that into an integer describing how many months ago the event happened, rounding downward. The events are guaranteed to be in the past.

=# select timestamp '2010-06-26 00:00:00' - timestamp '2008-11-07 00:00:00';
 ?column?
----------
 596 days
(1 row)

=# select justify_interval(timestamp '2010-06-26 00:00:00' - timestamp '2008-11-07 00:00:00');
   justify_interval
-----------------------
 1 year 7 mons 26 days
(1 row)

=# select extract('months' from justify_interval(timestamp '2010-06-26 00:00:00' - timestamp '2008-11-07 00:00:00'));
 date_part
-----------
         7
(1 row)

This is likely not what you want: you're probably looking for 19. One way would be:

=# select 12 * extract('years' from a.i) + extract('months' from a.i) from (values (justify_interval(timestamp '2010-06-26 00:00:00' - timestamp '2008-11-07 00:00:00'))) as a (i);
 ?column?
----------
       19
(1 row)

If you're willing to make the assumption that each month has 30 days:

=# select cast(extract('days' from timestamp '2010-06-26 00:00:00' - timestamp '2008-11-07 00:00:00') as int) / 30;
 ?column?
----------
       19
(1 row)

(And if you're dealing only with dates:)

=# select (cast('2010-06-26' as date) - cast('2008-11-07' as date)) / 30;
 ?column?
----------
       19
(1 row)

Datetime math can be difficult, as it can be very contextual. Michael Glaesemann grzm seespotcode net
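If it's acceptable to do the "whole months ago, rounding down" arithmetic client-side, plain calendar math sidesteps interval justification entirely. Here is a sketch in Python (the helper name is invented for illustration; stdlib only), which reproduces the 19 months from the thread's example dates:

```python
from datetime import date

def whole_months_between(earlier: date, later: date) -> int:
    """Number of complete calendar months from earlier to later,
    rounding down (assumes earlier <= later)."""
    months = (later.year - earlier.year) * 12 + (later.month - earlier.month)
    if later.day < earlier.day:
        months -= 1  # the last month hasn't fully elapsed yet
    return months

# The thread's example: 2008-11-07 to 2010-06-26 is 19 whole months.
print(whole_months_between(date(2008, 11, 7), date(2010, 6, 26)))  # 19
```

Unlike the divide-by-30 trick, this respects actual month lengths, at the cost of pulling the raw timestamps back to the client.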
Re: [GENERAL] Cannot open table in new database
On Thursday 01 July 2010 11:11:29 am Phil Jackson wrote:
> Hi Adrian. The link says that identifier and key word names are case insensitive. But I have renamed the source table in lowercase and this gets me one step further. I'll carry on and see what happens next. Cheers Phil Jackson

You need to go to the bottom of that section, where you would find :)

Quoting an identifier also makes it case-sensitive, whereas unquoted names are always folded to lower case. For example, the identifiers FOO, foo, and "foo" are considered the same by PostgreSQL, but "Foo" and "FOO" are different from these three and each other. (The folding of unquoted names to lower case in PostgreSQL is incompatible with the SQL standard, which says that unquoted names should be folded to upper case. Thus, foo should be equivalent to "FOO" not "foo" according to the standard. If you want to write portable applications you are advised to always quote a particular name or never quote it.)

-- Adrian Klaver adrian.kla...@gmail.com
Re: [GENERAL] left outer join fails because column .. does not exist in left table?
rick.ca...@colorado.edu writes:
> SELECT S.subjectid,STY.studyabrv,labID,boxnumber,wellrow,wellcolumn FROM DNASample D, IBG_Studies STY, Subjects S, ibg_projects P LEFT OUTER JOIN ibg_ps_join IPJ USING (dnasampleid) WHERE D.subjectidkey=S.id AND STY.studyindex=D.studyindex AND IPJ.projects_index=P.ibg_projects_index ORDER BY studyabrv,boxnumber,wellcolumn,wellrow
> ERROR: column dnasampleid specified in USING clause does not exist in left table
> I am rather mystified by this, since this field is definitely in the dnasample table, as the primary key.

It appears you're used to mysql, which processes commas and JOINs left-to-right (more or less; I've never bothered to figure out their behavior exactly). We follow the SQL standard, which says JOIN binds tighter than comma. Therefore, the left-hand argument of the JOIN is only ibg_projects, not the cross join of DNASample x IBG_Studies x Subjects x ibg_projects. You could probably get the behavior you're expecting by writing

... FROM (DNASample D CROSS JOIN IBG_Studies STY CROSS JOIN Subjects S CROSS JOIN ibg_projects P) LEFT OUTER JOIN ibg_ps_join IPJ USING ...

Or it might be enough to rearrange to

... FROM DNASample D LEFT OUTER JOIN ibg_ps_join IPJ USING (dnasampleid), IBG_Studies STY, Subjects S, ibg_projects P WHERE ...

regards, tom lane
Re: [GENERAL] Cannot open table in new database
Adrian Klaver adrian.kla...@gmail.com writes: On Thursday 01 July 2010 11:11:29 am Phil Jackson wrote: The link says that; Identifier and key word names are case insensitive. But I have renamed the source table in lowercase and this gets me one step further. You need to go to bottom of that section where you would find :) ... If you want to write portable applications you are advised to always quote a particular name or never quote it.) In particular, what probably bit you here is that you created the tables using a tool that double-quoted the mixed-case names. Once you've done that, you are condemned to always double-quote those names forevermore. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Cannot open table in new database
Hi Adrian. I had missed that bit. That makes sense now. Cheers Phil Jackson

On 6/30/2010 5:04 PM, Adrian Klaver wrote:
> You need to go to the bottom of that section, where you would find :) Quoting an identifier also makes it case-sensitive, whereas unquoted names are always folded to lower case. [snip]
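The folding rule discussed in this thread can be modeled in a few lines. This is a hedged illustration in Python, a toy model of the rule rather than PostgreSQL's actual lexer: unquoted identifiers fold to lower case, while double-quoted identifiers keep their exact spelling (with "" as an escaped quote inside them).

```python
def normalize_identifier(ident: str) -> str:
    """Toy model of PostgreSQL identifier folding: quoted identifiers
    keep their exact case; unquoted ones fold to lower case."""
    if len(ident) >= 2 and ident[0] == '"' and ident[-1] == '"':
        # Strip the surrounding quotes; un-double any inner "" escapes
        return ident[1:-1].replace('""', '"')
    return ident.lower()

# An unquoted SqlAnal in a query refers to sqlanal...
print(normalize_identifier('SqlAnal'))    # sqlanal
# ...so it can never match a table a GUI tool created as "SqlAnal":
print(normalize_identifier('"SqlAnal"'))  # SqlAnal
```

This is exactly why a table created through a tool that double-quotes mixed-case names must be double-quoted forevermore.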
Re: [GENERAL] left outer join fails because column .. does not exist in left table?
On Wed, Jun 30, 2010 at 8:05 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> It appears you're used to mysql, which processes commas and JOINs left-to-right (more or less, I've never bothered to figure out their behavior exactly).
> [snip]

Note that even MySQL now follows the standard on this, without needing some special strict switch or anything. Of course, a lot of folks are still using older versions that are in fact still broken.
Re: [GENERAL] loading many queries from a file
Szymon Guz mabew...@gmail.com wrote:
> I've got a file with many SQL queries, also some function definitions and so on. I'd like to load it to database, but using some library like JDBC/ODBC/DBI, not using the obvious psql. [snip]

Is your file friendly or arbitrary? With DBI, you can execute several commands in one do() call, but you cannot split one command across several calls. The PostgreSQL frontend/backend protocol seems to allow that on a cursory look, but that's *very* far from JDBC/ODBC/DBI. If there are no guarantees on the format of your file, I would try to adapt psql's psqlscan.l & Co. Tim
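For a "friendly" file, as Tim puts it, a naive client-side splitter is often enough. The sketch below is an assumption-laden Python illustration, not a port of psqlscan.l: it splits on semicolons that fall outside single-quoted strings and plain $$ dollar-quoted function bodies, and it deliberately ignores comments and tagged dollar quotes like $fn$.

```python
def split_sql(script: str):
    """Naively split a SQL script into statements on ';', skipping
    semicolons inside '...' strings and $$...$$ dollar quotes.
    Illustrative only; psql's real lexer handles far more cases."""
    statements, buf = [], []
    in_quote = in_dollar = False
    i = 0
    while i < len(script):
        ch = script[i]
        if in_quote:
            buf.append(ch)
            if ch == "'":
                if i + 1 < len(script) and script[i + 1] == "'":
                    buf.append("'"); i += 1  # '' escape stays inside
                else:
                    in_quote = False
        elif in_dollar:
            buf.append(ch)
            if script.startswith('$$', i):
                buf.append('$'); i += 1; in_dollar = False
        elif script.startswith('$$', i):
            buf.append('$$'); i += 1; in_dollar = True
        elif ch == "'":
            buf.append(ch); in_quote = True
        elif ch == ';':
            stmt = ''.join(buf).strip()
            if stmt:
                statements.append(stmt)
            buf = []
        else:
            buf.append(ch)
        i += 1
    tail = ''.join(buf).strip()
    if tail:
        statements.append(tail)
    return statements

for s in split_sql("select 1; select 'a;b'"):
    print(s)
```

Each returned statement can then be executed individually through JDBC/ODBC/DBI (e.g. one DBI do() per statement).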
Re: [GENERAL] DBI::Oracle problems
Thank you, David. I must say, I find mailing lists extremely confusing, and wish there was a proper forum-type place to go to! My apologies for mailing to the wrong place: I am now not sure whether to keep it here or not! I only wrote here after noting that previous questions about DBI-Link (some going back to 2005, admittedly) were posted here, but, as I say, apologies if that was wrong. Anyway: I have been using version 2.0.0 of DBI-Link, which was last updated three years ago. I see from your link that there are 2010 files available, so I'll definitely give that a whirl.

@Alexander: yes, my server is configured properly. At least, the postgres user and the root user (as well as the oracle user) can all use SQL*Plus to connect to the remote database without having to set any *additional* environment variables, as I mentioned originally. Quite what else I'm supposed to set, if anything, I can't imagine! I do note, however, that after I run the DBI-Link SQL statements, I get nothing at all in the dbi_link.dbi_connection_environment table, which is probably why it's not working the second time around. Why the function being called can't read the environment variables which are most definitely set, or why it can't insert what it's read into the relevant table, I have no idea. I'll try with the latest software David linked to and see how I get on. Thanks to you both, HJR

On Thu, Jul 1, 2010 at 5:24 AM, David Fetter da...@fetter.org wrote:
> On Wed, Jun 30, 2010 at 10:10:02AM +1000, Howard Rogers wrote:
>> I am stumped, despite working on this for a week! I am trying to create a 64-bit postgresql 8.4 database server which can retrieve data from various 64-bit Oracle 10gR2 and 11gR2 databases.
> Try downloading the latest version of DBI-Link using the "Download Source" link at http://github.com/davidfetter/DBI-Link There is also a low-traffic mailing list for the project, where questions like this are more on point :) Cheers, David.
-- David Fetter da...@fetter.org http://fetter.org/ Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter Skype: davidfetter XMPP: david.fet...@gmail.com iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics Remember to vote! Consider donating to Postgres: http://www.postgresql.org/about/donate -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] LINE 1: IDENTIFY_SYSTEM error infinitum
Hmm. I tried the replication=1 switch, but I was prompted with the below; I noticed the local requirement assumes a UNIX socket, which I'm not using. And both databases are actually on the same box (just different ports).

psql: FATAL: no pg_hba.conf entry for replication connection from host [local], user postgres

I've been using this to start the replication database process, which does start OK and spits out to syslog, then barfs on the IDENTIFY_SYSTEM returned from the primary:

/usr/local/pgsql/bin/postgres -p5433 -D replication_database -o -d5

Can I be sure that the primary even has the capability to answer the IDENTIFY_SYSTEM command? I'm beginning to think maybe this is zero'd out until the beta is finalized. It would take a code search, but if the functionality is linked to major version and minor version variables, I guess it's possible that it won't work until beta2 is normalized to a normal release. So I'll look for an alternate means to extract the system identity on the primary.

Tom Lane wrote:
> Zoid z...@the-lounge.us writes:
>> Actually, I'm using postgresql beta2 and my replication process is connecting to my primary (or backend) at 5432 via the connect_info line of the recovery.conf file. I had already seen that the error is identical if I merely connect directly to the primary and issue the command IDENTITY_SYSTEM, so it seems to me the primary (or backend) is not treating the IDENTITY_SYSTEM command as Streaming Replication Protocol, but I'm not sure why it would anyway, as I've connected to the same port psql does.
> The port's not the issue --- replication connections are supposed to use the same port. Try something like psql -h primary-server replication=1 regards, tom lane
Re: [GENERAL] LINE 1: IDENTIFY_SYSTEM error infinitum
On Thu, Jul 1, 2010 at 11:35 AM, Zoid z...@the-lounge.us wrote:
> Can I be sure that the Primary even has the capability to answer the IDENTIFY_SYSTEM command? [snip] So I'll look for an alternate means to extract the System Identity on the primary.

Hmm... you'd like to get the system identifier from the postgres server via SQL rather than starting replication? If so, you can do that by adding a replication entry into pg_hba.conf and performing the following:

$ psql replication=1 -c IDENTIFY_SYSTEM
      systemid       | timeline
---------------------+----------
 5488763631978937207 |        1
(1 row)

Regards, -- Fujii Masao NIPPON TELEGRAPH AND TELEPHONE CORPORATION NTT Open Source Software Center
Re: [GENERAL] LINE 1: IDENTIFY_SYSTEM error infinitum
Fujii Masao masao.fu...@gmail.com writes: Hmm... you'd like to get the system identifier from the postgres server via SQL rather than starting replication? If so, you can do that by adding replication entry into pg_hba.conf and performing the following $ psql replication=1 -c IDENTIFY_SYSTEM Cute, but all Zoid wants is to get his replication slave to start ;-) Can you figure out what's going wrong? I'm wondering about some sort of incompatibility between various pre-beta states of the code, but don't have a clear answer. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] LINE 1: IDENTIFY_SYSTEM error infinitum
On Thu, Jul 1, 2010 at 1:02 PM, Tom Lane t...@sss.pgh.pa.us wrote: Fujii Masao masao.fu...@gmail.com writes: Hmm... you'd like to get the system identifier from the postgres server via SQL rather than starting replication? If so, you can do that by adding replication entry into pg_hba.conf and performing the following $ psql replication=1 -c IDENTIFY_SYSTEM Cute, but all Zoid wants is to get his replication slave to start ;-) Can you figure out what's going wrong? I'm wondering about some sort of incompatibility between various pre-beta states of the code, but don't have a clear answer. I'm suspicious of that incompatibility, too. So reinstalling the postgres 9.0beta2 would fix the problem, I guess. Though I'm not convinced.. Regards, -- Fujii Masao NIPPON TELEGRAPH AND TELEPHONE CORPORATION NTT Open Source Software Center -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Problems building from source
On Thu, Jul 1, 2010 at 00:30, Bidski bid...@bigpond.net.au wrote:
> I got as far as the zlib checks, when configure told me that I didn't have zlib installed... so I went to www.zlib.net and downloaded version 1.2.5 (this appears to be the latest), then built and installed it. Running configure again (with the same options) gave me a wonderful error: configure: error: zlib version is too old. How can the latest version be too old?
> [snip]

More likely, it's not finding the right one. Probably it's picking up some completely different version of it because it's earlier in the search path.

> Faced with that, I then changed my configure line to ./configure --prefix=/mingw --with-gnu-ld --without-zlib (who needs zlib anyway, right?). I now get this error: checking test program... failed configure: error: Could not execute a simple test program. This may be a problem related to locating shared libraries. Check the file 'config.log' for the exact reason.
> [snip]

Is mingw even fully supported on 64-bit today? 64-bit PostgreSQL builds isn't - it should just work, but I don't believe anybody has ever tried it. But this error shows a much more fundamental problem than the PostgreSQL code, and you need to get that solved first. Any particular reason why you don't want to build with the supported toolset, which is the Microsoft (free) compilers? (See http://www.postgresql.org/docs/9.0/static/install-win32-full.html)

-- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
Re: [GENERAL] Problems building from source
Magnus Hagander mag...@hagander.net writes:
> On Thu, Jul 1, 2010 at 00:30, Bidski bid...@bigpond.net.au wrote:
>> configure: error: zlib version is too old. Use --without-zlib to disable zlib support. How can the latest version be too old?
> More likely, it's not finding the right one. Probably it's picking up some completely different version of it because it's earlier in the search path.

Look at the configure.in source code:

if test "$with_zlib" = yes; then
  # Check that <zlib.h> defines z_streamp (versions before about 1.0.4
  # did not). While we could work around the lack of z_streamp, it
  # seems unwise to encourage people to use such old zlib versions...
  AC_CHECK_TYPE(z_streamp, [], [AC_MSG_ERROR([zlib version is too old
Use --without-zlib to disable zlib support.])], [#include <zlib.h>])
fi

There is approximately 0.000% chance that typedef z_streamp is not present in any version of zlib.h available in the wild today. I interpret this message as "the build toolchain is broken enough to make AC_CHECK_TYPE fail". Haven't dug in to see what the most likely true cause is, but we can dismiss zlib version compatibility out of hand, I think.

regards, tom lane
Re: [GENERAL] Problems building from source
Magnus Hagander mag...@hagander.net writes:
> More likely, it's not finding the right one. Probably it's picking up some completely different version of it because it's earlier in the search path.

Here is the start of my PATH environment variable:

c:\msys\1.0\bin;C:\mingw\bin;C:\mingw\lib;

If it is finding something else, it's not looking in my PATH variable for it.

> 64-bit PostgreSQL builds isn't - it should just work, but I don't believe anybody has ever tried it. But this error shows a much more fundamental problem than the PostgreSQL code, and you need to get that solved first.

As I said, I have built other packages/libraries with my current setup (both built the library/package itself AND compiled/linked it into my own program and run it successfully), so if there is something wrong with the toolchain then it is only showing its face because PostgreSQL is doing something that the other packages weren't doing. Honestly, I think that is enough to say that MinGW/MSYS can run on 64-bit systems, and if PostgreSQL is only at the stage of "should just work" then there might be a bigger problem than my toolchain at play here.

> Any particular reason why you don't want to build with the supported toolset, which is the Microsoft (free) compilers? (See http://www.postgresql.org/docs/9.0/static/install-win32-full.html)

I am building with my current toolset simply because it appears to be working and it is already set up, and I would rather not have to go through the hassle of setting up another toolset. Regards Bidski