Re: [GENERAL] pg_dump and pgpool
> On Thu, 2004-12-30 at 09:20, Tom Lane wrote:
> > Scott Marlowe <[EMAIL PROTECTED]> writes:
> > >> I don't think it's worth that price to support a fundamentally
> > >> bogus approach to backup.
> >
> > > But it's not bogus. It allows me to compare two databases running
> > > under a pgpool synchronous cluster and KNOW if there are
> > > inconsistencies in data between them, so it is quite useful to me.
> >
> > As a data comparison tool it is certainly bogus. What about
> > different row ordering between the two databases, for instance?
>
> Apparently pgpool knows that different order is ok.

I think pgpool actually behaves differently from what you expect. pgpool just ignores the content of the data. Let me show you an example.

on the master:

test=# select * from t1;
 i
---
 1
 2

on the secondary:

test=# select * from t1;
 i
---
 1
 3
(2 rows)

result from pgpool:

test=# select * from t1;
 i
---
 1
 2

However, it does check the packet length. Here is another example.

on the master:

test=# select * from t2;
  t
-----
 abc
(1 row)

on the secondary:

test=# select * from t2;
  t
------
 abcd
(1 row)

result from pgpool:

test=# select * from t2;
  t
-----
 abc
(1 row)

LOG: pid 1093: SimpleForwardToFrontend: length does not match between backends master(13) secondary(14) kind:(D)

> Having three psql's open, one to the front end pgpool, one to each of
> the backends, I can insert data in different orders on each backend,
> select it on each, and get a different order, but from the front end
> it works:
>
> on the MASTER database:
> test=# \d
>         List of relations
>  Schema | Name | Type  |  Owner
> --------+------+-------+----------
>  public | test | table | postgres
> (1 row)
>
> test=# insert into test values (2);
> INSERT 11839388 1
> test=# insert into test values (1);
> INSERT 11839389 1
> test=# select * from test;
>  id
> ----
>   2
>   1
> (2 rows)
>
> on the SLAVE database:
> test=# \d
>         List of relations
>  Schema | Name | Type  |  Owner
> --------+------+-------+----------
>  public | test | table | postgres
> (1 row)
>
> test=# insert into test values (1);
> INSERT 13612414 1
> test=# insert into test values (2);
> INSERT 13612415 1
> test=# select * from test;
>  id
> ----
>   1
>   2
> (2 rows)
>
> On the front end:
> test=# select * from test;
>  id
> ----
>   2
>   1
> (2 rows)
>
> Now I add a wrong row to the slave database:
>
> test=# insert into test values (3);
> INSERT 13612416 1
>
> and I get this error from the front end:
> test=# select * from test;
> ERROR: kind mismatch between backends
> HINT: check data consistency between master and secondary
> server closed the connection unexpectedly
>         This probably means the server terminated abnormally
>         before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
>
> After deleting the row, things return to normal:
> test=# delete from test where id=3;
> DELETE 1
>
> From the front end I get:
> test=# select * from test;
>  id
> ----
>   2
>   1
> (2 rows)
>
> > AFAICS this could only work if you were doing physical rather than
> > logical replication (eg, shipping WAL logs) in which case the OIDs
> > would be just as much in sync as everything else.
>
> So, for me, the OIDs are the ONLY problem I'm getting here. Note that
> the application we're running on the front end only connects to the
> database with a single thread, and serializes in the intermediate
> layer (not my choice, but it seems to work pretty well so far...) so
> sequences also aren't an issue, as all the queries will go in one at
> a time.

I think in this case the row ordering problem will not hurt you.

> > Basically my point is that you are proposing to do a lot of work in
> > order to solve the first problem you are running up against, but
> > that will only get you to the next problem. I'm not prepared to
> > accept a significant increase in complexity and loss of
> > maintainability in pg_dump in order to move one step closer to the
> > dead end that you will certainly hit.
>
> I'm certainly willing to do the vast majority of the work. As Greg I
> think mentioned, maybe a fresh start using the information_schema
> would make sense as a sort of non-pg specific backup tool or
> something.

---(end of broadcast)---
TIP 8: explain analyze is your friend
Re: [GENERAL] [PATCHES] reqd patch
--- ramesh phule <[EMAIL PROTECTED]> wrote:
> Dear Sir,
> I am working as a teacher in a college. I am teaching PostgreSQL to
> students. I am facing a problem in running a CURSOR. The version of
> PostgreSQL is 7.3.2.

An old version; maybe you want to update to 7.4.6. In the next days/weeks version 8 will be out, with great improvements.

> Cursor creation and fetching work fine, but:
> 1) the refcursor datatype is not recognised by postgres

The refcursor datatype is deprecated in newer versions; you can do the same with the DECLARE CURSOR statement or with set-returning functions (functions that return more than one row).

> 2) fetch curs1 into curs2 is not working.

Is curs2 a variable of the same datatype as the column you retrieved from the select? (I suppose it is only one column.)

> Please can you help me? Can you send me one example of a cursor
> using the above?

I think you can find that here:
http://www.postgresql.org/docs/7.3/static/plpgsql-cursors.html

regards,
Jaime Casanova

PS: The patches mailing list is not appropriate for this kind of question; maybe you want to use [EMAIL PROTECTED] or [EMAIL PROTECTED] I will redirect this post to general.
Re: [GENERAL] Large Objects
On Mon, Dec 27, 2004 at 10:39:48 -0600, Dan Boitnott <[EMAIL PROTECTED]> wrote:
> I need to do some investigation into the way Postgres handles large
> objects for a major project involving large objects. My questions are:

I don't know the answer to all of your questions.

> * Is it practical/desirable to store files MIME-encoded inside a
>   text field?

This should be possible if the files aren't too large. bytea is another type that might be better to use.

> * The obvious disadvantages:
>   * slow, Slow, SLOW

If you always need to access the whole file this might not be too bad. But if you only need to access a small part, you are going to pay a big cost, as the whole record will need to be retrieved before you can pick out the part you want.

>   * significant increase in per-file storage requirements

It might not be too bad, as large records can be compressed. That should get back some of the bloat from uuencoding.
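If bytea turns out to be the better fit, a minimal sketch of what that might look like (the table and column names here are made up for illustration; from a client you would normally pass the file contents as a bound bytea parameter rather than splicing escaped bytes into the SQL string):

```sql
-- Hypothetical documents table storing raw bytes in a bytea column
-- instead of MIME-encoding them into a text field.
CREATE TABLE documents (
    id       serial PRIMARY KEY,
    filename varchar NOT NULL,
    mime     varchar NOT NULL,
    data     bytea   NOT NULL  -- raw bytes; large values are TOASTed
                               -- (compressed and/or stored out of line)
);
```

Unlike large objects, bytea values are dumped and restored by pg_dump like any other column, which sidesteps the OID-persistence questions above.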
Re: [GENERAL] Tracking back foreign keys?
On Sun, Dec 26, 2004 at 09:43:59 -0800, Benjamin Smith <[EMAIL PROTECTED]> wrote:
>
> Something like
>
> Select pg_table.name from pg_table where pg_field references images.id
>
> ? How else do I put it? The output I'd like would be something like
>
> images.id / tablename / table.primary key
> 11 / Vendors / 14
> 12 / Customers / 9
>
> Can this be done?

You can use joins to get this information. To find orphaned images you can use outer joins. To combine information from several tables in one query you can union the queries for each table.
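For instance, a sketch of the join/union approach suggested above, using the table names from the original post (untested, and it assumes the referencing columns are `vendors.picture` and `customers.image` as in the schema shown earlier in the thread):

```sql
-- One row per reference: images.id / tablename / referencing pk
SELECT i.id, 'Vendors' AS tablename, v.id AS pk
  FROM images i JOIN vendors v ON v.picture = i.id
UNION ALL
SELECT i.id, 'Customers', c.id
  FROM images i JOIN customers c ON c.image = i.id;

-- Orphaned images: no referencing row in either table
SELECT i.id
  FROM images i
  LEFT JOIN vendors   v ON v.picture = i.id
  LEFT JOIN customers c ON c.image   = i.id
 WHERE v.id IS NULL AND c.id IS NULL;
```

Each new table that references images(id) needs another branch added to the UNION.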
Re: [GENERAL] [PATCHES] reqd patch
On Thu, Dec 30, 2004 at 09:26:48PM -0800, ramesh phule wrote:
> Subject: [PATCHES] reqd patch
> To: pgsql-patches@postgresql.org

I'm not sure what "reqd patch" has to do with your question -- a subject with the word "cursor" would be more appropriate, since that's the topic. Also, pgsql-patches is supposed to be for "Patches for new features and bug fixes," so I'm copying pgsql-general and setting the Reply-To header to that list.

> Cursor creation and fetching works fine.
>
> but, 1) refcursor datatype is not recognised by postgres

What do you mean "not recognised"? What exactly are you doing, what are you expecting to happen, and what actually does happen? It would be helpful if we could see the exact statements you're executing and the exact error message you're getting.

> 2) fetch curs1 into curs2, is not working.

Please be more specific than "not working" -- what are you trying to do and what happens? Is curs2 a cursor? I wouldn't expect that to work, since the PL/pgSQL documentation says that the FETCH INTO target "may be a row variable, a record variable, or a comma-separated list of simple variables."

> pls can u help me. can u send me one example of cursur

The PostgreSQL documentation has examples in the "Cursors" section of the PL/pgSQL chapter. If those aren't helpful, then please describe what you're trying to do so we can provide an example that's more relevant to the problem.

--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
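For reference, a minimal sketch of a bound cursor in 7.3-era PL/pgSQL, in the style of the documented examples (the table and variable names are invented; note that FETCH INTO targets a plain variable, never another cursor):

```sql
CREATE OR REPLACE FUNCTION sum_ids() RETURNS integer AS '
DECLARE
    curs1 CURSOR FOR SELECT id FROM test;  -- bound cursor
    v_id  integer;
    total integer := 0;
BEGIN
    OPEN curs1;
    LOOP
        FETCH curs1 INTO v_id;   -- fetch one row into a simple variable
        EXIT WHEN NOT FOUND;     -- FOUND is set by FETCH
        total := total + v_id;
    END LOOP;
    CLOSE curs1;
    RETURN total;
END;
' LANGUAGE 'plpgsql';
```

Something shaped like `FETCH curs1 INTO curs2` fails precisely because curs2 is a cursor, not one of the allowed INTO targets.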
Re: [GENERAL] looking for connection leaks
Wiebe de Jong wrote:
> I think I am having a problem with database connection leakage
> between PostgreSQL 7.3.4 on a Linux box and JBoss 2.4.3 on a
> separate Linux box.
>
> What would be a good tool for monitoring connection usage and
> finding leaks?

Plain text formatting is encouraged. I've converted this one.

That version of JBoss is very old. There are newer versions in the 2.4 series, but I'd highly recommend switching to the actively maintained 3 or 4 series. I don't know about connection logging with PostgreSQL, but JBoss can do this quite easily. Quick and easy: just turn on debug for the connections. If you want something more customized, every connection has a unique object reference; you can just print it out in the log.

--
Guy Rouillier
Re: [GENERAL] Postgresql website issues.
On Thu, 2004-12-30 at 13:19 -0800, Mike Cox wrote:
> Am I the only one who has trouble reading the website? Some of the
> fonts are way too small, especially the "Mirrors | Donate | Contact"
> set. The other problem is that it is too light. My eyes strain to
> read the text as the background is white, and the text is also a
> light color.

Agreed. Font sizes are a little too small in some areas. The font size for the body is set to "font-size: 69%;"; maybe 72% would be a little less of a strain. ;-)

--
Robby Russell | Owner.Developer.Geek
PLANET ARGON | www.planetargon.com
Portland, OR | [EMAIL PROTECTED]
503.351.4730 | blog.planetargon.com
PHP/PostgreSQL Hosting & Development -- Now supporting PHP5
Re: [GENERAL] vaccum db fail
Matthew Terenzio <[EMAIL PROTECTED]> writes:
> Trying vacuumdb eventually displays this:
> ERROR: catalog is missing 3 attribute(s) for relid 31457
> vacuumdb: vacuum databasename failed
> There is another thread which states this is a corrupted system
> catalog. Is this definitely the case? Does it mean I need to rebuild
> the database? Reinstall?

You've definitely got some problems in the pg_attribute catalog. If you are lucky, reindexing pg_attribute will get you out of this. Otherwise it's time to initdb and reload (or at least drop that one database, recreate it, and reload it). I wouldn't trust half measures such as dropping the one known-corrupt table, because the odds are that more rows in pg_attribute got hit than just the ones reported in the first failure.

regards, tom lane
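If you want to try the reindex route first, a hedged sketch (on 7.x-era servers, reindexing a shared or system catalog generally has to be done from a standalone backend; check the REINDEX reference page for your exact version before relying on this):

```sql
-- Stop the postmaster, then start a standalone backend on the damaged
-- database with system indexes disabled, e.g.:
--   postgres -O -P databasename
-- (-P makes the backend ignore system indexes so the broken ones can
-- be rebuilt; -O allows system table modifications)
REINDEX TABLE pg_attribute;
```

If vacuumdb still reports missing attributes afterward, fall back to the dump/initdb/reload route Tom describes.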
Re: [GENERAL] ISO_8859_8 encoding
"Sim Zacks" <[EMAIL PROTECTED]> writes:
> Here are the results of pg_config and ldd. They both look correct.
> [EMAIL PROTECTED] /usr/local/pgsql/bin/pg_config --pkglibdir
> /usr/local/pgsql/lib
> [EMAIL PROTECTED] ldd /usr/local/pgsql/lib/utf8_and_iso8859.so
>         libc.so.6 -> /lib/i686/libc.so.6
>         /lib/ld-linux.so.2 -> /lib/ld-linux.so.2

Hmph. Looks fine to me too. But are you certain that /usr/local/pgsql/bin/pg_config matches up with the postmaster you are actually talking to? I'm wondering if you have multiple PG installations on that machine... (It's probably a misfeature that you can't ask the backend directly what it thinks $libdir expands to.)

regards, tom lane
Re: [GENERAL] Making a varchar bigger
Derik Barclay <[EMAIL PROTECTED]> writes:
> Specifically I am looking at executing something like this:
> UPDATE pg_attribute SET atttypmod = 16 WHERE attname = 'x' AND attrelid =
>   (SELECT pg_class.oid FROM pg_class WHERE relname = 'mytable');

If you have any indexes or views referencing this column, then it's a bit harder. But for the table itself I believe that will work.

regards, tom lane
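For completeness, a sketch of the catalog hack under discussion, with one caveat spelled out: for varchar, atttypmod stores the declared length plus a 4-byte header (VARHDRSZ), so varchar(12) corresponds to atttypmod 16. This is an internal detail that could change between versions, so treat it as a last resort and check for dependent objects first:

```sql
-- Widen mytable.x from varchar(6) to varchar(12) by editing the catalog.
-- 16 = desired length (12) + VARHDRSZ (4).
BEGIN;
UPDATE pg_attribute
   SET atttypmod = 12 + 4
 WHERE attname = 'x'
   AND attrelid = (SELECT oid FROM pg_class WHERE relname = 'mytable');
COMMIT;
```

Since widening only relaxes the length check, existing rows and indexes stay valid; shrinking the type this way would not be safe.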
[GENERAL] pgodbc error
In my postgresql.conf I set it to log the query durations, and this seems to goof up ODBC. I know this sounds stupid, but I did a search and actually found that others have had the problem, though I found no solution. I am running 7.4 on a SuSE 9.2 box. I have noticed that it appears to work on Windows in 8.0. Can it be fixed on 7.4?
Re: [GENERAL] vaccum db fail
I honestly have no idea what that error means; however, I'd imagine that you should be able to dump the DB and reimport it as a last resort to fixing this.

On Tue, 28 Dec 2004 18:33:24 -0500, Matthew Terenzio <[EMAIL PROTECTED]> wrote:
> Trying vacuumdb eventually displays this:
>
> ERROR: catalog is missing 3 attribute(s) for relid 31457
> vacuumdb: vacuum databasename failed
>
> There is another thread which states this is a corrupted system
> catalog. Is this definitely the case? Does it mean I need to rebuild
> the database? Reinstall?
>
> Thanks.

--
~ L. Friedman [EMAIL PROTECTED]
LlamaLand http://netllama.linux-sxs.org
[GENERAL] using pg_config to get LDFLAGS?
Hi! I'm maintaining a bunch of FreeBSD ports for PostgreSQL stuff. I get problems when building PostgreSQL with Kerberos, because packages like libpqxx do not automatically understand that they will also need to link with libkrb.so. Is there any "default" way to get GNU configure for such packages to understand that it needs to link with libkrb.so? Using pg_config --configure, the LDFLAGS are there to extract, but perhaps a cleaner way is possible?

#
# without kerberos:
#
pg_config --configure
'--with-libraries=/usr/local/lib' '--with-includes=/usr/local/include' '--with-docdir=/usr/local/share/doc/postgresql' '--enable-nls' '--prefix=/usr/local' 'i386-portbld-freebsd5.3' 'LDFLAGS=' 'CFLAGS=-O -pipe -march=athlon-xp' 'host_alias=i386-portbld-freebsd5.3' 'build_alias=i386-portbld-freebsd5.3' 'target_alias=i386-portbld-freebsd5.3' 'CC=cc'

#
# with kerberos:
#
pg_config --configure
'--with-libraries=/usr/local/lib' '--with-includes=/usr/local/include' '--with-docdir=/usr/local/share/doc/postgresql' '--enable-nls' '--with-krb5' '--prefix=/usr/local' 'i386-portbld-freebsd5.3' 'LDFLAGS= -L/usr/lib -lkrb5 -lasn1 -lcrypto -lroken -lcrypt -lcom_err' 'CFLAGS=-O -pipe -march=athlon-xp' 'host_alias=i386-portbld-freebsd5.3' 'build_alias=i386-portbld-freebsd5.3' 'target_alias=i386-portbld-freebsd5.3' 'CC=cc'

Regards, Palle
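One way to do the extraction is to pull the quoted LDFLAGS value out of the `pg_config --configure` output with sed. A sketch (the sed expression is my own assumption, not an official pg_config feature; to keep the example self-contained it runs against the "with kerberos" configure line quoted above rather than calling pg_config):

```shell
# Sample input: in real use this would be  configure_args=$(pg_config --configure)
configure_args="'--with-krb5' 'LDFLAGS= -L/usr/lib -lkrb5 -lasn1 -lcrypto -lroken -lcrypt -lcom_err' 'CFLAGS=-O -pipe -march=athlon-xp'"

# Capture everything between 'LDFLAGS= and the closing single quote
ldflags=$(printf '%s\n' "$configure_args" | sed -n "s/.*'LDFLAGS=\([^']*\)'.*/\1/p")
echo "LDFLAGS:$ldflags"
```

This prints nothing after "LDFLAGS:" when the server was built without extra LDFLAGS, which is convenient for a port Makefile.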
Re: [GENERAL] 8.0 Beta3 worked, RC1 didn't!
First I installed PostgreSQL 8.0 RC2 on a Windows 2003 server. Remote clients connect to the database, so I have to set listen_addresses = '*' in the postgresql.conf file. After this change I get the error message: "could not receive server response to SSL negotiation packet; connection reset by peer (0x2746/10054)".

After this I installed PostgreSQL 8.0 RC2 on Windows XP Professional; there was no problem (no error message) when I set listen_addresses = '*' in the postgresql.conf file. Remote clients can connect to the database without error messages or any problems.

So I think the problem is PostgreSQL 8.0 RC2 on Win 2003 server. I must use Win 2003 server. What can I (we) do?

- Original Message -
From: A. Mous
To: pgsql-general@postgresql.org; [EMAIL PROTECTED]
Sent: Tuesday, December 28, 2004 9:56 PM
Subject: FW: [GENERAL] 8.0 Beta3 worked, RC1 didn't!

OK, that's three so far that are experiencing the exact same problem. Does anyone in the know have any suggestions as to how this can be resolved?

Much thanks, Peter

-Original Message-
From: Edgars Diebelis [mailto:[EMAIL PROTECTED]
Sent: December 28, 2004 5:41 AM
To: [EMAIL PROTECTED]
Subject: Re: [GENERAL] 8.0 Beta3 worked, RC1 didn't!

Hello, I have the same problem as you: I installed PostgreSQL 8.0 RC2. Remote clients connect to the database, so I have to set listen_addresses = '*' in the postgresql.conf file. After this change I get the error message: "could not receive server response to SSL negotiation packet; connection reset by peer (0x2746/10054)". Have you any solution? Is this a configuration problem or is this a bug? Please help!

Thx, Edgars
[GENERAL] Postgresql website issues.
Am I the only one who has trouble reading the website? Some of the fonts are way too small, especially the "Mirrors | Donate | Contact" set. The other problem is that it is too light. My eyes strain to read the text, as the background is white and the text is also a light color.

Also, in the news area, when one clicks on "PostgreSQL performs well on 1TB database", the copyright notice along with the privacy policy are on top of the side menu, covering it. I used Mozilla 1.7 to view the site.

As a former professional graphics designer, I find the new postgresql site is also not balanced in terms of color. There need to be four different colors on the site, not two (blue, and various shades of grey). This is similar to how people find things that conform to the golden ratio attractive.

If these issues are fixed, then this new design will be much better than the previous website.
[GENERAL] postgresql 8.0 Rc1
I see the following error in /var/log/messages when I try to start my application. It was working fine in 7.4.6; I upgraded from 7.4.6 to 8.0 RC1.

"ERROR: unrecognized configuration parameter 'ksqo'"

This happens after I execute this statement: res = SQLAllocHandle(SQL_HANDLE_STMT, *phdbc, phstmt)
[GENERAL] win32 installer problem
Probably not even worth a bug report, but an entry in the install FAQ, or a fix in the installer; whatever. It should be pretty uncommon.

RC2 would not install (I had not attempted to use any prior rc/beta either). The last two lines of initdb.log showed:

command ""C:/Program Files/PostgreSQL/8.0.0-rc2/bin/postgres.exe" -boot -x1 -F template1": Invalid argument
initdb: removing contents of data directory "C:/Program Files/PostgreSQL/8.0.0-rc2/data"

Pure lucky guessing solved the issue: RC2 will not install if your environment has a non-default ComSpec (say, if you're using cygwin or 4nt or console, or modified it just for kicks). Deleting the env var, installing, and then setting it back seems to work fine. Or perhaps the installer could locally or temporarily set it to %SystemRoot%\system32\cmd.exe and everything should work fine.
[GENERAL] 8.0 Beta3 worked, RC1 didn't!
I can't find any solution. Is it a bug or a config problem?
[GENERAL] Tracking back foreign keys?
In one of my apps, I have an "images manager" that's become unmanageable. I'd like to be able to easily track key dependencies. Let's say I have a set of tables like this:

create table Customers (
    id serial unique not null primary key,
    name varchar not null,
    address varchar not null,
    image integer references images(id)
);

create table Vendors (
    id serial unique not null primary key,
    name varchar not null,
    vendor_account varchar,
    picture integer references images(id)
);

create table Images (
    id serial unique not null,
    filename varchar not null,
    mime varchar not null
);

I know that in the images table I have lots of cruft, "dead wood", but when I delete from images, is there a "nice" way of finding out what dependencies there are? Something like

Select pg_table.name from pg_table where pg_field references images.id

? How else do I put it? The output I'd like would be something like

images.id / tablename / table.primary key
11 / Vendors / 14
12 / Customers / 9

Can this be done?

-Ben

--
"I kept looking around for somebody to solve the problem. Then I realized I am somebody" -Anonymous
[GENERAL] ECPG Segfaulting on EXEC SQL connect
Hello, I'm trying to convert a series of C programs written originally using Informix ESQL to use Postgres' ECPG. All of my test programs written from scratch seem to work OK, and I can now precompile, compile and link without error, but when I actually run the program, I get a segfault. This appears to be in the code called by EXEC SQL CONNECT, as evidenced below (output from gdb):

Program received signal SIGSEGV, Segmentation fault.
0x08053640 in error_message ()
(gdb) up
#1 0x4005b95a in _init () from /usr/lib/libpq.so.3
(gdb) up
#2 0x4005ba8f in _init () from /usr/lib/libpq.so.3
(gdb) up
#3 0x4005c35d in fe_getauthname () from /usr/lib/libpq.so.3
(gdb) up
#4 0x4005ed2d in pqPacketSend () from /usr/lib/libpq.so.3
(gdb) up
#5 0x4005c44c in PQconnectStart () from /usr/lib/libpq.so.3
(gdb) up
#6 0x4005c90a in PQsetdbLogin () from /usr/lib/libpq.so.3
(gdb) up
#7 0x4003bf7f in ECPGconnect () from /usr/lib/libecpg.so.4
(gdb) up
#8 0x0804ed15 in cgiMain () at register_customer.pgc:575
575 EXEC SQL connect to pdev_changename;
(gdb)

The database is called pdev_changename, and that same connect statement has worked in another test program. I'm precompiling using

ecpg -t -C INFORMIX register_customer.pgc -o register_customer.c

and compiling with

gcc -g register_customer.c <> -o register_customer.cgi \
    -I/usr/include/pgsql -I/usr/include/pgsql/informix -lecpg -lecpg_compat -L/usr/lib/pgsql

I'm using Postgres 8.0.0rc1 on Red Hat 9 (kernel 2.4.20-31.9). The same thing happens on Fedora Core 3, using Postgres 7.4.6-1.FC3-1. The ability to define variables of type "timestamp" etc. is so useful, so I really want to keep using "-C INFORMIX" if I can. Can anyone help shed any light on this?

Thanks, John
john(at)roundel(dot)net
[GENERAL] looking for connection leaks
I think I am having a problem with database connection leakage between PostgreSQL 7.3.4 on a Linux box and JBoss 2.4.3 on a separate Linux box.

What would be a good tool for monitoring connection usage and finding leaks?

Wiebe de Jong
Re: [GENERAL] 8.0 Beta3 worked, RC1 didn't!
Yes, this is exactly the same behaviour as mine. I've also seen the exact same entries in the log file (tested this last night). Many, many of these entries spanning only a couple of seconds. Does anyone have any ideas I can try to rectify this problem?

Thanks, Peter

-Original Message-
From: Nicolas COUSSEMACQ [mailto:[EMAIL PROTECTED]
Sent: December 23, 2004 9:33 AM
To: pgsql-general@postgresql.org
Subject: 8.0 Beta3 worked, RC1 didn't!

I have the same problem! When I set up Postgres 8.0 Beta 4 on a Windows XP or 2003 Server, it works perfectly with the parameter listen_addresses set to '*' or localhost. I have been testing Beta5, RC1 and RC2 on my XP workstation and there is no problem, even if I accept external connections (listen_addresses = '*').

Then I tried to set up Beta5, RC1 or RC2 on a station with 2003 Server; I can only access the database when listen_addresses = localhost. If I set listen_addresses = '*', I have a connection problem in pgAdmin saying "Could not receive server response to SSL negotiation packet: Connection reset by peer (0x2746/10054)". It happens when I launch pgAdmin directly logged on the station, when I'm connected with remote access, and even from my XP workstation. The log file contains many lines such as these:

2004-12-23 16:55:17 FATAL: could not attach to proper memory at fixed address: shmget(key=5432001, addr=00DC) failed: Invalid argument
2004-12-23 16:55:17 FATAL: could not attach to proper memory at fixed address: shmget(key=5432001, addr=00DC) failed: Invalid argument
2004-12-23 16:55:17 LOG: background writer process (PID 680) exited with exit code 0
2004-12-23 16:55:17 LOG: terminating any other active server processes
2004-12-23 16:55:17 LOG: all server processes terminated; reinitializing

If I switch the listen_addresses parameter back to 'localhost', I can connect to the DB in pgAdmin from the server screen or remote access. Does this information help you?

"A. Mous" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> Hi all,
>
> I'm using psql 8.0.0 on a client's site who's running Win Server 2003.
> We've had him on beta 3 for some time, and no problems at all (yes,
> in a sense, he is a beta tester as well, but doesn't know it!). Today
> I tried to upgrade the db to RC1 and had some problems.
>
> Remote clients connect to this database, so I have to set
> listen_addresses = '*' in the postgresql.conf file. This is the only
> change to the config file. Doing this with RC1 and trying to connect
> locally through psql resulted in the following error message:
>
> "could not receive server response to SSL negotiation packet;
> connection reset by peer (0x2746/10054)"
>
> Removing the modified line in the config file resolved the problem
> (locally); however, no clients can connect! Beta 3 does not seem to
> have this issue, so we had to revert back to it for now.
>
> I would appreciate any ideas that some of you may have. Much thanks,
>
> -Peter
[GENERAL] Large Objects
I need to do some investigation into the way Postgres handles large objects for a major project involving large objects. My questions are:

* Can large objects be stored within a field value or must they be referenced by OID?
* Are large objects backed up in the normal way or does special action have to be taken?
* If the objects are restored or migrated will they retain their OID?
* If not, is there another means of referencing them that would be persistent through migrations?
* Is it practical/desirable to store files MIME-encoded inside a text field?
  * The obvious advantages:
    * definitely portable across migrations and backups
    * based on universal technology
    * easy to code
    * easy to reference and version-control
  * The obvious disadvantages:
    * slow, Slow, SLOW
    * significant increase in per-file storage requirements

Any help would be greatly appreciated. Hope I'm posting to the right list.

Dan
[GENERAL] vaccum db fail
Trying vacuumdb eventually displays this:

ERROR: catalog is missing 3 attribute(s) for relid 31457
vacuumdb: vacuum databasename failed

There is another thread which states this is a corrupted system catalog. Is this definitely the case? Does it mean I need to rebuild the database? Reinstall?

Thanks.
Re: [GENERAL] ISO_8859_8 encoding
Here are the results of pg_config and ldd. They both look correct. Do I need a link to libc.so.6 in my pgsql/lib directory? It isn't there on my other machine and that is working fine.

[EMAIL PROTECTED] /usr/local/pgsql/bin/pg_config --pkglibdir
/usr/local/pgsql/lib

[EMAIL PROTECTED] ldd /usr/local/pgsql/lib/utf8_and_iso8859.so
        libc.so.6 -> /lib/i686/libc.so.6
        /lib/ld-linux.so.2 -> /lib/ld-linux.so.2

[EMAIL PROTECTED] locate libc.so.6
/var/ftp/lib/libc.so.6
/lib/i686/libc.so.6
/lib/libc.so.6

[EMAIL PROTECTED] locate ld-linux.so.2
/var/ftp/lib/ld-linux.so.2
/lib/ld-linux.so.2

"Tom Lane" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> "Sim Zacks" <[EMAIL PROTECTED]> writes:
> > Error: Could not access file "$libdir/utf8_and_iso8859": No such
> > file or directory. The file utf8_and_iso8859.so is in the
> > /usr/local/pgsql/lib directory. Is there somewhere I have to assign
> > the $libdir variable or is this something else completely?
>
> It may be complaining about another shlib that that one depends on.
> What does "ldd" say when you run it on utf8_and_iso8859.so?
>
> Also, check "pg_config --pkglibdir" which is the definitive answer
> as to what $libdir means.
>
> regards, tom lane
Re: [GENERAL] possible to DELETE CASCADE?
Cool. Thanks for all the advice, guys. I'll just keep my script manually deleting dependencies, then. It gives me peace of mind. :-)
[GENERAL] Recursive update
Hi, I'm doing some PL/pgSQL programming at work and today I realized a small glitch in my application design. The following recreates my problem... I have two tables with triggers on each:

CREATE TABLE "public"."tableone" (
    "id" SERIAL,
    "columnone" VARCHAR(64),
    "columntwo" VARCHAR(64),
    "checkfield" BOOLEAN,
    CONSTRAINT "tableone_pkey" PRIMARY KEY("id")
) WITH OIDS;

CREATE TRIGGER "tableone_update" BEFORE UPDATE ON "public"."tableone"
    FOR EACH ROW EXECUTE PROCEDURE "public"."tableone_update"();

CREATE TABLE "public"."tabletwo" (
    "id" SERIAL,
    "name" VARCHAR(64),
    CONSTRAINT "tabletwo_pkey" PRIMARY KEY("id")
) WITH OIDS;

CREATE TRIGGER "tabletwo_insert" BEFORE INSERT ON "public"."tabletwo"
    FOR EACH ROW EXECUTE PROCEDURE "public"."tabletwo_insert"();

Then I have the trigger functions:

CREATE OR REPLACE FUNCTION "public"."tableone_update" () RETURNS trigger AS '
begin
    if ((new.columntwo != old.columntwo) OR (old.columntwo is null AND new.columntwo is not null)) then
        return new;
    end if;
    -- Call the "insert function".
    insert into tabletwo (name) values (''inserted by function insert_into_tabletwo'');
    return new;
end;
' LANGUAGE 'plpgsql' IMMUTABLE CALLED ON NULL INPUT SECURITY INVOKER;

CREATE OR REPLACE FUNCTION "public"."tabletwo_insert" () RETURNS trigger AS '
begin
    if exists (select * from tableone where (checkfield=false OR checkfield is null) and id=1) then
        update tableone set columntwo=''updated by insert in tabletwo'', checkfield=true WHERE id=1;
    end if;
    return new;
end;
' LANGUAGE 'plpgsql' IMMUTABLE CALLED ON NULL INPUT SECURITY INVOKER;

Now I do the following:

INSERT INTO tableone (id, columnone, columntwo, checkfield)
    VALUES (1, 'from initial insert', 'from initial insert', false);
UPDATE tableone SET columnone='updated by me' WHERE id=1;

The content of tableone is then:

 id |      columnone      |           columntwo           | checkfield
----+---------------------+-------------------------------+------------
  1 | from initial insert | updated by insert in tabletwo | true

I believe I'm having the same issue as in this thread:

http://groups-beta.google.com/group/comp.databases.postgresql.general/browse_thread/thread/5ead4260393ecd57/37c8b4a1b7562221?q=recursive+update+postgresql&_done=%2Fgroups%3Fq%3Drecursive+update+postgresql%26hl%3Den%26lr%3D%26client%3Dfirefox-a%26rls%3Dorg.mozilla:en-US:official%26sa%3DN%26tab%3Dwg%26&_doneTitle=Back+to+Search&&d#37c8b4a1b7562221

I'm trying to do an update on a table which results in another, subsequent update of the same table. Only the second (subsequent) update is actually executed on the table. I'm not sure I understand why this is not allowed, because I can see many cases where recursive updates would be a very nice thing to have available. Is this maybe a thing which should be raised with the PostgreSQL developer team?

Thanks in advance and happy new year!

--
Thomas Braad Toft
Re: [GENERAL] Making a varchar bigger
Specifically, I am looking at executing something like this:

UPDATE pg_attribute
   SET atttypmod = 16
 WHERE attname = 'x'
   AND attrelid = (SELECT pg_class.oid FROM pg_class WHERE relname = 'mytable');

On December 23, 2004 05:13 pm, Derik Barclay wrote:
> Hello Group,
>
> I've run into a bit of a problem: I have a varchar(6) field that I now need
> to have as a varchar(12).
>
> I am relatively new to postgres and am unsure how best to do this. What I
> would like to do is alter the meta-data directly, i.e. change the value of
> atttypmod in pg_attribute. Initial tests look good, however I am unsure how
> safe this is, or what else needs to be done. Does it need to be re-indexed?
>
> I have looked at the sections on renaming/changing columns; though that is
> a more solid option, I am dealing with millions of records and the downtime
> for the backfill is not acceptable.
>
> I found some old posts in here hinting at doing this, however the links in
> them are now dead.

-- Givex - http://www.givex.com/ Derik Barclay <[EMAIL PROTECTED]>, Systems Software Engineer +1 416 350 9660 +1 416 250 9661 (fax)

---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
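[Editor's note] For reference on the magic number — a sketch, not an endorsement of editing the catalogs by hand: for varchar(n), atttypmod stores n plus VARHDRSZ (4 bytes), so varchar(12) corresponds to atttypmod = 16, which matches the query above. If you do try this, wrapping it in a transaction at least lets you roll back a typo:

```sql
-- Catalog hack to widen varchar(6) -> varchar(12); table/column names are the
-- ones from this thread.  atttypmod for varchar(n) is n + 4 (VARHDRSZ), so
-- 12 + 4 = 16.  Widening is the only direction that is plausibly safe here:
-- every existing value already fits, so no data or index rewrite is implied.
BEGIN;
UPDATE pg_attribute
   SET atttypmod = 12 + 4
 WHERE attname  = 'x'
   AND attrelid = (SELECT oid FROM pg_class WHERE relname = 'mytable');
COMMIT;
```

Note that 8.0 also offers ALTER TABLE ... ALTER COLUMN x TYPE varchar(12), but that rewrites the whole table, which is exactly the downtime being avoided here.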
Re: [GENERAL] postgresql 8.0 rc1
"Prasad Duggineni" <[EMAIL PROTECTED]> writes:
> I do see the following error in /var/log/messages when I try to start my
> application. It was working fine in 7.4.6. I did upgrade from 7.4.6 to
> 8.0 RC1.
> "ERROR: unrecognized configuration parameter 'ksqo'"
> This is happening after I execute this statement
> "res = SQLAllocHandle(SQL_HANDLE_STMT, *phdbc, phstmt)"

ksqo is *long* gone. I think you need a newer version of the ODBC driver.

regards, tom lane

---(end of broadcast)--- TIP 8: explain analyze is your friend
[GENERAL] postgresql 8.0 rc1
I do see the following error in /var/log/messages when I try to start my application. It was working fine in 7.4.6. I did upgrade from 7.4.6 to 8.0 RC1.

"ERROR: unrecognized configuration parameter 'ksqo'"

This is happening after I execute this statement: res = SQLAllocHandle(SQL_HANDLE_STMT, *phdbc, phstmt)
Re: [GENERAL] possible to DELETE CASCADE?
On Thu, Dec 30, 2004 at 11:40:21 -0800, Miles Keaton <[EMAIL PROTECTED]> wrote:
> On Thu, 30 Dec 2004 11:10:38 -0800, I wrote:
> > Is it possible for a query to delete a record and all of its
> > foreign-key dependents?
>
> Sorry - to be more clear: I like having my foreign keys RESTRICT from
> this kind of cascading happening automatically or accidentally.
>
> So I'm looking for a query that could force it to happen, if truly intended.

Patient: Doctor, it hurts when I do this.
Doctor: Then stop doing that.

Maybe you should only allow a special account to be able to delete from the parent table and control access to that special account. Depending on what kind of accidents you are trying to prevent, this may help.

---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
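[Editor's note] A minimal sketch of the special-account idea above; the role name "client_admin" is invented for illustration, and "clients" is the parent table from this thread. The ON DELETE CASCADE constraints stay in place, but only the dedicated role can pull the trigger:

```sql
-- Hypothetical setup: ordinary application accounts can read and write
-- clients but cannot delete from it; only client_admin can, and those
-- deletes then cascade through the child tables as usual.
CREATE USER client_admin;
REVOKE DELETE ON clients FROM PUBLIC;
GRANT SELECT, INSERT, UPDATE ON clients TO PUBLIC;
GRANT DELETE ON clients TO client_admin;
```

Everyday sessions then cannot set off the cascade by accident; an intentional cascading delete is done by connecting as client_admin and issuing a single DELETE on the parent row.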
[GENERAL] syntax for inserting unicode character literal
What is the syntax for inserting a unicode character literal? I thought it would be something like '\u05D0', but that doesn't work. Any help would be appreciated! Thanks, Tim ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] possible to DELETE CASCADE?
On Thu, 2004-12-30 at 11:10 -0800, Miles Keaton wrote:
> Is it possible for a query to delete a record and all of its
> foreign-key dependents?
>
> I see DROP CASCADE, but not a DELETE CASCADE.
>
> What I'm trying to do:
> I have a "clients" table.
> I have many different tables that use the clients.id as a foreign key.
> When I delete a client, I want it to delete all records in those many
> different tables that reference this client.
>
> Right now I have my script passing many queries to delete them
> individually. ("delete from history where client_id=?; delete from
> payments where client_id=?" -- etc)
>
> Any shortcut way to do this?

You can use ON DELETE CASCADE when you create/alter the table. For example:

CREATE TABLE foo_type (
  id SERIAL PRIMARY KEY NOT NULL,
  name TEXT
);

INSERT INTO foo_type(name) VALUES ('type 1');
INSERT INTO foo_type(name) VALUES ('type 2');

CREATE TABLE foo (
  id SERIAL PRIMARY KEY NOT NULL,
  foo_type_id INT REFERENCES foo_type ON DELETE CASCADE,
  name TEXT
);

INSERT INTO foo (foo_type_id, name) VALUES (1, 'bar');
INSERT INTO foo (foo_type_id, name) VALUES (1, 'bar2');
INSERT INTO foo (foo_type_id, name) VALUES (2, 'bar3');

test=> SELECT * FROM foo;
 id | foo_type_id | name
----+-------------+------
  1 |           1 | bar
  2 |           1 | bar2
  3 |           2 | bar3
(3 rows)

test=> SELECT * FROM foo_type;
 id |  name
----+--------
  1 | type 1
  2 | type 2
(2 rows)

Now, I will test it:

test=> DELETE FROM foo_type WHERE id = 1;
DELETE 1
test=> SELECT * FROM foo;
 id | foo_type_id | name
----+-------------+------
  3 |           2 | bar3
(1 row)

-- /***
 * Robby Russell | Owner.Developer.Geek
 * PLANET ARGON | www.planetargon.com
 * Portland, OR | [EMAIL PROTECTED]
 * 503.351.4730 | blog.planetargon.com
 * PHP/PostgreSQL Hosting & Development
 *--- Now supporting PHP5 ---
 */

---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] possible to DELETE CASCADE?
On Thu, 30 Dec 2004 11:10:38 -0800, I wrote:
> Is it possible for a query to delete a record and all of its
> foreign-key dependents?

Sorry - to be more clear: I like having my foreign keys RESTRICT from this kind of cascading happening automatically or accidentally.

So I'm looking for a query that could force it to happen, if truly intended.

---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] possible to DELETE CASCADE?
Miles Keaton wrote:
> Is it possible for a query to delete a record and all of its foreign-key
> dependents? I see DROP CASCADE, but not a DELETE CASCADE.
>
> What I'm trying to do: I have a "clients" table. I have many different
> tables that use the clients.id as a foreign key. When I delete a client,
> I want it to delete all records in those many different tables that
> reference this client.
>
> Right now I have my script passing many queries to delete them
> individually. ("delete from history where client_id=?; delete from
> payments where client_id=?" -- etc)

You just have to use ON DELETE CASCADE on your foreign key definitions in all the tables which reference the client. See http://www.postgresql.org/docs/7.4/interactive/sql-createtable.html and look for FOREIGN KEY and ON DELETE CASCADE. That will have the effect you are looking for. If this is not enough, you will have to create a trigger for the scenario.

-- Thomas Braad Toft

---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
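[Editor's note] For tables that already exist, as in this thread, the foreign key has to be dropped and re-added with the cascade rule. A sketch, with the constraint name invented for illustration (the real name shows up under \d history in psql):

```sql
-- Hypothetical names: "history" is one of the child tables from the thread,
-- and "history_client_id_fkey" stands in for whatever the existing foreign
-- key constraint is actually called.
BEGIN;
ALTER TABLE history DROP CONSTRAINT history_client_id_fkey;
ALTER TABLE history ADD CONSTRAINT history_client_id_fkey
  FOREIGN KEY (client_id) REFERENCES clients (id) ON DELETE CASCADE;
COMMIT;
```

After doing the same for each child table, a single DELETE FROM clients WHERE id = ? removes the matching rows everywhere in one statement.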
[GENERAL] possible to DELETE CASCADE?
Is it possible for a query to delete a record and all of its foreign-key dependents? I see DROP CASCADE, but not a DELETE CASCADE. What I'm trying to do: I have a "clients" table. I have many different tables that use the clients.id as a foreign key. When I delete a client, I want it to delete all records in those many different tables that reference this client. Right now I have my script passing many queries to delete them individually. ("delete from history where client_id=?; delete from payments where client_id=?" -- etc) Any shortcut way to do this? ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] pg_dump and pgpool
On Thu, 2004-12-30 at 09:20, Tom Lane wrote:
> Scott Marlowe <[EMAIL PROTECTED]> writes:
> >> I don't think it's worth that price to support a fundamentally bogus
> >> approach to backup.
>
> > But it's not bogus. It allows me to compare two databases running under
> > a pgpool synchronous cluster and KNOW if there are inconsistencies in
> > data between them, so it is quite useful to me.
>
> As a data comparison tool it is certainly bogus. What about different
> row ordering between the two databases, for instance?

Apparently pgpool knows that different order is ok. Having three psql's open, one to the front end pgpool, one to each of the backends, I can insert data in different orders on each backend, select it on each, and get a different order, but from the front end it works:

on the MASTER database:

test=# \d
        List of relations
 Schema | Name | Type  |  Owner
--------+------+-------+----------
 public | test | table | postgres
(1 row)

test=# insert into test values (2);
INSERT 11839388 1
test=# insert into test values (1);
INSERT 11839389 1
test=# select * from test;
 id
----
  2
  1
(2 rows)

on the SLAVE database:

test=# \d
        List of relations
 Schema | Name | Type  |  Owner
--------+------+-------+----------
 public | test | table | postgres
(1 row)

test=# insert into test values (1);
INSERT 13612414 1
test=# insert into test values (2);
INSERT 13612415 1
test=# select * from test;
 id
----
  1
  2
(2 rows)

On the front end:

test=# select * from test;
 id
----
  2
  1
(2 rows)

Now I add a wrong row to the slave database:

test=# insert into test values (3);
INSERT 13612416 1

and I get this error from the front end:

test=# select * from test;
ERROR: kind mismatch between backends
HINT: check data consistency between master and secondary
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.

After deleting the row, things return to normal:

test=# delete from test where id=3;
DELETE 1

From the front end I get:

test=# select * from test;
 id
----
  2
  1
(2 rows)

> AFAICS this could only work if you were doing physical rather than
> logical replication (eg, shipping WAL logs) in which case the OIDs would
> be just as much in sync as everything else.

So, for me, the OIDs are the ONLY problem I'm getting here. Note that the application we're running on the front end only connects to the database with a single thread, and serializes in the intermediate layer (not my choice, but it seems to work pretty well so far...) so sequences also aren't an issue, as all the queries will go in one at a time.

> Basically my point is that you are proposing to do a lot of work in
> order to solve the first problem you are running up against, but that
> will only get you to the next problem. I'm not prepared to accept a
> significant increase in complexity and loss of maintainability in
> pg_dump in order to move one step closer to the dead end that you will
> certainly hit.

I'm certainly willing to do the vast majority of the work. As Greg, I think, mentioned, maybe a fresh start using the information_schema would make sense as a sort of non-pg-specific backup tool or something.

---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Update rule
You get infinite recursion because your ON UPDATE rule does another UPDATE, which of course fires the rule again; so no, it's not a bug. Also, your UPDATE touches almost the whole table, so it won't do what you had in mind in the first place. You should rather change the NEW row in your update so that NEW.dmodtar = current_date and NEW.umodtar = current_user, which means you'll want a trigger for this. Read the docs on CREATE TRIGGER and see the examples; I think there's one which looks like what you want.

> Hello !
>
> I wish to create a rule to record in each row who modified it and when. I wrote:
>
> CREATE OR REPLACE RULE tarif_upd AS ON UPDATE TO tarifs
>   DO ALSO UPDATE tarifs
>     SET dmodtar = current_date, umodtar = current_user
>     WHERE dmodtar <> current_date AND umodtar <> current_user;
>
> But I obtain an infinite recursion! It seems the WHERE clause is not being
> used? Is it a bug? How can I work around it...
>
> Best regards. Luc

---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
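[Editor's note] A sketch of the trigger version suggested above, using the table and column names from the thread; the function name is invented for illustration:

```sql
-- Hypothetical replacement for the tarif_upd rule: a BEFORE UPDATE trigger
-- stamps the row being written instead of issuing a second UPDATE, so there
-- is nothing to recurse on.
CREATE OR REPLACE FUNCTION tarifs_stamp() RETURNS trigger AS '
begin
  NEW.dmodtar := current_date;   -- when the row was modified
  NEW.umodtar := current_user;   -- who modified it
  return NEW;
end;
' LANGUAGE 'plpgsql';

CREATE TRIGGER tarif_upd BEFORE UPDATE ON tarifs
  FOR EACH ROW EXECUTE PROCEDURE tarifs_stamp();
```

Because the trigger only edits the row already in flight, every UPDATE gets stamped exactly once, with no WHERE-clause gymnastics.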
[GENERAL] Update rule
Hello !

I wish to create a rule to record in each row who modified it and when. I wrote:

CREATE OR REPLACE RULE tarif_upd AS ON UPDATE TO tarifs
  DO ALSO UPDATE tarifs
    SET dmodtar = current_date, umodtar = current_user
    WHERE dmodtar <> current_date AND umodtar <> current_user;

But I obtain an infinite recursion! It seems the WHERE clause is not being used? Is it a bug? How can I work around it...

Best regards. Luc
Re: [GENERAL] pg_dump and pgpool
On Thu, 2004-12-30 at 09:46, Tatsuo Ishii wrote:
> > On Wed, 2004-12-29 at 17:30, Tom Lane wrote:
> > > Scott Marlowe <[EMAIL PROTECTED]> writes:
> > > > On Wed, 2004-12-29 at 16:56, Tom Lane wrote:
> > > >> No, we'd be throwing more, and more complex, queries. Instead of a
> > > >> simple lookup there would be some kind of join, or at least a lookup
> > > >> that uses a multicolumn key.
> > >
> > > > I'm willing to bet the performance difference is less than noise.
> > >
> > > [ shrug... ] I don't have a good handle on that, and neither do you.
> > > What I am quite sure about though is that pg_dump would become internally
> > > a great deal messier and harder to maintain if it couldn't use OIDs.
> > > Look at the DumpableObject manipulations and ask yourself what you're
> > > going to do instead if you have to use a primary key that is of a
> > > different kind (different numbers of columns and datatypes) for each
> > > system catalog. Ugh.
> >
> > Wait, do you mean it's impossible to throw a single SQL query with a
> > proper join clause that USES OIDs but doesn't return them? Or that it's
> > impossible to throw a single query without joining on OIDs? I don't
> > mind joining on OIDs, I just don't want them crossing the connection is
> > all. And yes, it might be ugly, but I can't imagine it being
> > unmaintainable for some reason.
> >
> > > I don't think it's worth that price to support a fundamentally bogus
> > > approach to backup.
> >
> > But it's not bogus. It allows me to compare two databases running under
> > a pgpool synchronous cluster and KNOW if there are inconsistencies in
> > data between them, so it is quite useful to me.
> >
> > > IMHO you don't want extra layers of software in
> > > between pg_dump and the database --- each one just introduces another
> > > risk of getting a wrong backup. You've yet to explain what the
> > > *benefit* of putting pgpool in there is for this problem.
> >
> > Actually, it ensures that I get the right backup, because pgpool will
> > cause the backup to fail if there are any differences between the two
> > backend servers, thus telling me that I have an inconsistency.
> >
> > That's the primary reason I want this. The secondary reason, which I
> > can work around, is that I'm running the individual databases on
> > machines that only answer to the pgpool machine's IP, so remote backups
> > aren't possible, and only the pgpool machine would be capable of doing
> > the backups, but we have (like so many other companies) a centralized
> > backup server. I can always allow that machine to connect to the
> > database(s) to do backups, but my fear is that by allowing anything
> > other than pgpool to hit those backend databases they could be placed
> > out of sync with each other. Admittedly, a backup process shouldn't be
> > updating the database, so this, as I said, isn't really a big deal.
> > More of a mild kink really. As long as all access is happening through
> > pgpool, they should stay coherent to each other.
>
> Pgpool could be modified so that it has a "no SELECT replication mode",
> where pgpool runs SELECT only on the master server. I could do this if
> you think it's useful.
>
> However, the problem is that pg_dump is not only running SELECT but also
> modifying the database (counting up the OID counter), i.e. it creates
> temporary tables. Is this a problem for you?

Does it? I didn't know it used temp tables. It's not that big of a deal, and I'm certain I can work around it. I just really like the idea of a cluster of pg servers running synchronously behind a redirector and looking, for all the world, like one database. But I think it would take log shipping for it to work the way I'm envisioning. I'd much rather see work go into making pgpool run atop >2 servers than this exercise in (_very_) likely futility.

---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] pg_dump and pgpool
Scott Marlowe <[EMAIL PROTECTED]> writes:
> On Wed, 2004-12-29 at 23:12, Greg Stark wrote:
> > Scott Marlowe <[EMAIL PROTECTED]> writes:
> > > What's happening is that there are two databases behind pgpool, and each
> > > has managed to assign a different (set of) OID(s) to the table(s). So,
> > > when pg_dump asks for an OID, it gets two different ones.
> >
> > If pgpool is so good at maintaining consistency between databases how did
> > they end up with different OIDs?
>
> That's rather disingenuous, considering that OIDs are more of an
> internal artifact of the databases, while the USER data stored therein
> is what I, or anyone else, would consider the word consistency applies
> to. The fact that both databases have different OIDs for the same
> objects has nothing to do with the userland data being consistent or
> not.

It's not like the database uses random() to generate OIDs. To reach different OIDs you would have had to issue a different sequence of DDL statements. It's a bit of a pain since there's no sure way to resync the databases using DDL. But surely if you dumped one of the copies and restored a clean copy of the database on both machines they would end up with consistent OIDs?

Scott Marlowe <[EMAIL PROTECTED]> writes:
> No. I have a perfectly consistent database. It happens to reside on a
> cluster of two machines which have different internal ids assigned to
> the same objects, and when I throw bog-standard SQL at them, I get the
> same answer from both. That is consistent.

But you're not throwing BOG-standard SQL at them, you're running pg_dump against them, which is using non-BOG-standard SQL. If you want pg_dump to work against them I think you need to keep them consistent at a lower level.

> Users are consistently warned to never use OIDs as PKs, yet PostgreSQL
> the database does just that. My data is coherent. I'll explain more in
> my reply to Tom Lane...

If Postgres used sequences then your sequences would be out of sync. The point is that at the level pg_dump is working the databases really are inconsistent. Perhaps one day pg_dump could be reimplemented entirely in terms of information_schema, where the inconsistencies happen to be hidden. But I doubt it can be today. And I suspect you could arrive at an inconsistent information_schema if you use different DDL anyways.

-- greg

---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] pg_dump and pgpool
> On Wed, 2004-12-29 at 17:30, Tom Lane wrote:
> > Scott Marlowe <[EMAIL PROTECTED]> writes:
> > > On Wed, 2004-12-29 at 16:56, Tom Lane wrote:
> > >> No, we'd be throwing more, and more complex, queries. Instead of a
> > >> simple lookup there would be some kind of join, or at least a lookup
> > >> that uses a multicolumn key.
> >
> > > I'm willing to bet the performance difference is less than noise.
> >
> > [ shrug... ] I don't have a good handle on that, and neither do you.
> > What I am quite sure about though is that pg_dump would become internally
> > a great deal messier and harder to maintain if it couldn't use OIDs.
> > Look at the DumpableObject manipulations and ask yourself what you're
> > going to do instead if you have to use a primary key that is of a
> > different kind (different numbers of columns and datatypes) for each
> > system catalog. Ugh.
>
> Wait, do you mean it's impossible to throw a single SQL query with a
> proper join clause that USES OIDs but doesn't return them? Or that it's
> impossible to throw a single query without joining on OIDs? I don't
> mind joining on OIDs, I just don't want them crossing the connection is
> all. And yes, it might be ugly, but I can't imagine it being
> unmaintainable for some reason.
>
> > I don't think it's worth that price to support a fundamentally bogus
> > approach to backup.
>
> But it's not bogus. It allows me to compare two databases running under
> a pgpool synchronous cluster and KNOW if there are inconsistencies in
> data between them, so it is quite useful to me.
>
> > IMHO you don't want extra layers of software in
> > between pg_dump and the database --- each one just introduces another
> > risk of getting a wrong backup. You've yet to explain what the
> > *benefit* of putting pgpool in there is for this problem.
>
> Actually, it ensures that I get the right backup, because pgpool will
> cause the backup to fail if there are any differences between the two
> backend servers, thus telling me that I have an inconsistency.
>
> That's the primary reason I want this. The secondary reason, which I
> can work around, is that I'm running the individual databases on
> machines that only answer to the pgpool machine's IP, so remote backups
> aren't possible, and only the pgpool machine would be capable of doing
> the backups, but we have (like so many other companies) a centralized
> backup server. I can always allow that machine to connect to the
> database(s) to do backups, but my fear is that by allowing anything
> other than pgpool to hit those backend databases they could be placed
> out of sync with each other. Admittedly, a backup process shouldn't be
> updating the database, so this, as I said, isn't really a big deal.
> More of a mild kink really. As long as all access is happening through
> pgpool, they should stay coherent to each other.

Pgpool could be modified so that it has a "no SELECT replication mode", where pgpool runs SELECT only on the master server. I could do this if you think it's useful.

However, the problem is that pg_dump is not only running SELECT but also modifying the database (counting up the OID counter), i.e. it creates temporary tables. Is this a problem for you?

-- Tatsuo Ishii

---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] pg_dump and pgpool
Scott Marlowe <[EMAIL PROTECTED]> writes:
>> I don't think it's worth that price to support a fundamentally bogus
>> approach to backup.

> But it's not bogus. It allows me to compare two databases running under
> a pgpool synchronous cluster and KNOW if there are inconsistencies in
> data between them, so it is quite useful to me.

As a data comparison tool it is certainly bogus. What about different row ordering between the two databases, for instance?

AFAICS this could only work if you were doing physical rather than logical replication (eg, shipping WAL logs), in which case the OIDs would be just as much in sync as everything else.

Basically my point is that you are proposing to do a lot of work in order to solve the first problem you are running up against, but that will only get you to the next problem. I'm not prepared to accept a significant increase in complexity and loss of maintainability in pg_dump in order to move one step closer to the dead end that you will certainly hit.

regards, tom lane

---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] pg_dump and pgpool
On Wed, 2004-12-29 at 17:30, Tom Lane wrote:
> Scott Marlowe <[EMAIL PROTECTED]> writes:
> > On Wed, 2004-12-29 at 16:56, Tom Lane wrote:
> >> No, we'd be throwing more, and more complex, queries. Instead of a
> >> simple lookup there would be some kind of join, or at least a lookup
> >> that uses a multicolumn key.
>
> > I'm willing to bet the performance difference is less than noise.
>
> [ shrug... ] I don't have a good handle on that, and neither do you.
> What I am quite sure about though is that pg_dump would become internally
> a great deal messier and harder to maintain if it couldn't use OIDs.
> Look at the DumpableObject manipulations and ask yourself what you're
> going to do instead if you have to use a primary key that is of a
> different kind (different numbers of columns and datatypes) for each
> system catalog. Ugh.

Wait, do you mean it's impossible to throw a single SQL query with a proper join clause that USES OIDs but doesn't return them? Or that it's impossible to throw a single query without joining on OIDs? I don't mind joining on OIDs, I just don't want them crossing the connection is all. And yes, it might be ugly, but I can't imagine it being unmaintainable for some reason.

> I don't think it's worth that price to support a fundamentally bogus
> approach to backup.

But it's not bogus. It allows me to compare two databases running under a pgpool synchronous cluster and KNOW if there are inconsistencies in data between them, so it is quite useful to me.

> IMHO you don't want extra layers of software in
> between pg_dump and the database --- each one just introduces another
> risk of getting a wrong backup. You've yet to explain what the
> *benefit* of putting pgpool in there is for this problem.

Actually, it ensures that I get the right backup, because pgpool will cause the backup to fail if there are any differences between the two backend servers, thus telling me that I have an inconsistency. That's the primary reason I want this.

The secondary reason, which I can work around, is that I'm running the individual databases on machines that only answer to the pgpool machine's IP, so remote backups aren't possible and only the pgpool machine would be capable of doing the backups, but we have (like so many other companies) a centralized backup server. I can always allow that machine to connect to the database(s) to do backups, but my fear is that by allowing anything other than pgpool to hit those backend databases they could be placed out of sync with each other. Admittedly, a backup process shouldn't be updating the database, so this, as I said, isn't really a big deal. More of a mild kink really. As long as all access is happening through pgpool, they should stay coherent to each other.

---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] pg_dump and pgpool
On Wed, 2004-12-29 at 23:12, Greg Stark wrote:
> Scott Marlowe <[EMAIL PROTECTED]> writes:
>
> > What's happening is that there are two databases behind pgpool, and each
> > has managed to assign a different (set of) OID(s) to the table(s). So,
> > when pg_dump asks for an OID, it gets two different ones.
>
> If pgpool is so good at maintaining consistency between databases how did
> they end up with different OIDs?

That's rather disingenuous, considering that OIDs are more of an internal artifact of the databases, while the USER data stored therein is what I, or anyone else, would consider the word consistency applies to. The fact that both databases have different OIDs for the same objects has nothing to do with the userland data being consistent or not.

> It seems you really do have inconsistent databases and are asking for pg_dump
> to be robust against that.

No. I have a perfectly consistent database. It happens to reside on a cluster of two machines which have different internal ids assigned to the same objects, and when I throw bog-standard SQL at them, I get the same answer from both. That is consistent. The fact that PostgreSQL has different OIDs underneath is an implementation quirk that I, as a user, shouldn't really have to worry about or even notice. IMHO.

> Wouldn't hiding the
> inconsistency only be doing you a disservice?

If they were inconsistent, then certainly it would. But again, they're NOT inconsistent. You've built your argument on a false premise.

> I think you should be trying to figure out why the databases are inconsistent
> and working to figure out what you have to change to avoid whatever actions
> caused that.

I shouldn't have to care what the OIDs used internally are. Users are consistently warned to never use OIDs as PKs, yet PostgreSQL the database does just that. My data is coherent. I'll explain more in my reply to Tom Lane...

---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faqs/FAQ.html