Re: [ADMIN] to_timestamp error between postgres version 8.3 and 9.2
Technical Doubts online.technicaldou...@gmail.com writes:
> insert into registers_info (regid,transid,regdate,canceldate) values ('reg1',1,to_timestamp('2013-07-24','yyyy-MM-dd hh24:mi:ss')::TIMESTAMP,to_timestamp('null','yyyy-MM-dd hh24:mi:ss')::TIMESTAMP);
> The above query works fine in Postgres 8.3. But when the same query is executed in Postgres 9.2, it throws the error below:
> ERROR: invalid value "null" for "yyyy"
> Detail: Value must be an integer.

Yup. It was never intended that 'null' would be valid input for to_timestamp, and that's thrown an error since 8.4. See thread here: http://www.postgresql.org/message-id/37ed240d0808291839t21e19956mdfbdc80aeb1b3...@mail.gmail.com

A quick test says that 8.3 and older did this:

    regression=# select to_timestamp('null','yyyy-MM-dd hh24:mi:ss');
           to_timestamp
    ---------------------------
     0001-01-01 00:00:00-05 BC
    (1 row)

which can hardly be considered a sane interpretation of 'null', even if we wanted to accept that input.

regards, tom lane

-- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
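A way to keep such inserts working on 8.4+ (a sketch, not from the thread) is to map the literal string 'null' to a real SQL NULL before it reaches to_timestamp; the table and values below are the ones from the question:

```sql
-- NULLIF turns the literal string 'null' into a SQL NULL, and
-- to_timestamp(NULL, ...) yields NULL instead of raising
-- "invalid value" on 8.4 and later.
INSERT INTO registers_info (regid, transid, regdate, canceldate)
VALUES ('reg1', 1,
        to_timestamp('2013-07-24', 'yyyy-MM-dd hh24:mi:ss')::timestamp,
        to_timestamp(NULLIF('null', 'null'), 'yyyy-MM-dd hh24:mi:ss')::timestamp);
```

In an application, the NULLIF argument would be the incoming string parameter rather than a constant.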
Re: [ADMIN] Getting Out of memory errors on server 9.1.9
Brian Wong bw...@imageworks.com writes:
> I'm getting an Out of memory error on a 9.1.9 server. What do these lines in the log mean? ~ Brian

It's a memory usage map. Your problem is evidently here:

    MessageContext: 3046113280 total in 374 blocks; 24584 free (5 chunks); 3046088696 used

As far as I recall at the moment, MessageContext is used to parse, analyze, and possibly to plan an incoming query. So the short answer is that you're trying to run a query that's too long and/or too complicated. We'd need a lot more details before suggesting how you might work around that.

regards, tom lane
Re: [ADMIN] After upgrading from 9.1.1 to 9.1.9, pgadmin's server status window gives error
Brian Wong bw...@imageworks.com writes:
> Any one has any clue how to fix this? ~ Brian

Well, you don't have adminpack installed in two out of those three databases. Does it show up in the pg_available_extensions view? If so, you just need CREATE EXTENSION adminpack. If not, you probably forgot to install the postgresql-contrib package (or local equivalent).

regards, tom lane
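The check-and-install sequence described above can be sketched like this (run in each database that needs the extension, as a superuser):

```sql
-- Is adminpack available on this server at all?
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name = 'adminpack';

-- If the row is there but installed_version is NULL, install it:
CREATE EXTENSION adminpack;
```

If the SELECT returns no row, the contrib package is missing from the server installation and must be added first.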
Re: [ADMIN] WTF? 9.2.4 Logs have the wrong day of the week?
Scott Whitney sc...@journyx.com writes:
> Everything you need to see is right here:

Uh, everything except the log_timezone setting. I believe log files will be named according to the local time in the log_timezone at the instant they're created.

regards, tom lane
Re: [ADMIN] vacuum freeze performance, wraparound issues
Natalie Wenz nataliew...@ebureau.com writes:
> ... With the speed postgres is capable of, and the ever-falling prices of storage making larger, faster databases possible, has the possibility of changing the transaction id to a 64-bit (or even 128-bit!) value been considered?

Not terribly seriously --- the penalties from making row headers 8 bytes bigger have always seemed to outweigh the advantages. (128 bits is right out; we don't even have 128-bit LSNs.)

We'd probably take a patch to make 64-bit XIDs available as a compile-time option, if someone wanted to do the legwork to write and test it. But let me ask you this: if such an option existed, would you be willing to dump and reload your database to take advantage of it? The conversion costs of changing row header format seem like they'd discourage exactly those people whom such a feature could help.

regards, tom lane
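For keeping ahead of the 32-bit XID horizon that motivates this question, a standard monitoring query (a sketch, not from the thread) is to watch the age of each database's datfrozenxid:

```sql
-- How far each database is from its last full freeze; when age()
-- approaches autovacuum_freeze_max_age (default 200 million), an
-- anti-wraparound autovacuum will be forced.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```

Running scheduled VACUUM FREEZE during quiet periods keeps these ages low so the forced wraparound vacuums never fire at a bad time.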
Re: [ADMIN] 0-byte 2PC File
Vincent Lau vincent@rentrakmail.com writes:
> We are running 8.4.13 on this particular database and recently we have been getting empty two phase commit state files inside the pg_twophase directory. These prepared transactions can't be rolled back due to the empty state file. Here is an example of the error message.
> ERROR: two-phase state file for transaction 121421120 is corrupt
> The only way to get rid of this is to restart the database, which is just a bandaid solution at this point.

Is there a waiting transaction corresponding to that file? If not you could just manually remove the file, I think. Not that that isn't a band-aid, but it'd be less painful than a restart.

Do you see any error messages logged at the time the transaction PREPARE is attempted?

regards, tom lane
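The "is there a waiting transaction" check can be sketched as a catalog query (the XID below is the one from the error message; column names per the pg_prepared_xacts view):

```sql
-- Does any prepared transaction correspond to the corrupt state file?
-- If this returns no row, the on-disk file is orphaned.
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
WHERE transaction = '121421120';
```

Only if no matching row exists would manually removing the pg_twophase file be a candidate, per the advice above.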
Re: [ADMIN] Dump/Reload pg_statistic to cut time from pg_upgrade?
Jerry Sievers gsiever...@comcast.net writes:
> Kevin Grittner kgri...@ymail.com writes:
>> Jerry Sievers gsiever...@comcast.net wrote:
>>> Planning to pg_upgrade some large (3TB) clusters using hard link method. Run time for the upgrade itself takes around 5 minutes. Unfortunately the post-upgrade analyze of the entire cluster is going to take a minimum of 1.5 hours running several threads to analyze all tables. This was measured in an R&D environment.

At least for some combinations of source and destination server versions, it seems like it ought to be possible for pg_upgrade to just move the old cluster's pg_statistic tables over to the new, as though they were user data. pg_upgrade takes pains to preserve relation OIDs and attnums, so the key values should be compatible. Except in releases where we've added physical columns to pg_statistic or made a non-backward-compatible redefinition of statistics meanings, it seems like this should Just Work. In cases where it doesn't work, pg_dump and reload of that table would not work either (even without the anyarray problem).

regards, tom lane
Re: [ADMIN] Ubuntu Precise and auto startup of pg databases
Tony Dare wadedare4...@comcast.net writes:
> I don't know if anyone else has had this problem, but I found that the /etc/init.d/postgresql script simply would not start up my databases after a reboot, though it worked fine just after creating the database and getting all the config files in the expected places. Was able to down and up the dbs all day long, if I wished. Then I rebooted and there were no databases started. Executing /etc/init.d/postgresql start 9.2 resulted in No database clusters on the system or suchlike. I tracked the problem down to the script calling on pg_lsclusters, which in turn looks for a /var/run/postgresql directory. All well and good, except that /var/run/postgresql is on a tmpfs filesystem and is wiped out after a reboot. (Q: should that even be on a tmpfs?)

You should complain about that to the Ubuntu packager (ie, file an Ubuntu bug). It's unlikely that that person reads this list ...

regards, tom lane
Re: [ADMIN] exceeded MAX_ALLOCATED_DESCS while trying to open file
steve.tout...@inspq.qc.ca writes:
> Using postgres 9.2, I created a FOREIGN TABLE on a csv file. It works. But I need to change the structure of the data, and I created several sql selects using union, like: select ... from <foreign table> where ... union select ... from <foreign table> where ... union ... It works for the first 10 selects but then I get:
> ERROR: exceeded MAX_ALLOCATED_DESCS while trying to open file /srv/www/data/antidote/registre.csv

FYI, I've committed some changes that will relax this limit in future releases.

regards, tom lane
Re: [ADMIN] exceeded MAX_ALLOCATED_DESCS while trying to open file
Alvaro Herrera alvhe...@2ndquadrant.com writes:
> k...@rice.edu wrote:
>> Each SELECT opens your FOREIGN TABLE so you are opening it 10 times. This is hardcoded by a #define in ./src/backend/storage/file/fd.c during the build so you would need to recompile the software yourself to change it. Can you re-write your query using a single SELECT or possibly read the data into a temporary table for processing?
> To me this sounds like a bug in whatever FDW is being used.

Yeah, we need to fix file_fdw so that it's using virtual rather than physical file descriptors ...

regards, tom lane
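The temp-table workaround suggested above can be sketched as follows (the foreign-table and column names here are hypothetical, invented for illustration):

```sql
-- Read the CSV-backed foreign table once into a temp table, so the many
-- UNION branches reopen the temp table instead of the CSV file, and the
-- per-query file-descriptor limit is never hit.
CREATE TEMP TABLE registre_tmp AS
SELECT * FROM registre;   -- hypothetical foreign table over registre.csv

SELECT nom FROM registre_tmp WHERE region = 'A'
UNION
SELECT nom FROM registre_tmp WHERE region = 'B';
```

The temp table is dropped automatically at the end of the session.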
Re: [ADMIN] Transaction ID wrap limit is log entries
Armand du Plessis a...@bank.io writes:
> The autovacuum completed (after many hours) however it didn't seem to have frozen any old pages as it just kicks off again right away with the same reason (VACUUM ANALYZE public.messages (to prevent wraparound))

I suspect it's unable to move the table's min XID forward at all because there is some old open transaction preventing cleanup of very old dead tuples. Look for old xact_start times in pg_stat_activity. Also look for old entries in pg_prepared_xacts. If you find any, get rid of them.

regards, tom lane
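The two checks suggested above can be sketched like this (column names per 9.2's pg_stat_activity; pre-9.2 uses procpid and current_query instead of pid and query):

```sql
-- Oldest open transactions: anything with an ancient xact_start is
-- holding back the cleanup horizon for every table in the cluster.
SELECT pid, xact_start, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;

-- Forgotten prepared transactions pin the horizon the same way.
SELECT gid, prepared, owner FROM pg_prepared_xacts ORDER BY prepared;
```

Offenders found this way can be ended with pg_terminate_backend(pid) or, for prepared transactions, ROLLBACK PREPARED 'gid'.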
Re: [ADMIN] top posting?
Craig James cja...@emolecules.com writes:
> I've been participating in newsgroups since UUCP days, and I've never encountered a group before that encouraged bottom posting. Bottom posting has traditionally been considered rude -- it forces readers to scroll, often through pages and pages of text, to see a few lines of original material.

Quoting entire messages is rude, IMO, no matter where you attach your comments. What is respectful of your readers' time is to quote just enough to remind them what you're responding to. And once you do that, it's more natural to append your responses after what you're responding to. I'm aware that there are a whole lot of people nowadays who don't get this element of email etiquette, or who use tools that make it hard to do things that way (suggesting that the tool authors don't get it either). But that's what's been considered good style for about three decades, in my book.

> The most efficient strategy, one that respects other members' time, is to briefly summarize your point at the TOP of a posting, then to *briefly* quote only the relevant parts of the post to which you are replying, and bottom-post after the quoted text.

I think we're on the same page, actually, except for the brief-summary bit. I'm not sure that a brief summary without context is really all that efficient or intelligible. It's probably a tenable approach if there's only a few threads going on at a time, but how often is that a good assumption? I guess in an ideal world the Subject: line would provide enough context ... but people aren't terribly good about picking good subjects to start with, and they're positively bad about adjusting the Subject: when the thread diverges into sub-topics.

regards, tom lane
Re: [ADMIN] Simultaneous index creates on different schemas cause deadlock?
Paul Hinze paul.t.hi...@gmail.com writes:
> [ multiple CREATE INDEX CONCURRENTLY commands will deadlock with each other ]

Hm. I guess the reason nobody noticed this before now is that generally the idea with CREATE INDEX CONCURRENTLY is to minimize the impact on system load, hence you wouldn't do more than one at a time. Still, it's surely a POLA violation that you *can't* do more than one at a time.

The cause is that each one will wait for all older snapshots to be gone --- and it does that before dropping its own snapshot, so that the other ones will see it as something to be waited out too.

Since we know that C.I.C. executes in its own transaction, and there can't be more than one on the same table due to locking, it seems to me that it'd be safe to drop our own snapshot before waiting for other xacts to end. That is, we could just rearrange the last few steps in DefineIndex(), taking care to save snapshot->xmin before we destroy the snapshot so that we still have that value to pass to GetCurrentVirtualXIDs(). Anybody see a flaw in that solution?

regards, tom lane
Re: [HACKERS] [ADMIN] Simultaneous index creates on different schemas cause deadlock?
Andres Freund and...@2ndquadrant.com writes:
> On 2013-04-25 13:17:31 -0400, Tom Lane wrote:
>> Since we know that C.I.C. executes in its own transaction, and there can't be more than one on the same table due to locking, it seems to me that it'd be safe to drop our own snapshot before waiting for other xacts to end. That is, we could just rearrange the last few steps in DefineIndex(), taking care to save snapshot->xmin before we destroy the snapshot so that we still have that value to pass to GetCurrentVirtualXIDs(). Anybody see a flaw in that solution?
> Except that it still will unnecessarily wait for other CICs, just not deadlock, I don't see a problem. We could have a PROC_IN_CIC flag or something so we can ignore other index creations, but I am not sure if it's worth the complication.

I'm not sure it's a good idea to ignore other CICs altogether --- they could be executing user-defined index functions that do strange things like consult other tables. Since this seems to me to be a bit outside the intended use-case for CIC anyway, I think it's good enough if they just don't deadlock.

regards, tom lane
Re: [HACKERS] [ADMIN] Simultaneous index creates on different schemas cause deadlock?
anara...@anarazel.de and...@anarazel.de writes:
> I don't have access to the code ATM and I wonder whether DROP CONCURRENTLY has a similar problem? Depends a bit on how the waiting is done...

It's not a problem --- that code doesn't depend on waiting for snapshots to expire, it just checks for other sessions holding locks on the target table. (I also did some experimental testing to verify this.)

regards, tom lane
Re: [ADMIN] Pb migrating database from Postgres 8.1 to 8.4
chanh.t...@free.fr writes:
> I'm facing pbs trying to migrate my current database from Postgres 8.1 to 8.4.
> pg_restore: [archiver (db)] could not execute query: ERROR: could not find function gtsvector_in in file /usr/lib64/pgsql/tsearch2.so

Looks like you're missing the shared library that underlies tsearch2. The other errors seem to be consequences of this one.

regards, tom lane
Re: [ADMIN] Pb migrating database from Postgres 8.1 to 8.4
Robert Treat r...@xzilla.net writes:
> On Fri, Apr 19, 2013 at 10:19 AM, Tom Lane t...@sss.pgh.pa.us wrote:
>> chanh.t...@free.fr writes:
>>> I'm facing pbs trying to migrate my current database from Postgres 8.1 to 8.4.
>>> pg_restore: [archiver (db)] could not execute query: ERROR: could not find function gtsvector_in in file /usr/lib64/pgsql/tsearch2.so
>> Looks like you're missing the shared library that underlies tsearch2. The other errors seem to be consequences of this one.
> It might be cleaner to try and remove the tsearch bits from your schema dump and get a fresh install of tsearch by itself before loading in your own schema.

Actually, on second look, I think this is expected behavior when trying to update from pre-8.3 text search --- that whole conversion was none too clean. Note the warning in the tsearch2 module documentation about

    3. Load the dump data. There will be quite a few errors reported due to failure to recreate the original tsearch2 objects. These errors can be ignored, but this means you cannot restore the dump in a single transaction (eg, you cannot use pg_restore's -1 switch).

If you weren't actually using the tsearch2 features before, you might be best advised to uninstall tsearch2 from the old database before you convert. If you were, please read the documentation about text search conversion carefully --- both http://www.postgresql.org/docs/8.4/static/textsearch-migration.html and the contrib/tsearch2 page.

regards, tom lane
Re: [ADMIN] Wall replay
Keith Ouellette keith.ouelle...@airgas.com writes:
> I am relatively new to PostgreSQL. I have two servers in a Master/Slave relationship using WAL shipping as the replication method. I have what I hope is an easy question. If the Master goes down before the slave is promoted, there may be a case where a log file has not been shipped to the slave yet. If the new Master starts to process new transactions, is there a way to have it process the unshipped WAL files if they are copied to it? I do not see that case in the documentation.

You'd have to ship the remaining WAL over *before* the slave has come up live, else it's too late.

If you're concerned about gaps of that sort, though, you should not be using log file shipping at all. Set up streaming replication instead. (If the PG version you're using hasn't got that, high time to move to something newer.)

regards, tom lane
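A minimal streaming-replication setup of that era (9.0-9.2) can be sketched like this; the hostname and role name are hypothetical, and values are illustrative rather than recommendations:

```
# On the master, postgresql.conf:
wal_level = hot_standby        # emit enough WAL for a standby
max_wal_senders = 3            # allow streaming connections

# On the standby, recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
```

With streaming in place, the standby pulls WAL continuously over a libpq connection instead of waiting for whole 16MB segment files, so the unshipped-segment gap described above largely disappears.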
Re: [ADMIN] after 9.2.4 patch vacuumdb -avz not analyzing all tables
Mike Broers mbro...@gmail.com writes:
> After patching to 9.2.4 I am noticing some mysterious behavior in my nightly vacuumdb cron job. I have been running vacuumdb -avz nightly for a while now, and have a script that tells me the next day if all the tables in pg_stat_user_tables have been vacuumed and analyzed in the last 24 hours. Since the patch some tables do not seem to be getting analyzed. This did not happen after I patched from earlier versions of 9.2 like 9.2.1 to 9.2.2, 9.2.3. I pipe the output of the vacuumdb command to a log file, and there don't appear to be any errors, it's just not analyzing all the tables as it did before. Some tables are still being analyzed ok. It's vacuuming all but a few tables that I know are being blocked by idle in transaction sessions.

[ pokes around ... ] You certain 9.2.3 didn't do this too? This appears to be an intentional behavior of the 9.2.3 patch that made it cancel truncation when there were conflicting lock requests:

    /*
     * Report results to the stats collector, too. An early terminated
     * lazy_truncate_heap attempt suppresses the message and also cancels the
     * execution of ANALYZE, if that was ordered.
     */
    if (!vacrelstats->lock_waiter_detected)
        pgstat_report_vacuum(RelationGetRelid(onerel),
                             onerel->rd_rel->relisshared,
                             new_rel_tuples);
    else
        vacstmt->options &= ~VACOPT_ANALYZE;

However I've got to say that both of those side-effects of exclusive-lock abandonment seem absolutely brain dead now that I see them. Why would we not bother to tell the stats collector what we've done? Why would we think we should not do ANALYZE when we were told to? Would someone care to step forward and defend this behavior? Because it's not going to be there very long otherwise.

regards, tom lane
Re: [ADMIN] Invalid SQL not rejected?
Craig James cja...@emolecules.com writes:
> Hmmm the subselect is invalid, but not rejected. The outer select returns every row in the customer_order_matches table. This seems pretty wrong. This is PG 9.2.1 running on Ubuntu.
>
>     db=> select count(1) from customer_order_matches where customer_order_item_id in (select customer_order_item_id from customer_order where customer_order_id = 105862140);
>      count
>     -------
>      36180
>     (1 row)

Unfortunately, if customer_order_item_id doesn't exist in customer_order, that's still a perfectly valid outer reference to customer_order_matches.customer_order_item_id. People get confused by that kind of thing regularly, but it's required by spec to work like this.

regards, tom lane
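The standard defensive habit against this pitfall is to qualify every column in a subquery with a table alias, so a typo fails loudly instead of silently becoming an outer reference. A sketch, using the tables from the question:

```sql
-- With explicit aliases, a column that does not exist in customer_order
-- raises "column o.customer_order_item_id does not exist" rather than
-- resolving to the outer table and matching every row.
SELECT count(1)
FROM customer_order_matches m
WHERE m.customer_order_item_id IN
      (SELECT o.customer_order_item_id
       FROM customer_order o
       WHERE o.customer_order_id = 105862140);
```

If the inner column really doesn't exist, the error pinpoints the mistake immediately.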
Re: [HACKERS] [ADMIN] after 9.2.4 patch vacuumdb -avz not analyzing all tables
Jeff Janes jeff.ja...@gmail.com writes:
> I believe the rationale was so that an autovacuum would still look like it was needed, and get fired again the next naptime, so that it could continue with the truncation attempts. (Rather than waiting for 20% turnover in the table before trying again).
> I'm not convinced by this argument. If the DBA is desperate to get the space back, they can go do vacuum full.

Well, that's why I think the lock abandonment shouldn't apply to manual plain vacuum. You shouldn't need to do a vacuum full for that; that'd be a huge increase in the cost, not to mention that it'd transiently require twice the disk space, hardly a good thing if you're short.

regards, tom lane
Re: [ADMIN] slave restarts with kill -9 coming from somewhere, or nowhere
Bert bier...@gmail.com writes:
> These are my memory settings:
> work_mem = 4GB
> How is it possible that one connection (query) uses all the ram? And how can I avoid it?

Uh ... don't do the above. work_mem is the allowed memory consumption per query step, ie per hash or sort operation. A complex query can easily use multiples of work_mem.

regards, tom lane
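A common pattern (a sketch, not from the thread; the 1GB figure is illustrative) is to keep the server-wide work_mem modest and raise it only for the session or transaction that actually runs the big sort or hash:

```sql
-- Session-local override; reverts when the connection closes.
SET work_mem = '1GB';

-- Or scope the override to a single transaction:
BEGIN;
SET LOCAL work_mem = '1GB';
-- ... run the one big reporting query here ...
COMMIT;   -- work_mem reverts to the server default
```

Since work_mem is charged per sort/hash node and per backend, the safe server-wide value is roughly available RAM divided by (max_connections times the number of such nodes a typical query plan contains).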
Re: [ADMIN] upgrading postgresql84 package from rhel-x86_64-server-5 repo
Devrim Gündüz dev...@gunduz.org writes:
> http://yum.postgresql.org/8.4/redhat/rhel-5Server-x86_64/ I can see 8.4.17 there. Maybe you checked somewhere else?

I suppose he's asking about the official RHEL packages, which indeed are only at 8.4.13. The right way to make something happen there would be to pester his Red Hat support contact. (Although really, since RHEL5 is now in production phase 2 which means it only gets critical updates, nothing is very likely to happen until a security bug comes along that RH's security team deem sufficiently interesting. This morning's releases aren't likely to get that done, since the high-dollar problem isn't in 8.4. If you want newer packages it's probably time to move to RHEL6.)

regards, tom lane
Re: [ADMIN] Trouble connecting locally after 9.2.4 upgrade
Campbell, Lance la...@illinois.edu writes:
> Error: psql: symbol lookup error: psql: undefined symbol: PQconnectdbParams

This looks like psql is picking up a rather ancient version of libpq.so. Applying ldd or local equivalent to the psql executable would help you figure out where it's getting that library from. Bottom line though is probably that you forgot to worry about rpath or something similar while configuring the new build.

regards, tom lane
Re: [ADMIN] Trouble connecting locally after 9.2.4 upgrade
Campbell, Lance la...@illinois.edu writes:
> I figured it all out. When I ran the configuration script I told it to put the code in pgsql-9.2.4. Then after the code installed I moved it to pgsql-9.2. Then the install could not find the proper libraries. So it used the system one. I reinstalled but this time with the target directory of pgsql-9.2. It worked fine. I don't need the environment variable LD_LIBRARY_PATH now.

Hah. Right, because configure would have set things up to build psql etc with rpaths pointing to the lib directory under the installation directory it was told to use. So it'd work until you moved the lib directory. Moral: be careful about relocating installed trees.

regards, tom lane
Re: [ADMIN] slave restarts with kill -9 coming from somewhere, or nowhere
Bert bier...@gmail.com writes:
> I'm running the latest postgres version (9.2.3), and today for the first time I encountered this:
> 12774 2013-04-02 18:13:10 CEST LOG: server process (PID 28463) was terminated by signal 9: Killed

AFAIK there are only two possible sources of signal 9: a manual kill, or the Linux kernel's OOM killer. If it's the latter there should be a concurrent entry in the kernel logfiles about this. If you find one, suggest reading up on how to disable OOM kills, or at least reconfigure your system to make them less probable.

regards, tom lane
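The usual knobs for making OOM kills less probable on Linux (a sketch; the values are illustrative, not from the thread) live in sysctl:

```
# /etc/sysctl.conf -- disable memory overcommit so that an over-large
# allocation fails with ENOMEM in the requesting process instead of
# triggering the kernel's OOM killer against some victim backend:
vm.overcommit_memory = 2
vm.overcommit_ratio = 80       # percent of RAM (plus swap) committable
```

A companion tactic on older kernels is lowering the postmaster's OOM score (e.g. writing -17 to /proc/<postmaster-pid>/oom_adj at startup) so that, if the killer does run, it picks something other than the database.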
Re: [ADMIN] select exact term
Marc Fromm marc.fr...@wwu.edu writes:
> I am struggling with the syntax. In php I create my where clause as shown, using ~* for case insensitive:
> $search = art;
> $strSQL2 = WHERE (title ~* [[:<:]]'$search'[[:>:]] OR description ~* [[:<:]]'$search'[[:>:]]) ;
> When executed zero records are returned even though the ILIKE statement shown below returns records that do have the word art.

Your php app must not be bothering to check for errors :-( ... that's invalid SQL syntax. The bracket constructs are part of the regexp string and need to be inside the single quotes.

regards, tom lane
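The corrected form of that WHERE clause looks like this (a sketch; the table name is hypothetical, and [[:<:]] / [[:>:]] are PostgreSQL's word-boundary brackets):

```sql
-- Word-boundary brackets inside the pattern string, as the reply says.
-- Matches rows containing 'art' as a whole word, case-insensitively,
-- but not 'cart' or 'artist'.
SELECT *
FROM listings
WHERE title ~* '[[:<:]]art[[:>:]]'
   OR description ~* '[[:<:]]art[[:>:]]';
```

In the PHP string that means placing `[[:<:]]$search[[:>:]]` inside the single quotes, not wrapped around them.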
Re: [ADMIN] Postgresql 8.4 GSSAPI auth with fallback to password prompting?
Stephen Frost sfr...@snowman.net writes:
> * Tim Watts (tim.j.wa...@kcl.ac.uk) wrote:
>> I would have to respectfully take another point of view: that that particular judgement is probably better placed with the sysadmin rather than a blanket decision by the devs.
> It's not a blanket decision by any means --- the current situation is that such an option doesn't exist. It's not "it exists, but we disabled it because we felt like it." Were someone to write the code to support such an option, it's entirely possible it'd get committed (though likely with strong caveats about its use in the documentation).

I'm not sure it would. Allowing a fallback would amount to a protocol change, meaning that old clients might fail in strange ways. You'd need a lot stronger case than has been made here to justify dealing with that.

regards, tom lane
Re: [ADMIN] Unexpected chunk size when doing dump
Vasiliy I Ozerov voze...@2reallife.com writes:
> I have an error when doing a database dump:
> pg_dump: [archiver (db)] query failed: ERROR: unexpected chunk size 384 (expected 389) in final chunk 1 for toast value 11589 in pg_toast_2619

That looks like a data corruption problem :-(. You might want to check into whether your disks/filesystems are properly configured to honor fsync.

Assuming this is the only problem, you're in luck as far as recovery goes, because pg_toast_2619 is the toast table for pg_statistic, which means there's no irreplaceable data in there. I'd suggest truncating pg_statistic (as superuser) and then doing a database-wide ANALYZE to rebuild the content.

regards, tom lane
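The recovery suggested above can be sketched in two statements, run as a superuser in the affected database (pg_statistic holds only derived data, so it is safe to rebuild):

```sql
-- Empty the damaged statistics catalog; DELETE works where TRUNCATE on a
-- system catalog may be disallowed without allow_system_table_mods.
DELETE FROM pg_statistic;

-- Database-wide ANALYZE repopulates pg_statistic (and its toast table)
-- from the live data.
ANALYZE;
```

Afterwards the pg_dump that hit the "unexpected chunk size" error should be retried to confirm the corruption was confined to that toast table.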
Re: [ADMIN] Postgres 9.1 statistics in pg_stat_database
Steve Crawford scrawf...@pinpointresearch.com writes:
> Additionally (correct me if I'm wrong, Tom), background activity like autovacuum will generate countable transactions.

Yes, absolutely.

regards, tom lane
Re: [ADMIN] PostgreSQL's share_buffer calculation using shmget() versus kernel.shmmax
Mel Llaguno mllag...@coverity.com writes:
> Thanks for your reply. I agree with your statement that you should set the configuration parameters first, but I would like to be able to calculate the SHMMAX value based on those parameters. This is particularly useful when suggesting postgresql.conf optimizations to our customers whose machines have a lot of RAM (64+GB). Having to guess this value is far from ideal; what I'd like is the formula used by postgresql that generates the shmget() value displayed in the pgctl.log.

There's some rather old information in Table 17-2 here: http://www.postgresql.org/docs/9.2/static/kernel-resources.html#SYSVIPC

As Pavan says, the shared_buffers term is usually the only one worth worrying about. The traditional advice is to not set that to more than about a quarter of your physical RAM, which would mean that this script you're using to set SHMMAX is leaving lots of headroom, which is perfectly OK. (AFAIK there is no penalty to setting SHMMAX larger than you need.)

There's more info worth looking at here: https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

regards, tom lane
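Putting the quarter-of-RAM rule and the oversize-SHMMAX observation together, a sketch for a 64GB machine might look like this (numbers are illustrative, not a formula from PostgreSQL itself):

```
# postgresql.conf:
shared_buffers = 16GB          # ~1/4 of 64GB physical RAM

# /etc/sysctl.conf -- SHMMAX only needs to exceed what the server
# requests at startup; generous headroom costs nothing:
kernel.shmmax = 36507222016    # ~34GB
kernel.shmall = 8912896        # shmmax / 4096-byte page size
```

If the postmaster still fails to start, its log reports the exact shmget() request size, which gives the precise lower bound for SHMMAX.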
Re: [ADMIN] Installing Postgres without the postgres user and group on Linux?
Plugge, Joe R. jrplu...@west.com writes:
> Is it possible to install and then subsequently run the postgres engine on Linux without creating the postgres user and group? We have some folks implementing LDAP that are resistant to creating a postgres account and group and would rather create machine specific accounts (like one does for Windows with SQL Server).

The server source code has no particular allegiance to any OS username. You can run it under whatever account you want.

> I always thought that if the postgres user and group were not there upon install (which is done as root), that the installation process would create them for you, am I missing something?

Particular packagings of Postgres might act like that --- for instance, the Fedora/Red Hat RPMs would try to create such a user/group. But you did not say whose packaging you were thinking of using. In any case you could run the executables under some other user ID if you then adjusted file ownerships and startup scripts appropriately.

regards, tom lane
Re: [ADMIN] Installing Postgres without the postgres user and group on Linux?
Plugge, Joe R. jrplu...@west.com writes: Understood, but I thought that the install routine (in this case compiled from source on RHEL 6) will create the local user and group named postgres and chown the config and data dirs to that The unmodified Postgres sources will certainly do no such thing. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Escaping a blocked sendto() syscall without causing a restart
Jerry Sievers gsiever...@comcast.net writes: Does anyone know if one of the signals below can be sent to break out of this state *without* the postmaster sensing a crashed backend? I've seen several times in the past at other companies, backends that will not respond to cancel nor SIGTERM due to a syscall that's blocked on IO. Quite often though apparently the backend would notice the broken socket eventually and receive the signals and exit cleanly. I've got one that's been wedged like that for a couple days now. I recall trying several in a similar situation a while ago and of course one of them interrupted the syscall all right but it was an abort and we got the customary spontaneous postmaster restart. Offhand it looks to me like most signals would kick the backend off the send() call ... but it would loop right back and try again. See internal_flush() in pqcomm.c. (If you're using SSL, this diagnosis may or may not apply.) We can't do anything except repeat the send attempt if the client connection is to be kept in a sane state. It's possible that if the interrupt was a SIGTERM (forced exit) we could mark the connection dead and return early, but it would probably take some thought and experimentation to get useful behavior that way. And I'm not at all sure if we could get it to work in SSL mode ... So the short answer is no, you probably can't kill the session without causing a restart. Possibly we should add a TODO to make this better. What you might consider instead, if this is a recurring problem, is adjusting the postmaster-side TCP keepalive parameters so that dead connections are noticed more quickly. The default connection timeout according to the TCP standards is on the order of hours, but you can reduce that quite a lot if your network environment is at all reliable. (But it's not clear to me why your stuck-for-a-couple-days case wouldn't have timed out long since. Are you sure this isn't a client-side problem, ie client is wedged? 
If so, why not kill the client instead?) regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
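For the keepalive approach mentioned above, the server-side settings would look something like this (the values are illustrative only, not recommendations -- tune them for your network):

```
# postgresql.conf -- example values only
# 0 (the default) means "use the operating system's setting",
# which is typically around two hours of idle time.
tcp_keepalives_idle = 60        # seconds idle before the first keepalive probe
tcp_keepalives_interval = 10    # seconds between unanswered probes
tcp_keepalives_count = 5        # unanswered probes before the connection is declared dead
```

With settings like these, a dead client connection is noticed in a couple of minutes rather than hours.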
Re: [ADMIN] Casting bytea to varchar
Albe Laurenz laurenz.a...@wien.gv.at writes: You cannot specify a conversion function while altering a column's type, you'd have to use a new column like this: Sure you can; that's the whole point of the USING option. It'd look something like ALTER TABLE test ALTER COLUMN val TYPE varchar(255) USING convert(val); with whatever is appropriate in place of convert(). But the real question of course is what is appropriate for the OP's situation --- he didn't specify exactly what he's got in his bytea column or how that should be converted to varchar. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
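As a concrete sketch of the USING approach -- assuming (and this is an assumption, since the OP didn't say) that the bytea column holds UTF-8 encoded text:

```sql
-- Assumes the bytea values are valid UTF-8; convert_from() raises an
-- error on any value that is not, which doubles as a sanity check.
ALTER TABLE test
  ALTER COLUMN val TYPE varchar(255)
  USING convert_from(val, 'UTF8');
```

If the column holds something other than text in a known encoding, a different conversion expression belongs in the USING clause.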
Re: [ADMIN] How to display full query in logs
Campbell, Lance la...@illinois.edu writes: What I see: UPDATE abc.table set def=$1 WHERE id=$2 What I want to see: UPDATE abc.table set def='hi bob' WHERE id=12 It doesn't work that way. What you should be seeing, though, is that the value of each parameter is included in a DETAIL line just after the statement text. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] General queston on getting rid of unused WAL files
Benjamin Krajmalnik k...@servoyant.com writes: Looking at pg_xlogs, I see that postgres is sequentially reusing all of the available WAL files, while looking at the log file, it is recycling approximately 70-80 files every checkpoint, but it is not removing any files. [ squint... ] It should be removing those files until it gets under approximately 2 * checkpoint_segments WAL files. So, what have you got checkpoint_segments set to? regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
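As a rough illustration of that ceiling: the pre-9.5 documentation gives approximately (2 + checkpoint_completion_target) * checkpoint_segments + 1 segment files of 16MB each as the expected steady-state maximum in pg_xlog, which can be sketched as:

```python
# Approximate steady-state ceiling on WAL segment files in pg_xlog,
# per the documentation's rule of thumb for this era of PostgreSQL.
# Files beyond this should be removed (not just recycled) at checkpoint time.

SEGMENT_MB = 16

def max_wal_segments(checkpoint_segments, checkpoint_completion_target=0.5):
    return int((2 + checkpoint_completion_target) * checkpoint_segments) + 1

segs = max_wal_segments(30)
print(segs, "segments,", segs * SEGMENT_MB, "MB")   # 76 segments, 1216 MB
```

If pg_xlog holds far more segments than this formula predicts, something (a failing archive_command, a stuck checkpoint) is preventing removal.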
Re: [ADMIN] Mapping a Data File Back to a Table
Greg Schenzel init...@unixdev.net writes: So I've been digging through a database on an embedded device for two days trying to figure out which table some data is stored in. There are literally 1000 tables in the database and I haven't come across the right one(s) yet. I know that the data I'm looking for is stored in /var/lib/pgsql/data/base/16384/2613 from strings and grep. Is there any way to map this back to a table or group of tables? Any insight would be greatly appreciated! Just for the record, the 16384 is the OID of a database (look at pg_database.oid) and 2613 is the filenode of a table (look at pg_class.relfilenode in the relevant database). contrib/oid2name can help mechanize this but personally I never bother with that. See http://www.postgresql.org/docs/9.2/static/storage.html for the authoritative skinny (but note this stuff does change across releases, so be sure to read the correct version of the docs). regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
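The two lookups described can be sketched in SQL (run the second query while connected to the database that the first one identifies):

```sql
-- Which database owns directory base/16384?
SELECT datname FROM pg_database WHERE oid = 16384;

-- Then, connected to that database: which relation is file 2613?
SELECT relname, relkind
FROM pg_class
WHERE relfilenode = 2613;
```

Note that relfilenode often equals the table's OID but is not guaranteed to, since commands such as CLUSTER and VACUUM FULL assign a new filenode.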
Re: [ADMIN] Using pre-configured vs building Postgres
Armin Resch resc...@gmail.com writes: Yet, now, we switch to a more modern distro (OpenSuse 12.1), which does have the RPM for Perl 5.14.2 pre-installed with the option to install more RPMs such as perl-DBI-1.616-7.1.3 perl-DBD-Pg-2.18.0-3.1.4 postgresql-9.1.1-3.1.4 Once our custom applications are tested and behave, the idea is to freeze all prerequisites until we go to a different generation (potentially different distro altogether in >= 5 years). Before I try to obtain a handful of CPAN modules which are missing or 'unsupported' in OpenSuse, my question to this admin forum is what pros and cons to consider when deciding whether to use a pre-configured postgresql versus building it ourselves. Well, if you're starting deployment now with a five-year plan, it's pretty dumb not to be using the latest major release (ie 9.2). 9.1 will be out of support in four years. But even if 9.1 is the release series you want to freeze on, it does not speak well at all for OpenSUSE's update practices if they're still shipping 9.1.1. That's more than a year out of date, and has assorted known security and data-loss issues, cf http://www.postgresql.org/docs/9.1/static/release.html If that's actually the latest version available from them, you'd be very well advised to use your own build of PG instead, and pay attention to our update releases. I can't speak to the question of whether their Perl packages are equally out of date --- you might be all right using those. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] my query is so slow,It costs 944,319.072 ms,Can someone Help!!
Jerome i...@countrybright.com.cn writes: I have a PostgreSQL 9.1 on my server, and here's my query. It costs 944,319.072 ms; it is so slow. The query result has 27 records. It looks to me like your problem is here: -> Seq Scan on m_transaction (cost=0.00..188132.06 rows=1 width=78) (actual time=0.015..932.400 rows=28936 loops=1) Filter: ((m_inoutline_id IS NOT NULL) AND ((ad_client_id)::text = ANY ('{0,FF80818129E99DC80129E9AD546A0708}'::text[])) AND (movementdate >= '2012-12-01 00:00:00'::timestamp without time zone) AND (movementdate < '2013-01-01 00:00:00'::timestamp without time zone) AND ((ad_org_id)::text = ANY ('{44523886B0E0423584BB0E87133F3EE4,4510F92E007E48B28EE6CEB088A78348,0}'::text[]))) If that rowcount estimate weren't so badly off (1 vs 28936), the planner would have picked a more appropriate join method. So you need to look into why it's misestimating the effect of those filter conditions so much. First question of course is whether the table has been analyzed lately. If ANALYZE fixes it then it would seem auto-analyze is asleep at the switch --- you didn't turn that off did you? regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
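A first step along these lines, sketched in SQL (the statistics target of 500 is just an illustrative value, not a recommendation from the thread):

```sql
-- Refresh the planner's statistics, then re-run EXPLAIN ANALYZE
-- and check whether the Seq Scan's row estimate has moved.
ANALYZE m_transaction;

-- If the estimate is still wildly off, raising the statistics target
-- on the filtered columns gives the planner finer-grained histograms:
ALTER TABLE m_transaction ALTER COLUMN ad_org_id SET STATISTICS 500;
ALTER TABLE m_transaction ALTER COLUMN ad_client_id SET STATISTICS 500;
ANALYZE m_transaction;
```

If a manual ANALYZE fixes the estimate, the real question becomes why autovacuum's auto-analyze wasn't keeping the statistics current.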
Re: [ADMIN] Phantom half-install: How can I remove it completely?
Jonathan Hayward jonathan.hayw...@pobox.com writes: I have been experiencing spooky behavior with Postgres after using the Mac installer for Postgres. There seems to be an existing half-baked Postgres installation that was already there and remains there after I deleted the Mac GUI installer version from /Applications and /Library. That's probably the client-side-only support that Apple installs by default --- /usr/bin/psql, at least, is Apple's. I'm not sure what might break if you remove that. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] log_min_messages=debug5, despite an explicit setting to warning on postgresql.conf
Kong Man kong_mansatian...@hotmail.com writes: We ran into a production issue during our maintenance window when I switched over LifeKeeper/PostgreSQL nodes from one to another. We noticed that the database was 1000 times slower than it usually is, then realized that the log_min_messages setting was set to 'debug5' for no reason. How was the postmaster started? From memory, -d5 on its command line would do that, and would not have been overridable from postgresql.conf. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] [Autovacuum] Issue to understand some logs
Baptiste LHOSTE blho...@alaloop.com writes: As I said, because they are still visible to other transactions. Try to see if you have long-lasting transactions. How can I do that? I checked running queries in pg_stat_activity, but there is no query on that table. Does select * from pg_prepared_xacts find anything? regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] allow a user to see current_query in pg_stat_activity in 8.4
Greg Williamson gwilliamso...@yahoo.com writes: We want to create a role used by a monitor to check for "<IDLE> in Transaction" with the most restrictive permissions we can on an 8.4.13 instance. The user has been granted connect privilege to the database and some limited permissions to user tables that need to be monitored. But pg_stat_activity shows only "insufficient privilege" Yup. You can only see the query of another session if (1) you are a superuser, or (2) you are the same user the other session is logged in as. Table permissions have nothing to do with this. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
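One common workaround -- a sketch, not something proposed in this thread; the names are illustrative, and the function must be created by a superuser -- is to wrap the view in a SECURITY DEFINER function so a restricted monitoring role can call it:

```sql
-- Created by a superuser: the function runs with its owner's privileges,
-- so callers see every session's query text.
CREATE FUNCTION monitor_activity()
RETURNS SETOF pg_stat_activity
LANGUAGE sql SECURITY DEFINER AS
  'SELECT * FROM pg_stat_activity';

-- Lock it down to just the monitoring role.
REVOKE ALL ON FUNCTION monitor_activity() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION monitor_activity() TO monitor_role;
```

The monitor then queries SELECT * FROM monitor_activity() instead of the view directly. Bear in mind this deliberately pokes a hole in the privilege check, so grant EXECUTE narrowly.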
Re: [ADMIN] timezone, how postgres deal with it?
Anibal David Acosta a...@devshock.com writes: This query returns false select ('2012-11-20 17:00:00-02:00'::timestamp with time zone) = ('2012-11-20 18:00:00-03:00'::timestamp with time zone) But they are exactly the same time No, they aren't. You're apparently still confused about the sign of timezone offsets. The first one corresponds to 19:00 GMT, the second to 21:00 GMT. BTW, why are you posting the exact same question you posted two days ago on another list? regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
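The same arithmetic can be checked outside the database; a small Python sketch of the offset convention:

```python
from datetime import datetime, timezone

# "-02:00" means two hours *behind* GMT, so the wall-clock time must be
# shifted forward to get UTC.
t1 = datetime.fromisoformat("2012-11-20 17:00:00-02:00")
t2 = datetime.fromisoformat("2012-11-20 18:00:00-03:00")

print(t1.astimezone(timezone.utc))  # 2012-11-20 19:00:00+00:00
print(t2.astimezone(timezone.utc))  # 2012-11-20 21:00:00+00:00
print(t1 == t2)                     # False -- the instants differ by two hours
```

Aware datetimes compare by instant, so the inequality here matches what the database reports.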
Re: [ADMIN] How to check history of sequences and constraints (version 9.1)
Shams Khan shams.kha...@gmail.com writes: I want to check the last reset valued of sequence and the date of foreign key constraint when it was created. Is that possible in postgres 9.1 version?? No, the system doesn't track any such thing. You could possibly set log_statement to record all commands of interest in the postmaster log and then trawl through that data, but it wouldn't be especially easy. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Resolving Index Bloat
Samuel Stearns sstea...@internode.com.au writes: We have a problem with index bloat on a couple of our tables even though we have applied more aggressive autovac/analyze settings in the schema: Hard to tell much about this without knowing the baseline condition or what's happened since the baseline. It looks like your tables have grown circa 2X (eg billingitemrating), but is that due to new data or heavy update activity? If the baseline condition is freshly-built-or-REINDEXed indexes, a fair amount of bloat is to be expected. The traditional rule of thumb about btree indexes is that the steady-state load factor is about two-thirds full. By default, PG builds indexes tightly packed --- so just allowing the index to reach steady state will incur 50% bloat on average. It's usually counterproductive to try to maintain a fill factor better than that, unless the table receives only minimal insert/update traffic. (Indeed, usually the better policy for a heavy-update table is to create the indexes with 66% fillfactor to begin with.) Your indexes on billingitemrating seem to have expanded a bit more than what would be expected from the combination of these factors, but I'm not sure they're enormously out of line. You could delve a bit deeper by using contrib/pgstattuple to measure the actual dead space in both the tables and the indexes. Also, it'd be useful to know the data types of the columns being indexed. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
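The fillfactor suggestion looks like this in SQL (index and column names here are illustrative, not taken from the reported schema):

```sql
-- Build a new index at roughly the btree steady-state density
-- instead of fully packed:
CREATE INDEX billingitemrating_example_idx
    ON billingitemrating (some_column)
    WITH (fillfactor = 66);

-- For an existing index, change the setting and rebuild:
ALTER INDEX billingitemrating_example_idx SET (fillfactor = 66);
REINDEX INDEX billingitemrating_example_idx;
```

Starting at about two-thirds full means the index has little settling to do, so the apparent "bloat" relative to a freshly packed index largely disappears.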
Re: [ADMIN] Postgre Eating Up Too Much RAM
Craig Ringer cr...@2ndquadrant.com writes: On 11/14/2012 06:12 AM, Aaron Bono wrote: Am I reading this right? Are there individual connections using over 300 MB of RAM by themselves? If I recall correctly, RSS is charged against a PostgreSQL back-end when it touches `shared_buffers`. So that doesn't necessarily mean that the back-end is using the full amount of memory listed as RSS. Yeah. Since Aaron's got shared_buffers set to 256MB, the shared memory segment is something more than that (maybe 270-280MB, hard to be sure without checking). The RSS numbers probably count all or nearly all of that for each process, but of course there's really only one copy of the shared memory segment. RSS is likely double-counting the postgres executable as well, which means that the actual additional memory used per process is probably just a few meg, which is in line with most folks' experience with PG. The free stats didn't look like a machine under any sort of memory pressure --- there's zero swap usage, and nearly half of real RAM is being used for disk cache, which means the kernel can find no better use for it than caching copies of disk files. Plus there's still 10G that's totally free. Maybe things get worse when the machine's been up longer, but this sure isn't evidence of trouble. I'm inclined to think that the problem is not RAM consumption at all but something else. What exactly happens when the server hangs? regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] SIGQUIT vs SIGINT
Edwin Grubbs ed...@grubbs.org writes: Redhat cluster comes with a default configuration file for postgres-8.4 that by default just sends a SIGQUIT to stop postgres. Ouch. That's news to me. I can't find any more information on why not to use SIGQUIT besides the documentation indicating This is recommended only in emergencies. Can you provide any information on why their default config should be changed? You already pointed out the core reason: it results in substantially increased time to restart the service, because of the need for WAL replay. One could also argue that there's an increased risk of data corruption, because that's a relatively poorly tested shutdown sequence. (Sure, we would like to think that SIGQUIT is perfectly safe, but it's poorly tested.) One thought that comes to mind is that because the postmaster exits without waiting for its children, the postmaster.pid lock file will be removed before all activity has stopped, meaning that there is a nonzero risk of data corruption if a new postmaster is started immediately afterwards. Another argument is that SIGQUIT does not allow for a clean shutdown of client connections, which could confuse client-side logic with difficult-to-predict consequences. There are probably some other risks I'm not thinking of. Basically, SIGQUIT is the panic button. You push it when there is a damn good reason to, not when you are doing a routine shutdown. http://en.wikipedia.org/wiki/Big_red_button Here is the bug that I submitted. The comments that I have received back are on the support ticket, so you can't see that. https://bugzilla.redhat.com/show_bug.cgi?id=871659 FWIW, I've added a private comment to that bugzilla encouraging a change. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] MySQL Blackhole Engine
Deron fecas...@gmail.com writes: We are looking at integrating a 3rd party application. They currently use MySQL for their database solution. We are asking them to see about porting to PostgreSQL. One of the issues they are running into is that they use MySQL BLACKHOLE storage engine. Really? I've always assumed that was utterly useless except as a template for writing new mysql storage engines. What are they using it for? You could presumably approximate the behavior by attaching a BEFORE INSERT trigger to a table that just always returns NULL. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
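The suggested approximation, sketched out (table, function, and trigger names are illustrative):

```sql
-- Black-hole approximation: a BEFORE INSERT trigger that returns NULL
-- suppresses the row, so INSERTs succeed but nothing is stored.
CREATE TABLE blackhole (payload text);

CREATE FUNCTION discard_row() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
  RETURN NULL;  -- cancels the row being inserted
END $$;

CREATE TRIGGER blackhole_discard
  BEFORE INSERT ON blackhole
  FOR EACH ROW EXECUTE PROCEDURE discard_row();

-- INSERT INTO blackhole VALUES ('anything'); now succeeds,
-- yet SELECT count(*) FROM blackhole stays at zero.
```

Unlike MySQL's BLACKHOLE engine this still fires the trigger per row, so it is not free, but it reproduces the accept-and-discard behavior.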
Re: [ADMIN] Fwd: Errors on pg_dumpall
Terry Khatri terrykhatri...@gmail.com writes: Okay, I have found a reference to that oid in pg_type as under : select * from pg_type where typnamespace=74363 ; Okay, so you have a composite type named city (either a table's rowtype, or a standalone composite type --- hard to tell from just this entry) that somehow escaped deletion when the containing schema was deleted. Do you have an idea what triggered that? A reproducible test case would be even better. What should I do next? Well, you can just manually delete those rows in pg_type, if you're sure that there's nothing else linking to them. It might be safer to try to do it as "DROP TYPE city" first. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] autovacuum with lots of open file references to deleted files
Tore Halset hal...@pvv.ntnu.no writes: On Oct 15, 2012, at 2:27 AM, Tom Lane t...@sss.pgh.pa.us wrote: Tore Halset hal...@pvv.ntnu.no writes: On this box I drop a 80GB database each night followed by a restore of a similar sized database. It is a restore of our production database to a development server. This box is running 9.2rc1 (sorry). du and df reported quite different numbers and lsof show that autovacuum is holding lots of deleted files. After killing the autovacuum daemon, some disk space was restored and the du and df numbers was more equal. Hm. I've been able to reproduce some leakage of file descriptors in the autovac launcher, but it required (a) fairly small shared_buffers and (b) very heavy update activity on large tables. So I'm not sure that it would explain the consistent leakage you seem to be seeing. Can you tell us more about your usage pattern on the development server? A cron job dropdb one of the databases and createdb it and then pg_restore. Roughly 80GB dump. So far, my guess is that this is fixed by commits a1f064fc2 + d7598aeea. Our production PostgreSQL running a 9.1 variant does not have this problem. It does not have the nightly dropdb/createdb/pg_restore, but otherwise similar usage patterns. It is also configured to use more memory. 9.1 has the same bug (and the same patches), but I think that the aspect you're running into is specific to DROP DATABASE scenarios. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] autovacuum with lots of open file references to deleted files
Greg Williamson gwilliamso...@yahoo.com writes: So far, my guess is that this is fixed by commits a1f064fc2 + d7598aeea. Is there any idea of when this will be released ? No. I'd guess that there will be update releases before the end of the year, but they are not imminent. We have some open issues that have to be settled first, notably http://archives.postgresql.org/pgsql-hackers/2012-10/msg00511.php regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] [GENERAL] Streaming Replication Server Crash
Craig Ringer ring...@ringerc.id.au writes: On 10/19/2012 04:40 PM, raghu ram wrote: 2012-10-19 12:26:46 IST [1338]: [18-1] user=,db= LOG: server process (PID 15565) was terminated by signal 10 That's odd. SIGUSR1 (signal 10) shouldn't terminate PostgreSQL. Was the server intentionally sent SIGUSR1 by an admin? Do you know what triggered the signal? SIGUSR1 is used for all sorts of internal cross-process signaling purposes. There's no need to hypothesize any external force sending it; if somebody had broken a PG process's signal handling setup for SIGUSR1, a crash of this sort could be expected in short order. But having said that, are we sure 10 is SIGUSR1 on the OP's platform? AFAIK, that signal number is not at all compatible across different flavors of Unix. (I see SIGUSR1 is 30 on OS X for instance.) Are you running any procedural languages other than PL/PgSQL, or any custom C extensions? Anything that might have unwittingly cleared the signal handler for SIGUSR1? libperl has a bad habit of thinking it can mess with the process's signal setup ... regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] [GENERAL] Streaming Replication Server Crash
Craig Ringer ring...@ringerc.id.au writes: On 10/22/2012 08:52 PM, Tom Lane wrote: But having said that, are we sure 10 is SIGUSR1 on the OP's platform? AFAIK, that signal number is not at all compatible across different flavors of Unix. (I see SIGUSR1 is 30 on OS X for instance.) Gah. I incorrectly thought that POSIX specified signal *numbers*, not just names. That does not appear to actually be the case. Thanks. This isn't the first time I've wondered exactly which signal was meant in a postmaster child-crash report. Seems like it might be worth expending some code on a symbolic translation, instead of just printing the number. That'd be easy enough (for common signal names) on Unix, but has anyone got a suggestion how we might do something useful on Windows? regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Odd DEALLOCATE statements never finish in 9.2?
Craig James cja...@emolecules.com writes: I installed 9.2 on our new server and am seeing something odd that doesn't happen in 8.4: postgres=# select datname, pid, usename, query from pg_stat_activity where query != '<IDLE>'; The query column now means current or last query --- there's a separate column to check for the idle status. Those DEALLOCATEs are evidently what your app does before going idle. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
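In 9.2 the idle check moves to the new state column, so the equivalent query becomes:

```sql
-- 9.2 and later: 'query' shows the current *or most recent* statement,
-- so filter on the separate state column rather than the query text.
SELECT datname, pid, usename, state, query
FROM pg_stat_activity
WHERE state <> 'idle';
```

The old 8.4 habit of matching against a sentinel string in the query text no longer does anything useful.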
Re: [ADMIN] pg_dump out of memory
Neil Morgan n.a.mor...@brighton.ac.uk writes: I am running a PostgreSQL 8.3 server (not my choice, would prefer 9.1) but I am experiencing memory issues when using pg_dump. 8.3.what? Does anyone have any ideas please? For starters, turn on log_statement so you can see what query is triggering this. It'd be even better if you can get a stack trace from the point of the errfinish call. https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Database just hangs at startup
Terry Khatri terrykhatri...@gmail.com writes: I am having problems making any connection to the database, it was working fine and problems started when I added a wrong cidr in pg_hba.conf to allow connection from other hosts in the network, I took out that entry and restarted it but it just does not work it starts up fine but all connections just hang forever, whether I try it with psql or pgadmin III. listen_addresses='^' Well, that setting is certainly wrong. Perhaps you meant listen_addresses='*' regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] TCPIP connection on local host using md5
Jimy McDermott jmcderm...@catbird.com writes: pg_hba.conf:

# TYPE  DATABASE  USER    ADDRESS            METHOD
# local is for Unix domain socket connections only
local   all       all                        peer
# IPv4 local connections:
host    all       all     127.0.0.1/32       ident
# IPv6 local connections:
host    all       all     ::1/128            ident
# Allow replication connections from localhost, by a user with the
# replication privilege.
host    all       all     192.168.20.250/32  md5
# This line supports the remote connection that works great
host    all       report  127.0.0.1/32       md5
# This line is intended to support the local connection that is failing

pg_hba.conf lines are considered in order, so the second one (with the ident auth method) is matching the incoming connection before the last one ever gets considered. You need to re-order those lines. The rule of thumb is most specific rule first. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
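Applying the most-specific-rule-first rule of thumb, the reordered excerpt would look something like this (a sketch based on the lines quoted above):

```
# TYPE  DATABASE  USER    ADDRESS            METHOD
local   all       all                        peer
# Most specific rules first: the md5 rule for "report" must precede the
# catch-all ident rules, because the first matching line wins.
host    all       report  127.0.0.1/32       md5
host    all       all     127.0.0.1/32       ident
host    all       all     ::1/128            ident
host    all       all     192.168.20.250/32  md5
```

After editing pg_hba.conf, reload the server for the change to take effect.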
Re: [ADMIN] autovacuum with lots of open file references to deleted files
Tore Halset hal...@pvv.ntnu.no writes: On this box I drop a 80GB database each night followed by a restore of a similar sized database. It is a restore of our production database to a development server. This box is running 9.2rc1 (sorry). du and df reported quite different numbers and lsof show that autovacuum is holding lots of deleted files. After killing the autovacuum daemon, some disk space was restored and the du and df numbers was more equal. autovacuum held roughly 100GB of deleted files. This running PostgreSQL instance has dumped/restored the 80GB database ~20 times. Hm. I've been able to reproduce some leakage of file descriptors in the autovac launcher, but it required (a) fairly small shared_buffers and (b) very heavy update activity on large tables. So I'm not sure that it would explain the consistent leakage you seem to be seeing. Can you tell us more about your usage pattern on the development server? What nondefault settings are you using on it? regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] query from 9.0 fails to return on 9.2
Bill MacArthur webmas...@dhs-club.com writes: Here is the top line of the *really long* explain as run on a 9.0 server (it takes only a couple of seconds to return): HashAggregate (cost=1148.97..1149.36 rows=1 width=205) However, the same query run on the same machine after upgrade to 9.2.1 never returns. The query was sitting there for 10 hours unbeknownst to me. I'm wondering what values of join_collapse_limit and from_collapse_limit you're using on each machine. If it's not an oops-forgot-to-copy-the-configuration problem, you're going to need to put together a self-contained test case. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] 9.2 won't load C-language function
Craig James cja...@emolecules.com writes: ERROR: could not load library /usr/local/pgsql/lib/libchmoogle.so: libopenbabel.so.4: cannot open shared object file: No such file or directory But libopenbabel.so.4 does exist: # ls -l /usr/local/pgsql/lib/libopenbabel.so.4 -rwxr-xr-x 1 root root 18250709 2012-10-09 18:05 /usr/local/pgsql/lib/libopenbabel.so.4* It may exist, but is it in the dynamic linker's search path? You didn't say what platform this is, but on Linux you normally have to negotiate with ldconfig, or else set rpath in the referencing shlib, if you want to use shlibs that are in nonstandard directories. What changed in how libraries are loaded between 8.4 and 9.2? Nothing, AFAIR. But you may have dropped an rpath spec when rebuilding your library, or forgotten about a ldconfig adjustment you made on the old machine. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Mixing 8.4.x and 9.2.x clients and servers
Craig James cja...@emolecules.com writes: Is it OK to mix 9.2.x and 8.4.x clients and servers? Can they talk to each other? In other words, if I install 9.2 on a new server alongside older servers running 8.4.x, can clients on the 8.4 systems talk to 9.2 servers, and can the 9.2 clients talk to 8.4 servers? In general, sure. There might be some specific incompatibilities (for instance, 9.2 clients expecting server features that weren't there in 8.4) but the protocol is the same. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Database size stays constant but disk space keeps shrinking -- postgres 9.1
Greg Williamson gwilliamso...@yahoo.com writes: I'll leave it to you and Tom to puzzle over the postgres-related open files. Meanwhile, I'm a bit curious about the other 800+ and whether they are associated with scripts or processes that are connected to PostgreSQL. These all seem to be from two places -- repmgr (transient) and this stats application. Do you have any characterization yet of which deleted files are being held open by which processes? In particular I'm wondering if the held-open deleted files are in a recently-dropped database, and whether they are being held open by regular backends or one of the background processes such as bgwriter, and if the former what are those backends doing exactly. It's entirely expected that recently-deleted files might be held open for a little while, but there are mechanisms that are supposed to prevent them from being held open indefinitely. I'm guessing that your usage pattern might be tripping over some gap in those mechanisms, but we don't have enough info yet to speculate about what. BTW, I now think that my question about the unexpected OID value shown for one deleted file may have been a red herring --- it seems not implausible that lsof was just lying to you. It has to do some guesswork to reconstruct file paths for deleted files, and I don't think it's always right about that. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Database size stays constant but disk space keeps shrinking -- postgres 9.1
Greg Williamson gwilliamso...@yahoo.com writes: Have you checked to see if there are any processes that have open handles to deleted files (lsof -X | grep deleted). lsof -X | grep deleted | wc -l shows: 835 such files. A couple: postgres 2540 postgres 50u REG 8,3 409600 93429 /var/lib/postgresql/9.1/main/base/2789200/11816 (deleted) postgres 2540 postgres 51u REG 8,3 18112512 49694570 /var/lib/postgresql/9.1/main/base/2789200/2791679 (deleted) ... So, which processes are holding these open, and what are they doing exactly? Let's see output from ps and pg_stat_activity, maybe even attach to them with gdb and get stack traces. We've a planned restart scheduled soon which will let me find any scripts that might be keeping things open, A restart will destroy all the evidence, so let's not be in a hurry to do that before we've identified what's happening. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
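To tie the PIDs from the lsof output back to database activity, a query along these lines works; this is a sketch using 9.1-era pg_stat_activity column names (procpid, current_query), with the PID taken from the lsof listing above:

```sql
-- Correlate a PID from the lsof output (e.g. 2540) with a backend.
-- In 9.1, pg_stat_activity uses procpid and current_query.
SELECT procpid, usename, datname, current_query, query_start
FROM pg_stat_activity
WHERE procpid = 2540;
```

A PID with no match here would point at one of the background processes (bgwriter, autovacuum launcher, etc.) rather than a regular backend.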
Re: [ADMIN] Database size stays constant but disk space keeps shrinking -- postgres 9.1
Greg Williamson gwilliamso...@yahoo.com writes: postgres 2540 postgres 50u REG 8,3 409600 93429 /var/lib/postgresql/9.1/main/base/2789200/11816 (deleted) postgres 2540 postgres 51u REG 8,3 18112512 49694570 /var/lib/postgresql/9.1/main/base/2789200/2791679 (deleted) Thanks for the suggestions -- I'll post back when I have more info. Many of these do not seem to have a link to any identifiable process that is still running, but some do and they have pointed me away from the hourly drop / rebuild, at least for now. Looks like the stats database may be the issue. BTW, looking at that again --- the filenames appear to be ordinary tables in database 2789200, but there is something mighty odd about the first one: 11816 is an OID that should only be handed out during initdb. And in 9.1 what it would be handed out to is pg_shdescription. Now it's not impossible that pg_shdescription's original table file would get deleted: a VACUUM FULL or CLUSTER on that catalog would do it. But AFAICS there is no situation in which that relfilenode number would appear in a regular database --- it should be under the global/ subdirectory of $PGDATA. So unless you miscopied that filename, there is something odd going on here above and beyond the problem of open file descriptors not getting closed. Do you have any nonstandard maintenance practices in this installation, such as doing database-wide VACUUM FULL every so often? regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
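A filename like base/2789200/2791679 can be double-checked against the catalogs; a sketch (the first query must be run while connected to the database whose directory contained the file):

```sql
-- Map the relfilenode from the deleted file's name to a relation
-- (run inside database OID 2789200):
SELECT oid, relname, relkind
FROM pg_class
WHERE relfilenode = 2791679;

-- Map the directory name back to a database:
SELECT datname FROM pg_database WHERE oid = 2789200;
```

If the first query returns nothing, the file belonged to a relation that has since been dropped or rewritten, which fits the deleted-but-held-open picture.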
Re: [ADMIN] Debian Squeeze - no xml support for Pg 9.2
Volkmar Herbst volkmar.her...@gmx.de writes: I rebuild with /usr/local/pgsql/postgresql-9.2.0/configure --with-libxml --with-python. Configuring make and make install went well. OK ... However I get an error in postgres doing xml operations: select 'test/test'::xml ERROR: unsupported XML feature Then you're not actually using a postmaster with xml support in it; most likely, you're still talking to the previous build. Did you restart? Did you make sure to select the same install directory and default port as before? (Debian's build of postgres is quite nonstandard in these respects, so it seems likely that you'd need more configure options than you mentioned in order to have a drop-in replacement for a Debian build. But not being a user of Debian's package I'm not the best person to ask for details.) regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] CPU causes 100% load in user space when ntp client runs and postgresql is under heavy load
Dennis Brouwer dennis.brou...@m4n.nl writes: I now have excluded ntp as root cause for the CPU cycles being wasted in user space. Good, cause that wasn't making any sense at all. I installed perf and monitored two servers (with different postgresql versions and hardware specification) which are hanging and have some output. Since I'm no die-hard at interpreting the output of perf top what would be the next step to do? I'd suggest asking for help in pgsql-performance. I don't know much about perf either (still an oprofile guy), but the people who do know it hang out there. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] CPU causes 100% load in user space when ntp client runs and postgresql is under heavy load
Dennis Brouwer dennis.brou...@m4n.nl writes: Last week I was repeatedly able to run all these tests on the database without any issue but recently, all of a sudden at random, some of the queries performed a factor 100 less. It may take hours to complete the transaction. At the same moment we see a dramatic decrease in IO and the CPU is nearly 100% busy in user space. After days of testing I may have found the cause: the ntp client. If I stop the ntp client the problem vanishes. I have started reading on spinlocks and other related material but this all is rather complicated stuff and kindly ask in what direction I should search. The issue can be reproduced for both postgresql-9.1 and postgresql-9.2 and perhaps can be rephrased as: Very high CPU load in user space (at random) with ntp enabled and (long?) running transactions. That's really bizarre. What ntp client are you using exactly? Is it configured to adjust the system clock by slewing, or by stepping? Can you identify what part of the code is eating CPU (try perf or oprofile)? regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Problem creating temporary tables . . .
Rubén Crespo Flores ruben.cres...@gmail.com writes: ERROR: could not find tuple for attrdef 299038853 SQL state: XX000 Context: SQL statement create temporary table tmp_tablatotlin15 (... Hm ... that's a bit odd, but try reindexing pg_attrdef. I tried reindexing pg_attrdef and pg_attribute but the result was the same. There's something awfully odd about that. A look in the 9.0 sources says that that specific error text only appears in RemoveAttrDefault and getObjectDescription - and the latter is only used in error messages. It's not apparent why a CREATE TABLE operation would either be dropping column defaults or issuing error messages that cite an already-existing default. Could you show a complete example of this behavior? BTW, please keep the mailing list cc'd, so that more people can help you. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Problem creating temporary tables . . .
Rubén Crespo Flores ruben.cres...@gmail.com writes: On 20/09/2012, at 12:36, Tom Lane wrote: There's something awfully odd about that. A look in the 9.0 sources says that that specific error text only appears in RemoveAttrDefault and getObjectDescription - and the latter is only used in error messages. It's not apparent why a CREATE TABLE operation would either be dropping column defaults or issuing error messages that cite an already-existing default. Could you show a complete example of this behavior? Here is an example : Hmm ... unsurprisingly, this doesn't fail for me. I also tried setting debugger breakpoints at the two places where the message could be generated, and neither of them are ever reached while running this function. So something's pretty broken at your end. After thinking for awhile, though, I do have a theory, and it squares with your observation that you need to open a lot of connections to see the problem. I think that there's a broken partial table definition in a high-numbered pg_temp_nnn schema. When a new session first tries to use a temp table, it either creates or cleans out the pg_temp_nnn schema corresponding to its session number. If some previous user of that schema had left it not-cleaned-out as a result of crashing, that would result in table drop attempts, and from there it's not hard to see why you'd get to RemoveAttrDefault. If things are sufficiently confused this could end up trying to remove column defaults that aren't there. What I'd suggest doing is looking in pg_depend for the row with classid = 2604 and objid = 259154466, and manually removing that row. That should let the schema drop get past this problem. There might be some more, similar, problems. Also, it'd probably be a smart thing to reindex pg_depend, just in case this isn't a data problem but an index problem.
regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
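The cleanup Tom outlines might look like this; the classid/objid values are the ones he cites (2604 being pg_attrdef's OID in pg_class). Deleting catalog rows requires superuser rights and is worth a backup first:

```sql
-- Inspect the suspect dependency row before touching anything
SELECT * FROM pg_depend
WHERE classid = 2604 AND objid = 259154466;

-- If it is indeed orphaned, remove it manually
DELETE FROM pg_depend
WHERE classid = 2604 AND objid = 259154466;

-- And rule out index corruption as Tom suggests
REINDEX TABLE pg_catalog.pg_depend;
```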
Re: [ADMIN] max_locks_per_transaction and partitioned tables
Michael Holt mh...@terapeak.com writes: We've had a system in operation for a few years that makes use of a substantial amount of partitioning. The parent table now has over 4,000 child tables. Within the last couple of days the server started giving out of shared memory errors with the suggestion to increase the max_locks_per_transaction. If the parent table is queried will it require a lock for each one of the child tables? I'm guessing it will. Yup, it will. I'm a bit astonished that you've gotten this far without horrid performance problems. The underlying mechanisms for inheritance aren't really designed to scale past perhaps a hundred child tables. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
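As a rough sketch of why this bites: a query on the parent takes one lock per relation touched, and the shared lock table has approximately max_locks_per_transaction * (max_connections + max_prepared_transactions) slots, so thousands of children can exhaust it. Counting the children is straightforward (the parent table name here is hypothetical):

```sql
-- How many children does an inheritance parent have?
SELECT count(*)
FROM pg_inherits
WHERE inhparent = 'parent_table'::regclass;
```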
Re: [ADMIN] Problem creating temporary tables . . .
Rubén Crespo Flores ruben.cres...@gmail.com writes: Hi, I'm using PostgreSQL 9.0.4, with the replication activated to one server. About 3 weeks ago, we had a disk space problem on both servers. I recovered the DB but since then (or at least had not noticed before), in 10 executions of functions that generate temporary tables, 3 return the following error message: ERROR: could not find tuple for attrdef 299038853 SQL state: XX000 Context: SQL statement create temporary table tmp_tablatotlin15 (... Hm ... that's a bit odd, but try reindexing pg_attrdef. BTW, 9.0.4 is a bit old, you really ought to update to 9.0.latest. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Problem in taking the database dump
himanshu.joshi himanshu.jo...@orkash.com writes: I am facing a strange problem while taking dump of a postgres database using pg_dump. Whenever I start the pg_dump command to take the backup of that particular database, it stops within a few minutes with the below mentioned error: [root@s8-msd-2 bin]# ./pg_dump -Upostgres pdc_uima_dummy > /hdd2-1/pdc_uima_dummy_108_18092012.sql pg_dump: SQL command failed pg_dump: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613 pg_dump: The command was: COPY public.object_data (id, wkb_geometry, obj_type_id, obj_beginoffset, obj_endoffset, source_id, sentence_id, last_mod_date, obj_norm_value, obj_text, entity_name, frequency) TO stdout; This looks like a data corruption issue -- that is, there's some row in that table that contains a damaged variable-width field, such that the system is computing a silly value for the width of the field. You need to find and remove the damaged row(s). regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
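One common way to locate such rows (a general technique, not something spelled out in this thread) is to probe the table tuple-by-tuple by physical address until the error reproduces; the block and item numbers below are hypothetical, and in practice a driving script iterates over them:

```sql
-- Probe a single tuple by its physical address; the probe that
-- errors out points at the damaged tuple.
SELECT id FROM public.object_data
WHERE ctid = '(12345,1)'::tid;

-- Once found, the damaged row can be removed by the same address:
-- DELETE FROM public.object_data WHERE ctid = '(12345,1)'::tid;
```

Naturally, deleting the row discards its data, so this is a last resort after restoring what can be restored from backups.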
Re: [ADMIN] Linux: ps output shows multiple instances of PostgreSQL running on the same port
David Throckmorton throckw...@gmail.com writes: Following a recent database crash, I encountered the following when doing debug: [root@my-db myhost]# ps -ef | grep pgsql postgres 19643 5737 0 09:42 ? 00:00:00 /usr/pgsql-9.0/bin/postmaster -p 5432 -D /var/lib/pgsql/9.0/data postgres 19644 5737 0 09:42 ? 00:00:00 /usr/pgsql-9.0/bin/postmaster -p 5432 -D /var/lib/pgsql/9.0/data postgres 19657 5737 0 09:42 ? 00:00:00 /usr/pgsql-9.0/bin/postmaster -p 5432 -D /var/lib/pgsql/9.0/data postgres 19658 5737 0 09:42 ? 00:00:00 /usr/pgsql-9.0/bin/postmaster -p 5432 -D /var/lib/pgsql/9.0/data pgstartup.log doesn't indicate anything about multiple instances starting, and indicates that PID 5737 was the initial PID for the instance: Almost certainly, these are just regular postmaster child processes. I'm not sure why ps is showing you the original postmaster command line rather than the usual modified process title --- that's a known behavior on some Unixen but I've not heard of it on Linux. Are you using a nonstandard flavor of ps? Also, is process 5737 still around? If it was dead, I would rather expect ps to show the orphaned processes as children of pid 1. If it is around, what is ps showing as its command line? Can you connect to the database? regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] pg_restore problem Found unexpected Block id
ramana.pls ramana@gmail.com writes: I am getting an error while restoring the Database Backup. I am on PostgreSQL 8.0 with Windows XP O.S. 8.0? You do know that's been out of support for several years now, right? While restoring the data I am getting pg_restore: [custom archiver] found unexpected block ID (0) when reading data -- expected 4034 This looks somewhat like a pg_restore bug we fixed awhile back. I'd suggest updating to a more modern Postgres version and seeing if it works any better. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [BUGS] [ADMIN] Repeatable crash in pg_dump (with -d2 info)
Robert Haas robertmh...@gmail.com writes: On Mon, Aug 27, 2012 at 9:58 AM, Bruce Momjian br...@momjian.us wrote: From Tom Lane in the above thread: Hmm. I can see how that would happen if you're using one of the Windows environments wherein malloc's done inside libpq have to be free'd inside libpq. (The PQExpBuffer support code is in libpq...) Late reply, but I don't see any way we could fix this easily. To me it seems like mostly a case of chasing down all the places where this happens. It's not impossible to do; it's just a bunch of work that nobody's gotten excited about doing yet. We've fixed similar issues in many other cases, IIUC. Well, the problem with what I suspect you're thinking of is that even after we fixed all the existing trouble spots, it would keep on breaking. Unless we found some mechanical way to warn about unsafe coding; which in itself would be more work than I want to put into this. However, a plan B occurs to me: what about preventing pg_dump from using libpq's copy of PQExpBuffer? IIUC, a copy of the PQExpBuffer code statically linked into pg_dump would be free of this problem. There's not so much code there that this would be an intolerable price to pay. (Besides, we could probably arrange for the extra copy to happen only on Windows.) regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Re: What else could I've done? COPY to unlogged tbl hung/locked the table
amacvar kahitarich-postgresfo...@yahoo.com writes: In addition here is a backtrace of the process while it hung again today: (thanks to our SA Eric) #0 0x00396c8d4627 in semop () from /lib64/libc.so.6 #1 0x005dc313 in PGSemaphoreLock () #2 0x00616586 in LWLockAcquire () #3 0x004a718c in ginHeapTupleFastInsert () #4 0x0049c808 in gininsert () Interesting. The most plausible explanation I can see for this stack trace is that ginHeapTupleFastInsert is blocked trying to get buffer lock on one of the GIN index pages it locks directly --- either the metapage or the tail page of the fast-insert list. That would suggest a deadlock condition, that is somebody else trying to get the same two locks in the other order. (One of the ways that the LWLock mechanism is lightweight is that it has no deadlock detection, so if this is what's happening the deadlock would persist until you kill the processes.) However, if that's the case then there must be another process also blocked. Next time it happens, could you look around at *all* the postgres processes and collect stack traces from any that don't seem to be making progress? It doesn't appear to me that ginHeapTupleFastInsert itself is in the wrong here, so we need to find out what the conflicting process is doing. Also, as a temporary workaround, you might be able to dodge the problem by disabling the FASTUPDATE storage parameter for your GIN indexes. That won't help find the bug though, so if you could refrain from doing that until we've identified the cause, it'd be appreciated. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Cannot access PostgreSQL via psql -h (VMware Fusion)
Kevin Grittner kevin.gritt...@wicourts.gov writes: CS DBA cs_...@consistentstate.com wrote: psql: could not connect to server: No route to host That problem has nothing to do with PostgreSQL; you might have better luck on a list related to the other technologies. FWIW, on Linux I would start with `netstat -plnt` to see if the process was listening on the expected port and host address. No route to host doesn't have anything to do with whether there's anything listening on the target port ... what's more likely is that the OP mistyped the IP address, or else there's something wrong with his network layout, interface netmask assignments, that sort of thing. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] creating trigger in 8.3 -- very slow
Greg Williamson gwilliamso...@yahoo.com writes: I've got a fairly large table (186 million rows) that is moderately busy (a few inserts per second at most) that I am trying to create a trigger on -- it is just hanging. CREATE TRIGGER per se is pretty quick ... but it requires an exclusive lock on the table. Most likely your problem is that it's backed up behind existing holders of locks. The pg_locks view should help you figure out what's blocking it. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
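A sketch of the pg_locks check Tom suggests, using 8.3-era pg_stat_activity column names; the table name here is hypothetical:

```sql
-- Who holds (or is waiting for) locks on the table the
-- CREATE TRIGGER needs exclusively?
SELECT l.pid, l.mode, l.granted, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE l.relation = 'big_table'::regclass
ORDER BY l.granted DESC;
```

Rows with granted = false are the waiters (the CREATE TRIGGER among them); rows with granted = true are the sessions blocking it.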
Re: [ADMIN] JDBC keep alive issue
Haifeng Liu liuhaif...@live.com writes: I have a program running like a daemon, which analyzes data and writes to postgresql 9.1 on centos 5.8. There is only one connection between my program and the postgresql database, and I hope the connection may keep alive all the time. But I failed, the connection will be reset after idle for about 2 hours. jdbc driver: 9.1-901, connection url has parameter tcpKeepAlive=true; postgresql: 9.1, keep alive related settings use default values (commented); centos 5.8 64bit, net.ipv4.tcp_keepalive_intvl = 75, probes = 9, time = 7200. IIRC, time = 7200 (seconds) means to start sending keepalive packets after 2 hours of idle time. So if you have something in the way that is dropping the connection after 2 hours, these settings will not activate keepalive soon enough to save it. I'd try setting that to 3600. There is no firewall or any other device which forces idle connection cleanup. Seems pretty darn unlikely given these symptoms. Look harder... maybe something you thought was just a bridge has got routing behavior. BTW, in the real world connections drop for all sorts of reasons, and kernel keepalive configurations can't prevent them all. You might be better advised to build some reconnect-after-connection-loss logic into your application, rather than spending time on making this work. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Question about ALTER TABLE DROP CONSTRAINT on 9.1 -- psql crashes
Greg Williamson gwilliamso...@yahoo.com writes: We have a small database (few hundred megs of data, less than half that in indexes) that suffers from index bloat. Currently we handle this with an hourly REINDEX command. This works but causes some small issues, so I have been exploring other methods. When I try to drop constraints (typically UNIQUE) I sometimes get a crash of the psql client at the ALTER TABLE X DROP CONSTRAINT, and sometimes after I recreate the constraint at a commit. Transcript below: gusher=# BEGIN; BEGIN gusher=# ALTER TABLE ourcodes DROP CONSTRAINT IF EXISTS ourcodes_pkey CASCADE; ALTER TABLE gusher=# ALTER TABLE ourcodes ADD CONSTRAINT ourcodes_pkey PRIMARY KEY (id); NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index ourcodes_pkey for table ourcodes ALTER TABLE gusher=# COMMIT; FATAL: terminating connection due to administrator command WTF? That should certainly not happen. Have you perhaps got a script that runs around sending SIGTERM to backends that it thinks are blocking something? Does anything show up in the postmaster log when this happens? regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Question about ALTER TABLE DROP CONSTRAINT on 9.1 -- psql crashes
Greg Williamson gwilliamso...@yahoo.com writes: Every minute on the minute there is a cronjob that kills IDLE IN CONNECTION procs older than 3 seconds ... possibly that process killed the process I was running, but to me the user it seemed as if the COMMIT came before the disconnect. That sure sounds like the source of your problem. It might be sane if it killed only processes that *had been idle* for at least three seconds, but I'm not sure there is any easy way to determine that ... regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Question about ALTER TABLE DROP CONSTRAINT on 9.1 -- psql crashes
Craig Ringer ring...@ringerc.id.au writes: On 08/10/2012 10:06 AM, Tom Lane wrote: That sure sounds like the source of your problem. It might be sane if it killed only processes that *had been idle* for at least three seconds, but I'm not sure there is any easy way to determine that ... wouldn't: select * from pg_stat_activity where current_query = 'IDLE in transaction' AND query_start < current_timestamp - INTERVAL '3 seconds'; do it? No, that would find sessions that were idle and whose last command started at least 3 seconds ago. But it might have completed only microseconds ago. The symptoms Greg is describing are consistent with this kill script running during a short interval between his index-build command and his COMMIT. As of 9.2 there's a state_change timestamp column in pg_stat_activity that would provide a safer check, but it's not in existing releases ... regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
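With 9.2's state_change column, the "has actually been idle for at least 3 seconds" test Tom describes becomes direct; a minimal sketch:

```sql
-- 9.2 and later: sessions genuinely idle in transaction for >= 3s,
-- measured from the moment the session's state last changed.
SELECT pid
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND state_change < current_timestamp - INTERVAL '3 seconds';
```

Note that 9.2 also renamed procpid to pid and replaced current_query with the state/query pair, which is why this form does not work on earlier releases.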
Re: [ADMIN] Messed up time zones
Laszlo Nagy gand...@shopzeus.com writes: So how do I create a query that results in something like: a -- Sun Oct 30 02:00:00 2011 +0500 Sun Oct 30 02:00:00 2011 +0600 (2 rows) Set the timezone setting to the zone you have in mind, and then just print the values. The reason there's no manual way to do rotation across zones is that there's no need for one because it's done automatically during printout of a timestamptz value. I suspect that you have not correctly internalized what timestamptz values actually are. Internally they are just time values specified in UTC (or UT1 if you want to be picky). On input, the value is rotated from whatever zone is specified in the string (or implicitly specified by timezone) to UTC. On output, the value is rotated from UTC to whatever the current timezone setting is. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Messed up time zones
Laszlo Nagy gand...@shopzeus.com writes: I have intentionally chosen an example where the local time is changed from summer time to winter time (e.g. local time suddenly goes back one hour). It demonstrates that you cannot use at time zone expression to convert a timestamptz into a desired time zone manually. Um, yes you can. The trick is to use a timezone name, not an abbreviation, in the AT TIME ZONE construct (for instance, 'Europe/Budapest' not just 'CET'). That will do the rotation in a DST-aware fashion. As far as I'm concerned, I'm going to set the system's clock to UTC, store everything in timestamp field (in UTC), and use a program to convert fetched values before displaying them. [ shrug... ] If you really insist on re-inventing that wheel, go ahead, but it sounds to me like you'll just be introducing additional points of failure. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
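The distinction Tom draws can be seen across the autumn 2011 fall-back transition; a sketch using timestamps in the spirit of the thread's example:

```sql
-- Abbreviation: treated as a fixed UTC+1 offset, not DST-aware
SELECT TIMESTAMPTZ '2011-10-30 02:00:00+00' AT TIME ZONE 'CET';

-- Full zone name: DST-aware rotation, so the fall-back from CEST
-- to CET is handled correctly
SELECT TIMESTAMPTZ '2011-10-30 02:00:00+00' AT TIME ZONE 'Europe/Budapest';
```

The full list of recognized zone names lives in the pg_timezone_names view, and abbreviations in pg_timezone_abbrevs.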
Re: [ADMIN] PostgreSQL oom_adj postmaster process to -17
Radovan Jablonovsky radovan.jablonov...@replicon.com writes: In usual load there are not much pressure on memory, but it is possible to have all clients start using heavy reports. They are valid requests and could consume all memory. In this border and not likely but possible scenario it could be useful to let OOM killer to kill client's processes/connections but leave PostgreSQL system processes (postmaster, writer, stat, log, streaming, ...) excluded from reach of OOM killer. The code already supports excluding the postmaster itself from OOM kills while letting its children be subject to them. Being selective about which children are subject is pointless, though: if any child is kill -9'd, we have to zap the rest and restart, because there is no way to be sure that the victim left shared memory in a consistent state. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] pg_dump on Postgres 9.1
Alanoly Andrews alano...@invera.com writes: On this issue, instead of going for a newer version of xlc, as suggested, I opted to get a newer version of the Postgres source code, 9.1.4. After compiling it with the same xlc version, I found that pg_dump works as expected. So, the problem appears to be somewhere in the 9.1 source code, at least for binaries created from it for AIX (6.1). Hm, maybe this 9.1.2 fix? Author: Tom Lane t...@sss.pgh.pa.us Branch: master Release: REL9_2_BR [cf22e851b] 2011-11-10 16:09:01 -0500 Branch: REL9_1_STABLE Release: REL9_1_2 [febda37fd] 2011-11-10 16:09:16 -0500 Avoid platform-dependent infinite loop in pg_dump. If malloc(0) returns NULL, the binary search in findSecLabels() will probably go into an infinite loop when there are no security labels, because NULL-1 is greater than NULL after wraparound. (We've seen this pathology before ... I wonder whether there's a way to detect the class of bugs automatically?) Diagnosis and patch by Steve Singer, cosmetic adjustments by me regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] [pgsql] How bad is this full vacuum error?
Gary Webster webs...@lexmark.com writes: How bad is this? 2012-08-01 06:30:03 PDT 15961 [local] cp_repository_na2 ERROR: missing chunk number 0 for toast value 1086399 in pg_toast_987417 2012-08-01 06:30:03 PDT 15961 [local] cp_repository_na2 STATEMENT: VACUUM (FULL); If it's repeatable, it's corrupted data :-(. If it just happened once, and doesn't recur on the next try, it might have been related to a system catalog race condition that we fixed last fall. (What PG version are you running?) I'm unsure about that though, because the toast table name seems to indicate that it's for a user table not a system catalog, and I'm not very sure why VACUUM would be doing toast dereferences into a user table. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Timeout error on pgstat
Craig Ringer ring...@ringerc.id.au writes: On 08/02/2012 04:27 AM, Anibal David Acosta wrote: I have a lot (maybe 1 every 10 seconds) of this error WARNING: pgstat wait timeout A quick search suggests this can be due to excessive I/O. However, this thread: http://postgresql.1045698.n5.nabble.com/pgstat-wait-timeout-td5078125.html sounds very similar to your issue. I'm wondering if there's a bug lurking in there somewhere. Yeah, a lot of us are wondering that. In principle, only a ridiculously high load could result in triggering this message, but some people can reproduce it under what apparently isn't very high load. If we could identify what's different about their setups, maybe we could figure out what's going on. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] Sometime Update is not modifying data inside database.
Amit Kumar helloam...@gmail.com writes: <snip> Jul 30 12:40:36 dell16 postgres[11088]: [11-1] 2012-07-30 12:40:36 IST [11088]: [10-1] user=postgres,db=dbpush xx(43225)LOG: statement: START TRANSACTION Jul 30 12:40:36 dell16 postgres[11088]: [12-1] 2012-07-30 12:40:36 IST [11088]: [11-1] user=postgres,db=dbpush xx(43225)LOG: statement: UPDATE dbpush_camp_main SET status=7 WHERE camp_id IN(137721) Jul 30 12:40:36 dell16 postgres[11088]: [13-1] 2012-07-30 12:40:36 IST [11088]: [12-1] user=postgres,db=dbpush xx(43225)LOG: statement: COMMIT Jul 30 12:40:36 dell16 postgres[9368]: [5-1] 2012-07-30 12:40:36 IST [9368]: [4-1] user=postgres,db=dbpush xx(43187)LOG: statement: SELECT status FROM dbpush_camp_main WHERE camp_id=137721 </snip> I assume what you're complaining about is that PID 9368's select didn't see the result of pid 11088's update? The above trace doesn't prove that anything is wrong. That behavior would be expected if 9368 had taken its snapshot before 11088's commit finished, and you can't really tell if that was the case or not from these log entries. But given they all happened in less than 1 second, it's certainly possible. regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] pg_dump on Postgres 9.1
Alanoly Andrews alano...@invera.com writes: Is there any reported bug with pg_dump in Postgres 9.1, on AIX ? The following command hangs for ever and has to be interrupted. It creates a zero-length file. We had a recent report of strange server-side behavior on AIX that went away after rebuilding with a newer version of xlc, suggesting that the code was getting bitten by an xlc optimization bug. Perhaps this is the same thing inside pg_dump. If you're not using the latest xlc, try updating. If you are, does rebuilding with -O0 change the behavior? regards, tom lane -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] problems with access into system catalogs
Thomas Markus t.mar...@proventis.net writes: a rollback prepared on these does the job If you aren't intentionally using prepared transactions, it's a good idea to disable them by setting max_prepared_transactions to zero (DB restart required). That prevents you from accidentally shooting yourself in the foot like this. regards, tom lane
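For the archives, the cleanup and the prevention Tom describes can be sketched as follows (the GID 'xyz' is a placeholder, not from the thread):

```sql
-- List any orphaned prepared transactions:
SELECT gid, prepared, owner, database FROM pg_prepared_xacts;

-- Roll back each one by its global identifier:
ROLLBACK PREPARED 'xyz';  -- 'xyz' is an illustrative GID

-- Then, in postgresql.conf, prevent new ones (server restart required):
-- max_prepared_transactions = 0
```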
Re: [ADMIN] db size growing out of control when using clustered Jackrabbit
Gary Webster webs...@lexmark.com writes: By routine maintenance, do you mean autovacuum, or something else? Autovacuum does appear to usually get 'auto-canceled' by a lock. That's bad, and you should look into why it happens. Ordinary DML (CRUD) operations should not kick autovacuum off a table. If it's happening, it's probably because something is fooling with the table's schema, which doesn't seem like something you want happening during routine operations, especially not on tables that are large enough for this to be an issue in the first place. Or it might be something doing a LOCK TABLE as a substitute for more fine-grained locking, which again is bad for performance reasons that have nothing to do with hobbling autovacuum. regards, tom lane
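One way to find out what is canceling autovacuum (a sketch, not something suggested in the thread itself) is to make the server log every autovacuum run and every lock wait, then correlate the cancel messages with the lock holders:

```
# postgresql.conf -- log every autovacuum run, however short:
log_autovacuum_min_duration = 0

# Log sessions that wait on a lock longer than deadlock_timeout;
# the log line names the process holding the conflicting lock:
log_lock_waits = on
deadlock_timeout = 1s
```

With these set, a "canceling autovacuum task" message in the log can usually be matched against a nearby statement or lock-wait entry identifying the offending ALTER TABLE or LOCK TABLE.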
Re: [ADMIN] about select use random
wangqi wan...@edgesoft.cn writes: SQL-3: select * from wk_mclid1 where id1= (round((random() * (10-1))::integer) + 1)::integer; When we execute SQL-3, sometimes the result is 0 records or more than 1 record. Why? random() is re-evaluated for each row scanned by the SELECT. If you don't want that, you can use a WITH to lock down the result of a single random() call. regards, tom lane
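Tom's WITH suggestion can be sketched against the table from the question (the names wk_mclid1 and id1 come from the original SQL; the rest is illustrative):

```sql
-- random() is evaluated once, inside the CTE, rather than once per row,
-- so exactly one id value is compared against every row:
WITH pick AS (
    SELECT floor(random() * 10)::integer + 1 AS id
)
SELECT w.*
  FROM wk_mclid1 w
  JOIN pick p ON w.id1 = p.id;
```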
Re: [ADMIN] SSL SYSCALL error: EOF detected
francescobocca...@libero.it francescobocca...@libero.it writes: Hi all, I have a problem while trying to delete rows from a table: I received: SSL SYSCALL error: EOF detected How can I fix this problem? This looks like a connection failure (network problem), not anything particularly to do with Postgres at all. regards, tom lane
Re: [ADMIN] could not find function LWGEOM_distance_ellipsoid_point
Stephen Davies sdav...@sdc.com.au writes: psql:/data1/pgsql840/pg_upgrade_dump_db.sql:52010: ERROR: could not find function LWGEOM_distance_ellipsoid_point in file /usr/lib/liblwgeom.so I think you'd have better luck asking about that on the postgis lists --- from here it looks like a version-to-version incompatibility in postgis. regards, tom lane
Re: [ADMIN] What happens when PostgreSQL fails to log to SYSLOG
Arnold, Sandra arno...@osti.gov writes: Tablelog would be ok for keeping up with transactions for tables. However, we also need to audit who connects successfully and unsuccessfully. As far as I am aware, if a user fails to log in successfully, say three times, PostgreSQL is not able to lock the account for 9.0. If you want custom authorization rules like that, the usual recommendation is to use PAM authentication; you can set up pretty much anything you want with a few PAM modules. regards, tom lane
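As an illustration of the PAM route only (the service name "postgresql", the address range, and the pam_tally2 module are assumptions, not from the thread), a failed-login lockout might be sketched like this:

```
# pg_hba.conf: delegate password checks to PAM
host  all  all  0.0.0.0/0  pam  pamservice=postgresql

# /etc/pam.d/postgresql: lock an account after 3 failed attempts,
# unlocking automatically after 600 seconds
auth     required  pam_tally2.so deny=3 unlock_time=600
auth     required  pam_unix.so
account  required  pam_tally2.so
account  required  pam_unix.so
```

The details vary by platform and PAM implementation; the point is that the counting-and-locking policy lives in PAM, not in PostgreSQL.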
Re: [ADMIN] auto vacuum errors
Isabella Ghiurea isabella.ghiu...@nrc-cnrc.gc.ca writes: Here is my error: the missing files were removed by mistake. I have another similar db cfg with these tables (spherrtmp%) and no issues. I'm thinking of restoring these tables' content from this second db; will this eliminate the autovacuum error? What you ought to do is drop the tables that are reported as causing the error. Or the whole database --- are you actually doing something useful in contrib_regression, or is that just left over from testing? regards, tom lane
Re: [ADMIN] auto vacuum errors
Isabella Ghiurea isabella.ghiu...@nrc-cnrc.gc.ca writes: I'm seeing an issue with some temp tables in the contrib_regression db; the autovacuum job will not run for these tables due to some missing files/QIDs. How can I fix this? Um ... in general, autovacuum just silently ignores temp tables, because it doesn't have any way to access their contents. (One of the reasons that temp tables are fast is that the backend owning a temp table keeps its data in local buffers, not in shared buffers that another process could access.) I wouldn't expect any missing-file complaints, though. What exactly were you seeing? regards, tom lane