Re: [GENERAL] pgdump
On 30/01/2015 14:46, Adrian Klaver wrote:
> On 01/30/2015 01:16 AM, Ramesh T wrote:
>> when i try to dump the database windows based pgadmin 3 it returns message like
>> pg_dump: server version: 9.3.4; pg_dump version: 9.1.3
>> pg_dump: aborting because of server version mismatch
>> any help..?
>
> Use a 9.3+ version of pg_dump. A given version of pg_dump is backwards compatible, so it can dump Postgres versions from its own release back to 7.2, I believe. That does not work the other way around, as you found above. pgAdmin3 is picking up a 9.1.3 version of pg_dump and trying to use it to dump a 9.3 version of Postgres. So you will need to find the 9.3 version of pg_dump and use that.

To add to what Adrian says, you tell pgAdmin which pg_dump (and other Postgres client programs) to use under File -> Options -> Browser -> Binary paths, in the "PG bin path" field (this is on pgAdmin III 1.20).

Ray.

--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
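The mismatch can be sanity-checked from a shell before dumping. A minimal sketch of the check pg_dump effectively performs (version strings are the ones from the error message above; substitute your own) -- dumping an *older* server is fine, a *newer* one aborts:

```shell
# Compare client and server major versions the way pg_dump effectively does.
server=9.3.4          # from "server version: 9.3.4"
client=9.1.3          # from "pg_dump version: 9.1.3"
server_major=${server%.*}   # -> 9.3
client_major=${client%.*}   # -> 9.1
# Pick the newer of the two majors with a version-aware sort.
newest=$(printf '%s\n%s\n' "$server_major" "$client_major" | sort -V | tail -n1)
if [ "$newest" = "$server_major" ] && [ "$server_major" != "$client_major" ]; then
    msg="pg_dump: aborting because of server version mismatch"
else
    msg="versions compatible"
fi
echo "$msg"
```

With the 9.3.4/9.1.3 pair above this prints the familiar abort message; with a 9.3.x pg_dump it would report the versions as compatible.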
Re: [GENERAL] segmentation fault postgres 9.3.5 core dump perlu related ?
Alan,

I tried as you suggested. I believe the gdb debugger is giving some false indication about threads: whether I attach to a newly launched backend or a backend that has been executing the suspect perlu function, the "info threads" result is two, and suspiciously they are both at the same location, e.g.

* 2 Thread 802c06400 (LWP 101353) 0x00080bfa50a3 in Perl_fbm_instr () from /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
* 1 Thread 802c06400 (LWP 101353) 0x00080bfa50a3 in Perl_fbm_instr () from /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18

That seemed odd to me. If we use 'top' or 'ps axuwwH' to get a thread count for a given process, the indication is only one thread in the same situations.

I am now pursuing a different causal hypothesis. There are instances of another segmentation fault that do not involve this perl function; rather, it is a function that is also called regularly even on a basically idle system, so it is perhaps happenstance as to which kind occurs. I believe this may relate to our update process. Product developers are frequently (daily) updating environments/packages while Postgres and possibly our application are running. I am thinking this update process is not properly coordinated with a running Postgres and may result in occasional shared-library issues. This thought is consistent with the fact that our production testers, who update at a much lower frequency but use the same update script, almost never see this segmentation fault problem.

I'll attempt some script changes and meanwhile ask the developers to make observations that would support this idea. I'll update the thread with the future observations/outcome, possibly changing the subject to "careless developers cause segmentation fault".

Thanks for your assistance on this matter.
Dave

From: Alex Hunsaker [mailto:bada...@gmail.com]
Sent: Thursday, January 29, 2015 6:10 PM
To: Day, David
Cc: pgsql-general@postgresql.org; Tom Lane
Subject: Re: [GENERAL] segmentation fault postgres 9.3.5 core dump perlu related ?

On Thu, Jan 29, 2015 at 1:54 PM, Day, David <d...@redcom.com> wrote:
> Thanks for the inputs, I'll attempt to apply it and will update when I have some new information.

BTW a quick check would be to attach with gdb right after you connect, check "info threads" (there should be none), run the plperlu procedure (with the right arguments/calls to hit all the execution paths), then check "info threads" again. If "info threads" now reports a thread, we are likely looking at the right plperlu code. It should just be a matter of commenting stuff out to deduce what creates the thread. If not, it could be that plperlu is not at fault and it's something else, like an extension or some other procedure/PL.
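Alex's check is inherently interactive (two sessions: one holding the backend, one driving gdb), so here it is as an annotated transcript rather than a runnable script. The function name is a placeholder for whatever plperlu code you suspect:

```shell
# The whole check is interactive, so it is shown as a transcript:
#
#   -- session 1: open a backend and note its pid
#   $ psql mydb
#   mydb=# SELECT pg_backend_pid();   -- suppose it prints 12345
#
#   -- session 2: check threads *before* the function has ever run
#   $ gdb -p 12345 --batch -ex 'info threads'
#
#   -- session 1: exercise the suspect function (hypothetical name)
#   mydb=# SELECT my_suspect_plperlu_function();
#
#   -- session 2: check again; a thread appearing only now implicates the plperlu code
#   $ gdb -p 12345 --batch -ex 'info threads'
note="interactive procedure; see transcript in comments"
echo "$note"
```

Running `info threads` against a freshly connected backend first gives the baseline that makes the second reading meaningful.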
[GENERAL] Catalog Bloat
Probably temp table related... but catalog bloat on one of my databases appears to be pretty bad. Is the below bloat (table and index) something to worry about? pg_stat_all_tables shows the relations ARE getting successfully vacuumed... Any suggestions on eliminating it? Not sure if tools like pg_reorg are appropriate (or effective), or even VACUUM FULL (yikes). I'd prefer not to take a complete outage - but I would if this bloat is really an issue.

(I know about REINDEX SYSTEM (duh) - but as that requires me to take an outage, my question about IF the bloat is a cause for concern still stands)

 schemaname |  tablename   | tbloat | wastedbytes |              iname              | ibloat | wastedibytes
------------+--------------+--------+-------------+---------------------------------+--------+--------------
 pg_catalog | pg_attribute |    9.0 |    27648000 | pg_attribute_relid_attnam_index |  243.5 |    361627648
 pg_catalog | pg_attribute |    9.0 |    27648000 | pg_attribute_relid_attnum_index |  168.5 |    253894656
 pg_catalog | pg_type      |   10.8 |     4890624 | pg_type_oid_index               |  135.8 |     28721152
 pg_catalog | pg_type      |   10.8 |     4890624 | pg_type_typname_nsp_index       |  287.2 |     60956672
 pg_catalog | pg_class     |   10.3 |     4562944 | pg_class_oid_index              |   94.1 |     26689536
 pg_catalog | pg_class     |   10.3 |     4562944 | pg_class_relname_nsp_index      |  270.1 |     77144064
 pg_catalog | pg_depend    |    5.3 |     3948544 | pg_depend_reference_index       |  337.0 |    156901376
 pg_catalog | pg_depend    |    5.3 |     3948544 | pg_depend_depender_index        |  359.6 |    167436288
 pg_catalog | pg_index     |    6.1 |     1130496 | pg_index_indexrelid_index       |   72.9 |      7659520
 pg_catalog | pg_index     |    6.1 |     1130496 | pg_index_indrelid_index         |   72.9 |      7659520
(10 rows)

Thanks in advance
[GENERAL] Nice article on PostgreSQL HSTORE and JSONB
It's not real deep, but it was interesting to me as an answer to "what is this, anyway?" I've not really looked at HSTORE or JSONB due to other interests at present.

http://www.linuxjournal.com/content/postgresql-nosql-database

--
He's about as useful as a wax frying pan.
10 to the 12th power microphones = 1 Megaphone
Maranatha!
John McKown
Re: [GENERAL] Catalog Bloat
On Fri, 30 Jan 2015 09:30:31 -0600, Jeff Amiel <jeff.am...@gmail.com> wrote:
> Probably temp table related ...but catalog bloat on one of my databases appears to be pretty bad. Is the below bloat (table and index) something to worry about? pg_stat_all_tables show the relations ARE getting successfully vacuumed...

A few hundred meg of extra space on a modern system isn't that much to worry about. The concern I would have (personally) is whether this is a stable amount of bloat or whether it's going to keep getting worse. I recommend you slap something on that system to track it (such as a Cacti graph) and keep an eye on it to see if it's stable. The thing is, even if you cleaned up the bloat, if what you're seeing is the amount of bloat necessary to efficiently use those tables, it's just going to come back anyway.

> Any suggestions on eliminating? Not sure if tools like pg_reorg are appropriate (or effective) or even vacuum full (yikes). I'd prefer not to take a complete outage - but I would if this bloat is really an issue.

I don't know the parameters of the system that uses this DB, but you should be able to VACUUM FULL or REINDEX those tables pretty quickly. If you have a slow period where you can tolerate a few seconds' lag while it runs, you can probably sneak it in without any trouble. Of course, such a thing could also bite you in the ass by taking longer than you expect. My experience recommends:

1) Graph the bloat for a while first ... see if it's even worth it.
2) If you decide to do it, do 1 table or index at a time so you don't overcommit yourself.
3) Recreate the system and its bloat in a test environment to get a more realistic idea of how long it will really take and how much it will really interrupt operations. Hopefully you have such an environment available.
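Lacking Cacti, even a cron-driven log answers the "stable or growing?" question. A sketch (the database name "mydb" and the log path are placeholders, and the script skips itself where psql or the server is unavailable):

```shell
# Append a timestamped size sample for each suspect catalog relation;
# graph the accumulated log file later.
query="
    SELECT now(), c.relname, pg_total_relation_size(c.oid)
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname = 'pg_catalog'
      AND c.relname IN ('pg_attribute','pg_type','pg_class','pg_depend','pg_index');"

if ! command -v psql >/dev/null 2>&1; then
    status="skipped: psql not available"
elif sizes=$(psql -d mydb -At -c "$query" 2>/dev/null); then
    printf '%s\n' "$sizes" >> catalog_sizes.log
    status="logged"
else
    status="skipped: could not reach mydb"
fi
echo "$status"
```

pg_total_relation_size() includes indexes and TOAST, which is what matters here since the worst numbers in the report are index bloat.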
> (I know about reindex system (duh) - but as that requires me to take an outage, my question about IF the bloat is a cause for concern still stands)
>
>  schemaname |  tablename   | tbloat | wastedbytes |              iname              | ibloat | wastedibytes
> ------------+--------------+--------+-------------+---------------------------------+--------+--------------
>  pg_catalog | pg_attribute |    9.0 |    27648000 | pg_attribute_relid_attnam_index |  243.5 |    361627648
>  pg_catalog | pg_attribute |    9.0 |    27648000 | pg_attribute_relid_attnum_index |  168.5 |    253894656
>  pg_catalog | pg_type      |   10.8 |     4890624 | pg_type_oid_index               |  135.8 |     28721152
>  pg_catalog | pg_type      |   10.8 |     4890624 | pg_type_typname_nsp_index       |  287.2 |     60956672
>  pg_catalog | pg_class     |   10.3 |     4562944 | pg_class_oid_index              |   94.1 |     26689536
>  pg_catalog | pg_class     |   10.3 |     4562944 | pg_class_relname_nsp_index      |  270.1 |     77144064
>  pg_catalog | pg_depend    |    5.3 |     3948544 | pg_depend_reference_index       |  337.0 |    156901376
>  pg_catalog | pg_depend    |    5.3 |     3948544 | pg_depend_depender_index        |  359.6 |    167436288
>  pg_catalog | pg_index     |    6.1 |     1130496 | pg_index_indexrelid_index       |   72.9 |      7659520
>  pg_catalog | pg_index     |    6.1 |     1130496 | pg_index_indrelid_index         |   72.9 |      7659520
> (10 rows)
>
> Thanks in advance

--
Bill Moran
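"One table or index at a time" might look like the loop below (database name is a placeholder; each VACUUM FULL takes an ACCESS EXCLUSIVE lock on its relation, so run it in a quiet window; the loop skips itself where psql is absent and stops at the first failure so you never overcommit):

```shell
# Compact the bloated catalogs one relation per iteration.
status="attempted"
if ! command -v psql >/dev/null 2>&1; then
    status="skipped: psql not available"
else
    for rel in pg_catalog.pg_attribute pg_catalog.pg_type \
               pg_catalog.pg_class pg_catalog.pg_depend pg_catalog.pg_index; do
        psql -d mydb -c "VACUUM FULL ANALYZE $rel;" 2>/dev/null \
            || { status="stopped at $rel"; break; }
    done
fi
echo "$status"
```

Substituting `REINDEX TABLE $rel;` targets only the index bloat, which is where most of the wasted bytes in the report live.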
Re: [GENERAL] Server statistics monitoring?
On 1/29/2015 2:12 PM, Israel Brewster wrote:
> I'm working on setting up a new PostgreSQL database server, and would like to be able to monitor a number of statistics on it, such as: number of connections, number of queries, query times, etc. All these stats are easily available, either from the pg_stat_statements view (which I have enabled) and the like, or the log file (turn on log connects/disconnects), and I figured there would be a plethora of options available for monitoring these things. However, in searching around so far all I've found are a couple of hosted solutions. Granted, they look fairly nice, but we'd much prefer to keep this in-house. Paid solutions are fine, as long as the cost is reasonable, but we do want full control. Before I spend my time writing my own solution, is there anything out there that can make nice graphs of various postgresql metrics?

I would use Munin, along with the check_postgres.pl script from the Bucardo project. check_postgres was developed for Nagios, but is quite easy to use with Munin. You can monitor as many things as you like; the script has some great options, like per-table bloat.

--
john r pierce  37N 122W
somewhere on the middle of the left coast
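For reference, a typical check_postgres invocation for the bloat action John mentions. The thresholds and database name are made-up examples (see `check_postgres.pl --help` for the option list), and the sketch only echoes the command when the script isn't present alongside it:

```shell
# Hypothetical thresholds; tune to what "worrying" means on your system.
cmd="./check_postgres.pl --action=bloat --dbname=mydb --warning=100M --critical=500M"
if [ -x ./check_postgres.pl ]; then
    out=$($cmd)
else
    out="would run: $cmd"
fi
echo "$out"
```

Wrapped in a Munin plugin, the same action graphed over time answers whether the bloat is stable or growing.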
Re: [GENERAL] HTTP user authentication against PostgreSQL
Ok thanks for the advice John - much appreciated.

From: pgsql-general-ow...@postgresql.org [pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce [pie...@hogranch.com]
Sent: Friday, 30 January 2015 10:33 p.m.
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] HTTP user authentication against PostgreSQL

On 1/30/2015 12:31 AM, Jeremy Palmer wrote:
> The PostgreSQL DB is currently setup with Kerberos for Windows SSO, as well as MD5 password authentication for another pool of other PostgreSQL users who are not part of our Active Directory. LDAP could be used, but then we would have to move the current external users into the AD, and I'm not sure that can happen due to policy reasons.

Active Directory IS a sort of LDAP. I'd be looking at connecting Apache to that AD too, and not trying to do it via Postgres. Associating a Postgres connection with each HTTP client session would be inefficient and ugly, especially as HTTP can have multiple threads for a single client session, yet HTTP is considered sessionless.

--
john r pierce  37N 122W
somewhere on the middle of the left coast

This message contains information, which may be in confidence and may be subject to legal privilege. If you are not the intended recipient, you must not peruse, use, disseminate, distribute or copy this message. If you have received this message in error, please notify us immediately (Phone 0800 665 463 or i...@linz.govt.nz) and destroy the original message. LINZ accepts no responsibility for changes to this email, or for any attachments, after its transmission from LINZ. Thank You.
Re: [GENERAL] trouble adding a node to BDR
Thanks for the hint. I found a config error: I had added the host= param to the bdr.nodename_local_replica_dsn entry. Sorting out the bdr.nodename_local_replica_dsn entry solved my issue. Thanks!

From: Craig Ringer [mailto:cr...@2ndquadrant.com]
Sent: Thursday, January 29, 2015 5:00 PM
To: Steve Boyle
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] trouble adding a node to BDR

On 30 January 2015 at 07:21, Steve Boyle <sbo...@connexity.com> wrote:
> I have two servers/nodes setup with BDR and that is working. I'm trying to add a third node. When the third node tries to sync, I get an error:
>
> pg_restore: [archiver (db)] Error while PROCESSING TOC:
> pg_restore: [archiver (db)] Error from TOC entry 600; 1255 17054 FUNCTION cast_txt_to_int(text, integer) postgres
> pg_restore: [archiver (db)] could not execute query: ERROR: function cast_txt_to_int already exists with same argument types
>
> The thing is, that function does not actually exist on the destination side. How can I get past this error and get the third node online?

Are you certain your 3rd node's configuration is correct, with all the right connection strings? My first guess would be that it's trying to restore the dump to the second node, as you would've copied the config and changed one dsn but not the other. Please show your configuration for each node.

Also, what BDR version are you using? If you built from git, please show "git rev-parse --short HEAD" from the extension's git tree. If you built from packages, the package version is sufficient.

BTW, the version after 0.8.0 will include a lot of sanity checks for connection configurations, making sure everything points at the right nodes before it does anything.

--
Craig Ringer  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
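For anyone hitting the same wall: the failure mode Craig guessed and Steve confirmed is a stray host= in the local-replica DSN, which sends the init restore to a remote node. A heavily hedged postgresql.conf sketch -- parameter names follow the bdr.<nodename>_... convention used in this thread, and the host names are invented, so check the documentation for your exact BDR version:

```
# Remote peer this node replicates from -- host= belongs here:
bdr.connections = 'node1'
bdr.node1_dsn = 'host=node1.example.com port=5432 dbname=mydb'

# DSN used to restore the initial copy into *this* node -- no remote host=
# (adding one sends the restore to the wrong server, as in Steve's case):
bdr.nodename_local_replica_dsn = 'port=5432 dbname=mydb'
```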
Re: [GENERAL] Server statistics monitoring?
On Jan 29, 2015, at 1:20 PM, Michael Heaney <mhea...@jcvi.org> wrote:
> On 1/29/2015 5:12 PM, Israel Brewster wrote:
>> I'm working on setting up a new PostgreSQL database server, and would like to be able to monitor a number of statistics on it, such as: number of connections, number of queries, query times, etc. All these stats are easily available, either from the pg_stat_statements view (which I have enabled) and the like, or the log file (turn on log connects/disconnects), and I figured there would be a plethora of options available for monitoring these things. However, in searching around so far all I've found are a couple of hosted solutions. Granted, they look fairly nice, but we'd much prefer to keep this in-house. Paid solutions are fine, as long as the cost is reasonable, but we do want full control. Before I spend my time writing my own solution, is there anything out there that can make nice graphs of various postgresql metrics? Thanks.
>
> Take a look at PoWA: http://dalibo.github.io/powa/
>
> I've downloaded but haven't installed it yet, so can't give you any feedback on its performance or usability.
>
> --
> Michael Heaney
> JCVI

Thanks. I'll give it a shot.

---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---
Re: [GENERAL] trouble adding a node to BDR
On 31 January 2015 at 06:42, Steve Boyle <sbo...@connexity.com> wrote:
> Thanks for the hint. I found a config error, I had added the host= param with the bdr.nodename_local_replica_dsn entry. Sorting out the bdr.nodename_local_replica_dsn entry solved my issue.

Glad to hear it.

--
Craig Ringer  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [GENERAL] segmentation fault postgres 9.3.5 core dump perlu related ?
On Fri, Jan 30, 2015 at 9:54 AM, Day, David <d...@redcom.com> wrote:
> Alan, I tried as you suggested. I believe the gdb debugger is giving some false indication about threads. Whether I attach to a newly launched backend or a backend that has been executing the suspect perlu function, the "info threads" result is two. Suspiciously they are both at the same location.

Curious, hrm. Well, assuming gdb isn't lying about threads, I think that would point at an extension or an external library (shared_preload_libraries or local_preload_libraries). Does "info threads" on the postmaster also report threads?
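To answer that last question mechanically: the postmaster's pid is the first line of postmaster.pid in the data directory, so the same gdb check applies to it. A sketch (the default data-directory path is an assumption; the script skips itself where the pieces are missing):

```shell
# Attach briefly to the postmaster and list its threads.
PGDATA="${PGDATA:-/usr/local/pgsql/data}"   # adjust for your install
if command -v gdb >/dev/null 2>&1 && [ -r "$PGDATA/postmaster.pid" ]; then
    pid=$(head -n1 "$PGDATA/postmaster.pid")
    gdb -p "$pid" --batch -ex 'info threads'
    result="checked postmaster pid $pid"
else
    result="skipped: gdb or $PGDATA/postmaster.pid not available"
fi
echo "$result"
```

If the postmaster also shows two "threads" at the same address, that supports David's suspicion that gdb's report is an artifact rather than a real extra thread.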
Re: [GENERAL] Subselect with no records results in final empty set
I always have a single row in these selects, so the result will have only one row. I must make a single select for a framework that takes a single select as a parameter, to complete an XML template in a particular situation.

-- Original Message --
From: John R Pierce <pie...@hogranch.com>
To: pgsql-general@postgresql.org
Sent: 1/29/2015 10:52:25 PM
Subject: Re: [GENERAL] Subselect with no records results in final empty set

On 1/29/2015 12:36 PM, Sterpu Victor wrote:
> ON(null) never matched.

NULL is neither true nor false. "ON somefieldinthejoin IS NULL" would be valid syntax, except that's NOT a join condition; a join condition would be "ON left_table.something = right_table.something".

ON (1=1) is equivalent to ON TRUE, but that will cross join everything, so if the left table has N rows and the right table has M rows, you'll end up with N*M rows in the result. Is that really what you want??

--
john r pierce  37N 122W
somewhere on the middle of the left coast
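John's N*M point is easy to see on any scratch database. The query below is self-contained (no tables needed); the wrapper skips itself where no server is reachable:

```shell
# LEFT JOIN ... ON TRUE pairs every left row with every right row.
if command -v psql >/dev/null 2>&1 \
   && rows=$(psql -Atc "
        WITH l(a) AS (VALUES (1),(2)),          -- N = 2
             r(b) AS (VALUES (10),(20),(30))    -- M = 3
        SELECT count(*) FROM l LEFT JOIN r ON TRUE;" 2>/dev/null); then
    echo "ON TRUE produced $rows rows"          # N*M = 6
else
    rows="skipped"
    echo "no reachable server, skipping"
fi
```

With Victor's single-row subselects (N = M = 1) the cross join is harmless, which is why the construct works for his XML-template framework.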
Re: [GENERAL] pgdump
On 01/30/2015 01:16 AM, Ramesh T wrote:
> when i try to dump the database windows based pgadmin 3 it returns message like
> pg_dump: server version: 9.3.4; pg_dump version: 9.1.3
> pg_dump: aborting because of server version mismatch
> any help..? Advanced thanks,

Use a 9.3+ version of pg_dump. A given version of pg_dump is backwards compatible, so it can dump Postgres versions from its own release back to 7.2, I believe. That does not work the other way around, as you found above. pgAdmin3 is picking up a 9.1.3 version of pg_dump and trying to use it to dump a 9.3 version of Postgres. So you will need to find the 9.3 version of pg_dump and use that.

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: [GENERAL] HTTP user authentication against PostgreSQL
On 1/30/2015 12:31 AM, Jeremy Palmer wrote:
> The PostgreSQL DB is currently setup with Kerberos for Windows SSO, as well as MD5 password authentication for another pool of other PostgreSQL users who are not part of our Active Directory. LDAP could be used, but then we would have to move the current external users into the AD, and I'm not sure that can happen due to policy reasons.

Active Directory IS a sort of LDAP. I'd be looking at connecting Apache to that AD too, and not trying to do it via Postgres. Associating a Postgres connection with each HTTP client session would be inefficient and ugly, especially as HTTP can have multiple threads for a single client session, yet HTTP is considered sessionless.

--
john r pierce  37N 122W
somewhere on the middle of the left coast
Re: [GENERAL] HTTP user authentication against PostgreSQL
David G Johnston wrote:
> Personally, I would consider having both Apache and PostgreSQL talk to a LDAP database if you really need to have a single point of identity definition.

The PostgreSQL DB is currently setup with Kerberos for Windows SSO, as well as MD5 password authentication for another pool of other PostgreSQL users who are not part of our Active Directory. LDAP could be used, but then we would have to move the current external users into the AD, and I'm not sure that can happen due to policy reasons.

> Doing what you describe here doesn't seem to me to be a good idea, as PostgreSQL has no provisions for making its internal catalogs usable in this manner - or even at all outside of the libpq protocol - for security reasons. The internal user database for a system is seldom made accessible to other applications that do not intend to make use of the actual service that system is providing. I'm not apt to have any other useful suggestions, but describing why you want to do this thing may encourage others to suggest additional alternatives.
>
> David J.

We're looking at building a JSON RESTful API that connects to the database to perform CRUD operations.
Re: [GENERAL] Building extensions against OpenSCG RPM packages
On Thursday, January 29, 2015 10:14 PM, Adrian Klaver wrote:
> On 01/29/2015 04:36 AM, holger.friedrich-fa-triva...@it.nrw.de wrote:
>> Hello list, What are your experiences with OpenSCG's RPM packages? [deletia]
>
> From what I gather it is a static binary package built against libraries at a point in time in a distribution that may or may not be in exact sync with the distribution you are running. Which is fine when you run it by itself. Then you try to 'merge' it with a package that comes from another source. Most of the time the close-enough rule will apply and things will work. When it does not, you get the above. If you want less drama I would say stick with your distribution's repo or use the Postgres RPM repo: http://www.postgresql.org/download/linux/redhat/ Or build from source.

Thanks for your reply, Adrian. We certainly opted for less drama and gave up on the OpenSCG packages. I just wanted to know whether my statistical sample of OpenSCG packages may have been skewed, and whether other people maybe had better experiences building stuff against the OpenSCG packages.

Holger Friedrich