Re: [GENERAL] Optimise PostgreSQL for fast testing
On Thu, Feb 23, 2012 at 08:02, Dmytrii Nagirniak dna...@gmail.com wrote: Thanks. So far I tried: fsync = off, full_page_writes = off. It seems it got a *little* faster (down to ~65 seconds from ~76) but is still too far from my target of ~34 secs.

If you have lots of very simple queries, then usually much of the overhead is in query planning. There are a few tricks to dumb down the planner to make it faster -- although that may come at the cost of slowing down execution.

* If your queries use joins at all, you can reduce planning overhead by setting join_collapse_limit=1 and from_collapse_limit=1 or some other low number.
* Set default_statistics_target=5 or another low number and run ANALYZE on the whole database.
* Set enable_bitmapscan=off, enable_material=off, enable_mergejoin=off, enable_hashjoin=off -- this will prevent the planner from trying certain kinds of plans in the first place.

Just for the heck of it, you might gain a bit by setting track_activities=off and update_process_title=off. Regards, Marti -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
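For concreteness, Marti's suggestions could be applied per-session roughly like this (a sketch only; the values are the ones he mentions, and whether they pay off depends on the workload):

```sql
-- Sketch of the planner-tuning settings suggested above.
-- These trade plan quality for planning speed; measure before relying on them.
SET join_collapse_limit = 1;
SET from_collapse_limit = 1;
SET default_statistics_target = 5;
ANALYZE;                         -- re-gather (smaller) statistics

SET enable_bitmapscan = off;
SET enable_material = off;
SET enable_mergejoin = off;
SET enable_hashjoin = off;

SET track_activities = off;
-- update_process_title = off    -- superuser-only; often set in postgresql.conf
```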
[GENERAL] Postgresql as main database
Hello all, Good day, I would like to make Postgresql 8.4 as main database for running three(software) and possibly exchanging data. Is it possible? If yes, what is the implication in terms of performance? regards huu
[GENERAL] Log statement with timestamp
Hi, I need to log DDL or MOD statements with current time when the statement executed. Please let me know if this is possible. Regards, Madhu
Re: [GENERAL] Comment on extension issues with pg_dump after upgrading to 9.1.2
Timothy Garnett wrote: We recently upgraded to 9.1.2 from 9.0.3 and ran into some issues with the process we've been using to dump and restore databases. We typically use a superuser (but not the postgres user) to dump and restore databases, but in moving to 9.1.2 we've run into trouble with pg_dump outputting COMMENT ON EXTENSION lines that cause permission issues on pg_restore. Something like: CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog; COMMENT ON EXTENSION plpgsql IS ''; The create is fine because the extension is already in the template db, but the comment causes problems because the extension is owned by the postgres user, but we're trying to use a different user (and apparently even superusers cannot comment on extensions owned by other users). This happens even if the comment isn't changing. There also appears to be no way to change the ownership of an extension. We have the same issue with other extensions. So far we've worked around it by setting the comment to NULL in the template and all other databases (in which case pg_dump doesn't output the comment line), but I was wondering if there was an easy way to keep the comments.

I don't understand what you are doing. If you dump a 9.0 database, how can there be a CREATE EXTENSION command in the dump? Extensions were added in 9.1. Next, I tried to run COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language'; in a 9.1 database as a superuser different from postgres, and it worked fine as expected. Maybe you can describe in more detail how you created the dump, how you created the database into which the dump is loaded, and the exact error messages you get when restoring the dump. Yours, Laurenz Albe
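The NULL-comment workaround Timothy describes looks roughly like this (a sketch; it must be run as the extension's owner, typically postgres, in the template and the affected databases):

```sql
-- Clear the extension comment so pg_dump no longer emits a
-- COMMENT ON EXTENSION line in the dump.
COMMENT ON EXTENSION plpgsql IS NULL;
```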
Re: [GENERAL] Log statement with timestamp
You need to set log_line_prefix with the timestamp escape %t. http://www.postgresql.org/docs/9.1/static/runtime-config-logging.html --- Regards, Raghavendra EnterpriseDB Corporation Blog: http://raghavt.blogspot.com/ On Thu, Feb 23, 2012 at 2:15 PM, Madhu S R s_r_ma...@yahoo.co.in wrote: Hi, I need to log DDL or MOD statements with the current time when the statement is executed. Please let me know if this is possible. Regards, Madhu
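A minimal postgresql.conf sketch combining this with the statement logging the question asks about (per the logging docs linked above, log_statement = 'mod' covers DDL plus data-modifying statements):

```
# Sketch: log DDL and data-modifying statements with a timestamp prefix.
log_statement = 'mod'     # 'ddl' for DDL only; 'mod' adds INSERT/UPDATE/DELETE etc.
log_line_prefix = '%t '   # %t = timestamp without milliseconds
```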
Re: [GENERAL] Postgresql as main database
Hello all, Good day, I would like to make Postgresql 8.4 as main database for running three(software) and possible exchanging data.Is it possible? if yes what is the implication in terms of performance? It's certainly possible. Obviously it will need a bigger machine than you would use for just one of the databases. Bear in mind that you can't have cross-database queries without using something like the dblink package. I don't know if that affects your exchanging data. Oh - and unless you really have no choice in the matter, use 9.1 rather than 8.4 - you will get better performance, new features and it will be supported for longer. -- Richard Huxton Archonet Ltd
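As an illustration of Richard's dblink point above, a cross-database query looks roughly like this (a sketch; the database and table names are made up, and on 8.4 dblink is installed from the contrib scripts rather than CREATE EXTENSION):

```sql
-- Hypothetical cross-database query using the dblink module:
-- while connected to db_one, fetch rows from table t1 in db_two.
CREATE EXTENSION IF NOT EXISTS dblink;   -- 9.1+; use the contrib SQL script on 8.4

SELECT *
FROM dblink('dbname=db_two', 'SELECT id, name FROM t1')
       AS remote(id integer, name text); -- column list is required for the record set
```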
[GENERAL] How to enable thread safety on postgresql 8.3.6
Hi, I need to enable the --enable-thread-safety option after the install has completed. Is there a way to do this? I tried removing and reinstalling with thread safety enabled, but the feature was not updated -- the PostgreSQL configuration still shows the values from the first install. Any suggestions on how to do it? Thank you Yo -- View this message in context: http://postgresql.1045698.n5.nabble.com/How-to-enable-thread-safety-on-postgresql-8-3-6-tp5507687p5507687.html Sent from the PostgreSQL - general mailing list archive at Nabble.com.
Re: [GENERAL] How to enable thread safety on postgresql 8.3.6
OS : Linux Centos 5.1 -- View this message in context: http://postgresql.1045698.n5.nabble.com/How-to-enable-thread-safety-on-postgresql-8-3-6-tp5507687p5507721.html Sent from the PostgreSQL - general mailing list archive at Nabble.com.
Re: [GENERAL] Postgresql as main database
Twaha Daudi wrote: I would like to make Postgresql 8.4 as main database for running three(software) and possible exchanging data.Is it possible? if yes what is the implication in terms of performance? Please give us more information: - What is three(software)? - Can you specify how and with what you would like to exchange data? - The implications of what compared to what? Yours, Laurenz Albe
Re: [GENERAL] Postgresql as main database
On Thu, Feb 23, 2012 at 2:00 AM, Twaha Daudi udde...@gmail.com wrote: Hello all, Good day, I would like to make Postgresql 8.4 as main database for running three(software) and possible exchanging data.Is it possible? if yes what is the implication in terms of performance? Yes. As far as implications on performance, it really depends on specifics and I don't see enough info here to comment. Best Wishes, Chris Travers
[GENERAL] Dynamic File Name for COPY TO in Stored Procedure
Hi, What would be the correct format for using a variable in a stored procedure that uses COPY TO? I have the current stored procedure: CREATE FUNCTION Table_To_File(text) RETURNS void AS $delimeter$ COPY (SELECT * FROM table_to_xml('table', true, false, '')) TO '$1' WITH CSV QUOTE ' '; $delimeter$ LANGUAGE SQL; When I run the stored procedure: psql -d dbname -c "select Table_To_File('some_absolute_file_name');" I get the error that I must use absolute file names. When I replace TO '$1' with TO '/tmp/toto.xml', the stored procedure runs fine and creates the local file. I have tried several different ways to escape the $1 variable to no avail. At best, I get the same error.
Re: [GENERAL] Dynamic File Name for COPY TO in Stored Procedure
On Thursday, February 23, 2012 6:42:53 am Carlos Oliva wrote: Hi, What would it be the correct format for using a variable in a stored procedure that uses COPY TO? I have the current stored procedure: CREATE FUNCTION Table_To_File(text) RETURNS void AS $delimeter$ COPY (SELECT * FROM table_to_xml('table', true, false, '')) TO '$1' WITH CSV QUOTE ' '; $delimeter$ LANGUAGE SQL; When I run the stored procedure: psql -d db name -c select Table_To_File('some_absolute_file_name'); I get the error that I must use absolute file names. When I replace TO '$1' with TO '/tmp/toto.xml', the stored procedure runs fine and creates the local file. I have tried several different ways to escape the $1 variable to no avail. At best, I get the same error. You will need to switch to using plpgsql and its EXECUTE statement to build the COPY statement : http://www.postgresql.org/docs/9.0/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN -- Adrian Klaver adrian.kla...@gmail.com
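A sketch of what Adrian suggests -- the original SQL function rewritten in plpgsql so the file name can be interpolated with EXECUTE (the COPY options are simplified relative to the original, and quote_literal safely embeds the path):

```sql
-- Sketch: dynamic COPY ... TO via plpgsql EXECUTE.
-- The COPY file path cannot be a parameter, so the command is built as a string.
CREATE OR REPLACE FUNCTION table_to_file(path text) RETURNS void AS $$
BEGIN
    EXECUTE 'COPY (SELECT * FROM table_to_xml(''table'', true, false, ''''))'
            || ' TO ' || quote_literal(path) || ' WITH CSV';
END;
$$ LANGUAGE plpgsql;

-- Usage (path must be absolute and on the server's filesystem):
-- SELECT table_to_file('/tmp/toto.xml');
```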
Re: [GENERAL] Optimise PostgreSQL for fast testing
On Wednesday, February 22, 2012 9:13:06 pm Dmytrii Nagirniak wrote: Hi guys, I wonder if you can suggest me how to speed-up PG when running specs. I asked it at SO here: http://stackoverflow.com/questions/9407442/optimise-postgresql-for-fast-testing But briefly. PG specs are 2x slower than SQLite. I want it to be on par (don't care about reliability or anything, just need fast specs). Would appreciate some suggestions. Not enough information to make suggestions. Why are you switching databases? What are the specs? What is the application? What is the use case? Single user? Networked, multiple user? Do you see the application/database growing? At this point you are comparing apples and oranges. Sqlite is basically a single user embedded database, Postgres a multi user, networked database. They both work well for the use they are designed for, it is a matter of determining which is a better fit for your anticipated use. Cheers, Dmytrii Nagirniak http://ApproachE.com -- Adrian Klaver adrian.kla...@gmail.com
Re: [GENERAL] Optimise PostgreSQL for fast testing
On Thu, Feb 23, 2012 at 5:13 AM, Dmytrii Nagirniak dna...@gmail.com wrote: I wonder if you can suggest me how to speed-up PG when running specs. I asked it at SO here: http://stackoverflow.com/questions/9407442/optimise-postgresql-for-fast-testing But briefly. PG specs are 2x slower than SQLite. I want it to be on par (don't care about reliability or anything, just need fast specs). Would appreciate some suggestions. You really need to explain why this matters... You mention a typical Ruby on Rails app and then discuss SQLite. Well, typical web apps have more than 1 user, so fairly obviously using SQLite isn't appropriate. If SQLite isn't appropriate, why are you testing with it? How does a test run on a database you aren't using in production tell you anything about the success or otherwise of your program? It doesn't, so saying it runs quicker is irrelevant, surely? Perhaps just run half the tests, that would make it twice as quick and still just as valid. If Postgres tests run in ~1 minute, what benefit have you gained from saving 30 seconds? How often are you running tests? So please explain a little more. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services
Re: [GENERAL] How to enable thread safety on postgresql 8.3.6
ultrayoYO wrote: I need to enable option --enable-thread-safety after install completed. Do you have any way for this case. Because I try to remove and reinstall by enable thread safety. But not update feature --enable-thread-safety Postgresql config still value config in the first install. Any way sugest to me how to do it. What exactly did you type, what happened, and how is that different from what you want to happen? Yours, Laurenz Albe
Re: [GENERAL] psql latex and newlines
the problem arises when you have a field value that contains a newline character; when this field is not the first column, all the data after this newline ends up in the first column. You can try this out by writing a function or table and then adding 'enters' or newlines in the COMMENT on this function or table. There is also a problem with long lines, as you can test by uncommenting the t4bruce3 function. Well, I was hoping you could tell me exactly how you wanted the behavior changed and I could try to implement it in psql. I am afraid I don't know enough about TeX to understand the issue involved. Hi Bruce, the option I came across is the usage of \pbox, so the suggestion would be to put every field/cell inside a pbox. Instead of

\begin{tabular}{l | l | l}
\textit{Schema} & \textit{Name} & \textit{Description} \\
\hline
latex\_test & t4bruce1 & This function and so on enter\\ another enter,\\ just one more.\\ \\
latex\_test & t4bruce2 & This function and so on enter\\ another enter,\\ just one more,\\ so now have even one more.\\ \\
\end{tabular}

the generated code could look like:

\begin{tabular}{l | l | l}
\textit{Schema} & \textit{Name} & \textit{Description} \\
\hline
\pbox{\textwidth}{latex\_test} & \pbox{\textwidth}{t4bruce1} & \pbox{\textwidth}{This function and so on enter\\ another enter,\\ just one more.\\} \\
\pbox{\textwidth}{latex\_test} & \pbox{\textwidth}{t4bruce2} & \pbox{\textwidth}{This function and so on enter\\ another enter,\\ just one more,\\ so now have even one more.\\} \\
\end{tabular}

Comments: the improvement is that now newlines inside boxes stay inside these boxes (or cells/fields). It is not a complete solution, as very long lines are not automatically formatted into several lines inside a box, but if necessary the db-user can do a search and replace in the generated tex code (s/\textwidth/20cm/, for example, makes every box 20cm at maximum; maybe to be put in the docs somewhere as a comment on textwidth).
This all depends on the page format; unfortunately (for now? in LaTeX) the nice (p)aragraph formatting doesn't fully extend inside a table (cf. tabular). I'm not a TeX specialist, but the people I know didn't find a better solution either. mvg, Wim
Re: [GENERAL] Optimise PostgreSQL for fast testing
On 2/23/2012 9:22 AM, Simon Riggs wrote: On Thu, Feb 23, 2012 at 5:13 AM, Dmytrii Nagirniak dna...@gmail.com wrote: I wonder if you can suggest me how to speed-up PG when running specs. I asked it at SO here: http://stackoverflow.com/questions/9407442/optimise-postgresql-for-fast-testing But briefly. PG specs are 2x slower than SQLite. I want it to be on par (don't care about reliability or anything, just need fast specs). Would appreciate some suggestions. You really need to explain why this matters... You mention a typical Ruby on Rails app and then discuss SQLite. Well, typical web apps have more than 1 user, so fairly obviously using SQLite isn't appropriate. If SQLite isn't appropriate, why are you testing with it? How does a test run on a database you aren't using in production tell you anything about the success or otherwise of your program. It doesn't, so saying it runs quicker is irrelevant, surely? Perhaps just run half the test, that would make it twice as quick and still just as valid. If Postgres tests run in ~1 minute, what benefit have you gained from saving 30 seconds? How often are you running tests? So please explain a little more. As another Rails developer using PostgreSQL I think I can explain the use case. In standard Rails usage, the ORM handles all SQL query generation and thus the application is database agnostic. It is typical to use SQLite in development and testing and MySQL or PostgreSQL in production. However, if any PostgreSQL specific functionality is used then obviously PostgreSQL must also be used in development and testing. Another common practice is test-driven development. So the test suite for the application may run scores or hundreds of times per day per developer. So the speed of the test suite is of vital importance to developers. A 30 second difference 100's of times per day really can add up.
-- Jack Christensen ja...@hylesanderson.edu
Re: [GENERAL] How to enable thread safety on postgresql 8.3.6
On 02/23/12 3:22 AM, ultrayoYO wrote: OS : Linux Centos 5.1 you haven't installed any security or stability updates since 2007 ? -- john r pierce N 37, W 122 santa cruz ca mid-left coast
Re: [GENERAL] Optimise PostgreSQL for fast testing
On 2/23/12 9:06 AM, Jack Christensen wrote: As another Rails developer using PostgreSQL I think I can explain the use case. In standard Rails usage, the ORM handles all SQL query generation and thus the application is database agnostic. It is typical to use SQLite in development and testing and MySQL or PostgreSQL in production. However, if any PostgreSQL specific functionality is used then obviously PostgreSQL must also be used in development and testing. Another common practice is test-driven development. So the test suite for the application may run scores or hundreds of times per day per developer. So the speed of the test suite is of vital importance to developers. A 30 second difference 100's of times per day really can add up. Perhaps the emphasis should be on the tests themselves, and not PG cycles. Is he using Factory or Factory.build?.. that sort of thing. Is he running the entire test suite, when in fact just running one test would do until final checkin? And I'm curious as to why anyone would need to run tests 100s of times a day. How much code can ya write, or is he simply writing tests themselves all day? -ds
Re: [GENERAL] Optimise PostgreSQL for fast testing
On Thu, Feb 23, 2012 at 10:05 AM, David Salisbury salisb...@globe.gov wrote: On 2/23/12 9:06 AM, Jack Christensen wrote: As another Rails developer using PostgreSQL I think I can explain the use case. In standard Rails usage, the ORM handles all SQL query generation and thus the application is database agnostic. It is typical to use SQLite in development and testing and MySQL or PostgreSQL in production. However, if any PostgreSQL specific functionality is used then obviously PostgreSQL must also be used in development and testing. Another common practice is test-driven development. So the test suite for the application may run scores or hundreds of times per day per developer. So the speed of the test suite is of vital importance to developers. A 30 second difference 100's of times per day really can add up. Perhaps the emphasis should be on the tests themselves, and not PG cycles. Is he using Factory or Factory.build?.. that sort of thing. Is he running the entire test suite, when in fact just running one test would do until final checkin? And I'm curious as to why anyone would need to run tests 100s of times a day. How much code can ya write, or is he simply writing tests themselves all day? He's probably doing automated continuous integration testing. Two jobs ago we had a setup to do that and had 40k tests. The whole test suite took about 30 minutes to run and kicked off automatically when the last one finished and anyone touched any code.
Re: [GENERAL] Dynamic File Name for COPY TO in Stored Procedure
That worked. Thank you Adrian -Original Message- From: Adrian Klaver [mailto:adrian.kla...@gmail.com] Sent: Thursday, February 23, 2012 9:58 AM To: pgsql-general@postgresql.org Cc: Carlos Oliva Subject: Re: [GENERAL] Dynamic File Name for COPY TO in Stored Procedure On Thursday, February 23, 2012 6:42:53 am Carlos Oliva wrote: Hi, What would it be the correct format for using a variable in a stored procedure that uses COPY TO? I have the current stored procedure: CREATE FUNCTION Table_To_File(text) RETURNS void AS $delimeter$ COPY (SELECT * FROM table_to_xml('table', true, false, '')) TO '$1' WITH CSV QUOTE ' '; $delimeter$ LANGUAGE SQL; When I run the stored procedure: psql -d db name -c select Table_To_File('some_absolute_file_name'); I get the error that I must use absolute file names. When I replace TO '$1' with TO '/tmp/toto.xml', the stored procedure runs fine and creates the local file. I have tried several different ways to escape the $1 variable to no avail. At best, I get the same error. You will need to switch to using plpgsql and its EXECUTE statement to build the COPY statement : http://www.postgresql.org/docs/9.0/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN -- Adrian Klaver adrian.kla...@gmail.com
Re: [GENERAL] Optimise PostgreSQL for fast testing
On Thu, Feb 23, 2012 at 5:20 PM, Scott Marlowe scott.marl...@gmail.com wrote: He's probably doing automated continuous integration testing. Two jobs ago we had a setup to do that and had 40k tests. The whole test suite took about 30 minutes to run and kicked off automatically when the last one finished and anyone touched any code. Having lots of tests is a good thing. Bring 'em on. If you use SQLite for that, then it all runs in a single thread and it could easily take 30 minutes or longer. Now all you have to do is parallelise the tests and everything can work 10 times quicker and it would be much faster than the time SQLite produced. So using PostgreSQL for testing would be both quicker and more accurate, if you set the tests up right. The PostgreSQL regression tests are parallelised - if they weren't we'd produce a lot less work. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services
[GENERAL] Upgrade to 9.1 causing function problem
I have recently upgraded my database from 8.4 to 9.1. In the process I have moved everything to a different schema. Postgis is occupying the public schema. Everything is working fine except for some of my own functions. Here is a small function and table that will not update when I perform the following: select _me_set_process_month('2012-01-01'); It will run but the resulting table will not update. Any ideas?

CREATE OR REPLACE FUNCTION _me_set_process_month(date) RETURNS void AS
$BODY$
BEGIN
  update activity_month set action_month = $1;
END;
$BODY$
LANGUAGE plpgsql VOLATILE COST 100;
ALTER FUNCTION _me_set_process_month(date) OWNER TO postgres;

CREATE TABLE activity_month
(
  action_month date NOT NULL,
  CONSTRAINT idkeymonth PRIMARY KEY (action_month)
) WITH (OIDS=FALSE);
ALTER TABLE activity_month OWNER TO postgres;
GRANT ALL ON TABLE activity_month TO public;
GRANT ALL ON TABLE activity_month TO postgres;
Re: [GENERAL] Upgrade to 9.1 causing function problem
On Thursday, February 23, 2012 11:18:46 am Willem Buitendyk wrote: I have recently upgraded my database from 8.4 to 9.1. In the process I have moved everything to a different schema. Postgis is occupying the public schema. Everything is working fine except for some of my own functions. Here is a small function and table that will not update when I perform the following code: select _me_set_process_month('2012-01-01'); It will run but the resulting table will not update. Any ideas? What does the Postgres log say? What is your search_path set to? Is there more than one activity_month? My guess is that your search_path is limiting the visibility of the table. Without an explicit schema qualification of the table, Postgres will use the search_path to locate a table. The logs should say something, or you can change the function to point to a schema qualified table name if you do not want to change the search_path. -- Adrian Klaver adrian.kla...@gmail.com
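A sketch of the schema-qualification fix Adrian describes (the schema name here is illustrative, not from the original post):

```sql
-- Sketch: schema-qualify the table so the function no longer depends
-- on search_path (replace myschema with the actual schema name).
CREATE OR REPLACE FUNCTION _me_set_process_month(date) RETURNS void AS $BODY$
BEGIN
  UPDATE myschema.activity_month SET action_month = $1;
END;
$BODY$ LANGUAGE plpgsql;

-- Or inspect/adjust the search_path directly:
-- SHOW search_path;
-- SET search_path = myschema, public;
```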
Re: [GENERAL] Upgrade to 9.1 causing function problem
Here are the log returns:

2012-02-23 11:31:44 PST WARNING: invalid value for parameter "search_path": "crabdata"
2012-02-23 11:31:44 PST DETAIL: schema "crabdata" does not exist

Bizarre, because I did set my search path to the schema crabdata, but now it is saying that crabdata doesn't exist. But it clearly does. I'm at a loss. Willem On 2012-02-23, at 11:41 AM, Adrian Klaver wrote: On Thursday, February 23, 2012 11:18:46 am Willem Buitendyk wrote: I have recently upgraded my database from 8.4 to 9.1. In the process I have moved everything to a different schema. Postgis is occupying the public schema. Everything is working fine except for some of my own functions. Here is a small function and table that will not update when I perform the following code: select _me_set_process_month('2012-01-01'); It will run but the resulting table will not update. Any ideas? What does the Postgres log say? What is your search_path set to? Is there more than one activity_month? My guess is that your search_path is limiting the visibility of the table. Without an explicit schema qualification of the table, Postgres will use the search_path to locate a table. The logs should say something, or you can change the function to point to a schema qualified table name if you do not want to change the search_path. -- Adrian Klaver adrian.kla...@gmail.com
[GENERAL] Disable-spinlocks while compiling postgres 9.1 for ARM Cortex A8
Hi All, I am trying to compile Postgres source code for the ARM Cortex A8 architecture. While compiling, I got an error message which read: selected processor does not support `swpb r4,r4,[r3]' One of the Postgres forums, at http://postgresql.1045698.n5.nabble.com/BUG-6331-Cross-compile-error-aborts-Works-if-disable-spinlock-is-used-td5068738.html, mentioned that by using --disable-spinlocks we can overcome the error at the cost of performance. I did the same and it compiled successfully. But the INSTALL guide in the Postgres source code mentioned that I should inform the Postgres community in case I am forced to use --disable-spinlocks to let the code compile. Hence this email. So please suggest what I should do now. What sort of performance penalty will there be if I use this option? What actually is the significance of this parameter? Please guide me. This is the configure command I used: ./configure CC=/opt/toolchain/bin/armv7l-timesys-linux-gnueabi-gcc --target=armv7l-timesys-linux-gnueabi --prefix=/home/jayashankar/WorkingDirectory/Postgres9.1_Cortex --host=x86_64-unknown-linux-gnu CFLAGS='-march=armv7-a -mtune=cortex-a8 -mfpu=vfpv3 -mthumb' --disable-spinlocks Thanks and Regards Jayashankar Larsen Toubro Limited www.larsentoubro.com This Email may contain confidential or privileged information for the intended recipient (s) If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system.
Re: [GENERAL] Upgrade to 9.1 causing function problem
On 02/23/2012 12:49 PM, Willem Buitendyk wrote: Here are the log returns: 2012-02-23 11:31:44 PST WARNING: invalid value for parameter "search_path": "crabdata" 2012-02-23 11:31:44 PST DETAIL: schema "crabdata" does not exist Bizarre because I did set my search path to the schema crabdata but now it is saying that crabdata doesn't exist. But it clearly does. I'm at a loss. Willem On 2012-02-23, at 11:41 AM, Adrian Klaver wrote: On Thursday, February 23, 2012 11:18:46 am Willem Buitendyk wrote: I have recently upgraded my database from 8.4 to 9.1. In the process I have moved everything to a different schema. Postgis is occupying the public schema. Everything is working fine except for some of my own functions. Here is a small function and table that will not update when I perform the following code: select _me_set_process_month('2012-01-01'); It will run but the resulting table will not update. Any ideas? What does the Postgres log say? What is your search_path set to? Is there more than one activity_month? My guess is that your search_path is limiting the visibility of the table. Without an explicit schema qualification of the table, Postgres will use the search_path to locate a table. The logs should say something, or you can change the function to point to a schema qualified table name if you do not want to change the search_path. -- Adrian Klaver adrian.kla...@gmail.com Who owns/owned the crabdata schema?
Re: [GENERAL] [PERFORM] Disable-spinlocks while compiling postgres 9.1 for ARM Cortex A8
Jayashankar K B jayashankar...@lnties.com writes: Hi All, I am trying to compile Postgres Source code for ARM cortex A8 architecture. While compiling, I got an error message which read selected processor does not support `swpb r4,r4,[r3]' One of the Postgres forums at the location http://postgresql.1045698.n5.nabble.com/BUG-6331-Cross-compile-error-aborts-Works-if-disable-spinlock-is-used-td5068738.html mentioned that by using --disable-spinlocks, we can overcome the error at the cost of performance. I did the same and it compiled successfully. But the INSTALL guide in Postgres source code mentioned that I should inform the Postgres community in case I am forced to use --disable-spinlocks to let the code compile. Hence this email. So please suggest me what I should do now. Try this patch: http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=068e08eebbb2204f525647daad3fe15063b77820 BTW, please don't cross-post to multiple PG mailing lists; there's very seldom a good reason to do that. regards, tom lane
Re: [GENERAL] Upgrade to 9.1 causing function problem
crabby=# \dv
              List of relations
  Schema  |        Name        | Type |  Owner
----------+--------------------+------+----------
 crabdata | Total XXX X by XXX | view | postgres

also:

crabby=# show search_path;
 search_path
-------------
 crabdata
(1 row)

On 2012-02-23, at 12:16 PM, Rob Sargent wrote:

On 02/23/2012 12:49 PM, Willem Buitendyk wrote:

Here are the log returns:

2012-02-23 11:31:44 PST WARNING: invalid value for parameter search_path: crabdata
2012-02-23 11:31:44 PST DETAIL: schema crabdata does not exist

Bizarre because I did set my search path to the schema crabdata but now it is saying that crabdata doesn't exist. But it clearly does. I'm at a loss.

Willem

On 2012-02-23, at 11:41 AM, Adrian Klaver wrote:

On Thursday, February 23, 2012 11:18:46 am Willem Buitendyk wrote:

I have recently upgraded my database from 8.4 to 9.1. In the process I have moved everything to a different schema. Postgis is occupying the public schema. Everything is working fine except for some of my own functions. Here is a small function and table that will not update when I perform the following code: select _me_set_process_month('2012-01-01'); It will run but the resulting table will not update. Any ideas?

What does the Postgres log say? What is your search_path set to? Is there more than one activity_month? My guess is that your search_path is limiting the visibility of the table. Without an explicit schema qualification of the table, Postgres will use the search_path to locate a table. The logs should say something, or you can change the function to point to a schema-qualified table name if you do not want to change the search_path.

-- Adrian Klaver adrian.kla...@gmail.com

Who owns/owned the crabdata schema?

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Upgrade to 9.1 causing function problem
On 02/23/2012 11:49 AM, Willem Buitendyk wrote:

Here are the log returns:

2012-02-23 11:31:44 PST WARNING: invalid value for parameter search_path: crabdata
2012-02-23 11:31:44 PST DETAIL: schema crabdata does not exist

Bizarre because I did set my search path to the schema crabdata but now it is saying that crabdata doesn't exist. But it clearly does. I'm at a loss.

Willem

Where did you set the search_path, in postgresql.conf or from a SET command?

-- Adrian Klaver adrian.kla...@gmail.com

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Upgrade to 9.1 causing function problem
I have it set in postgresql.conf and I've also used:

alter user postgres set search_path = crabdata,public;

Looking at the logs after issuing the above alter user command I get:

2012-02-23 13:03:09 PST WARNING: invalid value for parameter search_path: crabdata, public
2012-02-23 13:03:09 PST DETAIL: schema crabdata does not exist

If I look in the schemata table in the catalogs I see the crabdata schema is there:

crabby;pg_toast;postgres
crabby;pg_temp_1;postgres
crabby;pg_toast_temp_1;postgres
crabby;pg_catalog;postgres
crabby;public;postgres
crabby;information_schema;postgres
crabby;crabdata;postgres

I should note this is on Windows 7 64-bit, using the 32-bit installation of PostgreSQL 9.1.

On 2012-02-23, at 12:57 PM, Adrian Klaver wrote:

On 02/23/2012 11:49 AM, Willem Buitendyk wrote:

Here are the log returns:

2012-02-23 11:31:44 PST WARNING: invalid value for parameter search_path: crabdata
2012-02-23 11:31:44 PST DETAIL: schema crabdata does not exist

Bizarre because I did set my search path to the schema crabdata but now it is saying that crabdata doesn't exist. But it clearly does. I'm at a loss.

Willem

Where did you set the search_path, in postgresql.conf or from a SET command?

-- Adrian Klaver adrian.kla...@gmail.com

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Upgrade to 9.1 causing function problem
On 02/23/12 11:49 AM, Willem Buitendyk wrote:

2012-02-23 11:31:44 PST WARNING: invalid value for parameter search_path: crabdata
2012-02-23 11:31:44 PST DETAIL: schema crabdata does not exist

Bizarre because I did set my search path to the schema crabdata but now it is saying that crabdata doesn't exist. But it clearly does. I'm at a loss.

Add the database name to your log_line_prefix ... I use something like:

log_line_prefix = '%m %u@%d[%p]: '

so my log lines look like:

2012-02-21 14:04:21.595 PST user@dbname[pid]: ERROR: ...

(where pid is the process ID). I'm suggesting this so you can confirm that you are getting this error on the database you think it is, and not something else.

-- john r pierce  N 37, W 122  santa cruz ca  mid-left coast

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Re: [PERFORM] Disable-spinlocks while compiling postgres 9.1 for ARM Cortex A8
Hi Tom, Sorry about the cross-post. I am not aware of the procedures for patches etc. Could you please tell me how to use the patch? I have already compiled and got the Postgres server. So please let me know the process of patching, or kindly point me to a link which explains this.

Thanks and Regards, Jayashankar

-----Original Message----- From: Tom Lane [mailto:t...@sss.pgh.pa.us] Sent: 24 February 2012 AM 09:20 To: Jayashankar K B Cc: pgsql-general@postgresql.org; pgsql-performa...@postgresql.org Subject: Re: [PERFORM] Disable-spinlocks while compiling postgres 9.1 for ARM Cortex A8

Jayashankar K B jayashankar...@lnties.com writes:

Hi All, I am trying to compile the Postgres source code for the ARM Cortex A8 architecture. While compiling, I got an error message which read: selected processor does not support `swpb r4,r4,[r3]'. One of the Postgres forum threads, at http://postgresql.1045698.n5.nabble.com/BUG-6331-Cross-compile-error-aborts-Works-if-disable-spinlock-is-used-td5068738.html, mentioned that by using --disable-spinlocks we can overcome the error at the cost of performance. I did the same and it compiled successfully. But the INSTALL guide in the Postgres source code says that I should inform the Postgres community in case I am forced to use --disable-spinlocks to get the code to compile. Hence this email. So please suggest what I should do now.

Try this patch: http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=068e08eebbb2204f525647daad3fe15063b77820

BTW, please don't cross-post to multiple PG mailing lists; there's very seldom a good reason to do that.

regards, tom lane
-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Upgrade to 9.1 causing function problem
On 02/23/2012 01:08 PM, Willem Buitendyk wrote:

I have it set in postgresql.conf and I've also used: alter user postgres set search_path = crabdata,public;

Well, search_path is a string, so have you tried:

search_path = 'crabdata,public'

-- Adrian Klaver adrian.kla...@gmail.com

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Upgrade to 9.1 causing function problem
Actually, what works is set search_path='crabdata', 'public' ; On 2/23/12 1:10 PM, Adrian Klaver adrian.kla...@gmail.com wrote: On 02/23/2012 01:08 PM, Willem Buitendyk wrote: I have it set in postgresql.conf and I've also used: alter user postgres set search_path = crabdata,public; Well search_path is a string, so have you tried?; search_path='crabdata,public' -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Upgrade to 9.1 causing function problem
On 02/23/2012 01:15 PM, Igor Polishchuk wrote: Actually, what works is set search_path='crabdata', 'public' ; Yea, I was not clear here. I was referring to postgresql.conf setting. -- Adrian Klaver adrian.kla...@gmail.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
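To pull the two answers above together: in postgresql.conf the whole value is one quoted string, while in SQL the entries are separate. A sketch using the schema names from this thread:

```
# postgresql.conf -- the value is a single comma-separated string
search_path = 'crabdata, public'
```

The per-user override attempted earlier in the thread (`ALTER USER postgres SET search_path = crabdata, public;`) is the SQL-level equivalent and takes effect on the next login.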
Re: [GENERAL] Upgrade to 9.1 causing function problem
On 02/23/2012 01:08 PM, Willem Buitendyk wrote: I have it set in postgresql.conf and I've also used: alter user postgres set search_path = crabdata,public; looking at the logs after issuing the above alter user command I get: 2012-02-23 13:03:09 PST WARNING invalid value for parameter search_path: crabdata, public 2012-02-23 13:03:09 PST DETAIL schema crabdata does not exist I should note this is on Windows 7 64 bit - using the 32 bit installation of postgresql 9.1 How are you doing these changes, via psql or PgAdmin or both? Is the previous 8.4 instance of Postgres still up and running? Are you sure you are only working with one database cluster? -- Adrian Klaver adrian.kla...@gmail.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Upgrade to 9.1 causing function problem
Both via psql and PgAdmin. Yes only one database cluster. On 2012-02-23, at 1:32 PM, Adrian Klaver wrote: On 02/23/2012 01:08 PM, Willem Buitendyk wrote: I have it set in postgresql.conf and I've also used: alter user postgres set search_path = crabdata,public; looking at the logs after issuing the above alter user command I get: 2012-02-23 13:03:09 PST WARNING invalid value for parameter search_path: crabdata, public 2012-02-23 13:03:09 PST DETAIL schema crabdata does not exist I should note this is on Windows 7 64 bit - using the 32 bit installation of postgresql 9.1 How are you doing these changes, via psql or PgAdmin or both? Is the previous 8.4 instance of Postgres still up and running? Are you sure you are only working with one database cluster? -- Adrian Klaver adrian.kla...@gmail.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Upgrade to 9.1 causing function problem
On 02/23/2012 01:53 PM, Willem Buitendyk wrote:

Both via psql and PgAdmin. Yes, only one database cluster.

1) Did you try John's recommendation, just to be sure?

add the database name to your log_line_prefix ... I use something like: log_line_prefix = '%m %u@%d[%p]: '

2) So what happens if you change your function to have an explicit schema reference to the table?

update crabdata.activity_month set action_month = $1;

-- Adrian Klaver adrian.kla...@gmail.com

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
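Since the original function body was not reposted in this branch of the thread, here is only a hypothetical sketch of what Adrian's schema-qualified variant could look like (function, table and column names taken from the thread):

```sql
-- Hypothetical sketch: qualifying the table makes the function
-- independent of whatever search_path the session ends up with
CREATE OR REPLACE FUNCTION _me_set_process_month(date) RETURNS void AS $$
    UPDATE crabdata.activity_month SET action_month = $1;
$$ LANGUAGE sql;
```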
Re: [GENERAL] Re: [PERFORM] Disable-spinlocks while compiling postgres 9.1 for ARM Cortex A8
On Thu, Feb 23, 2012 at 3:09 PM, Jayashankar K B jayashankar...@lnties.com wrote: Hi Tom, Sorry about the cross-post. I am not aware of the procedures for patch etc. Could you please tell me how to use the patch ? see general instructions here: http://jungels.net/articles/diff-patch-ten-minutes.html merlin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
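The tutorial Merlin links boils down to a two-command workflow; here is a minimal self-contained sketch (the file names are made up for illustration):

```shell
# Make a scratch file and a modified copy of it
printf 'line 1\nline 2\n' > original.txt
cp original.txt patched.txt
echo 'line 3' >> patched.txt

# diff records the change; patch replays it onto the original
diff -u original.txt patched.txt > fix.patch
patch original.txt < fix.patch
```

For the PostgreSQL source tree the same idea applies: save the commitdiff page Tom linked as a file, run `patch -p1 < file.patch` from the top of the source directory (git diffs use a/ and b/ prefixes, hence -p1), then rebuild.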
Re: [GENERAL] Optimise PostgreSQL for fast testing
On Thu, Feb 23, 2012 at 11:15 AM, Simon Riggs si...@2ndquadrant.com wrote:

On Thu, Feb 23, 2012 at 5:20 PM, Scott Marlowe scott.marl...@gmail.com wrote:

He's probably doing automated continuous integration testing. Two jobs ago we had a setup to do that and had 40k tests. The whole test suite took about 30 minutes to run and kicked off automatically when the last one finished and anyone touched any code. Having lots of tests is a good thing. Bring 'em on.

If you use SQLite for that, then it all runs in a single thread and it could easily take 30 minutes or longer. Now all you have to do is parallelise the tests and everything can work 10 times quicker and it would be much faster than the time SQLite produced.

It's funny how, once you start thinking of how to optimize to run 8 or 16 or more concurrent tests, you sometimes forget that doing the same thing to some simpler tools might result in very poor performance, until you have to run the tests on the old system and start wondering why you ever thought it was fast.

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Optimise PostgreSQL for fast testing
On 23/02/2012, at 7:35 PM, Marti Raudsepp wrote: If you have lots of very simple queries, then usually much of the overhead is in query planning. There are a few tricks to dumb down the planner to make it faster -- although that may come at the cost of slowing down execution. * If your queries use joins at all, you can reduce planning overhead by setting join_collapse_limit=1 and from_collapse_limit=1 or some other low number. * Set default_statistics_target=5 or another low number and run ANALYZE on the whole database. * Set enable_bitmapscan=off, enable_material=off, enable_mergejoin=off, enable_hashjoin=off -- this will prevent the planner for trying certain kinds of plans in the first place. Just for the heck of it, you might gain a bit by setting track_activities=off, update_process_title=off Thanks a lot Marti. After I've applied these settings I couldn't see major improvement over fsync=off. It was pretty much within the error margin. But you're right that most of the queries are simple and I hope this will work a little bit faster on small runs (like a single test suite). Will definitely keep an eye on these settings. So far fsync=off is the best I could get. Thanks one more time for the help. Cheers. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
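Collected in one place, the speed-over-safety settings tried in this thread look like this as a postgresql.conf fragment (throwaway test clusters only -- fsync=off can corrupt data on a crash):

```
# Throwaway test instance only -- never use these in production
fsync = off
full_page_writes = off

# Cheaper planning for many small queries (may produce worse plans)
join_collapse_limit = 1
from_collapse_limit = 1
default_statistics_target = 5   # then re-run ANALYZE on the database
enable_bitmapscan = off
enable_material = off
enable_mergejoin = off
enable_hashjoin = off

# Misc bookkeeping
track_activities = off
update_process_title = off
```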
Re: [GENERAL] Optimise PostgreSQL for fast testing
On 24/02/2012, at 2:06 AM, Adrian Klaver wrote:

Would appreciate some suggestions.

Not enough information to make suggestions.

Jack Christensen pretty much nailed it. But I'll answer the particulars here too:

Why are you switching databases?

Need FTS from PG. No other reasons yet.

What are the specs?

A typical DB spec (test) does the following:
1. Creates a number of records (commonly about 5-ish, but may vary from 1 to ~40 across all tables).
2. Executes some queries against the dataset (**MOST** of them are pretty simple, with only 1-2 joins; only some queries use 5-ish joins, sorting, distinct etc).
3. May update a couple of records too (we are talking about a couple only, so in the range of 1-5, very rarely ~20-30).
4. At the end a spec truncates all the tables (uses truncate, not delete).
This repeats on every spec/test (hundreds of those).

What is the application?

As I said, it is a typical Rails web application. But for the purpose of this thread it is irrelevant, since we're basically talking about a single-user development/test environment where the only user of the app is the spec/test and there is no concurrency.

What is the use case? Single user? Networked, multiple user?

See above.

Do you see the application/database growing?

No. As I said before, the database is used ONLY for running tests and can be recreated at any time. Generally it will never have any data in it (except when specs are running).

Cheers.
Re: [GENERAL] Optimise PostgreSQL for fast testing
On 24/02/2012, at 2:22 AM, Simon Riggs wrote:

Would appreciate some suggestions.

You really need to explain why this matters...

I've just replied to Adrian with more details. I suppose you don't mind taking a look there, so I won't copy-paste it :)

You mention a typical Ruby on Rails app and then discuss SQLite. Well, typical web apps have more than 1 user, so fairly obviously using SQLite isn't appropriate.

This is really irrelevant to this thread, but I believe you are not right here: http://www.sqlite.org/whentouse.html -- "Websites: SQLite usually will work great as the database engine for low to medium traffic websites (which is to say, 99.9% of all websites)."

If SQLite isn't appropriate, why are you testing with it?

It is appropriate in the first place. But another reason to use SQLite for testing is that it is fast and easy to maintain and set up. Since I need to use PG FTS (and that's the only reason), I have to use PG for testing too. And this is where this thread comes in.

Perhaps just run half the tests, that would make it twice as quick and still just as valid. If Postgres tests run in ~1 minute, what benefit have you gained from saving 30 seconds? How often are you running tests?

I think it is irrelevant to this thread, but here's my math: I run the full suite approx every 5 mins, which is ~100 times a day. Now, 30s*100 = 50 mins vs 60s = 100 mins. That is another ~hour lost a day. On top of that I would run a single test file probably every couple of minutes or so. (So even if it is a couple of seconds slower, it all adds up a lot.) This is just common TDD style. Watch https://www.destroyallsoftware.com/ for example to see what I mean.

Don't get me wrong, I realise that PG and SQLite are totally different beasts. That's totally fine if PG can't beat SQLite on speed in **this particular case**. I just want to try to tune it to be as fast as it can (for **this particular case**, see my reply to Adrian).

Cheers.
Re: [GENERAL] Optimise PostgreSQL for fast testing
On 24/02/2012, at 5:15 AM, Simon Riggs wrote:

Now all you have to do is parallelise the tests and everything can work 10 times quicker and it would be much faster than the time SQLite produced. So using PostgreSQL for testing would be both quicker and more accurate, if you set the tests up right.

That is certainly true. And there are a number of techniques to make tests faster. But that is outside the scope of this thread, I believe. I only want to make it run locally (don't care about CI yet) faster. So far I could get the most out of it with fsync=off (which is a ~15% improvement).
Re: [GENERAL] Optimise PostgreSQL for fast testing
On 02/23/2012 07:16 PM, Dmytrii Nagirniak wrote: That's totally fine if PG can't beat SQLite on speed in **this particular case**. I just want to try to tune it to be as fast as it can (for **this particular case**, see my reply to Adrian). You can find all of the big tunable parameters at http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server That's as good of a general how do I make this faster by tweaking the server guide as you'll get. Once you've hit the big tunables--shared_buffers, checkpoint_segments, work_mem, effective_cache_size, and tweaking either synchronous_commit or fsync--there's not too much else you can do except dig into what's slow in individual queries. Only other thing that might help is running ANALYZE against the whole database after any major loading of test data, just to make sure the queries are being executed with good statistics. If you can extract the SQL from the test cases so they can be executed directly with the psql client, you could add \timing before them to see how long each individual query runs, to look for the long running ones. It's possible that every statement is a little slower, which would be unsurprising and not something you can really resolve if so. It could just be a small number that are being executed poorly though, in which case specific query tweaking might be possible. You might get further insight by posting the EXPLAIN ANALYZE plans of whatever the slowest single query is. More on that subject at http://wiki.postgresql.org/wiki/Slow_Query_Questions -- Greg Smith 2ndQuadrant USg...@2ndquadrant.com Baltimore, MD PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
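Greg's \timing suggestion looks like this in practice (psql session sketch; the table, query and timing figure are hypothetical):

```
test=# \timing
Timing is on.
test=# SELECT * FROM some_table WHERE id = 1;
 ...
Time: 0.812 ms
test=# EXPLAIN ANALYZE SELECT * FROM some_table WHERE id = 1;
                        QUERY PLAN
 ...
```

Running the extracted statements this way makes the one or two genuinely slow queries stand out from uniform per-statement overhead.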
Re: [GENERAL] Upgrade to 9.1 causing function problem
On Thursday, February 23, 2012 1:53:42 pm Willem Buitendyk wrote:

Both via psql and PgAdmin. Yes only one database cluster.

Another thought. Did you CREATE the schema using PgAdmin, and if so, might you have inadvertently put in a trailing or leading space? I ask because, if I remember correctly, PgAdmin by default quotes object names, and that would trap the space character. I know you showed this previously:

crabby;crabdata;postgres

On the chance that spaces were trimmed out of the above, what does the query below show?

SELECT length(schema_name), schema_name from information_schema.schemata;

-- Adrian Klaver adrian.kla...@gmail.com

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
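The quoting trap Adrian describes is easy to reproduce (SQL sketch; the trailing space inside the quoted name is deliberate):

```sql
CREATE SCHEMA "crabdata ";          -- quoted identifier keeps the trailing space
-- An unquoted search_path entry is looked up as 'crabdata', which won't match
SELECT length(schema_name), schema_name
FROM information_schema.schemata;   -- length exposes the invisible character
```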
Re: [GENERAL] Optimise PostgreSQL for fast testing
On Thu, Feb 23, 2012 at 5:22 PM, Dmytrii Nagirniak dna...@gmail.com wrote:

On 24/02/2012, at 5:15 AM, Simon Riggs wrote:

Now all you have to do is parallelise the tests and everything can work 10 times quicker and it would be much faster than the time SQLite produced. So using PostgreSQL for testing would be both quicker and more accurate, if you set the tests up right.

That is certainly true. And there are a number of techniques to make tests faster. But that is outside of the scope of this thread I believe.

Is there a reason why you can't parallelize your tests? If you could run 10 or so at a time it would be worth benchmarking. Also, look into automating your testing, so that you don't need to run the tests all the time; they run in the background, and if something breaks they send you an alert with the code that broke things etc.

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
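Scott's parallelization suggestion can be sketched independently of any particular test framework (pure-Python illustration; the worker and test names are made up, and sleep stands in for database round-trips):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(name):
    """Stand-in for a single test case; a real suite would hit the database here."""
    time.sleep(0.05)   # simulate per-test latency (mostly waiting on the server)
    return (name, "ok")

def run_suite(tests, workers=8):
    # Fan the cases out across workers; DB-bound tests spend most of their
    # time waiting on the server, so even threads give a near-linear speedup.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_test, tests))

start = time.perf_counter()
results = run_suite([f"test_{i}" for i in range(40)])
elapsed = time.perf_counter() - start
```

A real runner would also need one database (or one schema) per worker so the per-spec truncates described earlier in the thread don't collide.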
Re: [GENERAL] Optimise PostgreSQL for fast testing
On Thursday, February 23, 2012 4:00:08 pm Dmytrii Nagirniak wrote:

On 24/02/2012, at 2:06 AM, Adrian Klaver wrote:

Would appreciate some suggestions.

Not enough information to make suggestions.

Jack Christensen pretty much nailed it. But I'll answer the particulars here too:

Why are you switching databases?

Need FTS from PG. No other reasons yet.

Not sure if you need the specific features of FTS from Postgres; otherwise SQLite has FTS also: http://www.sqlite.org/fts3.html It is a different beast from the Postgres version, so I don't know if it would apply.

-- Adrian Klaver adrian.kla...@gmail.com

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
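For comparison, the FTS3 module linked above is driven through a virtual table and MATCH rather than Postgres's tsvector/tsquery (SQL sketch):

```sql
-- SQLite FTS3: create a virtual table, then query it with MATCH
CREATE VIRTUAL TABLE docs USING fts3(body);
INSERT INTO docs(body) VALUES ('full text search in sqlite');
SELECT * FROM docs WHERE body MATCH 'search';
```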
[GENERAL] Problems with client_encoding?
Sirs, I performed an installation on one of our servers and ran into the following message:

[user@local ~]$ psql
psql: invalid connection option client_encoding

After researching the subject, I set a variable called PGCLIENTENCODING = UTF8 and that solved my problem. However, when installing another server with the same procedures, this variable was not required. What could I have done wrong?

1st server: OS - CentOS 5.7 Final. PostgreSQL 9.1.1, later updated to 9.2.1, and the problem continues...

2nd server: OS - CentOS 5.7 Final. PostgreSQL 9.1.2. There was no such problem.

All the installations we do here follow the official packages from the pgdg repository, and the installation process is the same in every situation. Can anyone give me a HELP?

-- *Regards, Emanuel Araújo* http://eacshm.wordpress.com/ * * *Linux Certified LPIC-1*
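The workaround described above, as a shell sketch (PGCLIENTENCODING is a standard libpq environment variable):

```shell
# Force the client encoding through the environment instead of a
# client_encoding connection option that the installed libpq rejects
export PGCLIENTENCODING=UTF8
echo "client encoding forced to: $PGCLIENTENCODING"
# psql ...   # libpq picks the variable up at connection time
```

The echo only confirms the variable is set; the real effect is that libpq reads it when a connection is opened.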