Re: [GENERAL] psql screen size
By "messed up" I mean: when I scroll up through the command history, psql shows long queries (e.g. two lines long) on a single line, and the part that doesn't fit overwrites the beginning of the query. Likewise, when typing a long SQL statement, at some point I start overwriting it from the beginning of the line. Sometimes, when I recall a query from the history and append some conditions at the end, I start writing on the lines above, and parts of the query get duplicated there; each key press moves me up, and a shorter part of the query is duplicated.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Cannot remember what to do to get the information_schema, pg_catalog etc showing.
Johan Nel wrote:
> Did an installation of PostgreSQL on a new laptop and I just cannot remember which .sql script to run to have the above schemas showing in pgAdmin, or is it something to do with a checkbox/setting during installation?

File / Options... / Display / [x] Show System Objects in the treeview

After refreshing the treeview you will find the information schema under [database] / Catalogs / ANSI (information_schema) / Catalog Objects.

Rainer
Re: [GENERAL] Date data type
Mike Hall wrote:
> I'm currently converting an MS Access database to PostgreSQL (version 8.1, as that is the version that ships with CentOS 5). I'm having trouble with an INSERT statement attempting to insert an empty value ('') into a field with data type DATE. This produces the following error message:
>
> ERROR: invalid input syntax for type date
>
> It does the same whether the field has a NOT NULL constraint or not. I can't find anything relevant in the documentation. What am I doing wrong?

An empty string is not a NULL value, and an empty string cannot be converted to a valid date. Use NULL instead of '' to insert a NULL value.

Yours,
Laurenz Albe
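A minimal illustration of the difference (table and column names are made up for the example):

```sql
CREATE TABLE events (ev_date date);

INSERT INTO events VALUES ('');    -- fails: invalid input syntax for type date
INSERT INTO events VALUES (NULL);  -- works: stores a SQL NULL
```

From Access/ODBC, the fix is to make sure the client sends a real NULL rather than an empty string.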
Re: [GENERAL] Date data type
Mike Hall wrote:
> I'm currently converting an MS Access database to PostgreSQL (version 8.1 as it is the version that ships with CentOS 5).

What version of Microsoft Access are you using? I haven't seen this issue with Access 2007, which I've been forced to use in a recent project; it actually seems to have some idea what NULL is, and uses it vaguely appropriately.

What ODBC driver version are you using?

Is the data type in the linked table shown as being declared of type DATE too?

What type of field is bound to the linked table? A proper date field, or a free-form text field? Does it default to NULL, or to the empty string?

--
Craig Ringer
[GENERAL] Is there a simple function for copying cursor data into an array?
Hi list,

I am searching for a function which creates an array from (or appends to an array) the contents of a cursor. I found that it is possible to get a SELECT result as an array, so it seems logical to have something similar for cursors. Here is a copy-paste from the documentation for the 8.3 version:

SELECT ARRAY(SELECT oid FROM pg_proc WHERE proname LIKE 'bytea%');
                          ?column?
-------------------------------------------------------------
 {2011,1954,1948,1952,1951,1244,1950,2005,1949,1953,2006,31}
(1 row)

What I want is to have some cursor, open it (optionally with parameters), and just call some function which fills my array with the data from the cursor. I can write the iteration myself, but I think it could be done in some universal way. Any suggestions?
Re: [GENERAL] Advantage of more than 1 tablespace on 1 disk?
What you want, in the case of the table that gets read a lot, is partitioning + clustering. For tables that change a lot, I don't really know myself either; it is usually a trial-and-error type of scenario for me. (Partitioning can help there too, but that depends on which parts are updated and under what conditions, etc.)
Re: [GENERAL] Redefining an existing Table Schema for Foreign Key Constraint - Question
In article [EMAIL PROTECTED], Brian714 [EMAIL PROTECTED] writes:

> Currently, the database contains thousands of records in the Customers and Creditcards tables. I would like to re-define the Customers table to follow the following schema:
>
> Customers Table
>   id: integer -- primary key
>   first_name: varchar(50)
>   last_name: varchar(50)
>   cc_id: integer references Creditcards.id
>   address: varchar(200)
>   email: varchar(50)
>   password: varchar(20)
>
> As you can see, I would like to convert the column that used to be credit_card_number from the Customers table and turn it into a cc_id, which is an integer that references the column id from the table Creditcards.

This is the usual 1:n relationship, but I think you got it backwards. There are two questions to ask:

1. Are there customers with more than one credit card?
2. Are there credit cards owned by more than one customer?
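If the answers are "yes" and "no" respectively (each card belongs to exactly one customer), the reference usually points the other way, along these lines (table and column names are assumed from the post):

```sql
ALTER TABLE creditcards
  ADD COLUMN customer_id integer REFERENCES customers (id);
```

With that, a customer's cards are found by querying creditcards by customer_id, and no cc_id column is needed in Customers at all.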
Re: [GENERAL] Is there a simple function for copying cursor data into an array?
Hello

2008/11/3 alebu [EMAIL PROTECTED]:
> What I want is to have some cursor, open it (with parameters optionally) and just call some function which will fill my array with data from the cursor. [...] Any suggestion?

I am afraid there isn't any simple way.

Pavel
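Lacking a built-in, the iteration can be written once in PL/pgSQL and reused; a rough sketch for the oid example from the question (the function name is made up):

```sql
CREATE OR REPLACE FUNCTION cursor_to_array() RETURNS oid[] AS $$
DECLARE
    cur    refcursor;
    v      oid;
    result oid[] := '{}';
BEGIN
    -- open the cursor; parameters could be added to the query here
    OPEN cur FOR SELECT oid FROM pg_proc WHERE proname LIKE 'bytea%';
    LOOP
        FETCH cur INTO v;
        EXIT WHEN NOT FOUND;
        result := array_append(result, v);
    END LOOP;
    CLOSE cur;
    RETURN result;
END;
$$ LANGUAGE plpgsql;
```

For a cursor that is only opened inside one function, SELECT ARRAY(...) INTO result is of course simpler.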
Re: [GENERAL] Connections getting stuck sending data to client (FIXED)
[Forgot to send this to the list first time]

On Fri, Oct 31, 2008 at 12:27:27PM -0600, Scott Marlowe wrote:
> On Fri, Oct 31, 2008 at 9:36 AM, Chris Butler [EMAIL PROTECTED] wrote:
> > I've been having intermittent problems with our DB server (running postgresql 8.3.3) reaching its connection limit, all because of a SELECT statement that's stuck while sending data. This gets stuck because there's a transaction waiting to do an ALTER TABLE, then the subsequent SELECTs wait for the ALTER. The problem seems to be that the client connection drops while the server is sending data to the client.
>
> Are the clients windows? I know there's a well known and published bug in windows where it freaks out on blocking ports and the connection hangs. There's a kb article on it on the ms site. It's been around since windows 95 or so and there appears to be no plans to fix it.

The client is a PHP webapp running on a Debian VPS.

Shortly after sending my previous message, I did spot an error in the client's firewall. It was sending back an ICMP port-unreachable when rejecting TCP packets, rather than a TCP reset, which I guess is why the connection stayed open rather than closing immediately. Still unsure why it's dropping the connection, but at least it doesn't get stuck any more.

--
Chris Butler
Zedcore Systems Ltd
UK tel: 0114 238 1828 ext 72
[GENERAL] Advantage of more than 1 tablespace on 1 disk?
Hi,

I've got a database with massive tables which fall into 2 categories: tables which don't change often but get read a LOT, and tables which are heavily added to continuously and sometimes read. Would there be any advantage in moving the latter logging tables to a separate tablespace, bearing in mind it would still be on the same disk? Or have I kinda missed how tablespaces work?

Thanks

Thom
[GENERAL] raise notice, psql and having some feedback from scripts
I have some batch work in scripts that I pass through:

psql < script.sql > script.log

or that may be run by cron. In plain SQL, RAISE NOTICE is not available. Is there any other way to send messages to the logs without polluting them too much, as -a does?

--
Ivan Sergio Borgonovo
http://www.webthatworks.it
Re: [GENERAL] Performance of views
Tom Lane wrote:
> Scott Marlowe [EMAIL PROTECTED] writes:
> > Note that, at least in older versions, MySQL completely materialized a temporary table from a view, then used that for the view. This is horribly inefficient, and results in a lot of people thinking views are slow. Not sure if this has been addressed in MySQL yet; don't really care anymore, since I rarely use mysql for anything anymore.
>
> Some simple experiments with mysql 5.0.67 suggest that this meme is obsolete there too. I found some cases where it looks like we optimize a bit better than they do, but for simple views you seem to get the same plan as if you'd written out the equivalent query in-line.

This is true of simple queries, but as soon as your query starts becoming complex (e.g. involving subselects) it just dumps the result of the view query into a temporary table and uses that. (Tested with 5.0.67 too.)
Re: [GENERAL] raise notice, psql and having some feedback from scripts
On Mon, Nov 03, 2008 at 03:14:14PM +0100, Ivan Sergio Borgonovo wrote:
> I just have some batch work in scripts that I pass through: psql < script.sql > script.log, or may be run by cron. In plain SQL, RAISE NOTICE is not available. Is there any other way to send messages to the logs without polluting them too much with -a?

You could write a PL/pgSQL function that just does a RAISE NOTICE with its argument.

As another aid, the output from psql tends to be better if you use -f script.sql rather than piping the file in.

Sam
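Such a wrapper is only a few lines; a sketch (the function name is made up):

```sql
CREATE OR REPLACE FUNCTION log_notice(msg text) RETURNS void AS $$
BEGIN
    RAISE NOTICE '%', msg;
END;
$$ LANGUAGE plpgsql;
```

Then sprinkle SELECT log_notice('starting batch step 2'); through the batch script. Note that notices go to psql's stderr, so redirect with 2>&1 to capture them in script.log; this marks progress without echoing every statement the way -a does.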
Re: [GENERAL] Are there plans to add data compression feature to postgresql?
Peter Eisentraut wrote:
> Craig Ringer wrote:
> > So - it's potentially even worth compressing the wire protocol for use on a 100 megabit LAN if a lightweight scheme like LZO can be used.
>
> LZO is under the GPL though.

Good point. I'm so used to libraries being under more appropriate licenses like the LGPL or BSD license that I completely forgot to check. It doesn't matter that much, anyway, in that deflate would also do the job quite well for any sort of site-to-site or user-to-site WAN link.

--
Craig Ringer
[GENERAL] JDBC: How to pass array in prepared statement for stored procedure execution?
Hi list,

This question is general to JDBC with any database, but I think it's OK if I post it here. Let's say I have a stored procedure which takes an array of objects as one of its IN parameters. How can I pass an array of my objects into a JDBC PreparedStatement object? The only recommendation I found was to pass an object which implements the Array interface, but that looks quite complex for such a common task, and not all of its methods are clear to me. Can anyone give a simple example of how it can be done?
Re: [GENERAL] JDBC: How to pass array in prepared statement for stored procedure execution?
alebu [EMAIL PROTECTED] writes:
> How can I pass an array of my objects into a JDBC PreparedStatement object?

The folks who would know this are more likely to notice your question if you ask on pgsql-jdbc ...

regards, tom lane
[GENERAL] gin index creation performance problems
I'm looking for a bit more guidance on gin index creation.

The process:
- vacuum analyze
- start a transaction that:
  - drops the triggers that update a tsvector
  - drops the index on the tsvector
  - fills several tables
  - updates the tsvector in a table with ~800K records
  - recreates the gin index
- commit

To have a rough idea of the data involved:
- 800K records
- tsvector formed from the concatenation of 6 fields
- total length of concatenated fields ~200 chars [*]
- average number of lexemes in the tsvector: 10 [*]

[*] guessed

Hardware: 2x Xeon HT 3.2GHz, 4GB RAM, SCSI RAID5.

Index creation takes more than 1h. maintenance_work_mem is still untouched. What would be a good value to start from? Anything else to do to improve performance?

--
Ivan Sergio Borgonovo
http://www.webthatworks.it
Re: [GENERAL] gin index creation performance problems
On Mon, 3 Nov 2008 16:45:35 +0100 Ivan Sergio Borgonovo [EMAIL PROTECTED] wrote:
> Index creation takes more than 1h. maintenance_work_mem is still untouched. What would be a good value to start from? Anything else to do to improve performance?

Forgot to add that top says postgresql is using 100% CPU and 15% memory.
Re: [GENERAL] gin index creation performance problems
On Mon, 3 Nov 2008, Ivan Sergio Borgonovo wrote:
> Index creation takes more than 1h. maintenance_work_mem is still untouched. What would be a good value to start from? Anything else to do to improve performance?

Why didn't you change maintenance_work_mem? You can change it online, just before CREATE INDEX. Bulk gin index creation uses it as a buffer, and you can save a lot of IO. All this is written in the documentation, and there are other parameters you should be concerned about.

Regards,
Oleg
_____
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83
Re: [GENERAL] gin index creation performance problems
Ivan Sergio Borgonovo [EMAIL PROTECTED] writes:
> maintenance_work_mem is still untouched. What would be a good value to start from?

GIN index build time is *very* sensitive to maintenance_work_mem. Try cranking it up to a couple hundred megabytes and see if that helps. Also, if you're on 8.2, I think 8.3 might be faster.

regards, tom lane
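Since the setting can be changed per session, the rebuild step might look roughly like this (index, table and column names are made up):

```sql
SET maintenance_work_mem = '256MB';  -- only affects this session
CREATE INDEX items_fts_idx ON items USING gin (fts_vector);
RESET maintenance_work_mem;
```

This avoids raising the value server-wide, where every concurrent maintenance operation could claim that much memory at once.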
[GENERAL] Specific user does not require password for postgres
Hi,

I have installed PostgreSQL 8.3.4 on Windows Vista Business Edition. My currently logged-on user is the only user that does not need to specify a password when running: psql -U postgres

I used the following guide: PostgreSQL on Windows with SSL (all variables are in CAPITALS):

Create POSTGRES_SERVICE_USER
Remove POSTGRES_SERVICE_USER from Users
Assign POSTGRES_SERVICE_USER_PASSWORD to POSTGRES_SERVICE_USER
Grant POSTGRES_SERVICE_USER "Run As Service" (secpol.msc)
Create PGDIR (e.g. C:\Program Files\PostgreSQL)
Create PGDATA (e.g. C:\ProgramData\PostgreSQL\Data)
Define PGDATA
Define PGBIN (i.e. PGDIR\bin)
Add PGBIN to the Path
Give POSTGRES_SERVICE_USER change rights on both PGDATA and PGDIR
Extract the PostgreSQL Windows archive to PGDIR
As an admin, register the PostgreSQL service:
  pg_ctl.exe register -U POSTGRES_SERVICE_USER -P POSTGRES_SERVICE_USER_PASSWORD -m smart
Start a shell as POSTGRES_SERVICE_USER:
  runas /user:POSTGRES_SERVICE_USER cmd.exe
Run initdb (PGDATA must exist and be writable):
  initdb.exe -A md5 -U POSTGRES_SUPER_USER -W
Create a certificate:
  openssl req -new -text -out server.req -newkey rsa:8192
  openssl rsa -in privkey.pem -out server.key
  openssl req -x509 -in server.req -text -key server.key -out server.crt
Edit postgresql.conf:
  listen_addresses = 'localhost'
  ssl = on
  log_destination = 'stderr'
  logging_collector = on
  log_directory = 'pg_log'
  log_filename = 'postgresql-%Y-%m-%d.log'
  log_rotation_size = 5MB
  client_min_messages = log
  log_min_messages = info
  log_min_error_statement = info
  log_timezone = Europe/Amsterdam
Edit pg_hba.conf:
  hostssl all postgres ::1/128 md5
  hostssl all postgres 127.0.0.1/32 md5

It seems odd to me that the user that is logged in (and created the services and directories) does not need to specify a password. I checked ownership on all directories (they are owned by local administrators) and the specific user is nowhere specified in the security permissions. Other users (also members of Administrators) do need to specify a password. Even though this is only a local development installation, I would really like to understand why this happens.

Thanks in advance,

Serge Fonville
Re: [GENERAL] Are there plans to add data compression feature to postgresql?
Craig Ringer wrote:
> So - it's potentially even worth compressing the wire protocol for use on a 100 megabit LAN if a lightweight scheme like LZO can be used.

LZO is under the GPL though.
Re: [GENERAL] Advantage of more than 1 tablespace on 1 disk?
Hi Thom,

Tablespaces are not, in the first place, related to addressing the usage pattern of individual tables. They are a mechanism for putting up a *logical* layout of persistent storage. As such, they describe segments of persistent storage that will (or might, from the point of view of the schema designer) benefit from being mapped to physical disks independently; e.g. separating table data and index data, or arranging for indexes used by hot queries to be accessible via different IO channels. For a given machine and disk configuration you may then decide which tablespace to map to which physical disk. If several tablespaces end up on the same disk, be aware that this will decrease throughput.

For addressing your actual question, you might have a look at indexing, clustering and/or partitioning. Those deal with the efficiency of accessing individual tables.

Rainer

Thom Brown wrote:
> Would there be any advantage in moving the latter logging tables to a separate tablespace, bearing in mind it would still be on the same disk? Or have I kinda missed how tablespaces work?
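For reference, the mapping itself is just (the path and names are made up):

```sql
CREATE TABLESPACE logspace LOCATION '/mnt/disk2/pg_logspace';

CREATE TABLE access_log (
    ts  timestamptz,
    msg text
) TABLESPACE logspace;
```

On a single physical disk both tablespaces still compete for the same spindle, which is why this alone is unlikely to help in Thom's case.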
[GENERAL] Error generating a spatial database
Hello,

I have installed:
PostgreSQL 8.3 in /usr/local/postgres_8.3
PostGIS 1.3 in /usr/local/postgis_13
GEOS 3 in /usr/local/geos_3
proj46 in /usr/local/proj_46

I create the database, but when I try to load the spatial part with the following command:

-bash-3.2$ ./psql -f /usr/local/postgis_13/share/lwpostgis.sql -d sig_prueba

it gives me the following error:

Password:
BEGIN
psql:/usr/local/postgis_13/share/lwpostgis.sql:53: NOTICE: type histogram2d is not yet defined
DETAIL: Creating a shell type definition.
psql:/usr/local/postgis_13/share/lwpostgis.sql:53: ERROR: could not access file $libdir/liblwgeom: No such file or directory
psql:/usr/local/postgis_13/share/lwpostgis.sql:59: ERROR: current transaction is aborted, commands ignored until end of transaction block

Could you give me some help, please? Thanks.
Re: [GENERAL] valid use of wildcard
The 'real' problem was that the database had not been re-indexed in a long while (it is a test system). After re-indexing the db, the query below ran fairly quickly:

metadata=# SELECT * FROM viewspace.siap AS t WHERE t.startDate='2008-10-27%' AND t.prop_id LIKE '%' LIMIT 1000;

The 'startDate' is a timestamp. I was just questioning the use of the '=' operator with '%' instead of LIKE; I would have expected the '=' to take the '%' as a literal.

Thanks Tom, Klint and Scott. I learned some debugging tips from this post.

--irene

Tom Lane wrote:
> Klint Gore [EMAIL PROTECTED] writes:
> > Surprisingly, '2008-10-27%' casts to a date in 8.3.3.
>
> Yeah, the datetime input code is pretty willing to overlook unexpected punctuation. There are enough odd formats out there that I'm not sure tightening it up would be a good idea.
>
> regards, tom lane

--
Irene Barg          Email: [EMAIL PROTECTED]
NOAO/AURA Inc.      http://www.noao.edu/noao/staff/ibarg
950 N. Cherry Ave.  Voice: 520-318-8273
Tucson, AZ 85726 USA  FAX: 520-318-8360
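To spell out what the thread concluded: the '%' never acts as a wildcard with '='; the whole string is fed to the date/timestamp input parser, which (per Klint's observation on 8.3.3) simply overlooks the trailing punctuation:

```sql
SELECT '2008-10-27%'::date;  -- yields 2008-10-27 on 8.3; the '%' is ignored
```

So t.startDate = '2008-10-27%' compares against midnight of that date, not against the whole day.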
Re: [GENERAL] Error generating a spatial database
Eduardo Arévalo wrote:
> -bash-3.2$ ./psql -f /usr/local/postgis_13/share/lwpostgis.sql -d sig_prueba
>
> psql:/usr/local/postgis_13/share/lwpostgis.sql:53: ERROR: could not access file $libdir/liblwgeom: No such file or directory

The problem is this line: it is looking for liblwgeom.so (probably under /usr/local/postgres_8.3/lib) but cannot find it. I imagine it is in /usr/local/postgis_13/lib.

I recommend reinstalling, putting all the rest of the packages in /usr/local/postgres_8.3 instead of package-specific locations. That way the library should be found easily in the default location. There are other solutions, but this one is the most convenient.

--
Alvaro Herrera    http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
[GENERAL] Debugging infrequent pegged out CPU usage
I am running PostgreSQL 8.3.4 on CentOS 5.2 with a single Xeon 5472 (1600 MHz, 12 MB cache, 3.0 GHz quad core) and 4 GB RAM. My database is only about 50 MB and there are only about 20 users. For some reason Postgres is pegging my CPU and I can barely log on to reboot the machine. After a reboot all is well for another week or so, but this brings the system to a grinding halt.

What is the best way to debug this? Can I limit Postgres to a certain number of cores, or set the timeout on the queries to a lower value?

I would greatly appreciate any advice on debugging this problem. While there are relatively few users, the data is extremely important, and the users will not wait for me to see what is wrong. They demand immediate resolution, and the best I can do is reboot.

--
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.octgsoftware.com
HJBug Founder and President
http://www.hjbug.com
[GENERAL] TIME column manipulation/comparison hangups
I'm trying to test the time in a time column to see if it's the same minute as the current time. I wouldn't have thought this would be difficult:

WHERE TO_CHAR(now(), 'HH24MI') = TO_CHAR(time_column, 'HH24MI')

Doesn't work, though:

ERROR: function to_char(time with time zone, unknown) does not exist

So, I tried to force it:

WHERE TO_CHAR(now(), 'HH24MI') = TO_CHAR(time_column::TIMESTAMP WITH TIME ZONE, 'HH24MI')

Which led to an invalid cast. I also tried using date_trunc() with no success. It seems as if EXTRACT() will work, but it sure feels hacky to do:

(extract(hours from now()) = extract(hours from time_column))
AND (extract(minutes from now()) = extract(minutes from time_column))

Am I missing something obvious? Anyone have any better methods for doing this? I'm working on 8.3.

--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phone: 412-422-3463x4023
Re: [GENERAL] TIME column manipulation/comparison hangups
On 03/11/2008 19:01, Bill Moran wrote:
> It seems as if EXTRACT() will work, but it sure feels hacky to do:
> (extract(hours from now()) = extract(hours from time_column))
> AND (extract(minutes from now()) = extract(minutes from time_column))

I'd have thought that this was the correct way to do it. Anyway, you could encapsulate this in a function to make re-use easier (the following hasn't been tested):

create function is_same_minute(time with time zone, time with time zone)
returns bool as $$
  select (extract(hours from $1) = extract(hours from $2))
     and (extract(minutes from $1) = extract(minutes from $2));
$$ language sql;

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
Re: [GENERAL] TIME column manipulation/comparison hangups
On Mon, Nov 3, 2008 at 12:01 PM, Bill Moran [EMAIL PROTECTED] wrote:
> I'm trying to test the time in a time column to see if it's the same minute as the current time. I wouldn't have thought this would be difficult:
>
> WHERE TO_CHAR(now(), 'HH24MI') = TO_CHAR(time_column, 'HH24MI')

Use date_trunc:

where date_trunc('minute', timefield) = date_trunc('minute', now());

I might have the args backwards.
Re: [GENERAL] Debugging infrequent pegged out CPU usage
On Monday 03 November 2008, Jason Long [EMAIL PROTECTED] wrote:
> I would greatly appreciate any advice on debugging this problem. While there are relatively few users the data is extremely important and the users will not wait for me to see what is wrong. They demand immediate resolution and the best I can do is reboot.

At the risk of being called an ass, I would say your organization needs to hire someone capable of diagnosing the problem.

--
Alan
Re: [GENERAL] Debugging infrequent pegged out CPU usage
Jason Long [EMAIL PROTECTED] writes:
> For some reason Postgres is pegging my CPU and I can barely log on to reboot the machine.

Well, you need to find out why. Turning on query logging (see log_statement) would probably be a good first step --- it'd help you determine if there's a specific query or queries causing it.

Also, have you tried pg_ctl stop -m fast instead of a reboot? (If that doesn't work, it'd also be interesting to see if -m immediate does work.)

regards, tom lane
Re: [GENERAL] Debugging infrequent pegged out CPU usage
On Mon, Nov 3, 2008 at 11:30 AM, Jason Long [EMAIL PROTECTED] wrote:
> What is the best way to debug this? Can I limit Postgres to a certain number of cores or set the timeout on the queries to a lower value?

Best way I've found is to keep track of the server over a period of time. nagios and mrtg are your friends here. You can use some more primitive methods, like ps ax | grep postgres | wc -l, to see how many postgres backends are running. You need to figure out exactly what's happening to the machine before it dies, as it's approaching that point.
Re: [GENERAL] TIME column manipulation/comparison hangups
In response to Scott Marlowe [EMAIL PROTECTED]:
> where date_trunc('minute', timefield) = date_trunc('minute', now());

Hunh ...

# select date_trunc('minute','13:45:15'::time);
    date_trunc
-------------------
 @ 13 hours 45 mins
(1 row)

# select date_trunc('minute','13:45:15'::time with time zone);
ERROR:  function date_trunc(unknown, time with time zone) does not exist
LINE 1: select date_trunc('minute','13:45:15'::time with time zone);
               ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.

# select date_trunc('minute',('13:45:15'::time with time zone)::time);
    date_trunc
-------------------
 @ 13 hours 45 mins
(1 row)

Curiouser and curiouser ...

--
Bill Moran
Re: [GENERAL] Debugging infrequent pegged out CPU usage
On Mon, Nov 3, 2008 at 12:25 PM, Scott Marlowe [EMAIL PROTECTED] wrote: On Mon, Nov 3, 2008 at 11:30 AM, Jason Long [EMAIL PROTECTED] wrote: I am running PostgreSQL 8.3.4 on CentOS 5.2 with a single Xeon 5472, 1600 MHz, 12 MB cache, 3.0 GHz quad core, and 4 GB RAM. My database is only about 50 MB and there are only about 20 users. For some reason Postgres is pegging my CPU and I can barely log on to reboot the machine. After reboot all is well for another week or so, but this brings the system to a grinding halt. What is the best way to debug this? Can I limit Postgres to a certain number of cores or set the timeout on the queries to a lower value? The best way I've found is to keep track of the server over a period of time. nagios and mrtg are your friends here. You can use some more primitive methods, like ps ax|grep postgres|wc -l, to see how many postgres backends are running. You need to figure out exactly what's happening to the machine before it dies, as it's approaching that point. Also, use the built-in pg_xxx tables / views that show you what the server is doing. Use vmstat, iostat, top and other tools to keep track. If you're on Windows, ignore all that and ask someone else, 'cause I don't know enough about troubleshooting Windows systems to be a lot of help there. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] TIME column manipulation/comparison hangups
On Mon, Nov 3, 2008 at 12:25 PM, Bill Moran [EMAIL PROTECTED] wrote: In response to Scott Marlowe [EMAIL PROTECTED]: On Mon, Nov 3, 2008 at 12:01 PM, Bill Moran [EMAIL PROTECTED] wrote: I'm trying to test the time in a time column to see if it's the same minute as the current time. I wouldn't have thought this would be difficult: WHERE TO_CHAR(now(), 'HH24MI') = TO_CHAR(time_column, 'HH24MI') Use date_trunc where date_trunc('minute',timefield)=date_trunc('minute',now()); I might have the args backwards. Hunh ... # select date_trunc('minute','13:45:15'::time); date_trunc @ 13 hours 45 mins (1 row) # select date_trunc('minute','13:45:15'::time with time zone); ERROR: function date_trunc(unknown, time with time zone) does not exist LINE 1: select date_trunc('minute','13:45:15'::time with time zone); ^ HINT: No function matches the given name and argument types. You might need to add explicit type casts. # select date_trunc('minute',('13:45:15'::time with time zone)::time); date_trunc @ 13 hours 45 mins (1 row) Curiouser and curiouser ... Ahhh, not timestamps, but times... You might have to add the time to some date to run it through date_trunc. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Specific user does not require password for postgres
On 03/11/2008 16:21, Serge Fonville wrote: Hi, I have installed PostgreSQL 8.3.4 on Windows Vista Business Edition. My currently logged on user is the only user that does not need to specify a password when running psql -U postgres. What happens if you log in as a different user and then try to connect as the original user using -U username? Also, is there any chance that you've already connected using pgAdmin, and saved the password? - this causes the password to be saved in a .pgpass file, so you wouldn't be asked for it again when logged in as that user. Ray. -- Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland [EMAIL PROTECTED] Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] TIME column manipulation/comparison hangups
On Mon, Nov 3, 2008 at 12:31 PM, Scott Marlowe [EMAIL PROTECTED] wrote: On Mon, Nov 3, 2008 at 12:25 PM, Bill Moran [EMAIL PROTECTED] wrote: In response to Scott Marlowe [EMAIL PROTECTED]: On Mon, Nov 3, 2008 at 12:01 PM, Bill Moran [EMAIL PROTECTED] wrote: I'm trying to test the time in a time column to see if it's the same minute as the current time. I wouldn't have thought this would be difficult: WHERE TO_CHAR(now(), 'HH24MI') = TO_CHAR(time_column, 'HH24MI') Use date_trunc where date_trunc('minute',timefield)=date_trunc('minute',now()); I might have the args backwards. Hunh ... # select date_trunc('minute','13:45:15'::time); date_trunc @ 13 hours 45 mins (1 row) # select date_trunc('minute','13:45:15'::time with time zone); ERROR: function date_trunc(unknown, time with time zone) does not exist LINE 1: select date_trunc('minute','13:45:15'::time with time zone); ^ HINT: No function matches the given name and argument types. You might need to add explicit type casts. # select date_trunc('minute',('13:45:15'::time with time zone)::time); date_trunc @ 13 hours 45 mins (1 row) Curiouser and curiouser ... Ahhh, not timestamps, but times... You might have to add the time to some date to run it through date_trunc. Actually, the more I look at this the more I think extract / date_part might be your best answer. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
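Scott's extract / date_part suggestion might look something like this sketch (table and column names are hypothetical, not from the thread):

```sql
-- Compare hour and minute directly, sidestepping date_trunc's lack of
-- support for TIME WITH TIME ZONE. Works the same for plain TIME columns.
SELECT *
FROM some_table                -- hypothetical table/column names
WHERE date_part('hour',   time_column) = date_part('hour',   now())
  AND date_part('minute', time_column) = date_part('minute', now());
```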
date_trunc() with timezones? (was Re: [GENERAL] TIME column manipulation/comparison hangups)
In response to Scott Marlowe [EMAIL PROTECTED]: On Mon, Nov 3, 2008 at 12:25 PM, Bill Moran [EMAIL PROTECTED] wrote: In response to Scott Marlowe [EMAIL PROTECTED]: On Mon, Nov 3, 2008 at 12:01 PM, Bill Moran [EMAIL PROTECTED] wrote: I'm trying to test the time in a time column to see if it's the same minute as the current time. I wouldn't have thought this would be difficult: WHERE TO_CHAR(now(), 'HH24MI') = TO_CHAR(time_column, 'HH24MI') Use date_trunc where date_trunc('minute',timefield)=date_trunc('minute',now()); I might have the args backwards. Hunh ... # select date_trunc('minute','13:45:15'::time); date_trunc @ 13 hours 45 mins (1 row) # select date_trunc('minute','13:45:15'::time with time zone); ERROR: function date_trunc(unknown, time with time zone) does not exist LINE 1: select date_trunc('minute','13:45:15'::time with time zone); ^ HINT: No function matches the given name and argument types. You might need to add explicit type casts. # select date_trunc('minute',('13:45:15'::time with time zone)::time); date_trunc @ 13 hours 45 mins (1 row) Curiouser and curiouser ... Ahhh, not timestamps, but times... You might have to add the time to some date to run it through date_trunc. Not quite. As shown in the examples, date_trunc() works fine on TIME WITHOUT TIME ZONE, but there's no such function for TIME WITH TIME ZONE. Is that an oversight, or does the timezone add some ambiguity that date_trunc() can't handle? -- Bill Moran Collaborative Fusion Inc. http://people.collaborativefusion.com/~wmoran/ [EMAIL PROTECTED] Phone: 412-422-3463x4023 -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: date_trunc() with timezones? (was Re: [GENERAL] TIME column manipulation/comparison hangups)
Bill Moran [EMAIL PROTECTED] writes: Not quite. As shown in the examples, date_trunc() works fine on TIME WITHOUT TIME ZONE, but there's no such function for TIME WITH TIME ZONE. Well, actually there's no date_trunc for time either:

regression=# \df date_trunc
                                 List of functions
   Schema   |    Name    |      Result data type       |        Argument data types
------------+------------+-----------------------------+-----------------------------------
 pg_catalog | date_trunc | interval                    | text, interval
 pg_catalog | date_trunc | timestamp with time zone    | text, timestamp with time zone
 pg_catalog | date_trunc | timestamp without time zone | text, timestamp without time zone
(3 rows)

However, the interval version of the function can capture the time case because there's an implicit cast from time to interval:

regression=# select casttarget::regtype,castcontext,castfunc::regprocedure from pg_cast where castsource = 'time'::regtype;
       casttarget       | castcontext |               castfunc
------------------------+-------------+--------------------------------------
 interval               | i           | interval(time without time zone)
 time with time zone    | i           | timetz(time without time zone)
 time without time zone | i           | time(time without time zone,integer)
(3 rows)

There's no implicit cast from timetz to interval, which I suppose is because it would be an information-losing transform. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Debugging infrequent pegged out CPU usage
Scott Marlowe wrote: On Mon, Nov 3, 2008 at 12:25 PM, Scott Marlowe [EMAIL PROTECTED] wrote: On Mon, Nov 3, 2008 at 11:30 AM, Jason Long [EMAIL PROTECTED] wrote: I am running PostgreSQL 8.3.4 on CentOS 5.2 with a single Xeon 5472, 1600 MHz, 12 MB cache, 3.0 GHz quad core, and 4 GB RAM. My database is only about 50 MB and there are only about 20 users. For some reason Postgres is pegging my CPU and I can barely log on to reboot the machine. After reboot all is well for another week or so, but this brings the system to a grinding halt. What is the best way to debug this? Can I limit Postgres to a certain number of cores or set the timeout on the queries to a lower value? How about preventing this lockup by limiting CPU resources to Postgres or giving up if a query takes too long? I am barely able to log in, let alone poke around. The best way I've found is to keep track of the server over a period of time. nagios and mrtg are your friends here. Thank you for the advice. I will investigate these options. You can use some more primitive methods, like ps ax|grep postgres|wc -l, to see how many postgres backends are running. You need to figure out exactly what's happening to the machine before it dies, as it's approaching that point. Also, use the built-in pg_xxx tables / views that show you what the server is doing. Use vmstat, iostat, top and other tools to keep track. If you're on Windows, ignore all that and ask someone else, 'cause I don't know enough about troubleshooting Windows systems to be a lot of help there.
pg_casts view (was Re: date_trunc() with timezones? (was Re: [GENERAL] TIME column ...))
Tom Lane wrote: However, the interval version of the function can capture the time case because there's an implicit cast from time to interval:

regression=# select casttarget::regtype,castcontext,castfunc::regprocedure from pg_cast where castsource = 'time'::regtype;
       casttarget       | castcontext |               castfunc
------------------------+-------------+--------------------------------------
 interval               | i           | interval(time without time zone)
 time with time zone    | i           | timetz(time without time zone)
 time without time zone | i           | time(time without time zone,integer)
(3 rows)

BTW it very much looks like we should have a pg_casts view that displays these things in a human-readable manner (like the above except with castcontext expanded) -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: pg_casts view (was Re: date_trunc() with timezones? (was Re: [GENERAL] TIME column ...))
Alvaro Herrera [EMAIL PROTECTED] writes: Tom Lane wrote: regression=# select casttarget::regtype,castcontext,castfunc::regprocedure from pg_cast where castsource = 'time'::regtype; BTW it very much looks like we should have a pg_casts view that displays these things in a human-readable manner (like the above except with castcontext expanded) There already is a \dC command in psql, which has nice enough output format but doesn't provide any way to select a subset of the table. Maybe we should just agree that its argument is a pattern for the castsource type's name? regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: pg_casts view (was Re: date_trunc() with timezones? (was Re: [GENERAL] TIME column ...))
Tom Lane wrote: Alvaro Herrera [EMAIL PROTECTED] writes: Tom Lane wrote: regression=# select casttarget::regtype,castcontext,castfunc::regprocedure from pg_cast where castsource = 'time'::regtype; BTW it very much looks like we should have a pg_casts view that displays these things in a human-readable manner (like the above except with castcontext expanded) There already is a \dC command in psql, which has nice enough output format but doesn't provide any way to select a subset of the table. Maybe we should just agree that its argument is a pattern for the castsource type's name? Yeah, that sounds good enough ... I seem to recall having used casttarget as a condition a couple of times, but I think it's a strange enough case that it is OK to just modify the query when that's needed; normal usage would seem to be what you propose. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] INSERT with RETURNING clause inside SQL function
Hi all, I'm re-writing some functions and migrating business logic from a client application to PostgreSQL. I expected something like this to work, but it doesn't: -- simple table CREATE TABLE sometable ( id SERIAL PRIMARY KEY, foo text, bar text ); CREATE OR REPLACE FUNCTION add_something(text, text) RETURNS INTEGER AS $$ INSERT INTO sometable (id, foo, bar ) VALUES (DEFAULT, $1, $2 ) RETURNING id ; $$ LANGUAGE SQL ; Please note the use of the RETURNING clause. If I put a SELECT 1; after the INSERT, the function works (but doesn't return any useful value :) I need the function to return the last insert id. And yes, I'm aware that the same can be achieved by selecting the greatest id in the SERIAL sequence, but it's not as readable as the RETURNING syntax. And no, for me it's not important that RETURNING is not standard SQL. Does anyone know why RETURNING doesn't work inside SQL functions? Any advice will be very appreciated. TIA. diego -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] INSERT with RETURNING clause inside SQL function
Forgot to mention: using 8.3.3 on FreeBSD. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Debugging infrequent pegged out CPU usage
Jason Long wrote: Scott Marlowe wrote: On Mon, Nov 3, 2008 at 12:25 PM, Scott Marlowe [EMAIL PROTECTED] wrote: On Mon, Nov 3, 2008 at 11:30 AM, Jason Long [EMAIL PROTECTED] wrote: I am running PostgreSQL 8.3.4 on CentOS 5.2 with a single Xeon 5472, 1600 MHz, 12 MB cache, 3.0 GHz quad core, and 4 GB RAM. My database is only about 50 MB and there are only about 20 users. For some reason Postgres is pegging my CPU and I can barely log on to reboot the machine. After reboot all is well for another week or so, but this brings the system to a grinding halt. What is the best way to debug this? Can I limit Postgres to a certain number of cores or set the timeout on the queries to a lower value? How about preventing this lockup by limiting CPU resources to Postgres or giving up if a query takes too long? I am barely able to log in, let alone poke around. You can't limit CPU usage but you can set timeouts. See statement_timeout here: http://www.postgresql.org/docs/8.3/interactive/runtime-config.html Set up your logging. Even if you do have to reboot, you can at least go back through the logs to find out what happened just before the reboot. -- Postgresql php tutorials http://www.designmagick.com/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Are there plans to add data compression feature to postgresql?
It doesn't matter that much, anyway, in that deflate would also do the job quite well for any sort of site-to-site or user-to-site WAN link. I used to use that, then switched to bzip. Thing is, if your client is really just issuing SQL, how much does it matter? Compression can't help with latency. Which is why I went with 3 tiers, so that all communication with Postgres occurs on the server, and all communication between server and client is binary, compressed, and a single request/response per user request regardless of how many tables the data is pulled from. -- Scott Ribe [EMAIL PROTECTED] http://www.killerbytes.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] INSERT with RETURNING clause inside SQL function
Diego Schulz [EMAIL PROTECTED] writes: I expected something like this to work, but it doesn't: CREATE OR REPLACE FUNCTION add_something(text, text) RETURNS INTEGER AS $$ INSERT INTO sometable (id, foo, bar ) VALUES (DEFAULT, $1, $2 ) RETURNING id ; $$ LANGUAGE SQL ; This case was implemented last week. In existing release branches you'll need to use currval or some other workaround to collect the serial value. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
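For the back branches Tom mentions, the currval workaround can be sketched like this (the sequence name assumes the default SERIAL naming for the poster's table; this is an illustrative sketch, not code from the thread):

```sql
CREATE OR REPLACE FUNCTION add_something(text, text) RETURNS INTEGER AS $$
  INSERT INTO sometable (foo, bar) VALUES ($1, $2);
  -- currval() is local to this session's last nextval() call,
  -- so this is safe even with concurrent inserters:
  SELECT currval('sometable_id_seq')::integer;
$$ LANGUAGE SQL;
```

The final SELECT satisfies the rule that a SQL function's last command must be a SELECT returning the declared type.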
[GENERAL] JDBC and setting statement_timeout
In order to keep my application from freezing up when a query pegs my CPU I set statement_timeout=12, but I read in the manual that setting statement_timeout in postgresql.conf is not recommended because it affects all sessions. I am using JDBC exclusively for the application, and I read here http://www.nabble.com/Implementing-setQueryTimeout()-ts15537669.html that setQueryTimeout is not implemented in the JDBC driver for PostgreSQL. I would like for nearly all of my queries to throw an error if they do not complete in a certain time, and if I know one might take more I would like to set this explicitly for that query. *Would someone please comment on the status of setQueryTimeout in the JDBC driver? Is there any workaround if this is still not implemented?*
Re: [GENERAL] JDBC and setting statement_timeout
On Mon, 3 Nov 2008, Jason Long wrote: *Would someone please comment on the status of setQueryTimeout in the JDBC driver? Is there any workaround if this is still not implemented?* setQueryTimeout is not implemented, the workaround is to manually issue SET statement_timeout = xxx calls via Statement.execute. Kris Jurka -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
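Concretely, the workaround is to send plain SET commands through the driver before the query, e.g. via Statement.execute(). The SQL issued might look like this sketch (the 120000 ms value and the query names are illustrative, not from the thread):

```sql
SET statement_timeout = 120000;   -- milliseconds; affects this session only
SELECT foo FROM some_big_table;   -- hypothetical long-running query
RESET statement_timeout;          -- restore the session default afterwards
```

Queries exceeding the limit fail with "canceling statement due to statement timeout", which surfaces as an SQLException in the application.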
Re: [GENERAL] INSERT with RETURNING clause inside SQL function
On 04/11/2008 01:20, Diego Schulz wrote: I also tried this (somewhat silly) syntax to circumvent the issue without resorting to currval: Just curious - what have you got against currval()? It seems to me that it would make your life easier. Ray. -- Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland [EMAIL PROTECTED] Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] gin index creation performance problems
On Mon, 03 Nov 2008 11:04:45 -0500 Tom Lane [EMAIL PROTECTED] wrote: Ivan Sergio Borgonovo [EMAIL PROTECTED] writes: maintenance_work_mem is still untouched. What would be a good value to start from? GIN index build time is *very* sensitive to maintenance_work_mem. Try cranking it up to a couple hundred megabytes and see if that helps. 200MB did improve the situation. I was waiting for a clue about a reasonable value from which to start. But the performance is far from stable. It can take from a few minutes to more than 20 min even with this setting. I can't understand if it is CPU bound or RAM bound. CPU load is always near 100% while postgresql is using 15% of RAM. Still, I've 52K of swap used... But anyway the performance is very erratic. BTW, is maintenance_work_mem set per connection? While the gin index looks appreciably faster for searches (actually it is lightning fast), I'm considering reverting to gist, since even with 200MB maintenance_work_mem it still looks a pain to build up the index, especially considering the random time required to build it. What puzzles me is that while before increasing maintenance_work_mem it was always terribly slow, now there is a huge variation in rebuild time with 200MB. Even vacuum full; can be pretty slow (5 min) and still 100% CPU use. The index is dropped at the beginning and rebuilt at the end inside a quite busy transaction. Could it be related? Also, if you're on 8.2, I think 8.3 might be faster. 8.3 etch backport. thanks -- Ivan Sergio Borgonovo http://www.webthatworks.it -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
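To answer the per-connection question above: maintenance_work_mem is indeed a per-session setting, so it can be raised just for the index build without touching postgresql.conf. A sketch with hypothetical index/table names (the 200MB value is the one discussed in the thread):

```sql
-- Per-session: only this connection's maintenance commands are affected.
SET maintenance_work_mem = '200MB';
DROP INDEX IF EXISTS docs_tsv_gin_idx;              -- hypothetical names
CREATE INDEX docs_tsv_gin_idx ON docs USING gin (tsv);
RESET maintenance_work_mem;                         -- back to the default
```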
Re: [GENERAL] Are there plans to add data compression feature to postgresql?
Scott Ribe wrote: It doesn't matter that much, anyway, in that deflate would also do the job quite well for any sort of site-to-site or user-to-site WAN link. I used to use that, then switched to bzip. Thing is, if your client is really just issuing SQL, how much does it matter? It depends a lot on what your requests are. If you have queries that must return significant chunks of data to the client then compression will help with total request time on a slow link, in that there's less data to transfer so the last byte arrives sooner. Of course it's generally preferable to avoid transferring hundreds of KB of data to the client in the first place, but it's not always practical. Additionally, not all connection types have effectively unlimited data transfers. Many mobile networks, for example, tend to have limits on monthly data transfers or charge per MB/KB transferred. Wire compression would be nice for performance on slower networks, but it's mostly appealing for reducing the impact on other users on a WAN, reducing data transfer costs, reducing required WAN capacity, etc. It's appealing because it looks like it should be possible to make it quite simple to enable or disable, so it'd be a simple ODBC/JDBC connection option. Compression can't help with latency. Not with network round trip latency, no. -- Craig Ringer -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] I'm puzzled by a foreign key constraint problem
I've been tearing my hair out over this one issue and I'm hoping that someone on this list will have an insight on the matter that will shed some light on why the system is doing what it's doing. I have a database with a number of tables, two of which are projects and resources. We also have a users table and tables for files and folders and such. Files, folders, users, and so forth are considered resources and their identifiers (UUIDs) are listed in the resources table. Entries in the resources table are used to do things like attach metadata to resources and to set permissions to allow users other than the owner to access specified resources. Because permissions are associated with resources, there is a foreign key constraint between the resourceid field in the permissions table and the resourceid column of the resource table, which is the primary key of the resource table. The system itself is written in C++ and database wrappers have been created which work with the original database (MS SQL Server) and we are porting to Postgres. The stored procedures (250 or so of them) have been converted into Postgres functions using a tool called SQLWays. I'm using named prepared procedures for each of the Postgres functions that are called, plus three, BEGIN, COMMIT, and ROLLBACK, which consist of the named operation. Those three are necessary because we're returning data in cursors, so each time one of those functions is executed it's done inside a transaction and a status code is returned which determines whether the transaction is committed or rolled back. When I create a project, entries in the project table and the resource table are created in a single function. Then, separate functions are called to set the owner's access to the new project. These other functions are failing because of the resourceid foreign key constraint. 
Now, I've turned the logging as far up as it will go, and I seem to have verified that the first transaction, the one that populates the resource table, is completed and committed before any calls are made to the function that sets the permissions value. (It's called five times and they all fail.) The logging generated by the foreign key constraint failure includes the UUID for the resource, and I can go back in later and do a query for that record in the resource table and that query successfully returns a single row. Also, I can go in at a later time and execute the function that sets the permissions and it works. To me, this implies that it's a race condition. When I first ran into this problem last week I set the foreign key check to deferred and it seemed to stop complaining, although I don't remember what steps (if any) I did to test that conclusion. In any case, it happens with the check set to deferred, so that didn't fix the problem. I don't think it should have had an effect anyway. I spent the afternoon reading documentation about constraints and keys and various options associated with them. Anyway, I need for these operations to succeed because the lack of permissions causes odd problems in other parts of the system. I don't even know where to begin looking for the problem or what magic might be useful in allowing the system to work as intended. I'm sure I'm doing something wrong, but I don't know what. Does this problem ring a bell with anyone? Is there any information that isn't clear from my description or which I could gather which would be helpful? -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] INSERT with RETURNING clause inside SQL function
On Mon, Nov 3, 2008 at 10:24 PM, Raymond O'Donnell [EMAIL PROTECTED] wrote: On 04/11/2008 01:20, Diego Schulz wrote: I also tried this (somewhat silly) syntax to circumvent the issue without resorting to currval: Just curious - what have you got against currval()? It seems to me that it would make your life easier Ray. -- Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland [EMAIL PROTECTED] Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals -- I simply don't like having to cast from BIGINT to INTEGER, as currval returns BIGINT while the index of my table is INTEGER. I think it isn't as readable and elegant as a single INSERT ... RETURNING value. Being the only choice at this time (that I'm aware of), I'm using something like this: SELECT CAST(CURRVAL('mytable_id_seq') AS INTEGER); If I can avoid messing with sequence manipulation functions, surely I will. Cheers! -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] I'm puzzled by a foreign key constraint problem
Jonathan Guthrie wrote: The stored procedures (250 or so of them) have been converted into Postgres functions using a tool called SQLWays. I'm using named prepared procedures for each of the Postgres functions that are called, plus three, BEGIN, COMMIT, and ROLLBACK, which consist of the named operation. You're talking about prepared statements at the ODBC level, right? Is this with client-side or with server-side prepare? PostgreSQL won't actually let you use these in prepared statements at the SQL level:

craig=> PREPARE teststm AS BEGIN;
ERROR:  syntax error at or near "BEGIN"
LINE 1: PREPARE teststm AS BEGIN;

Nor can you use them in a function:

craig=> CREATE OR REPLACE FUNCTION testfn() RETURNS void AS $$
craig$> BEGIN;
craig$> $$ LANGUAGE 'sql';
CREATE FUNCTION
craig=> SELECT testfn();
ERROR:  BEGIN is not allowed in a SQL function
CONTEXT:  SQL function "testfn" during startup

This makes sense, given that to invoke a function without a containing transaction is impossible; PostgreSQL will implicitly wrap it in a transaction that's committed as soon as the statement is executed. I don't *think* you can use BEGIN etc in prepared statements at the v3 protocol level for the same reasons, but I'm not 100% certain of that. Given those limitations, I'm assuming you're talking about named prepared statements on the client side. Now, I've turned the logging as far up as it will go, and I seem to have verified that the first transaction, the one that populates the resource table, is completed and committed before any calls are made to the function that sets the permissions value. What transaction isolation level are you using? If you're on the READ COMMITTED level, then yes all you should need is for the transaction that creates the records of interest to commit before another transaction (even one that was already running) can see the values. 
In any case, I'm a little puzzled as to why you're not doing the creation of the initial records and the related permissions records etc all in the one transaction. Anyway, I need for these operations to succeed because the lack of permissions causes odd problems in other parts of the system. That really shows that you need to do it all in one transaction, then. -- Craig Ringer -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
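The one-transaction advice above, sketched against a hypothetical version of the schema described in the thread (all names and values are illustrative):

```sql
BEGIN;
-- All three steps commit or roll back together, so the permissions row
-- can never reference a resource row that isn't yet visible to it:
INSERT INTO resources (resourceid)
       VALUES ('11111111-2222-3333-4444-555555555555');
INSERT INTO projects (projectid, resourceid, name)
       VALUES (1, '11111111-2222-3333-4444-555555555555', 'New project');
INSERT INTO permissions (resourceid, userid, access)
       VALUES ('11111111-2222-3333-4444-555555555555', 42, 'owner');
COMMIT;
```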
Re: [GENERAL] INSERT with RETURNING clause inside SQL function
On Mon, Nov 3, 2008 at 8:51 PM, Tom Lane [EMAIL PROTECTED] wrote: Diego Schulz [EMAIL PROTECTED] writes: I expected something like this to work, but it doesn't: CREATE OR REPLACE FUNCTION add_something(text, text) RETURNS INTEGER AS $$ INSERT INTO sometable (id, foo, bar ) VALUES (DEFAULT, $1, $2 ) RETURNING id ; $$ LANGUAGE SQL ; This case was implemented last week. In existing release branches you'll need to use currval or some other workaround to collect the serial value. regards, tom lane Thank you Tom. Happy to read it's implemented now! :) After re-reading the docs: ...the final command _must be a SELECT_ that returns whatever is specified as the function's return type I also tried this (somewhat silly) syntax to circumvent the issue without resorting to currval: CREATE OR REPLACE FUNCTION add_something(text, text) RETURNS INTEGER AS $$ SELECT id FROM ( INSERT INTO sometable (id, foo, bar ) VALUES (DEFAULT, $1, $2 ) RETURNING id ) ; $$ LANGUAGE SQL ; and CREATE OR REPLACE FUNCTION add_something(text, text) RETURNS INTEGER AS $$ SELECT last_insert_id FROM ( INSERT INTO sometable (id, foo, bar ) VALUES (DEFAULT, $1, $2 ) RETURNING id ) AS last_insert_id ; $$ LANGUAGE SQL ; As expected, none of them works as *I* expected. You know, fools keep trying.. and eventually hit :) -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] INSERT with RETURNING clause inside SQL function
Diego Schulz [EMAIL PROTECTED] writes: On Mon, Nov 3, 2008 at 10:24 PM, Raymond O'Donnell [EMAIL PROTECTED] wrote: Just curious - what have you got against currval()? It seems to me that it would make your life easier I simply don't like having to cast from BIGINT to INTEGER, Under what circumstances do you need an explicit cast? regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] I'm puzzled by a foreign key constraint problem
Jonathan Guthrie wrote: When I create a project, entries in the project table and the resource table are created in a single function. Then, separate functions are called to set the owner's access to the new project. These other functions are failing because of the resourceid foreign key constraint. Have you turned statement logging on? Your message suggests that's the case, but didn't say so explicitly. Are the two steps: 1. Create project, resource 2. Set access-rights done in separate connections by any chance? If so it's possible (due to MVCC) that #2 is still seeing the database as it was before #1 committed. Anyway, I need for these operations to succeed because the lack of permissions causes odd problems in other parts of the system. If you want both steps to succeed or fail together though, they need to be in the same transaction. -- Richard Huxton Archonet Ltd -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general