Re: [GENERAL] Using PG with Windows EFS or TrueCrypt for encryption
On Wed, Dec 8, 2010 at 01:19, Brady Mathis bmat...@r-hsoftware.com wrote: Hi - I have searched the lists for comments about using PG with EFS and/or TrueCrypt in order to encrypt the entire database transparently. I found a few posts making reference to this possibility so I have tried them both, but I didn't get either to work. I have PG-8.3 running on Windows Server 2008 (64-bit). In the first scenario I just used Windows EFS (Encrypting File System) to encrypt the database OID folder in the data\ folder. After I did this, the PG service started, but I could not access the database in pgAdmin. Then I attempted to mount a normal encrypted volume with TrueCrypt, move the data\ and sub-folders to this volume and reconfigure PG to point to this as the data folder. Now, the PG service will not start at all. Has anyone implemented something like this for PG in Windows? Either one of these two should work fine. What you have to worry about is whether they honor the synchronous I/O flags and commands properly - I don't know if either of them does. And of course, it'll be really slow. You need to look in your event log to get the messages that tell you why it failed... -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Fwd: [ADMIN] Create database/table using postgresql stored function
-- Regards, Manasi Save Database Administrator Artificial Machines Private Limited manasi.s...@artificialmachines.com Ph:-9833537392 - Forwarded message from Manasi Save - Date: Wed, 08 Dec 2010 04:43:50 -0500 From: Manasi Save Reply-To: Manasi Save Subject: [ADMIN] Create database/table using postgresql stored function To: pgsql-ad...@postgresql.org Hi All, I want to create a PostgreSQL database and tables using stored functions. Does PostgreSQL support CREATE statements in stored functions? Regards, Manasi - End forwarded message -
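For the archives: CREATE TABLE and most other DDL do work inside a PL/pgSQL function (use EXECUTE when names are built at run time), but CREATE DATABASE cannot be run from a function, because it is not allowed inside a transaction block and a function always runs in one. A minimal sketch, with hypothetical names:

```sql
CREATE OR REPLACE FUNCTION make_log_table(tbl text) RETURNS void AS $$
BEGIN
    -- EXECUTE is required for dynamically constructed DDL;
    -- quote_ident guards the identifier against injection.
    EXECUTE 'CREATE TABLE ' || quote_ident(tbl) ||
            ' (id serial PRIMARY KEY, created timestamptz DEFAULT now())';
END;
$$ LANGUAGE plpgsql;

SELECT make_log_table('audit_log');
```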
Re: [GENERAL] Abusing Postgres in fun ways.
I'm creating a data queue on top of Postgres and I'm wondering if I've made an incorrect assumption about isolation or synchronization or some similar issue. Is there a particular reason why you are not using one of the proven queuing packages (PgQ, for example)? All the issues seem pretty general to me, and are solved in those packages. Tomas
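For context, the bare-bones "claim a job" step such a queue needs looks roughly like the sketch below (table and column names are hypothetical); note that FOR UPDATE makes concurrent workers block on the same row, which is one of the problems packages like PgQ solve:

```sql
-- Hypothetical minimal job queue: claim the oldest unfinished job.
BEGIN;
SELECT id FROM jobs
 WHERE done = false
 ORDER BY id
 LIMIT 1
 FOR UPDATE;            -- concurrent workers block here on the same row

-- ...process the claimed job, then mark it finished (id from the SELECT):
UPDATE jobs SET done = true WHERE id = 1;
COMMIT;
```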
[GENERAL] OID of type by name.
Hey general@,

SELECT oid FROM pg_type WHERE typname = 'integer';
 oid
-----
(0 rows)

SELECT oid FROM pg_type WHERE typname = 'int4';
 oid
-----
  23
(1 row)

How can I get the OID by name rather than by alias? -- // Dmitriy.
[GENERAL] How to obtain the maximum value of a date, between 3 tables...
Hi, I need to obtain the maximum value of a date, but that comparison will be made between 3 tables... I will explain better with a query... [code] select a.last_refresh_date as d1, ae.last_refresh_date as d2, ha.last_refresh_date as d3 from tbl1 a join tbl2 ae on a.id_anuncio_externo = ae.id_anuncio_externo join tbl3 ha on a.id_anuncio_externo = ha.id_anuncio_externo [/code] My question is: what is the best way to determine which date is the greatest, d1, d2 or d3? Can you guys give me a clue. Best Regards,
Re: [GENERAL] Using PG with Windows EFS or TrueCrypt for encryption
Brady, Then I attempted to mount a normal encrypted volume with TrueCrypt, move the data\ and sub-folders to this volume and reconfigure PG to point to this as the data folder. Now, the PG service will not start at all. Moving data\ and its subfolders on NTFS is a Level-20 operation. The usual causes for the PostgreSQL service not starting are: a) the user account has wrong privileges b) the user account has lost "Logon as Service" c) the password of the user account was changed / invalidated by some system policy / administrator d) the user account which the PostgreSQL service logs on with is not able to access the data directories. d) is usually analyzable via the system event viewer. Most likely cause during your copy operation: the permissions on the directories were changed. OR: the link to the data directory (part of the service configuration) within services.msc is no longer valid (as in: data in a different place). I can confirm that it is possible to have a database on a TrueCrypt encrypted volume. It is dog slow. My impression is that data from that encrypted volume is not really cached. Harald Has anyone implemented something like this for PG in Windows? Thanks! Brady -- Brady Mathis | bmat...@r-hsoftware.com | 877.696.6547 ext 102 -- GHUM GmbH Harald Armin Massa Spielberger Straße 49 70435 Stuttgart 0173/9409607 Amtsgericht Stuttgart, HRB 734971 - persuadere. et programmare
Re: [GENERAL] How to obtain the maximum value of a date, between 3 tables...
On Wed, Dec 8, 2010 at 7:15 AM, Andre Lopes lopes80an...@gmail.com wrote: Hi, I need to obtain the maximum value of a date, but that comparison will be made between 3 tables... I will explain better with a query... [code] select a.last_refresh_date as d1, ae.last_refresh_date as d2, ha.last_refresh_date as d3 from tbl1 a join tbl2 ae on a.id_anuncio_externo = ae.id_anuncio_externo join tbl3 ha on a.id_anuncio_externo = ha.id_anuncio_externo [/code] My question is: what is the best way to determine which date is the greatest, d1, d2 or d3? If you don't need to know which table it came from I would probably try select a.last_refresh_date as d1, NULL as d2, NULL as d3 FROM tbl1 as a UNION ALL ... -- Jon
Re: [GENERAL] OID of type by name.
SELECT oid FROM pg_type WHERE typtype = 'b' AND typarray <> 0 AND typname::regtype = 'integer'; Many thanks to Florian Pflug. 2010/12/8 Dmitriy Igrishin dmit...@gmail.com Hey general@, SELECT oid FROM pg_type WHERE typname = 'integer'; (0 rows) SELECT oid FROM pg_type WHERE typname = 'int4'; 23 (1 row) How can I get the OID by name rather than by alias? -- // Dmitriy. -- // Dmitriy.
[GENERAL] Asynchronous query execution
Hello, I would like to know how we can execute queries asynchronously. If we execute PL/pgSQL functions, they simply run to completion or throw an error; each SQL statement waits for the previous one to complete. But in a few situations it is necessary not to wait for the previous SQL statement to complete. How can this be achieved? Waiting for your response. CPK
Re: [GENERAL] How to obtain the maximum value of a date, between 3 tables...
Hi, Thanks for the reply. Are there other options to do it without using a UNION? I don't need to know from which table the greatest date comes, but the query is complex; this query is part of a UNION. Could the use of CASE WHEN be an alternative? Best Regards, On Wed, Dec 8, 2010 at 1:20 PM, Jon Nelson jnelson+pg...@jamponi.net wrote: On Wed, Dec 8, 2010 at 7:15 AM, Andre Lopes lopes80an...@gmail.com wrote: Hi, I need to obtain the maximum value of a date, but that comparison will be made between 3 tables... I will explain better with a query... [code] select a.last_refresh_date as d1, ae.last_refresh_date as d2, ha.last_refresh_date as d3 from tbl1 a join tbl2 ae on a.id_anuncio_externo = ae.id_anuncio_externo join tbl3 ha on a.id_anuncio_externo = ha.id_anuncio_externo [/code] My question is: what is the best way to determine which date is the greatest, d1, d2 or d3? If you don't need to know which table it came from I would probably try select a.last_refresh_date as d1, NULL as d2, NULL as d3 FROM tbl1 as a UNION ALL ... -- Jon
Re: [GENERAL] Asynchronous query execution
On 12/08/10 5:35 AM, c k wrote: Hello, I would like to know how we can execute queries asynchronously. If we execute PL/pgSQL functions, they simply run to completion or throw an error; each SQL statement waits for the previous one to complete. But in a few situations it is necessary not to wait for the previous SQL statement to complete. How can this be achieved? One PostgreSQL connection can only run one query at a time. Run your asynchronous queries from a thread with its own connection.
[GENERAL] Maximum size for char or varchar with limit
Hello, Does anyone know what the maximum length is for char or varchar columns with a limit? I saw some answers to this same question referring to section 8.3 of the docs, but I don't see the actual numbers there. I know you can go to 1GB if you don't specify the limit, but I would like to know what the max limit is you can use. It seems to be 10485760 in my install. Is this fixed in PostgreSQL or does it depend on some configuration setting? Does it depend on the version (I am using 8.4)? Here is my test-sql: create table test (x varchar(1)) Error: ERROR: length for type varchar cannot exceed 10485760 SQLState: 22023 ErrorCode: 0 Thanks in advance, Rob
Re: [GENERAL] Asynchronous query execution
On 2010-12-08 14.35, c k wrote: Hello, I would like to know how we can execute queries asynchronously. If we execute PL/pgSQL functions, they simply run to completion or throw an error; each SQL statement waits for the previous one to complete. But in a few situations it is necessary not to wait for the previous SQL statement to complete. How can this be achieved? Is this what you're looking for: http://www.postgresql.org/docs/current/interactive/libpq-async.html Waiting for your response. CPK -- Regards, Robert roppert Gravsjö
Re: [GENERAL] Maximum size for char or varchar with limit
On Wednesday 08 December 2010 5:47:25 am Rob Gansevles wrote: Hello, Does anyone know what the maximum length is for char or varchar columns with limit. I saw some answers to this same question referring to section 8.3 of the docs, but I don't see the actual numbers there. I know you can go to 1GB if you don't specify the limit, but I would like to know what the max limit is you can use. It seems to be 10485760 in my install. Is this fixed in PostgreSQL or does it depend on some configuration setting? Does it depend on the version (I am using 8.4)? Here is my test-sql: create table test (x varchar(1)) Error: ERROR: length for type varchar cannot exceed 10485760 SQLState: 22023 ErrorCode: 0 Thanks in advance, Rob I think you are looking for this: http://wiki.postgresql.org/wiki/FAQ#What_is_the_maximum_size_for_a_row.2C_a_table.2C_and_a_database.3F -- Adrian Klaver adrian.kla...@gmail.com
Re: [GENERAL] OID of type by name.
Dmitriy Igrishin dmit...@gmail.com writes: How can I get the OID by name rather than by alias? SELECT oid FROM pg_type WHERE typtype = 'b' AND typarray <> 0 AND typname::regtype = 'integer'; Seems like the hard way --- if you think carefully about what regtype is doing, you'll realize that this is incredibly inefficient, as well as a lot of typing. I usually do SELECT 'integer'::regtype::oid when I need a quick numeric lookup. regards, tom lane
Re: [GENERAL] OID of type by name.
Yeah, thank you very much! I had found it already too, but did not post back! Thanks! 2010/12/8 Tom Lane t...@sss.pgh.pa.us Dmitriy Igrishin dmit...@gmail.com writes: How can I get the OID by name rather than by alias? SELECT oid FROM pg_type WHERE typtype = 'b' AND typarray <> 0 AND typname::regtype = 'integer'; Seems like the hard way --- if you think carefully about what regtype is doing, you'll realize that this is incredibly inefficient, as well as a lot of typing. I usually do SELECT 'integer'::regtype::oid when I need a quick numeric lookup. regards, tom lane -- // Dmitriy.
Re: [GENERAL] OID of type by name.
Actually, all I need is to: SELECT oid::regtype, oid FROM pg_type WHERE ... to make a cache of OIDs. 2010/12/8 Dmitriy Igrishin dmit...@gmail.com Yeah, thank you very much! I had found it already too, but did not post back! Thanks! 2010/12/8 Tom Lane t...@sss.pgh.pa.us Dmitriy Igrishin dmit...@gmail.com writes: How can I get the OID by name rather than by alias? SELECT oid FROM pg_type WHERE typtype = 'b' AND typarray <> 0 AND typname::regtype = 'integer'; Seems like the hard way --- if you think carefully about what regtype is doing, you'll realize that this is incredibly inefficient, as well as a lot of typing. I usually do SELECT 'integer'::regtype::oid when I need a quick numeric lookup. regards, tom lane -- // Dmitriy. -- // Dmitriy.
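Filled out, the cache query sketched above might look like this (restricting to base types as in Florian's filter; the column alias is illustrative):

```sql
-- Build a name -> OID map for base types; oid::regtype prints the
-- canonical SQL name (e.g. 'integer' for OID 23).
SELECT oid::regtype AS type_name, oid
  FROM pg_type
 WHERE typtype = 'b'       -- base types only
   AND typarray <> 0;      -- skip types that lack an array type
```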
Re: [GENERAL] Maximum size for char or varchar with limit
Adrian, Thanks for the reply, but this refers to max row or field size; it does not tell me where the max varchar limit of 10485760 comes from, and whether this is fixed or depends on something else. Does anyone have some info on this? Rob On Wed, Dec 8, 2010 at 3:34 PM, Adrian Klaver adrian.kla...@gmail.com wrote: On Wednesday 08 December 2010 5:47:25 am Rob Gansevles wrote: Hello, Does anyone know what the maximum length is for char or varchar columns with a limit? I saw some answers to this same question referring to section 8.3 of the docs, but I don't see the actual numbers there. I know you can go to 1GB if you don't specify the limit, but I would like to know what the max limit is you can use. It seems to be 10485760 in my install. Is this fixed in PostgreSQL or does it depend on some configuration setting? Does it depend on the version (I am using 8.4)? Here is my test-sql: create table test (x varchar(1)) Error: ERROR: length for type varchar cannot exceed 10485760 SQLState: 22023 ErrorCode: 0 Thanks in advance, Rob I think you are looking for this: http://wiki.postgresql.org/wiki/FAQ#What_is_the_maximum_size_for_a_row.2C_a_table.2C_and_a_database.3F -- Adrian Klaver adrian.kla...@gmail.com
Re: [GENERAL] Maximum size for char or varchar with limit
On Wednesday 08 December 2010 7:06:07 am Rob Gansevles wrote: Adrian, Thanks for the reply, but this refers to max row or field size; it does not tell me where the max varchar limit of 10485760 comes from, and whether this is fixed or depends on something else. Does anyone have some info on this? Rob In varchar(n), n is the length in characters, not bytes. The best description of what that means is from section 8.3: The storage requirement for a short string (up to 126 bytes) is 1 byte plus the actual string, which includes the space padding in the case of character. Longer strings have 4 bytes of overhead instead of 1. Long strings are compressed by the system automatically, so the physical requirement on disk might be less. Very long values are also stored in background tables so that they do not interfere with rapid access to shorter column values. In any case, the longest possible character string that can be stored is about 1 GB. (The maximum value that will be allowed for n in the data type declaration is less than that. It wouldn't be useful to change this because with multibyte character encodings the number of characters and bytes can be quite different. If you desire to store long strings with no specific upper limit, use text or character varying without a length specifier, rather than making up an arbitrary length limit.) So the answer is, it depends on your encoding. -- Adrian Klaver adrian.kla...@gmail.com
Re: [GENERAL] Asynchronous query execution
On Wed, Dec 8, 2010 at 8:40 AM, John R Pierce pie...@hogranch.com wrote: On 12/08/10 5:35 AM, c k wrote: Hello, I would like to know how we can execute queries asynchronously. If we execute PL/pgSQL functions, they simply run to completion or throw an error; each SQL statement waits for the previous one to complete. But in a few situations it is necessary not to wait for the previous SQL statement to complete. How can this be achieved? One PostgreSQL connection can only run one query at a time. Run your asynchronous queries from a thread with its own connection. This is only asynchronous from the client's point of view. Meaning, while the query is running, you can go off and do other work on the client. There is an ironclad rule of one running query per database session at a time. From the PL/pgSQL point of view, the only method of working around this is using dblink-style tricks to connect to the database from within the function and run queries. dblink supports asynchronous querying, so you can leverage that: dblink_send_query(text connname, text sql) returns int. From the client's point of view, you have a number of techniques; async queries/threads and multiple connections would be the most common approaches. merlin
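A sketch of the dblink approach Merlin describes; the connection name, connection string, and big_table are illustrative, and the dblink contrib module must be installed:

```sql
-- Open a named connection back into the database.
SELECT dblink_connect('worker', 'dbname=mydb');

-- Fire the query and return immediately, without waiting for it.
SELECT dblink_send_query('worker', 'SELECT count(*) FROM big_table');

-- ...do other work here, then poll and collect the result.
SELECT dblink_is_busy('worker');   -- 1 while the query is still running
SELECT * FROM dblink_get_result('worker') AS t(n bigint);
SELECT dblink_disconnect('worker');
```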
Re: [GENERAL] Maximum size for char or varchar with limit
Adrian Klaver adrian.kla...@gmail.com writes: So the answer is, it depends on your encoding. No, it doesn't. What Rob is looking for is this bit in htup.h:

/*
 * MaxAttrSize is a somewhat arbitrary upper limit on the declared size of
 * data fields of char(n) and similar types.  It need not have anything
 * directly to do with the *actual* upper limit of varlena values, which
 * is currently 1Gb (see TOAST structures in postgres.h).  I've set it
 * at 10Mb which seems like a reasonable number --- tgl 8/6/00.
 */
#define MaxAttrSize (10 * 1024 * 1024)

The rationale for having a limit of this sort is (a) we *don't* want the upper limit of declarable length to be encoding-dependent; and (b) if you are trying to declare an upper limit that's got more than a few digits in it, you almost certainly ought to not be declaring a limit at all. regards, tom lane
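For the archives, the MaxAttrSize limit quoted from htup.h can be checked directly in an illustrative psql session; the second CREATE TABLE is the one that fails:

```sql
SELECT 10 * 1024 * 1024;                   -- 10485760, i.e. MaxAttrSize
CREATE TABLE t_ok  (x varchar(10485760));  -- accepted
CREATE TABLE t_bad (x varchar(10485761));
-- ERROR: length for type varchar cannot exceed 10485760
```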
Re: [GENERAL] Maximum size for char or varchar with limit
Thanks Tom, this is very helpful. Rob
Re: [GENERAL] Maximum size for char or varchar with limit
On 12/08/2010 08:04 AM, Tom Lane wrote: Adrian Klaver adrian.kla...@gmail.com writes: So the answer is, it depends on your encoding. No, it doesn't. What Rob is looking for is this bit in htup.h:

/*
 * MaxAttrSize is a somewhat arbitrary upper limit on the declared size of
 * data fields of char(n) and similar types.  It need not have anything
 * directly to do with the *actual* upper limit of varlena values, which
 * is currently 1Gb (see TOAST structures in postgres.h).  I've set it
 * at 10Mb which seems like a reasonable number --- tgl 8/6/00.
 */
#define MaxAttrSize (10 * 1024 * 1024)

The rationale for having a limit of this sort is (a) we *don't* want the upper limit of declarable length to be encoding-dependent; and (b) if you are trying to declare an upper limit that's got more than a few digits in it, you almost certainly ought to not be declaring a limit at all. regards, tom lane Well that explains it :) Would it be possible to change the below section in the docs to state that the declared max value of n is limited to a max string size of 10Mb? I have always taken it to mean that the max value was calculated based off the encoding. Then again it might just be me. The maximum value that will be allowed for n in the data type declaration is less than that. It wouldn't be useful to change this because with multibyte character encodings the number of characters and bytes can be quite different. -- Adrian Klaver adrian.kla...@gmail.com
Re: [GENERAL] Postgresql 9.1 pg_last_xact_replay_timestamp limitations
On Tuesday 07 December 2010 21:58:56 you wrote: On Wed, Dec 8, 2010 at 1:31 AM, Gabi Julien gabi.jul...@broadsign.com wrote: pg_last_xact_replay_timestamp() returns null when the server is restarted, until a new transaction is streamed to the hot standby server. It might take a long time before this happens. Because of this, we can't rely on this function completely. I couldn't reproduce this. Could you provide a self-contained test case? I have merged the pg_last_xact_replay_timestamp patch (7ba6e4f0e08bd7bdf4d12974ac1e37fb0459c97c) to 9.0 to get that result, so maybe my tests are invalid, although the patch looks simple enough. I'll try to reproduce using 9.1alpha. What I have is 1 master and 1 slave. I do not use WAL log shipping, only streaming. Here's my recovery.conf on the slave:

standby_mode = 'on'
primary_conninfo = 'host=master_host_name port=5432'
trigger_file = '/opt/postgresql/data/finish.replication'

The master postgresql.conf is fairly normal except for this:

wal_level = hot_standby

Same for the slave except for this:

hot_standby = on

Now if I do:

master# /etc/init.d/postgresql start
slave# /etc/init.d/postgresql start
slave# psql -hlocalhost my_db -c "select pg_last_xact_replay_timestamp(), now() as not_modified_since;"
 pg_last_xact_replay_timestamp |      not_modified_since
-------------------------------+-------------------------------
                               | 2010-12-08 16:06:09.920219+00

master# psql -hlocalhost my_db -c "create table trigger_transaction_shipping(a numeric); drop table trigger_transaction_shipping;"
DROP TABLE
slave# psql -hlocalhost my_db -c "select pg_last_xact_replay_timestamp(), now() as not_modified_since;"
 pg_last_xact_replay_timestamp |      not_modified_since
-------------------------------+-------------------------------
 2010-12-08 16:07:34.408892+00 | 2010-12-08 16:08:07.969588+00

Only after the first update from the master do I get my pg_last_xact_replay_timestamp timestamp. Regards, Gabi Julien
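For reference, the usual lag check built on this function; as reported above, it returns NULL until the standby has replayed at least one transaction since startup:

```sql
-- Run on the hot standby; NULL until the first transaction is replayed.
SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;
```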
Re: [GENERAL] Maximum size for char or varchar with limit
Adrian Klaver adrian.kla...@gmail.com writes: On 12/08/2010 08:04 AM, Tom Lane wrote: The rationale for having a limit of this sort is (a) we *don't* want the upper limit of declarable length to be encoding-dependent; and (b) if you are trying to declare an upper limit that's got more than a few digits in it, you almost certainly ought to not be declaring a limit at all. Well that explains it :) Would it be possible to change the below section in the docs to state that the declared max value of n is limited to a max string size of 10Mb? I don't really see any point in that. The value is meant to be an order of magnitude or so more than anything that's sane according to point (b). If you think you need to know what it is, you're already doing it wrong. regards, tom lane
Re: [GENERAL] Maximum size for char or varchar with limit
On 12/08/2010 09:05 AM, Tom Lane wrote: Adrian Klaver adrian.kla...@gmail.com writes: On 12/08/2010 08:04 AM, Tom Lane wrote: The rationale for having a limit of this sort is (a) we *don't* want the upper limit of declarable length to be encoding-dependent; and (b) if you are trying to declare an upper limit that's got more than a few digits in it, you almost certainly ought to not be declaring a limit at all. Well that explains it :) Would it be possible to change the below section in the docs to state that the declared max value of n is limited to a max string size of 10Mb? I don't really see any point in that. The value is meant to be an order of magnitude or so more than anything that's sane according to point (b). If you think you need to know what it is, you're already doing it wrong. regards, tom lane Well, the determination of sanity is often in the eye of the beholder, and it would be nice to know where the line is. At any rate the answer is in the archives now, so I know where to get it. -- Adrian Klaver adrian.kla...@gmail.com
Re: [GENERAL] How to obtain the maximum value of a date, between 3 tables...
On Wed, Dec 8, 2010 at 5:15 AM, Andre Lopes lopes80an...@gmail.com wrote: Hi, I need to obtain the maximum value of a date, but that comparison will be made between 3 tables... I will explain better with a query... [code] select a.last_refresh_date as d1, ae.last_refresh_date as d2, ha.last_refresh_date as d3 from tbl1 a join tbl2 ae on a.id_anuncio_externo = ae.id_anuncio_externo join tbl3 ha on a.id_anuncio_externo = ha.id_anuncio_externo [/code] My question is: what is the best way to determine which date is the greatest, d1, d2 or d3? Can you guys give me a clue. Best Regards, How about using GREATEST? http://www.postgresql.org/docs/8.4/interactive/functions-conditional.html (9.16.4) select GREATEST(d1,d2,d3) from (...)
Re: [GENERAL] SELECT is immediate but the UPDATE takes forever
2010/12/7 Raimon Fernandez co...@montx.com: I'm now using another database with the same structure and data, and the delay doesn't exist there; there must be something wrong in my current development database. Does autovacuum run on it? Is the table massively bloated? Is your disk system really, really slow to allocate new space?
Re: [GENERAL] How to obtain the maximum value of a date, between 3 tables...
-Original Message- From: Andre Lopes [mailto:lopes80an...@gmail.com] Sent: Wednesday, December 08, 2010 8:16 AM To: postgresql Forums Subject: How to obtain the maximum value of a date, between 3 tables... Hi, I need to obtain the maximum value of a date, but that comparison will be made between 3 tables... I will explain better with a query... [code] select a.last_refresh_date as d1, ae.last_refresh_date as d2, ha.last_refresh_date as d3 from tbl1 a join tbl2 ae on a.id_anuncio_externo = ae.id_anuncio_externo join tbl3 ha on a.id_anuncio_externo = ha.id_anuncio_externo [/code] My question is: what is the best way to determine which date is the greatest, d1, d2 or d3? Can you guys give me a clue. Best Regards, This: SELECT GREATEST(q.d1, q.d2, q.d3) FROM (select a.last_refresh_date as d1, ae.last_refresh_date as d2, ha.last_refresh_date as d3 from tbl1 a join tbl2 ae on a.id_anuncio_externo = ae.id_anuncio_externo join tbl3 ha on a.id_anuncio_externo = ha.id_anuncio_externo) q; should do it. Igor Neyman
Re: [GENERAL] Postgresql 9.1 pg_last_xact_replay_timestamp limitations
I just tried with postgresql 9.1alpha from http://www.enterprisedb.com/products/pgdevdownload.do (linux x86-32):

postgres=# select pg_last_xact_replay_timestamp(), now() as not_modified_since;
ERROR:  function pg_last_xact_replay_timestamp() does not exist
LINE 1: select pg_last_xact_replay_timestamp(), now() as not_modifie...
               ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.

A bug in the package? I guess they must have forgotten to run genbki.sh, and the function is missing from ./share/postgresql/postgres.bki. If I add the line manually and create the data folder afterwards, it is still not working. A few precisions concerning my postgresql.conf I mentioned earlier: the master has:

wal_level = hot_standby
max_wal_senders = 30
wal_keep_segments = 100

and the slave has:

hot_standby = on

On Wednesday 08 December 2010 11:37:51 Gabi Julien wrote: On Tuesday 07 December 2010 21:58:56 you wrote: On Wed, Dec 8, 2010 at 1:31 AM, Gabi Julien gabi.jul...@broadsign.com wrote: pg_last_xact_replay_timestamp() returns null when the server is restarted until a new transaction is streamed to the hot standby server. It might take a long time before this happens. Because of this, we can't rely on this function completely. I couldn't reproduce this. Could you provide a self-contained test case? I have merged the pg_last_xact_replay_timestamp patch (7ba6e4f0e08bd7bdf4d12974ac1e37fb0459c97c) to 9.0 to get that result, so maybe my tests are invalid, although the patch looks simple enough. I'll try to reproduce using 9.1alpha. What I have is 1 master and 1 slave. I do not use WAL log shipping, only streaming. 
Here's my recovery.conf on the slave: standby_mode = 'on' primary_conninfo = 'host=master_host_name port=5432' trigger_file = '/opt/postgresql/data/finish.replication' The master postgresql.conf is fairly normal except for this: === wal_level = hot_standby === Same for the slave except for this: === hot_standby = on === Now if I do: master# /etc/init.d/postgresql start slave# /etc/init.d/postgresql start slave# psql -hlocalhost my_db -c select pg_last_xact_replay_timestamp(), now() as not_modified_since; pg_last_xact_replay_timestamp | not_modified_since ---+--- | 2010-12-08 16:06:09.920219+00 master# psql -hlocalhost my_db -c create table trigger_transaction_shipping(a numeric); drop table trigger_transaction_shipping; DROP TABLE slave# psql -hlocalhost my_db -c select pg_last_xact_replay_timestamp(), now() as not_modified_since; pg_last_xact_replay_timestamp | not_modified_since ---+--- 2010-12-08 16:07:34.408892+00 | 2010-12-08 16:08:07.969588+00 Only after the first update from the master do I get my pg_last_xact_replay_timestamp timestamp. Regards, Gabi Julien
Re: [GENERAL] Postgresql 9.1 pg_last_xact_replay_timestamp limitations
Gabi Julien gabi.jul...@broadsign.com writes: I just tried with postgresql 9.1alpha from http://www.enterprisedb.com/products/pgdevdownload.do (linux x86-32): postgres=# select pg_last_xact_replay_timestamp(), now() as not_modified_since; ERROR: function pg_last_xact_replay_timestamp() does not exist LINE 1: select pg_last_xact_replay_timestamp(), now() as not_modifie... ^ HINT: No function matches the given name and argument types. You might need to add explicit type casts. A bug in the package? No, only lack of a time machine. That function was added on 2010-11-09 according to the git logs. alpha2 froze at the end of October. regards, tom lane
Re: [GENERAL] Uncommitted Data
On Wed, 2010-12-08 at 00:56 +, Jonathan Tripathy wrote: What does PG do with data that has been inserted into a table, but was never committed? Does the data get discarded once the connection dies? The data is there, but is not visible. You can run an explicit VACUUM to remove the dead rows, or you can wait for it to be garbage collected automatically at some point in the future, depending upon your workload. -- Simon Riggs http://www.2ndQuadrant.com/books/ PostgreSQL Development, 24x7 Support, Training and Services
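A small illustration of the lifecycle Simon describes, with a hypothetical table t:

```sql
BEGIN;
INSERT INTO t VALUES (1);   -- written to the table...
ROLLBACK;                   -- ...but never committed: the row is dead,
                            -- invisible to every query, still on disk

VACUUM t;                   -- makes the dead row's space reusable now;
                            -- otherwise autovacuum gets to it eventually
```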
Re: [GENERAL] pg_standby logging issues
On Tue, 2010-12-07 at 16:19 -0800, Greg Swisher wrote: Anything more elegant out there? http://projects.2ndquadrant.com/2warm -- Simon Riggs http://www.2ndQuadrant.com/books/ PostgreSQL Development, 24x7 Support, Training and Services -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Set new owner on cloned database
I am testing a Rails deployment and wish to copy a database, assigning it a new owner. I have tried this:

createdb --owner=hll_theheart_db_devl --template=hll_th_deploytest_prod hll_theheart_devl

While this indeed sets the database owner to hll_theheart_db_devl, everything else (schema, tables, whatever) remains owned by the original owner. Is there no way to change the owner everywhere in the cloned database using createdb? Or am I constrained to do a dump all and restore?

--
*** E-Mail is NOT a SECURE channel ***
James B. Byrne                mailto:byrn...@harte-lyne.ca
Harte Lyne Limited            http://www.harte-lyne.ca
9 Brockley Drive              vox: +1 905 561 1241
Hamilton, Ontario             fax: +1 905 561 0757
Canada  L8E 3C3
Re: [GENERAL] Set new owner on cloned database
On Wed, 8 Dec 2010 13:40:29 -0500 (EST), James B. Byrne byrn...@harte-lyne.ca wrote:
 I am testing a Rails deployment and wish to copy a database, assigning it a new owner. I have tried this:
 createdb --owner=hll_theheart_db_devl --template=hll_th_deploytest_prod hll_theheart_devl
 While this indeed sets the database owner to hll_theheart_db_devl, everything else (schema, tables, whatever) remains owned by the original owner. Is there no way to change the owner everywhere in the cloned database using createdb? Or am I constrained to do a dump all and restore?

http://www.mail-archive.com/pgsql-hack...@postgresql.org/msg51048.html
http://www.mail-archive.com/pgsql-hack...@postgresql.org/msg51047.html

I'm not sure if there has been any progress in newer postgres to support an easier change of owner; I'm not aware of any better current solution. Unfortunately, I think the license of the above discouraged people from improving the code and making it more popular.

--
Ivan Sergio Borgonovo
http://www.webthatworks.it
Re: [GENERAL] Hanging with pg_restore and large objects
Hi. Tom. You wrote: That's pretty curious. Can you take the dump file to a non-Windows machine, or at least one with a different build of pg_restore, and see what happens there? I'm wondering about possible corrupted executable, buggy zlib, etc. I'll try to get a copy of the problematic data file to my Unix box in the coming days, and will report back on what happens. No, not that I've heard of. The most likely theory seems to be that the dump file is corrupt somehow. This raises a question that came up during our discussion of this problem: Is there a way to verify that a dumpfile was not corrupt? That is, without having to run pg_restore on the entire file, only to discover that the end is missing data. I haven't encountered data-recovery problems of this sort before, but it does surprise me that PostgreSQL doesn't check the integrity of the file before trying to read and then apply it. * Is there any obvious way to diagnose or work around this problem? Well, it'd be interesting to trace through it with a debugger. Ideally you shouldn't get an infinite loop (as this seems to be) even with corrupt input. Is the data sufficiently non-proprietary that you'd be willing to show the dump file to someone else? I'm guessing that if we have dummy data in there, then we can share it. I'll get back to you about this in the coming day or two. Thanks for the offer! Reuven -- Reuven M. Lerner -- Web development, consulting, and training Mobile: +972-54-496-8405 * US phone: 847-230-9795 Skype/AIM: reuvenlerner -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Set new owner on cloned database
Le 08/12/2010 22:41, Ivan Sergio Borgonovo a écrit : On Wed, 8 Dec 2010 13:40:29 -0500 (EST) James B. Byrne byrn...@harte-lyne.ca wrote: I am testing a Rails deployment and wish to copy a database assigning it an new owner. I have tried this: createdb --owner=hll_theheart_db_devl --template=hll_th_deploytest_prod hll_theheart_devl While this indeed sets the database owner to hll_theheart_db_devl everything else, schema, tables whatever, remains owned by the original owner. Is there no way to change the owner everywhere in the cloned database using cretedb? No, you can't. --owner changes only the owner of the database. Or am I constrained to do a dump all and restore? http://www.mail-archive.com/pgsql-hack...@postgresql.org/msg51048.html http://www.mail-archive.com/pgsql-hack...@postgresql.org/msg51047.html I'm not sure if there has been any progress in newer postgres to support easier change of owner. I'm not aware of any more current better solution. You should try REASSIGN OWNED BY. See http://www.postgresql.org/docs/9.0/interactive/sql-reassign-owned.html -- Guillaume http://www.postgresql.fr http://dalibo.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
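Assuming the clone already exists, the fix Guillaume suggests would look something like this (the old owner's role name here is a placeholder, and REASSIGN OWNED must be run while connected to the cloned database, since it only affects objects in the current database):

```sql
-- Connect to the cloned database first, e.g. \c hll_theheart_devl
-- 'old_owner' is a placeholder for the role that owns the template's objects.
REASSIGN OWNED BY old_owner TO hll_theheart_db_devl;
```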
[GENERAL] pl/perl object destruction (or lack thereof) at session end
Hi,
Looking at this:
http://www.postgresql.org/docs/9.0/interactive/plperl-under-the-hood.html
Specifically, the known limitations part, I see it says:

 When a session ends normally, not due to a fatal error, any END blocks that have been defined are executed. Currently no other actions are performed. Specifically, file handles are not automatically flushed and objects are not automatically destroyed.

So I wondered what implications that has for stored procedures that create objects? Will the following code cause a memory leak if called many times, due to the $big object never being destroyed?

CREATE FUNCTION foobar() RETURNS trigger AS $$
    use Big::Module;
    my $big = Big::Module->new;
$$ LANGUAGE plperlu;

Thanks,
Toby
[GENERAL] pl/perl interpreter instance(s) - how long does it persist?
Hi, Apologies in advance if this has been covered before, but I've searched extensively without finding anything so far. I would like to know how long the pl/perl interpreter instances persist? I ask because I'm doing some work where we have PL/Perlu code that loads some Perl modules in trigger code. If I change the underlying modules that have been loaded, then when can I expect the new version to take effect? Initial testing seems to give me contrary results. In some cases, creating a new DB then loading the same module in a new stored procedure will get the old version. But in some other cases, the new DB will pick up the new version of the module. I wondered if this might be due to the backend having a number of perl interpreters in different processes, and I'm getting a fresh process sometimes, and sometimes I'm getting an old one that's already loaded the modules. So.. I'm a bit confused and hoped I could come and ask for advice. Thanks, Toby -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] pl/perl object destruction (or lack thereof) at session end
On 09/12/10 13:00, Toby Corkindale wrote:
 Hi,
 Looking at this:
 http://www.postgresql.org/docs/9.0/interactive/plperl-under-the-hood.html
 Specifically, the known limitations part, I see it says:
 When a session ends normally, not due to a fatal error, any END blocks that have been defined are executed. Currently no other actions are performed. Specifically, file handles are not automatically flushed and objects are not automatically destroyed.
 So I wondered what implications that has for stored procedures that create objects? Will the following code cause a memory leak if called many times, due to the $big object never being destroyed?
 CREATE FUNCTION foobar() RETURNS trigger AS $$
     use Big::Module;
     my $big = Big::Module->new;
 $$ LANGUAGE plperlu;

Perhaps I should test first and email later... I couldn't see any memory leaking at all in some quick experimentation (on pg 8.4.5).
Re: [GENERAL] pl/perl interpreter instance(s) - how long does it persist?
Toby Corkindale toby.corkind...@strategicdata.com.au writes: I would like to know how long the pl/perl interpreter instances persist? Till end of session. Initial testing seems to give me contrary results. In some cases, creating a new DB then loading the same module in a new stored procedure will get the old version. But in some other cases, the new DB will pick up the new version of the module. Creating a DB doesn't affect the current session ... where in there did you reconnect? regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] pl/perl interpreter instance(s) - how long does it persist?
On 09/12/10 13:37, Tom Lane wrote:
 Toby Corkindale toby.corkind...@strategicdata.com.au writes:
 I would like to know how long the pl/perl interpreter instances persist?
 Till end of session.

Where session = connection (whether from psql, DBI, etc), yes?

 Initial testing seems to give me contrary results. In some cases, creating a new DB then loading the same module in a new stored procedure will get the old version. But in some other cases, the new DB will pick up the new version of the module.
 Creating a DB doesn't affect the current session ... where in there did you reconnect?

To be honest, I'm not 100% sure I was reconnecting. I think I had it in my head that the perl instance would be related to role or database, so I was creating new databases with new roles and testing code straight away.

I thought I'd tried reconnecting, since that's obvious too, and in my limited local testing that seems to work; in staging/production environments it's trickier to bounce the apps, so maybe I haven't been restarting them when I thought I was.

Thanks for the advice; it sounds like I've been missing the obvious here, so I'll head off and do some more checking. Thanks for the quick response!
Toby
Re: [GENERAL] How to obtain the maximum value of a date, between 3 tables...
On Wednesday, 8 December 2010 at 13:35 +0000, Andre Lopes wrote:
 Hi, thanks for the reply. Are there other options to do it without using a UNION? I don't need to know which table the greatest date comes from, but the query is complex; this query is part of a UNION. Could a CASE WHEN be an alternative?

SELECT 'tbl1', last_refresh_date FROM tbl1
UNION
SELECT 'tbl2', last_refresh_date FROM tbl2
UNION
SELECT 'tbl3', last_refresh_date FROM tbl3
ORDER BY 1 DESC;

--
Vincent Veyron
http://marica.fr/
Case-management software for litigation and insurance files for legal departments
Re: [GENERAL] Postgresql 9.1 pg_last_xact_replay_timestamp limitations
On Thu, Dec 9, 2010 at 1:37 AM, Gabi Julien gabi.jul...@broadsign.com wrote:
 slave# /etc/init.d/postgresql start
 slave# psql -hlocalhost my_db -c "select pg_last_xact_replay_timestamp(), now() as not_modified_since;"
  pg_last_xact_replay_timestamp |       not_modified_since
 -------------------------------+--------------------------------
                                | 2010-12-08 16:06:09.920219+00

pg_last_xact_replay_timestamp returns the timestamp of the last *replayed* transaction, so it returns NULL until at least one transaction has been replayed. In your case, I guess that you started the master and standby from the same initial database cluster, or from a cleanly shut down one. In that case, since the standby has no transaction to replay right after startup, you get NULL until you execute a write query on the master.

Should we return the timestamp of the last valid checkpoint rather than NULL in that case?

Regards,
--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
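Given that caveat, a common way to use the function on a 9.1 standby is to compute an approximate replay lag, treating NULL as "nothing replayed yet" (a sketch):

```sql
-- Approximate replication delay, run on the standby. A NULL result means
-- no transaction has been replayed since startup (the case described above).
SELECT now() - pg_last_xact_replay_timestamp() AS replication_delay;
```

Note that on an idle master the timestamp stops advancing, so an apparently growing delay may just mean there has been no write activity to replay.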
Re: [GENERAL] How to obtain the maximum value of a date, between 3 tables...
On Thursday, 9 December 2010 at 03:58 +0100, Vincent Veyron wrote:
 SELECT 'tbl1', last_refresh_date FROM tbl1
 UNION
 SELECT 'tbl2', last_refresh_date FROM tbl2
 UNION
 SELECT 'tbl3', last_refresh_date FROM tbl3
 ORDER BY 1 DESC;

Argh... make that:

ORDER BY 2 DESC;

--
Vincent Veyron
http://marica.fr/
Case-management software for litigation and insurance files for legal departments
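If only the greatest date is wanted, and not which table it came from, one UNION-free alternative (assuming last_refresh_date exists in all three tables) is GREATEST over three subselects. In PostgreSQL, GREATEST ignores NULL arguments and returns NULL only if all arguments are NULL, so an empty table doesn't break the result:

```sql
-- Returns the single most recent last_refresh_date across the three tables.
SELECT GREATEST(
    (SELECT max(last_refresh_date) FROM tbl1),
    (SELECT max(last_refresh_date) FROM tbl2),
    (SELECT max(last_refresh_date) FROM tbl3)
) AS last_refresh_date;
```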
Re: [GENERAL] Set new owner on cloned database
On Wed, December 8, 2010 17:46, Guillaume Lelarge wrote:
 You should try REASSIGN OWNED BY. See
 http://www.postgresql.org/docs/9.0/interactive/sql-reassign-owned.html

Thanks for that. I ended up doing a pg_dump followed by a sed followed by a psql, which sufficed for my purposes, even if it did seem a bit convoluted. The REASSIGN OWNED BY seems the more sensible approach.

--
*** E-Mail is NOT a SECURE channel ***
James B. Byrne                mailto:byrn...@harte-lyne.ca
Harte Lyne Limited            http://www.harte-lyne.ca
9 Brockley Drive              vox: +1 905 561 1241
Hamilton, Ontario             fax: +1 905 561 0757
Canada  L8E 3C3
[GENERAL] Error handling in PL/PgSQL (without execution termination)
I am searching for a resource that explains how to handle SQL-related exceptions in PL/pgSQL without letting the function's execution terminate. I would like to use this to address a possible UNIQUE constraint violation (and the resulting exception) attributed to multiple clients concurrently populating a given table.

Allan.
Re: [GENERAL] SELECT is immediate but the UPDATE takes forever
On Dec 8, 2010, at 18:18, Vick Khera wrote:
 2010/12/7 Raimon Fernandez co...@montx.com:
 I'm now using another database with the same structure and data, and the delay doesn't exist there; there must be something wrong in my current development database.
 does autovacuum run on it?
no
 is the table massively bloated?
no
 is your disk system really, really slow to allocate new space?
no

now: well, after a VACUUM things are going faster... I'm still trying to analyze the function as it seems there are other bottlenecks, but at least the first update is now faster than before...

thanks,
r.
Re: [GENERAL] SELECT is immediate but the UPDATE takes forever
On Dec 7, 2010, at 16:37, Tom Lane wrote:
 Quoting Raimon Fernandez co...@montx.com:
 I want to understand why one of my postgresql functions takes an eternity to finish. Maybe there is some check or constraint on belongs_to_compte_id.comptes that might take longer?
 Or maybe the UPDATE is blocked on a lock ... did you look into pg_stat_activity or pg_locks to check?

no, there's no lock or blocking ... I'm the only user connected to my developer test database and I'm sure there are no locks, and even more sure after looking at pg_locks :-)

thanks,
r.
[GENERAL] use a variable name for an insert in a trigger for an audit
Hello,

I have to audit all the changes for all rows of one database. I have a trigger that executes BEFORE any update or delete, and simply copies the row (INSERT INTO) into the replicated table. Every table has the same name plus '_audit' at the end and belongs to the schema audit: table public.persons maps to audit.persons_audit.

I don't want to create specific triggers/functions for every table, so I want to modify the table name in the INSERT INTO using TG_TABLE_NAME, but I can't make it work. Also, I can't see a working solution in the archive, and some examples are quite messy, so maybe I have to rethink how I'm doing things or just create a specific trigger for each table.

Here is my function; I'm only testing the INSERT for now:

...
DECLARE
    tableRemote varchar;
BEGIN
    IF TG_TABLE_NAME = 'assentaments' THEN
        tableRemote := 'audit.' || TG_TABLE_NAME || '_audit';
    END IF;
    --
    -- Create a row in table_audit to reflect the operation performed on emp;
    -- make use of the special variable TG_OP to work out the operation.
    --
    IF (TG_OP = 'DELETE') THEN
        EXECUTE 'INSERT INTO audit.assentaments_audit SELECT CURRVAL(''audit_id_seq''), 5, OLD.*';
        RETURN OLD;
    ELSIF (TG_OP = 'UPDATE') THEN
        INSERT INTO tableRemote SELECT CURRVAL('audit.audit_id_seq'), 3, OLD.*;
        RETURN OLD;
    END IF;
    RETURN NULL; -- result is ignored since this is an AFTER trigger
END;
...

thanks, regards,
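For reference, on PostgreSQL 8.4 and later the dynamic INSERT can be written with EXECUTE ... USING, which passes the row as a parameter instead of splicing it into the string (OLD.* inside a quoted EXECUTE string, as in the DELETE branch above, is not visible to the executed statement, and a plain INSERT cannot take its table name from a variable). A sketch reusing the naming convention and sequence from the post; the operation codes 5 and 3 are the poster's own:

```sql
CREATE OR REPLACE FUNCTION audit.log_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP IN ('UPDATE', 'DELETE') THEN
        -- Build "audit.<table>_audit" safely with quote_ident, and pass the
        -- old row as parameter $2; ($2).* expands the composite value into
        -- its columns. currval assumes nextval('audit.audit_id_seq') was
        -- already called in this session, as in the original code.
        EXECUTE 'INSERT INTO audit.' || quote_ident(TG_TABLE_NAME || '_audit')
             || ' SELECT currval(''audit.audit_id_seq''), $1, ($2).*'
        USING CASE TG_OP WHEN 'DELETE' THEN 5 ELSE 3 END, OLD;
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
```

One generic function like this can then be attached to every audited table with a separate CREATE TRIGGER per table.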
Re: [GENERAL] Error handling in PL/PgSQL (without execution termination)
Hello http://www.postgresql.org/docs/9.0/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING Regards Pavel Stehule 2010/12/9 Allan Kamau kamaual...@gmail.com: I am searching for the resource that explains how to handle SQL related exceptions in PL/PgSQL without letting the function's execution terminate. I would like to use his to address possible UNIQUE constraint violation (and resulting exception) attributed to multiple clients concurrently populating the given table. Allan. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
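That section of the docs contains almost exactly this use case in its UPDATE-or-INSERT example. A sketch adapted to the concurrent-insert problem, with hypothetical table and column names:

```sql
-- Hypothetical table: items(key text PRIMARY KEY, value text).
-- Trapping unique_violation lets the function survive a race where
-- another client inserts the same key first.
CREATE OR REPLACE FUNCTION merge_item(p_key text, p_value text) RETURNS void AS $$
BEGIN
    LOOP
        UPDATE items SET value = p_value WHERE key = p_key;
        IF found THEN
            RETURN;
        END IF;
        BEGIN
            INSERT INTO items(key, value) VALUES (p_key, p_value);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Another session inserted the same key concurrently;
            -- do nothing and retry the UPDATE on the next loop pass.
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

A BEGIN ... EXCEPTION block runs in a subtransaction, so trapping the error rolls back only that block and the outer function continues normally.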
Re: [GENERAL] Error handling in PL/PgSQL (without execution termination)
Thanks Pavel, this is exactly what I have been looking for. Allan On Thu, Dec 9, 2010 at 8:44 AM, Pavel Stehule pavel.steh...@gmail.com wrote: Hello http://www.postgresql.org/docs/9.0/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING Regards Pavel Stehule 2010/12/9 Allan Kamau kamaual...@gmail.com: I am searching for the resource that explains how to handle SQL related exceptions in PL/PgSQL without letting the function's execution terminate. I would like to use his to address possible UNIQUE constraint violation (and resulting exception) attributed to multiple clients concurrently populating the given table. Allan. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Implement online database using Postgresql
Thank you for your suggestion. Any other suggestions are welcome.

On Tue, Dec 7, 2010 at 4:50 PM, Vincent Veyron vv.li...@wanadoo.fr wrote:
 On Tuesday, 7 December 2010 at 13:42 +0530, Kalai R wrote:
 So please guide me, what should I do to implement an online postgresql database?
 You need a web server to generate and process html forms that display the data. One is Apache, with the right module to process your forms' data. In your case maybe this could help: http://ant.apache.org/antlibs/dotnet/
 Can't help you more, as I only do Perl. There is a learning curve, but you'll gain *a lot* of power.
 --
 Vincent Veyron
 http://marica.fr/
 Case-management software for litigation and insurance files for legal departments