Re: [GENERAL] enforcing transaction mode for read-write queries.
That was a gem, just what I needed. Thanks! I tested it and it serves my purpose.

On Mon, Oct 1, 2012 at 8:54 AM, David Johnston wrote:
> On Sep 30, 2012, at 22:22, Rajesh Kumar Mallah wrote:
> >
> > Hi,
> >
> > We are a Perl DBI shop and are doing a code migration from implicit transaction mode (i.e., AutoCommit => 0) to explicit mode (i.e., AutoCommit => 1).
> >
> > While the code migration is ongoing (or even permanently), we wish that PostgreSQL reject any UPDATE, DELETE, INSERT, nextval, setval etc. unless the session is in a transaction, i.e. they should be preceded by an explicit "BEGIN WORK;". This would immensely help us prevent many bugs during the migration.
> >
> > Is there any way to accomplish that? Any help would be greatly appreciated.
> >
> > regds
> > Rajesh Kumar
>
> http://www.postgresql.org/docs/9.0/static/runtime-config-client.html
>
> default_transaction_read_only
>
> Set this to true for the database; then, whenever you actually want to allow modification, you override it on a per-transaction basis.
>
> David J.
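The suggestion above can be sketched in SQL. The database and table names below are placeholders, not from the thread:

```sql
-- Make every session on this database read-only by default:
ALTER DATABASE mydb SET default_transaction_read_only = on;

-- SELECTs keep working unchanged.  A write now fails unless it is
-- wrapped in a transaction that explicitly overrides the setting:
BEGIN;
SET TRANSACTION READ WRITE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
```

This gives exactly the enforcement asked for: bare writes are rejected, and each intentional write site must opt in per transaction.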
[GENERAL] enforcing transaction mode for read-write queries.
Hi,

We are a Perl DBI shop and are doing a code migration from implicit transaction mode (i.e., AutoCommit => 0) to explicit mode (i.e., AutoCommit => 1).

While the code migration is ongoing (or even permanently), we wish that PostgreSQL reject any UPDATE, DELETE, INSERT, nextval, setval etc. unless the session is in a transaction, i.e. they should be preceded by an explicit "BEGIN WORK;". This would immensely help us prevent many bugs during the migration.

Is there any way to accomplish that? Any help would be greatly appreciated.

regds
Rajesh Kumar Mallah.
[GENERAL] Index Scan Backward on wrong index in partitioned table.
Hi,

We have a set of partitioned tables, and we run the query on the main table. The query is:

  select uniq_id, profile_id from general.profile_log
  where profile_id = 3528336 order by uniq_id desc limit 5;

There is an index on profile_id on the child tables of profile_log. The query on profile_id does not choose that index; instead it uses an index on uniq_id (the sorting column). Since the number of child tables is huge, I am only posting an excerpt of the plan:

  Filter: (profile_id = 3528336)
  ->  Index Scan Backward using profile_log_2011_08_pkey on profile_log_2011_08 profile_log  (cost=0.00..15815.11 rows=3 width=8)
        Filter: (profile_id = 3528336)
  ->  Index Scan Backward using profile_log_2011_09_pkey on profile_log_2011_09 profile_log  (cost=0.00..17851.91 rows=76 width=8)

The index profile_log_2011_09_pkey is the index on the column uniq_id, and the query is *very slow* (39 seconds). However, if we just change limit 5 to limit 15, the plan changes:

  Index Cond: (profile_id = 3528336)
  ->  Bitmap Heap Scan on profile_log_2011_08 profile_log  (cost=4.31..16.13 rows=3 width=8)
        Recheck Cond: (profile_id = 3528336)
        ->  Bitmap Index Scan on profile_log_2011_08_profile_id  (cost=0.00..4.31 rows=3 width=0)
              Index Cond: (profile_id = 3528336)
  ->  Bitmap Heap Scan on profile_log_2011_09 profile_log  (cost=4.89..285.93 rows=76 width=8)
        Recheck Cond: (profile_id = 3528336)
        ->  Bitmap Index Scan on profile_log_2011_09_profile_id  (cost=0.00..4.87 rows=76 width=0)
              Index Cond: (profile_id = 3528336)

Now the index on profile_id is being used and the query is very fast (50 ms). I am using the most recent released version of PostgreSQL, which at this moment is 9.1.1. Can anyone please suggest anything? I think autovacuum is on.

regds
mallah

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
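One common workaround for this plan shape — a sketch, not from the thread, with the index name as an assumption — is a composite index that satisfies both the filter and the sort, so the planner no longer has to choose between the two single-column indexes:

```sql
-- On each child partition: lets the planner answer
--   WHERE profile_id = ? ORDER BY uniq_id DESC LIMIT n
-- with a single backward index scan over only the matching rows.
CREATE INDEX profile_log_2011_09_profile_id_uniq_id
    ON profile_log_2011_09 (profile_id, uniq_id DESC);
```

With such an index present per partition, the LIMIT 5 and LIMIT 15 cases should produce the same (fast) plan.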
[GENERAL] "all" not inclusive of "replication" in pg_hba.conf
Dear List,

It has been found that the entry

  local   all   all   trust

does not render the line below redundant in pg_hba.conf:

  local   replication   replicator01   trust

regds
mallah.
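This is in fact documented behaviour: in pg_hba.conf the keyword `all` in the database column matches ordinary databases but not replication connections, which must be listed explicitly. A minimal sketch (the role name is from the message above):

```
# Matches connections to ordinary databases only:
local   all           all            trust

# Needed separately for streaming-replication connections:
local   replication   replicator01   trust
```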
Re: [GENERAL] 10GbE / iSCSI storage for postgresql.
Dear Craig,

The other end of the iSCSI link shall have all the goodies, like a RAID controller with a write-back cache (WBC) with BBU. There can even be multiple RAID cards for multiple servers and disk sets. I am even planning for NICs with TOE features. The doubt is: will it work within an acceptable performance range as compared to DAS (Direct Attached Storage)? Has anyone tried this before?

regds
mallah.

On Thu, Sep 22, 2011 at 9:44 AM, Craig Ringer wrote:
> On 09/22/2011 03:49 AM, Rajesh Kumar Mallah wrote:
>>
>> Hi,
>>
>> Can PostgreSQL run fast (within 80% of DAS) with iSCSI storage
>> connected via 10GbE?
>
> "Maybe".
>
> What's that 80% of? Sequential read throughput? Random IOPS? Individual read latency?
>
> What's the expected workload? Read-heavy, write-heavy, or middle-ground? Data warehouse/OLAP or OLTP? Lots of small simple transactions, or fewer big complex transactions?
>
> Does the system on the other end of the iSCSI link have battery-backed write caching, flash-logged write cache, or some other way to guarantee writes are persistent without having to wait for data to flush out to spinning disks? You'll need something like this for decent write performance, especially if you're doing lots of small transactions. If the SAN doesn't have a safe way to cache writes you can partly work around the issue by doing fewer, bigger transactions and/or by using a commit_delay.
>
> What kind of read cache does the SAN have? How much contention with other users will there be? How big is its write-back cache (if it has one)? Does it have any kind of QoS to prevent something like someone disk-imaging a server from starving your Pg instance of read bandwidth?
>
> --
> Craig Ringer
[GENERAL] 10GbE / iSCSI storage for postgresql.
Hi,

Can PostgreSQL run fast (within 80% of DAS) with iSCSI storage connected via 10GbE?

regds
mallah.
Re: [GENERAL] pg_dumpall behavior in 9.1beta1
Oops! Sorry for the noise; it was an innocuous NOTICE, not an ERROR as claimed. Thanks for your time.

regds
mallah.

On Fri, May 13, 2011 at 7:59 PM, Tom Lane wrote:
> Rajesh Kumar Mallah writes:
>> We migrated to 9.1beta1 from 9.0.x in our development environment.
>> We shall report if any significant problems or issues arise out of this.
>
>> One small thing that we observed right now regarding pg_dumpall output is that
>> the ALTER USER refers to some db objects which are not created yet,
>> and this raises an ERROR; this may cause some inconvenience where restoration
>> is being carried out in a txn.
>
>> An example of the relevant sql lines is given below
>
>> CREATE ROLE pdns;
>> ALTER ROLE pdns WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN
>> NOREPLICATION PASSWORD 'md56dx' ;
>> ALTER ROLE pdns SET search_path TO pdns;
>
>> in the above line the 'pdns' namespace does not exist yet.
>
> When I do that, I get a harmless NOTICE, not an ERROR. Are you sure you
> are getting an error?
>
> regression=# CREATE ROLE pdns;
> CREATE ROLE
> regression=# ALTER ROLE pdns WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN
> regression-# NOREPLICATION PASSWORD 'md56dx' ;
> ALTER ROLE
> regression=# ALTER ROLE pdns SET search_path TO pdns;
> NOTICE:  schema "pdns" does not exist
> ALTER ROLE
> regression=#
>
> regards, tom lane
[GENERAL] pg_dumpall behavior in 9.1beta1
Dear List,

We migrated to 9.1beta1 from 9.0.x in our development environment. We shall report if any significant problems or issues arise out of this.

One small thing that we observed right now regarding pg_dumpall output is that ALTER USER refers to some db objects which are not created yet, and this raises an ERROR; this may cause some inconvenience where restoration is being carried out in a transaction.

An example of the relevant SQL lines is given below:

CREATE ROLE pdns;
ALTER ROLE pdns WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN NOREPLICATION PASSWORD 'md56dx' ;
ALTER ROLE pdns SET search_path TO pdns;

In the above lines the 'pdns' namespace does not exist yet.

regds
Rajesh Kumar Mallah.
Re: [GENERAL] How to add hosts to pg_hba.conf and postgresql.conf?
Can you post the output of netstat -lnp when run on the machine running the PostgreSQL server?

On Tue, Mar 15, 2011 at 9:54 PM, general_lee wrote:
> Hi,
>
> Thanks for the reply.
>
> Nope, not a firewall problem.
>
> I also get connection refused if I telnet by IP address on the Postgres server.
>
> I can telnet localhost 5432
>
> But not telnet 5432
>
> Anything else I can try here? I'm not sure my pg_hba.conf is correct, what do you think?
>
> --
> View this message in context:
> http://postgresql.1045698.n5.nabble.com/How-to-add-hosts-to-pg-hba-conf-and-postgresql-conf-tp3708421p3709271.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
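The symptom quoted above — localhost connects, the external IP is refused — usually points at listen_addresses rather than pg_hba.conf. A sketch of the typical fix (the subnet is a placeholder, not from the thread):

```
# postgresql.conf -- by default the server listens on localhost only.
# Listen on all interfaces instead (requires a server restart):
listen_addresses = '*'

# pg_hba.conf -- then allow the remote clients, e.g.:
host    all    all    192.168.1.0/24    md5
```

The netstat output would confirm this: a server bound only to 127.0.0.1:5432 rejects connections on any other address.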
Re: [GENERAL] base backup and tar problems with disappearing files.
Sorry Alban, for the late update.

Yep John, I am talking about tar'ring the pgdatadir only, excluding the pg_xlog dir. We have set up our full backup system in accordance with the admin guide. Even the guide points out the limitation of tar in producing distinguishing exit codes.

My doubt at this moment is: is it normal to expect files disappearing from the pgdatadir during the course of taking the base backup? I can understand the temp sorting files disappearing, but I am not sure what could cause data files to disappear, as in the example given below:

tar: /mnt/disk1/pgdatadir/base/16399/861272781: Cannot stat: No such file or directory

If the above incidence is normal, then I would only worry about making tar not complain about the disappearing files. The reason for putting this question is that a line in one of the later paragraphs of section "24.3.2" says "Some file system backup tools emit warnings or errors if the files they are trying to copy change while the copy proceeds."; it only mentions "change", not disappearance. Since I have deep respect for the excellent documentation quality of the PostgreSQL project, I read and interpret it by its words.

Warm Regds
-mallah.

On Tue, Mar 15, 2011 at 1:42 PM, John R Pierce wrote:
> On 03/15/11 12:30 AM, Alban Hertroys wrote:
>> On 15 Mar 2011, at 7:46, Alban Hertroys wrote:
>>> On 15 Mar 2011, at 3:06, Rajesh Kumar Mallah wrote:
>>>> Dear Friends,
>>>>
>>>> While taking an online base backup we ignore tar exit codes of 1. However, under certain circumstances tar exits with code '2', which stands for 'Fatal Errors', e.g. in case of "Cannot stat: No such file or directory" encountered while taking a backup of the pgdatadir. My question is: can we ignore such errors about "vanishing files"? Is it normal? I think the situation is arising because some tables which were created before the start of the backup were dropped during the backup. But that is quite normal also.
>>>
>>> You should probably exclude the PG data directories from your file-system backups, there isn't much point in backing them up anyway.
>>
>> I should refine that a bit...
>>
>> A file-system level backup backs up the files in a sequential order, while the database writes its transactions in them in a pattern that's much closer to random order. As a result, your file-system backup is likely to contain the database files in an inconsistent state. If you subsequently try to recover from that backup, you rely on the ability of the database to recover from that inconsistent state. Postgres is pretty good at recovering, but there's no guarantee it will succeed. It's probably a bad idea to rely on that for your backups.
>>
>> Instead, for backing up your database, use one of the strategies outlined in the fine manual. Those are reliable.
>
> Um, I assumed from the original post that he was talking about taking a base backup in preparation for setting up WAL replication, presumably preceded by a call to pg_start_backup(), etc...
[GENERAL] base backup and tar problems with disappearing files.
Dear Friends,

While taking an online base backup we ignore tar exit codes of 1. However, under certain circumstances tar exits with code '2', which stands for 'Fatal Errors', e.g. in case of "Cannot stat: No such file or directory" encountered while taking a backup of the pgdatadir. My question is: can we ignore such errors about "vanishing files"? Is it normal? I think the situation is arising because some tables which were created before the start of the backup were dropped during the backup. But that is quite normal also.

regds
mallah.

tar: Removing leading `/' from member names
tar: /mnt/disk1/pgdatadir/base/16399/445408706: file changed as we read it
tar: /mnt/disk1/pgdatadir/base/16399/301908133: file changed as we read it
tar: /mnt/disk1/pgdatadir/base/16399/51363916: file changed as we read it
tar: /mnt/disk1/pgdatadir/base/16399/51363949: file changed as we read it
tar: /mnt/disk1/pgdatadir/base/16399/51369062: file changed as we read it
tar: /mnt/disk1/pgdatadir/base/16399/51369071: file changed as we read it
tar: /mnt/disk1/pgdatadir/base/16399/865944857: file changed as we read it
tar: /mnt/disk1/pgdatadir/base/16399/810163513: file changed as we read it
tar: /mnt/disk1/pgdatadir/base/16399/843721621: file changed as we read it
tar: /mnt/disk1/pgdatadir/base/16399/861272781: Cannot stat: No such file or directory
tar: /mnt/disk1/pgdatadir/base/16399/861272782: Cannot stat: No such file or directory
tar: /mnt/disk1/pgdatadir/base/16399/861272783: Cannot stat: No such file or directory
tar: /mnt/disk1/pgdatadir/base/16399/861272784: Cannot stat: No such file or directory
tar: /mnt/disk1/pgdatadir/base/16399/865944127: file changed as we read it
tar: /mnt/disk1/pgdatadir/base/16399/861272783_vm: Cannot stat: No such file or directory
tar: /mnt/disk1/pgdatadir/base/16399/861272781_vm: Cannot stat: No such file or directory
tar: /mnt/disk1/pgdatadir/base/pgsql_tmp/pgsql_tmp7934.6: file changed as we read it
tar: Error exit delayed from previous errors
Dying because tar exited with an exit code: 2 not in {0, 1}
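One way to soften the exit-code handling — a sketch, not from the thread. GNU tar's `--ignore-failed-read` keeps unreadable (e.g. just-dropped) files from producing the fatal code 2, and the exit-code check below is the portable part:

```shell
#!/bin/sh
# During a base backup of a live cluster, tar exit code 1
# ("file changed as we read it") is expected and harmless.
tar_exit_ok() {
    # Returns success for tar exit codes 0 and 1, failure otherwise.
    case "$1" in
        0|1) return 0 ;;
        *)   return 1 ;;
    esac
}

# Usage sketch (paths are placeholders):
# tar --ignore-failed-read -czf /backup/base.tar.gz \
#     --exclude=pg_xlog /mnt/disk1/pgdatadir
# tar_exit_ok $? || echo "base backup failed" >&2
```

Vanished-file errors during pg_start_backup()/pg_stop_backup() base backups are normal, since WAL replay repairs any such inconsistency on recovery.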
Re: [GENERAL] select count(*)
i "discovered" it as a result of typo :) we usually select expressions without tables eg select 1+2 ; etc and the results are as expected, somehow i failed to stretch the analogy to count(*) which is mostly used over tables or table expression. thanks anyways. regds mallah. On Wed, Mar 9, 2011 at 11:20 PM, Bill Moran wrote: > In response to Rajesh Kumar Mallah : > > > Dear List , > > > > if we simply do select count(*) and not specify any table then it gives 1 > > eg: > > > > bric=# SELECT count(*) from job ; > > count > > --- > > 2380 > > (1 row) > > > > bric=# SELECT count(*) job ; > > job > > - > >1 > > (1 row) > > > > > > > > bric=# SELECT count(*) ; > > count > > --- > > 1 > > (1 row) > > > > > > > > bric=# SELECT count(*) job_non_exist ; > > job_non_exist > > --- > > 1 > > (1 row) > > > > bric=# SELECT count(*) jo1b ; > > jo1b > > -- > > 1 > > (1 row) > > > > bric=# SELECT count(*) none ; > > ERROR: syntax error at or near "none" > > LINE 1: SELECT count(*) none ; > > > > > > I fail to see any progression ? > > When you don't specify a FROM clause, you get 1 because it's > returning 1 row. No matter what you alias the result to, it's not going > to change the result, unless of course you try to alias it to an SQL > reserved word, such as "none", without quoting it. Of course, if you > include the optional AS, it probably makes more sense what's going on: > > SELECT count(*) AS jolb; > SELECT count(*) AS none; > SELECT count(*) AS "none"; > > -- > Bill Moran > http://www.potentialtech.com > http://people.collaborativefusion.com/~wmoran/ >
[GENERAL] select count(*)
Dear List,

If we simply do select count(*) and do not specify any table then it gives 1, e.g.:

bric=# SELECT count(*) from job ;
 count
-------
  2380
(1 row)

bric=# SELECT count(*) job ;
 job
-----
   1
(1 row)

bric=# SELECT count(*) ;
 count
-------
     1
(1 row)

bric=# SELECT count(*) job_non_exist ;
 job_non_exist
---------------
             1
(1 row)

bric=# SELECT count(*) jo1b ;
 jo1b
------
    1
(1 row)

bric=# SELECT count(*) none ;
ERROR:  syntax error at or near "none"
LINE 1: SELECT count(*) none ;

I fail to see any pattern here?

regds
mallah.
[GENERAL] clarification regarding max_standby_streaming_delay
Hi,

We have set up Streaming Replication with max_standby_streaming_delay = 120s. Statements are getting canceled on the standby server with messages like the following:

FATAL:  terminating connection due to conflict with recovery
DETAIL:  User query might have needed to see row versions that must be removed.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.

Can we draw the conclusion that the queries conflicting with the application of the WAL entries were very slow and ran over 120s?

According to the docs:
http://developer.postgresql.org/pgdocs/postgres/runtime-config-wal.html#GUC-MAX-STANDBY-ARCHIVE-DELAY

max_standby_streaming_delay (integer)

When Hot Standby is active, this parameter determines how long the standby server should wait before canceling standby queries that conflict with about-to-be-applied WAL entries, as described in Section 25.5.2. max_standby_streaming_delay applies when WAL data is being received via streaming replication. The default is 30 seconds. Units are milliseconds if not specified. A value of -1 allows the standby to wait forever for conflicting queries to complete. This parameter can only be set in the postgresql.conf file or on the server command line.

Note that max_standby_streaming_delay is not the same as the maximum length of time a query can run before cancellation; rather it is the maximum total time allowed to apply WAL data once it has been received from the primary server. Thus, if one query has resulted in significant delay, subsequent conflicting queries will have much less grace time until the standby server has caught up again.
Re: [GENERAL] Gripe: bytea_output default => data corruption
Dear Griper!,

Fortunately someone showed an easy 'fix':

ALTER DATABASE foo SET bytea_output = 'escape' ;

Regds
Rajesh Kumar Mallah.

On Wed, Oct 13, 2010 at 5:03 PM, ljb wrote:
> r...@iol.ie wrote:
>> ...
>> In fairness, it *is* flagged in the release notes - it's the first item
>> under "data types" in the list of incompatibilities.
>
> Quote:
> "bytea output now appears in hex format by default (Peter Eisentraut)
> The server parameter bytea_output can be used to select the
> traditional output format if needed for compatibility."
>
> This is inadequate, because it fails to warn that pre-9.0 clients will
> decode the data incorrectly without reporting an error.
Re: [GENERAL] Adding a New Column Specifically In a Table
Dear Carlos,

Application code should not depend on column positions; the requirement is not a good one.

regds
rajesh kumar mallah.
Re: [GENERAL] How to search ignoring spaces and minus signs
Dear Andrus,

A quick & dirty solution:

SELECT * from table where regexp_replace(col, '[-\\s]', '', 'g') ilike '%search_term%' ;

Note: the above SQL will not use any index. If you have to search 10000s of rows, use an alternative approach.

regds
Rajesh Kumar Mallah.

2010/10/13 Andrus:
> CHAR(20) columns in an 8.4 database may contain spaces and - signs, like
>
> 13-333-333
> 12 3-44
> 33 33 333
> 12345
>
> The user enters the code to search for without spaces and - signs, like 12344.
> How to search for a product code ignoring spaces and - signs?
>
> For example, searching for code 12344 should return
> 12 3-44 as a matching item.
>
> Andrus.
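If the normalized lookup must also be fast, the usual approach is an expression index — a sketch, with the table and column names as assumptions. regexp_replace with constant arguments is immutable, so it can be indexed; the query expression must match the index expression exactly, and this helps equality or prefix matches, not '%...%' patterns:

```sql
-- Expression index on the normalized code (assumed table "products"):
CREATE INDEX products_code_normalized
    ON products ( regexp_replace(code, '[-\\s]', '', 'g') );

-- An equality match on the normalized value can then use the index:
SELECT * FROM products
 WHERE regexp_replace(code, '[-\\s]', '', 'g') = '12344';
```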
Re: [GENERAL] Adding a New Column Specifically In a Table
Dear Carlos,

In old versions of PostgreSQL, the attnum column of the pg_catalog.pg_attribute system catalog could be modified to get the desired result. I am not sure it should be done anymore.

Rajesh Kumar Mallah.

On Wed, Oct 13, 2010 at 2:06 PM, Carlos Mennens wrote:
> On Wed, Oct 13, 2010 at 2:04 PM, Carlos Mennens wrote:
>> OK, so I have read the docs and Googled to try and find a way to add a
>> new column to an existing table. My problem is I need this new column
>> to be created 3rd rather than just dumping this new column at the end
>> of my table. I can't find anywhere how I can insert my new column as
>> the 3rd table column rather than the last (seventh). Does anyone know
>> how I can accomplish this, or if it's even possible? It seems like a
>> common task, but I checked the documentation and may have missed it in
>> my reading.
>>
>> ALTER TABLE users ADD COLUMN employer VARCHAR(50) NOT NULL;
>
> Ah, sadly I just found this after I pressed 'send' and realized
> PostgreSQL doesn't support it... that sucks :(
>
> http://wiki.postgresql.org/wiki/Alter_column_position
Re: [GENERAL] setting application name post connection
Yep, it works! Thanks.

t=> show application_name;
 application_name
------------------

(1 row)

t=> SELECT application_name, current_query from pg_stat_activity where application_name='myapp';
 application_name | current_query
------------------+---------------
(0 rows)

t=> set application_name TO 'myapp';
SET

t=> SELECT application_name, current_query from pg_stat_activity where application_name='myapp';
 application_name | current_query
------------------+---------------
 myapp            | SELECT application_name, current_query from pg_stat_activity where application_name='myapp';
(1 row)

On Thu, Oct 7, 2010 at 11:49 PM, Scott Marlowe wrote:
> On Thu, Oct 7, 2010 at 9:33 PM, Rajesh Kumar Mallah wrote:
>> Dear List,
>>
>> Is it possible to set the application name param after the connection is made?
>
> Have you tried setting it:
>
> set application_name = 'newappname';
>
> ?
[GENERAL] setting application name post connection
Dear List,

Is it possible to set the application name parameter after the connection is made? In our applications the DB connection is made at a top level and is shared between many sub-applications. We want to override the application name in the various sub-applications without reconnecting to the database. Is it possible?

regds
Rajesh Kumar Mallah.
Re: [GENERAL] Hot standby usage issue
Hi,

The 'problem' was remedied to a certain extent by increasing max_standby_streaming_delay (default 30s). I set it to 5 minutes, which is acceptable for our application.

regds
Rajesh Kumar Mallah.

On Thu, Oct 7, 2010 at 9:55 AM, Rajesh Kumar Mallah wrote:
> Dear List,
>
> We have set up primary/slave in HS & SR mode. We are diverting a part of the read-only queries to the slave.
>
> The problem is that we are intermittently getting errors
> ERROR: canceling statement due to conflict with recovery
> on the slave. We know that the tables participating in these queries
> are not frequently updated. Can anyone suggest how we can
> debug the situation?
>
> Regds
> Rajesh Kumar Mallah.
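A sketch of the standby-side setting involved (the 5-minute value is from the message above; hot_standby_feedback is a related knob available from 9.1 onward, mentioned here as an alternative and not from the thread):

```
# postgresql.conf on the standby
max_standby_streaming_delay = 300s   # let conflicting queries run up to 5 min

# From 9.1 onward, an alternative: ask the primary not to vacuum away
# rows the standby still needs (at the cost of some bloat on the primary):
# hot_standby_feedback = on
```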
Re: [GENERAL] Hot standby usage issue
Both servers are on the same gigabit switch.
[GENERAL] Hot standby usage issue
Dear List,

We have set up primary/slave in HS & SR mode. We are diverting a part of the read-only queries to the slave.

The problem is that we are intermittently getting errors

ERROR: canceling statement due to conflict with recovery

on the slave. We know that the tables participating in these queries are not frequently updated. Can anyone suggest how we can debug the situation?

Regds
Rajesh Kumar Mallah.
Re: [GENERAL] Non-standard escape sequences from Crystal Reports
Dear Rob,

Please do not go off-list.

It's not a matter of cancelling. Consider the fragment

  relname like 'coil\\_status'

The underscore has to be escaped because if it is not, '_' will match any *single* character (the rule of LIKE/ILIKE matching); it is escaped so that it matches a literal '_' in strings. Two backslashes are required so that ILIKE receives \_ — that is the rule of quoting literal values in SQL. If you add E you can get rid of the warnings.

Regds
Rajesh Kumar Mallah.

On Tue, Oct 5, 2010 at 2:18 PM, Rob Richardson wrote:
> Thank you very much! Your examples worked as you said they would, and they showed that my database is logging those warnings when they happen. That indicates that the difference is in the Crystal Reports installation someplace.
>
> I don't understand how the backslashes are handled. I added underscores to a couple of records in my coilstatus table, so one had "Fr_ee" and another had "Pack_ed". Then, the query "SELECT status from coilstatus where status like '%\\_%' limit 10;" gave me the warning that you predicted, and it found the two records with underscored statuses. That tells me that the two backslashes cancelled each other out, and the query was the same as "SELECT status from coilstatus where status like '%_%' limit 10;". Why is that? Do I need to read about regular expressions?
>
> RobR
[GENERAL] bytea_output default value change in 9.0 breaks Apache::Session::Postgres
Dear List,

The recent change of the bytea_output format from escape to hex in 9.0 apparently breaks popular persistent session-handling Perl modules like Apache::Session::Postgres, which store pickled data structures in bytea columns of a DB table. It is difficult to guess the root cause from the exception thrown by the said module. The problem is fixed by adding bytea_output = 'escape' in postgresql.conf and issuing a pg_ctl reload.

E.g. in the RT application the error is:

error: RT couldn't store your session. This may mean that that the directory /blah/blah/foo/bar isn't writable or a database table is missing or corrupt

Regds
Rajesh Kumar Mallah.
Re: [GENERAL] Non-standard escape sequences from Crystal Reports
Dear Rob,

This will give a warning:

SELECT co_name from foo where co_name ilike '%\\_%' limit 10;

and this will not:

SELECT co_name from foo where co_name ilike E'%\\_%' limit 10;

regds
Rajesh Kumar Mallah.
Re: [GENERAL] streaming replication question
Dear Riggs,

I am so sorry that I am not able to replicate it this time. Maybe I did something really stupid that day. I remember that at one point I ran the standby without any recovery.conf even. If I ever get that again I will surely try to replicate it and inform you.

Regds
Rajesh Kumar Mallah.

On Tue, Oct 5, 2010 at 5:08 AM, Simon Riggs wrote:
> On Sat, 2010-10-02 at 12:02 -0400, Rajesh Kumar Mallah wrote:
> > FATAL: too many KnownAssignedXids
>
> That's a bug.
>
> I've a few guesses about that, but I'll put some better instrumentation in to see if we can prove what's causing it.
>
> Can you reproduce that again, or was that failure isolated to that particular point in the WAL stream? You can try turning HS off, then turning it back on again later.
>
> If you suspect a bug in Hot Standby, please set
>     trace_recovery_messages = DEBUG2
> in postgresql.conf and repeat the action.
>
> Always useful to know:
> * max_connections
> * current number of sessions
> * whether we have two-phase commits happening
>
> Thanks
>
> --
> Simon Riggs www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Training and Services
Re: [GENERAL] streaming replication question
On Tue, Oct 5, 2010 at 12:09 AM, Fujii Masao wrote: > On Tue, Oct 5, 2010 at 2:57 AM, Rajesh Kumar Mallah > wrote: > > I am currently aiming to setup only SR between 2 servers only. > > "only SR" means that you don't need Hot Standby (i.e., you don't need to > run any query on the standby server)? If so, you can set wal_level to > archive instead of hot_standby, and disable hot_standby parameter on the > standby. > Thanks for the kind replies, May be I am mistaken in understanding of the terms. I do issue read only queries to the standby server. So I think its HS. But I also stream the wal records. > > > My question is in SR setup do i need to keep transferring the archived > WAL > > files also ? > > No if wal_keep_segments is high enough that WAL files required for the > standby > can be saved in pg_xlog directory of the master. > > > when the slave requests WAL records from the master does the master > consult > > the archived > > wal files also for sending the records ? > > No. In 9.0, the master doesn't read and send the archived WAL files. > But I agree to make the master send the archived WAL files, and I'm > now proposing that. > http://archives.postgresql.org/pgsql-hackers/2010-09/msg02040.php > > Regards, > I just wanted to let you know how i am now managing the wal logs. i have mounted the wal archive folder(Xp) of primary on the slave via NFS to a folder (Ys). The folder Xp of primary is periodically rsynced to folder Xs on slave with option --delete in recovery.conf of slave archive_cleanup_command has been specified to cleanup Ys (not Xs), since it is a NFS rw mount it removes the unneeded archived WAL files from the source also ie Xp . subsequently the files are removed from Xs also because of the --delete option of rsync. My original requirement is that , I should be able to divert certain amount of Read Only queries from the master to salve so as to reduce load in primary. 
Our pilot applications have been modified so that they always contact the master when they have to modify data and the slave when they have to read data. The current setup meets the requirement, but I am concerned about what happens if the slave is shut down for a prolonged duration: how will I get to know when a fresh base backup is required? I am trying to read the docs and also practically observing the effects of various steps. We have a switch in the software that lets us not use the slave at all! Regds Rajesh Kumar Mallah. Tradeindia.com - India's Largest B2B MarketPlace. (uses PostgreSQL for past 10 years) > > -- > Fujii Masao > NIPPON TELEGRAPH AND TELEPHONE CORPORATION > NTT Open Source Software Center >
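For comparison, the cleanup described above is often done with the pg_archivecleanup tool shipped in 9.0's contrib; a minimal recovery.conf sketch under assumed paths (the host and directories below are illustrative, not from my setup):

```
# recovery.conf on the standby -- host and paths are illustrative
standby_mode = 'on'
primary_conninfo = 'host=192.168.0.10 port=5432 user=postgres'
restore_command = 'cp /mnt/wal_archives/%f "%p"'
# %r expands to the name of the oldest WAL file the standby still needs;
# pg_archivecleanup removes all archived segments older than it
archive_cleanup_command = 'pg_archivecleanup /mnt/wal_archives %r'
```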
Re: [GENERAL] Having two simultaneous and similar database
Dear Sai, Why do you want to update at night only? You can set up streaming replication with PostgreSQL 9.0 and have it updated almost instantly. However, if you want it that way only, read about the various backup options in the link that was provided. regds mallah. On Mon, Oct 4, 2010 at 4:02 PM, Sairam Krishnamurthy wrote: > Hi all. > > I am trying to create two databases in two different machines connected > over the LAN. Both the databases have similar tables and fields in them. I > will be updating database A always. I want the update to be backed up to > database B automatically during the night when no one will be using the > databases. > > Can someone tell me if this can be done and if yes how? > > -- > Thanks, > Sairam Krishnamurthy > +1 612 859 8161 > >
Re: [GENERAL] streaming replication question
Dear Masao, I would also like to inform you that I observed the phenomenon only once. And I am still not too clear on the correct setup. Regds Rajesh Kumar Mallah. > > This looks like a bug in HS. But I'm not sure why KnownAssignedXids > overflowed. > > Regards, > > -- > Fujii Masao > NIPPON TELEGRAPH AND TELEPHONE CORPORATION > NTT Open Source Software Center >
Re: [GENERAL] streaming replication question
Dear Masao, Thanks so much for the response. I am actually in a position to do rigorous stress testing because I have the machines at my disposal. I am sorry that I was not in touch with the lists for a prolonged duration and could not follow up much on the discussions while HS & SR were being developed; hence I am mostly relying on the published documents. The wiki that I used does not say much about how to manage the WAL files that are archived on the primary and (possibly) the standby. I am currently aiming to set up only SR between 2 servers. On the primary server I have:

wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 32
archive_mode = on
archive_command = 'cp %p /mnt/disk1/pgdatadir/wal_archives/%f'

On the slave server I additionally have hot_standby = on in postgresql.conf, and recovery.conf on the slave is:

standby_mode = 'on'
primary_conninfo = 'host=172.16.10.139 port=5432 user=postgres'
trigger_file = '/mnt/disk1/pgdatadir/trigger'
restore_command = 'cp /mnt/disk1/pgdatadir/wal_archives/%f "%p"'

My questions: in an SR setup do I need to keep transferring the archived WAL files also? When the slave requests WAL records from the master, does the master consult the archived WAL files too when sending the records? Anticipating your enlightening answers, yours respectfully, Rajesh Kumar Mallah.
Re: [GENERAL] Any advice on debugging hanging postgresql-8.1.21 (many postmaster's)
Dear Farber, Is there anything that stops you from upgrading to a more respectable release? E.g. 9.0, or at least the latest 8.4.x. Regds Rajesh Kumar Mallah.
Re: [GENERAL] Streaming Recovery - Automated Monitoring
I hope you checked point #11 of http://wiki.postgresql.org/wiki/Streaming_Replication#How_to_Use - *11.* You can calculate the replication lag by comparing the current WAL write location on the primary with the last WAL location received/replayed by the standby. They can be retrieved using *pg_current_xlog_location* on the primary and *pg_last_xlog_receive_location*/*pg_last_xlog_replay_location* on the standby, respectively.

$ psql -c "SELECT pg_current_xlog_location()" -h192.168.0.10 (primary host)
 pg_current_xlog_location
--------------------------
 0/200
(1 row)

$ psql -c "select pg_last_xlog_receive_location()" -h192.168.0.20 (standby host)
 pg_last_xlog_receive_location
-------------------------------
 0/200
(1 row)

$ psql -c "select pg_last_xlog_replay_location()" -h192.168.0.20 (standby host)
 pg_last_xlog_replay_location
------------------------------
 0/200
(1 row)

Regds Rajesh Kumar Mallah.
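To get the lag as a byte count instead of eyeballing the two locations, it can be computed in SQL. Note this is a sketch: pg_xlog_location_diff appeared only in 9.2, so on 9.0/9.1 you would have to split the 'xlogid/offset' text and do the arithmetic yourself.

```sql
-- Run on the primary (9.2+), passing in the location reported by the standby;
-- '0/2000000' below is an illustrative standby location, not a real reading.
SELECT pg_xlog_location_diff(pg_current_xlog_location(),
                             '0/2000000') AS lag_bytes;
```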
[GENERAL] streaming replication question
Dear List, Firstly, thanks to the PostgreSQL Global Development Group for releasing PostgreSQL with such a great enterprise feature as SR & HS. As an end user I set up SR based on http://wiki.postgresql.org/wiki/Streaming_Replication . The master & standby are both powerful machines and are on the same gigabit switch. Things worked as expected: I updated thousands of records in a single txn and saw them appearing on the standby instantly. Then I wanted to see the impact of shutting down the standby momentarily. This apparently failed, with the following messages in the standby logs:

DETAIL: last completed transaction was at log time 2010-10-02 20:57:26.17677+05:30
LOG: restartpoint starting: time
LOG: received smart shutdown request
FATAL: terminating walreceiver process due to administrator command
LOG: restartpoint complete: wrote 1568 buffers (0.3%); write=146.237 s, sync=0.251 s, total=146.489 s
LOG: recovery restart point at 25/EA87BA18
DETAIL: last completed transaction was at log time 2010-10-02 21:02:21.89303+05:30
LOG: shutting down                                   <-- shutdown command issued here
LOG: database system is shut down
LOG: database system was shut down in recovery at 2010-10-02 21:02:22 IST   <-- first attempt to start was made
LOG: entering standby mode
LOG: restored log file "0001002500EA" from archive
LOG: redo starts at 25/EA87BA18
FATAL: too many KnownAssignedXids
CONTEXT: xlog redo insert: rel 1663/16399/303892827; tid 1503/119
LOG: startup process (PID 20527) exited with exit code 1
LOG: terminating any other active server processes
LOG: database system was interrupted while in recovery at log time 2010-10-02 20:57:12 IST   <-- second attempt was made
HINT: If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.
LOG: entering standby mode
LOG: restored log file "0001002500EA" from archive
LOG: redo starts at 25/EA87BA18
FATAL: too many KnownAssignedXids
CONTEXT: xlog redo insert: rel 1663/16399/303892827; tid 1503/119
LOG: startup process (PID 20693) exited with exit code 1
LOG: terminating any other active server processes

I confess that I have not RT(Fine)M yet. Regds Rajesh Kumar Mallah.
Re: [GENERAL] schema proxying virtual database
Thanks, that sounds like a really easy & neat solution. On Sun, Sep 13, 2009 at 11:02 PM, Stefan Kaltenbrunner wrote: > Rajesh Kumar Mallah wrote: > >> Dear List, >> >> There are many opensource applications that support postgresql >> (eg, gforge, bricolage, dspam ..) but do not use schemas (namespaces), >> as a result of which you are forced to use/create a new database and lose >> the >> advantage of linking the application data with your existing database, >> as no cross-database queries can be done in PG. >> >> >> my question is, is it a feasible idea to have some special kind of >> database >> in the postgresql cluster that mimics a schema of an existing database. >> >> say rt3 is a special database that links to existing maindb database's rt3 >> namespace; then any DDL / DML done to the public schema of the rt3 database >> is redirected to the rt3 namespace of the existing database maindb's rt3 >> schema. >> > > well in most cases it is enough to give those apps their own user and > set the default search_path for that user to its own schema. That way > you usually don't have to modify the app at all and still get it (more or > less) running in its own schema. > > > Stefan >
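Stefan's suggestion, sketched concretely (the role name, schema name, and password below are hypothetical, not from the thread):

```sql
-- Give the app its own schema and role, and pin the role's search_path
-- so the unmodified app transparently works inside its own schema.
CREATE SCHEMA rt3;
CREATE ROLE rt3_user LOGIN PASSWORD 'secret';
GRANT ALL ON SCHEMA rt3 TO rt3_user;
ALTER ROLE rt3_user SET search_path TO rt3;
```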
[GENERAL] schema proxying virtual database
Dear List, There are many open-source applications that support PostgreSQL (e.g. gforge, bricolage, dspam ...) but do not use schemas (namespaces), as a result of which you are forced to use/create a new database and lose the advantage of linking the application data with your existing database, since no cross-database queries can be done in PG. My question is: is it a feasible idea to have some special kind of database in the PostgreSQL cluster that mimics a schema of an existing database? Say rt3 is a special database that links to the existing database maindb's rt3 namespace; then any DDL/DML done to the public schema of the rt3 database is redirected to the rt3 schema of the existing database maindb. regds mallah.
[GENERAL] procedural languages in 7.4.6
Hi, In 7.4.6 there is only plpgsql under pl, whereas 7.4.5 includes it in the main distribution. Can anyone tell from where plperl for 7.4.6 can be got? Regds mallah. -- regds Mallah. Rajesh Kumar Mallah +---+ | Tradeindia.com (3,11,246) Registered Users | | Indias' Leading B2B eMarketPlace | | http://www.tradeindia.com/ | +---+ ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] error moving table to tablespace (8.0 beta win32 )
Christian Traber wrote: Hi! Just playing with tablespaces... - moved an existing table to a new tablespace - tried to move it back to the default tablespace (ALTER TABLE accounts SET TABLESPACE pg_default;). Got the following error in the logfile: ERROR: could not create relation 1663/317186/317191: Permission denied. Any suggestions?

Are you able to create new tables in the default tablespace? It looks like a permission issue. Did you initdb a fresh folder? See if the owner of $PGDATA/data/base is the user running the postmaster (usually postgres). It works fine for me, though. Regds mallah

test=# \d t_b
 Table "public.t_b"
 Column | Type    | Modifiers
--------+---------+-----------
 a      | integer |
Tablespace: "space"

test=# ALTER TABLE t_b SET TABLESPACE pg_default;
ALTER TABLE
test=# SELECT version();
 PostgreSQL 8.0.0beta1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.2 (Mandrake Linux 10.0 3.3.2-6mdk)
(1 row)

test=#

Best regards, Christian
Re: [GENERAL] PostgreSQL 8.0.0 Officially Goes Beta
The beta link under ftp://ftp3.us.postgresql.org/pub/postgresql/ and possibly all mirrors leads to nowhere. I guess many people would click there. Regds mallah. Gaetano Mendola wrote: Marc G. Fournier wrote: After almost 9 months of development, the PostgreSQL Global Development Group is proud to announce that development on PostgreSQL 8.0.0 has now finished, and is ready for some serious testing. I submitted the news to www.osnews.com, do you think that is too much to submit it to slashdot? Just to avoid being slashdotted... Regards Gaetano Mendola
Re: [GENERAL] help required
Ramesh Yeligar wrote: Hi, We have been using pgsql for our retail business; now, due to a hard drive crash, the database is corrupted and we are unable to start the pgsql database. Please help me if you know any commands or tools to recover this database.

What do you have in hand, like the PGDATA folder, pg_xlog? Do you have the database backups? Regds Mallah.

Thanks, Ramesh
[GENERAL] alter table cascade does not give notice about dropped indexes
Hi, Looks like ALTER TABLE does not tell about the indexes it dropped. PG version: 7.4.3. Regds mallah.

tradein_clients=# \d general.membership_status
 Table "general.membership_status"
 Column             | Type                  | Modifiers
--------------------+-----------------------+------------------------
 userid             | integer               | not null
 profile_id         | integer               | not null
 product_id         | integer               | not null
 num_inq            | integer               | default 0
 listing            | character varying(50) |
 num_leads_featured | integer               | default 0
 num_leads_pic      | integer               | default 0
 deleted            | boolean               | not null default false
Indexes: "user_profile_product_id" unique, btree (userid, profile_id, product_id) WHERE (deleted IS FALSE)

tradein_clients=# ALTER TABLE general.membership_status drop column profile_id cascade;
NOTICE: drop cascades to rule _RETURN on view active_membership_status
NOTICE: drop cascades to view active_membership_status
ALTER TABLE
Time: 992.921 ms

tradein_clients=# \d general.membership_status
 Table "general.membership_status"
 Column             | Type                  | Modifiers
--------------------+-----------------------+------------------------
 userid             | integer               | not null
 product_id         | integer               | not null
 num_inq            | integer               | default 0
 listing            | character varying(50) |
 num_leads_featured | integer               | default 0
 num_leads_pic      | integer               | default 0
 deleted            | boolean               | not null default false

tradein_clients=#
Re: [GENERAL] case for lock_timeout
Tom Lane wrote: <[EMAIL PROTECTED]> writes: I feel lock_timeout would be a nice feature. Let me know what the solution to the above problem would be from the existing set of features. AFAICS statement_timeout would work just fine for that; or you could use NOWAIT.

OK, you mean we should put a statement timeout around statements that potentially lock the table exclusively, e.g.:

OTHER SQLS 1
set statement_timeout =
DROP TABLE table_name;
set statement_timeout = 0;
OTHER SQLS 2

BTW, does DROP TABLE or ALTER TABLE have any other reason to time out, other than waiting for a lock? In the former case the query will get cancelled for an invalid reason. Regds mallah. regards, tom lane
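Spelling out the two alternatives Tom mentions (the 2000 ms value is my own illustrative choice; the NOWAIT form assumes a release where LOCK TABLE supports NOWAIT, 8.0 onward):

```sql
-- 1. Bound how long the DDL may wait for its exclusive lock:
SET statement_timeout = 2000;   -- milliseconds, illustrative value
DROP TABLE table_name;
SET statement_timeout = 0;      -- restore: no limit

-- 2. Fail immediately if the lock is unavailable, instead of waiting:
BEGIN;
LOCK TABLE table_name IN ACCESS EXCLUSIVE MODE NOWAIT;
DROP TABLE table_name;
COMMIT;
```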
[GENERAL] hugetlb feature linux 2.6 kernel
Does PostgreSQL take advantage of the hugetlb feature of Linux kernel 2.6? http://otn.oracle.com/pub/notes/technote_rhel3.html regds mallah.
Re: [GENERAL] Functions in CHECK constraint not getting dumped before
Greetings! The only issue that I noticed now with pg_dump version 7.5dev is that CREATE SCHEMA statements are not dumped before the user definitions which have their search_path set by the ALTER USER command, e.g.: ALTER USER arvind SET search_path TO 'erp'; ERROR: schema "erp" does not exist. This is relatively harmless to me at least, but the point is to have pg_dump work without requiring any manual editing of the dumpfiles :) Regds Mallah. Rajesh Kumar Mallah wrote: Rajesh Kumar Mallah wrote: Tom Lane wrote: It is found that pg_dump does not dump functions referred to in CHECK constraint definitions before dumping the table definition. As a result the tables do not get restored due to lack of the defined functions. Is it something that will be worked upon in future? This is fixed in CVS tip. Yes, it did solve that problem. Another problem that I am facing is that during pg_dump I get warnings like: This problem was also solved. The problem was that the corruption was in template1 and I was looking at a different database. I finally reset template1 using the methods described in "Adventures in PostgreSQL, Episode 1: Restoring a Corrupted Template1 using Template0", May 2002, Josh Berkus. Regds mallah.
pg_dump: WARNING: owner of function "txtidx_in" appears to be invalid
pg_dump: WARNING: owner of function "txtidx_out" appears to be invalid
pg_dump: WARNING: owner of function "qtxt_in" appears to be invalid
pg_dump: WARNING: owner of function "qtxt_out" appears to be invalid
pg_dump: WARNING: owner of function "mqtxt_in" appears to be invalid
pg_dump: WARNING: owner of function "gtxtidx_in" appears to be invalid
pg_dump: WARNING: owner of function "gtxtidx_out" appears to be invalid
pg_dump: WARNING: owner of function "gtxtidx_consistent" appears to be invalid
pg_dump: WARNING: owner of function "gtxtidx_compress" appears to be invalid
pg_dump: WARNING: owner of function "gtxtidx_decompress" appears to be invalid
pg_dump: WARNING: owner of function "gtxtidx_penalty" appears to be invalid
pg_dump: WARNING: owner of function "gtxtidx_picksplit" appears to be invalid
pg_dump: WARNING: owner of function "gtxtidx_union" appears to be invalid
pg_dump: WARNING: owner of function "gtxtidx_same" appears to be invalid

And in the dump there are lines like:

REVOKE ALL ON FUNCTION txtidx_in(opaque) FROM PUBLIC;
REVOKE ALL ON FUNCTION txtidx_in(opaque) FROM "";
SET SESSION AUTHORIZATION "103";   <--
GRANT ALL ON FUNCTION txtidx_in(opaque) TO PUBLIC;
RESET SESSION AUTHORIZATION;

Can you please tell me how to get rid of the warnings and the invalid id "103"? In pg_proc I don't see any reference to SYSID 103 in the column proowner or the acl columns. Any other places where I should look? Regds Mallah. regards, tom lane
Re: [GENERAL] Functions in CHECK constraint not getting dumped before
Rajesh Kumar Mallah wrote: Tom Lane wrote: It is found that pg_dump does not dump function referred in CHECK constraint definations before dumping the table defination . As a result the tables do not get restored due to lack of defined functions. Is it something that will be worked upon in future ? This is fixed in CVS tip. Yes it did solve that problem. another problem that i am facing is during pg_dump i get warning like: This problem was also solved . The problem was that the corruption was in template1 and i was looking at a different database. i finally reset the template1 by using methods described in Adventures in PostgreSQL Episode 1: Restoring a Corrupted Template1 using Template0 , May 2002 Josh Berkus. Regds mallah. pg_dump: WARNING: owner of function "txtidx_in" appears to be invalid pg_dump: WARNING: owner of function "txtidx_out" appears to be invalid pg_dump: WARNING: owner of function "qtxt_in" appears to be invalid pg_dump: WARNING: owner of function "qtxt_out" appears to be invalid pg_dump: WARNING: owner of function "mqtxt_in" appears to be invalid pg_dump: WARNING: owner of function "gtxtidx_in" appears to be invalid pg_dump: WARNING: owner of function "gtxtidx_out" appears to be invalid pg_dump: WARNING: owner of function "gtxtidx_consistent" appears to be invalid pg_dump: WARNING: owner of function "gtxtidx_compress" appears to be invalid pg_dump: WARNING: owner of function "gtxtidx_decompress" appears to be invalid pg_dump: WARNING: owner of function "gtxtidx_penalty" appears to be invalid pg_dump: WARNING: owner of function "gtxtidx_picksplit" appears to be invalid pg_dump: WARNING: owner of function "gtxtidx_union" appears to be invalid pg_dump: WARNING: owner of function "gtxtidx_same" appears to be invalid And in the dump there are lines like: REVOKE ALL ON FUNCTION txtidx_in(opaque) FROM PUBLIC; REVOKE ALL ON FUNCTION txtidx_in(opaque) FROM ""; *SET SESSION AUTHORIZATION "103"; <-* GRANT ALL ON FUNCTION txtidx_in(opaque) TO PUBLIC; RESET 
SESSION AUTHORIZATION; can you please tell me how to get rid of the warnings and the invalid id "103"? In pg_proc I don't see any reference to SYSID 103 in the columns proowner or the acl columns. Any other places where I should look? Regds Mallah. regards, tom lane
[GENERAL] Functions in CHECK constraint not getting dumped before tables.
Greetings! It is found that pg_dump does not dump functions referred to in CHECK constraint definitions before dumping the table definition. As a result the tables do not get restored, due to lack of the defined functions. Is it something that will be worked upon in future? regds mallah.
Re: [GENERAL] passing array as argument and returning an array in
Jenny Zhang wrote: Jenny, although this is old, I find it worth mentioning Tom's comment on it; I hit your comment while facing a similar issue. The cited example is pretty iffy since it assumes that the valid array entries are all > 0. In recent PG versions you can use the array_upper and array_lower functions instead:

for i in array_lower(a,1) .. array_upper(a,1) loop
    -- do something with a[i]
end loop;

regards, tom lane

I got this when I was searching for something else. I will forward this to you. http://archives.postgresql.org/pgsql-general/2003-11/msg00852.php

CREATE OR REPLACE FUNCTION foo(integer[]) RETURNS int AS '
DECLARE
    a     alias for $1;
    index integer := 1;
    total integer := 0;
BEGIN
    WHILE a[index] > 0 LOOP
        total := total + a[index];
        index := index + 1;
    END LOOP;
    RETURN total;
END;
' LANGUAGE 'plpgsql';

test=> select foo('{1,2}');
 foo
-----
   3
(1 row)

On Tue, 2003-12-16 at 03:25, K. Deepa wrote: Hi all, I am using postgresql 7.4. How to handle arrays in plpgsql? How can I pass an array? Is it possible to retrieve values from an array by indexing it, like: argument: '{1,2,3}'; return value: varchar array; variables: a alias for $1; b _varchar. Usage: b[1] = a[1]; b[2] = a[2]; return b. Is it possible? TIA,
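Putting Tom's loop into a complete function for the archives (a sketch; the name sum_array is mine, and the dollar quoting and named parameter assume 8.0+):

```sql
CREATE OR REPLACE FUNCTION sum_array(a integer[]) RETURNS integer AS $$
DECLARE
    total integer := 0;
    i     integer;
BEGIN
    -- array_lower/array_upper bound the loop, so entries need not be > 0
    FOR i IN array_lower(a, 1) .. array_upper(a, 1) LOOP
        total := total + a[i];
    END LOOP;
    RETURN total;
END;
$$ LANGUAGE plpgsql;

-- select sum_array('{1,2,-3}');  returns 0
```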
[GENERAL] Silent truncation of numeric values.
Hi, How come numeric(10,2) is accepting the value 10.011 and truncating the .001 part? Shouldn't it be complaining? I am not sure. Regds Mallah.

tradein_clients=# CREATE TABLE t_a ( a numeric(10,2) );
CREATE TABLE
tradein_clients=# \d t_a
 Table "public.t_a"
 Column | Type          | Modifiers
--------+---------------+-----------
 a      | numeric(10,2) |
tradein_clients=# INSERT INTO t_a values(10.01);
INSERT 50817524 1
tradein_clients=# INSERT INTO t_a values(10.011);
INSERT 50817525 1
tradein_clients=# SELECT * from t_a;
   a
-------
 10.01
 10.01
(2 rows)

tradein_clients=#
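For what it's worth, the behaviour here is rounding rather than truncation, which a value above the .005 midpoint makes visible (a sketch to paste into psql; the table name is mine):

```sql
CREATE TABLE t_round ( a numeric(10,2) );
INSERT INTO t_round VALUES (10.011);  -- stored as 10.01 (rounded down)
INSERT INTO t_round VALUES (10.016);  -- stored as 10.02 (rounded up, so not a truncation)
SELECT * FROM t_round;
```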
[GENERAL] 7.4RC2 --> 7.4
Is dump --> initdb --> reload required for RC2 --> 7.4? Regds Mallah.
[GENERAL] DOMAIN usability
Hi, I think one of the usage patterns of DOMAINs is to have size specifications and validity constraints in one place, for easy administration of the database. E.g., instead of declaring email to be varchar(30) in tens of tables and putting a CHECK constraint for the presence of '@', we could declare:

CREATE DOMAIN email_type varchar(30) CHECK ( value ~* '@' );

And users could use "email_type" in our CREATE TABLEs. There are two main issues (problems):

1. Suppose varchar(30) turns out to be too small one day and we want to increase it to varchar(100); what do I do? a) Create a new domain, b) apply all the constraints on the new domain, c) create a new column in each of the tables and copy the old column, d) drop the old domain cascaded. Any other more elegant method?

2. It's difficult to see all the constraint defs on a domain. information_schema.domain_constraints does not have the definitions; just the names are present.

Regards Mallah.
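The email_type example above, written out so it can be tried directly (the customers table is hypothetical):

```sql
CREATE DOMAIN email_type varchar(30) CHECK ( value ~* '@' );

CREATE TABLE customers (
    id    serial PRIMARY KEY,
    email email_type
);

INSERT INTO customers (email) VALUES ('someone@example.com');  -- accepted
INSERT INTO customers (email) VALUES ('not-an-email');         -- rejected by the CHECK
```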
[GENERAL] More Praise! 7.4
Its really nice to see/read the pretty printing of VIEW definations!! in 7.3 it was a nightmare to see definations of long views. Kudos to the TEAM. Regds Mallah tradein_clients=# \d XYZ View XYZ Column | Type | Modifiers -+---+--- id | integer | branch | character varying | source | character varying | co_name | character varying | address | text | city | character varying | pin_code | character varying | phone | character varying | fax | character varying | contact | character varying | email | character varying | size | character varying | status | boolean | category_id | integer | keywords | text | edition | smallint | sno | integer | View definition: SELECT eyp_listing.list_id AS id, eyp_listing.branch, 'EYP'::character varying AS source, eyp_listing.co_name, ((( CASE WHEN eyp_listing.address1 IS NOT NULL THEN eyp_listing.address1 WHEN '' IS NOT NULL THEN ''::character varying ELSE NULL::character varying END::text || ' '::character varying::text) || CASE WHEN eyp_listing.address2 IS NOT NULL THEN eyp_listing.address2 WHEN '' IS NOT NULL THEN ''::character varying ELSE NULL::character varying END::text) || ' '::character varying::text) || CASE WHEN eyp_listing.address3 IS NOT NULL THEN eyp_listing.address3 WHEN '' IS NOT NULL THEN ''::character varying ELSE NULL::character varying END::text AS address, eyp_listing.city, eyp_listing.pin_code, eyp_listing.phone, eyp_listing.fax, eyp_listing.contact, eyp_listing.email, eyp_listing.size, true AS status, eyp_listing.category_id, eyp_listing.keywords, eyp_listing.edition, eyp_listing.sno FROM ONLY public.eyp_listing WHERE (eyp_listing.category_id = 1142 OR eyp_listing.category_id = 1143 OR eyp_listing.category_id = 1145 OR eyp_listing.category_id = 1259 OR eyp_listing.category_id = 1161) AND eyp_listing.amount > 0 AND eyp_listing.status::text = 'a'::character varying::text AND eyp_listing.email IS NOT NULL AND eyp_listing.email::text !~~* '%.trade-india.com'::text UNION ALL SELECT iid_listing.list_id AS id, 
iid_listing.branch, 'IID'::character varying AS source, iid_listing.co_name, ((( CASE WHEN iid_listing.address1 IS NOT NULL THEN iid_listing.address1 WHEN '' IS NOT NULL THEN ''::character varying ELSE NULL::character varying END::text || ' '::character varying::text) || CASE WHEN iid_listing.address2 IS NOT NULL THEN iid_listing.address2 WHEN '' IS NOT NULL THEN ''::character varying ELSE NULL::character varying END::text) || ' '::character varying::text) || CASE WHEN iid_listing.address3 IS NOT NULL THEN iid_listing.address3 WHEN '' IS NOT NULL THEN ''::character varying ELSE NULL::character varying END::text AS address, iid_listing.city, iid_listing.pin_code, iid_listing.phone, iid_listing.fax, iid_listing.contact, iid_listing.email, iid_listing.size, true AS status, iid_listing.category_id, iid_listing.keywords, iid_listing.edition, iid_listing.sno FROM ONLY public.iid_listing WHERE (iid_listing.category_id = 1142 OR iid_listing.category_id = 1143 OR iid_listing.category_id = 1145 OR iid_listing.category_id = 1259 OR iid_listing.category_id = 1161) AND iid_listing.amount > 0 AND iid_listing.status::text = 'a'::character varying::text AND iid_listing.email::text !~~* '%.trade-india.com'::text AND iid_listing.email IS NOT NULL UNION ALL SELECT DISTINCT ON (company_id) company_id AS id, trade_india_branch AS branch, 'BRANDING'::character varying AS source, comp_name AS co_name, office_addr AS address, city, pincode AS pin_code, phone_no AS phone, fax_no AS fax, key_exec AS contact, email, 'BRANDING' AS size, false AS status, category_id, serv_prov AS keywords, branding_master.edition, 0 AS sno FROM ONLY public.branding_master JOIN ONLY public.branding_sp_category USING (company_id) WHERE (category_id = 1142 OR category_id = 1143 OR category_id = 1145 OR category_id = 1259 OR category_id = 1161) AND old_company_id = 0 AND status::text = 'a'::character varying::text AND email::text !~~* '%.trade-india.com'::text AND email IS NOT NULL ORDER BY company_id; -- Rajesh 
Kumar Mallah, Infocom Network Limited, New Delhi phone: +91(11)6152172 (221) (L) ,9811255597 (M) Visit http://www.trade-india.com , India's Leading B2B eMarketplace.
Re: [GENERAL] DOMAIN usability
Peter Eisentraut wrote: Rajesh Kumar Mallah writes: *1.* Suppose varchar(30) turns out to be too small one day and we want to increase it to varchar(100); what do I do? This is no different from the problem of changing a column type in place. It's still being worked on. Yes, I realize so. But what could be wrong in principle with allowing an increase of storage size only, e.g. varchar(30) to varchar(100), not integer to varchar(100) etc.? I remember there was already a long thread of discussion on it. BTW: searching on archives.postgresql.org takes ages; is it using FTS? *2.* It's difficult to see all the constraint defs on a domain. information_schema.domain_constraints does not have the definitions; just the names are present. You need to join domain_constraints and check_constraints. Thanks. Regds Mallah. -- Rajesh Kumar Mallah, Infocom Network Limited, New Delhi phone: +91(11)6152172 (221) (L) ,9811255597 (M) Visit http://www.trade-india.com , India's Leading B2B eMarketplace.
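The join Peter describes, sketched as a query (the WHERE clause names the hypothetical email_type domain from the earlier message):

```sql
-- Show each domain's CHECK constraints together with their definitions
SELECT dc.domain_schema, dc.domain_name, cc.check_clause
  FROM information_schema.domain_constraints dc
  JOIN information_schema.check_constraints  cc
    ON cc.constraint_schema = dc.constraint_schema
   AND cc.constraint_name   = dc.constraint_name
 WHERE dc.domain_name = 'email_type';
```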
Re: [GENERAL] Problem with FKEYS
Tom Lane wrote: Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes: How come pgsql 7.3.4 was allowing me to delete the master record while referencing records were present in the slave table? The only explanation I can think of is that the referencing row shown as being in user_services was actually in a child table --- foreign keys don't work in inheritance hierarchies at the moment. No, that is not the case; they are plain tables. If that's not it, can you provide a self-contained example? Could not replicate it with newly created tables; only that set of tables has the problem. I am ready to provide any required info. Regds Mallah. regards, tom lane
Re: [GENERAL] procpid in pg_stat_activity but no correspoding backend
Jan Wieck wrote: Rajesh Kumar Mallah wrote: Hi, I noticed that an entry in pg_stat_activity was existing for a long duration (> 2 mins) while its backend (the process corresponding to that procpid) was not running. How can that be possible? version: 7.3.4 There was a bug that caused a crashing backend to hang around in the stats until the slot is reused, because the message telling that the backend terminates is never sent to the collector daemon. I'm not sure if that got fixed for 7.4. Thanks for explaining. Regds Mallah. Jan
[GENERAL] Problem with FKEYS
How come pgsql 7.3.4 was allowing me to delete the master record while referencing records were present in the slave table? The problem was detected when the database was migrated to 7.4 and deletions were being refused. Regards Mallah

tradein_clients=# \d user_services
 Table "public.user_services"
 Column     | Type    | Modifiers
------------+---------+-----------
 userid     | integer | not null
 service_id | integer | not null
Indexes: user_services_key unique btree (userid, service_id),
         user_services_service_id btree (service_id)
Foreign Key constraints: $2 FOREIGN KEY (userid) REFERENCES users(userid) ON UPDATE NO ACTION ON DELETE NO ACTION,
                         $1 FOREIGN KEY (service_id) REFERENCES services_master(service_id) ON UPDATE NO ACTION ON DELETE NO ACTION

tradein_clients=# SELECT * from user_services where userid=276720 ;
 userid | service_id
--------+------------
 276720 |          1
(1 row)

tradein_clients=# begin work; DELETE from users where userid=276720 ; rollback;
BEGIN
DELETE 1
ROLLBACK

tradein_clients=# SELECT version();
 PostgreSQL 7.3.4 on i686-pc-linux-gnu, compiled by GCC 2.96
(1 row)

tradein_clients=#
Re: [GENERAL] Problem with create index
On Tuesday 11 Nov 2003 2:00 pm, Stephan Szabo wrote:
> On Tue, 11 Nov 2003, Rajesh Kumar Mallah wrote:
>> Hmmm... am I doing anything really silly?
>
> Functional indexes on 7.3 and earlier only allow a function over one or more columns (which would disallow the below, as it is a function over a function over a column). The usual workaround would be to make an immutable function that does both functions on its argument, something like:
>
> create function lower_btrim(text) returns text as 'select lower(btrim($1));' language 'sql' immutable;
> create index profile_master2 on profile_master(lower_btrim(email));

Thanks for the workaround.

> In 7.4 I believe the below should work.
>
>> tradein_clients=# CREATE INDEX profile_master2 on profile_master (lower( btrim(email) ) );
>> ERROR:  parser: parse error at or near "(" at character 61

Yes, it does work:

template1=# CREATE INDEX profile_master2 on t_a(lower(btrim(email)));
CREATE INDEX
template1=# CREATE unique INDEX profile_master3 on t_a(lower(btrim(email)));
CREATE INDEX

Regds
Mallah.
[GENERAL] Problem with create index
Hmmm... am I doing anything really silly?

tradein_clients=# CREATE INDEX profile_master2 on profile_master (lower( btrim(email) ) );
ERROR:  parser: parse error at or near "(" at character 61
tradein_clients=#

Regds
mallah.
[GENERAL] ALTER TABLE ADD UNIQUE .....
Hi,

It looks like ADD UNIQUE( some_func( some_field ) ) is not supported with ADD CONSTRAINT; the only way to add the constraint is via a UNIQUE INDEX. Is this a bug or intended behaviour?

Regds
Mallah.

In 7.3.4:

tradein_clients=# ALTER TABLE general.email_master ADD CONSTRAINT email_master_uniq_lower_btrim_email UNIQUE( lower(email) );
ERROR:  parser: parse error at or near "(" at character 99

tradein_clients=# CREATE UNIQUE INDEX email_master_uniq_lower_btrim_email on general.email_master( lower(email) );
ERROR:  Cannot create unique index. Table contains non-unique values

Well, that SQL failed, but it was parsed successfully.

Same problem in pgsql 7.4 RC2:

template1=# alter table t_a add constraint "a" UNIQUE (email);
NOTICE:  ALTER TABLE / ADD UNIQUE will create implicit index "a" for table "t_a"
ALTER TABLE
template1=# alter table t_a add constraint "b" UNIQUE (lower(email));
ERROR:  syntax error at or near "(" at character 54
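Aside: the "Table contains non-unique values" error above means the data itself blocks the unique index. A sketch for locating the offending rows first, using the general.email_master table from the message (the column names are taken from the post; adapt as needed):

```sql
-- Find values that collapse to the same lower(email) more than once;
-- each group returned here must be deduplicated before the unique
-- expression index can be created.
SELECT lower(email) AS normalized_email, count(*) AS occurrences
FROM general.email_master
GROUP BY lower(email)
HAVING count(*) > 1
ORDER BY occurrences DESC;
```

Once the duplicates are resolved, the CREATE UNIQUE INDEX from the message should succeed.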
[GENERAL] PL/PGSQL help for getting number of rows matched.
Hi,

We need to implement the following logic efficiently:

SELECT * from some_table where [ Query 1 ]
IF rows_matched = 1 THEN
    use the single row that matched.
ELSE
    loop through the results of [ Query 1 ].
END IF;

Currently I am doing a select count(*) at the top to get rows_matched, and repeating the same query in both branches of the IF to get the data of the matching rows. I have tried GET DIAGNOSTICS ROW_COUNT, but for SELECTs it returns 0 or 1 based on matching. I am sure there exist better methods. Kindly post a link to better documentation of PL/pgSQL, or point out the section in the standard docs that discusses this issue.

Regds
Mallah.
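One way to avoid running [ Query 1 ] twice is to iterate over it once in a FOR loop and count the rows yourself. A minimal sketch, assuming a hypothetical table some_table with a column col (names invented for illustration; the real query and per-row work go inside the loop):

```sql
-- Sketch: run the query once, counting matches as we go; when exactly
-- one row matches, the loop body simply executes once, so no separate
-- count(*) pass or repeated query is needed.
CREATE FUNCTION process_matches(p_val integer) RETURNS integer AS '
DECLARE
    rec       RECORD;
    n_matched integer := 0;
BEGIN
    FOR rec IN SELECT * FROM some_table WHERE col = p_val LOOP
        n_matched := n_matched + 1;
        -- per-row work on rec goes here
    END LOOP;
    RETURN n_matched;
END;
' LANGUAGE 'plpgsql';
```

If the single-row case truly needs different handling, a flag set on the first iteration can defer that work until the loop finishes and the final count is known.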
Re: [GENERAL] Question on Select
select ref_code from tab_b where not exists (select * from tab_a where item_id = tab_b.item_id);

Make sure you have an index on item_id in tab_a. The above SQL is quite efficient; the other method is to use a left join.

Alex wrote:
> Hi, I have two tables:
>   Table A: item_id
>   Table B: item_id, ref_code
> and I want to list all ref_codes in table B that are not referenced by Table A. Table A has about 3 million records, table B 200. What is the best way to do that?
> Thanks
> Alex
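The left-join method mentioned above, sketched with the same tab_a/tab_b names: rows of tab_b with no partner in tab_a come back with NULLs on the tab_a side, which the IS NULL filter keeps.

```sql
-- Anti-join via LEFT JOIN: keep only tab_b rows whose item_id
-- has no matching row in tab_a.
SELECT b.ref_code
FROM tab_b b
LEFT JOIN tab_a a ON a.item_id = b.item_id
WHERE a.item_id IS NULL;
```

Both forms benefit from the index on tab_a.item_id; the planner may produce similar plans for either.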
Re: [GENERAL] Problem in restoring data [SOLVED ]
It did solve my problem; now it is dumped like:

SET search_path = archives, pg_catalog;
SET
CREATE INDEX site_search_gist ON site_search USING gist (keywordidx public.gist_txtidx_ops);
CREATE INDEX

Thanks.
Regds
mallah.

Tom Lane wrote:
> Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
>> Thanks for the explanation; shall do that. Please tell me how to fetch ruleutils.c from the 7.3 branch head (I do not know the cvs commands O:-) ). Thanks again.
>
> The easiest way to get it is from the cvsweb interface:
> http://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/backend/utils/adt/ruleutils.c
> Shift-click on the "download" link for the revision you want (1.124.2.2) and you've got it.
>
> regards, tom lane
Re: [GENERAL] Problem in restoring data
Tom Lane wrote:
> Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
>> I face the following problem in transferring data from pgsql 7.3.4 to pgsql 7.4 RC1. I am piping the output of pg_dumpall from 7.3 to 7.4 running on a different port. The problem is that there is a gist index on a txtidx-type column in a non-public schema, and when search_path does not include public the index cannot be created.
>
> There is a post-7.3.4 bug fix in the 7.3 branch for this mistake:
>
> 2003-10-02 18:25 tgl
>     * src/backend/utils/adt/ruleutils.c (REL7_3_STABLE): When dumping CREATE INDEX, must show opclass name if the opclass isn't in the schema search path. Otherwise pg_dump doesn't correctly dump scenarios where a custom opclass is created in 'public' and then used by indexes in other schemas.
>
> Since the bug is in the backend and not pg_dump, you can't escape it by using the 7.4 version of pg_dump against the 7.3 server.

OK; I read somewhere that it's always better to use the more recent pg_dump while migrating.

> You could recompile the server using the 7.3-branch-head version of ruleutils.c, though.

Thanks for the explanation; shall do that. Please tell me how to fetch ruleutils.c from the 7.3 branch head (I do not know the cvs commands O:-) ). Thanks again.

regds
mallah.

> regards, tom lane
Re: [GENERAL] Problem in restoring data
Yes, I may even drop the indexes, but is it quite usual to take these workarounds when doing major upgrades?

Regds
Mallah.

Alvaro Herrera wrote:
> On Mon, Nov 10, 2003 at 01:36:20AM +0530, Rajesh Kumar Mallah wrote:
>> PS: I do not want to edit the (split) dump file; my database is ~7 GB.
>
> You can "edit" the dump file by means of
>
>   perl -pi -e 's/^CREATE INDEX.*gist.*//' dump-file
>
> or something like that... (this isn't a solution to your problem, but may serve as a workaround)
Re: [GENERAL] CREATE TABLE with REFERENCE
Stephan Szabo wrote:
> On Mon, 28 Jul 2003, kay-uwe.genz wrote:
>> I've a little problem with two tables and FOREIGN KEYs. I've read about this a long time ago, but don't remember where. Well, I hope you can help me. I've created two TABLEs, "countries" and "cities". "countries" has a column "capital" that REFERENCEs "cities"; "cities" has a column "country" that REFERENCEs "countries", where I save the country the city is placed in. And now PG couldn't create the TABLEs, because the referenced table doesn't exist at the time of creation. Is there another method of creating them, other than ALTER TABLE on the first table after the second exists?
>
> Not really. That's the correct way to make the constraints.
>
>> Second question. Is there a method of INSERTing INTO both tables VALUES without grouping them in the same transaction?
>
> You mean insert a row in each table that acts as the pk row for the other? You could fake it by inserting one with a NULL for the fk column (unless they're both NOT NULL), inserting the other, and then updating the first. Otherwise I think you need to be running in a single transaction (although they could be grouped inside a function or as a trigger, for example).

But aren't foreign key constraints deferrable inside transactions? I vaguely remember doing that. Isn't that the best way of doing it? [ if at all it works :-) ]

regds
Mallah.
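The deferrable-constraint idea raised above can be sketched as follows, using simplified versions of the countries/cities tables from the thread (column names invented for illustration). With DEFERRABLE INITIALLY DEFERRED, the circular foreign keys are only checked at COMMIT, so both rows can be inserted inside one transaction in either order:

```sql
-- Sketch: mutually referencing tables with deferred FK checks.
CREATE TABLE countries (
    country_id integer PRIMARY KEY,
    capital    integer
);
CREATE TABLE cities (
    city_id integer PRIMARY KEY,
    country integer REFERENCES countries (country_id)
              DEFERRABLE INITIALLY DEFERRED
);
-- The back-reference is added after both tables exist.
ALTER TABLE countries
    ADD CONSTRAINT countries_capital_fkey FOREIGN KEY (capital)
    REFERENCES cities (city_id) DEFERRABLE INITIALLY DEFERRED;

BEGIN;
INSERT INTO countries VALUES (1, 10);  -- capital city 10 doesn't exist yet
INSERT INTO cities    VALUES (10, 1);  -- now it does
COMMIT;                                -- both FKs are checked here
```

This still requires a single transaction, so it answers the first question (avoiding the NULL-then-UPDATE dance) rather than the second.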
Re: [GENERAL] CREATE TABLE with REFERENCE
kay-uwe.genz wrote:
> Hi @ all, I've a little problem with two tables and FOREIGN KEYs. I've read about this a long time ago, but don't remember where. Well, I hope you can help me. I've created two TABLEs, "countries" and "cities". "countries" has a column "capital" that REFERENCEs "cities"; "cities" has a column "country" that REFERENCEs "countries", where I save the country the city is placed in. And now PG couldn't create the TABLEs, because the referenced table doesn't exist at the time of creation. Is there another method of creating them, other than ALTER TABLE on the first table after the second exists?

It's given in the documentation, though:

ALTER TABLE countries ADD CONSTRAINT "refer_city" FOREIGN KEY (capital) REFERENCES cities (city) ON UPDATE CASCADE;

Mind that city must be the pkey in cities for it to work. The other table can be done similarly.

> Second question. Is there a method of INSERTing INTO both tables VALUES without grouping them in the same transaction?

Hmm, not sure, but it could be interesting to experiment.

> regards
> Kay-Uwe
Re: [GENERAL] Making pgsql error messages more developers' friendly.
On Saturday 28 Jun 2003 8:50 pm, Jan Wieck wrote:
> Rajesh Kumar Mallah wrote:
>> Hi Folks,
>> Shouldn't messages like
>>   ERROR: value too long for type character varying(5)
>
> Maybe, yes. It's just not that trivial to do.
>
>> MySQL is better in these small things. I think in 7.4dev fkey violations are reported better; can't such behaviour be extended to other kinds of exceptions?
>
> We are working on it. But pointing to MySQL doesn't help a bit. If you like MySQL better, then use MySQL instead

I am not a MySQL fan. I have been using PostgreSQL since version 7.0.3. Posting it here because it's a public forum and you made an attempt to paint me wrong here ;-)

Regds
Mallah.

> and don't bother with the side effects from the data type abstraction you actually bumped into.
>
> Sorry, I'm a bit tired of "MySQL does this ...", "MySQL is better here ..." and so on and so forth. No error message system, however good, can be used by the application programmer as a replacement for input data validation. Type checking, foreign keys, check constraints: they all are last lines of defense, so that a bug in the application or a missing input validation doesn't cause greater damage. But they are not a replacement.
>
> Jan

--
Rajesh Kumar Mallah, Project Manager (Development)
Infocom Network Limited, New Delhi
phone: +91(11)6152172 (221) (L), 9811255597 (M)
Visit http://www.trade-india.com, India's Leading B2B eMarketplace.
Re: [GENERAL] Making pgsql error messages more developers' friendly.
On Friday 27 Jun 2003 12:51 pm, Tom Lane wrote:
> "Nigel J. Andrews" <[EMAIL PROTECTED]> writes:
>> I was just looking at that fkey violation message yesterday and thinking how much better it would be to be able to see the offending value in the message. Is that what 7.4 shows?
>
> You mean like this?
>
> regression=# create table t1 (f1 int primary key);
> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 't1_pkey' for table 't1'
> CREATE TABLE
> regression=# create table t2 (f2 int references t1);
> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)
> CREATE TABLE
> regression=# insert into t2 values(42);
> ERROR: $1 referential integrity violation - key (f2)=(42) referenced from t2 not found in t1
> regression=#
>
> I'm still wondering how to rephrase this to fit in the recently-agreed-to message style guidelines. I think the (col names)=(values) part must go into errdetail, but I'm fuzzy beyond that. Comments?

Even this message would be a great relief. I am not in a position to comment; I will install 7.4dev soon and use it to find the improvements.

Regds
Mallah.

> regards, tom lane
Re: [GENERAL] Making pgsql error messages more developers' friendly.
On Friday 27 Jun 2003 12:14 pm, Tom Lane wrote:
> Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
>> Shouldn't messages like
>>   ERROR: value too long for type character varying(5)
>> indicate which column and table the server is talking about?
>
> Send a patch ;-)

I wish I could ;-)

> This is not that easy to do: the code that implements the constraint does not know what it is being applied to. In the general case it cannot know, because there may be no specific table/column it's being applied to --- consider "SELECT 'foobar'::char(5)". But if you have an idea how to give a more-specific error message when possible, let's see some code.
>
> (Possible plan: the recently-added error context mechanism might be able to do something for this...)

Thanks for explaining. PostgreSQL already exceeds our expectations in the things which really matter.

> regards, tom lane
[GENERAL] Making pgsql error messages more developers' friendly.
Hi Folks,

Shouldn't messages like

ERROR: value too long for type character varying(5)

indicate which column and table the server is talking about? Without that, we need to open the application source code, find the SQL, then describe the table to find which column is varchar(5); and if there are two columns of varchar(5), we keep wondering. Similarly, when unique indexes are violated, we do not know which key is causing it. MySQL is better in these small things. I think in 7.4dev fkey violations are reported better; can't such behaviour be extended to other kinds of exceptions?

Regds
Mallah.