Re: [GENERAL] How many fields in a table are too many
There is an article on normalisation at IBM developerWorks for all those who need a primer on table normalisation. I learnt this from, would you believe, dBase Mac by Jim Heid, when I was one of the two users in the world running that version in 1988-1989 =:-D

I recommend "Database Application Programming with Linux" (Wiley). There are some good table design chapters, even for non-Linux people. Good database design helps make PostgreSQL run fast.

Cheers

Tony Grant
--
www.tgds.net
Library management software toolkit, redhat linux on Sony Vaio C1XD, Dreamweaver MX with Tomcat and PostgreSQL

---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://archives.postgresql.org
[GENERAL] Making pgsql error messages more developers' friendly.
Hi Folks,

Shouldn't messages like

    ERROR: value too long for type character varying(5)

indicate which column and table the server is talking about? Without that, we need to open the application source code, find the SQL, then describe the table to find which column is varchar(5), and if there are two columns of varchar(5) we keep wondering which one. Similarly, when unique indexes are violated we do not know which key is causing it. MySQL is better in these small things.

I think in 7.4dev fkey violations are reported better; can't such behaviour be extended to other kinds of exceptions?

Regds
Mallah.

--
Rajesh Kumar Mallah, Project Manager (Development)
Infocom Network Limited, New Delhi
phone: +91(11)6152172 (221) (L), 9811255597 (M)
Visit http://www.trade-india.com , India's Leading B2B eMarketplace.
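[For readers who want to reproduce the ambiguity described above, a minimal sketch; the table and column names here are hypothetical, only the error text is from the thread:]

```sql
-- Two varchar(5) columns: the error does not say which one overflowed.
CREATE TABLE contacts (
    short_code varchar(5),
    country    varchar(5)
);

INSERT INTO contacts VALUES ('ABCDEF', 'IN');
-- ERROR: value too long for type character varying(5)
-- (no table or column name in the message)
```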
Re: [GENERAL] Making pgsql error messages more developers' friendly.
Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
> Shouldn't messages like
>   ERROR: value too long for type character varying(5)
> indicate which column and table the server is talking about?

Send a patch ;-)

This is not that easy to do: the code that implements the constraint does not know what it is being applied to. In the general case it cannot know, because there may be no specific table/column it's being applied to --- consider SELECT 'foobar'::char(5). But if you have an idea how to give a more-specific error message when possible, let's see some code. (Possible plan: the recently-added error context mechanism might be able to do something for this...)

regards, tom lane
Re: [GENERAL] Making pgsql error messages more developers' friendly.
On Fri, 27 Jun 2003, Rajesh Kumar Mallah wrote:
> ... Similarly, when unique indexes are violated we do not know which key
> is causing it. MySQL is better in these small things. I think in 7.4dev
> fkey violations are reported better; can't such behaviour be extended to
> other kinds of exceptions?

I was just looking at that fkey violation message yesterday and thinking how much better it would be to be able to see the offending value in the message. Is that what 7.4 shows?

--
Nigel J. Andrews
Re: [GENERAL] Making pgsql error messages more developers' friendly.
On Friday 27 Jun 2003 12:14 pm, Tom Lane wrote:
> Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
> > Shouldn't messages like
> >   ERROR: value too long for type character varying(5)
> > indicate which column and table the server is talking about?
>
> Send a patch ;-)

I wish I could ;-)

> This is not that easy to do: the code that implements the constraint
> does not know what it is being applied to. In the general case it cannot
> know, because there may be no specific table/column it's being applied
> to --- consider SELECT 'foobar'::char(5). But if you have an idea how to
> give a more-specific error message when possible, let's see some code.
> (Possible plan: the recently-added error context mechanism might be able
> to do something for this...)

Thanks for explaining. PostgreSQL already exceeds our expectations in the things which really matter.

> regards, tom lane

--
Rajesh Kumar Mallah, Project Manager (Development)
Infocom Network Limited, New Delhi
phone: +91(11)6152172 (221) (L), 9811255597 (M)
Visit http://www.trade-india.com , India's Leading B2B eMarketplace.
Re: [GENERAL] Making pgsql error messages more developers' friendly.
Nigel J. Andrews <[EMAIL PROTECTED]> writes:
> I was just looking at that fkey violation message yesterday and thinking
> how much better it would be to be able to see the offending value in the
> message. Is that what 7.4 shows?

You mean like this?

    regression=# create table t1 (f1 int primary key);
    NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 't1_pkey' for table 't1'
    CREATE TABLE
    regression=# create table t2 (f2 int references t1);
    NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)
    CREATE TABLE
    regression=# insert into t2 values(42);
    ERROR: $1 referential integrity violation - key (f2)=(42) referenced from t2 not found in t1
    regression=#

I'm still wondering how to rephrase this to fit in the recently-agreed-to message style guidelines. I think the (col names)=(values) part must go into errdetail, but I'm fuzzy beyond that. Comments?

regards, tom lane
[GENERAL] Large Object table data file does not have expected size
I am a little bit confused by the following effect. For test reasons I insert about 100000 large objects into the database. Each has a size of 10240 bytes (10K). So the expected size of the large object data file should be at least 1024000000 bytes (about 976MB) just to hold the large objects. But the corresponding data file is much smaller than that.

If I query the large objects from psql, all the data seems to be there (I just randomly picked out some large objects). I first thought there was a bug in my test program (not all large objects inserted), but I couldn't find one. Also this was unlikely, since I always insert the same binary data (for the large object) in a simple loop.

Has anyone an idea why the data files are much smaller than I expect them to be?

Thanks, Andreas.
-
This mail sent through IMP: http://horde.org/imp/
[GENERAL] Is it possible to change the wal file size?
Can I change the size of a WAL file, and if so, where can I change this? I'd like to set the value to something bigger than 16MB.

Thanks, Andreas.
Re: [GENERAL] Is it possible to change the wal file size?
On 27 Jun 2003 at 10:12, u15074 wrote:
> Can I change the size of a wal file, and if so where can I change this?
> I'd like to set the value to something bigger than 16MB.

You cannot change the size, but you can increase the number of WAL files used. See postgresql.conf. Setting it too high does not achieve anything, though: you need to have enough disk bandwidth to make use of 200MB of WAL logs.

Bye
Shridhar

--
Horngren's Observation: Among economists, the real world is often a special case.
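[A sketch of the knob being referred to; the value below is illustrative, not a recommendation. If I recall correctly, the 16MB segment size itself is a compile-time constant (XLOG_SEG_SIZE), so changing it means rebuilding the server, which is why only the segment count is tunable here:]

```
# postgresql.conf -- illustrative value, not a recommendation
# WAL segments between automatic checkpoints (each segment is 16MB);
# on-disk WAL usage normally peaks near (2 * checkpoint_segments + 1)
# segments, i.e. about 400MB for the value below.
checkpoint_segments = 12
```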
Re: [GENERAL] Large Object table data file does not have expected size
Compression? Postgres tends to compress large fields.

On Fri, Jun 27, 2003 at 10:08:34AM +0200, u15074 wrote:
> I am a little bit confused by the following effect. [...] Has anyone an
> idea why the data files are much smaller than I expect them to be?

--
Martijn van Oosterhout <[EMAIL PROTECTED]> http://svana.org/kleptog/
> "the West won the world not by the superiority of its ideas or values or
> religion but rather by its superiority in applying organized violence.
> Westerners often forget this fact, non-Westerners never do."
> - Samuel P. Huntington
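[One rough way to check this from psql, as a sketch: pg_largeobject is the system catalog that stores large-object pages, and its data column can be compressed on disk. Comparing the logical byte count with the file size should show the gap:]

```sql
-- Logical size of all large-object data, as the application wrote it.
-- octet_length() counts the uncompressed bytes of each page.
SELECT sum(octet_length(data)) AS logical_bytes
FROM pg_largeobject;

-- To locate the on-disk file for comparison, find the relation's
-- filenode under the database directory:
SELECT relfilenode FROM pg_class WHERE relname = 'pg_largeobject';
```

If the same binary data is inserted in a loop, as described above, it should compress very well, which would explain a data file far smaller than the logical total.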
Re: [GENERAL] Is it possible to change the wal file size?
> You cannot change the size, but you can increase the number of WAL files
> used. See postgresql.conf. Setting it too high does not achieve anything,
> though: you need to have enough disk bandwidth to make use of 200MB of
> WAL logs.

I want to write a lot of data (large objects) without any interruptions. So I already increased checkpoint_segments to a higher value. But this did not help, since I still get interruptions in writing data when a new log file is created.
Re: [GENERAL] adding fields to a table
> > However you cannot currently remove a column...
>
> Oh yes you can:
>   ALTER TABLE tbl DROP COLUMN whatever
> http://www.postgresql.org/docs/view.php?version=7.3&idoc=0&file=ddl-alter.html
> IIRC it was introduced fairly recently (7.3?)

Doh! Really should get rid of 'Practical PostgreSQL' as my point of reference; it only covers 7.1.x... Come on O'Reilly, print a second edition!
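[A quick sketch of the command in context; the table and column names are hypothetical:]

```sql
CREATE TABLE inventory (
    id    integer,
    notes text        -- column we no longer need
);

-- Supported from 7.3 onward; earlier releases reject this.
ALTER TABLE inventory DROP COLUMN notes;
```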