[GENERAL] urgent request : PSQLException: FATAL: could not open relation XXX: No such file or directory
Hi, I am working on an application where PostgreSQL is used as the db. I have a trigger and three functions running on that trigger. The data in the db is updated by a thread every minute, continuously. I kept my application running for 2 days and at runtime Postgres stopped running. But when I restarted it and restarted my application I got the error PSQLException: FATAL: could not open relation XXX: No such file or directory. I found that some of the files in the db (data folder) are lost (they just disappeared). Can you explain why this happened and how I can fix this problem, since it is blocking my application? Warm regards, Preethi K Valsalan, Team Lead, Sphere Networks FZCo. P.O. Box 341010, Dubai UAE Tel: (+9714) 501-5863 Fax: (+9714) 501-5872 preethi.valsa...@sphere.ae http://www.sphere.ae
Re: [GENERAL] Need some help converting MS SQL stored proc to postgres function
On Sun, 01 Feb 2009 00:10:52 -0800 Mike Christensen ima...@comcast.net wrote:

Figured out one way to do it, perhaps I can get some feedback on if this is the best way.. Thanks!

CREATE TEMP TABLE temp_ratings (
  RecipeId uuid,
  Rating smallint,
  CONSTRAINT id_pk PRIMARY KEY (RecipeId)
);

INSERT INTO temp_ratings (RecipeId, Rating)
  SELECT RecipeId, Avg(Rating) AS Rating
  FROM RecipeRatings GROUP BY RecipeId;

UPDATE Recipes SET Rating = tr.Rating
  FROM temp_ratings AS tr
  WHERE Recipes.RecipeId = tr.RecipeId
    AND Recipes.Rating <> tr.Rating;

You can have a similarly concise form using SELECT INTO a temp table: http://www.postgresql.org/docs/8.3/interactive/sql-selectinto.html Check what a temporary table really means regarding transactions, functions and connections. [1] http://www.postgresql.org/docs/8.3/interactive/sql-createtable.html "If specified, the table is created as a temporary table. Temporary tables are automatically dropped at the end of a session, or optionally at the end of the current transaction (see ON COMMIT below). Existing permanent tables with the same name are not visible to the current session while the temporary table exists, unless they are referenced with schema-qualified names. Any indexes created on a temporary table are automatically temporary as well. Optionally, GLOBAL or LOCAL can be written before TEMPORARY or TEMP. This makes no difference in PostgreSQL, but see Compatibility." Of course, depending on the context, it may be useful to use ON COMMIT, which seems to be supported only by the more verbose CREATE path. Still, the CREATE path offers some shortcuts that avoid specifying the schema of the temp table: CREATE TABLE ... LIKE [1], and CREATE TABLE AS, which seems the most promising for your needs: http://www.postgresql.org/docs/8.3/interactive/sql-createtableas.html Not everything is yet as we dream it, but there is still a lot of syntactic sugar available to exploit.
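Putting those pointers together, a sketch of the CREATE TABLE AS + ON COMMIT variant (untested; it assumes Mike's table and column names, and uses IS DISTINCT FROM so NULL ratings are handled too):

```sql
BEGIN;

-- Build the averages in one step; the temp table vanishes at COMMIT.
CREATE TEMP TABLE temp_ratings ON COMMIT DROP AS
  SELECT RecipeId, avg(Rating) AS Rating
  FROM RecipeRatings
  GROUP BY RecipeId;

UPDATE Recipes
  SET Rating = tr.Rating
  FROM temp_ratings AS tr
  WHERE Recipes.RecipeId = tr.RecipeId
    AND Recipes.Rating IS DISTINCT FROM tr.Rating;

COMMIT;  -- temp_ratings is dropped automatically here
```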
-- Ivan Sergio Borgonovo http://www.webthatworks.it -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Need some help converting MS SQL stored proc to postgres function
Hi guys, I'm in the process of migrating my database from MS SQL 2005 to PostgreSQL and there's one final stored proc that's giving me some problems.. Perhaps someone can give me some help? Here's the sproc:

SELECT RecipeId, Avg(Rating) as Rating
INTO #ratings
FROM RecipeRatings GROUP BY RecipeId

UPDATE Recipes SET Rating = #ratings.Rating
FROM Recipes INNER JOIN #ratings
  ON (#ratings.RecipeId = Recipes.RecipeId AND #ratings.Rating <> Recipes.Rating)

DROP TABLE #ratings

The error is:

ERROR: syntax error at or near "#"
LINE 3: INTO #ratings
             ^
** Error ** ERROR: syntax error at or near "#" SQL state: 42601 Character: 53

Perhaps there's a different way to create temp tables? Even better is if someone can re-write the query to not use the temp table, I'm far from a SQL expert. Thanks!! Mike
Re: [GENERAL] complex custom aggregate function
-----Original Message----- From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Scara Maccai Sent: Friday, 30 January 2009, 9:36 To: pgsql-general@postgresql.org Subject: [GENERAL] complex custom aggregate function

Hi all, I have a table like: value int, quarter timestamp. I need an aggregate function that gives back the maximum value using this algorithm: AVG of the first hour (first 4 quarters) (AVG0); same as above, but 1 quarter later (AVG1); ...; same as above, but n quarters later (AVGn). Result: the quarter where AVGn was MAX. Example:

quarter           value   AVGn
2008-01-01 00:00  10
2008-01-01 00:15  15
2008-01-01 00:30  5
2008-01-01 00:45  20  ->  12.5  ((10+15+5+20)/4)
2008-01-01 01:15  2   ->  21    ((15+5+20+2)/4)
2008-01-01 01:30  30  ->  14.25 ((5+20+2+30)/4)

the result should be ('2008-01-01 00:15', 21). It would be very easy if the input to the custom aggregate function was ordered (because I would keep 4 internal counters), but I guess there's no way of forcing the ordering of the input to the function, right? So I have to cache all the (quarter, value) pairs and give back a result at the end, right?

For that purpose (a sliding mean calculation) I use the following:

--
-- Sample table definition
--
CREATE TABLE tbl_ayas (
  fulldate timestamp without time zone NOT NULL,
  id_1 real, -- temperature
  id_2 real, -- pressure
  ..
  CONSTRAINT tbl_ayas_pkey PRIMARY KEY (fulldate)
) WITH (OIDS=FALSE);

--
-- Function
--
CREATE OR REPLACE FUNCTION perl_sliding_mean(integer, real, integer, integer, bpchar, bpchar)
  RETURNS real AS
$BODY$
  #BEGIN { strict->import(); }

  # get values
  my ($myid, $myval, $mycount, $myvalid, $myslidesum, $myreset) = @_;

  # reset the array if requested
  if ( $myreset eq 't' ) {
    @stored_sl_val = ();
    @stored_arr = ();
    return 0;
  }

  # restore the array of arrays
  @temp_sl_val = $stored_arr[$myid];
  @stored_sl_val = @{$temp_sl_val[0]};

  # check if the value is null
  if ( ! defined $myval ) {
    elog(NOTICE, "perl_sliding_mean => push null value [undef]");
    # sum does not change
    push(@stored_sl_val, undef);
  } else {
    elog(NOTICE, "perl_sliding_mean => push value $myval");
    # store the new value
    push(@stored_sl_val, $myval);
  }

  elog(NOTICE, "perl_sliding_mean => scalar array " . scalar @stored_sl_val);
  if ( ( scalar @stored_sl_val ) > $mycount ) {
    # remove one element from the beginning of the array
    shift(@stored_sl_val);
  }

  # getting mean
  my $good_values;
  my $result;
  foreach (@stored_sl_val) {
    if ( defined $_ ) {
      $result += $_;
      $good_values++;
    }
  }
  elog(NOTICE, "perl_sliding_mean => sum: $result, good values: $good_values");

  my $mean;
  if ( $good_values >= $myvalid ) {
    if ( $myslidesum eq 't' ) {
      $mean = $result;                  # sum
    } else {
      $mean = $result / $good_values;   # average
    }
  } else {
    $mean = -1;  # skip later and return null
  }

  # save back the array of arrays
  elog(NOTICE, "perl_sliding_mean => scalar stored_sl_val " . scalar @stored_sl_val);
  $stored_arr[$myid] = [ @stored_sl_val ];

  # return the calculated sliding mean, or null
  if ( $mean == -1 ) {
    return;
  }
  return $mean;
$BODY$
  LANGUAGE 'plperlu' VOLATILE;

COMMENT ON FUNCTION perl_sliding_mean(integer, real, integer, integer, bpchar, bpchar)
  IS 'Calculate sliding means/sums';

--
-- query
--
SELECT perl_sliding_mean(0, 0, 0, 0, 'f', 't');  -- reset

SELECT round(CAST(perl_sliding_mean(0, id_1, 8, 6, 'f', 'f') AS numeric), 1) AS ayas_temperature,
       round(CAST(perl_sliding_mean(1, id_2, 8, 6, 'f', 'f') AS numeric), 1) AS ayas_pressure
...

Regards, Paolo Saudin
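For the original question, a set-based alternative is also possible without any aggregate state: compute each hour-window average with a correlated subquery and keep the best one. This is only a sketch; the table and column names (quarters, quarter, value) are hypothetical stand-ins for the poster's table:

```sql
-- For each starting quarter, average value over the following hour
-- (4 quarters), then keep the start with the highest average.
SELECT q.quarter,
       (SELECT avg(v.value)
          FROM quarters v
         WHERE v.quarter >= q.quarter
           AND v.quarter < q.quarter + interval '1 hour') AS avg4
  FROM quarters q
 ORDER BY avg4 DESC NULLS LAST
 LIMIT 1;
```

It re-scans the window per row, so for large tables the stateful-function approach above may still win.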
Re: [GENERAL] Need some help converting MS SQL stored proc to postgres function
Hi, Mike Christensen wrote: Hi guys, I'm in the process of migrating my database from MS SQL 2005 to PostgreSQL and there's one final stored proc that's giving me some problems.. Perhaps someone can give me some help? Here's the sproc:

SELECT RecipeId, Avg(Rating) as Rating
INTO #ratings
FROM RecipeRatings GROUP BY RecipeId

UPDATE Recipes SET Rating = #ratings.Rating
FROM Recipes INNER JOIN #ratings
  ON (#ratings.RecipeId = Recipes.RecipeId AND #ratings.Rating <> Recipes.Rating)

would not

UPDATE recipes SET rating = r.rating
FROM (SELECT recipeid, avg(rating) AS rating
      FROM reciperatings GROUP BY recipeid) r
WHERE recipes.recipeid = r.recipeid
  AND recipes.rating <> r.rating

work too w/o the temp table? (untested, can contain errors)

Tino
Re: [GENERAL] Pet Peeves
rules are very very very very rarely useful. I wouldn't say that. There are many use cases where rules are just the thing. Plus they have an added performance benefit when dealing with multiple rows in a single statement. yes, in general - I wouldn't mind seeing postgresql implement fully updatable views. There's been a very long discussion about that on -hackers, and a patch was even in cvs-head for a bit, but got dropped. probably enabling triggers for views would be the only way to do it, me thinks. I don't know how the oracle guys got around it.

The Oracle solution is quite useful in a large set of cases. The basic idea is this: since a view is arbitrarily complex, there is no way, in general, that the database can know how to update it. Therefore the concept of BEFORE or AFTER triggers doesn't really make sense (before or after something the database can't do anyway). So instead, the only kind of trigger they allow on a view is an INSTEAD OF row-level trigger. The contract of the trigger function is that it will be invoked once for each matching row in the view, and the database will assume that the trigger will do the necessary work to update that row. Thus Oracle assumes that the number of rows updated matches the number of times that it invoked the trigger function.

Apart from this last part, this is like defining a rule

CREATE RULE my_rule AS ON INSERT/UPDATE/DELETE TO my_view
  DO INSTEAD SELECT my_fn(old.*, new.*);

Of course the problem with using a rule in this way is that the query is rewritten as a SELECT, and the client is told that no rows were updated. This is where the INSTEAD OF trigger comes in handy.

Dean.
Re: [GENERAL] Serial Jumping
On 2009-01-27, db.subscripti...@shepherdhill.biz wrote: Hi, I have a table with a BIGSERIAL field as primary key. During high load, entries in the BIGSERIAL field are jumped. One could see a row with 1367 and, expecting the next INSERT to be 1368, one would end up getting 1369. Please, is this normal? If an insert that would have used 1368 failed, or is in an unfinished transaction, that's entirely normal. If you care about the value you are inserting, make sure you know it at the time it is inserted (use RETURNING, or use nextval beforehand).
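The two techniques suggested above can be sketched like this (the table and column names are made up for illustration):

```sql
-- Hypothetical table; the point is capturing the generated key at insert time.
CREATE TABLE orders (
  id bigserial PRIMARY KEY,
  note text
);

-- Either capture the value as part of the INSERT ...
INSERT INTO orders (note) VALUES ('first') RETURNING id;

-- ... or reserve it up front and insert it explicitly.
INSERT INTO orders (id, note)
VALUES (nextval('orders_id_seq'), 'second');
```

Either way, the value you get is the one actually used, gaps or not.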
Re: [GENERAL] Pet Peeves
- no ability to define triggers on views maybe because you can't perform insert/delete/update on them ? Actually I was thinking the value of triggers on views is precisely to allow you to perform insert/delete/update on them. I know you can do this with rules, but there are cases when a trigger is much more convenient to work with. Dean.
Re: [GENERAL] Pet Peeves
The only one I can see that hasn't already been mentioned - no ability to define triggers on views Dean.
Re: [GENERAL] PGSQL or other DB?
Scott Marlowe wrote: On Sat, Jan 31, 2009 at 2:13 AM, Erik Jones ejo...@engineyard.com wrote: On Jan 30, 2009, at 11:37 AM, durumdara wrote: - I can add/modify a table, or a field of a table, without a full lock on the table (like a DBISAM restructure). Like in FireBird, where adding a field changes only the table description. I don't know whether PG supports this way of modifying the DB. Nope. PostgreSQL is an all or nothing transactional database. I'd never heard of DBISAM before you mentioned it and have never used Firebird. After doing a little reading it turns out that if you plan to use transactions at all (which is very likely given even just the little you've described about the applications you're building) then you should realize that altering tables is not compatible with transactions and doing so will automatically commit any open transactions on the table. Are we talking about pgsql or some other database? Everything in pgsql can be done in a transaction, except create / drop database / tablespace. Looking into Firebird I couldn't find how it handles (or doesn't handle) that at all, but I did see that it will happily let you add a new not null column with no default to a table by writing nulls for the new attribute for any existing rows. That already makes me queasy. That's pretty much what pgsql does. Why does it make you queasy? I think the key is that the new column is NOT NULL, so defaulting the new column's values to NULL results in immediate data integrity inconsistency. If I remember rightly, PG doesn't allow this: you have to create the column as NULL, UPDATE and then add the NOT NULL constraint, or perhaps (I haven't tried this) create the column with a default and then remove it immediately afterwards.
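The two workarounds described at the end can be sketched as follows (hypothetical table and column names):

```sql
-- Variant 1: add as nullable, backfill, then add the constraint.
ALTER TABLE t ADD COLUMN flag integer;
UPDATE t SET flag = 0;
ALTER TABLE t ALTER COLUMN flag SET NOT NULL;

-- Variant 2: add with a default (existing rows get 0), then drop the default
-- so future inserts must supply a value.
ALTER TABLE t ADD COLUMN flag2 integer NOT NULL DEFAULT 0;
ALTER TABLE t ALTER COLUMN flag2 DROP DEFAULT;
```

Either way the table never contains a NULL in a NOT NULL column.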
Resp.: [GENERAL] How to catch the id in a INSERT INTO ... RETURNING function?
2009/2/1, A B gentosa...@gmail.com: Hi. I have a table foo(id serial primary key, b int); and I want an insert function

create or replace function insert_to_foo(bvalue integer) returns integer as
declare
  newindex integer;
begin
  ...
  insert into foo (a,b) values (default,bvalue) returning id  <-- THIS LINE
  -- do more with newindex here
  return newindex;
end;

Well, the problem is that I want the id of the new post to be saved into the newindex variable for further actions. But how do I catch the value into the variable? Should I do: select id from insert into foo (a,b) values (default,bvalue) returning id; ?

Try:

INSERT ... RETURNING expressions INTO [STRICT] target;

38.5.3. Executing a Query with a Single-Row Result http://www.postgresql.org/docs/current/interactive/plpgsql-statements.html

Osvaldo
Re: [GENERAL] How to catch the id in a INSERT INTO ... RETURNING function?
On Sun, Feb 01, 2009 at 11:37:52AM +0100, A B wrote: Hi. I have a table foo(id serial primary key, b int); and I want an insert function

create or replace function insert_to_foo(bvalue integer) returns integer as
declare
  newindex integer;
begin
  ...
  insert into foo (a,b) values (default,bvalue) returning id  <-- THIS LINE
  -- do more with newindex here

INSERT INTO foo (a, b) VALUES (DEFAULT, bvalue) RETURNING id INTO newindex;

  return newindex;
end;

Well, the problem is that I want the id of the new post to be saved into the newindex variable for further actions. But how do I catch the value into the variable? Should I do: select id from insert into foo (a,b) values (default,bvalue) returning id; ? See the manual: http://www.postgresql.org/docs/8.3/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-ONEROW Regards, Gerhard
Re: [GENERAL] ALTER TABLE with TYPE serial does not work
all you have to really do is:

create sequence foo_bar_new_column_tralala_seq;

ALTER TABLE foo_bar ADD COLUMN tralala int NOT NULL DEFAULT nextval('foo_bar_new_column_tralala_seq');

That's all there is to it
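One refinement worth noting (a sketch, reusing the sequence and table names from the example above): tying the sequence to the column makes it behave like a true serial, so the sequence is dropped automatically if the column or table is dropped.

```sql
-- Make the sequence a dependent of the column, as serial would have done.
ALTER SEQUENCE foo_bar_new_column_tralala_seq OWNED BY foo_bar.tralala;
```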
Re: [GENERAL] Need some help converting MS SQL stored proc to postgres function
Figured out one way to do it, perhaps I can get some feedback on if this is the best way.. Thanks!

CREATE TEMP TABLE temp_ratings (
  RecipeId uuid,
  Rating smallint,
  CONSTRAINT id_pk PRIMARY KEY (RecipeId)
);

INSERT INTO temp_ratings (RecipeId, Rating)
  SELECT RecipeId, Avg(Rating) AS Rating
  FROM RecipeRatings GROUP BY RecipeId;

UPDATE Recipes SET Rating = tr.Rating
  FROM temp_ratings AS tr
  WHERE Recipes.RecipeId = tr.RecipeId
    AND Recipes.Rating <> tr.Rating;

Mike Christensen wrote: Hi guys, I'm in the process of migrating my database from MS SQL 2005 to PostgreSQL and there's one final stored proc that's giving me some problems.. Perhaps someone can give me some help? Here's the sproc:

SELECT RecipeId, Avg(Rating) as Rating
INTO #ratings
FROM RecipeRatings GROUP BY RecipeId

UPDATE Recipes SET Rating = #ratings.Rating
FROM Recipes INNER JOIN #ratings
  ON (#ratings.RecipeId = Recipes.RecipeId AND #ratings.Rating <> Recipes.Rating)

DROP TABLE #ratings

The error is:

ERROR: syntax error at or near "#"
LINE 3: INTO #ratings
             ^
** Error ** ERROR: syntax error at or near "#" SQL state: 42601 Character: 53

Perhaps there's a different way to create temp tables? Even better is if someone can re-write the query to not use the temp table, I'm far from a SQL expert. Thanks!! Mike
Re: [GENERAL] Pet Peeves
On Sun, Feb 1, 2009 at 10:20 AM, Dean Rasheed dean_rash...@hotmail.com wrote: - no ability to define triggers on views maybe because you can't perform insert/delete/update on them ? -- GJ
[GENERAL] Full text index not being used, even though it is in the plan
I am trying to use a full text index, but it seems to be reindexing on every query. The query plan looks fine, but the queries take extremely long (hours even). I think it is reindexing because it is notifying me that certain long words won't be indexed, as you can see below, which is what it does when I create the index.

= explain select id from source_listings where plainto_tsquery('view') @@ to_tsvector('english', full_listing);

QUERY PLAN
Bitmap Heap Scan on source_listings (cost=1454.88..7445.47 rows=1595 width=4)
  Recheck Cond: (plainto_tsquery('view'::text) @@ to_tsvector('english'::regconfig, full_listing))
  ->  Bitmap Index Scan on kw2_index (cost=0.00..1454.48 rows=1595 width=0)
        Index Cond: (plainto_tsquery('view'::text) @@ to_tsvector('english'::regconfig, full_listing))

= explain analyze select id from source_listings where plainto_tsquery('view') @@ to_tsvector('english', full_listing);
NOTICE: word is too long to be indexed
DETAIL: Words longer than 2047 characters are ignored.
NOTICE: word is too long to be indexed
DETAIL: Words longer than 2047 characters are ignored.
... it just keeps building a new index until I kill it
Cancel request sent
Re: [GENERAL] Pet Peeves
rules are very very very very rarely useful. yes, in general - I wouldn't mind seeing postgresql implement fully updatable views. There's been a very long discussion about that on -hackers, and a patch was even in cvs-head for a bit, but got dropped. probably enabling triggers for views would be the only way to do it, me thinks. I don't know how the oracle guys got around it.
[GENERAL] How to catch the id in a INSERT INTO ... RETURNING function?
Hi. I have a table foo(id serial primary key, b int); and I want an insert function

create or replace function insert_to_foo(bvalue integer) returns integer as
declare
  newindex integer;
begin
  ...
  insert into foo (a,b) values (default,bvalue) returning id  <-- THIS LINE
  -- do more with newindex here
  return newindex;
end;

Well, the problem is that I want the id of the new post to be saved into the newindex variable for further actions. But how do I catch the value into the variable? Should I do: select id from insert into foo (a,b) values (default,bvalue) returning id; ?
Re: [GENERAL] Pet Peeves
Grzegorz Jaśkiewicz wrote on 01.02.2009 13:13: probably enabling triggers for views would be the only way to do it, me thinks. I don't know how oracle guys got around it. Oracle *does* have (INSTEAD OF) triggers on views. (and simple views are automatically updateable anyway) Regards Thomas
[GENERAL] Warm Standby question
Hi, (Note: I have never used log shipping before, I'm just interested in the concepts, so I might be missing a very important aspect) I was reading the blog entry about HA and warm standby: http://scale-out-blog.blogspot.com/2009/02/simple-ha-with-postgresql-point-in-time.html The image that explained how log shipping works strikes me as being a bit too complex. http://1.bp.blogspot.com/_26KnjtB2MFo/SYVDrEr1HXI/AEY/ncq_AW-Vv-w/s1600-h/pg_warm_standby.png According to the picture it basically works like this: Master -> copy to master archive directory -> copy to standby archive dir -> copy to pg_xlog. When I look at this chain I'm asking myself, why do I need the two archive directories? Why can't the master copy the files directly into the pg_xlog directory of the standby server? Thanks Thomas
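For reference, the shipping chain in the picture corresponds roughly to settings like these (a sketch, not a tested setup; paths and the hostname are made up):

```
# postgresql.conf on the master: archive each finished WAL segment
archive_mode = on
archive_command = 'rsync -a %p standby:/var/lib/pgsql/archive/%f'

# recovery.conf on the standby: replay segments as they arrive
restore_command = 'pg_standby /var/lib/pgsql/archive %f %p'
```

Copying into the standby's archive directory rather than straight into pg_xlog keeps the standby in control of when each segment is consumed, which is part of why the extra hop exists.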
Re: [GENERAL] ALTER TABLE with TYPE serial does not work
On Sat, Jan 31, 2009 at 9:04 PM, Richard Broersma richard.broer...@gmail.com wrote: On Sat, Jan 31, 2009 at 3:16 PM, Andreas Wenk a.w...@netzmeister-st-pauli.de wrote: Why does this not work: postgres=# ALTER TABLE tab1 ALTER COLUMN nr TYPE serial; ERROR: type serial does not exist serial is really just short-hand for making an integer column use a default incrementing function. The following will fully explain what it is so that you can alter the column: http://www.postgresql.org/docs/8.3/interactive/datatype-numeric.html#DATATYPE-SERIAL Seeing as it works with adding a column, and I've seen instructions for creating a sequence, and then adding a dependency into the system tables, it's quite reasonable to expect that one day it will work with alter table alter column. But it's probably more complicated than just making it a serial type, there's probably some question of setting the sequence according to the max value in the table. I'd be surprised if it's not on the TODO list somewhere.
Re: [GENERAL] ALTER TABLE with TYPE serial does not work
2009/2/1 Scott Marlowe scott.marl...@gmail.com: On Sat, Jan 31, 2009 at 9:04 PM, Richard Broersma richard.broer...@gmail.com wrote: On Sat, Jan 31, 2009 at 3:16 PM, Andreas Wenk a.w...@netzmeister-st-pauli.de wrote: Why does this not work: postgres=# ALTER TABLE tab1 ALTER COLUMN nr TYPE serial; ERROR: type serial does not exist serial is really just short-hand for making an integer column use default incrementing function. The following will fully explain what it is so that you can alter the column: http://www.postgresql.org/docs/8.3/interactive/datatype-numeric.html#DATATYPE-SERIAL ... But it's probably more complicated than just making it a serial type, there's probably some question of setting the sequence according to the max value in the table. I'd be surprised if it's not on the TODO list somewhere. Like: SELECT setval('serial', max(id)) FROM distributors; ? http://www.postgresql.org/docs/current/interactive/sql-createsequence.html Osvaldo
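Putting the pieces from this thread together, the manual int-to-"serial" conversion can be sketched like this (table and column names are hypothetical):

```sql
-- Assume an existing table t with an integer column id.
CREATE SEQUENCE t_id_seq;

-- Start the sequence after the current maximum.
SELECT setval('t_id_seq', coalesce(max(id), 1)) FROM t;

-- Wire the column to the sequence, as serial would have done.
ALTER TABLE t ALTER COLUMN id SET DEFAULT nextval('t_id_seq');
ALTER SEQUENCE t_id_seq OWNED BY t.id;
```

After this, the column behaves like a serial for all practical purposes.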
Re: [GENERAL] Full text index not being used
So this seems to be because the result size is too big. I still don't know why it is looping through every record and printing a warning, but adding a LIMIT makes the queries complete in a reasonable time (although not all that fast). However I need to sort and also have many other facets that may or may not be included in the query. Adding a sort makes it load every record again and take forever. I tried to create an index including all of the fields I query on to see if that would work, but I get an error that the index row is too large:

= create index master_index on source_listings(geo_lat, geo_lon, price, bedrooms, region, city, listing_type, to_tsvector('english', full_listing), post_time);
NOTICE: word is too long to be indexed
DETAIL: Words longer than 2047 characters are ignored.
(NOTICE repeated several times)
ERROR: index row requires 13356 bytes, maximum size is 8191

Any ideas about how to resolve this?
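One common way around the row-size limit (a sketch, using the table and column names from the message above; the index names are made up) is to keep the full-text index separate from the scalar columns:

```sql
-- A GIN index on just the tsvector expression; the planner can combine it
-- with separate btree indexes on the scalar columns via a bitmap AND.
CREATE INDEX source_listings_fts_idx
  ON source_listings
  USING gin (to_tsvector('english', full_listing));

CREATE INDEX source_listings_geo_idx
  ON source_listings (geo_lat, geo_lon, price, post_time);
```

A single giant btree row holding the whole tsvector is what overflows the 8191-byte limit; GIN stores individual lexemes instead.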
Re: [GENERAL] PGSQL or other DB?
On Sun, Feb 1, 2009 at 7:33 AM, Russ Brown pickscr...@gmail.com wrote: Scott Marlowe wrote: On Sat, Jan 31, 2009 at 2:13 AM, Erik Jones ejo...@engineyard.com wrote: On Jan 30, 2009, at 11:37 AM, durumdara wrote: Looking into Firebird I couldn't find how it handles (or doesn't handle) that at all, but I did see that it will happily let you add a new not null column with no default to a table by writing nulls for the new attribute for any existing rows. That already makes me queasy. That's pretty much what pgsql does. Why does it make you queasy? I think the key is that the new column is NOT NULL, so defaulting the new column's values to NULL results in immediate data integrity inconsistency. If I remember rightly, PG doesn't allow this: you have to create the column as NULL, UPDATE and then add the NOT NULL constraint, or perhaps (I haven't tried this) create the column with a default and then remove it immediately afterwards. OK, I completely misunderstood what the other poster meant. Pgsql does NOT allow creating the not null column with nulls in place. That would make me queasy too. Creating it with a default works in postgresql.
Re: [GENERAL] Indices types, what to use. Btree, Hash, Gin or Gist
My Gmail (bloody gmail!) has been auto-replying to the last sender (Scott), so I think we have been having a private discussion on this topic. Here is an update on our discussion. ME: When it comes to the boolean, the content is about 70-30%. I find it strange though that an index on a 50-50% isn't that useful. With an index the DB can skip 50% of the table so it should be useful, but perhaps the intersection of sets is expensive for the DB? Could an index in fact possibly slow down queries? Or will the DB ignore using the index in such cases? Most of my matches contain simple matches, and I don't see the use of partial indexes, but I get the idea. You want to make a bigger difference between the set contained/matched against to get them more unique. I am now reading about fulltext search and it's my next step. So I am a bit interested in Gin/Gist. But I will revive this thread once I am more familiar with fulltext, currently reading up on the topic.. SCOTT: When it comes to the boolean, the content is about 70-30%. I find it strange though that an index on a 50-50% isn't that useful. With an index the DB can skip 50% of the table so it should be useful, but perhaps the intersection of sets is expensive for the DB? If the values are randomly mixed, and you can fit at least a couple of rows in each 8k block, then using an index on a 50/50 mix is a total loser, because you're gonna have to read every single block anyway. If you can fit 10 rows in a single block, then 10% of one value means it's a loser too, because, again, you're gonna have to hit every block anyway. With the mix you list, 70/30, it means that if you can fit 3 rows in one block, and they're randomly distributed, you'll have to hit every block anyway, and an index on bool won't help. Keep in mind random table accesses are about 4 to 10 times more expensive than sequential scans, and you have to add in the random access time of the index as well. Could an index in fact possibly slow down queries?
Or will the DB ignore using the index in such cases? The db should ignore it for select queries unless the statistics are wrong. However, indexes ALWAYS cost on insert / update / delete. I am now reading about fulltext search and it's my next step. So I am a bit interested in Gin/Gist. But I will revive this thread once I am more familiar with fulltext, currently reading up on the topic.. If you're searching a lot on text, full text search is a great way to go. Be sure and look up the pg_stat type tables. There's tons of useful info in them about how your database is actually being accessed. pg_stat_user_indexes and pg_stat_user_tables are both very useful. ME: Thanks, I am new to PostgreSQL and just an SQL scholar really. I am using pgAdmin now, is there a way of looking at those stats from there or is it just from the command line? Would you say it's safe to index all columns that are searched for in a relation? I have indexed perhaps 10 columns (of 15) and some are like the boolean one. But I am thinking that they will only be used if the DB finds them useful, so I am over-indexing.. is this ok? I find updates and insertions pretty fast anyway so I am not worried about that aspect, unless I am wrong!? :O SCOTT: This is one of those "it really depends" types of questions. If the database is mostly read from, and the updates aren't slowed down too much by the many indexes, then sure, go ahead and add the indexes. It won't generally slow down the select queries running all the time. After a month or so check the pg_stat_user_indexes table to see which non-unique indexes aren't being used and drop them. Thank you Scott for your private help :) / Moe
Re: [GENERAL] Indices types, what to use. Btree, Hash, Gin or Gist
On Sun, Feb 01, 2009 at 06:00:02PM +0100, Mohamed wrote: When it comes to the boolean, the content is about 70-30%. I find it strange though that an index on a 50-50% isn't that useful. With an index the DB can skip 50% of the table so it should be useful, but perhaps the intersection of sets is expensive for the DB? Could an index in fact possibly slow down queries? Or will the DB ignore using the index in such cases? It's more complex than you suggest: the database cannot just skip 50% of the table. The database reads or writes blocks of data (8k), and each such block will contain (in your example) 50% of the rows you are interested in. So the database will have to read every block in the table anyway, so you may as well not use the index at all. Yes, the database will avoid using indexes if it decides they're a bad idea. Usually an index has to cut the number of blocks required by at least 90% before it becomes at all useful to use it. Indexes on booleans rarely reach that kind of level. Have a nice day, -- Martijn van Oosterhout klep...@svana.org http://svana.org/kleptog/ Please line up in a tree and maintain the heap invariant while boarding. Thank you for flying nlogn airlines.
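Martijn's point can be seen directly with EXPLAIN; this is a sketch with a made-up table, and the plan in the comment is what one would typically expect, not captured output:

```sql
-- Hypothetical table with a roughly 50/50 boolean column.
CREATE TABLE t (id serial PRIMARY KEY, flag boolean, payload text);
CREATE INDEX t_flag_idx ON t (flag);
-- ...load data, then:
ANALYZE t;

EXPLAIN SELECT * FROM t WHERE flag;
-- Expect something like "Seq Scan on t  Filter: flag": the planner
-- estimates every block must be read anyway, so the index is ignored.
```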
Re: [GENERAL] Full text index not being used, even though it is in the plan
Alex Neth a...@liivid.com writes: I am trying to use a full text index, but it seems to be reindexing on every query. The query plan looks fine, but the queries take extremely long (hours even). I think it is reindexing because it is notifying me that certain long words won't be indexed as you can see below, which is what it does when I create the index. I don't think it's reindexing, it's just calling to_tsvector() which it has to do when it rechecks rows that the index says might match. Is it possible that nearly all the full_listing values contain view? How does it perform with much more selective searches? If your full_listing values are quite large then recalculating the tsvector might be a lot more expensive than doing a full table scan and LIKE match for cases when nearly the whole table is going to be scanned anyways. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's On-Demand Production Tuning -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Full text index not being used
Alex, what text are you indexing? I don't believe you have meaningful words longer than 2047 characters. Do you really need a multicolumn index? I'd recommend separating the problem: create a column fts for to_tsvector('english', full_listing), create an index on it, and try a full-text query. The way you're doing it implies calling to_tsvector every time you search, which can be very costly.

Oleg

On Sun, 1 Feb 2009, Alex wrote: So this seems to be because the result size is too big. I still don't know why it is looping through every record and printing a warning, but adding a LIMIT makes the queries complete in a reasonable time (although not all that fast). However, I need to sort and also have many other facets that may or may not be included in the query. Adding a sort makes it load every record again and take forever. I tried to create an index including all of the fields I query on to see if that would work, but I get an error that the index row is too large:

= create index master_index on source_listings(geo_lat, geo_lon, price, bedrooms, region, city, listing_type, to_tsvector('english', full_listing), post_time);
NOTICE: word is too long to be indexed
DETAIL: Words longer than 2047 characters are ignored.
NOTICE: word is too long to be indexed
DETAIL: Words longer than 2047 characters are ignored.
NOTICE: word is too long to be indexed
DETAIL: Words longer than 2047 characters are ignored.
NOTICE: word is too long to be indexed
DETAIL: Words longer than 2047 characters are ignored.
ERROR: index row requires 13356 bytes, maximum size is 8191

Any ideas about how to resolve this? Regards,

Oleg
_____
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru), Sternberg Astronomical Institute, Moscow University, Russia Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/ phone: +007(495)939-16-83, +007(495)939-23-83
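Oleg's suggestion of a separate tsvector column, sketched against Alex's table (the index and column names here are invented):

```sql
-- Compute the tsvector once and store it, rather than per query.
ALTER TABLE source_listings ADD COLUMN fts tsvector;
UPDATE source_listings SET fts = to_tsvector('english', full_listing);
CREATE INDEX source_listings_fts_idx ON source_listings USING gin (fts);

-- Searches then hit the precomputed column:
SELECT * FROM source_listings
 WHERE fts @@ to_tsquery('english', 'view');
```

On 8.3 you would also want a trigger (e.g. the built-in tsvector_update_trigger) to keep fts current when full_listing changes.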
Re: [GENERAL] Indices types, what to use. Btree, Hash, Gin or Gist
Mohamed mohamed5432154...@gmail.com writes: My Gmail (bloody Gmail!) has been auto-replying to the last sender (Scott) so I think we have been having a private discussion on this topic. There is an option in the Google Labs tab to make Reply All the default button -- of course then there's always a chance you'll make the opposite mistake, which can be a lot worse.

Earlier I suggested that with a boolean column you could consider making it the condition on a partial index with some other key. For example you could have:

CREATE INDEX partial_age_male ON tab(age) WHERE gender = 'M';
CREATE INDEX partial_age_female ON tab(age) WHERE gender = 'F';

Then if you always search on age with gender, the optimizer can use the index which only includes the records for the appropriate gender. It's basically a free index key column, since it doesn't actually have to store the extra column. Note that in this example, if you were to search on just age, it wouldn't be able to use either of these indexes, however. In theory it could use the indexes if you search on just gender, but it would be unlikely to, for all the same reasons as previously mentioned for regular indexes.

-- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services!
Re: [GENERAL] Need some help converting MS SQL stored proc to postgres function
Thanks! You might be onto something; I see two potential problems though: 1) If the nested select returns no rows (no one has rated the recipe before), it would try to set the value to null. The Rating column is non-nullable, which is the way I want it. 2) I'm not exactly 100% sure on this, but I think this query will end up locking every row in the recipes table, which could be tens of thousands, and create some perf issues or deadlocks. Even though I run this query once per day to update ratings, I'd like to keep it as streamlined as possible.

Mike

Tino Wildenhain wrote: Hi, Mike Christensen wrote: Hi guys, I'm in the process of migrating my database from MS SQL 2005 to PostgreSQL and there's one final stored proc that's giving me some problems. Perhaps someone can give me some help? Here's the sproc:

SELECT RecipeId, Avg(Rating) as Rating
  INTO #ratings
  FROM RecipeRatings
 GROUP BY RecipeId

UPDATE Recipes SET Rating = #ratings.Rating
  FROM Recipes
 INNER JOIN #ratings ON (#ratings.RecipeId = Recipes.RecipeId AND #ratings.Rating <> Recipes.Rating)

would not

UPDATE recipes SET rating = r.rating
  FROM (SELECT recipeid, avg(rating) as rating GROUP BY recipeid) r
 WHERE recipeid = r.recipeid AND rating <> r.rating

work too w/o temp table? (untested, can contain errors)

Tino
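For reference, Tino's rewrite with its gaps filled in (the inner FROM clause and qualified column names; he flagged it as untested). Note this shape also answers Mike's point 1: recipes with no ratings simply don't match the join and are left untouched, so nothing is ever set to null:

```sql
UPDATE Recipes
   SET Rating = r.Rating
  FROM (SELECT RecipeId, avg(Rating) AS Rating
          FROM RecipeRatings
         GROUP BY RecipeId) r
 WHERE Recipes.RecipeId = r.RecipeId
   AND Recipes.Rating <> r.Rating;  -- only rows whose average changed
```

Restricting on `<>` also means rows whose rating is already current are never locked or rewritten, which helps with Mike's point 2.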
Re: [GENERAL] Indices types, what to use. Btree, Hash, Gin or Gist
Yeah, but reply-all will still send private messages :O ... It's strange, because this is the only mailing list that Gmail behaves like this with... it must have to do with how Postgres sends messages out. But anyway, back to topic :)

Yeah, I think that a partial index is something that would be smart. The problem is that sometimes I want to search for both genders (that is, no match for that column at all), and my index will then not be used on the partial ones that include that one, which leads me to another thought. Does this mean that the DB will hit all the blocks anyway? Does that mean that as soon as I search for one column that is not a good index, or not indexed at all, the other indexed fields become useless, since the DB will have to go through all rows anyway? Will I have to make a partial index and include the gender together with all the other fields each time?

Here are my indexes as of now (in one of my relations):

region index: 'region_id_index' // searched for, or not included in query
district index: 'district_id_index' // searched for, or not included in query
category index: 'category_id_index' // searched for alone, or not included in query
subCategory index: 'sub_category_id_index' // searched for, or not included in query
languageOfAd index: 'language_of_ad_index' // searched for values 1, 2; sometimes this field isn't searched, if all languages should be shown
name index: 'name_index'
phoneNumber index: 'phone_number_index' // always with name; I guess a partial index could work here (often not searched for)
email index: 'email_index' // always with name; I guess a partial index could work here (often not searched for)
price index: 'price_index' //
typeOfAd index: 'type_of_ad_index' // 1, 2, 3, 4; always one of these in the query
rentingPeriod index: 'renting_period_index' // if 3, 4 then 1, 7, 30, 365; if 1, 2 then the value is 0 (but not always used in query)
time index: 'time_index' // date with no time zone, newest first in index; read about it? ordered index..?
statusOfAd index: 'status_of_ad_index' // always in the query; guess it could be included in all indexes as partial

That's only the index fields of this relation, the things that are searched for. I will create a GIN index on description also, but that's for the full text, which is coming together but is a bit of a struggle :)

/ Moe

On Sun, Feb 1, 2009 at 7:23 PM, Gregory Stark st...@enterprisedb.com wrote: Mohamed mohamed5432154...@gmail.com writes: My Gmail(bloody gmail!) has been auto-replying to the last messager (Scott) so I think we have been having a private discussion on this topic. There is an option in the Google Labs tab to make Reply All the default button -- of course then there's always a chance you'll make the opposite mistake which can be a lot worse. Earlier I suggested with a boolean column you could consider making it the condition on a partial index with some other key. For example you could have CREATE INDEX partial_age_male on tab(age) WHERE gender = 'M'; CREATE INDEX partial_age_female on tab(age) WHERE gender = 'F'; Then if you always search on age with gender the optimizer can use the index which only includes the records for the appropriate gender. It's basically a free index key column since it doesn't actually have to store the extra column. Note that in this example if you were to search on just age it wouldn't be able to use either of these indexes however. In theory it could use the indexes if you search on just gender but it would be unlikely to for all the same reasons as previously mentioned for regular indexes. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services!
Re: [GENERAL] Pet Peeves?
On Sat, 2009-01-31 at 15:54 -0800, Octavio Alvarez wrote: On Sat, 2009-01-31 at 23:36 +, Gregory Stark wrote: Octavio Alvarez alvar...@alvarezp.ods.org writes: What about a WHERE clause like WHERE P1 < P2? You could either: (1) do FROM grades AS g1 INNER JOIN grades AS g2 ON g1.P1 < g2.P2, generating the record set before applying the crosstab transformation.

Just to remove all the stupid things I said about the first solution to the WHERE P1 < P2 problem: Your grades table would be defined as:

test=# \d grades
        Table "public.grades"
 Column |       Type        | Modifiers
--------+-------------------+-----------
 st     | character varying |
 su     | character varying |
 p      | bigint            |
 gr     | bigint            |
Indexes:
    "grades_st_key" UNIQUE, btree (st, p, su)

st = student; su = subject; p = period; gr = grade

The non-crosstab query that gives you the recordset for the crosstab would be:

SELECT p2_gt_p1.st, p2_gt_p1.su, grades.p, grades.gr
FROM (
    SELECT g1.st, g1.su, g1.p, g1.gr, g2.p, g2.gr
    FROM grades g1
    INNER JOIN grades g2
        ON g1.st = g2.st AND g1.su = g2.su
       AND g2.p = 2 AND g1.p = 1
       AND g2.gr > g1.gr
) AS p2_gt_p1
LEFT JOIN grades USING (st, su);
Re: [GENERAL] urgent request : PSQLException: FATAL: could not open relation XXX: No such file or directory
On Sunday 01 February 2009 12:38:21 am Preethi Valsalan wrote: Hi, I am working on an application where PostgreSQL is used as the db. I have a trigger and three functions running on that trigger. The data in the db is updated by a thread every minute, continuously. I kept my application running for 2 days and at runtime Postgres stopped running. But when I restarted it and restarted my application I got the error "PSQLException: FATAL: could not open relation XXX: No such file or directory". I found that some of the files in the db (data folder) are lost (just disappeared). Can you explain why this happened and how I can fix this problem, since it is blocking my application?

You will need to supply more information to start the troubleshooting process:

- Postgres version
- OS type and version
- What is your application doing?
- Are there other applications running that touch the db or its data directory?
- From this particular post, what does "at runtime Postgres stopped running" mean?

Thanks,
-- Adrian Klaver akla...@comcast.net
Re: [GENERAL] Pet Peeves
On Saturday 31 January 2009 8:47:28 pm Adam Rich wrote: On Thu, 29 Jan 2009 13:16:17 + Gregory Stark stark(at)enterprisedb(dot)com wrote: So, what do people say? Is Postgres perfect in your world or does it do some things which rub you the wrong way? I see all the major ones have already been mentioned, so here are some minor ones:

- lack of system-level and DDL triggers
- inability to limit triggers to certain columns
- inability to know the DML operation causing a trigger

From http://www.postgresql.org/docs/8.3/interactive/plpgsql-trigger.html: TG_OP: Data type text; a string of INSERT, UPDATE, or DELETE telling for which operation the trigger was fired. This is also available in plpythonu; I don't know about the other PLs.

- date_part/extract returning floats instead of integer

Maybe this is what you are looking for?: http://www.postgresql.org/docs/8.3/interactive/datatype-datetime.html Note: When timestamp values are stored as double precision floating-point numbers (currently the default), the effective limit of precision might be less than 6. timestamp values are stored as seconds before or after midnight 2000-01-01. Microsecond precision is achieved for dates within a few years of 2000-01-01, but the precision degrades for dates further away. When timestamp values are stored as eight-byte integers (a compile-time option), microsecond precision is available over the full range of values. However, eight-byte integer timestamps have a more limited range of dates than shown above: from 4713 BC up to 294276 AD. The same compile-time option also determines whether time and interval values are stored as floating-point or eight-byte integers. In the floating-point case, large interval values degrade in precision as the size of the interval increases.

- parts of the SQL statement (e.g. 'for update of') requiring table aliases when present instead of table names
- lack of queryable high-water marks useful for tuning
- lack of an auto-tuner, for that matter
- inability to log (e.g. long-running queries) to a table
- lack of custom session-level variables (without editing postgresql.conf)
- lack of autonomous transactions

-- Adrian Klaver akla...@comcast.net
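The TG_OP variable quoted above is used inside a trigger function like this (a sketch; the audit table, trigger, and function names are invented):

```sql
CREATE OR REPLACE FUNCTION audit_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO audit_log(op, row_id) VALUES (TG_OP, OLD.id);
        RETURN OLD;
    ELSE  -- TG_OP is 'INSERT' or 'UPDATE'
        INSERT INTO audit_log(op, row_id) VALUES (TG_OP, NEW.id);
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_trg
AFTER INSERT OR UPDATE OR DELETE ON some_table
FOR EACH ROW EXECUTE PROCEDURE audit_change();
```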
Re: [GENERAL] ALTER TABLE with TYPE serial does not work
Jasen Betts schrieb: On 2009-01-31, Andreas Wenk a.w...@netzmeister-st-pauli.de wrote: Hi List, I have a short question to psql. Why does this not work: postgres=# ALTER TABLE tab1 ALTER COLUMN nr TYPE serial; ERROR: type serial does not exist but this: postgres=# ALTER TABLE tab1 DROP COLUMN nr; ALTER TABLE postgres=# ALTER TABLE tab1 ADD COLUMN nr serial; NOTICE: ALTER TABLE will create implicit sequence tab1_nr_seq for serial column tab1.nr ALTER TABLE because serial isn't a type. ah - I think this is what the error message says ;-/ ... ?? !! -- St.Pauli - Hamburg - Germany Andreas Wenk
Re: [GENERAL] Pet Peeves
On Saturday 31 January 2009 8:47:28 pm Adam Rich wrote: On Thu, 29 Jan 2009 13:16:17 + Gregory Stark stark(at)enterprisedb(dot)com wrote: So, what do people say? Is Postgres perfect in your world or does it do some things which rub you the wrong way? I see all the major ones have already been mentioned, so here are some minor ones:

- lack of system-level and DDL triggers
- inability to limit triggers to certain columns
- inability to know the DML operation causing a trigger

From http://www.postgresql.org/docs/8.3/interactive/plpgsql-trigger.html: TG_OP: Data type text; a string of INSERT, UPDATE, or DELETE telling for which operation the trigger was fired. This is also available in plpythonu; I don't know about the other PLs.

Thanks, I knew this was available for the Python and Perl PLs; I wasn't aware it was in plpgsql too. Still, it would be nice to have something akin to Oracle's IF (UPDATING('col_name')) THEN.

- date_part/extract returning floats instead of integer

Maybe this is what you are looking for?: http://www.postgresql.org/docs/8.3/interactive/datatype-datetime.html Note: When timestamp values are stored as double precision floating-point numbers (currently the default), the effective limit of precision might be less than 6. timestamp values are stored as seconds before or after midnight 2000-01-01. Microsecond precision is achieved for dates within a few years of 2000-01-01, but the precision degrades for dates further away. When timestamp values are stored as eight-byte integers (a compile-time option), microsecond precision is available over the full range of values. However, eight-byte integer timestamps have a more limited range of dates than shown above: from 4713 BC up to 294276 AD. The same compile-time option also determines whether time and interval values are stored as floating-point or eight-byte integers. In the floating-point case, large interval values degrade in precision as the size of the interval increases.

Nope, I mean if you use date_part to extract a piece of a date, you get a float instead of an integer. It trips me up every time I try something like this:

select * from table where (weekmask & (1 << date_part('DOW', $1))) <> 0

To my surprise, the operator fails because it requires an integer argument, but date_part provides only a double precision floating point. I realize this is documented as intended behavior, but why? Is there any scenario where DOW (or day, year, hour, or *any* field, really) would be returning a fractional number?

- parts of the SQL statement (e.g. 'for update of') requiring table aliases when present instead of table names
- lack of queryable high-water marks useful for tuning
- lack of an auto-tuner, for that matter
- inability to log (e.g. long-running queries) to a table
- lack of custom session-level variables (without editing postgresql.conf)
- lack of autonomous transactions

-- Adrian Klaver akla...@comcast.net
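The usual workaround is an explicit cast, since date_part()/extract() always return double precision (a sketch; weekmask is Adam's hypothetical bitmask column):

```sql
SELECT date_part('dow', current_date);       -- double precision
SELECT date_part('dow', current_date)::int;  -- integer, usable with << and &

-- so the predicate becomes:
-- WHERE (weekmask & (1 << date_part('dow', some_date)::int)) <> 0
```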
Re: [GENERAL] ALTER TABLE with TYPE serial does not work
Richard Broersma schrieb: On Sat, Jan 31, 2009 at 3:16 PM, Andreas Wenk a.w...@netzmeister-st-pauli.de wrote: Why does this not work: postgres=# ALTER TABLE tab1 ALTER COLUMN nr TYPE serial; ERROR: type serial does not exist serial is really just shorthand for making an integer column use a default incrementing function. The following will fully explain what it is so that you can alter the column: http://www.postgresql.org/docs/8.3/interactive/datatype-numeric.html#DATATYPE-SERIAL Thanks Richard, that helped. I thought maybe there was another reason, because when I create a table and use serial as the type (like I would use integer) it works well. I know that serial is just for my convenience, as written in the manual. I failed to understand that it is not really a type. Cheers Andy -- St.Pauli - Hamburg - Germany Andreas Wenk
Re: [GENERAL] ALTER TABLE with TYPE serial does not work
Scott Marlowe schrieb: On Sat, Jan 31, 2009 at 9:04 PM, Richard Broersma richard.broer...@gmail.com wrote: On Sat, Jan 31, 2009 at 3:16 PM, Andreas Wenk a.w...@netzmeister-st-pauli.de wrote: Why does this not work: postgres=# ALTER TABLE tab1 ALTER COLUMN nr TYPE serial; ERROR: type serial does not exist serial is really just short-hand for making an integer column use default incrementing function. The following will fully explain what it is so that you can alter the column: http://www.postgresql.org/docs/8.3/interactive/datatype-numeric.html#DATATYPE-SERIAL Seeing as it works when adding a column, and I've seen instructions for creating a sequence and then adding a dependency into the system tables, it's quite reasonable to expect that one day it will work with ALTER TABLE ... ALTER COLUMN. But it's probably more complicated than just making it a serial type; there's probably some question of setting the sequence according to the max value in the table. I'd be surprised if it's not on the TODO list somewhere. Thanks for this, Scott. For me as a user it would be cool to have it ... hopefully it's on a TODO list ;-). On the other hand, I don't think that this case will show up too often, because the decision to have a column with an incrementing sequence should be made while designing the database structure ... Cheers Andy -- St.Pauli - Hamburg - Germany Andreas Wenk
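Until such an ALTER exists, the conversion Scott describes can be done by hand. This is roughly what serial would expand to, applied to the existing nr column (a sketch against Andreas's tab1 example):

```sql
CREATE SEQUENCE tab1_nr_seq;
-- Start the sequence just past the current maximum:
SELECT setval('tab1_nr_seq', coalesce(max(nr), 0) + 1, false) FROM tab1;
ALTER TABLE tab1 ALTER COLUMN nr SET DEFAULT nextval('tab1_nr_seq');
ALTER TABLE tab1 ALTER COLUMN nr SET NOT NULL;
-- Tie the sequence's lifetime to the column, as serial does:
ALTER SEQUENCE tab1_nr_seq OWNED BY tab1.nr;
```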
Re: [GENERAL] Pet Peeves?
On Sat, 31 Jan 2009, Reece Hart wrote: * lack of auto-tuning or tuning tools (or perhaps my lack of awareness of them?) http://pgfoundry.org/projects/pgtune/ aims to provide a tool for 8.4; it's working but still needs documentation and some loose ends cleaned up. Its suggestions aren't good for Windows systems yet; that's the biggest bug left in there. It's aimed to automate the suggestions set out in http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server One day I'll make that work better with older versions too. It does basically the right thing for 8.3 already but could be smarter; it includes some parameters that aren't there in 8.2, and doesn't work at all on 8.1 or earlier. If you step outside of just free solutions, EnterpriseDB's commercial server product does more complicated autotuning via their DynaTune feature. -- * Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD
Re: [GENERAL] Full text index not being used
I tried to create an index including all of the fields I query on to see if that would work, but I get an error that the index row is too large: = create index master_index on source_listings(geo_lat, geo_lon, price, bedrooms, region, city, listing_type, to_tsvector('english', full_listing), post_time); That's not a full-text index - btree doesn't support the @@ operator. Read carefully: http://www.postgresql.org/docs/8.3/static/textsearch.html , and about full-text indexes: http://www.postgresql.org/docs/8.3/static/textsearch-tables.html , http://www.postgresql.org/docs/8.3/static/textsearch-indexes.html
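The full-text part of that index would instead be a separate GIN expression index; a sketch (index name invented, search term illustrative):

```sql
CREATE INDEX source_listings_fts_idx
    ON source_listings
 USING gin (to_tsvector('english', full_listing));

-- The query must repeat the same expression for the index to apply:
SELECT * FROM source_listings
 WHERE to_tsvector('english', full_listing) @@ to_tsquery('english', 'view');
```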
Re: [GENERAL] Pet Peeves
On Sat, 31 Jan 2009, Adam Rich wrote: - lack of queryable high-water marks useful for tuning What specific things would you consider important to track a high-water mark for that aren't already there? -- * Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD
Re: [GENERAL] PGSQL or other DB?
On Jan 31, 2009, at 9:36 AM, Scott Marlowe wrote: On Sat, Jan 31, 2009 at 2:13 AM, Erik Jones ejo...@engineyard.com wrote: On Jan 30, 2009, at 11:37 AM, durumdara wrote: - I can add/modify a table, or a field to a table, without a full lock on the table (like a DBISAM restructure). Like in Firebird, where adding a field changes only the table description. I don't know whether PG supports this way of modifying the DB. Nope. PostgreSQL is an all-or-nothing transactional database. I'd never heard of DBISAM before you mentioned it and have never used Firebird. After doing a little reading, it turns out that if you plan to use transactions at all (which is very likely, given even just the little you've described about the applications you're building), then you should realize that altering tables is not compatible with transactions there, and doing so will automatically commit any open transactions on the table. Are we talking about pgsql or some other database? Everything in pgsql can be done in a transaction, except create/drop database/tablespace. I was referring to DBISAM there. Looking into Firebird, I couldn't find how it handles (or doesn't handle) that at all, but I did see that it will happily let you add a new NOT NULL column with no default to a table, writing nulls for the new attribute for any existing rows. That already makes me queasy. That's pretty much what pgsql does. Why does it make you queasy? Another poster already beat me to answering this one, so I'll not repeat what they said. Erik Jones, Database Administrator Engine Yard Support, Scalability, Reliability 866.518.9273 x 260 Location: US/Pacific IRC: mage2k
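Scott's point about transactional DDL in PostgreSQL can be sketched in a couple of lines (the table and column names are invented):

```sql
BEGIN;
ALTER TABLE recipes ADD COLUMN notes text;  -- DDL inside the transaction
ROLLBACK;                                   -- the new column is gone again
```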