Re: [GENERAL] Simulating sequences
On Monday, Aug 18, 2003, at 09:01 US/Pacific, <[EMAIL PROTECTED]> wrote:

> With those items in mind, your function could become:
>
> CREATE FUNCTION key_generation(integer, varchar(20)) RETURNS integer AS '
> DECLARE
>     the_department ALIAS FOR $1;
>     the_table_name ALIAS FOR $2;
> BEGIN
>     IF NOT EXISTS (SELECT 1 FROM cnfg_key_generation
>                    WHERE the_department = department
>                      AND the_table_name = table_name) THEN
>         INSERT INTO cnfg_key_generation
>             VALUES (the_department, the_table_name, 0);
>     END IF;

I would get the insert out of there, too.  If it doesn't exist, throw an
exception.  I don't believe sequences should automatically create
themselves (the tables and columns don't).

--
SPY                      My girlfriend asked me which one I like better.
pub  1024/3CAE01D5 1994/11/03   Dustin Sallings <[EMAIL PROTECTED]>
|    Key fingerprint =  87 02 57 08 02 D0 DA D6  C8 0F 3E 65 51 98 D8 BE
L_______________________ I hope the answer won't upset her.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
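A minimal sketch of the exception-raising variant suggested above; this is a hypothetical illustration (not code from the thread), assuming the cnfg_key_generation table discussed here:

```sql
-- Hypothetical sketch: fail instead of auto-creating the counter row.
CREATE FUNCTION key_generation(integer, varchar(20)) RETURNS integer AS '
DECLARE
    the_department ALIAS FOR $1;
    the_table_name ALIAS FOR $2;
BEGIN
    IF NOT EXISTS (SELECT 1 FROM cnfg_key_generation
                   WHERE department = the_department
                     AND table_name = the_table_name) THEN
        -- behave like a missing sequence: refuse to invent one on the fly
        RAISE EXCEPTION ''no key counter for table % in department %'',
            the_table_name, the_department;
    END IF;
    -- ... increment and return the key as in the original function ...
END;
' LANGUAGE 'plpgsql';
```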
Re: [GENERAL] Simulating sequences
On Monday, Aug 18, 2003, at 07:27 US/Pacific, Vilson farias wrote:

> Greetings,
>
> I'm getting a big performance problem and I would like to ask you what
> would be the reason, but first I need to explain how it happens.
>
> Let's suppose I can't use sequences (it seems impossible but my boss
> doesn't like specific database features like this one).

You're just replacing the postgres implementation of sequences with your
own that has a similar API.  The postgres one will have a few advantages
over what you'll be able to write in plpgsql.  :)

That said, I don't use native sequences for large projects.  I do
something with a similar table to yours, but I have a key cache size on
each key.  A container refetches keys when it runs out.  I think I wrote
it originally because it made work easier (it makes complex object
relations easier if you can give things unique IDs before storing them),
but it turns out that it performs really well, because the cache reduces
access to the key table to a single update statement per block of keys.

If you're working in java, you can use my stuff from here:

    http://bleu.west.spy.net/~dustin/projects/spyjar.xtp

If you're implementing your own, you can read the particular class docs
here:

    http://bleu.west.spy.net/~dustin/spyjar/j2/doc/net/spy/db/GetPK.html

> For sequence simulation I had created a table called cnfg_key_generation
> and each tuple holds information for one of my tables
> (tablename, current_sequencial_number).  Let's check:
>
> CREATE TABLE cnfg_key_generation (
>     department   integer     NOT NULL,
>     table_name   varchar(20) NOT NULL,
>     current_key  integer     NOT NULL,
>     CONSTRAINT XPKcnfg_key_generation
>         PRIMARY KEY (department, table_name)
> );
>
> For example, for a table called 'my_test' I would have the following
> values:
>
>     department  = 1
>     table_name  = 'my_test'
>     current_key = 1432
>
> Every time I want a new key to use in my_test's primary key I just
> increment the current_key value.
For this job, I've created a simple stored procedure called key_generation:

CREATE FUNCTION key_generation(integer, varchar(20)) RETURNS integer AS '
DECLARE
    the_department ALIAS FOR $1;
    the_table_name ALIAS FOR $2;
    new_key_value  integer;
    err_num        integer;
BEGIN
    new_key_value := 0;
    LOCK TABLE cnfg_key_generation IN ROW EXCLUSIVE MODE;

    SELECT current_key INTO new_key_value
      FROM cnfg_key_generation
     WHERE the_department = department
       AND the_table_name = table_name;

    IF NOT FOUND THEN
        new_key_value := 1;
        INSERT INTO cnfg_key_generation
            VALUES (the_department, the_table_name, new_key_value);
    ELSE
        new_key_value := new_key_value + 1;
        UPDATE cnfg_key_generation
           SET current_key = new_key_value
         WHERE department = the_department
           AND table_name = the_table_name;
    END IF;

    RETURN new_key_value;
END;
' LANGUAGE 'plpgsql';

Data insertion is done the following way:

INSERT INTO my_test
    VALUES (key_generation(1, 'my_test'), ...other fields...);

Ok, this works, but it has a huge performance problem.  I read in the pg
docs that every time I make an UPDATE, a new tuple is created and the old
one is marked as invalid.  For a tuple that holds sequential key
generation, every time a new key is generated a new tuple is created
inside cnfg_key_generation.  That means after 2 million key generations
for the same table, performance will be completely degraded because there
will be 2 million old versions of the same tuple.

For instance, I have a table called 'cham_chamada' that currently holds
1.5 million tuples.  The response time for a key_generation execution for
this table is more than 5 seconds.  In the same situation, if I execute
key_generation for a table that has just a few values (current_key = 5,
for example), the response time is just a few milliseconds (30 to 50 ms).

I tried to fix this problem with a VACUUM and it was completely
ineffective: after execution the problem was still there.  Later I tried
every kind of vacuum I knew (with and without ANALYZE, specific for that
table, vacuumdb from the shell, ...)
I gave up and tried a COPY TO/DROP/CREATE/COPY FROM.  This time the
performance problem was fixed.

What can I do to solve this problem without table reconstruction?  Is
there a performance degradation in PostgreSQL that can't be solved?  If I
have a huge table with millions of rows being updated constantly, what
can I do to keep good performance if vacuum isn't 100%?

Do PostgreSQL sequences deal with these kinds of performance questions?
Is it clean, fast and secure to use sequences?  Maybe I can still change
my boss's mind about this subject if I get good arguments here to
justify the use of sequences.

Am I doing something stupid?

Best regards,

- José Vilson de Mello de Farias
  Software Engineer
  Dígitro Tecnologia Ltda - www.digitro.com.br
  APC - Customer Oriented Applications
  E-mail: [EMAIL PROTECTED]
  Tel.: +55 48 281 7158
  ICQ 11866179
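Dustin's key-cache idea from earlier in this thread can be sketched in plain SQL; this is a hypothetical illustration (table and column names assumed from this thread), not his actual spyjar code.  Each client grabs a block of, say, 100 keys in one transaction and hands them out from memory:

```sql
-- Hypothetical sketch of block allocation (cache size 100).
-- Run inside a transaction; the row lock taken by the UPDATE serializes
-- concurrent allocators.
BEGIN;
UPDATE cnfg_key_generation
   SET current_key = current_key + 100
 WHERE department = 1 AND table_name = 'my_test';

-- The block just reserved is (current_key - 99) .. current_key:
SELECT current_key - 99 AS first_key, current_key AS last_key
  FROM cnfg_key_generation
 WHERE department = 1 AND table_name = 'my_test';
COMMIT;
```

Only one dead tuple is created per 100 keys handed out, so the key table bloats 100 times more slowly than with a per-key UPDATE.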
Re: [GENERAL] Why lower's not accept an AS declaration ?
Stephan Szabo <[EMAIL PROTECTED]> writes:
> I can't really think of any other way to interpret that section
> particularly differently.  If it's a simple table query and the
> expression is not equivalent to a select list item then it can't use
> distinct or group by or a set function.

But this is bogus.  What is wrong with

    SELECT a, max(b) FROM t GROUP BY a ORDER BY min(c)

It would certainly be legal as

    SELECT a, max(b), min(c) AS ocol FROM t GROUP BY a ORDER BY ocol

but SQL99 seems to be written so that you can't write the former --- which
leaves me wondering exactly what they mean by features E121-02 and
E121-03 ...

After reading over the spec again I finally realized the significance of
this bit:

    i) Let X be any <column reference> directly contained in K(i).

    ii) If X does not contain an explicit <table name> or <correlation
    name>, then K(i) shall be a <column reference> that shall be
    equivalent to the name of exactly one column of ST.

Although they manage not to say so in so many words, it seems their
solution to the output-column-name vs input-column-name ambiguity is that
unqualified names in ORDER BY are output names, and qualified names are
input names.  Period, no alternatives.

I think we'd create too much of a backwards-compatibility problem for
ourselves if we adopt this verbatim.  I could go for (a) qualified names
are input columns, (b) unqualified names are sought first as output
columns and second as input columns.  This would accept every SQL99- or
SQL92-compatible query correctly.  It would also accept most queries that
we've historically accepted --- the gotchas would come if you rename an
output column with a name that conflicts with an input column, and then
refer to that (unqualified) name in an ORDER BY expression.  That seems
like a pretty small population of problems.

As for the other restrictions in the spec, I say lose 'em.  If an
expression would be valid as a SELECT-list entry, it should be valid in
ORDER BY.  (I have no idea exactly how hard this would be to implement,
btw.
I think the existing infrastructure for unnamed joins might help, but I'm
not sure.)

regards, tom lane
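The rename gotcha Tom describes can be seen with a contrived example; the table here is hypothetical and only for illustration:

```sql
-- Hypothetical illustration of the output-name vs input-name conflict.
CREATE TABLE t (a integer, b integer);

-- The first output column is named "b", but the table also has an input
-- column "b".  Under rule (b) above, ORDER BY b would sort by the output
-- column (i.e. by a), which may surprise anyone expecting the input
-- column:
SELECT a AS b, b AS c FROM t ORDER BY b;
```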
Re: [GENERAL] 'now' vs now() performance
Jeffrey Melloy <[EMAIL PROTECTED]> writes:
> The docs say that 'now' is turned into a constant right away.  Is this
> overhead/poor planning simply because 'now' gets converted to a
> constant so much earlier in the process?

Yes.  Note the estimated numbers of rows in the different plans.

In general, a one-sided inequality (col > something) will *not* get
turned into an indexscan unless the planner can see that 'something' is
close enough to the end of the range of 'col' that the indexscan will
pull only a reasonably small number of rows.  When the 'something' is
not determinable at plan time, the estimated number of rows will be
large enough to discourage an indexscan.

When you're certain that an indexscan is what you want, you can fake out
the planner by formulating the query as a range query with two variable
endpoints; for example

    message_timestamp > now() AND
    message_timestamp < (now() + '1000 years'::interval)

(adjusting this to 'date' datatype is left as an exercise for the
student).  The planner still doesn't know what's going on, but its guess
for a range query is a lot smaller than for an open-interval query; you
should get an indexscan from it.

regards, tom lane
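The date-datatype version of Tom's trick ("left as an exercise") might look like the following sketch; the column and table names are taken from Jeff's query, but the exact formulation is an assumption:

```sql
-- Hypothetical sketch: the same two-endpoint trick using date arithmetic
-- (date + integer adds days, so 365000 days is roughly 1000 years).
SELECT DISTINCT sender_id
  FROM messages
 WHERE message_date > now()::date
   AND message_date < now()::date + 365000;
```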
Re: [GENERAL] Why lower's not accept an AS declaration ?
On Mon, 18 Aug 2003, Tom Lane wrote:

> Stephan Szabo <[EMAIL PROTECTED]> writes:
> > Okay, I think many of the random restrictions (in 2a, the grouping,
> > distinct, set function spec) are to stop you from doing things like:
>
> > select distinct a from table order by b;
> > select a,min(b) from table group by a order by c;
> > select count(*) from table order by a;
>
> > All of which seem badly defined to me
>
> Agreed, but restrictions on those grounds should be identical to the
> restrictions on what you can write in a SELECT-list item.  AFAICT the
> restrictions actually cited here are quite different.

I see that it's different for grouping, for example, because it doesn't
mention the grouping columns as being okay, although simple column
references to input names of grouping columns that are directly
mentioned in the select list are okay because of the equivalence.

I can't really think of any other way to interpret that section
particularly differently.  If it's a simple table query and the
expression is not equivalent to a select list item then it can't use
distinct or group by or a set function.  We might argue about the
meaning of simple table query or equivalent, but 2.A.I and II seem
pretty straightforward.  I'm a little vague on why they worded IV the
way they did, but VI seems to imply that you're adding items to the sort
table (that you take out later) in order to make the column references
match up.

> > The whole definition of simple table query seems to boil down to the
> > fact that the query expression must be a query specification (which
> > would appear to kill UNION/INTERSECT/EXCEPT, which makes sense since
> > input column names aren't necessarily meaningful in that case).
>
> Right, you could only use output column names for an ORDER BY on a
> UNION/etc.  We have that restriction already.  But is that really all
> they're saying here?
The two cases I mentioned, union and its ilk and the loose joins without
a select list, are the cases I could see going through the section on
query expression definition.  There could be more, but AFAICS SELECT FROM
seems to meet the requirements mentioned by simple table query.
[GENERAL] 'now' vs now() performance
I was recently running into performance problems with a query containing
now()::date or CURRENT_DATE.  When I went to debug, 'now'::date made
efficient use of the index (on a timestamp field).  The docs say that
'now' is turned into a constant right away.  Is this overhead/poor
planning simply because 'now' gets converted to a constant so much
earlier in the process?  I've pasted the query plans below.

Jeff

jmelloy=# explain analyze select distinct sender_id from messages
          where message_date > now()::date;
                               QUERY PLAN
--------------------------------------------------------------------------
 Unique  (cost=4517.17..4639.74 rows=2451 width=4)
         (actual time=1697.62..1697.90 rows=4 loops=1)
   ->  Sort  (cost=4517.17..4578.45 rows=24515 width=4)
             (actual time=1697.61..1697.74 rows=62 loops=1)
         Sort Key: sender_id
         ->  Seq Scan on messages  (cost=0.00..2729.88 rows=24515 width=4)
                 (actual time=1695.42..1697.22 rows=62 loops=1)
               Filter: (message_date > ((now())::date)::timestamp without time zone)
 Total runtime: 1698.11 msec
(6 rows)

jmelloy=# explain analyze select distinct sender_id from messages
          where message_date > 'now'::date;
                               QUERY PLAN
--------------------------------------------------------------------------
 Unique  (cost=201.86..202.14 rows=6 width=4)
         (actual time=1.24..1.52 rows=4 loops=1)
   ->  Sort  (cost=201.86..202.00 rows=56 width=4)
             (actual time=1.23..1.36 rows=62 loops=1)
         Sort Key: sender_id
         ->  Index Scan using adium_msg_date_sender_recipient on messages
                 (cost=0.00..200.22 rows=56 width=4)
                 (actual time=0.23..0.84 rows=62 loops=1)
               Index Cond: (message_date > '2003-08-18 00:00:00'::timestamp without time zone)
 Total runtime: 1.74 msec
(6 rows)
Re: [GENERAL] Why lower's not accept an AS declaration ?
Stephan Szabo <[EMAIL PROTECTED]> writes:
> Okay, I think many of the random restrictions (in 2a, the grouping,
> distinct, set function spec) are to stop you from doing things like:
> select distinct a from table order by b;
> select a,min(b) from table group by a order by c;
> select count(*) from table order by a;
> All of which seem badly defined to me

Agreed, but restrictions on those grounds should be identical to the
restrictions on what you can write in a SELECT-list item.  AFAICT the
restrictions actually cited here are quite different.

> The whole definition of simple table query seems to boil down to the fact
> that the query expression must be a query specification (which would
> appear to kill UNION/INTERSECT/EXCEPT, which makes sense since input
> column names aren't necessarily meaningful in that case).

Right, you could only use output column names for an ORDER BY on a
UNION/etc.  We have that restriction already.  But is that really all
they're saying here?

regards, tom lane
Re: [GENERAL] 3 way outer join dilemma
On Mon, 18 Aug 2003 [EMAIL PROTECTED] wrote:

> Here's what I have (simplified)
>
> SELECT t1.fid, t1.t1_data, t2.vid, t2.t2_data, t3.t3_data
> FROM t1, t2, t3
> WHERE t1.fid = X
> AND t2.vid = Y
> AND t3.fid = t1.fid
> AND t3.vid = t2.vid
>
> Now, I discover that the record in t3 may not always exist, so somehow I
> want to do an outer join...
>
> SELECT t1.fid, t1.t1_data, t2.vid, t2.t2_data, t3.t3_data
> FROM t1, t2, OUTER JOIN t3 ON (t3.fid = t1.fid AND t3.vid = t2.vid)
> WHERE t1.fid = X
> AND t2.vid = Y

Maybe:

FROM t1 CROSS JOIN t2
     LEFT OUTER JOIN t3 ON ...
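Spelled out against the original query, the suggested FROM clause would look something like the sketch below (X and Y stand for the literal values, as in the original post):

```sql
-- Hypothetical full form of the suggested rewrite: t3 rows may be
-- missing, so its columns come back NULL instead of dropping the row.
SELECT t1.fid, t1.t1_data, t2.vid, t2.t2_data, t3.t3_data
  FROM t1
       CROSS JOIN t2
       LEFT OUTER JOIN t3 ON (t3.fid = t1.fid AND t3.vid = t2.vid)
 WHERE t1.fid = X
   AND t2.vid = Y;
```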
Re: [GENERAL] Why lower's not accept an AS declaration ?
On Mon, 18 Aug 2003, Tom Lane wrote:

> Stephan Szabo <[EMAIL PROTECTED]> writes:
> > -- This seems really different from our previous standard reading of SQL92
> > though.  It implies that you can't really do stuff on input columns
> > except in very limited cases and that'd be really bad.
>
> Yes, it seems fraught with bogus restrictions, which makes me wonder if
> we're interpreting it correctly.
>
> I could understand a definition that says "unqualified names are first
> sought as output column names, and if no match then treated as input
> column names; qualified names are always input column names".  Perhaps
> that's what they're really trying to do, but why all the strange
> verbiage?

Okay, I think many of the random restrictions (in 2a, the grouping,
distinct, set function spec) are to stop you from doing things like:

 select distinct a from table order by b;
 select a,min(b) from table group by a order by c;
 select count(*) from table order by a;

All of which seem badly defined to me, since in none of those cases does
the ordering really make sense: you can't necessarily distinctly choose
a value for sorting for each output row (or the output row in the last
case).

The whole definition of simple table query seems to boil down to the
fact that the query expression must be a query specification (which
would appear to kill UNION/INTERSECT/EXCEPT, which makes sense since
input column names aren't necessarily meaningful in that case).  I
believe that the grammar for query expression seems to allow something
like FOO INNER JOIN BAR ON (FOO.A=BAR.B) as an entire query expression
without a SELECT or select list -- and that would be disallowed as well
--, but AFAIK we don't support that anyway.

So the rules for the input column references are: you cannot do it
through distinct, group by, set functions or UNION/INTERSECT/EXCEPT.
You can also not do it through some weird SQL99 constructs we don't
support.
:)
[GENERAL] 3 way outer join dilemma
Here's what I have (simplified)

SELECT t1.fid, t1.t1_data, t2.vid, t2.t2_data, t3.t3_data
FROM t1, t2, t3
WHERE t1.fid = X
AND t2.vid = Y
AND t3.fid = t1.fid
AND t3.vid = t2.vid

Now, I discover that the record in t3 may not always exist, so somehow I
want to do an outer join...

SELECT t1.fid, t1.t1_data, t2.vid, t2.t2_data, t3.t3_data
FROM t1, t2, OUTER JOIN t3 ON (t3.fid = t1.fid AND t3.vid = t2.vid)
WHERE t1.fid = X
AND t2.vid = Y

But I get the statement that "t1 is not part of JOIN"

Is there some way that I can merge t1 and t2 together, or do I have to
do a subselect (ugh) as the only viable alternative?

Any ideas are appreciated...

Terry Fielder
Manager Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
[EMAIL PROTECTED]
Fax: (416) 441-9085
Re: [GENERAL] Why lower's not accept an AS declaration ?
Because the people who created it had doctorate degrees?  Kidding :-)

Tom Lane wrote:

> Stephan Szabo <[EMAIL PROTECTED]> writes:
> > -- This seems really different from our previous standard reading of SQL92
> > though.  It implies that you can't really do stuff on input columns
> > except in very limited cases and that'd be really bad.
>
> Yes, it seems fraught with bogus restrictions, which makes me wonder if
> we're interpreting it correctly.
>
> I could understand a definition that says "unqualified names are first
> sought as output column names, and if no match then treated as input
> column names; qualified names are always input column names".  Perhaps
> that's what they're really trying to do, but why all the strange
> verbiage?
>
> regards, tom lane
Re: [GENERAL] Why lower's not accept an AS declaration ?
Stephan Szabo <[EMAIL PROTECTED]> writes:
> -- This seems really different from our previous standard reading of SQL92
> though.  It implies that you can't really do stuff on input columns
> except in very limited cases and that'd be really bad.

Yes, it seems fraught with bogus restrictions, which makes me wonder if
we're interpreting it correctly.

I could understand a definition that says "unqualified names are first
sought as output column names, and if no match then treated as input
column names; qualified names are always input column names".  Perhaps
that's what they're really trying to do, but why all the strange
verbiage?

regards, tom lane
Re: [GENERAL] Manage PostgreSQL Connections
Thank you for the information.

> the easy way to do that is to create an alternate pg_hba.conf in the
> $PGDATA directory that only allows the postgresql superuser to attach and
> then restart postgresql with that pg_hba.conf in place.

By that you mean to have a pg_hba.conf file in the $PGDATA directory
named, for example, pg_hba.conf.dump, and when I want to perform the
backup/restore, to rename it to pg_hba.conf and do a
/etc/init.d/postgresql reload?  And by doing that users will not lose
their data between the dump and restore?  I think by doing
/etc/init.d/postgresql reload with the new pg_hba.conf users will be cut
off brutally, or am I wrong?

Thank you again,

> On Thu, 14 Aug 2003, Nagy Karoly wrote:
>
> > Is there any way to close the connections of other users in PostgreSQL?
> > I wish to run an automatic pg_dump and pg_restore and I guess users
> > must be disconnected first.  Is that right?
>
> Actually, for a pg_dump, no, you don't need to disconnect people.
> pg_dump creates a single snapshot backup that is consistent across the
> database you are backing up (not the whole cluster of databases, just
> the one you're pointing pg_dump at at the moment.)
>
> It really depends on what you are doing.  If you're gonna pg_dump /
> drop database / create database / pg_restore it might be a good idea to
> disconnect people so they don't lose data between the dump and restore.
>
> the easy way to do that is to create an alternate pg_hba.conf in the
> $PGDATA directory that only allows the postgresql superuser to attach
> and then restart postgresql with that pg_hba.conf in place.

--
Nagy Károly Gabriel
R&D Manager
Expert Software Group
410066 Oradea, Al. Gojdu 2
Tel. +4 0259 230 776
http://www.expert-software.ro

""Nagy Karoly"" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
Is there any way to close the connections of other users in PostgreSQL?
I wish to run an automatic pg_dump and pg_restore and I guess users must
be disconnected first.  Is that right?
Nagy Károly Gabriel
R&D Manager
Expert Software Group
410066 Oradea, Al. Gojdu 2
Tel. +4 0259 230 776
http://www.expert-software.ro
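The alternate "superuser only" pg_hba.conf mentioned above might look something like the fragment below.  This is a sketch only: the exact column layout of pg_hba.conf varies by PostgreSQL version, so check the format your server expects before using it.

```
# Hypothetical "dump-time" pg_hba.conf: leave only a superuser entry and
# remove or comment out every other line, so ordinary users cannot
# reconnect while the dump/restore is in progress.
#
# TYPE   DATABASE   USER       METHOD
local    all        postgres   trust
```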
[GENERAL] Example Database
I am rather new to PostgreSQL.  What I am looking for is this: an example
database that demonstrates the usage of all the core PostgreSQL features.

It is one thing to read extensive documentation, but it is much more
intuitive for me to see an example database that demonstrates the whole
thing interacting.  I mean, not just the basic elements, but an optimized
(and maybe nicely documented) database getting the best out of views,
triggers, indices, rules, stored procedures and so on - maybe even a
reference database from the PostgreSQL developers themselves?

I have been searching for some hours now, but did not succeed.  Maybe I
am just being blind.  Thanks for any hints!

Regards,
Erwin Brandstetter
Re: [GENERAL] Why lower's not accept an AS declaration ?
On Mon, 18 Aug 2003, Tom Lane wrote:

> Stephan Szabo <[EMAIL PROTECTED]> writes:
> > Actually, rereading SQL99, I wonder if it would expect this to work.
> > Using 14.1's wording on order by clauses, syntax rule 18, h
>
> Hmm ... that section is not exactly crystal-clear, is it?  I had been
> thinking of the part about deliberate incompatibilities with SQL92,
> but rereading that, I see it only says they've eliminated the ability
> to reference output columns by *number*, not by name.
>
> Yet if they merely want to say "we allow expressions in the output
> column names", why wouldn't they say that?  This section is about ten
> times longer than it would need to be to express that concept.  I get
> the impression that they're trying to compromise between allowing
> output column names and input column names, but I sure don't follow
> exactly how the compromise is supposed to work.  And there are a ton of
> apparently-unnecessary restrictions (no grouping, no subqueries in the
> sort keys) that make me wonder what's going on.

My reading is basically:

You can make column references to output columns.

If you make column references to things that aren't output columns, then
the query must be a "simple table query" (as per the definition in e).

If the expression is not equivalent to one of the output value
expressions, the restrictions listed (no grouping, etc...) apply and
treat it as if you added the appropriate columns to the output select
list.  ** This bit is very unclear, but it seems reasonable given the
mention of removing extended sort key columns from the output later in
the general rules. **

If it is equivalent to one of the output value expressions then act as
if the output column name was used instead of the expression.

You cannot use subqueries or set functions in the order by.

-- This seems really different from our previous standard reading of
SQL92 though.
It implies that you can't really do stuff on input columns except in very
limited cases and that'd be really bad.
Re: [GENERAL] Hour difference?
Well, that might help, thanks... :)

BTJ

On Mon, 2003-08-18 at 20:47, Steve Worsley wrote:
> fingerless=# select '7:43'::time AS start, '12:17'::time AS end,
> (('12:17'::time) - ('7:43'::time))::interval AS difference;
>   start   |   end    | difference
> ----------+----------+------------
>  07:43:00 | 12:17:00 | 04:34
> (1 row)
>
> Hope that helps.. Just substitute your column names for the times.
>
> --Steve
>
> Bjørn T Johansen wrote:
>
> > I need to compute the difference of Time fields, in the format HHMM.
> > Is it possible to do the math in the Select?
> >
> > Regards,
> >
> > BTJ
[GENERAL] factors determining the execution plan
The osdl-dbt3 test starts with building and vacuuming the database.  The
execution plans were taken after the vacuuming.  I did two tests with the
same database parameters:

1. Run two osdl-dbt3 runs on one box without rebooting the system.
Though the execution plans are the same, the costs are different.  The
system state is different for the two runs; for example, some page cache
is not released after the first one.  Does that make the cost different?

2. Run two osdl-dbt3 runs on two boxes.  The hardware of the two boxes
is the same, and each run starts from scratch (building the linux
kernel, pgsql etc., and rebooting).  To my surprise, not only are the
costs different between the two runs, the execution plan changed for
Query 9.  The execution plans can be found at:

http://khack.osdl.org/stp/277780/results/plan/power_query9.txt
http://khack.osdl.org/stp/29/results/plan/power_query9.txt

My test leads me to the following questions:

What are the factors which determine the execution plan and cost?  Does
PostgreSQL check the system resources dynamically?  If INSERTs and
UPDATEs happened but vacuuming is not executed, does the execution plan
change?

Thanks,
--
Jenny Zhang
Open Source Development Lab Inc
12725 SW Millikan Way Suite 400
Beaverton, OR 97005
(503)626-2455 ext 31
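Two of the factors behind those questions (optimizer statistics and the planner cost settings) can be seen at work with a quick experiment; this is a generic sketch using the dbt3/TPC-H lineitem table, and the exact queries are illustrative assumptions:

```sql
-- Sketch: the plan depends on statistics gathered by ANALYZE and on the
-- planner cost parameters, not on live system load.  Refreshing the
-- statistics after inserts/updates can change the chosen plan:
ANALYZE lineitem;                  -- update row-count/selectivity stats
EXPLAIN SELECT * FROM lineitem WHERE l_orderkey = 1;

-- Cost parameters also shift the plan choice, e.g.:
SET random_page_cost = 2;          -- default is 4
EXPLAIN SELECT * FROM lineitem WHERE l_orderkey = 1;
```

This is also why a run without ANALYZE after heavy inserts/updates can pick a different plan than a freshly analyzed one: the planner works from stored statistics, not from the table's current contents.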
Re: [GENERAL] Simulating sequences
"Vilson farias" <[EMAIL PROTECTED]> writes:
> Let's suppose I can't use sequences (it seems impossible but my boss
> doesn't like specific database features like this one).

If sequences could be effectively replaced by standard SQL operations,
we would not have bothered to invent them.  Nor would other databases
have bothered to invent their comparable features (autoincrement in
MySQL, etc).  Your boss has got his head screwed on backwards on this
point --- writing a sequence replacement will not work well, and will
not be markedly more portable to other databases.

regards, tom lane
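For comparison, the native-sequence version of the setup from this thread would be roughly the following; the table is a hypothetical stand-in for the 'my_test' example:

```sql
-- Sequences do not leave a dead tuple behind on every increment and are
-- safe under concurrency without any explicit LOCK TABLE.
CREATE SEQUENCE my_test_seq;

CREATE TABLE my_test (
    id integer DEFAULT nextval('my_test_seq') PRIMARY KEY
    -- ... other fields ...
);

-- Every INSERT gets a fresh key automatically:
INSERT INTO my_test DEFAULT VALUES;
```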
Re: [GENERAL] Hour difference?
Is there a way to get an interval in a standard format?  It seems like
it keeps changing its output style based on the time length.

Jon

On Mon, 18 Aug 2003, Bruno Wolff III wrote:

> On Mon, Aug 18, 2003 at 16:09:43 +0200,
>   Bjørn T Johansen <[EMAIL PROTECTED]> wrote:
> > I need to compute the difference of Time fields, in the format HHMM.
> > Is it possible to do the math in the Select?
>
> Despite what it says in the documentation, you can't use that format
> for the type time.
> If timestamps will work for you, you can use to_timestamp to convert
> to timestamps and then subtract them to get an interval.
> Another option would be to massage the strings to use a : separator
> between the hours and minutes fields and then cast the strings to
> times.
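One way to get a fixed output format regardless of the interval's size is to format it explicitly.  A sketch (whether to_char accepts an interval argument depends on your PostgreSQL version, so verify on yours first):

```sql
-- Render the difference as HH24:MI no matter how it would print by
-- default:
SELECT to_char('12:17'::time - '7:43'::time, 'HH24:MI') AS difference;

-- Or convert to total seconds and do the formatting yourself:
SELECT EXTRACT(EPOCH FROM ('12:17'::time - '7:43'::time)) AS seconds;
```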
Re: [GENERAL] Simulating sequences
> wouldn't a better situation be ADDING a record that is one higher, and
> then doing a select MAX()?
>
> The different triggers could do delete on the old records.

In my case that would not apply, because what I had was a need to keep a
"sequence" counter for each employee, so I added a column
("expense_report_seq") to the employee table:

CREATE TABLE paid.employee (
    employee_pk serial,
    person_pk int4 NOT NULL,
    employee_identifier varchar(24),
    hire_date date,
    termination_date date,
    health_insurance_code_pk int4,
    performance_review_date date,
    emergency_contact_pk int4,
    labor_category_pk int4,
    expense_report_seq int4 DEFAULT 0);

The incremented value of the expense_report_seq column is then inserted
in the expense_pk column for a new row in the expense table, thus
keeping a separate sequence for each employee:

CREATE TABLE paid.expense (
    project_pk int4 NOT NULL,
    organization_pk int4 NOT NULL,
    employee_pk int4 NOT NULL,
    expense_pk int4 NOT NULL,
    expense_report_date date DEFAULT now() NOT NULL,
    expense_date date DEFAULT now() NOT NULL,
    CONSTRAINT expense_pkey
        PRIMARY KEY (project_pk, organization_pk, employee_pk, expense_pk),
    CONSTRAINT expense_fkey FOREIGN KEY (employee_pk)
        REFERENCES employee (employee_pk)
) WITHOUT OIDS;

Then there is the trigger:

CREATE TRIGGER expense_bit BEFORE INSERT ON paid.expense
    FOR EACH ROW EXECUTE PROCEDURE expense_bit();

where

CREATE FUNCTION paid.expense_bit() RETURNS trigger AS '
BEGIN
    SELECT INTO NEW.expense_pk expense_report_next(new.employee_pk);
    RETURN new;
END;
' LANGUAGE 'plpgsql' VOLATILE;

where

CREATE FUNCTION paid.expense_report_next(int4) RETURNS int4 AS '
DECLARE
    l_employee_pk ALIAS FOR $1;
BEGIN
    UPDATE employee
       SET expense_report_seq = (expense_report_seq + 1)
     WHERE employee_pk = l_employee_pk;

    RETURN (SELECT expense_report_seq FROM employee
             WHERE employee_pk = l_employee_pk);
END;' LANGUAGE 'plpgsql' VOLATILE;

Seems to work o.k., but this is not a large database with gazillions of
transactions.
~Berend Tober
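An illustration of what the trigger setup above produces; the data here is hypothetical and assumes an employee row with employee_pk = 7 already exists:

```sql
-- Each insert receives the employee's next private counter value in
-- expense_pk (1, 2, 3, ... per employee, independent of other
-- employees); the BEFORE INSERT trigger fills expense_pk, so the
-- insert can omit it.
INSERT INTO paid.expense (project_pk, organization_pk, employee_pk)
    VALUES (1, 1, 7);
INSERT INTO paid.expense (project_pk, organization_pk, employee_pk)
    VALUES (1, 1, 7);

SELECT employee_pk, expense_pk
  FROM paid.expense
 WHERE employee_pk = 7
 ORDER BY expense_pk;
-- expense_pk comes back as 1 and 2, assuming the counter started at 0
```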
Re: [GENERAL] newbie and no idea
You can try the following.  If you know root's password, you don't need
to set postgres's password.  But if you don't know the root password,
you can't change the password for the postgres user (to change a
password you can use the passwd command).

su root
su postgres
createuser aamehl
logout
logout
createdb nigun_test

bye
Pavel Stehule

On 18 Aug 2003, Aaron wrote:

> A friend of mine just wrote a database for me using postgres.
> I have postgres installed but can't figure out how to change the passwd
> for user postgres.
>
> [EMAIL PROTECTED] aamehl]# ps -U postgres
>   PID TTY     TIME     CMD
>  4014 pts/1   00:00:00 postmaster
>  4016 pts/1   00:00:00 postmaster
>  4017 pts/1   00:00:00 postmaster
> [EMAIL PROTECTED] aamehl]#
> ---
> [EMAIL PROTECTED] aamehl]# ps -l -C postmaster
> F S  UID  PID  PPID C PRI NI ADDR   SZ WCHAN  TTY   TIME     CMD
> 0 S  26  4014     1 0  72  0 -    2446 do_sel pts/1 00:00:00 postmaster
> 1 S  26  4016  4014 0  72  0 -    2693 do_sel pts/1 00:00:00 postmaster
> 1 S  26  4017  4016 0  72  0 -    2451 do_sel pts/1 00:00:00 postmaster
> [EMAIL PROTECTED] aamehl]#
>
> [EMAIL PROTECTED] aamehl]# createdb -U postgres nigun_test
> psql: FATAL:  IDENT authentication failed for user "postgres"
>
> createdb: database creation failed
> [EMAIL PROTECTED] aamehl]#
>
> -
> Any idea??
> I am also interested in getting some front end up and running.
> Thanks
> Aaron
Re: [GENERAL] newbie and no idea
Hi, On 18 Aug 2003, Aaron wrote: > [EMAIL PROTECTED] aamehl]# createdb -U postgres nigun_test > psql: FATAL: IDENT authentication failed for user "postgres" Quick and easy solution for you: edit ~postgres/data/pg_hba.conf and replace all "ident" string to "trust". Then, restart PostgreSQL server. The server will not prompt you any password. If you want to learn more about authentication methods, then download the administrator manual from http://www.PostgreSQL.org/docs/#pdfs and follow the "Authentication Methods" section. Regards, -- Devrim GUNDUZ [EMAIL PROTECTED] [EMAIL PROTECTED] http://www.tdmsoft.com http://www.gunduz.org ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
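As a concrete illustration of the edit Devrim describes, the change in pg_hba.conf might look like the sketch below. The exact column layout varies between PostgreSQL versions and installs, so treat these lines as an assumed example rather than the literal contents of your file:

```
# pg_hba.conf -- before (typical RPM install; your lines may differ)
# TYPE  DATABASE  USER  AUTH-METHOD
local   all       all   ident sameuser
host    all       all   127.0.0.1 255.255.255.255 ident sameuser

# after -- "trust" asks for no password at all, so only do this
# on a private development machine, then restart the server
local   all       all   trust
host    all       all   127.0.0.1 255.255.255.255 trust
```

After saving the file, restart (or reload) the PostgreSQL server for the change to take effect.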
Re: [GENERAL] Why lower's not accept an AS declaration ?
Hervé Piedvache wrote: An to be more precise what I exactly want to do : select case when 'now' between t.begin and t.end then t.login else 'None' end as log from my_table t order by lower(log); Try: select log from (select case when 'now' between t.begin and t.end then t.login else 'None' end as log from my_table t) as ss order by lower(log); HTH, Joe ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Why lower's not accept an AS declaration ?
On Mon, 18 Aug 2003, Tom Lane wrote: > =?iso-8859-15?q?Herv=E9=20Piedvache?= <[EMAIL PROTECTED]> writes: > > Is it an example more realistic for you to make an order by lower of > > something as an alias ? > > Aliases attached to SELECT output columns are visible outside the > SELECT, not inside it. The special case for ORDER BY simple-column-name > is a kluge for compatibility with a now-obsolete version of the SQL spec > (SQL92 expects this to work, SQL99 doesn't) and we aren't going to > extend it. See past discussions in the archives (I seem to recall > answering this same question within the past week...) Actually, rereading SQL99, I wonder if it would expect this to work. Using 14.1's wording on order by clauses, syntax rule 18, h "Ki is a ... shall contain a . i) Let X be any column reference directly contained in Ki. ii) If X does not contain an explicit or , then Ki shall be a that shall be equivalent to the name of exactly one column of ST." T is the result of evaluating the query expression. If no sort key refers to a column that isn't a column of T then ST is the same as T. If the result of evaluating the query expression (T) is the output of the query expression with the output column names then it should allow output column names in the value expressions of the order by clause I believe. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] Why lower's not accept an AS declaration ?
- Original Message - From: "Hervé Piedvache" <[EMAIL PROTECTED]> To: "Darko Prenosil" <[EMAIL PROTECTED]>; "Postgresql General" <[EMAIL PROTECTED]> Sent: Monday, August 18, 2003 6:59 PM Subject: Re: [GENERAL] Why lower's not accept an AS declaration ? > Hi, > > An to be more precise what I exactly want to do : > > select > case when 'now' between t.begin and t.end then t.login else 'None' end as log > from my_table t > order by lower(log); > Here is the rewired query that works : CREATE TABLE my_table ("begin" timestamp, "end" timestamp, login varchar(100)); select case when now() between "t"."begin" and "t"."end" then t.login else 'None' end as log from my_table t order by lower(1); where number 1 is the number of result column. I'm puzzled too now, because according to docs, it should work. Here is the part from docs that even explains what happens if the real table column name and result alias are the same: If an ORDER BY expression is a simple name that matches both a result column name and an input column name, ORDER BY will interpret it as the result column name. This is the opposite of the choice that GROUP BY will make in the same situation. This inconsistency is made to be compatible with the SQL standard. I must confess that I wasn't reading Your mail carefully. Sorry ! You were right ! Regards ! ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Why lower's not accept an AS declaration ?
On Monday 18 August 2003 18:59, Hervé Piedvache wrote: > Hi, > > An to be more precise what I exactly want to do : > > select > case when 'now' between t.begin and t.end then t.login else 'None' end as > log from my_table t > order by lower(log); How about something like: select case when 'now' between t.begin and t.end then t.login else 'None' end as log, LOWER(case when 'now' between t.begin and t.end then t.login else 'None' end) as log_lower from my_table t order by 2; Ian Barwick [EMAIL PROTECTED] ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] newbie and no idea
On Monday 18 August 2003 01:41 pm, Aaron wrote: > On Mon, 2003-08-18 at 21:25, Pavel Stehule wrote: > > You can try > > > > If you know root's password, > > I just installed postgres from rpm and I didn't add passwords. I > certainly don't know the postgres root password... > > maybe I should reinstall postgres?? > Aaron By the looks of your command line prompts, "[EMAIL PROTECTED] aamehl]#", you are already logged in as root. Root doesn't need to know another user's password to change it -- just change it: [EMAIL PROTECTED] aamehl]# passwd postgres You will then be prompted for the new password. Also, root doesn't need a password to become another user: [EMAIL PROTECTED] aamehl]# su postgres Although this sounds great, this kind of power justifies the creation of a regular (not a superuser) user account for routine use, especially if you are a newbie. Best of luck, Andrew Gould ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Why lower's not accept an AS declaration ?
- Original Message - From: "Darko Prenosil" <[EMAIL PROTECTED]> To: "Hervé Piedvache" <[EMAIL PROTECTED]>; "Postgresql General" <[EMAIL PROTECTED]> Sent: Monday, August 18, 2003 10:09 PM Subject: Re: [GENERAL] Why lower's not accept an AS declaration ? > > - Original Message - > From: "Hervé Piedvache" <[EMAIL PROTECTED]> > To: "Darko Prenosil" <[EMAIL PROTECTED]>; "Postgresql General" > <[EMAIL PROTECTED]> > Sent: Monday, August 18, 2003 6:59 PM > Subject: Re: [GENERAL] Why lower's not accept an AS declaration ? > > > > Hi, > > > > An to be more precise what I exactly want to do : > > > > select > > case when 'now' between t.begin and t.end then t.login else 'None' end as > log > > from my_table t > > order by lower(log); > > > > Here is the rewired query that works : > > CREATE TABLE my_table ("begin" timestamp, "end" timestamp, login > varchar(100)); > > select case > when now() between "t"."begin" and "t"."end" then t.login > else 'None' > end > as log > from my_table t > order by lower(1); > > > where number 1 is the number of result column. I'm puzzled too now, because > according to docs, it should work. > Here is the part from docs that even explains what happens if the real table > column name and result alias are the same: > > If an ORDER BY expression is a simple name that matches both a result column > name and an input column name, ORDER BY will interpret it as the result > column name. This is the opposite of the choice that GROUP BY will make in > the same situation. This inconsistency is made to be compatible with the SQL > standard. > > I must confess that I wasn't reading Your mail carefully. Sorry ! You were > right ! > Regards ! > Wrong again ! This works, but it does not sorting anything. We can say that ORDER BY accepts both column numbers and column aliases, but not column numbers and aliases as arguments in functions. 
I can say this because this works : select case when now() between "t"."begin" and "t"."end" then lower(t.login) else 'none' end as log from my_table t order by 1 ASC; same as: select case when now() between "t"."begin" and "t"."end" then lower(t.login) else 'none' end as log from my_table t order by log ASC; Sorry for the mess ! Regards ! ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] Default Value in Table Setup Help
On Mon, Aug 18, 2003 at 14:57:12 -0400, Dev <[EMAIL PROTECTED]> wrote: > Hello all, > > I am working on setting up a table that will append a sequence to the end > of the value inserted. > Example; > INSERT INTO test (test) VALUES ('abcd'); > And have the data in the database be; > abcd0001 > > Now I do have things setup else where were the default value for the field > is such: > default ('abcd'::text || lpad(text(nextval('test_sequence'::text)), 4, > '0'::text)) > > But i want the "abcd" or what erver to be added in the insert? > > What am I missing to make this happen? I think you want to use a trigger to do this. The default function only gets used if you don't supply a value. ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Performance with different index types
On Mon, 18 Aug 2003, Johann Uhrmann wrote:

> Hello,
>
> are there any experiences about the performance of indices
> with different data types.
>
> How does the performance of an index that consists of
>
> - an integer field
> - a varchar() field
> - a text field
>
> differ?

It's not so much the field type as what you are doing with it. Btree indexes are the default, and often your best choice. Rtree indexes are often used for spatial comparisons (i.e. "is this point inside this polygon" stuff). Hash indexes should be a win for certain types of problems, but their implementation is pretty slow in PostgreSQL, so you're often still better off with a btree index. GiST seems like it may replace Rtree indexes at some future date, but it is still kind of in development. If you need multi-column indexes, you have to use either btree or GiST. The standard rule of thumb is: when in doubt, use btree. :-)

Also, look into partial / functional indexes. For instance, if you have a boolean column in a table with 3 million rows, and fewer than 100 of those rows have the boolean set to true while the others are all false, it might make sense to create a partial index on that field for when you want one of those 100 rows:

create index abc123 on tableabc (bool_field) where bool_field IS TRUE;

The other issue folks have when they start using PostgreSQL is that it sometimes tends to seq scan when you think it should be using the index. It may well be that a seq scan is a better choice, but often it's not, and the query planner just doesn't have enough information to know that. So you need to vacuum, analyze, and possibly edit your postgresql.conf file's random_page_cost, effective_cache_size, and a few other fields to give the planner a kick in the right direction.

The final issue is the one of type mismatch.
If you've got a field with an int8, and you do this: select * from table where int8field=123; the planner may not use your index on int8field, since 123 gets coerced to int4. You need to cast the 123 to int in one of a few ways: select * from table where int8field=cast (123 as int8); <- SQL spec way select * from table where int8field=123::int8; select * from table where int8field='123'; > Is it a waste of memory/performance to make a text field > primary key? Well, that depends. If the text field is the NATURAL key, and you'll likely want to refer to it from other tables, then it's often a good choice, semantically at least, to use it as a pk. Sometimes, though, you need better performance, and then you can use an artificial pk, like a serial column, and create a unique index on the "natural" key column (i.e. the text field) to make sure it stays unique, but use the serial column for all table joins and such. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Default Value in Table Setup Help
On Mon, 18 Aug 2003, Dev wrote: > Hello all, > > I am working on setting up a table that will append a sequence to the end > of the value inserted. > Example; > INSERT INTO test (test) VALUES ('abcd'); > And have the data in the database be; > abcd0001 > > Now I do have things setup else where were the default value for the field > is such: > default ('abcd'::text || lpad(text(nextval('test_sequence'::text)), 4, > '0'::text)) > > But i want the "abcd" or what erver to be added in the insert? > > What am I missing to make this happen? You don't want to use a default (since that'll be ignored if you actually pass in a value for the column). You probably want a before trigger that alters the value that was inserted before the insertion actually happens. ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
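The before trigger suggested above might be sketched as follows. This is a hypothetical example, not the poster's actual code: the table, column, and sequence names (`test`, `test`, `test_sequence`) are taken from the question, and the function name `test_append_seq` is invented for illustration:

```sql
-- Sketch: append a zero-padded counter from test_sequence to whatever
-- value was supplied in the INSERT, before the row is stored.
CREATE FUNCTION test_append_seq() RETURNS trigger AS '
BEGIN
    NEW.test := NEW.test ||
        lpad(nextval(''test_sequence'')::text, 4, ''0'');
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER test_append_seq_bit BEFORE INSERT ON test
    FOR EACH ROW EXECUTE PROCEDURE test_append_seq();

-- INSERT INTO test (test) VALUES ('abcd');
-- would then store something like abcd0001
```

Unlike a column default, the trigger fires whether or not a value was supplied, which is exactly the behavior the poster wants.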
Re: [GENERAL] Why lower's not accept an AS declaration ?
Stephan Szabo <[EMAIL PROTECTED]> writes: > Actually, rereading SQL99, I wonder if it would expect this to work. > Using 14.1's wording on order by clauses, syntax rule 18, h Hmm ... that section is not exactly crystal-clear, is it? I had been thinking of the part about deliberate incompatibilities with SQL92, but rereading that, I see it only says they've eliminated the ability to reference output columns by *number*, not by name. Yet if they merely want to say "we allow expressions in the output column names", why wouldn't they say that? This section is about ten times longer than it would need to be to express that concept. I get the impression that they're trying to compromise between allowing output column names and input column names, but I sure don't follow exactly how the compromise is supposed to work. And there are a ton of apparently- unnecessary restrictions (no grouping, no subqueries in the sort keys) that make me wonder what's going on. Can anyone translate this part of the spec into plain English? regards, tom lane ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Default Value in Table Setup Help
At 02:57 PM 8/18/03 -0400, Dev wrote: Hello all, I am working on setting up a table that will append a sequence to the end of the value inserted. Example; INSERT INTO test (test) VALUES ('abcd'); And have the data in the database be; abcd0001 Now I do have things setup else where were the default value for the field is such: default ('abcd'::text || lpad(text(nextval('test_sequence'::text)), 4, '0'::text)) But i want the "abcd" or what erver to be added in the insert? What am I missing to make this happen? I don't think you can do what you want to do with a column default. Instead you want to look at the rewrite rules. You can find information about them here: http://www.postgresql.org/docs/7.3/interactive/sql-createrule.html If I'm wrong I'm sure someone here will correct me :). --- Adam Kavan --- [EMAIL PROTECTED] ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Simulating sequences
Hi Vilson,

Vilson farias wrote:

Do PostgreSQL sequences deal with these kinds of performance questions? Are they clean, fast and secure to use? Maybe I can still change my boss's mind about this subject if I get good arguments here to justify the use of sequences.

Yes, exactly. It's clean, fast and secure. The trick is, sequences live outside of transactions and nextval() is never rolled back. So you don't have to lock and you don't have to worry about duplicate keys. It's hardly possible to avoid database-specific code and get great performance at the same time. Fortunately PostgreSQL is very close to the SQL spec, so you aren't walking far on the dark side if you adopt postgres-style SQL.

Regards
Tino Wildenhain

---(end of broadcast)--- TIP 8: explain analyze is your friend
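For comparison, the native feature being defended here boils down to very little code. The names below are illustrative only, chosen to match the example table in the thread:

```sql
CREATE SEQUENCE my_test_seq;

CREATE TABLE my_test (
    my_test_pk integer DEFAULT nextval('my_test_seq') PRIMARY KEY,
    payload    text
);

-- nextval() advances outside normal rollback rules: even if the
-- INSERT below is rolled back, the sequence keeps moving forward,
-- so concurrent sessions never block on key generation.
INSERT INTO my_test (payload) VALUES ('hello');

-- currval() returns the value nextval() just assigned
-- in this session (errors if nextval was never called here).
SELECT currval('my_test_seq');
```

This is the behavior Tino describes: no table lock, no duplicate-key worry, at the cost of possible gaps in the numbering after rollbacks.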
Re: [GENERAL] Hour difference?
On Mon, Aug 18, 2003 at 17:56:00 +0200, Bjørn T Johansen <[EMAIL PROTECTED]> wrote: > I am already using Time for time fields (i.e. timestamp fields without > the date part) in my database, are you saying this doesn't work??? No. You can't use HHMM format for input without doing some more work. You can use HH:MM as an input format. If you already have the data loaded into time fields, you can just subtract them to get an interval. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
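A short sketch of the advice above; the table and column names are assumptions for illustration:

```sql
-- HH:MM works directly as time input:
CREATE TABLE shifts (t_start time, t_end time);
INSERT INTO shifts VALUES ('08:15', '17:45');

-- Subtracting two time values yields an interval:
SELECT t_end - t_start AS worked FROM shifts;  -- 09:30:00
```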
Re: [GENERAL] Simulating sequences
On Mon, Aug 18, 2003 at 11:27:14 -0300, Vilson farias <[EMAIL PROTECTED]> wrote:
>
> I tried to fix this problem with a VACUUM and it was completely ineffective.
> After execution the problem was still there. Later, after running every
> kind of vacuum I knew (with and without ANALYZE, specific to that table,
> vacuumdb from the shell ...) I gave up and tried a COPY TO/DROP/CREATE/COPY
> FROM. At that point, the performance problem was fixed.

Did you try VACUUM FULL? If you are doing just a normal vacuum and waited until there were over a million tuples in the table, your FSM setting probably wasn't high enough to let you recover the space.

> What can I do to solve this problem without table reconstruction? Is there a
> performance degradation in PostgreSQL that can't be solved? If I have a huge
> table with millions of rows being updated constantly, what can I do to keep
> good performance if vacuum isn't 100%?

You want to vacuum the table a lot more often. I remember a post (it should be in the archives) where someone calculated how many updates you could do before the dead tuples took up more than one block. The suggestion was that that is the point at which you want to vacuum the table.

> Do PostgreSQL sequences deal with these kinds of performance questions? Are
> they clean, fast and secure to use? Maybe I can still change my boss's mind
> about this subject if I get good arguments here to justify the use of
> sequences.

Besides solving the dead tuple problem, using sequences also avoids contention by not having to hold locks for the duration of a transaction.

---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
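In concrete terms, the recovery and maintenance steps suggested above might look like this. The table name comes from the thread; the vacuum frequency is only an illustration and would need tuning for the actual update rate:

```sql
-- One-off recovery once the key table has already bloated
-- (VACUUM FULL compacts the table, unlike plain VACUUM):
VACUUM FULL ANALYZE cnfg_key_generation;

-- Ongoing: run a plain VACUUM on this one small, hot table very
-- frequently, so dead tuples never exceed roughly a page, e.g.
-- from cron:  */5 * * * *  vacuumdb --table cnfg_key_generation mydb
VACUUM ANALYZE cnfg_key_generation;
```

If plain VACUUM still cannot keep up, raising the free space map settings (max_fsm_pages and related parameters in postgresql.conf) lets it track and reuse more of the freed space.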
Re: [GENERAL] Simulating sequences
Vilson farias wrote:

Greetings, I'm getting a big performance problem and I would like to ask you what the reason might be, but first I need to explain how it happens. Let's suppose I can't use sequences (it seems impossible, but my boss doesn't like database-specific features like this one).

I don't see how using PL/pgSQL is any better with respect to database-specific features, but that aside for a moment.

Your function not only misses the required FOR UPDATE when reading the (possibly existing) current value, it also contains a general race condition. Multiple concurrent transactions could try inserting the new key and all but one would error out with a duplicate key error.

Sequence values are int8 and are by default safe against integer rollover. Sequences do not roll back and therefore don't need to wait for concurrent transactions to finish. Your table-based replacement is a major concurrency bottleneck: as soon as a transaction has done an insert to a table, it blocks every other transaction from inserting into that table until it either commits or rolls back.

Your VACUUM theory is only partially correct. A frequent VACUUM will prevent the key table from growing. You'd have to do so very often, since the excess number of obsolete index entries pointing to dead tuples also degrades your performance. Additionally, if there is a very low number of keys (sequences) in that table, an ANALYZE run might cause the planner to go for a sequential scan and ignore the index on the table, at which point your function will actually cause "two" sequential scans over all live and dead tuples of all sequences per call.

Sequences are specially designed to overcome all these issues. If you cannot convince your boss to use sequences, he is a good example of why people who have difficulty understanding technical issues should not assume leadership positions in IT projects.
Jan

For sequence simulation I had created a table called cnfg_key_generation and each tuple holds the information for one of my tables (tablename, current_sequential_number). Let's check:

CREATE TABLE cnfg_key_generation (
    department integer NOT NULL,
    table_name varchar(20) NOT NULL,
    current_key integer NOT NULL,
    CONSTRAINT XPKcnfg_key_generation PRIMARY KEY (department, table_name)
);

For example, for a table called 'my_test' I would have the following values:

department  = 1
table_name  = 'my_test'
current_key = 1432

Every time I want a new key to use in my_test's primary key I just increment the current_key value. For this job, I've created a simple stored procedure called key_generation:

CREATE FUNCTION key_generation(integer, varchar(20)) RETURNS integer AS '
DECLARE
    the_department  ALIAS FOR $1;
    the_table_name  ALIAS FOR $2;
    new_key_value   integer;
    err_num         integer;
BEGIN
    new_key_value := 0;
    LOCK TABLE cnfg_key_generation IN ROW EXCLUSIVE MODE;

    SELECT current_key INTO new_key_value
      FROM cnfg_key_generation
     WHERE the_department = department
       AND the_table_name = table_name;

    IF NOT FOUND THEN
        new_key_value := 1;
        INSERT INTO cnfg_key_generation
        VALUES (the_department, the_table_name, new_key_value);
    ELSE
        new_key_value := new_key_value + 1;
        UPDATE cnfg_key_generation
           SET current_key = new_key_value
         WHERE department = the_department
           AND table_name = the_table_name;
    END IF;

    RETURN new_key_value;
END;
' LANGUAGE 'plpgsql';

Data insertion is done the following way:

INSERT INTO my_test VALUES (key_generation(1, 'my_test'), ...other fields...);

Ok, this works, but it has a huge performance problem. I read in the pg docs that every time I make an UPDATE, a new tuple is created and the old one is marked as invalid. For a tuple that holds sequential key generation, every time a new key is generated, a new tuple is created inside cnfg_key_generation.
It means that after 2 million key generations for the same table, performance will be completely degraded because there will be 2 million old versions of the same tuple. For instance, I have a table called 'cham_chamada' that currently holds 1.5 million tuples. The response time of key_generation for this table is more than 5 seconds. In the same situation, if I execute key_generation for a table that has just a few values (current key = 5, for example), the response time is just a few milliseconds (30 to 50ms). I tried to fix this problem with a VACUUM and it was completely ineffective. After execution the problem was still there. Later, after running every kind of vacuum I knew (with and without ANALYZE, specific to that table, vacuumdb from the shell ...) I gave up and tried a COPY TO/DROP/CREATE/COPY FROM. At that point, the performance problem was fixed. What can I do to solve this problem without table reconstruction? Is there a performance degradation in PostgreSQL that can't be solved? If I have a huge table with millions of rows being updated constantly, what can I do to keep good performance if vacuum isn't 100%?
Re: [GENERAL] Simulating sequences
Vilson farias wrote: Does PostgreSQL sequences deal with these kinds performance questions? Is it clean, fast and secury to use sequences? Maybe I still can make up my boss mind about this subject if I get good arguments here to justify the use of sequences. Sorry, but you just outlined a sequence replacement which is a big hunk of PL/PgSQL! How is that not completely specific to PostgreSQL? The PgSQL 'serial' type is close enough to other RDBMS autoincrement types that porting to a different DB should be trivial. Porting your PL/PgSQL, that will be hard (particularly if you decide to go to something like MySQL, which doesn't even support procedural languages). -- __ / | Paul Ramsey | Refractions Research | Email: [EMAIL PROTECTED] | Phone: (250) 885-0632 \_ ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
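Paul's portability point can be made concrete. The PostgreSQL declaration below is standard usage; the equivalents for the other systems are written from memory and should be treated as approximate:

```sql
-- PostgreSQL: SERIAL is shorthand for an integer column whose
-- default comes from an implicitly created sequence.
CREATE TABLE my_test (
    my_test_pk serial PRIMARY KEY,
    payload    text
);

-- Roughly equivalent autoincrement idioms elsewhere (approximate):
--   MySQL:      my_test_pk INT AUTO_INCREMENT PRIMARY KEY
--   SQL Server: my_test_pk INT IDENTITY(1,1) PRIMARY KEY
--   Oracle:     CREATE SEQUENCE my_test_seq;
--               -- plus a trigger or explicit my_test_seq.NEXTVAL
```

Porting a one-line column declaration between these systems is trivial; porting a hand-rolled PL/pgSQL key generator is not.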
Re: [GENERAL] Hour difference?
On Mon, Aug 18, 2003 at 16:09:43 +0200, Bjørn T Johansen <[EMAIL PROTECTED]> wrote: > I need to compute the difference of Time fields, in the format HHMM. Is > it possible to do the math in the Select? Despite what it says in the documentation, you can't use that format for the type time. If timestamps will work for you, you can use to_timestamp to convert to a timestamps and then subtract them to get an interval. Another option would be to massage the strings to use a : separator between the hours and minutes fields and then cast the strings to times. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
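Both workarounds suggested above can be sketched briefly; the literal HHMM values are just examples:

```sql
-- 1. Go through timestamps with to_timestamp() and a format string:
SELECT to_timestamp('1745', 'HH24MI') - to_timestamp('0815', 'HH24MI')
       AS diff;

-- 2. Massage the HHMM string into HH:MM, cast to time, and subtract
--    (time - time yields an interval):
SELECT (substr('1745', 1, 2) || ':' || substr('1745', 3, 2))::time
     - (substr('0815', 1, 2) || ':' || substr('0815', 3, 2))::time
       AS diff;
```

Either way the result is an interval (here 9 hours 30 minutes), which can be summed or compared directly.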
Re: [GENERAL] Simulating sequences
I would collect the way that 5-10 of the top databases handle unique ids (sequences) for columns and compare them in a small paper. Show your boss that sequences are fairly standard and he should come around.

Vilson farias wrote:

Greetings, I'm getting a big performance problem and I would like to ask you what the reason might be, but first I need to explain how it happens. Let's suppose I can't use sequences (it seems impossible, but my boss doesn't like database-specific features like this one). For sequence simulation I had created a table called cnfg_key_generation and each tuple holds the information for one of my tables (tablename, current_sequential_number). Let's check:

CREATE TABLE cnfg_key_generation (
    department integer NOT NULL,
    table_name varchar(20) NOT NULL,
    current_key integer NOT NULL,
    CONSTRAINT XPKcnfg_key_generation PRIMARY KEY (department, table_name)
);

For example, for a table called 'my_test' I would have the following values:

department  = 1
table_name  = 'my_test'
current_key = 1432

Every time I want a new key to use in my_test's primary key I just increment the current_key value. For this job, I've created a simple stored procedure called key_generation:

CREATE FUNCTION key_generation(integer, varchar(20)) RETURNS integer AS '
DECLARE
    the_department  ALIAS FOR $1;
    the_table_name  ALIAS FOR $2;
    new_key_value   integer;
    err_num         integer;
BEGIN
    new_key_value := 0;
    LOCK TABLE cnfg_key_generation IN ROW EXCLUSIVE MODE;

    SELECT current_key INTO new_key_value
      FROM cnfg_key_generation
     WHERE the_department = department
       AND the_table_name = table_name;

    IF NOT FOUND THEN
        new_key_value := 1;
        INSERT INTO cnfg_key_generation
        VALUES (the_department, the_table_name, new_key_value);
    ELSE
        new_key_value := new_key_value + 1;
        UPDATE cnfg_key_generation
           SET current_key = new_key_value
         WHERE department = the_department
           AND table_name = the_table_name;
    END IF;

    RETURN new_key_value;
END;
' LANGUAGE 'plpgsql';

Data insertion is done the following way:

INSERT INTO my_test VALUES (key_generation(1, 'my_test'), ...other fields...);

Ok, this works, but it has a huge performance problem. I read in the pg docs that every time I make an UPDATE, a new tuple is created and the old one is marked as invalid. For a tuple that holds sequential key generation, every time a new key is generated a new tuple is created inside cnfg_key_generation. It means that after 2 million key generations for the same table, performance will be completely degraded because there will be 2 million old versions of the same tuple. For instance, I have a table called 'cham_chamada' that currently holds 1.5 million tuples. The response time of key_generation for this table is more than 5 seconds. In the same situation, if I execute key_generation for a table that has just a few values (current key = 5, for example), the response time is just a few milliseconds (30 to 50ms).

I tried to fix this problem with a VACUUM and it was completely ineffective. After execution the problem was still there. Later, after running every kind of vacuum I knew (with and without ANALYZE, specific to that table, vacuumdb from the shell ...) I gave up and tried a COPY TO/DROP/CREATE/COPY FROM. At that point, the performance problem was fixed.

What can I do to solve this problem without table reconstruction? Is there a performance degradation in PostgreSQL that can't be solved? If I have a huge table with millions of rows being updated constantly, what can I do to keep good performance if vacuum isn't 100%?

Do PostgreSQL sequences deal with these kinds of performance questions? Are they clean, fast and secure to use? Maybe I can still change my boss's mind about this subject if I get good arguments here to justify the use of sequences.

Am I doing something stupid?

Best regards,

- José Vilson de Mello de Farias
Software Engineer

Dígitro Tecnologia Ltda - www.digitro.com.br
APC - Customer Oriented Applications
E-mail: [EMAIL PROTECTED]
Tel.: +55 48 281 7158
ICQ 11866179

---(end of broadcast)---
TIP 8: explain analyze is your friend

---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
[GENERAL] Hour difference?
I need to compute the difference of Time fields, in the format HHMM. Is it possible to do the math in the Select? Regards, BTJ -- --- Bjørn T Johansen (BSc,MNIF) Executive Manager [EMAIL PROTECTED] Havleik Consulting Phone : +47 67 54 15 17 Conradisvei 4 Fax : +47 67 54 13 91 N-1338 Sandvika Cellular : +47 926 93 298 http://www.havleik.no --- "The stickers on the side of the box said "Supported Platforms: Windows 98, Windows NT 4.0, Windows 2000 or better", so clearly Linux was a supported platform." --- ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Cache lookup failed?
On Mon, 2003-08-18 at 09:44, Martijn van Oosterhout wrote: > It usually refers to some cached plan referring to a table or object that > does not exist anymore. Do you have stored procedures that refer to tables > that are deleted? This includes temporary tables. > > So maybe it's only happening when a certain stored procedure is executed > twice in the same session? > No, I don't think so. It seems to be pretty consistently happening in our homegrown database connection class, as we are executing a query against the PG metadata tables. Something like -- select a.attname, t.typname, a.attlen, a.atttypmod, a.attnotnull, a.atthasdef, a.attnum from pg_attribute a, pg_class c, pg_type t where c.relname = ? and a.attrelid = c.oid and a.attnum >= 0 and t.oid = a.atttypid order by 1 This is happening inside DBD::Pg, the "table_attributes" method. The relname being selected is NOT a temp table. We do make extensive use of temp tables in the code, however. -- Jeff Boes vox 269.226.9550 ext 24 Database Engineer fax 269.349.9076 Nexcerpt, Inc. http://www.nexcerpt.com ...Nexcerpt... Extend your Expertise ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] XML?
On Fri, 8 Aug 2003 07:07:42 +0200 (CEST) "Bjorn T Johansen" <[EMAIL PROTECTED]> wrote: > I need to convert recordsets to XML, is there an automatic way to do this > in PostgreSQL or a tool I can use? Or do I have to code this manually? Agata Report (agata.codigolivre.org.br) does that. Pablo > > > Regards, > > BTJ > > > --- > Bjørn T Johansen (BSc,MNIF) > Executive Manager > [EMAIL PROTECTED] Havleik Consulting > Phone : +47 67 54 15 17 Conradisvei 4 > Fax : +47 67 54 13 91 N-1338 Sandvika > Cellular : +47 926 93 298 http://www.havleik.no > --- > "The stickers on the side of the box said "Supported Platforms: Windows > 98, Windows NT 4.0, > Windows 2000 or better", so clearly Linux was a supported platform." > --- > > > ---(end of broadcast)--- > TIP 4: Don't 'kill -9' the postmaster :: Pablo Dall'Oglio ([EMAIL PROTECTED]) +55 (51) 3714-7040 :: Solis - Cooperativa de Solucoes Livres :: www.solis.coop.br - Lajeado, RS - Brasil :: www.varianet.com.br (personal) :: "Life's a Journey, Not a Destination" - Steven Tyler ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] C vs plpgsql and child processes
Hi Jan/Sean To the list I bow and apologise for wasting your time! I did not appropriately test the function (as is my wont!) and passed command line arguments in the executable image path in the execl function. Now I edit my table containing the configuration vars for the trackformat to decoder info and everything purrs.. sigh. Writing help requests to the list when you've only exhausted half the possibilities is a bit silly. Thanks for your time, Jason On Mon, 18 Aug 2003 11:41 pm, Jan Wieck wrote: > Jason Godden wrote: > > Hi Sean, > > > > Yeah - It is declared VOLATILE. I think there must be something specific > > with the way PL/PGSQL handles child processes of a called function. The > > child process actually spawns mpg123 or ogg123 so it has to live beyond > > the life of the parent. Not sure. What I might do is rewrite the entire > > procedure from woe to go in using SPI and see how that goes. Failing > > that I guess I could always peek at the source! : ) > > PL/pgSQL does not pay any attention or could affect child processes of a > backend to my knowledge. Are you sure that the PL/pgSQL function really > calls your C function forking off the child? The best way to check would > be to have some NOTICE coming out of your C function before it actually > does create the child. > > > Jan > > > Thanks, > > > > Jason > > > > On Mon, 18 Aug 2003 04:48 am, Sean Chittenden wrote: > >> > Problem is that when I call these particular functions from within > >> > plpgsql rather than through a single sql command the child never > >> > actually starts (or starts and then exits immediately). > >> > >> Are you sure? I can't think of much that'd prevent a C function from > >> executing other than how you've declared the function (ie, is PgSQL > >> caching the results of the function?). Make sure you've declared it > >> as VOLATILE (or don't declare it anything and it'll default to > >> VOLATILE). 
> >> > >> http://developer.postgresql.org/docs/postgres/sql-createfunction.html > >> > >> -sc > > > > ---(end of broadcast)--- > > TIP 9: the planner will ignore your desire to choose an index scan if > > your joining column's datatypes do not match ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Cache lookup failed?
It usually refers to some cached plan referring to a table or object that does not exist anymore. Do you have stored procedures that refer to tables that are deleted? This includes temporary tables. So maybe it's only happening when a certain stored procedure is executed twice in the same session? Hope this helps, On Mon, Aug 18, 2003 at 01:01:53PM +, Jeff Boes wrote: > What might be the source of this error? > > Cache lookup failed for relation 188485009 > > We've been getting these at odd intervals, and they are not reproducible. > > Our setup: > > PostgreSQL 7.3.3 > Red Hat 7.3 > > kernel.shmall = 1352914698 > kernel.shmmax = 1352914698 > > shared_buffers = 131072 > max_fsm_pages = 35 > max_fsm_relations = 200 > wal_buffers = 32 > sort_mem = 65536 > vacuum_mem = 65536 > effective_cache_size = 196608 > > -- > Jeff Boes vox 269.226.9550 ext 24 > Database Engineer fax 269.349.9076 > Nexcerpt, Inc. http://www.nexcerpt.com >...Nexcerpt... Extend your Expertise > > > ---(end of broadcast)--- > TIP 6: Have you searched our list archives? > >http://archives.postgresql.org -- Martijn van Oosterhout <[EMAIL PROTECTED]> http://svana.org/kleptog/ > "All that is needed for the forces of evil to triumph is for enough good > men to do nothing." - Edmond Burke > "The penalty good people pay for not being interested in politics is to be > governed by people worse than themselves." - Plato pgp0.pgp Description: PGP signature
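Martijn's stale-plan explanation can be reproduced in miniature. This is a sketch of the failure mode he describes (behavior assumed for 7.3-era servers, where plpgsql caches plans by relation OID; later releases replan automatically):

```sql
-- The function's first call caches a plan that names the temp table's OID;
-- dropping and recreating the table invalidates that OID.
CREATE TEMP TABLE t (x integer);

CREATE FUNCTION use_t() RETURNS integer AS '
DECLARE
    n integer;
BEGIN
    SELECT INTO n count(*) FROM t;
    RETURN n;
END;
' LANGUAGE 'plpgsql';

SELECT use_t();          -- first call plans and caches the query against t

DROP TABLE t;
CREATE TEMP TABLE t (x integer);

SELECT use_t();          -- reuses the cached plan -> relation-not-found error
```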
[GENERAL] Simulating sequences
Greetings,

I'm hitting a big performance problem and I would like to ask you what the reason might be, but first I need to explain how it happens. Let's suppose I can't use sequences (it seems impossible, but my boss doesn't like database-specific features like this one). For sequence simulation I created a table called cnfg_key_generation, where each tuple holds the information for one of my tables (table name, current sequential number). Let's check:

CREATE TABLE cnfg_key_generation (
    department integer NOT NULL,
    table_name varchar(20) NOT NULL,
    current_key integer NOT NULL,
    CONSTRAINT XPKcnfg_key_generation PRIMARY KEY (department, table_name)
);

For example, for a table called 'my_test' I would have the following values:

department = 1
table_name = 'my_test'
current_key = 1432

Every time I want a new key to use in my_test's primary key I just increment the current_key value. For this job, I've created a simple stored procedure called key_generation:

CREATE FUNCTION key_generation(integer, varchar(20)) RETURNS integer AS '
DECLARE
    the_department ALIAS FOR $1;
    the_table_name ALIAS FOR $2;
    new_key_value integer;
BEGIN
    new_key_value := 0;
    LOCK TABLE cnfg_key_generation IN ROW EXCLUSIVE MODE;
    SELECT current_key INTO new_key_value
        FROM cnfg_key_generation
        WHERE department = the_department AND table_name = the_table_name;
    IF NOT FOUND THEN
        new_key_value := 1;
        INSERT INTO cnfg_key_generation
            VALUES (the_department, the_table_name, new_key_value);
    ELSE
        new_key_value := new_key_value + 1;
        UPDATE cnfg_key_generation SET current_key = new_key_value
            WHERE department = the_department AND table_name = the_table_name;
    END IF;
    RETURN new_key_value;
END;
' LANGUAGE 'plpgsql';

Data insertion is done this way:

INSERT INTO my_test VALUES (key_generation(1, 'my_test'), ...other fields...);

OK, this works, but it has a huge performance problem. I read in the PG docs that every time I make an UPDATE, a new tuple is created and the old one is marked as invalid. 
For a tuple that holds sequential key generation, every time a new key is generated a new tuple version is created inside cnfg_key_generation. That means after 2 million key generations for the same table, performance will be completely degraded, because there will be 2 million old versions of the same tuple. For instance, I have a table called 'cham_chamada' that currently holds 1.5 million tuples. The response time for a key_generation execution for this table is more than 5 seconds. In the same situation, if I execute key_generation for a table that has just a few keys (current_key = 5, for example), the response time is just a few milliseconds (30 to 50 ms). I tried to fix this problem with a VACUUM and it was completely ineffective; after execution the problem was still there. Later, after executing every kind of vacuum I knew (with and without ANALYZE, specific to that table, vacuumdb from the shell ...), I gave up and tried a COPY TO/DROP/CREATE/COPY FROM. This time, the performance problem was fixed. What can I do to solve this problem without rebuilding the table? Is there a performance degradation in PostgreSQL that can't be solved? If I have a huge table with millions of rows being updated constantly, what can I do to keep good performance if vacuum isn't 100% effective? Do PostgreSQL sequences deal with these kinds of performance questions? Are they clean, fast and secure to use? Maybe I can still change my boss's mind about this subject if I get good arguments here to justify the use of sequences. Am I doing something stupid? Best regards, - José Vilson de Mello de Farias Software Engineer Dígitro Tecnologia Ltda - www.digitro.com.br APC - Customer Oriented Applications E-mail: [EMAIL PROTECTED] Tel.: +55 48 281 7158 ICQ 11866179 ---(end of broadcast)--- TIP 8: explain analyze is your friend
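For comparison, the native-sequence version of the same scheme, which the replies in this thread argue for, would look roughly like this (a sketch; the sequence and column names are illustrative). A sequence avoids the dead-tuple buildup because nextval() does not create new row versions the way the UPDATE-based counter does:

```sql
-- One sequence per table replaces the (department, table_name) counter row.
CREATE SEQUENCE my_test_seq;

CREATE TABLE my_test (
    id integer PRIMARY KEY DEFAULT nextval('my_test_seq'),
    payload text
);

-- Callers may also fetch the key explicitly, mirroring key_generation():
INSERT INTO my_test VALUES (nextval('my_test_seq'), 'other fields...');
```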
Re: [GENERAL] Cache lookup failed?
On Mon, 2003-08-18 at 09:53, Tom Lane wrote: > Always the same OID, or different ones? Does that OID actually exist in > pg_class? Can you tell us exactly what SQL command(s) are producing the > error? (If not, better turn on query logging so you can find out.) Different OIDs, and they do not exist in pg_class (it's the OID of that table's row, right? So for Cache lookup failed for relation 172465102 I would do select * from pg_class where oid = 172465102 right? I'm not 100% familiar yet with the ins and outs of pg_class. Too many OID-type fields in there, I can't keep them straight ... 8-} I'd turn on query logging, but since we're getting these about every 3-7 days, I'm not sure that would be the most effective use of all that disk ... maybe I can find a way to localize it to the point where the pg_class query is happening. -- Jeff Boes vox 269.226.9550 ext 24 Database Engineer fax 269.349.9076 Nexcerpt, Inc. http://www.nexcerpt.com ...Nexcerpt... Extend your Expertise ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Cache lookup failed?
"Jeff Boes" <[EMAIL PROTECTED]> writes: > What might be the source of this error? > Cache lookup failed for relation 188485009 > We've been getting these at odd intervals, and they are not reproducible. Always the same OID, or different ones? Does that OID actually exist in pg_class? Can you tell us exactly what SQL command(s) are producing the error? (If not, better turn on query logging so you can find out.) > Our setup: > PostgreSQL 7.3.3 BTW, I'd urge updating to 7.3.4 ASAP. Better to do it in a controlled fashion than to find yourself looking at a forced update if 7.3.3 fails to restart after a crash... regards, tom lane ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Dreamweaver
On Sat, 2003-08-16 at 15:12, Jochem van Dieten wrote: > Glen Eustace wrote: > > > > We recently purchased Dreamweaver MX and I was a little surprised to > > find that one of its pre-defined scripting systems is PHP + MySQL. I > > haven't done much exploring of what is actually offered but wondered > > whether any one had tried to convince Macromedia to provide a PHP + > > PostgreSQL or at least a PHP + anydb. > > I haven't tried the combination PHP + PostgreSQL but my efforts > to get ColdFusion + PostgreSQL supported were rather unsuccessful > so far. > Well, some folks are certainly doing it, check out this blog entry: http://www.redev.org/archives/23.cfm#more Robert Treat -- Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] compiling the examples
You forgot the: -L /usr/local/pgsql/lib -lpq Hope this helps, On Mon, Aug 18, 2003 at 02:01:08PM +0200, Marc Cuypers wrote: > Hi, > > How do i make the examples in postgresql-7.3.3/src/test/examples/? > It seems that just typing make doesn't link to the libpq library. Where > should I start make? > > Just typing make gives the following output: > # make > gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations > -I../../../src/interfaces/libpq -I../../../src/include > -Wl,-rpath,/usr/local/pgsql/lib testlibpq.c -o testlibpq > /tmp/ccAN4Szw.o: In function `exit_nicely': > /tmp/ccAN4Szw.o(.text+0xd): undefined reference to `PQfinish' > /tmp/ccAN4Szw.o: In function `main': > /tmp/ccAN4Szw.o(.text+0x3b): undefined reference to `PQsetdbLogin' > /tmp/ccAN4Szw.o(.text+0x4a): undefined reference to `PQstatus' > /tmp/ccAN4Szw.o(.text+0x75): undefined reference to `PQerrorMessage' > /tmp/ccAN4Szw.o(.text+0xa9): undefined reference to `PQexec' > /tmp/ccAN4Szw.o(.text+0xb4): undefined reference to `PQresultStatus' > /tmp/ccAN4Szw.o(.text+0xd8): undefined reference to `PQclear' > /tmp/ccAN4Szw.o(.text+0xf3): undefined reference to `PQclear' > /tmp/ccAN4Szw.o(.text+0x104): undefined reference to `PQexec' > /tmp/ccAN4Szw.o(.text+0x112): undefined reference to `PQresultStatus' > /tmp/ccAN4Szw.o(.text+0x136): undefined reference to `PQclear' > /tmp/ccAN4Szw.o(.text+0x151): undefined reference to `PQclear' > /tmp/ccAN4Szw.o(.text+0x162): undefined reference to `PQexec' > /tmp/ccAN4Szw.o(.text+0x170): undefined reference to `PQresultStatus' > /tmp/ccAN4Szw.o(.text+0x194): undefined reference to `PQclear' > /tmp/ccAN4Szw.o(.text+0x1af): undefined reference to `PQnfields' > /tmp/ccAN4Szw.o(.text+0x1c9): undefined reference to `PQfname' > /tmp/ccAN4Szw.o(.text+0x20c): undefined reference to `PQgetvalue' > /tmp/ccAN4Szw.o(.text+0x23c): undefined reference to `PQntuples' > /tmp/ccAN4Szw.o(.text+0x24c): undefined reference to `PQclear' > /tmp/ccAN4Szw.o(.text+0x25d): undefined reference 
to `PQexec' > /tmp/ccAN4Szw.o(.text+0x26b): undefined reference to `PQclear' > /tmp/ccAN4Szw.o(.text+0x27c): undefined reference to `PQexec' > /tmp/ccAN4Szw.o(.text+0x28a): undefined reference to `PQclear' > /tmp/ccAN4Szw.o(.text+0x296): undefined reference to `PQfinish' > collect2: ld returned 1 exit status > make: *** [testlibpq] Error 1 > > -- > Best regards, > > Marc. > > > ---(end of broadcast)--- > TIP 6: Have you searched our list archives? > > http://archives.postgresql.org -- Martijn van Oosterhout <[EMAIL PROTECTED]> http://svana.org/kleptog/ > "All that is needed for the forces of evil to triumph is for enough good > men to do nothing." - Edmond Burke > "The penalty good people pay for not being interested in politics is to be > governed by people worse than themselves." - Plato pgp0.pgp Description: PGP signature
Re: [GENERAL] compiling the examples
Tom Lane wrote: Marc Cuypers <[EMAIL PROTECTED]> writes: How do i make the examples in postgresql-7.3.3/src/test/examples/? "make" works for me, assuming that I'm doing it in a built directory tree. [ looks at 7.3 branch... ] Hm, it looks like there's a mistake in the Makefile in that directory in 7.3: try changing LIBS += $(libpq) to LDFLAGS += $(libpq) This works. Thanks a lot. -- Best regards, Marc. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] compiling the examples
Marc Cuypers <[EMAIL PROTECTED]> writes: > How do i make the examples in postgresql-7.3.3/src/test/examples/? "make" works for me, assuming that I'm doing it in a built directory tree. [ looks at 7.3 branch... ] Hm, it looks like there's a mistake in the Makefile in that directory in 7.3: try changing LIBS += $(libpq) to LDFLAGS += $(libpq) regards, tom lane ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] C vs plpgsql and child processes
Jason Godden wrote: Hi Sean, Yeah - It is declared VOLATILE. I think there must be something specific with the way PL/PGSQL handles child processes of a called function. The child process actually spawns mpg123 or ogg123 so it has to live beyond the life of the parent. Not sure. What I might do is rewrite the entire procedure from woe to go in using SPI and see how that goes. Failing that I guess I could always peek at the source! : ) PL/pgSQL does not pay any attention or could affect child processes of a backend to my knowledge. Are you sure that the PL/pgSQL function really calls your C function forking off the child? The best way to check would be to have some NOTICE coming out of your C function before it actually does create the child. Jan Thanks, Jason On Mon, 18 Aug 2003 04:48 am, Sean Chittenden wrote: > Problem is that when I call these particular functions from within > plpgsql rather than through a single sql command the child never > actually starts (or starts and then exits immediately). Are you sure? I can't think of much that'd prevent a C function from executing other than how you've declared the function (ie, is PgSQL caching the results of the function?). Make sure you've declared it as VOLATILE (or don't declare it anything and it'll default to VOLATILE). http://developer.postgresql.org/docs/postgres/sql-createfunction.html -sc ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match -- #==# # It's easier to get forgiveness for being wrong than for being right. # # Let's break this rule - forgive me. # #== [EMAIL PROTECTED] # ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] PQgetResultSet Problem
Deepa K <[EMAIL PROTECTED]> writes: > I am using postgresql 7.1.3 and a client using libpq. > I am executing a statement with muliple SQL commands semicolon > seperated. > If any one of the query in between fails, pqGetResultset returns > NULL on the failed query. So i am unable to process the rest of the > queries. That's what it's supposed to do. >Also the queries which were before the failed query, which were > successful, were also not commited to the database. Likewise. If this is not the behavior you want, then submit the commands separately. regards, tom lane ---(end of broadcast)--- TIP 8: explain analyze is your friend
[GENERAL] Cache lookup failed?
What might be the source of this error?

Cache lookup failed for relation 188485009

We've been getting these at odd intervals, and they are not reproducible. Our setup:

PostgreSQL 7.3.3
Red Hat 7.3

kernel.shmall = 1352914698
kernel.shmmax = 1352914698

shared_buffers = 131072
max_fsm_pages = 35
max_fsm_relations = 200
wal_buffers = 32
sort_mem = 65536
vacuum_mem = 65536
effective_cache_size = 196608

-- Jeff Boes vox 269.226.9550 ext 24 Database Engineer fax 269.349.9076 Nexcerpt, Inc. http://www.nexcerpt.com ...Nexcerpt... Extend your Expertise ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] compiling the examples
Hi,

How do I make the examples in postgresql-7.3.3/src/test/examples/? It seems that just typing make doesn't link to the libpq library. Where should I start make? Just typing make gives the following output:

# make
gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/interfaces/libpq -I../../../src/include -Wl,-rpath,/usr/local/pgsql/lib testlibpq.c -o testlibpq
/tmp/ccAN4Szw.o: In function `exit_nicely':
/tmp/ccAN4Szw.o(.text+0xd): undefined reference to `PQfinish'
/tmp/ccAN4Szw.o: In function `main':
/tmp/ccAN4Szw.o(.text+0x3b): undefined reference to `PQsetdbLogin'
/tmp/ccAN4Szw.o(.text+0x4a): undefined reference to `PQstatus'
/tmp/ccAN4Szw.o(.text+0x75): undefined reference to `PQerrorMessage'
/tmp/ccAN4Szw.o(.text+0xa9): undefined reference to `PQexec'
/tmp/ccAN4Szw.o(.text+0xb4): undefined reference to `PQresultStatus'
/tmp/ccAN4Szw.o(.text+0xd8): undefined reference to `PQclear'
/tmp/ccAN4Szw.o(.text+0xf3): undefined reference to `PQclear'
/tmp/ccAN4Szw.o(.text+0x104): undefined reference to `PQexec'
/tmp/ccAN4Szw.o(.text+0x112): undefined reference to `PQresultStatus'
/tmp/ccAN4Szw.o(.text+0x136): undefined reference to `PQclear'
/tmp/ccAN4Szw.o(.text+0x151): undefined reference to `PQclear'
/tmp/ccAN4Szw.o(.text+0x162): undefined reference to `PQexec'
/tmp/ccAN4Szw.o(.text+0x170): undefined reference to `PQresultStatus'
/tmp/ccAN4Szw.o(.text+0x194): undefined reference to `PQclear'
/tmp/ccAN4Szw.o(.text+0x1af): undefined reference to `PQnfields'
/tmp/ccAN4Szw.o(.text+0x1c9): undefined reference to `PQfname'
/tmp/ccAN4Szw.o(.text+0x20c): undefined reference to `PQgetvalue'
/tmp/ccAN4Szw.o(.text+0x23c): undefined reference to `PQntuples'
/tmp/ccAN4Szw.o(.text+0x24c): undefined reference to `PQclear'
/tmp/ccAN4Szw.o(.text+0x25d): undefined reference to `PQexec'
/tmp/ccAN4Szw.o(.text+0x26b): undefined reference to `PQclear'
/tmp/ccAN4Szw.o(.text+0x27c): undefined reference to `PQexec'
/tmp/ccAN4Szw.o(.text+0x28a): undefined reference to `PQclear'
/tmp/ccAN4Szw.o(.text+0x296): undefined reference to `PQfinish'
collect2: ld returned 1 exit status
make: *** [testlibpq] Error 1

-- Best regards, Marc. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Why lower's not accept an AS declaration ?
On Monday 18 August 2003 13:04, Darko Prenosil wrote: > On Monday 18 August 2003 10:20, Hervé Piedvache wrote: > > Hi, > > > > May be my question is stupid ... but I'm a little suprised : > > > > SELECT id_letter as letter from my_table; > > > > letter > > - > > B > > C > > a > > A > > > > SELECT id_letter as letter from my_table order by letter; > > > > letter > > - > > A > > B > > C > > a > > > > SELECT id_letter as letter from my_table order by lower(letter); > > > > ERROR: Attribute "letter" not found > > Why did you change column name to "letter" in last query, and all the other > queries have "id_letter" as column name. What is table structure exactly ? > I assume that You don't have column with "letter" at all. > > Regards ! OK, now I see exactly the mistake You are making: SELECT id_letter as letter from my_table order by lower(id_letter); would be correct query, because "letter" is only alias for result column, not column in "my_table". Sorry I didn't see it first time. Regards ! ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Why lower's not accept an AS declaration ?
On Monday 18 August 2003 10:20, Hervé Piedvache wrote: > Hi, > > May be my question is stupid ... but I'm a little suprised : > > SELECT id_letter as letter from my_table; > > letter > - > B > C > a > A > > SELECT id_letter as letter from my_table order by letter; > > letter > - > A > B > C > a > > SELECT id_letter as letter from my_table order by lower(letter); > > ERROR: Attribute "letter" not found > Why did you change column name to "letter" in last query, and all the other queries have "id_letter" as column name. What is table structure exactly ? I assume that You don't have column with "letter" at all. Regards ! ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Why lower's not accept an AS declaration ?
On Mon, Aug 18, 2003 at 10:20:13AM +0200, Hervé Piedvache wrote: > You can imagine my test is simple, in practise it's not the reallity of my > original request ... but this example is just to show that lower() function > does not accept an AS declaration ... is it normal ? Yes, that's normal. It's even required by the SQL spec AFAIK. If it's a problem of multiple evaluation, you can use subqueries in the FROM clause. -- Martijn van Oosterhout <[EMAIL PROTECTED]> http://svana.org/kleptog/ > "All that is needed for the forces of evil to triumph is for enough good > men to do nothing." - Edmond Burke > "The penalty good people pay for not being interested in politics is to be > governed by people worse than themselves." - Plato pgp0.pgp Description: PGP signature
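Concretely, the two workarounds suggested in this thread look like this (a sketch using the column and table names from the original question). ORDER BY cannot apply a function to an output-column alias, but it can use the underlying expression directly, or the alias can be turned into a real column name via a subquery in FROM:

```sql
-- Order by the underlying column's expression:
SELECT id_letter AS letter FROM my_table ORDER BY lower(id_letter);

-- Or push the alias into a FROM subquery, where it becomes a genuine
-- column name that ORDER BY expressions may reference:
SELECT letter
FROM (SELECT id_letter AS letter FROM my_table) AS t
ORDER BY lower(letter);
```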
Re: [GENERAL] C vs plpgsql and child processes
Hi Sean, Yeah - It is declared VOLATILE. I think there must be something specific with the way PL/PGSQL handles child processes of a called function. The child process actually spawns mpg123 or ogg123 so it has to live beyond the life of the parent. Not sure. What I might do is rewrite the entire procedure from woe to go in using SPI and see how that goes. Failing that I guess I could always peek at the source! : ) Thanks, Jason On Mon, 18 Aug 2003 04:48 am, Sean Chittenden wrote: > > Problem is that when I call these particular functions from within > > plpgsql rather than through a single sql command the child never > > actually starts (or starts and then exits immediately). > > Are you sure? I can't think of much that'd prevent a C function from > executing other than how you've declared the function (ie, is PgSQL > caching the results of the function?). Make sure you've declared it > as VOLATILE (or don't declare it anything and it'll default to > VOLATILE). > > http://developer.postgresql.org/docs/postgres/sql-createfunction.html > > -sc ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Bit String Manipulation
Hi,

I am having problems manipulating bit strings.

CREATE TABLE lookup(
    fname TEXT PRIMARY KEY,
    digest BIT VARYING
);

I am trying to construct another bit string based on the length of the first:

SELECT b'1'::bit( bit_length( digest ) ) FROM lookup;

This doesn't work as I had hoped. Where am I going wrong? Thanks! ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
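The query above fails at the parser level: the n in bit(n) is a type modifier and must be an integer constant at parse time, so it cannot be computed per row from bit_length(). One workaround sketch is to stay in bit varying and build the value from a text string of the needed length (the text-to-varbit cast is assumed to be available on the server version in use; older releases may need a small helper function instead):

```sql
-- Build a same-length string of '1's per row and cast it, rather than
-- trying to parameterize the bit(n) type modifier:
SELECT repeat('1', bit_length(digest))::bit varying AS all_ones
FROM lookup;
```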
[GENERAL] Performance with different index types
Hello,

Are there any experiences with the performance of indexes on different data types? How does the performance of an index that consists of
- an integer field
- a varchar() field
- a text field
differ? Is it a waste of memory/performance to make a text field the primary key?

Thanks, Hans ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Why lower's not accept an AS declaration ?
Hi,

Maybe my question is stupid ... but I'm a little surprised:

SELECT id_letter as letter from my_table;

letter
-
B
C
a
A

SELECT id_letter as letter from my_table order by letter;

letter
-
A
B
C
a

SELECT id_letter as letter from my_table order by lower(letter);

ERROR: Attribute "letter" not found

You can imagine my test is simple; in practice it's not the reality of my original request ... but this example is just to show that the lower() function does not accept an AS declaration ... is it normal? Thanks for your answers ... Regards, -- Hervé Piedvache Elma Ingénierie Informatique 6 rue du Faubourg Saint-Honoré F-75008 - Paris - France Pho. 33-144949901 Fax. 33-144949902 ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org