[GENERAL] I18N
What is the preferred way of issuing errors of the form: "Delimiter expected after character #3" (where "3" obviously varies). ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Categories and Sub Categories (Nested)
On Apr 19, 2006, at 14:44 , Martin Kuria wrote: I have a postgresql database table Categories which has a structure like this:

Cat_ID | Parent_ID | Name
-------+-----------+----------------
     1 |         0 | Automobiles
     2 |         0 | Beauty & Health
     3 |         1 | Bikes
     4 |         1 | Cars
     5 |         3 | Suzuki
     6 |         3 | Yamaha
     7 |         0 | Clothes

According to the above data, Suzuki and Yamaha are subcategories of category Bikes, and Bikes in turn is a subcategory of Automobiles. If you're looking at having a potentially deep hierarchy, I'd recommend taking a look at nested sets, which also allow you to keep everything in one table. http://www.intelligententerprise.com/001020/celko.jhtml By the way, the method you outline above is often called the adjacency list model. Hope this helps. Michael Glaesemann grzm myrealbox com
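For what it's worth, the adjacency list model above can also be walked in a single query on PostgreSQL releases that support common table expressions (8.4 and later); a sketch, assuming the Categories table as posted:

```sql
-- Walk the adjacency list top-down; WITH RECURSIVE requires PostgreSQL 8.4+.
WITH RECURSIVE category_tree AS (
    SELECT cat_id, parent_id, name, 1 AS depth
    FROM categories
    WHERE parent_id = 0                 -- start at the root categories
    UNION ALL
    SELECT c.cat_id, c.parent_id, c.name, t.depth + 1
    FROM categories c
    JOIN category_tree t ON c.parent_id = t.cat_id
)
SELECT * FROM category_tree ORDER BY depth, cat_id;
```

On the 8.1-era servers current at the time of this thread, the nested set model (or repeated self-joins) remains the way to query arbitrary depths.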
Re: [GENERAL] How to implement a "subordinate database"?
On Apr 19, 2006, at 20:31 , chris smith wrote: On 4/19/06, Kynn Jones <[EMAIL PROTECTED]> wrote: I keep bumping against this situation: I have a main database A, and I want to implement a database B, that is distinct from A, but subordinate to it, meaning that it refers to data in A, but not vice versa. I don't simply want to add new tables to A to implement B, because this unnecessarily clutters A's schema with tables that are entirely extraneous to it. How about putting B's tables in a separate schema in the same database as A? Michael Glaesemann grzm myrealbox com
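The separate-schema approach might look like this (schema, table, and column names are hypothetical; cross-schema foreign keys work normally):

```sql
-- Keep B's tables out of the default schema while still letting
-- them reference A's data in the same database.
CREATE SCHEMA b;

CREATE TABLE b.annotations (
    annotation_id serial PRIMARY KEY,
    customer_id   integer NOT NULL REFERENCES public.customers (id),
    note          text
);

-- Sessions that only care about A can simply leave b off the search path:
SET search_path TO public;
```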
Re: [GENERAL] Installing PostgreSQL on Win 2003 R2 64-bit
On 4/27/06, Aly Dharshi <[EMAIL PROTECTED]> wrote: > Yes, if you search the archives there was something on this. Maybe it was > around > 32-bit, but nonetheless, take a search through. There is a patch in the queue, written by Magnus Hagander, which allows pg to build with the Microsoft C compiler. In theory, this should allow pg to build 64-bit on Windows with little or no modification. Another take is the Interix/SUA approach. I've had unmodified PostgreSQL sources working on Windows as far back as 7.4. I haven't looked at it since the new R2 stuff came out, but if you are feeling adventurous you might want to try to get SUA pgsql working on 64-bit. Either of these approaches could work, but expect to be hacking around weird problems. There is a decent chance pgsql will be officially supported on Windows as of 8.2, scheduled for release this summer. The current Windows build environment, mingw, does not support 64-bit and doesn't look like it is going to in the near future. merlin
[GENERAL] For vim users: Syntax highlighting for PostgreSQL
Hi, I just wrote a pgsql.vim file for vim users. It helps you colorize files which contain PostgreSQL SQL keywords: http://www.gunduz.org/postgresql/pgsql.vim In order to use this file, first edit filetype.vim and add

" PgSQL
au BufNewFile,BufRead *.pgsql setf pgsql

to the relevant part. We will need to do this until this file becomes an official part of vim. Then put pgsql.vim in the syntax directory of vim (/usr/share/vim/vim64/syntax on my Fedora Core 5). After you rename your SQL file to filename.pgsql, syntax highlighting will be enabled. What I've added so far: - All the functions - All the data types - Many of the keywords (let us see if I've missed anything) - Comments I've used mysql.vim as a template. Thanks to \df, \dT and psql's -E for making this process easier. :) If you are a vim user, please test it and send possible errors/additions to me. I intend to send this file to vim developers, too, after the testing. Regards, -- The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564 PostgreSQL Replication, Consulting, Custom Development, 24x7 support Managed Services, Shared and Dedicated Hosting Co-Authors: PL/php, plPerlNG - http://www.commandprompt.com/
Re: [GENERAL] Checking for Foreign Keys constraining a record?
This is pretty ugly, but you can query pgsql for table attributes... replace tableName and you'll get the schema for a table, including primary and foreign keys. You could shrink it down and look just for the foreign key.

SELECT f.attnum AS number,
       f.attname AS name,
       f.attnotnull AS notnull,
       f.atthasdef AS default,
       pg_catalog.format_type(f.atttypid, f.atttypmod) AS type,
       CASE WHEN p.contype = 'p' THEN 't' ELSE 'f' END AS primarykey,
       CASE WHEN p.contype = 'f' THEN g.relname END AS foreignkey,
       CASE WHEN p.contype = 'f' THEN p.confkey END AS foreignkey_fieldnum,
       CASE WHEN p.contype = 'f' THEN p.conkey END AS foreignkey_connnum
FROM pg_attribute f
JOIN pg_class c ON c.oid = f.attrelid
JOIN pg_type t ON t.oid = f.atttypid
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_constraint p ON p.conrelid = c.oid AND f.attnum = ANY (p.conkey)
LEFT JOIN pg_class AS g ON p.confrelid = g.oid
WHERE c.relkind = 'r'::char
  AND c.relname = 'tableName'
  AND f.attnum > 0
ORDER BY number;

Hope this helps, Gavin

On Apr 27, 2006, at 5:25 PM, Jerry Sievers wrote: Benjamin Smith <[EMAIL PROTECTED]> writes: I want to be able to determine in advance whether or not a record is "deleteable" before displaying the button to delete the record. If it's not deleteable, it should say so before the user hits the button. But, the only way that I've been able to find out if the customer record is deletable is to begin a transaction, try to delete it, check to see if it worked, and then rollback the session. This causes my error logger to log errors every time somebody looks at a customer record, and (I'm sure) is not very efficient. Is there a way to ask the database: "Are there any FK constraints that would prevent this record from being deleted?" Short of your own fancy function that walks the FK tree, no. (BTW, this could be simple actually if the FK linkage is shallow.) Add a statement to the transaction to suppress the nuisance error message:

begin;
set log_min_messages to log;
do trial delete;
rollback;

HTH -- Jerry Sievers 305 854-3001 (home) WWW ECommerce Consultant 305 321-1144 (mobile) http://www.JerrySievers.com/

Gavin M. Roy 800 Pound Gorilla [EMAIL PROTECTED]
Re: [GENERAL] Checking for Foreign Keys constraining a record?
Benjamin Smith wrote: I have a customer table (very important) and have numerous fields in other tables FK to the serial id of the customer table. What you need is a list of the foreign key definitions, out of which you build SQL selects that check each child table based on foreign key. The first row that hits returns true, to speed things up. One comprehensive solution is to write a program that scans the system catalogs for the foreign key definitions table-by-table. Then generate a stored procedure called Has_Children_ for each table. The subroutine would take values for the primary key columns. It would check each child table and return true on the first one found, or false at the end. There's an option to delete a customer record, but it has to fail if any records are linked to it (eg: invoices) in order to prevent the books from getting scrambled. I want to be able to determine in advance whether or not a record is "deleteable" before displaying the button to delete the record. If it's not deleteable, it should say so before the user hits the button. But, the only way that I've been able to find out if the customer record is deletable is to begin a transaction, try to delete it, check to see if it worked, and then rollback the session. This causes my error logger to log errors every time somebody looks at a customer record, and (I'm sure) is not very efficient. Is there a way to ask the database: "Are there any FK constraints that would prevent this record from being deleted?" Thanks, -Ben
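A hand-written instance of the generated check described above, for a hypothetical customers table referenced by an invoices table (a real generator would emit one EXISTS test per foreign key it finds in pg_constraint):

```sql
-- Returns true as soon as any child row is found; EXISTS stops at
-- the first hit, which is the "first row that hits" speedup above.
CREATE OR REPLACE FUNCTION has_children_customers(p_id integer)
RETURNS boolean AS $$
    SELECT EXISTS (SELECT 1 FROM invoices WHERE customer_id = p_id);
$$ LANGUAGE sql STABLE;

-- Usage: show the delete button only when this returns false.
SELECT has_children_customers(42);
```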
Re: [GENERAL] Help with unpredictable use of indexes on large tables...
Tom, Thanks for the advice. I realize that I have little understanding of index usage in PostgreSQL - I'm doing my best to improve this. Below is another comparison of the 'distinct' and 'group by' queries from the same table with seqscan set to on and off. It does look like the group by works better (with seqscan off), as you suggested. I'll try some more tinkering to see what I can make happen. However, I guess what I'm really trying to do in the context that I'm currently working in is summarize the unique values stored in the index, rather than querying the table itself. Is this possible, or reasonable to do? Thanks again for your help, Mike (P.S. - Sorry if you get this twice, Tom)

===

dbname=# set session enable_seqscan to on;
SET
dbname=# explain analyze select distinct year from tbl_ind_schools_edu;
QUERY PLAN
---
Unique (cost=32302.16..32579.31 rows=2 width=2) (actual time=2871.115..3705.652 rows=2 loops=1)
  -> Sort (cost=32302.16..32440.74 rows=55431 width=2) (actual time=2871.105..3268.114 rows=55431 loops=1)
        Sort Key: "year"
        -> Seq Scan on tbl_ind_schools_edu (cost=0.00..27485.31 rows=55431 width=2) (actual time=0.091..1903.820 rows=55431 loops=1)
Total runtime: 3707.879 ms
(5 rows)

dbname=# set session enable_seqscan to off;
SET
dbname=# explain analyze select distinct year from tbl_ind_schools_edu;
QUERY PLAN
---
Unique (cost=0.00..161575.24 rows=2 width=2) (actual time=0.312..2143.846 rows=2 loops=1)
  -> Index Scan using schoolse_school_year on tbl_ind_schools_edu (cost=0.00..161436.67 rows=55431 width=2) (actual time=0.286..1717.445 rows=55431 loops=1)
Total runtime: 2144.100 ms
(3 rows)

dbname=# set session enable_seqscan to on;
SET
dbname=# explain analyze select year from tbl_ind_schools_edu group by year;
QUERY PLAN
---
HashAggregate (cost=27623.89..27623.91 rows=2 width=2) (actual time=2176.003..2176.010 rows=2 loops=1)
  -> Seq Scan on tbl_ind_schools_edu (cost=0.00..27485.31 rows=55431 width=2) (actual time=0.072..1697.776 rows=55431 loops=1)
Total runtime: 2254.643 ms
(3 rows)

dbname=# set session enable_seqscan to off;
SET
dbname=# explain analyze select year from tbl_ind_schools_edu group by year;
QUERY PLAN
---
Group (cost=0.00..161575.24 rows=2 width=2) (actual time=0.350..2128.425 rows=2 loops=1)
  -> Index Scan using schoolse_school_year on tbl_ind_schools_edu (cost=0.00..161436.67 rows=55431 width=2) (actual time=0.296..1689.331 rows=55431 loops=1)
Total runtime: 2129.799 ms
(3 rows)

Tom Lane wrote:
> Mike Leahy <[EMAIL PROTECTED]> writes:
>> ... When I try to get the distinct number of
>> years from these tables, it does a sequential scan to get two unique
>> values from the "year" column in the *_edu table, but it uses an index
>> scan to get a single unique value from the "year" column from the *_con
>> table. In both cases, I would have expected the index scan to be used.
>
> You have a fundamental misunderstanding of what's going on here. Both
> plans fetch the entire table contents. The difference is how the data
> is brought into sorted order for the UNIQUE step --- either by an
> explicit sort, or by scanning the table in index order.
>
> A full-table index scan is usually pretty darn inefficient (too much
> random access) and so it's often the case that the sort approach is
> actually faster.
>
> The two EXPLAINs you provided aren't compelling evidence of anything
> wrong because they are for two different-sized tables ... but to the
> extent that the results are comparable it appears that the planner is
> actually biased in favor of the indexscan plan (cost divided by actual
> time is way lower for the indexscan).
>
> What you should look at is performance of the two approaches on the
> *same* table (fool with enable_sort and/or enable_indexscan to force
> the alternative choices) and then see whether it makes sense to tweak
> the planner cost parameters for your installation.
> Also I'd suggest trying
>
> select year from [table] group by year
>
> which is capable of using a hash aggregation approach; that will likely
> beat either of these plans.
>
> regards, tom lane
Re: [GENERAL] Checking for Foreign Keys constraining a record?
Forgot to add; another option is to use a PL function with an exception handler. This may be a slightly more elegant approach, but not necessarily easier. FYI

Jerry Sievers <[EMAIL PROTECTED]> writes: > Benjamin Smith <[EMAIL PROTECTED]> writes: > > > I want to be able to determine in advance whether or not a record is > > "deleteable" before displaying the button to delete the record. If it's not > > deleteable, it should say so before the user hits the button. > > > > But, the only way that I've been able to find out if the customer record is > > deletable is to begin a transaction, try to delete it, check to see if it > > worked, and then rollback the session. > > > > This causes my error logger to log errors every time somebody looks at a > > customer record, and (I'm sure) is not very efficient. > > > > Is there a way to ask the database: "Are there any FK constraints that > > would prevent this record from being deleted?" > > Short of your own fancy function that walks the FK tree, no. (BTW, > this could be simple actually if the FK linkage is shallow.) > > Add a statement to prevent the nuisance error message to the trans. > > begin; > set log_min_messages to log; > do trial delete; > rollback; > > HTH

-- Jerry Sievers 305 854-3001 (home) WWW ECommerce Consultant 305 321-1144 (mobile) http://www.JerrySievers.com/
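A minimal sketch of that exception-handler probe, assuming a hypothetical customers table and PostgreSQL 8.0 or later (the first release with PL/pgSQL exception blocks). The exception in either path rolls back the block's work, so the trial delete never sticks:

```sql
-- Probe deletability by actually attempting the delete inside a
-- subtransaction, then forcing it to be undone.
CREATE OR REPLACE FUNCTION customer_deletable(p_id integer)
RETURNS boolean AS $$
BEGIN
    DELETE FROM customers WHERE id = p_id;
    RAISE EXCEPTION 'rollback probe';      -- delete worked; undo it
EXCEPTION
    WHEN foreign_key_violation THEN RETURN false;  -- child rows exist
    WHEN raise_exception       THEN RETURN true;   -- our own probe fired
END;
$$ LANGUAGE plpgsql;
```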
Re: [GENERAL] Checking for Foreign Keys constraining a record?
Benjamin Smith <[EMAIL PROTECTED]> writes: > I want to be able to determine in advance whether or not a record is > "deleteable" before displaying the button to delete the record. If it's not > deleteable, it should say so before the user hits the button. > > But, the only way that I've been able to find out if the customer record is > deletable is to begin a transaction, try to delete it, check to see if it > worked, and then rollback the session. > > This causes my error logger to log errors every time somebody looks at a > customer record, and (I'm sure) is not very efficient. > > Is there a way to ask the database: "Are there any FK constraints that would > prevent this record from being deleted?" Short of your own fancy function that walks the FK tree, no. (BTW, this could be simple actually if the FK linkage is shallow.) Add a statement to the transaction to suppress the nuisance error message:

begin;
set log_min_messages to log;
do trial delete;
rollback;

HTH -- Jerry Sievers 305 854-3001 (home) WWW ECommerce Consultant 305 321-1144 (mobile) http://www.JerrySievers.com/
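Spelled out for a hypothetical customers table, the trial-delete recipe looks like this; the client checks whether the DELETE raised a foreign-key violation, and the ROLLBACK guarantees nothing is actually removed (note that log_min_messages is a superuser-only setting):

```sql
BEGIN;
-- Keep the expected FK-violation errors out of the server log.
SET log_min_messages TO log;
DELETE FROM customers WHERE id = 42;   -- trial delete; may raise an FK violation
ROLLBACK;                              -- undo the delete either way
```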
[GENERAL] Checking for Foreign Keys constraining a record?
I have a customer table (very important) and have numerous fields in other tables FK to the serial id of the customer table. There's an option to delete a customer record, but it has to fail if any records are linked to it (eg: invoices) in order to prevent the books from getting scrambled. I want to be able to determine in advance whether or not a record is "deleteable" before displaying the button to delete the record. If it's not deleteable, it should say so before the user hits the button. But, the only way that I've been able to find out if the customer record is deletable is to begin a transaction, try to delete it, check to see if it worked, and then rollback the session. This causes my error logger to log errors every time somebody looks at a customer record, and (I'm sure) is not very efficient. Is there a way to ask the database: "Are there any FK constraints that would prevent this record from being deleted?" Thanks, -Ben -- "The best way to predict the future is to invent it." - XEROX PARC slogan, circa 1978
Re: [GENERAL] Help with unpredictable use of indexes on large tables...
Mike Leahy <[EMAIL PROTECTED]> writes: > ... When I try to get the distinct number of > years from these tables, it does a sequential scan to get two unique > values from the "year" column in the *_edu table, but it uses an index > scan to get a single unique value from the "year" column from the *_con > table. In both cases, I would have expected the index scan to be used.

You have a fundamental misunderstanding of what's going on here. Both plans fetch the entire table contents. The difference is how the data is brought into sorted order for the UNIQUE step --- either by an explicit sort, or by scanning the table in index order.

A full-table index scan is usually pretty darn inefficient (too much random access) and so it's often the case that the sort approach is actually faster.

The two EXPLAINs you provided aren't compelling evidence of anything wrong because they are for two different-sized tables ... but to the extent that the results are comparable it appears that the planner is actually biased in favor of the indexscan plan (cost divided by actual time is way lower for the indexscan).

What you should look at is performance of the two approaches on the *same* table (fool with enable_sort and/or enable_indexscan to force the alternative choices) and then see whether it makes sense to tweak the planner cost parameters for your installation.

Also I'd suggest trying

select year from [table] group by year

which is capable of using a hash aggregation approach; that will likely beat either of these plans.

regards, tom lane
[GENERAL] Help with unpredictable use of indexes on large tables...
Hello list, I've been having a bit of difficulty getting Postgres to use indexes on some large tables that I have. Included below are the results from 'explain analyze' for two queries that should get the unique years of data that are available from two different tables (tbl_ind_schools_edu and tbl_ind_schools_con). The *_edu table has two years of data in it while the *_con table has one year, so *_edu essentially has nearly twice as many rows (i.e., 55k vs. 26k). There is an integer column called "year" in each table that flags what year each row of data is from, and a btree index on this column for both tables (called "schoolse_year" and "schoolsc_year" respectively). Both tables have been fully vacuumed/analyzed. When I try to get the distinct number of years from these tables, it does a sequential scan to get two unique values from the "year" column in the *_edu table, but it uses an index scan to get a single unique value from the "year" column from the *_con table. In both cases, I would have expected the index scan to be used. I also tried this using the year column as group by clause, but in that case, neither of the queries use the index scan. I know I can force PostgreSQL to use indexes by setting enable_seqscan to off, and this does improve the performance of the query. But I'm wondering why the query analyzer doesn't use this index on the larger table. I have several other tables of a similar nature (basically the same data aggregated at different levels), where one table has two years of data, and the other has one. In several cases, the table with two years never utilizes its index on the year column unless I force it to do so. I should point out that the table with two years of data also has a much larger number of columns, all with indexes since they are all potentially used for querying subsets from the tables. 
Is there something particularly wrong that I might be doing (or something that I'm not doing) to prevent the indexes from being properly used? Thanks in advance for any advice... Mike

dbname=# explain analyze select distinct year from tbl_ind_schools_edu;
QUERY PLAN
Unique (cost=32302.16..32579.31 rows=2 width=2) (actual time=1545.911..2084.170 rows=2 loops=1)
  -> Sort (cost=32302.16..32440.74 rows=55431 width=2) (actual time=1545.901..1892.026 rows=55431 loops=1)
        Sort Key: "year"
        -> Seq Scan on tbl_ind_schools_edu (cost=0.00..27485.31 rows=55431 width=2) (actual time=0.074..1180.303 rows=55431 loops=1)
Total runtime: 2085.294 ms
(5 rows)

dbname=# explain analyze select distinct year from tbl_ind_schools_con;
QUERY PLAN
Unique (cost=0.00..954.37 rows=1 width=2) (actual time=76.277..372.526 rows=1 loops=1)
  -> Index Scan using schoolsc_year on tbl_ind_schools_con (cost=0.00..887.08 rows=26916 width=2) (actual time=76.265..275.314 rows=26916 loops=1)
Total runtime: 372.659 ms
(3 rows)
Re: [GENERAL] trying to write a many-to-many type function.
Eric Davies <[EMAIL PROTECTED]> writes: > As far as I can see, you can't write a set returning function that > takes a set as input, as in >select getMetaData( select * from mytable); > Is a function of the type I'm trying to write even possible in PostgreSQL? I don't think it's possible at the moment :-(. We've speculated about ways to relax the restriction that a function in FROM can't take any non-constant arguments. It appears that adding SQL99's LATERAL and/or UNNEST features might do it ... I've had a hard time finding any very clear explanation of their semantics, though, so I'm not entirely sure. regards, tom lane
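For reference, this is exactly the restriction that the LATERAL feature later lifted (PostgreSQL 9.3 and up). With it, the query the poster wants can be written as follows, reusing the hypothetical names from the original message:

```sql
-- Apply a set-returning function to every row of a table.
SELECT m.oid, g.*
FROM mytable m,
     LATERAL getMetaData(m.myobject) AS g(x float, y float, c float);
```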
[GENERAL] [Announce] mod_sqil: an RDBMS => XML mapper Apache module
Hi. Some people on the list may find the following interesting: http://marc2.theaimsgroup.com/?l=apache-modules&m=114599516512637&w=2 Since it uses Apache's apr_dbd, Postgres is one of the supported platforms. Sincerely, Joachim
[GENERAL] trying to write a many-to-many type function.
Hi, I'd like to write a PostgreSQL server function (ideally in C) that takes a set of objects and returns 1 or more rows from each object, such that the rows returned can be cast to a desired rowtype. The structure of the returned rows depends on additional arguments. For example, I'd like to be able to write a function like the 'getMetaData' function shown below:

psql>select oid, getMetaData(myobject, "x,y,z")::t(x float, y float, c float) from mytable;
93939 | (3,4,5)
93939 | (3,4,9)
93939 | (3,4,2)
93939 | (4,3,1)
93940 | (4,3,1)
93940 | (3,1,4)

What I've tried: By following the examples in the contrib directory, I managed to write a function that returned "setof record", but I can only cast its results if the function is in the from clause, i.e.:

select * from getMetaData('some text...'::myobjecttype) as t(x float, y float, c float);

Which limits me to calling the function with a single object. As far as I can see, you can't write a set returning function that takes a set as input, as in

select getMetaData( select * from mytable );

Is a function of the type I'm trying to write even possible in PostgreSQL? ** Eric Davies, M.Sc. Barrodale Computing Services Ltd. Tel: (250) 472-4372 Fax: (250) 472-4373 Web: http://www.barrodale.com Email: [EMAIL PROTECTED] ** Mailing Address: P.O. Box 3075 STN CSC Victoria BC Canada V8W 3W2 Shipping Address: Hut R, McKenzie Avenue University of Victoria Victoria BC Canada V8W 3W2 **
Re: [GENERAL] Commit rules or Commit trigger
On Thu, Apr 27, 2006 at 10:17:51 -0400, Vivek Khera <[EMAIL PROTECTED]> wrote: > > On Apr 24, 2006, at 9:37 AM, Claudio Tognolo wrote: > > >I am developing a temporal database and I need to > >check the integrity constraints before the commit of the transaction. > >I cannot use deferrable checking because the integrity > >constraint is a select, and I cannot use a trigger or rule because > >the event parameter does not support the commit event. > >Do you have any ideas? > > My understanding is that deferred constraints are not checked until > commit time, which seems to be what you want. Why do you believe > otherwise? But you aren't really allowed to do selects in constraints, even though you can make Postgres do that in a way that partly works. You need to enforce these constraints with an after trigger.
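A deferred AFTER trigger of the kind suggested can be sketched like this; the table name and check logic are hypothetical, but CREATE CONSTRAINT TRIGGER with DEFERRABLE INITIALLY DEFERRED is what lets a SELECT-based check run at commit time:

```sql
-- Validity check driven by a SELECT, deferred to commit time.
CREATE OR REPLACE FUNCTION check_no_overlap() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM bookings b
               WHERE b.room = NEW.room
                 AND b.id <> NEW.id
                 AND b.period && NEW.period) THEN  -- assumes an overlap operator
        RAISE EXCEPTION 'overlapping booking for room %', NEW.room;
    END IF;
    RETURN NULL;   -- return value of an AFTER trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER bookings_no_overlap
    AFTER INSERT OR UPDATE ON bookings
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE check_no_overlap();
```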
[GENERAL] Unexpected behavior
Hey, I am trying to figure out some unexpected behavior in PostgreSQL. When I create a rule that fires on a table after an update, and if that rule has a SELECT statement in it, it seems to attempt to fire (on an empty set) regardless of how the conditional evaluates after an update. The result is that if I run an update on a table with such a rule, instead of getting a message along the lines of "UPDATE (# of rows)" I get the column names of the select statement with no rows and the message "row number -1 is out of range 0..-1". So first off, is having a select statement (I'm actually trying to run a function) inside a rule that fires on an update considered bad practice? I could do this through a trigger, but a rule just seems more natural. Here is some SQL to set up an example of what I'm talking about:

CREATE TABLE test_table (
    id varchar(36) NOT NULL,
    amount float8,
    CONSTRAINT test_table_pkey PRIMARY KEY (id)
) WITHOUT OIDS;
ALTER TABLE test_table OWNER TO postgres;

CREATE OR REPLACE RULE protect_id AS
    ON UPDATE TO test_table
    WHERE new.id::text <> old.id::text
    DO INSTEAD SELECT 'abc' AS test_select;

INSERT INTO test_table (id, amount) values ('a', 123);

Now, to cause the error, just run an update:

UPDATE test_table set amount = 1 where id = 'a';

You will find that it returns:

test_select
-
(0 rows)

Rather than what I expect:

UPDATE 1

When that rule should never fire (the id hasn't changed). If I change the conditional of the rule to something that must always be false (like false, or 1 = 0), it will still behave in this manner. So am I doing something wrong or am I seeing a bug? Thanks, Thomas Meeks
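The trigger route mentioned above avoids the rule system's result-set quirk entirely; a sketch, assuming the intent of protect_id is to reject changes to the id column:

```sql
-- Reject primary-key changes with a BEFORE UPDATE trigger instead
-- of a conditional DO INSTEAD rule.
CREATE OR REPLACE FUNCTION protect_id_trig() RETURNS trigger AS $$
BEGIN
    IF NEW.id <> OLD.id THEN
        RAISE EXCEPTION 'id column may not be changed';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER protect_id
    BEFORE UPDATE ON test_table
    FOR EACH ROW EXECUTE PROCEDURE protect_id_trig();
```

Ordinary updates then return the expected "UPDATE 1", and only an attempt to change the id raises an error.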
Re: [GENERAL] Vacuum suggesting doubling of max_fsm_pages
On Apr 27, 2006, at 11:06 AM, Kenneth Downs wrote: I've got a test database that is going live. During development I have not vacuum'd much, so to get started I did "Vacuum verbose analyze". You probably really do have a lot of spare space in the files. I'd recommend vacuum full if you can, to compact it all, since you haven't done a vacuum regularly. Then, after a while of production, see what vacuum recommends for fsm settings.
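As commands, the suggested sequence might look like this (the max_fsm_pages figure below is a placeholder; use whatever a later VACUUM VERBOSE actually reports):

```sql
-- One-time compaction after a long stretch without vacuuming:
VACUUM FULL VERBOSE ANALYZE;

-- Later, under production load, check the FSM hint printed at the end of:
VACUUM VERBOSE;

-- Then raise max_fsm_pages in postgresql.conf accordingly, e.g.:
-- max_fsm_pages = 200000   (takes effect after a server restart)
```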
Re: [GENERAL] CALCULATE AGE!
juan jose dominguez <[EMAIL PROTECTED]> wrote: > Hi, I'd like to know how to get someone's age. I get the following error > with the following code: > -- Function: funcion_saca_edad() > DROP TRIGGER trigger_saca_edad ON fechas; > DROP FUNCTION funcion_saca_edad(); > CREATE OR REPLACE FUNCTION funcion_saca_edad() RETURNS OPAQUE AS' Please, NO HTML! > INSERT INTO fechas VALUES ('10/07/83'); > ERROR: invalid input syntax for type timestamp: "22 years 6 mons 20 days" > CONTEXT: PL/pgSQL function "funcion_saca_edad" line 6 at assignment The age() function does not return a timestamp; it returns an interval. Andreas -- Really, I'm not out to destroy Microsoft. That will just be a completely unintentional side effect. (Linus Torvalds) "If I was god, I would recompile penguin with --enable-fly." (unknown) Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°
[GENERAL] CALCULATE AGE!
Hi, I'd like to know how to get someone's age. I get the following error with the following code:

-- Function: funcion_saca_edad()

DROP TRIGGER trigger_saca_edad ON fechas;
DROP FUNCTION funcion_saca_edad();

CREATE OR REPLACE FUNCTION funcion_saca_edad() RETURNS OPAQUE AS'
DECLARE
    edad timestamp;
BEGIN
    edad := age(CURRENT_DATE, NEW.fecha);
    RAISE NOTICE ''edad : %'', edad;
    RAISE NOTICE ''32131231'';
    RETURN NEW;
END;
'LANGUAGE 'plpgsql';

CREATE TRIGGER trigger_saca_edad BEFORE INSERT OR UPDATE ON fechas
    FOR EACH ROW EXECUTE PROCEDURE funcion_saca_edad();

INSERT INTO fechas VALUES ('10/07/83');
ERROR: invalid input syntax for type timestamp: "22 years 6 mons 20 days"
CONTEXT: PL/pgSQL function "funcion_saca_edad" line 6 at assignment
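As the reply in this thread points out, age() returns an interval, not a timestamp, so the fix is to declare the variable accordingly; a minimal corrected sketch (RETURNS trigger is the modern spelling of the deprecated OPAQUE):

```sql
-- Declaring edad as interval matches age()'s return type.
CREATE OR REPLACE FUNCTION funcion_saca_edad() RETURNS trigger AS '
DECLARE
    edad interval;
BEGIN
    edad := age(CURRENT_DATE, NEW.fecha);
    RAISE NOTICE ''edad : %'', edad;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';
```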
Re: [GENERAL] query that needs two nested queries, is this the best way?
Why not a join, like below (not tested)?

select a.id
from p4_versions a
join p4_files b on (a.versionof = b.id and a.version = b.headver)
where p4path like '%/date.txt'

-- Original Message --- From: Mark Harrison <[EMAIL PROTECTED]> To: Postgresql-General Sent: Thu, 27 Apr 2006 09:50:38 -0700 Subject: [GENERAL] query that needs two nested queries, is this the best way?

> I've got a query that depends upon two pieces of data from another table for
> use in a where clause.
>
> If I perform this procedurally, I can issue two sql commands (see below),
> one to get the two pieces of data to search upon (QUERY 1), and one to
> perform the query with these two pieces of data plugged in (QUERY 2).
>
> This can also be done with one query that has two subselects. However,
> this causes one redundant selection to be performed (QUERY 3).
>
> So, I have two questions:
>
> 1. Is there some way to formulate query 3 without having the redundant
> subselects?
>
> 2. Stylistically or idiomatically, which is preferable? I realize
> this is a pretty vague question, especially since both approaches
> produce the same answer, but I'm just looking for the emotional
> tendency of experienced SQL developers.
>
> Many TIA!
> Mark
>
> ### QUERY 1: get "id" and "headver" values for use in the next query
>
> scratch1=# select id, headver from p4_files where p4path like '%/date.txt';
>     id    | headver
> ----------+---------
>  60152254 |       7
>
> ### QUERY 2: use those values in the query
>
> scratch1=# select id from p4_versions where versionof=60152254 and version=7;
>     id
> ----------
>  60174263
>
> ### QUERY 3: combine the two statements above by using two subselects
>
> scratch1=# select id from p4_versions where
>     versionof=(select id from p4_files where p4path like '%/date.txt')
>     and
>     version=(select headver from p4_files where p4path like '%/date.txt');
>     id
> ----------
>  60174263
--- End of Original Message ---
Re: [GENERAL] query that needs two nested queries, is this the best way?
Mark Harrison <[EMAIL PROTECTED]> writes: > I've got a query that depends upon two pieces of data from another table for > use in a where clause. > scratch1=# select id from p4_versions where > versionof=(select id from p4_files where p4path like '%/date.txt') > and > version=(select headver from p4_files where p4path like > '%/date.txt'); Use a row-wise comparison, viz select id from p4_versions where (versionof, version) = (select id, headver from p4_files where p4path like '%/date.txt'); regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] query that needs two nested queries, is this the best
Mark Harrison wrote: ### QUERY 3: combine the two statements above by using two subselects scratch1=# select id from p4_versions where versionof=(select id from p4_files where p4path like '%/date.txt') and version=(select headver from p4_files where p4path like '%/date.txt'); This won't work if your LIKE matches more than one row anyway. Try something like: SELECT id FROM p4_versions WHERE (versionof, version) IN (SELECT id,headver FROM p4_files WHERE ...) -- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
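Tom's row-wise comparison and the IN variant above behave identically when the subselect returns a single row; the IN form additionally tolerates multiple matches. A small sketch using Python's sqlite3 module as a stand-in for Postgres (SQLite supports row values from version 3.15 on; the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE p4_files (id INTEGER, headver INTEGER, p4path TEXT);
    CREATE TABLE p4_versions (id INTEGER, versionof INTEGER, version INTEGER);
    INSERT INTO p4_files VALUES (60152254, 7, '//depot/proj/date.txt');
    INSERT INTO p4_versions VALUES (60174263, 60152254, 7);  -- head revision
    INSERT INTO p4_versions VALUES (60174111, 60152254, 6);  -- older revision
""")

# One subselect produces the (id, headver) pair, so p4_files is read only once.
rows = conn.execute("""
    SELECT id FROM p4_versions
    WHERE (versionof, version) IN
          (SELECT id, headver FROM p4_files WHERE p4path LIKE '%/date.txt')
""").fetchall()
print(rows)  # [(60174263,)]
```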
[GENERAL] query that needs two nested queries, is this the best way?
I've got a query that depends upon two pieces of data from another table for use in a where clause. If I perform this procedurally, I can issue two sql commands (see below), one to get the two pieces of data to search upon (QUERY 1), and one to perform the query with these two pieces of data plugged in (QUERY 2). This can also be done with one query that has two subselects. However, this causes one redundant selection to be performed (QUERY 3). So, I have two questions: 1. Is there some way to formulate query 3 without having the redundant subselects? 2. Stylistically or Idiomatically, which is preferable? I realize this is a pretty vague question, especially since both approaches produce the same answer, but I'm just looking for the emotional tendency of experienced SQL developers. Many TIA! Mark ### QUERY 1: get "id" and "headver" values for use in the next query scratch1=# select id, headver from p4_files where p4path like '%/date.txt'; id| headver --+- 60152254 | 7 ### QUERY 2: use those values in the query scratch1=# select id from p4_versions where versionof=60152254 and version=7; id -- 60174263 ### QUERY 3: combine the two statements above by using two subselects scratch1=# select id from p4_versions where versionof=(select id from p4_files where p4path like '%/date.txt') and version=(select headver from p4_files where p4path like '%/date.txt'); id -- 60174263 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] problem with unique text column
Holger Zwingmann wrote: If I check, after the reconnect, for some given 'text key' my SELECT won't find it and my procedure will thus insert a new (key,value) pair. When I disconnect and reconnect some time later again, I will suddenly find both of the keys with the next query and will thus raise a unique violation from within my stored procedure. During my investigations, I also realized the following: When I dump the 'key/value' table using pg_dump into a file containing INSERTS and then execute the file into an (empty) DB, which I have created initially via a file copy of my DB folder (DB was down, of course), I am able to insert an already existing 'text_key' again. I am only able to find the 'text_key' if I do a string compare using LIKE, a simple key_value='value' query does not work. I thought it might be an encoding problem, but this also happens if I set the encoding option of pg_dump to generate the dump file in utf-8, which is my DB setting, explicitly. OK, if you are certain that the text values are the same... It could be that the index is disagreeing with the data in the tables. You can test this by issuing "SET enable_indexscan=false" before running your two queries. If they both return the same answer then the index is at fault. Try a REINDEX and see if the problem goes away. If this is happening only when you stop the DB, copy the files and restart it then that would suggest to me one of: 1. You have fsync turned off, so data is not being written to disk 2. You aren't copying the files fully 3. There may be errors in the logs -- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Problem with complex outer join expression
On 4/27/06, Bruno Wolff III <[EMAIL PROTECTED]> wrote: > I believe that has been fixed in recent versions of postgres. At some point > in the past there was only a date + int operator and not an int + date > operator. > Overloading makes it easy to forget that those are two different operators. I'll double check the order I've been using, but I'm using shared hosting for pg, so I'm limited to what they're prepared to support. Chris -- Chris Velevitch Manager - Sydney Flash Platform Developers Group www.flashdev.org.au ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Problem with complex outer join expression
On 4/28/06, Tom Lane <[EMAIL PROTECTED]> wrote: > Either write it as date + integer, or add an integer + date operator I thought I did it in that order, but I'll check it again. > (not very hard, just transpose the inputs), or use PG >= 8.0 which has > integer + date built in. I'm using a shared host for pg, so I'm limited to what they are prepared to support. Chris -- Chris Velevitch Manager - Sydney Flash Platform Developers Group www.flashdev.org.au ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Autovacuum Logging
Well, one problem now is that every time pg_autovacuum opens a database, a message is placed in the logs: LOG: autovacuum: processing database "test" LOG: autovacuum: processing database "test" LOG: autovacuum: processing database "test" LOG: autovacuum: processing database "test" This is certainly not desirable. I changed the message to DEBUG1 for 8.2 with the attached patch. I also added a sentence to the log_min_error_messages TODO item: * Allow log_min_messages to be specified on a per-module basis This would allow administrators to see more detailed information from specific sections of the backend, e.g. checkpoints, autovacuum, etc. Another idea is to allow separate configuration files for each module, or allow arbitrary SET commands to be passed to them. I am thinking the new second sentence is more flexible than just making a change for log_min_error_messages. --- Matthew T. O'Connor wrote: > Right, I think there has been discussion about this and general > agreement that the current autovacuum logging options are less than > ideal to put it mildly. Unfortunately, I don't think there has been any > action by anyone to do something about it. I hope to work on this at > some point, but coding time for PG is a scarce resource in my life, so no > promises. > > Matt > > > > Will Reese wrote: > > I found this short discussion between Tom and Bruce, but that's about it > > for autovacuum logging. > > > > http://archives.postgresql.org/pgsql-general/2006-04/msg00489.php > > > > It just seems like the "processing database" log statement should be set > > to a lower level, since it just logs every time autovacuum runs. And > > the "vacuum table" log statement should be set to LOG since it will only > > log when it actually vacuums a table, which is very useful and important > > information. If Bruce's solution is implemented in 8.2 that would be > > nice, but raising the log level from DEBUG2 to LOG would be a nice patch > > for 8.1.4 in the meantime. 
:) > > > > > > Will Reese -- http://blog.rezra.com > > > > On Apr 27, 2006, at 12:20 AM, Jim C. Nasby wrote: > > > >> I believe 8.2 will have improved autovac logging. Take a look in the > >> -hackers archives for more info. > >> > >> On Wed, Apr 26, 2006 at 10:47:26PM -0500, Will Reese wrote: > >>> Is there a reason many of the most useful autovacuum.c elog > >>> statements are set to DEBUG2? It seems to me that these should be > >>> set to LOG. > >>> > >>> I used autovacuum when it was a contrib module, and it failed after a > >>> month. To prevent major performance problems I went back to the > >>> daily vacuum. I was looking forward to using autovacuum in 8.1 since > >>> it is more resilient and configurable now, but when I noticed it > >>> would not log its activities I almost decided against it. After > >>> looking at the source code it seems that all the necessary logging is > >>> built in, it's just not set to the appropriate log level. I'm sure > >>> I'm not the only person interested in this, and I saw somewhere (I > >>> can't find it again) that EnterpriseDB has enabled autovacuum > >>> logging. I don't think it's too verbose to change these to the > >>> normal log level, but if so could it be a postgresql.conf option to > >>> enable/disable autovacuum logging? It sure would be nice to be able > >>> to verify that tables are being vacuumed properly without having to > >>> set the log level to DEBUG2. > >>> > >>> Will Reese -- http://blog.rezra.com > >>> > >>> ---(end of broadcast)--- > >>> TIP 6: explain analyze is your friend > >>> > >> > >> --Jim C. Nasby, Sr. 
Engineering Consultant [EMAIL PROTECTED] > >> Pervasive Software http://pervasive.com work: 512-231-6117 > >> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 > > > > > > ---(end of broadcast)--- > > TIP 6: explain analyze is your friend > > > > ---(end of broadcast)--- > TIP 1: if posting/reading through Usenet, please send an appropriate >subscribe-nomail command to [EMAIL PROTECTED] so that your >message can get through to the mailing list cleanly > -- Bruce Momjian http://candle.pha.pa.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + Index: src/backend/postmaster/autovacuum.c === RCS file: /cvsroot/pgsql/src/backend/postmaster/autovacuum.c,v retrieving revision 1.16 diff -c -c -r1.16 autovacuum.c *** src/backend/postmaster/autovacuum.c 6 Apr 2006 20:38:00 - 1.16 --- src/backend/postmaster/autovacuum.c 27 Apr 2006 15:54:56 - *** **
Re: [GENERAL] Autovacuum Logging
Right, I think there has been discussion about this and general agreement that the current autovacuum logging options are less than ideal to put it mildly. Unfortunately, I don't think there has been any action by anyone to do something about it. I hope to work on this at some point, but coding time for PG is a scarce resource in my life, so no promises. Matt Will Reese wrote: I found this short discussion between Tom and Bruce, but that's about it for autovacuum logging. http://archives.postgresql.org/pgsql-general/2006-04/msg00489.php It just seems like the "processing database" log statement should be set to a lower level, since it just logs every time autovacuum runs. And the "vacuum table" log statement should be set to LOG since it will only log when it actually vacuums a table, which is very useful and important information. If Bruce's solution is implemented in 8.2 that would be nice, but raising the log level from DEBUG2 to LOG would be a nice patch for 8.1.4 in the meantime. :) Will Reese -- http://blog.rezra.com On Apr 27, 2006, at 12:20 AM, Jim C. Nasby wrote: I believe 8.2 will have improved autovac logging. Take a look in the -hackers archives for more info. On Wed, Apr 26, 2006 at 10:47:26PM -0500, Will Reese wrote: Is there a reason many of the most useful autovacuum.c elog statements are set to DEBUG2? It seems to me that these should be set to LOG. I used autovacuum when it was a contrib module, and it failed after a month. To prevent major performance problems I went back to the daily vacuum. I was looking forward to using autovacuum in 8.1 since it is more resilient and configurable now, but when I noticed it would not log its activities I almost decided against it. After looking at the source code it seems that all the necessary logging is built in, it's just not set to the appropriate log level. I'm sure I'm not the only person interested in this, and I saw somewhere (I can't find it again) that EnterpriseDB has enabled autovacuum logging. 
I don't think it's too verbose to change these to the normal log level, but if so could it be a postgresql.conf option to enable/disable autovacuum logging? It sure would be nice to be able to verify that tables are being vacuumed properly without having to set the log level to DEBUG2. Will Reese -- http://blog.rezra.com ---(end of broadcast)--- TIP 6: explain analyze is your friend --Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 6: explain analyze is your friend ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Installing PostgreSQL on Win 2003 R2 64-bit
Yes, if you search the archives there was something on this. Maybe it was around 32-bit, but nonetheless, take a search through. Aly. Stanislaw Tristan wrote: Is it possible? ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings -- Aly S.P Dharshi [EMAIL PROTECTED] "A good speech is like a good dress that's short enough to be interesting and long enough to cover the subject" ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] selecting column comment
From the system catalogs: SELECT PC.relname, PD.description FROM pg_catalog.pg_description PD, pg_catalog.pg_class PC WHERE PD.objoid = PC.oid AND PD.objsubid = 0 "Ari Kahn" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] >I have comments on tables in my database. > > e.g. COMMENT ON TABLE my_schema.my_table IS 'Employee Information'; > > I know you can get comments using \d+ > > Is there a way to select comments? > > Reason: I do a lot of perl interfaces. I find escape commands do not work > using DBI. > > Thanks, > Ari > > > ---(end of broadcast)--- > TIP 3: Have you checked our extensive FAQ? > > http://www.postgresql.org/docs/faq > ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] Vacuum suggesting doubling of max_fsm_pages
Hi folks I'm hoping somebody can demystify something for me. I've got a test database that is going live. During development I have not vacuum'd much, so to get started I did "Vacuum verbose analyze". At the end of the process it told me that vacuum needed max_fsm_pages to be set at about 212000 (appx) to track all free pages. I then ran it again (w/o yet changing max_fsm_pages) and it came back asking for a number in the range of 49. So I set the value to 50, restarted the server, and ran vacuum again. This time it suggested a value of 98. It appears to me the requests are doubling in size, so before increasing it again I thought I would ask here. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] PostgreSQL 8.1 + PHP5.1.x/4.4.2 + Apache 2.0.55/1.3.34 PROBLEM!! PLEASE HELP
This is not a postgreSQL issue -- you are having problems with your PHP configuration... <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > Help! I was trying to make an installation of PHP 5.1.2 + Apache > 2.0.55 + PostgreSQL 8.1 on Windows 2003 Server, and I'm stuck. I > got PHP to work with Apache quite smoothly, so did I marry PHP with > PostgreSQL - scripts connecting to the database work fine from > windows command line, except that the following code: > > if (extension_loaded("php_pgsql")) { > echo "PGSQL loaded!"; > } > > returns no message. Still database queries work fine. Trouble starts > when I try to open a page in my browser - then I get an error > message like this: > > Error: call to undefined function pg_connect()... > > I changed the php.ini file a billion times, trying to figure out > what to set in the "extension_dir" and "extension=php_pgsql.dll" > lines, and I tried at least as many times to change apache's > httpd.conf file so that the php module is loaded properly. And it > is, as far as I'm concerned - the phpinfo() page shows without a > problem. One peculiar thing about it is that in the "Loaded > modules" section (don't remember the exact name) there's absolutely > no sign of the pgsql module. > > I also tried downloading and installing the latest php snapshot, > because someone on a forum told me it might work, but it didn't :o( > I even tried earlier versions of Apache and PHP, but nothing > worked. Initially I figured I must have found a bug, but I don't > really believe it could've gone on for so long without being > noticed by somebody... > > I'm begging for help, because the person I'm doing the server for > strongly insists on the machine working on Win 2003 and not some > distribution of linux or unix. > > Regards, > krzysieq > >
> > ---(end of broadcast)--- > TIP 6: explain analyze is your friend > ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Mailing list setup issue
"Chris Velevitch" <[EMAIL PROTECTED]> writes: > In addition, it's not the norm. I subscribe to lots of mailing lists > and postgresql.org lists are the only ones that I've seem do that. You may think it's not the norm, but you're mistaken. This is how the PG lists are run, as well as most other tech-savvy lists I belong to. Get used to it, or at least learn to configure your subscription the way you want it. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Please comment on the following OpenFTS/tsearch2
On Thu, 27 Apr 2006, Vivek Khera wrote: On Apr 26, 2006, at 3:17 AM, Teodor Sigaev wrote: We know of a tsearch2 installation working with 4 million docs. What are the design goals for the size of the source tables? My engineers are telling me of things their friends have tried and have hit limits of tsearch2. One was importing a large message board (millions of rows, a few sentences of text per row) and ran into problems (which were not detailed). Our interest is in using it for indexing mailing lists we host. We're looking at about 100 or so messages per day right now, with potential growth. Short of actually implementing it and loading up sample data, what guidelines can you provide as to the limits of tsearch2 source data size? I can imagine having 10+ million rows of 4k-byte to 10k-byte long messages within a couple of years. It should be no problem with the inverted index we just posted. Search itself is very fast ! The problem is intrinsic to relational databases - reading data from disk. If you find 100,000 results and you want to rank them, you have to read them from hd, which is slow. That's why we use a caching search daemon, and on a 5 mln doc blog we could get 1 mln searches/day on an 8Gb RAM server. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings Regards, Oleg _ Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru), Sternberg Astronomical Institute, Moscow University, Russia Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/ phone: +007(495)939-16-83, +007(495)939-23-83 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Please comment on the following OpenFTS/tsearch2 issues!
On Apr 26, 2006, at 3:17 AM, Teodor Sigaev wrote: We know of a tsearch2 installation working with 4 million docs. What are the design goals for the size of the source tables? My engineers are telling me of things their friends have tried and have hit limits of tsearch2. One was importing a large message board (millions of rows, a few sentences of text per row) and ran into problems (which were not detailed). Our interest is in using it for indexing mailing lists we host. We're looking at about 100 or so messages per day right now, with potential growth. Short of actually implementing it and loading up sample data, what guidelines can you provide as to the limits of tsearch2 source data size? I can imagine having 10+ million rows of 4k-byte to 10k-byte long messages within a couple of years. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Commit rules or Commit trigger
On Apr 24, 2006, at 9:37 AM, Claudio Tognolo wrote: I am developing a temporal database and I have the need to check the integrity constraints before the commit of the transaction. I cannot use deferrable checking because the integrity constraint is a select, and I cannot use a trigger or rule because the event parameter does not support a commit event. Have you any ideas? My understanding is that deferred constraints are not checked until commit time, which seems to be what you want. Why do you believe otherwise?
Re: [GENERAL] pg_dump -t <> pg_restore -t
Richard Huxton wrote: > alexandre - aldeia digital wrote: >> Hi, >> >> If I do: >> >> pg_dump -Fc -t TABLE database > table-custom.sql >> >> The pg_dump returns the DDL of the table, the data and the DDL for >> indexes and PK. If I use -s, only the structure is returned but it >> includes all elements. >> >> But if I do: >> >> pg_dump -Fc database > backup-custom.file >> pg_restore -Fc -t TABLE backup-custom.file > table-plain.sql >> >> Only the DDL of table and data is returned, but not indexes, etc. >> >> Question: >> >> How can I restore a table with all other dependencies from a custom file >> database ? > > The -l / -L options let you create/read back a control file listing all > the database objects. You can comment out/edit this to control pretty > much everything. Ok. But I have a lot of tables, and these tables have a lot of indexes. How can I get all objects related to a table ? Thanks, Alexandre ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Install issue on Kubuntu
P G wrote: > I have recently switched to Kubuntu and I just installed Postgres on > it. The installation was successful. > > When I try psql, I get this error message, though: > > % psql postgres > Error: You must install at least one postgresql-client- package. > > But I have installed postgresql-client-common and re-installs do not > seem to help. > > Any suggestions? Does /etc/postgresql-common/user_clusters exist? If not, you have struck a known bug in the Ubuntu packaging that I was told has since been fixed. An update should fix things (sudo apt-get update; sudo apt-get dist-upgrade). Or 'sudo /etc/postgresql-common/user_clusters' will get you going. If this doesn't help, you can open a support request at https://launchpad.net/distros/ubuntu/+source/postgresql-8.1/+tickets Or, if you think you know what is going wrong, a bug report at https://launchpad.net/distros/ubuntu/+source/postgresql-8.1/+bugs might be more appropriate. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.canonical.com/ Canonical Ltd.http://www.ubuntu.com/
[GENERAL] Installing PostgreSQL on Win 2003 R2 64-bit
Is it possible? ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] PostgreSQL 8.1 + PHP5.1.x/4.4.2 + Apache 2.0.55/1.3.34 PROBLEM!! PLEASE HELP
Help! I was trying to make an installation of PHP 5.1.2 + Apache 2.0.55 + PostgreSQL 8.1 on Windows 2003 Server, and I'm stuck. I got PHP to work with Apache quite smoothly, so did I marry PHP with PostgreSQL - scripts connecting to the database work fine from windows command line, except that the following code: if (extension_loaded("php_pgsql")) { echo "PGSQL loaded!"; } returns no message. Still database queries work fine. Trouble starts when I try to open a page in my browser - then I get an error message like this: Error: call to undefined function pg_connect()... I changed the php.ini file a billion times, trying to figure out what to set in the "extension_dir" and "extension=php_pgsql.dll" lines, and I tried at least as many times to change apache's httpd.conf file so that the php module is loaded properly. And it is, as far as I'm concerned - the phpinfo() page shows without a problem. One peculiar thing about it is that in the "Loaded modules" section (don't remember the exact name) there's absolutely no sign of the pgsql module. I also tried downloading and installing the latest php snapshot, because someone on a forum told me it might work, but it didn't :o( I even tried earlier versions of Apache and PHP, but nothing worked. Initially I figured I must have found a bug, but I don't really believe it could've gone on for so long without being noticed by somebody... I'm begging for help, because the person I'm doing the server for strongly insists on the machine working on Win 2003 and not some distribution of linux or unix. Regards, krzysieq ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Clustered table order is not preserved on insert
In article <[EMAIL PROTECTED]>, Jim C. Nasby <[EMAIL PROTECTED]> wrote: % Keep reading... from the same paragraph: % % Clustering is a one-time operation: when the table is subsequently % updated, the changes are not clustered. But this isn't really relevant to the question. More to the point is this tidbit from the documentation for SELECT: If the ORDER BY clause is specified, the returned rows are sorted in the specified order. If ORDER BY is not given, the rows are returned in whatever order the system finds fastest to produce. This is not necessarily the order in which they're stored on disk. -- Patrick TJ McPhee North York Canada [EMAIL PROTECTED] ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Database Selection
> [EMAIL PROTECTED] (Scott Marlowe) writes: > > About the security thing. Security is a process, and you won't get > > it from using two different database engines. > > I'd argue that security is an "emergent property" which is either > supported by or undermined by particular facts/features/configurations. > I had another "security" aspect in mind - one half of the newsgroups data will be accessible from the public part of the web pages, the second part and the whole company data system will be accessible from the private part of the web pages; the newsgroups database must have read/write web access, the company database will have read-only web access and read/write access from 3 specific IPs. Let's assume two databases+two database engines: If somebody hacks the newsgroups database and gets read/write access then he cannot access data from the company database (different engine, different engine type). And now let's assume two databases+one database engine: If somebody hacks the newsgroups database and gets read/write access then he could switch databases under the same hacked access and get read/write access to company data (if somebody gets access to a protected database through (at least) the "only local access+login+password" restrictions then I must expect he knows how to switch (hack) to any connected database under the same engine). That is why I wanted to separate the two databases using two different database engines (in order to increase the standard security covered by other security rules). But this idea is maybe too paranoid and the disadvantages of two different engines exceed the security benefits (maybe hypothetical) ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Disk failure scenarios
I'm trying to set up PG, and am curious about the failure scenarios of Postgres with respect to crashed disks. In a given Postgres installation across many disks, which sections of Postgres can fail "gracefully" (i.e. the customer data is safe and the installation can be recreated without backups)? I'm thinking of the scenario where you have numerous tablespaces with tables and indexes spread across them and have separated pg_xlog onto a separate disk. So the setup might be something like this: Disk 1: OS + Postgres install Disk 2: pg_xlog Disk 3: initialized tablespace containing table_master Disk 4: tablespaceA containing tableA Disk 5: tablespaceB containing indexB In this simplistic configuration, only Disk 4 contains any real customer data, right? If any of the other disks fail, would it be possible to slap in a replacement disk and rebuild the database install around disk 4? I.e.: -Disk 1: If the OS/Postgres install disk fails, it's possible to reinstall the OS and the same version of Postgres and point it at disk 3 and everything should run, right? -Disk 2: If the transaction log dies, all changes since the last checkpoint are lost, right? Again, if I set up an empty pg_xlog directory somewhere else, the DB should run just fine, right? -Disk 3: This holds all the pg_* tables, which means the structure of the DB, right? If this disk goes, would it be possible to reinitialize the database directory, create the new database, create a new tablespaceA on Disk 4, and create a new tableA, and somehow have it use the data pages for tableA that are already on disk? Does it change if tableA inherits from table_master? -Disk 4: We're screwed without backups. -Disk 5: I figure that we can just recreate any indexes, right? Can we safely drop indexB if the data pages for the index don't exist on disk (i.e. the tablespace is empty)? Will Postgres do the "right" thing and delete the knowledge of the index from the pg_* tables and then stop? 
Thanks for any help, -Mike ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Having problems with a 25 million row table on 8.1.3
try: select tax_code from warehouse.sec_trans group by tax_code there was a discussion about this on the pgsql-performance list a while back: http://archives.postgresql.org/pgsql-performance/2004-10/msg00053.php -ae On Apr 25, 2006, at 4:10 PM, Tony Caduto wrote: select DISTINCT tax_code from warehouse.sec_trans We let this run for 1/2 hour or so and canceled it. Then I tried select DISTINCT ON (tax_code) tax_code from warehouse.sec_trans ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
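The GROUP BY rewrite returns the same set of values as DISTINCT, which is easy to verify with Python's sqlite3 module (the warehouse schema qualifier is dropped and the sample data is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sec_trans (tax_code TEXT)")
conn.executemany("INSERT INTO sec_trans VALUES (?)",
                 [("A",), ("B",), ("A",), ("C",), ("B",)])

distinct = conn.execute(
    "SELECT DISTINCT tax_code FROM sec_trans ORDER BY tax_code").fetchall()
grouped = conn.execute(
    "SELECT tax_code FROM sec_trans GROUP BY tax_code ORDER BY tax_code").fetchall()

# Both queries yield one row per distinct tax_code.
print(distinct == grouped, grouped)  # True [('A',), ('B',), ('C',)]
```

The reason the rewrite helped on 8.1-era PostgreSQL is that DISTINCT was always implemented with a sort, while GROUP BY could use a hash aggregate, which is much cheaper on a 25-million-row table with few distinct values.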
Re: [GENERAL] Database Selection
Yes, I read this opinion that MySQL is only an SQL interface to the filesystem :-) I plan to use win1250 encoding because this is the native czech windows encoding (I do not understand why M$ reinvents the wheel with new code pages (cp1250) although the code page latin2 was there all the time). BTW czech windows uses cp1250 for windowed apps and cp852 for console apps. So if I write a C++ console app in BCB and printf czech text then I see garbage (do you know another "OS" that by nature uses two code pages for its two app groups? Horrible) ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] selecting column comment
I have comments on tables in my database. e.g. COMMENT ON TABLE my_schema.my_table IS 'Employee Information'; I know you can get comments using \d+ Is there a way to select comments? Reason: I do a lot of perl interfaces, and I find escape commands do not work using DBI. Thanks, Ari
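Comments are stored in the pg_description catalog and can be read with the obj_description() and col_description() functions, which are ordinary SQL and so work fine through DBI. A sketch using the table name from the question:

```sql
-- Comment on the table itself:
SELECT obj_description('my_schema.my_table'::regclass, 'pg_class');

-- Comments on every column of the table, alongside the column names:
SELECT a.attname,
       col_description(a.attrelid, a.attnum) AS comment
FROM pg_attribute a
WHERE a.attrelid = 'my_schema.my_table'::regclass
  AND a.attnum > 0
  AND NOT a.attisdropped;
```

Columns without a comment come back as NULL from col_description.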
Re: [GENERAL] Database Selection
All opinions here sound good, so I decided to use only one db engine (PostgreSQL 8.1) and two databases; now I must tune my database generator parameters (I use PowerDesigner by Sybase) in order to work around pg's identifier case sensitivity (the newsgroups app (php scripts) doesn't use quotes (developed against MySQL), but the PowerBuilder app for company data uses quotes and mixed-case identifiers (developed against Sybase SQL Anywhere)). Thank you all for sharing your experience and your time. Ivo
Re: [GENERAL] Problem with complex outer join expression
On Thu, Apr 27, 2006 at 16:55:22 +1000, Chris Velevitch <[EMAIL PROTECTED]> wrote: > Seems like the problem has something to do with date arithmetic. I > can't seem to add an integer expression to a date constant. It works > if I add an integer constant to a date constant, but that's not what I > want. I tried cast(expression as integer) + date '2006-04-09' but I > get 'ERROR: operator does not exist: integer + date'. What do I need > to do to add a number to a date? I believe that has been fixed in recent versions of postgres. At some point in the past there was only a date + int operator and not an int + date operator. Overloading makes it easy to forget that those are two different operators.
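In other words, on an older server the workaround is simply to put the date on the left-hand side of the +, since only the date + integer operator existed there. A sketch (using the same placeholder "expression" as the original question):

```sql
-- Fails on servers that only define date + integer (not integer + date):
-- SELECT CAST(expression AS integer) + DATE '2006-04-09';

-- Same arithmetic with the operands swapped works on old and new servers:
SELECT DATE '2006-04-09' + CAST(expression AS integer);
```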
Re: [GENERAL] Mailing list setup issue
On Thu, Apr 27, 2006 at 18:44:25 +1000, Chris Velevitch <[EMAIL PROTECTED]> wrote: > > "One day I accidentally sent a private, personal reply out over one of > my own damn lists." > > It's like he accidentally drove down a one way street in the wrong > direction, so he now drives everywhere in reverse, just in case it > happens again. He obviously got surprised by that; in the "Principle > of Least Surprise", it surprises me to find only one person gets my > response. The cost of sending a reply to only one person by mistake is almost always going to be relatively low. The cost of accidentally replying to everyone will often be relatively high, so it makes sense to use a system that trades off probability of replying to all by mistake for probability of replying to just the sender by mistake. But you are correct, that isn't a great reason to determine how lists operate. The real reason is that it breaks the use of the preexisting reply-to header, and even though that header is rarely used, mailing lists shouldn't be replacing it. > This article is 4 years old and he is a unix user who prefers elm as a > mail reader. I'm fine with his choice, but I use 3 mail readers > (gmail, outlook, opera) and none of them have the same features as > elm. > > I think it's an inappropriate choice given the current norms. You're right, he should be using mutt.
Re: [GENERAL] Mailing list setup issue
On Thu, Apr 27, 2006 at 17:22:09 +1000, Chris Velevitch <[EMAIL PROTECTED]> wrote: > On 4/27/06, Marc G. Fournier <[EMAIL PROTECTED]> wrote: > > What do you mean? Do you mean a Reply-To: header added? If so, check out > > the help docs, but there is a 'set replyto' you can add that will have it > > added for your email address, without affecting everyone that doesn't want > > it ... > > Help docs? There are help documents on the mailing list system used > by postgresql.org? The only help documents I can find are at > http://www.postgresql.org/docs/, which only talk about postgresql. > Where do I find these mailing list help documents? I didn't think anyone > could just go and change the postgresql.org mailing list software > configuration? While this should be documented more obviously on the mailing list pages, you can get documentation by sending email to the mailing list server. (It would be nice if the nonchanging information were available as web pages - hint hint.) Send a message to [EMAIL PROTECTED] with a body of: help That will get you started.
Re: [GENERAL] Mailing list setup issue
On Thu, Apr 27, 2006 at 17:34:11 +1000, Chris Velevitch <[EMAIL PROTECTED]> wrote: > On 4/27/06, Guy Rouillier <[EMAIL PROTECTED]> wrote: > > Done on purpose. Use "reply to all". > > Isn't that a bad habit to get into? There's been a lot of press about > people who habitually reply all. Reply all is the normal thing to do. If people don't want the extra copies they can use mail-followup-to headers to indicate that (or on majordomo lists such as used here, you can have the list server not send you copies when your address is copied on replies). By replying to all, it leaves the recipient the option to handle the mail differently when they are cc'd. > Besides, isn't the point of a mailing list that it's a global > public discussion on a topic in which all subscribers can 'listen in' > on the discussion? You never know what useful information can be > obtained from seeing the full discussion. That's why you are supposed to do a reply all unless you have some specific reason not to (e.g. a thank you response that doesn't add to the discussion). Also note that nonmembers can post to the lists (though a moderator needs to approve their messages), so you aren't even guaranteed that a poster is on the list. > > In addition, it's not the norm. I subscribe to lots of mailing lists > and postgresql.org lists are the only ones that I've seen do that. They are probably nontechnical lists. Most lists that I subscribe to don't have broken reply-to headers. (And when they do, I have my mail filter remove them, but that still prevents their normal use.) > > So what's the reasoning behind this choice? It breaks the normal use of reply-to.
Re: [GENERAL] Mailing list setup issue
Chris Velevitch wrote: On 4/27/06, Guy Rouillier <[EMAIL PROTECTED]> wrote: Done on purpose. Use "reply to all". Isn't that a bad habit to get into? There's been a lot of press about people who habitually reply all. Besides, isn't the point of a mailing list that it's a global public discussion on a topic in which all subscribers can 'listen in' on the discussion? You never know what useful information can be obtained from seeing the full discussion. In addition, it's not the norm. I subscribe to lots of mailing lists and postgresql.org lists are the only ones that I've seen do that. So what's the reasoning behind this choice? Look, this really has become a religious issue. There are lists that do it both ways. Let's not waste any more bandwidth on the list with it. Learn to live with the way the list is set up. -- Until later, Geoffrey Any society that would give up a little liberty to gain a little security will deserve neither and lose both. - Benjamin Franklin
FW: [GENERAL] Postgres 8.1 sequences and 'CALL'-syntax
> Connection conn = ... > CallableStatement cs = conn.prepareCall("{? = call > nextval('seq_nm')}"); cs.registerOutParameter(1, Types.BIGINT); > cs.execute(); long nextval = cs.getLong(1); cs.close(); Yeah, it seems to work if I change the definition for calling the sequence (in conjunction with Sequoia through JBoss) to: "{call nextvalue('seq_name')}" [the "{}" pair is important!] and use the "org.postgresql.jdbc3.Jdbc3PooledDataSource" driver class. Many Thx!
Re: [GENERAL] SQL Rule
Bert wrote: This rule is creating first of all the insert and then I can run an update on this insert; so far it's functioning. Your definition is quite good, but what are you doing when you change the table definition? Rewrite all rules? I actually use a data dictionary processor that automatically generates the trigger code and also modifies table structures. Your example would look like this: table example { column { col_A; col_B; } column col_C { chain calc { test { return: @col_A + @col_B; } } } This tool, called "Andromeda", is available for download if you like, but we consider the current version "pre-Alpha". The entire feature set is defined and I use it for all of my projects, but we are still stabilizing and documenting. If you are interested in being a very early adopter, drop me a line off-list. Kenneth Downs
Re: [GENERAL] CREATE TYPE
You can have a look at http://www.postgresql.org/docs/8.1/static/sql-createtype.html Regards, Shoaib On 4/27/06, Rodrigo Sakai <[EMAIL PROTECTED]> wrote: Hi all, I need some help to create types to use in a table. For example, I want to create: CREATE TYPE salary_type ( value NUMERIC(10,2), date DATE ) CREATE TABLE employee ( num_employee INT, name VARCHAR(60), salary salary_type ) I know I need two functions written in C (input_function, output_function). So, can anyone show me an example of these functions for my type salary_type? Thanks!
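Worth noting from the CREATE TYPE page linked above: C input/output functions are only required for new base types. For a record-like type such as salary_type, a composite type can be declared directly with CREATE TYPE ... AS and no C code at all. A sketch using the names from the question (the sample row values are made up for illustration):

```sql
-- Composite type: just a row structure, no C functions required.
CREATE TYPE salary_type AS (
    value NUMERIC(10,2),
    date  DATE
);

CREATE TABLE employee (
    num_employee INT,
    name         VARCHAR(60),
    salary       salary_type
);

-- Composite values are written with ROW(...) and fields are read
-- with the (column).field syntax:
INSERT INTO employee VALUES (1, 'Some Name', ROW(1000.00, DATE '2006-04-27'));
SELECT (salary).value, (salary).date FROM employee;
```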
[GENERAL] CREATE TYPE
Hi all, I need some help to create types to use in a table. For example, I want to create: CREATE TYPE salary_type ( value NUMERIC(10,2), date DATE ) CREATE TABLE employee ( num_employee INT, name VARCHAR(60), salary salary_type ) I know I need two functions written in C (input_function, output_function). So, can anyone show me an example of these functions for my type salary_type? Thanks!
Re: [GENERAL] Disk Failure Scenarios
> -Disk 2: If the transaction log dies, all changes since the last > checkpoint are lost, right? Again, if I set up an empty pg_xlog > directory somewhere else, the DB should run just fine, right? No, because there's no way to know what state the data pages are in. Data may have made it to disk, may not have, partial page write, etc... As far as I understand it, data is only written to the WAL except when the WAL is checkpointing, right? So if your WAL disk crashes and you aren't in the middle of a checkpoint, there is a chance that you would just lose data since the last checkpoint. Am I missing something? Now the real question is: why are you trying to run without raid? I have a single, very fast disk lying around, and I was just wondering what parts of the DB I could "safely" put on it. I was thinking either the WAL or an index. I have essentially 15 10K drives and 1 15K drive, and don't quite know what to do with it. -Mike
Re: [GENERAL] Mailing list setup issue
On 4/27/06, Martijn van Oosterhout wrote: > Go to the archives: > > http://archives.postgresql.org/pgsql-general/ > > Select "To subscribe or unsubscribe" > > Goto "Sign-in" and Sign in > > Goto "Unsubscribe or Change Settings" and tick (I believe) replyto. > > Have a nice day, Thanks for that information. Chris -- Chris Velevitch Manager - Sydney Flash Platform Developers Group www.flashdev.org.au
Re: [GENERAL] triger and transaction
On Thu, Apr 27, 2006 at 01:34:06AM -0700, SunWuKung wrote: > If a transaction inserts rows to two tables will a procedure invoked by > an "after insert on each row" trigger on one table see all the records > inserted in the other table? That's the idea. > In other words - will the transaction run completely before the trigger > is fired? Well, it's the same transaction, but it's after the insert. Have a nice day, -- Martijn van Oosterhout http://svana.org/kleptog/ > From each according to his ability. To each according to his ability to > litigate.
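The visibility rule above can be checked directly: an AFTER ... FOR EACH ROW trigger fires after its own statement, and within one transaction it sees rows already inserted into other tables by earlier statements. A self-contained sketch (all table and function names are mine, not from the thread; assumes PL/pgSQL is installed in the database):

```sql
CREATE TABLE a (id INT);
CREATE TABLE b (id INT, a_rows INT);

-- Record how many rows of "a" are visible at the moment a row lands in "b".
CREATE FUNCTION count_a() RETURNS trigger AS $$
BEGIN
    UPDATE b SET a_rows = (SELECT count(*) FROM a) WHERE id = NEW.id;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER b_after AFTER INSERT ON b
    FOR EACH ROW EXECUTE PROCEDURE count_a();

BEGIN;
INSERT INTO a VALUES (1);     -- earlier statements in the same transaction
INSERT INTO a VALUES (2);
INSERT INTO b VALUES (1, 0);  -- trigger fires here and sees both rows of "a"
COMMIT;

SELECT a_rows FROM b;  -- should show 2
```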
Re: [GENERAL] Mailing list setup issue
On Thu, Apr 27, 2006 at 06:59:19PM +1000, Chris Velevitch wrote: > On 4/27/06, Martijn van Oosterhout wrote: > > Fortunately we have a mailing list manager that allows you to choose > > which way you want it. > > Please explain what you mean by this? Go to the archives: http://archives.postgresql.org/pgsql-general/ Select "To subscribe or unsubscribe" Goto "Sign-in" and Sign in Goto "Unsubscribe or Change Settings" and tick (I believe) replyto. Have a nice day, -- Martijn van Oosterhout http://svana.org/kleptog/
Re: [GENERAL] Mailing list setup issue
On 4/27/06, Martijn van Oosterhout wrote: > Fortunately we have a mailing list manager that allows you to choose > which way you want it. Please explain what you mean by this? Chris -- Chris Velevitch Manager - Sydney Flash Platform Developers Group www.flashdev.org.au
Re: [GENERAL] Postgres on WinXP - app on Cygwin
On 4/27/06, Tomas Lanczos <[EMAIL PROTECTED]> wrote: > I am curious whether it is possible to connect to a PostgreSQL database > installed on WinXP from an application installed on the same box but running > in the Cygwin environment (the app. is the GRASS GIS). There is no reason why this shouldn't work. Are you having a particular issue with it? -- Postgresql & php tutorials http://www.designmagick.com/
[GENERAL] Postgres on WinXP - app on Cygwin
I am curious whether it is possible to connect to a PostgreSQL database installed on WinXP from an application installed on the same box but running in the Cygwin environment (the app. is the GRASS GIS). many thanx Tomas
Re: [GENERAL] Mailing list setup issue
On 4/27/06, Kris Jurka <[EMAIL PROTECTED]> wrote: > > In addition, it's not the norm. I subscribe to lots of mailing lists > > and postgresql.org lists are the only ones that I've seen do that. > > > > So what's the reasoning behind this choice? > > http://www.unicom.com/pw/reply-to-harmful.html The author of that article states: "One day I accidentally sent a private, personal reply out over one of my own damn lists." It's like he accidentally drove down a one way street in the wrong direction, so he now drives everywhere in reverse, just in case it happens again. He obviously got surprised by that; in the "Principle of Least Surprise", it surprises me to find only one person gets my response. This article is 4 years old and he is a unix user who prefers elm as a mail reader. I'm fine with his choice, but I use 3 mail readers (gmail, outlook, opera) and none of them have the same features as elm. I think it's an inappropriate choice given the current norms. Chris -- Chris Velevitch Manager - Sydney Flash Platform Developers Group www.flashdev.org.au
Re: [GENERAL] Postgres 8.1 sequences and 'CALL'-syntax
On Thu, 27 Apr 2006, Schnabl, Sebastian wrote: Hello, I use postgres 8.1 and try to run JBoss over sequoia-ha (http://sequoia.continuent.org/HomePage). But I have a problem with sequences. Sequoia claims to support, for good reasons and db-independence, only "sql-standard(s)". Therefore they DON'T support the postgres-specific "select nextval('seq_name')". Instead they gave me the hint to use the sql-conformant "call nextval('seq_name')". I'm not sure where they layer their stuff on, but the pg jdbc driver will support something like the following: Connection conn = ... CallableStatement cs = conn.prepareCall("{? = call nextval('seq_nm')}"); cs.registerOutParameter(1, Types.BIGINT); cs.execute(); long nextval = cs.getLong(1); cs.close(); Kris Jurka
[GENERAL] triger and transaction
If a transaction inserts rows into two tables, will a procedure invoked by an "after insert on each row" trigger on one table see all the records inserted in the other table? In other words - will the transaction run completely before the trigger is fired? Thanks for the help. Balázs
[GENERAL] Postgres 8.1 sequences and 'CALL'-syntax
Hello, I use postgres 8.1 and try to run JBoss over sequoia-ha (http://sequoia.continuent.org/HomePage). But I have a problem with sequences. Sequoia claims to support, for good reasons and db-independence, only "sql-standard(s)". Therefore they DON'T support the postgres-specific "select nextval('seq_name')". Instead they gave me the hint to use the sql-conformant "call nextval('seq_name')". But unfortunately I always get a syntax error on "call" from the jdbc-driver, or specifically postgres, as the result. How can I use sequences in conjunction with the "call"-syntax?? Thx in advance __ Sebastian Schnabl Qualitype AG Quality Assurance Systems | Bioinformatics Moritzburger Weg 67 | 01109 Dresden fon +49.351.8838 0 | fax +49.351.8838 2809 http://www.qualitype.de __
Re: [GENERAL] Mailing list setup issue
On Thu, Apr 27, 2006 at 05:34:11PM +1000, Chris Velevitch wrote: > In addition, it's not the norm. I subscribe to lots of mailing lists > and postgresql.org lists are the only ones that I've seen do that. > > So what's the reasoning behind this choice? All sorts of reasons. One of the main ones being that list servers are sometimes slow and this delay can harm useful discussions. This way a group can have a useful discussion with quick replies and the list server will catch up sometime later. Fortunately we have a mailing list manager that allows you to choose which way you want it. Have a nice day, -- Martijn van Oosterhout http://svana.org/kleptog/
Re: [GENERAL] Mailing list setup issue
On Thu, 27 Apr 2006, Chris Velevitch wrote: In addition, it's not the norm. I subscribe to lots of mailing lists and postgresql.org lists are the only ones that I've seen do that. So what's the reasoning behind this choice? http://www.unicom.com/pw/reply-to-harmful.html Kris Jurka
Re: [GENERAL] Mailing list setup issue
On 4/27/06, Guy Rouillier <[EMAIL PROTECTED]> wrote: > Done on purpose. Use "reply to all". Isn't that a bad habit to get into? There's been a lot of press about people who habitually reply all. Besides, isn't the point of a mailing list that it's a global public discussion on a topic in which all subscribers can 'listen in' on the discussion? You never know what useful information can be obtained from seeing the full discussion. In addition, it's not the norm. I subscribe to lots of mailing lists and postgresql.org lists are the only ones that I've seen do that. So what's the reasoning behind this choice? Chris -- Chris Velevitch Manager - Sydney Flash Platform Developers Group www.flashdev.org.au
Re: [GENERAL] Mailing list setup issue
On 4/27/06, Marc G. Fournier <[EMAIL PROTECTED]> wrote: > What do you mean? Do you mean a Reply-To: header added? If so, check out > the help docs, but there is a 'set replyto' you can add that will have it > added for your email address, without affecting everyone that doesn't want > it ... Help docs? There are help documents on the mailing list system used by postgresql.org? The only help documents I can find are at http://www.postgresql.org/docs/, which only talk about postgresql. Where do I find these mailing list help documents? I didn't think anyone could just go and change the postgresql.org mailing list software configuration? Chris -- Chris Velevitch Manager - Sydney Flash Platform Developers Group www.flashdev.org.au
Re: [GENERAL] Mailing list setup issue
Chris Velevitch wrote: > It seems that all the postgresql mailing lists don't allow you to > automatically reply to the list, only the sender. I don't get this > with all my other mailing lists. Who do I contact to have this > rectified? Done on purpose. Use "reply to all". -- Guy Rouillier
Re: [GENERAL] Mailing list setup issue
On Thu, 27 Apr 2006, Chris Velevitch wrote: It seems that all the postgresql mailing lists don't allow you to automatically reply to the list, only the sender. I don't get this with all my other mailing lists. Who do I contact to have this rectified? What do you mean? Do you mean a Reply-To: header added? If so, check out the help docs, but there is a 'set replyto' you can add that will have it added for your email address, without affecting everyone that doesn't want it ... Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664