[GENERAL] Anyone know of a Schema Comparer
Hi, We will have anything up to 400 schemas in our db. Each schema is a replica of a “master” schema. Can anyone recommend a tool that can compare our master schema to any given target schema and then generate an update script which can subsequently be executed? We have looked at EMS comparer and a few others, but all seem to compare one entire db with another. Regards Paul Newman
[GENERAL] Creating table in different database
If there are multiple databases, say db1 and db2, and currently we are in db1, can we create a table in db2 (without switching databases)? Is there any query to do this? - Paresh ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] a stored procedure ..with integer as the parameter
On Tuesday, 25.10.2005 10:24 +0530, surabhi.ahuja wrote:
> oops i am sorry, i mean from the client i'll be getting values (which i need to insert into the table) in the form of strings, and i form the insert command as follows:
>
> function(char *a, char *b, char *c)
> {
>     char command[1024];
>     sprintf(command, "select insert('%s','%s','%s')", a, b, c);
>     /* execute the above command */
> }
>
> the above is just the pseudo code. the stored procedure in turn is as follows (pseudocode):
>
> insert(smallint, smallint, varchar(256))
> begin
>     insert into table1 values ($1, $2, $3);
> end

I'm not sure this serves much purpose if it isn't just for experimenting ;) char -> int is simply done by casting (even automatically), so your insert reduces to: INSERT INTO table1 (col_a,col_b,col_c) VALUES (a,b,c); (with or w/o a stored function). Simply sprintf'ing into a string can be a very serious security hole, btw.
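The automatic cast the reply describes can be sketched in plain SQL (table and column names here are hypothetical, loosely modeled on the thread's pseudocode):

```sql
-- Hypothetical table matching the pseudocode's signature
CREATE TABLE table1 (col_a smallint, col_b smallint, col_c varchar(256));

-- String literals are coerced to the column types automatically,
-- so no explicit char -> int conversion is needed:
INSERT INTO table1 (col_a, col_b, col_c) VALUES ('200', '10', 'surabhi');
```

The same coercion happens whether the INSERT is issued directly or from inside a stored function.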
Re: [GENERAL] a stored procedure ..with integer as the parameter
oops i am sorry, i mean from the client i'll be getting values (which i need to insert into the table) in the form of strings, and i form the insert command as follows:

function(char *a, char *b, char *c)
{
    char command[1024];
    sprintf(command, "select insert('%s','%s','%s')", a, b, c);
    /* execute the above command */
}

the above is just the pseudo code. the stored procedure in turn is as follows (pseudocode):

insert(smallint, smallint, varchar(256))
begin
    insert into table1 values ($1, $2, $3);
end

From: Richard Huxton Sent: Mon 10/24/2005 3:04 PM To: surabhi.ahuja Cc: Stephan Szabo; pgsql-general@postgresql.org Subject: Re: [GENERAL] a stored procedure ..with integer as the parameter

surabhi.ahuja wrote:
> from the client I get: "200", "surabhi", "10"
> now i have to make a call to the stored procedure using the above received strings. will not the "200" be converted to a valid integer before saving into the database, and same with the smallint values?

That wasn't the error you reported. You reported a problem with storing "" which not only isn't a number, it isn't even text. Double-quoting indicates it's a name (i.e. an identifier - a table name or similar). So - this suggests some questions:
1. Do you know what data-types the client is providing?
2. What validation/checking do you do on the client-supplied data?
3. How are you building your query - via Perl's DBI system, PHP's PEAR::DB classes, JDBC?
-- Richard Huxton Archonet Ltd
Re: [GENERAL] Map of Postgresql Users (OT)
On Mon, 24 Oct 2005, Claire McLister wrote: > Hi, > > We've developed an automatic email mapping capability from Google Maps > API. > > To try it out, we mapped origins of emails to this group from October > 2, 2005 2 pm (EST) through October 14th, 9 am (EST). > > The result of this map is at: > http://www.zeesource.net/maps/map.do?group=456 > > Would like to hear what you think of it. > How about using Postgres/PostGIS & UMN mapserver for a fully Open Source software & Postgres based map? For a map like the one at http://www.qgis.org/index.php?option=com_wrapper&Itemid=53 perhaps on the Postgres web site? Brent Wood
[GENERAL] Installation Problem
Hi All, I'm trying to install postgresql 8.0.4 on suse 9.0. I can run ./configure, but when I try to run make, the program cannot find a usable C compiler. I tried ./configure CC=/usr/lib/gcc-lib/i586-suse-linux/3.3.3/cc1 but get the error: cannot run C compiled programs. I tried to download and install GCC 3.3.4, but the error message says it needs a C compiler. Any ideas on how I might proceed? thanks, Phil
Re: [GENERAL] newbie question: reading sql commands from script files
In psql, look at \i. Sean - Original Message - From: basel novo To: pgsql-general@postgresql.org Sent: Monday, October 24, 2005 8:28 PM Subject: [GENERAL] newbie question: reading sql commands from script files What is the equivalent of the mysql 'source' command for reading sql commands from ascii script files? Thanks.
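For example (database and file names are made up):

```
mydb=# \i /path/to/script.sql
```

From the shell, psql -f /path/to/script.sql mydb does the same thing.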
[GENERAL] newbie question: reading sql commands from script files
What is the equivalent of the mysql 'source' command for reading sql commands from ascii script files? Thanks.
[GENERAL] pl/pgsql help
Hi all, When I installed postgresql on my computer, I had the pl/pgsql language selected, but still I don't have any language installed. Can someone tell me how to install a language? thanks in advance. i am a beginner so any help would be appreciated. Thanks
Re: [GENERAL] Deleting vs foreign keys
On Tue, Oct 25, 2005 at 12:59:27AM +0300, WireSpot wrote: > I have an application that makes heavy use of foreign keys all over > the tables. This is very nice since the data is very consistent. There > also this "central" table which holds "sites" in it. A site pretty > much is the crux of it all. Deleting a site will very precisely > eliminate all data regarding it, since there's CASCADE on delete's > everywhere. > > The only trouble I'm having is that the original developers apparently > didn't account for large amounts of data. I'm starting to get a LOT of > data in some tables, and nowadays deleting a site will take a > disgusting amount of time (in the range of tens of minutes). Are there indexes on the foreign key columns? That is, given the following example, CREATE TABLE foo (id integer PRIMARY KEY); CREATE TABLE bar (fooid integer NOT NULL REFERENCES foo ON DELETE CASCADE); do you have an index on bar.fooid? Also, do you regularly vacuum and analyze the database? -- Michael Fuhr
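A sketch of the fix being suggested, continuing Michael's example (the index name is arbitrary):

```sql
-- Without this index, every cascaded DELETE on foo has to
-- sequentially scan bar for matching rows:
CREATE INDEX bar_fooid_idx ON bar (fooid);

-- And the routine maintenance he asks about:
VACUUM ANALYZE bar;
```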
Re: [GENERAL] Generic Q about max(id) vs ORDER BY ID DESC LIMIT 1
On Mon, Oct 24, 2005 at 03:50:57PM -0700, [EMAIL PROTECTED] wrote: > I can't see any difference between these two statements: > > SELECT MAX(id) FROM table; > SELECT id FROM table ORDER BY id DESC LIMIT 1; > > If the planner / optimizer / whatever doesn't optimize them to the > same end result, is there a reason not to? Is there a case for > putting it on the TODO list? Already done in 8.1. Here's an excerpt from the Release Notes: Automatically use indexes for MIN() and MAX() (Tom) In previous releases, the only way to use an index for MIN() or MAX() was to rewrite the query as SELECT col FROM tab ORDER BY col LIMIT 1. Index usage now happens automatically. -- Michael Fuhr
Re: [GENERAL] Generic Q about max(id) vs ORDER BY ID DESC LIMIT 1 -- SOLVED
On Mon, Oct 24, 2005 at 07:14:43PM -0400, Alex Turner wrote: > I believe based on semi-recent posts that MIN and MAX are now treated > as special cases in 8.1, and are synonymous with select id order by id > desc limit 1 etc.. Aha! I looked it up in the release notes, you are right. I had never thought they would not be special cased. Thanks. -- ... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._. Felix Finch: scarecrow repairman & rocket surgeon / [EMAIL PROTECTED] GPG = E987 4493 C860 246C 3B1E 6477 7838 76E9 182E 8151 ITAR license #4933 I've found a solution to Fermat's Last Theorem but I see I've run out of room o
Re: [GENERAL] Generic Q about max(id) vs ORDER BY ID DESC LIMIT 1
I believe based on semi-recent posts that MIN and MAX are now treated as special cases in 8.1, and are synonymous with select id order by id desc limit 1 etc.. Alex On 10/24/05, Douglas McNaught <[EMAIL PROTECTED]> wrote: > [EMAIL PROTECTED] writes: > > > However, in the process of investigating this, my boss found something > > which we do not understand. A table with a primary key 'id' takes 200 > > seconds to SELECT MAX(id), but is as close to instantaneous as you'd > > want for SELECT ID ORDER BY ID DESC LIMIT 1. I understand why > > count(*) has to traverse all records, but why does MAX have to? This > > table has about 750,000 rows, rather puny. > > As I understand it, because aggregates in PG are extensible (the query > planner just knows it's calling some function), MAX isn't specially > handled--the planner doesn't know it's equivalent to the other query. > > There has been some talk of special-casing this, but I'm not sure > where it lead--you might check the archives. > > -Doug
[GENERAL] Autogenerated backup of a password protected database
Hi, Since pg_dump doesn't support supplying a password, is there a way that the password can be supplied, like a -w option? Currently writing an application to do backups in java as i need a solution that is cross platform. Any help would be gratefully received.
Re: [GENERAL] Generic Q about max(id) vs ORDER BY ID DESC LIMIT 1
[EMAIL PROTECTED] writes: > However, in the process of investigating this, my boss found something > which we do not understand. A table with a primary key 'id' takes 200 > seconds to SELECT MAX(id), but is as close to instantaneous as you'd > want for SELECT ID ORDER BY ID DESC LIMIT 1. I understand why > count(*) has to traverse all records, but why does MAX have to? This > table has about 750,000 rows, rather puny. As I understand it, because aggregates in PG are extensible (the query planner just knows it's calling some function), MAX isn't specially handled--the planner doesn't know it's equivalent to the other query. There has been some talk of special-casing this, but I'm not sure where it lead--you might check the archives. -Doug
Re: [GENERAL] Generic Q about max(id) vs ORDER BY ID DESC LIMIT 1
Dang, that's a lot of answer! :-) and not what I was hoping for. Max and count both have to look up data records to skip values associated with other transactions. But count, by definition, has to scan every single record from one end of the index to the other, so the index is useless, whereas max will probably scan only a very few records before finding the first valid one. I can't see any difference between these two statements:

SELECT MAX(id) FROM table;
SELECT id FROM table ORDER BY id DESC LIMIT 1;

If the planner / optimizer / whatever doesn't optimize them to the same end result, is there a reason not to? Is there a case for putting it on the TODO list? In case it is any help, here is the EXPLAIN ANALYZE results:

EXPLAIN ANALYZE SELECT id FROM transaction ORDER BY id DESC LIMIT 1;
                                  QUERY PLAN
--------------------------------------------------------------------------------
 Limit  (cost=0.00..1.98 rows=1 width=4) (actual time=22.482..22.485 rows=1 loops=1)
   ->  Index Scan Backward using transaction_pkey on "transaction"  (cost=0.00..1944638.42 rows=984531 width=4) (actual time=22.474..22.474 rows=1 loops=1)
 Total runtime: 22.546 ms
(3 rows)

EXPLAIN ANALYZE SELECT MAX(id) FROM transaction;
                                  QUERY PLAN
--------------------------------------------------------------------------------
 Aggregate  (cost=52745.64..52745.64 rows=1 width=4) (actual time=11500.994..11500.998 rows=1 loops=1)
   ->  Seq Scan on "transaction"  (cost=0.00..50284.31 rows=984531 width=4) (actual time=57.164..8676.015 rows=738952 loops=1)
 Total runtime: 11501.096 ms

And that's a good one - I've seen it take as long as 20 ms...
Re: [GENERAL] Generic Q about max(id) vs ORDER BY ID DESC LIMIT 1
On Mon, 2005-10-24 at 16:57, [EMAIL PROTECTED] wrote: > Having been surprised a few times myself by EXPLAIN showing a > sequential scan instead of using an index, and having seen so many > others surprised by it, I hope I am not asking a similar question. > > We recently upgraded our db servers, both old and new running 8.0, and > one casualty was forgetting to add the nightly VACUUM ANALYZE. > Inserts were down to 7-8 seconds apiece, but are now back to normal > under a second since the tables were vacuumed. > > However, in the process of investigating this, my boss found something > which we do not understand. A table with a primary key 'id' takes 200 > seconds to SELECT MAX(id), but is as close to instantaneous as you'd > want for SELECT ID ORDER BY ID DESC LIMIT 1. I understand why > count(*) has to traverse all records, but why does MAX have to? This > table has about 750,000 rows, rather puny. > > I suspect there is either a FAQ which I missed, or no one can answer > without EXPLAIN printouts. I'm hoping there is some generic answer to > something simple I have overlooked. It may be, but it's a complex enough question, I'll toss out the answer, as I understand it. The problems with aggregates extends from two design decisions made in PostgreSQL. 1: Generic aggregation PostgreSQL uses a generic aggregation system. I.e. there ain't no short cuts in the query planner for one or another of the aggregates. If you make an aggregate function called std_dev() and implement it, it pretty much gets treated the same by the query planner as the ones that are built in. So, to the query planner, max(fieldname) is about the same as sum(fieldname). Now, you and I can tell by looking at them that one of those could be answered pretty quickly with an index, but how's the planner supposed to know? 
2: MVCC PostgreSQL's implementation of an "in store" Multi-version Concurrency Control system means that indexes only tell you where the index entry is in the table, not whether or not it is visible to this particular transaction. Depending on when the tuple was last updated, and when our transactions started, and our transaction isolation level, a given version of a given tuple may or may not be visible to our transaction. So, while PostgreSQL can use an index to find things, it always has to go back to the actual table to find out if the value is visible to the current transaction. Put simply, reads cost more, but writes don't make the performance of the db plummet. Put more simply, everyone gets a medium slow read performance for certain ops, but writes keep streaming right along. There Ain't No Such Thing As A Free Lunch (TANSTAAFL). Notice that you CAN use an index for most aggregates, as long as the where clause you're using is limiting enough. select count(*) from sometable where somefield > 22 can use an index if somefield > 22 is selective enough. But again, if the db is going to read some base percentage of the pages in the main table, it's cheaper to just switch to a sequential scan than to use the index, since it HAS TO READ all the values in the table to see if they're visible. The good news is that generally speaking, everything else in PostgreSQL is quite fast, and parallelism is very good. Hope that's not too involved, and Tom, hope I didn't make too many mistakes there...
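The point about selective WHERE clauses can be sketched like this (table, index name, and numbers are hypothetical):

```sql
CREATE TABLE sometable (somefield integer);
CREATE INDEX sometable_somefield_idx ON sometable (somefield);

-- Can use the index: only the rows with somefield > 22 need their
-- visibility checked against the main table.
SELECT count(*) FROM sometable WHERE somefield > 22;

-- Unconstrained aggregate: every visible row must be examined anyway,
-- so a sequential scan is cheaper than going through the index.
SELECT sum(somefield) FROM sometable;
```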
Re: [GENERAL] PostgreSQL vs mySQL, any performance difference for
On Mon, 2005-10-24 at 17:19, Scott Marlowe wrote: > But PostgreSQL won't mangle your data to make it fit without even a > notice, like MySQL will. Note, in all fairness, MySQL 5.0.12 now does throw a warning when mangling my data. Why the client doesn't display it is beyond me. Why it's not an error is also beyond me. But it does throw a warning, so I need to amend my last statement up there.
Re: [GENERAL] PostgreSQL vs mySQL, any performance difference for
On Mon, 2005-10-24 at 15:37, Jan wrote:
> I need a database capable of storing at least 12 million records per table, mostly used for batch queries. Basically an invoice database. Some tables could potentially store 100 million records.
> mySQL5 contains many of the features of PostgreSQL, and I doubt that I need all these features. Are there any specific benefits in query performance or reliability of going with PostgreSQL?
> Secondary need is a database where 200 users will need to perform lookups, probably using Windows PC's. Most likely only a handful will perform lookups simultaneously.

If you are handling invoices, I could not recommend MySQL, as it is too free and easy with your data to be trusted with them. By design, the default installation will let you do things like: (This is with MySQL 5.0.12 on my workstation, by the way...)

mysql> create table test (i1 int);
Query OK, 0 rows affected (0.07 sec)

mysql> insert into test values (123913284723498723423);
Query OK, 1 row affected, 2 warnings (0.07 sec)

mysql> select * from test;
+------------+
| i1         |
+------------+
| 2147483647 |
+------------+
1 row in set (0.00 sec)

And other fun things. If you're handling money with your database, you should choose anything EXCEPT mysql. It's a good storage db for content management, and problem ticket tracking, and even bug tracking, but when it comes to getting the math right, it's still got a ways to go. Plus, as queries get more complex, it gets slower and slower. I would recommend firebird or PostgreSQL, as better alternatives. 12 million rows, by the way, is nothing for either PostgreSQL or MySQL, if they're properly set up on good, fast hardware. Spend some money on a RAID controller with battery backed cache, lots of memory for your server, and a good backup device, and you'll find almost any halfway decent db engine can handle the load. But PostgreSQL won't mangle your data to make it fit without even a notice, like MySQL will.
Re: [GENERAL] function DECODE and triggers
I'm migrating from oracle to postgresql, and i have these 2 problems:

1. PostgreSQL doesn't support the DECODE function from Oracle, but it can be replicated with CASE WHEN expr THEN expr [...] ELSE expr END. The problem appears when i want to convert this sentence from oracle to postgresql:

select decode(pre.C_GEN,'01','M','02','F','') as GENERO

my conversion is

case when pre.C_GEN = '01' then GENERO='M' else GENERO='F' end

but i don't know if the assignment of GENERO is correct.

2. Writing triggers, i don't know if postgresql supports statements like this:

CREATE OR REPLACE TRIGGER trig
AFTER UPDATE OF column2   <<- Here is the doubt
ON table_product
FOR EACH ROW
BEGIN ... END

In postgresql:

CREATE OR REPLACE TRIGGER trig
AFTER UPDATE OF column2   <<- is this correct?
ON table_product
FOR EACH ROW EXECUTE PROCEDURE trig();

Thanks for your answers.. Rafael
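For what it's worth, a sketch of how the DECODE call is usually translated: CASE returns a value directly, so no assignment to GENERO is involved; the column alias takes care of the name (the FROM clause here is a placeholder, since the real table name isn't given in the thread):

```sql
SELECT CASE pre.C_GEN
         WHEN '01' THEN 'M'
         WHEN '02' THEN 'F'
         ELSE ''
       END AS GENERO
FROM some_table pre;  -- actual table name not given in the thread
```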
Re: [GENERAL] cannot stat `/usr/local/pgsql/data/pg_xlog/00000001000000430000009C': No such file or directory
Ok, I figured out how to get the archiver going again. This is what I have in the postgresql.conf file:

archive_command = 'yblogger %p

wrote in message news:[EMAIL PROTECTED] In my development system the file system where $PGDATA resides filled up.

cp: writing `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430076': No space left on device
could not copy /usr/local/pgsql/data/pg_xlog/000100430076 to archive
2005-10-23 08:46:29 CDT LOG: archive command "yblogger /usr/local/pgsql/data/pg_xlog/000100430076
Re: [GENERAL] PostgreSQL vs mySQL, any performance difference for
Jan wrote:
> I need a database capable of storing at least 12 million records per table, mostly used for batch queries. Basically an invoice database. Some tables could potentially store 100 million records.

It does not sound like your performance requirements are very demanding. Either database should make short work of that, although be careful of MySQL's planner, which is not always as good with more complex queries. Of course I assume here that you're not doing table scans constantly. If the 100 million rows needs to be read for each query, it will depend on your I/O speed, not your database software.

> mySQL5 contains many of the features of PostgreSQL, and I doubt that I need all these features. Are there any specific benefits in query performance or reliability of going with PostgreSQL?
> Secondary need is a database where 200 users will need to perform lookups, probably using Windows PC's. Most likely only a handful will perform lookups simultaneously.

If the database server is on windows, consider that PostgreSQL 8.0 is the first release to support windows, and I might recommend 8.1, which is in late beta stages. If it's only the users that are on windows, that doesn't make much difference, the client libraries for either database are well-established on windows. Generally speaking, many reports indicate that PostgreSQL performs better for simultaneous reading and writing by many users. With just a handful doing read-only, it probably doesn't make much difference. Honestly, I think either database is capable. Personally, I trust PostgreSQL more on several fronts: (1) It does what you think it's doing. (2) It notices and prevents errors from becoming problems (i.e., it will not allow bad data to be inserted). (3) I trust its reliability and backup systems more. (4) Not nearly as many traps/gotchas. Regards, Jeff Davis
[GENERAL] Generic Q about max(id) vs ORDER BY ID DESC LIMIT 1
Having been surprised a few times myself by EXPLAIN showing a sequential scan instead of using an index, and having seen so many others surprised by it, I hope I am not asking a similar question. We recently upgraded our db servers, both old and new running 8.0, and one casualty was forgetting to add the nightly VACUUM ANALYZE. Inserts were down to 7-8 seconds apiece, but are now back to normal under a second since the tables were vacuumed. However, in the process of investigating this, my boss found something which we do not understand. A table with a primary key 'id' takes 200 seconds to SELECT MAX(id), but is as close to instantaneous as you'd want for SELECT ID ORDER BY ID DESC LIMIT 1. I understand why count(*) has to traverse all records, but why does MAX have to? This table has about 750,000 rows, rather puny. I suspect there is either a FAQ which I missed, or no one can answer without EXPLAIN printouts. I'm hoping there is some generic answer to something simple I have overlooked.
[GENERAL] Deleting vs foreign keys
I have an application that makes heavy use of foreign keys all over the tables. This is very nice since the data is very consistent. There's also this "central" table which holds "sites" in it. A site pretty much is the crux of it all. Deleting a site will very precisely eliminate all data regarding it, since there's CASCADE on deletes everywhere. The only trouble I'm having is that the original developers apparently didn't account for large amounts of data. I'm starting to get a LOT of data in some tables, and nowadays deleting a site will take a disgusting amount of time (in the range of tens of minutes). It's impossible to do it via Web, so I have to issue the central delete from the shell and leave it running until it's done. Is there any way I can make things better? I could queue site drops and have a cronjob pick them up instead of deleting "live" via Web, but that's just silly patchwork IMHO.
Re: [GENERAL] PostgreSQL vs mySQL, any performance difference for
Alex Turner wrote: I would ask you to ask the reverse question, why would you use MySQL when it still doesn't contain all the features of postgresql, has a bad query optimizer, a poor track record on scalability and will silenty truncate/accept invalid data, invalidating ACID, not only that you have to pay for it. I didn't see anything in his question indicating that he'd need a commercial license. It sounded to me like he could freely use either database, and the question is, which one? Regards, Jeff Davis
Re: [GENERAL] pg_dump, MVCC and consistency
Martijn van Oosterhout wrote: On Mon, Oct 24, 2005 at 11:25:00AM -0600, Michael Fuhr wrote: PostgreSQL 8.1 makes checks to avoid data loss due to transaction ID wraparound, but there's one situation I'm not sure how it handles: when a transaction is so long-lived that it would appear to be in the future of newly-created transactions due to wraparound. I'd have to dig into the source code to find out if that's possible, and if so, what happens. Maybe one of the developers will comment. To avoid this you need to do a VACUUM FULL over the database at least once every two billion transactions (not statements or tuples, transactions). To that end, the server begins complaining after one billion. I've never seen this in practice. Perhaps you could calculate how long it would take to do that many transactions. Most systems will never see it... Hope this helps, Docs section 21.1.3 (in 8.0 docs anyway, it seems to have moved to 22.1.3 for 8.1 docs) makes no mention of VACUUM FULL, only VACUUM is required. VACUUM FULL is much more expensive, because it actually moves tuples on-disk rather than just marking them. So it's fortunate that a VACUUM FULL is not required. As far as I can tell, VACUUM FULL is never required for normal database activity, even a 24/7/365.25 system. I think Michael Fuhr was asking something a little different. The same section of the docs appears to clear it up a little: "...the normal XID space is circular with no endpoint. Therefore, once a row version has been created with a particular normal XID, the row version will appear to be "in the past" for the next two billion transactions,..." So it seems that, in order for the wraparound to be a problem, the transaction would have to last longer than 2 billion other transactions. And if a transaction did last that long, according to the 8.1 docs (22.1.3): "...the system will shut down and refuse to execute any new transactions once there are fewer than 1 million transactions left until wraparound..." 
The word that stands out in that sentence to me is "new". So, presumably, it would continue that one long transaction indefinitely, refusing new transactions. That isn't great, but it's hard to imagine leaving a transaction open for 2 billion other transactions. And at least it doesn't cause data loss. Is there any easy way to see what transactions are currently open, how old the XID is, and what PID is executing it? And what about a transaction left open for 2PC? Does a transaction get a new XID if it's PREPAREd now and COMMIT PREPAREd in a year? Regards, Jeff Davis
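A partial answer to the "how old is the XID" part can be sketched with the catalogs of the 8.x era (column names changed in later releases):

```sql
-- Age, in transactions, of the oldest XID each database still
-- depends on; values approaching ~2 billion are the danger zone:
SELECT datname, age(datfrozenxid) FROM pg_database;

-- Currently running backends and their queries (8.x column names;
-- this shows the query start time, not the transaction start time):
SELECT procpid, query_start, current_query FROM pg_stat_activity;
```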
Re: [GENERAL] PostgreSQL vs mySQL, any performance difference for large queries?
I would ask you to ask the reverse question: why would you use MySQL when it still doesn't contain all the features of postgresql, has a bad query optimizer, a poor track record on scalability, and will silently truncate/accept invalid data, invalidating ACID; not only that, you have to pay for it. Why would you use MySQL? Alex

On 24 Oct 2005 13:37:23 -0700, Jan <[EMAIL PROTECTED]> wrote:
> I need a database capable of storing at least 12 million records per table, mostly used for batch queries. Basically an invoice database. Some tables could potentially store 100 million records.
> mySQL5 contains many of the features of PostgreSQL, and I doubt that I need all these features. Are there any specific benefits in query performance or reliability of going with PostgreSQL?
> Secondary need is a database where 200 users will need to perform lookups, probably using Windows PC's. Most likely only a handful will perform lookups simultaneously.
Re: [GENERAL] a stored procedure ..with integer as the parameter
On 24 Oct 2005 22:00:55 +0200, Harald Fuchs <[EMAIL PROTECTED]> wrote:
> In article <[EMAIL PROTECTED]>, Alex Turner <[EMAIL PROTECTED]> writes:
>> As sort of a side discussion - I have postulated that quoting all incoming numbers as strings would be an effective defense against SQL injection style attacks, as magic quotes would destroy any end-quote type syntax:
>> in_value=1
>> select * from table where my_id='$in_value';
>> as an example for PHP - Postgres will silently perform an atoi on the string to make it a number, but it would prevent:
>> in_value=1; delete * from user;
>> select * from table where my_id=$in_value
>> Am I just smoking crack here, or does this approach have some merit?
> The former :-) The correct defense against SQL injection is proper escaping, not quoting. How about $in_value = '1''; delete from user'?

This would be escaped by magic_quotes resulting in: select * from table where my_id='\'1\'\'; delete from user \'', which would result in an error, and a failed attack, would it not, which would be a good thing? I tried to create this scenario, but in a transactional environment, it executes but blew the transaction, so the data never committed, as the select query generated an error with the insert on the end because the return type was no longer a result set, but a status. In PyGresql: AttributeError: 'long' object has no attribute 'ntuples' So maybe there isn't an easy way to create a SQL injection attack in a transactional environment that will actually work? Alex
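One server-side way to keep data out of the SQL text entirely, rather than relying on escaping or magic_quotes, is a prepared statement (table name hypothetical; PREPARE has been available since PostgreSQL 7.3, and client libraries expose the same idea through parameter placeholders):

```sql
PREPARE get_row (integer) AS
  SELECT * FROM mytable WHERE my_id = $1;

-- The parameter is passed as a value, never spliced into the
-- statement text, so it cannot terminate the query or inject one:
EXECUTE get_row (1);
DEALLOCATE get_row;
```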
Re: [GENERAL] is there a function which elminates spaces?
On Mon, Oct 24, 2005 at 03:50:33PM -0500, Jeffrey Melloy wrote:
> >> In a WHERE clause, I want to compare strings ignoring the spaces inside
> >> them. Is there a function to do that? I do not find it in the
> >> documentation.
> >>
> >> SELECT ... WHERE (ignore_spaces(table.phone_number) ~* ignore_spaces(?));
> >>
> >> would be fine but ignore_spaces() does not exist!
> >> Maybe there is a solution based on regular expressions, but I do not
> >> see it.
>
> I don't see a way to do it through regular expressions, either, though
> in the phone number case, you could split the phone number into
> different columns based on area code, whatever the middle group is
> called, and whatever the last group is called. Or you could remove the
> spaces before inserting and comparing, or write a function with pl/perl
> or something. With perl's greater regular expression control, it would
> probably be a one-liner.

I'd suggest replace().

test=# select replace('Long spacey string',' ','');
     replace
------------------
 Longspaceystring
(1 row)

-- 
Martijn van Oosterhout   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.
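The replace()-on-both-sides idea gives exactly the ignore_spaces() comparison the original poster asked for. A small sketch of that query shape, using Python's stdlib sqlite3 (which happens to share the replace() function) with an invented customer table:

```python
import sqlite3

# Hypothetical table, for illustration; SQLite shares the replace()
# function shown in the psql example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (phone_number TEXT)")
conn.execute("INSERT INTO customer VALUES ('01 23 45 67 89')")

# Strip spaces from both sides of the comparison - in effect the
# ignore_spaces() the original poster was looking for.
rows = conn.execute(
    "SELECT phone_number FROM customer "
    "WHERE replace(phone_number, ' ', '') = replace(?, ' ', '')",
    ("0123456789",),
).fetchall()
```

The same WHERE clause works verbatim in Postgres; note that wrapping the column in replace() defeats a plain index on it, so for large tables an expression index on replace(phone_number, ' ', '') may be worth considering.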
[GENERAL] PostgreSQL vs mySQL, any performance difference for large queries?
I need a database capable of storing at least 12 million records per table, mostly used for batch queries. Basically an invoice database. Some tables could potentially store 100 million records.

MySQL 5 contains many of the features of PostgreSQL, and I doubt that I need all these features. Are there any specific benefits in query performance or reliability of going with PostgreSQL?

Secondary need is a database where 200 users will need to perform lookups, probably using Windows PCs. Most likely only a handful will perform lookups simultaneously.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Select all invalid e-mail addresses
On Mon, Oct 24, 2005 at 09:02:26PM +0300, Andrus wrote:
> I applied Michael hint about dollar quoting to this and tried
>
> create temp table customer ( email char(60));
> insert into customer values( '[EMAIL PROTECTED]');
> SELECT email FROM customer WHERE email !~*
> $$
> [EMAIL PROTECTED]@(?:[EMAIL PROTECTED])?[a-z0-9_-]+\.(?:a[defgilmnoqrstuwz]|b[abdefghijmnorstvwyz]|c[acdfghiklmnoruvxyz]|d[ejkmoz]|e[ceghrst]|f[ijkmorx]|g[abdefhilmnpqrstuwy]|h[kmnrtu]|i[delnoqrst]|j[mop]|k[eghimnprwyz]|l[abcikrstuvy]|m[acdghklmnopqrstuvwxyz]|n[acefgilopruz]|om|p[aefghklmnrtwy]|qa|r[eouw]|s[abcdeghijklmnortvyz]|t[cdfghjkmnoprtvwz]|u[agkmsyz]|v[aceginu]|w[fs]|y[etu]|z[amw]|edu|com|net|org|gov|mil|info|biz|coop|museum|aero|name|pro|mobi|arpa)$
> $$
>
> but this classifies e-mail address [EMAIL PROTECTED] as invalid (select
> returns it). The same result are without dollar quoting, using your
> original select.

There are at least two problems:

1. Since you're storing the email address as char(60), in some cases it'll be padded with spaces up to 60 characters. This appears to be one of those cases:

test=> SELECT 'foo'::char(60) ~ '^foo$';
 ?column?
----------
 f
(1 row)

test=> SELECT 'foo'::char(60) ~ '^foo {57}$';
 ?column?
----------
 t
(1 row)

2. Everything in the quoted string is part of the regular expression, including the embedded newlines immediately after the open quote and before the close quote.

test=> SELECT 'foo'::text ~ $$
test$> ^foo$
test$> $$;
 ?column?
----------
 f
(1 row)

test=> SELECT 'foo'::text ~ $_$^foo$$_$;
 ?column?
----------
 t
(1 row)

Note the need to quote with something other than $$ ($_$ in this case) because of the $ that's part of the regular expression.
Otherwise you'd get this:

test=> SELECT 'foo'::text ~ $$^foo$$$;
ERROR:  syntax error at or near "$" at character 30
LINE 1: SELECT 'foo'::text ~ $$^foo$$$;
                             ^

Suggestions: use text or varchar for the email address, don't embed newlines in the regular expression, and if you use dollar quotes and the regular expression ends with a dollar sign then quote with a character sequence other than $$.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
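Both pitfalls Michael describes are easy to reproduce outside psql. A Python sketch of the same two failure modes (Python's re differs from Postgres's POSIX-style regexes in places, but the padding and embedded-newline behaviour carries over):

```python
import re

# Pitfall 1: a char(60) value comes back padded with spaces, so an
# anchored pattern that matched the unpadded text no longer matches.
padded = "foo".ljust(60)                         # "foo" + 57 trailing spaces
match_plain = re.search(r"^foo$", padded)        # fails: '$' hits the padding
match_padded = re.search(r"^foo {57}$", padded)  # matches once padding is accounted for

# Pitfall 2: newlines embedded in the pattern string are part of the
# pattern, so the anchors can never line up with the value under test.
match_newlines = re.search("\n^foo$\n", "foo")   # fails
```

The fixes are the same as in the post: store the value in a type that doesn't pad, and keep stray whitespace out of the pattern itself.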
Re: [GENERAL] Why is this function wrong
Johan Wehtje wrote:
> This is probably obvious but I am not seeing what is going wrong with
> this function definition:
>
> CREATE OR REPLACE FUNCTION "public"."cproc_hli" (bgtemp NUMERIC,
> humidity NUMERIC, wspeed NUMERIC) RETURNS NUMERIC AS
> IF ($1 < 25)
> THEN
> SELECT (10.66 * (0.28 * $2)+(1.3 * $1) - $3)
> ELSE
> SELECT (8.62 * (0.38 * $2) + (1.55 * $1) - (0.5 * $3) + (exp(- $3 + 2.4)))
> END IF
> LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY INVOKER;
>
> ERROR: syntax error at or near "IF" at character 119
> LINE 2: IF ($1 < 25)
>
> I have tried with dollar quoting around the function body, changing the
> language to sql and using CASE ... WHEN instead of IF, tried brackets
> and no brackets around the Boolean expression, and a number of other
> variations, as well as rereading the documentation.
>
> I have also tried not having aliases/names for the arguments.
>
> In every case I always get the syntax error directly after "AS".
>
> Can somebody point out what is probably obvious.
>
> Cheers
> Johan Wehtje

Don't you need BEGIN and END lines in there, e.g.:

CREATE OR REPLACE FUNCTION "public"."cproc_hli" (bgtemp NUMERIC, humidity NUMERIC, wspeed NUMERIC) RETURNS NUMERIC AS
BEGIN
IF ($1 < 25) THEN
SELECT (10.66 * (0.28 * $2)+(1.3 * $1) - $3) ;
ELSE
SELECT (8.62 * (0.38 * $2) + (1.55 * $1) - (0.5 * $3) + (exp(- $3 + 2.4))) ;
END IF ;
END ;
LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY INVOKER;

?

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Recovery after server crash
In article <[EMAIL PROTECTED]>, Chris <[EMAIL PROTECTED]> wrote: >How can I get my data from this 7.4 data directory into my 8.0.3 fresh >install? Bring up a copy of 7.4 long enough to run pg_dumpall. Then load the dump into 8.0.3. -- http://www.spinics.net/yosemite/ ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Why is this function wrong
slight modification to the previous post, sorry; see http://www.postgresql.org/docs/8.0/interactive/plpgsql-structure.html for the real docs.

CREATE OR REPLACE FUNCTION "public"."cproc_hli" (bgtemp NUMERIC, humidity NUMERIC, wspeed NUMERIC) RETURNS NUMERIC AS
$$
BEGIN
IF ($1 < 25) THEN
SELECT (10.66 * (0.28 * $2)+(1.3 * $1) - $3) ;
ELSE
SELECT (8.62 * (0.38 * $2) + (1.55 * $1) - (0.5 * $3) + (exp(- $3 + 2.4))) ;
END IF ;
END ;
$$ LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY INVOKER;

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
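As a sanity check on the branch logic (not on the formula itself, whose coefficients come from the original poster and are taken on faith here), the same computation can be written directly; a hedged Python sketch:

```python
from math import exp

def cproc_hli(bgtemp, humidity, wspeed):
    """Direct port of the two branches in the plpgsql body above; the
    coefficients are copied verbatim and not independently verified."""
    if bgtemp < 25:
        return 10.66 * (0.28 * humidity) + (1.3 * bgtemp) - wspeed
    return (8.62 * (0.38 * humidity) + (1.55 * bgtemp)
            - (0.5 * wspeed) + exp(-wspeed + 2.4))
```

Note that the plpgsql version as posted still only SELECTs the expression without handing it back; it needs a RETURN (or SELECT INTO plus RETURN) for the caller to receive the value, as the follow-up in this thread points out.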
Re: [GENERAL] Collate problem
talenat wrote:
> Hi,
> I have a Mandrake 10.1 server running Postgres 8.1beta3 with the hr_HR
> locale. The client is XP with the WIN1250 code page. When I use ORDER BY
> in a query, the collation comes out in the wrong order. I have tried with
> client encodings WIN1250 and LATIN2, but with no luck. If I use XP as the
> server with Postgres 8.1beta3 then everything is as expected. The
> database is filled with data from a script made on XP with the WIN1250
> code page. It looks like the conversion from Latin2 to Win1250 is not
> correct and I don't know what to do. Any hint is welcome.
>
> Here is an update to the above. If I execute
>   SELECT * FROM atable ORDER BY afield
> then the collation is wrong. If I execute
>   SELECT * FROM atable WHERE afield LIKE 'Š%' ORDER BY afield
> then the collation is OK for the returned subset. afield is an indexed
> field. It seems to me that there is a bug here, but I need to be sure I
> am not missing something.

Hi,

Here is the only solution I could find, so I hope it will help others. Something is definitely wrong with the automatic translation between Linux and Windows. For the Croatian character set the default encoding on Linux is Latin2 and on Windows it is Win1250. As I understand it, that translation should be automatic inside the PostgreSQL server, but the collation is wrong. The solution is to use HR_UTF8 (Unicode) on Linux; initdb then creates the templates as Unicode. When you create your database, use UNICODE as the encoding. After this you can use WIN1250 as the client-side encoding (if the client is on a Windows machine, since that is the code page for Croatia) and everything is as expected. I don't know if there is a performance penalty, but it works.

BR

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
Re: [GENERAL] is there a function which elminates spaces?
codeWarrior wrote:
>> SELECT trim(trailing ' ' from city_name) AS city_name FROM sys_cities;
>>
>> You might consider reading the manual as there are a multitude of string
>> manipulation functions built into PostgreSQL.

You didn't answer his question. If you're going to rag on someone for not reading the manual, at least you could read what he's asking.

> "Pierre Couderc" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
>> In a WHERE clause, I want to compare strings ignoring the spaces inside
>> them. Is there a function to do that? I do not find it in the
>> documentation.
>>
>> SELECT ... WHERE (ignore_spaces(table.phone_number) ~* ignore_spaces(?));
>>
>> would be fine but ignore_spaces() does not exist!
>> Maybe there is a solution based on regular expressions, but I do not
>> see it.

I don't see a way to do it through regular expressions, either, though in the phone number case, you could split the phone number into different columns based on area code, whatever the middle group is called, and whatever the last group is called. Or you could remove the spaces before inserting and comparing, or write a function with pl/perl or something. With perl's greater regular expression control, it would probably be a one-liner.

Jeff

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
Re: [GENERAL] Why is this function wrong
Thanks for your help. I adjusted the function after a better reading of the PL/pgSQL chapter - and feel I posted in haste. Hopefully, though, it may prove useful to someone else.

Cheers
Johan Wehtje

Thomas F. O'Connell wrote:
> On Oct 24, 2005, at 11:27 AM, Johan Wehtje wrote:
>> This is probably obvious but I am not seeing what is going wrong with
>> this function definition:
>>
>> CREATE OR REPLACE FUNCTION "public"."cproc_hli" (bgtemp NUMERIC,
>> humidity NUMERIC, wspeed NUMERIC) RETURNS NUMERIC AS
>> IF ($1 < 25)
>> THEN
>> SELECT (10.66 * (0.28 * $2)+(1.3 * $1) - $3)
>> ELSE
>> SELECT (8.62 * (0.38 * $2) + (1.55 * $1) - (0.5 * $3) + (exp(- $3 + 2.4)))
>> END IF
>> LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY INVOKER;
>>
>> ERROR: syntax error at or near "IF" at character 119
>> LINE 2: IF ($1 < 25)
>>
>> [...]
>>
>> Can somebody point out what is probably obvious.
>>
>> Cheers
>> Johan Wehtje
>
> This version will compile, but it won't do anything useful:
>
> CREATE OR REPLACE FUNCTION "public"."cproc_hli" (bgtemp NUMERIC, humidity NUMERIC, wspeed NUMERIC) RETURNS NUMERIC AS $$
> BEGIN
> IF (bgtemp < 25) THEN
> PERFORM (10.66 * (0.28 * humidity)+(1.3 * bgtemp) - wspeed);
> ELSE
> PERFORM (8.62 * (0.38 * humidity) + (1.55 * bgtemp) - (0.5 * wspeed) + (exp(- wspeed + 2.4)));
> END IF;
> END;
> $$ LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY INVOKER;
>
> You were missing some basic syntactic requirements, including a
> BEGIN/END block, as well as several semicolons. You will also need to
> SELECT into a variable if you want anything useful to happen with the
> results. I've replaced your SELECTs with PERFORMs to recreate your
> function as originally written.
>
> I recommend a closer reading of the chapter on PL/pgSQL:
> http://www.postgresql.org/docs/8.0/static/plpgsql.html
>
> --
> Thomas F. O'Connell
> Co-Founder, Information Architect
> Sitening, LLC
>
> Open Source Solutions. Optimized Web Development.
>
> http://www.sitening.com/
> 110 30th Avenue North, Suite 6
> Nashville, TN 37203-6320
> 615-469-5150
> 615-469-5151 (fax)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Map of Postgresql Users (OT)
I'm not sure if you show up in the wrong place, but you do show up in Mountain View, California. (To see that, type 'hooger' next to the 'Name' filter above the map and click on 'Filter'.)

Claire

On Oct 24, 2005, at 12:15 PM, Arthur Hoogervorst wrote:
> Hi,
>
> Pretty neat: I'm glad I'm not on that map!
>
> Regards,
> Arthur
>
> On 10/24/05, Claire McLister <[EMAIL PROTECTED]> wrote:
>> Hi,
>>
>> We've developed an automatic email mapping capability from Google Maps
>> API.
>>
>> To try it out, we mapped origins of emails to this group from October
>> 2, 2005 2 pm (EST) through October 14th, 9 am (EST).
>>
>> The result of this map is at:
>> http://www.zeesource.net/maps/map.do?group=456
>>
>> Would like to hear what you think of it.
>>
>> Best wishes
>>
>> Claire
>>
>> --
>> Claire McLister [EMAIL PROTECTED]
>> 1684 Nightingale Avenue Suite 201
>> Sunnyvale, CA 94087 408-733-2737(fax)
>>
>> http://www.zeesource.net

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] Unsubscribe
http://archives.postgresql.org/pgsql-general/ - Onyx --- Onyx Mueller Software Engineer i-cubed : information integration & imaging LLC 201 Linden Street : Third Floor Fort Collins, CO 80524 970-482-4400 voice 970-482-4499 fax www.i3.com > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On Behalf Of Angshu Kar > Sent: Monday, October 24, 2005 2:06 PM > To: pgsql-general@postgresql.org > Subject: [GENERAL] Unsubscribe > > Hi, > > Could anyone let me know how to unsubscribe? > > Thanks, > Angshu > > > On 10/24/05, William Yu <[EMAIL PROTECTED]> wrote: > > [EMAIL PROTECTED] wrote: > > VFP will run in WINE up to VFP version 8. But the > people I talk to that > > are doing this are using VFP's native tables. Does > anyone know if it's > > possible to run VFP8 in WINE and connect to > Postgresql on a Linux > > server? I think part of the question here is how > would WINE emulate the > > ODBC to get to the Postgresql server. > > I can confirm FoxPro running under Wine/Linux can > access Postgres via > ODBC and performs nearly comparable to FoxPro under > Windows. I got 90%+ > performance on some of my data processing programs. > Where you do get > really slow is accessing DBFs over the network because NFS is so > inefficient. > > It doesn't work with UnixODBC so you need to install > the Postgres > Windows ODBC driver under WINE. Double-click on the > ODBC installer, then > type: wine odbcad32.exe to launch the config file. > > ---(end of > broadcast)--- > TIP 6: explain analyze is your friend > > > > ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] a stored procedure ..with integer as the parameter
Hi,

Well, as noted in another thread, many databases will not allow it (quoting an integer), so if you ever have to port it to another db, you will be out of luck. Also, the string you mentioned is also not an integer. When I tried your example with the embedded delete statement (e.g. select testfunc1('4;delete from test3 where numval = 3')), I got an error from the function:

ERROR: invalid input syntax for integer: "4;delete from test3 where numval = 3"

Which, of course, was the original complaint about the empty string, too. I couldn't pass Harald's example with extra quotes in as a parameter to the function, either. It still says it isn't an integer. Which is true, of course. My test procedure was a simple plpgsql function, with an integer input parameter, returning void. Of course, I don't know exactly how the actual function really works, but I imagine the same kind of type check is going to take place, at least for plpgsql.

Personally, my applications tend to validate any values that are input by a user prior to passing them to the database. So, if a number is to be entered, I check that the input is numeric before I pass it to the database. Of course, if anyone can call a function directly (like from psql, as opposed to it being known only to a specialized application), then they probably have enough access to be able to do most anything they want.
Susan

Alex Turner <[EMAIL PROTECTED]> wrote on 10/24/2005 11:44 AM (To: [EMAIL PROTECTED]; Cc: "surabhi.ahuja" <[EMAIL PROTECTED]>, pgsql-general@postgresql.org):
> As sort of a side discussion - I have postulated that quoting all incoming
> numbers as strings would be an effective defense against SQL injection
> style attacks, as magic quotes would destroy any end-quote type syntax:
>
> in_value=1
> select * from table where my_id='$in_value';
>
> as an example for PHP - Postgres will silently perform an atoi on the
> string to make it a number, but it would prevent:
>
> in_value=1; delete * from user;
> select * from table where my_id=$in_value
>
> Am I just smoking crack here, or does this approach have some merit?
>
> Alex

---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
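Susan's validate-before-passing approach can be as small as a cast attempt on the client side. A Python sketch (the helper name is invented for illustration), mirroring the server's own integer check:

```python
def as_int_param(raw):
    """Hypothetical helper: accept only plain integers before the value
    gets anywhere near the SQL layer, mirroring the server's own
    'invalid input syntax for integer' rejection."""
    try:
        return int(raw.strip())
    except (ValueError, AttributeError):
        raise ValueError("not an integer: %r" % (raw,))
```

A payload like "4;delete from test3 where numval = 3" fails this check immediately, while " 200 " passes and can then be bound to the integer parameter of the stored procedure.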
Re: [GENERAL] a stored procedure ..with integer as the parameter
In article <[EMAIL PROTECTED]>, Alex Turner <[EMAIL PROTECTED]> writes:

> As sort of a side discussion - I have postulated that quoting all incoming
> numbers as strings would be an effective defense against SQL injection
> style attacks, as magic quotes would destroy any end-quote type syntax:
> in_value=1
> select * from table where my_id='$in_value';
> as an example for PHP - Postgres will silently perform an atoi on the
> string to make it a number, but it would prevent:
> in_value=1; delete * from user;
> select * from table where my_id=$in_value
> Am I just smoking crack here, or does this approach have some merit?

The former :-)

The correct defense against SQL injection is proper escaping, not quoting. How about $in_value = '1''; delete * from user'?

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
[GENERAL] Unsubscribe
Hi,

Could anyone let me know how to unsubscribe?

Thanks,
Angshu

On 10/24/05, William Yu <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
>> VFP will run in WINE up to VFP version 8. But the people I talk to that
>> are doing this are using VFP's native tables. Does anyone know if it's
>> possible to run VFP8 in WINE and connect to Postgresql on a Linux
>> server? I think part of the question here is how would WINE emulate the
>> ODBC to get to the Postgresql server.
>
> I can confirm FoxPro running under Wine/Linux can access Postgres via
> ODBC and performs nearly comparable to FoxPro under Windows. I got 90%+
> performance on some of my data processing programs. Where you do get
> really slow is accessing DBFs over the network because NFS is so
> inefficient.
>
> It doesn't work with UnixODBC so you need to install the Postgres
> Windows ODBC driver under WINE. Double-click on the ODBC installer, then
> type: wine odbcad32.exe to launch the config file.

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] FoxPro in WINE to Postgresql on LINUX?
[EMAIL PROTECTED] wrote: VFP will run in WINE up to VFP version 8. But the people I talk to that are doing this are using VFP's native tables. Does anyone know if it's possible to run VFP8 in WINE and connect to Postgresql on a Linux server? I think part of the question here is how would WINE emulate the ODBC to get to the Postgresql server. I can confirm FoxPro running under Wine/Linux can access Postgres via ODBC and performs nearly comparable to FoxPro under Windows. I got 90%+ performance on some of my data processing programs. Where you do get really slow is accessing DBFs over the network because NFS is so inefficient. It doesn't work with UnixODBC so you need to install the Postgres Windows ODBC driver under WINE. Double-click on the ODBC installer, then type: wine odbcad32.exe to launch the config file. ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] FoxPro in WINE to Postgresql on LINUX?
http://www.paulmcnett.com/articles.php has a white paper,

White Paper: Running Visual FoxPro on Linux Using Wine (October 2003)

which describes using ODBC drivers from Linux.

Andrus.

<[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> I know that there are some Visual FoxPro developers on this list who use
> PostgreSQL as their back-end. My question to them is...
>
> VFP will run in WINE up to VFP version 8. But the people I talk to that
> are doing this are using VFP's native tables. Does anyone know if it's
> possible to run VFP8 in WINE and connect to Postgresql on a Linux server?
> I think part of the question here is how would WINE emulate the ODBC to
> get to the Postgresql server.
>
> The reason for this strange question is that our corporation has
> standardized all custom development to be done in VFP. We need a stable
> back-end and some of us are now using Postgresql as our database server.
> It would be really nice if we could run our client apps on Linux as well.
>
> Thanks,
>
> Margaret Gillon, IS Dept., Chromalloy Los Angeles, ext. 297
Re: [GENERAL] pg_dump with low priority?
On 10/24/05, CSN <[EMAIL PROTECTED]> wrote: > > nice comes to mind: > > nice pg_dump ... > as mentioned earlier...nice isn't going to do anything for I/O. PITR (point in time recovery) would be, in my opinion, the best solution to this problem. aaron.glenn ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] pg_dump with low priority?
nice comes to mind:

nice pg_dump ...

On Sat, 2005-10-22 at 07:39, Bryan Field-Elliot wrote:
> We have a huge database which must be backed up every day with
> pg_dump. The problem is, it takes around half an hour to produce the
> dump file, and all other processes on the same box are starved for
> cycles (presumably due to I/O) during the dump. It's not just an
> inconvenience, it's now evolved into a serious problem that needs to
> be addressed.
>
> Is there any mechanism for running pg_dump with a lower priority? I
> don't mind if the backup takes two hours instead of half an hour, as
> long as other processes were getting their fair share of cycles.

---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
Re: [GENERAL] Map of Postgresql Users (OT)
Hi, Pretty neat: I'm glad I'm not on that map! Regards, Arthur On 10/24/05, Claire McLister <[EMAIL PROTECTED]> wrote: > Hi, > > We've developed an automatic email mapping capability from Google Maps > API. > > To try it out, we mapped origins of emails to this group from October > 2, 2005 2 pm (EST) through October 14th, 9 am (EST). > > The result of this map is at: > http://www.zeesource.net/maps/map.do?group=456 > > Would like to hear what you think of it. > > Best wishes > > Claire > > -- > Claire McLister [EMAIL PROTECTED] > 1684 Nightingale Avenue Suite 201 > Sunnyvale, CA 94087 408-733-2737(fax) > > http://www.zeesource.net > > > ---(end of broadcast)--- > TIP 6: explain analyze is your friend > ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] pg_dump, MVCC and consistency
Believe me, when you get data feeds with bad data and you have to do each insert as a transaction because COPY will just dump out, you can hit a billion really fast.

Alex

On 10/24/05, Martijn van Oosterhout wrote:
> On Mon, Oct 24, 2005 at 11:25:00AM -0600, Michael Fuhr wrote:
>> PostgreSQL 8.1 makes checks to avoid data loss due to transaction
>> ID wraparound, but there's one situation I'm not sure how it handles:
>> when a transaction is so long-lived that it would appear to be in
>> the future of newly-created transactions due to wraparound. I'd
>> have to dig into the source code to find out if that's possible,
>> and if so, what happens. Maybe one of the developers will comment.
>
> To avoid this you need to do a VACUUM FULL over the database at least
> once every two billion transactions (not statements or tuples,
> transactions). To that end, the server begins complaining after one
> billion. I've never seen this in practice. Perhaps you could calculate
> how long it would take to do that many transactions. Most systems will
> never see it...
>
> Hope this helps,
> --
> Martijn van Oosterhout   http://svana.org/kleptog/
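Alex's pattern - one insert per transaction so a single bad record doesn't abort the whole load the way COPY would - can be sketched like this, using Python's stdlib sqlite3 as a stand-in (the feed and table are invented; with Postgres the same shape applies, or SAVEPOINTs inside one transaction to cut down on commit overhead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (amount INTEGER NOT NULL)")  # hypothetical feed target

feed = ["100", "250", "bogus", "75"]  # one malformed record in the feed
loaded = rejected = 0
for raw in feed:
    try:
        # 'with conn' gives each row its own commit/rollback, so one
        # bad record cannot take down the rows already loaded.
        with conn:
            conn.execute("INSERT INTO invoices VALUES (?)", (int(raw),))
        loaded += 1
    except (ValueError, sqlite3.Error):
        rejected += 1  # log the bad record and keep going
```

The cost, as the thread notes, is burning one transaction ID per row, which is exactly how a feed like this races toward the wraparound limit.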
Re: [GENERAL] Prevent application log pollution with notice messages
> Also:
>
> They cannot be filtered out since all Postgres log entries
> have type Error, e.g. the
> LOG: database system is ready message is an Error message!
>
> Why does Postgres use the Error category for NOTICEs? Isn't it
> reasonable to use the Information type?

This should be fixed in 8.0.4, and in 8.1.

//Magnus

---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
[GENERAL] Map of Postgresql Users (OT)
Hi, We've developed an automatic email mapping capability from Google Maps API. To try it out, we mapped origins of emails to this group from October 2, 2005 2 pm (EST) through October 14th, 9 am (EST). The result of this map is at: http://www.zeesource.net/maps/map.do?group=456 Would like to hear what you think of it. Best wishes Claire -- Claire McLister [EMAIL PROTECTED] 1684 Nightingale Avenue Suite 201 Sunnyvale, CA 94087 408-733-2737(fax) http://www.zeesource.net ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] a stored procedure ..with integer as the parameter
As sort of a side discussion - I have postulated that quoting all incoming numbers as strings would be an effective defense against SQL injection style attacks, as magic quotes would destroy any end-quote type syntax:

in_value=1
select * from table where my_id='$in_value';

as an example for PHP - Postgres will silently perform an atoi on the string to make it a number, but it would prevent:

in_value=1; delete * from user;
select * from table where my_id=$in_value

Am I just smoking crack here, or does this approach have some merit?

Alex

On 10/21/05, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> First, you should not quote an integer value going into an integer
> column - bad habit to get into.
> Second, empty string is not an integer.
>
> Susan
>
> "surabhi.ahuja" <[EMAIL PROTECTED]> wrote on 10/20/2005 11:03 PM:
>> i have a stored procedure
>>
>> insert_table(integer)
>>
>> which does "insert into table (x) value ($1)";
>> now in my client i call the stored procedure as
>>
>> select insert_table("3");
>>
>> it works fine and inserts 3 into the table
>> but suppose i give
>>
>> select insert_table("");
>>
>> it gives an error saying "invalid input syntax for integer"
>>
>> please suggest a solution to this problem
>>
>> thanks,
>> regards
>> surabhi

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] pg_dump, MVCC and consistency
On Mon, Oct 24, 2005 at 11:25:00AM -0600, Michael Fuhr wrote:
> PostgreSQL 8.1 makes checks to avoid data loss due to transaction
> ID wraparound, but there's one situation I'm not sure how it handles:
> when a transaction is so long-lived that it would appear to be in
> the future of newly-created transactions due to wraparound. I'd
> have to dig into the source code to find out if that's possible,
> and if so, what happens. Maybe one of the developers will comment.

To avoid this you need to do a VACUUM FULL over the database at least once every two billion transactions (not statements or tuples, transactions). To that end, the server begins complaining after one billion. I've never seen this in practice. Perhaps you could calculate how long it would take to do that many transactions. Most systems will never see it...

Hope this helps,
-- 
Martijn van Oosterhout   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.
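Martijn's "perhaps you could calculate" is a one-liner. A back-of-the-envelope Python sketch (the two-billion budget is the figure from the post above; the transaction rates are invented examples):

```python
WRAP_LIMIT = 2 * 10**9  # transactions between mandatory vacuums, per the post

def days_until_wraparound(tx_per_second):
    """How long a system takes to burn through the wraparound budget
    at a steady transaction rate."""
    return WRAP_LIMIT / tx_per_second / 86400.0  # 86400 seconds per day
```

Even a busy system doing a sustained 1000 transactions per second has roughly three weeks between required database-wide vacuums, and at 100 per second it is the better part of a year - which is why most systems never see the limit.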
[GENERAL] FoxPro in WINE to Postgresql on LINUX?
I know that there are some Visual FoxPro developers on this list who use PostgreSQL as their back-end. My question to them is...

VFP will run in WINE up to VFP version 8. But the people I talk to that are doing this are using VFP's native tables. Does anyone know if it's possible to run VFP8 in WINE and connect to Postgresql on a Linux server? I think part of the question here is how would WINE emulate the ODBC to get to the Postgresql server.

The reason for this strange question is that our corporation has standardized all custom development to be done in VFP. We need a stable back-end and some of us are now using Postgresql as our database server. It would be really nice if we could run our client apps on Linux as well.

Thanks,

*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
Margaret Gillon, IS Dept., Chromalloy Los Angeles, ext. 297

This e-mail message and any attachment(s) are for the sole use of the intended recipient(s) and may contain proprietary and/or confidential information which may be privileged or otherwise protected from disclosure. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient(s), please contact the sender by reply email and destroy the original message and any copies of the message as well as any attachment(s) to the original message.
Re: [GENERAL] [ANNOUNCE] PostgreSQL 8.1 Beta 4
> >> In order to address some issues found with the Windows > port, as well > >> as GCC4, we have released a Beta 4 of the upcoming > >> 8.1 Release. > > > > Are you specifically referring to the interrupt/signals test on > > windows here? Because that one isn't in beta4, it was > disabled again > > before it was packaged... > > > > Or are you referring to something else? > > I was referring to: > > http://archives.postgresql.org/pgsql-committers/2005-10/msg00321.php > > But hadn't seen the followup on the 22nd concerning > temporarily disabling it :( > > Sorry about that ... Ok. No problem, just wanted to make sure you knew it wasn't in there :-) //Magnus ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Prevent application log pollution with notice messages
My application creates a temporary table with a primary key which exists only for the duration of a transaction. Each creation of such a table causes Postgres to write an entry to the Windows application log like: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "taitmata_pkey" for table "taitmata" I have a lot of transactions, and those messages will completely pollute the application log. How can I disable those notices for temporary tables? Also: they cannot be filtered out, since all Postgres log entries have type Error; e.g. the LOG: database system is ready message is an Error message! Why does Postgres use the Error category for NOTICEs? Isn't it more reasonable to use the Information type? Andrus. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
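[For reference — assuming a PostgreSQL 8.x server; consult the runtime-configuration docs for your version — the message threshold can be raised so NOTICE-level messages are dropped, either per session or server-wide:]

```sql
-- Per session: stop NOTICE-level messages from being sent to the client
SET client_min_messages = warning;

-- In postgresql.conf, to keep notices out of the server log / event log:
-- log_min_messages = warning
```

Note this suppresses all notices, not just the implicit-index ones, so it is a blunt instrument.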
Re: [GENERAL] Select all invalid e-mail addresses
> SELECT email FROM customer WHERE email !~* '[EMAIL PROTECTED]@(?:[EMAIL PROTECTED])?[a-z0-9_-]+\.(?:a[defgilmnoqrstuwz]|b[abdefghijmnorstvwyz]|c[acdfghiklmnoruvxyz]|d[ejkmoz]|e[ceghrst]|f[ijkmorx]|g[abdefhilmnpqrstuwy]|h[kmnrtu]|i[delnoqrst]|j[mop]|k[eghimnprwyz]|l[abcikrstuvy]|m[acdghklmnopqrstuvwxyz]|n[acefgilopruz]|om|p[aefghklmnrtwy]|qa|r[eouw]|s[abcdeghijklmnortvyz]|t[cdfghjkmnoprtvwz]|u[agkmsyz]|v[aceginu]|w[fs]|y[etu]|z[amw]|edu|com|net|org|gov|mil|info|biz|coop|museum|aero|name|pro|mobi|arpa)$'
>
> ...should be closer. Fixes one typo in the range, uses valid pg-format regex rather than a perl regex, and has a couple of pedant-fixes in the TLDs supported.
>
> It's syntactically correct, and appears to do the right thing on my production DB here (which coincidentally has a customer table with an email field :)), but you should make sure you understand what the regex actually does.

Steve, thank you again. I applied Michael's hint about dollar quoting to this and tried

create temp table customer ( email char(60));
insert into customer values( '[EMAIL PROTECTED]');
SELECT email FROM customer WHERE email !~* $$ [EMAIL PROTECTED]@(?:[EMAIL PROTECTED])?[a-z0-9_-]+\.(?:a[defgilmnoqrstuwz]|b[abdefghijmnorstvwyz]|c[acdfghiklmnoruvxyz]|d[ejkmoz]|e[ceghrst]|f[ijkmorx]|g[abdefhilmnpqrstuwy]|h[kmnrtu]|i[delnoqrst]|j[mop]|k[eghimnprwyz]|l[abcikrstuvy]|m[acdghklmnopqrstuvwxyz]|n[acefgilopruz]|om|p[aefghklmnrtwy]|qa|r[eouw]|s[abcdeghijklmnortvyz]|t[cdfghjkmnoprtvwz]|u[agkmsyz]|v[aceginu]|w[fs]|y[etu]|z[amw]|edu|com|net|org|gov|mil|info|biz|coop|museum|aero|name|pro|mobi|arpa)$ $$

but this classifies the e-mail address [EMAIL PROTECTED] as invalid (the select returns it). The same result occurs without dollar quoting, using your original select. Andrus. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] [ANNOUNCE] PostgreSQL 8.1 Beta 4
On Mon, 24 Oct 2005, Magnus Hagander wrote:
>> In order to address some issues found with the Windows port, as well as GCC4, we have released a Beta 4 of the upcoming 8.1 Release.
>
> Are you specifically referring to the interrupt/signals test on windows here? Because that one isn't in beta4, it was disabled again before it was packaged...
>
> Or are you referring to something else?

I was referring to: http://archives.postgresql.org/pgsql-committers/2005-10/msg00321.php But hadn't seen the followup on the 22nd concerning temporarily disabling it :( Sorry about that ... Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] log_min_duration_statement oddity
Csaba Nagy wrote: > Thanks Tom, now at least I can stop chasing what I'm doing wrong :-) > > BTW, will there be a way to also log the parameter values for prepared > statements ? While debugging performance problems it would be You mean the BIND values? No, not in 8.1, but it is on our TODO list now. Prepared-statement logging is better in 8.1, though. -- Bruce Momjian | http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] [ANNOUNCE] PostgreSQL 8.1 Beta 4
> In order to address some issues found with the Windows port, > as well as GCC4, we have released a Beta 4 of the upcoming > 8.1 Release. Are you specifically referring to the interrupt/signals test on windows here? Because that one isn't in beta4, it was disabled again before it was packaged... Or are you referring to something else? //Magnus ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] pg_dump, MVCC and consistency
On Mon, Oct 24, 2005 at 02:29:24PM +0200, Florian Ledoux wrote: > If I have understood correctly, the default transaction isolation level in > PG is the "read committed" isolation level. If it is the isolation > scheme used by pg_dump, how can I be sure that tables accessed at the > end of my export are consistent with those accessed at the beginning ? > Does pg_dump use a serializable isolation scheme ? There are at least two ways to find out: examine the source code or enable query logging on the server. You'll discover that a pg_dump session starts with: BEGIN SET TRANSACTION ISOLATION LEVEL SERIALIZABLE > We have this kind of concern with Oracle, and a "CONSISTENT" flag can > be set in the exp utility to use a consistent snapshot of the database > from the beginning to the end of the export process. Unfortunately, > this mode uses rollback segments intensively and can lead to obsolete > data (also known as "Snapshot too old"). Is there an equivalent of > rollback segments in PG ? Are there issues like "snapshot too old" > with intensive multi-user and transactional databases ? PostgreSQL uses MVCC to get snapshots. See "Concurrency Control" and "Preventing transaction ID wraparound failures" in the documentation for discussion of how this works, what problems you might encounter, and how to avoid them. http://www.postgresql.org/docs/8.0/interactive/mvcc.html http://www.postgresql.org/docs/8.0/interactive/maintenance.html#VACUUM-FOR-WRAPAROUND PostgreSQL 8.1 makes checks to avoid data loss due to transaction ID wraparound, but there's one situation I'm not sure how it handles: when a transaction is so long-lived that it would appear to be in the future of newly-created transactions due to wraparound. I'd have to dig into the source code to find out if that's possible, and if so, what happens. Maybe one of the developers will comment. 
-- Michael Fuhr ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Why is this function wrong
On Oct 24, 2005, at 11:27 AM, Johan Wehtje wrote: This is probably obvious but I am not seeing what is going wrong with this function definition: CREATE OR REPLACE FUNCTION "public"."cproc_hli" (bgtemp NUMERIC, humidity NUMERIC, wspeed NUMERIC) RETURNS NUMERIC AS IF ($1 < 25) THEN SELECT (10.66 * (0.28 * $2)+(1.3 * $1) - $3) ELSE SELECT (8.62 * (0.38 * $2) + (1.55 * $1) - (0.5 * $3) + (exp(- $3 + 2.4))) END IF LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY INVOKER; ERROR: syntax error at or near "IF" at character 119 LINE 2: IF ($1 < 25) I have tried with dollar quoting around the function body, changing the language to sql and using CASE ... WHEN instead of IF, tried brackets and no brackets around the Boolean expression... and a number of other variations, as well as rereading the documentation. I have also tried not having aliases/names for the arguments. In every case I always get the syntax error directly after "AS". Can somebody point out what is probably obvious? Cheers Johan Wehtje This version will compile, but it won't do anything useful: CREATE OR REPLACE FUNCTION "public"."cproc_hli" (bgtemp NUMERIC, humidity NUMERIC, wspeed NUMERIC) RETURNS NUMERIC AS $$ BEGIN IF (bgtemp < 25) THEN PERFORM (10.66 * (0.28 * humidity)+(1.3 * bgtemp) - wspeed); ELSE PERFORM (8.62 * (0.38 * humidity) + (1.55 * bgtemp) - (0.5 * wspeed) + (exp(- wspeed + 2.4))); END IF; END; $$ LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY INVOKER; You were missing some basic syntactic requirements, including a BEGIN/END block, as well as several semicolons. You will also need to SELECT into a variable if you want anything useful to happen with the results. I've replaced your SELECTs with PERFORMs to recreate your function as originally written. I recommend a closer reading of the chapter on PL/pgSQL: http://www.postgresql.org/docs/8.0/static/plpgsql.html -- Thomas F. O'Connell Co-Founder, Information Architect Sitening, LLC Open Source Solutions. 
Optimized Web Development. http://www.sitening.com/ 110 30th Avenue North, Suite 6 Nashville, TN 37203-6320 615-469-5150 615-469-5151 (fax) ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
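[For completeness, a sketch of a variant that actually returns the computed value, rather than discarding it with PERFORM. Untested here; the arithmetic is copied verbatim from the original definition, including the 10.66 * and 8.62 * leading factors as posted. Dollar quoting requires 8.0 or later:]

```sql
CREATE OR REPLACE FUNCTION public.cproc_hli(bgtemp numeric, humidity numeric, wspeed numeric)
RETURNS numeric AS $$
BEGIN
    IF bgtemp < 25 THEN
        RETURN 10.66 * (0.28 * humidity) + (1.3 * bgtemp) - wspeed;
    ELSE
        RETURN 8.62 * (0.38 * humidity) + (1.55 * bgtemp)
               - (0.5 * wspeed) + exp(-wspeed + 2.4);
    END IF;
END;
$$ LANGUAGE plpgsql STRICT;
```

STRICT is shorthand for RETURNS NULL ON NULL INPUT.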
Re: [GENERAL] Why is this function wrong
The variant you're showing here has absolutely no quoting for the function body. You need to quote the body, and escape any quotes you have inside the body (in this example you don't have any). Wrap the body in BEGIN ... END, and put semicolons after each statement. Also, in PL/pgSQL a bare SELECT has nowhere to put its result, so the expressions are returned directly here. Corrected code: CREATE OR REPLACE FUNCTION "public"."cproc_hli" (bgtemp NUMERIC, humidity NUMERIC, wspeed NUMERIC) RETURNS NUMERIC AS ' BEGIN IF ($1 < 25) THEN RETURN (10.66 * (0.28 * $2)+(1.3 * $1) - $3); ELSE RETURN (8.62 * (0.38 * $2) + (1.55 * $1) - (0.5 * $3) + (exp(- $3 + 2.4))); END IF; END; ' LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY INVOKER; HTH, Csaba. On Mon, 2005-10-24 at 18:27, Johan Wehtje wrote: > This is probably obvious but I am not seeing what is going wrong with > this function definition: > > CREATE OR REPLACE FUNCTION "public"."cproc_hli" (bgtemp NUMERIC, > humidity NUMERIC, wspeed NUMERIC) RETURNS NUMERIC AS > IF ($1 < 25) > THEN > SELECT (10.66 * (0.28 * $2)+(1.3 * $1) - $3); > ELSE > SELECT (8.62 * (0.38 * $2) + (1.55 * $1) - (0.5 * $3) + (exp(- $3 + 2.4))) > END IF > LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY INVOKER; > ERROR: syntax error at or near "IF" at character 119 > LINE 2: IF ($1 < 25) > > I have tried with dollar quoting around the function body, changing the > language to sql and using CASE ... WHEN instead of IF, tried brackets > and no brackets around the Boolean expression... and a number of other > variations, as well as rereading the documentation. > > I have also tried not having aliases/names for the arguments. > > In every case I always get the syntax error directly after "AS". > > Can somebody point out what is probably obvious? > > Cheers > Johan Wehtje ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] [pgsql-general] Daily digest v1.5657 (16 messages)
Florian, Responses from an ex-Oracle DBA below. On Mon, 2005-10-24 at 11:51 -0300, [EMAIL PROTECTED] wrote: > Date: Mon, 24 Oct 2005 14:29:24 +0200 > From: Florian Ledoux <[EMAIL PROTECTED]> > To: pgsql-general@postgresql.org > Subject: pg_dump, MVCC and consistency > Message-ID: <[EMAIL PROTECTED]> > > Hello everybody ! > > I am coming from the (expensive) "Oracle World" and I am a newbie in > PG administration. I am currently working on backup concerns... I am > using pg_dump and I have not encountered any problems, but I have some > questions about the internal management of data consistency in the PG > server. > I have read some articles about the MVCC mechanism but I can't see how > it maintains a consistent "snapshot" of the database throughout the > export process. The whole secret, as I understand it, is that updates and deletes do not overwrite the original tuple. The original tuple remains in place, marked with transaction ids describing the transactions to which it is visible. These "old" tuples remain until a vacuum is performed. The vacuum removes only those tuples which are no longer visible to any running transaction. > If I have understood correctly, the default transaction isolation level in > PG is the "read committed" isolation level. If it is the isolation > scheme used by pg_dump, how can I be sure that tables accessed at the > end of my export are consistent with those accessed at the beginning ? > Does pg_dump use a serializable isolation scheme ? I believe pg_dump uses serializable. > We have this kind of concern with Oracle, and a "CONSISTENT" flag can > be set in the exp utility to use a consistent snapshot of the database > from the beginning to the end of the export process. Unfortunately, > this mode uses rollback segments intensively and can lead to obsolete > data (also known as "Snapshot too old"). Is there an equivalent of > rollback segments in PG ? Are there issues like "snapshot too old" > with intensive multi-user and transactional databases ? One of the great joys of Postgres is never seeing a "snapshot too old" error. There is no rollback or undo space required, as the original tuples remain in place. This has other benefits too - you don't have to reconstruct the original tuple from rollback in order to retrieve it, making selects faster, and you don't have to write rollback data, making writes faster. > I do not have a good knowledge of PG's internal mechanisms; I hope that my > questions are clear enough... Yep. I hope the answers were too. > Florian > __ Marc
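[A sketch of what the serializable snapshot buys the dump; the table name is hypothetical, and any 8.x server should behave this way:]

```sql
-- Session A (what pg_dump effectively does):
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM accounts;   -- the snapshot is frozen at the first query

-- Session B, meanwhile, in its own connection:
INSERT INTO accounts VALUES (...);  -- commits normally, unblocked

-- Session A, later in the same transaction:
SELECT count(*) FROM accounts;   -- still the original count; B's insert is invisible
COMMIT;
```

The old tuple versions that make this possible are exactly the ones a later VACUUM reclaims once no transaction can see them.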
[GENERAL] Why is this function wrong
This is probably obvious but I am not seeing what is going wrong with this function definition: CREATE OR REPLACE FUNCTION "public"."cproc_hli" (bgtemp NUMERIC, humidity NUMERIC, wspeed NUMERIC) RETURNS NUMERIC AS IF ($1 < 25) THEN SELECT (10.66 * (0.28 * $2)+(1.3 * $1) - $3) ELSE SELECT (8.62 * (0.38 * $2) + (1.55 * $1) - (0.5 * $3) + (exp(- $3 + 2.4))) END IF LANGUAGE 'plpgsql' VOLATILE RETURNS NULL ON NULL INPUT SECURITY INVOKER; ERROR: syntax error at or near "IF" at character 119 LINE 2: IF ($1 < 25) I have tried with dollar quoting around the function body, changing the language to sql and using CASE ... WHEN instead of IF, tried brackets and no brackets around the Boolean expression... and a number of other variations, as well as rereading the documentation. I have also tried not having aliases/names for the arguments. In every case I always get the syntax error directly after "AS". Can somebody point out what is probably obvious? Cheers Johan Wehtje ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] is there a function which elminates spaces?
SELECT trim(trailing ' ' from city_name) AS city_name FROM sys_cities; You might consider reading the manual, as there is a multitude of string-manipulation functions built into PostgreSQL. "Pierre Couderc" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > In a WHERE clause, I want to compare strings ignoring the spaces inside > them. Is there a function to do that? I do not find it in the > documentation. > > SELECT ... WHERE (ignore_spaces(table.phone_number) ~* ignore_spaces(?)); > > would be fine, but ignore_spaces() does not exist! > Maybe there is a solution based on regular expressions, but I do not see > it. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
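[Worth noting: trim() only strips leading/trailing spaces, while the question was about spaces *inside* the strings. A sketch of what was asked for, using the built-in replace() function on both sides of the comparison; table and column names are taken from the original question, and the literal is a made-up example:]

```sql
SELECT *
FROM some_table t
WHERE replace(t.phone_number, ' ', '') ~*
      replace('01 23 45 67', ' ', '');
```

translate(t.phone_number, ' ', '') would remove the spaces as well.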
Re: [GENERAL] insertion becoming slow
OK, it's quite some time from when the original question was posted, but now I have more data... see below. On Thu, 2005-09-29 at 19:24, Jim C. Nasby wrote: > On Tue, Sep 27, 2005 at 10:24:02AM +0200, Csaba Nagy wrote: > > Hi all, > > > > I've recently asked a similar question, which received no useful answer > > yet, so I'll drop in too. > > > > In my case, the table I was inserting to was a quite big one already to > > start with (and analyzed so), so I was expecting that it will not slow > > down due to indexes, as they were quite big to start with as I said. > > > > What I mean is that I expected that the speed will be more or less > > constant over the whole inserting. But the result was that after a while > > the average insert speed dropped considerably and suddenly, which I > > can't explain and would like to know what caused it... > > The table was ~100 million live rows and quite often updated, and the > > insert was ~40 million rows. After ~10 million rows the average speed > > dropped suddenly about 4 times. > > > > My only suspicion would be that the table had a quite big amount of free > > space in it at the beginning due to the fact that it is quite often > > updated, and then the free space was exhausted. So the speed difference > > might come from the difference in using free space versus creating new > > pages ? Or the same thing for the b-tree indexes. > > > > Is there any other reasonable explanation for this ? As I see this kind > > of behavior consistently, speed OK on start of inserting, and then slow > > down, and I would like to know if I can tune my DB to cope with it or > > just accept that it works like this... > > > > Cheers, > > Csaba. > > I can't think of any explanation for this off-hand. Can you re-run the > test on a table that doesn't have a bunch of free space in it to see if > that's what the issue was? So the issue was that the system had other scheduled heavy activities running I was not aware of. 
So when they started, the insert performance dropped... so I guess it is all clear now, at least for me... it's the typical case of the right hand not knowing what the left hand is doing, and the head spending a lot of time figuring out what both were up to ;-) Cheers, Csaba. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] FW: cannot stat `/usr/local/pgsql/data/pg_xlog/00000001000000430000009C': No such file or directory
Here is the message. 2005-10-24 08:36:44 CDTLOG: database system was shut down at 2005-10-24 08:35:25 CDT 2005-10-24 08:36:44 CDTLOG: checkpoint record is at 43/B11C 2005-10-24 08:36:44 CDTLOG: redo record is at 43/B11C; undo record is at 43/B11C; shutdown TRUE 2005-10-24 08:36:44 CDTLOG: next transaction ID: 6891749; next OID: 14756831 2005-10-24 08:36:44 CDTLOG: database system is ready cp: cannot stat `/usr/local/pgsql/data/pg_xlog/00010043009C': No such file or directory could not copy /usr/local/pgsql/data/pg_xlog/00010043009C to archive 2005-10-24 08:36:44 CDTLOG: archive command "yblogger /usr/local/pgsql/data/pg_xlog/00010043009C Postgresql version is: PostgreSQL 8.0.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.1 (SuSE Linux) From: Sailer, Denis (YBUSA-CDR) Sent: Monday, October 24, 2005 10:23 AM To: 'pgsql-general@postgresql.org' Subject: FW: cannot stat `/usr/local/pgsql/data/pg_xlog/00010043009C': No such file or directory I forgot to include the specific error message related to the archival process not finding the file. From: Sailer, Denis (YBUSA-CDR) [mailto:[EMAIL PROTECTED] Sent: Monday, October 24, 2005 8:52 AM To: pgsql-general@postgresql.org Subject: cannot stat `/usr/local/pgsql/data/pg_xlog/00010043009C': No such file or directory In my development system the file system where $PGDATA resides filled up. cp: writing `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430076': No space left on device could not copy /usr/local/pgsql/data/pg_xlog/000100430076 to archive 2005-10-23 08:46:29 CDTLOG: archive command "yblogger /usr/local/pgsql/data/pg_xlog/000100430076 cp: overwrite `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430076'? 
2005-10-23 08:46:30 CDTLOG: archived transaction log file "00010 0430076" 2005-10-23 08:46:31 CDTLOG: could not close temporary statistics file "/usr/local/pgsql/data/global/pgstat.tmp.27235": No space left on device cp: writing `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430077': No space left on device could not copy /usr/local/pgsql/data/pg_xlog/000100430077 to archive 2005-10-23 08:47:46 CDTLOG: archive command "yblogger /usr/local/pgsql/data/pg_xlog/000100430077 cp: overwrite `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430077'? 2005-10-23 08:47:47 CDTLOG: archived transaction log file "00010 0430077" 2005-10-23 08:47:48 CDTLOG: could not close temporary statistics file "/usr/local/pgsql/data/global/pgstat.tmp.27235": No space left on device cp: writing `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430078': No space left on device could not copy /usr/local/pgsql/data/pg_xlog/000100430078 to archive 2005-10-23 08:49:49 CDTLOG: archive command "yblogger /usr/local/pgsql/data/pg_xlog/000100430078 cp: overwrite `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430078'? 2005-10-23 08:49:51 CDTLOG: archived transaction log file "00010 0430078" 2005-10-23 08:49:52 CDTLOG: could not close temporary statistics file "/usr/local/pgsql/data/global/pgstat.tmp.27235": No space left on device cp: writ At this point in time postgres had gone away. I tried to restart Postgres, but received these messages. Postgres would not start. 2005-10-24 08:28:04 CDTLOG: database system was interrupted at 2005-10-23 10:01:14 CDT 2005-10-24 08:28:04 CDTLOG: could not open file "/usr/local/pgsql/data/pg_xlog/000100430043" (log file 67, segment 67): No such file or dir ectory 2005-10-24 08:28:04 CDTLOG: invalid checkpoint record 2005-10-24 08:28:04 CDTPANIC: could not locate required checkpoint record 2005-10-24 08:28:04 CDTHINT: If you are not restoring from a backup, try removing the file "/usr/local/pgsql/data/backup_label". 
2005-10-24 08:28:04 CDTLOG: startup process (PID 13345) was terminated by signal 6 2005-10-24 08:28:04 CDTLOG: aborting startup due to startup process failure 2005-10-24 08:28:04 CDTLOG: logger shutting down Since this is development I used "pg_resetxlog -f /usr/local/pgsql/data". I then tried to start Postgres, but received the same messages as above. I removed the backup-in-progress file as stated in the hint. Postgres was able to start after that, but now it keeps trying to archive log files that don't exist. How can I tell Postgres to move to the point where log-file truncation occurred, i.e. where the log files now start?
[GENERAL] FW: cannot stat `/usr/local/pgsql/data/pg_xlog/00000001000000430000009C': No such file or directory
I forgot to include the specific error message related to the archival process not finding the file. From: Sailer, Denis (YBUSA-CDR) [mailto:[EMAIL PROTECTED] Sent: Monday, October 24, 2005 8:52 AM To: pgsql-general@postgresql.org Subject: cannot stat `/usr/local/pgsql/data/pg_xlog/00010043009C': No such file or directory In my development system the file system where $PGDATA resides filled up. cp: writing `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430076': No space left on device could not copy /usr/local/pgsql/data/pg_xlog/000100430076 to archive 2005-10-23 08:46:29 CDTLOG: archive command "yblogger /usr/local/pgsql/data/pg_xlog/000100430076 cp: overwrite `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430076'? 2005-10-23 08:46:30 CDTLOG: archived transaction log file "00010 0430076" 2005-10-23 08:46:31 CDTLOG: could not close temporary statistics file "/usr/local/pgsql/data/global/pgstat.tmp.27235": No space left on device cp: writing `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430077': No space left on device could not copy /usr/local/pgsql/data/pg_xlog/000100430077 to archive 2005-10-23 08:47:46 CDTLOG: archive command "yblogger /usr/local/pgsql/data/pg_xlog/000100430077 cp: overwrite `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430077'? 2005-10-23 08:47:47 CDTLOG: archived transaction log file "00010 0430077" 2005-10-23 08:47:48 CDTLOG: could not close temporary statistics file "/usr/local/pgsql/data/global/pgstat.tmp.27235": No space left on device cp: writing `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430078': No space left on device could not copy /usr/local/pgsql/data/pg_xlog/000100430078 to archive 2005-10-23 08:49:49 CDTLOG: archive command "yblogger /usr/local/pgsql/data/pg_xlog/000100430078 cp: overwrite `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430078'? 
2005-10-23 08:49:51 CDTLOG: archived transaction log file "00010 0430078" 2005-10-23 08:49:52 CDTLOG: could not close temporary statistics file "/usr/local/pgsql/data/global/pgstat.tmp.27235": No space left on device cp: writ At this point in time postgres had gone away. I tried to restart Postgres, but received these messages. Postgres would not start. 2005-10-24 08:28:04 CDTLOG: database system was interrupted at 2005-10-23 10:01:14 CDT 2005-10-24 08:28:04 CDTLOG: could not open file "/usr/local/pgsql/data/pg_xlog/000100430043" (log file 67, segment 67): No such file or dir ectory 2005-10-24 08:28:04 CDTLOG: invalid checkpoint record 2005-10-24 08:28:04 CDTPANIC: could not locate required checkpoint record 2005-10-24 08:28:04 CDTHINT: If you are not restoring from a backup, try removing the file "/usr/local/pgsql/data/backup_label". 2005-10-24 08:28:04 CDTLOG: startup process (PID 13345) was terminated by signal 6 2005-10-24 08:28:04 CDTLOG: aborting startup due to startup process failure 2005-10-24 08:28:04 CDTLOG: logger shutting down Since this is development I used “pg_resetxlog -f /usr/local/pgsql/data”. I then tried to start Postgres, but received the same messages as above. I removed the backup in progress file as stated in the message. Postgres was able to start after that, but now it keeps trying to archive log files that don’t exist. How can I tell postgres to move to the point in time where log file truncation occurred where the log files now start?
[GENERAL] PostgreSQL 8.1 Beta 4
In order to address some issues found with the Windows port, as well as GCC4, we have released a Beta 4 of the upcoming 8.1 Release. This Beta is meant to be a quick one, barring any problems, with our first Release Candidate happening late this same week. With this beta, Tom has also put out the call for port reports, to round out our 'Supported Platforms' list: "If you don't see your favorite platform already listed as tested for 8.1 at http://developer.postgresql.org/docs/postgres/supported-platforms.html then please give it a try and send in your results." We heavily encourage one and all to test this Beta out, to make sure we weed out as many bugs as possible before release ... Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] pg_dump with low priority?
On Sat, 2005-10-22 at 07:39, Bryan Field-Elliot wrote: > We have a huge database which must be backed up every day with > pg_dump. The problem is, it takes around half an hour to produce the > dump file, and all other processes on the same box are starved for > cycles (presumably due to I/O) during the dump. It's not just an > inconvenience, it's now evolved into a serious problem that needs to > be addressed. > > Is there any mechanism for running pg_dump with a lower priority? I > don't mind if the backup takes two hours instead of half an hour, as > long as other processes were getting their fair share of cycles. You could either set up plain old replication via slony / mammoth replicator, and backup the slave, or setup Point in Time Recovery. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
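[Beyond offloading the dump to a replica, one OS-level mitigation is to renice the dump process. This is only a partial fix: plain nice lowers CPU priority but does not throttle the I/O that usually causes the starvation. A sketch, with a stand-in command to show the invocation pattern:]

```shell
# 'nice -n 19' runs a command at the lowest CPU priority. Applied to the
# dump it would look like (database name and output path are placeholders):
#   nice -n 19 pg_dump mydb > /var/backups/mydb.sql
# Stand-in command demonstrating the same invocation pattern:
nice -n 19 echo "dump running at low priority"
```

On Linux kernels with the CFQ scheduler, an I/O-priority tool (ionice) can help more, where available.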
Re: [GENERAL] a stored procedure ..with integer as the parameter
On Mon, 24 Oct 2005, surabhi.ahuja wrote: > from the client i receive a vector of strings ...which i have to pass as > arguments to the stored procedure. > > That stored procedure has valid argument data types > > for instance > i have a stored procedure as follows: > > insert(integer, varchar(256), smallint) > > from the client I get : "200", "surabhi", "10" > > now i have to make a call to the stored procedure using the above received > strings. > will not the "200" be converted to a valid integer before saving into > the database ..and same with the smallint values. I don't see how this example is directly related to what you sent before with an empty string. Theoretically, yes, I believe a quoted 200 ('200') and a quoted 10 ('10') should work for those two positions (unless another insert function that takes different arguments grabs it first). As an aside, an unquoted/uncast 10 will not currently match the smallint argument, so it may be better to simply not use smallint arguments. However, from your first mail, there was the question of ('') which is not a valid integer. Calling insert('200', 'surabhi', '') or insert('', 'surabhi', '10') is going to give the "invalid input syntax for integer" error because an empty string doesn't match the pattern for an integer. 
> From: Stephan Szabo [mailto:[EMAIL PROTECTED] > Sent: Fri 10/21/2005 12:11 PM > To: surabhi.ahuja > Cc: pgsql-general@postgresql.org > Subject: Re: [GENERAL] a stored procedure ..with integer as the parameter > > On Fri, 21 Oct 2005, surabhi.ahuja wrote: > > > i have a stored procedure > > > > insert_table(integer) > > which does "insert into table (x) value ($1)"; > > > > now in my client i call the stored procedure as > > > > select insert_table("3"); > > > > it works fine and inserts 3 into the table > > > > but suppose i give > > > > select insert_table(""); > > > > it gives an error ...saying " invalid input syntax for integer: " > > > > please suggest a solution to this problem > > Don't try to use an empty string as an integer? > > Seriously, you need to give information about what you want to happen, > because an empty string isn't a valid integer and can't really be > converted into one (we explicitly removed code that handled this case > because we thought it didn't make sense). > > > > > ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
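[To illustrate the empty-string point. NULLIF is one common way to map '' to NULL before casting, if NULL is an acceptable interpretation of "no value" — whether it is depends on the table's constraints:]

```sql
SELECT '200'::integer;           -- fine: 200
SELECT ''::integer;              -- ERROR: invalid input syntax for integer: ""
SELECT NULLIF('', '')::integer;  -- yields NULL instead of an error
```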
[GENERAL] cannot stat `/usr/local/pgsql/data/pg_xlog/00000001000000430000009C': No such file or directory
In my development system the file system where $PGDATA resides filled up. cp: writing `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430076': No space left on device could not copy /usr/local/pgsql/data/pg_xlog/000100430076 to archive 2005-10-23 08:46:29 CDTLOG: archive command "yblogger /usr/local/pgsql/data/pg_xlog/000100430076 cp: overwrite `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430076'? 2005-10-23 08:46:30 CDTLOG: archived transaction log file "00010 0430076" 2005-10-23 08:46:31 CDTLOG: could not close temporary statistics file "/usr/local/pgsql/data/global/pgstat.tmp.27235": No space left on device cp: writing `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430077': No space left on device could not copy /usr/local/pgsql/data/pg_xlog/000100430077 to archive 2005-10-23 08:47:46 CDTLOG: archive command "yblogger /usr/local/pgsql/data/pg_xlog/000100430077 cp: overwrite `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430077'? 2005-10-23 08:47:47 CDTLOG: archived transaction log file "00010 0430077" 2005-10-23 08:47:48 CDTLOG: could not close temporary statistics file "/usr/local/pgsql/data/global/pgstat.tmp.27235": No space left on device cp: writing `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430078': No space left on device could not copy /usr/local/pgsql/data/pg_xlog/000100430078 to archive 2005-10-23 08:49:49 CDTLOG: archive command "yblogger /usr/local/pgsql/data/pg_xlog/000100430078 cp: overwrite `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430078'? 2005-10-23 08:49:51 CDTLOG: archived transaction log file "00010 0430078" 2005-10-23 08:49:52 CDTLOG: could not close temporary statistics file "/usr/local/pgsql/data/global/pgstat.tmp.27235": No space left on device cp: writ At this point in time postgres had gone away. I tried to restart Postgres, but received these messages. Postgres would not start. 
2005-10-24 08:28:04 CDT LOG: database system was interrupted at 2005-10-23 10:01:14 CDT
2005-10-24 08:28:04 CDT LOG: could not open file "/usr/local/pgsql/data/pg_xlog/000000010000004300000043" (log file 67, segment 67): No such file or directory
2005-10-24 08:28:04 CDT LOG: invalid checkpoint record
2005-10-24 08:28:04 CDT PANIC: could not locate required checkpoint record
2005-10-24 08:28:04 CDT HINT: If you are not restoring from a backup, try removing the file "/usr/local/pgsql/data/backup_label".
2005-10-24 08:28:04 CDT LOG: startup process (PID 13345) was terminated by signal 6
2005-10-24 08:28:04 CDT LOG: aborting startup due to startup process failure
2005-10-24 08:28:04 CDT LOG: logger shutting down

Since this is development I used "pg_resetxlog -f /usr/local/pgsql/data". I then tried to start Postgres, but received the same messages as above. I removed the backup_label file as stated in the hint. Postgres was able to start after that, but now it keeps trying to archive log files that don't exist. How can I tell Postgres to skip ahead to the point where the remaining WAL files now start?
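A hypothetical sketch of the cleanup being asked about, using the paths from the report above. Everything here is an assumption to be verified against your own cluster: check `pg_controldata` output and the actual contents of pg_xlog first, and consult the pg_resetxlog reference page for the exact `-l` argument format on your release before running anything.

```shell
# 1. Free space first, otherwise archiving will simply fail again
df -h /usr/local/pgsql

# 2. Note the current WAL pointers and which segments actually exist
pg_controldata /usr/local/pgsql/data
ls /usr/local/pgsql/data/pg_xlog

# 3. (8.0-era) pg_resetxlog's -l option sets the WAL starting address;
#    the three components come from the first *existing* segment name
#    (timeline, log file, segment) -- verify the format in the manual.
pg_resetxlog -l 0x1,0x43,0x9C /usr/local/pgsql/data
```

Since pg_resetxlog discards WAL, any archive-based recovery up to this point is lost; in development that is usually acceptable, but take a fresh base backup immediately afterwards.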
Re: [GENERAL] Outer join query plans and performance
Rich Doughty wrote:
> I'm having some significant performance problems with left join. Can anyone give me any pointers as to why the following 2 query plans are so different?
[snip]

knew i'd forgotten something...

select version();
PostgreSQL 8.0.3 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.0.2 20050821 (prerelease) (Debian 4.0.1-6)

--
- Rich Doughty
[GENERAL] Outer join query plans and performance
I'm having some significant performance problems with left join. Can anyone give me any pointers as to why the following 2 query plans are so different?

EXPLAIN SELECT *
FROM tokens.ta_tokens t
  LEFT JOIN tokens.ta_tokenhist h1 ON t.token_id = h1.token_id
  LEFT JOIN tokens.ta_tokenhist h2 ON t.token_id = h2.token_id
WHERE h1.histdate = 'now';

Nested Loop Left Join  (cost=0.00..68778.43 rows=2215 width=1402)
  ->  Nested Loop  (cost=0.00..55505.62 rows=2215 width=714)
        ->  Index Scan using idx_tokenhist__histdate on ta_tokenhist h1  (cost=0.00..22970.70 rows=5752 width=688)
              Index Cond: (histdate = '2005-10-24 13:28:38.411844'::timestamp without time zone)
        ->  Index Scan using ta_tokens_pkey on ta_tokens t  (cost=0.00..5.64 rows=1 width=26)
              Index Cond: ((t.token_id)::integer = ("outer".token_id)::integer)
  ->  Index Scan using fkx_tokenhist__tokens on ta_tokenhist h2  (cost=0.00..5.98 rows=1 width=688)
        Index Cond: (("outer".token_id)::integer = (h2.token_id)::integer)

Performance is fine for this one and the plan is pretty much as I'd expect. This is where I hit a problem.
EXPLAIN SELECT *
FROM tokens.ta_tokens t
  LEFT JOIN tokens.ta_tokenhist h1 ON t.token_id = h1.token_id
  LEFT JOIN tokens.ta_tokenhist h2 ON t.token_id = h2.token_id
WHERE h2.histdate = 'now';

Hash Join  (cost=1249148.59..9000709.22 rows=2215 width=1402)
  Hash Cond: (("outer".token_id)::integer = ("inner".token_id)::integer)
  ->  Hash Left Join  (cost=1225660.51..8181263.40 rows=4045106 width=714)
        Hash Cond: (("outer".token_id)::integer = ("inner".token_id)::integer)
        ->  Seq Scan on ta_tokens t  (cost=0.00..71828.06 rows=4045106 width=26)
        ->  Hash  (cost=281243.21..281243.21 rows=10504921 width=688)
              ->  Seq Scan on ta_tokenhist h1  (cost=0.00..281243.21 rows=10504921 width=688)
  ->  Hash  (cost=22970.70..22970.70 rows=5752 width=688)
        ->  Index Scan using idx_tokenhist__histdate on ta_tokenhist h2  (cost=0.00..22970.70 rows=5752 width=688)
              Index Cond: (histdate = '2005-10-24 13:34:51.371905'::timestamp without time zone)

I would understand if h2 were joined on h1, but it isn't; it only joins on t. Can anyone give any tips on improving the performance of the second query (aside from changing the join order manually)?

Thanks

--
- Rich Doughty
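One rewrite worth trying (a sketch, untested against this schema): since the WHERE clause rejects every row where h2 is NULL, the second LEFT JOIN is effectively an INNER JOIN. Writing it that way frees the planner to start from the selective histdate index, mirroring the shape of the first plan:

```sql
-- Equivalent to the slow query above: the filter on h2.histdate already
-- discards NULL-extended h2 rows, so the LEFT JOIN to h2 can become an
-- INNER JOIN, which the planner is free to reorder and execute first.
SELECT *
FROM tokens.ta_tokenhist h2
  JOIN tokens.ta_tokens t        ON t.token_id = h2.token_id
  LEFT JOIN tokens.ta_tokenhist h1 ON t.token_id = h1.token_id
WHERE h2.histdate = 'now';
```

In the original form the planner must preserve the LEFT JOIN order (t before h2), which is why it builds the whole t/h1 join before it can apply the histdate filter.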
[GENERAL] pg_dump, MVCC and consistency
Hello everybody!

I am coming from the (expensive) "Oracle World" and I am a newbie in PG administration. I am currently working on backup concerns. I am using pg_dump and I have not encountered any problems, but I have some questions about the internal management of data consistency in the PG server.

I have read some articles about the MVCC mechanism but I can't see how it provides a consistent "snapshot" of the database during the whole export process. If I have understood correctly, the default transaction isolation level in PG is "read committed". If that is the isolation level used by pg_dump, how can I be sure that tables accessed at the end of my export are consistent with those accessed at the beginning? Does pg_dump use the serializable isolation level?

We have this kind of concern with Oracle, where a "CONSISTENT" flag can be set in the exp utility to use a consistent snapshot of the database from the beginning to the end of the export. Unfortunately, this mode uses rollback segments intensively and can lead to obsolete data (also known as "Snapshot too old"). Is there an equivalent of rollback segments in PG? Are there issues like "snapshot too old" with heavily multi-user, transactional databases?

I do not have a good knowledge of PG internals; I hope my questions are clear enough...

Florian
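For what it's worth, the behavior being asked about corresponds to a dump running inside a single transaction at the serializable level, which (in PostgreSQL of this era) sees one snapshot for its entire duration — roughly this sketch:

```sql
-- Sketch of the kind of transaction that yields one consistent snapshot
-- for the whole export, however long it runs:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ... read / COPY each table's contents here ...
COMMIT;
```

Because MVCC keeps old row versions in the tables themselves rather than in separate rollback segments, a long-running snapshot cannot hit "snapshot too old"; the trade-off is that VACUUM cannot reclaim those old versions until the transaction ends.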
Re: [GENERAL] Migration from 8.0 to 7.4...
Good point. This is exactly my problem. I tried to switch to the V2 protocol with the 8.0 JDBC driver, but in that case I get the same behavior as with the 7.4 driver working against the 8.0 database.

To answer Richard Huxton: the problem with the 7.4 JDBC driver against an 8.0 database is that autocommit/commit/rollback no longer works. And I see the same behavior with the 8.0 JDBC driver and the V2 protocol. So I'm back to my first point...

Csaba Nagy wrote:
> Romain, If you're using JDBC and the V3 protocol is hurting you (which was my case at one point), you can force the 8.0 driver to use the V2 protocol which does a lot of things differently. For a quick solution that could work until you fix your code to properly work with V3. Just shooting in the dark... Csaba.

[snip]

--
Romain Vinot - Ingénieur R&D
Akio Software
43, rue de Dunkerque
Tel : 01 53 20 63 80
Re: [GENERAL] SHA1 authentication
Thank you for the explanation.

Cheers,
Bohdan
Re: [GENERAL] Newbie Questions
On Fri, 2005-10-21 at 19:46 -0400, Douglas McNaught wrote: > Redefined Horizons <[EMAIL PROTECTED]> writes: > > > I'm running the latest stable version of PostgreSQL on a Debian Linux box > > running Gnome 2.0. I've just started setting up my first database with > > PostgreSQL and I've got a few newbie questions: > > I'm going to assume you installed the Debian packages rather than > installing from source... ...and if you did install from source, they're wherever you put them (as I frequently tell my daughter)... The real configuration files for the Debian packages are stored in /etc/postgresql and /etc/postgresql-common. There are symbolic links to those locations in the database directories. ... > > [2] Can I use the "SU" command to log in as Postgres if I am logged in as > > a non-root user, or is this only possible as the root user? > > If you give the postgres user a Unix password (as root) using the > 'passwd' command, you should be able to 'su' to that user from any > account. But generally that is thought to be inadvisable from a Unix security point of view, because it also allows anyone to log in as postgres without your knowing their real identity. You can use sudo to execute a command as another user, and then only your own password is needed. ... > > [4] How do I ensure that the Postmaster server process is started when I > > reboot my machine? > > It should be automatically started if you installed the Debian packages. If you didn't, look in contrib for an example of an init script. On Debian, this should be put into /etc/init.d and then you should use update-rc.d to set up the links for the runlevels in which you want it to run. > > [5] When I'm in PgAdmin III, does it matter if I add a new server as a > > non-root user, as the root-user, or as the Postgres user? What is the > > difference between the 3? > > Never used it, sorry. 
PgAdmin is a client application, so setting up a server means telling PgAdmin where to find the server you want to connect to. You should do it as the user who is going to be using PgAdmin. ... You do not generally need to use the postgres username; you can set up your own username as a PostgreSQL superuser and then you can do anything that postgres can do (except access the Unix files from outside a database connection). -- Oliver Elphick olly@lfix.co.uk Isle of Wight http://www.lfix.co.uk/oliver GPG: 1024D/A54310EA 92C8 39E7 280E 3631 3F0E 1EC0 5664 7A2F A543 10EA Do you want to know God? http://www.lfix.co.uk/knowing_god.html
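The sudo and init-script advice above can be sketched as the following commands (Debian paths assumed; the init-script name is illustrative — adjust to your layout):

```shell
# Run a one-off command as postgres via sudo; the action is logged under
# your own identity and the postgres account needs no Unix password:
sudo -u postgres psql template1

# If you built from source: install the example init script from the
# source tree's contrib directory and register it for the default
# runlevels with update-rc.d:
sudo cp contrib/start-scripts/linux /etc/init.d/postgresql
sudo update-rc.d postgresql defaults
```

With the Debian packages neither step is needed — the packaging already creates the init script and runlevel links.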
Re: [GENERAL] a stored procedure ..with integer as the parameter
surabhi.ahuja wrote:
> from the client I get : "200", "surabhi", "10"
> now i have to make a call to the stored procedure using the above received strings.
> will not the "200" be converted to a valid integer before saving into the database ..and same with the smallint values.

That wasn't the error you reported. You reported a problem with storing "" which not only isn't a number, it isn't even text. Double-quoting indicates it's a name (i.e. an identifier - a table name or similar).

So - this suggests some questions:
1. Do you know what data-types the client is providing?
2. What validation/checking do you do on the client-supplied data?
3. How are you building your query - via Perl's DBI system, PHP's PEAR:DB classes, JDBC?

--
Richard Huxton
Archonet Ltd
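Question 3 matters because of the sprintf-style string building shown earlier in this thread: interpolating raw strings into SQL breaks on any embedded quote (and is an injection hole). A minimal Python sketch of the difference — the `insert()` call and inputs mirror the thread's pseudocode; real code should use the driver's parameter binding rather than hand-quoting:

```python
def build_call_naive(a, b, c):
    # mirrors sprintf(command, "select insert('%s','%s','%s')", a, b, c)
    return "select insert('%s','%s','%s')" % (a, b, c)

def build_call_safer(a, b, c):
    # minimal sketch: double embedded single quotes so they stay inside
    # the string literal (drivers like DBI/JDBC do this, or better, bind)
    def q(v):
        return v.replace("'", "''")
    return "select insert('%s','%s','%s')" % (q(a), q(b), q(c))

print(build_call_naive("200", "o'hara", "10"))
# -> select insert('200','o'hara','10')   (the stray quote breaks the SQL)
print(build_call_safer("200", "o'hara", "10"))
# -> select insert('200','o''hara','10')  (quote survives as data)
```

Hand-quoting like `build_call_safer` is only a sketch; parameter binding also solves the empty-string-vs-integer confusion, because the driver sends typed values instead of pasted text.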
Re: [GENERAL] Migration from 8.0 to 7.4...
Romain, If you're using JDBC and the V3 protocol is hurting you (which was my case at one point), you can force the 8.0 driver to use the V2 protocol which does a lot of things differently. For a quick solution that could work until you fix your code to properly work with V3. Just shooting in the dark... Csaba. On Mon, 2005-10-24 at 10:02, Romain Vinot wrote: > Hi, > > We've got a problem with postgres 8.0 (our own code is not ready yet for > the new driver) but a production database is already on postgres 8.0 > (too bad, we didn't tested it enough...). > > So we need to migrate back to postgres 7.4 and wait for a code upgrade. > Is there a possible way to do this ? > > pg_dump output of 8.0 is not compatible with 7.4. But the output of > "pg_dump -a" seems to be compatible. > One solution would be to create tables with a 7.4 sql script. Then use > pg_dump to get the data from the 8.0 database and fill them in the 7.4 > database. And finally restore all functions and triggers from a 7.4 sql > script. > > Before testing this way, I would like your advise to know if our > solution has a chance to work, if there is a better way or anything... > > Thanks for any help
Re: [GENERAL] SHA1 authentication
On Mon, Oct 24, 2005 at 10:39:11AM +0200, Bohdan Linda wrote:
> Hello all,
>
> I would like to use password authentication for pgsql users for remote backup purposes. I don't like the fact storing cleartext password on a system. From documentation, i have learnt that passwords can be encrypted by md5 and crypt methods.
>
> But we know, that md5 is rather weak encryption, so I am asking is there any feasible way, how we can use SHA1 instead MD5?

Firstly, SHA-1 is in no better shape than MD5, see [1]. Secondly, all the current attacks are based on generating collisions, which kills it for digital-signing purposes, but for the kind of authentication we're doing that's not relevant: it would require being able to generate a password that matches a given hash, which none of the current attacks do.

However, looking into the future, switching would require the following:

1. Adding code to backend and frontend (sha1 code may exist in contrib, not sure).
2. Changing the protocol to support another hashing algorithm.
3. All sorts of other fiddling, but it wouldn't be too hard.

By all means submit a patch, but there's no real hurry right now. We should probably move straight to something more secure anyway, maybe SHA-256 or something.

[1] http://www.schneier.com/blog/archives/2005/02/sha1_broken.html

Have a nice day,
--
Martijn van Oosterhout http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them.
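For context on what "md5 method" means here: PostgreSQL does not store the cleartext password but the MD5 digest of password concatenated with username, prefixed with the literal string "md5". A sketch of the stored form (example username/password are made up):

```python
import hashlib

def pg_md5_password(user, password):
    """What PostgreSQL stores for an MD5 'encrypted' password:
    the literal prefix 'md5' plus md5(password || username)."""
    return "md5" + hashlib.md5((password + user).encode()).hexdigest()

print(pg_md5_password("bohdan", "secret"))  # 'md5' + 32 hex digits
```

Salting with the username prevents two users with the same password from sharing a stored hash, though it is not a per-password random salt in the modern sense.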
[GENERAL] STL problem in stored procedures
Hi All,

I have downloaded an extension for PostgreSQL. It contains stored procedures written in C. I compiled the necessary files into an .so file and tried to use the functions, but I get an error when it has to load the shared object file. The error says it cannot locate a symbol; I think this symbol is part of the C++ STL. This is the error message:

psql:/usr/share/postgresql/contrib/proximity.sql:5: ERROR: could not load library "/usr/lib/postgresql/proximity.so": /usr/lib/postgresql/proximity.so: undefined symbol: _ZNKSs17find_first_not_ofERKSsj

What should I do to be able to use this extension? Does anyone have experience with it?

Thanks,
Zoli

--
Zoltan Dudas
Software Engineer
SEI Hungary
Office +36-52-889-532 Ext: 2032
Fax +36-52-889-599
Email [EMAIL PROTECTED]
URL www.sei-it.com
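As a first diagnostic step (assuming GNU binutils is available), the mangled name can be decoded to confirm it is an STL member, and the usual fixes can be sketched like this — the relink command is illustrative, not the extension's actual build line:

```shell
# Demangle the symbol: it turns out to be a std::string member function
echo '_ZNKSs17find_first_not_ofERKSsj' | c++filt
# std::string::find_first_not_of(std::string const&, unsigned int) const

# Undefined STL symbols usually mean the .so was linked with gcc/ld
# rather than g++ (so libstdc++ was never pulled in), or was built
# against a different g++/libstdc++ version than the one installed.
# Relinking with g++ brings libstdc++ in automatically:
g++ -shared -fPIC -o proximity.so proximity.o
```

If relinking doesn't help, check that the g++ used to compile matches the libstdc++ on the machine (`ldd proximity.so` shows which one it resolves to), since the `unsigned int` in that signature differs between ABI generations.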
Re: [GENERAL] Migration from 8.0 to 7.4...
Romain Vinot wrote:
> Hi,
> We've got a problem with postgres 8.0 (our own code is not ready yet for the new driver) but a production database is already on postgres 8.0 (too bad, we didn't tested it enough...).

What "new driver"? I wasn't aware of 8.0 not being able to work with drivers for 7.4.

> So we need to migrate back to postgres 7.4 and wait for a code upgrade. Is there a possible way to do this?
> pg_dump output of 8.0 is not compatible with 7.4. But the output of "pg_dump -a" seems to be compatible. One solution would be to create tables with a 7.4 sql script. Then use pg_dump to get the data from the 8.0 database and fill them in the 7.4 database. And finally restore all functions and triggers from a 7.4 sql script.

I've had to do something similar. The easiest solution is to pg_dump --schema-only and run that through a perl script or similar to translate/correct any parts that are wrong.

First though, verify that the downgrade is necessary. What specifically is the problem with 8.0?

--
Richard Huxton
Archonet Ltd
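If the downgrade does prove necessary, the dump-and-filter route might look roughly like this. Everything here is a sketch: `mydb` and `host74` are placeholder names, and the sed expression is only an example of the kind of 8.0-ism you might need to strip — the real edits depend on what your schema actually uses:

```shell
# Dump the schema from the 8.0 server and adjust anything 7.4 rejects
pg_dump --schema-only mydb > schema_80.sql
sed -e 's/WITH OIDS//' schema_80.sql > schema_74.sql   # placeholder edit

# Load the adjusted schema into the 7.4 server, then copy the data
psql -h host74 mydb -f schema_74.sql
pg_dump -a mydb | psql -h host74 mydb
```

Run the whole pipeline against a scratch database first and diff row counts before touching production.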
Re: [GENERAL] a stored procedure ..with integer as the parameter
From the client I receive a vector of strings, which I have to pass as arguments to the stored procedure. The stored procedure has valid argument data types; for instance I have a stored procedure as follows:

insert(integer, varchar(256), smallint)

From the client I get: "200", "surabhi", "10". Now I have to make a call to the stored procedure using the above received strings. Will not the "200" be converted to a valid integer before saving into the database, and the same with the smallint values?

thanks, regards
Surabhi

From: Stephan Szabo [mailto:[EMAIL PROTECTED]]
Sent: Fri 10/21/2005 12:11 PM
To: surabhi.ahuja
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] a stored procedure ..with integer as the parameter

On Fri, 21 Oct 2005, surabhi.ahuja wrote:
> i have a stored procedure
>
> insert_table(integer)
> which does "insert into table (x) value ($1)";
>
> now in my client i call the stored procedure as
>
> select insert_table("3");
>
> it works fine and inserts 3 into the table
>
> but suppose i give
>
> select insert_table("");
>
> it gives an error ...saying " invalid input syntax for integer: "
>
> please suggest a solution to this problem

Don't try to use an empty string as an integer?

Seriously, you need to give information about what you want to happen, because an empty string isn't a valid integer and can't really be converted into one (we explicitly removed code that handled this case because we thought it didn't make sense).
Re: [GENERAL] Create GLOBAL TABLE
I found a solution to make a global table: create a normal table, set relisshared to true (in pg_class), and move the file from the current database's folder to the global folder. I don't know whether cascade updates work if I link this table to multiple databases.

Thx,
Marius

-----Original Message-----
From: Richard Huxton [mailto:[EMAIL PROTECTED]]
Sent: 20 October 2005 10:17
To: Marius Cornea
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Create GLOBAL TABLE

Marius Cornea wrote:
> 1. The syntax for create table is:
>    CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } ] TABLE table_name ( { column_name data_type [ DEFAULT default_expr ] [ column_constraint [ ... ] ] ...
>
> What does the parameter GLOBAL|LOCAL mean?

http://www.postgresql.org/docs/8.0/static/sql-createtable.html
"Optionally, GLOBAL or LOCAL can be written before TEMPORARY or TEMP. This makes no difference in PostgreSQL, but see Compatibility."

> 2. In pg_class there is a field "relisshared"; how can I use it?
> 3.1 How can I use a BKI script?
> 3.2 Is it possible to modify postgres.bki to create another table with the initdb script, like "create bootstrap shared_relation .."?

It almost certainly *is* possible to add your own system table. However, that's probably not something to do without taking time to make sure you know what you're doing. It will also mean your PostgreSQL installation will be different from everyone else's.

First thing to do is search the mailing list archives and see if anyone else has done the same. http://archives.postgresql.org/

Then, I'd see how an existing table was added (say pg_authid, which I think is new in 8.1 to handle roles). There is a new cvs browser from command-prompt - details here. http://archives.postgresql.org/pgsql-general/2005-10/msg00852.php

Not sure if you might want to look at whether you need to allocate another fixed OID for your new shared table too. Check it looks possible and then post details of your plan on the hackers list and see what they say.
-- Richard Huxton Archonet Ltd
[GENERAL] SHA1 authentication
Hello all,

I would like to use password authentication for pgsql users for remote backup purposes, and I don't like storing a cleartext password on the system. From the documentation I have learnt that passwords can be encrypted with the md5 and crypt methods.

But we know that md5 is rather weak, so I am asking: is there any feasible way to use SHA1 instead of MD5?

Cheers,
Bohdan
[GENERAL] Migration from 8.0 to 7.4...
Hi,

We've got a problem with postgres 8.0 (our own code is not ready yet for the new driver) but a production database is already on postgres 8.0 (too bad, we didn't test it enough...).

So we need to migrate back to postgres 7.4 and wait for a code upgrade. Is there a possible way to do this?

pg_dump output from 8.0 is not compatible with 7.4, but the output of "pg_dump -a" seems to be. One solution would be to create the tables with a 7.4 SQL script, then use pg_dump to get the data from the 8.0 database and load it into the 7.4 database, and finally restore all functions and triggers from a 7.4 SQL script.

Before testing this approach, I would like your advice on whether it has a chance of working, whether there is a better way, or anything else...

Thanks for any help

--
Romain Vinot