[GENERAL] INDIANNIC : POSTGRES WINDOWS INSTALLATION
Hi, I wrote to Selena and was asked to post this write-up on Windows installation here. We installed Postgres on our Windows 2000 server yesterday and ran into a few problems (related to initdb). We found that a few initial steps make things easier for everyone; they are not listed in the forums or the Postgres docs. If it is OK, you can post these in the Postgres Windows FAQ. The notes below may help others; they apply to Win2K and also to Win2003.

a) If your server is hosted in a datacenter, you cannot use Terminal Services for installation on Windows 2000; you need a remote desktop service. You can install RealVNC (http://www.realvnc.com) so that you get console-like access to your server. If you are using Win2003, use Remote Desktop for the installation.

b) The Postgres database server will not install as a user with administrator rights. So before starting the installation, create an ordinary user, say my_postgres, under Computer Management > Users, and remember the password. By default this new user will be a member of the Users group; select and remove this membership.

c) Create a folder, say c:\postgres. It is assumed the SYSTEM and Administrator users already have full rights over the entire C: drive.

d) Now open c:\postgres > Properties > Security and add the users my_postgres and Everyone. Both should be given full permissions (read/write/modify etc.) on the c:\postgres directory. After installation you SHOULD remove the Everyone user but retain my_postgres with selective permissions as per the Postgres documentation.

e) Go to C: drive > Properties > Quotas and make sure you give my_postgres an unlimited quota, or at least around 200 MB. If you don't do this, the installation may stop midway with errors.

f) Now you can start your installation, giving the path c:\postgres during installation instead of c:\program files, and using the user my_postgres created above.
Remember that after installation the Postgres service under Windows services will run as my_postgres for security purposes. Now remove the Everyone user from c:\postgres and adjust the my_postgres permissions as per the Postgres documentation. Restart the Postgres service and check: if the permissions on the c:\postgres directory are not correct, the service will not start correctly. Sincerely Rajesh Mahadevan Indian Network Information Centre +91-22-27693138 +91-9920572502 Mobile : +91-9821057134

Hi Rajesh, Thank you for writing that up! Could you please send your message to [EMAIL PROTECTED] I think it would be of interest to a number of people. -selena

On Feb 10, 2008 6:03 PM, INDIANNIC-HOSTING [EMAIL PROTECTED] wrote: [quoted message trimmed; identical to the write-up above]
Re: [GENERAL] ERROR: expected just one rule action
On Feb 11, 2008 4:28 PM, Tom Lane [EMAIL PROTECTED] wrote: Dave Page [EMAIL PROTECTED] writes: If the endianness isn't corrected for the non-native platform at build time, I've seen initdb leave a cluster with a completely broken pg_rewrite (iirc). Hmm, but is pg_rewrite really the most obvious symptom? In 8.3 I would expect massive breakage all over, because of the varvarlena stuff's sensitivity to endianness. It was what we found first upon investigating why my first attempts at building a universal binary failed. A quick test with pgAdmin threw an odd error following a select from a view. We eventually realised that the query trees in ev_action were all blank, if memory serves. Greg might remember more... -- Dave Page EnterpriseDB UK: http://www.enterprisedb.com The Oracle-compatible database company ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] oids
Bob Pawley wrote: All of my tables are without oids. I have an application in which I drop, then recreate a table (to reset serial numbers) and with an update on the new information I get an error about a specific oid missing. This is a known problem. It was fixed in 8.3 -- you may want to try that. -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [GENERAL] Empty to NULL conversion - Ruby - Postgres ?
On Sat, 2008-02-09 at 09:23 -0500, Venks wrote: Steve/Jeff, Thanks for your replies. I am using the latest pg module but I don't know if there is any way to handle this without SQL. I am manually taking care of it using SQL functions to convert empty strings to NULL. It would be nice if there were a setting that could take care of this issue. Hi, Can we take this issue to the ruby-pg lists (or forums), so that other ruby-pg users can see it? Show a simple irb session that demonstrates your problem, and also the version of the pg module that you are using. This issue did exist, and was fixed, but perhaps you are one version behind or something. Regards, Jeff Davis
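For readers hitting the same issue, a server-side workaround independent of the driver is SQL's NULLIF (the table and column names below are illustrative, not from the thread):

```sql
-- Store NULL instead of the empty string at write time.
INSERT INTO users (name) VALUES (NULLIF('', ''));   -- inserts NULL

-- Or normalize rows that already contain empty strings.
UPDATE users SET name = NULLIF(name, '') WHERE name = '';
```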
[GENERAL] tsearch2 text::TSVECTOR cast not working for me on Pg 8.1.6
Hi, I want to convert a TEXT string that I am mangling to TSVECTOR with a cast. I am using PostgreSQL 8.1.6 and tsearch2. According to the documentation this should work, although I am getting an ERROR. The tsearch2 reference on www.sai.msu.su says that text::TSVECTOR RETURNS TSVECTOR. FWIW, I am using regexp_replace to take out just the first occurrence of each lexeme and then trying to cast back into a tsvector. E.g.:

SELECT replace(
         regexp_replace(
           textin( tsvector_out( to_tsvector('default', coalesce(l.comments, '')) ) )::TEXT,
           '(:\\d+[ABCD]*)(\\,\\d+[ABCD]*)*', '\\1', 'g'),
         , '')::tsvector
FROM listings l LIMIT 1;
ERROR: cannot cast type text to tsvector

What is interesting is that a basic select works, probably because the pseudo-type 'cstring' is in effect.

select 'To a Vector'::TSVECTOR;
 tsvector
---
 'a' 'To' 'Vector'
(1 row)

Time: 3.297 ms

Any ideas on how to get back to a cstring so that it will work, OR how to make the text::TSVECTOR cast work? Cheers, James
[GENERAL] SPI_ERROR_CONNECT
I am receiving a SPI_ERROR_CONNECT error. From what I'm reading I could fix this in C using SPI_push(). How does one fix this with PL/pgSQL? Return error:

---
NOTICE: current day = 1
ERROR: SPI_connect failed: SPI_ERROR_CONNECT
CONTEXT: PL/pgSQL function pop_tag_day_over line 17 at FOR over SELECT rows
** Error ** ERROR: SPI_connect failed: SPI_ERROR_CONNECT
SQL state: XX000
Context: PL/pgSQL function pop_tag_day_over line 17 at FOR over SELECT rows
---

Here is my function:

---
CREATE OR REPLACE FUNCTION pop_tag_day_over() RETURNS void AS
$BODY$
DECLARE
  current_row RECORD;
  trans_day integer;
BEGIN
  trans_day := 0;
  truncate table day_over;
  FOR i IN 1..extract('day' from last_day(process_month(-1)))::integer LOOP
    execute 'CREATE OR REPLACE VIEW temp_tags_18 AS
             SELECT datetime, tagnum, tagtype, vrn FROM tag
             WHERE datetime <= process_month()
               AND datetime > (process_month() - 18 + ' || trans_day || ')
             ORDER BY vrn, tagnum, datetime';
    FOR current_row IN SELECT * from temp_tags_18_counted LOOP
      IF current_row.day_count = 1 THEN
        insert into day_over (vrn, process_day) values (current_row.vrn, (1 + trans_day));
      END IF;
    END LOOP;
    raise notice 'current day = %', trans_day + 1;
    trans_day := i;
  END LOOP;
END;
$BODY$ LANGUAGE 'plpgsql';
---
Re: [GENERAL] catalog info for sequences
On Feb 11, 2008, at 11:39 AM, Marc Munro wrote: Can someone please tell me how to extract the min, max, increment by, etc, values for a sequence from the system catalogs. Is this in the manual somewhere (I couldn't find it)? Take a look at information_schema.sequences for a list of sequence names, and select * from sequence_name for information about a specific sequence. Cheers, Steve
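Concretely, in the 8.x series each sequence can be selected from like a one-row table; a quick sketch (the sequence name here is hypothetical):

```sql
SELECT min_value, max_value, increment_by, last_value
FROM my_seq;
```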
[GENERAL] .Net Development Issues
Hey all, I'm trying to set up an ODBC connection inside Visual Studio 2005 and getting the error message [Microsoft ODBC Driver Manager] Data source name not found and no default driver specified. The confusing part about that message is that I already have the ODBC drivers installed and have been connecting to the database through the same program when run on the command line. But when we brought it into VS2005 it (at run time) won't connect to the DB. The string being used is DRIVER={PostgreSQL Unicode};SERVER=localhost;DATABASE=ASDFS;UID=peckb1;PWD=**;Dsn=PostgreSQL30W (I replaced the actual password with **, but the real password is supplied in the string.) Anyone have any ideas on how to get it to work inside VS2005? We are mainly doing this for the ability to have a graphical IDE for debugging. We are using PostgreSQL 8.2.4. - Brian Peck - 858-795-1398 - Software Engineer - Lockheed Martin
[GENERAL] Language Code to Language name conversion for use in pg_ts_config?
I've started tinkering with the FTS functionality in 8.3 and was wondering if there is a good way to convert from a language code (like en-us) to a language name as used in pg_ts_config.cfgname (like 'english'). As far as I know, ISO 639 language codes are used pretty much everywhere else in Postgres, but not there, for some reason. I can build a table to do the mapping if I have to, but wasn't sure if there was a better way planned or one that I was missing. (I want to build indexes based on languages passed in from the application as language codes.)
Re: [GENERAL] oids
On Mon, Feb 11, 2008 at 10:38:55AM -0800, Bob Pawley wrote: All of my tables are without oids. I have an application in which I drop, then recreate a table (to reset serial numbers) and with an update on the new information I get an error about a specific oid missing. Any thoughts would be appreciated. Your application has the plan for accessing that table cached, and the way the access happens under the hood is by the oid of the table. This is the table oid that someone upthread was mentioning. There's probably a less kludgey way of resetting serial numbers. Is this a sequence? What's wrong with setval()? A
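The setval() approach mentioned above would look roughly like this (table and sequence names are illustrative, not from the thread):

```sql
-- Reset the serial's sequence so the next nextval() returns 1,
-- instead of dropping and recreating the table.
SELECT setval('mytable_id_seq', 1, false);

-- Clear the rows without dropping the table; the table's oid stays
-- stable, so cached plans keyed on it don't break.
TRUNCATE mytable;
```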
Re: [GENERAL] oids
All of my tables are without oids. I have an application in which I drop, then recreate a table (to reset serial numbers) and with an update on the new information I get an error about a specific oid missing. Any thoughts would be appreciated. Bob - Original Message - From: Erik Jones [EMAIL PROTECTED] To: Bob Pawley [EMAIL PROTECTED] Cc: PostgreSQL pgsql-general@postgresql.org Sent: Monday, February 11, 2008 10:29 AM Subject: Re: [GENERAL] oids On Feb 11, 2008, at 12:15 PM, Bob Pawley wrote: I'm running Postgresql 8.2 on Windows. If I create a table 'without oids' are oids still in use behind the scenes?? Yes and no. WITHOUT OIDS specifies that you don't want each row to get its own oid. You will often hear of a table's oid, which is the oid of the pg_class entry for that table. The default when creating tables is WITHOUT OIDS and you should leave it that way. Erik Jones DBA | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate and market in style. Visit us online at http://www.myemma.com
[GENERAL] oids
I'm running Postgresql 8.2 on Windows. If I create a table 'without oids' are oids still in use behind the scenes?? Bob
[GENERAL] Question about CLUSTER
Hello, I'm planning to cluster a few large tables in our database but I'm unable to find any recommendations/documentation on best practices -- mainly, whether it's better to use an index which has a higher idx_scan value, a higher idx_tup_read value, or the highest idx_tup_fetch value. Can anyone point me to docs which explain this better? -salman
Re: [GENERAL] Working with huge amount of data.
Mario Lopez wrote: Hi guys :-), I am working on a personal project in which I am trying to make sense of a huge (at least for me) amount of data. I have approximately 150 million rows of unique words (they are not exactly words, it is just for explaining the situation). The table I am inserting this into is quite simple, something like this: CREATE TABLE public.names ( id SERIAL, name VARCHAR(255) ) WITHOUT OIDS; It is a requirement that I can make searches on the varchar with queries that look the following way: SELECT * FROM names WHERE name LIKE 'keyword%' Or SELECT * FROM names WHERE name LIKE '%keyword%' I optimized the first type of queries making partitions with every letter that a name can begin with: AFAIK, you only need to add an index on name to be able to speed up the first kind of queries. Have a look at the B-Tree description in [1]. - Jonathan [1] : http://www.postgresql.org/docs/8.3/interactive/indexes-types.html
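One caveat worth adding here (not from the thread itself): outside the C locale, a plain b-tree index is not used for LIKE 'keyword%' at all; the index has to be declared with the text_pattern_ops operator class for prefix LIKE to be indexable. A sketch against the thread's table:

```sql
-- Enables index scans for prefix LIKE in non-C locales.
CREATE INDEX names_name_like_idx ON names (name text_pattern_ops);

-- This can now use the index:
-- SELECT * FROM names WHERE name LIKE 'keyword%';
```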
Re: [GENERAL] Working with huge amount of data.
On Feb 11, 2008, at 9:37 AM, Mario Lopez wrote: Hi guys :-), I am working on a personal project in which I am trying to make sense of a huge (at least for me) amount of data. I have approximately 150 million rows of unique words (they are not exactly words, it is just for explaining the situation). The table I am inserting this into is quite simple, something like this: CREATE TABLE public.names ( id SERIAL, name VARCHAR(255) ) WITHOUT OIDS; It is a requirement that I can make searches on the varchar with queries that look the following way: SELECT * FROM names WHERE name LIKE 'keyword%' Or SELECT * FROM names WHERE name LIKE '%keyword%' I optimized the first type of queries making partitions with every letter that a name can begin with: CREATE TABLE public.names_a ( CONSTRAINT names_a_check CHECK ((name)::text ~~ 'a%'::text) ) INHERITS (public.names) WITHOUT OIDS; The problem arises with the second type of queries, where there are no possible partitions and the search keywords are not known. I have tried making indexes on the letter it ends with, or indexes that specify that it contains the letter specified, but none of them work; the planner only makes sequential scans over the table. For the moment the quickest scan I have been able to make is using grep!! Surprisingly enough, grep searches in an average of 20 seconds a whole plain-text file of 2 GB, one name per line, while PostgreSQL on the first type of queries takes like 50 seconds and the second type of queries can take up to two minutes, which is completely unacceptable for an online search engine that has to attend a user querying this information. How do the big search engines, say Google, manage this? I am amazed at the quickness of searching this amount of information in so little time. Any approach I could take? I am open minded, so anything is acceptable, not necessarily only PostgreSQL-based solutions (although I would prefer it).
By the way, Textual Search in PostgreSQL is discarded because what I am looking at are not names that can be decomposed into lexemes; let's say that this varchar is composed of random garbage. A friend of mine actually did exactly what you've tried: grep. He had a cron job that would update the txt file from the table's data every five minutes and then his app would shell out to run those kinds of queries. Of course, with a setup like that your results can be a little out of date (the period between runs of the cron job) but, if you can deal with that, that's actually a pretty simple solution that doesn't take too much setup. Erik Jones DBA | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate and market in style. Visit us online at http://www.myemma.com
Re: [GENERAL] Working with huge amount of data.
Hubert, Your two posts look pretty cool :), I will read them tonight and answer you back :) Thanks! On Mon, Feb 11, 2008 at 04:37:24PM +0100, Mario Lopez wrote: SELECT * FROM names WHERE name LIKE 'keyword%' Or SELECT * FROM names WHERE name LIKE '%keyword%' check this: http://www.depesz.com/index.php/2007/07/30/indexable-field-like-something/ and this: http://www.depesz.com/index.php/2007/09/15/speeding-up-like-xxx/ depesz
Re: [GENERAL] Working with huge amount of data.
On Mon, Feb 11, 2008 at 04:37:24PM +0100, Mario Lopez wrote: SELECT * FROM names WHERE name LIKE 'keyword%' Or SELECT * FROM names WHERE name LIKE '%keyword%' check this: http://www.depesz.com/index.php/2007/07/30/indexable-field-like-something/ and this: http://www.depesz.com/index.php/2007/09/15/speeding-up-like-xxx/ depesz -- quicksil1er: postgres is excellent, but like any DB it requires a highly paid DBA. here's my CV! :) http://www.depesz.com/ - blog dla ciebie (i moje CV)
Re: [GENERAL] Is PG a moving target?
On Feb 9, 2008, at 12:20 PM, Ken Johanson wrote: But given the recent and dramatic example of 8.3's on-by-default stricter typing in functions (now not-autocasting), I worry that kind of change could happen in every minor version (8.4 etc). You need to *know* your software if you're using it in production. 8.4 is *not* a minor version upgrade; it is a major upgrade. The Postgres guarantee is that nothing will change in behavior on the 8.x branch for a given x.
Re: [GENERAL] end of life for pg versions...
On Mon, Feb 11, 2008 at 03:26:39PM +0100, Ivan Sergio Borgonovo wrote: On Mon, 11 Feb 2008 08:46:00 -0500 Christopher Browne [EMAIL PROTECTED] wrote: On Feb 11, 2008 8:04 AM, Ivan Sergio Borgonovo [EMAIL PROTECTED] wrote: I did manage to find an announcement about the support of pg for windows... but I wasn't able to see anything you'd have a summary of scheduled and planned EOL for various pg versions (on different platform). There have been some secondary sources for support that simultaneously promise longer support times than PGDG does... What about a place where to read announcement made by the PostgreSQL Global Development Team? I can't think of going more upstream than them. http://www.postgresql.org/support/security that's probably the best one you can find. Or that in combination with the news archive (http://www.postgresql.org/about/newsarchive) //Magnus
Re: [GENERAL] end of life for pg versions...
On Mon, Feb 11, 2008 at 08:46:00AM -0500, Christopher Browne wrote: On Feb 11, 2008 8:04 AM, Ivan Sergio Borgonovo [EMAIL PROTECTED] wrote: I did manage to find an announcement about the support of pg for windows... but I wasn't able to see anything you'd have a summary of scheduled and planned EOL for various pg versions (on different platform). There have been some secondary sources for support that simultaneously promise longer support times than PGDG does... For instance, support for 7.3 has essentially ceased, but Red Hat has that included in some version(s) of their distributions that still have some time to run before they fall out of RHAT support. So if issues come up with 7.3, they *may* be indirectly addressed thru someone like RHAT. Similarly, Sun or EnterpriseDB may make support promises that exceed what PGDG offers. What you'll find, in practice, is that if you have issues with old versions, and report such, people will be quick to recommend upgrading to some version that is less ancient. Nothing has crystallized into a real policy; if someone feels like backpatching bug fixes back to 7.1, nothing is stopping them from doing so. But that takes more time and effort, so the eldest version that is now still getting patched is 7.4. And it is pretty plausible that that may change to 8.0 in the next year or so. Please note that the only documented versioning policy we do have says security fixes are only backpatched to 7.4 and later. And this change was made as we released 7.3.21 in early January. //Magnus
Re: [GENERAL] end of life for pg versions...
On Mon, 11 Feb 2008 08:46:00 -0500 Christopher Browne [EMAIL PROTECTED] wrote: On Feb 11, 2008 8:04 AM, Ivan Sergio Borgonovo [EMAIL PROTECTED] wrote: I did manage to find an announcement about the support of pg for windows... but I wasn't able to see anything you'd have a summary of scheduled and planned EOL for various pg versions (on different platform). There have been some secondary sources for support that simultaneously promise longer support times than PGDG does... What about a place where to read announcement made by the PostgreSQL Global Development Team? I can't think of going more upstream than them. thanks -- Ivan Sergio Borgonovo http://www.webthatworks.it
Re: [GENERAL] pg_stat_activity xact_start and autovacuum
Dawid Kuroczko wrote: Dawid Kuroczko wrote: I'm using 8.3.0 and I see that autovacuum processes in pg_stat_activity have xact_start. As far as I know, since at least 8.2.x VACUUM does not start a new transaction. I am referring to the E.8.3.5 Release 8.2 Release Notes: * Allow VACUUM to expire rows without being affected by other concurrent VACUUM operations (Hannu Krosing, Alvaro, Tom) Oh, I see. Well, it is certainly running in a transaction, even though that transaction does not prevent other vacuums from removing old rows. Right now I am using: SELECT extract('epoch' from min(xact_start)) AS oldest_xact_age FROM pg_stat_activity WHERE current_query NOT LIKE 'autovacuum:%'; ...which works fine, but somehow I feel that if xact_age were NULL, it would ring more true. Since VACUUM does not prevent vacuuming, it can take days to complete and still I wouldn't need to worry. ;-) Actually it's not just autovacuum; it's any lazy vacuum. It's hard to tell those processes apart in pg_stat_activity. Perhaps we could have added a column in pg_stat_activity indicating processes that don't hold old tuples, but I feel that would have been a little too much. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
Re: [GENERAL] Mechanics of Select
On Feb 11, 2008 3:56 AM, Alban Hertroys [EMAIL PROTECTED] wrote: On Feb 11, 2008, at 2:23 AM, Willem Buitendyk wrote: As others have suggested, my big problem with the function I wrote was that I had made it Volatile instead of Immutable (it is no doubt suffering from code bloat as well). That made all the difference. Curiously though - I tried it just with the date_trunc function and it was just as slow as my old Volatile function. select * from track where datetime >= '2007-04-01' and datetime < date_trunc('month','2007-04-01'::timestamp)+interval '1 month'; was about 55s That's probably because '2007-04-01'::timestamp can be at different time zones depending on client configuration and hence is volatile. If you need a timestamp you probably want to use the server's TZ, which you can specify using: timestamp at your timezone No, straight up timestamps shouldn't have this problem, only timestamptz. I'd suggest trying an index on the date_trunc function here and see if that helps.
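The suggestion above can be sketched either as a plain index plus a range test, or as a functional index on date_trunc; this uses the table and column names from the thread (a sketch, not the poster's actual schema):

```sql
-- Option 1: a plain index serves the half-open range test directly.
CREATE INDEX track_datetime_idx ON track (datetime);

SELECT * FROM track
WHERE datetime >= '2007-04-01'::timestamp
  AND datetime <  '2007-04-01'::timestamp + interval '1 month';

-- Option 2: index the date_trunc expression itself, so queries
-- filtering on date_trunc('month', datetime) can use it.
CREATE INDEX track_month_idx ON track (date_trunc('month', datetime));
```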
Re: [GENERAL] SPI_ERROR_CONNECT
Willem Buitendyk [EMAIL PROTECTED] writes: ERROR: SPI_connect failed: SPI_ERROR_CONNECT CONTEXT: PL/pgSQL function pop_tag_day_over line 17 at FOR over SELECT rows Hm, what PG version is this? And could we have a complete test case, not just the function? (I don't feel like trying to reverse-engineer your tables and views...) regards, tom lane
Re: [GENERAL] Problem in using C API - libpq
Hello, you have to use the switch -lpq http://www.postgresql.org/docs/8.3/static/libpq-build.html Regards Pavel Stehule On 12/02/2008, Shwe Yee Than [EMAIL PROTECTED] wrote: [quoted message trimmed; the full linker-error listing appears in the original post below]
Re: [GENERAL] Problem in using C API - libpq
Including the header is not enough; you must also link your binary against the library. On Feb 11, 2008, at 9:52 PM, Shwe Yee Than wrote: [quoted message trimmed; the full linker-error listing appears in the original post below]
[GENERAL] PostgreSQL does not support updateable cursors
Dear All, I got the error message 'ERROR: column "ctid" does not exist; Error while executing the query' when I try to run a SELECT query on my VIEW as 'rsSystem.Open "SELECT * FROM v_memocatlist ORDER BY memocategory", connSystem, adOpenStatic, adLockOptimistic'. I found some information on the internet saying that PostgreSQL does not support updateable cursors, so I changed my code to use a read-only cursor as 'rsSystem.Open "SELECT * FROM v_memocatlist ORDER BY memocategory", connSystem, adOpenForwardOnly, adLockReadOnly' and then it worked fine. However, I need to use this query to make the data ready to be updated on some records. How can I fix this problem? More information about my system. Database: PostgreSQL 8.3; ODBC: psqlODBC 08.03.0100; Language: MS Visual Basic 6.0 Regards, Premsun NETsolutions Asia Limited +66 (2) 237 7247
[GENERAL] Problem in using C API - libpq
Hello, I've got a problem when trying to access PostgreSQL through the C language. I've included libpq-fe.h as a header file in the C program. When I compile it, I get the following errors:

[EMAIL PROTECTED] src]$ gmake
alpha1.o(.text+0x297c6):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12726: undefined reference to `PQstatus'
alpha1.o(.text+0x297f5):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12729: undefined reference to `PQerrorMessage'
alpha1.o(.text+0x2986f):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12741: undefined reference to `PQexec'
alpha1.o(.text+0x29886):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12743: undefined reference to `PQresultStatus'
alpha1.o(.text+0x298b2):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12746: undefined reference to `PQclear'
alpha1.o(.text+0x298c3):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12747: undefined reference to `PQfinish'
alpha1.o(.text+0x298e8):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12751: undefined reference to `PQntuples'
alpha1.o(.text+0x2990c):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12752: undefined reference to `PQgetvalue'
alpha1.o(.text+0x29935):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12754: undefined reference to `PQclear'
alpha1.o(.text+0x29946):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12756: undefined reference to `PQfinish'
collect2: ld returned 1 exit status
gmake: *** [etd.cgi] Error 1

Anyone can help me? Thanks in advance Regards, Shwe
Re: [GENERAL] type casting in 8.3
On Feb 11, 3:15 am, ${spencer} [EMAIL PROTECTED] wrote: I literally just ran my first search of the day and got the same error. I changed my query so that the integer was cast to text and then it worked fine. [EMAIL PROTECTED] wrote: All, I'm getting the following (new) (php?) error on a database recently moved from 8.2.6 to 8.3. I know there have been changes with casts in 8.3 and I intend to learn how to fix them, but I don't know how to interpret the error: ERROR: operator does not exist: character = integer LINE 1: ... 303841-9' and amount 0 and quant 0 and reg_id != 99) gro... ^ HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. I think that "operator does not exist: character = integer" is the key here. Does it mean the query is trying to interpret a text string as an integer? The string ...303841-9 is an ISBN number and is char(16) in the table. Thanks for any pointers or links to a tutorial. tf Ya know, I found and fixed my problem today at work. I just explicitly cast everything in the query and as I worked my way through it I watched how the error message changed. At the very end I came up to something like ...and store != $store. I changed it to ...and store != '$store' and, bingo, the query worked again. Then I backed out all the :: casting I had done and everything still worked. 'magine that. tf ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
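A minimal illustration of the 8.3 behaviour discussed in this thread (the table and column names here are invented for the example):

```sql
-- Before 8.3, an implicit cast let a char column be compared to an
-- integer literal; 8.3 removed most implicit casts to text types.
CREATE TABLE books (isbn char(16), amount integer);

-- Fails on 8.3: operator does not exist: character = integer
-- SELECT * FROM books WHERE isbn = 303841;

-- Works: quote the literal so it is taken as a string, or cast explicitly.
SELECT * FROM books WHERE isbn = '303841-9';
SELECT * FROM books WHERE isbn = 303841::text;
```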
Re: [GENERAL] tsearch2 text::TSVECTOR cast not working for me on Pg 8.1.6
James Reynolds [EMAIL PROTECTED] writes: I want to convert a TEXT string that I am mangling to TSVECTOR with a cast. I am using Postgresql 8.1.6 and tsearch2. According to the documentation this should work although I am getting an ERROR. tsearch2 reference on www.sai.msu.su says that text::TSVECTOR RETURNS TSVECTOR Seems to be a documentation mistake, since there's no CREATE CAST anywhere in the tsearch.sql file (either in 8.1 or later versions). I think you want to use to_tsvector(). regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
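For reference, a sketch of the function-based form Tom suggests, using the tsearch2 sample configuration name 'default' (adjust to whatever configuration your installation actually has; the column and table names in the comment are hypothetical):

```sql
-- tsearch2 on 8.1: build a tsvector with the function, not a cast.
SELECT to_tsvector('default', 'The quick brown fox');

-- Casting an existing text value fails, since no such cast is defined:
-- SELECT some_text_column::tsvector FROM some_table;
```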
Re: [GENERAL] SPI_ERROR_CONNECT
Tom Lane wrote: That's a fairly bad workaround (assuming that the function is a legitimate candidate to be IMMUTABLE) because it defeats potential optimizations. What I'd suggest you do instead is rethink your apparently widespread habit of whacking your view definitions around on-the-fly. This would never have worked at all before PG 8.3 (and as you can see we still have some bugs left in supporting it in 8.3 :-(). Even when it does work, there is a whole lot of frantic paddling going on just under the surface. We may sail serenely on like the swan, but not very speedily. Yep, already started reorganizing so that I don't have to hack away at the views so much. So far I've been able to do without the functions that would only work with volatile. cheers, willem ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Is PG a moving target?
On Mon, 2008-02-11 at 09:09 +0100, Peter Eisentraut wrote: Ken Johanson wrote: Is there anything now, or in the works, for compatibility emulation? For example to setup my session to act like 8.2 and allow less-strict typing. The best way to ensure 8.2 compatibility is to use 8.2. But as casts are user definable, you can add back any casts you want. Just don't add dozens of implicit casts and then come back here wondering why your application is behaving strangely. :) As I understand it, it's tricky (or impossible) to get the 8.2 behavior back just by adding/modifying casts. If not, couldn't we just publish those casts so people can be backwards compatible if they want? Regards, Jeff Davis ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
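As an illustration of "casts are user definable", one commonly circulated recipe for restoring a single 8.2-style implicit cast on 8.3 looks roughly like this (a sketch only; as the thread warns, adding many such casts back can make queries behave strangely):

```sql
-- Recreate an implicit integer-to-text cast on 8.3.
-- textin/int4out are the built-in I/O functions used to round-trip the value.
CREATE FUNCTION pg_catalog.text(integer) RETURNS text
    STRICT IMMUTABLE LANGUAGE sql
    AS 'SELECT textin(int4out($1));';

CREATE CAST (integer AS text)
    WITH FUNCTION pg_catalog.text(integer) AS IMPLICIT;
```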
Re: [GENERAL] Language Code to Language name conversion for use in pg_ts_config?
Josh Hayes-Sheen [EMAIL PROTECTED] writes: I've started tinkering with the FTS functionality in 8.3 and was wondering if there was a good way to convert from a language code (like en-us) to a language name as used in pg_ts_config.cfgname (like 'english'), As far as I know ISO639 language codes are used pretty much everywhere else in postgres but there for some reason? I can build a table to do the mapping if I have to, but wasn't sure if there was a better way planned or one that was missing? (I want to build indexes based on languages passed in from the application as language codes) Feel free to make your own TS configuration names according to whatever convention you like. The predefined ones are really meant as samples anyway, not necessarily the ones you'd use in production. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
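Following Tom's suggestion, one hedged possibility on 8.3 is to name configurations after the application's language codes, mapping hyphens to underscores so the names need no quoting (the name en_us here is our invention, not a shipped configuration):

```sql
-- Define a configuration named after the application's language code,
-- copying the shipped English sample.
CREATE TEXT SEARCH CONFIGURATION en_us ( COPY = pg_catalog.english );

-- The application can then pass its code directly (en-us -> en_us):
SELECT to_tsvector('en_us', 'some document text');
```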
[GENERAL] WINDOWS INSTALLATION TIPS
we installed postgres on our windows 2000 server yesterday and faced a few problems (related to initdb). We found that a few initial steps will make things easy for all. These are not listed in the forums or the postgres docs. if it is ok you can post these in the postgres windows faq. we have written below a few lines which may help others. this applies to win2k and also to win2003.
a) if your server is hosted in a datacenter, for installation on windows 2000 you cannot use terminal services; you need a remote desktop service. you can install realvnc (http://www.realvnc.com), and you can uninstall it later, so that you get access to your server similar to a console. If you are using win2003, use remote desktop for installation.
b) the postgres database server will not install as a user with administrator rights. So before starting installation create a dedicated user, say my_postgres, under computer management - users, and remember the password. By default this user will be a member of the users group, which has limited permissions.
c) create a folder, say c:\postgres. It is assumed that the windows users system and administrator already have full rights over the entire c: drive.
d) now select the folder c:\postgres - properties - security, and add user my_postgres and user everyone. both users should be given full permissions to read / write / modify etc. on the c:\postgres directory. After installation you SHOULD remove user everyone but retain my_postgres with selective permissions as per the postgres documentation.
e) go to c: drive - properties - quotas and make sure you give my_postgres an unlimited quota, or at least around 200 mb. if you don't do this then installation may stop midway with errors.
f) now you can start your installation, give the path c:\postgres during installation instead of c:\program files, and use the above user my_postgres.
remember that after installation the postgres service under windows services will run as my_postgres for security purposes. Now remove user everyone from c:\postgres. Adjust the my_postgres permissions as per the postgres documentation. Restart the postgres service and check. If the permissions on the c:\postgres directory are not correct then the service will not start correctly. hope this helps many new people ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] oids
On Feb 11, 2008, at 12:15 PM, Bob Pawley wrote: I'm running Postgresql 8.2 on Windows. If I create a table 'without oids' are oids still in use behind the scenes?? Yes and no. WITHOUT OIDS specifies that you don't want each row to get its own oid. You will often hear of a table's oid, which is the oid of the pg_class entry for that table. The default when creating tables is WITHOUT OIDS and you should leave it that way. Erik Jones DBA | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate & market in style. Visit us online at http://www.myemma.com ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
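A short sketch of the distinction Erik describes (the table name is illustrative):

```sql
-- Since 8.1 the default is WITHOUT OIDS: rows carry no oid column.
CREATE TABLE t (x int);

-- The table itself still has an OID, in its pg_class row:
SELECT oid FROM pg_class WHERE relname = 't';

-- But selecting oid from the table fails unless it was created WITH OIDS:
-- SELECT oid FROM t;   -- ERROR: column "oid" does not exist
```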
Re: [GENERAL] end of life for pg versions...
On Feb 11, 2008, at 9:33 AM, Ivan Sergio Borgonovo wrote: On Mon, 11 Feb 2008 15:36:21 +0100 Magnus Hagander [EMAIL PROTECTED] wrote: http://www.postgresql.org/support/security that's probably the best one you can find. Or that in combination with the news archive (http://www.postgresql.org/about/newsarchive) Really... without making it too formal as a developer I'd appreciate a rough schedule a page where you would say something like: - we expect our next minor release will come out in X months - we expect our major release will come out in Y months - EOL of release A for platform B is planned around date Z Even with a disclaimer with a very bland commitment to the release schedule it could help developers to build up their own schedule and support list too and give some hook for advocacy as well. The problem with that is that as a volunteer-run project, dates can be off by a mile. Less than a year ago the plan was to release 8.3 in August-September 2007. Instead it was released a week or two ago. IIRC, the decision to end support for a version is determined in large part by how hard it would be to back-patch something. If a bug was found that dated back to 7.4 but was very difficult to fix in 7.4, I bet you'd see 7.4 get EOL'd unless someone wanted to pay to back-patch it. I think the closest thing to a policy you'll find is a discussion from a year or two ago where the consensus was that we should endeavor to support a version for at least 2 years after its replacement comes out (ie: 8.2 should be supported for at least 2 years after we released 8.3). -- Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED] Give your computer some brain candy! www.distributed.net Team #1828 smime.p7s Description: S/MIME cryptographic signature
Re: [GENERAL] pg_stat_activity xact_start and autovacuum
On Feb 11, 2008, at 8:14 AM, Alvaro Herrera wrote: Actually it's not just autovacuum; it's any lazy vacuum. It's hard to tell those processes apart in pg_stat_activity. Perhaps we could have added a column in pg_stat_activity indicating processes that don't hold old tuples, but I feel that would have been a little too much. I don't think it'd be too hard to construct a regex that would catch all vacuums, after which you could throw out FULLs. I'm thinking something like \s*vacuum((\s+full){0,1}\s+\S+){0,1};{0,1} Where \s indicates whitespace and \S indicates not whitespace (sorry, don't have a regex manual handy...) You could probably even simplify that to \s*vacuum(\s+full){0} Of course, you'd want to perform all of those in a case-insensitive manner. -- Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED] Give your computer some brain candy! www.distributed.net Team #1828 smime.p7s Description: S/MIME cryptographic signature
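Applied to the monitoring question from elsewhere in the thread, the suggested pattern might be used like this (a sketch; the E'' escape-string form is shown since standard_conforming_strings is off by default in 8.3):

```sql
-- Oldest transaction start, ignoring manual VACUUMs and autovacuum workers:
SELECT min(xact_start) AS oldest_xact
FROM pg_stat_activity
WHERE current_query !~* E'^\\s*vacuum'        -- case-insensitive regex
  AND current_query NOT LIKE 'autovacuum:%';
```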
Re: [GENERAL] Conditional ordering operators
You should start a project for this on pgFoundry. It looks very useful! On Feb 6, 2008, at 1:15 PM, Sergey Konoplev wrote: Hello everybody. I've written a script (see attachment) which creates operators @ - ascending ordering @ - descending ordering that allows you to replace code like this if condition1 then for select fields from tables where restrictions order by field1 desc, field2 loop actions end loop; elsif condition2 then for select fields from tables where restrictions order by field3, field1 desc, field2 desc loop actions end loop; else for select fields from tables where restrictions order by field4 loop actions end loop; end if; that way for select fields from tables where restrictions order by case when condition1 then @field1 @field2 when condition2 then @field3 @field1 @field2 else @field4 end loop actions end loop; It looks better, doesn't it? Also it provides Oracle like OVER PARTITION effect select * from ( values (1.2, '2007-11-23 12:00'::timestamp, true), (1.4, '2007-11-23 12:00'::timestamp, true), (1.2, '2007-11-23 12:00'::timestamp, false), (1.4, '2007-01-23 12:00'::timestamp, false), (3.5, '2007-08-31 13:35'::timestamp, false) ) _ order by @column1 || case when column1 = 1.2 then @column3 when column1 = 1.4 then @column3 else @column2 @column3 end; column1 | column2 | column3 -+-+- 1.2 | 2007-11-23 12:00:00 | f 1.2 | 2007-11-23 12:00:00 | t 1.4 | 2007-11-23 12:00:00 | t 1.4 | 2007-01-23 12:00:00 | f 3.5 | 2007-08-31 13:35:00 | f (5 rows) Notice that rows 1-2 and 3-4 have opposite order in third column. p.s. Unfortunately I haven't manage yet with text fields because of localization. -- Regards, Sergey Konoplevconditional_ordering.sql ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings -- Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED] Give your computer some brain candy! www.distributed.net Team #1828 smime.p7s Description: S/MIME cryptographic signature
Re: [GENERAL] Working with huge amount of data.
On Feb 11, 2008, at 10:37 AM, Mario Lopez wrote: SELECT * FROM names WHERE name LIKE 'keyword%' If you use the C locale, PG can use an index for this query. Or SELECT * FROM names WHERE name LIKE '%keyword%' But not this one - substring searches are painful. However, there is some hope - you can check out the pg_trgm module which provides trigrams and indexing methods so '%foo%' type searches can be done very quickly. Things like Google don't usually index substrings, just full words, which makes life easier. They may also use trigrams and other things of that nature if they need to do substrings (trigrams can also be used to provide "Did you mean" results). -- Jeff Trout [EMAIL PROTECTED] www.dellsmartexitin.com www.stuarthamm.net ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
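Both suggestions can be sketched against the names table from the original post (a sketch, not the poster's actual setup; pg_trgm's similarity operator is shown for the substring case, since direct index support for LIKE '%foo%' is limited in this era):

```sql
-- Prefix search: a btree index with text_pattern_ops supports
-- LIKE 'keyword%' queries even in non-C locales.
CREATE INDEX names_name_prefix ON names (name text_pattern_ops);
SELECT * FROM names WHERE name LIKE 'keyword%';

-- Substring-style search: install contrib/pg_trgm, then index with
-- the GiST trigram opclass and query by similarity.
CREATE INDEX names_name_trgm ON names USING gist (name gist_trgm_ops);
SELECT * FROM names WHERE name % 'keyword';  -- trigram similarity match
```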
Re: [GENERAL] Working with huge amount of data.
Erik, Thanks for your answers; actually this is a workable solution because my data does not get updated so frequently (every 24 hours). The problem is that I would like a more advanced version of this; there must be something I can do. I am going to try what Hubert 'depesz' Lubaczewski explained in his articles. Thanks :) On Feb 11, 2008, at 9:37 AM, Mario Lopez wrote: Hi guys :-), I am working on a personal project in which I am trying to make sense of a huge (at least for me) amount of data. I have approximately 150 million rows of unique words (they are not exactly words, it is just for explaining the situation). The table I am inserting this into is quite simple, something like this: CREATE TABLE public.names ( id SERIAL, name VARCHAR(255) ) WITHOUT OIDS; It is a requirement that I can make searches on the varchar with queries that look the following way: SELECT * FROM names WHERE name LIKE 'keyword%' Or SELECT * FROM names WHERE name LIKE '%keyword%' I optimized the first type of queries by making partitions on every letter that a name can begin with: CREATE TABLE public.names_a ( CONSTRAINT names_a_check CHECK ((name)::text ~~ 'a%'::text) ) INHERITS (public.names) WITHOUT OIDS; The problem arises with the second type of queries, where there are no possible partitions and the search keywords are not known. I have tried making indexes on the letter it ends with, or indexes that specify that it contains the letter specified, but none of them work; the planner only makes sequential scans over the table. For the moment the quickest scan I have been able to make is using grep!! Surprisingly enough, grep searches in about 20 seconds on average a whole plain-text file of 2 GB, one name per line, while PostgreSQL on the first type of queries takes like 50 seconds, and the second type of queries can take up to two minutes, which is completely unacceptable for an online search engine that has to attend a user querying this information.
How do these big search engines, let's say Google, make this work? I am amazed by the quickness of searching this amount of information in so little time. Any approach I could take? I am open-minded, so anything is acceptable, not necessarily only PostgreSQL-based solutions (although I would prefer it). By the way, Textual Search in PostgreSQL is discarded because what I am looking at are not names that can be decomposed into lexemes; let's say this varchar is composed of random garbage. Actually, a friend of mine did exactly what you've tried: grep. He had a cron job that would update the txt file from the table's data every five minutes and then his app would shell out to run those kinds of queries. Of course, with a setup like that your results can be a little out of date (the period between runs of the cron job) but, if you can deal with that, it's actually a pretty simple solution that doesn't take too much setup. Erik Jones DBA | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate & market in style. Visit us online at http://www.myemma.com ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Working with huge amount of data.
On Feb 11, 2008, at 10:37 AM, Mario Lopez wrote: The problem arises with the second type of queries, where there are no possible partitions and the search keywords are not known. I have tried making indexes on the letter it ends with, or indexes that specify that it contains the letter specified, but none of them work; the planner only makes sequential scans over the table. Postgres doesn't use indexes on prefix-wildcard searches like your '%word' type search. It will always sequentially scan the table. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] SPI_ERROR_CONNECT
The problem was with the following: FOR current_row IN SELECT * from temp_tags_18_counted The select from the [temp_tags_18_counted] view is made up of 3 cross joins. When I simplify and remove the joins everything works. I tried this with some test data with only a few rows and the joins in place and it works too. In the production data table there are about 250K rows. Is it possible that calls to queries are colliding here or not giving each other enough time before being whisked around to next call in the FOR loop? cheers, willem Tom Lane wrote: Willem Buitendyk [EMAIL PROTECTED] writes: ERROR: SPI_connect failed: SPI_ERROR_CONNECT CONTEXT: PL/pgSQL function pop_tag_day_over line 17 at FOR over SELECT rows Hm, what PG version is this? And could we have a complete test case not just the function? (I don't feel like trying to reverse-engineer your tables and views...) regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] end of life for pg versions...
On Mon, 11 Feb 2008 15:36:21 +0100 Magnus Hagander [EMAIL PROTECTED] wrote: http://www.postgresql.org/support/security that's probably the best one you can find. Or that in combination with the news archive (http://www.postgresql.org/about/newsarchive) Really... without making it too formal as a developer I'd appreciate a rough schedule a page where you would say something like: - we expect our next minor release will come out in X months - we expect our major release will come out in Y months - EOL of release A for platform B is planned around date Z Even with a disclaimer with a very bland commitment to the release schedule it could help developers to build up their own schedule and support list too and give some hook for advocacy as well. thanks -- Ivan Sergio Borgonovo http://www.webthatworks.it ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] postgresql book - practical or something newer?
On Feb 11, 2008, at 5:50 PM, Greg Smith wrote: On Tue, 5 Feb 2008, Joshua D. Drake wrote: On Tue, 5 Feb 2008 23:07:37 -0500 (EST) Greg Smith [EMAIL PROTECTED] wrote: Can anyone think of another place a community docs wiki could go at? CMD will host anything you need. Basically all it would take to get this off the ground is a host running PHP 5.0+ and PostgreSQL 8.1+ (with tsearch2) that the current Mediawiki distribution could be installed into; PHP 5.1 and PG 8.2 would be preferred. The main install instructions are at http://www.mediawiki.org/wiki/Manual:Installing_MediaWiki and I know they're good because I fixed the parts that weren't when I last installed one of these. The main open question for initial post-install configuration is how to deal with edit privileges. I think this one would be OK with letting anyone sign up for an account in an automated way, rather than requiring a human approval like the Developer's wiki does, but only allow registered accounts to edit. That will give some defense against the spammers while not making life difficult for the person who just wants to submit something at random one day. Also, a few volunteers to receive notifications of edits for some basic QA just to make sure that what's added is correct. Erik Jones DBA | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate & market in style. Visit us online at http://www.myemma.com ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] postgresql book - practical or something newer?
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On Mon, 11 Feb 2008 18:50:41 -0500 (EST) Greg Smith [EMAIL PROTECTED] wrote: I could help out with the initial setup, you could just have somebody internally do the install and let me have an account when it's ready, whatever makes sense for you. I have two articles I can submit as examples of a good format for people to use to push some initial content in there, I may turn those into a template or something. Let me know what I can do to help get this going. I just got back from scale, let me talk to the guys and see which machine this needs to go on and I will get back with you. Joshua D. Drake - -- The PostgreSQL Company since 1997: http://www.commandprompt.com/ PostgreSQL Community Conference: http://www.postgresqlconference.org/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate PostgreSQL SPI Liaison | SPI Director | PostgreSQL political pundit -BEGIN PGP SIGNATURE- Version: GnuPG v1.4.6 (GNU/Linux) iD8DBQFHsOH2ATb/zqfZUUQRAq2YAKCGObN2kEDM+k+JhtdlHUK2Wlpb2QCfWlzH CfYgYQQlxGM7HOt4kFtrgMc= =5+Uc -END PGP SIGNATURE- ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] postgresql book - practical or something newer?
On Tue, 5 Feb 2008, Joshua D. Drake wrote: On Tue, 5 Feb 2008 23:07:37 -0500 (EST) Greg Smith [EMAIL PROTECTED] wrote: Can anyone think of another place a community docs wiki could go at? CMD will host anything you need. Basically all it would take to get this off the ground is a host running PHP 5.0+ and PostgreSQL 8.1+ (with tsearch2) that the current Mediawiki distribution could be installed into; PHP 5.1 and PG 8.2 would be preferred. The main install instructions are at http://www.mediawiki.org/wiki/Manual:Installing_MediaWiki and I know they're good because I fixed the parts that weren't when I last installed one of these. The main open question for initial post-install configuration is how to deal with edit privileges. I think this one would be OK with letting anyone sign up for an account in an automated way, rather than requiring a human approval like the Developer's wiki does, but only allow registered accounts to edit. That will give some defense against the spammers while not making life difficult for the person who just wants to submit something at random one day. I could help out with the initial setup, you could just have somebody internally do the install and let me have an account when it's ready, whatever makes sense for you. I have two articles I can submit as examples of a good format for people to use to push some initial content in there, I may turn those into a template or something. Let me know what I can do to help get this going. -- * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] pg_stat_activity xact_start and autovacuum
On Feb 11, 2008 2:27 PM, Alvaro Herrera [EMAIL PROTECTED] wrote: Dawid Kuroczko escribió: I'm using 8.3.0 and I see that autovacuum processes in pg_stat_activity have xact_start. As far as I know, since at least 8.2.x the VACUUM does not start a new transaction. If that statement is correct, the xact_start column in pg_stat_activity should be NULL... Why does it matter? Monitoring. It's good to know the age of the oldest running transaction, and autovacuuming is, well, adding noise. Autovacuum certainly uses transactions ... ?? I am referring to the E.8.3.5 Release 8.2 Release Notes: * Allow VACUUM to expire rows without being affected by other concurrent VACUUM operations (Hannu Krosing, Alvaro, Tom) I have probably oversimplified my statement above. What I am monitoring is the age of the oldest transaction, to be alerted before tables accumulate too many dead rows. From this point of view a long-running VACUUM is not a problem (since release 8.2). Right now I am using: SELECT extract('epoch' from min(xact_start)) AS oldest_xact_age FROM pg_stat_activity WHERE current_query NOT LIKE 'autovacuum:%'; ...which works fine, but somehow I feel that if xact_age were NULL, it would ring more true. Since VACUUM does not prevent VACUUMING it can take days to complete and still I wouldn't need to worry. ;-) Let me know if I mixed things up horribly. :-) Regards, Dawid ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] end of life for pg versions...
On Feb 11, 2008 8:04 AM, Ivan Sergio Borgonovo [EMAIL PROTECTED] wrote: I did manage to find an announcement about the support of pg for windows... but I wasn't able to see anything you'd have a summary of scheduled and planned EOL for various pg versions (on different platform). There have been some secondary sources for support that simultaneously promise longer support times than PGDG does... For instance, support for 7.3 has essentially ceased, but Red Hat has that included in some version(s) of their distributions that still have some time to run before they fall out of RHAT support. So if issues come up with 7.3, they *may* be indirectly addressed thru someone like RHAT. Similarly, Sun or EnterpriseDB may make support promises that exceed what PGDG offers. What you'll find, in practice, is that if you have issues with old versions, and report such, people will be quick to recommend upgrading to some version that is less ancient. Nothing has crystallized into a real policy; if someone feels like backpatching bug fixes back to 7.1, nothing is stopping them from doing so. But that takes more time and effort, so the eldest version that is now still getting patched is 7.4. And it is pretty plausible that that may change to 8.0 in the next year or so. -- http://linuxfinances.info/info/linuxdistributions.html The definition of insanity is doing the same thing over and over and expecting different results. -- assortedly attributed to Albert Einstein, Benjamin Franklin, Rita Mae Brown, and Rudyard Kipling ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] pg_stat_activity xact_start and autovacuum
Dawid Kuroczko escribió: I'm using 8.3.0 and I see that autovacuum processes in pg_stat_activity have xact_start. As far as I know, since at least 8.2.x the VACUUM does not start a new transaction. If that statement is correct, the xact_start column in pg_stat_activity should be NULL... Why does it matter? Monitoring. It's good to know the age of oldest running transaction, and autovacuuming is well, adding noise. Autovacuum certainly uses transactions ... ?? -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Mechanics of Select
In response to Alban Hertroys [EMAIL PROTECTED]: On Feb 11, 2008, at 12:43 AM, brian wrote: Try: CREATE OR REPLACE FUNCTION first_day_next_month(inputdate date) RETURNS date AS $BODY$ DECLARE resultdate date; BEGIN SELECT INTO resultdate to_date(to_char((inputdate + interval '1 month'), 'YYYY-MM') || '-01', 'YYYY-MM-DD'); RETURN resultdate; END; $BODY$ LANGUAGE 'plpgsql'; No need for the variable or the SELECT, and it's an immutable function, so better define that. Besides that it's probably better to use the date_trunc function here. Try: CREATE OR REPLACE FUNCTION first_day_next_month(inputdate date) RETURNS date AS $BODY$ BEGIN RETURN date_trunc('month', inputdate + interval '1 month'); END; $BODY$ LANGUAGE 'plpgsql' IMMUTABLE; And with that I wonder why you'd even need a function :) Because it's clear what the function does by the name. It becomes self-documenting, and ginormous queries will be easier to grok with a function called first_day_next_month(). -- Bill Moran http://www.potentialtech.com ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
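A quick check of the date_trunc version discussed above (assuming it has been created as shown):

```sql
-- date_trunc returns a timestamp; the RETURNS date coercion truncates it.
SELECT first_day_next_month(DATE '2008-02-11');  -- 2008-03-01
SELECT first_day_next_month(DATE '2008-12-31');  -- 2009-01-01
```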
Re: [GENERAL] ERROR: expected just one rule action
On Feb 10, 2008 10:51 PM, Tom Lane [EMAIL PROTECTED] wrote: If these are universal (Intel+PPC) binaries, that could be an issue too. There's been some discussion recently about how to build universal binaries for PG, but I don't think anyone's figured out a really nice way to do it. If the endianess isn't corrected for the non-native platform at build time, I've seen initdb leave a cluster with a completely broken pg_rewrite (iirc). -- Dave Page EnterpriseDB UK: http://www.enterprisedb.com The Oracle-compatible database company ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] advanced database design (long)
On Feb 2, 15:15, [EMAIL PROTECTED] (Lewis Cunningham) wrote: --- vladimir konrad [EMAIL PROTECTED] wrote: I think that I understand basic relational theory but then I had an idea. Basically, instead of adding a field to a table every time there is a need for it, have a table split in two: one holds identity (id) and one holds the attributes (linked to this id). Basically, if in the future the user decides that the subject should have a new attribute, he can simply add an attribute definition and attribute_definition_set (if any) and the application would handle it. Basically, you would be creating your own data dictionary (i.e. system catalog) on top of the db data dictionary. The database already comes with a way to easily add columns: DDL. I have seen newbie database designers reinvent this method a hundred times. The performance hits and complexity of querying data would far outweigh any perceived maintenance gain. My .02. LewisC Lewis R Cunningham An Expert's Guide to Oracle Technology http://blogs.ittoolbox.com/oracle/guide/ LewisC's Random Thoughts http://lewiscsrandomthoughts.blogspot.com/ ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match I always thought that having nullable columns in a table is a Bad Thing (http://technet.microsoft.com/en-us/library/ms191178.aspx) and shows that you try to put different types of entities into the same table - having 90 in a column ... br. I think it's much better to avoid it whenever you have the info, but when you don't you just have to use the EAV model. E.g. if I knew what info I wanted to store on a person I could create columns for that, but since in our application users create the questionnaires that are used to store info on persons I see little choice - I must have a subjectID, questionID, value table.
SWK I use it in the same manner: at the time of recording, I know just one kind of entity (words) with a value as varchar. Maybe it would be better with XML, but I have long used SQL to do this job.
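A minimal sketch of the EAV layout the posters above describe (a subjectID, questionID, value table); all table and column names here are assumed for illustration:

```sql
-- Attribute definitions created by users at runtime:
CREATE TABLE question (
    question_id serial PRIMARY KEY,
    label       varchar(255) NOT NULL
);

-- One row per (subject, attribute) pair; the value column is generic:
CREATE TABLE subject_answer (
    subject_id  integer NOT NULL,
    question_id integer NOT NULL REFERENCES question,
    value       varchar(255),
    PRIMARY KEY (subject_id, question_id)
);

-- Reassembling one subject's attributes requires a join:
SELECT q.label, a.value
FROM subject_answer a
JOIN question q USING (question_id)
WHERE a.subject_id = 42;
```

As LewisC warns, every attribute you want side by side in a result row costs another join against this table, which is where the query complexity comes from.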
Re: [GENERAL] SELECT CAST(123 AS char) - 1
Ken Johanson [EMAIL PROTECTED] writes: For sake of interoperability (and using an API that requires String-type hashtable keys), I'm trying to find a single CAST (int -> var/char) syntax that works across most databases. Only char seems to be a candidate, but in 8.3 casting from an integer outputs only the first char... Is this a bug, or would someone like to horrify me by stating something like "the spec says this is correct". :-) Okay: the spec says this is correct. SQL92 section 6.1 "data type" quoth:

    character string type ::=
          CHARACTER [ left paren length right paren ]
        | CHAR [ left paren length right paren ]
    ...
    4) If length is omitted, then a length of 1 is implicit.

Therefore, writing just "char" is defined as equivalent to char(1). Also, section 6.10 "cast specification" defines an explicit cast to a fixed-length string type as truncating or padding to the target length (LTD):

    Case:
    i)   If the length in characters of SV is equal to LTD, then TV is SV.
    ii)  If the length in characters of SV is larger than LTD, then TV is
         the first LTD characters of SV. If any of the remaining characters
         of SV are non-space characters, then a completion condition is
         raised: warning - string data, right truncation.
    iii) If the length in characters M of SV is smaller than LTD, then TV
         is SV extended on the right by LTD-M spaces.

We don't report a completion condition for lack of any infrastructure for that, but the result of the expression is per spec. Possibly you could get what you want by casting to char(10) or so. regards, tom lane
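The behavior described in this thread can be seen directly; a short sketch (results as reported for 8.3 in this thread):

```sql
-- Bare "char" means char(1), so the cast truncates to one character:
SELECT CAST(123 AS char);      -- '1'

-- An explicit length truncates or right-pads to that length:
SELECT CAST(123 AS char(10));  -- '123' padded with spaces to width 10

-- varchar with no length has no implicit-1 rule in PostgreSQL:
SELECT CAST(123 AS varchar);   -- '123'
```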
Re: [GENERAL] Problem in using C API - libpq
Could you please tell me how to do so? Thanks. Ben [EMAIL PROTECTED] wrote: Including the header is not enough; you must also link your binary against the library. On Feb 11, 2008, at 9:52 PM, Shwe Yee Than wrote: Hello, I've got a problem when trying to access PostgreSQL through the C language. I've included libpq-fe.h as a header file in the C program. When I compile it, I get the following errors: [EMAIL PROTECTED] src]$ gmake alpha1.o(.text+0x297c6):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12726: undefined reference to `PQstatus' alpha1.o(.text+0x297f5):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12729: undefined reference to `PQerrorMessage' alpha1.o(.text+0x2986f):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12741: undefined reference to `PQexec' alpha1.o(.text+0x29886):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12743: undefined reference to `PQresultStatus' alpha1.o(.text+0x298b2):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12746: undefined reference to `PQclear' alpha1.o(.text+0x298c3):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12747: undefined reference to `PQfinish' alpha1.o(.text+0x298e8):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12751: undefined reference to `PQntuples' alpha1.o(.text+0x2990c):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12752: undefined reference to `PQgetvalue' alpha1.o(.text+0x29935):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12754: undefined reference to `PQclear' alpha1.o(.text+0x29946):/home/srb/SRB3_4_2/mySRB/src/alpha1.c:12756: undefined reference to `PQfinish' collect2: ld returned 1 exit status gmake: *** [etd.cgi] Error 1 Anyone can help me? Thanks in advance Regards, Shwe
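One common way to do what Ben suggests, linking against libpq, looks like this; the program name is assumed, and the header/library paths are taken from pg_config so adjust for your installation:

```sh
# -lpq links libpq; pg_config reports where PostgreSQL's headers and libraries live
gcc -o myprog myprog.c \
    -I"$(pg_config --includedir)" \
    -L"$(pg_config --libdir)" \
    -lpq
```

The "undefined reference to `PQ...'" errors above are exactly what the linker prints when `-lpq` is missing from the link line.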
[GENERAL] SELECT CAST(123 AS char) - 1
For sake of interoperability (and using an API that requires String-type hashtable keys), I'm trying to find a single CAST (int -> var/char) syntax that works across most databases. Only char seems to be a candidate, but in 8.3 casting from an integer outputs only the first char... Is this a bug, or would someone like to horrify me by stating something like "the spec says this is correct". :-) I noticed this is also occurring on date/time types, though that's not my need/concern.
Re: [GENERAL] ERROR: expected just one rule action
Dave Page [EMAIL PROTECTED] writes: On Feb 11, 2008 4:28 PM, Tom Lane [EMAIL PROTECTED] wrote: Dave Page [EMAIL PROTECTED] writes: If the endianness isn't corrected for the non-native platform at build time, I've seen initdb leave a cluster with a completely broken pg_rewrite (IIRC). Hmm, but is pg_rewrite really the most obvious symptom? In 8.3 I would expect massive breakage all over, because of the varvarlena stuff's sensitivity to endianness. It was what we found first upon investigating why my first attempts at building a universal binary failed. Okay, then that definitely suggests that this is a theory for Dave L. to pursue. In a universal build you need to do ./configure twice to generate two different pg_config.h files. I imagine his helpful fellow knew that already, if he'd been able to generate universal binaries with Xcode 2.5, but maybe 3.0 is messing it up somehow. regards, tom lane
Re: [GENERAL] Continual uptime while loading data ... COPY vs INSERTS within a transaction.
On Sat, 2008-02-09 at 19:27 -0500, Tom Lane wrote: Benjamin Arai [EMAIL PROTECTED] writes: We are thinking of modifying our system to use COPY to replace these large INSERT transactions, but we are concerned that it will greatly impact the user experience (i.e., exclusively lock the table during the copy process). First, does COPY grab an exclusive lock? Second, is there a better way to load data? No, and no. Use COPY. Unless inserting into a table that has rules, and those rules need to fire. I think I saw a post (by you, Tom) saying that COPY doesn't fire rules. (I haven't read the 8.3 release notes yet, though; if COPY fires rules in 8.3 that'd be great. I'd love to use COPY for pushing rows into the base table and having the rules fire so the right data goes into the right inherits-descendant tables.) tiger
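A quick sketch of the COPY alternative discussed above; the table, columns, and file path are assumed for illustration:

```sql
-- COPY takes the same row-level locking as INSERT; it does not lock out readers.
-- Server-side COPY reads a file visible to the backend:
COPY mytable (id, payload) FROM '/tmp/batch.csv' WITH CSV;

-- From a client, psql's \copy streams the file over the connection instead:
-- \copy mytable (id, payload) from 'batch.csv' with csv
```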
Re: [GENERAL] SELECT CAST(123 AS char) - 1
Tom Lane wrote: SQL92 section 6.1 "data type" quoth: character string type ::= CHARACTER [ left paren length right paren ] | CHAR [ left paren length right paren ] ... 4) If length is omitted, then a length of 1 is implicit. Therefore, writing just "char" is defined as equivalent to char(1). However, when length is not defined I think it will generally be safe(r) to auto-size. In the grand scheme, auto-size creates much more sensible output than a 1-char-wide one (even if right-padded to the max char-length of the type). Also, section 6.10 "cast specification" defines an explicit cast to a fixed-length string type as truncating or padding to the target length (LTD): And PG does this, perfectly. It even right-pads; the other databases (tried My and Ms) do not... Possibly you could get what you want by casting to char(10) or so. Alas, the behavior is different. The right padding exists (in PG). So I cannot get uniform behavior (the other DBs' fault, I agree, for not supporting cast as varchar). Unless PG can start throwing an exception in this version when it truncates to the implicit 1, I think it should be forgiving and auto-size... Is it possible to override this built-in cast function with a CREATE CAST?
Re: [GENERAL] SPI_ERROR_CONNECT
Thanks Tom, I sent you a test case. The problem has since been resolved by changing one of my functions to VOLATILE instead of IMMUTABLE. This has caught me twice now in the last few days. I hope my learning of this will be a little more IMMUTABLE :) cheers, willem PG 8.3 Tom Lane wrote: Willem Buitendyk [EMAIL PROTECTED] writes: The problem was with the following: FOR current_row IN SELECT * from temp_tags_18_counted The select from the [temp_tags_18_counted] view is made up of 3 cross joins. When I simplify and remove the joins everything works. I tried this with some test data with only a few rows and the joins in place and it works too. In the production data table there are about 250K rows. Is it possible that calls to queries are colliding here or not giving each other enough time before being whisked around to next call in the FOR loop? No. Please provide a test case instead of speculating. And, again, what is the PG version? regards, tom lane
Re: [GENERAL] SPI_ERROR_CONNECT
Willem Buitendyk [EMAIL PROTECTED] writes: The problem was with the following: FOR current_row IN SELECT * from temp_tags_18_counted The select from the [temp_tags_18_counted] view is made up of 3 cross joins. When I simplify and remove the joins everything works. I tried this with some test data with only a few rows and the joins in place and it works too. In the production data table there are about 250K rows. Is it possible that calls to queries are colliding here or not giving each other enough time before being whisked around to next call in the FOR loop? No. Please provide a test case instead of speculating. And, again, what is the PG version? regards, tom lane
[GENERAL] Working with huge amount of data.
Hi guys :-), I am working on a personal project in which I am trying to make sense of a huge (at least for me) amount of data. I have approximately 150 million rows of unique words (they are not exactly words; that is just for explaining the situation). The table I am inserting this into is quite simple, something like this: CREATE TABLE public.names ( id SERIAL, name VARCHAR(255) ) WITHOUT OIDS; It is a requirement that I can make searches on the varchar with queries that look the following way: SELECT * FROM names WHERE name LIKE 'keyword%' or SELECT * FROM names WHERE name LIKE '%keyword%' I optimized the first type of query by making partitions on every letter that a name can begin with: CREATE TABLE public.names_a ( CONSTRAINT names_a_check CHECK ((name)::text ~~ 'a%'::text) ) INHERITS (public.names) WITHOUT OIDS; The problem arises with the second type of query, where there are no possible partitions and the search keywords are not known in advance. I have tried making indexes on the letter a name ends with, or indexes that specify that it contains a given letter, but none of them work: the planner only makes sequential scans over the table. For the moment the quickest scan I have been able to make uses grep!! Surprisingly enough, grep searches a whole plain-text file of 2 GB (one name per line) in an average of 20 seconds, while PostgreSQL takes around 50 seconds for the first type of query, and the second type can take up to two minutes, which is completely unacceptable for an online search engine that has to serve a user querying this information. How do the big search engines, say Google, manage this? I am amazed at the speed of searching this amount of information in so little time. Any approach I could take? I am open-minded, so anything is acceptable, not necessarily only PostgreSQL-based solutions (although I would prefer one).
By the way, text search in PostgreSQL is ruled out because what I am looking at are not names that can be decomposed into lexemes; let's say that this varchar is composed of random garbage. Thanks for the time taken to read. :-)
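The planner behavior the poster describes can be illustrated directly; the index name is assumed, and the plans shown in comments depend on statistics and settings:

```sql
-- A btree index with pattern ops can serve left-anchored LIKE patterns:
CREATE INDEX names_name_idx ON public.names (name varchar_pattern_ops);

EXPLAIN SELECT * FROM names WHERE name LIKE 'keyword%';
-- the planner may use an index scan here

EXPLAIN SELECT * FROM names WHERE name LIKE '%keyword%';
-- an unanchored pattern cannot use a btree index, so this stays a seq scan
```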
[GENERAL] catalog info for sequences
Can someone please tell me how to extract the min, max, increment-by, etc. values for a sequence from the system catalogs. Is this in the manual somewhere (I couldn't find it)? Thanks __ Marc
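In the releases current at the time of this thread, each sequence is a one-row relation that holds its own parameters, so you can select them directly rather than going through a catalog table (sequence name assumed):

```sql
CREATE SEQUENCE my_seq;

-- The sequence's parameters live in its own single row:
SELECT min_value, max_value, increment_by, cache_value, is_cycled
FROM my_seq;
```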
[GENERAL] end of life for pg versions...
I did manage to find an announcement about the support of pg for Windows... but I wasn't able to find anything like a summary of scheduled and planned EOLs for the various pg versions (on different platforms). -- Ivan Sergio Borgonovo http://www.webthatworks.it
[GENERAL] pg_stat_activity xact_start and autovacuum
Hello. I'm using 8.3.0 and I see that autovacuum processes in pg_stat_activity have xact_start set. As far as I know, since at least 8.2.x VACUUM does not start a new transaction. If that statement is correct, the xact_start column in pg_stat_activity should be NULL... Why does it matter? Monitoring. It's good to know the age of the oldest running transaction, and autovacuum is, well, adding noise. Regards, Dawid
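A sketch of the kind of monitoring query this noise interferes with; the filter on current_query is an assumption based on autovacuum workers reporting queries of the form 'autovacuum: ...':

```sql
-- Age of the oldest running transaction, excluding autovacuum workers:
SELECT max(now() - xact_start) AS oldest_xact
FROM pg_stat_activity
WHERE current_query NOT LIKE 'autovacuum:%';
```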
Re: [GENERAL] using SSL in psql
Willy-Bas Loos wrote: Hi, How, using psql, can I connect to a PostgreSQL server that has hostssl in the pg_hba.conf file? I can't find the SSL option in the manpage. thx, WBL

Make sure both your server and client have SSL support compiled in. I'm not sure if that's there by default with the provided binaries, but if you compiled your own, you specified --with-openssl. Checking pg_config will be helpful here.

Make sure your server is really configured to provide SSL support: ssl=on in postgresql.conf, and be sure to have at least server.key and server.crt (and optionally your root.crt and root.crl).

Make sure to ask for an SSL connection, especially if you have both ssl and non-ssl options in pg_hba.conf. Use the environment variable PGSSLMODE=require to force the issue and test with psql. If successful, you will see a line similar to this above the ready prompt:

[Usual welcome banner snipped]
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
yourdb=>

Alternatively, the only programmatic way to tell that I know of is the sslinfo contrib module, with which you can install the function ssl_is_used() in your db.

Manual re: server setup for SSL: http://www.postgresql.org/docs/8.3/interactive/ssl-tcp.html Useful environment variables for the client: http://www.postgresql.org/docs/current/static/libpq-envars.html Regards, Paul
Re: [GENERAL] copy question - fixed width?
On Monday, February 11, 2008, Klint Gore wrote: Is there any way to make copy work with fixed-width files? I'll try to see about implementing this in pgloader; it shouldn't be complex. But we have some other things on the TODO (which should get formalized by now...). So at the moment the preprocessing sed script is a better idea than waiting for the pgloader release that will be able to process no-separator, fixed-field-length input files. Regards, -- dim
Re: [GENERAL] Is PG a moving target?
Ken Johanson wrote: Is there anything now, or in the works, for compatibility emulation? For example to set up my session to act like 8.2 and allow less-strict typing. The best way to ensure 8.2 compatibility is to use 8.2. But as casts are user definable, you can add back any casts you want. Just don't add dozens of implicit casts and then come back here wondering why your application is behaving strangely. :) -- Peter Eisentraut http://developer.postgresql.org/~petere/
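A sketch of what "adding back a cast" looks like, here restoring an implicit integer-to-text cast of the kind 8.3 removed; the function name and body are assumptions (this needs superuser to create objects in pg_catalog), and Peter's warning about implicit casts applies in full:

```sql
-- Wrap the type's output function so it can back a cast:
CREATE FUNCTION pg_catalog.text(integer) RETURNS text
    STRICT IMMUTABLE LANGUAGE sql
    AS $$ SELECT textin(int4out($1)); $$;

-- Register it as an implicit cast (use sparingly):
CREATE CAST (integer AS text)
    WITH FUNCTION pg_catalog.text(integer) AS IMPLICIT;
```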
[GENERAL] DBD::Pg 2.0.0 released
-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160

Version 2.0.0 of DBD::Pg, the Perl DBI interface to Postgres, has been released. Find it at your favorite CPAN mirror. This is a major release, so aggressive testing and feedback is much appreciated. Please report any bugs here: http://rt.cpan.org/Public/Dist/Display.html?Name=DBD-Pg

SHA1 and MD5 checksums:
920fa860ee79f4184201005264ab0ef8d7c0e479  DBD-Pg-2.0.0.tar.gz
71757069e05662985f2931e84edc3e6c  DBD-Pg-2.0.0.tar.gz

From the Changes file:

Major changes:
- Make minimum supported server 7.4. [GSM]
- Overhaul COPY functions: deprecate pg_getline, pg_putline, and pg_endcopy. The new functions are pg_getcopydata, pg_getcopydata_async, pg_putcopydata, and pg_putcopyend. [GSM]
- Add support for arrays: can pass in arrayrefs to execute, and they are automatically returned as arrays when fetching. [GSM]
- Add support for asynchronous queries. [GSM]
- Allow raw transaction statements through - in other words, do not croak if $dbh->prepare("COMMIT") is attempted. Not only was this a little too controlling, there is a growing host of other commands such as COMMIT PREPARED that we need to allow. [GSM]
- Check transaction status after each command, to allow things such as 'PREPARE TRANSACTION' to work properly. (CPAN bug #32423) [GSM]
- Overhauled the data type system. [GSM]
- Switch from cvs to subversion. Switch from gborg to perl.org.
- Change versioning system to a three-number system.

Bug fixes:
- Add $dbh->{pg_placeholder_dollaronly} to allow '?' and other symbols to be used in prepared statements without getting interpreted as placeholders, i.e. the geometric operator '?#'. (CPAN bug #24124) [GSM]
- Fix memory leak in bytea quoting. (CPAN bug #21392) Fix memory leak in pg_notifies. [Stephen Marshall [EMAIL PROTECTED]]
- Fix memory leak when using savepoints. (CPAN bug #29791) [EMAIL PROTECTED]
- Use adbin, not adsrc, when figuring out the sequence name for the last_insert_id() method. This allows the function to work properly if the sequence name is changed. Note that {pg_cache=>0} should be passed to the function if you expect this might happen. (CPAN bug #30924) [GSM]
- Use unsigned chars when parsing passed-in queries, preventing UTF-8 strings from ruining the prepare. UTF-16 may still cause problems. (CPAN bug #31577) [GSM]
- Fix crash when executing query with two placeholders side by side. Thanks to Daniel Browning for spotting this. [GSM]
- Skip item if no matching key in foreign_key_info. (CPAN bug #32308) [GSM]
- Fix bug in last_insert_id. (CPAN bug #15918) [EMAIL PROTECTED]
- Fix pg_description join in table_info(). [Max Cohan [EMAIL PROTECTED]]
- Make sure arrays handle UTF-8 smoothly. (CPAN bug #32479) [GSM]
- Force column names to respect utf8-ness. Per report from Ch Lamprect. [GSM]
- Make sure array items are marked as UTF as needed. (CPAN bug #29656) [GSM]
- Force SQL_REAL and SQL_NUMERIC to be float8 not float4. (CPAN bug #30010) [GSM]
- Allow objects with stringification overloading to work with quote(). (CPAN bug #32868) [David E. Wheeler and GSM]
- Use prepare_cached in last_insert_id function. (CPAN bug #24313)
- Switch from pow to powf to support AIX compiler issue. (CPAN bug #24579) [GSM]

Enhancements and API changes:
- Complain loudly and fail to proceed if Makefile.PL finds no -lpq. [GSM]
- Add three new columns to column_info, to return unquoted versions: pg_schema, pg_table, and pg_columns. Add all three to primary_key_info, and the first two to table_info. (CPAN bug #20282) [GSM]
- Change $dbh->{User} to $dbh->{Username}. [GSM]
- Change $dbh->{Name} to return the entire DSN string, minus the 'dbi:Pg:' part. Thanks to Mark Stosberg for the idea. [GSM]
- Allow data_sources to accept optional arguments. [GSM]
- Add private_attribute_info() method. [GSM]
- Add SQL_INTERVAL and others to types.c. [GSM]
- Added statistics_info function. [Brandon Black [EMAIL PROTECTED]]
- Be much more flexible in test connection options. [GSM]
- Overhaul test suite, allow tests to be run individually. [GSM]

New and experimental:
- Quick support for named trace level 'SQL'. [GSM]
- Very experimental support for bind_param_inout, use with caution. [GSM]

Documentation fixes:
- Fix bad PG_INTEGER example in docs, thanks to Xavi Drudis Ferran. (CPAN bug #31545) [GSM]
- Fix META.yml file. (CPAN bug #25759) [GSM]

--
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200802111802
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-----BEGIN PGP SIGNATURE-----
iD8DBQFHsNQOvJuQZxSWSsgRAyAhAKD0yHpd4d5+/K4kfztUO817TodL5wCglQgl
0ja+V2hAL5dVDb+f1dtuEhU=
=hJTb
-----END PGP SIGNATURE-----
[GENERAL] using SSL in psql
Hi, How, using psql, can I connect to a PostgreSQL server that has hostssl in the pg_hba.conf file? I can't find the SSL option in the manpage. thx, WBL
Re: [GENERAL] Mechanics of Select
On Feb 11, 2008, at 2:23 AM, Willem Buitendyk wrote: As others have suggested, my big problem with the function I wrote was that I had made it VOLATILE instead of IMMUTABLE (it is no doubt suffering from code bloat as well). That made all the difference. Curiously though, I tried it just with the date_trunc function and it was just as slow as my old VOLATILE function.

select * from track where datetime >= '2007-04-01' and datetime < date_trunc('month','2007-04-01'::timestamp) + interval '1 month';

was about 55s.

That's probably because '2007-04-01'::timestamp can be at different time zones depending on client configuration and hence is volatile. If you need a timestamp you probably want to use the server's TZ, which you can specify using: timestamp AT TIME ZONE your time zone. Are you always entering the first day of a month for the start date? In that case you can leave out the entire date_trunc, as the interval already calculates the correct length internally:

template1=# select '2007-04-01'::date + interval '1 month';
      ?column?
---------------------
 2007-05-01 00:00:00

select * from track where datetime >= '2007-04-01' and datetime < first_day_next_month('2007-04-01');

was about 36s.

Also, specifying dates like this without specifying their format is a bad habit in my book. You're in trouble as soon as the date format for your database changes (different system, for example). I suggest you use to_date('2007-04-01', 'YYYY-MM-DD') instead. Alban Hertroys -- If you can't see the forest for the trees, cut the trees and you'll see there is no forest.
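Combining both suggestions from this message, an explicit date format plus interval arithmetic gives a half-open month range without date_trunc; the table and column come from the thread, the rest is a sketch:

```sql
SELECT *
FROM track
WHERE datetime >= to_date('2007-04-01', 'YYYY-MM-DD')
  AND datetime <  to_date('2007-04-01', 'YYYY-MM-DD') + interval '1 month';
```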
Re: [GENERAL] ERROR: expected just one rule action
Dave Page [EMAIL PROTECTED] writes: If the endianness isn't corrected for the non-native platform at build time, I've seen initdb leave a cluster with a completely broken pg_rewrite (IIRC). Hmm, but is pg_rewrite really the most obvious symptom? In 8.3 I would expect massive breakage all over, because of the varvarlena stuff's sensitivity to endianness. regards, tom lane
Re: [GENERAL] Is PG a moving target?
On Monday 11 February 2008 14:49, Jeff Davis wrote: On Mon, 2008-02-11 at 09:09 +0100, Peter Eisentraut wrote: Ken Johanson wrote: Is there anything now, or in the works, for compatibility emulation? For example to setup my session to act like 8.2 and allow less-strict typing. The best way to ensure 8.2 compatibility is to use 8.2. But as casts are user definable, you can add back any casts you want. Just don't add dozens of implicit casts and then come back here wondering why your application is behaving strangely. :) As I understand it, it's tricky (or impossible) to get the 8.2 behavior back just by adding/modifying casts. If not, couldn't we just publish those casts so people can be backwards compatible if they want? that was the idea behind castcompat, which didn't get far out of the gate before several examples cropped up showing how backwards-compatible casting would break new 8.3 system expectations. -- Robert Treat Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL
Re: [GENERAL] SPI_ERROR_CONNECT
Willem Buitendyk [EMAIL PROTECTED] writes: I sent you a test case. Thanks for the test case --- I've committed a patch: http://archives.postgresql.org/pgsql-committers/2008-02/msg00108.php The problem has since been resolved by changing one of my functions to VOLATILE instead of IMMUTABLE. This has caught me twice now in the last few days. That's a fairly bad workaround (assuming that the function is a legitimate candidate to be IMMUTABLE) because it defeats potential optimizations. What I'd suggest you do instead is rethink your apparently widespread habit of whacking your view definitions around on-the-fly. This would never have worked at all before PG 8.3 (and as you can see we still have some bugs left in supporting it in 8.3 :-(). Even when it does work, there is a whole lot of frantic paddling going on just under the surface. We may sail serenely on like the swan, but not very speedily ... regards, tom lane
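For readers following this thread, volatility is declared per function; a minimal sketch with assumed names, illustrating when IMMUTABLE is and is not appropriate:

```sql
-- Safe to declare IMMUTABLE: result depends on the arguments alone.
CREATE FUNCTION add_one(integer) RETURNS integer
    LANGUAGE sql IMMUTABLE AS $$ SELECT $1 + 1 $$;

-- A function that reads tables must not be IMMUTABLE, since the
-- table contents can change between calls:
CREATE FUNCTION row_total() RETURNS bigint
    LANGUAGE sql STABLE AS $$ SELECT count(*) FROM some_table $$;
```

Declaring a table-reading function IMMUTABLE invites exactly the kind of stale-plan surprises described above; the planner is entitled to fold an IMMUTABLE call to a constant.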
Re: [GENERAL] oids
On Mon, Feb 11, 2008 at 10:15:40AM -0800, Bob Pawley wrote: I'm running Postgresql 8.2 on Windows. If I create a table 'without oids' are oids still in use behind the scenes?? Nope. Have a nice day, -- Martijn van Oosterhout [EMAIL PROTECTED] http://svana.org/kleptog/ Those who make peaceful revolution impossible will make violent revolution inevitable. -- John F Kennedy