[GENERAL] pg_dump in cycle
Dear Sirs, I want to dump all databases, but each database separately into its own file, not all databases in one single file as pg_dumpall does. How can I implement that? Cheers, Ilia Chipitsine ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Auto increment/sequence on multiple columns?
You'll probably need a sequence per thread. A sequence is not necessarily tied to a column. -tfo On Sep 12, 2004, at 11:16 AM, Nick wrote: This is actually a table that holds message threads for message boards. Column A is really 'message_board_id' and column B is 'thread_id'. I would like every new thread for a message board to have a 'thread_id' of 1 and increment from there on. -Nick
Re: [GENERAL] ECPG Mac OS X
On Wed, Sep 15, 2004 at 01:28:14AM -0400, Richard Connamacher wrote: Also, anyone know if it can parse Objective C files? They're basically identical to C language files, with two added constructs: method calls, which are surrounded by brackets: ... It depends where these constructs are used. If they are used inside an SQL declare section it will probably cause trouble, as ecpg does not know them. Michael -- Michael Meskes Email: Michael at Fam-Meskes dot De ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED] Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!
[GENERAL] Invalid large object descriptor : 0 with autocommit
Hi, I have an application that ran on Oracle which, for some reason, requires autocommit to be true. When we moved this application to Postgres, we moved the blob column to large objects. But PostgreSQL doesn't seem to be able to use large objects with autoCommit = true. Is there any other way to work around this problem? Thanks in advance! -- Aditya Kulkarni
[GENERAL] table partitioning
A couple of days ago I announced that I wrote a JDBC driver that adds table partitioning features to databases accessed via JDBC. I also wrote: In case you think this could be of any interest if integrated in Postgresql (I mean if it was a core functionality of Postgresql, not just a JDBC driver) let me know. But nobody seemed to care... Wouldn't this be an interesting feature of Postgresql?
Re: [GENERAL] pg_dump in cycle
Use pg_dump instead of pg_dumpall. Example: pg_dump databaseName > databaseDumpFile. If you have many databases, you can make a script that dumps each database into its own file. - Original Message - From: Ilia Chipitsine [EMAIL PROTECTED] Sent: Thursday, September 16, 2004 8:08 AM Subject: [GENERAL] pg_dump in cycle
Re: [GENERAL] pg_dump in cycle
You can use: select datname from pg_database; in order to get the list of databases. HTH, Najib. - Original Message - From: Ilia Chipitsine [EMAIL PROTECTED] Sent: Thursday, September 16, 2004 10:41 AM: sure, I have many databases. How can I write such a script without explicitly specifying database names? I do not want to modify that script after I have added a database.
Re: [GENERAL] pg_dump in cycle
Yes, but how can I integrate that query with a shell script (which will perform the actual dumping)? I would even say: select datname from pg_database where not datistemplate, because otherwise pg_dump will complain about template0. Cheers, Ilia Chipitsine
Re: [GENERAL] pg_dump in cycle
Check out the psql command. You can use: psql -l, which outputs something like:

        List of databases
      Name       |   Owner   | Encoding
-----------------+-----------+-----------
 dragon_devel    | ptufenkji | UNICODE
 dragon_devel_v2 | ptufenkji | UNICODE
 dragon_joujou   | ptufenkji | UNICODE
 dragon_prod     | ptufenkji | UNICODE
 fgm             | gnakhle   | UNICODE
 fgm_eval        | ptufenkji | UNICODE
 hotline         | postgres  | UNICODE
 template0       | postgres  | SQL_ASCII
 template1       | postgres  | SQL_ASCII
 usj             | ptufenkji | UNICODE

Call that from the shell script and do some text filtering in order to retrieve the database names. (Or maybe you can make a connection to the database from the shell script; I am not sure, I don't have much experience in shell scripting.) - Original Message - From: Ilia Chipitsine [EMAIL PROTECTED] Sent: Thursday, September 16, 2004 11:01 AM Subject: Re: [GENERAL] pg_dump in cycle
[GENERAL] problem with pg_restore and user privileges
Hi, I'm a PostGIS user, and I have a problem restoring data from 7.4 to 8.0.0beta2. I use the postgis_restore.pl script that comes with the PostGIS distribution. I do the following for the dump:

pg_dump -Fc mydb > mydb.sql

and the script does the following restore operations:

... some commands ...
open( PSQL, "| ./psql -a dbname") || die "Can't run psql\n";
...
pg_restore -l mydb.sql > dump.list
... some commands ...
pg_restore -L dump.list mydb.sql
... some commands ...

It seems to work, except for the user privileges: at the end, no user privileges are restored in the new database. My doubt is the following: does pg_restore -l mydb.sql > dump.list write the privileges information into the dump.list file or not? Any hints? Thanks, Reds
Re: [GENERAL] pg_dump in cycle
This seems to be more interesting for shell scripting:

psql -d DatabaseName -c 'select datname from pg_database where not datistemplate';
     datname
-----------------
 fgm_eval
 hotline
 usj
 dragon_devel
 dragon_joujou
 dragon_devel_v2
 dragon_prod
 fgm
(8 rows)

Cheers, Najib.
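The pieces from this thread can be combined into a short script. This is only a sketch: the output directory is a made-up placeholder, and it assumes pg_dump and psql are on the PATH and can connect without prompting for a password.

```shell
#!/bin/sh
# Sketch: dump every non-template database into its own file.
# Assumes pg_dump/psql can connect without a password prompt.
dump_all() {
    outdir=${1:-./pg_backups}    # placeholder output directory
    mkdir -p "$outdir" || return 1
    # -t (tuples only) and -A (unaligned) give one bare database name per line
    for db in $(psql -t -A -d template1 \
        -c 'select datname from pg_database where not datistemplate'); do
        pg_dump "$db" > "$outdir/$db.sql"
    done
}
```

Running something like `dump_all /var/backups/pgsql` from cron would then pick up newly created databases automatically, with no need to edit the script after adding a database.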
Re: [GENERAL] [HACKERS] Problems with SPI memory management
On Wed, 2004-09-15 at 19:51, Tom Lane wrote: You'd be well advised to be doing this sort of hackery in a build with --enable-cassert. That turns on CLOBBER_FREED_MEMORY, which makes misuse of freed memory a whole lot more obvious. I did this, but when I try to create a function the following message is displayed: ERROR: could not load library /usr/local/pgsql2/PLANSandTESTS/libtimes.so: /usr/local/pgsql2/PLANSandTESTS/libtimes.so: undefined symbol: assert_enabled Any ideas? Is there any file I should include during compilation? I couldn't find any HOW-TO or similar. Regards, Ntinoas Katsaros
Re: [GENERAL] schema level variables
Hi, thank you for your feedback. Actually, we are trying to move to Postgres as a company, not just move a particular project, so we have to show the corporate hierarchy that the move is not only good but feasible, and we would like to rummage through as little of the legacy code as possible. I am helping move one app for now (as a test case), but we would like to ensure smooth migration of the rest. For this we are ready to contribute to Postgres if necessary. The gist is that it would be good if some of the features we use often could go into Postgres itself. Global variables (package/schema level) seem like a good feature addition to Postgres. If someone could give us some pointers on how to go about evaluating the feasibility of implementation, that would be very helpful. Thanks, Paramveer Singh. Shridhar Daithankar wrote on 15/09/2004: On Wednesday 15 Sep 2004 6:12 pm, [EMAIL PROTECTED] wrote: Hi! I am trying to port an Oracle app to Postgres, and I don't know what to do with package-scope variables. I was looking up some documentation and it seems (IMHO) that schemas would be a nice place to put the variables in (as they already have functions, operators and types). Is this feasible? Is the dev team interested in doing this at some point in the future? Can you replace the package-level variable name with a function? The function would run a select against a table that stores name/value pairs. Of course the table has to be limited to the schema itself. Would that be an acceptable work-around? Shridhar
[GENERAL] lexicographical ordering in postgres
Hi! I created a table in Postgres with varchar values in it, and I noticed that Postgres's lexicographical ordering is weird in the sense that it ignores whitespace. Please look at the result I got:

select * from tablename order by columnname;
 cloumnname
------------
 one 1
 one  1
 one 12
 one 2
 one 30
(5 rows)

This means that 'onespace1' and 'onespacespace1' are the same lexicographically. Is this correct? thanks, paraM
[GENERAL] Stemmer integration in tsearch2 / $libdir error
Hello everybody, after installing tsearch2 nice and smoothly and following the steps provided on Oleg's and Teodor's page, I come to an abrupt end after:

# Compile and install dictionary
cd PGSQL_SRC/contrib/dict_fr
make
make install

which still seems to work fine! Next comes:

sh-2.05b$ psql testdb < /usr/local/pgsql/share/contrib/dict_fr.sql
SET
BEGIN
ERROR: could not access file $libdir/dict_fr: No such file or directory
ERROR: current transaction is aborted, commands ignored until end of transaction block
COMMIT

I also tried to run it directly from the directory (psql testdb < dict_fr.sql) with the same errors. dict_fr.sql does exist in the directory, but somehow I can't get the $libdir path inserted right. Is this $libdir thing a path that I'm supposed to set globally somehow?
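One way to debug this kind of error: $libdir is not something you set globally; the server expands it at load time to the directory reported by pg_config --pkglibdir, so the usual cause is that the module was installed into a different PostgreSQL installation's library directory. A hedged sketch for checking (assumes pg_config is on the PATH; the file name dict_fr.so is inferred from the error message):

```shell
# $libdir is not an environment variable: the server expands it to the
# directory reported by `pg_config --pkglibdir` when loading a module.
if command -v pg_config >/dev/null 2>&1; then
    libdir=$(pg_config --pkglibdir)
    echo "server module directory: $libdir"
    # The ERROR above means this file is missing from that directory:
    ls "$libdir/dict_fr.so" 2>/dev/null \
        || echo "dict_fr.so is not installed in $libdir"
else
    echo "pg_config not found in PATH"
fi
```

If the reported directory differs from where `make install` put the file, copying the .so there (or rebuilding against the right pg_config) should make the CREATE FUNCTION statements in dict_fr.sql succeed.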
Re: [GENERAL] lexicographical ordering in postgres
Check your locale settings. The en_US locale sorts like that, for example... On Thu, Sep 16, 2004 at 04:00:43AM -0500, [EMAIL PROTECTED] wrote: I created a table in postgres with varchar values in it, and I noticed that postgres lexicographical ordering is weird in the sense that it ignores whitespaces. -- Martijn van Oosterhout [EMAIL PROTECTED] http://svana.org/kleptog/ Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them.
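The effect is easy to reproduce outside the database, since `sort` obeys the same locale machinery: under the byte-wise C locale the space sorts before any digit, while a dictionary locale such as en_US (if installed on the system) largely ignores it.

```shell
# Under the C locale, comparison is byte-by-byte: space (0x20) sorts
# before digits, so 'one 1' < 'one 12' < 'one 2'.
printf 'one 2\none 12\none 1\n' | LC_ALL=C sort

# Under a dictionary locale (if available), whitespace is mostly ignored
# and the ordering matches what the query above showed.
printf 'one 2\none 12\none 1\n' | LC_ALL=en_US.UTF-8 sort 2>/dev/null || true
```

The database cluster's collation is fixed at initdb time, so getting C-style ordering back means re-initializing with LC_COLLATE=C (or, in later releases, using per-column collations).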
[GENERAL] REFERENCES
Hi, I got the following scenario in a database:

CREATE TABLE persons (
    id char(15),
    name varchar(80)
);

CREATE TABLE customer (
    discount int4
) INHERITS (persons);

CREATE TABLE personal_data (
    id REFERENCES persons(id) MATCH FULL ON UPDATE CASCADE ON DELETE CASCADE,
    class char(20),
    info varchar(40)
);

The point of it is, I want to be able to add a mobile phone number as:

INSERT INTO personal_data (id, class, info) VALUES (ID, 'MOBILE', PHONENO);

yet still have it automatically updated or deleted if the person disappears from the database, and not stick around as a ghost. This works OK, except that the table customer breaks it: I can't add personal_data for a customer, even though all customer fields and data, including the id, are in the persons table. So my question is: is there a way to do this now, will this be possible in the future, or what is the status? Mvh, Örn
[GENERAL] Postgresql -- webservices?
Hello, Does anyone have experience in interfacing a Postgresql database (tables? plpgsql functions? perl functions?) with the outside world through webservices? (XML-RPC, SOAP, UDDI, WSDL...) Philippe
Re: [GENERAL] Tsearch2 adding additional dictionaries
Marcel Boscher wrote: Hello everybody, I'm having a hard time trying to install an ispell dictionary into tsearch2... I do exactly as I'm being told on the website: http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_compound_words Everything goes fine until I try the examples like: # select lexize('norwegian_ispell','overbuljongterningpakkmesterassistent'); {over,buljong,terning,pakk,mester,assistent} My problem then is that as a result of my select I get NULL. That's it... no fancy errors, nothing, just the NULL. Any ideas? Thanks in advance, Marc. Look, to_tsquery('overbuljongterningpakkmesterassistent') returns the query: over buljong terning pakk mester assistent. Are you sure that text containing all of those words exists? -- Teodor Sigaev E-mail: [EMAIL PROTECTED]
[GENERAL] Postgres memory usage
Hi: Our postgres database has tables with several million rows, in a server running Red Hat 8.0 with 1GB of memory. Recently we are experiencing low performance in access to the server via HTTP; after rebooting the server the speed is the same. I have noticed that available memory is apparently too low, according to top:

12:58pm up 1:28, 3 users, load average: 0,00, 0,01, 0,09
94 processes: 91 sleeping, 3 running, 0 zombie, 0 stopped
CPU states: 0,0% user, 0,0% system, 0,0% nice, 100,0% idle
Mem: 1031012K av, 1021440K used, 9572K free, 0K shrd, 62864K buff
Swap: 2040244K av, 14960K used, 2025284K free, 876808K cached

Is it normal for Postgres to allocate almost all the memory in the computer? Thanks in advance. Ruben.
Re: [GENERAL] Postgresql -- webservices?
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Philippe Lang) wrote: Does anyone have experience in interfacing a Postgresql database (tables? plpgsql functions? perl functions?) with the outside world through webservices? (XML-RPC, SOAP, UDDI, WSDL...) Yeah, I did some of this using the Perl SOAP module. The robust way involves getting Apache involved so that you've got something that starts the services 'on demand,' as well as a connection pool manager. Perl's weaker on the WSDL side of things, as that is something typically autogenerated by a language compiler, whilst Perl is pretty dynamic and way too weakly typed; if you want WSDL, Java is probably the way to go... Contrary to how it gets billed, this is pretty heavyweight stuff, because you have a pretty thick layer of XML encoding on top of the data. -- cbbrowne,@,acm.org http://linuxfinances.info/info/soap.html What I find most amusing about COM and .NET is that they are trying to solve a problem I only had when programming using MS tools. -- Max M [EMAIL PROTECTED] (on comp.lang.python)
Re: [GENERAL] Checking regex pattern validity
Tom Lane wrote: David Garamond [EMAIL PROTECTED] writes: Is there a function like IS_VALID_REGEX() to check whether a pattern is valid (i.e. it compiles)? I'm storing a list of regex patterns in a table. It would be nice to be able to add a CHECK constraint to ensure that all the regexes are valid. ... CHECK (('' ~ pattern) IS NOT NULL) ... Not exactly what I wanted, but close enough. Thanks! However, what if I only want to accept invalid regex patterns? Or what if an invalid pattern should be converted to NULL automatically? I'd still vote for a function... -- dave
Re: [GENERAL] Postgresql -- webservices?
Christopher Browne [EMAIL PROTECTED] wrote: Yeah, I did some of this using the Perl SOAP module... I've done this twice with C and the gsoap library. Works very well, but you have the development time and effort involved with C apps. gsoap generates a WSDL from your header files, which is nice. And, of course, it's very fast. You have to write your own connection handling routines, so there's a bit of work to do there, especially if you want to avoid the latency of establishing the Postgres connection and thus need preforked or prethreaded systems. -- Bill Moran Potential Technologies http://www.potentialtech.com
Re: [GENERAL] Postgres memory usage
On Sep 16, 2004, at 8:58 AM, [EMAIL PROTECTED] wrote: I have noticed that available memory is apparently too low, according to top: Mem: 1031012K av, 1021440K used, 9572K free, 0K shrd, 62864K buff Swap: 2040244K av, 14960K used, 2025284K free, 876808K cached In the Unix world, free memory is mostly useless, because the OS will give up the various buffers it is using for caching if an app needs memory. It is usually best to look at the output of free, which will show you how much of the memory is used by buffers and caches. If that number is also low, you should look to see how much memory your applications are using. Also, what are your shared_buffers and sort_mem settings set to in postgresql.conf? Remember that on Linux top will include how much shared memory an app has touched in its SIZE. (But you can also look at the SHARE column to see how much of the size is shared.) -- Jeff Trout [EMAIL PROTECTED] http://www.jefftrout.com/ http://www.stuarthamm.net/
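The same point can be checked directly from /proc/meminfo on Linux: memory held as Buffers and Cached is reclaimable page cache, so the amount genuinely available to applications is roughly free + buffers + cached, not the bare "free" figure top shows. A Linux-specific sketch:

```shell
# Roughly estimate memory actually available to applications:
# MemFree plus the reclaimable Buffers and Cached figures.
# (Linux-specific: reads /proc/meminfo; values are in kB.)
awk '/^MemFree:|^Buffers:|^Cached:/ { avail += $2 }
     END { printf "free + reclaimable: %d kB\n", avail }' /proc/meminfo
```

In the top output quoted above this would report roughly 9572 + 62864 + 876808 kB, i.e. over 900 MB effectively available, which is why the "9572K free" figure alone is not a sign of memory pressure.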
Re: [GENERAL] psql + autocommit
* Peter Eisentraut: | John Sidney-Woollett wrote: | Any ideas why the global file doesn't work? | | There is no support for a global configuration file at this time. I | suggested that you *implement* it. Version 8.0.0beta2 supports a global configuration file. It should be located in '~postgres/etc/pgsql'. -- Lars Haugseth
Re: [GENERAL] psql + autocommit
Thanks - I'll live with modifying the ~/.psqlrc file until we move to version 8. John Sidney-Woollett Lars Haugseth wrote: Version 8.0.0beta2 supports a global configuration file. It should be located in '~postgres/etc/pgsql'.
Re: [GENERAL] psql + autocommit
Thanks to you and Bruce for the info. I'll live with modifying the ~/.psqlrc file until we move to version 8. John Sidney-Woollett
Re: [GENERAL] Strange UTF-8 behaviour
Hi, (UTF-8 encoded) Sorry, I actually forgot to switch encoding :) I just hope the last part of the email was readable. Ciao ciao -- Matteo Beccati http://phpadsnew.com/ http://phppgads.com/
[GENERAL] UTF-8 question.
I'm new to PostgreSQL, and from the looks of it, it's a great database, and I'll be using more of it in the future. I had a quick question if anyone could clear this up. The documentation for PostgreSQL (version 7.1, the version this server is using) says that it supports multibyte character encodings like Unicode (which implies UTF-16 encoding). Later on, the same page says that Unicode is represented using UTF-8 encoding. UTF-8 is the 8-bit version of Unicode. The multibyte version of Unicode is UTF-16. So, which is it? If I create a database using Unicode as the encoding, will the encoding be UTF-8 (singlebyte) or UTF-16 (multibyte)? Thanks! Rich
Re: [GENERAL] UTF-8 question.
At 8:39 PM -0400 9/16/04, Richard Connamacher wrote: I'm new to PostgreSQL, and from the looks of it, it's a great database, and I'll be using more of it in the future. I had a quick question if anyone could clear this up. The documentation for PostgreSQL (version 7.1, the version this server is using) says that it supports multibyte character encodings like Unicode (which implies UTF-16 encoding). Don't confuse Unicode, the 'character set' and rules for characters, represented by a sequence of abstract 32 bit integers, with UTF-[8|16|32] which is a way to encode those abstract integers into a stream of bytes someplace. Later on, the same page says that Unicode is represented using UTF-8 encoding. UTF-8 is the 8-bit version of Unicode. The multibyte version of Unicode is UTF-16. So, which is it? If I create a database using Unicode as the encoding, will the encoding be UTF-8 (singlebyte) or UTF-16 (multibyte)? Erm... UTF-8 *is* a multibyte encoding. Up to 6 bytes per code point, if things get really degenerate. (And, last I checked, means you can have up to 70 bytes for really degenerate characters, but my memory might be off (could be 80)) UTF-8, UTF-16, and UTF-32 will all encode Unicode characters just fine. -- Dan --it's like this--- Dan Sugalski even samurai [EMAIL PROTECTED] have teddy bears and even teddy bears get drunk
Re: [GENERAL] UTF-8 question.
On Sep 17, 2004, at 9:39 AM, Richard Connamacher wrote: UTF-8 is the 8-bit version of Unicode. The multibyte version of Unicode is UTF-16. UTF-8 encodes characters with varying numbers of bytes, not just 1 byte per character. IIRC, it's anywhere from 1 to 5 bytes, actually. PostgreSQL uses UTF-8. If you can, upgrade. 7.1 is nearing prehistoric. :) Michael Glaesemann grzm myrealbox com
Re: [GENERAL] UTF-8 question.
Thanks to both Dan Sugalski and Michael Glaesemann for answering my question. I probably should have realized that, while Latin letters are one byte, the fact that others are encoded into up to 5-byte groups qualifies it as a multi-byte encoding. I don't anticipate having very many non-Latin letters in my database, I just want to have the option if it ever becomes necessary. So, UTF-8 is much more space efficient for my needs. 7.1 may be prehistoric, but it's running on an off-site server that I'm renting, and this version came pre-installed. Since it's already there and working, I'd like to get familiar with it before I try to install a newer version. I doubt I'd know what to do with many of the newer features anyway, since this is my first time playing with PostgreSQL and my knowledge is currently limited to simple relationships and basic SQL queries. Many thanks for the clarification, Rich
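The variable-width behaviour described in this thread is easy to verify from a shell by counting the bytes of individual characters; the octal escapes below are the UTF-8 encodings of 'é' and '€':

```shell
# ASCII characters occupy one byte in UTF-8...
printf 'A' | wc -c              # 1 byte
# ...Latin-1 supplement characters two (é = 0xC3 0xA9)...
printf '\303\251' | wc -c       # 2 bytes
# ...and the euro sign three (€ = 0xE2 0x82 0xAC).
printf '\342\202\254' | wc -c   # 3 bytes
```

This is why UTF-8 is space efficient for mostly-Latin data: the common characters stay at one byte, and the wider encodings only appear when they are actually needed.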
Re: [GENERAL] UTF-8 question.
Richard Connamacher [EMAIL PROTECTED] writes: 7.1 may be prehistoric, but it's running on an off-site server that I'm renting, and this version came pre-installed. Since it's already there and working, I'd like to get familiar with it before I try to reinstall a newer version. I doubt I'd know what to do with many of the newer features anyway, It's not so much more features as fewer bugs. There are known data loss problems in 7.1.* and before (transaction ID wraparound, for instance, though you might call that a design shortcoming rather than a bug per se). regards, tom lane
[GENERAL] Is it possible to get the 7.4.1 static docs in HTML form anymore?
Reason being, I'd like to install them locally on my laptop so that when I'm laptopping it, I still have docs without the need for an Internet connection. Regards, Hadley