Re: [GENERAL] Using UnixODBC and postgresql
On Sat, Oct 07, 2000 at 12:51:54AM +1100, Danny wrote:
> 1) Does PostgreSQL work with unixODBC?

Yes, it does, without a problem.

Michael
-- 
Michael Meskes
[EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire!
Use Debian GNU/Linux! Use PostgreSQL!
Re: [GENERAL] Need help : Weird referencial Integrity triggers
Vilson farias wrote:
> I've been changing some tables and I saw some strange behavior in
> pg_trigger. Check out the commented code below : [...]
> Now, if I select the corresponding triggers from pg_trigger, I'll find
> three. Why 3? One for update, one for insert and one for delete?

One for INSERT/UPDATE of the foreign key table, one for DELETE and one
for UPDATE of the primary key table.

You haven't specified a referential action for the UPDATE case of the
primary key (ON UPDATE ...). The default action is NO ACTION, which
means that changing the primary key's value isn't allowed as long as
there are references to it from any foreign key. Thus, updating a PK
can never be ignored and there must be a trigger.

Jan

-- 
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== [EMAIL PROTECTED] #
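Jan's three-trigger explanation can be illustrated with a small sketch
(the table and constraint names here are hypothetical, and the names of
the generated RI triggers vary between releases):

```sql
-- A primary key table and a referencing table. Without an explicit
-- ON UPDATE action, the primary key side defaults to NO ACTION.
CREATE TABLE master (
    m_id integer PRIMARY KEY
);

CREATE TABLE detail (
    d_master integer REFERENCES master (m_id)
        ON DELETE CASCADE          -- action for DELETE on master
        ON UPDATE CASCADE          -- without this line: NO ACTION
);

-- The constraint still creates three triggers: one checking
-- INSERT/UPDATE on detail, and one each for DELETE and UPDATE
-- on master.
SELECT tgname FROM pg_trigger;
```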
Re: [GENERAL] How does TOAST compare to other databases' mechanisms?
Tom Lane wrote:
> Frank Joerdens [EMAIL PROTECTED] writes:
> > Can I go around bragging to my SQL-minded friends about using this
> > really cool thing that no other database has, or should I keep my
> > mouth shut because it's actually not so cool?
>
> IMHO it's pretty cool. You get the benefits of BLOB storage without
> having to deal with weird declarations or access methods. I have no
> idea whether any other databases do it the same way, but simply
> removing the limit on physical tuple length wouldn't have been as
> nice. See, with a toasted column, you don't pay to suck the contents
> of the column into memory when you read the row for a query that
> doesn't actually touch that column. So, for example, you might have a
> table declared like
>
>     CREATE TABLE foo (key1 text, moddate timestamp, big_horking_data text);
> [...]

And it also helps to keep tables containing such big_horking_data items
from growing as fast as they would before 7.1. In the case

    CREATE TABLE customer_call (
        cc_callid           serial primary key,
        cc_custid           integer foreign key ...,
        cc_priority         integer,
        cc_calltime         timestamp,
        cc_callclosed       timestamp,
        cc_huge_description text
    );

    UPDATE customer_call SET cc_callclosed = now()
        WHERE cc_callid = 5432;

the cc_huge_description isn't touched. Now think about the way the
non-overwriting storage manager in PostgreSQL works. Normally it would
store a completely new tuple, containing the description again, and
VACUUM would need to move a lot of data to condense the table again.
TOAST will reuse the previous toasted value and NOT outdate it, but put
another reference to it into the new tuple. This avoids a lot of write
access to the disks and speeds up VACUUM. Also, the UPDATE will never
even read these items, so the update itself is (like Tom's SELECT
sample) working on a small table.

All in all it is a very good solution for the very special problems we
have in PostgreSQL.
It might not compare in any way to what other databases do, but the
non-overwriting technology made MVCC relatively easy for us to get.
Now TOAST lowers the cost of having it.

Jan

-- 
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== [EMAIL PROTECTED] #
Re: [GENERAL] How does TOAST compare to other databases' mechanisms?
> idea whether any other databases do it the same way, but simply
> removing the limit on physical tuple length wouldn't have been as
> nice.

Yes, that makes it a lot clearer. This is more or less what I'd been
wondering, without really noticing: why not just remove the limit on
physical tuple length? Because the description of TOAST's inner
workings seemed so convoluted (what with compression etcetera) that it
appeared to be either a great, strained effort to remove a "deep,
hardwired limitation" (someone's statement regarding the 8K barrier) or
something that someone had thought long and hard about to make
particularly cool.

> All in all it is a very good solution for the very special problems
> we have in PostgreSQL. It might not compare in any way to what other
> databases do, but the non-overwriting technology bought us MVCC
> relatively easily. Now it lowers the cost of having it.

I am more than satisfied with both your explanations, thanks. I was
afraid I might have to switch to MySQL (the horror! the horror!), also
because of a piece of information from another recent thread:

> > For more than 10,000 BLOBs, the database was a clear winner. That
> > is, it took less time to locate and read an 8k-16k BLOB from the
> > MySQL database than from the local disk. For smaller numbers of
> > BLOBs, the filesystem was faster.
>
> This is undoubtedly right for existing Postgres releases, because
> currently PG stores each BLOB as an independent table --- and just to
> add insult to injury, creates an index for it :-(. So you actually
> have *two* added files in the DB directory per BLOB. Needless to say,
> this does not scale real well to large numbers of BLOBs.

. . . Tom goes on to say that TOAST is a solution to most of these
issues whilst a better BLOB implementation is waiting in the wings
(scheduled for 7.1 also) which will provide a more natural solution to
some problems - an issue which is also touched upon by Jan in
pgsql-sql:

> TOAST is finished and will be shipped with 7.1.
> It's not a solution for huge items, but medium sized text up to some
> hundred K works fine.

What do you mean by "... not a solution for huge items"? Does TOAST
have a size limit?

. . . Jan goes on to explain that TOAST does not have an explicit size
limit and why it is still less than optimal for really big items. Which
leads to another question in the context of my little content
management app: part of it is composed of smaller and larger texts, for
which TOAST will be great, because I can still search them - unlike
BLOBs, they're not opaque to, for instance, a LIKE parameter in a
SELECT statement (right?) - if I want to search for texts that contain
particular words or phrases (this will probably be slow if the texts
are big and a lot of them are included in the query). But I also have
images, most of which will _probably_ not exceed a few hundred K. At
which point do I cross the line where TOAST becomes silly and BLOBs are
The Way?

Regards, Frank
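Frank's search scenario can be sketched like this (table and column
names are invented for illustration; as he suspects, a plain LIKE over
many large texts forces a sequential scan):

```sql
-- Texts stored in an ordinary text column are toasted transparently
-- when they grow large, yet remain searchable like any other text.
CREATE TABLE document (
    doc_id    serial,
    doc_title text,
    doc_body  text     -- may be compressed/moved out of line by TOAST
);

-- This works on toasted values too; each candidate row's body must be
-- detoasted and scanned, so it is slow for many large documents.
SELECT doc_id, doc_title
FROM document
WHERE doc_body LIKE '%content management%';
```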
[GENERAL] Using BLOBs with PostgreSQL
I'm evaluating a couple of different databases for use as the back-end
to a web-based publishing system that's currently being developed in
Java and Perl. I want to keep _all_ of the data in the database, to
simplify future replication and data management. That includes such
data as GIF images, large HTML files, even multi-megabyte downloadable
software archives.

I've been using MySQL for initial development; it has pretty clean and
easy-to-use BLOB support. You just declare a BLOB column type, then
read and write arbitrarily large chunks of data. In Perl, BLOB columns
work just like varchar columns; in JDBC, the
getBinaryStream()/setBinaryStream() functions provide support for
streaming large data objects.

How well-supported is this functionality in PostgreSQL? I did some
early experimenting with PG, but couldn't find any column type that
would accept binary data (apparently PG's parser chokes on null
characters?). I've heard about TOAST, but have no idea what it really
is, how to use it, or how well it performs. I'm leery of
database-specific APIs.

- Tim Kientzle
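For comparison, PostgreSQL's pre-TOAST answer to this kind of storage
is its large-object interface. A rough server-side sketch (the file
paths and table name are invented for illustration) looks like:

```sql
-- Large objects are referenced by OID rather than stored inline.
CREATE TABLE asset (
    name text,
    data oid          -- handle to a large object
);

-- lo_import reads a file on the *server* into a new large object;
-- lo_export writes one back out. Client-side streaming goes through
-- the libpq lo_* functions or the JDBC getBinaryStream equivalents.
INSERT INTO asset VALUES ('logo.gif', lo_import('/tmp/logo.gif'));

SELECT lo_export(data, '/tmp/logo_copy.gif')
FROM asset
WHERE name = 'logo.gif';
```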
[GENERAL] Postmaster startup problems
I'm having problems starting postgres. What happens is that I start it,
but then it says it isn't running.

In one terminal window:

    postgres@the-kernel:~$ postmaster -D /usr/local/pgsql/data/ > $HOME/pm.log
    DEBUG:  Data Base System is starting up at Sat Oct  7 13:13:29 2000
    DEBUG:  Data Base System was shut down at Sat Oct  7 13:13:25 2000
    DEBUG:  Data Base System is in production state at Sat Oct  7 13:13:29 2000

In another terminal window, while still looking at postmaster 'running'
in the other:

    postgres@the-kernel:~$ psql
    psql: connectDBStart() -- connect() failed: Connection refused
    Is the postmaster running at 'localhost' and accepting connections
    on Unix socket '5432'?

If I try to start postmaster again:

    postgres@the-kernel:~$ postmaster -D /usr/local/pgsql/data/ > $HOME/pm.log
    FATAL: StreamServerPort: bind() failed: Address already in use
    Is another postmaster already running on that port?
    If not, remove socket node (/tmp/.s.PGSQL.5432) and retry.
    /usr/local/pgsql/bin//postmaster: cannot create UNIX stream port

Which leads me to think that it is in fact running. But I can't
connect!! Is it maybe running on a different port or something? Another
strange thing is that top reports the running process as
/usr/local/pgsql/bin instead of /usr/local/pgsql/bin/postmaster, which
is the way I remember it running before.

Collin Peters
Re: [GENERAL] Using BLOBs with PostgreSQL
On Sat, 07 Oct 2000, Tim Kientzle wrote:
> I've been using MySQL for initial development; it has pretty clean
> and easy-to-use BLOB support. You just declare a BLOB column type,
> then read and write arbitrarily large chunks of data. In Perl, BLOB
> columns work just like varchar columns; in JDBC, the
> getBinaryStream()/setBinaryStream() functions provide support for
> streaming large data objects.

If you're talking about BLOB texts, just declare the column as text and
that's all. In the case of binary data, I don't have an idea. I only
work with text data.

> How well-supported is this functionality in PostgreSQL? I did some
> early experimenting with PG, but couldn't find any column type that
> would accept binary data (apparently PG's parser chokes on null
> characters?). I've heard about TOAST, but have no idea what it really
> is, how to use it, or how well it performs. I'm leery of
> database-specific APIs.

As far as I have heard, it looks like a nice way to optimize searches
in blobs. Don't know anything else.

Saludos... :-)

-- 
"And I'm happy, because you make me feel good, about me." - Melvin Udall

Martín Marqués               email: [EMAIL PROTECTED]
Santa Fe - Argentina         http://math.unl.edu.ar/~martin/
Administrador de sistemas en math.unl.edu.ar
Re: [GENERAL] Newbie, Howto access Array-Slots in user defined functions?
Tom Lane wrote:
> [EMAIL PROTECTED] (Martin Jacobs) writes:
> > CREATE FUNCTION lessbyte (_bytea, _bytea) RETURNS bool AS
> >     'SELECT $1[1] < $2[1];' LANGUAGE 'sql';
> > ERROR: Unable to identify an operator '<' for types 'bytea' and 'bytea'
> > You will have to retype this query using an explicit cast
>
> There is nothing wrong with your syntax --- you've declared a function
> that takes two arrays of bytea, selects the first element of each, and
> compares 'em. But bytea doesn't support comparison operators ... or
> much of anything, actually. There is a get_byte function, so you could
> conceivably build what you want starting with
>
>     create function lessbyte(bytea, bytea) returns bool as
>         'select get_byte($1,0) < get_byte($2,0)' language 'sql';

Thank you for this hint, but my 6.3.2 installation does not know a
function get_byte(). Instead it knows the functions byteaGetByte,
byteaSetByte, ... Sorry, but that doesn't work either. New problems
come up, see below.

> However, I don't see any reasonable way to deal with variable-length
> inputs without a loop, and SQL functions don't have looping
> constructs. Given the lack of operators, type bytea isn't currently
> useful for much except plain storage and retrieval of raw byte
> sequences. Have you got a strong reason for using bytea, rather than
> some better-supported type like text? Heck, even array of char would
> work better:
>
>     regression=# CREATE FUNCTION lessbyte(_char, _char) returns bool as
>     regression-# 'SELECT $1[1] < $2[1];' LANGUAGE 'sql';
>     CREATE

Playing with some homebrewed databases I tried to collect IP addresses
in a table. One possibility would be to store them in a character
array, but then I would have to enter them in escaped notation and
would see characters instead of numbers when doing a plain select in
psql or pgaccess. (I know that PostgreSQL supports native data types
for internet and MAC addresses.)
Try the following:

    internetaccess=> create table iptest (ip bytea[4]);
    CREATE
    internetaccess=> insert into iptest (ip) values ('{192,147,68,5}');
    INSERT 44085 1
    internetaccess=> insert into iptest (ip) values ('{191,146,67,1}');
    INSERT 44086 1
    internetaccess=> select * from iptest;
    ip
    ------------------------
    {"192","147","68","5"}
    {"191","146","67","1"}
    (2 rows)

So far it's what I expected. Now let's extract the class A part of the
address:

    internetaccess=> select ip[1] from iptest;
    ERROR: type name lookup of ip failed

Uhh, what's this? What's going wrong now? Another approach:

    internetaccess=> select "byteaGetByte"(ip,1) from iptest;
    ERROR: function byteaGetByte(_bytea, int4) does not exist

Ok, the \df command shows that there is a function byteaGetByte which
expects a bytea as its first argument and an int4 value as its second.
But how can I use this function if the parser generates references to
an array of bytea (_bytea) instead of the element itself?

... Back to your question: IP addresses are not of variable length.
Therefore it should be possible to implement the missing compare
operators <, <=, >, >= ... by unrolling the loop by hand. I know this
is not optimal, but I thought of this as an exercise for myself to
learn about PostgreSQL extensions with functions and operators, without
having to implement external modules using C/C++ and gcc.

Martin Jacobs
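The unrolled comparison Martin describes could look something like the
sketch below. It assumes the addresses are stored as plain bytea (not
bytea[], sidestepping the array problem above), and that the
6.3.2-era "byteaGetByte" function found via \df returns the n-th byte
as an integer; treat it as illustrative rather than tested:

```sql
-- Hypothetical unrolled "less than" for 4-byte addresses stored as
-- plain bytea. Compares byte 0 first, then falls through to the next
-- byte on ties, like a hand-written lexicographic comparison.
CREATE FUNCTION ip_lt(bytea, bytea) RETURNS bool AS '
    SELECT "byteaGetByte"($1,0) < "byteaGetByte"($2,0)
       OR ("byteaGetByte"($1,0) = "byteaGetByte"($2,0) AND
          ("byteaGetByte"($1,1) < "byteaGetByte"($2,1)
       OR ("byteaGetByte"($1,1) = "byteaGetByte"($2,1) AND
          ("byteaGetByte"($1,2) < "byteaGetByte"($2,2)
       OR ("byteaGetByte"($1,2) = "byteaGetByte"($2,2) AND
           "byteaGetByte"($1,3) < "byteaGetByte"($2,3))))))
' LANGUAGE 'sql';

-- The function can then be bound to an operator:
CREATE OPERATOR < (
    leftarg   = bytea,
    rightarg  = bytea,
    procedure = ip_lt
);
```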
[GENERAL] Book now in book-sized format
I have updated the PDF of my book so it is in the final book-sized
format. A final PDF should go to the publisher within weeks. It is
expected to be on sale in mid-November. It already appears on the
Amazon and Barnes & Noble web sites.

I have finished the index. If anyone has suggestions for additional
entries, please let me know. It is hard to guess what people will want
to look up in an index.

The book is at: http://www.postgresql.org/docs/awbook.html

-- 
Bruce Momjian                   | http://candle.pha.pa.us
[EMAIL PROTECTED]               | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup.    | Drexel Hill, Pennsylvania 19026
Re: [GENERAL] Newbie, Howto access Array-Slots in user defined functions?
[EMAIL PROTECTED] (Martin Jacobs) writes:
> Thank you for this hint, but my 6.3.2 installation does not know a
> function get_byte().

6.3.2? You didn't mention that before. There are a ton of array-related
bugs in 6.3.2. Try a more recent release.

regards, tom lane
[GENERAL] RE: pgaccess on Win95
> > Has anyone been able to execute the subject on PostgreSQL 7.0.2?
> > Error message reports unable to find libraries.
>
> You need to have Tcl/Tk installed.
> -- 
> Peter Eisentraut [EMAIL PROTECTED] http://yi.org/peter-e/

I should have been more specific. Sorry!

TclPro 1.4 (evaluation copy) is installed. It uses Tcl vs 8.3 and is in
directory \Program Files\TclPRO 1.4. pgaccess version 0.98.6 is
installed in ~\TclPro 1.4. It lists the following files in
~\pgaccess\win32\dll:

    libpgtcl80.dll-6.4.2
    libpgtcl80.dll-6.5.1
    libpgtcl81.dll-6.5.1
    libpgtcl82.dll-6.5
    libpq.dll-6.4.2
    libpq.dll-6.5.1

pgaccess on Linux supports pg vs 7.x. What has to be done to bring the
Windows version up to speed?

Thanks for your help,
Bill
[GENERAL] Trying to use PGSql with PHP
I'm trying to set up PostgreSQL on Red Hat Linux 6.2 and access it with
PHP. I have Apache installed and working, PHP installed and working,
and PostgreSQL installed and working. However, I'm trying to get PHP to
talk to the PostgreSQL system. I believe I need to install
php-pgsql-3.0.15-2.rpm, but when I try, I get an error saying that the
package depends on 'libpq.so.2.0'. I have PostgreSQL v7.0.2 (all of the
RPMs) and PHP 3.0.15 installed. What am I missing?

I'm a bit of a newbie with Linux, so I'd be grateful for any help!

Thanks,
John Tsombakos
[EMAIL PROTECTED]