Re: [GENERAL] any disadvantage of PostgreSQL on Windows

2001-07-09 Thread GH

On Tue, Jul 10, 2001 at 11:30:05AM +0800, some SMTP stream spewed forth: 
> Would appreciate a lot if anyone could tell me if there is any
> disadvantage/performance issue in using PostgreSQL on Windows.

I know of no disadvantages that are not a direct result of Windows.
Such things as the shared memory implementation, process switching,
general sluggishness, the GUI (if it is a strictly server situation), etc.
Besides, PostgreSQL is fundamentally a 'unix thing'. It would be rather
like running MS Access on FreeBSD, only better because PostgreSQL is Just
That Good.

That said, you should have no trouble at all, and performance should be
reasonable.

> Ethan.

-- 
> What, no one sings along with Ricky Martin anymore?
My kid sister does (but then, she prefers pico to vi ...)
-- Suresh Ramasubramanian, alt.sysadmin.recovery


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[GENERAL] Known problem with HASH index?

2001-07-09 Thread Allan Engelhardt

Is there a known problem with the HASH index type in PostgreSQL 7.1.2-4PGDG on
Red Hat Linux 7.1 (2.4.2 kernel)?  I can't find a lot of documentation, but this
is what I observe:


[playpen]$ dropdb test; createdb test; psql -f create_table.sql test; \
> psql -c "COPY clients FROM '/tmp/input.txt';" test; \
> psql -c 'CREATE INDEX clients_idx ON clients USING HASH (tel);' test; \
> vacuumdb test; vacuumdb --analyze test
DROP DATABASE
CREATE DATABASE
CREATE
COPY
CREATE
NOTICE:  Index clients_idx: NUMBER OF INDEX' TUPLES (916864) IS NOT THE SAME AS HEAP' (1000000).
        Recreate the index.
VACUUM
NOTICE:  Index clients_idx: NUMBER OF INDEX' TUPLES (916864) IS NOT THE SAME AS HEAP' (1000000).
        Recreate the index.
VACUUM

[playpen]$ cat create_table.sql
CREATE TABLE clients (
   ClientID integer,
   firstname varchar(5),
   surname varchar(22),
   area varchar(3),
   neigh varchar(27),
   dimos varchar(50),
   tel varchar(7)
);

The input file is a bit big to include, but was created using this brain-damaged perl 
script (somebody please teach me how to do random letter strings :-))

#!/usr/local/bin/perl -w
# Generate 1,000,000 tab-separated rows matching the clients table above.
my $i;
my $j;
srand(time || $$);

for ($i = 0; $i < 1_000_000; $i++) {
    print int(rand(1<<31)), "\t";                                      # ClientID
    # NB: int(rand(57)+65) yields character codes 65..121, which is more than
    # just letters -- it includes '\' (code 92), which COPY treats as an
    # escape character in its input.
    foreach $j (1..5)  { printf "%c", int(rand(57)+65); } print "\t";  # firstname
    foreach $j (1..22) { printf "%c", int(rand(57)+65); } print "\t";  # surname
    foreach $j (1..3)  { printf "%c", int(rand(57)+65); } print "\t";  # area
    foreach $j (1..27) { printf "%c", int(rand(57)+65); } print "\t";  # neigh
    foreach $j (1..50) { printf "%c", int(rand(57)+65); } print "\t";  # dimos
    foreach $j (1..7)  { printf "%c", int(rand(10)+48); } print "\n";  # tel (digits)
}
print "\\.\n";   # COPY end-of-data marker



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[GENERAL] Re: postgres slower than grep?

2001-07-09 Thread Allan Engelhardt

I could only be bothered to try a million lines, but my results from 7.1.2 are below.

Basically:

1. I find about 50% database storage overhead in this case.  That's not completely 
silly, considering this is structured data, but seems a little high.  I don't know the 
internal structures well enough to really see what's happening.

2. Why would you expect it to be faster than grep?  It has to match structured data,
in this case varchar, and not just bytes.  It has to worry about transactions and
logs, not just a stream of data.  Besides, in my tests it is not *that* slow (3 sec,
compared with 1/2 sec for the grep).  Dunno what's up with your system.

3. As you said: With an index it rocks, easily beating grep.  Use an index - it's your 
friend :-)
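
To see whether the planner actually uses it, EXPLAIN is your friend too.
A sketch, using the table from my test below (note the quoted literal,
since tel is a varchar):

CREATE INDEX clients_idx_tel ON clients (tel);
VACUUM ANALYZE clients;
EXPLAIN SELECT * FROM clients WHERE tel = '1234567';
-- should report an Index Scan rather than a Seq Scan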


Allan.




[playpen]$ wc -l /tmp/input.txt
1000001 /tmp/input.txt
[playpen]$ ls -lh /tmp/input.txt
-rw-rw-r--1 allane   allane   124M Jul  9 22:12 input.txt
[playpen]$ psql -f create_table.sql test
CREATE
[playpen]$ time psql -c "COPY clients FROM '/tmp/input.txt';" test

COPY

real 1m26.543s
user 0m0.000s
sys 0m0.010s
[playpen]$ du -sh $PGDATA/base/963549/
179M /var/lib/pgsql/data/base/963549
[playpen]$ time psql -c 'SELECT * FROM clients WHERE tel=1234567;' test
 clientid | firstname | surname | area | neigh | dimos | tel
----------+-----------+---------+------+-------+-------+-----
(0 rows)


real 0m24.281s
user 0m0.010s
sys 0m0.000s
[playpen]$ vacuumdb -a; vacuumdb -a --analyze
...
[playpen]$ time psql -c 'SELECT * FROM clients WHERE tel=1234567;' test
 clientid | firstname | surname | area | neigh | dimos | tel
----------+-----------+---------+------+-------+-------+-----
(0 rows)


real 0m3.808s
user 0m0.010s
sys 0m0.000s
[playpen]$ time grep '1234567' /tmp/input.txt
...
real 0m0.505s
user 0m0.150s
sys 0m0.360s
[playpen]$ dropdb test
DROP DATABASE
[playpen]$ createdb -E LATIN1 test
CREATE DATABASE
[playpen]$ psql -f create_table.sql test
CREATE
[playpen]$ time psql -c "COPY clients FROM '/tmp/input.txt';" test
COPY

real 1m23.927s
user 0m0.010s
sys 0m0.000s
[playpen]$ time psql -c 'SELECT * FROM clients WHERE tel=1234567;' test
 clientid | firstname | surname | area | neigh | dimos | tel
----------+-----------+---------+------+-------+-------+-----
(0 rows)


real 0m23.934s
user 0m0.010s
sys 0m0.010s
[playpen]$ vacuumdb -a; vacuumdb -a --analyze
...
[playpen]$ time psql -c 'SELECT * FROM clients WHERE tel=1234567;' test
 clientid | firstname | surname | area | neigh | dimos | tel
----------+-----------+---------+------+-------+-------+-----
(0 rows)


real 0m3.796s
user 0m0.010s
sys 0m0.000s


[playpen]$ psql -c 'CREATE INDEX clients_idx_tel ON clients (tel);' test
CREATE
[playpen]$ vacuumdb --analyze test
VACUUM
[playpen]$ time psql -c 'SELECT * FROM clients WHERE tel=1234567;' test
 clientid | firstname | surname | area | neigh | dimos | tel
----------+-----------+---------+------+-------+-------+-----
(0 rows)


real 0m0.189s
user 0m0.010s
sys 0m0.000s

Spiros Ioannou wrote:

> Hello all,
>
> before you start reading, keep in mind that this is not a post to advertise
> one db over another; I just want to know what happens, which is why I did
> the comparisons with another db.
> to the point:
>
> This is my table:
>
> Table "table1"
>  Attribute |    Type     | Modifier
> -----------+-------------+----------
>  ClientID  | integer     |
>  firstname | varchar(5)  |
>  surname   | varchar(22) |
>  area      | varchar(3)  |
>  neigh     | varchar(27) |
>  dimos     | varchar(50) |
>  tel       | varchar(7)  |
>
> The Facts:
> --
> The table exported to a text file is about 330MB, and contains about 5
> million lines.
>
> The same table takes 670MB in postgres. Why?
> (I tried mysql, and it uses 340MB).
>
> issuing the following query:
> select * from table1 where tel='7485842';
> takes about 1min and 40 seconds to complete. (query has one result)
>
> explain of the above returns:
> Seq Scan on table1  (cost=0.00..147835.01 rows=23 width=76)
>
> issuing a grep in the text file takes about 25 seconds! *(for the whole grep
> to finish - worst case)*
>
> issuing the same query in mysql takes about 25 seconds.
>
> -to test the disk speed, I doubled the size of the text file by copying
>  it twice and the same grep query took 51 seconds (as expected)
>
> when creating an index the query completes of course in no time
>
> yes, I tried vacuum
>
> Postgres is version 7.0
> system is solaris 2.7
> hard disk is 1rpm, ultraSCSI
> cpu is UltraSparcIIi 333Mhz
> physical memory is 384MB
>
> and now the questions
> -
> -shouldn't postgres be at least as fast as the grep?
> -shouldn't the table data file be smaller, to reduce disk activity? Why is it
>  double the size of mysql's file for the same data, or of the text file? I also
>  noticed that the index file for column "tel" is about 130MB while mysql's
>  was 64MB; is this normal?
>
> Thank you for your time,
>
> Spiros  Ioannou
> e-mail:[EMAIL PROTECTED]
> ---
> Ima

Re: [GENERAL] Re: Backups WAS: 2 gig file size limit

2001-07-09 Thread Thomas Lockhart

> > Have you observed otherwise?
> Yes.  Specifically timestamps are dumped in a way that (1) they lose
> precision (2) sometimes have 60 in the seconds field, which prevents the
> dump from being restored.

The loss of precision for timestamp data stems from conservative
attempts to get consistent behavior from the data type. It is certainly
not entirely successful, but changes would have to solve some of these
problems without introducing more.

I've only seen the "60 seconds problem" with earlier Mandrake distros
which combined normal compiler optimizations with a "fast math"
optimization, against the apparent advice of the gcc developers. What
kind of system are you on, and how did you build PostgreSQL?

Regards.

   - Thomas

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [GENERAL] Re: Backups WAS: 2 gig file size limit

2001-07-09 Thread Tom Lane

Joseph Shraibman <[EMAIL PROTECTED]> writes:
> Could a command be added that locks all the files, quickly tars them
> up, then releases the lock?

pg_ctl stop
tar cfz - $PGDATA >someplace
pg_ctl start

There is no possibility of anything less drastic, if you want to ensure
that the database files are consistent and not changing.  Don't even
think about doing a partial dump of the $PGDATA tree, either.  If you
don't have a pg_log that matches your data files, you've got nothing.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [GENERAL] Re: Backups WAS: 2 gig file size limit

2001-07-09 Thread Joseph Shraibman

Doug McNaught wrote:
> 
> [HACKERS removed from CC: list]
> 
> Joseph Shraibman <[EMAIL PROTECTED]> writes:
> 
> > Doing a dumpall for a backup is taking a long time, and a restore from
> > the dump files doesn't leave the database in its original state.  Could
> > a command be added that locks all the files, quickly tars them up, then
> > releases the lock?
> 
> As I understand it, pg_dump runs inside a transaction, so the output
> reflects a consistent snapshot of the database as of the time the dump
> starts (thanks to MVCC); restoring will put the database back to where
> it was at the start of the dump.
> 
In theory.

> Have you observed otherwise?

Yes.  Specifically timestamps are dumped in a way that (1) they lose
precision (2) sometimes have 60 in the seconds field, which prevents the
dump from being restored.

And I suspect any statistics generated by VACUUM ANALYZE are lost.

If a database got corrupted somehow, then in order to restore from the dump
you would have to delete the original database and restore it from scratch.
Untarring would be much easier (especially as the database grows).
Obviously this won't replace dumps, but for quick backups it would be great.

-- 
Joseph Shraibman
[EMAIL PROTECTED]
Increase signal to noise ratio.  http://www.targabot.com

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [GENERAL] Re: Backups WAS: 2 gig file size limit

2001-07-09 Thread Doug McNaught

[HACKERS removed from CC: list]

Joseph Shraibman <[EMAIL PROTECTED]> writes:


> Doing a dumpall for a backup is taking a long time, and a restore from
> the dump files doesn't leave the database in its original state.  Could
> a command be added that locks all the files, quickly tars them up, then
> releases the lock?

As I understand it, pg_dump runs inside a transaction, so the output
reflects a consistent snapshot of the database as of the time the dump 
starts (thanks to MVCC); restoring will put the database back to where 
it was at the start of the dump.

Have you observed otherwise?

-Doug
-- 
The rain man gave me two cures; he said jump right in,
The first was Texas medicine--the second was just railroad gin,
And like a fool I mixed them, and it strangled up my mind,
Now people just get uglier, and I got no sense of time...  --Dylan

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] [PATCH] Partial indices again

2001-07-09 Thread Tom Lane

Martijn van Oosterhout <[EMAIL PROTECTED]> writes:
>> I suspect you need to run cnfify after and-ing together the predicate
>> and index quals.  clauselist_selectivity assumes it's working on
>> cnf'ified expressions.

> OK. Do I need to worry about that function destroying either of the input
> lists?

AFAIR, cnfify doesn't modify its inputs.  But watch out for the
difference between explicit and implicit ANDing.

> I'll have to check again, but I thought that the VACUUM code only had the
> OID of the index, so how it is supposed to work out if it's a partial index.

You'll need to look it up.

In practice, I seem to recall that VACUUM is broken for partial indexes
anyway, specifically because it does not pay attention to partial-ness:
when it moves a tuple it shouldn't make an index entry for the new copy
if the index is partial and the tuple fails the predicate check.  The
correct fix for this is not to add code, but to remove it.  VACUUM
should never have had its own index-entry-making code in the first
place; it should be using ExecOpenIndices and friends from the main
executor.  Once you do that, the info returned by ExecOpenIndices will
include the predicate, and you can just look there.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [GENERAL] orphaned RI constraints

2001-07-09 Thread Stephan Szabo


On Mon, 9 Jul 2001, Jan Wieck wrote:

> Somazx Interesting wrote:
> > At 01:30 PM 7/6/2001 -0700, you wrote:
> > >To fix it, you should be able to use DROP TRIGGER on the appropriate
> > >triggers that were created (you can find these through a select on
> > >pg_trigger, using the tgargs to find the appropriate ones).  As a warning,
> > >you need to double quote the trigger name, so for example if you saw the
> > >following rows for the constraint:
> > >
> > >   782359 | RI_ConstraintTrigger_782384 |   1654 |  9 | true  |
> > >true   |  |782372 | false| false
> > >|   6 || \000qqq2\000qqq\000UNSPECIFIED\000a\000a\000
> > >   782359 | RI_ConstraintTrigger_782386 |   1655 | 17 | true  |
> > >true   |  |782372 | false| false
> > >|   6 || \000qqq2\000qqq\000UNSPECIFIED\000a\000a\000
> > >
> > >you should be able to do
> > >DROP TRIGGER "RI_ConstraintTrigger_782384";
> > >DROP TRIGGER "RI_ConstraintTrigger_782386";
> >
> > Hi,
> >
> > The above doesn't work for me since DROP TRIGGER requires an ON
> > <table name> argument, and the table which the trigger is on has long since been
> > dropped.
> >
> > Is there something else I can try?
> 
> That's  hard  to believe, because tables that get dropped for
> sure take all their triggers with them. What's the result of
> 
> SELECT relname FROM pg_class WHERE oid = 782359;
> 
> Should be there and be either "qqq"  or  "qqq2".  That's  the
> table name these triggers are fired for.
> 
> What's   a   little  confusing  is  that  in  your  case  the
> tgconstrrelid contains 782372 and not  NULL.  I  assume  from
> that  that  this  is  not  from  the  database  you're having
> problems with, right?

The example rows were from my db with some pulled out to make
it a bit more obvious how to get the trigger names.
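
Since the table should still exist, you can find its name from the trigger
rows themselves; something like this should work (a sketch, using the trigger
name from the example above):

SELECT c.relname, t.tgname
FROM pg_trigger t, pg_class c
WHERE t.tgrelid = c.oid
  AND t.tgname LIKE 'RI_ConstraintTrigger%';

-- then, quoting the mixed-case trigger name:
DROP TRIGGER "RI_ConstraintTrigger_782384" ON qqq2;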



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] ODBC 3.0 functions (UCASE, LCASE, etc.)

2001-07-09 Thread Bruce Momjian


This is clearly a good idea.  It is easy to create SQL functions as
synonyms for existing functions.  You can even alter the arguments.  I
would encourage anyone who wants to start coding an SQL file of
compatibility functions.
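
As a minimal sketch of what such a file would contain (upper() and lower()
are the existing backend functions; ucase/lcase are the ODBC-style names):

CREATE FUNCTION ucase(text) RETURNS text AS
   'SELECT upper($1)' LANGUAGE 'sql';
CREATE FUNCTION lcase(text) RETURNS text AS
   'SELECT lower($1)' LANGUAGE 'sql';

SELECT ucase('hello');   -- HELLO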

> On Thu, 3 May 2001, Bruce Momjian wrote:
> 
> > 
> > This sounds good.  Would these exist in ODBC or in the backend?  My
> > understanding is that these are best done in ODBC.
> 
> It's not so much an ODBC problem *per se*, but, rather, that many
> databases offer 'functions' (some from standards, some made up) that we
> don't have. (The ODBC specs just happen to recommend a large slew of
> them.)
> 
> I'd think these would have to be backend functions, but probably best not
> actually in the backend source.
> 
> Since we can create functions easily, and since few of these things are
> actually new features, but re-named/re-ordered functions we already have,
> it wouldn't seem too onerous to make wrappers around these, or hook some
> of these in as aliases for existing functions/types.
> 
> Things like:
> 
>  - aliasing int to mediumint
>  - alias textcat to concat
>  - iif( condition, true_expr, false_expr )
>  - first(), last() aggregates
>  - std() as an alias for stddev()
> 
> Yes, some of these *are* ugly, non-standard forms used by other databases.
> Read on for why I still think it could be a good idea.
> 
> Some things we have (like existing odbc.sql file in src/interfaces/odbc),
> or in contrib/ (like soundex) are probably missed by many users. (And,
> although this is probably intentional, are left off of MySQL's crash-me).
> 
> Perhaps a useful form would be a folder of function wrappers, grouped by
> 'competitor':
> 
>   oracle.sql
>   mysql.sql
>   access.sql
>   odbc.sql
> 
> which would contain the wrappers functions for these systems. Of course,
> we can't mimic *everything* (at least not in plpgsql functions ;-) )
> but we might be able to do much better.
> 
> 
> I know it seems like a trivial thing, but it's not too infrequent that
> I'll hear someone chatting online/posting a follow-up message about how
> they've evaluated PostgreSQL, or another product, but didn't use it
> because we lacked feature foo(), when it's there, and just called feature
> bar().
> 
> Anyway, this isn't an itch that I *need* to scratch -- right now,
> all I use in the office is PostgreSQL for backend work :-). But I think in
> the 'how easy are we to evaluate and use' department, it could be a small,
> but helpful win.
> 
> -- 
> Joel Burton   <[EMAIL PROTECTED]>
> Director of Information Systems, Support Center of Washington
> 
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [GENERAL] PostgreSQL perl / libpq.so.2 problem - again :(

2001-07-09 Thread Lamar Owen

On Saturday 07 July 2001 11:39, Justin Clift wrote:
> Are you familiar with Symantec Ghost?

Yes.  Is there now a version that doesn't require Windows installed? :-)  I 
own the Windows version.  I've used it numerous times for that.

However, I prefer to simply unlock a drive slide and slide a new one in :-).

> Works well for me.  What do you guys think, good approach?

It and the VMware approach are both good approaches -- but my development
servers do other things that are mission critical overnight -- and playing on 
my backup/restore setup is against my policy.  So I pull the slide out, and 
pop the dev slide in when I need to do 'playing' (like RPM building).  
This way I can also use more modern versions of certain packages on my 
backup/restore and utility systems (for hot failover and the like) and keep 
pristine OS installs laying around for RPM building.  3.2GB harddrives are 
pretty cheap these days -- I just don't have enough of them on hand, and have 
a budget to follow.

However, I can't really afford the VMware approach at the moment for the use 
I would put VMware to :-)  I sure would like to have their Enterprise 
version -- just being able to have fine-grained resource management like that 
is so nice -- makes one wish for an S/390.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



[GENERAL] [PATCH] Partial indices again

2001-07-09 Thread Martijn van Oosterhout

Well, getting closer. Maybe I should start version numbering the patches?

Things changed in this version:

* As Tom Lane pointed out, what to do with EXTEND INDEX. I've decided to
  remove it since there is no defined syntax for it and you cannot make an
  index more restrictive with it. Since the names of indicies don't matter,
  you may as well drop the index and make a new one if you want to change
  the predicate.

  At this stage I have only removed it from the grammar because complete
  removal would require many line-by-line changes and would end up with a
  patch much larger than this one. I'd prefer to submit that as a separate
  patch after this one is completed.

* Updates to the documentation as well as comments within the code. If you
  see a spot I missed, let me know.

* Changes to the cost estimation. This is much better than before but still
  needs some refining since index_selectivity != rel_selectivity. Feedback
  welcomed.

Issues still needing to be fixed:

* Make it pg_dump-able. I've tried to extract the expression out of the
  system tables by using stringToNode and deparse_expression but it doesn't
  seem to work. I keep getting the error: "get_names_for_var: bogus
  varlevelsup 0". Anyone know what's going on? See attachment <>.
  
* VACUUM still complains about tuple count mismatch. Anyone have a good idea
  on how to deal with this?

* The regression tests need to be changed to deal with the newly added
  regression tests for these indices.

Oh, and it appears I accidentally changed the allowed syntax a bit. You used
to have to qualify each field in the partial index predicate with the name
of the relation. That's no longer required.
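
So, with the patch applied, something like this should now be accepted
(illustrative only -- the table here is made up):

CREATE TABLE orders (id integer, total float8);
-- index only the big orders:
CREATE INDEX orders_big_idx ON orders (total) WHERE total > 1000.0;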

Everyone still alive out there?
-- 
Martijn van Oosterhout <[EMAIL PROTECTED]>
http://svana.org/kleptog/
> It would be nice if someone came up with a certification system that
> actually separated those who can barely regurgitate what they crammed over
> the last few weeks from those who command secret ninja networking powers.


diff --exclude gram.c -ur postgresql-7.1.orig/doc/TODO postgresql-7.1/doc/TODO
--- postgresql-7.1.orig/doc/TODO	Mon Apr  9 04:11:09 2001
+++ postgresql-7.1/doc/TODO	Sat Jul  7 15:06:55 2001
@@ -144,7 +144,7 @@
   fails index can't store constant parameters
 * -Allow SQL function indexes
 * Add FILLFACTOR to index creation
-* Re-enable partial indexes
+* -Re-enable partial indexes
 * Allow inherited tables to inherit index, UNIQUE constraint, and primary key
   [inheritance]
 * Prevent pg_attribute from having duplicate oids for indexes (Tom)
diff --exclude gram.c -ur postgresql-7.1.orig/doc/src/sgml/catalogs.sgml postgresql-7.1/doc/src/sgml/catalogs.sgml
--- postgresql-7.1.orig/doc/src/sgml/catalogs.sgml	Mon Apr  2 05:17:30 2001
+++ postgresql-7.1/doc/src/sgml/catalogs.sgml	Sat Jul  7 15:01:17 2001
@@ -1053,7 +1053,7 @@
   indpred
   text
   
-  Query plan for partial index predicate (not functional)
+  Query plan for partial index predicate
  
 

diff --exclude gram.c -ur postgresql-7.1.orig/doc/src/sgml/indices.sgml postgresql-7.1/doc/src/sgml/indices.sgml
--- postgresql-7.1.orig/doc/src/sgml/indices.sgml	Sat Feb 24 05:11:55 2001
+++ postgresql-7.1/doc/src/sgml/indices.sgml	Sat Jul  7 15:06:09 2001
@@ -557,11 +557,12 @@

 Note
 
- Partial indices are not currently supported by
- PostgreSQL, but they were once supported
- by its predecessor Postgres, and much
- of the code is still there.  We hope to revive support for this
- feature someday.
+ For a long time partial indicies were not supported by
+ PostgreSQL, but they were once supported by
+ its predecessor Postgres, and much of the
+ code was still there. Currently (July 2001) there is some work underway
+ to revive this feature. See the pgsql-general mailing list archives for
+ details.
 

 
diff --exclude gram.c -ur postgresql-7.1.orig/doc/src/sgml/ref/create_index.sgml postgresql-7.1/doc/src/sgml/ref/create_index.sgml
--- postgresql-7.1.orig/doc/src/sgml/ref/create_index.sgml	Mon Jan 29 13:53:33 2001
+++ postgresql-7.1/doc/src/sgml/ref/create_index.sgml	Tue Jul 10 00:34:22 2001
@@ -25,8 +25,10 @@
   
 CREATE [ UNIQUE ] INDEX index_name ON 
table
 [ USING acc_name ] ( column [ ops_name ] [, ...] )
+[ WHERE expr ]
 CREATE [ UNIQUE ] INDEX index_name ON 
table
 [ USING acc_name ] ( func_name( column [, ... ]) [ ops_name ] )
+[ WHERE expr ]
   
 
   
@@ -137,6 +139,15 @@

   
  
+
+ 
+  expr
+  
+   
+   Defines the expression for a partial index.
+   
+  
+ 
 

   
@@ -225,6 +236,23 @@
of these access methods are fully dynamic and do not have to be
optimized periodically (as is the case with, for example, static hash
access methods).
+  
+
+  
+When the WHERE clause is present, this defines a
+partial index. A partial index is an index that o

Re: [GENERAL] fmgr_info: function 2714160: cache lookup failed

2001-07-09 Thread Jan Wieck

Steve Wampler wrote:
>
> Can someone explain to me what this error is and how to avoid it?
> (Results from an INSERT through JDBC on 7.1.2 with RH Linux).

You  must  have  dropped  a  function  that was referenced by
another  object   (view,   operator,   aggregate,   whatnot).
Recreating  the  function  alone doesn't help, because it  is
referenced  by  OID.  You  have  to  drop  and  recreate  all
dependent objects as well.
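
A minimal illustration of what happens (made-up names):

CREATE FUNCTION f(int4) RETURNS int4 AS 'SELECT $1 + 1' LANGUAGE 'sql';
CREATE VIEW v AS SELECT f(1) AS x;
DROP FUNCTION f(int4);
CREATE FUNCTION f(int4) RETURNS int4 AS 'SELECT $1 + 1' LANGUAGE 'sql';
SELECT * FROM v;    -- still fails: v remembers the old function's OID
DROP VIEW v;        -- dropping and recreating the dependent object fixes it
CREATE VIEW v AS SELECT f(1) AS x;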


Jan

PS: No, Bruce, I don't start on pg_depend right now :-)

--

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



_
Do You Yahoo!?
Get your free @yahoo.com address at http://mail.yahoo.com


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [GENERAL] problems with rules and views

2001-07-09 Thread Tom Lane

"Fabrizio Mazzoni" <[EMAIL PROTECTED]> writes:
> Considering that my view has about 35 columns, is there a way to write a
> rule that does a batch update..?? Something like:

> create rule xxx as
> on update to yyy <-- my view
> do instead
> update to xxx,qqq,zzz; <-- tables from which the view is created

> Or do i have to write a statement for each column...

Just assume that *all* the columns are being updated.  NEW will have the
correct value (possibly the old value) for each column.  You'll probably
need one update command for each source table, but not one per column.
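
Roughly like this, assuming each underlying table has a key column (the
column names here are invented for illustration):

CREATE RULE upd_yyy AS ON UPDATE TO yyy
DO INSTEAD (
    UPDATE xxx SET a = NEW.a, b = NEW.b WHERE xxx.id = OLD.id;
    UPDATE qqq SET c = NEW.c, d = NEW.d WHERE qqq.id = OLD.id;
);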

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [GENERAL] Transactions in rules..

2001-07-09 Thread Jan Wieck

Fabrizio Mazzoni wrote:
> Can i use transactions in rules??
> I was trying something like:
>
> create rule x_test as
> on insert to view1
> do instead
> (
>begin;
>insert into test1 values (new.a,new.b);
>insert into test2 values (new.c,new.d);
>commit;
> );
>
> But when i give this command in psql i always get an error..

There  is  no  way  to  execute  a statement without having a
transaction  in  PostgreSQL.  If  you're  not  inside  of   a
transaction  block,  even  a  simple  SELECT  from the client
(psql, application) will have its own transaction.

For your above case, if you don't do BEGIN/COMMIT, the INSERT
will  have  its  own  Xact with automatic COMMIT (if no ERROR
happens).  The rule actions will always belong  to  the  same
Xact  the INSERT does, so either all is committed or nothing.
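
So the rule from your question, minus the BEGIN/COMMIT, should be all
you need:

create rule x_test as
on insert to view1
do instead
(
   insert into test1 values (new.a,new.b);
   insert into test2 values (new.c,new.d);
);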


Jan

--

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



_
Do You Yahoo!?
Get your free @yahoo.com address at http://mail.yahoo.com


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [GENERAL] Re: [DOCS] QUERY - Including the new LDP PostgreSQL HOWTO in the main PostgreSQL distribution

2001-07-09 Thread Mitch Vincent

> For a long time, I thought a HOWTO was a document for a "quick start". Isn't it?

I'm not sure there is an understood meaning of what a HOWTO actually is.
It's an explanation of how to do something, I guess... Generally I've found
(as in your example) that it's about putting software together and making it
work, like Apache/SSL/Frontpage/something -- for that there is no single
document that describes how to set it up. I really don't think the PostgreSQL
installation guide could get much simpler -- why re-invent the wheel here?

-Mitch


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [GENERAL] trigger: NEW/OLD-error or nothing happens

2001-07-09 Thread Jan Wieck

Knut Suebert wrote:
> Joe Conway schrieb:
>
> > OLD does not make sense on an insert because there is no "OLD" data for a
> > brand new row. Similarly, there is no "NEW" data during a delete. See
> > http://www.postgresql.org/idocs/index.php?plpgsql-trigger.html
>
> Makes sense, thanks. And at the bottom of that page is a nice hint
> regarding the trigger BEFORE/AFTER insert.
>
> So I changed to BEFORE -- but still nothing happens
>
> An exception for debugging:
>
>    create function nac_viceversa() returns opaque as'
>    declare
>     x int4;
>    begin
>     x := nacmin(NEW.sport,NEW.dport);
>     NEW.minport := x;
>     NEW.maxport := nacmax(NEW.sport,NEW.dport);
>     raise exception ''%'',x;
>     return NEW;
>    end;
>    'language 'plpgsql';
>
>create trigger nac_update before insert or update on traf
>for each row execute procedure nac_viceversa();
>
> the calculation works...
>
>ERROR:  110
>ERROR:  110
>ERROR:  110
>
> ... but is not written into the row -- even after removing the exception ;-)
>
>   id   | ... | device | minport | maxport
>  12151 | ... | eth0   |         |
>  12152 | ... | eth0   |         |
>
> Am I stupid?

You  can use RAISE NOTICE to get  *debugging* output without
rolling back the transaction. Maybe it'd help  to  add  more
RAISE's to see the values of NEW.sport and NEW.dport as well?

And you don't need the extra variable. NEW.attrib should work
fine in the RAISE.
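
Something like this (untested sketch; nacmin()/nacmax() are your own
functions):

create function nac_viceversa() returns opaque as'
begin
 raise notice ''sport=% dport=%'', NEW.sport, NEW.dport;
 NEW.minport := nacmin(NEW.sport,NEW.dport);
 NEW.maxport := nacmax(NEW.sport,NEW.dport);
 return NEW;
end;
'language 'plpgsql';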


Jan

--

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #



_
Do You Yahoo!?
Get your free @yahoo.com address at http://mail.yahoo.com


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [GENERAL] Qn on best way to upgrade from 7.0.2 to 7.0.3 on production database

2001-07-09 Thread GH

On Mon, Jul 09, 2001 at 05:52:21PM +0800, some SMTP stream spewed forth: 
[...]
> therefore I need to upgrade to 7.0.3.
> 
> the doc says to do dump/restore when upgrading the database. however, since
> pg_dump has problems with timestamp data, how am I going to restore the data
> after the upgrade?!!!
> 
> pls help. Thanks

It is my understanding that no dump/restore is necessary for minor
version upgrades (e.g. .2 to .3). You should be fine, but don't take my
word for it.

gh
-- 
> What, no one sings along with Ricky Martin anymore?
My kid sister does (but then, she prefers pico to vi ...)
-- Suresh Ramasubramanian, alt.sysadmin.recovery

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [GENERAL] error handling. How to?

2001-07-09 Thread Richard Huxton

From: "Janning Vygen" <[EMAIL PROTECTED]>

> one of my problems:
> if i have a value like bool or int2 and the frontend is html and just takes
> the input from a textfield, the backend returns an error before executing the
> trigger because the values can't be parsed into the correct type.
>
> But should not the trigger be called BEFORE inserting?
> It seems not, maybe just because the NEW record has to be filled, and with a
> mismatching type it can't be done!?

Correct - your update/insert statement will be parsed before it gets
processed. It's only if the statement passes the checks and gets executed
that your trigger will see it.

> Bad thing about it: my trigger should report every error, but it is never
> executed when the type parsing fails.
>
> I just don't want to put any error handling code in my frontend. I just want
> the backend to return something like
> "ERROR: errormessage1; ERROR: errormessage2;"
> to return all errors on all fields.

Data validation (making sure all values submitted are of valid types and
ranges) needs to be done before you reach the database. You should be
checking all submitted values anyway, just for security implications. Do
this in your server-side Perl/PHP/Java/C, don't just rely on javascript in
the browser.

Only allowing users to add orders for products with an existing
product-code, now that is something that can (and should) be enforced in the
database. Personally, I'd still want to trap the errors from the database
and then produce a user-friendly message in my application.
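
A sketch of that product-code constraint as a foreign key (table and column
names invented for illustration):

CREATE TABLE products (
    code  varchar(10) PRIMARY KEY
);
CREATE TABLE orders (
    id    serial,
    code  varchar(10) REFERENCES products (code)
);
-- inserting an order with an unknown product-code now fails in the backend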

> Can anybody tell me how to make a really good and flexible error processing
> with PostgreSQL??

Examine the types of errors that can occur. You should find they fall into a
number of different classes. These classes should (hopefully) closely match
the structure of your application. Each part of your application should then
be responsible for generating its own exceptions and dealing with errors
generated by helper modules.

In your case, handle this in your object/database abstraction layer. If you
don't have one, build one - it's not usually a big issue performance-wise
and will pay you back tenfold if you need to change the underlying database
/ database system.

HTH

- Richard Huxton


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[GENERAL] error handling. How to?

2001-07-09 Thread Janning Vygen

Hi,

i try to achieve that my database checks every entry on insert and update and
returns German error messages, so my frontend knows what's wrong and can show
some reasonable error messages to the user.

i wrote a trigger which checks BEFORE every insert and update and returns error
messages if new values are not acceptable.

one of my problems:
if i have a value like bool or int2 and the frontend is html and just takes
the input from a textfield, the backend returns an error before executing the
trigger because the values can't be parsed into the correct type.

But should not the trigger be called BEFORE inserting?
It seems not, maybe just because the NEW record has to be filled, and with a
mismatching type it can't be done!?

Bad thing about it: my trigger should report every error, but it is never
executed when the type parsing fails.

I just don't want to put any error handling code in my frontend. I just want
the backend to return something like
"ERROR: errormessage1; ERROR: errormessage2;"
to return all errors on all fields.

Can anybody tell me how to make a really good and flexible error processing
with PostgreSQL??

janning

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]