[GENERAL] Backing up large objects

2001-01-23 Thread Adam Haberlach

I've got a database which includes large objects.  In the past,
for the few times I've used lobjs, I've written a quick script to
handle backups, but this time we've got a slightly more complex
system.

Does anyone have a general-purpose system for backing up dbs
with lobjs, or should I write my own dang script?
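
A minimal sketch of the building blocks such a script tends to use from psql
(the table, column, and OID below are invented): pg_dump covers the regular
tables, and each large object can then be pulled out by OID.

-- find the large object OIDs that your rows point at
SELECT lobj_column FROM some_table;
-- then, for each OID found, export that object to a file
\lo_export 123456 '/backup/lo_123456'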

-- 
Adam Haberlach|A cat spends her life conflicted between a
[EMAIL PROTECTED]   |deep, passionate, and profound desire for
http://www.newsnipple.com |fish and an equally deep, passionate, and
'88 EX500 |profound desire to avoid getting wet.



Re: [GENERAL] VACUUM ANALYZE FAILS on 7.0.3

2001-01-23 Thread Tom Lane

"Dave Cramer" <[EMAIL PROTECTED]> writes:
> When I run VACUUM ANALYZE it fails and all the backend connections are
> closed. Has anyone else run into this problem?

There should be a core dump file from the crashed backend in your
database subdirectory --- can you provide a backtrace from it?

regards, tom lane



[GENERAL] Re: Looking for info on Solaris 7 (SPARC) specific considerations

2001-01-23 Thread Martin A. Marques

On Tue 23 Jan 2001 12:38, Frank Joerdens wrote:
> I am faced with the task of installing, configuring, and tuning my
> database, which is currently running under Linux, under Solaris 7 on a
> brand-new and shiny Sun UltraSPARC (3 CPUs, 768 MB RAM), because the
> sysadmin at the site hasn't used or installed PostgreSQL and would rather
> have me do it. Is this actually supported? The FAQ (the one bundled with
> the 7.1 beta3 which I'll be using) lists only:
>
> sparc_solaris - SUN SPARC on Solaris 2.4, 2.5, 2.5.1

No problem for me. Solaris 7 and 8.

> If it is supported (I don't suppose a little OS version number increment
> would make a difference here), I've never used Solaris or anything other
> than Intel-based hardware and am looking for some info on what to watch
> out for and consider when installing and tuning PostgreSQL on Solaris on
> a SPARC platform. Aside from the shared memory stuff in the Admin
> Guide, I haven't found anything so far. Particularly, I would expect
> that you could gain a significant performance boost from running the
> database on a 64 bit platform (without knowing exactly why, only
> picking up on word-of-mouth and assorted hype on 64 bit architectures).
> How do you get the most out of it? Would I use gcc or the native Sun
> compiler (how do you control that anyway)?

Well, maybe I'm wrong, but I guess the 64-bit push comes from compilation:
the OS memory pages should be bigger, and the int should be a 64-bit int
rather than a 32-bit one. Maybe there is more to it than that, but that's
all I have at the moment.
As for the compiler, I use gcc (installed from the pre-compiled gcc
binaries) together with the Solaris binutils, mainly because it makes life
easier.

Now, it would be a good idea to try Linux or NetBSD on the SPARC instead of
Solaris. I am at this moment getting info on the installation of Linux and
BSD distros for SPARC, and really think we can get a boost from this.

Hope this helps. ;-)



-- 
System Administration: It's a dirty job, 
but someone told me I had to do it.
-
Martín Marqués  email:  [EMAIL PROTECTED]
Santa Fe - Argentina        http://math.unl.edu.ar/~martin/
Administrador de sistemas en math.unl.edu.ar
-



Re: [GENERAL] Re: VACUUM and 24/7 database operation

2001-01-23 Thread Steve Wolfe

> Shouldn't it be possible to build vacuum as an ongoing internal PG process,
> instead of a separate operation? How does Oracle bypass this? Must be some
> way that can be implemented.

  Well, here's what it comes down to:  Do you want updates to happen
quickly, and vacuum when load is low, or do you want updates to be slow all
the time?  I suppose that there are some sites that can't find two minutes
per day when updates will block (not selects), but I imagine they're very
few.

steve





Re: [GENERAL] Re: VACUUM and 24/7 database operation

2001-01-23 Thread Alfred Perlstein

> With best regards.
> Sanjay.
> 
> At 05:53 PM 1/23/01 , Tom Lane wrote:
> >[EMAIL PROTECTED] writes:
> >> - Is 2 minutes a standard time for vacuuming a 500,000-row table?
> >> - Can it be reduced?
> >> - In the far future, what are the problems we can run into by not vacuuming 
> >> that table? We have already seen that after a month, some transactions 
> >> involving where id >= some_value take forever, so we suppressed them.
> >
> >If it takes a month before query performance gets bad, then perhaps you
> >could vacuum the table only once a month.  However, that vacuum would
> >probably take longer than two minutes, so it's a tradeoff...
> >
> >We have plans for 7.2 to reduce the need for periodic vacuums, but that
> >won't help you much now.
> >
> >There are patches available for a "lazy vacuum" process on 7.0.3,
> >which can be a win if vacuum only needs to get rid of a few rows.
> >But they're not very thoroughly tested IMHO.  See
> >http://people.freebsd.org/~alfred/vacfix/
> >

We've been running them since I released them with only a single
problem that has never resurfaced.  I would say they are pretty stable.

It's not "just a few rows" by the way, it's several thousand and up to
probably 50,000 rows we get about a 20-40x speedup in the time taken
to vacuum (10-15 minutes to 13-40 seconds).  This is on tables that
are over 300megabytes and indecies that are even larger (multiple
column indicies).

It's a shame this still hasn't made it into 7.1 :(

* Sanjay Arora <[EMAIL PROTECTED]> [010123 12:10] wrote:
> Tom,
> 
> Shouldn't it be possible to build vacuum as an ongoing internal PG process,
> instead of a separate operation? How does Oracle bypass this? Must be some
> way that can be implemented.
> 
> Any pointers to further reading to brush up my theory in this regard please?

Follow the long trail of my messages on the lists about it; I'd
say about 1/3 of my posts have to do with the problems we were
facing before contracting Vadim to do the patches available at:

  http://people.freebsd.org/~alfred/vacfix/

> 
> IAC, regarding the actual inquiry, wouldn't a replicated database on a
> second server be cheaper than Oracle, if the party is satisfied with
> PG performance? I browsed a PG commercial organization's site that mentioned
> a Replication Server being available for PG. I am about to look into
> that next month. Is it as good as PG itself? It would provide failover too,
> rather than using Oracle.

It should, but I haven't read up on it much.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



[GENERAL] Postgres-Book from addison-wesley?

2001-01-23 Thread Konstantinos Agouros

Hi,

When I looked at the database section of my bookstore, browsing for the
Practical SQL Handbook, I found a book on Postgres from Addison-Wesley. Is
that one any good? (I didn't have the time to look into it.)

Konstantin
-- 
Dipl-Inf. Konstantin Agouros aka Elwood Blues. Internet: [EMAIL PROTECTED]
Otkerstr. 28, 81547 Muenchen, Germany. Tel +49 89 69370185

"Captain, this ship will not sustain the forming of the cosmos." B'Elana Torres



[GENERAL] Re: VACUUM and 24/7 database operation

2001-01-23 Thread Sanjay Arora

Tom,

Shouldn't it be possible to build vacuum as an ongoing internal PG process,
instead of a separate operation? How does Oracle bypass this? Must be some
way that can be implemented.

Any pointers to further reading to brush up my theory in this regard please?

IAC, regarding the actual inquiry, wouldn't a replicated database on a
second server be cheaper than Oracle, if the party is satisfied with
PG performance? I browsed a PG commercial organization's site that mentioned
a Replication Server being available for PG. I am about to look into
that next month. Is it as good as PG itself? It would provide failover too,
rather than using Oracle.

With best regards.
Sanjay.

At 05:53 PM 1/23/01 , Tom Lane wrote:
>[EMAIL PROTECTED] writes:
>> - Is 2 minutes a standard time for vacuuming a 500,000-row table?
>> - Can it be reduced?
>> - In the far future, what are the problems we can run into by not vacuuming 
>> that table? We have already seen that after a month, some transactions 
>> involving where id >= some_value take forever, so we suppressed them.
>
>If it takes a month before query performance gets bad, then perhaps you
>could vacuum the table only once a month.  However, that vacuum would
>probably take longer than two minutes, so it's a tradeoff...
>
>We have plans for 7.2 to reduce the need for periodic vacuums, but that
>won't help you much now.
>
>There are patches available for a "lazy vacuum" process on 7.0.3,
>which can be a win if vacuum only needs to get rid of a few rows.
>But they're not very thoroughly tested IMHO.  See
>http://people.freebsd.org/~alfred/vacfix/
>
>   regards, tom lane
> 



Re: [GENERAL] \copy not importing in 702... (SOLVED)

2001-01-23 Thread Anand Raman

Hi guys,
I just realized that this is occurring because of the absence of a
primary key in the other table.

Thus the copy command exited without any failure notice.

Thanx
Anand 

On Wed, Jan 24, 2001 at 12:29:18AM +0530, Anand Raman wrote:
>Hi guys,
>
>I am having a strange problem with \copy.
>When I am trying to copy from a CSV file, \copy doesn't report any
>error and quietly returns to the command prompt.
>However, when I do a select count(*) from the table, it returns no rows.
>
>Can anyone help me with this?
>
>A sample dialog is reproduced here:
>
>arttoday=# delete from exhibit_distributions ;
>DELETE 0
>arttoday=# select count(*) from exhibit_distributions ;
>count 
>---
>0
>(1 row)
>
>arttoday=# \copy exhibit_distributions from stdin using delimiters '|' with null as ''
>456|356|1001|'2001-01-23'|||5||/artetc/reproductions/grfx/small/04.jpg|/artetc/reproductions/grfx/large/04.jpg|94|74|||Dimensions|2|18|f|AT95004||ONLY_WORLD|INDIA|T||'ARTETC'
>\.
>arttoday=# select count(*) from exhibit_distributions ;
>count 
>---
>0
>(1 row)
>arttoday=# select version();
>version   
>---------------------------------------------------------------------
>PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66
>(1 row)
>
>
>Any help will be appreciated.
>
>
>Thanx
>Anand 



[GENERAL] \copy not importing in 702...

2001-01-23 Thread Anand Raman

Hi guys,

I am having a strange problem with \copy.
When I am trying to copy from a CSV file, \copy doesn't report any
error and quietly returns to the command prompt.
However, when I do a select count(*) from the table, it returns no rows.

Can anyone help me with this?

A sample dialog is reproduced here:

arttoday=# delete from exhibit_distributions ;
DELETE 0
arttoday=# select count(*) from exhibit_distributions ;
count 
---
0
(1 row)

arttoday=# \copy exhibit_distributions from stdin using delimiters '|' with null as ''
456|356|1001|'2001-01-23'|||5||/artetc/reproductions/grfx/small/04.jpg|/artetc/reproductions/grfx/large/04.jpg|94|74|||Dimensions|2|18|f|AT95004||ONLY_WORLD|INDIA|T||'ARTETC'
\.
arttoday=# select count(*) from exhibit_distributions ;
count 
---
0
(1 row)
arttoday=# select version();
version   
---------------------------------------------------------------------
PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66
(1 row)


Any help will be appreciated.


Thanx
Anand 



[GENERAL] VACUUM ANALYZE FAILS on 7.0.3

2001-01-23 Thread Dave Cramer

When I run VACUUM ANALYZE it fails and all the backend connections are
closed. Has anyone else run into this problem?


--DC--




[GENERAL] Re: Looking for info on Solaris 7 (SPARC) specific considerations

2001-01-23 Thread Gunnar Rønning

Tom Lane <[EMAIL PROTECTED]> writes:

> > sparc_solaris - SUN SPARC on Solaris 2.4, 2.5, 2.5.1
> 
> After you build PG and test it, send us a port report, and we'll add
> Solaris 7 to the list of recently tested platforms.  That's how it
> works ...
> 

We've had a client running pgsql 7.0 on Solaris 7 since early May 2000. No
problems at all. 

> > Would I use gcc or the native Sun
> > compiler (how do you control that anyway)?
> 
> Try 'em both --- set CC environment variable before running configure
> to control configure's choice of compiler.

Most Solaris 7 installations I've seen come without the Sun compiler as
standard, so gcc is probably your safest bet. You probably have to add
/usr/ccs/bin to your path to get ar and friends.

regards, 

Gunnar



Re: [GENERAL] VACUUM and 24/7 database operation

2001-01-23 Thread Tom Lane

[EMAIL PROTECTED] writes:
> - Is 2 minutes a standard time for vacuuming a 500,000-row table?
> - Can it be reduced?
> - In the far future, what are the problems we can run into by not vacuuming 
> that table? We have already seen that after a month, some transactions 
> involving where id >= some_value take forever, so we suppressed them.

If it takes a month before query performance gets bad, then perhaps you
could vacuum the table only once a month.  However, that vacuum would
probably take longer than two minutes, so it's a tradeoff...

We have plans for 7.2 to reduce the need for periodic vacuums, but that
won't help you much now.

There are patches available for a "lazy vacuum" process on 7.0.3,
which can be a win if vacuum only needs to get rid of a few rows.
But they're not very thoroughly tested IMHO.  See
http://people.freebsd.org/~alfred/vacfix/

regards, tom lane



Re: [GENERAL] VACUUM and 24/7 database operation

2001-01-23 Thread Gordan Bobic

>For one of our customers, we are running a PostgreSQL database on a
> dynamic PHP-driven site. This site has a minimum of 40 visitors at a
> time and must be responsive 24h a day.

And from the bandwidth and hit logs, they cannot determine a time of day
when there are hardly any hits? Possible, but it might be worth a
double-check.

>The problem is with VACUUMing this table. It takes 2 long minutes
> every day. Two minutes during which no request can be done because of the
> lock on the table... (nearly every request involves this large
> table). Our customer really dislikes this fact and is planning to
> replace PostgreSQL with Oracle.

If they are sufficiently desperate over 2 minutes per day, which is, BTW,
less than 0.14% of the time, to want to replace it with Oracle at the full
cost of it (and last time I checked, Oracle in full server setup costs were
in 5 figures, just for the software), then I'd say let them. It's their
money they are throwing away. I am assuming here that they have checked and
confirmed that Oracle would not suffer a similar problem?

>2 minutes is seen by our customer as sufficient for his customers to
> get away from his site.

That would be the 0.14%, would it? What sort of service are they running on
it?

> - Is 2 minutes a standard time for vacuuming a 500,000-row table?

I'd say that is pretty good. I have a 30M row table with 30K-100K
inserts/updates per day (not including selects here), and I'm quite happy
if I can get it to vacuum in less than 15 minutes on a P2/400.

> - Can it be reduced ?

I guess. If you run your database on an ultra-fast RAID stripe, put more
memory and a faster processor in the server, you could probably reduce it.

Have you considered running two servers? If you set up two replicated
servers, then you could run everything off one server. At a convenient time
of day, when there's minimum load, you could vacuum the backup one,
wait for the queued replicated queries to be executed, and then fail over
to it. Then you can vacuum the primary server and make it wait as the
fail-over server until the next time it's time to vacuum the database. The
fail-over should take a matter of seconds (fractions of seconds?),
depending on how you do it. A second server is likely to cost them less
than a full Oracle licence...

Incidentally, how do they handle backups? I am not sure how well a backup
of the database will work out with it still running, and records being
inserted during the backup. Running two servers will also get you around
this problem, in the same way, because you can shut down a database while
you back it up without losing any uptime or data.


> - In the far future, what are the problems we can run into by not vacuuming
> that table? We have already seen that after a month, some transactions
> involving where id >= some_value take forever, so we suppressed them.

Performance on inserts and updates will degrade, especially if there are
lots of deletes as well. Basically, the tables will get fragmented, because
deletes, AFAIK, do lazy deletion, so the tables aren't cleaned out until
you vacuum them. This also tends to mess up the index timings because there
are a lot of dead records in them.

Eventually, it will grind to a halt.

Depending on what your performance and requirements are, you could do a
vacuum once per week, perhaps?

Regards.

Gordon




[GENERAL] Re: plpgsql - cont'd

2001-01-23 Thread Mitch Vincent

Ok, after some more playing, this works.

CREATE FUNCTION invoice_payment() RETURNS OPAQUE AS '
BEGIN

    UPDATE invoice_master
       SET total = total - NEW.amount,
           updated = now(),
           is_paid = (CASE WHEN total::numeric = NEW.amount::numeric
                           THEN TRUE::bool ELSE FALSE::bool END)
     WHERE invoice_id = NEW.invoice_id;

    RETURN NEW;

END;
' LANGUAGE 'plpgsql';


Thanks again for answering my stupid little questions, Tom :-)

-Mitch

- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Mitch Vincent" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, January 23, 2001 12:01 PM
Subject: Re: plpgsql - cont'd


> "Mitch Vincent" <[EMAIL PROTECTED]> writes:
> > Hmm, this is the third time this has happened.. I am using 7.1 Bert 3,
so I
> > expected some things like this... When deleting and adding functions
back,
> > when I run them I get :
>
> > ERROR:  plpgsql: cache lookup for proc 49237 failed
>
> > -- What's happening there and is there anyway to fix this without having
to
> > dump/restore (which is what I've had to do thus far.. ) ?
>
> dump/restore is the hard way.  If you delete and recreate a function,
> the new incarnation has a new OID, so anything that referred to the old
> OID is now broken, and has to be deleted/recreated itself.  Triggers
> are one such reference.  Cached query plans are another, although I
> don't think that's the issue here.  How are you invoking the functions,
> anyway?
>
> regards, tom lane
>




Re: [GENERAL] Outer Joins

2001-01-23 Thread Bruce Momjian

> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Can someone explain why cname and date from table c gets printed in this
> > query?
> 
> Say what?
> 
> test=# CREATE TABLE a (id INTEGER, name TEXT, aname TEXT);
> CREATE
> test=# CREATE TABLE b (id INTEGER, name TEXT, bname TEXT);
> CREATE
> test=# CREATE TABLE c (id INTEGER, name TEXT, cname TEXT, date DATE DEFAULT
> test(# CURRENT_DATE);
> CREATE
> test=# SELECT * FROM a FULL OUTER JOIN b USING (id);
>  id | name | aname | name | bname
> ----+------+-------+------+-------
> (0 rows)

The text of the email showed this query returning columns from table c. 
Seems it was just a mistake in the email.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [GENERAL] Another plpgsql question..

2001-01-23 Thread Tom Lane

"Mitch Vincent" <[EMAIL PROTECTED]> writes:
> ERROR:  Unable to identify an operator '=' for types 'numeric' and 'float8'
> You will have to retype this query using an explicit cast

> --- amount and total are both numeric(9,2),

Hm, then "(total - NEW.amount) = 0.00::numeric" should work, although
I don't see why you don't just write it as "total = NEW.amount".
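
A minimal standalone sketch of the type clash and of both fixes (the table
and values here are invented for illustration):

-- An unadorned 0.00 literal is taken as float8, and there is no
-- numeric = float8 operator, hence the error above.
CREATE TABLE cast_demo (total numeric(9,2), amount numeric(9,2));
INSERT INTO cast_demo VALUES ('150.00', '150.00');

SELECT (total - amount) = 0.00::numeric AS paid_with_cast,  -- explicit cast
       total = amount                   AS paid_direct      -- simpler form
  FROM cast_demo;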

regards, tom lane



[GENERAL] showing X fields from X total

2001-01-23 Thread Marcos

Hi,
I have PostgreSQL 6.4 and JDBC 6.4.
I am making a query that returns about 6000 records. I would like to
restrict the query so that the result is shown 100 records at a time.
How can I write that SQL query?

Currently I just do "select * from files order by (name)";
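
One way to do this on 6.4, which predates LIMIT/OFFSET, is to page through a
cursor; a sketch assuming the files/name names from the query above (the same
statements can be sent through JDBC):

BEGIN;
DECLARE pager CURSOR FOR SELECT * FROM files ORDER BY name;
FETCH FORWARD 100 IN pager;   -- first 100 rows
FETCH FORWARD 100 IN pager;   -- next 100, and so on
CLOSE pager;
END;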

thanks in advance,

marcos

--
m a r c o s @ i v a l . e s




Re: [GENERAL] Outer Joins

2001-01-23 Thread Tom Lane

Bruce Momjian <[EMAIL PROTECTED]> writes:
> Can someone explain why cname and date from table c get printed in this
> query?

Say what?

test=# CREATE TABLE a (id INTEGER, name TEXT, aname TEXT);
CREATE
test=# CREATE TABLE b (id INTEGER, name TEXT, bname TEXT);
CREATE
test=# CREATE TABLE c (id INTEGER, name TEXT, cname TEXT, date DATE DEFAULT
test(# CURRENT_DATE);
CREATE
test=# SELECT * FROM a FULL OUTER JOIN b USING (id);
 id | name | aname | name | bname
----+------+-------+------+-------
(0 rows)


regards, tom lane



[GENERAL] System Tables ER / ALTER COLUMN

2001-01-23 Thread Nelio Alves Pereira Filho

I was wondering if there is an ER diagram of the system tables.
As pgsql doesn't have a command to change the type of a field, I thought
of writing a procedure that does that by changing the system tables.
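
A rough sketch of the usual SQL-level workaround (rebuilding the table rather
than editing the catalogs), with invented table and column names; indexes,
constraints, and grants have to be recreated afterwards:

-- widen t.code from varchar(10) to varchar(20) by copying into a new table
CREATE TABLE t_new (id integer, code varchar(20));
INSERT INTO t_new SELECT id, code FROM t;
DROP TABLE t;
ALTER TABLE t_new RENAME TO t;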

Tks

-- 
Nelio Alves Pereira Filho
IFX Networks - www.ifx.com.br
+55 11 3365-5863
[EMAIL PROTECTED]



Re: [GENERAL] "recovery mode"

2001-01-23 Thread Steve Wolfe

> I don't think recovery mode actually does much in 7.0.* --- I think it's
> just a stub (Vadim might know better though).  In 7.1 it means the thing
> is replaying the WAL log after a crash.  In any case it shouldn't
> create a lockup condition like that.
>
> The only cases I've ever heard of where a user process couldn't be
> killed with kill -9 are where it's stuck in a kernel call (and the
> kill response is being held off till the end of the kernel call).
> Any such situation is arguably a kernel bug, of course, but that's
> not a lot of comfort.
>
> Exactly which process were you sending kill -9 to, anyway?  There should
> have been a postmaster and one backend running the recovery-mode code.
> If the postmaster was responding to connection requests with an error
> message, then I would not say that it was locked up.

  I believe that it was a backend that I tried -9'ing.  I knew it wasn't
a good thing to do, but I had to get it running again.  It's amazing
how bold you get when you hear an entire department mumbling about "Why
isn't the site working?". : )

   Anyway, I think the problem wasn't in postgres.  I rebooted the machine,
and it worked - for about ten minutes.  Then, it froze, with the kernel
crapping out.   I rebooted it, it lasted about three minutes until the same
thing happened.  Reboot, it didn't even get through the fsck before it did
it again.

I looked at the CPU temps, one of the four was warmer than it should be,
but still within acceptable limits (40 C).  So, I shut it down, reseated the
RAM chassis, the DIMM's, the CPU's, and the expansion cards.  When it came
up, I compiled and put on a newer kernel (I guess there was some good in the
crashes), and then it worked fine.  Because of the symptoms, I imagine that
it was a flakey connection.   Odd, considering that everything except the
DIMM's (including the CPU's) are literally screwed to the motherboard!

steve






Re: [GENERAL] Looking for info on Solaris 7 (SPARC) specific considerations

2001-01-23 Thread Tom Lane

Frank Joerdens <[EMAIL PROTECTED]> writes:
> I am faced with the task of installing, configuring, and tuning my
> database, which is currently running under Linux, under Solaris 7 on a
> brand-new and shiny Sun UltraSPARC (3 CPUs, 768 MB RAM), because the
> sysadmin at the site hasn't used or installed PostgreSQL and would
> rather have me do it. Is this actually supported? The FAQ (the one
> bundled with the 7.1 beta3 which I'll be using) lists only:

> sparc_solaris - SUN SPARC on Solaris 2.4, 2.5, 2.5.1

After you build PG and test it, send us a port report, and we'll add
Solaris 7 to the list of recently tested platforms.  That's how it
works ...

> Would I use gcc or the native Sun
> compiler (how do you control that anyway)?

Try 'em both --- set CC environment variable before running configure
to control configure's choice of compiler.

regards, tom lane



Re: [GENERAL] Data entry screen building utilities

2001-01-23 Thread Tim Barnard

Interesting idea. I hadn't thought of trying this.
Thanks.

Tim

- Original Message -
From: "Prasanth Kumar" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, January 22, 2001 7:03 PM
Subject: Re: [GENERAL] Data entry screen building utilities


> 
> >Rather than writing PostgreSQL data entry screens from scratch using
> >ncurses and cdk, is anyone aware of any data entry screen building
> >utilities useful for character-based screens, as opposed to GUI
> >utilities?
> >
> >Tim
> 
> I don't know of such a utility, but I remember that the tool 'pgaccess' lets
> you create a simple form under Tk/Tcl and saves it as a simple string in the
> database. Maybe you could create an ncurses-based library to parse this
> string and produce a text-based data entry screen?
>
> --
> Prasanth Kumar
> [EMAIL PROTECTED]
>
>




Re: [GENERAL] plpgsql - cont'd

2001-01-23 Thread Tom Lane

"Mitch Vincent" <[EMAIL PROTECTED]> writes:
> Hmm, this is the third time this has happened.. I am using 7.1 Beta 3, so I
> expected some things like this... When deleting and adding functions back,
> when I run them I get :

> ERROR:  plpgsql: cache lookup for proc 49237 failed

> -- What's happening there and is there any way to fix this without having to
> dump/restore (which is what I've had to do thus far.. ) ?

dump/restore is the hard way.  If you delete and recreate a function,
the new incarnation has a new OID, so anything that referred to the old
OID is now broken, and has to be deleted/recreated itself.  Triggers
are one such reference.  Cached query plans are another, although I
don't think that's the issue here.  How are you invoking the functions,
anyway?
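
If the function is fired by a trigger, for example, the trigger itself has to
be dropped and recreated against the new function; a sketch using the
invoice_payments table from this thread (the trigger name is invented):

DROP TRIGGER invoice_payment_trig ON invoice_payments;
CREATE TRIGGER invoice_payment_trig
    AFTER INSERT ON invoice_payments
    FOR EACH ROW EXECUTE PROCEDURE invoice_payment();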

regards, tom lane



[GENERAL] Adding procedural languages

2001-01-23 Thread Jeff Meeks

Hi,
Is it safe to recompile postgres to get support for other procedural
languages going?
I have tried to compile just the languages in the src/interfaces
directory, but to no avail.

What is the best way to get them going after you install Postgres?
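
Roughly, the procedural languages are built under src/pl and then have to be
registered in each database, either with the createlang script or by hand; a
sketch of the PL/pgSQL registration, where the library path is an assumption
that depends on your installation prefix:

-- what createlang effectively does for plpgsql
-- (adjust the .so path to wherever your build installed it)
CREATE FUNCTION plpgsql_call_handler () RETURNS OPAQUE
    AS '/usr/local/pgsql/lib/plpgsql.so' LANGUAGE 'C';
CREATE TRUSTED PROCEDURAL LANGUAGE 'plpgsql'
    HANDLER plpgsql_call_handler
    LANCOMPILER 'PL/pgSQL';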

Thanks
Jeff Meeks
[EMAIL PROTECTED]



Re: [GENERAL] Data entry screen building utilities

2001-01-23 Thread Tim Barnard

"I haven't looked at it extensively, but is there a reason why you wouldn't
want to use it?"

I was hoping for something that would help me quickly put together some
screens, without extensive coding. I didn't think such a utility existed but
it was worth asking :-)

Tim


- Original Message -
From: "Paul M Foster" <[EMAIL PROTECTED]>
To: "Tim Barnard" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, January 22, 2001 5:45 PM
Subject: Re: [GENERAL] Data entry screen building utilities


> On Mon, Jan 22, 2001 at 03:14:36PM -0800, Tim Barnard wrote:
>
> > Rather than writing PostgreSQL data entry screens from scratch using
> > ncurses and cdk, is anyone aware of any data entry screen building
> > utilities useful for character-based screens, as opposed to GUI
> > utilities?
>
> I had never heard of cdk, but I found and downloaded it, since I'm doing
> similar console-type work. I haven't looked at it extensively, but is
> there a reason why you wouldn't want to use it?
>
> Paul
>




[GENERAL] VACUUM and 24/7 database operation

2001-01-23 Thread Thomas . Favier

Hello,

   For one of our customers, we are running a PostgreSQL database on a 
dynamic PHP-driven site. This site has a minimum of 40 visitors at a 
time and must be responsive 24h a day.

   One of the tables has 500,000 rows and is very frequently accessed 
(it is the table registering basic user info). We have no performance 
problem despite the large amount of updates done on this table.

   The problem is with VACUUMing this table. It takes 2 long minutes 
every day. Two minutes during which no request can be done because of the 
lock on the table... (nearly every request involves this large 
table). Our customer really dislikes this fact and is planning to 
replace PostgreSQL with Oracle.
   2 minutes is seen by our customer as sufficient for his customers to 
get away from his site.

   Questions:

- Is 2 minutes a standard time for vacuuming a 500,000-row table?

- Can it be reduced?

- In the far future, what are the problems we can run into by not vacuuming 
that table? We have already seen that after a month, some transactions 
involving where id >= some_value take forever, so we suppressed them.

Below are details on the table:

erp-# \d visiteurs
          Table "visiteurs"
   Attribute   |     Type     |       Modifier       
---------------+--------------+----------------------
 id            | integer      | not null
 login         | varchar(127) | not null
 password      | varchar(10)  | not null
 name          | varchar(10)  | not null
 datecrea      | timestamp    | not null
 payszoneid    | varchar(127) | not null
 ptzoneid      | varchar(127) | not null
 dialertitle   | varchar(15)  | 
 referer       | varchar(255) | 
 exported      | varchar(2)   | not null default 'N'
 earncentmin   | float8       | 
 opearncentmin | float8       | 
 ret           | float8       | 
 paymentid     | integer      | 
 entiteid      | varchar(127) | not null
 etat          | varchar(2)   | default 'E'
 devise        | smallint     | 
 entitelogin   | varchar(20)  | 
Indices: visiterus_etat,
         visiteurs_exported,
         visiteurs_id_btree,
         visiteurs_login

erp=# select relname,relpages,reltuples from pg_class where 
relname='visiteurs';
  relname  | relpages | reltuples 
-----------+----------+-----------
 visiteurs |    14549 |    584489
(1 row)

Thank you.

Thomas FAVIER
[EMAIL PROTECTED]
__
ACCELANCE - www.accelance.fr
97, rue Racine - 69100 Villeurbanne
Tel: +33 (0)4 37 43 12 22 / Fax: +33 (0)4 37 43 12 20
__




Re: [GENERAL] Data entry screen building utilities

2001-01-23 Thread Tim Barnard

Thanks for the idea, but as you stated, lynx is too crude an interface.

Tim

- Original Message -
From: "Robert B. Easter" <[EMAIL PROTECTED]>
To: "Tim Barnard" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Monday, January 22, 2001 11:07 PM
Subject: Re: [GENERAL] Data entry screen building utilities


> You might be able to just use a web interface, like PHP scripts, but use a
> text console web browser, like lynx.  This would let the entry screen be
> accessible from GUI terminals too.  Then again, lynx might be too crude an
> interface.
>





Re: [GENERAL] Re: postgres memory management

2001-01-23 Thread Tom Lane

Alexander Jerusalem <[EMAIL PROTECTED]> writes:
> you are right, I have the same problem on RedHat. After I inserted -e it
> works so far. But there's something else that seems strange to me; I'm not
> quite sure if I'm reading this right, since I understand only half of what
> happens in this script. After the comment that says "Don't do anything if
> process still running..." there is the following sequence of lines:

> ps hj$ipcs_pid >/dev/null 2>&1
> if [ $? -eq 0 ]; then
>  echo "skipped"

> As I understand it, the if statement tests the exit status of the previous ps
> statement. The strange thing is that the variable $ipcs_pid is never set
> anywhere before this line, so I think it's always null (or whatever this
> scripting language defaults to). There are three other variables ipcs_id, 
> ipcs_cpid and ipcs_lpid but no ipcs_pid. If I'm right here, it seems that 
> this script does effectively nothing in terms of shared memory.

I think you are right --- the Linux portion of this script is broken.
Aside from the bogus variable, the awk call at the top of the loop is
wrong (printf has three arguments and only two percents).  Given those
two typos, there are probably more.

Feel free to submit a patch to make it actually work ...

regards, tom lane



Re: [GENERAL] Looking for info on Solaris 7 (SPARC) specificconsiderations

2001-01-23 Thread Peter Eisentraut

First, you read the installation instructions.  Then, if specific problems
come up you send specific problem reports.  But I think we have this
platform pretty much covered.  Good luck.


Frank Joerdens writes:

> I am faced with the task of installing, configuring, and tuning my
> database, which is currently running under Linux, under Solaris 7 on a
> brand-new and shiny Sun UltraSPARC (3 CPUs, 768 MB RAM), because the
> sysadmin at the site hasn't used or installed PostgreSQL and would rather
> have me do it. Is this actually supported? The FAQ (the one bundled with
> the 7.1 beta3 which I'll be using) lists only:
>
> sparc_solaris - SUN SPARC on Solaris 2.4, 2.5, 2.5.1
>
> If it is supported (I don't suppose a little OS version number increment
> would make a difference here), I've never used Solaris or anything other
> than Intel-based hardware and am looking for some info on what to watch
> out for and consider when installing and tuning PostgreSQL on Solaris on
> a SPARC platform. Aside from the shared memory stuff in the Admin
> Guide, I haven't found anything so far. Particularly, I would expect
> that you could gain a significant performance boost from running the
> database on a 64 bit platform (without knowing exactly why, only
> picking up on word-of-mouth and assorted hype on 64 bit architectures).
> How do you get the most out of it? Would I use gcc or the native Sun
> compiler (how do you control that anyway)?
>
> Any pointers would be much appreciated.
>
> Thanks, Frank
>
>

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




Re: [GENERAL] Outer Joins

2001-01-23 Thread Bruce Momjian

Can someone explain why cname and date from table c get printed in this
query?

Thanks.

> SELECT * FROM a FULL OUTER JOIN b USING (id)
> 
>  id | name | aname  | name | bname  | name |  cname  |    date    
> ----+------+--------+------+--------+------+---------+------------
>   1 | Bob  | aname1 | Bob  | bname1 | Bob  | cname1  | 2001-01-07
>   2 | Jim  | aname2 | Tom  | bname2 | Jim  | cname2  | 2001-01-07
>   9 |      |        |      |        | Tom  | cname9  | 2001-01-07
>  10 |      |        |      |        |      | cname10 | 2001-01-07
> (4 rows)
> 

---


> On Saturday 06 January 2001 20:21, Tom Lane wrote:
> > "Robert B. Easter" <[EMAIL PROTECTED]> writes:
> > > What is the syntax for this?  Is there an example I can see/run?
> >
> > SQL92 standard.
> >
> > See
> > http://www.postgresql.org/devel-corner/docs/postgres/sql-select.htm
> > for documentation (such as it is).  There are some examples in the
> > join regression test, too.
> >
> > regards, tom lane
> 
> Thanks. I've tested out the cvs version and see that these joins appear to 
> work:
> 
> Qualified join:
> T1 INNER|[LEFT|RIGHT|FULL OUTER] JOIN T2 ON|USING() ...
> 
> Natural join:
> T1 NATURAL INNER|[LEFT|RIGHT|FULL OUTER] JOIN T2
> 
> Cross join:
> T1 CROSS JOIN T2
> 
> But, 
> 
> Union join:
> T1 UNION JOIN T2
> 
> is not implemented.  Nice! :)
> 
> 
> Here is a sample running of all this on cvs pgsql:
> 
> CREATE TABLE a (id INTEGER, name TEXT, aname TEXT);
> CREATE TABLE b (id INTEGER, name TEXT, bname TEXT);
> CREATE TABLE c (id INTEGER, name TEXT, cname TEXT, date DATE DEFAULT 
> CURRENT_DATE);
> 
> INSERT INTO a VALUES (1, 'Bob', 'aname1');
> INSERT INTO a VALUES (2, 'Jim', 'aname2');
> INSERT INTO a VALUES (3, 'Tom', 'aname3');
> INSERT INTO a VALUES (7, 'Joe', 'aname7');
> INSERT INTO a VALUES (8, null, 'aname8');
> 
> INSERT INTO b VALUES (1, 'Bob', 'bname1');
> INSERT INTO b VALUES (2, 'Tom', 'bname2');
> INSERT INTO b VALUES (3, 'Joe', 'bname3');
> INSERT INTO b VALUES (5, 'Jim', 'bname5');
> INSERT INTO b VALUES (6, null, 'bname6');
> 
> INSERT INTO c VALUES (1, 'Bob', 'cname1');
> INSERT INTO c VALUES (2, 'Jim', 'cname2');
> INSERT INTO c VALUES (9, 'Tom', 'cname9');
> INSERT INTO c VALUES (10, null, 'cname10');
> 
> -- Qualified Joins
> SELECT * FROM a JOIN b USING (id) JOIN c USING (id);
> SELECT * FROM a INNER JOIN b ON (a.id = b.id);
> SELECT * FROM a LEFT OUTER JOIN b USING(id, name);
> SELECT * FROM a RIGHT OUTER JOIN b USING (id);
> SELECT * FROM a FULL OUTER JOIN b USING (id)
>   RIGHT OUTER JOIN c USING(id);
> -- Natural Joins
> SELECT * FROM a NATURAL INNER JOIN b;
> SELECT * FROM a NATURAL LEFT OUTER JOIN b;
> SELECT * FROM a NATURAL RIGHT OUTER JOIN b;
> SELECT * FROM a NATURAL FULL OUTER JOIN b;
> -- Cross Join
> SELECT * FROM a CROSS JOIN b;
> -- Union Join (not implemented, yet)
> SELECT * FROM a UNION JOIN b;
> 
> 
> The output is like this with cvs version:
> 
> 
> CREATE TABLE a (id INTEGER, name TEXT, aname TEXT);
> CREATE
> CREATE TABLE b (id INTEGER, name TEXT, bname TEXT);
> CREATE
> CREATE TABLE c (id INTEGER, name TEXT, cname TEXT, date DATE DEFAULT 
> CURRENT_DATE);
> CREATE
> INSERT INTO a VALUES (1, 'Bob', 'aname1');
> INSERT 21117 1
> INSERT INTO a VALUES (2, 'Jim', 'aname2');
> INSERT 21118 1
> INSERT INTO a VALUES (3, 'Tom', 'aname3');
> INSERT 21118 1
> INSERT INTO a VALUES (7, 'Joe', 'aname7');
> INSERT 21119 1
> INSERT INTO a VALUES (8, null, 'aname8');
> INSERT 21120 1
> 
> INSERT INTO b VALUES (1, 'Bob', 'bname1');
> INSERT 21121 1
> INSERT INTO b VALUES (2, 'Tom', 'bname2');
> INSERT 21122 1
> INSERT INTO b VALUES (3, 'Joe', 'bname3');
> INSERT 21122 1
> INSERT INTO b VALUES (5, 'Jim', 'bname5');
> INSERT 21122 1
> INSERT INTO b VALUES (6, null, 'bname6');
> INSERT 21123 1
> 
> INSERT INTO c VALUES (1, 'Bob', 'cname1');
> INSERT 21124 1
> INSERT INTO c VALUES (2, 'Jim', 'cname2');
> INSERT 21125 1
> INSERT INTO c VALUES (9, 'Tom', 'cname9');
> INSERT 21126 1
> INSERT INTO c VALUES (10, null, 'cname10');
> INSERT 21127 1
> 
> SELECT * FROM a JOIN b USING (id) JOIN c USING (id);
> 
>  id | name | aname  | name | bname  | name | cname  |    date    
> ----+------+--------+------+--------+------+--------+------------
>   1 | Bob  | aname1 | Bob  | bname1 | Bob  | cname1 | 2001-01-07
>   2 | Jim  | aname2 | Tom  | bname2 | Jim  | cname2 | 2001-01-07
> (2 rows)
> 
> SELECT * FROM a INNER JOIN b ON (a.id = b.id);
> 
>  id | name | aname  | id | name | bname
> ----+------+--------+----+------+--------
>   1 | Bob  | aname1 |  1 | Bob  | bname1
>   2 | Jim  | aname2 |  2 | Tom  | bname2
>   3 | Tom  | aname3 |  3 | Joe  | bname3
> (3 rows)
> 
> SELECT * FROM a LEFT OUTER JOIN b USING(id, name);
> 
>  id | name | aname  | bname
> ----+------+--------+--------
>   1 | Bob  | aname1 | bname1
>   2 | Jim  | a

Re: [GENERAL] Another plpgsql question..

2001-01-23 Thread Mitch Vincent

Ok, it appears now I have an error. Unless I'm going crazy, this started
after I had to do a restore because of one of those cache lookup errors. I
changed nothing; still, this is what I get:


CREATE FUNCTION invoice_payment() RETURNS OPAQUE AS '
BEGIN

    UPDATE invoice_master
       SET total = total - NEW.amount,
           updated = now(),
           is_paid = (CASE WHEN (total - NEW.amount) = 0.00
                           THEN ''t'' ELSE ''f'' END)
     WHERE invoice_id = NEW.invoice_id;

    RETURN NEW;

END;
' LANGUAGE 'plpgsql';


Now I get

brw=# INSERT into invoice_payments VALUES
(1,1000,'now',100,'now',100,1,1,150.00);
ERROR:  Unable to identify an operator '=' for types 'numeric' and 'float8'
You will have to retype this query using an explicit cast
DEBUG:  Last error occured while executing PL/pgSQL function invoice_payment
DEBUG:  line 2 at SQL statement
ERROR:  Unable to identify an operator '=' for types 'numeric' and 'float8'
You will have to retype this query using an explicit cast
brw=#

--- amount and total are both numeric(9,2). I've tried casting everything
(total, amount, and 0.00) to float and everything to numeric, with the same
error popping up. What needs casting here?

I can determine whether an invoice has been paid in a number of ways; really,
what I should do there is NEW.amount >= total -- I tried that and got the
above error again.

Thanks!

-Mitch

- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Mitch Vincent" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, January 23, 2001 11:05 AM
Subject: Re: [GENERAL] Another plpgsql question..


> "Mitch Vincent" <[EMAIL PROTECTED]> writes:
> > is_paid is never updated...
>
> It's not possible that is_paid is never updated; that command *will*
> replace the total, updated, and is_paid columns with *something*.
> It may be that in the cases you've checked, it gets updated to the
> same value it had before.  That's why I want to see the test cases.
>
> regards, tom lane
>




Re: [GENERAL] Re: OID/XID allocation (was Re: is PG able to handle a>500 GB Da tabase?)

2001-01-23 Thread Bruce Momjian

> Bruce Momjian writes:
> 
> > Added to TODO:
> >
> > * Remove unused pg_variable table
> 
> While you're at it:  Remove unused pg_inheritproc and pg_ipl tables.

Added to TODO:

* Remove unused pg_variable, pg_inheritproc, pg_ipl tables

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [GENERAL] Another plpgsql question..

2001-01-23 Thread Tom Lane

"Mitch Vincent" <[EMAIL PROTECTED]> writes:
> is_paid is never updated...

It's not possible that is_paid is never updated; that command *will*
replace the total, updated, and is_paid columns with *something*.
It may be that in the cases you've checked, it gets updated to the
same value it had before.  That's why I want to see the test cases.
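
A quick way to see what the trigger actually wrote, using the column names
from this thread (substitute whatever invoice_id your test insert touched):

SELECT invoice_id, total, is_paid, updated
  FROM invoice_master
 WHERE invoice_id = 1;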

regards, tom lane