[GENERAL] Automatically updating a timestamp upon UPDATE query?

2001-01-12 Thread Ed Loehr

How do you most easily create a timestamp column that updates to the
current time on every UPDATE statement?  I know you can use triggers, but
I thought there was a default value that did this for you.  'current'
does not seem appropriate, nor have I made it work (see below for my
example).

http://www.postgresql.org/docs/postgres/x1137.htm :
"'now' is resolved when the value is inserted, 'current' is 
 resolved everytime the value is retrieved."

Any clues?

Regards,
Ed Loehr

BTW, here's the sequence I used to conclude 'current' doesn't seem to
work:

DROP TABLE foo;
CREATE TABLE foo (
    note    VARCHAR,
    created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated TIMESTAMP NOT NULL DEFAULT 'current'
);

SELECT now();
INSERT INTO foo (note) VALUES ('Default timestamp test record');
SELECT created FROM foo;
SELECT now();
SELECT * FROM foo;

--  Wait a few seconds so the timestamp clearly changes...

SELECT now();
UPDATE foo SET note = 'Did the update_time change with an UPDATE query?';
SELECT now();
SELECT * FROM foo; 
SELECT updated FROM foo;
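
For reference, a trigger is indeed the usual way to get this behaviour rather
than a column default.  A minimal sketch in the 7.x syntax (the function and
trigger names are illustrative, and PL/pgSQL must already be installed in the
database, e.g. with createlang plpgsql):

-- Keeps foo.updated current on every UPDATE.  Trigger functions return
-- 'opaque' in the 7.x releases discussed here; later releases use
-- RETURNS trigger instead.
CREATE FUNCTION touch_updated() RETURNS opaque AS '
BEGIN
    NEW.updated := now();
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER foo_touch_updated
    BEFORE UPDATE ON foo
    FOR EACH ROW EXECUTE PROCEDURE touch_updated();

With that in place, rerunning the UPDATE test above should show the updated
column changing while created stays put.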



Re: [GENERAL] exit status 26

2001-01-12 Thread Tom Lane

Dan Moschuk <[EMAIL PROTECTED]> writes:
> So it is getting a virtual timer from somewhere.  A grep of the backend
> directory revealed no setitimer() occurrences, so I can confirm that it
> isn't the postmaster doing it.  And since the shell's environment is
> unlimited, that doesn't leave very many culprits. :/

Yes.  It would be good to question my assumption that the exit code
is a signal --- I can't see what else it could be, but can you run
the backend with a breakpoint set at exit(), and get a backtrace?

regards, tom lane



Re: [GENERAL] exit status 26

2001-01-12 Thread Dan Moschuk


| > Signal 26 on FreeBSD is SIGVTALRM.
| 
| We don't ever set a virtual timer alarm, either.  Is it possible that
| you are running the postmaster with a ulimit-style limit on total
| process runtime?

No, I've tested this and the postmaster is being started in an unlimited
environment.

The code that delivers the actual signal looks like this:

if (CLKF_USERMODE(frame) &&
timevalisset(&pstats->p_timer[ITIMER_VIRTUAL].it_value) &&
itimerdecr(&pstats->p_timer[ITIMER_VIRTUAL], tick) == 0)
psignal(p, SIGVTALRM);

So it is getting a virtual timer from somewhere.  A grep of the backend
directory revealed no setitimer() occurrences, so I can confirm that it
isn't the postmaster doing it.  And since the shell's environment is
unlimited, that doesn't leave very many culprits. :/

-- 
Man is a rational animal who always loses his temper when he is called
upon to act in accordance with the dictates of reason.
-- Oscar Wilde



[GENERAL] Re: mnogosearch -- pgsql seem so slow, please help me find out why

2001-01-12 Thread Tom Lane

"Thomas T. Thai" <[EMAIL PROTECTED]> writes:
> On Fri, 12 Jan 2001, Tom Lane wrote:
>> "Thomas T. Thai" <[EMAIL PROTECTED]> writes:
>>> 'select * from url' from psql monitor took 59 seconds.
>> 
>> How big is the table?  Your EXPLAIN mentions 99256 rows, but I can't
>> tell if that stat is up-to-date or not.

> it is 99256. i don't think it's that big of a table is it? typically the
> query under mnogo takes less than a second, at most a couple seconds but
> not 50+ secs.

Typical queries don't retrieve the whole of a large table.  SQL speed
is all about *not* doing that.  Do the math: while I don't know the
average width of your rows, it looked like 500 bytes would be in the
right ballpark.  So you're moving circa 50 megabytes of data in this
query, or a tad under a meg/second, which is not blazing but it's pretty
respectable when you consider the number of layers of software involved.
(What exactly were you doing with the data after it got to psql, BTW?
In my experience psql can chew up near as much CPU time as the backend
for this sort of bulk data display.)  To do this in under a second would
require total throughput exceeding 50 meg/second, which is probably a
good bit more than the sustained read speed your filesystem can achieve,
never mind any time for Postgres' and psql's processing.

If you want a fast search, you should be designing a WHERE clause that
exploits an index so that you *don't* read the entire darn table.
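
For instance (an illustrative sketch only, using the url table shown later in
this thread, and assuming the url_url index is on the url column):

SELECT rec_id, url, title
FROM url
WHERE url = 'http://www.example.com/';

EXPLAIN on a query of that shape should show an index scan on url_url rather
than a sequential scan over all 99256 rows.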

regards, tom lane



[GENERAL] Re: mnogosearch -- pgsql seem so slow, please help me find out why

2001-01-12 Thread Thomas T. Thai

On Fri, 12 Jan 2001, Tom Lane wrote:

> "Thomas T. Thai" <[EMAIL PROTECTED]> writes:
> > 'select * from url' from psql monitor took 59 seconds.
> 
> How big is the table?  Your EXPLAIN mentions 99256 rows, but I can't
> tell if that stat is up-to-date or not.

it is 99256. i don't think it's that big of a table is it? typically the
query under mnogo takes less than a second, at most a couple seconds but
not 50+ secs.

maybe Hermit has some input as he runs it for postgresql.org's search.

> A select like that is going to be pretty much all data transfer: read
> the disk blocks, format the data values, send 'em to the frontend.
> There's hardly anything that Postgres can do to optimize or pessimize
> it.  You might shave a few milliseconds by using a binary cursor (to
> avoid formatting the integer values into ASCII) but probably not a lot.
> 
> If you've done a whole lot of UPDATEs/DELETEs on the table since your
> last VACUUM, then reading empty disk blocks might be costing you some
> time.

i did vacuum analyze.




[GENERAL] on statement triggers

2001-01-12 Thread Peter Gubis


 
hi,
when will 'on statement' triggers be implemented?

-- 
Peter Gubis
Ericsson Business Systems Slovakia Ltd.
Stará Vajnorská 90  Tel.:   +421 7 49499 254
SK 831 04 Bratislava
Slovakia

E-Mail  :   [EMAIL PROTECTED]
 


Re: [GENERAL] mnogosearch -- pgsql seem so slow, please help me find out why

2001-01-12 Thread Tom Lane

"Thomas T. Thai" <[EMAIL PROTECTED]> writes:
> 'select * from url' from psql monitor took 59 seconds.

How big is the table?  Your EXPLAIN mentions 99256 rows, but I can't
tell if that stat is up-to-date or not.

A select like that is going to be pretty much all data transfer: read
the disk blocks, format the data values, send 'em to the frontend.
There's hardly anything that Postgres can do to optimize or pessimize
it.  You might shave a few milliseconds by using a binary cursor (to
avoid formatting the integer values into ASCII) but probably not a lot.
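
For what it's worth, the binary-cursor variant looks roughly like this (the
cursor name is arbitrary, and the binary format is only useful to a client
prepared to decode it, not to plain psql display):

BEGIN;
DECLARE url_cur BINARY CURSOR FOR SELECT * FROM url;
FETCH 1000 FROM url_cur;
CLOSE url_cur;
COMMIT;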

If you've done a whole lot of UPDATEs/DELETEs on the table since your
last VACUUM, then reading empty disk blocks might be costing you some
time.

regards, tom lane



RE: [GENERAL] Correlated subquery/update

2001-01-12 Thread Nick Fankhauser

Sounds like the answer is to upgrade to v7+ 

Much thanks to Tom Lane and Len Morgan for the helpful responses! 

-Nick

>update one set b = (select d from two where one.a = two.c);



[GENERAL] mnogosearch -- pgsql seem so slow, please help me find out why

2001-01-12 Thread Thomas T. Thai

pgsql 7.1-current, mnogosearch-3.1.8, netbsd/alpha-1.5.1-current, dec
alpha 500, 1G ram, uw-scsi

i'm trying to find out why postgres is the bottleneck in my searches with
mnogosearch. i've tried both the search.c and the php version of search, and
they both took up to 50 seconds for a one-word search. this is with the
system sitting pretty much idle.

'select * from url' from psql monitor took 59 seconds.

mnwork=# explain select * from url;
NOTICE:  QUERY PLAN:

Seq Scan on url  (cost=0.00..14473.56 rows=99256 width=144)

EXPLAIN

here is my url table:

mnwork=# \d url
   Table "url"
Attribute|  Type  |   Modifier
-++---
 rec_id  | integer| not null default
nextval('next_url_id'::text)
 status  | integer| not null default 0
 url | character varying(128) | not null
 content_type| character varying(48)  | not null default ''
 title   | character varying(128) | not null default ''
 txt | character varying(255) | not null default ''
 docsize | integer| not null default 0
 last_index_time | integer| not null
 next_index_time | integer| not null
 last_mod_time   | integer|
 referrer| integer| not null default 0
 tag | character varying(10)  | not null default ''
 hops| integer| not null default 0
 category| character varying(10)  | not null default ''
 keywords| character varying(255) | not null default ''
 description | character varying(100) | not null default ''
 crc32   | integer| not null default 0
 lang| character varying(2)   | not null default ' '
Indices: url_crc,
 url_pkey,
 url_url


i just can't tolerate this kind of result for searching. any suggestions
on how to overcome this would be appreciated. tia.





[GENERAL] postgres replication

2001-01-12 Thread Peter Gubis


hello,
i need help with online database replication.
is there any daemon which can do that?
right now i'm writing one based on triggers, but it would be great
if somebody has a link to something like that which is already made.
i found erserver.com, but it's not complete right now and i can't find
any sources for it [i think it is a commercial product].
another solution would be to use some NFS setup [based on Coda, for example],
but i'm not sure a postmaster can share its data directory with other
postmasters ... i think it will not work.
thanks a lot for any answer.
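
As a very rough sketch of the trigger idea (all names here are invented,
"id" stands for a primary key column on the replicated table, and a real
solution still has to ship the log to the other server and apply it there):

CREATE TABLE repl_log (
    logged    timestamp DEFAULT now(),
    tablename name,
    operation text,
    row_id    integer
);

CREATE FUNCTION log_change() RETURNS opaque AS '
BEGIN
    -- record which table was touched, how, and which row
    INSERT INTO repl_log (tablename, operation, row_id)
        VALUES (TG_RELNAME, TG_OP, NEW.id);
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER mytable_log_change
    AFTER INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE log_change();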
 
---
Peter Gubis
Ericsson Business Systems Slovakia Ltd.
Stará Vajnorská 90  Tel.:   +421 7 49499 254
SK 831 04 Bratislava
Slovakia

E-Mail  :   [EMAIL PROTECTED]
 


Re: [GENERAL] Correlated subquery/update

2001-01-12 Thread Tom Lane

"Nick Fankhauser" <[EMAIL PROTECTED]> writes:
> In essence, I want to match up the records where one.a=two.c and update
> one.b with the value in two.d . In Oracle, I would use this statement:
> update one set b = (select d from two where one.a = two.c);
> in psql, I get a syntax error when I do this.

play=> create table one (a int, b text);
CREATE
play=> create table two (c int, d text);
CREATE
play=> update one set b = (select d from two where one.a = two.c);
UPDATE 0

Looks OK to me (at least in 7.0.* and 7.1).  What release are you using?
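
For reference, one quick way to check the server release from within psql:

SELECT version();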

regards, tom lane



[GENERAL] Correlated subquery/update

2001-01-12 Thread Nick Fankhauser

Hi all-

I'm trying to match up two tables based on a primary key, and then copy
(update) a field in the second table to match a field in the first. Hard to
explain well... so here's an example:

select * from one;

 a | b
---+------
 1 | null
 2 | null


select * from two;

 c | d
---+-----
 1 | one
 2 | two


In essence, I want to match up the records where one.a=two.c and update
one.b with the value in two.d . In Oracle, I would use this statement:

update one set b = (select d from two where one.a = two.c);

in psql, I get a syntax error when I do this. Either a correlated
subquery/update is not supported, or (more likely) I'm using the wrong
approach. Can anyone tell me whether this is supported, or how I can get the
job done using a different approach?
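
For what it's worth, PostgreSQL also accepts a non-standard join-style form of
this kind of update; a sketch against the example tables above (note that,
unlike the subselect form, it only touches rows of "one" that have a match in
"two"):

UPDATE one SET b = two.d
FROM two
WHERE one.a = two.c;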

Thanks!
-Nick

-
Nick Fankhauser

Business:
[EMAIL PROTECTED]  Phone 1.765.935.4283  Fax 1.765.962.9788
Ray Ontko & Co.  Software Consulting Services  http://www.ontko.com/

Personal:
[EMAIL PROTECTED]   http://www.fankhausers.com




Re: [GENERAL] union query

2001-01-12 Thread Tom Lane

"Tamsin" <[EMAIL PROTECTED]> writes:
>select 'other' union select description from address;
>ERROR:  Unable to transform varchar to unknown
> Each UNION | EXCEPT | INTERSECT clause must have compatible target
> types

The behavior in 7.0.* and before (as far back as I recall) has been that
the first select determines the output type of the union, so the above
fails, but reversing it works.  7.1 is a little smarter about
unknown-type literals, so it accepts both of your cases, but it will
still do the Wrong Thing for examples like select int4 union select int8.

> I just wondered
> if the order of the selects in a union should matter?

It shouldn't, really, but without a complete type promotion hierarchy
we have a hard time doing anything intelligent with arbitrary pairs of
types.
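
One workaround on releases that still trip over this is to give the literal an
explicit type, so the first select already has a known output type; for
example, against the address table from the original message:

SELECT 'other'::varchar UNION SELECT description FROM address;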

regards, tom lane



Re: [GENERAL] MacOS X

2001-01-12 Thread Tom Lane

Sander Tichelaar <[EMAIL PROTECTED]> writes:
> Is a port of PostgreSQL planned for MacOS X?

We have reports from Apple that PG current sources work on OS X current
sources, ie, 7.1 release should work on OS X release.  IIRC, PG will not
work on the existing "public beta" of OS X.  See past traffic about
"Darwin" on pg-ports and/or pg-hackers if you want details.

regards, tom lane



[GENERAL] union query

2001-01-12 Thread Tamsin

Just spent an hour getting nowhere, until I discovered this (description is a
varchar field):

select description from address union select 'other';

fine but...

   select 'other' union select description from address;
   ERROR:  Unable to transform varchar to unknown
Each UNION | EXCEPT | INTERSECT clause must have compatible target
types

It's not a problem, as now that I've worked it out I can do what I wanted (a
long nasty union to get around some outer join issues), but I just wondered
if the order of the selects in a union should matter?

tamsin




[GENERAL] Re: xml middleware

2001-01-12 Thread Adam Lang

I don't recall names, but it came up in the interfaces list.  Like I said,
I don't know if anyone is actually working on it; we just chatted and
threw out some ideas and said it would be really neat. :)

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
http://www.rutgersinsurance.com
- Original Message -
From: "Peter Cheng" <[EMAIL PROTECTED]>
To: "Adam Lang" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Thursday, January 11, 2001 9:09 PM
Subject: Re: xml middleware


> Adam Lang wrote:
> > Not that I know of, but a couple people have been toying with the idea of
> > starting something like that... I think we are mostly all talk right now. :)
> >
> Ok, did you know who is working on it?
> Thanks.
>
> > Adam Lang
> > Systems Engineer
> > Rutgers Casualty Insurance Company
> > http://www.rutgersinsurance.com
>
> --
> Peter Cheng
>
> 




Re: [GENERAL] Removing the row limit

2001-01-12 Thread Adam Lang

Beta 3 of 7.1 has been released.  You won't have to worry about the row
limit and you'll also help check for any possible bugs. :)

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
http://www.rutgersinsurance.com
- Original Message -
From: "Mike Cannon-Brookes" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, January 11, 2001 12:25 AM
Subject: [GENERAL] Removing the row limit


> I'm using 7.0.2, I think. How can I remove the 8k row limit?
>
> Is this removed in 7.1? If so, is it worth trying the beta of that? Or is
> there another way to do it?
>
> (I realise this question has probably been asked a hundred times, but I
> can't find an answer in any of the online docs)
>
> Cheers,
> Mike
>
> 
> Mike Cannon-Brookes - Technology Director, Asia Pacific
> internet.com Corporation - The Internet Industry Portal
> Ph: (612) 9290 1088 - Mob: 0416 106090
>
> - The Media Network @ http://australia.internet.com
> - Meet A Guru @ http://www.breakfastforums.com.au
> - Subscribe Online @ http://www.enewsletters.com.au
> 
>
>
>




Re: [GENERAL] xml middleware

2001-01-12 Thread Adam Lang

I'd assume you'd need to make up your own (or whoever develops it would).  I'd
say develop a schema that is accessible from the internet so the XML can be
interpreted.

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
http://www.rutgersinsurance.com
- Original Message -
From: "Franck Martin" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, January 10, 2001 5:14 PM
Subject: RE: [GENERAL] xml middleware


> Is there any xml standard to output database content?
>
> Franck Martin
> Database Development Officer
> SOPAC South Pacific Applied Geoscience Commission
> Fiji
> E-mail: [EMAIL PROTECTED]
> Web site: http://www.sopac.org/
>
> This e-mail is intended for its recipients only. Do not forward this e-mail
> without approval. The views expressed in this e-mail may not necessarily be
> the views of SOPAC.
>
> -Original Message-
> From: Joseph Shraibman [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, 11 January 2001 10:10
> To: Adam Lang
> Cc: [EMAIL PROTECTED]
> Subject: Re: [GENERAL] xml middleware
>
> Adam Lang wrote:
> >
> > I was even thinking more along the lines of a java program that acts as an
> > interface where it receives a query from the user, sends it to postgresql
> > via jdbc, receives the result set, translates it into XML and returns it to
> > the user.
> >
> > Performance would be the issue, but something like that would be best suited
> > for internet applications and the performance is not as big of an issue.
> > Plus, if you plan on using XML, you are expecting to take performance hits
> > anyway.
>
> That sounds like a trivial program.
>
> Just loop through the result set and append to a StringBuffer.
>
> --
> Joseph Shraibman
> [EMAIL PROTECTED]
> Increase signal to noise ratio.  http://www.targabot.com




[GENERAL] MacOS X

2001-01-12 Thread Sander Tichelaar

Hi,

I'm sorry if this is yet another instance of an ever repeating question. I
searched the archives and FAQs but couldn't find an answer.

Is a port of PostgreSQL planned for MacOS X? It is based on BSD, so it
shouldn't be that hard...

We are looking for a cross-platform database to use for our configuration
management system for VisualWorks Smalltalk. PostgreSQL is a perfect option
as it supports Solaris, Linux and Windows, and a VisualWorks connection
exists. If only the Mac was supported... I know we can set up a PostgreSQL
server on an already supported machine and connect from any client, also a
Mac, and that is what we will do in the beginning. But our Mac people would
like to be able to work standalone as well.

Thanks for any answers,

Sander

-- 
Software Composition Group, IAM, University of Berne,
Neubrueckstrasse 10, CH-3012 Bern, Switzerland.
tel: +41 (0)31 631 3568, www: http://www.iam.unibe.ch/~tichel



[GENERAL] Problems with connecting to Postgre

2001-01-12 Thread Uroš Gruber

Hi!

I use PostgreSQL 7.0.3 on FreeBSD 4.1.1.
I started postgres with -i, because I want to connect over the
internet. I tested this on the local network.

When I connect with PHP it takes a very long time. If I refresh again I
get results immediately. If I refresh my page after some seconds it
takes a long time to get results again.

I checked the PostgreSQL error log and I found this line on every
connection:

 pq_recvbuf: recv() failed: Connection reset by peer

What is wrong and what can I do? I use PHPLIB for connecting to
PostgreSQL.

If I connect with a non-persistent connection (pg_pconnect) it takes a long
time again, but no error is received.

Thanks

-- 
Uroš





Re: [GENERAL] timestamp update field across multiple tables

2001-01-12 Thread Oliver Elphick

Rajit Singh wrote:
  >I have five tables which are really intended to be one big table, but have
  >been adapted to comply with Postgres's (irritating) 8kb tuple size limit
  >(ver 7.0.3).
  >
  >I have 'modtime' fields in each of these tables which record the last time
  >a record was updated.  But I would like the behaviour to be such that a
  >modtime field in each of these associated tables is updated when any of
  >the tables are updated.  And I would like all of the modtime fields to
  >take the same value if possible.

If the rows in these five tables are meant to be all one, why have five
separate modtime fields?  That breaks the "rules" against redundant data.
Just have it in one table and put triggers to set it in all five.
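
A rough sketch of that arrangement (all names are invented: "main" holds the
single modtime column, "part2" stands for one of the other four tables, and
both are assumed to share an "id" key):

CREATE FUNCTION touch_main_modtime() RETURNS opaque AS '
BEGIN
    UPDATE main SET modtime = now() WHERE id = NEW.id;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER part2_touch_main
    AFTER UPDATE ON part2
    FOR EACH ROW EXECUTE PROCEDURE touch_main_modtime();

main itself can keep its modtime current with a BEFORE UPDATE trigger that
simply sets NEW.modtime := now(); since that only modifies the row being
written, it does not set off the cycle of triggers Rajit was worried about.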

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight  http://www.lfix.co.uk/oliver
PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47  6B 7E 39 CC 56 E4 C1 47
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 "For the LORD is good; his mercy is everlasting; and 
  his truth endureth to all generations." 
 Psalms 100:5 





[GENERAL] timestamp update field across multiple tables

2001-01-12 Thread Rajit Singh

Dear List,

Thanks for those who helped me with my last question.

I have a slightly related question:

I have five tables which are really intended to be one big table, but have been adapted 
to comply with Postgres's (irritating) 8kb tuple size limit (ver 7.0.3).

I have 'modtime' fields in each of these tables which record the last time a record 
was updated.  But I would like the behaviour to be such that a modtime field in each 
of these associated tables is updated when any of the tables are updated.  And I would 
like all of the modtime fields to take the same value if possible.

The primary/foreign keys for each of the tables do not necessarily have the same name.

I've currently imagined a trigger for each table that updates every other table every 
time it is updated.  But I've not had much success executing SQL from inside a plpgsql 
function at the moment - and wouldn't it create a cycle of triggers by updating a 
table's modtime, which in turn causes another modtime update in all the other tables, 
which in turn perpetuates the circle?

Any help greatly appreciated,
Thanks,
Rajit



[GENERAL] Joins and field types

2001-01-12 Thread Zak McGregor

Hi all

What sort of performance penalty on joins using varchar(n) vs. int fields can I expect 
if both tables are fairly small (ie unlikely to have more than 2 rows each)?

Is there a good reference or two on these sorts of questions?
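
For tables that small the planner will almost certainly use sequential scans
either way, so the type of the join key should make no measurable difference.
If in doubt, the usual way to check is to build both variants and compare the
plans (table and column names here are invented):

EXPLAIN SELECT * FROM parent_i, child_i WHERE parent_i.id = child_i.id;
EXPLAIN SELECT * FROM parent_v, child_v WHERE parent_v.code = child_v.code;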

thanks a lot

Ciao

Zak McGregor



[GENERAL] Re: PostgreSQL Client Program

2001-01-12 Thread Anthony E . Greene

On Mon, 08 Jan 2001 19:50:00 Jreniz wrote:
>I have a Linux RH6.2 machine with PostgreSQL 6.5.3; this works fine with
>the PHP module, but I need to create a GTK client program that sends queries
>to the database.
>
>I saw the manuals and documentation but I don't know how to compile these
>programs, and the manuals don't say!!
>
>Please send me any piece of code in C that connects to the database using
>libpq or libpq++, and how to compile it!!!

See GtkSQL 

Version 0.3 was written for PostgreSQL 6.3, but I just compiled it on my
RH6.2 machine with PG 6.5.3 and it works fine. I made a few changes in the
Makefile:

  PGSQLIBS    = -L/usr/lib/pgsql -lpq
  PGSQLCFLAGS = -I/usr/include/pgsql -DUSE_POSTGRESQL
  PGSQLOBJS   = postgres.o

I also commented out the MySQL section, since I don't have that on my
machine.


Tony
-- 
Anthony E. Greene <[EMAIL PROTECTED]> 
PGP Key: 0x6C94239D/7B3D BD7D 7D91 1B44 BA26  C484 A42A 60DD 6C94 239D
Chat:  AOL/Yahoo: TonyG05   ICQ: 91183266
Linux. The choice of a GNU Generation.