Re: [GENERAL] 2 or more columns of type 'serial' in a table

2001-01-26 Thread Tom Lane

"Aggarwal , Ajay" <[EMAIL PROTECTED]> writes:
> I am having a problem, while trying to CREATE a table with more than one
> column of type 'serial'. Apparently only one sequence gets created and that
> is for the last 'serial' type column in the table.

How embarrassing ... I wonder how long that's been broken?

The fix in 7.0.3 is to change line 716 of src/backend/parser/analyze.c
from
blist = lcons(sequence, NIL);
to
blist = lappend(blist, sequence);
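With that change, a table with several serial columns gets a backing sequence for each one. A quick sketch (table and column names are made up; implicit sequences follow the usual tablename_colname_seq naming convention):

```sql
CREATE TABLE dual_serial (a serial, b serial);
-- After the fix, both implicit sequences exist:
SELECT nextval('dual_serial_a_seq');  -- advances a's sequence
SELECT nextval('dual_serial_b_seq');  -- advances b's sequence
```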

Fixed for 7.1.  Thanx for the report.

regards, tom lane



Re: [GENERAL] Wild Cards

2001-01-26 Thread Adam Haberlach

On Thu, Jan 25, 2001 at 08:40:23AM -0500,  wrote:
> I am not able to get Wildcards in PostgreSQL, I know its * (asterisk), but
> its not working. can someone show me a example or something?

It's % and you have to use the LIKE operator.

SELECT * FROM thistable WHERE name LIKE '%marley';
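In LIKE patterns, % matches any run of characters and _ matches exactly one character, so variations on the same idea look like:

```sql
SELECT * FROM thistable WHERE name LIKE '%marley';  -- ends with "marley"
SELECT * FROM thistable WHERE name LIKE 'bob%';     -- starts with "bob"
SELECT * FROM thistable WHERE name LIKE '_ob';      -- any one character, then "ob"
```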

-- 
Adam Haberlach|A cat spends her life conflicted between a
[EMAIL PROTECTED]   |deep, passionate, and profound desire for
http://www.newsnipple.com |fish and an equally deep, passionate, and
'88 EX500 |profound desire to avoid getting wet.



Re: [GENERAL] Trouble porting postgreSQL to WinNT

2001-01-26 Thread Tom Lane

Felix König <[EMAIL PROTECTED]> writes:
> gcc -I../../../include -I../../../backend -I/usr/local/include -O2
> -I/usr/local/include -DBUILDING_DLL=1 -Wall -Wmissing-prototypes
> -Wmissing-declarations -I../.. -c -o istrat.o istrat.c
> istrat.c: In function `OperatorRelationFillScanKeyEntry':
> istrat.c:494: `F_OIDEQ' undeclared (first use in this function)
> istrat.c:494: (Each undeclared identifier is reported only once
> istrat.c:494: for each function it appears in.)

F_OIDEQ (and a lot of other F_xxx macros) should be declared in
src/backend/fmgr.h, which is normally generated during build by
the shell script src/backend/utils/Gen_fmgrtab.sh.  I speculate
that fmgr.h is completely hosed (perhaps empty) due to script
execution problems, and that this just happens to be the first
place in the compile that references an F_xxx macro.

Gen_fmgrtab.sh depends on a shell, awk, sed, and a bunch of other
stuff, so I wouldn't be too surprised if it fails under Win2000.
Not sure why Sasha is seeing a problem, though, unless he's trying
to build under Windows ...

regards, tom lane



Re: [GENERAL] how do you run your backups?

2001-01-26 Thread Vadim Mikheev

> > Is there a way to start with yesterday's dump and load each transaction
> > log dump in order until you have a complete restore of the database?

WAL-based BAR is not implemented in 7.1.
Try Rserv from 7.1's contrib - it generates consistent incremental
snapshots.

> Also, does a pg_dump guarantee a consistent view of all of the tables in a
> database at a given snapshot in time, ...

Yes. And I believe it's reflected in the docs.

Vadim





Re: [GENERAL] high level specs on PL ?

2001-01-26 Thread Bruce Momjian

> George Johnson writes:
> 
> > Where to find high level specs on what is/isn't needed to make a new
> > procedural language?
> 
> The CREATE LANGUAGE reference page contains some pointers.
> 
> > With JNI I don't see why PL/Java would be an unreasonable undertaking and
> > sure-fire way to "Make Friends And Influence People".
> 
> Theoretically yes, but with current Java implementations it's a very
> frustrating experience.

It's on the TODO list:

* Allow Java server-side programming

and I agree Java isn't stable enough yet.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [GENERAL] This script will crash the connection

2001-01-26 Thread Tom Lane

"Richard Huxton" <[EMAIL PROTECTED]> writes:
> Doesn't seem to be the notify - backend crashes with
> create rule blah_upd as on update to blah do insert into foo (dummy) values
> ('test');

Actually, that was a completely different bug :-(.  Thanks for the
report.

regards, tom lane



Re: [GENERAL] Backends dying due to memory exhaustion--I'm stonkered

2001-01-26 Thread Doug McNaught

Tom Lane <[EMAIL PROTECTED]> writes:

> Doug McNaught <[EMAIL PROTECTED]> writes:
> > From what I've seen so far, all the backends (other than the one that
> > actually crashes) seem to survive the SIGTERM I send to the
> > postmaster.  How do I tell which one is which?  The command line?
> 
> SIGTERM to the postmaster commands polite shutdown, ie, don't accept
> new connections but allow existing clients to finish out their sessions.
> So unless your clients are short-lived I wouldn't expect SIGTERM'ing
> the postmaster to do much.
> 
> If you want to force things to happen then you should send SIGINT to
> the postmaster, which in turn will SIGTERM its backends, which in
> theory will abort their transactions and shut down. 

OK, this makes sense.  I must have missed it in the manual.  SIGINT it 
is.

Waiting for the next crash with anticipation...

-Doug



Re: [GENERAL] Too many open files! how do I fix on linux?

2001-01-26 Thread Alex Pilosov

On Fri, 26 Jan 2001, adb wrote:

> I understand that I need to up the max number of open files in the linux
> kernel but I'd like to know what's a good number to set it to for a high
> traffic postgres server and what's the best way to set it in linux so
> it's permanent?
I set it to 16384.  With about 100 clients this seems to be enough.
The best way to set it is in your local rc script (/etc/rc.d/rc3.d/*local).

> (These are during some performance tests I'm running with about 40
> simultaneous clients)




[GENERAL] ISO SQL Geographic Objects

2001-01-26 Thread Franck Martin

ISO/DIS 19125-2 Geographic information - Simple feature access - Part 2: SQL
option 

I'm trying to find this document on the web, as it gives information on
how ISO Geographic Objects should be represented in SQL. Does anybody have
this, or at least a summary of it? Or can you give me some examples of SQL
representations?

Thanks

Franck Martin
Network and Database Development Officer
SOPAC South Pacific Applied Geoscience Commission
Fiji
E-mail: [EMAIL PROTECTED]  
Web site: http://www.sopac.org/
 
Support FMaps: http://fmaps.sourceforge.net/ 


This e-mail is intended for its addressees only. Do not forward this e-mail
without approval. The views expressed in this e-mail are not necessarily
the views of SOPAC.





[GENERAL] Re: postgres limitation -what does it mean? (8k row limit)

2001-01-26 Thread Matt Friedman

I've been following this thread but it's not clear to me what an 8k row
limit means exactly.

Does it mean that the size of all the data in that row must not be greater
than 8k? That seems very small to me.

Also, how does one change BLCKSZ?

Some more detail on the subject would be great. thank you.

Matt Friedman




- Original Message -
From: "Mitch Vincent" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, January 26, 2001 7:28 PM
Subject: Re: postgres limitation


> > > the max size of a row...
> >
> > 8k in pre v7.1, no limit in v7.1 and later ...
>
> 32k really... BLCKSZ can be changed.. I've had no trouble running a
> production database with BLCKSZ set to 32k though there might be issues I'm
> not aware of..
>
> -Mitch
>
>
>




Re: [GENERAL] Unique ID of connection

2001-01-26 Thread Bruce Momjian


Now there is a truly great, concise example.

> 
> Hi all!
> 
> Very useful sample:
> 
>  CREATE FUNCTION getpid () returns int4 as '/lib/libc.so.6' LANGUAGE 'C';
>  SELECT getpid();
> 
> ;))
> 
> 
> P.S. Only for Linux!
> 
> 
> -- 
>   Trurl McByte, Capt. of StasisCruiser "Prince"
> |InterNIC: AR3200   RIPE: AR1627-RIPE|
> |--98 C3 78 8E 90 E3 01 35  87 1F 3F EF FD 6D 84 B3--|
> 
> 


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [GENERAL] how do you run your backups?

2001-01-26 Thread David Wall

> Is there a way to start with yesterday's dump and load each transaction
> log dump in order until you have a complete restore of the database?

Does recovery of pg_dump and the transaction logs work with a backup and
then doing a rollforward on the transaction logs?  What would be the
procedure?  It doesn't seem as though there is any real connection between a
pg_dump and the backup of transaction logs.

Also, does a pg_dump guarantee a consistent view of all of the tables in a
database at a given snapshot in time, or does it simply dump the contents of
each table as they are encountered while dumping?

And what format of pg_dump is required to also dump the large objects so
that they can be automatically loaded back on failure recovery (or just
copying the database to another computer)?

David




Re: [GENERAL] Backends dying due to memory exhaustion--I'm stonkered

2001-01-26 Thread Tom Lane

Doug McNaught <[EMAIL PROTECTED]> writes:
> From what I've seen so far, all the backends (other than the one that
> actually crashes) seem to survive the SIGTERM I send to the
> postmaster.  How do I tell which one is which?  The command line?

SIGTERM to the postmaster commands polite shutdown, ie, don't accept
new connections but allow existing clients to finish out their sessions.
So unless your clients are short-lived I wouldn't expect SIGTERM'ing
the postmaster to do much.

If you want to force things to happen then you should send SIGINT to
the postmaster, which in turn will SIGTERM its backends, which in
theory will abort their transactions and shut down.  (No, this isn't
real consistent, but we wanted the right things to happen when a
system-wide shutdown sends SIGTERM to all the processes.  Usually
you shouldn't be manually killing individual backends anyway, so the
lack of consistency in signal meaning shouldn't mean much...)

As far as telling which is which, I doubt it much matters for this.
If you've compiled with -g then the backtraces should show the queries
that each one was executing, and that's as much info as we're likely
to need.

regards, tom lane



[GENERAL] Re: postgres limitation

2001-01-26 Thread Mitch Vincent

> > the max size of a row...
>
> 8k in pre v7.1, no limit in v7.1 and later ...

32k really... BLCKSZ can be changed.. I've had no trouble running a
production database with BLCKSZ set to 32k though there might be issues I'm
not aware of..

-Mitch




Re: [GENERAL]

2001-01-26 Thread Doug McNaught

[EMAIL PROTECTED] writes:

> I  am trying to make a function that takes four varchar arguments as in
> CREATE FUNCTION address(varchar,varchar,varchar,varchar) ...
> 
> when i try to pass the function a NULL value, it turns all the values into
> NULL:
> select address('foo','foo',NULL,'foo').

This is a bug (??) that's fixed in 7.1, which is currently in beta.

-Doug



[GENERAL] Too many open files! how do I fix on linux?

2001-01-26 Thread adb

Hi again, I'm getting stuff like

ERROR:  cannot write block 1056 of tradehistory [testdb] blind: Too many
open files in system

and 

postmaster: StreamConnection: accept: Too many open files in system

I understand that I need to up the max number of open files in the linux
kernel but I'd like to know what's a good number to set it to for a high
traffic postgres server and what's the best way to set it in linux so
it's permanent?

(These are during some performance tests I'm running with about 40
simultaneous clients)

Thanks,

Alex.




Re: [GENERAL] Backends dying due to memory exhaustion--I'm stonkered

2001-01-26 Thread Doug McNaught

Tom Lane <[EMAIL PROTECTED]> writes:

> Doug McNaught <[EMAIL PROTECTED]> writes:
> > I'm running VACUUM, then VACUUM ANALYZE (the docs seem to suggest that 
> > you need both).  Basically my script is:
> 
> VACUUM ANALYZE is a superset of VACUUM; you do not need both.

Good to know.

> > The example I sent was a crash during VACUUM.
> 
> Hm.  Another perfectly good theory shot to heck ;-).  It seems unlikely
> that VACUUM would fail because of corrupted data inside a tuple ...
> although corrupted tuple headers could kill it.  Again, though, one
> would think such a crash would be repeatable.

Agreed, given what you've said. 

> > Another thing that springs to mind--once the crash happens, the
> > database doesn't respond (or gives fatal errors) to new connections
> > and to queries on existing connections.  Killing the postmaster does
> > nothing--I have to send SIGTERM to all backends and the postmaster in
> > order to get it to exit.  I don't know if this helps...
> 
> Now *this* is interesting.  Normally the system recovers quite nicely
> from an elog(FATAL), or even from a backend coredump.  I now suspect
> something must be getting corrupted in shared memory.  The next time
> it happens, would you proceed as follows:
>   1. kill -INT the postmaster.
>   2. The backends *should* exit in response to the SIGTERM the
>  postmaster will have sent them.  Any backend that survives
>  more than a fraction of a second is stuck somehow.  For each
>  stuck backend, in turn:
>   3. kill -ABORT the backend, to create a corefile, and collect
>  a gdb backtrace from the corefile.  Be careful to get the
>  right corefile, if you are dealing with more than one
>  database.
> 
> That should give us some idea of what's stuck (especially if you compile
> with -g).

I don't remember if I did or not, and (like a moron) I blew away the
source tree.  I'll see what gdb tells me about the presence of
symbols.

From what I've seen so far, all the backends (other than the one that
actually crashes) seem to survive the SIGTERM I send to the
postmaster.  How do I tell which one is which?  The command line?

> BTW, which version did you say you were running?  If it's less than
> 7.0.3 I'd recommend an update before we pursue this much further ...

Just double-checked and it is indeed 7.0.3.

I'll be back with more info once I get another crash...

-Doug



[GENERAL] how do you run your backups?

2001-01-26 Thread adb

Hi, I've read the administrator guide section on backups and
I'm wondering is there an easy way to do backups of the
transaction log similar to sybase or oracle?

I imagine I would use pg_dumpall nightly but I'm wondering if there's
something else to run every 10 minutes or so to dump the log.  Or
is it as simple as checkpoint and then copy the old log file?

Is there a way to start with yesterday's dump and load each transaction
log dump in order until you have a complete restore of the database?

Thanks,

Alex.




Re: [GENERAL] postgres limitation

2001-01-26 Thread The Hermit Hacker

On Fri, 26 Jan 2001, a wrote:

> Hi,
>
> Is there any document about the limitation of postgres
> like the total size of one table

none that we are aware of ...

> the max row number of a table

2^32 ... the limitation is the OID size, which is currently a 32-bit int ...
move to 64-bit ints, and then your max row number is 2^64 :)

> the max size of a row...

8k in pre v7.1, no limit in v7.1 and later ...



>
> Thanks
> Feng
>
>
>

Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: [EMAIL PROTECTED]   secondary: scrappy@{freebsd|postgresql}.org




[GENERAL] Re: get last sequence

2001-01-26 Thread Alfonso Peniche

"Mr. Chan" wrote:

> dear all,
> how to get last sequence from postgresql? I mean such mysql
> last_insert_id function or ms sql select @@identity.
> thanks!
> Chan

SELECT currval('sequence_name');

This will give you the last value generated for that sequence in the current
session (no FROM clause is needed; for a serial column the implicit sequence
is named tablename_colname_seq).
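A minimal sketch (table and column names are made up):

```sql
CREATE TABLE items (id serial, name text);
INSERT INTO items (name) VALUES ('widget');
-- currval is per-session and valid only after an insert/nextval here:
SELECT currval('items_id_seq');
```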




[GENERAL] Unique ID of connection

2001-01-26 Thread Trurl McByte


Hi all!

Very useful sample:

 CREATE FUNCTION getpid () returns int4 as '/lib/libc.so.6' LANGUAGE 'C';
 SELECT getpid();

;))


P.S. Only for Linux!


-- 
  Trurl McByte, Capt. of StasisCruiser "Prince"
|InterNIC: AR3200   RIPE: AR1627-RIPE|
|--98 C3 78 8E 90 E3 01 35  87 1F 3F EF FD 6D 84 B3--|




[GENERAL] Help

2001-01-26 Thread Paty



Hello. I'm using PostgreSQL 7.0.2 with webmail IMP, but I get the
following errors:

Warning: Unable to connect to PostgresSQL server: pqReadData() -- read
failed: errno=32 Broken pipe in ./lib/db.pgsql on line 126

Warning: Unable to connect to PostgresSQL server: pqReadData() -- read
failed: errno=9 bad file descriptor in db_pgsql.inc on line 52

But this error doesn't appear every time.

Where is the problem?  Thank you.


[GENERAL] GNUe Forms 0.0.5 Released

2001-01-26 Thread James Thompson


GNUe Forms is a data entry/query tool that allows a designer to describe a
data-aware user interface via an XML-based form definition language.
This file is then parsed at client startup and a proper user interface is
dynamically generated.  Currently the client portion of the system is
available; work on a screen painter is still in progress.  To make creating
forms easier (until the painter is ready) the package also includes a
script that makes generating basic screens based upon tables quite fast.

Some of the client's features include:
  Multi-platform capability: Unix, Linux, Win32
  Multi-UI support: GUIs supported are GTK, Motif, Win32 (via wxPython),
plus curses-based text support (still rough)
  Multi-datasource support: PostgreSQL, MySQL, GEAS, and ODBC
(GEAS and ODBC still rough)
  2-tier and N-tier capable: N-tier support via GNUe GEAS
  Client side Master/Detail relationships
  Client Side Triggers

It has been successfully built on the following platforms
  Solaris
  Debian GNU/Linux
  RedHat Linux
  Win32 platforms (limited testing)

It is part of the GNU Enterprise project (www.gnue.org) A GNU project
which aims to develop a set of tools to build programs that together will
form a complete system to provide functionality for most business needs.

While the client is only at version 0.0.5 it has earned our stamp of
"useful... maybe" :)  You can find the python source code, rpms, and
win32 version as well as some outdated screenshots at www.gnue.org

Take Care
James ([EMAIL PROTECTED])

->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<
James Thompson138 Cardwell Hall  Manhattan, Ks   66506785-532-0561 
Kansas State University  Department of Mathematics
->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<






[GENERAL] ISA function?

2001-01-26 Thread Will Fitzgerald

Is there any PostgreSQL function that, given an OID and a table (class)
name, would return true (or the OID) if the OID is a member of that class?

I want to write something like:

create function ISA(oid, name) returns oid as '
select oid from $2* where oid = $1
' language 'sql';


(But, of course, this doesn't work).

--
Will Fitzgerald
I/NET, Inc.




[GENERAL] Weird View behavior

2001-01-26 Thread Dan Wilson



I've got some weird behavior with a view that I created.

devwindaily=# select version();
                              version
---------------------------------------------------------------------
 PostgreSQL 7.0.3 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66
(1 row)

devwindaily=# \d email_num
View "email_num"
     Attribute     |    Type     | Modifier
-------------------+-------------+----------
 user_emailaddress | varchar(50) |
 email_count       | integer     |
View definition: SELECT user_info.user_emailaddress,
count(user_info.user_emailaddress) AS email_count FROM user_info
GROUP BY user_info.user_emailaddress;

devwindaily=# select * from email_num;
  user_emailaddress  | email_count
---------------------+-------------
                     |           1
 [EMAIL PROTECTED]   |           1
 asfdasdfadsfafdsaf  |           1
 [EMAIL PROTECTED]   |           1
 [EMAIL PROTECTED]   |           3
 [EMAIL PROTECTED]   |           1
 [EMAIL PROTECTED]   |           1
 [EMAIL PROTECTED]   |           1
                     |           1
(9 rows)

devwindaily=# select * from email_num where email_count > 1;
  user_emailaddress  | email_count
---------------------+-------------
                     |           1
 [EMAIL PROTECTED]   |           1
 asfdasdfadsfafdsaf  |           1
 [EMAIL PROTECTED]   |           1
 [EMAIL PROTECTED]   |           3
 [EMAIL PROTECTED]   |           1
 [EMAIL PROTECTED]   |           1
 [EMAIL PROTECTED]   |           1
                     |           1
(9 rows)

devwindaily=# select * from email_num where email_count < 2;
 user_emailaddress | email_count
-------------------+-------------
(0 rows)

devwindaily=# select * from email_num where email_count < 1;
 user_emailaddress | email_count
-------------------+-------------
(0 rows)

devwindaily=# select * from email_num where email_count = 1;
 user_emailaddress | email_count
-------------------+-------------
(0 rows)

Now what is going on?!?!?  That just plain old doesn't make sense!

Thanks for any help!

-Dan


Re: [GENERAL] Update Trigger Inconsistency with 7.1?

2001-01-26 Thread Gregory Wood

> You can detect whether a field is actually being *changed* by comparing
> NEW.field and OLD.field.  You cannot distinguish the case where the
> UPDATE didn't mention a field from the case where it did but assigned
> the same value that was already there (eg UPDATE foo SET x = x).
> This behavior was not different in 7.0.

I had thought that I tested by trying update statements where the field
wasn't specified. I suppose I probably am overlooking some other fact that
was causing it to fail, but I could've sworn...

I suppose I'll have to NULL out the field on INSERTs and UPDATEs, or push it
into yet another field, then check for NULLness every time to verify that
it's updated. Sigh.

Greg





[GENERAL] Re: PostgreSQL 7 on Redhat 6.1

2001-01-26 Thread Martin A. Marques

On Fri 26 Jan 2001 17:02, Chean Fonk Joe Thong wrote:
> Hi to all,
>
> Currently I'm trying to upgrade my PostgreSQL from 6.5
> to 7.0.2 with rpm files on my Redhat 6.1 with Glibc
> 2.1.3.  The installation won't continue because RPM
> Manager told me that I need libc.so.6 (>= GLIBC 2.2).

I have postgresql-7.0.3 on a Red Hat with glibc-2.1.1, installed from rpms.
Where did you get your postgres rpm from? Try getting the src.rpm and
recompiling it (it's very easy).

> I've two questions:
> I forced the installation with no deps, when starting
> the postgresql server with the following command:
> /rc.d/init.d/postgresql start
> The script told me that I've an old version of
> PostgreSQL.  How should I solve this problem?

That's OK, and it's because you didn't read the fine manuals. There is a part
that talks about upgrading with rpms.

> How do I install GLIBC 2.2 onto my Redhat?

It's not necessary.

-- 
System Administration: It's a dirty job,
but someone told me I had to do it.
-
Martín Marqués  email:  [EMAIL PROTECTED]
Santa Fe - Argentinahttp://math.unl.edu.ar/~martin/
Administrador de sistemas en math.unl.edu.ar
-



Re: [GENERAL] Working with large text blocks

2001-01-26 Thread Adam Lang

The field data type is TEXT.  I don't think you need to do anything special
(as long as you are using 7.1).

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
http://www.rutgersinsurance.com
- Original Message -
From: "Clinton James" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, January 26, 2001 2:45 PM
Subject: [GENERAL] Working with large text blocks


> I have some blocks of text that can be from 2K to 80K in size. I understand
> 7.1 will have compression to use with text, but does anyone know of a way I
> can use these large text blocks as a field?  I'll have Win NT as a client
> getting the data from a Linux box through ODBC so BLOBs don't appear a good
> idea, or maybe I just don't understand.  Can anyone point me in the right
> direction?
>
> Clinton James




[GENERAL] PostgreSQL 7 on Redhat 6.1

2001-01-26 Thread Chean Fonk Joe Thong

Hi to all,

Currently I'm trying to upgrade my PostgreSQL from 6.5
to 7.0.2 with rpm files on my Redhat 6.1 with Glibc
2.1.3.  The installation won't continue because RPM
Manager told me that I need libc.so.6 (>= GLIBC 2.2). 


I've two questions:
I forced the installation with no deps, when starting
the postgresql server with the following command:
/rc.d/init.d/postgresql start
The script told me that I've an old version of
PostgreSQL.  How should I solve this problem?

How do I install GLIBC 2.2 onto my Redhat?  

Any help would be much appreciated.
Thank you.

Joe

__
Do You Yahoo!?
Yahoo! Auctions - Buy the things you want at great prices. 
http://auctions.yahoo.com/



Re: [GENERAL] Working with large text blocks

2001-01-26 Thread Frank Joerdens

On Fri, Jan 26, 2001 at 01:45:33PM -0600, Clinton James wrote:
> I have some blocks of text that can be from 2K to 80K in size. I understand
> 7.1 will have compression to use with text, but does anyone know of a way I
> can use these large text blocks as a field?

The unlimited row width feature in 7.1, aka TOAST (the best thing since
sliced bread), works transparently, so in actual fact you can keep your
text in a plain text field. You won't notice the compression at all, if
TOAST decides it is needed.

Regards, Frank



[GENERAL] Working with large text blocks

2001-01-26 Thread Clinton James

I have some blocks of text that can be from 2K to 80K in size. I understand
7.1 will have compression to use with text, but does anyone know of a way I
can use these large text blocks as a field?  I'll have Win NT as a client
getting the data from a Linux box through ODBC so BLOBs don't appear a good
idea, or maybe I just don't understand.  Can anyone point me in the right
direction?

Clinton James




[GENERAL] Poor man's replication using WAL

2001-01-26 Thread adb

Has anyone implemented a form of one way replication with the new
write ahead logging in 7.1?

I'm looking for a way to have a warm or hot standby server waiting to
take over in the event of a machine failure.

I know there's discusion about adding replication to future releases,
I'm just wondering if anyone is doing it now.

Thanks,

Alex.




[GENERAL] pg_dump fails sanity check

2001-01-26 Thread Braeus Sabaco

Using version 7.0.2 (system is production, so I can't take it down for
minor releases) I enter:
$ pg_dump romanlegion > rdb.backup
and I get:
failed sanity check, table usergames was not found

Ok, fair enough, so:
$ psql romanlegion
romanlegion=# \d usergames
Table "usergames"
 Attribute  |  Type   | Modifier
------------+---------+----------
 gameuserid | integer | not null
 gameplayed | integer | not null
Index: unique_played

romanlegion=# select * from usergames ;
 gameuserid | gameplayed
------------+------------
         10 |          1
...
        442 |         10
(477 rows)
romanlegion=#

Now, it can find the table, describe the table, select from the table,
and the more complex queries I do on the site work just fine also.  I
can make sure it is where I expect it to be.
$ su postgres
Password:
$ ls /var/lib/pgsql/data/base/romanlegion/usergames -al
-rw---   1 postgres postgres40960 Jan 25 23:43
/var/lib/pgsql/data/base/romanlegion/usergames


So now I'm out of ideas.  Is there some known bug?  Or is this just
being weird?




Re: [GENERAL] Performance: Unix sockets vs. TCP/IP sockets

2001-01-26 Thread Doug McNaught

Tom Lane <[EMAIL PROTECTED]> writes:

> Doug McNaught <[EMAIL PROTECTED]> writes:
> > For a localhost TCP socket, a write() has to be sent down the network
> > stack and (possibly) split into packets, which are then sent through
> > the routing engine and back up through the stack, flow-controlled,
> > reassembled, and submitted to the receiving socket.  Also, ACK packets 
> > have to be sent back to the sender through the same tortuous path.
> 
> My notion of a "properly designed" kernel is one that has a shortcircuit
> path for local TCP connections, to avoid precisely that overhead.  Not
> all do ... but any kernel wherein attention has been paid to X Windows
> performance (to mention just one important case) does.

Hmmm.  I would take issue with your notion of "properly designed"--one
might instead use the phrase "insanely bloated with special cases". ;)

X can, and should, use Unix sockets and/or shared memory for speed
when connecting to the local display.

I don't intend to start a flamewar--I just think "properly designed"
is a matter of philosophy.

> Of course, since the kernel is certainly capable of net socket
> throughput well in excess of 0.3 megabytes/sec on this machine, this
> example really just proves Doug's other point: the difference between a
> Unix socket and a TCP socket is unlikely to be important for Postgres
> purposes, because it'll be swamped by other factors.

Yeah, I think that was my main point, though I didn't really emphasize 
it very well.

Thanks for the followup.

-Doug



Re: [GENERAL] Update Trigger Inconsistency with 7.1?

2001-01-26 Thread Tom Lane

"Gregory Wood" <[EMAIL PROTECTED]> writes:
> To do this I created a trigger that would raise an exception "IF
> new.UpdateRequired ISNULL". In 7.0 this would work because
> new.UpdateRequired seemed to be NULL unless it was specified (I cannot test
> this any longer because I don't have a 7.0 server available).

I find that very hard to believe.  The NEW record contains the proposed
new tuple, which will include the old value of any fields that weren't
specified in the UPDATE statement.

You can detect whether a field is actually being *changed* by comparing
NEW.field and OLD.field.  You cannot distinguish the case where the
UPDATE didn't mention a field from the case where it did but assigned
the same value that was already there (eg UPDATE foo SET x = x).
This behavior was not different in 7.0.
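A minimal PL/pgSQL sketch of that comparison (table and column names here are hypothetical; the old-style "opaque" return type matches this era of PostgreSQL):

```sql
CREATE FUNCTION warn_if_changed() RETURNS opaque AS '
BEGIN
    -- Fires only when the value actually differs; note this simple
    -- <> test does not treat NULLs specially.
    IF NEW.val <> OLD.val THEN
        RAISE NOTICE ''val changed'';
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER warn_if_changed_trig BEFORE UPDATE ON foo
    FOR EACH ROW EXECUTE PROCEDURE warn_if_changed();
```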

regards, tom lane



Re: [GENERAL] Performance: Unix sockets vs. TCP/IP sockets

2001-01-26 Thread Tom Lane

Doug McNaught <[EMAIL PROTECTED]> writes:
>> On Thu, Jan 25, 2001 at 11:07:19PM -0500, Tom Lane wrote:
> On a properly designed kernel, there shouldn't be any measurable
> performance difference between a local TCP connection and a Unix-socket
> connection.

> Much as I hesitate to contradict Tom here, I think I need to qualify
> his statement. 

> For a localhost TCP socket, a write() has to be sent down the network
> stack and (possibly) split into packets, which are then sent through
> the routing engine and back up through the stack, flow-controlled,
> reassembled, and submitted to the receiving socket.  Also, ACK packets 
> have to be sent back to the sender through the same tortuous path.

My notion of a "properly designed" kernel is one that has a shortcircuit
path for local TCP connections, to avoid precisely that overhead.  Not
all do ... but any kernel wherein attention has been paid to X Windows
performance (to mention just one important case) does.

For example, doing a COPY OUT of about 10MB of data, I get numbers
like this:

$ time psql -h localhost -c "copy foo to stdout" regression >/dev/null

real    0m29.05s
user    0m1.40s
sys     0m0.12s

$ time psql -c "copy foo to stdout" regression >/dev/null

real    0m28.97s
user    0m1.39s
sys     0m0.10s

which is the same to within experimental error, considering there are
background daemons &etc on this machine (an HP-PA box running HPUX
10.20).

Of course, since the kernel is certainly capable of net socket
throughput well in excess of 0.3 megabytes/sec on this machine, this
example really just proves Doug's other point: the difference between a
Unix socket and a TCP socket is unlikely to be important for Postgres
purposes, because it'll be swamped by other factors.

regards, tom lane



[GENERAL] Re: Connection pooling

2001-01-26 Thread Steve Leibel

At 12:39 PM +0100 1/26/01, Gilles DAROLD wrote:
>Hi,
>
>Just to repeat that we have had this working using perl and mod_perl for a
>very long time. Just declare all your vars as global (without my or local)
>and see what happens between apache child processes :-)
>
>But as I know, Apache::DBI doesn't do db connection pooling; this is a
>mistake. It just allows you to create persistent connections. But you can
>declare multiple persistent connections and do random access to each one
>if you really need to play with that. Personally I never had to use that,
>but I think that with a site with a large amount of simultaneous users it
>can be useful.



The point is that if two child processes share a connection (which is 
actually just a TCP or Unix socket) then you have two writers to the 
same output channel.  This is a no-no, since data corruption can 
result, just as if two writers share an open filehandle for writing.

It would in theory be possible for somebody to write a connection 
pool package for Apache that included proper concurrency control. 
However, Apache::DBI doesn't do that.

Steve L




[GENERAL] FOUND Re: concat for pgsql?

2001-01-26 Thread Thomas T. Thai


sorry, it was the very first thing in the functions-string.htm manual page. i
guess i was looking for the actual function name CONCAT.

string || string 
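So the MySQL statement quoted below becomes, with || and substr() (this just mirrors the quoted arguments; a sketch, untested):

```sql
-- replace the parent prefix with a new one on every matching path
UPDATE tbl
SET code = 'NEWPREFIX' || substr(code, length('parent_code'))
WHERE code LIKE 'parent_code%';
```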

On Fri, 26 Jan 2001, Thomas T. Thai wrote:

> Date: Fri, 26 Jan 2001 11:57:55 -0600 (CST)
> From: Thomas T. Thai <[EMAIL PROTECTED]>
> To: PostgreSQL General <[EMAIL PROTECTED]>
> Subject: concat for pgsql?
> 
> i'm still porting my app from mysql to pgsql. i'm stuck on this one. in my
> mysql app i have a tree structure defined as:
> 
> 01
> 0101
> 0102
> ...
> ZZ01
> ZZ02
> ZZ0301
> ...etc
> 
> 
> it's basically base 36
> 
> when i would move a node in the tree, i would do something like this in
> mysql:
> 
> UPDATE tbl SET code =
> CONCAT('NEWPREFIX',SUBSTRING(code,LENGTH('parent_code'))) WHERE \
> code LIKE 'parent_code%';
> 
> so if i moved all the nodes under 01010201 to AA, it would UPDATE all the
> paths starting with 01010201 with AA using CONCAT. i can't seem to find a
> similar function in pgsql. of course i could pull the results into an array
> in php, loop through it to make the changes, and UPDATE them
> individually back to pgsql, but that's a very slow process.
> 
> 
> 
> 




[GENERAL] concat for pgsql?

2001-01-26 Thread Thomas T. Thai

i'm still porting my app from mysql to pgsql. i'm stuck on this one. in my
mysql app i have a tree structure defined as:

01
0101
0102
...
ZZ01
ZZ02
ZZ0301
...etc


it's basically base 36

when i would move a node in the tree, i would do something like this in
mysql:

UPDATE tbl SET code =
CONCAT('NEWPREFIX',SUBSTRING(code,LENGTH('parent_code'))) WHERE \
code LIKE 'parent_code%';

so if i moved all the nodes under 01010201 to AA, it would UPDATE all the
paths starting with 01010201 with AA using CONCAT. i can't seem to find a
similar function in pgsql. of course i could pull the results into an array
in php, loop through it to make the changes, and UPDATE them
individually back to pgsql, but that's a very slow process.






[GENERAL] Calculated values

2001-01-26 Thread Camm Maguire

Greetings!  What is the 'best way' or 'guiding philosophy' if there is
one for dealing with calculated values in a pg database?  For example,
say you have a table with a column of floats, and you want to make a
view showing this column as a percent of the total across the column.
Should you

a) create another table with the total, and put on triggers on insert,
update, and delete to modify the right total or

b) create the view with a specific subselect to recalculate the total
at select time.  This has the disadvantage that the total seems to be
recalculated for each row.  Is there any sql syntax which can merge a
dynamically generated aggregate, *calculated only once*, into each
output row?
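For the record, option (b) with the aggregate evaluated only once can be written as a subselect in FROM (7.1 syntax; the table and column names here are made up):

```sql
-- the total is computed once in the subselect and joined to each row
SELECT f.amount,
       f.amount / t.total AS fraction_of_total
FROM   measurements f,
       (SELECT sum(amount) AS total FROM measurements) t;
```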

Thanks!


-- 
Camm Maguire[EMAIL PROTECTED]
==
"The earth is but one country, and mankind its citizens."  --  Baha'u'llah



[GENERAL] Update Trigger Inconsistency with 7.1?

2001-01-26 Thread Gregory Wood

We recently upgraded our development server to 7.1 and I believe I've
noticed an inconsistency with how update triggers behave on version 7.1
versus 7.0. Since I'm not sure which should be the expected behavior I have
no idea if it is a bug or not. Here is the situation:

I have a table in which one field (two actually) *needs* to be updated every
time, *even* if the new value is the same as the old value. For example:
UPDATE Foo SET UpdateOptional='something', UpdateRequired='updated' WHERE
Other='value' would work, but UPDATE Foo SET UpdateOptional='something'
WHERE Other='value' would not work, because no value was explicitly given
for UpdateRequired.

To do this I created a trigger that would raise an exception "IF
new.UpdateRequired ISNULL". In 7.0 this would work because
new.UpdateRequired seemed to be NULL unless it was specified (I cannot test
this any longer because I don't have a 7.0 server available). In 7.1 this
behavior seemed to change: new.UpdateRequired is now equal to the
old.UpdateRequired value if no new value is specified.

Which behavior should I expect... and how else can I tell if a value has
been given by an UPDATE statement (keeping in mind that the value may or may
not change)?
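For reference, the 7.0-style check described above can be sketched like this (assuming plpgsql is installed in the database; the function and trigger names are made up):

```sql
CREATE FUNCTION foo_require_update() RETURNS opaque AS '
BEGIN
    IF new.UpdateRequired ISNULL THEN
        RAISE EXCEPTION ''UpdateRequired must be set explicitly'';
    END IF;
    RETURN new;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER foo_require_update_trig BEFORE UPDATE ON Foo
    FOR EACH ROW EXECUTE PROCEDURE foo_require_update();
```

Note that if new.* is pre-filled with the old values, as the 7.1 behavior described above suggests, this test can no longer fire.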

Greg




RE: [GENERAL] vacuum

2001-01-26 Thread Mikheev, Vadim

> The best way to solve this, would be to remove the feature of keeping
> deleted/updated records in the databasefiles and therefor no 
> need to vacuum.
> Is there any way to configure this when compiling? Or are there other
> possibilities?
  ^
There will be, hopefully, in 7.2 only -:(

Vadim



Re: [GENERAL] Performance: Unix sockets vs. TCP/IP sockets

2001-01-26 Thread Doug McNaught

Frank Joerdens <[EMAIL PROTECTED]> writes:

> On Thu, Jan 25, 2001 at 11:07:19PM -0500, Tom Lane wrote:
> > Frank Joerdens <[EMAIL PROTECTED]> writes:
> > > What performance penalty can I expect when going over TCP/IP sockets
> > > instead of Unix sockets?
> > 
> > On a properly designed kernel, there shouldn't be any measurable
> > performance difference between a local TCP connection and a Unix-socket
> > connection.
> 
> Ah. That's good to hear. I'd heard that TCP/IP was _significantly_ slower
> than Unix sockets. But maybe that was just Linux.

Much as I hesitate to contradict Tom here, I think I need to qualify
his statement. 

For a localhost TCP socket, a write() has to be sent down the network
stack and (possibly) split into packets, which are then sent through
the routing engine and back up through the stack, flow-controlled,
reassembled, and submitted to the receiving socket.  Also, ACK packets 
have to be sent back to the sender through the same tortuous path.

For a Unix socket, the write() is merely copied into the socket
buffer, and the reader notified that data is ready.  Flow control
consists of simply blocking the writer if the buffer is full.

For bulk data, a Unix socket will almost certainly be faster due to
the reduced overhead.

For a Postgres connection, query compilation (if needed) and lookup
time, plus disk i/o if necessary, will dominate, and you shouldn't see
a significant difference between the two types of socket.

-Doug



Re: [GENERAL] Performance: Unix sockets vs. TCP/IP sockets

2001-01-26 Thread Adrian Phillips

> "Frank" == Frank Joerdens <[EMAIL PROTECTED]> writes:

Frank> On Thu, Jan 25, 2001 at 11:07:19PM -0500, Tom Lane wrote:
>> Frank Joerdens <[EMAIL PROTECTED]> writes:
>> > What performance penalty can I expect when going over TCP/IP
>> > sockets instead of Unix sockets?
>>
>> On a properly designed kernel, there shouldn't be any
>> measurable performance difference between a local TCP
>> connection and a Unix-socket connection.

Frank> Ah. That's good to hear. I'd heard that TCP/IP was
Frank> _significantly_ slower than Unix sockets. But maybe that
Frank> was just Linux.

I must admit that this claim was made back in my mysql days, but we are
probably talking a couple of years ago, which would be Linux 2.0.
Whether 2.2 was better, or 2.4 is, I am completely clueless. Here
is an excerpt from the mysql manual (here's hoping I won't be lynched
for posting this :-)) :-

You get the fastest executable when you link with -static. Using
Unix sockets rather than TCP/IP to connect to a database also
gives better performance.

and :-

Here is a list of some measurements that we have done:



If you connect using TCP/IP rather than Unix sockets, the result
is 7.5% slower.



I searched on google for references to this, plenty of mysql hits, but
also other systems, for example Cold Fusion has this to say :-

In our testing, the use of the newly implemented TCP Network
socket communication will degrade performance by 10-15% or more
from the default Unix Domain socket

(although this may be because of Cold Fusion itself).

I must admit most of the other references to TCP being slower than Unix
sockets are in older documents. I suppose the easiest thing to do is to
set up a test and see for yourself.

Sorry if this is becoming too OT,

Sincerely,

Adrian Phillips

-- 
Your mouse has moved.
Windows NT must be restarted for the change to take effect.
Reboot now?  [OK]



Re: [GENERAL] Problem with inheritance

2001-01-26 Thread Oliver Elphick

Alfonso Peniche wrote:
  >Hi all
  >
  >I have the following inheritance relation:
  >
  >        user
  >          |
  >     ---------
  >     |       |
  >  student  employee
  >
  >If I insert John into table student, how can I insert him afterwards so
  >that he is also an employee (this could happen several days later)?
 

You probably need to rethink your hierarchy.  If you want to persist
with inheritance, you could define another table, student_employee,
which multiply-inherits student and employee.
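A multiple-inheritance sketch along those lines (assuming the student and employee tables already exist):

```sql
-- rows inserted here show up in scans of student, employee, and the parent
CREATE TABLE student_employee () INHERITS (student, employee);
```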

Incidentally, `user' is a reserved word; you will have to change the 
table name for 7.0.3, or else double-quote it every time you use it.

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight  http://www.lfix.co.uk/oliver
PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47  6B 7E 39 CC 56 E4 C1 47
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 "Wash me thoroughly from mine iniquity, and cleanse me 
  from my sin. For I acknowledge my transgressions; and 
  my sin is ever before me. Against thee, thee only, 
  have I sinned, and done this evil in thy sight..."
   Psalms 51:2-4 





Re: [GENERAL] Problem with inheritance

2001-01-26 Thread Marc SCHAEFER

On Fri, 26 Jan 2001, Alfonso Peniche wrote:

>        user
>          |
>     ---------
>     |       |
>  student  employee

Why not store the data common to student and employee in user, and
then store the additional data for student and employee in the relation
itself, implemented as a table?

CREATE TABLE "user" (id SERIAL,
                     created_on TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
                     first_name VARCHAR(30) NOT NULL,
                     last_name VARCHAR(30) NOT NULL,
                     birth TIMESTAMP NOT NULL,
                     unix_uid INT2 NOT NULL,
                     email VARCHAR(30) NOT NULL,
                     PRIMARY KEY(id));

CREATE TABLE is_student (user_id INT4 NOT NULL REFERENCES "user",
                         section VARCHAR(2) NOT NULL, /* CS, PH, etc */
                         year INT4 NOT NULL DEFAULT 1);

CREATE TABLE is_employe (user_id INT4 NOT NULL REFERENCES "user",
                         laboratory INT4 NOT NULL,
                         salary MONEY NOT NULL);

("user" is a reserved word, so the table name has to be double-quoted;
the foreign-key columns need an explicit type, and since SERIAL makes id
an INT4, that type is INT4.  UNIQUE(id) is redundant once id is the
primary key.)

Probably the VARCHAR could be changed into TEXT.

Now, if you want to get all data about all student named 'Wilhelm Tell':

   SELECT u.*, s.section, s.year
   FROM "user" u, is_student s
   WHERE (u.first_name LIKE 'Wilhelm')
     AND (u.last_name LIKE 'Tell')
     AND (u.id = s.user_id);

When the student becomes an employee, as this happens some time, you just
need to do something like:

   BEGIN WORK;
      DELETE FROM is_student WHERE (user_id = ?);
      INSERT INTO is_employe (user_id, laboratory, salary)
         VALUES (?, 42, 5);
   COMMIT WORK;

? represents here the user id, as with the Perl DBI binding.




Re: [GENERAL] Problem with inheritance

2001-01-26 Thread Tom Lane

Alfonso Peniche <[EMAIL PROTECTED]> writes:
> I have the following inheritance relation:

>        user
>          |
>     ---------
>     |       |
>  student  employee

> If I insert John into table student, how can I insert him afterwards so
> that he is also an employee (this could happen several days later)?

If a student could also be an employee, then your table layout is
fundamentally wrong.

regards, tom lane



[GENERAL] Problem with inheritance

2001-01-26 Thread Alfonso Peniche

Hi all

I have the following inheritance relation:

       user
         |
    ---------
    |       |
 student  employee

If I insert John into table student, how can I insert him afterwards so
that he is also an employee (this could happen several days later)?

Thanx

Alfonso Peniche




Re: [GENERAL] Performance: Unix sockets vs. TCP/IP sockets

2001-01-26 Thread Frank Joerdens

On Thu, Jan 25, 2001 at 11:07:19PM -0500, Tom Lane wrote:
> Frank Joerdens <[EMAIL PROTECTED]> writes:
> > What performance penalty can I expect when going over TCP/IP sockets
> > instead of Unix sockets?
> 
> On a properly designed kernel, there shouldn't be any measurable
> performance difference between a local TCP connection and a Unix-socket
> connection.

Ah. That's good to hear. I'd heard that TCP/IP was _significantly_ slower
than Unix sockets. But maybe that was just Linux.

> 
> There are not-so-well-designed kernels out there, but I forget which
> they are (and you didn't bother to specify your platform anyway).

It's Solaris 7 (cf. the current thread in hackers).

> 
> If you want a reliable answer, fire up a data-transfer-intensive task,
> say a COPY OUT of a large table, and time it both ways.

I might try that in a couple of weeks' time when the current rush is over.

Ta, Frank



Re: [GENERAL] Re: Load a database into memory

2001-01-26 Thread Tom Lane

Guillaume Le'mery <[EMAIL PROTECTED]> writes:
> CREATE TABLE parametre
> (
> id_parametre int4 not null primary key,
> id_regie int4 NULL ,
> par_id_parametre int4 NULL ,
> type INT4 not null,
> valeur_str varchar null,
> valeur_int int4 null,
> valeur_fl float8 null,
> valeur_txt varchar,
> date_pilotage timestamp NULL,
> id_ct1 int4
> );
> CREATE INDEX parametre_tracking_idx ON parametre(type, valeur_int);
> 
> The query :
> SELECT ae.id_regie,
> ae.num_campagne,
> ae.num_publicite,
> ae.ponderation_calculee * random()
> FROM accord_editeur ae,
> parametre par
> WHERE ae.id_regie=1
> AND ae.num_editeur = 1494
> AND ae.num_site = 1
> AND ae.num_emplacement = 1
> AND ae.affichage_possible = 1
> AND ae.par_id_technologie = par.id_parametre
> AND par.type = 10
> AND par.valeur_int = 1
> 
> And the Explain :
> NOTICE:  QUERY PLAN:
> 
> Nested Loop  (cost=0.00..228.27 rows=1 width=56)
> -> Index Scan using accord_editeur_pkey on accord_editeur ae
> (cost=0.00..225.50 rows=1 width=48)
> -> Index Scan using parametre_tracking_idx on parametre par
> (cost=0.00..2.02 rows=1 width=8)

I think the parametre_tracking_idx index is actually counterproductive
here.  You'd probably be better off without it, since then the
planner would (presumably) choose to use the primary-key index,
which is unique.  I suspect that "WHERE par.type = 10 AND par.valeur_int
= 1" retrieves multiple records which then get thrown away.  It'd be
better to use "WHERE ae.par_id_technologie = par.id_parametre" to pull
out the single candidate row --- but the planner is evidently getting
fooled by the availability of two clauses instead of one for the extra
index, and is mistakenly guessing that the extra one is more selective.

Other than that, perhaps you need to readjust your notion of what's
slow.  100 concurrent transactions on a PIII is a pretty stiff workload;
if you're expecting millisecond turnaround then I think you're out of
luck.

regards, tom lane



[GENERAL] GNUe Forms 0.0.5 Released

2001-01-26 Thread James Thompson


*** Please forgive if this is double post.  
*** My first attempt was from the wrong account :(

GNUe Forms is a data entry/query tool that allows a designer to describe a
data-aware user interface via an XML-based form definition language.
This file is then parsed at client startup and a proper user interface is
dynamically generated.  Currently the client portion of the system is
available; work on a screen painter is still in progress.  To make creating
the forms easier (until the painter is ready) the package also includes a
script that makes generating basic screens based upon tables quite fast.

Some of the client's features include:
  Multi-platform capability: unix, linux, win32
  Multi-UI support: GUIs such as GTK, Motif, Win32 (via wxPython),
plus curses-based text support (still rough)
  Multi-datasource support: PostgreSQL, MySQL, GEAS, and ODBC
(GEAS and ODBC still rough)
  2-tier and N-tier capable: N-tier support via GNUe GEAS
  Client-side Master/Detail relationships
  Client-side Triggers

It has been successfully built on the following platforms
  Solaris
  Debian GNU/Linux
  RedHat Linux
  Win32 platforms (limited testing)

It is part of the GNU Enterprise project (www.gnue.org), a GNU project
which aims to develop a set of tools to build programs that together will
form a complete system providing functionality for most business needs.

While the client is only at version 0.0.5 it has at least earned our stamp
of "useful... maybe" :)  You can find the python source code, rpms, and
win32 version as well as some outdated screenshots at www.gnue.org

Take Care
James ([EMAIL PROTECTED])

->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<
James Thompson   138 Cardwell Hall   Manhattan, KS 66506   785-532-0561
Kansas State University  Department of Mathematics
->->->->->->->->->->->->->->->->->->---<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<-<




Re: [GENERAL] The type int8 and the use of indexes

2001-01-26 Thread Tom Lane

"Igor V. Rafienko" <[EMAIL PROTECTED]> writes:
> on Jan 25, 2001, 15:14, Stephan Szabo std::cout'ed:
> | select * from table where pk=1::int8 should use the index if pk is an
> | int8 column.

> Would that work for other operators (such as <, >, etc.) as well?

It's a necessary prerequisite, anyway.

> Last time I tried something similar on Postgres-7.0.2, the index was
> used on equality but not on "less-than" (the number of rows in the
> result was very small).

It won't use an index unless the optimizer knows that the number of
rows to be selected is small, which for a one-sided "<" query would
depend on where the endpoints of the data range are.  Had you done
a VACUUM ANALYZE recently?
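So on 7.0 the recipe is roughly (table and column names here are hypothetical):

```sql
-- the literal must be cast explicitly, or the int8 column is compared
-- against an int4 constant and the index is not considered
SELECT * FROM bigtab WHERE pk = 42::int8;
SELECT * FROM bigtab WHERE pk < 42::int8;

-- refresh statistics so the planner can judge range selectivity
VACUUM ANALYZE bigtab;
```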

regards, tom lane



Re: [GENERAL] The type int8 and the use of indexes

2001-01-26 Thread Igor V. Rafienko

on Jan 26, 2001, 10:20, Tom Lane std::cout'ed:


| It won't use an index unless the optimizer knows that the number of
| rows to be selected is small, which for a one-sided "<" query would
| depend on where the endpoints of the data range are.  Had you done
| a VACUUM ANALYZE recently?


Yes, I believe I did run vacuum analyze just before the test.





ivr
-- 
"You have an internet connection on your calculator."
"You can type 70 words per minute -- on a TI-82."
"You've gotten electronic copies of text books in TI-82 format."
   -- Technology in IB




Re: [GENERAL] vacuum

2001-01-26 Thread Tom Lane

"Dr R.Adscheid" <[EMAIL PROTECTED]> writes:
> We are using PostgreSQL 6.3/Digital Unix 4.0B in an environment with 7x24
> availability. There is one table which gets about 9000 new records per day,
> with about 10% being updated. With an index over several columns, the select
> on this table is quite fast, but removing old entries and vacuuming is a very
> time-consuming operation (about 1 hour for the whole database!) and because
> of the 7x24 production this is not acceptable.

A number of people have found that removing the index, vacuuming, and
recreating the index is faster than letting vacuum try to fix the
index for itself.
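That drop/vacuum/recreate cycle looks roughly like this (the index and table names are made up):

```sql
DROP INDEX log_multi_idx;
VACUUM ANALYZE log_table;
CREATE INDEX log_multi_idx ON log_table (col_a, col_b, col_c);
```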

Another possibility is to try the "lazy vacuum" patches available from
http://people.freebsd.org/~alfred/vacfix/, though I don't fully trust
those yet.  (That would require updating to 7.0.3 ... but I tell you
in all honesty that you're a fool to still be using 6.3 for anything
critical.  The number of bugs fixed between 6.3 and 7.0.3 is enormous.)

regards, tom lane



[GENERAL] Connection pooling with C and libpq

2001-01-26 Thread Kassu Valtakari

Hei!

How do I do connection pooling with libpq? I'm trying to modify
a radius server to fork or thread and pick a connection from the pool,
and in that way be able to store 100..500 small records per second
in postgresql.
I can create a global array of connections, but how is it done after
that? I'm sure there are people who think it's trivial and simple, but I
also think that seeing someone's ready, tested code would help me a lot.

Also an example with threads or forks in the code could be interesting.
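For what it's worth, the core of such a pool is just a mutex-protected array plus a condition variable. Below is a minimal, libpq-agnostic sketch for the threaded case (the handles are void * stand-ins for PGconn *, so it compiles without libpq; pool_create, pool_get, and pool_put are made-up names, not libpq functions):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* A blocking handle pool.  In a real radius server each slot would hold
 * a PGconn * obtained from PQconnectdb(); here slots are plain void *. */
typedef struct {
    void **slots;            /* handles currently available */
    int available;           /* how many of slots[] are valid */
    pthread_mutex_t lock;
    pthread_cond_t nonempty; /* signalled when a handle is returned */
} pool_t;

pool_t *pool_create(void **handles, int n)
{
    pool_t *p = malloc(sizeof *p);
    p->slots = malloc(n * sizeof *p->slots);
    for (int i = 0; i < n; i++)
        p->slots[i] = handles[i];
    p->available = n;
    pthread_mutex_init(&p->lock, NULL);
    pthread_cond_init(&p->nonempty, NULL);
    return p;
}

/* Block until a handle is free, then take it out of the pool. */
void *pool_get(pool_t *p)
{
    pthread_mutex_lock(&p->lock);
    while (p->available == 0)
        pthread_cond_wait(&p->nonempty, &p->lock);
    void *h = p->slots[--p->available];
    pthread_mutex_unlock(&p->lock);
    return h;
}

/* Return a handle to the pool and wake one waiting thread. */
void pool_put(pool_t *p, void *h)
{
    pthread_mutex_lock(&p->lock);
    p->slots[p->available++] = h;
    pthread_cond_signal(&p->nonempty);
    pthread_mutex_unlock(&p->lock);
}
```

Each worker thread calls pool_get() before issuing queries and pool_put() afterwards; blocking in pool_get() is what caps the number of concurrent connections. Note this only works with threads: a forked child cannot safely share a parent's PGconn.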
-- 
--  Kassu Valtakari, kassu at iki.fi, +49 89 722 43448
--  Quidquid latine dictum sit, altum videtur