AW: [HACKERS] heap page corruption not easy

2000-12-19 Thread Zeugswetter Andreas SB


> > The only source of serious problems is thus a bogus write of a page
> > segment (100 bytes of a 412 byte chunk actually written to disk),
> > but this case is imho sufficiently guarded or at least detected
> > by disk hardware. 
> 
> With full page logging after checkpoint we would be safe from this
> case...

> Comments?
> 
> - full page backup on first after checkpoint modification

I guess you are right, especially since it solves both the above case and
the index case. The "physical log" solution sounds a lot simpler and more
robust (I didn't know you use PageAddItem, sounds ingeniously simple :-)

But we should probably try to do checkpoints less frequently by default,
like every 20 min, to avoid too much physical log.

Andreas



Re: [HACKERS] Re: [SQL] PostgreSQL crashes on me :(

2000-12-19 Thread Mathijs Brands

Hello all,

I would like to thank Tom, Ian and the other pgsql wizards for their prompt
response. This must surely be open source at its best :)

I've worked around the situation by running a small script that continually
monitors postgres and takes appropriate action if postgres shuts down. I'm
assuming this problem won't lead to any data corruption.

Mathijs
-- 
"Borrowers of books -- those mutilators of collections, spoilers of the
 symmetry of shelves, and creators of odd volumes." 
Charles Lamb (1775-1834) 



Re: [HACKERS] performance modality in 7.1 for large text attributes?

2000-12-19 Thread Thomas Lockhart

> anybody know what i could be doing wrong?  (i'm also wondering why SELECT
> takes ~250ms whereas INSERT takes ~70ms... seems counterintuitive, unless
> TOAST is doing a LOT better than i think.)

I would think that this is entirely due to planning the query. An INSERT
has no decisions to make, whereas a SELECT must decide among a variety
of possible plans. To hand-optimize selects, you can set some parameters
to force only some kinds of plans (such as index scan) but in general
you will need to remember to unset them afterwards or you run the risk
of bizarrely inappropriate plans for other queries in the same session.

> furthermore, are there any plans to offer a better libpq interface to INSERT?
> the things i'm doing now to quote the text, and the extra copy i'm maintaining,
> are painful.

What exactly are you looking for in "better"? Is it just the quoting
issue (a longstanding problem which persists for historical reasons :(
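The quoting work the poster is doing by hand can be sketched as follows. This is a hedged illustration (a hypothetical helper, not a libpq function): in the PostgreSQL of this era, backslash acts as an escape character inside string literals, so it must be doubled along with embedded single quotes before splicing a value into an INSERT.

```python
# Hypothetical sketch of the manual quoting a caller must do today.
def quote_sql_literal(value: str) -> str:
    # Double backslashes first, then double embedded single quotes,
    # and wrap the result in single quotes.
    escaped = value.replace("\\", "\\\\").replace("'", "''")
    return "'" + escaped + "'"

stmt = "INSERT INTO rss (body) VALUES (" + quote_sql_literal("it's raw text") + ");"
print(stmt)  # -> INSERT INTO rss (body) VALUES ('it''s raw text');
```

The table and column names here are invented for illustration; the point is only the escaping burden the poster complains about.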

> ... but if someone can look at my code (which i'm running
> against the 7.1 bits at the head of the pgsql cvs tree) and at the *.png file
> and help me enumerate the sources of my stupidity, i will be forever grateful.

Possible causes of the 1.5s "mode" (at least as a starting point):

o task scheduling on your test machine (not likely??)

o swapping/thrashing on your test machine (not likely??)

o WAL fsync() log commits and cleanup (aggregate throughput is great,
but every once in a while someone waits while the paperwork gets done.
Waiting may be due to processor resource competition)

o Underlying file system bookkeeping from the kernel. e.g. flushing
buffers to disk etc etc.

  - Thomas



Re: [HACKERS] performance modality in 7.1 for large text attributes?

2000-12-19 Thread Alex Pilosov

Paul,

1) Have you run VACUUM ANALYZE after all these inserts to update database
statistics? :) Without vacuum, pgsql will opt for a table scan even when
there's an index.

2) I'm not sure if you are executing pgcat 70k times or executing the inner
loop in pgcat 70k times. Postgres connection establishment is expensive.

3) Postgres INSERT is not very efficient if you are doing a bulk load of
data (it has to reparse the statement every time). If you want to delete
everything and load new data, use "COPY", which is about 5 times faster.
Also, there's a patch by someone to do the following: INSERT INTO table
(fields...) VALUES (...), (...), (...), which results in parsing the
statement only once.
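The multi-row VALUES form that patch enables amounts to simple statement construction. A hedged sketch (hypothetical helper; the values here are plain integers so no quoting is needed):

```python
# Build one multi-row INSERT so the statement is parsed once, not N times.
def multirow_insert(table, columns, rows):
    # Each row becomes a parenthesized value list; all rows share one VALUES.
    values = ", ".join("(" + ", ".join(str(v) for v in row) + ")" for row in rows)
    return "INSERT INTO %s (%s) VALUES %s;" % (table, ", ".join(columns), values)

print(multirow_insert("t", ["a", "b"], [(1, 2), (3, 4), (5, 6)]))
# -> INSERT INTO t (a, b) VALUES (1, 2), (3, 4), (5, 6);
```

For bulk loads, COPY remains faster still, since it bypasses the parser for the data entirely.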

Oh... and since I have your attention, could you please resolve a
long-standing discussion between me and Tom Lane? :)

The question is whether the proper (standard/most-commonly-used) format for
printing a CIDR network address is 10/8 or 10.0.0.0/8 (i.e. should all
octets be printed even if they are 0). After a search of the RFCs, nothing
specifies the standard, but the 10.0.0.0/8 form is used more often in
examples than the 10/8 form.

Postgres uses the 10/8 form, and I'm saying that 10.0.0.0/8 is more widely
accepted (i.e. all software can deal with it, but not all software
accepts 10/8).
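The two spellings under discussion can be illustrated with Python's `ipaddress` module (used here purely as a neutral reference implementation, not as evidence of the standard): it always prints all four octets, matching the 10.0.0.0/8 convention. The `expand_short_cidr` helper below is hypothetical, showing what converting the abbreviated form involves.

```python
import ipaddress

def expand_short_cidr(short: str) -> str:
    # Hypothetical helper: turn abbreviated "10/8" into "10.0.0.0/8"
    # by padding missing octets with zeros before parsing.
    addr, prefix = short.split("/")
    octets = addr.split(".")
    octets += ["0"] * (4 - len(octets))
    return str(ipaddress.ip_network(".".join(octets) + "/" + prefix))

print(expand_short_cidr("10/8"))        # -> 10.0.0.0/8
print(expand_short_cidr("192.168/16"))  # -> 192.168.0.0/16
```

Software that only accepts the full form would reject "10/8" outright, which is the interoperability concern raised above.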

-alex

On Mon, 18 Dec 2000, Paul A Vixie wrote:

> (plz cc me on your replies, i'm not on pgsql-hackers for some reason.)
> 
> http://www.vix.com/~vixie/results-psql.png shows a gnuplot of the wall time
> of 70K executions of "pgcat" (shown below) using a CIDR key and TEXT value.
> (this is for storing the MAPS RSS, which we presently have in flat files.)
> 
> i've benchmarked this against a flat directory with IP addresses as filenames,
> and against a deep directory with squid/netnews style hashing (127/0/0/1.txt)
> and while it's way more predictable than either of those, there's nothing in
> my test framework which explains the 1.5s mode shown in the above *.png file.
> 
> anybody know what i could be doing wrong?  (i'm also wondering why SELECT
> takes ~250ms whereas INSERT takes ~70ms... seems counterintuitive, unless
> TOAST is doing a LOT better than i think.)
> 
> furthermore, are there any plans to offer a better libpq interface to INSERT?
> the things i'm doing now to quote the text, and the extra copy i'm maintaining,
> are painful.  arbitrary-sized "text" attributes are a huge boon -- we would
> never have considered using postgres for MAPS RSS (or RBL) with "large
> objects".  (kudos to all who were involved, with both WAL and TOAST!)
> 
> here's the test jig -- please don't redistribute it yet since there's no man
> page and i want to try binary cursors and other things to try to speed it up
> or clean it up or both.  but if someone can look at my code (which i'm running
> against the 7.1 bits at the head of the pgsql cvs tree) and at the *.png file
> and help me enumerate the sources of my stupidity, i will be forever grateful.









[HACKERS] libpq enhancement for multi-process application

2000-12-19 Thread Sébastien Bonnet

Hi all, and mainly PostgreSQL developers,

I've been reading old posts about the libpq interface as it relates to
multi-process applications. The main problem is that after a fork, each
process holds the same DB connection. If one closes it, the other one is
left in an unknown or unstable state.

This is a real problem when writing a highly loaded daemon. Consider my
example: a main daemon receives network requests and makes heavy use of
the DB, so it has a permanent connection to it. Sometimes this main
daemon forks, just to serve a couple of requests. The child process
doesn't need a permanent connection, but closing it would destroy its
parent's one.

There is actually one very easy, but awful, solution: closing the database
connection before forking, and reopening it when needed in each process.
But that's really awful, because the main daemon would always close, fork,
and then reopen just afterwards. What a waste!

A second solution would be a clone of the PQfinish function which does NOT
send the disconnection sequence to the backend but does everything else
(release memory, close the socket, and so on).

The big frustration is that this clone actually exists in the library, but
as a private function. It's named freePGconn, and is called from PQfinish
alongside closePGconn (which sends the disconnection sequence to the
backend).

So I guess you've understood my request. Great folks from PostgreSQL, would
it be possible to export a nice version of freePGconn? It would really,
really help people writing multi-process applications without having to
manage a single connection with shared memory and other tricks, as
suggested a few months ago.

In the meantime, I use the ugly solution : freeing the pointer returned by
PQconnectdb in the child process, knowing some memory hasn't been released.
Hopefully, these child processes don't last long, and the garbage collector is
working fine !
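The distinction being requested, between tearing down local state and sending a protocol-level disconnect, can be modeled without PostgreSQL at all. In this toy sketch a socketpair stands in for the backend connection: if the forked child merely closes its copies of the descriptors (the behaviour an exported freePGconn-like call would give), the parent's connection stays usable, because the kernel keeps a socket alive while any process holds it open and no disconnect message was ever sent.

```python
import os
import socket

# Toy model: a socketpair stands in for the client<->backend connection.
parent_end, child_end = socket.socketpair()

pid = os.fork()
if pid == 0:
    # Child: close our copies WITHOUT sending any "terminate" message,
    # analogous to the proposed freePGconn-style cleanup.
    parent_end.close()
    child_end.close()
    os._exit(0)

os.waitpid(pid, 0)              # wait for the child to finish
parent_end.sendall(b"ping")     # parent's connection still works
received = child_end.recv(4)
print(received)                 # -> b'ping'
child_end.close()
parent_end.close()
```

Had the child instead sent a disconnect message before exiting (as PQfinish does), the server side would have shut the session down underneath the parent, which is exactly the problem described above.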


Comments / other solutions are welcome !

Regards,

-- 
Sébastien Bonnet
  [EMAIL PROTECTED] http://bonseb.free.fr/



[HACKERS] Isn't init_irels() dangerous ?

2000-12-19 Thread Hiroshi Inoue
Hi all,

In InitPostgres()(postinit.c) I see the following code.

RelationCacheInitialize();     /* pre-allocated reldescs created here */
InitializeTransactionSystem(); /* pg_log, etc. init/crash recovery here */

init_irels() is at the end of RelationCacheInitialize() and
accesses system tables to build some system index
relations. However, InitializeTransactionSystem() hasn't been
called at this point, so TransactionIdDidCommit()
always returns true, and time qualification doesn't work
properly in such a situation.
It seems that init_irels() should be called after
InitializeTransactionSystem() has been called.

Comments ?

Regards.
Hiroshi Inoue


[HACKERS] Is PQreset() proper ?

2000-12-19 Thread Hiroshi Inoue
HI all,

I've encountered a database freeze and found it's due
to the reset of the connection after an abort.

The following is a part of the postmaster log.
A new backend (pid=395) started immediately after
a backend (pid=394) aborted. OTOH, the postmaster tries
to kill all backends to clean up shared memory.
However, process 395 ignored the SIGUSR1 signal
and is waiting for some lock which would never be
released.

FATAL 2:  elog: error during error recovery, giving up!
DEBUG:  proc_exit(2)
DEBUG:  shmem_exit(2)
postmaster: ServerLoop: handling reading 5
postmaster: ServerLoop: handling reading 5
postmaster: ServerLoop: handling writing 5
postmaster: BackendStartup: pid 395 user reindex db reindex socket 5
DEBUG:  exit(2)
postmaster: reaping dead processes...
postmaster: CleanupProc: pid 394 exited with status 512
Server process (pid 394) exited with status 512 at Tue Dec 19 20:12:41 2000
Terminating any active server processes...
postmaster: CleanupProc: sending SIGUSR1 to process 395
postmaster child[395]: starting with (postgres -d2 -v131072 -p reindex )
FindExec: searching PATH ...
ValidateBinary: can't stat "/bin/postgres"
ValidateBinary: can't stat "/usr/bin/postgres"
ValidateBinary: can't stat "/usr/local/bin/postgres"
ValidateBinary: can't stat "/usr/bin/X11/postgres"
ValidateBinary: can't stat "/usr/lib/jdk1.2/bin/postgres"
ValidateBinary: can't stat "/home/freetools/bin/postgres"
FindExec: found "/home/freetools/reindex/bin/postgres" using PATH
DEBUG:  connection: host=[local] user=reindex database=reindex
DEBUG:  InitPostgres

Regards.
Hiroshi Inoue


Re: [HACKERS] libpq enhancement for multi-process application

2000-12-19 Thread Daniele Orlandi

Sébastien Bonnet wrote:
> 
> Hi all, and mainly postresql developpers,
> 
> I've been reading old posts about the libpq interface related to multi-process
> application. The main problem being that after a fork, each process has a DB
> connexion, actually the same. If one closes it, the other one remains in a
> unknown or not stable state.

Uhm... I always thought that sharing the same socket between processes
was wrong.

My multi-process daemon works like Apache, with a pool of processes, each
with its own connection to the DB. The connection is only opened AFTER the
fork and remains open as long as the process lives, just to avoid a new
connection for each accept.

Bye!



[HACKERS] Sample databases?

2000-12-19 Thread mlw

I am doing some testing and development on Postgres.

Is there, by chance, a good source of data which can be used as a test
database? I have been using a music database, but it is proprietary, and
it makes me uncomfortable to post public tests based on it.

What do you guys use?

Perhaps we can create a substantial test database? (Millions of records,
many tables, and a number of relations.) So when we see a problem, we
can all see it right away. I like "real world" data, because it is often
more organic than randomized test data, and brings out more issues. Take
index selection during a select, for instance.

-- 
http://www.mohawksoft.com



[HACKERS] Three types of functions, ala function redux.

2000-12-19 Thread mlw


Given this basic SQL statement:

select * from table where col = function() ;

There are three basic types of SQL behaviors that should be able to be
performed.

(1) "function()" returns a single value. Postgres should be able to
understand how to optimize this to be: "select * from table where col =
value" where value is the datum returned by function.

(2) "function()" returns a number of values that are independent of the
query. Postgres should be able to optimize this to be: "select * from
table where col in (val1, val2, val3, ..valn)." I guess Postgres can
loop until done, using the isDone flag?

(3) "function()" returns a value based on the query, so "select * from
table where col = function()" will end up doing a full table scan. (This
seems to be how it currently behaves.)


(1) and (2) are related, and could probably be implemented using the
same code. 
(3) Seems to be how Postgres is currently optimized.

It seems like Tom Lane laid the foundation for this behavior in 7.1
newC. (Does it now work this way?)

Does anyone see a problem with this thinking, and does it make sense to
attempt this for 7.2? I am looking into the function manager stuff to
see what would be involved.

-- 
http://www.mohawksoft.com



Re: [HACKERS] Sample databases?

2000-12-19 Thread Thomas Lockhart

> What do you guys use?

The regression database, which you can augment with some "insert into x
select * from x;" commands. It would also be useful to have a "database
generation" script, but of course this would be cooked data.

> Perhaps we can create a substantial test database? (Millions of records,
> many tables, and a number of relations.) So when we see a problem, we
> can all see it right away. I like "real world" data, because it is often
> more organic than randomized test data, and brings out more issues. Take
> index selection during a select, for instance.

The regression database is such a beast, but is not large enough for the
millions of records kinds of tests.

Suggestions?

 - Thomas



Re: [HACKERS] Three types of functions, ala function redux.

2000-12-19 Thread Stephan Szabo


[I was having trouble with the direct address so i'm only sending to
the list]

> select * from table where col = function() ;

> (2) "function()" returns a number of values that are independent of the
> query. Postgres should be able to optimize this to be: "select * from
> table where col in (val1, val2, val3, ..valn)." I guess Postgres can
> loop until done, using the isDone flag?

I disagree here.  I really don't think that changing = to mean "in"
in the system is a good idea.  If the user wants an in, they should
specify it.
I think "select * from table where col in (select function());" or
"select * from table where col in (select * from function());" or
even "select * from table where col in function();"
are better ways of specifying this sort of behavior.

If we make (col = function()) mean in, then does
col = (a select statement that returns multiple rows) mean in as
well?  I think doing it only for the function case is
a mistake.




Re: [HACKERS] Who is a maintainer of GiST code ?

2000-12-19 Thread Oleg Bartunov

On Tue, 19 Dec 2000, Hannu Krosing wrote:

> Date: Tue, 19 Dec 2000 02:04:02 +0200
> From: Hannu Krosing <[EMAIL PROTECTED]>
> To: Tom Lane <[EMAIL PROTECTED]>
> Cc: Oleg Bartunov <[EMAIL PROTECTED]>,
> Bruce Momjian <[EMAIL PROTECTED]>,
> PostgreSQL-development <[EMAIL PROTECTED]>
> Subject: Re: [HACKERS] Who is a maintainer of GiST code ?
> 
> Tom Lane wrote:
> > 
> > Oleg Bartunov <[EMAIL PROTECTED]> writes:
> > > We have a problem with
> > > handling NULL values in GiST. Any thought how NULL values
> > > are handle in Rtree.
> > 
> > AFAIR, none of the index access methods except btree handle NULLs at
> > all --- they just ignore NULL values and don't store them in the index.
> > Feel free to improve on that ;-).  The physical representation of index
> > tuples can handle NULLs, the problem is teaching the index logic where
> > they should go in the index.
> > 
> > regards, tom lane
> 
> 
> and I can't see why btree stores them (as it seems to do judging by the 
> index file size) - at least it does not use it for searching for "IS
> NULL"


and what does this error mean?

create table rtree_test ( r box );
copy rtree_test from stdin;
\N
\N
\N
\N
 total 10,000 NULLS
\.

create index rtree_test_idx on rtree_test using rtree ( r );
--ERROR:  floating point exception! The last floating point operation either exceeded 
legal ranges or was a divide by zero

seems rtree doesn't ignore NULL ?


Regards,
Oleg


> 
> --8<8<8<8<8<8<8<8<--
> 
> hannu=# explain select * from nulltest where i is null;
> NOTICE:  QUERY PLAN:
> 
> Seq Scan on nulltest  (cost=0.00..293.80 rows=5461 width=8)
> 
> EXPLAIN
> hannu=# explain select * from nulltest where i =1;
> NOTICE:  QUERY PLAN:
> 
> Index Scan using nulltest_i_ndx on nulltest  (cost=0.00..96.95 rows=164
> width=8)
> 
> --8<8<8<8<8<8<8<8<--
> 
> nulltest is a 16k record table with numbers 1 to 16384 in field i
> 
> If it just ignored them we would have a nice way to fake partial indexes
> - 
> just define a function that returns field value or null and then index
> on that ;)
> 
> ---
> Hannu
> 

_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83




Re: [HACKERS] Sample databases?

2000-12-19 Thread Jeff Hoffmann

Thomas Lockhart wrote:
>  
> > Perhaps we can create a substantial test database? (Millions of records,
> > many tables, and a number of relations.) So when we see a problem, we
> > can all see it right away. I like "real world" data, because it is often
> > more organic than randomized test data, and brings out more issues. Take
> > index selection during a select, for instance.
> 
> The regression database is such a beast, but is not large enough for the
> millions of records kinds of tests.
> 
> Suggestions?
> 

maybe the Tiger database.  it's certainly big enough & freely
available.  if you're not familiar with tiger, it's a street database
from the census department.  you can find it at
ftp://ftp.linuxvc.com/pub/US-map.  it's in plain text format, but
trivial to import.  it's set up as several tables (at least a dozen)
which are heavily interrelated, sometimes in fairly complex ways.

-- 

Jeff Hoffmann
PropertyKey.com



Re: [HACKERS] libpq enhancement for multi-process application

2000-12-19 Thread Sébastien Bonnet

> Uhm... I always thought that sharing the same socket between
> processes is wrong.

Well, I had never thought about it before this problem, but it definitely
appears to me like something not to do. Sharing a remote object doesn't
sound right :-(

> My multi-process daemon works like apache with a pool of processes
> everyone with its own connection to the DB. The connection is only
> opened AFTER the fork and remains open as long as the process lives just
> to avoid a new connection for each accept.

When you can do it this way, that's nice'n'easy. In my case, I have to have
a connection before the fork, and keep it afterwards in both parent and
child, even though it will be closed a few seconds later in the child. So,
for now, the only almost-clean solution is to free the pgconn structure in
the child and reconnect when needed. This way, the parent process keeps its
own connection; no other process is using it. Sounds safe, but kinda
"do-it-yourself" :-(

-- 
Sébastien Bonnet
  [EMAIL PROTECTED] http://bonseb.free.fr/



Re: [HACKERS] Who is a maintainer of GiST code ?

2000-12-19 Thread Christopher Masto

On Sun, Dec 17, 2000 at 11:30:23PM -0500, Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > I think
> > there are some papers at Berkeley or a web site that goes into it in
> > detail.
> 
> I imagine there's some GiST stuff at the Berkeley papers repository
> http://s2k-ftp.CS.Berkeley.EDU:8000/postgres/papers/
> but I'd be surprised if it's more than an overview...

Well, there's this: http://gist.cs.berkeley.edu/
and this: http://gist.cs.berkeley.edu/pggist/
-- 
Christopher Masto Senior Network Monkey  NetMonger Communications
[EMAIL PROTECTED][EMAIL PROTECTED]http://www.netmonger.net

Free yourself, free your machine, free the daemon -- http://www.freebsd.org/



Re: [HACKERS] performance modality in 7.1 for large text attributes?

2000-12-19 Thread Christopher Masto

On Tue, Dec 19, 2000 at 03:03:43PM +, Thomas Lockhart wrote:
> o WAL fsync() log commits and cleanup (aggregate throughput is great,
> but every once in a while someone waits while the paperwork gets done.
> Waiting may be due to processor resource competition)
> 
> o Underlying file system bookkeeping from the kernel. e.g. flushing
> buffers to disk etc etc.

I was going to suggest the same, but it's interesting that it happens
on reads as well.  I can't tell for sure from the graph, but it looks
like it happens fairly consistently - every Nth time.  I'd be curious
to see how this changes if you artificially slow down your loop, or
adjust your OS's filesystem parameters.  It may give some more clues.
-- 
Christopher Masto Senior Network Monkey  NetMonger Communications
[EMAIL PROTECTED][EMAIL PROTECTED]http://www.netmonger.net

Free yourself, free your machine, free the daemon -- http://www.freebsd.org/



Re: [HACKERS] Who is a maintainer of GiST code ?

2000-12-19 Thread Oleg Bartunov

On Tue, 19 Dec 2000, Christopher Masto wrote:

> Date: Tue, 19 Dec 2000 13:33:58 -0500
> From: Christopher Masto <[EMAIL PROTECTED]>
> To: Tom Lane <[EMAIL PROTECTED]>, Bruce Momjian <[EMAIL PROTECTED]>
> Cc: Oleg Bartunov <[EMAIL PROTECTED]>,
> PostgreSQL-development <[EMAIL PROTECTED]>
> Subject: Re: [HACKERS] Who is a maintainer of GiST code ?
> 
> On Sun, Dec 17, 2000 at 11:30:23PM -0500, Tom Lane wrote:
> > Bruce Momjian <[EMAIL PROTECTED]> writes:
> > > I think
> > > there are some papers at Berkeley or a web site that goes into it in
> > > detail.
> > 
> > I imagine there's some GiST stuff at the Berkeley papers repository
> > http://s2k-ftp.CS.Berkeley.EDU:8000/postgres/papers/
> > but I'd be surprised if it's more than an overview...
> 
> Well, there's this: http://gist.cs.berkeley.edu/
> and this: http://gist.cs.berkeley.edu/pggist/

Thanks,

we do know these sites. We're working on an implementation of the
RD (Russian Doll) Tree using the GiST interface. The current GiST sources
have some bugs; some of them we have already fixed, and currently we're
working on the handling of NULL values. We're getting a broken index for
data with NULLs. btw, how many people use GiST ? It would be nice
to test our changes after we solve our problems.

Regards,

Oleg

> -- 
> Christopher Masto Senior Network Monkey  NetMonger Communications
> [EMAIL PROTECTED][EMAIL PROTECTED]http://www.netmonger.net
> 
> Free yourself, free your machine, free the daemon -- http://www.freebsd.org/
> 

_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83




Re: [HACKERS] Manual changes for ALTER TABLE OWNER

2000-12-19 Thread Mark Hollomon

On Sunday 17 December 2000 15:07, Bruce Momjian wrote:
> We need additions to alter_table.sgml for the new OWNER option mention
> in the features list.

Here it is.

-- 
Mark Hollomon

*** alter_table.sgml.orig   Tue Dec 19 17:32:47 2000
--- alter_table.sgmlTue Dec 19 17:39:27 2000
***
*** 36,41 
--- 36,43 
  RENAME TO newtable
  ALTER TABLE table
  ADD table constraint definition
+ ALTER TABLE table
+   OWNER TO new owner 

  

***
*** 100,105 
--- 102,117 
 

   
+ 
+  
+   New user 
+   
+
+   The userid of the new owner of the table.
+
+   
+  
+ 
  
 

***
*** 158,163 
--- 170,177 
 The ADD table constraint definition 
clause 
 adds a new constraint to the table using the same syntax as . 
+The OWNER clause changes the owner of the table to the user 
+new user.

  




Re: [HACKERS] Re: Table name scope (was Re: [BUGS] Outer joins aren'tworking with views)

2000-12-19 Thread Peter Eisentraut

Tom Lane writes:

> SELECT * FROM (A NATURAL JOIN B) J    produces ID, CA, CB
>
> SELECT J.* FROM (A NATURAL JOIN B) J  produces ID, CA, CB
>
> SELECT A.* FROM (A NATURAL JOIN B) J  produces ID, CA
>
> SELECT B.* FROM (A NATURAL JOIN B) J  produces ID, CB

ISTM that correlation names aren't allowed after joined tables in the
first place.

  <table reference> ::=
       <table name> [ [ AS ] <correlation name>
           [ <left paren> <derived column list> <right paren> ] ]
     | <derived table> [ AS ] <correlation name>
           [ <left paren> <derived column list> <right paren> ]
     | <joined table>

  <joined table> ::=
       <cross join>
     | <qualified join>
     | <left paren> <joined table> <right paren>

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




RE: [HACKERS] heap page corruption not easy

2000-12-19 Thread Mikheev, Vadim
> > > The point is, that the heap page is only modified in 
> > > places that were previously empty (except header).
> > > All previous row data stays exactly in the same place.
> > > Thus if a page is only partly written
> > > (any order of page segments) only a new row is affected.
> > 
> > Exception: PageRepairFragmentation() and PageIndexTupleDelete() are
> > called during vacuum - they change layout of tuples.
> >
> 
> Is it guaranteed that the result of PageRepairFragmentation()
> has already been written to disk when tuple movement is logged ?

No.

Vadim


RE: [HACKERS] Who is a maintainer of GiST code ?

2000-12-19 Thread Mikheev, Vadim

> > > AFAIR, none of the index access methods except btree 
> > > handle NULLs at all --- they just ignore NULL values
> > > and don't store them in the index.
...
> 
> and what does this error means ?
> 
> create table rtree_test ( r box );
> copy rtree_test from stdin;
> \N
>  total 10,000 NULLS
> \.
> 
> create index rtree_test_idx on rtree_test using rtree ( r );
> --ERROR:  floating point exception! The last floating point 
> operation either exceeded legal ranges or was a divide by zero
> 
> seems rtree doesn't ignore NULL ?

No, it doesn't, and neither does GiST. Only hash ignores them.
And there is no code in GiST & rtree that takes care of NULL
keys. That's probably OK for GiST, which is a "meta-index":
the implementor of the index/type methods should decide how to handle NULLs.
As for rtree, it seems better to ignore NULLs as we did before
for single-key btree: rtree is just a variation of it.

Vadim



RE: [HACKERS] OID Implicit limit

2000-12-19 Thread Christopher Kings-Lynne

> > Reading the documentation, I see that OIDs are unique through the
> > whole database.
> > But since OIDs are int4, does that limit the number of rows I can
> > have in a database to 2^32 = 4 billion ?
>
> Yep.
>
> Thanks for the answer - although that concerns me a bit.
> Maybe I could recompile it setting oid to int64 type...

If that really concerns you, then I think the rest of the hackers list
would be very interested in hearing of a real-world database with more
than 4 billion rows/inserts/deletes.

Apparently it is somewhat more complicated than just 'recompiling as an
int64' to change this.  I believe that patches are currently being made to
facilitate a future move towards 64bit OIDs, but I am not certain of the
status.

Chris




Re: [HACKERS] OID Implicit limit

2000-12-19 Thread Bruce Momjian

We have an FAQ item on this now under OIDs.

[ Charset ISO-8859-1 unsupported, converting... ]
> > > Reading the documentation, I see that OIDs are unique through the
> > > whole database.
> > > But since OIDs are int4, does that limit the number of rows I can
> > > have in a database to 2^32 = 4 billion ?
> >
> > Yep.
> >
> > Thanks for the answer - although that concerns me a bit.
> > Maybe I could recompile it setting oid to int64 type...
> 
> If that really concerns you, then the rest of the hackers list I think would
> be very interested in hearing of a real-world database with more than 4
> billion rows/inserts/deletes.
> 
> Apparently it is somewhat more complicated than just 'recompiling as an
> int64' to change this.  I believe that patches are currently being made to
> facilitate a future move towards 64bit OIDs, but I am not certain of the
> status.
> 
> Chris
> 
> 


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [HACKERS] Re: [DOCS] 7.1 features list

2000-12-19 Thread Bruce Momjian

I added (Alpha) next to the mention of 64-bit CPUs on the Function
Manager section at the top.

> On Sat, 16 Dec 2000, Bruce Momjian wrote:
> 
> > Here is the list of features in 7.1.
> 
>   One thing that I think ought to be added is that with 7.1,
> PostgreSQL will compile out of the box (i.e. without any extra patches)
> for Linux/Alpha. This might not be a big deal for most people, but for
> those of who run pgsql on Linux/Alpha, it is, and I feel it at least
> deserves a mention in the 7.1 feature list.
>   I looked for it (i.e. grep -i alpha) in the list, but did not see
> it. Your choice which heading it goes under.
>   Also, I have not tested any recent snapshots or betas on
> Linux/Alpha lately, but I plan to shortly and will let the hackers list
> know of any problems. I have every intention of making sure the 7.1
> release does indeed work out of box on Linux/Alpha. Thanks, TTYL.
> 
> ---
> |   "For to me to live is Christ, and to die is gain."|
> |--- Philippians 1:21 (KJV)   |
> ---
> |   Ryan Kirkpatrick  |  Boulder, Colorado  |  http://www.rkirkpat.net/   |
> ---
> 
> 


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [HACKERS] performance modality in 7.1 for large text attributes?

2000-12-19 Thread Bruce Momjian

> furthermore, are there any plans to offer a better libpq interface to INSERT?
> the things i'm doing now to quote the text, and the extra copy i'm maintaining,
> are painful.  arbitrary-sized "text" attributes are a huge boon -- we would
> never have considered using postgres for MAPS RSS (or RBL) with "large
> objects".  (kudos to all who were involved, with both WAL and TOAST!)

If you are asking for a binary interface to TOAST values, I really wish
we had that in 7.1.  It never got finished for 7.1.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [HACKERS] Manual changes for ALTER TABLE OWNER

2000-12-19 Thread Bruce Momjian

Thanks.  Applied.

> On Sunday 17 December 2000 15:07, Bruce Momjian wrote:
> > We need additions to alter_table.sgml for the new OWNER option mention
> > in the features list.
> 
> Here it is.
> 
> -- 
> Mark Hollomon

[ Attachment, skipping... ]


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



[HACKERS] Help me for "DBI->connect failed: Sorry, too many clients already."

2000-12-19 Thread Joseph



my cgi program is test.cgi:

###
require "./connectdb.pl";
&connectdatabase();
$query="select count(*) from messages";
$sth=$dbh->prepare($query);
$sth->execute();
$count=$sth->fetchrow_array();
print "Content-type: text/html\n\n";
print <<"TAG";
The count is $count.
TAG
exit 0;
###

my connectdb.pl:

###
sub connectdatabase {
#  my ($dbusername,$dbpassword)=@_;
  $dbusername="postgres";
  $dbpassword="lokicom";
  $dbname="mboardsony";
  use DBI;
  $dbh=DBI->connect("dbi:Pg:dbname=$dbname",$dbusername,$dbpassword) or die "cannot connect to $dbname\n";
}
1;
###

My OS is Redhat 6.2, with perl 5.005, and the web server is Apache. The problem
is: when I run test.cgi, it works properly. But when I press F5 to refresh
the web page for several minutes, Apache shows the error
message: "DBI->connect failed: Sorry, too many clients already."
Who can help me? Thank you ahead.

My email: [EMAIL PROTECTED]
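The likely cause (an assumption here; the message alone doesn't prove it) is
that connections are opened faster than they are released, and the postmaster
refuses new connects once its client limit is reached (32 backends by default
in that era). A toy Python sketch of the failure mode, with made-up names
(`TinyServer`, `MAX_CLIENTS`), just to illustrate why refreshing rapidly
eventually trips the limit:

```python
# Toy model of the postmaster's client limit: each connect takes a
# slot, and slots are freed only when the client disconnects.
MAX_CLIENTS = 4  # deliberately tiny for illustration


class TinyServer:
    def __init__(self, max_clients):
        self.max_clients = max_clients
        self.open = 0

    def connect(self):
        if self.open >= self.max_clients:
            raise RuntimeError("Sorry, too many clients already.")
        self.open += 1

    def disconnect(self):
        self.open -= 1


srv = TinyServer(MAX_CLIENTS)
errors = 0
for _ in range(6):
    try:
        srv.connect()  # never paired with disconnect() -> slots leak
    except RuntimeError:
        errors += 1
print(errors)  # 2 of the 6 "requests" are refused
```

Pairing every connect with an explicit disconnect (or raising the
postmaster's client limit) is the usual way out.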


Re: [GENERAL] Re: [HACKERS] Trigger

2000-12-19 Thread Bruce Momjian

> PS. -hackers: What happened with PL/Python? Half a year ago I asked if 
> anyone was working on this, and the answer was: "yes, but 'he' is waiting 
> for the new fmgr design". Tom's fmgr is done... IMHO it's a big pity - 
> Python has a very good design for integration into other programs.

Good question.  I don't remember this old message, though.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



[HACKERS] PostgreSQL pre-7.1 Linux/Alpha Status...

2000-12-19 Thread Ryan Kirkpatrick


I have had the time to test today's (12/19) snapshot on my
Linux/Alpha and the good news is that only two regression tests are
failing. The bad news is that these regression tests do not fail on
Linux/Intel. :( [1]
Specifically, the oid and misc regression tests failed. Here are
the gory details:


oid: Inserting a negative oid should wrap that oid around to an unsigned
 value, but instead pgsql just spits it back out with an error
 message. i.e.:

CREATE TABLE OID_TBL(f1 oid);
...
INSERT INTO OID_TBL(f1) VALUES ('-1040');
ERROR:  oidin: error reading "-1040": value too large

 Probably not a major problem (who inserts negative oids?), but I
 could be wrong. Hopefully it has an easy fix.
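For reference, the wraparound behavior expected on Linux/Intel can be
sketched in a few lines (assumption: oids are unsigned 32-bit integers, and
`wrap_oid` here is a hypothetical stand-in for what `oidin` effectively does
on that platform):

```python
# Reading a negative literal as an unsigned 32-bit oid: the value wraps
# into the unsigned range instead of being rejected.
def wrap_oid(text):
    value = int(text)
    return value % 2**32  # maps negatives into [0, 2**32)


print(wrap_oid("-1040"))  # 4294966256
```

So on Alpha the literal '-1040' should come back as 4294966256 rather than
an oidin error, if the two platforms are to agree.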


misc: This one is nasty... Any attempt to use the '*' operator in the
  context of inheritance causes pgsql to lose its mind and wander off
  into the weeds, never to be seen again. Example from the 'misc' tests:

SELECT p.name, p.hobbies.name FROM person* p;
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
connection to server was lost

  Definitely needs to be fixed, but I have a feeling it will not be
  easy.

Other than those two issues, everything seems to run great. I would go
digging into the code to find the cause of these problems, but I
thought I would throw it out to the list first. [2]
Therefore, if anyone has any ideas as to what is failing, how to
fix it, or at least a general direction to head in (i.e. look in these
source files...), please speak up. If you want more information on the
above problems, feel free to ask. Just tell me what you want, and if it is
not obvious, how to get it.
Looking forward to a new version of pgsql that compiles out of the
box on Linux/Alpha! TTYL.


[1] For those who missed my poor attempt at a joke... I mean that the
Linux/Alpha regression failures are specific to that platform, and
therefore my problem to solve, not a more general problem I could leave to
the pg-hackers to solve.

[2] That, and I am definitely not familiar with the pgsql source, so it
would probably take me a while to make any headway if I just started
digging without any direction...

---
|   "For to me to live is Christ, and to die is gain."|
|--- Philippians 1:21 (KJV)   |
---
|   Ryan Kirkpatrick  |  Boulder, Colorado  |  http://www.rkirkpat.net/   |
---





[HACKERS] CHECK constraint names

2000-12-19 Thread Christopher Kings-Lynne

Hi,

Is it correct behaviour that unnamed table-level check constraints get the
names '$1', '$2', '$3', etc. in Postgres 7.0.3???

E.g., using table constraints:


test=# create table test (temp char(1) NOT NULL, CHECK (temp IN ('M',
'F')));
CREATE
test=# select rcname from pg_relcheck;
 rcname
--------
 $1
(1 row)

And, even worse - I think this has got to be a bug:
---

test=# create table test (temp char(1) NOT NULL, CHECK (temp IN ('M',
'F')));
CREATE
test=# create table test2 (temp char(1) NOT NULL, CHECK (temp IN ('M',
'F')));
CREATE
test=# select rcname from pg_relcheck;
 rcname
--------
 $1
 $1
(2 rows)

Two constraints with the same name!

And if you use column constraints:
--

test=# create table test (temp char(1) NOT NULL CHECK (temp IN ('M', 'F')));
CREATE
test=# select rcname from pg_relcheck;
  rcname
-----------
 test_temp
(1 row)
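Until the auto-generated names are fixed, one workaround is to name
table-level constraints explicitly with the standard SQL CONSTRAINT keyword.
A minimal sketch (using SQLite via Python's standard library rather than
PostgreSQL, purely so it is self-contained; the CONSTRAINT ... CHECK syntax
itself is standard SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Naming the constraint explicitly avoids relying on generated names
# like '$1', and makes duplicates impossible to confuse.
con.execute("""
    CREATE TABLE person (
        sex CHAR(1) NOT NULL,
        CONSTRAINT person_sex_check CHECK (sex IN ('M', 'F'))
    )
""")
con.execute("INSERT INTO person VALUES ('M')")  # accepted
try:
    con.execute("INSERT INTO person VALUES ('X')")  # rejected by the CHECK
    violated = False
except sqlite3.IntegrityError:
    violated = True
print(violated)  # True
```

With an explicit name, error messages and catalog lookups refer to
person_sex_check instead of an anonymous '$1'.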

--
Christopher Kings-Lynne
Family Health Network (ACN 089 639 243)




[HACKERS] 7.1 snapshot on i386 BSD MAJOR failure

2000-12-19 Thread bpalmer

Is this bad, or are problems like this known and expected on OpenBSD?

7.1beta1 had roughly the same errors...


- BEGIN ---

bpalmer@mizer:~/PG7.1/postgresql-snapshot>uname -a
OpenBSD mizer 2.8 GENERIC#399 i386


bpalmer@mizer:~/PG7.1/postgresql-snapshot>gmake check
gmake -C doc all
gmake[1]: Entering directory `/home/bpalmer/PG7.1/postgresql-snapshot/doc'
gmake[1]: Nothing to be done for `all'.
gmake[1]: Leaving directory `/home/bpalmer/PG7.1/postgresql-snapshot/doc'
...
...  (no errors)
...
gmake[2]: Entering directory `/home/bpalmer/PG7.1/postgresql-snapshot/src/test/regress'
gmake -C ../../../contrib/spi REFINT_VERBOSE=1 refint.so autoinc.so
gmake[3]: Entering directory `/home/bpalmer/PG7.1/postgresql-snapshot/contrib/spi'
gmake[3]: `refint.so' is up to date.
gmake[3]: `autoinc.so' is up to date.
gmake[3]: Leaving directory `/home/bpalmer/PG7.1/postgresql-snapshot/contrib/spi'
/bin/sh ./pg_regress --temp-install --top-builddir=../../.. 
--schedule=./parallel_schedule --multibyte=
== removing existing temp installation==
== creating temporary installation==
== initializing database system   ==
== starting postmaster==
running on port 65432 with pid 29043
== creating database "regression" ==
CREATE DATABASE
== installing PL/pgSQL==
== running regression test queries==
parallel group (13 tests):  boolean varchar int8 numeric text int4 char oid int2 
float4 name float8 bit
 boolean  ... FAILED
 char ... ok
 name ... ok
 varchar  ... FAILED
 text ... ok
 int2 ... FAILED
 int4 ... FAILED
 int8 ... FAILED
 oid  ... ok
 float4   ... ok
 float8   ... FAILED
 bit  ... ok
 numeric  ... FAILED
test strings  ... FAILED
test numerology   ... ok
parallel group (18 tests):  box type_sanity point abstime tinterval interval reltime 
inet oidjoins path comments timestamp date circle time lseg polygon opr_sanity
 point... ok
 lseg ... ok
 box  ... FAILED
 path ... FAILED
 polygon  ... ok
 circle   ... FAILED
 date ... FAILED
 time ... FAILED
 timestamp... FAILED
 interval ... FAILED
 abstime  ... FAILED
 reltime  ... ok
 tinterval... FAILED
 inet ... ok
 comments ... FAILED
 oidjoins ... FAILED
 type_sanity  ... FAILED
 opr_sanity   ... ok
test geometry ... FAILED
test horology ... FAILED
test create_function_1... ok
test create_type  ... ok
test create_table ... ok
test create_function_2... ok
test copy ... ok
parallel group (7 tests):  inherit create_aggregate create_operator triggers 
create_misc constraints create_index
 constraints  ... ok
 triggers ... ok
 create_misc  ... ok
 create_aggregate ... ok
 create_operator  ... ok
 create_index ... ok
 inherit  ... FAILED
test create_view  ... ok
test sanity_check ... FAILED
test errors   ... ok
test select   ... ok
parallel group (16 tests):  random union select_distinct select_into arrays portals 
transactions select_distinct_on select_having subselect select_implicit aggregates 
case join btree_index hash_index
 select_into  ... ok
 select_distinct  ... FAILED
 select_distinct_on   ... ok
 select_implicit  ... ok
 select_having... ok
 subselect... FAILED
 union... FAILED
 case ... ok
 join ... ok
 aggregates   ... ok
 transactions ... FAILED
 random   ... failed (ignored)
 portals  ... FAILED
 arrays   ... FAILED
 btree_index  ... ok
 hash_index   ... ok
test misc ... FAILED
parallel group (5 tests):  portals_p2 select_views alter_table foreign_key rules
 select_views ... ok
 alter_table  ... ok
 portals_p2   ... ok
 rules... ok
 foreign_key  ... ok
parallel group (3 tests):  temp limit plpgsql
 limit... ok
 plpgsql  ... ok
 temp ... ok
== shutting down postmaster   ==

PL/Python (was: Re: [GENERAL] Re: [HACKERS] Trigger)

2000-12-19 Thread Karel Zak


On Tue, 19 Dec 2000, Bruce Momjian wrote:

> > PS. -hackers: What happened with PL/Python? Half a year ago I asked if 
> > anyone was working on this, and the answer was: "yes, but 'he' is waiting 
> > for the new fmgr design". Tom's fmgr is done... IMHO it's a big pity - 
> > Python has a very good design for integration into other programs.
> 
> Good question.  I don't remember this old message, though.

 ... but I remember; the following message is in the archive:

> Re: Hello PL/Python
> 
> 
>  * From: Hannu Krosing <[EMAIL PROTECTED]>
>  * To: Karel Zak <[EMAIL PROTECTED]>
>  * Subject: Re: Hello PL/Python
>  * Date: Thu, 20 Jul 2000 12:30:54 +0300
>  _
> 
> Karel Zak wrote:
>> 
>>   Today afternoon I a little study libpython1.5 and I mean create
>>  new PL language is not a problem.
>> 
>>   I a little play with it, and here is effect:
>> 
>>  test=# CREATE FUNCTION py_test() RETURNS text AS '
>>  test'# a = ''Hello '';
>>  test'# b = ''PL/Python'';
>>  test'# plpython.retval( a + b );
>>  test'# ' LANGUAGE 'plpython';
>>  CREATE
>>  test=#
>>  test=#
>>  test=# SELECT py_test();
>>   py_test
>>  -
>>   Hello PL/Python
>>  (1 row)
>> 
>>   Comments? Works on this already anyone?
> 
> There is a semi-complete implementation (i.e. no trigger procedures)
> by Vello Kadarpik ([EMAIL PROTECTED]).
> 
> He is probably waiting for fmgr redesign or somesuch to complete before
> releasing it.
> 
> -
> Hannu


 Where can it be found? IMHO it's a really interesting feature.

Karel