Re: [GENERAL] safety of vacuum verbose analyze on active tables

2000-04-18 Thread Wim Aarts

 Andrew Snow wrote:
 
 what is the relative safety of doing a vacuum verbose analyze
   on a 24Gb
 table while there are selects and updates/inserts happening on it?
   
As far as I know, the table is locked completely during a Vacuum. Any
transactions attempting to do inserts/updates will be paused safely. So go
ahead and schedule your vacuums for whenever you need to.
 
    There have been reports of problems (corruption, etc.) when trying to
    do this.  See the archive for a discussion along these lines a few
    months ago.
 
  You've got to be joking.
 
  Is the table locking mechanism in Postgresql broken??

 I have no idea (but I doubt it).  I simply recall this question being
 asked a few months back, and a couple of folks said something like "Hey,
 we tried this and had problems."  That was with 6.5.* or earlier.  Maybe
 one of those folks can pipe up again.  I couldn't find them in the
 archive...

 Regards,
 Ed Loehr


I've had problems with vacuum on postgres 6.5.3.
They didn't occur in 6.3.2 or 7.0beta1.

The problems occurred after many REVOKE and GRANT statements on the database,
and after and/or during that, running a vacuum twice. The second would start
saying something like:
NOTICE:  Rel pg_class: TID 294/3: InsertTransactionInProgress 20065 - can't
shrink relation
NOTICE:  Rel pg_class: TID 294/4: InsertTransactionInProgress 20065 - can't
shrink relation
NOTICE:  Index pg_class_relname_index: NUMBER OF INDEX' TUPLES (1615) IS NOT
THE SAME AS HEAP' (1587)
NOTICE:  Index pg_class_oid_index: NUMBER OF INDEX' TUPLES (1615) IS NOT THE
SAME AS HEAP' (1587)
VACUUM

But it works in 7.0

Cheers Wim.






[GENERAL] Struggles with RC1 and RPMS

2000-04-18 Thread Mike Mascari

Hello,

I struggled (just a little) to get the new RC1 RPMS installed and
working. The first thing I did was to backup my existing
database:

pg_dump stocks > stocks

Things went fine. I then uninstalled the existing RPMS using
GnoRPM. I downloaded the following RPMS from postgresql.org:

postgresql-7.0RC1-0.5.i386.rpm 
postgresql-tcl-7.0RC1-0.5.i386.rpm
postgresql-devel-7.0RC1-0.5.i386.rpm   
postgresql-tk-7.0RC1-0.5.i386.rpm
postgresql-server-7.0RC1-0.5.i386.rpm

and installed them:

rpm -i postgresql*.rpm

After I installed them, I tried to start up the server:

[root@ferrari init.d]# ./postgresql start
Checking postgresql installation: looks good!
Starting postgresql service: failed.

Damn. I read the postgresql script and saw that it's checking for
the existence of a previous installation to determine whether or
not to perform an initdb. GnoRPM didn't complain about "unable
to delete directory x", but I realized that the existence of the
previous structure was fooling the script, so I removed it:

rm -rf /var/lib/pgsql

I'm sure that most people would have performed an upgrade (rpm
-Uvh), but, ironically enough, I prefer to uninstall and
reinstall for safety's sake. Somehow, that should work :-(. Now I
tried to start the server again:

[root@ferrari init.d]# ./postgresql start
Checking postgresql installation: looks good!
Starting postgresql service: postmaster [2528]

Great. I'll load up my database and get going. First, I'll become
user postgres, create my 'mascarm' id, then as user mascarm, I'll
create the database 'stocks' and then import the stocks.dat file.
Somehow, pg_dump should be able to generate a dump file that
doesn't require the user to recreate both the PostgreSQL user and
database before a restore:

psql stocks < stocks.dat
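The manual steps described above, sketched as SQL (the user and database names are taken from the message; this is a guess at 7.0-era syntax, not the exact commands used):

```sql
-- As the postgres superuser, recreate the database user first:
CREATE USER mascarm;
-- Then, connected as mascarm, recreate the database:
CREATE DATABASE stocks;
-- ...after which the dump can be restored with: psql stocks < stocks.dat
```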

(Some time later...) Damn. I've got a lot of data in there and I
know it will take PostgreSQL hours if I don't run it with fsync
off:

^C

Okay. I dropped the database and recreated it. Now to turn
fsync() off. The /etc/rc.d/init.d/postgresql script has changed.
It's now using pg_ctl instead of calling the postmaster directly.
Fine. I'll just read the pg_ctl man page:

[mascarm@ferrari mascarm]$ man pg_ctl
No manual entry for pg_ctl

No luck today, I guess. I'll use lynx and read the html
documentation:

[mascarm@ferrari postgres]$ cd
/usr/doc/postgresql-7.0RC1/postgres
[mascarm@ferrari postgres]$ fgrep "pg_ctl" *.htm
[mascarm@ferrari postgres]$

Not a good sign. Maybe I'll get lucky and pg_ctl is a script:

[root@ferrari data]# more /usr/bin/pg_ctl 
#! /bin/sh
...

Okay. More script reading today... It appears pg_ctl uses the -o
flag to send options to the "postmaster" on startup. Using -o to
send an option to the "postgres" process was confusing enough in
the pre-pg_ctl days. Now I have to use the -o option to "pg_ctl"
to send the -o option to the "postmaster" to send the -F option
to each "postgres" backend. I found a "postmaster.opts.default"
file in /var/lib/pgsql/data, but it's empty, without sample
options. At least Oracle's "initSID.ora" files contain some
sample comments. So I guess it's safest to modify
/etc/rc.d/init.d/postgresql to run pg_ctl as:

su -l postgres -c "/usr/bin/pg_ctl -o '-o -F' -w -D $PGDATA -p
/usr/bin/postmaster start >/dev/null 2>&1"

Probability that a novice user will be able to run PostgreSQL
with fsync() off: 0%

Now I can import my database:

psql stocks < stocks.dat

...Occasional CREATE statements... and "You are now connected as
new user mascarm." statements. I don't care for the word "new" in
this output. It almost implies a user mascarm has been created
for you. But, of course, you couldn't have gotten this far without
realizing you're going to have to create it yourself.

Finally. The data is loaded and I'm ready to start pgaccess:

[mascarm@ferrari mascarm]$ pgaccess
Error in startup script: couldn't read file
"/usr/pgaccess/main.tcl": no such file or directory

Damn. Another script I'm going to have to change:

[mascarm@ferrari mascarm]$ rpm -qil postgresql-tk

I see that main.tcl is actually in
"/usr/lib/pgsql/pgaccess/lib/". Now, su back to root and edit
"/usr/bin/pgaccess", changing:

PGACCESS_HOME=/usr/pgaccess

to 

PGACCESS_HOME=/usr/lib/pgsql/pgaccess

Try to start up pgaccess. Nope. The postmaster isn't running with
'-i'. Change "/etc/rc.d/init.d/postgresql" again so that pg_ctl
hands the '-i' option to the postmaster. 

Finally, success.


Mike Mascari



[GENERAL] Re: [ANNOUNCE] PostgreSQL book completed though chapter 14

2000-04-18 Thread Denis V. Dmitrienko

Bruce Momjian [EMAIL PROTECTED] writes:

 I have completed the first draft of my book through chapter 14.
 New chapters include:

 The book is accessible at:

 http://www.postgresql.org/docs/awbook.html

 Comments welcomed.

Thanks for your great book! But the latest version of it in pdf format is
broken. :(

-- 
Denis V. Dmitrienko | E-mail: [EMAIL PROTECTED] | ICQ#: 5538614
Home page: http://www.cn.ua/~denis
History became legend... Legend became myth.
And some things that should not have been forgotten ...were lost.



[GENERAL] Starting a new project, any opinions about using 7.0?

2000-04-18 Thread Michael S. Kelly

I'm starting a new project.  We've selected PostgreSQL and I'm wondering if
we shouldn't just jump in with 7.0 and avoid conversion hassles later on.  I
guess the issues for me are:

1.  Stability.  How good is Beta 5? (It is 5 now, isn't it?)
2.  Documentation.  Are the docs ready?
3.  Support.  Who out there can help when things go wrong?

I'd also be interested in what you feel are the most compelling features of
7.0.  Is it the foreign keys, the overhauled optimizer, the updated psql, or
other features?

Thanks,

-=michael=-





[GENERAL] Re: [HACKERS] Struggles with RC1 and RPMS

2000-04-18 Thread Lamar Owen

Thomas Lockhart wrote:
 
   No manual entry for pg_ctl
  Waiting on that man page
 
 The man pages are done and available at a secret, hidden location ;)

To be accessed by my Captain Postgres Secret Decoder Ring (TM)? ;-)

If these man pages are going into the 7.0 final (or an RC2, if we have
one), I can wait to package them until then; although I will go ahead,
download, and experiment.  Of course, we still have three weeks... I may
put together a 0.7 of RC1 in a few days if needed.  

Oh, Thomas, let me know how the RC1-0.6 RPM's act on Mandrake 7, if you
can.

--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



Re: [GENERAL] Win32 Install

2000-04-18 Thread Ron Peterson

Joseph wrote:

You're also asking everyone who reads this newsgroup to send you a
receipt.  Could you turn that auto-ask-for-a-receipt feature off,
please?

Ron Peterson
[EMAIL PROTECTED]



Re: [GENERAL] Starting a new project, any opinions about using 7.0?

2000-04-18 Thread Ron Peterson

"Michael S. Kelly" wrote:
 
 I'm starting a new project.  We've selected PostgreSQL and I'm wondering if
 we shouldn't just jump in with 7.0 and avoid conversion hassles later on.

Yes.

I haven't seen any discussions on this list since 7.0beta1 was released
that would indicate you should be concerned about its stability.  Quite
the contrary - there have been a number of discussions which concluded
by saying something like "...why don't you just try upgrading to
7.0betaX and maybe your problems will go away."  At which point the
discussion usually ceased.

Also, by the time your project, whatever it is, is really underway, I
think you can expect to see the final release of 7.0.

Don't forget that one of the contributions you can make to the
PostgreSQL effort is to provide feedback.  Of course feedback on the
current product is more useful than feedback on past versions.

The features you mention are significant additions to PostgreSQL's
capabilities.  I'm particularly grateful for the addition of referential
integrity constraints.

Ron Peterson
[EMAIL PROTECTED]



Re: [GENERAL] full-text indexing

2000-04-18 Thread Brett W. McCoy

On Tue, 18 Apr 2000, Bruce Momjian wrote:

 I have one word for you:  CLUSTER.  Without it, index lookups are too
 slow.  With it, they are rapid.  I have done some work like this
 commercially with Ingres, which has an ISAM type that keeps the matching
 rows pretty close on a newly-created ISAM index.  In PostgreSQL, a
 regular CLUSTER will keep you good.

I agree!  That's the last bit of advice given in the full-text README.  As I
said, I'd built full-text stuff for experimentation (I had maybe 30k of
raw text, which amounted to several hundred thousand indexed entries), and
I had clustered it, and it was pretty darn fast, even on a Pentium 233 with
only 48 megs of RAM.  I have significantly better hardware to run it on now.
The original project called for MySQL, but it just didn't have what we
needed to put something like this together.
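For anyone trying the same thing, the CLUSTER step being recommended looks roughly like this (the table and column names are invented for illustration, using the 7.0-era CLUSTER syntax):

```sql
-- Index the column the full-text lookups hit, then physically
-- reorder the table's rows to match the index order:
CREATE INDEX fti_word_idx ON fti_words (word);
CLUSTER fti_word_idx ON fti_words;
```

Note that CLUSTER is a one-time reordering; after heavy inserts or updates it has to be run again to restore the row locality.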

 If you find it slow, let me know.  I have done some benchmarking with
 the author and he found it pretty fast, usually a few seconds.  See the
 section in my book on CLUSTER for information on _why_ it helps.

Thanks, Bruce.

Brett W. McCoy
  http://www.chapelperilous.net
---
Twenty two thousand days.
Twenty two thousand days.
It's not a lot.
It's all you've got.
Twenty two thousand days.
-- Moody Blues, "Twenty Two Thousand Days"




Re: [GENERAL] full-text indexing

2000-04-18 Thread Brett W. McCoy

On Tue, 18 Apr 2000, Bruce Momjian wrote:

  I agree!  That's the last bit of advice given in the full-text README.  As I
  said, I'd built full-text stuff for experimentation (I had maybe 30k of
  raw text, which amounted to several hundred thousand indexed entries), and
  I had clustered it, and it was pretty darn fast, even on a Pentium 233 with
  only 48 megs of RAM.  I have significantly better hardware to run it on now.
  The original project called for MySQL, but it just didn't have what we
  needed to put something like this together.
 
 With the original author, testing was fast, but when he loaded all the
 data, it got very slow.  The problem was that as soon as his data
 exceeded the buffer cache, performance became terrible.

How much data are we talking here?  How can one get around this buffer
cache problem?

Brett W. McCoy
  http://www.chapelperilous.net
---
Twenty two thousand days.
Twenty two thousand days.
It's not a lot.
It's all you've got.
Twenty two thousand days.
-- Moody Blues, "Twenty Two Thousand Days"




Re: [GENERAL] full-text indexing

2000-04-18 Thread Bruce Momjian

 On Tue, 18 Apr 2000, Bruce Momjian wrote:
 
  I have one word for you:  CLUSTER.  Without it, index lookups are too
  slow.  With it, they are rapid.  I have done some work like this
  commercially with Ingres, which has an ISAM type that keeps the matching
  rows pretty close on a newly-created ISAM index.  In PostgreSQL, a
  regular CLUSTER will keep you good.
 
 I agree!  That's the last bit of advice given in the full-text README.  As I
 said, I'd built full-text stuff for experimentation (I had maybe 30k of
 raw text, which amounted to several hundred thousand indexed entries), and
 I had clustered it, and it was pretty darn fast, even on a Pentium 233 with
 only 48 megs of RAM.  I have significantly better hardware to run it on now.
 The original project called for MySQL, but it just didn't have what we
 needed to put something like this together.

With the original author, testing was fast, but when he loaded all the
data, it got very slow.  The problem was that as soon as his data
exceeded the buffer cache, performance became terrible.

-- 
  Bruce Momjian|  http://www.op.net/~candle
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [GENERAL] Connecting website with SQL-database.....

2000-04-18 Thread Alex Pilosov

On 18 Apr 2000, Manuel Lemos wrote:

 I may be mistaken, but the last time that I looked at Perl DBI, it didn't
 seem to be as complete a database abstraction layer as is needed.  For
 instance, if you want to retrieve data from date fields, the results come
 formatted in a database-dependent way.  This means that your DBI
 applications can't really be that database-independent, as you still
 have to handle datatype differences in the application code.

I have used another database abstraction layer, that wants to be
all-singing, all-dancing. It is called ODBC. It sucked.

There are add-ons to DBI which allow you to abstract further from your
database, if you so choose. For most of them, you still need to write
database-specific code yourself; they just give you a cleaner interface
for doing it. I believe that, in general, this is the superior approach,
rather than trying to abstract it all in the system/driver code.

The developer always knows what database-dependent features he is using,
and should appropriately abstract them into a different file.

 With this Metabase package in PHP, date fields are always returned formatted
 in the industry standard ISO 8601 (YYYY-MM-DD HH:MI:SS).  Then you do
 whatever processing you want with dates formatted this way, but it's always
 DBMS independent.
Reformatting things every time kills performance. Then again, since you
are using PHP, you are probably not worried about performance that much.

 Another thing that seems to be lacking in DBI and other database abstraction
 layers is support for DBMS independent schema installation.  I mean if you
 want to install a given database schema (tables, fields, indexes,
 sequences) you still have to hand code database dependent SQL commands to
 create them.
Because of the great variety in types, referential-integrity restrictions
and other restrictions supported by databases (and don't get me started on
SQL standards), it's hard for the _driver_ to know what exactly you want to
create. DBI drivers now provide information on the types the database
supports and a more-or-less standardized 'description' of them, but it's up
to you to make use of it.


 As I explained before, with this Metabase PHP package you only need to
 describe the database schema in a custom XML format that looks like this:
 
 <?xml version="1.0" encoding="ISO-8859-1" ?>
 <database>

  <name>test</name>
  <create>1</create>

  <table>
   <name>users</name>
   <declaration>
    <field> <name>user_id</name>   <type>integer</type> <default>0</default> <notnull>1</notnull> </field>
    <field> <name>user_name</name> <type>text</type> </field>
    <field> <name>password</name>  <type>text</type> </field>
    <field> <name>reminder</name>  <type>text</type> </field>
    <field> <name>name</name>      <type>text</type> </field>
    <field> <name>email</name>     <type>text</type> </field>
    <index>
     <name>users_id_index</name>
     <unique>1</unique>
     <field> <name>user_id</name> </field>
    </index>
   </declaration>
  </table>

  <sequence>
   <name>user_id</name>
   <start>1</start>
   <on> <table>users</table> <field>user_id</field> </on>
  </sequence>
 </database>
What if the database doesn't support named sequences (i.e., it only has
'sequence' as a column type, but you can't create a sequence with a name)?
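For comparison, on PostgreSQL a tool consuming that schema description would presumably emit something like the following (my guess at the generated DDL, not Metabase's actual output):

```sql
CREATE TABLE users (
    user_id   integer DEFAULT 0 NOT NULL,
    user_name text,
    password  text,
    reminder  text,
    name      text,
    email     text
);
CREATE UNIQUE INDEX users_id_index ON users (user_id);
CREATE SEQUENCE user_id START 1;
```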

 Metabase will install this schema description on any SQL based database.
 Furthermore, if you change the schema later, you may tell Metabase to apply
 the changes without affecting any data that was added to the database
 afterwards.
Sounds like a pipe dream. (Or like the ER/win tool, which is probably what
you _really_ want to use if you have tens of tables which periodically need
revision.)

 There are other neat features like support for requesting just a range of
 rows of a SELECT query.  In some DBMS it would be as simple as specifying
 the LIMIT clause, but it is not that simple in many others. Metabase
 abstracts all that for you because those are desirable features that all
 database abstraction layers should provide.
If a database doesn't support something, it is not necessarily a feature to
transparently provide emulation for it. Sometimes failing with an error
and forcing the programmer to provide emulation code, or forcing the
programmer to ASK for emulation, is the right thing.
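On PostgreSQL itself, the range-of-rows case Manuel mentions is the simple one (the table name here is illustrative):

```sql
-- Rows 21 through 30 of the ordered result, in PostgreSQL's syntax;
-- other DBMSs spell this differently, or not at all:
SELECT * FROM users ORDER BY user_id LIMIT 10 OFFSET 20;
```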

-alex




[GENERAL] On functions and stored procs

2000-04-18 Thread Graeme Merrall

I'm just getting into functions in postgres and I've bumped up against a
couple of issues which I think need explaining. I've had a wee read of the
archives on this but haven't turned up much. I think it may be a
conceptual problem on my part, though. :)

Is it possible in postgres, using pl/pgsql, to create a function that is
essentially a stored procedure? i.e. go through and execute a series of SQL
statements and return a value - success or failure, for example. The
examples in the docs revolve more around creating functions from, well, a
function point of view rather than a stored procedure point of view.
I should probably say my only exposure to SPs is from MS-SQL, so that's the
definition I mean. :)

Additionally, from reading the docs on pl/pgsql: is it possible to loop
through a set of rows returned from a query and perform an action on each
iteration?
As an example, I query a table for a set of user accounts that need
processing on a certain day (today). Is it possible to query the table,
return a set of results, then loop through those results and on each pass
insert data into another table (or tables)?
The construct I'm referring to in the docs is:

[<<label>>]
FOR record | row IN select_clause LOOP
    statements
END LOOP;
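Something like the following is what I have in mind - the table and column names are invented, and I'm guessing at the 7.0 pl/pgsql syntax, so this may well be wrong:

```sql
CREATE FUNCTION process_accounts() RETURNS integer AS '
DECLARE
    acct RECORD;
BEGIN
    -- Loop over the accounts due today and act on each row:
    FOR acct IN SELECT * FROM accounts WHERE due_date = current_date LOOP
        INSERT INTO processed (account_id) VALUES (acct.id);
    END LOOP;
    RETURN 1;  -- success
END;
' LANGUAGE 'plpgsql';
```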

Any examples of this sort of thing? Quite obviously, the docs generally need
updating when it comes to functions. Is there a documentation project/team?

Regards,
 Graeme