Re: [GENERAL] failed archive command

2011-07-07 Thread Joe Lester
I tried using only %p to specify the path, but according to the server log it 
does not expand to the full path. It only starts at pg_xlog/:

archive_command = '/usr/bin/scp -B %p 
postgres@172.20.204.55:/Volumes/DataDrive/wals_from_master/%f'

DETAIL:  The failed archive command was: /usr/bin/scp -B 
pg_xlog/000100740086 
postgres@172.20.204.55:/Volumes/DataDrive/wals_from_master/000100740086

Also, if I specify the full path (like I was doing before) and execute the scp 
command as the postgres user on the master, it works (see output below). So I 
don't understand why it's not working when the postgres server tries to execute 
the same command.

mybox:~ admin$ su postgres
Password:
bash-3.2$ /usr/bin/scp -B 
/Volumes/DataDrive/data/pg_xlog/000100740086 
postgres@172.20.204.55:/Volumes/DataDrive/wals_from_master/000100740086
000100740086  100%   16MB  16.0MB/s   00:01
bash-3.2$
 
 %p is expanded to the *full* path, so /Volumes/DataDrive/data/%p might
 not be correct. I'd use just %p instead.
 I'm trying to set up WAL archiving to a secondary computer on OS X 10.6.5 
 using postgres 9.0.3.
 
 Here are my settings in postgresql.conf on the primary box:
 
 wal_level = archive
 archive_mode = on
 max_wal_senders = 1
 archive_command = '/usr/bin/scp -B /Volumes/DataDrive/data/%p 
 postgres@172.20.204.55:/Volumes/DataDrive/wals_from_master/%f'
 
 The problem is that I keep getting the following message over and over again 
 in the postgres log:
 
 FATAL:  archive command failed with exit code 255
 DETAIL:  The failed archive command was: /usr/bin/scp -B 
 /Volumes/DataDrive/data/pg_xlog/000100740086 
 postgres@172.20.204.55:/Volumes/DataDrive/wals_from_master/000100740086
 LOG:  archiver process (PID 17771) exited with exit code 1
 unknown user 502
 
 The archive command works if I copy and paste it into the Terminal under the 
 postgres user. Any pointers as to what I'm doing wrong? Thanks.
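The relative-path behavior described above can be simulated in plain shell (a sketch; a scratch directory stands in for the real data directory): a bare pg_xlog/FILE path resolves only because the archiver runs with the data directory as its working directory.

```shell
# %p expands to a path relative to the data directory, so the scp
# command only works from there. Simulated with a temp dir standing
# in for $PGDATA (hypothetical -- not the real data directory).
datadir=$(mktemp -d)
mkdir -p "$datadir/pg_xlog"
touch "$datadir/pg_xlog/000100740086"
cd "$datadir"
ls pg_xlog/000100740086   # resolves because the cwd is the "data dir"
```

Run from anywhere else, the same relative path would fail, which is why testing the command from a home directory with an absolute path proves little about the archiver's environment.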


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] failed archive command

2011-07-06 Thread Joe Lester
I'm trying to set up WAL archiving to a secondary computer on OS X 10.6.5 using 
postgres 9.0.3.

Here are my settings in postgresql.conf on the primary box:

wal_level = archive
archive_mode = on
max_wal_senders = 1
archive_command = '/usr/bin/scp -B /Volumes/DataDrive/data/%p 
postgres@172.20.204.55:/Volumes/DataDrive/wals_from_master/%f'

The problem is that I keep getting this message over and over again in the 
postgres log:

FATAL:  archive command failed with exit code 255
DETAIL:  The failed archive command was: /usr/bin/scp -B 
/Volumes/DataDrive/data/pg_xlog/000100740086 
postgres@172.20.204.55:/Volumes/DataDrive/wals_from_master/000100740086
LOG:  archiver process (PID 17771) exited with exit code 1
unknown user 502

The archive command works if I copy and paste it into the Terminal under the 
postgres user. Any pointers as to what I'm doing wrong? Thanks.



[GENERAL] restoring a backup, incompatible with server

2006-08-08 Thread Joe Lester
I'm having a little trouble restoring a backup (from the production  
server to our development server).


I recently updated both production and development servers to  
postgres 8.1.4 from 8.0.x. When I updated production, I did a fresh  
initdb and imported from a previous dump. This server is running well.


Then I did my normal restore-from-backup routine... I copied the  
$PGDATA folder on production over to development (making sure that  
the necessary log file was copied over as well into the pg_xlog  
folder). However, when I tried to start the development server I got  
this message.


FATAL:  database files are incompatible with server
DETAIL:  The database cluster was initialized with PG_CONTROL_VERSION  
738394112, but the server was compiled with PG_CONTROL_VERSION 812.

HINT:  It looks like you need to initdb.

I'm a little confused since the $PGDATA folder from production was a  
fresh initdb on 8.1.4... and I'm copying it to the development  
server, which is also 8.1.4.


Any advice would be appreciated. Thanks.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] restoring a backup, incompatible with server

2006-08-08 Thread Joe Lester
Hi,

Development is Mac OS 10.4.0 (PPC). Production is Mac OS 10.4.7 (Intel). Is 
that the kicker... PPC vs Intel?

On Aug 8, 2006, at 1:46 PM, Talha Khan wrote:

 Hey Joe! Which OS are you running on the development server and the 
 production server?

 regards
 Talha Khan

[GENERAL] Detect Locked Row Without Blocking

2005-11-08 Thread Joe Lester
Is there a recommended postgres way to determine if a certain row is 
locked... without blocking?


In my custom postgres client app I'd like to be able to determine if 
another user is modifying a given record. If so, I would present a 
dialog to the user such as: "Record Locked. Sam Smith is already 
modifying this record. Try again later."


I've looked at SELECT FOR UPDATE, which looks good except that it 
blocks. I don't want my UI to freeze up... I just want to notify the 
user that it's locked and move on. Any thoughts?
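For reference (worth verifying against the docs for your version): PostgreSQL 8.1 adds a NOWAIT option to SELECT FOR UPDATE that raises an error immediately instead of blocking. A sketch, with a hypothetical table and id:

```sql
BEGIN;
-- With NOWAIT, the lock attempt fails at once with an error if another
-- transaction already holds a lock on the row, rather than blocking
-- (PostgreSQL 8.1 and later).
SELECT * FROM records WHERE id = 42 FOR UPDATE NOWAIT;
-- on error: ROLLBACK and tell the user the record is busy
```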


Thanks.





Re: [GENERAL] Detect Locked Row Without Blocking

2005-11-08 Thread Joe Lester

I see...

For my purposes, I'd still rather notify the user up-front that the 
record is "in modify" (kind of like FileMaker does)... even though now 
I understand that a row lock is not the right mechanism for that.


Is there a best-practice for this approach?

What about using a field to flag the record as "in modify"? But I guess 
then you'd have to protect against two different users 
selecting/updating the field at roughly the same time, each user then 
thinking that he has gained modify privileges for that record. I'm not 
sure a row lock would help any in this circumstance... and I don't want 
to resort to table locks for performance reasons.


On Nov 8, 2005, at 10:14 AM, Tom Lane wrote:


Joe Lester [EMAIL PROTECTED] writes:

In my custom postgres client app I'd like to be able to determine if
another user is modifying a given record. If so, I would present a
dialog to the user such as: "Record Locked. Sam Smith is already
modifying this record. Try again later."


However, I think the question is moot because it's predicated on a
terrible underlying approach.  You should NEVER design a DB app to hold
a lock while some user is editing a record (and answering the phone,
going out to lunch, etc).
Fetch the data and then let the user edit
it while you are not in a transaction.  When he clicks UPDATE, do
BEGIN;
SELECT the row FOR UPDATE;
check for any changes since you fetched the data originally
if none, UPDATE and commit
else rollback and tell user about it

If you do see conflicting changes, then you have enough info to resolve
the conflicts or abandon the update.
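Tom's steps, written out as SQL (the table and column names here are hypothetical):

```sql
BEGIN;
-- re-read and lock the row the user has been editing
SELECT name, qty, updated_at FROM items WHERE id = 7 FOR UPDATE;
-- compare these values (or a version/updated_at column) against the
-- copy fetched before editing began; if nothing has changed:
UPDATE items SET name = 'new name', updated_at = now() WHERE id = 7;
COMMIT;
-- if something did change: ROLLBACK and report the conflict to the user
```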






Re: [GENERAL] Postgres filling up hard drive with swap files

2005-04-29 Thread Joe Lester
http://archives.postgresql.org/pgsql-general/2004-08/msg00972.php
This bug appears to be fixed in Tiger (Mac OS 10.4). That's great, if 
it's really true. Can anyone confirm?




Re: [GENERAL] Postgresql and Macintosh

2005-02-09 Thread Joe Lester
Watch out for the memory leak bug on OS 10.3.2 though 10.3.5 (this is 
apparently an OS X bug, not a Postgres bug):

http://archives.postgresql.org/pgsql-general/2004-08/msg00972.php
I don't know if it exists on 10.3.6 or 10.3.7 (the current version). I 
submitted a bug report to Apple many months ago, but it has not seen 
any activity yet.

Joe
On Feb 9, 2005, at 1:35 AM, John DeSoi wrote:
On Feb 9, 2005, at 10:39 AM, renato.barrios wrote:
Please tell me if Postgresql runs on an iMac.
Sure, assuming your iMac is running OS X. You'll need to install 
Apple's free developer tools to compile it or perhaps you might want 
to use one of the freely available installers such as:

http://www.entropy.ch/software/macosx/postgresql/

John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL


[GENERAL] Undefined symbols: _poll

2004-09-30 Thread Joe Lester
I've just moved my Postgres client project from Mac 10.2.8 to 10.3.5. 
It's an Objective-C program that links to libpq.a (the header file is 
libpq-fe.h). However, when I try to compile now I get the following 
linking error:

ld: Undefined symbols: _poll
Does this sound familiar to anyone? My guess is that _poll is 
referenced somewhere in libpq.a. Anyone have any ideas on how to get 
the linking happening on Mac OS 10.3.5?

Thanks!
Joe



Re: [GENERAL] Postgres filling up hard drive with swap files

2004-08-20 Thread Joe Lester
On Aug 20, 2004, at 2:43 PM, Tom Lane wrote:
Bill Moran [EMAIL PROTECTED] writes:
Joe Lester [EMAIL PROTECTED] wrote:
I'm wondering, however, if you have a connection leak instead.  i.e.
is it possible that your client application is opening a whole bunch
of connections and never closing them?
No. The clients open only one connection (and hang onto it for dear
life  :-).

If these clients aren't utilizing the database, it might be worthwhile
to have them disconnect after a period of inactivity, and reconnect 
when
things get busy again.
If my theory is right, this would actually be counterproductive.  The
leak I think I'm seeing is associated with backend exit and so the way
to slow it as much as possible is to prolong backend lifetime as much
as possible.  Joe, what is the mean lifetime of your connections 
anyway?
I assume they don't stay up forever.
They are permanent connections, meaning that the same connection 
stays open on the server as long as the client application is running. 
And it's common for the clients to stay running for days at a time. I'd 
say the average length of a connection is 3 days.




Re: [GENERAL] Postgres filling up hard drive with swap files

2004-08-20 Thread Joe Lester
On Aug 20, 2004, at 3:01 PM, Bill Moran wrote:
Please don't wrap machine-generated output... it makes it VERY 
difficult to understand.
This is usually caused by a setting in your mail client that reads 
something like "wrap lines at 72 characters" being turned on.
You should wrap your text at 72 chars when you're typing (so it 
displays readably on most mail programs), but it's not a good idea to 
arbitrarily wrap _all_ text in a message to any line length. Doing so 
usually ends up making a mess of some part of the message.
I'll try to be sensitive to that. Unfortunately, my mail client forces 
the text to wrap and provides no override preference. I'm using 
Mail.app, so if anyone knows of a workaround or solution, please let me 
know. Thanks.




Re: [GENERAL] shared_buffers Question

2004-08-17 Thread Joe Lester
Thanks for the suggestion Scott. I did a...

find / -type f -size +10 -print

The results contained 9 Gig! of swap files:
/private/var/vm/swapfile0
/private/var/vm/swapfile1
/private/var/vm/swapfile10
 [plus many more entries]

That seems to indicate to me a memory leak of some sort. My symptoms mirror almost exactly those of this fellow, whose thread was never resolved as far as I can see:

http://archives.postgresql.org/pgsql-bugs/2004-06/msg00013.php

Anyone have any other suggestions on what to look for? At this rate I'm leaking about 2 to 4 Gigs of memory (swap) per week. I'm running postgres 7.4.1 on a 700MHz eMac, 512MB RAM, OS 10.3.2. Thanks.

> Scott Ribe:
> Also check to make sure that some rogue process somewhere isn't filling your
> hard disk with some huge log file. I don't remember the UNIX commands
> offhand, but you should sudo a search starting in / for all large files, say

Joe's Original Message:
I've been running a postgres server on a Mac (10.3, 512MB RAM) with 200 clients connecting for about 2 months without a crash. However, just yesterday the database and all the clients hung. When I looked at the Mac I'm using as the postgres server, it had a window up saying that there was no more disk space available to write memory to. I ended up having to restart the whole machine. I would like to configure postgres so that it does not rely so heavily on disk-based memory but, rather, tries to stay within the scope of the 512MB of physical memory in the Mac.

Re: [GENERAL] Connection reset by peer

2004-04-21 Thread Joe Lester
That makes sense, since the "connection reset by peer" message is 
always followed immediately by:

unexpected EOF on client connection

I should have noticed that before :-0

Thanks!

On Apr 20, 2004, at 10:04 PM, Doug McNaught wrote:

Joe Lester [EMAIL PROTECTED] writes:

I'm seeing this message a couple times per day in my postgres log:

2004-04-20 14:47:46 LOG:  could not receive data from client:
Connection reset by peer
What does it mean? I've seen in the archives that it seems to be some
kind of system error. Does anyone know how serious this message is?
Does it mean that one of my clients is failing to connect or, worse,
losing data in transit?
It generally means a client is exiting, or getting killed, without
shutting down the PG connection properly.
-Doug



Re: [GENERAL] Optimal configuration to eliminate out of file descriptors error

2004-04-16 Thread Joe Lester
Yeah, it was my shell that was the bottleneck. What did the trick was 
adding this line in /etc/profile:

ulimit -n 8000

Thanks!

Bruno Wolff III [EMAIL PROTECTED] writes:
It sounds like what is really happening is that you are hitting an OS 
limit on the number of open files. You should be able to increase that 
limit. There have also been some discussions about postgres doing a 
better job of telling when it has opened too many files within the 
last several months. I don't remember much about the details of the 
change or which version they were applied to.




[GENERAL] Optimal configuration to eliminate out of file descriptors error

2004-04-14 Thread Joe Lester
I'm trying to figure out what the optimal Postgres configuration would be for my server (with 200 connecting clients, even though I'd really like to get it up to 500).

I've got a 700 MHz eMac running Mac OS 10.3.2 (Panther) with 512 MB of RAM. I've messed around with some settings but I'm still getting an occasional "out of file descriptors" error, especially when performing a VACUUM. Like so...
2004-04-13 23:30:05 LOG:  out of file descriptors: Too many open files; release and retry
CONTEXT:  writing block 1 of relation 67553/16604

I'm going to do my best to provide my current system settings that relate to Postgres. It would be great if someone could tell me where I'm way off, and get me on the right track. I'm under the impression that my machine should be able to handle 200 to 500 client connections. If that's not the case, I'm fine with getting new hardware, I just don't want to go to that step willy-nilly. Thanks!

1. Snipped from postgresql.conf (the only three settings I've changed)

max_connections = 200
...
shared_buffers = 2000 
...
max_files_per_process = 100


2. Snipped from /etc/profile

ulimit -u 512


3. Snipped from /etc/rc

sysctl -w kern.sysv.shmmax=167772160
sysctl -w kern.sysv.shmmin=1
sysctl -w kern.sysv.shmmni=32
sysctl -w kern.sysv.shmseg=8
sysctl -w kern.sysv.shmall=65536


4. Snipped from etc/sysctl.conf

# Turn up maxproc 
kern.maxproc=2048

# Turn up the maxproc per user
kern.maxprocperuid=512

Re: [GENERAL] Storing jpgs

2004-04-06 Thread Joe Lester
Would anyone have some example code they could share using libpq to 
encode an image into a text field? Right now, I'm converting my image 
into a hexadecimal string representation in my SQL statement. I'm sure 
there must be a better (faster) way. The hex encoding/decoding slows 
things down for me since my app deals with a lot of images.

On Apr 5, 2004, at 2:03 PM, Gavin M. Roy wrote:

I'm one for using base64 encoded text in a text field.  It's easy to 
deal with queries, it's easy to deal with spitting out images, and 
it's easy to back up.  Others do it differently, there are many ways 
to skin a cat, and each person who skins cats most likely thinks their 
way is best.
Gavin

C G wrote:

Dear All,

What's the best way to store jpgs in postgresql to use in a web page?




Re: [GENERAL] Using NOTIFY... Slow Client Querys

2004-02-17 Thread Joe Lester
Yes, my client receives the notification and then it immediately 
executes a query that hangs for a while.

On Feb 15, 2004, at 12:07 PM, Tom Lane wrote:
Hmm.  Are you certain that the clients have received the NOTIFY?
Perhaps the bottleneck is in delivering the NOTIFY messages, not in
executing the subsequent query.




Re: [GENERAL] Using NOTIFY... Slow Client Querys

2004-02-17 Thread Joe Lester
Thanks. I was kind of suspecting that. But it's nice to have it 
confirmed.

I might try a random delay on the client side after receiving the 
notification, before I query. That may help to break up the load on the 
server.

On Feb 16, 2004, at 10:27 AM, Mikhail Terekhov wrote:
I'd say it is related to the design of the application. Imagine what 
happens:

1. You have 140 backends, most/all of them are sleeping.
2. One client sends a NOTIFY.
3. All 140 backends get awake all together and send a notify message 
to their clients.
4. All 140 clients almost at the same time send a query to the same 
table.
5. Unless you have a _very_ powerful server it will be _very_ slow.

It is a classical multitask bottleneck problem.
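The random client-side delay mentioned above can be sketched in shell (hypothetical; the follow-up query is the one from the original message):

```shell
# After receiving the notification, wait a random 0-5 seconds before
# querying, so that 140 clients don't hit the same table at the same
# instant.
delay=$(( RANDOM % 6 ))
sleep "$delay"
# then run the follow-up query, e.g.:
# psql -c 'SELECT * FROM timeclock ORDER BY touched DESC LIMIT 5;'
```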




[GENERAL] Using NOTIFY... Slow Client Querys

2004-02-13 Thread Joe Lester
I'm using PostgreSQL 7.4.1. I have 140 clients connected on average 
using libpq. When one client sends "NOTIFY timeclock;" to the server, 
all 140 clients are listening for it.

After receiving a notification from libpq (PQnotifies), each client 
proceeds to execute a query for the last five records in the timeclock 
table.

SELECT * FROM timeclock ORDER BY touched DESC LIMIT 5;

It varies, but it's often the case that clients wait up to 3 minutes 
before the results come back. This seems like a really long time for a 
query that I would think would go quickly. In fact, I can execute the 
same query from a third party client and it runs fine, even while my 
own client is still waiting for results.

Any ideas? It seems to be related to NOTIFY/LISTEN. Thanks!





Re: [GENERAL] Increasing Max Connections Mac OS 10.3

2004-02-10 Thread Joe Lester
Would this be kern.maxfiles? There's also one called 
kern.maxfilesperproc.

Is it OK to set these before starting the server? Or should I set them 
in /etc/rc?

On Feb 10, 2004, at 10:04 AM, Tom Lane wrote:

Also look at increasing the kernel's limit on number of open files
(I remember seeing it in sysctl's output yesterday, but I forget
what it's called).




Re: [GENERAL] Increasing Max Connections Mac OS 10.3

2004-02-09 Thread Joe Lester
Joe Lester [EMAIL PROTECTED] writes:
I installed Postgres 7.4.1 on a dual processor G5 running Mac OS
10.3.2. I'm trying to increase the max_connections to 300 and running
into some trouble.
Hmm, it WorksForMe (TM).  You did reboot after changing /etc/rc, no?
Yes, I did a Restart.

Try sysctl -a | grep sysv to verify that the settings took effect.
That's odd. It's giving me a -1 for the shmmax value. I assume that's 
NOT normal. Why would that be?

[lester2:~] joe% sysctl -a | grep sysv
kern.sysv.shmmax: -1
kern.sysv.shmmin: 1
kern.sysv.shmmni: 32
kern.sysv.shmseg: 8
kern.sysv.shmall: 50772160
kern.sysv.semmni: 87381
kern.sysv.semmns: 87381
kern.sysv.semmnu: 87381
kern.sysv.semmsl: 87381
kern.sysv.semume: 10
Note that there's not much percentage in setting shmmax higher than
shmall * pagesize.  I see hw.pagesize = 4096 according to sysctl,
which means your shmall=65536 constrains the total allocation to 256MB,
so setting shmmax to 500M doesn't do anything...
I'm not quite clear on this. Does this mean that shmmax and shmall 
should be set to the same value? Could anyone share with me their own 
settings for shmmax and shmall?
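Tom's constraint can be checked with shell arithmetic (values taken from the /etc/rc settings quoted earlier in this thread):

```shell
# shmall is counted in pages, so the effective ceiling on total shared
# memory is shmall * pagesize; a shmmax above that value buys nothing.
pagesize=4096   # hw.pagesize from sysctl
shmall=65536    # kern.sysv.shmall from /etc/rc
echo $(( shmall * pagesize ))   # 268435456 bytes = 256MB
```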

Thanks.





Re: [GENERAL] Increasing Max Connections Mac OS 10.3

2004-02-09 Thread Joe Lester
Joe Lester [EMAIL PROTECTED] writes:
That's odd. It's giving me a -1 for the shmmax value. I assume that's 
NOT normal. Why would that be?

It's not --- you should get back the same value you set.  I speculate
that you tried to set a value that exceeded some internal sanity check
in the kernel.  I wouldn't be too surprised if the kernel rejects values
larger than available RAM, for instance.

I tried a few different things to try to get the shmmax value to be something other than 4194304 (the default in /etc/rc).

First, I restarted my mac, then, as the root user...

I tried setting it to a high number:
[lester2:~] root# sysctl -w kern.sysv.shmmax=9194304
kern.sysv.shmmax: -1  

No luck. It set it back to -1

Then I tried setting it to a low number:
[lester2:~] root# sysctl -w kern.sysv.shmmax=3194304
kern.sysv.shmmax: -1

Still no action.

Then I tried setting it to 4194304 (the default in /etc/rc):
[lester2:~] root# sysctl -w kern.sysv.shmmax=4194304
kern.sysv.shmmax: -1 -> 4194304

It took this time! BUT... I need to increase the number because my postgres error log is telling me that it needs to be at least 4620288:

DETAIL:  Failed system call was shmget(key=5432001, size=4620288, 03600).
HINT:  This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter.  You can either reduce the request size or reconfigure the kernel with larger SHMMAX.  To reduce the request size (currently 4620288 bytes), reduce PostgreSQL's shared_buffers parameter (currently 300) and/or its max_connections parameter (currently 100).

Any ideas? Again, I am running Mac OS 10.3.2 and Postgres 7.4.1 on a dual processor G5. Thanks.
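One plausible explanation (an assumption to verify, though it is a known Mac OS X quirk noted in the PostgreSQL kernel-resources documentation): the kernel silently rejects a shmmax that is not an exact multiple of the 4096-byte page size, which matches the three values tried above:

```shell
# Of the three values tried above, only 4194304 is an exact multiple
# of 4096 -- and it is the only one the kernel accepted.
for v in 9194304 3194304 4194304; do
  echo "$v % 4096 = $(( v % 4096 ))"
done
```

On that theory, picking a page-aligned value comfortably above the 4620288-byte request (e.g. some multiple of 4096) should stick.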