Re: [ADMIN] Mac OS 10.3 Panther make questions

2004-02-18 Thread Shane Wright

There may be something I'm missing about why you're compiling from source, but I'm having a great time with the binary package from here...

http://www.entropy.ch/software/macosx/postgresql/

It's 7.4, not 7.4.1, so I guess that may be why - but all the complicated bits are done and it works well!  No dependencies on Fink or anything else (yeah, I know, it includes a copy of everything - wasteful, but it lets me get my job done...)

S


On 17 Feb 2004, at 00:05, Kevin Barnard wrote:

I am trying to get PostgreSQL 7.4.1 to build on Mac OS 10.3.  I run configure, which complains about readline.  I'm not sure how Apple compiled bash (the new default shell), but I can't find the readline library or headers either.  I have installed Xcode and found a few notes online about PostgreSQL 7.3.4 and OS 10.2 which talk about building readline and using the /sw directory, which does not exist on my system.  Does anybody know what I am missing?

Thank you in advance for any help
Kevin Barnard


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match
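If compiling from source is still the goal, configure can be pointed at a separately installed readline, or told to skip it - a sketch only, and the /usr/local prefix is an assumption about where readline was installed:

```shell
# Assuming GNU readline was built and installed under /usr/local:
./configure --with-includes=/usr/local/include --with-libraries=/usr/local/lib

# Or skip readline entirely (psql loses line editing and history):
./configure --without-readline
```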


Shane Wright
Technical Manager
eDigitalResearch.com
2 Berrywood Business Village
Hedge End
Hampshire
SO30 2UN
T +44 (0) 1489 772920
F +44 (0) 1489 772922
 
This message is sent in confidence for the addressee only.  The contents are not to be disclosed to anyone other than the addressee.  Unauthorised recipients must preserve this confidentiality and should please advise the sender immediately of any error in transmission.
 
Any attachment(s) to this message has been checked for viruses, but please rely on your own virus checker and procedures.


Re: [ADMIN] hanging for 30sec when checkpointing

2004-02-06 Thread Shane Wright
Hi

Thanks to you all for your help!  I'm continually impressed with how responsive and knowledgeable y'all are!

To clarify; it's an IDE drive with a reiserfs filesystem.  DMA is definitely enabled; sequential reads pull 35Mb/sec sustained.

The I/O caused by the checkpoint just seems to be too much while other transactions are running.  As it's a managed server at our ISP throwing more hardware at it isn't an option at the moment unfortunately, so I think I'm left with optimising the app to reduce the number of INSERTs/UPDATEs.

Is what Iain said correct - that [committed] transactions are only written to WAL at commit time, and the actual table data files are only updated at checkpoint?

I guess really it's something I hadn't thought of - in effect, the database is able to handle _bursts_ of high load, but sustaining it is hard (because a checkpoint happens sooner or later).

Hmm that gives me an idea, for bulk processing, is there a way of detecting from a client when a checkpoint is about to happen so it can wait until it's finished?  Some way that's easier than -z `ps fax | grep post | grep checkpoint` that is ;)

Cheers

Shane




On 3 Feb 2004, at 22:35, Shane Wright wrote:

Hi,

I'm running a reasonable sized (~30Gb) 7.3.4 database on Linux and I'm getting some weird performance at times.

When the db is under medium-heavy load, it periodically spawns a 'checkpoint subprocess' which runs for between 15 seconds and a minute.  Ok, fair enough, the only problem is the whole box becomes pretty much unresponsive during this time - from what I can gather it's because it writes out roughly 1Mb (vmstat says ~1034 blocks) per second until it's done.

Other processes can continue to run (e.g. vmstat) but other things do not (other queries, mostly running 'ps fax', etc).  So everything gets stacked up till the checkpoint finishes and all is well again, until the next time...

This only really happens under medium/high load, but doesn't seem related to the length/complexity of transactions done.

The box isn't doing a lot else at the same time - most queries come in from separate web server boxes.

The disks, although IDE, can definitely handle more than 1Mb/sec - even with multiple concurrent writes.  The box is powerful (2.6Ghz Xeon, 2Gb RAM).  It's a clean compile from source of 7.3.4, although I can't really upgrade to 7.4.x at this time as I can't afford the 18 hours downtime to dump/restore the database.  Fsync is on.  Most other settings at their defaults.

I've looked at the documentation and various bits about adjusting checkpoint segments and timings - but it seems reducing segments/timeout is implied to be bad, but it seems to me that increasing either will just make the same thing happen less often but more severely.

If it makes any odds, this seems to happen much more often when doing bulk UPDATEs and INSERTs - although these are in transactions grouping them together - and they don't affect the same tables as other queries that still get stalled (no lock contention causing the problem).

What am I missing?  I'm sure I'm missing something blatantly obvious, but as it's only really happening on production systems (the only place with the load and the volume of data) I'm loath to experiment.

Any help appreciated,

Cheers,

Shane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]




Re: [ADMIN] hanging for 30sec when checkpointing

2004-02-06 Thread Shane Wright
Tom,

Damn, why didn't I think of that myself...

Although, is there any performance implication of not doing checkpoints very often?  (aside from, I assume, that each checkpoint will take longer and hence saturate available I/O for longer)

Cheers

Shane

On 6 Feb 2004, at 15:33, Tom Lane wrote:

Shane Wright <[EMAIL PROTECTED]> writes:
Hmm that gives me an idea, for bulk processing, is there a way of
detecting from a client when a checkpoint is about to happen so it can
wait until it's finished?

No, but maybe you could think about scheduling checkpoints yourself
to not coincide with your bulk jobs.  You could issue CHECKPOINT
commands from cron or whatever is dispatching your bulk jobs, and then
widen the checkpoint-timing parameters in postgresql.conf enough to
avoid automatic CHECKPOINTs.

The only real drawback of less-frequent CHECKPOINTs is that the amount
of WAL space required goes up proportionally.  (Oh, and it might take
proportionally longer to recover after a crash, too.)

regards, tom lane
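Tom's cron approach might look something like this - a sketch only; the function name, database name, and schedule are made up:

```shell
# Hypothetical wrapper for a cron-dispatched bulk job: force a manual
# CHECKPOINT up front, so the automatic one (pushed out via
# checkpoint_segments / checkpoint_timeout in postgresql.conf) is less
# likely to land mid-run.
run_bulk_job() {
    db="$1"; shift
    # issue the checkpoint first...
    psql -d "$db" -c 'CHECKPOINT;' || return 1
    # ...then run the bulk job itself
    "$@"
}

# crontab entry, e.g. checkpoint-then-load at 2am:
# 0 2 * * *  /usr/local/bin/run_bulk.sh mydb /path/to/bulk_job.sh
```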




[ADMIN] hanging for 30sec when checkpointing

2004-02-03 Thread Shane Wright
Hi,

I'm running a reasonable sized (~30Gb) 7.3.4 database on Linux and I'm 
getting some weird performance at times.

When the db is under medium-heavy load, it periodically spawns a 
'checkpoint subprocess' which runs for between 15 seconds and a minute.  
Ok, fair enough, the only problem is the whole box becomes pretty much 
unresponsive during this time - from what I can gather it's because it 
writes out roughly 1Mb (vmstat says ~1034 blocks) per second until it's done.

Other processes can continue to run (e.g. vmstat) but other things do 
not (other queries, mostly running 'ps fax', etc).  So everything gets 
stacked up till the checkpoint finishes and all is well again, until 
the next time...

This only really happens under medium/high load, but doesn't seem 
related to the length/complexity of transactions done.

The box isn't doing a lot else at the same time - most queries come in 
from separate web server boxes.

The disks, although IDE, can definitely handle more than 1Mb/sec - even 
with multiple concurrent writes.  The box is powerful (2.6Ghz Xeon, 2Gb 
RAM).  It's a clean compile from source of 7.3.4, although I can't really 
upgrade to 7.4.x at this time as I can't afford the 18 hours downtime to 
dump/restore the database.  Fsync is on.  Most other settings at their 
defaults.

I've looked at the documentation and various bits about adjusting 
checkpoint segments and timings - but it seems reducing segments/timeout 
is implied to be bad, but it seems to me that increasing either will 
just make the same thing happen less often but more severely.

If it makes any odds, this seems to happen much more often when doing 
bulk UPDATEs and INSERTs - although these are in transactions grouping 
them together - and they don't affect the same tables as other queries 
that still get stalled (no lock contention causing the problem).

What am I missing?  I'm sure I'm missing something blatantly obvious, 
but as it's only really happening on production systems (only place with 
the load and the volume of data) I'm loath to experiment.

Any help appreciated,

Cheers,

Shane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [ADMIN] Trying to pg_restore a 7.1.3 db into 7.3.4 - stalling at 100%

2003-11-14 Thread Shane Wright
Hi,

I've used this command to dump just the table concerned, and I get a 
zero length file!?!  No errors or anything, just a silent exit.

pg_dump -U xx -S xx -F t -a -t tablename mydatabase > ~/backup/mytable

Any ideas?  I'm thinking the zero-sized file could be what was causing 
the read()s to fail, but as to why it's happening in the first place I 
can't think..

Cheers

Shane

On 11 Nov 2003, at 19:52, Jeff wrote:

read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
fd 0 is usually stdin, unless the program disconnects stdin.
Maybe pg_restore is waiting for input, perhaps a password?
--
Jeff Trout [EMAIL PROTECTED]
http://www.jefftrout.com/
http://www.stuarthamm.net/


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [ADMIN] Trying to pg_restore a 7.1.3 db into 7.3.4 - stalling

2003-11-12 Thread Shane Wright
Hi,

fd 0 is usually stdin, unless the program disconnects stdin.
Maybe pg_restore is waiting for input, perhaps a password?
It certainly shouldn't be - the table where the problem happens is no 
different to any of the others, but I will try doing just that table 
later today and see if that makes any difference.

Wouldn't it have output something though - like a prompt for input or 
something?

Regards

Shane

On 11 Nov 2003, at 19:52, Jeff wrote:

read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
fd 0 is usually stdin, unless the program disconnects stdin.
Maybe pg_restore is waiting for input, perhaps a password?
--
Jeff Trout [EMAIL PROTECTED]
http://www.jefftrout.com/
http://www.stuarthamm.net/
---(end of broadcast)---
TIP 8: explain analyze is your friend



---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [ADMIN] [pg 7.1.rc2] pg_restore and large tables

2003-11-12 Thread Shane Wright
Hi,

I have found, on 7.3.4, a _massive_ performance difference restoring 
without indices - on a 25 million row table, from 8 hours down to 1 
hour!

I've found the best way is to do this... (there may be a script 
somewhere that automates this)

- do a --schema-only restore to create the tables

- manually drop the indices using psql

- do a --data-only restore, also using --disable-triggers

- manually recreate the indices.

IIRC, it also helps to turn off fsync
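The steps above can be sketched as a script - names and the dump file are illustrative, and it assumes an archive-format (custom/tar) dump that pg_restore can read:

```shell
# Sketch of the restore sequence: schema first, then data with triggers
# disabled, with index drops/recreates done by hand in between.
restore_without_indices() {
    db="$1"; dump="$2"
    # 1. create the tables and other schema objects only
    pg_restore --dbname="$db" --schema-only "$dump" || return 1
    # 2. (drop the big indices by hand in psql here)
    # 3. load the data with triggers (FK checks) disabled
    pg_restore --dbname="$db" --data-only --disable-triggers "$dump" || return 1
    # 4. (recreate the indices by hand in psql here)
}

# e.g.:
# restore_without_indices mydb /backups/mydb.dump
```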

Hope that helps,

Shane

On 12 Nov 2003, at 16:55, ow wrote:

Hi,

Trying to restore a table that has about 80 million records. The 
database was
dumped and restored according to the following procedure:

1) dump the db, data only
time /usr/local/pgsql/bin/pg_dump -abf ./pgsql.7.4.rc1.pgdump.Z 
--format=c
--compress=6 -U postgres testdb

2) create db schema from a separate file, including table structures,
constraints, indexes
3) edit restore order to satisfy the constraints
4) restore the db
time /usr/local/pgsql/bin/pg_restore -d testdb -U postgres -a
./pgsql.7.4.rc1.pgdump.Z -L ./restoreOrder.txt --verbose
pg_restore has been running for 14 hours now and it does not appear 
that
there's any end in sight. Meanwhile, postmaster is slowly eating away 
at the
memory, it now has 46% of all available memory with about 900MB on 
swap. HD
activity is non-stopping.

In retrospect, I guess, the table with 80M records could've been created
without indexes (it has 3: pk & ak constraints and an fk index) to speed 
up the
pg_restore ... but then I'm not sure if creating the indexes afterwards
would've been much faster. Anything I'm doing wrong?

Thanks

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [ADMIN] Trying to pg_restore a 7.1.3 db into 7.3.4 - stalling at 100%

2003-11-11 Thread Shane Wright
Gaetano,

er, shedloads of this

read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
read(0, "", 4096)   = 0
methinks that's not a good sign?

Shane

Gaetano Mendola wrote:

Shane Wright wrote:

Hi,

I'm trying to upgrade our 25Gb database from 7.1.3 to 7.3.4 - pg_dump
worked fine, although piping through split to get a set of 1Gb files.
But, after a few attempts on using pg_restore to get the data into the
new installation I'm having a few problems; basically it restores  the
first few tables fine (big tables too), but now it's just hanging on
one table; using 100% CPU but hardly touching the disk at all (vmstat
reports about 50kb every few minutes).


what does an strace on that process show?

Regards
Gaetano Mendola


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[ADMIN] Trying to pg_restore a 7.1.3 db into 7.3.4 - stalling at 100% CPU w/ no disk access

2003-11-10 Thread Shane Wright
Hi,

I'm trying to upgrade our 25Gb database from 7.1.3 to 7.3.4 - pg_dump
worked fine, although piping through split to get a set of 1Gb files.

But, after a few attempts on using pg_restore to get the data into the
new installation I'm having a few problems; basically it restores  the
first few tables fine (big tables too), but now it's just hanging on
one table; using 100% CPU but hardly touching the disk at all (vmstat
reports about 50kb every few minutes).

pg_dump command was something like...

pg_dump --user=xxx --superuser=xxx --format=t | split -b 10

pg_restore commands are these:

cat xaa xab xac | pg_restore --dbname=xx --superuser=xx --username=xx
--schema-only
cat xaa xab xac | pg_restore --dbname=xx --superuser=xx --username=xx
--data-only --rearrange --disable-triggers --verbose

(dropped a few indices between the two pg_restore's to make things
quicker).

Anyway, it ploughed through the first bit (including a 25 million row
table) in under an hour, but now it's been stalled on one table for 5
hours.

Dunno if it's coincidence or not, but this table is the only one with a
BYTEA column; it has a few million rows I think.

Any ideas what might be causing it?  I'm hoping its not some bug in
pg_restore causing an infinite loop or something.

If it helps, the CPU usage (according to top) is all in pg_restore
(not a postmaster process), and (according to vmstat) it's about 40%
user and 60% system.

Both installs are (I think - the ISP did it) clean source builds from
PostgreSQL's FTP site; both are running on RH Linux, 7.1.3 on a dual
PIII, 7.3.4 on a P4 Xeon.  Total size of the dumped database is 3Gb
(across 3 files).

Any help appreciated,

Thanks,

Shane

---(end of broadcast)---
TIP 8: explain analyze is your friend

