[GENERAL] LARGE db dump/restore for upgrade question

2001-08-14 Thread Philip Crotwell


Hi

I have a very large database of seismic data. It is about 27 GB now,
growing at a rate of about 1 GB every 3-4 days. I am running
postgres 7.1.2. I might try to upgrade to 7.2 when it comes out,
but I don't know if it will be possible for me to do 7.3 due to
the pg_dump/pg_restore problem. In a little over a year the database will
probably pass the halfway point on my RAID, and so it will be physically
impossible to upgrade. Most of the space is probably taken up by large
objects, which I am hoping will make a solution at least a little bit
easier.

I am trying a pg_dump right now, and in the first 25 minutes it dumped
54 MB, which means a full dump will take about 200 hours! I would
guess the restore would take about the same amount of time, so I would be
looking at 17 DAYS of downtime to upgrade! Maybe it will speed up later in
the dump, I don't know. And in about 3 months or so it will take me twice
that amount of time. Also, this is on a 4-processor Sun E450 with an A1000
hardware RAID, so it's not that I am using old, slow hardware. Just for
comparison, a file system dump to tape took 6 hours, and that was back
when I only had a software RAID!
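
One thing I plan to test, assuming the custom archive format that 7.1's
pg_dump advertises (-Fc, with -b for large objects and -Z for compression)
holds up on a database this size, is skipping the plain text format
entirely. Untested sketch; the database name and paths here are made up:

    # custom-format dump, compressed, with large objects included;
    # the plain text format cannot carry large objects at all
    pg_dump -Fc -b -Z 5 seisdb > /backup/seisdb.dump

    # load it into the new cluster with pg_restore
    pg_restore -d seisdb /backup/seisdb.dump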

So, my question is: is it likely that one day postgres will no longer
require a dump/restore for upgrades? I assume there will always
be a need to tweak the internal layout of files, but I wonder if there
isn't a way to do this "in place", or at least to allow a file system move
of the large objects without requiring them to be dumped as well?
Even better would be if each new version of postgres could read the
immediately previous version's tables, and could convert them in the
background. Maybe I'm just dreaming here. :)

Could something related to making the upgrade less painful for very large
databases be added to the ToDo list, even if it isn't a high priority?

Not that I am complaining; postgres seems to handle this data volume quite
well, and it is certainly worth every dollar I didn't pay for it. :)

Any suggestion on how to prepare for the next upgrade would be
appreciated.

thanks,
Philip






[GENERAL] ERROR: Bad interval external representation '0'

2001-06-15 Thread Philip Crotwell


Hi

Is there a problem with averaging time intervals? I get this error from
AVG(inserttime - last):

ERROR:  Bad interval external representation '0'

but SUM(inserttime - last) / COUNT(inserttime) works just fine.

Details below; the database version is 7.1.

thanks,
Philip


sceppdata=> \d data_chunk
Table "data_chunk"
   Attribute   |   Type   |   Modifier
---+--+---
 chunkid   | integer  | not null
 seq_num   | integer  | 
 channel_dbid  | integer  | 
 first | timestamp with time zone | 
 first_nanos   | integer  | 
 first_leapsec | integer  | 
 last  | timestamp with time zone | 
 last_nanos| integer  | 
 last_leapsec  | integer  | 
 inserttime| timestamp with time zone | default now()
Indices: data_chunk__first_idx,
 data_chunk__last_idx,
 data_chunk__seq_num_idx,
 data_chunk_pkey

sceppdata=> SELECT AVG( inserttime - last) AS latency
 FROM data_chunk WHERE channel_dbid = 1 AND
last > '2001-06-15 10:35:54.85-04';
ERROR:  Bad interval external representation '0'
sceppdata=> SELECT SUM( inserttime - last) / COUNT (inserttime) AS latency
 FROM data_chunk WHERE channel_dbid = 1 AND
last > '2001-06-15 10:35:54.85-04';
   latency
-------------
 00:00:07.07
(1 row)

sceppdata=> SELECT inserttime, last, (inserttime - last) AS latency
sceppdata->  FROM data_chunk WHERE channel_dbid = 1 AND
last > '2001-06-15 10:35:54.85-04';
       inserttime       |            last           |   latency
------------------------+---------------------------+-------------
 2001-06-15 10:37:38-04 | 2001-06-15 10:37:29.70-04 | 00:00:08.30
 2001-06-15 10:39:36-04 | 2001-06-15 10:39:30.16-04 | 00:00:05.84
(2 rows)







Re: [GENERAL] defunct postmasters

2001-05-11 Thread Philip Crotwell


Hi

Once more, this time with feeling :)
Sorry, I'm not a regular user of gdb, but I figured out my error. Does
this help?

Anything else before kill -9?

thanks,
Philip

# gdb postmaster 29214
GNU gdb 19991004
Copyright 1998 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux"...

/usr/local/pgsql/bin/29214: No such file or directory.
Attaching to program: /usr/local/pgsql/bin/postmaster, Pid 29214
Reading symbols from /lib/libcrypt.so.1...done.
Reading symbols from /lib/libresolv.so.2...done.
Reading symbols from /lib/libnsl.so.1...done.
Reading symbols from /lib/libdl.so.2...done.
Reading symbols from /lib/libm.so.6...done.
Reading symbols from /lib/libc.so.6...done.
Reading symbols from /lib/ld-linux.so.2...done.
Reading symbols from /lib/libnss_files.so.2...done.
0x4013da02 in __libc_accept () from /lib/libc.so.6
(gdb) bt
#0  0x4013da02 in __libc_accept () from /lib/libc.so.6
#1  0x80c34b9 in StreamConnection ()
#2  0x80e07b1 in ConnCreate ()
#3  0x80e0239 in ServerLoop ()
#4  0x80dfdb3 in PostmasterMain ()
#5  0x80c3fa5 in main ()
#6  0x400a39cb in __libc_start_main (main=0x80c3ec0 <main>, argc=4,
argv=0xbb14, init=0x80651d0 <_init>, 
fini=0x813697c <_fini>, rtld_fini=0x4000ae60 <_dl_fini>,
stack_end=0xbb0c) at ../sysdeps/generic/libc-start.c:92
(gdb) info frame
Stack level 0, frame at 0xb400:
 eip = 0x4013da02 in __libc_accept; saved eip 0x80e07b1
 (FRAMELESS), called by frame at 0xb400
 source language unknown.
 Arglist at 0xb400, args: 
 Locals at 0xb400, Previous frame's sp is 0x0
 Saved registers:
  ebp at 0xb400, eip at 0xb404
(gdb) 



On Fri, 11 May 2001, Tom Lane wrote:

> Philip Crotwell <[EMAIL PROTECTED]> writes:
> > I am running postgres 7.1 on Red Hat 6.2 and my database has gone belly up.
> 
> > I know I am not supposed to "kill -9" the postmaster, but it has become
> > completely unresponsive. psql just hangs, as does stopping with the
> > rc.d script.
> 
> Actually, kill -9 should be perfectly safe in PG 7.1; it was only
> earlier releases that didn't like it.  But before you do that,
> would you attach to the top postmaster process (29214) with gdb
> and get a stack trace?
> 
> > PS I don't know why this happened, but the only theory I have is that I am
> > running with -i to allow jdbc connections and I had port scanned the
> > machine with nmap shortly before noticing that I could no longer connect.
> 
> Hmm, would you see if that's repeatable?
> 
>   regards, tom lane
> 






Re: [GENERAL] defunct postmasters

2001-05-11 Thread Philip Crotwell


Hi

Not sure if this is helpful, but...
Am I doing this correctly? Anything else to try before "pulling the plug"?
thanks,
Philip


# gdb postmaster 29214
GNU gdb 19991004
Copyright 1998 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux"...

postmaster: No such file or directory.


/usr/local/src/29214: No such file or directory.
Attaching to Pid 29214
0x4013da02 in ?? ()
(gdb) bt
#0  0x4013da02 in ?? ()
#1  0x80e07b1 in ?? ()
#2  0x80e0239 in ?? ()
#3  0x80dfdb3 in ?? ()
#4  0x80c3fa5 in ?? ()
#5  0x400a39cb in ?? ()
(gdb) info frame
Stack level 0, frame at 0xb400:
 eip = 0x4013da02; saved eip 0x80e07b1
 called by frame at 0xb414
 Arglist at 0xb400, args: 
 Locals at 0xb400, Previous frame's sp is 0x0
 Saved registers:
  ebp at 0xb400, eip at 0xb404


On Fri, 11 May 2001, Tom Lane wrote:

> Philip Crotwell <[EMAIL PROTECTED]> writes:
> > I am running postgres 7.1 on Red Hat 6.2 and my database has gone belly up.
> 
> > I know I am not supposed to "kill -9" the postmaster, but it has become
> > completely unresponsive. psql just hangs, as does stopping with the
> > rc.d script.
> 
> Actually, kill -9 should be perfectly safe in PG 7.1; it was only
> earlier releases that didn't like it.  But before you do that,
> would you attach to the top postmaster process (29214) with gdb
> and get a stack trace?
> 
> > PS I don't know why this happened, but the only theory I have is that I am
> > running with -i to allow jdbc connections and I had port scanned the
> > machine with nmap shortly before noticing that I could no longer connect.
> 
> Hmm, would you see if that's repeatable?
> 
>   regards, tom lane
> 


Philip Crotwell   (803)777-0955  (803)777-0906 fax  [EMAIL PROTECTED]







[GENERAL] pgdump, large objects and 7.0->7.1

2001-03-16 Thread Philip Crotwell


Hi

I am having problems with large objects in 7.0.3: high disk usage, slow
access and deletes of large objects, and occasional selects that hang with
the backend process going to 98% of the CPU and staying there. Having
read that there are a lot of large object improvements in 7.1, I was
thinking of trying the beta out to see if these problems would disappear.

But 7.0->7.1 needs a pg_dumpall/restore. That wouldn't be a problem,
except that pg_dumpall in 7.0 doesn't dump large objects. :(

So, 3 questions that basically boil down to "What is the best way to move
large objects from 7.0 to 7.1?"

1) Can I use the 7.1 pg_dumpall to dump a 7.0.3 database? The docs say no,
but it may be worth a try.

2) What does "large objects... must be handled manually" in the 7.0 pg_dump
docs mean? Does it mean there is a way to manually copy the
xinv files? I have ~23000 of them at present. (A rough sketch of what I
have in mind is below.)

3) Do I need to preserve OIDs with pg_dump when using large objects?
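
If "handled manually" is the answer to (2), here is the rough, untested
sketch I have in mind, assuming each 7.0 large object really is stored in
its own xinv<oid> relation and that psql's \lo_export/\lo_import can
round-trip them (database names and paths here are made up):

    # export every large object from the 7.0 database, using the
    # xinv<oid> relation names in pg_class to find the OIDs
    psql -At -c "SELECT substr(relname, 5) FROM pg_class WHERE relname LIKE 'xinv%'" olddb |
    while read oid; do
        psql -c "\lo_export $oid /backup/lo_$oid" olddb
    done

    # after loading the schema into 7.1, re-import each file;
    # \lo_import prints a *new* OID, which would then have to be
    # mapped back onto whatever columns referenced the old one
    for f in /backup/lo_*; do
        psql -c "\lo_import $f" newdb
    done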

thanks,
Philip

PS It would be great if something about this could be added to the 7.1
docs. I would guess that others will have this same problem when 7.1 is
released.






[GENERAL] overhead of "small" large objects

2000-12-10 Thread Philip Crotwell


Hi

I'm putting lots of small (~10 kB) chunks of binary seismic data into large
objects in postgres 7.0.2, basically just arrays of 2500 or so ints that
represent about a minute's worth of data. I put in the data at a rate of
about 1.5 MB per hour, but the disk usage of the database is growing at
about 6 MB per hour! A factor of 4 seems a bit excessive.
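
I guess one way to see where the space is going, assuming 7.0 really does
store each large object in its own xinv<oid> relation and the default
8 kB block size, would be something like:

    -- count the per-object relations and estimate their total size;
    -- relpages is only as accurate as the last VACUUM, and any
    -- per-object indexes would add still more on top of this
    SELECT count(*)          AS n_large_objects,
           sum(relpages) * 8 AS approx_kb
      FROM pg_class
     WHERE relname LIKE 'xinv%';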

Is there significant overhead involved in using large objects that aren't
very large?

What might I be doing wrong?

Is there a better way to store these chunks?
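
For example, since each chunk is just ints, maybe an int4 array column
would avoid the per-object overhead entirely. Untested sketch, and the
table name is made up; also, if I understand the 8 kB row limit correctly,
a 10 kB array won't fit in a 7.0 row at all, so this would only work once
7.1's TOAST support arrives:

    -- store each minute of samples directly in a row instead of a
    -- large object; roughly 2500 int4 values per array
    CREATE TABLE chunk_samples (
        chunkid  integer PRIMARY KEY,
        samples  integer[]
    );

    -- array literals use curly-brace syntax
    INSERT INTO chunk_samples VALUES (1, '{101,102,103}');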

thanks,
Philip