[ADMIN] encoding question

2007-05-03 Thread Warren Little
I'm getting the following error from a query being generated from MS Access:

character 0xefbfbd of encoding UTF8 has no equivalent in LATIN9

Upon reviewing the archives I found the following:

Kevin McCarthy kemccarthy1 ( at ) gmail ( dot ) com writes:
 Often users will copy and paste text directly from MS Word docs  
into the
 forms which will invariably contain Microsoft's proprietary  
formatting of
 characters such as 'smart' quotes, trademark, copyright symbols,  
accent
 grave, etc. We've set the HTML pages as UTF-8 and the database  
connection to
 UTF-8. However when our calls to import the data that includes any  
of these

 characters into the database, the queries fail complaining that e.g.
 [nativecode=ERROR:  character 0xe28093 of encoding UTF8 has no  
equivalent

 in LATIN9]

That error suggests that your database encoding is LATIN9, not UTF-8.
You need to change it.  Beware that you need the server's locale
settings to be in step, too.

regards, tom lane

-


So I ran
tigris=# show server_encoding;
server_encoding
-
UTF8
(1 row)

and

tigris=# show client_encoding;
client_encoding
-
UTF8
(1 row)


What else should I be looking at?
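For completeness, a hedged sketch of one more check worth running -- listing the encoding recorded for each database in the cluster, in case the client is actually connecting to a different database than the one tested above (pg_encoding_to_char is the built-in function psql's \l uses):

select datname, pg_encoding_to_char(encoding) as encoding from pg_database;

If the database the ODBC connection targets shows LATIN9 here, the mismatch is at the database level rather than in the session settings.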

thanks

Warren Little
Chief Technology Officer
Meridias Capital Inc
ph 866.369.7763





Re: [ADMIN] encoding question

2007-05-03 Thread Warren Little

FYI,
I upgraded the ODBC driver on the client and everything worked.

thanks

On May 3, 2007, at 1:45 PM, Warren Little wrote:

I'm getting the following error from a query being generated from  
MS Access :

character 0xefbfbd of encoding UTF8 has no equivalent in LATIN9

Upon reviewing the archives I found the following:

Kevin McCarthy kemccarthy1 ( at ) gmail ( dot ) com writes:
 Often users will copy and paste text directly from MS Word docs  
into the
 forms which will invariably contain Microsoft's proprietary  
formatting of
 characters such as 'smart' quotes, trademark, copyright symbols,  
accent
 grave, etc. We've set the HTML pages as UTF-8 and the database  
connection to
 UTF-8. However when our calls to import the data that includes  
any of these

 characters into the database, the queries fail complaining that e.g.
 [nativecode=ERROR:  character 0xe28093 of encoding UTF8 has no  
equivalent

 in LATIN9]

That error suggests that your database encoding is LATIN9, not UTF-8.
You need to change it.  Beware that you need the server's locale
settings to be in step, too.

regards, tom lane

-


So I ran
tigris=# show server_encoding;
server_encoding
-
UTF8
(1 row)

and

tigris=# show client_encoding;
client_encoding
-
UTF8
(1 row)


What else should I be looking at?

thanks

Warren Little
Chief Technology Officer
Meridias Capital Inc
ph 866.369.7763





Warren Little
Chief Technology Officer
Meridias Capital Inc
ph 866.369.7763





Re: [ADMIN] trying to run PITR recovery

2007-03-30 Thread Warren Little

Simon,
I have no issue with how the error was handled, just with how the
error was reported.



@ 2007-03-23 05:57:33 MDTLOG:  restored log file
0001011A00FD from archive
@ 2007-03-23 05:57:35 MDTLOG:  incorrect resource manager data
checksum in record at 11A/FD492B20
@ 2007-03-23 05:57:35 MDTLOG:  redo done at 11A/FD492210


The first message says it restored the file; the second message looks
like an error, but for someone who does this process very seldom, it's
hard to tell exactly what transpired.


On a slightly different topic, is there some way to determine the
timeline of the corrupted segment, i.e. what was the original time of
the last restored transaction?






On Mar 30, 2007, at 5:16 AM, Simon Riggs wrote:


On Fri, 2007-03-23 at 17:16 -0600, Warren Little wrote:

 My concern is that there were many more logfiles to be played following
 001011A00FD (ie 0001011E005C) yet it appears the recovery stopped at that
 point and called it good.
 I would assume all WAL logs would be restored.

I'm interested in your feedback here. How would you like it to have acted?

The WAL file was clearly corrupt.

1. Don't continue and don't come up. Have the recovery fail. In order to
bring the server up, we would have to restart recovery with an additional
command to say "I note that my recovery has failed and would like recovery
to come up at the last possible point."

2. Attempt to continue after we fail the CRC check. This is both dangerous
and in many cases won't work either, since this is one of the normal
ending points.

3. Continue after a CRC check, don't attempt to apply the records, just
look at them to determine if they look correct, i.e. see if the CRC error
applies to just that record.

4. Add a command to ignore specific WAL records:
ignore_record = '11A/FD492B20'

These may also not work very well at all, since many records depend upon
previous data changes, so could quickly end in further errors.

What would you suggest?

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com




Warren Little
Chief Technology Officer
Meridias Capital Inc
ph 866.369.7763





[ADMIN] vacuum to cleanup transaction wrap around

2007-03-23 Thread Warren Little

All,
We have a cluster that hit the transaction ID wrap-around issue.
We are trying to run the backend vacuum as described in the manual to
clear up the problem.

I'm not sure how I should be interpreting the messages I'm receiving
from the vacuum command.

Here is the approach I have taken so far (note: PG 8.1.4, Linux 64-bit):
from the command line, run: postgres dbname
at the backend prompt, run: VACUUM;
The console spits out the following, repeated many times:
2007-03-23 08:51:00 MDTWARNING:  database preR14 must be vacuumed within 999134 transactions
2007-03-23 08:51:00 MDTHINT:  To avoid a database shutdown, execute a full-database VACUUM in preR14.


There are a few databases in this cluster (about 6).

Any suggestions would be greatly appreciated.
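For reference, a hedged sketch of the standalone-backend procedure described above, which has to be repeated once per database in the cluster (the data directory path is a placeholder):

  postgres -D /path/to/data preR14      # standalone backend against one database
  backend> VACUUM;                      # a plain, database-wide VACUUM (not VACUUM FULL)
  # exit with Ctrl-D, then repeat for each remaining database in the cluster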


Warren Little
Chief Technology Officer
Meridias Capital Inc
ph 866.369.7763





[ADMIN] trying to run PITR recovery

2007-03-23 Thread Warren Little

Hello,
I'm testing my PITR recovery procedures and something doesn't look right.
The following is from the logs upon starting postgres with the recovery.conf
file in place:


2007-03-23 05:56:00 MDTLOG:  database system was interrupted at  
2007-03-18 05:09:15 MDT

@ 2007-03-23 05:56:00 MDTLOG:  starting archive recovery
@ 2007-03-23 05:56:00 MDTLOG:  restore_command = cp /data/pgLocal/archive/WAL_restore/%f %p
cp: cannot stat `/data/pgLocal/archive/WAL_restore/0001.history':  
No such file or directory
@ 2007-03-23 05:56:00 MDTLOG:  restored log file  
0001011A00EE.004E0060.backup from archive
@ 2007-03-23 05:56:00 MDTLOG:  restored log file  
0001011A00EE from archive

@ 2007-03-23 05:56:00 MDTLOG:  checkpoint record is at 11A/EE4E0060
@ 2007-03-23 05:56:00 MDTLOG:  redo record is at 11A/EE4E0060; undo  
record is at 0/0; shutdown FALSE
@ 2007-03-23 05:56:00 MDTLOG:  next transaction ID: 2415965426; next  
OID: 81701223
@ 2007-03-23 05:56:00 MDTLOG:  next MultiXactId: 43380; next  
MultiXactOffset: 92368

@ 2007-03-23 05:56:00 MDTLOG:  automatic recovery in progress
@ 2007-03-23 05:56:01 MDTLOG:  redo starts at 11A/EE4E00B0
@ 2007-03-23 05:56:15 MDTLOG:  restored log file  
0001011A00EF from archive
@ 2007-03-23 05:56:27 MDTLOG:  restored log file  
0001011A00F0 from archive
@ 2007-03-23 05:56:31 MDTLOG:  restored log file  
0001011A00F1 from archive
@ 2007-03-23 05:56:34 MDTLOG:  restored log file  
0001011A00F2 from archive
@ 2007-03-23 05:56:40 MDTLOG:  restored log file  
0001011A00F3 from archive
@ 2007-03-23 05:56:50 MDTLOG:  restored log file  
0001011A00F4 from archive
@ 2007-03-23 05:57:02 MDTLOG:  restored log file  
0001011A00F5 from archive
@ 2007-03-23 05:57:07 MDTLOG:  restored log file  
0001011A00F6 from archive
@ 2007-03-23 05:57:11 MDTLOG:  restored log file  
0001011A00F7 from archive
@ 2007-03-23 05:57:14 MDTLOG:  restored log file  
0001011A00F8 from archive
@ 2007-03-23 05:57:18 MDTLOG:  restored log file  
0001011A00F9 from archive
@ 2007-03-23 05:57:21 MDTLOG:  restored log file  
0001011A00FA from archive
@ 2007-03-23 05:57:27 MDTLOG:  restored log file  
0001011A00FB from archive
@ 2007-03-23 05:57:29 MDTLOG:  restored log file  
0001011A00FC from archive
@ 2007-03-23 05:57:33 MDTLOG:  restored log file  
0001011A00FD from archive
@ 2007-03-23 05:57:35 MDTLOG:  incorrect resource manager data  
checksum in record at 11A/FD492B20

@ 2007-03-23 05:57:35 MDTLOG:  redo done at 11A/FD492210
@ 2007-03-23 05:57:36 MDTLOG:  restored log file  
0001011A00FD from archive

@ 2007-03-23 05:57:36 MDTLOG:  archive recovery complete
@ 2007-03-23 05:57:36 MDTLOG:  could not truncate directory  
pg_multixact/members: apparent wraparound

@ 2007-03-23 05:59:00 MDTLOG:  database system is ready
@ 2007-03-23 05:59:00 MDTLOG:  transaction ID wrap limit is  
3065701724, limited by database postgres
@ 2007-03-23 12:14:04 MDTLOG:  autovacuum: processing database  
postgres


My concern is that there were many more log files to be replayed following
001011A00FD (ie 0001011E005C), yet it appears the recovery stopped at
that point and called it good.

I would assume all WAL logs would be restored.

The recovery.conf was simple:
restore_command = 'cp /data/pgLocal/archive/WAL_restore/%f %p'
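As an aside, a hedged sketch of a fuller recovery.conf: the 8.1 recovery parameters also allow stopping replay at a chosen point instead of running to end-of-WAL (the timestamp below is purely illustrative):

restore_command = 'cp /data/pgLocal/archive/WAL_restore/%f %p'
recovery_target_time = '2007-03-22 23:00:00 MDT'    # illustrative value only
recovery_target_inclusive = 'true'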

The backup file 0001011A00EE.004E0060.backup contained:
START WAL LOCATION: 11A/EE4E0060 (file 0001011A00EE)
STOP WAL LOCATION: 11A/EFF68AB8 (file 0001011A00EF)
CHECKPOINT LOCATION: 11A/EE4E0060
START TIME: 2007-03-17 20:29:16 MDT
LABEL: 076_pgdata.tar
STOP TIME: 2007-03-18 05:16:17 MDT

Does the line "incorrect resource manager data checksum in record at
11A/FD492B20" mean there is a corrupted WAL log file?

Any insight here would be helpful.
(version: PG 8.1.2, 64-bit Linux)


thanks


Warren Little
Chief Technology Officer
Meridias Capital Inc
ph 866.369.7763





[ADMIN] auto vacuum doesn't appear to be working

2006-06-30 Thread Warren Little

We have a bytea column where we store large attachments (e.g. PDF files).
Every so often (2 weekly) we replace the large attachment (5-15 MB) with a
much smaller binary file (15 KB).

When I run
SELECT sum( relpages*8/1024 ) as MB FROM pg_class where relname != ''
I get a value of 104995, which I interpret to mean I have 104 GB of stored data
in the database, and this value has remained relatively static (+/- 1 GB) over
the past couple of weeks.
When I do a df -h on the filesystem holding the database cluster I get a usage
of 140 GB.  Again, I interpret this to mean I have nearly 35 GB of uncleaned
data.

Is this a case where I should be running the vacuum manually, or is autovacuum
all that should be necessary to keep track of and mark the updated tuple space
ready for re-use?
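A hedged sketch, assuming the size functions added in 8.1 are available -- they give a cleaner picture than summing relpages by hand, because pg_total_relation_size counts toast and index pages too:

select pg_size_pretty(pg_database_size(current_database()));

select relname, pg_size_pretty(pg_total_relation_size(oid)) as total_size
  from pg_class
 where relkind = 'r'
 order by pg_total_relation_size(oid) desc
 limit 10;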

thanks

-- 
Warren Little
Chief Technology Officer
Meridias Capital, Inc
1006 Atherton Dr
Salt Lake City, UT 84123
ph. 866.369.7763



[ADMIN] query optimization - mysql vs postgresql

2006-05-18 Thread Warren Little




Hello,
my team is in the process of migrating some data from a MySQL (5.0) database to our core Postgres (8.1.3) database.
We are running into some performance issues with the Postgres versions of the queries.
MySQL takes about 150ms to run the query where Postgres is taking 2500ms.
The servers for the two database clusters are identical (dual 64-bit dual-core AMD Opterons, 4 GB RAM, SCSI RAID array, SuSE 9.x).

The table schemas have not changed (to the best of my knowledge) and the indexes were created to mimic the MySQL versions as well.

I have attached one particular query along with the explain output.
Does anyone see anything in the explain that might help in diagnosing the problem?
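A hedged aside: if the attached plan came from plain EXPLAIN, an EXPLAIN ANALYZE run is usually more telling, since it reports estimated versus actual row counts for every node; large gaps between the two generally point at stale or missing statistics on the join columns. A cut-down illustration using a couple of the tables from the attached query:

explain analyze
select p.prodKey
  from product p
  join productLockTerm plt on plt.prodKey = p.prodKey
 where plt.lockTerm = 'B30';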

thx 





Warren J. Little
CTO
Meridias Capital
1018 West Atherton Dr
Salt Lake City, UT 84123
Ph 866.369.7763









Re: [ADMIN] query optimization - mysql vs postgresql

2006-05-18 Thread Warren Little




Sorry,
here are the attachments.
Not sure about the statistics question; I have done a VACUUM ANALYZE on every table in the database.

On Thu, 2006-05-18 at 09:12 -0700, Tomeh, Husam wrote:

It looks like you forgot to attach the query sample. Have you collected statistics on your tables/indexes to help the planner select a better plan?

--
Husam
http://firstdba.googlepages.com






From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Warren Little
Sent: Thursday, May 18, 2006 9:06 AM
To: pgsql-admin@postgresql.org
Subject: [ADMIN] query optimization - mysql vs postgresql


Hello,
my team is in the process of migrating some data from a mysql (5.0) database to our core postgres (8.1.3) database.
We are running into some performance issues with the postgres versions of the queries.
MySQL takes about 150ms to run the query where postgres is taking 2500ms.
The servers for the two database clusters are identical (dual amd 64bit dual core opteron 4GB ram scsi raid array Suse 9.x)

The table schemas have not changed (to the best of my knowledge) and the indexes were created to mimic the mysql versions as well.

I have attached one particular query along with the explain output.
Does anyone see anything in the explain that might help in diagnosing the problem.

thx 





Warren J. Little
CTO
Meridias Capital
1018 West Atherton Dr
Salt Lake City, UT 84123
Ph 866.369.7763












Warren J. Little
CTO
Meridias Capital
1018 West Atherton Dr
Salt Lake City, UT 84123
Ph 866.369.7763










SELECT *,
  MAX(cashoutMaxAmt) cashoutMaxAmt,
  product.prodKey AS prodKey,
  UNIX_TIMESTAMP(product.lastModifiedTs) AS lastModifiedTs,
  product.comment AS prodComment,
  productGeneral.comment AS condComment
FROM product, productCondition, productLockTerm, productGeneral
  LEFT JOIN productPropertyUse  ON (productGeneral.prodGeneralKey = productPropertyUse.prodGeneralKey)
  LEFT JOIN productPrepay       ON (productGeneral.prodGeneralKey = productPrepay.prodGeneralKey)
  LEFT JOIN productLoanPurpose  ON (productGeneral.prodGeneralKey = productLoanPurpose.prodGeneralKey)
  LEFT JOIN productDocLevel     ON (productGeneral.prodGeneralKey = productDocLevel.prodGeneralKey)
  LEFT JOIN productPropertyType ON (productGeneral.prodGeneralKey = productPropertyType.prodGeneralKey)
  LEFT JOIN productState        ON (productGeneral.prodGeneralKey = productState.prodGeneralKey)
  LEFT JOIN productMortgageLate AS mortgageLate0  ON (productGeneral.mortLateKey   = mortgageLate0.prodMortLateKey)
  LEFT JOIN productMortgageLate AS mortgageLate12 ON (productGeneral.mortLate12Key = mortgageLate12.prodMortLateKey)
  LEFT JOIN productMortgageLate AS mortgageLate24 ON (productGeneral.mortLate24Key = mortgageLate24.prodMortLateKey)
WHERE product.prodKey = productLockTerm.prodKey
  AND product.prodKey = productCondition.prodKey
  AND productGeneral.prodGeneralKey = productCondition.prodGeneralKey
  AND lockTerm = 'B30'
  AND productGeneral.disable = 'E'
  AND product.disable = 'E'
  AND productLockTerm.disable = 'E'
  AND lienPri = '1'
  AND exception = 'YES'
  AND ((loanAmtFrom=0 AND loanAmtTo=0) || (loanAmtFrom IS NULL AND loanAmtTo IS NULL) || (loanAmtTo = '40' AND loanAmtFrom = '40'))
  AND ((ltvFromPct=0 AND ltvToPct=0) || (ltvFromPct IS NULL AND ltvToPct IS NULL) || (ltvToPct = '80.000' AND ltvFromPct = '80.000'))
  AND ((cltvFromPct=0 AND cltvToPct=0) || (cltvFromPct IS NULL AND cltvToPct IS NULL) || (cltvToPct = '100.000' AND cltvFromPct = '100.000'))
  AND ((crdscrFrom=0 AND crdscrTo=0) || (crdscrFrom IS NULL AND crdscrTo IS NULL) || (crdscrTo = '720' AND crdscrFrom = '720'))
  AND ((totalLienMinAmt=0 AND totalLienMaxAmt=0) || (totalLienMinAmt IS NULL AND totalLienMaxAmt IS NULL) || (totalLienMaxAmt = '40' AND totalLienMinAmt = '40'))
  AND ((secondaryFinance='NO' AND '80.000
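A hedged note on two constructs in the query above (which is truncated as archived) that are MySQL-specific and will not behave the same way in PostgreSQL:

-- MySQL:      UNIX_TIMESTAMP(product.lastModifiedTs)
-- PostgreSQL: EXTRACT(EPOCH FROM product.lastModifiedTs)

-- MySQL:      (a) || (b)   -- || acts as logical OR
-- PostgreSQL: (a) OR (b)   -- in PostgreSQL, || is the string concatenation operator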

Re: [ADMIN] questions on toast tables

2006-04-30 Thread Warren Little
Tom,
thanks much for your help; the CLUSTER command did the trick.
FYI, running 8.1.2.



On Sat, 2006-04-29 at 14:48 -0400, Tom Lane wrote:
 Warren Little [EMAIL PROTECTED] writes:
  Could this be the reference to the toast table that is preventing the
  vacuum from deleting the toast data?  And what purges dropped columns
  if not a full vacuum.
 
 Actually, the way that toast references work is that they'll go away at
 the next update of the row containing the reference.  The reason you've
 still got a pile of unremovable toast data is evidently that a lot of
 the parent table's rows have remained untouched since the wide bytea
 column existed.  (We choose not to do this housekeeping immediately
 during DROP COLUMN, but to defer it until the next row update.)
 
 One way to clean up the junk would be to do a trivial full-table update
 (UPDATE foo SET f1 = f1) and then VACUUM FULL, but there are other
 ways that are more efficient.  If you're using a PG version released
 within the last year, CLUSTER will do the job nicely.
 
   regards, tom lane
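For anyone finding this thread later, a hedged sketch of the 8.1-era CLUSTER invocation (index and table names as they appear elsewhere in this thread), followed by an ANALYZE since CLUSTER does not refresh planner statistics:

CLUSTER copycasedoc_pkey ON casedocument;
ANALYZE casedocument;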
-- 
Warren Little
Chief Technology Officer
Meridias Capital Inc
1018 W Atherton Dr 
SLC, UT 84123
ph 866.369.7763



Re: [ADMIN] questions on toast tables

2006-04-29 Thread Warren Little
I am now a little confused.

I ran the following with all but localhost connections disabled

vacuumdb --full --verbose -t casedocument -d tigris > vacfull.log 2>&1

which produced the following output:

INFO:  vacuuming public.casedocument
INFO:  casedocument: found 0 removable, 39663 nonremovable row
versions in 852 pages
DETAIL:  0 dead row versions cannot be removed yet.
Nonremovable row versions range from 138 to 1953 bytes long.
There were 1855 unused item pointers.
Total free space (including removable row versions) is 65168 bytes.
0 pages are or will become empty, including 0 at the end of the table.
90 pages containing 14644 free bytes are potential move destinations.
CPU 0.02s/0.00u sec elapsed 0.24 sec.
INFO:  index copycasedoc_pkey now contains 39663 row versions in 387
pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.01s/0.00u sec elapsed 0.56 sec.
INFO:  casedocument: moved 0 row versions, truncated 852 to 852 pages
DETAIL:  CPU 0.00s/0.01u sec elapsed 0.03 sec.
INFO:  vacuuming pg_toast.pg_toast_24216115
INFO:  pg_toast_24216115: found 0 removable, 23125332 nonremovable row
versions in 5781284 pages
DETAIL:  0 dead row versions cannot be removed yet.
Nonremovable row versions range from 45 to 2030 bytes long.
There were 1 unused item pointers.
Total free space (including removable row versions) is 185874460 bytes.
0 pages are or will become empty, including 0 at the end of the table.
24019 pages containing 24668944 free bytes are potential move
destinations.
CPU 217.21s/16.97u sec elapsed 1745.04 sec.
INFO:  index pg_toast_24216115_index now contains 23125332 row
versions in 94046 pages
DETAIL:  0 index row versions were removed.
1427 index pages have been deleted, 1427 are currently reusable.
CPU 4.60s/1.05u sec elapsed 49.34 sec.
INFO:  pg_toast_24216115: moved 0 row versions, truncated 5781284 to
5781284 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM

Not sure if I'm reading the output correctly, but it appears no rows in
the toast table were removed.   
What else could be holding onto the data in pg_toast_24216115 ???







On Fri, 2006-04-28 at 16:03 -0400, Tom Lane wrote:
 Warren Little [EMAIL PROTECTED] writes:
  3) I know that once upon a time the table had a bytea column, but that
  was dropped.  Do I need to do a full vacuum on that table to get rid of
  the related toast data?
 
 Yup.  (I take it it was a pretty darn bulky bytea column, too)
 
   regards, tom lane
-- 
Warren Little
Chief Technology Office
Meridias Capital Inc
1018 W Atherton Dr 
SLC, UT 84123
ph 866.369.7763



Re: [ADMIN] questions on toast tables

2006-04-29 Thread Warren Little
Selecting from pg_attribute for the casedocument table shows there is a
column with the following attributes:

24216115|pg.dropped.2|0|0|-1|2|0|-1|-1|f|x|i|f|f|t|t|

I also noticed a few other tables have a similar dropped column.

Could this be the reference to the toast table that is preventing the
vacuum from deleting the toast data?  And what purges dropped columns,
if not a full vacuum?



On Sat, 2006-04-29 at 06:52 -0600, Warren Little wrote:
 I am now a little confused.
 
 I ran the following with all but localhost connections disabled
 
 vacuumdb --full --verbose -t casedocument -d tigris > vacfull.log 2>&1
 
 which produced the following output:
 
 INFO:  vacuuming public.casedocument
 INFO:  casedocument: found 0 removable, 39663 nonremovable row
 versions in 852 pages
 DETAIL:  0 dead row versions cannot be removed yet.
 Nonremovable row versions range from 138 to 1953 bytes long.
 There were 1855 unused item pointers.
 Total free space (including removable row versions) is 65168 bytes.
 0 pages are or will become empty, including 0 at the end of the table.
 90 pages containing 14644 free bytes are potential move destinations.
 CPU 0.02s/0.00u sec elapsed 0.24 sec.
 INFO:  index copycasedoc_pkey now contains 39663 row versions in 387
 pages
 DETAIL:  0 index row versions were removed.
 0 index pages have been deleted, 0 are currently reusable.
 CPU 0.01s/0.00u sec elapsed 0.56 sec.
 INFO:  casedocument: moved 0 row versions, truncated 852 to 852 pages
 DETAIL:  CPU 0.00s/0.01u sec elapsed 0.03 sec.
 INFO:  vacuuming pg_toast.pg_toast_24216115
 INFO:  pg_toast_24216115: found 0 removable, 23125332 nonremovable row
 versions in 5781284 pages
 DETAIL:  0 dead row versions cannot be removed yet.
 Nonremovable row versions range from 45 to 2030 bytes long.
 There were 1 unused item pointers.
 Total free space (including removable row versions) is 185874460 bytes.
 0 pages are or will become empty, including 0 at the end of the table.
 24019 pages containing 24668944 free bytes are potential move
 destinations.
 CPU 217.21s/16.97u sec elapsed 1745.04 sec.
 INFO:  index pg_toast_24216115_index now contains 23125332 row
 versions in 94046 pages
 DETAIL:  0 index row versions were removed.
 1427 index pages have been deleted, 1427 are currently reusable.
 CPU 4.60s/1.05u sec elapsed 49.34 sec.
 INFO:  pg_toast_24216115: moved 0 row versions, truncated 5781284 to
 5781284 pages
 DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
 VACUUM
 
 Not sure if I'm reading the output correctly, but it appears no rows in
 the toast table were removed.   
 What else could be holding onto the data in pg_toast_24216115 ???
 
 
 
 
 
 
 
 On Fri, 2006-04-28 at 16:03 -0400, Tom Lane wrote:
  Warren Little [EMAIL PROTECTED] writes:
   3) I know that once upon a time the table had a bytea column, but that
   was dropped.  Do I need to do a full vacuum on that table to get rid of
   the related toast data?
  
  Yup.  (I take it it was a pretty darn bulky bytea column, too)
  
  regards, tom lane
-- 
Warren Little
Chief Technology Officer
Meridias Capital Inc
1018 W Atherton Dr 
SLC, UT 84123
ph 866.369.7763



[ADMIN] questions on toast tables

2006-04-28 Thread Warren Little

I have a toast table that is referenced by a single user table, defined
below:

CREATE TABLE casedocument
(
  pid varchar(12) NOT NULL,
  createdt timestamp NOT NULL,
  descr varchar(40),
  docformat varchar(10) NOT NULL,
  version int4 NOT NULL,
  casepid varchar(12) NOT NULL,
  createuserpid varchar(12) NOT NULL,
  typepid varchar(12) NOT NULL,
  sent bool DEFAULT false,
  active bool DEFAULT true,
  auxpid varchar(12),
  CONSTRAINT copycasedoc_pkey PRIMARY KEY (pid)
) 
WITH OIDS;


There are 40k rows in the user table
The toast table contains 5781417 pages

Something does not seem right here.


1) Should the user table even be relying on a toast table?

2) The 40k rows and their data sizes do not seem to account for the number of pages
in the related toast table.

3) I know that once upon a time the table had a bytea column, but that
was dropped.  Do I need to do a full vacuum on that table to get rid of
the related toast data?
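A hedged sketch of a query that makes the heap-versus-toast split visible for this table (names taken from the definition above):

select c.relname, c.relpages as heap_pages,
       t.relname as toast_table, t.relpages as toast_pages
  from pg_class c
  left join pg_class t on t.oid = c.reltoastrelid
 where c.relname = 'casedocument';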

thanks




-- 
Warren Little
Chief Technology Officer
Meridias Capital Inc
1018 W Atherton Dr
Salt Lake City, UT 84123
ph: 866.369.7763



Re: [ADMIN] questions on toast tables

2006-04-28 Thread Warren Little
Tom,

I'll run the vacuum over the weekend and see how that goes.

And, yes, large PDF documents (4-24 MB apiece).

thanks 

On Fri, 2006-04-28 at 16:03 -0400, Tom Lane wrote:
 Warren Little [EMAIL PROTECTED] writes:
  3) I know that once upon a time the table had a bytea column, but that
  was dropped.  Do I need to do a full vacuum on that table to get rid of
  the related toast data?
 
 Yup.  (I take it it was a pretty darn bulky bytea column, too)
 
   regards, tom lane
-- 
Warren Little
Chief Technology Officer
Meridias Capital Inc
1018 W Atherton Dr
Salt Lake City, UT 84123
ph: 866.369.7763



[ADMIN] need a bit of help

2006-04-26 Thread Warren Little
Hello,
I've got myself in a pickle with the database running out of space.

I have a question regarding pg_toast tables and their relationships to
the parent tables.

If I run the following query
select * from pg_class where 
reltoastrelid like '%944' or reltoastidxid like '%944'

I receive a single row:
pg_toast_62130940,99,62130943,39934,0,62130942,0,7602767,3.04112e+07,0,62130944,t,f,t,3,0,0,0,0,0,f,t,f,f,

which appears to be the pg_toast entry.  Shouldn't there be a pg_class
row whose reltoastrelid equals the reltoastidxid of the pg_toast instance?

I'm trying to determine if I have some tuples that are not being
vacuumed.

thanks

-- 
Warren Little
Chief Technology Officer
Meridias Capital Inc
1018 W Atherton Dr
Salt Lake City, UT 84123
ph: 866.369.7763



Re: [ADMIN] need a bit of help

2006-04-26 Thread Warren Little
ok

thanks

On Wed, 2006-04-26 at 19:22 -0400, Tom Lane wrote:
 Warren Little [EMAIL PROTECTED] writes:
  which appears to be the pg_toast entry.  Shouldn't there be a pg_class
  whose reltoastrelid equals the reltoastidxid of the pg_toast instance
 
 No.  The deal is
 
 regular table's reltoastrelid points to OID of its toast table
 
 toast table's reltoastidxid points to OID of its index
 
 There's no back links.
 
   regards, tom lane
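A hedged sketch of the forward-only lookup this implies -- finding the parent table for a given toast table by joining pg_class to itself on reltoastrelid:

select c.relname as parent_table, t.relname as toast_table
  from pg_class c
  join pg_class t on t.oid = c.reltoastrelid
 where t.relname = 'pg_toast_62130940';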
-- 
Warren Little
Chief Technology Officer
Meridias Capital Inc
1018 W Atherton Dr
Salt Lake City, UT 84123
ph: 866.369.7763



[ADMIN] Emergency - Need assistance

2006-01-02 Thread warren little
I received the following error message when trying to copy a table from
one database to another on the same cluster:

pg_dump: The command was: FETCH 100 FROM _pg_dump_cursor
pg_restore: [custom archiver] could not read data block -- expected 1,
got 0
pg_restore: *** aborted because of error

The table contains a bytea column which houses PDF documents.
Is this a sign of corrupted data and, if so, would setting
zero_damaged_pages = true allow the copy to proceed?

The table is about 25GB in size and takes a long time to dump/restore
and I'm running out of time to get the cluster back into production.

note running:
PostgreSQL 8.1beta4 on x86_64-unknown-linux-gnu, compiled by GCC gcc
(GCC) 3.4.4 20050721 (Red Hat 3.4.4-2)
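A hedged sketch of one way to enable zero_damaged_pages for just the dumping session, via libpq's PGOPTIONS environment variable (table and database names as they appear later in this thread; the output file name is a placeholder). Note that the setting zeroes any page that fails its header check, so the damaged rows are discarded rather than recovered:

PGOPTIONS='-c zero_damaged_pages=on' pg_dump -Fc --table=casedocument tigrissave > casedocument.dump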


-- 
Warren Little
CTO
Meridias Capital Inc
ph 866.369.7763



Re: [ADMIN] Emergency - Need assistance

2006-01-02 Thread warren little
Tom,
The extent of the messages I received from the command
  pg_dump -Fc --table=casedocument -d tigrissave | pg_restore --verbose -d tigris
is listed below:

 pg_dump: SQL command failed
pg_dump: Error message from server: server closed the connection
unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
pg_dump: The command was: FETCH 100 FROM _pg_dump_cursor
pg_restore: [custom archiver] could not read data block -- expected 1,
got 0
pg_restore: *** aborted because of error


I had removed all the files in pg_log prior to getting this error and no
new logfile was created.  I'm guessing I screwed up the logger when
removing all the files, but I assumed that when writing to the error
logs the backend would create a file if one did not exist.

I am currently attempting to run the dump/restore with zero_damaged_pages
turned on to see if the results yield something more useful.

About the beta version, this is temporary, hadn't really planned on
running production on our development box.  Haven't had any issues with
8.1beta for a few months and will be moving to 8.1.x as soon as some new
hardware arrives (about a week).

thanks 

On Mon, 2006-01-02 at 15:10 -0500, Tom Lane wrote:
 warren little [EMAIL PROTECTED] writes:
  I received the following error message when trying to copy a table from
  one database to another on the same cluster:
 
  pg_dump: The command was: FETCH 100 FROM _pg_dump_cursor
  pg_restore: [custom archiver] could not read data block -- expected 1,
  got 0
  pg_restore: *** aborted because of error
 
 You seem to have omitted the messages that would indicate what's
 actually wrong; the above is all just subsidiary damage after whatever
 caused the FETCH to fail.
 
  The table is about 25GB in size and takes a long time to dump/restore
  and I'm running out of time to get the cluster back into production.
 
  note running:
  PostgreSQL 8.1beta4 on x86_64-unknown-linux-gnu, compiled by GCC gcc
  (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2)
 
 You're running a production database on a beta release??
 
   regards, tom lane
-- 
Warren Little
CTO
Meridias Capital Inc
ph 866.369.7763



Re: [ADMIN] Emergency - Need assistance

2006-01-02 Thread warren little
The dump/restore failed even with the zero_damaged_pages=true.
The logfile (postgresql-2006-01-02_130023.log) did not have much in the
way of useful info. I've attached the section of the logfile around the
time of the crash.  I cannot find any sign of a core file.  Where might
the core dump have landed?

Regarding your comments about losing the evidence, the data I'm trying
to load is in another database in the same cluster, which I have no
intention of purging until I can get the table moved to the new
database.

thanks




On Mon, 2006-01-02 at 16:34 -0500, Tom Lane wrote:
 warren little [EMAIL PROTECTED] writes:
   pg_dump: SQL command failed
  pg_dump: Error message from server: server closed the connection
  unexpectedly
  This probably means the server terminated abnormally
  before or while processing the request.
  pg_dump: The command was: FETCH 100 FROM _pg_dump_cursor
 
 Hmm.  This could mean corrupted data files, but it's hard to be sure
 without more info.
 
  I had removed all the files in pg_log prior to getting this error and no
  new logfile was created.  I'm guessing I screwed up the logger when
  removing all the files, but I assumed that when writing to the error
  logs the backend would create a file if one did not exist.
 
 The file *does* exist, there's just no directory link to it anymore :-(
 You need to force a logfile rotation, which might be most easily done by
 stopping and restarting the postmaster.
 
 What you need to do is see the postmaster log entry about the backend
 crash.  If it's dying on a signal (likely sig11 = SEGV) then inspecting
 the core file might yield useful information.
 
  I currently attempt to run the dump/restore with the zero_damaged_pages
  turned on to see if the results yield something more useful.  
 
 That really ought to be the last resort not the first one, because it
 will destroy not only data but most of the evidence about what went
 wrong...
 
   regards, tom lane



Re: [ADMIN] Emergency - Need assistance

2006-01-02 Thread warren little
Sorry,
forget the attachment.

On Mon, 2006-01-02 at 15:24 -0700, warren little wrote:
 The dump/restore failed even with the zero_damaged_pages=true.
 The the logfile (postgresql-2006-01-02_130023.log)
 did not have much in the way of useful info. I've attached the section
 of the logfile around the time of the crash.  I cannot find any sign of
 a core file.  Where might the core dump have landed?
 
 Regarding your comments about losing the evidence, the data I'm trying
 to load is in another database in the same cluster which I have no
 intention of purging until a can get the table moved to the new
 database.
 
 thanks
 
 
 
 
 On Mon, 2006-01-02 at 16:34 -0500, Tom Lane wrote:
  warren little [EMAIL PROTECTED] writes:
pg_dump: SQL command failed
   pg_dump: Error message from server: server closed the connection
   unexpectedly
   This probably means the server terminated abnormally
   before or while processing the request.
   pg_dump: The command was: FETCH 100 FROM _pg_dump_cursor
  
  Hmm.  This could mean corrupted data files, but it's hard to be sure
  without more info.
  
   I had removed all the files in pg_log prior to getting this error and no
   new logfile was created.  I'm guessing I screwed up the logger when
   removing all the files, but I assumed that when writing to the error
   logs the backend would create a file if one did not exist.
  
  The file *does* exist, there's just no directory link to it anymore :-(
  You need to force a logfile rotation, which might be most easily done by
  stopping and restarting the postmaster.
  
  What you need to do is see the postmaster log entry about the backend
  crash.  If it's dying on a signal (likely sig11 = SEGV) then inspecting
  the core file might yield useful information.
  
   I currently attempt to run the dump/restore with the zero_damaged_pages
   turned on to see if the results yield something more useful.  
  
  That really ought to be the last resort not the first one, because it
  will destroy not only data but most of the evidence about what went
  wrong...
  
  regards, tom lane
 
@ 2006-01-02 15:02:02 MST:LOG:  autovacuum: processing database tigris
@ 2006-01-02 15:03:01 MST:LOG:  server process (PID 28772) was terminated by 
signal 11
@ 2006-01-02 15:03:01 MST:LOG:  terminating any other active server processes
[EMAIL PROTECTED] 2006-01-02 15:03:01 MST:WARNING:  terminating connection 
because of crash of another server process
[EMAIL PROTECTED] 2006-01-02 15:03:01 MST:DETAIL:  The postmaster has commanded 
this server process to roll back the current transaction and exit, because 
another server process exited abnormally and possibly corrupted shared memory.
[EMAIL PROTECTED] 2006-01-02 15:03:01 MST:HINT:  In a moment you should be able 
to reconnect to the database and repeat your command.
192.168.19.129(50732)@192.168.19.129 2006-01-02 15:03:01 MST:WARNING:  
terminating connection because of crash of another server process
192.168.19.129(50732)@192.168.19.129 2006-01-02 15:03:01 MST:DETAIL:  The 
postmaster has commanded this server process to roll back the current 
transaction and exit, because another server process exited abnormally and 
possibly corrupted shared memory.
192.168.19.129(50732)@192.168.19.129 2006-01-02 15:03:01 MST:HINT:  In a moment 
you should be able to reconnect to the database and repeat your command.
192.168.19.129(50730)@192.168.19.129 2006-01-02 15:03:01 MST:WARNING:  
terminating connection because of crash of another server process
192.168.19.129(50730)@192.168.19.129 2006-01-02 15:03:01 MST:DETAIL:  The 
postmaster has commanded this server process to roll back the current 
transaction and exit, because another server process exited abnormally and 
possibly corrupted shared memory.
192.168.19.129(50730)@192.168.19.129 2006-01-02 15:03:01 MST:HINT:  In a moment 
you should be able to reconnect to the database and repeat your command.
192.168.19.129(50731)@192.168.19.129 2006-01-02 15:03:01 MST:WARNING:  
terminating connection because of crash of another server process
192.168.19.129(50731)@192.168.19.129 2006-01-02 15:03:01 MST:DETAIL:  The 
postmaster has commanded this server process to roll back the current 
transaction and exit, because another server process exited abnormally and 
possibly corrupted shared memory.
192.168.19.129(50731)@192.168.19.129 2006-01-02 15:03:01 MST:HINT:  In a moment 
you should be able to reconnect to the database and repeat your command.
@ 2006-01-02 15:03:01 MST:LOG:  all server processes terminated; reinitializing
@ 2006-01-02 15:03:01 MST:LOG:  database system was interrupted at 2006-01-02 
15:02:47 MST
@ 2006-01-02 15:03:01 MST:LOG:  checkpoint

[ADMIN]

2005-11-03 Thread Warren Little
Hello,
I'm trying to make a copy of a database using the following syntax:
pg_dump -v -Fc -b cert | pg_restore -v -d prodcopy

The output looks good until it hits the following:
pg_dump: socket not open
pg_dump: SQL command to dump the contents of table casedocument
failed: PQendcopy() failed.
pg_dump: Error message from server: socket not open
pg_dump: The command was: COPY public.casedocument (pid, content,
createdt, descr, docformat, version, casepid, createuserpid, typepid,
sent, active) TO stdout;
pg_dump: *** aborted because of error
pg_restore: [custom archiver] could not read data block -- expected 1,
got 0
pg_restore: *** aborted because of error

casedocument schema looks like:

CREATE TABLE casedocument
(
  pid varchar(12) NOT NULL,
  content bytea NOT NULL,
  createdt timestamp NOT NULL,
  descr varchar(40),
  docformat varchar(10) NOT NULL,
  version int4 NOT NULL,
  casepid varchar(12) NOT NULL,
  createuserpid varchar(12) NOT NULL,
  typepid varchar(12) NOT NULL,
  sent bool DEFAULT false,
  active bool DEFAULT true,
  CONSTRAINT casedocument_pkey PRIMARY KEY (pid),
  CONSTRAINT casedocument_r0 FOREIGN KEY (pid) REFERENCES pobject (pid)
ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT casedocument_r1 FOREIGN KEY (casepid) REFERENCES cas (pid)
ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT casedocument_r2 FOREIGN KEY (createuserpid) REFERENCES
party (pid) ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT casedocument_r3 FOREIGN KEY (typepid) REFERENCES
casedocumenttype (pid) ON UPDATE NO ACTION ON DELETE NO ACTION
) 
WITH OIDS;
ALTER TABLE casedocument OWNER TO tigris;

Is there any way to determine what data the copy doesn't like?

thanks




-- 
Warren Little
Chief Technology Officer
Meridias Capital Inc
Tel: 866.369.7763



Re: [ADMIN] Error when trying to do a recovery

2005-03-04 Thread Warren Little




Tom,
Thanks for your response. I found an old thread you had responded to a while back
regarding 32 vs 64 bit architectures (which I have) and compatibility issues.
So until I upgrade my backup/test box, it looks like I'm stuck using pg_dump :(

Thanks
 

On Wed, 2005-03-02 at 01:22 -0500, Tom Lane wrote:


Warren Little [EMAIL PROTECTED] writes:
 When I start postgres (pg_ctl -l logfile start). The following text is
 all that exists in the logfile:

 2005-03-01 14:21:43 MSTFATAL:  incorrect checksum in control file

 What control file is it referring to?

$PGDATA/global/pg_control.  I think this must mean that you are trying
to start a postmaster that is fundamentally incompatible with the old
postmaster --- enough so that it thinks the pg_control struct should be
a different size than the old postmaster thought.

			regards, tom lane






[ADMIN] Error when trying to do a recovery

2005-03-01 Thread Warren Little
Hello,
I'm trying to do a restore from the online backups.  I have restored
the data backup (untarred) and removed all files from the pg_xlog
directory.  I also deleted and recreated the archive_status directory
under pg_xlog.  I created the recovery.conf file with only the
restore_command in it.
When I start postgres (pg_ctl -l logfile start). The following text is
all that exists in the logfile:

2005-03-01 14:21:43 MSTFATAL:  incorrect checksum in control file


What control file is it referring to?  Is there some way to generate
more verbose logging?


thanks

Warren Little
Meridias Capital




[ADMIN] changing the size of a varchar column

2004-09-08 Thread Warren Little




Hello,
Does pg 7.4.x support resizing a varchar column,
i.e. varchar(30) to varchar(200)?

If not, does the feature in 8.0 relating to changing column types support this?

thx
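A hedged sketch of the 8.0 form (the table and column names are made up); 7.4 has no direct equivalent short of the add-column / copy / drop / rename dance:

ALTER TABLE mytable ALTER COLUMN descr TYPE varchar(200);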




-- 
Warren Little
Senior Vice President
Secondary Markets and IT Manager
Security Lending Wholesale, LC
www.securitylending.com
Tel: 866-369-7763
Fax: 866-849-8079








Re: [ADMIN] HELP - need way to restore only blobs

2004-02-18 Thread Warren Little
Thanks to all who responded.
Found the pg_dumplo tool in contrib which did exactly what I needed.

On Wed, 2004-02-18 at 05:54, Jeff Boes wrote:
 At some point in time, [EMAIL PROTECTED] (Warren Little) wrote:
 
 I migrated my database from 7.3 to 7.4 this weekend using the pg_dumpall
 tool which I now realize does not capture blobs.
 
 I now need to move only the blob data to the 7.4 database.
 The problem with redoing the dump with pg_dump -b is the database is now
 in production and writing over the top of changes to the database is not
 exceptable.  The blob data is very static so if there was some way to
 copy the physical files from disk and modify some records in the system
 tables to properly locate the  blob records that would be best.  Another
 option I was looking at was to restore the archived database with the
 blobs intact and then restore the production version over the top
 without destroying the blob data.
 
 In a similar situation, we wrote a script that constructs \lo_export commands
 to dump each large object to a file. The file name for each contains the
 information needed to reconstruct the object at the other end. For instance, if
 you have a table like this:
 
 CREATE TABLE foobar (primary_id INTEGER PRIMARY KEY, large_obj OID, ...)
 
 you'd want a series of commands that look like this:
 
 \lo_export large_obj primary_id.dmp
 
 Then, given a directory full of such files, you construct another series of
 commands that look like this:
 
 \lo_import primary_id.dmp
 
 UPDATE foobar SET large_obj = new_value WHERE primary_id = primary_id;
 
 The trick is capturing the output of the \lo_import command and parsing it to
 get the large object OID after it is created.
 
 I don't know if I have permission to post or email the script, but if you
 contact me offline I should know by then.
 
 jboes at n/e/x/c/e/r/p/t/d/o/t/c/o/m
 
 --
 | Genius may have its limitations, but stupidity is not
 Jeff Boes   | thus handicapped.
 [EMAIL PROTECTED]   |--Elbert Hubbard (1856-1915), American author
 
 
 
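A hedged sketch of generating those \lo_export commands straight from the table (names from the foobar example above); the resulting script can be fed back through psql:

select '\lo_export ' || large_obj || ' ' || primary_id || '.dmp'
  from foobar
 where large_obj is not null;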
-- 
Warren Little
Senior Vice President
Secondary Marketing
Security Lending Wholesale, LC
www.securitylending.com
Tel: 866-369-7763
Fax: 866-849-8082



[ADMIN] large objects missing

2004-02-17 Thread Warren Little
In an attempt to migrate from 7.3 to 7.4 using pg_dumpall, I did not
get any of my large objects.  Is there a special process which needs to
take place, and is there a way to simply copy the large objects
separately?

-- 
Warren Little
Senior Vice President
Secondary Marketing
Security Lending Wholesale, LC
www.securitylending.com
Tel: 866-369-7763
Fax: 866-849-8082



Re: [ADMIN] large objects missing HELP

2004-02-17 Thread Warren Little
I'm in a bit of a pickle on this, so if anyone has any immediate
suggestions it would be very much appreciated.

On Tue, 2004-02-17 at 10:10, Warren Little wrote:
 In an attempt to migrate from 7.3 to 7.4 doing a pg_dumpall I did not
 get any of my large objects.  Is there a special process which needs to
 take place and is there a way to simple copy the large objects
 seperately?
-- 
Warren Little
Senior Vice President
Secondary Marketing
Security Lending Wholesale, LC
www.securitylending.com
Tel: 866-369-7763
Fax: 866-849-8082



[ADMIN] HELP - need way to restore only blobs

2004-02-17 Thread Warren Little
I recently posted a similar message but left some key info out:

I migrated my database from 7.3 to 7.4 this weekend using the pg_dumpall
tool, which I now realize does not capture blobs.

I now need to move only the blob data to the 7.4 database.
The problem with redoing the dump with pg_dump -b is that the database is now
in production, and writing over the top of changes to the database is not
acceptable.  The blob data is very static, so if there were some way to
copy the physical files from disk and modify some records in the system
tables to properly locate the blob records, that would be best.  Another
option I was looking at was to restore the archived database with the
blobs intact and then restore the production version over the top
without destroying the blob data.

All suggestions welcome, I'm dying here.


-- 
Warren Little
Senior Vice President
Secondary Marketing
Security Lending Wholesale, LC
www.securitylending.com
Tel: 866-369-7763
Fax: 866-849-8082



[ADMIN] setting statement_timeout on live postmaster

2004-01-16 Thread Warren Little
Is there any way to modify the statement_timeout value set
in postgresql.conf when creating a connection?
I would like to have the value set to something like 60 seconds on a
regular basis, but my vacuumdb statements run longer and time out without
completing the vacuum.
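A hedged sketch of two ways around it -- a per-role override for whichever user runs vacuumdb (the role name is a placeholder), or a plain session-level SET:

ALTER USER maintenance_user SET statement_timeout = 0;   -- applies to that role's future sessions
SET statement_timeout = 0;                               -- or just for the current session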

-- 
Warren Little
Senior Vice President
Secondary Marketing
Security Lending Wholesale, LC
www.securitylending.com
Tel: 866-369-7763
Fax: 866-849-8082
