Re: [GENERAL] Adding TEXT columns tanks performance?

2007-02-08 Thread Ron Johnson

On 02/08/07 20:50, Arturo Perez wrote:
> Hi all,
> 
> Saturday I changed a table to add a varchar(24) and a TEXT column.  It's
> used for some reporting purposes (small potato stuff really) and the
> TEXT column remains mostly empty.  However, this week performance has
> gotten terrible.  Queries joining against the aforementioned table have
> gone from 40s to 1500s.  The schema change is the only explanation I
> have for the 30x slower queries.  The queries have definitely gotten
> disk-bound (I can see the connection process sitting in the D state for
> several minutes).
> 
> This is pg8.1.4 on Linux RedHat. Would adding a TEXT column impact
> things that much?  The change to the schema was
> 
> alter table foo add column title text;

Could the issue be record fragmentation?  Does PG even suffer from
record fragmentation?


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Adding TEXT columns tanks performance?

2007-02-08 Thread Arturo Perez

Hi all,

Saturday I changed a table to add a varchar(24) and a TEXT column.   
It's used for some reporting purposes (small potato stuff really)  
and the TEXT column remains mostly empty.  However, this week  
performance has gotten terrible.  Queries joining against the  
aforementioned table have gone from 40s to 1500s.  The schema change  
is the only explanation I have for the 30x slower queries.  The  
queries have definitely gotten disk-bound (I can see the connection  
process sitting in the D state for several minutes).


This is pg8.1.4 on Linux RedHat. Would adding a TEXT column impact  
things that much?  The change to the schema was


alter table foo add column title text;

tia,
arturo


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Move data between two databases SQL-ASCII to UTF8

2007-02-08 Thread Michael Fuhr
On Thu, Feb 08, 2007 at 08:22:40PM -0500, Chad Wagner wrote:
> On 2/8/07, Clodoaldo <[EMAIL PROTECTED]> wrote:
> >Use pg_dump to dump the db and use iconv on the generated file:
> >
> >iconv -f ASCII -t UTF-8 mydb.dump -o mydb_utf8.dump

Converting the data from ASCII to UTF-8 doesn't make much sense:
if the data is ASCII then it doesn't need conversion; if the data
needs conversion then it isn't ASCII.

> Wouldn't it be adequate to set the client encoding to SQL_ASCII in the dump
> file (if that was in fact the encoding on the original database)?

http://www.postgresql.org/docs/8.2/interactive/multibyte.html#AEN24118

"If the client character set is defined as SQL_ASCII, encoding
conversion is disabled, regardless of the server's character set."

As Clodoaldo mentioned, if the data is strictly ASCII then no
conversion is necessary because the UTF-8 representation will be
the same.  If you set client_encoding to SQL_ASCII and the data
contains non-ASCII characters that aren't valid UTF-8 then you'll
get the error 'invalid byte sequence for encoding "UTF8"'.  In that
case set client_encoding to whatever encoding the data is really
in; likely guesses for Western European languages are LATIN1, LATIN9,
or perhaps WIN1252.

-- 
Michael Fuhr
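
The encoding check above can be rehearsed offline with iconv before touching the database. A minimal sketch (file names hypothetical): a byte such as 0xE9 (e-acute in LATIN1) is rejected when the source is declared ASCII, but converts cleanly once the real encoding is named:

```shell
# Write a one-line LATIN1 sample: "caf" plus the ISO-8859-1 e-acute (0xE9).
printf 'caf\351\n' > sample_latin1.txt

# Declaring the wrong source encoding fails: 0xE9 is not a valid ASCII byte.
if iconv -f ASCII -t UTF-8 sample_latin1.txt >/dev/null 2>&1; then
    echo "unexpected success"
else
    echo "ASCII conversion rejected, as expected"
fi

# Declaring the real encoding works: 0xE9 becomes the two-byte UTF-8
# sequence 0xC3 0xA9, so the 5-byte file grows to 6 bytes.
iconv -f LATIN1 -t UTF-8 sample_latin1.txt > sample_utf8.txt
wc -c < sample_utf8.txt
```

The same logic applies inside the server: client_encoding must name the real encoding of the bytes being loaded, or the restore fails with the "invalid byte sequence" error quoted above.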



[GENERAL] Possibly odd question; diff results?

2007-02-08 Thread Madison Kelly

Hi all,

  I've got a 'history' schema that records changes in the public schema
tables over time. I use a trigger and function to do this. What I would
like to do though, and this may not even be possible, is say something
like (pseudo-code) "SELECT DIFF foo_name FROM history.foo WHERE
foo_id=X;" and have a *nix 'diff' style results shown (sort of like
looking at diffs in CVS/SVN).

  Has anyone done this or should I just look into writing a small
program to do this outside of postgres?

Thanks!

Madi
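
Outside of postgres, a small pipeline is usually enough: export the two historical versions of the row (e.g. with psql's \copy) and hand the files to the system diff. The diff half can be sketched standalone; the files below stand in for two exported versions of a history.foo row (names and contents hypothetical):

```shell
# Stand-ins for two exported versions of a history.foo row.
printf 'foo_name: widget\nprice: 10\n' > foo_rev1.txt
printf 'foo_name: widget\nprice: 12\n' > foo_rev2.txt

# Unified diff, the CVS/SVN-style output asked about.
# diff exits 1 when the files differ, so mask that for scripting.
diff -u foo_rev1.txt foo_rev2.txt || true
```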




Re: [GENERAL] Move data between two databases SQL-ASCII to UTF8

2007-02-08 Thread Chad Wagner

On 2/8/07, Clodoaldo <[EMAIL PROTECTED]> wrote:


Use pg_dump to dump the db and use iconv on the generated file:

iconv -f ASCII -t UTF-8 mydb.dump -o mydb_utf8.dump



Wouldn't it be adequate to set the client encoding to SQL_ASCII in the dump
file (if that was in fact the encoding on the original database)?

SET client_encoding TO SQL_ASCII;

And then let the database do the conversion?  I would think since the db is
UTF8 and the client is claiming SQL_ASCII then it would convert the data to
UTF8.

I have done this in the past with SQL dumps that had characters that UTF8
didn't like, and I just added the "SET client_encoding TO LATIN1;" since I
knew the source encoding was LATIN1.


--
Chad
http://www.postgresqlforums.com/


Re: [GENERAL] Move data between two databases SQL-ASCII to UTF8

2007-02-08 Thread Clodoaldo

2007/2/8, [EMAIL PROTECTED] <[EMAIL PROTECTED]>:


I need to convert my database to UTF8. Is there a way to do a SELECT ...
INSERT from the old database table to the new one? Would the INSERT correct
data errors between the two data types? I only have 10 tables and the
biggest has < 8000 rows.


Use pg_dump to dump the db and use iconv on the generated file:

iconv -f ASCII -t UTF-8 mydb.dump -o mydb_utf8.dump


If the characters are strictly ASCII (<= 127) then no conversion is
necessary. But if there are bytes above 127 then the conversion must be
made from the actual source encoding, here assumed to be ISO-8859-1, to UTF-8:

iconv -f ISO_8859-1 -t UTF-8 mydb.dump -o mydb_utf8.dump


Regards,
--
Clodoaldo Pinto Neto

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Move data between two databases SQL-ASCII to UTF8

2007-02-08 Thread MargaretGillon
I need to convert my database to UTF8. Is there a way to do a SELECT ... 
INSERT from the old database table to the new one? Would the INSERT 
correct data errors between the two data types? I only have 10 tables and 
the biggest has < 8000 rows. 

Running Version 8.1.4 on Redhat 9
Margaret Gillon, IS Dept., Chromalloy Los Angeles, ext. 297


Re: [GENERAL] Limit for number of Joins in a View? Version 8.1.4 on Redhat 9

2007-02-08 Thread Merlin Moncure

On 2/9/07, Tom Lane <[EMAIL PROTECTED]> wrote:

Jim Nasby <[EMAIL PROTECTED]> writes:
> Note that DOMAIN support unfortunately isn't perfect; for example,
> plpgsql doesn't enforce domain constraints (IIRC there's some other
> bugs as well).

Fixed in 8.2 ... not that there aren't necessarily bugs left, but the
above as a blanket statement is obsolete.



biggest limitation of domains is IMO a lack of an array type.

merlin



[GENERAL] PgSQL dump

2007-02-08 Thread Lukas
Hello,

 we have one table in database (db has over 200 tables), which has one
blob filed for storing user photos. We are making backup every night, but
it is now too large because of blobs. We do not need to make backup of
photos every night, but the question is how to backup database without
blob field?
 I tried to look at keys of pgdump, but found nothing...

Now we are using it like this:
pg_dump -f /home/backup/DB.sql -d -Fp -v DB

thx
Lukas
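
pg_dump has no per-column filter, so one workaround is to exclude the whole photo table from the nightly dump and save its non-blob columns separately. A sketch, assuming the table is called user_photos with metadata columns id and user_id (all names hypothetical; table exclusion with -T and the query form of \copy both need version 8.2 or later):

```shell
# Nightly dump of everything except the photo table.
pg_dump -f /home/backup/DB_no_photos.sql -d -Fp -v -T user_photos DB

# Separately keep the photo table's non-blob columns.
psql -d DB -c "\copy (SELECT id, user_id FROM user_photos) TO '/home/backup/user_photos_meta.csv' CSV"
```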






Re: [GENERAL] Limit for number of Joins in a View? Version 8.1.4 on Redhat 9

2007-02-08 Thread MargaretGillon
>Alvaro Herrera <[EMAIL PROTECTED]> wrote on 02/08/2007 02:51:52 PM:
> > [EMAIL PROTECTED] wrote:
> > Using PGADMINIII I added a column of this type to my database but when I 
> > try to update it I get an error.
> > 
> > column:
> > ALTER TABLE datatype ADD COLUMN charflag "char"[];
> 
> You added an array of "char", which is not the same.
> 
> -- 
> Alvaro Herrera http://www.CommandPrompt.com/
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support

ALTER TABLE datatype ADD COLUMN charflag "char";

Got it. Thank you. Margaret Gillon.

Re: [GENERAL] Limit for number of Joins in a View? Version 8.1.4 on Redhat 9

2007-02-08 Thread Alvaro Herrera
[EMAIL PROTECTED] wrote:

> Using PGADMINIII I added a column of this type to my database but when I 
> try to update it I get an error.
> 
> column:
> ALTER TABLE datatype ADD COLUMN charflag "char"[];

You added an array of "char", which is not the same.

-- 
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Limit for number of Joins in a View? Version 8.1.4 on Redhat 9

2007-02-08 Thread MargaretGillon
>Alvaro Herrera <[EMAIL PROTECTED]> wrote on 02/08/2007 01:58:20 PM:
> The "char" type (including quotes) is a very different animal from all
> those character types the manual you quote is talking about.  "char" is
> a single byte, while varchar(1) and all the rest are a single character,
> meaning there can be multiple bytes in presence of a multibyte encoding;
> so Postgres is forced to use a variable-length structure to store it.
> "char" has no such requirement.  It's used in the system catalogs as a
> "poor man's enum", for example in pg_class.relkind.
> 
> -- 
> Alvaro Herrera http://www.CommandPrompt.com/
> The PostgreSQL Company - Command Prompt, Inc.

Using PGADMINIII I added a column of this type to my database but when I 
try to update it I get an error.

column:
ALTER TABLE datatype ADD COLUMN charflag "char"[];
ALTER TABLE datatype ALTER COLUMN charflag SET STORAGE EXTENDED;
COMMENT ON COLUMN datatype.charflag IS 'testing';

update:
update datatype set charflag = 'A'; 

results:
ERROR: array value must start with "{" or dimension information
SQL state: 22P02

Margaret Gillon
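
The error comes from the column's declared type: "char"[] is an array of "char", so a bare 'A' is not a valid value. Two possible fixes, sketched against the table name from this thread (untested here):

```sql
-- Option 1: keep the array type and use array-literal syntax.
update datatype set charflag = '{A}';

-- Option 2: make the column a scalar "char" instead of an array.
ALTER TABLE datatype DROP COLUMN charflag;
ALTER TABLE datatype ADD COLUMN charflag "char";
update datatype set charflag = 'A';
```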

Re: [GENERAL] Limit for number of Joins in a View? Version 8.1.4 on Redhat 9

2007-02-08 Thread Alvaro Herrera
[EMAIL PROTECTED] wrote:
> >Jim Nasby <[EMAIL PROTECTED]> wrote on 02/08/2007 12:12:00 PM:

> > If you do end up back at using foreign keys, I suggest using either a 
> > smallint or "char"... the savings across the number of fields you're 
> > looking at would start to add up, especially if you start putting a 
> > decent number of rows in the table.
> 
> I ended up using Varchar(1). According to the help there is no speed 
> difference in the character types, or am I misunderstanding something? 

The "char" type (including quotes) is a very different animal from all
those character types the manual you quote is talking about.  "char" is
a single byte, while varchar(1) and all the rest are a single character,
meaning there can be multiple bytes in presence of a multibyte encoding;
so Postgres is forced to use a variable-length structure to store it.
"char" has no such requirement.  It's used in the system catalogs as a
"poor man's enum", for example in pg_class.relkind.

-- 
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
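
The size difference Alvaro describes is visible in the catalogs; a quick probe (needs a live 8.x server, so untested here):

```sql
-- "char" is a fixed one-byte type; varchar is variable length (typlen -1).
SELECT typname, typlen FROM pg_type WHERE typname IN ('char', 'varchar');
```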



Re: [GENERAL] Problems shutting down Postmaster

2007-02-08 Thread Andrew Edson
Okay, for some reason we still had a problem connecting after a pg_ctl reload, 
but the shutdown went through this time, and everything's working now.  Thanks 
for the assistance.

Tom Lane <[EMAIL PROTECTED]> wrote:  Andrew Edson writes:
> Someone else added the IP address for the other system to the
> pg_hba.conf file later, but since we had already started postgres by
> then, it didn't take.

You don't need to shut down the server to adjust pg_hba.conf ... just
"pg_ctl reload".

> And now, for some reason, postgres doesn't seem to want to let me shut
> down.

A standard shutdown doesn't finish until all existing sessions exit of
their own accord. Perhaps you wanted "-m fast"?

regards, tom lane


  

Re: [GENERAL] Problems shutting down Postmaster

2007-02-08 Thread Richard Troy

On Thu, 8 Feb 2007, Andrew Edson wrote:

>   Someone else added the IP address for the other system to the
> pg_hba.conf file later, but since we had already started postgres by
> then, it didn't take.  And now, for some reason, postgres doesn't seem
> to want to let me shut down.

Try telling Postgres about your new pg_hba.conf file by using

pg_ctl reload

I'm not aware why you couldn't just stop it with

   pg_ctl stop

even with the wrong pg_hba.conf file.

HTH,
Richard

-- 
Richard Troy, Chief Scientist
Science Tools Corporation
510-924-1363 or 202-747-1263
[EMAIL PROTECTED], http://ScienceTools.com/




Re: [GENERAL] Problems shutting down Postmaster

2007-02-08 Thread Rodrigo Gonzalez

Andrew Edson wrote:
I need to add a system to our postgres pg_hba.conf file; the main server 
went down a few days ago, and this system was mistakenly left off the 
list of allowed systems when the server was brought back up.  (Older 
version of the .conf file; I managed to accidentally delete the more 
current one while trying to copy it for safekeeping.  Proves what an 
idiot I am...)
 
Someone else added the IP address for the other system to the 
pg_hba.conf file later, but since we had already started postgres by 
then, it didn't take.  And now, for some reason, postgres doesn't seem 
to want to let me shut down.  I keep getting the following error when 
trying to shut the postmaster down:
 
./pg_ctl stop -D /usr/local/pgsql/data/
waiting for postmaster to shut down... failed

pg_ctl: postmaster does not shut down
 
I am doing this as the postgres user, so I'm not sure what else I might 
be doing wrong.  Does anyone have any kind of suggestions about how to 
solve this?  Alternately, would someone please let me know of other ways 
to shut down the postmaster so we can restart it with the updated .conf 
file?
 
Thank you for your consideration.





Take a look at the -m option for pg_ctl



Re: [GENERAL] Problems shutting down Postmaster

2007-02-08 Thread Tom Lane
Andrew Edson <[EMAIL PROTECTED]> writes:
>   Someone else added the IP address for the other system to the
>   pg_hba.conf file later, but since we had already started postgres by
>   then, it didn't take.

You don't need to shut down the server to adjust pg_hba.conf ... just
"pg_ctl reload".

> And now, for some reason, postgres doesn't seem to want to let me shut
> down.

A standard shutdown doesn't finish until all existing sessions exit of
their own accord.  Perhaps you wanted "-m fast"?

regards, tom lane
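
In command form (data directory taken from earlier in the thread; untested here):

```shell
# Pick up pg_hba.conf changes without a restart:
pg_ctl reload -D /usr/local/pgsql/data

# If a shutdown really is needed, "-m fast" disconnects open
# sessions instead of waiting for them to exit on their own:
pg_ctl stop -m fast -D /usr/local/pgsql/data
```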

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Limit for number of Joins in a View? Version 8.1.4 on Redhat 9

2007-02-08 Thread MargaretGillon
>Jim Nasby <[EMAIL PROTECTED]> wrote on 02/08/2007 12:12:00 PM:
> Also, I suggest using "char" instead of just char. "char" is a 
> special data type that's limited to storing a single character; the 
> advantage is that it's much smaller and faster than a char.
> 
> If you do end up back at using foreign keys, I suggest using either a 
> smallint or "char"... the savings across the number of fields you're 
> looking at would start to add up, especially if you start putting a 
> decent number of rows in the table.
> --
> Jim Nasby    [EMAIL PROTECTED]
> EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

Hi Jim,

I ended up using Varchar(1). According to the help there is no speed 
difference in the character types, or am I misunderstanding something? 

Tip: There are no performance differences between these three types, apart 
from the increased storage size when using the blank-padded type. While 
character(n) has performance advantages in some other database systems, it 
has no such advantages in PostgreSQL. In most situations text or character 
varying should be used instead. 
from: http://www.postgresql.org/docs/8.2/static/datatype-character.html

Margaret Gillon 


[GENERAL] Problems shutting down Postmaster

2007-02-08 Thread Andrew Edson
I need to add a system to our postgres pg_hba.conf file; the main server went 
down a few days ago, and this system was mistakenly left off the list of 
allowed systems when the server was brought back up.  (Older version of the 
.conf file; I managed to accidentally delete the more current one while trying 
to copy it for safekeeping.  Proves what an idiot I am...)
   
  Someone else added the IP address for the other system to the pg_hba.conf 
file later, but since we had already started postgres by then, it didn't take.  
And now, for some reason, postgres doesn't seem to want to let me shut down.  I 
keep getting the following error when trying to shut the postmaster down:
   
  ./pg_ctl stop -D /usr/local/pgsql/data/
waiting for postmaster to shut down... failed
pg_ctl: postmaster does not shut down
   
  I am doing this as the postgres user, so I'm not sure what else I might be 
doing wrong.  Does anyone have any kind of suggestions about how to solve this? 
 Alternately, would someone please let me know of other ways to shut down the 
postmaster so we can restart it with the updated .conf file?
   
  Thank you for your consideration.


 

Re: [GENERAL] Limit for number of Joins in a View? Version 8.1.4 on Redhat 9

2007-02-08 Thread Tom Lane
Jim Nasby <[EMAIL PROTECTED]> writes:
> Note that DOMAIN support unfortunately isn't perfect; for example,  
> plpgsql doesn't enforce domain constraints (IIRC there's some other  
> bugs as well).

Fixed in 8.2 ... not that there aren't necessarily bugs left, but the
above as a blanket statement is obsolete.

regards, tom lane



Re: [GENERAL] Ouch!

2007-02-08 Thread Tom Lane
andy rost <[EMAIL PROTECTED]> writes:
> Running 8.1.3 on FreeBSD 6.0 server and got the following (I'm most 
> interested in the "PANIC" entry):

> 2007-02-06 17:22:22 CST PANIC:  right sibling's left-link doesn't match

> Any advice?

Update to 8.1.8.  I'm not sure about this specific symptom, but I know
we have fixed a couple of btree logic errors recently.

Did you by any chance save a physical copy of the index before
reindexing?  I'd have liked to look into it to gather more detail.

regards, tom lane



Re: [GENERAL] Installing PostgreSQL on Windows XP

2007-02-08 Thread Alvaro Herrera
Jim Nasby wrote:
> BTW, 8.2.2 has a rather nasty bug; you'll want to go grab 8.2.3 or  
> 8.2.1.

You don't want 8.2.1 either because it has a nasty security bug.

-- 
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] Installing PostgreSQL on Windows XP

2007-02-08 Thread Jim Nasby
BTW, 8.2.2 has a rather nasty bug; you'll want to go grab 8.2.3 or  
8.2.1.


On Feb 7, 2007, at 10:30 AM, Christian Henning wrote:


Hi there, first post. I would like to share a solution for Windows
users having problems installing PostgreSQL 8.2.2. I couldn't find
anything in the docs and FAQ.

My installation routine went fine until it tried to start the postgres
service. This failed and there was no hint of why. So I went to the
Service panel myself and tried to start the service. Now at least I
got the following error message:

Error 1068: The dependency service or group failed to start.

Surfing the net was no success but having a look at the event viewer
offered the following:

The PostgreSQL Database Server 8.2 service depends on the Net Logon
service which failed to start because of the following error:
%%0

Another important hint was this Net Logon error message

This computer is configured as a member of a work group, not as a
member of a domain. The Netlogon service does not need to run in this
configuration.


So putting all this information together finally led me to MS KB 269375,
which solved my problem. See http://support.microsoft.com/kb/269375

Let's see what's next. ;-)
Christian




--
Jim Nasby    [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] Limit for number of Joins in a View? Version 8.1.4 on Redhat 9

2007-02-08 Thread Jim Nasby

On Feb 7, 2007, at 10:05 AM, [EMAIL PROTECTED] wrote:
I was using the flag table to keep the flags consistent between all  
the tables in the database that might use them. I didn't know about  
CREATE DOMAIN which will do what I want perfectly


Note that DOMAIN support unfortunately isn't perfect; for example,  
plpgsql doesn't enforce domain constraints (IIRC there's some other  
bugs as well). So you should probably do a test to make sure  
everything you'll be doing with domains will work before you re-code  
everything.


Also, I suggest using "char" instead of just char. "char" is a  
special data type that's limited to storing a single character; the  
advantage is that it's much smaller and faster than a char.


If you do end up back at using foreign keys, I suggest using either a  
smallint or "char"... the savings across the number of fields you're  
looking at would start to add up, especially if you start putting a  
decent number of rows in the table.

--
Jim Nasby    [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)
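
For reference, the kind of domain being discussed might look like this (names and the allowed flag values are hypothetical; untested here):

```sql
-- A one-character flag restricted to a known set of values.
CREATE DOMAIN flag AS "char"
    CHECK (VALUE IN ('A', 'I', 'P'));

-- Any table can then reuse the domain instead of a foreign key
-- into a shared flag table.
CREATE TABLE account (
    id     integer PRIMARY KEY,
    status flag NOT NULL DEFAULT 'A'
);
```

As Jim cautions, this is worth testing first: in 8.1, plpgsql did not enforce domain constraints.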





[GENERAL] Ouch!

2007-02-08 Thread andy rost


Running 8.1.3 on FreeBSD 6.0 server and got the following (I'm most 
interested in the "PANIC" entry):


2007-02-06 17:15:11 CST LOG:  archived transaction log file 
"000104A30025"
2007-02-06 17:15:27 CST LOG:  archived transaction log file 
"000104A30026"
2007-02-06 17:15:39 CST LOG:  archived transaction log file 
"000104A30027"
2007-02-06 17:16:16 CST LOG:  archived transaction log file 
"000104A30028"
2007-02-06 17:16:31 CST LOG:  archived transaction log file 
"000104A30029"

2007-02-06 17:16:36 CST LOG:  statistics buffer is full
2007-02-06 17:16:36 CST LOG:  statistics buffer is full
2007-02-06 17:16:36 CST LOG:  statistics buffer is full
2007-02-06 17:18:10 CST LOG:  archived transaction log file 
"000104A3002A"
2007-02-06 17:19:40 CST LOG:  archived transaction log file 
"000104A3002B"
2007-02-06 17:20:18 CST LOG:  archived transaction log file 
"000104A3002C"
2007-02-06 17:21:12 CST LOG:  archived transaction log file 
"000104A3002D"
2007-02-06 17:21:34 CST LOG:  archived transaction log file 
"000104A3002E"
2007-02-06 17:21:55 CST LOG:  archived transaction log file 
"000104A3002F"
2007-02-06 17:22:03 CST LOG:  archived transaction log file 
"000104A30030"

2007-02-06 17:22:22 CST PANIC:  right sibling's left-link doesn't match
2007-02-06 17:22:22 CST STATEMENT:  update winddir set value = 20, qc = 0, updated = '2007-02-06 17:18:22', revised = '2007-02-06 23:17:00', actual = '2007-02-06 22:55:00', off_set = 300, feed = 'NP', feed_history = 'NPiNPu', qcd = '1900-01-01 00:00:00' where station_id = 'KHEZ' and date = '2007-02-06 23:00:00' and value != 20 and ( ( off_set > 300 ) or ( value = - ) or ( ( actual = '2007-02-06 22:55:00' ) and ( revised > '2007-02-06 23:17:00' ) ) )
2007-02-06 17:22:24 CST LOG:  server process (PID 58978) was terminated 
by signal 6

2007-02-06 17:22:24 CST LOG:  terminating any other active server processes
2007-02-06 17:22:24 CST WARNING:  terminating connection because of 
crash of another server process
2007-02-06 17:22:24 CST DETAIL:  The postmaster has commanded this 
server process to roll back the current transaction and exit, because 
another server process exited abnormally and possibly corrupted shared memory.
2007-02-06 17:22:24 CST HINT:  In a moment you should be able to 
reconnect to the database and repeat your command.
2007-02-06 17:22:24 CST WARNING:  terminating connection because of 
crash of another server process
2007-02-06 17:22:24 CST DETAIL:  The postmaster has commanded this 
server process to roll back the current transaction and exit, because 
another server process exited abnormally and possibly corrupted shared memory.
2007-02-06 17:22:24 CST HINT:  In a moment you should be able to 
reconnect to the database and repeat your command.


The server restarted successfully. Good. The server panicked again the 
next time the same table was accessed. Boo. Of course this happened at 
night - over and over again. Especially problematic during our PITR 
archive cycle, during which an archived WAL file required by the recovery 
process was shuffled around, preventing a successful restart.


I sorted out my archived WAL issues, brought the server up, and 
reindexed the offending table. Life has been good ever since.


It looks like the server behaved admirably. However, I'm wondering if 
there is any more light that can be shed on this. Archived inquiries 
indicate a) there may be hardware issues causing this problem and b) a 
potential software problem related to this condition may have been 
addressed in 8.2.n.


Any advice?


--

Andrew Rost
National Operational Hydrologic Remote Sensing Center (NOHRSC)
National Weather Service, NOAA
1735 Lake Dr. West, Chanhassen, MN 55317-8582
Voice: (952)361-6610 x 234
Fax: (952)361-6634
[EMAIL PROTECTED]
http://www.nohrsc.noaa.gov



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] DBI-Link 2.0

2007-02-08 Thread David Fetter
On Thu, Feb 08, 2007 at 03:38:25AM -0800, [EMAIL PROTECTED] wrote:
> On Feb 8, 8:54 am, Hannes Dorbath <[EMAIL PROTECTED]> wrote:
> > On 07.02.2007 17:59, [EMAIL PROTECTED] wrote:
> >
> > > Is there any form of manual for DBI-link out there?
> >
> > > Any link is greatly appreciated!
> >
> > I think all available documentation comes with it in various
> > README files. Do you have a specific problem?
> >
> > -- Regards, Hannes Dorbath
> 
> I worked out all the problems that I had (and there were some), but
> I have to pass on the information now to a developer that has no
> experience with dbi-link.  I'll just try to explain to him what I know
> already and pass him the implementation.txt file.

Hi,

I'm the author of DBI-Link, and if you have suggestions on how better
to document it, I'd be happy to make improvements to it.  Of course,
the best kind of suggestions come with a context-style diff against
the current CVS tree. :)

Cheers,
David
-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778AIM: dfetter666
  Skype: davidfetter

Remember to vote!



Re: [GENERAL] SQL textbook

2007-02-08 Thread Michael Schmidt
Ted,
Google the publisher's name to get to its site.  
mike
  - Original Message - 
  From: Ted Byers 
  To: Michael Schmidt ; 
pgsql-general@postgresql.org 
  Sent: Thursday, February 08, 2007 11:05 AM
  Subject: Re: [GENERAL] SQL textbook


  >
  >- Original Message - 
  >From: Michael Schmidt
  >To: Jan Mura ; 
pgsql-general@postgresql.org
  >Sent: Thursday, February 08, 2007 12:54 PM
  >Subject: Re: [GENERAL] SQL textbook
  >
  >Many books are available for download.  Joe Celko is a well respected 
  >author and just about anything with his name on it is good.  Here are some 
  >specific suggestions.

  Do you have any URLs for the books that are available for download?

  Cheers

  Ted 




Re: [GENERAL] SQL textbook

2007-02-08 Thread Rich Shepard

On Thu, 8 Feb 2007, Ted Byers wrote:


Do you have any URLs for the books that are available for download?


  Google is your friend.

Rich

--
Richard B. Shepard, Ph.D.   |The Environmental Permitting
Applied Ecosystem Services, Inc.|  Accelerator(TM)
 Voice: 503-667-4517  Fax: 503-667-8863



Re: [GENERAL] SQL textbook

2007-02-08 Thread Rich Shepard

On Thu, 8 Feb 2007, Michael Schmidt wrote:


There are a lot of resources available.  Each database program has a
different dialect of SQL, and I find the PostgreSQL on-line manual has an
excellent introduction to SQL and even some theoretical issues.  I would
start there (it's free and directly applicable to using PostgreSQL).



 I would like to ask for a hint for a good SQL textbook. I don't want just
 a reference manual but a real textbook with a theoretical background so I
 could design my databases following the general rules such as normal forms
 and so on. I mean something on the net of course.


  I think Jan is confusing two separate concepts: database design
(particularly how to normalize the tables) and the SQL language used to work
with whatever design one's created.

  For the language itself, I concur that anything written by Joe Celko is
worth reading. I believe he wrote a monthly column for Database Advisor in
the 1980s that was always worth reading.

  For database design, I suggest a visit to a library, technical bookstore,
or a Google search to find something written to answer the questions Jan
has. Seems like every author has a different definition of 'database
design,' and some confuse design tools with design concepts. Only Jan can
decide what references are pertinent to his needs. Again, Google is your
friend.

Rich

--
Richard B. Shepard, Ph.D.   |The Environmental Permitting
Applied Ecosystem Services, Inc.|  Accelerator(TM)
 Voice: 503-667-4517  Fax: 503-667-8863

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] SQL textbook

2007-02-08 Thread Ted Byers


- Original Message - 
From: Michael Schmidt

To: Jan Mura ; pgsql-general@postgresql.org
Sent: Thursday, February 08, 2007 12:54 PM
Subject: Re: [GENERAL] SQL textbook

Many books are available for download.  Joe Celko is a well respected 
author and just about anything with his name on it is good.  Here are some 
specific suggestions.


Do you have any URLs for the books that are available for download?

Cheers

Ted 




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] SQL textbook

2007-02-08 Thread Michael Schmidt
Jan,
There are a lot of resources available.  Each database program has a different 
dialect of SQL, and I find the PostgreSQL on-line manual has an excellent 
introduction to SQL and even some theoretical issues.  I would start there (it's 
free and directly applicable to using PostgreSQL).  

Searching on the web will yield some tutorials but these tend to be pretty 
brief.  There is a discussion group at DbForums that has some information.  
There are also occasional subject specific articles, such as on data modeling.

Many books are available for download.  Joe Celko is a well respected author 
and just about anything with his name on it is good.  Here are some specific 
suggestions.

An introduction to database vocabulary and terms:
Date, C.J. (2006).  The relational database dictionary.  Sebastopol, CA:  
O'Reilly.

Database design and concepts:
Hernandez, M.J. (2003). Database design for mere mortals (2nd ed.).  New York: 
Addison-Wesley.

Bagui, S. & Earp, R. (2003).  Database design using entity-relationship 
diagrams.  New York: Auerbach Publications.

Celko, J.  (1999).  Joe Celko's data and databases:  concepts in practice.  San 
Francisco: Morgan Kaufmann.

SQL:
Celko, J. (2005).  Joe Celko's SQL programming style.  Amsterdam: Elsevier.

Gulutzan, P., & Peltzer, T.  (2002).  SQL performance tuning.  New York:  
Addison Wesley.

Faroult, S., & Robson, P.  (2006).  The art of SQL.  Sebastopol, CA:  O'Reilly.

Tow, D.  (2003).  SQL tuning.  Sebastopol, CA:  O'Reilly.
  - Original Message - 
  From: Jan Mura 
  To: pgsql-general@postgresql.org 
  Sent: Wednesday, February 07, 2007 9:43 PM
  Subject: [GENERAL] SQL textbook


  Hello,

  I would like to ask for a hint for a good SQL textbook. I don't want just a
  reference manual but a real textbook with a theoretical background so I could
  design my databases following general rules such as normal forms and so
  on.
  I mean something on the net of course.

  Thank you

  Jan Mura


  ---(end of broadcast)---
  TIP 9: In versions below 8.0, the planner will ignore your desire to
 choose an index scan if your joining column's datatypes do not
 match


Re: [GENERAL] Database corruption.

2007-02-08 Thread Brusser, Michael
 
>> Zeroing out the whole block containing it is the usual recipe. 

Something like this worked for me in the past:

% dd bs=8k count=X < /dev/zero >> clog-file

I had to calculate X, because I usually had a situation with truncated
clog-file, and a failed attempt to read it from offset XYZ.
And I used pg_controldata to check on the block-size.

Mike.
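For the archives, here is that recipe spelled out as a self-contained sketch. It is demonstrated on a scratch file rather than a real clog segment; the 8 kB block size is the value pg_controldata typically reports, and the file name and offsets are made up for illustration. Stop the postmaster before doing this against a real cluster.

```shell
# Pad a truncated clog-style file with zeroed blocks until a read at
# TARGET_OFFSET can succeed.  CLOG is a scratch file standing in for a
# real segment such as pg_clog/0000 (hypothetical name).
BLOCKSZ=8192                          # block size per pg_controldata (assumed)
CLOG=$(mktemp)
head -c 4096 /dev/zero > "$CLOG"      # simulate a truncated segment

TARGET_OFFSET=16384                   # offset of the failed read (made up)
CURRENT=$(stat -c %s "$CLOG")
# whole blocks needed so the file covers TARGET_OFFSET plus one block
MISSING=$(( (TARGET_OFFSET + BLOCKSZ - CURRENT + BLOCKSZ - 1) / BLOCKSZ ))
dd if=/dev/zero bs="$BLOCKSZ" count="$MISSING" >> "$CLOG" 2>/dev/null
stat -c %s "$CLOG"                    # now large enough to read at TARGET_OFFSET
```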

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Database corruption.

2007-02-08 Thread Tom Lane
Michael Guerin <[EMAIL PROTECTED]> writes:
> You're suggesting to zero out the block in the underlying table files, 
> or create the missing pg_clog file and start filling it with zeros?

The former.  Making up clog data is unlikely to help --- the bad xmin is
just the first symptom of what's probably more widespread corruption in
the table page.  If you want, once you've identified the page containing
the problem, use "pg_filedump -i -f" to dump out that page and we can
have a look to see just how bad it is.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Client connection: Port 5432 / Postgres

2007-02-08 Thread Mikko Partio

Nicolas Gignac wrote:
> I have installed Postgres 8.2 on a internal server having Windows
> Server 2003 (IIS 6) up and running.
> - I have configure the hp_config file to:
>   host   all   0.0.0.0./0   md5
                        ^


I think it should be like:

host   all   all   0.0.0.0/0   md5


MP

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Database corruption.

2007-02-08 Thread Michael Guerin



Zeroing out the whole block containing it is the usual recipe.  I forget
the exact command but if you trawl the archives for mention of "dd" and
"/dev/zero" you'll probably find it.  Keep in mind you want to stop the
postmaster first, to ensure it doesn't have a copy of the bad block
cached in memory.

regards, tom lane
  
You're suggesting to zero out the block in the underlying table files, 
or create the missing pg_clog file and start filling it with zeros?  I 
just want to clarify, b/c all references I've read so far dealing with 
this error talk about zapping the pg_clog file.


Would it be safer to just find the row(s) that are having problems, 
create the missing clog file to get past the error, then delete these 
rows from the db?


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Error on table insert: attribute 13 has wrong type

2007-02-08 Thread Tom Lane
"Andrus" <[EMAIL PROTECTED]> writes:
> I installed 8.2 server on Windows XP
> Today every insert to table causes error:

> ERROR:  attribute 13 has wrong type
> DETAIL:  Table has type character, but query expects character.

> Any idea how to fix it ?

Update to 8.2.3.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Service startup - troubleshooting

2007-02-08 Thread Richard Huxton

Mike wrote:

Hello,

I know this is probably a borderline Linux question but I think
someone with PostgreSQL expertise will better be able to answer it (I
already struck out on the Linux side and after much searching on the
Web).

I installed Postgres 8.1 on Ubuntu 6.10 using Synaptic. Postgres is
listed under System|Administration|Services in the UI, and there's an
init script (which references /usr/share/postgresql-common/init.d-
functions) in the appropriate run level directory. However, Postgres
still won't run at startup.


As root, try running:
  /etc/init.d/postgresql status
If that says it's running, check with ps, otherwise try
  /etc/init.d/postgresql start
Error messages should be printed to console or to a log somewhere like 
/var/log/postgresql



--
  Richard Huxton
  Archonet Ltd
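One more thing worth checking on a Debian/Ubuntu install is whether the runlevel directory actually links to the init script; a missing S?? symlink is a common reason a service is installed but never started at boot. A rough sketch (RC_DIR is parameterised here so the same check can be run against any directory; the fix, if the link is missing, would be `update-rc.d postgresql defaults`):

```shell
# Look for an S??postgresql start link in the current runlevel directory.
RC_DIR=${RC_DIR:-/etc/rc2.d}
if ls "$RC_DIR" 2>/dev/null | grep -q '^S[0-9][0-9]postgresql'; then
    echo "start link present in $RC_DIR"
else
    echo "no start link in $RC_DIR"
fi
```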

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Client connection: Port 5432 / Postgres

2007-02-08 Thread Richard Huxton

Nicolas Gignac wrote:

Hello,

I have installed Postgres 8.2 on a internal server having Windows Server
2003 (IIS 6) up and running.
- I have configure the hp_config file to: host
all 0.0.0.0./0
md5


Don't you mean pg_hba.conf?


- I have change the listening address to '*' in the postgres.conf file
- No Firewall activated on this internal server
- I have restart the server and I can connect to postgres from remote
computer, PostgreSQL works only from the local host
- When I do a Netstat I got this: TCP   127.0.0.0.1:5432
0.0.0.0.0  LISTENING  440


Either it hasn't re-read the postgresql.conf file or there is a syntax 
error in the file. Otherwise you'd see it listening. What do the logs 
say when you restart PostgreSQL?



- When I tried to connect from local host to the port with telnet localhost
5432, I got an error: on port 23, connect failed.


That looks like you're trying to connect to port 23.

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Client connection: Port 5432 / Postgres

2007-02-08 Thread A. Kretschmer
am  Wed, dem 07.02.2007, um 11:28:56 -0500 mailte Nicolas Gignac folgendes:
> Hello,
> 
> I have installed Postgres 8.2 on a internal server having Windows Server 2003
> (IIS 6) up and running.
> - I have configure the hp_config file to: host   
> all 0.0.0.0./0md5

wrong file. Use pg_hba.conf instead.

> - I have change the listening address to '*' in the postgres.conf file

wrong file. Use postgresql.conf instead.


> - No Firewall activated on this internal server

Sure?


> - I have restart the server and I can connect to postgres from remote 
> computer,
> PostgreSQL works only from the local host

You can or you can't connect from remote?


> - When I tried to connect from local host to the port with telnet localhost
> 5432, I got an error: on port 23, connect failed.

Stupid windows. Perhaps port-forwarding? A plain 'telnet localhost 5432' 
should connect to that port, not to 23.


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Error on table insert: attribute 13 has wrong type

2007-02-08 Thread Bruce Momjian
Andrus wrote:
> I installed 8.2 server on Windows XP
> 
> Today every insert to table causes error:
> 
> ERROR:  attribute 13 has wrong type
> DETAIL:  Table has type character, but query expects character.
> 
> Any idea how to fix it ?

You need to upgrade to 8.2.3.  That was a rush release to fix a bug
introduced in 8.2.2.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Database corruption.

2007-02-08 Thread Tom Lane
Michael Guerin <[EMAIL PROTECTED]> writes:
> Ok, so I'm trying to track down the rows now (big table slow queries :( 
> )  How does one zero out a corrupt row, plain delete?  I see references 
> for creating the missing pg_clog file but I don't believe that's what 
> you're suggesting..

Zeroing out the whole block containing it is the usual recipe.  I forget
the exact command but if you trawl the archives for mention of "dd" and
"/dev/zero" you'll probably find it.  Keep in mind you want to stop the
postmaster first, to ensure it doesn't have a copy of the bad block
cached in memory.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Database corruption.

2007-02-08 Thread Michael Guerin

Tom Lane wrote:
> Michael Guerin <[EMAIL PROTECTED]> writes:
>>> Hmm, that makes it sound like a plain old data-corruption problem, ie,
>>> trashed xmin or xmax in some tuple header.  Can you do a "select
>>> count(*)" from this table without getting the error?
>>
>> no,  select count(*) fails around 25 million rows.
>
> OK, so you should be able to narrow down the corrupted row(s) and zero
> them out, which'll at least let you get back the rest of the table.
> See past archives for the standard divide-and-conquer approach to this.
>
> regards, tom lane

Ok, so I'm trying to track down the rows now (big table slow queries :( 
)  How does one zero out a corrupt row, plain delete?  I see references 
for creating the missing pg_clog file but I don't believe that's what 
you're suggesting..


-michael
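For reference, the divide-and-conquer search mentioned above might look roughly like this (table and column names are invented; the idea is just to bisect on a key range until the failing page is isolated):

```sql
-- Probe the table in halves by primary-key range; the half whose scan
-- raises the error contains the corruption.  Repeat on the failing half.
SELECT count(*) FROM bigtable WHERE id <  25000000;   -- succeeds?
SELECT count(*) FROM bigtable WHERE id >= 25000000;   -- errors out?
-- ...keep halving the failing range, then dump the suspect page with
-- pg_filedump -i -f as suggested elsewhere in this thread.
```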


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Postgres training down under.

2007-02-08 Thread Tony Veinberg




Paul Lambert wrote:
> Are there any companies in the great land of Australia that offer
> Postgres training courses?
>
> I see a number listed for around the US on the postgresql.org website -
> just curious if anything similar goes on down under.
>
> Cheers,
> P.

Fujitsu

-- 
Cheers
Tony Veinberg
DBA

LISAsoft Pty. Ltd.
38 Greenhill Rd
Wayville SA 5034
Phone: 08 8272 1555
Mobile: 0400 525115
Email: [EMAIL PROTECTED]




[GENERAL] Error on table insert: attribute 13 has wrong type

2007-02-08 Thread Andrus
I installed 8.2 server on Windows XP

Today every insert to table causes error:

ERROR:  attribute 13 has wrong type
DETAIL:  Table has type character, but query expects character.

Any idea how to fix it ?

Andrus. 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] DBI-Link 2.0

2007-02-08 Thread [EMAIL PROTECTED]
Hi,

Is there any form of manual for DBI-link out there?

Any link is greatly appreciated!

Regards,
Jo.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] DBI-Link 2.0

2007-02-08 Thread [EMAIL PROTECTED]
On Feb 8, 8:54 am, Hannes Dorbath <[EMAIL PROTECTED]> wrote:
> On 07.02.2007 17:59, [EMAIL PROTECTED] wrote:
>
> > Is there any form of manual for DBI-link out there?
>
> > Any link is greatly appreciated!
>
> I think all available documentation comes with it in various README
> files. Do you have a specific problem?
>
> --
> Regards,
> Hannes Dorbath

I worked out all the problems that I had (and there were some), but I
have to pass the information on now to a developer who has no
experience with dbi-link.  I'll just explain to him what I know
already and pass him the implementation.txt file.

thanks!

Jo De Haes.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Service startup - troubleshooting

2007-02-08 Thread Mike
Hello,

I know this is probably a borderline Linux question but I think
someone with PostgreSQL expertise will better be able to answer it (I
already struck out on the Linux side and after much searching on the
Web).

I installed Postgres 8.1 on Ubuntu 6.10 using Synaptic. Postgres is
listed under System|Administration|Services in the UI, and there's an
init script (which references /usr/share/postgresql-common/init.d-
functions) in the appropriate run level directory. However, Postgres
still won't run at startup.

I've searched various log files for any messages from Postgres but
found none. My guess would be that this is a permission-related
problem, but I don't know for sure.

Can anyone share ideas on how to troubleshoot the (non-) starting of
Postgres at boot time and how to resolve this problem?

Thank you!


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Postgres SQL Syntax

2007-02-08 Thread Jim C.
Richard Huxton wrote:
> Jim C. wrote:
>>> You probably want one of the mysql converter projects, e.g.
>>>   http://pgfoundry.org/projects/mysql2pgsql/
>>> Also read the "converting from other databases" section here:
>>>   http://www.postgresql.org/docs/techdocs
>>
>> I tried several conversion tools and did get some minor success with one
>> or two but mostly they didn't work. The article at this site seems good
>> at first blush but then one notices that it doesn't seem to have been
>> updated any time in the last five years.
> 
> I have to admit it's been at least a couple of years since I've used any
> of the mysql2pg scripts.

Perhaps this might be an issue the developers wish to address? No
sarcasm here.  I know they are busy and may not have the resources any
time soon. If it hasn't been much under consideration for several years
they may want to think about it though.


Jim C.






signature.asc
Description: OpenPGP digital signature


[GENERAL] line folding versus shop line

2007-02-08 Thread [EMAIL PROTECTED]
Hi,

I'm using postgresql 7.4. If I execute an SQL SELECT statement, the
records wrap across lines.

I would like to turn this off, so that lines do not wrap.

I know the option in less, which one has to set to "-S". I have
exported the variable and it works with text files.

However, if I use psql then lines are folded even though the environment
variable LESS is set to -S.

How do I turn line folding off, so that records do not wrap around?

Fritz
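A hedged guess at what is happening here: psql pipes its output through $PAGER (often more, which ignores $LESS), so pointing PAGER at less with -S usually stops the wrapping. The database and table names below are made up for illustration:

```shell
# Make psql's pager chop long lines instead of folding them.
export PAGER='less -S'
# (inside psql, "\pset pager off" disables paging entirely)
if command -v psql >/dev/null; then
    psql -d mydb -c 'SELECT * FROM widetable;' || true   # mydb/widetable are hypothetical
fi
echo "PAGER=$PAGER"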


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Client connection: Port 5432 / Postgres

2007-02-08 Thread Nicolas Gignac

Hello,

I have installed Postgres 8.2 on a internal server having Windows Server
2003 (IIS 6) up and running.
- I have configure the hp_config file to: host
all 0.0.0.0./0
md5
- I have change the listening address to '*' in the postgres.conf file
- No Firewall activated on this internal server
- I have restart the server and I can connect to postgres from remote
computer, PostgreSQL works only from the local host
- When I do a Netstat I got this: TCP   127.0.0.0.1:5432
0.0.0.0.0  LISTENING  440
- When I tried to connect from local host to the port with telnet localhost
5432, I got an error: on port 23, connect failed.

What are the options/ways to allow connection by PostgreSQL remote Clients
to the 5432 port on my server in this internal network? Or is there any
problem with my steps, something I missed?

Thanks for your help.

Nicolas Gignac


Re: [GENERAL] Under Red Hat AS in a WIndows Virtual Machine

2007-02-08 Thread Richard Huxton

Matt Miller wrote:

I'm looking at running PG under Red Hat AS, but within a virtual
machine.  The host will be a Windows 2003 Enterprise server running
Windows Virtual Machine 2005 Enterprise.  From the archives it seems
that people are doing this type of thing successfully, but I just wanted
to see if anyone had any advice.


Performance with I/O tends to suck on virtual machines, but otherwise I 
don't think PG will know.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Data migration from version 7.2.1 to 8.1.5

2007-02-08 Thread Arindam

On 2/8/07, Tom Lane <[EMAIL PROTECTED]> wrote:

Arindam <[EMAIL PROTECTED]> writes:
> On 2/8/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>> Now you *should* be using the newer pg_dump to dump from the old
>> database --- that's considered good practice.  But you need an
>> up-to-date pg_restore to read its output.

> Well this time I got it dumped using 8.1.5 and tried restoring it too
> with 8.1.5 - however no difference in the result.

[ experiments a bit... ]  Apparently you have still a third version
laying about someplace, because that message actually must be coming
from a 7.3 or 7.4 pg_restore.  Check your PATH.

$ pg_dump -Ft -s regression >zzz.tar
$ ~postgres/version82/bin/pg_restore zzz.tar | wc
6450 17782 136294
$ ~postgres/version81/bin/pg_restore zzz.tar | wc
6455 17788 136330
$ ~postgres/version80/bin/pg_restore zzz.tar | wc
6587 18184 139822
$ ~postgres/version74/bin/pg_restore zzz.tar | wc
pg_restore: [archiver] unsupported version (1.10) in file header
0 0 0
$ ~postgres/version73/bin/pg_restore zzz.tar | wc
pg_restore: [archiver] unsupported version (1.10) in file header
0 0 0
$ ~postgres/version72/bin/pg_restore zzz.tar | wc
pg_restore: [archiver] input file does not appear to be a valid archive
0 0 0
$ ~postgres/version71/bin/pg_restore zzz.tar | wc
Archiver: input file does not appear to be a valid archive
0 0 0
$


Lousy me. I qualified it with the full path - it worked. Earlier I was
running pg_restore and it was coming from /usr/bin - version 7.3.6.

Thank you.



   regards, tom lane




--
Against stupidity, the very Gods themselves contend in vain --
Friedrich Schiller

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Data migration from version 7.2.1 to 8.1.5

2007-02-08 Thread Tom Lane
Arindam <[EMAIL PROTECTED]> writes:
> On 2/8/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>> Now you *should* be using the newer pg_dump to dump from the old
>> database --- that's considered good practice.  But you need an
>> up-to-date pg_restore to read its output.

> Well this time I got it dumped using 8.1.5 and tried restoring it too
> with 8.1.5 - however no difference in the result.

[ experiments a bit... ]  Apparently you have still a third version
laying about someplace, because that message actually must be coming
from a 7.3 or 7.4 pg_restore.  Check your PATH.

$ pg_dump -Ft -s regression >zzz.tar
$ ~postgres/version82/bin/pg_restore zzz.tar | wc
6450 17782 136294
$ ~postgres/version81/bin/pg_restore zzz.tar | wc
6455 17788 136330
$ ~postgres/version80/bin/pg_restore zzz.tar | wc
6587 18184 139822
$ ~postgres/version74/bin/pg_restore zzz.tar | wc
pg_restore: [archiver] unsupported version (1.10) in file header
0 0 0
$ ~postgres/version73/bin/pg_restore zzz.tar | wc
pg_restore: [archiver] unsupported version (1.10) in file header
0 0 0
$ ~postgres/version72/bin/pg_restore zzz.tar | wc
pg_restore: [archiver] input file does not appear to be a valid archive
0 0 0
$ ~postgres/version71/bin/pg_restore zzz.tar | wc
Archiver: input file does not appear to be a valid archive
0 0 0
$

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Data migration from version 7.2.1 to 8.1.5

2007-02-08 Thread Arindam

On 2/8/07, Tom Lane <[EMAIL PROTECTED]> wrote:

Arindam <[EMAIL PROTECTED]> writes:
> I tried dumping data in tar format using the pg_dump utility (with
> version 7.2.1) and restoring it using pg_restore utility (with version
> 8.1.5). But on running this command:

> pg_restore -d mydb -C -u 
> I get the following error:
> pg_restore: [archiver] unsupported version (1.10) in file header

It looks to me a whole lot like you got it backward, ie, used the 8.1
pg_dump and then tried to restore with the 7.2 pg_restore.

Now you *should* be using the newer pg_dump to dump from the old
database --- that's considered good practice.  But you need an
up-to-date pg_restore to read its output.


Well this time I got it dumped using 8.1.5 and tried restoring it too
with 8.1.5 - however no difference in the result.



   regards, tom lane



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Under Red Hat AS in a WIndows Virtual Machine

2007-02-08 Thread Matt Miller
I'm looking at running PG under Red Hat AS, but within a virtual
machine.  The host will be a Windows 2003 Enterprise server running
Windows Virtual Machine 2005 Enterprise.  From the archives it seems
that people are doing this type of thing successfully, but I just wanted
to see if anyone had any advice.

I'll probably also be setting up some kind of standby/replication site
running under a similar environment.  The standby, though, will run the
non-Enterprise flavor of the Microsoft stuff.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] [Security] New RPM Sets for Fedora Core / Red Hat Enterprise Linux

2007-02-08 Thread Devrim GUNDUZ
-
PostgreSQL New RPM Sets
2007-02-08

Versions: 8.2.3, 8.1.8, 8.0.12

Set label: 8.2.3-1PGDG, 8.1.8-1PGDG, 8.0.12-1PGDG
-

-
Release Info:

PostgreSQL RPM Building Project has released RPMs for new PostgreSQL
minor releases, and they are available in main FTP site and its mirrors.
Users should upgrade to these versions as soon as possible.

We currently have RPMs for:

- Fedora Core 2-x86_64
- Fedora Core 4
- Fedora Core 5
- Fedora Core 5-x86_64
- Fedora Core 6
- Fedora Core 6-x86_64
- Red Hat Enterprise Linux Enterprise Server 3.0
- Red Hat Enterprise Linux Enterprise Server 4
- Red Hat Enterprise Linux Enterprise Server 4-x86_64
- Red Hat Enterprise Linux Advanced Server 4
- Red Hat Enterprise Linux Advanced Server 4-x86_64

More may come later. I want to thank every package builder for
this great number of supported platforms. Support for Red Hat 9, RHEL
2.1 and Fedora Core 1 and 3 has already been abandoned. Fedora Core 2 and
Fedora Core 4 support will be abandoned in future releases. Please let
us know if you can assist us in building RPMs for the missing Red Hat /
Fedora Core platforms. 

There is a new package layout as of PostgreSQL 8.2.2. All previous
releases included a single postgresql-pl package which contained
PL/Perl, PL/Python and PL/Tcl. Since the old layout made people install
all dependencies for all languages, we split this package into 3
subpackages to reduce the dependencies:

* postgresql-plperl
* postgresql-plpython
* postgresql-pltcl

Either of these packages will obsolete postgresql-pl, so install
whichever you want during the upgrade.

8.2.3 set contains a package that was introduced in 8.2.0:
compat-postgresql-libs. Fedora Core 5, Fedora Core 6, Red Hat Enterprise
Linux 4 and Red Hat Enterprise Linux 5 ship with libpq.so.4. Any apps
that are built against libpq, like php-pgsql, need this. However, since
PostgreSQL 8.2 ships with libpq.so.5, users will need the old libs in
order not to break those apps.

For example, if you are running PHP+PostgreSQL 8.1.X on FC6 and if you
want to upgrade to PostgreSQL 8.2, you will need this package before
upgrading to 8.2. We are shipping libpq.so.4 with this new set. Please
install compat-postgresql-libs before installing other packages.
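Put concretely, an upgrade on one of these platforms would presumably be ordered like this (the file names are illustrative, not the exact package names):

```
# install the compatibility libs first so libpq.so.4 consumers keep working
rpm -Uvh compat-postgresql-libs-*.rpm
# then the new 8.2.3 packages
rpm -Uvh postgresql-libs-*.rpm postgresql-server-*.rpm postgresql-8.2.3-*.rpm
```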

For a complete list of changes in the RPM sets, please refer to the changelogs
in the RPMs. Use 
 rpm -q --changelog package_name
to query the changelog.

Since this release is not a major release, it does not require a
dump/reload from the previous release. However, if you are upgrading
from very early releases, you may need a dump/reload. Please see the
Release Notes to confirm the procedure for this.

The SRPMs are also provided. Please note that we have one SRPM for all
platforms. 

We also have a howto document about RPM installation of PostgreSQL:

http://pgfoundry.org/docman/?group_id=148

Please follow the instructions before installing/upgrading.

Almost each RPM has been signed by the builder, and each directory
contains CURRENT_MAINTAINER file which includes the name/email of the
package builder and link to their PGP key. 

If you experience problems with the RPMs or if you have feature
requests, please join

pgsqlrpms-general ( at ) pgfoundry ( dot ) org

More info about the list is found at:

http://lists.pgfoundry.org/mailman/listinfo/pgsqlrpms-general

The project page is:

http://pgfoundry.org/projects/pgsqlrpms

Please do not use these resources for issues with running or using
PostgreSQL once it is installed.  Please download these files from:

http://www.postgresql.org/ftp/binary/v8.2.3/linux/
http://www.postgresql.org/ftp/binary/v8.1.8/linux/
http://www.postgresql.org/ftp/binary/v8.0.12/linux/


Regards,
-- 
Devrim GÜNDÜZ
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/




signature.asc
Description: This is a digitally signed message part


Re: [GENERAL] Data migration from version 7.2.1 to 8.1.5

2007-02-08 Thread Tom Lane
Arindam <[EMAIL PROTECTED]> writes:
> I tried dumping data in tar format using the pg_dump utility (with
> version 7.2.1) and restoring it using pg_restore utility (with version
> 8.1.5). But on running this command:

> pg_restore -d mydb -C -u 
> I get the following error:
> pg_restore: [archiver] unsupported version (1.10) in file header

It looks to me a whole lot like you got it backward, ie, used the 8.1
pg_dump and then tried to restore with the 7.2 pg_restore.

Now you *should* be using the newer pg_dump to dump from the old
database --- that's considered good practice.  But you need an
up-to-date pg_restore to read its output.

regards, tom lane
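(As the follow-ups in this thread show, the usual trap is an old pg_restore found first in PATH.) A sketch of the recommended flow, with made-up install paths and host names:

```
# Dump the old 7.2 cluster with the NEW pg_dump...
/opt/pgsql-8.1.5/bin/pg_dump -Ft -h oldhost mydb > mydb.tar
# ...and restore with the matching pg_restore from the same installation.
/opt/pgsql-8.1.5/bin/pg_restore -d mydb mydb.tar
```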

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Data migration from version 7.2.1 to 8.1.5

2007-02-08 Thread Arindam

Hi,

I am trying to write a script which will migrate all the data from a
particular database on version 7.2.1 and restore the same data along
with the full schema into an 8.1.5 database. The target database itself
will already exist and does not need to be created.

I tried dumping data in tar format using the pg_dump utility (with
version 7.2.1) and restoring it using pg_restore utility (with version
8.1.5). But on running this command:

pg_restore -d mydb -C -u 

I get the following error:

pg_restore: [archiver] unsupported version (1.10) in file header

Any idea where I am going wrong?

Thanks,
Arindam

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Array OUT columns in a record returned from a function - part deux

2007-02-08 Thread Ged

OK I reread the array documentation and something that didn't stand
out before now does :).

"By default, the lower bound index value of an array's dimensions is
set to one. If any of an array's dimensions has a lower bound index
not equal to one, an additional decoration that indicates the actual
array dimensions will precede the array structure decoration. This
decoration consists of square brackets ([]) around each array
dimension's lower and upper bounds, with a colon (:) delimiter
character in between. The array dimension decoration is followed by an
equal sign (=)." -- 8.10.6. Array Input and Output Syntax.
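That decoration can be reproduced directly in psql, which makes the round-trip behaviour easier to see (a small illustration based on the values later in this message, not part of the original manual quote):

```sql
-- An array literal with an explicit lower bound of 0 keeps its decoration:
SELECT '[0:1]={124,123}'::int[];          -- displays as [0:1]={124,123}
-- Subscripting with the declared bounds strips it:
SELECT ('[0:1]={124,123}'::int[])[0];     -- 124
```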

Sorry to have been such a noob, I'll go and do what it says.


Cheers,
Ged.


On 08/02/07, Ged <[EMAIL PROTECTED]> wrote:

I was on here a couple of days ago with problems getting relational
data *into* some array variables (now solved thank you), but now I'm
here with problems getting data *out* again.

A week or so ago I did a little test page with a little test table
pulled from some example about postgres arrays. Everything worked
perfectly and you can still see the test page on my dev site at
http://www.gedsguides.com/playpen/.

So last night I started working with the real app, pulling the data
out of the function and supposedly iterating through the array data
using exactly the same technique as on the test page. Oh dear! JSP
exception! Bad integer! So I switched to one of the text fields and it
now displays, but the *first* item out of the array has an array
bounds descriptor stuck on the front of it like so: "[0:1]". I had
seen this before when looking at the function results in psql, but
presumed that this was an artefact of the display process, and that it
would go away when iterated through in jdbc. I didn't remember that it
*hadn't* been there when looking at the array data in the little test
table.

You can see the current state of affairs at
http://www.gedsguides.com/wow/quests/1. I'm iterating through the
quest starters, but just blatting the array out in one for quest
enders a.t.m.

The only difference I can think of is that the playpen test page is
looking at data from a table that has native array columns whereas the
function is building arrays on the fly. All the data is in the same
database (8.0.8), coming out of the same version of Tomcat (5.5.20)
and using the same jdbc driver (dunno where they keep that, it's the
ISP's driver not mine).

I enclose some selects in psql. You can see that when I select the
whole array field there's a descriptor present in the function data
that isn't there in the table data. But when I specifically select
element 0 the descriptor disappears, so psql is obviously doing
something right that the driver isn't :-) :


ggw_dev=> select * from sal_emp;
 name |  pay_by_quarter   |            schedule
------+-------------------+---------------------------------
 Bill | {1,1,1,1}         | {meeting,lunch}
 Ged  | {1,1,11000,11000} | {training,presentation,clients}
(2 rows)

ggw_dev=> select id, name, start_id, start_name from get_quest_page_details(1);
 id |        name         |    start_id     |            start_name
----+---------------------+-----------------+-----------------------------------
  1 | Uncovering the Past | [0:1]={124,123} | [0:1]={Quae,"Prospector Whelgar"}
(1 row)

ggw_dev=> select id, name, start_id[0], start_name[0] from
get_quest_page_details(1);
 id |        name         | start_id | start_name
----+---------------------+----------+------------
  1 | Uncovering the Past |      124 | Quae
(1 row)


Cheers,
Ged.



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Persistent dead rows

2007-02-08 Thread Tom Lane
Richard Huxton  writes:
> Well, both cluster and vacuum full will require a lock on the table. But 
>   they're more or less doing the same thing, so why the one should work 
> and the other not I don't know.

CLUSTER isn't MVCC-safe --- it'll drop rows that are committed dead
even if they are potentially still visible to some open transaction.
The OP has *clearly* got an open-transaction problem.

regards, tom lane



Re: [GENERAL] Persistent dead rows

2007-02-08 Thread Richard Huxton

Malcolm McLean wrote:
> Richard Huxton wrote:
>> Malcolm McLean wrote:
>>> I tested this theory by stopping java applications that were connected
>>> to the database and all other connections that were using transactions
>>> and the full vacuum was still unable to remove the dead rows.
>>>
>>> What I'm still wondering about is why the dead row count rises
>>> incredibly high, then all of a sudden drops to 0 again when the java
>>> apps never stop running.
>>
>> Are you certain there's no open transaction? Perhaps keep an eye on
>> SELECT * FROM pg_stat_activity - there might be something you don't
>> know about.
>
> Okay, I just stopped all java processes again and all pg_stat_activity
> returned were IDLEs and no IDLE in transactions. The strange thing is
> that a cluster command removes the dead rows, and this can only be run
> when all the java apps have been stopped.

Well, both cluster and vacuum full will require a lock on the table. But
they're more or less doing the same thing, so why the one should work
and the other not I don't know.

>> If it was the autovacuum interfering, I'd expect a lock failure.
>
> I doubt autovacuum is interfering, I think it is the process that is
> clearing the dead rows. If that is the case, then why is only autovacuum
> able to clear them and not a manual vacuum?

Makes no sense to me - autovacuum has no magic about it, it just runs
vacuum.

You've got the permissions to vacuum the table, otherwise you'd get an
error. You've checked for transactions in pg_stat_activity. Don't see
how it's worth checking pg_locks if you don't have any transactions.
This one's puzzling me.

I've had a quick look at the release notes for 8.1.x and can't see
anything obvious on this issue, but it might be worth upgrading to 8.1.8
to see if that just makes it go away.

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Persistent dead rows

2007-02-08 Thread Malcolm McLean
Richard Huxton wrote:
> Malcolm McLean wrote:
>> I tested this theory by stopping java applications that were connected
>> to the database and all other connections that were using transactions
>> and the full vacuum was still unable to remove the dead rows.
>>
>> What I'm still wondering about, is why the dead row count rises
>> incredibly high, then all of a sudden drops to 0 again when the java
>> apps never stop running.
>
> Are you certain there's no open transaction? Perhaps keep an eye on
> SELECT * FROM pg_stat_activity - there might be something you don't
> know about.

Okay, I just stopped all java processes again and all pg_stat_activity
returned were IDLEs and no IDLE in transactions. The strange thing is
that a cluster command removes the dead rows, and this can only be run
when all the java apps have been stopped.

> If it was the autovacuum interfering, I'd expect a lock failure.

I doubt autovacuum is interfering, I think it is the process that is
clearing the dead rows. If that is the case, then why is only autovacuum
able to clear them and not a manual vacuum.

Malcolm
This information is intended only for the person or entity to which it is 
addressed and may contain private, confidential, proprietary and/or privileged 
material and may be subject to confidentiality agreements. If you are not the 
intended recipient, you are hereby notified that any dissemination, 
distribution or copying of this e-mail is strictly prohibited. If you have 
received this e-mail in error, please notify us immediately by return email or 
telephone +27 12 673-1400 and destroy the original message. 

Any review, retransmission, dissemination, or any other use of or taking of any 
action in reliance upon this information, by persons or entities other than the 
intended recipient, is prohibited. 

The company is neither liable for proper, complete transmission of the 
information contained in this communication, any delay in its receipt or that 
the mail is virus-free.



Re: [GENERAL] Persistent dead rows

2007-02-08 Thread Richard Huxton

Malcolm McLean wrote:
> Richard Huxton wrote:
>>> What is causing those dead rows to not get cleared even by a full
>>> vacuum? Is there any way to keep them low without having to run a
>>> cluster command, as that is a locking statement and requires me to
>>> close all java applications that are connecting to that table before
>>> running the cluster.
>>
>> Aha! I'll bet your java app (or something in the stack) is issuing a
>> BEGIN and just sitting there. Try disconnecting the apps and seeing if
>> vacuum recovers rows then. If so, you'll need to get your java code to
>> stop sitting on open transactions.
>
> I tested this theory by stopping java applications that were connected
> to the database and all other connections that were using transactions
> and the full vacuum was still unable to remove the dead rows.
>
> What I'm still wondering about is why the dead row count rises
> incredibly high, then all of a sudden drops to 0 again when the java
> apps never stop running.

Are you certain there's no open transaction? Perhaps keep an eye on
SELECT * FROM pg_stat_activity - there might be something you don't know
about.

If it was the autovacuum interfering, I'd expect a lock failure.

--
  Richard Huxton
  Archonet Ltd
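[Editor's note: a concrete form of that pg_stat_activity check, using the
8.1-era column names (current_query is only populated when
stats_command_string is on), might be:]

```sql
-- Sessions holding a transaction open show up as '<IDLE> in transaction'
SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY query_start;
```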



Re: [GENERAL] temp tables in functions?

2007-02-08 Thread Shoaib Mir

Agreed :) I guess I missed out some details there, as I just thought he
needed to drop a temp table inside a function like this:

CREATE OR REPLACE FUNCTION tempfunc(int) RETURNS int
AS
$$
begin
    execute 'create temporary table test (a numeric) on commit drop';
    execute 'INSERT INTO test values (1);';
    return 1;
end;
$$ LANGUAGE 'plpgsql';

I used 'number' by mistake, so sorry for any inconvenience caused; I was
trying it with EnterpriseDB (where 'number' is added for Oracle
compatibility).

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 2/8/07, Michael Fuhr <[EMAIL PROTECTED]> wrote:


On Thu, Feb 08, 2007 at 12:29:37PM +0500, Shoaib Mir wrote:
> Something like this will help you:
>
> execute immediate 'create temporary table test (a number) on commit
drop';

PL/pgSQL doesn't recognize "immediate" and number isn't a PostgreSQL
type so the above yields a syntax error.  Also, EXECUTE isn't
necessary for the CREATE TABLE statement, although as Bruno mentioned
EXECUTE will be necessary for other statements due to plan caching.
And ON COMMIT DROP won't help if you call the function multiple
times in the same transaction.

--
Michael Fuhr



Re: [GENERAL] temp tables in functions?

2007-02-08 Thread Michael Fuhr
On Thu, Feb 08, 2007 at 12:29:37PM +0500, Shoaib Mir wrote:
> Something like this will help you:
> 
> execute immediate 'create temporary table test (a number) on commit drop';

PL/pgSQL doesn't recognize "immediate" and number isn't a PostgreSQL
type so the above yields a syntax error.  Also, EXECUTE isn't
necessary for the CREATE TABLE statement, although as Bruno mentioned
EXECUTE will be necessary for other statements due to plan caching.
And ON COMMIT DROP won't help if you call the function multiple
times in the same transaction.

-- 
Michael Fuhr
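[Editor's note: putting Michael's points together, a sketch that also
survives repeated calls within one transaction might look like this
(the function and table names are just the ones used earlier in the
thread):]

```sql
CREATE OR REPLACE FUNCTION tempfunc(int) RETURNS int AS $$
BEGIN
    -- Utility statements such as CREATE TABLE don't need EXECUTE:
    CREATE TEMPORARY TABLE test (a numeric);

    -- Statements that reference the temp table do need EXECUTE,
    -- otherwise a plan cached on the first call points at a table
    -- that has since been dropped:
    EXECUTE 'INSERT INTO test VALUES (1)';

    -- Drop explicitly rather than relying on ON COMMIT DROP, so a
    -- second call in the same transaction doesn't fail on CREATE:
    EXECUTE 'DROP TABLE test';
    RETURN 1;
END;
$$ LANGUAGE plpgsql;
```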



[GENERAL] Array OUT columns in a record returned from a function - part deux

2007-02-08 Thread Ged

I was on here a couple of days ago with problems getting relational
data *into* some array variables (now solved thank you), but now I'm
here with problems getting data *out* again.

A week or so ago I did a little test page with a little test table
pulled from some example about postgres arrays. Everything worked
perfectly and you can still see the test page on my dev site at
http://www.gedsguides.com/playpen/.

So last night I started working with the real app, pulling the data
out of the function and supposedly iterating through the array data
using exactly the same technique as on the test page. Oh dear! JSP
exception! Bad integer! So I switched to one of the text fields and it
now displays, but the *first* item out of the array has an array
bounds descriptor stuck on the front of it like so: "[0:1]". I had
seen this before when looking at the function results in psql, but
presumed that this was an artefact of the display process, and that it
would go away when iterated through in jdbc. I didn't remember that it
*hadn't* been there when looking at the array data in the little test
table.

You can see the current state of affairs at
http://www.gedsguides.com/wow/quests/1. I'm iterating through the
quest starters, but just blatting the array out in one for quest
enders a.t.m.

The only difference I can think of is that the playpen test page is
looking at data from a table that has native array columns whereas the
function is building arrays on the fly. All the data is in the same
database (8.0.8), coming out of the same version of Tomcat (5.5.20)
and using the same jdbc driver (dunno where they keep that, it's the
ISP's driver not mine).

I enclose some selects in psql. You can see that when I select the
whole array field there's a descriptor present in the function data
that isn't there in the table data. But when I specifically select
element 0 the descriptor disappears, so psql is obviously doing
something right that the driver isn't :-) :


ggw_dev=> select * from sal_emp;
 name |  pay_by_quarter   |            schedule
------+-------------------+---------------------------------
 Bill | {1,1,1,1}         | {meeting,lunch}
 Ged  | {1,1,11000,11000} | {training,presentation,clients}
(2 rows)

ggw_dev=> select id, name, start_id, start_name from get_quest_page_details(1);
 id |        name         |    start_id     |            start_name
----+---------------------+-----------------+-----------------------------------
  1 | Uncovering the Past | [0:1]={124,123} | [0:1]={Quae,"Prospector Whelgar"}
(1 row)

ggw_dev=> select id, name, start_id[0], start_name[0] from
get_quest_page_details(1);
 id |        name         | start_id | start_name
----+---------------------+----------+------------
  1 | Uncovering the Past |      124 | Quae
(1 row)


Cheers,
Ged.
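[Editor's note: the "[0:1]=" prefix is PostgreSQL's standard output for
an array whose lower bound differs from the default of 1 - the function
evidently builds its arrays starting at subscript 0, while the sal_emp
columns start at 1. A quick way to see the difference, independent of
the thread's objects:]

```sql
-- Default lower bound (1): printed without decoration
SELECT ARRAY[124,123];

-- Explicit lower bound 0 (as the function produces): the bounds are
-- printed so the value round-trips unambiguously
SELECT '[0:1]={124,123}'::int[];
```

A JDBC driver that parses array literals by hand has to cope with this
optional dimension prefix.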



Re: [GENERAL] Persistent dead rows

2007-02-08 Thread Malcolm McLean
Richard Huxton wrote:
>> What is causing those dead rows to not get cleared even by a full
>> vacuum? Is there any way to keep them low without having to run a
>> cluster command, as that is a locking statement and requires me to
>> close all java applications that are connecting to that table before
>> running the cluster.
>
> Aha! I'll bet your java app (or something in the stack) is issuing a
> BEGIN and just sitting there. Try disconnecting the apps and seeing if
> vacuum recovers rows then. If so, you'll need to get your java code to
> stop sitting on open transactions.

I tested this theory by stopping java applications that were connected
to the database and all other connections that were using transactions
and the full vacuum was still unable to remove the dead rows.

What I'm still wondering about, is why the dead row count rises
incredibly high, then all of a sudden drops to 0 again when the java
apps never stop running.

Malcolm




Re: [GENERAL] Persistent dead rows

2007-02-08 Thread Richard Huxton

Malcolm McLean wrote:
> Hi,
>
> We are currently having a problem with one of our tables containing far
> too many dead rows. The table in question will have a few hundred
> thousand inserts and deletes per day and usually builds up quite a large
> dead row count that starts to affect the performance of queries
> selecting from the table.
>
> However, it seems that when the dead row count reaches around 70 it
> drops to 0 again and all is fast once more.
>
> I know that vacuuming is supposed to recover these, but it doesn't seem
> to be happening. Here is output from my analyze, vacuum, reindex and
> cluster commands:
>
> claim=# ANALYZE VERBOSE trans_queue;
> INFO:  analyzing "public.trans_queue"
> INFO:  "trans_queue": scanned 1749 of 1749 pages, containing 20383 live
> rows and 137327 dead rows; 20383 rows in sample, 20383 estimated total
> rows
> ANALYZE
> claim=# VACUUM VERBOSE ANALYZE trans_queue;
>
> INFO:  "trans_queue": found 0 removable, 157730 nonremovable row
> versions in 1749 pages
> DETAIL:  137344 dead row versions cannot be removed yet.

This is usually because a transaction is hanging around that might be
able to see them. The vacuum can't recover them until that transaction
has completed.

> What is causing those dead rows to not get cleared even by a full
> vacuum? Is there any way to keep them low without having to run a cluster
> command, as that is a locking statement and requires me to close all java
> applications that are connecting to that table before running the
> cluster.

Aha! I'll bet your java app (or something in the stack) is issuing a
BEGIN and just sitting there. Try disconnecting the apps and seeing if
vacuum recovers rows then. If so, you'll need to get your java code to
stop sitting on open transactions.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] simplifying SQL

2007-02-08 Thread Richard Huxton

garry saddington wrote:
> I have this query which essentially returns records that exist in one
> table (studentclass) and not another (effortandattainment), to check when
> teachers have entered their student grades.
>
> select
> [...]
> from studentclass left join effortandattainment
> [...]
> and studentclass.classid not in (
> [...]
>
> I was wondering whether a simpler method exists to do the same thing, or
> is this OK.

The formatting on the query isn't great, but I can't quite see what the
"left join" is doing with the "not in" too. I'd probably just have the
left join and check that "effortandattainment.classid IS NULL".


--
  Richard Huxton
  Archonet Ltd
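[Editor's note: spelled out, the anti-join Richard describes would look
roughly like this - the classid column name is taken from the quoted
query, and the select list is a guess:]

```sql
-- studentclass rows with no matching effortandattainment row
SELECT sc.*
FROM studentclass sc
LEFT JOIN effortandattainment ea ON ea.classid = sc.classid
WHERE ea.classid IS NULL;
```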



Re: [GENERAL] SQL textbook

2007-02-08 Thread Richard Huxton

Jan Mura wrote:
> Hello,
>
> I would like to ask for a hint for a good SQL textbook. I don't want just
> a reference manual but a real textbook with theoretical background, so I
> could design my databases following the general rules such as normal
> forms and so on.

Try "An Introduction to Database Systems" by C.J. Date - that'll give
you a good background. Anything will do for SQL.

> I mean something on the net of course.

I don't know that there is anything on the net that covers background
information well.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Storing database in cluster (Memory)

2007-02-08 Thread Richard Huxton

roopa perumalraja wrote:
>>> Hi all,
>>>
>>> As I am inserting 100million rows daily into partitioned tables
>>> (daily wise), it is getting slower.
>>
>> What is - the inserts? By how much? What tables? What indexes? How
>> are you inserting these rows?
>
> I take my words back about the 100million rows. The insert of 20million
> rows every day takes only 10 minutes, as I use a COPY statement to copy
> into a temporary table from flat files, then do some manipulation of the
> data and insert it into the partitioned tables. I have solved that
> problem. I have partitioned the tables date-wise. (Is partitioning the
> tables monthly recommended?)

Depends upon your usage - some people partition daily if they have a lot
of incoming data.



> The problem is with another insert: selecting data from one of the
> partitioned tables, doing some calculations, then inserting into
> another table. That is around 28 rows every day. This takes
> really a long time (almost a day) if somebody is doing something with
> the database. If nobody is using the database then it takes almost
> two hours. Even a select statement on other tables in the database
> affects this insert. While inserting I use:
>
> BEGIN WORK
> SELECT foo1
> LOCK TABLE foo2 IN EXCLUSIVE MODE
> INSERT INTO foo2 (SELECT..)
> COMMIT WORK
>
> All the tables are indexed. I am using 4 indexes including the pkey
> index. Will dropping the indexes before inserting and reindexing
> after the insert help?

Maybe. It is often quicker to recreate an index if you are updating lots
of rows.
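[Editor's note: a sketch of that "drop, load, recreate" pattern - the
index and column names here are hypothetical:]

```sql
DROP INDEX foo2_some_idx;
INSERT INTO foo2 SELECT /* ... */ FROM foo1;
CREATE INDEX foo2_some_idx ON foo2 (some_col);
ANALYZE foo2;  -- refresh statistics after the bulk change
```

Note the primary-key index usually has to stay in place during the load
to keep enforcing uniqueness.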



>>> Even the retrieval of data, a select statement on those tables, takes
>>> about 30 minutes.
>>
>> All selects take 30 minutes, regardless what work they do? Or do
>> you have specific selects that are causing problems?
>
> Any time more than one statement is running on the database, things take
> time. Say for example if I am inserting 20million rows into one table
> and at the same time I try updating another table, that takes a lot of
> time.

You haven't given details of the select statements, or what the
bottleneck is in your system. It sounds like it might be disk I/O. Have
you tried monitoring activity with vmstat/iostat (assuming you're on a
Linux/Unix system)?
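[Editor's note: for example, on a Linux box:]

```shell
# One report every 10 seconds; a consistently high 'wa' (I/O wait)
# column points at the disks as the bottleneck.
vmstat 10

# Per-device utilisation; iostat ships in the sysstat package.
iostat -x 10
```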



>>> I have tried increasing the parameters in postgres.conf but still
>>> that doesn't help me much as the number of rows is huge.
>>
>> What parameters, to what values? What hardware are you running on?
>> What load do you place on it?
>
> Parameters in postgres.conf which I increased are:
>
> fsync on
> wal_buffers 128
> checkpoint_segments 256 (2 GB)
> checkpoint_timeout 3600 (1 hour)

How did you calculate these? Is this based on your bulk data load?

> work_mem: set to 128MB
> maintenance_work_mem: to 512MB

I assume you have enough memory to support these settings? Something
over 4GB, yes?

> I wanted to increase shared_buffers to 60,000 but I am not able to
> restart the database if I change it even to 1. It says 'could not
> start postmaster'. (max_connections is 100.)

"It" says? What do the logs say?


>>> Will the idea of storing the database in cluster (memory)
>>> increase the performance of insert/update/select on the table in
>>> a situation like this? Thanks a lot in advance.
>>
>> A better question would be "Why isn't my data being cached?". The
>> operating-system should cache regularly used files.
>
> So I think the problem is that when two things are happening in the
> database, it takes a long time. That's the reason I was wondering if
> storing the database in cluster will solve the problem.

The data should be cached anyway. That's the point.

You still haven't told us any of the information needed to help you.
Some of the questions we still need answers for:

1. What queries are giving you trouble, and how many rows do they deal with?
2. What hardware is the database running on?
3. What load is on the system (try "vmstat 10" or similar)

--
  Richard Huxton
  Archonet Ltd



[GENERAL] Persistent dead rows

2007-02-08 Thread Malcolm McLean
Hi,
 
We are currently having a problem with one of our tables containing far
too many dead rows. The table in question will have a few hundred
thousand inserts and deletes per day and usually builds up quite a large
dead row count that starts to affect the performance of queries
selecting from the table.
 
However, it seems that when the dead row count reaches around 70 it
drops to 0 again and all is fast once more.
 
I know that vacuuming is supposed to recover these, but it doesn't seem
to be happening. Here is output from my analyze, vacuum, reindex and
cluster commands:

claim=# ANALYZE VERBOSE trans_queue;
INFO:  analyzing "public.trans_queue"
INFO:  "trans_queue": scanned 1749 of 1749 pages, containing 20383 live
rows and 137327 dead rows; 20383 rows in sample, 20383 estimated total
rows
ANALYZE
claim=# VACUUM VERBOSE ANALYZE trans_queue;
INFO:  vacuuming "public.trans_queue"
INFO:  index "tq_index_1" now contains 100499 row versions in 307 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.01u sec elapsed 0.09 sec.
INFO:  index "pk_trans_queue" now contains 157730 row versions in 477
pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.01s/0.00u sec elapsed 0.00 sec.
INFO:  index "tq_index_2" now contains 157730 row versions in 1316 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.01s/0.00u sec elapsed 0.26 sec.
INFO:  index "tq_index_3" now contains 157730 row versions in 641 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.16 sec.
INFO:  "trans_queue": found 0 removable, 157730 nonremovable row
versions in 1749 pages
DETAIL:  137344 dead row versions cannot be removed yet.
There were 1 unused item pointers.
0 pages are entirely empty.
CPU 0.02s/0.03u sec elapsed 0.65 sec.
INFO:  analyzing "public.trans_queue"
INFO:  "trans_queue": scanned 1749 of 1749 pages, containing 20386 live
rows and 137344 dead rows; 20386 rows in sample, 20386 estimated total
rows
VACUUM
claim=# VACUUM FULL VERBOSE ANALYZE trans_queue;
INFO:  vacuuming "public.trans_queue"
INFO:  "trans_queue": found 4 removable, 157808 nonremovable row
versions in 1749 pages
DETAIL:  137417 dead row versions cannot be removed yet.
Nonremovable row versions range from 79 to 99 bytes long.
There were 1 unused item pointers.
Total free space (including removable row versions) is 131552 bytes.
0 pages are or will become empty, including 0 at the end of the table.
25 pages containing 8472 free bytes are potential move destinations.
CPU 0.00s/0.02u sec elapsed 0.11 sec.
INFO:  index "tq_index_1" now contains 100509 row versions in 307 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.01u sec elapsed 0.11 sec.
INFO:  index "pk_trans_queue" now contains 157808 row versions in 477
pages
DETAIL:  4 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.01s/0.02u sec elapsed 0.02 sec.
INFO:  index "tq_index_2" now contains 157808 row versions in 1316 pages
DETAIL:  4 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.03u sec elapsed 0.26 sec.
INFO:  index "tq_index_3" now contains 157808 row versions in 641 pages
DETAIL:  4 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.01s/0.01u sec elapsed 0.18 sec.
INFO:  "trans_queue": moved 76 row versions, truncated 1749 to 1749
pages
DETAIL:  CPU 0.00s/0.01u sec elapsed 0.23 sec.
INFO:  index "tq_index_1" now contains 100509 row versions in 308 pages
DETAIL:  64 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "pk_trans_queue" now contains 157808 row versions in 478
pages
DETAIL:  76 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.01u sec elapsed 0.10 sec.
INFO:  index "tq_index_2" now contains 157808 row versions in 1317 pages
DETAIL:  76 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.02u sec elapsed 0.10 sec.
INFO:  index "tq_index_3" now contains 157808 row versions in 641 pages
DETAIL:  76 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.01u sec elapsed 0.23 sec.
INFO:  analyzing "public.trans_queue"
INFO:  "trans_queue": scanned 1749 of 1749 pages, containing 20391 live
rows and 137417 dead rows; 20391 rows in sample, 20391 estimated total
rows
VACUUM
claim=# REINDEX TABLE trans_queue;
REINDEX
claim=# ANALYZE VERBOSE trans_queue;
INFO:  analyzing "public.trans_queue"
INFO:  "trans_queue": scanned 1760 of 1760 pages, containing 21390 live
rows and 137417 dead rows; 21390 rows in sample, 21390 estimated total
rows
ANALYZE
claim=# CLUSTER trans_queue;
CLUSTER
claim=# ANALYZE VERBOSE tra