[HACKERS] Working happily on 8.1 (Was: panic on 7.3)

2006-01-21 Thread Rick Gigger
While this little bug was a menace to me at a bad time, my biggest  
problem was that I didn't have a good enough vacuum strategy, and my  
reasonably sized database became the size of the world.  At that  
point it couldn't be vacuumed without jamming up the whole server.   
So I have some questions.  If this is the wrong place, let me know and  
I will submit it to general.


1) What about these settings?  It is a dual 2.8 GHz Xeon box with six  
15,000 RPM SCSI drives in RAID 5 (I know I should be using 0+1 or  
something) and 2 GB of RAM.


max_connections = 1024
shared_buffers = 15000
work_mem = 1024
maintenance_work_mem = 10
max_fsm_pages = 100
checkpoint_segments = 10
checkpoint_timeout = 1000
effective_cache_size = 5

My base directory is 618 MB.  All other performance-related  
settings I left at the defaults.


I know it depends on my data set and load etc., but it would be great  
if someone could tell me if anything in there is a little crazy.  The  
max_fsm_pages seemed a bit high, but I really want vacuum to go fast  
and painlessly, and if I read everything right it still doesn't take  
up much memory.


2) I didn't touch the Vacuum delay, background writer or autovacuum  
settings because I wasn't familiar enough with them.  Are the default  
values very restricting?  I realized a little too late that leaving  
some of the 7.3 defaults in place came back to bite me when my load  
went up.  Since these are performance enhancing features and they  
didn't exist in older versions I figured that the defaults would  
still be better than 7.3 without those features.  Or are the defaults  
too conservative and I need to change them ASAP?


3) Several times there were backends running that were just bringing  
down the system.  Is there a way to signal a single backend to die  
without restarting the whole db server?  I looked on Google, searched  
the archives and the docs, and couldn't find any way to do this.


Thanks again,

Rick


On Jan 21, 2006, at 12:05 AM, Rick Gigger wrote:


Thanks very much!

I've decided to go straight to 8.1 though.  There are just too many  
performance improvements at this point that I might regret not  
having, and I don't want to do a dump/reload again.  I am about to  
compile it now.  If it isn't a panic-grade failure in the latest  
8.1 code then I'd just as soon take the stock release source code.   
I don't care at all if this kills one connection at the ultra-low  
frequency with which it occurs, but what I can't have is the whole  
server rebooting itself in the middle of processing hundreds of  
transactions.  Once that happens, all of the web clients hang onto  
their bad connections and then eventually die.  Considering that  
I'm moving to 8.1 and am not too familiar with applying patches, am  
I crazy for just going with the stock 8.1 code?


On Jan 20, 2006, at 10:36 PM, Tom Lane wrote:


Rick Gigger [EMAIL PROTECTED] writes:

I don't know if 2K could have passed since the last checkpoint.
...
now that I think about it, I was getting about 400 page requests /
minute, and each of those would have been doing at least 2
transactions, so yes, I guess that is very likely.


Good, 'cause if you didn't have a couple thousand transactions between
checkpoints then we need another theory ;-)


You realize of course that that's pretty old ...



Yes.  I will be upgrading immediately.


You'll want to include this patch:
http://archives.postgresql.org/pgsql-committers/2006-01/msg00289.php
(or see adjacent messages if you plan to move to something newer than
7.3.*).  We probably will not put out another set of releases until
next month, unless something really big comes along.  This one doesn't
qualify as really big IMHO, because it's not a PANIC-grade failure in
the later branches.  But having been burnt once, I'm sure you'll want
a patched copy ...

regards, tom lane




---(end of broadcast)---

TIP 4: Have you searched our list archives?

  http://archives.postgresql.org




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] makesign() broken in tsearch2

2006-01-21 Thread Teodor Sigaev



the bottom half might be messed up too.  So we are failing to exploit
the full intended sign space, which presumably is costing something
in index efficiency.

 


You are absolutely right.  My fault.


It looks to me like the values calculated by this routine end up on
disk, and therefore we can't fix it without forcing an initdb, or
at least REINDEX of all affected indexes.  Is that correct?
 

No.  query_gist.c exists only in the HEAD branch.  That code was  
developed not long ago and isn't well tested yet.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[HACKERS] Commands per transaction

2006-01-21 Thread Rod Taylor
Is there any way of bumping this limit or am I stuck breaking up the
transaction?

ERROR:  cannot have more than 2^32-1 commands in a transaction
-- 


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Commands per transaction

2006-01-21 Thread Rod Taylor
On Sat, 2006-01-21 at 12:48 -0300, Alvaro Herrera wrote:
 Rod Taylor wrote:
  Is there any way of bumping this limit or am I stuck breaking up the
  transaction?
 
 Wow, I never heard of anyone reaching the limit :-(  Sorry, you are
 stuck (short of changing CommandId to 64 bits, which would bloat your
 tables considerably ...)

It was a complex plpgsql function iterating through a large volume of
data. It would appear that each IF statement in plpgsql counts as a
separate command, because I don't believe I was dealing with more than
about 1.2B rows in a FOR ... EXECUTE 'SELECT ...' loop.
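
For what it's worth, here is a minimal sketch (function, table, and
column names are made up, not Rod's actual code) of the kind of loop
that can burn through CommandIds this fast -- every statement executed
inside the loop body, including the IF test itself, consumes one
command id:

```sql
-- Hypothetical illustration of a CommandId-hungry loop.
CREATE OR REPLACE FUNCTION process_rows() RETURNS void AS $$
DECLARE
    r record;
BEGIN
    FOR r IN EXECUTE 'SELECT id, val FROM big_table' LOOP
        IF r.val IS NULL THEN                             -- one command per iteration
            UPDATE big_table SET val = 0 WHERE id = r.id; -- and another one here
        END IF;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

With on the order of 1.2B iterations and a few commands each, the
2^32-1 limit is within reach.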


-- 


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Commands per transaction

2006-01-21 Thread Alvaro Herrera
Rod Taylor wrote:
 Is there any way of bumping this limit or am I stuck breaking up the
 transaction?

Wow, I never heard of anyone reaching the limit :-(  Sorry, you are
stuck (short of changing CommandId to 64 bits, which would bloat your
tables considerably ...)
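
To put a rough number on that bloat: on-disk tuple headers carry
CommandId fields, so widening CommandId from 4 to 8 bytes would cost at
least 4 extra bytes per row, before alignment padding.  A
back-of-envelope calculation, assuming a billion-row table:

```sql
-- 4 extra header bytes per row, one billion rows (roughly 3.7 GB):
SELECT pg_size_pretty((8 - 4) * 1000000000::bigint);
```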

-- 
Alvaro Herrera   Developer, http://www.PostgreSQL.org
[PostgreSQL] is a great group; in my opinion it is THE best open source
development community in existence anywhere.  (Lamar Owen)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] makesign() broken in tsearch2

2006-01-21 Thread Tom Lane
Teodor Sigaev [EMAIL PROTECTED] writes:
 It looks to me like the values calculated by this routine end up on
 disk, and therefore we can't fix it without forcing an initdb, or
 at least REINDEX of all affected indexes.  Is that correct?
 
 No.  query_gist.c exists only in HEAD branch.

Oh, good, that makes it easy.  Do you want to apply the fix or shall I?

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Working happily on 8.1 (Was: panic on 7.3)

2006-01-21 Thread Tom Lane
Rick Gigger [EMAIL PROTECTED] writes:
 2) I didn't touch the Vacuum delay, background writer or autovacuum  
 settings because I wasn't familiar enough with them.  Are the default  
 values very restricting?

By default, autovacuum isn't even turned on --- you have to enable it
and also stats_row_level if you want to use autovac.  I don't have
enough experience with it to say whether the other settings are
adequate.

 3) Several times there were backends  running that were just bringing  
 down the system.  Is there a way to signal a single backend to die  
 without restarting the whole db server?

SIGINT (ie query cancel) is safe enough.  If that doesn't work within a
few seconds, try SIGTERM (there is controversy over how safe that is,
but people do use it).
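
A sketch of doing this from SQL in 8.1 (the PID 12345 stands in for a
real backend PID; current_query is only populated if
stats_command_string is on):

```sql
-- List backends (8.1 column names: procpid, current_query):
SELECT procpid, current_query FROM pg_stat_activity;

-- Send SIGINT (query cancel) to backend 12345:
SELECT pg_cancel_backend(12345);

-- If that has no effect within a few seconds, from a shell:
--   kill -TERM 12345    (but never kill -9 the postmaster)
```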

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Commands per transaction

2006-01-21 Thread Heikki Linnakangas

On Sat, 21 Jan 2006, Alvaro Herrera wrote:


Rod Taylor wrote:

Is there any way of bumping this limit or am I stuck breaking up the
transaction?


Wow, I never heard of anyone reaching the limit :-(  Sorry, you are
stuck (short of changing CommandId to 64 bits, which would bloat your
tables considerably ...)


Would it be possible to increment the command counter only for commands 
that do updates?


- Heikki

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Working happily on 8.1 (Was: panic on 7.3)

2006-01-21 Thread Rick Gigger

Rick Gigger [EMAIL PROTECTED] writes:

2) I didn't touch the Vacuum delay, background writer or autovacuum
settings because I wasn't familiar enough with them.  Are the default
values very restricting?


By default, autovacuum isn't even turned on --- you have to enable it
and also stats_row_level if you want to use autovac.  I don't have
enough experience with it to say whether the other settings are
adequate.


Yes, I realized this not long after I started things up, so I will  
have to wait for a time when I can restart postgres to try it out.   
For now I have come up with something that I think will work fine.   
Vacuum seems to be about a million times faster now due to a number  
of factors.  I am going to keep a close eye on the sessions table,  
making sure that it can't start getting bloated again, and I think  
I'll be OK.  It's a Saturday though, so we'll really see how it holds  
up on Monday.





3) Several times there were backends  running that were just bringing
down the system.  Is there a way to signal a single backend to die
without restarting the whole db server?


SIGINT (ie query cancel) is safe enough.  If that doesn't work within a
few seconds, try SIGTERM (there is controversy over how safe that is,
but people do use it).


Thanks again!

Rick

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Commands per transaction

2006-01-21 Thread Rod Taylor
On Sat, 2006-01-21 at 12:48 -0300, Alvaro Herrera wrote:
 Rod Taylor wrote:
  Is there any way of bumping this limit or am I stuck breaking up the
  transaction?
 
 Wow, I never heard of anyone reaching the limit :-(  Sorry, you are
 stuck (short of changing CommandId to 64 bits, which would bloat your
 tables considerably ...)

Does a subtransaction receive an independent command counter?  If so, I
could shove a bunch of work into large subtransactions, allowing me to
avoid bumping the main transaction's command counter.

Heck, PostgreSQL could automatically create a subtransaction (savepoint)
with no rollback name. The client sending a commit or rollback always
operates on the parent transaction and the child follows (committing or
rolling back when required).

-- 


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Working happily on 8.1 (Was: panic on 7.3)

2006-01-21 Thread Tom Lane
Rick Gigger [EMAIL PROTECTED] writes:
 By default, autovacuum isn't even turned on --- you have to enable it
 and also stats_row_level if you want to use autovac.  I don't have
 enough experience with it to say whether the other settings are
 adequate.

 Yes, I realized this not long after I started things up, so I will  
 have to wait till a time when I can restart postgres to try it out.   

As long as you left stats_start_collector on, that's not so.  autovacuum
and stats_row_level can both be changed without a postmaster restart;
just fix the config file and SIGHUP.
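
Concretely, that would be something like this (8.1 parameter names; the
data directory path is a placeholder):

```
# postgresql.conf -- both settings are reloadable without a restart,
# provided stats_start_collector was already on at postmaster start:
stats_row_level = on
autovacuum = on
```

followed by `pg_ctl reload -D /path/to/data` (or sending SIGHUP to the
postmaster directly).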

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Commands per transaction

2006-01-21 Thread Tom Lane
Heikki Linnakangas [EMAIL PROTECTED] writes:
 Would it be possible to increment the command counter only for commands 
 that do updates?

Probably that would work, but I'm unsure whether we have infrastructure
that would let you detect that reliably...

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Commands per transaction

2006-01-21 Thread Tom Lane
Rod Taylor [EMAIL PROTECTED] writes:
 Does a subtransaction receive an independent command counter?

No such luck, and IIRC that would not be easy at all to change :-(

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] makesign() broken in tsearch2

2006-01-21 Thread Teodor Sigaev



Oh, good, that makes it easy.  Do you want to apply the fix or shall I?
 

I will have normal access to the Internet only on Monday.  It can 
easily wait for me :)


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Commands per transaction

2006-01-21 Thread Tino Wildenhain
Rod Taylor schrieb:
 On Sat, 2006-01-21 at 12:48 -0300, Alvaro Herrera wrote:
 
Rod Taylor wrote:

Is there any way of bumping this limit or am I stuck breaking up the
transaction?

Wow, I never heard of anyone reaching the limit :-(  Sorry, you are
stuck (short of changing CommandId to 64 bits, which would bloat your
tables considerably ...)
 
...

As a quick fix (as in quickly implemented and quick running ;)
you could load your update data into a temp table via
COPY and then update the large table in one sweep.
I guess you don't have 2^31 tables to update? ;)
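
Spelled out, that suggestion looks something like this (table and file
names invented for illustration):

```sql
BEGIN;
CREATE TEMP TABLE new_vals (id int, val text);
COPY new_vals FROM '/tmp/new_vals.dat';
-- A single UPDATE consumes one command id, no matter how many rows it hits:
UPDATE big_table b SET val = n.val
  FROM new_vals n WHERE b.id = n.id;
COMMIT;
```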

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Commands per transaction

2006-01-21 Thread Rod Taylor
On Sat, 2006-01-21 at 22:30 +0100, Tino Wildenhain wrote:
 Rod Taylor schrieb:
  On Sat, 2006-01-21 at 12:48 -0300, Alvaro Herrera wrote:
  
 Rod Taylor wrote:
 
 Is there any way of bumping this limit or am I stuck breaking up the
 transaction?
 
 Wow, I never heard of anyone reaching the limit :-(  Sorry, you are
 stuck (short of changing CommandId to 64 bits, which would bloat your
 tables considerably ...)

 As a quick fix (as in quickly implemented and quick running ;)
 You could load your update data into a temp table via
 COPY and then update the large table in one sweep.
 I guess you don't have 2^31 tables to update? ;)

I wouldn't have written a plpgsql function if I were doing the same thing
with every tuple.  Running multiple statements across the 40GB heap would
take much longer to complete.

I've resorted to a client-side script that COPYs the data out of various
partitions in the main structure, applies the logic, then COPYs it back
in again to a new partition.  Once complete, the original partition is
dropped so the datasets swap.

This is somewhat slower than my original method of doing it all on the
server side but not by too much since the reduced CPU load seems to
offset the increased network IO.
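
Roughly, the per-partition swap dance might look like this (all names
hypothetical):

```sql
-- 1. Dump the partition for client-side processing:
COPY part_2006_01 TO '/tmp/part.dat';
-- 2. (client script transforms /tmp/part.dat into /tmp/part_new.dat)
-- 3. Load into a fresh partition and swap it in:
CREATE TABLE part_2006_01_new (LIKE part_2006_01);
COPY part_2006_01_new FROM '/tmp/part_new.dat';
DROP TABLE part_2006_01;
ALTER TABLE part_2006_01_new RENAME TO part_2006_01;
```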

-- 


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[HACKERS] Need help in installing postgresql 8.1.2 on Windows

2006-01-21 Thread Sarvjot Kaur
Sir, I am trying to install Globus Toolkit 4 on a Windows machine.
PostgreSQL 8.1.2 is required software for installing GT4, but I can't
find installation steps anywhere.  Please help me and do reply.
Thanks,
Sarvjot